Virtual World Interoperability?


VWRAP is in limbo. Even though I was not involved in it until recently, I feel bad that it may simply disappear. It feels like a massive waste of energy for those who were involved far longer than I was; but, more importantly, it feels like a missed opportunity for gathering a critical mass of virtual world / virtual reality engineers (a community that seems to be plagued by silos) together with a critical mass of people who have been driving some important protocols on the Web.

The Past

To recap, here’s a brief history of that working group, from pieces that I can gather from memory and from Google. The IETF WG was spearheaded by people in Second Life’s Architecture Working Group, a group that seems to have had its first official meeting in September 2007 and that still exists today despite the severe layoffs at Linden Lab. The IETF working group started in March 2009 as MMOX, with the goal of discussing virtual world interoperability. They had their first BoF that month. Linden Lab was the dominant leader, but at that time they were able to attract a few respected engineers who had nothing to do with Second Life, such as Jon Watte (of Forterra, now at IMVU), Heiner Wolf, and others. Even Larry Masinter, a prominent figure in the W3C, was there. There seem to have been lots of heated discussions about the meaning of interoperability. By October 2009, MMOX changed its name to OGPX — after the original Second Life / IBM “Open Grid Protocol” — and the non-SL engineers stopped contributing. The group started discussing what seemed like a generalization of OGP that they termed the Virtual World Region Agent Protocol. In April 2010, the group changed its name again, this time to VWRAP. At that time Linden Lab started laying people off, with the big layoff in June 2010. Soon after, Joshua Bell, a Linden Lab employee and one of the co-Chairs of the working group, announced that Linden Lab was suspending its involvement in virtual world interoperability. At that time, there were several working drafts submitted, and people chose to continue the discussions.

That’s the story, in a nutshell. Here is my take on it.

The original idea behind MMOX was very exciting. Could they really come up with bridges of some sort between many different virtual worlds that use completely different technologies? Well, at least OLIVE and Second Life — that would have been a major technical win. Unfortunately, the discussion quickly converged to “no” — “quickly” as in two days after the MMOX mailing list started in March 2009. There seemed to be a disconnect between the people pushing for Linden Lab’s drafts and the people who submitted the other drafts (unfortunately, none of the drafts mentioned in the ML archives are available anymore). According to this, at the first BoF, a suggestion to split the group into several groups seems to have touched a nerve — perhaps by hinting at the technical, social, or political impossibility of satisfying the different notions of interoperability floating around.

Compared to MMOX, OGPX/VWRAP was a lot less exciting. The original goal had failed. Nevertheless, Linden Lab and the community around it proceeded by focusing on a possible variation of Second Life that would be decentralizable, supposedly with OpenSimulator-based worlds playing an important role in that. But with the departure of Linden Lab from the interop scene in June 2010, the existing VWRAP drafts and discussions were left hanging in the air; the only viable system to apply them to was… OpenSimulator itself. And of all the core developers of OpenSimulator, only John Hurliman had made significant contributions to VWRAP. With this impending scenario of OpenSimulator being the lonely target of VWRAP, I decided, at the end of August 2010, to take a closer look at what that group had been discussing.

I started by reading all the active drafts: the intro, the authentication, and the type system. I confess I was completely puzzled by what I read. I wrote a journal-style review of the intro document and sent it to the ML for discussion. I could have also commented on the other drafts, which were equally puzzling in their own right, but the intro seemed to be the root of the puzzle. There was silence for a couple of weeks, and then all hell broke loose.

The Options

Perhaps it’s a good thing that VWRAP dies. But it really is a shame that the original idea behind MMOX didn’t happen. Whatever drafts eventually came out of that group would have been secondary to the value of bringing a critical mass of VW engineers together. So here is a post-mortem of what I think the disconnect was at the very beginning of MMOX. Maybe there will come a time when people realize what went on and start talking again.

A group like that would benefit tremendously from thinking of interoperability within the REST style of client-server applications, even if many VW engineers look at REST and the Web browser as a threatening alien invasion. There are lots of misunderstandings about REST; suggested readings: this and this. Once you grok it, thinking in REST brings conceptual clarity about the importance of independent evolution and variability.

Think of the VW client-server protocols as something opaque that no standards body should interfere with; Flash, for example. So SL would be application/sl, OpenWonderland would be application/wonderland, WoW would be application/wow, IMVU would be application/imvu, etc. Imagine that there are browser plugins for all of these MIME types, just like there are for Flash and Unity3D. These MIME types may or may not be publicly documented, which is to say that these protocols may be public or proprietary — and that’s ok. I stress that this REST/Web-browser model is just for conceptual clarity; in reality, there can be fat clients for each of these applications.
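To make the analogy concrete, here is a minimal, purely hypothetical sketch of the dispatch step: a browser-like host looks at the media type a world announces and hands the session to the matching plugin. None of these media types or plugin names are real registrations; the point is only that the protocol behind each type can stay opaque, whether public or proprietary.

```python
# Hypothetical sketch: a browser-like host dispatching on a world's media type.
# None of these media types or plugin names are registered anywhere; they just
# mirror the analogy in the text.

WORLD_PLUGINS = {
    "application/sl": "second-life-plugin",
    "application/wonderland": "open-wonderland-plugin",
    "application/wow": "world-of-warcraft-plugin",
    "application/imvu": "imvu-plugin",
}

def dispatch(content_type: str) -> str:
    """Pick the plugin that speaks this world's (possibly proprietary) protocol."""
    plugin = WORLD_PLUGINS.get(content_type)
    if plugin is None:
        raise ValueError(f"no plugin installed for {content_type!r}")
    return plugin

if __name__ == "__main__":
    # e.g. the Content-Type a hypothetical world entry point might announce
    print(dispatch("application/wonderland"))
```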

As much fun as it may be to design these protocols and UIs, there’s nothing particularly standardizable in them. Diversity here is a good thing, because there are many different ways of doing these protocols and UIs, and companies have competitive advantages in rolling their own. Their solutions may bomb; but diversity is very much needed.

Rather than trying to force everyone into using exactly one and the same client-server application protocol, the pertinent question for interoperability is the one that Jon Watte was pushing for at the very beginning of MMOX, but now using my suggested conceptual model: how can a server-side that serves application/sl to its clients interoperate with a server-side that serves application/wonderland and with another that serves application/vnd.unity? The answer comes in four parts:

  1. If you have an application-specific client (e.g. the SL client or the Wonderland client, non-REST style systems), then, by default, interoperation is limited to offline data exchanges, because of the extreme coupling between the client and the server side. Maybe we can all agree on a common representation of 3D objects, so that objects created elsewhere can be imported by all these applications. COLLADA seems like a perfectly good solution. With luck, interoperation can also happen for certain capabilities of these worlds, such as IM, if the different applications happen to have been designed for that.
  2. We can design decentralized systems within each application type. So, for example, OpenWonderland has its own federation mechanism. The OpenSimulator Hypergrid is a federation for SL-style VWs. The OGP protocol, before it derailed into VWRAP, was headed in the same direction. This is a shallow concept of interoperability, because the decentralization mechanisms are bound to the specific application types. We can bring one of these up for a standard, but it will be a shallow standard, worsened by the fact that each application type has its own ‘natural’ design for its federation, largely influenced by the already extremely high coupling between clients and servers. So, for example, the Hypergrid federation works for the Linden Lab client as-is. But if the Linden Lab client were modified to account for the existence of a federation, then the design of such a federation might be quite different. Coupling galore!
  3. We can design protocol bridges à la LESS. In this case, we extend our server-side to peer up with servers of different kinds for the purpose of extending the space in co-simulation style. This is an interesting concept that allows true interoperation between VWs of different types while allowing everyone to keep their own fat clients. An advantage of this approach is that the simulated entities never leave their home worlds, which makes it a good basis for protecting IP (only rendered data needs to flow). The main disadvantage is that it requires server-side computational duplication. That is, if I want my world’s users to connect to someone else’s worlds, I need to allocate computational resources for the co-simulation in my world. Another disadvantage is the engineering nightmare of developing those bridges.
  4. If you have an application-agnostic client like the Web browser (REST style), true interoperation can happen without duplication of computational resources and without having to build protocol bridges: as the user moves from one server-side to another, the browser simply loads the corresponding application client program. It becomes possible for these different server-sides to engage in interactions that negotiate all sorts of things as the user moves around (i.e. OpenID and beyond). Here we’re not limited by the client speaking only one application protocol anymore, because the protocol comes to the client dynamically — code-on-demand is one of the principles of REST-style applications (a toy sketch of this follows the list).
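To make #4 a little less abstract, here is a deliberately toy sketch of code-on-demand, assuming nothing about any real protocol: each hypothetical server-side hands an application-agnostic host both its media type and a small client program that speaks it, and the host simply swaps client programs as the user moves between worlds. The entry-point URLs, field names, and the use of exec are purely illustrative.

```python
# Toy illustration of option #4 (code-on-demand), not any real VWRAP mechanism.

WORLDS = {
    # entry point -> what a hypothetical server-side might return
    "https://world-a.example/region/1": {
        "content_type": "application/sl",
        "client_code": "print('rendering an SL-style region')",
    },
    "https://world-b.example/region/9": {
        "content_type": "application/wonderland",
        "client_code": "print('rendering a Wonderland-style region')",
    },
}

def visit(entry_point: str) -> None:
    """Fetch (here: just look up) the client program for a world and run it."""
    response = WORLDS[entry_point]            # stands in for an HTTP GET
    print(f"negotiated {response['content_type']}")
    exec(response["client_code"])             # the protocol comes to the client

if __name__ == "__main__":
    for region in WORLDS:                      # the user moves between worlds
        visit(region)
```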

#1 is a solved problem; it’s just a matter of will on the part of VW implementers. The other three are more interesting, as they support online, real-time interoperability. I highly recommend that anyone interested in interoperability roll their own federation first. But please don’t bring it up for a standard — that’s quite an arrogant move! If you want to document it, do what Google recently did with VP8 — submit an IETF Independent RFC at most, or simply document it outside of the IETF (example). #3 is a good short-term solution for true interoperability between different types of VWs, but it presents engineering challenges that do not scale. Only #4 scales — not surprising if you understand the goals of REST.
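As a hint of how little machinery #1 actually needs once everyone settles on COLLADA, here is a minimal sketch of the import side: a COLLADA document is plain XML, so even a would-be importer can start by enumerating the geometries a .dae file carries. The namespace below is the COLLADA 1.4.1 schema; the file path and command-line usage are just for illustration.

```python
# Minimal sketch for option #1: listing the geometries in a COLLADA (.dae) file.
import sys
import xml.etree.ElementTree as ET

# COLLADA 1.4.1 schema namespace (1.5 documents use a different one)
COLLADA_NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}

def list_geometries(dae_path: str) -> list[str]:
    """Return the ids of all <geometry> nodes in a COLLADA document."""
    root = ET.parse(dae_path).getroot()
    geometries = root.findall(".//c:library_geometries/c:geometry", COLLADA_NS)
    return [g.get("id", "<unnamed>") for g in geometries]

if __name__ == "__main__":
    # usage: python list_geometries.py some_object.dae
    for geom_id in list_geometries(sys.argv[1]):
        print(geom_id)
```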

13 replies on “Virtual World Interoperability?”

  1. MaggieL says:

    Actually, there’s been some significant work done outside the traditional standards organizations towards this sort of goal. Aaron Walsh visited a recent AWG meeting to talk about the work being done within the MediaGrid Open File Formats Technology Working Group (OFF.TWG):

    http://mediagrid.org/news/2010-11_iED_Create_Once_Experience_Everywhere.html

    And in the OpenWonderland project (http://openwonderland.org) we’ve been experimenting with RESTful interfaces; our asset servers use WebDAV. Here’s a blogpost about some recent early steps:

    http://blogs.openwonderland.org/2010/11/16/curl-in-to-restful-posters/

  2. Diva Canto says:

    Thanks for the pointers, Maggie!

  3. Awesome post, Diva; also, the referenced mailing list posts (and your answers to some of them) are required reading for anyone wanting to get the deeper VW interoperability picture.

    As you know, ‘let a thousand worlds bloom’ was an early OpenSim meme; the meaning being “it’s too early to standardize – we haven’t even begun to agree where the value of VW lies, how would we be able to agree on protocols to enable that value? let’s create a platform to enable people to experiment!”

  4. I attended the meeting where Walsh discussed the “Create Once Experience Everywhere” work that is part of the Open File Formats Technology Working Group (OFF.TWG), which seemed to have several VWRAP and IETF participants based on the text and audio chat questions and answers. Here is the audio and text transcript of that meeting http://members.ImmersiveEducation.org/node/293

    What I found most interesting is that they have already established virtual world interoperability and are now working on more advanced features around “avatar portability” and “global-scale virtual worlds”, all of which are open and royalty-free standards according to Walsh (in the text transcript you can find links to their open royalty-free policy, which someone in the VWRAP community asked about early on in that meeting). The video showing Wonderland, realXtend, Cobalt and even a very early build of OpenSim all rendering the same 3D objects is incredible, really. How long have we waited for something like this?

    I haven’t seen anything about it yet, but during the meeting Walsh mentioned an upcoming announcement for content builders/creators that makes use of this work, where they are building complete virtual worlds on a variety of platforms including OpenSim. I thought that was happening this month, but I haven’t heard anything so far (or maybe I did not understand what he said in voice, since that part was not recorded in the text transcript).

    Richard

  5. Jon Watte says:

    Being the one who wrote “REST and games don’t mix,” I’d like to state for the record that I think REST has a fine place among transactional interoperability protocols. It’s just that REST doesn’t work for interactive real-time streaming. Nobody would pretend that the REST model works for, say, a VoIP data transfer protocol. It’s simply not within the design space. My argument is that synchronous, online multiplayer interaction is at its core very similar to something like VoIP payload data; it’s not like SIP or like YouTube. SIP and YouTube could both be well served by REST APIs.

    Also, I think the “browser loads clients as I move around” solution to interoperability side-steps the bigger value proposition. Going back to MMOX: what if some city rescue workers want to have a virtual fire drill together with some chemical plant operators? If you can only have one technology “active” in the simulation at one time, then all of the city worker affordances (radio communications, rescue vehicles, medical equipment, etc) and all of the chemical company affordances (chemical processes, plume dispersion models, etc) must be ported into the same hosting environment for such a co-operation to have any chance of working technically. And this doesn’t even begin to address the problem of users needing to learn several different user interface mechanisms for interacting with a virtual world, something which could be quite costly, given that VW interactions are a lot richer than just clicking web links in a browser.

    I’m still highly interested in virtual world interoperability, as long as it can solve real-world problems (where the problem is something more than just “how can Linden Lab clients connect to more servers around the world”). Maybe what we need is a bit of re-grouping, and letting the market mature to the point where virtual worlds actually have a well-defined meaning to the greater world, before we can make real progress.

  6. Diva Canto says:

    @Maggie, @Richard OFF.TWG seems to be doing #1, which is very much needed. COLLADA is the way to go. Most 3D editors/tools export/import COLLADA. I’m looking forward to seeing this import/export completed in OpenSimulator.

    @Jon thanks for your comments here! I agree that there are interesting use cases for server-to-server peering, and those should definitely be explored. I just don’t think that it is a scalable, generalizable approach, in the sense that it wouldn’t be scalable or generalizable to try to peer all possible social network applications to each other — although that could also be done on a case-by-case basis. In any case, #3 and #4 are where the focus should be, once #1 is well established and after people gain cross-domain experience with #2.

    @Diva Yes, that’s true, OFF.TWG is doing #1 (interoperable content via their “Create Once Experience Everywhere” format, which is based on COLLADA), but OFF.TWG is also involved in the others, although the majority of that work is being done by The Education Grid Technology Working Group (TEG.TWG) in cooperation with OFF.TWG. They actually have cross-platform interoperability between Wonderland and realXtend working (at least some aspects of it), which is shown in the “Create Once Experience Everywhere” video http://youtube.com/user/ImmersiveED where, starting at about 6 minutes in (around 6:25, actually), they show how a 3D crab that is stored in the Wonderland asset server is accessed directly from realXtend (the drag-and-drop demo is interesting).

    I had to watch this part of the video several times before I “got it” and realized what’s going on, and then I was blown away. At first I thought they were just rendering the same object, with that object being stored in their own respective asset servers, but after watching a few times I realized that it’s much more than that, because the two platforms are actually connecting to each other’s asset servers at the lowest level: they are actually interoperable virtual platforms here.

    I had no idea that virtual world platform developers would WANT to be interoperable; I expected that they would be more interested in being walled off from other platforms so they could be “the one” and not encourage users to even think of the other virtual worlds out there, but this is clearly not the case here. I have been tracking their twitter feed at http://twitter.com/immersive where it’s pretty clear that they’re working on deep interoperability across all of the virtual world platforms (starting with Wonderland, realXtend, Cobalt, and even OpenSim). These are the types of tweets I have seen appear since late last year (they started to appear at about the same time that OFF.TWG announced their cross-platform file format based on COLLADA):

    #iED TEG: Entry points for 1) Avatar Portability 2) Single sign-on 3) Global Hypergrid 4) Global-scale worlds http://bit.ly/hjmn7r
    #iED ACCESS: See http://bit.ly/eV2AGT to access draft requirements for 1) Avatar Portability 2) Single sign-on 3) Global-scale worlds
    #iED Drafting requirements for 1) Avatar Portability 2) Single sign-on 3) Global-scale worlds http://bit.ly/ibZq9f & http://bit.ly/f5zkIl

    I can’t get into these documents because I’m not a member of the technical groups doing this, but I have been watching with keen interest since they’re apparently tying the open file format and interoperable virtual worlds activities together in a way that has already produced some impressive results (to be quite honest I was stunned to see that part of the video where the two virtual worlds are sharing objects at a low level, directly connecting to each other’s asset servers). If anyone is a member of these groups and could comment on exactly how they’re doing this I would love to know!

    If I hear more I’ll let you know,
    Rich

  8. Hey Diva,

    Great post, but let me correct a few inaccuracies.

    1. a number of non-Lindens participated in MMOX, OGPX and VWRAP. John Hurliman (of Intel) and David Levine (of IBM) made significant contributions in terms of publishing drafts and presentations at various IETF F2F functions. significant contribution and guidance was made to all incarnations of the effort (MMOX, OGPX, VWRAP) by a number of non-Linden IETF participants including Larry Masinter, Dave Crocker, Barry Leiba and others. Barry, an employee of IBM and later Huawei, went on to co-chair the OGPX BoF and the VWRAP WG. you’ve frequently indicated you felt that the whole effort was a Linden marketing ploy; perhaps you’re looking at the proceedings through that gloss. whatever. i’m not your psychologist. but, unlike you, i did actually participate in the proceedings.

    if you will remember, this is when i approached you to ask if you wanted to participate and your response was… let’s just say you made it very clear you thought it was a media dog and pony show.

    2. you are correct, MMOX was an effort for broad-ranging interoperability, but OGPX and VWRAP definitely were not. how do i know? i was actually in the room when these decisions were being made. as the organizer and co-chair of the MMOX BoF, i invited everyone i could think of ranging from more traditional VW participants like metaplace (remember them?), Sirikata, OSGrid, OpenSim Project, darkstar/wonderland, project chainsaw, there.com, etc. to more game like experiences: Blizzard, CCP, Sony, EA, Microsoft. We began with an extremely wide and abstract charter: “make MMO experiences interoperable.” Our objective was to create a standardization regime that would allow enterprises, educational institutions and marketing organizations the ability to “mix and match” the best technology to build solutions. So, if you were a university and you wanted an authentication service from Microsoft, but using OpenSim with a Cable Beach front end to the unity asset service, you could do that.

    the advice we got from the IETF ‘grey beards’ was, “wow. that’s a lot of different solutions to push into one problem domain. it’s going to be rough to push all these specs together.” and they were right. at the end of the MMOX meeting, we could not even get agreement as to what a virtual world was. in fact, Jon Watte and i couldn’t agree on what the term “interoperability” meant. (and i think this is still an open issue in the community.)

    after a huge meeting where we couldn’t agree on a shared problem domain, lisa dusseault (then the IETF apps area co-director) suggested we try to focus on smaller problem domains. Jon Watte and the OLIVE camp were given the opportunity to produce more specs for their “interop stream” concept, but they declined. those of us in the OGP camp were given the opportunity to “try again with a much smaller problem domain.” and we did that the next summer in Stockholm with the OGPX BoF. it’s interesting to note that MMOX was “left open” for use by people who did not like the direction OGPX was headed.

    OGPX was intentionally focused on OGP-like solutions because we found several people (myself, hurliman, levine, duffeyes) who were interested in standardizing the OGP/second-life-like protocols in an open manner. in the course of chartering the working group, several people voiced concern that the “Open Grid Protocol” was a confusing name. in fact, at the OGPX BoF, more than one person said, “why were you talking about virtual worlds? i thought you were going to talk about grid / cloud management.” so after lots of argument, the name VWRAP (Virtual Worlds Region-Agent Protocol) was the first name that none of the participants strenuously disagreed with. personally, i still like OVER (Open Virtual Experiences with Regions) and ABOVE (Avatar Based Open Virtual Environment.)

    3. i’m kind of surprised to find you seem upset that MMOX failed considering your non-participation. you should know, the MMOX list is (i believe) still active. if you wanted to take up the mantle of cross-world interoperability, you could actually do it. i’m happy to put you in touch with the IETF people you would need to talk to about it.

    4. there was actually work done on VWRAP after Linden’s departure. after leaving Linden in February of last year, i worked with the corporate entity that became Smithee, Spelvin, Agnew and Plinge on technology to link augmented reality and existing virtual worlds technology. (and, in fact, that’s part of what i’m doing right now.)

    5. after you posted your comments re: the intro to the list, i emailed you back to get clarification on some points and offered to change bits of the intro you found confusing. you didn’t reply.

    6. i take issue with the concept that REST and virtual worlds don’t mix. REST is an excellent model for some types of services used by MMOs and Virtual Worlds. i totally agree that if you did EVERYTHING with REST (including object updates, real-time voice, avatar commands and animation coordination) you would be sorely disappointed. which is why, in VWRAP, we recommended using RTP to carry those types of messages. had you been participating in the working group, you might have overheard that.

    in conclusion: YES! YES! YES! PLEASE TAKE UP THE MMOX MANTLE AND DO SOMETHING WITH IT! (or recharter VWRAP if you want to.) you’ve been sniping at the VWRAP community for two years because we’re doing something different than what you wanted to do. but at least we did something in the interop space. you’ve done an EXCELLENT job making a virtual world solution, but the HG protocols are not specified sufficiently to allow me (or my cloud of 8-bit devices) to inter-operate with them. HyperGrid as a protocol is specified only as a C# implementation that some of us are not allowed to touch.

    before you can throw stones in this glass house, you really should try to do the same things we did so you can appreciate the difficulties of getting the standards-body cat herd pulling the same way on the standards rope. i would be very happy if you did something akin to what we did: document HyperGrid, schedule a BoF and call for participation, and then manage the working group to produce documents that would allow me, someone who cannot contribute to OpenSim core, the chance to make an implementation that would work with a HyperGrid-aware OpenSim instance.

    -cheers

  9. Diva Canto says:

    @Meadhbh I actually was paying attention to MMOX when it started; I was scanning through the archives. A few messages sent in those first days put me off from getting involved, as it was very clear that Linden Lab was after the IETF-stamped standardization of a federation for Second Life; I guess I wasn’t the only one reading it that way, judging from the historical turn of events. The Lindens and SL aficionados probably didn’t realize how arrogant that was; I believe everyone had good intentions — it’s just that the landscape for VW interoperability wasn’t well defined at the time.

    Federations fall in #2; every fat-client virtual world type can/should have their own! But please… let’s not get the IETF involved in that. At least not until we can compare notes on all these possible federations, and then move on to #3 and #4.

    This is the reason why I haven’t submitted one single thing related to the Hypergrid — it just doesn’t belong in a standards track, at least not in its current fat-client form. Although the architecture is widely applicable, including for the Web, the protocol itself at the moment is *just* for the Linden client. I’ll be more than happy to document it if other projects / companies want to use it, but, as it stands, it’s limited to OpenSimulator and Second Life servers because of the X-rated coupling with the Linden client.

  10. @Diva “the landscape for VW interop wasn’t well defined at the time.” – no kidding. that’s why we asked you to participate.

  11. Diva Canto says:

    @Meadhbh it wasn’t well defined for me either at the time :-) I had to make the Hypergrid work first before I could associate concepts to the result, and before I could see the entire neighborhood as clearly as I am seeing it now…

    One thing I can tell you that helped immensely: staying away from the client. Having control of both the client and the server is a blessing and a curse: it gives you the power to make optimizations that usually end up pulling the entire solution even further away from being general. So yay for highly constrained design!

  12. Breen Whitman says:

    @Meadhbh Hamrick: “as the organizer and co-chair of the MMOX BoF, i invited everyone i could think of ranging from more traditional VW participants like metaplace (remember them?), Sirikata, OSGrid, OpenSim Project, darkstar/wonderland, project chainsaw, there.com, etc. to more game like experiences: Blizzard, CCP, Sony, EA, Microsoft. We began with an extremely wide and abstract charter: ‘make MMO experiences interoperable.’”

    Meadhbh, could you explain the current AWG situation, and the Facebook Like API that LL was/is working on? How long have LL technical implementers known about it, and when did senior LL management pass on this strategy?

    Now that you are out of LL, are you at liberty to discuss these matters? For example, what was LL’s mantra re a two-way standard? Or was it an SL API right from the start? I am talking about a high-level strategy view here. I realize that “in the trenches” implementers may have been told differently.

    Additionally, when LL and IBM achieved the inter-grid teleport, do you have any perspective from a high level? For example, was it a genuine desire for open grids, or was it merely a by-product of the uncomfortable feeling IBM had about placing their business model outside their own servers, such that they wished to bring things in under their own roof? Was the exercise just to realise an API between these two systems, rather than a true open standard?

    Many thanks in advance for any consideration you may give.

  13. Morgaine says:

    Diva is mostly right in her analysis as an external observer peering in at the AWG/OGP/MMOX/OGPX/VWRAP groups. Having been on the inside of all of them, perhaps I can shed a bit more light on the less well-lit corners.

    There was indeed a total disconnect between the alleged goal of VW interop in all of these groups, and the actual intentions and direction of Linden Lab employees and ex-employees. Their rhetoric had always mentioned interop, but the documents that LL people actually produced never even hinted at interop between VWs, let alone supported that goal directly. When pressed, they eventually admitted that the intention had never been to interoperate BETWEEN virtual worlds.

    After a long and rather painful process, the anti-interop fox in VWRAP was finally chased out of the interop henhouse in recent months. It is to Diva’s credit that she gave us (that’s the pro-interop “us”) the ammunition to fight against LL’s anti-interop legacy, when she appeared in the VWRAP mailing list with a post that could be summarized as “These VWRAP documents do nothing for interop.” She was right, and she triggered an uprising in the group which resulted in the old documents being dismissed.

    So where does this leave VWRAP? That’s a harder question to answer. There is no lack of interest in continuing work on interop because that is clearly the future for virtual worlds, but sadly the *practical* work of testing and honing VWRAP ideas was being done by John Hurliman in his Cable Beach and SimianGrid projects. Unfortunately, John has switched careers and is no longer pursuing that work.

    realXtend has the only remaining implementation of a Cable Beach asset service (accessed through WebDAV), but they have not participated in protocol discussions so far. While the VWRAP concepts seem to be on the right track (excluding the bad legacy documents), interop standards cannot be created out of discussion alone, but require concrete implementations to drive and test the work. Inevitably this means that the work is somewhat on hold, beyond introductory documents of intent.

    The VWRAP model of interop is simple, yet powerful because of that simplicity. Assets are stored in asset services spread all over the Internet, some being run by VW providers, some by commercial third parties, some by open-access community groups, some locally on power users’ LANs, and some on the client machines of ordinary users. A VW region holds only the URIs of assets present in the region, and it sends those URIs to validly connected viewers so that they can fetch those assets from the asset services that hold them. This scales beautifully since the asset requests by clients may be spread all across the Internet. A given asset service may serve the same asset to clients connected to many different worlds, so asset-based interop is a natural outcome of this architecture.
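    To make that flow concrete, here is a rough, purely illustrative sketch (it is not any actual VWRAP wire format, and every URL and field name is made up): the region hands the viewer nothing but asset URIs, and the viewer fetches each asset from whichever service happens to host it.

```python
# Illustrative only: region sends asset URIs, viewer fetches from asset services.
from urllib.parse import urlparse

# what a hypothetical region might hand to a validly connected viewer
region_scene = [
    {"object": "oak tree", "mesh": "https://assets.example.org/meshes/oak.dae"},
    {"object": "crab", "mesh": "https://wonderland.example.edu/assets/crab.dae"},
]

def fetch(uri: str) -> bytes:
    """Stand-in for an HTTP GET against whichever asset service holds the URI."""
    print(f"fetching {uri} from {urlparse(uri).netloc}")
    return b"..."  # asset bytes would come back here

# the viewer, not the region, pulls each asset from its home service
for entry in region_scene:
    fetch(entry["mesh"])
```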

    Of course, the devil is in the details, and that is where the group will need to continue exploring and testing ideas before anything like a standard can even be proposed, let alone agreed. Meanwhile, at this early stage of the process, in VWRAP all we are doing is documenting the general concepts to try to gather interest around this approach to interop between VWs. Needless to say, we need to find solutions that are flexible enough to meet the majority of people’s goals. We’ve barely begun, and we are very dependent on the interest and ideas of people who actively work in areas of interop, like Diva herself, the many other Opensim developers, those of realXtend, and others such as iED.

    There’s a great future ahead, but there’s a lot of work to be done before we get there.
