Second Earth


Technology Review: Second Earth

Now that Linden Labs has open-sourced the Second Life client, if any Google Earth engineers choose to study it, I might no longer be the only person lucky enough to know both the Google Earth and Second Life internals well enough to make a bold statement about a mashup of the two. It would be great if others (besides me) could do so soberly. Because all I’m hearing lately is a lot of “wouldn’t it be great?” and not much “here’s how” and, better yet, “here’s why” practical discussion.

I’ve said before that I don’t think any direct integration between SL and GE is wise at this point, at least not in their present forms and with the present missions of each application. I’ll try to elaborate on that and see if I can convince you that the best thing overall is for each app to evolve along its own path. But I will point out a few areas where each could benefit from at least mimicking the other.

[Edit: See Wade’s comment below. I want to also make it clear that the issues of a SL/GE mashup are purely hypothetical, and are meant more to serve as a framework for understanding some important technical and usability issues in these two different kinds of virtual worlds.]

The issues:

Mirror Worlds vs. Virtual Worlds – Second Life is fictional, whimsical, experimental. Google Earth is a reflection of the real world, and according to its CTO, will remain so in the future. In SL, you can make or do almost anything. GE is meant to be a platform for delivering geo-referenced information that is strictly relevant and useful (as well as fun) for our first lives.

Now, GE bent those rules a little by adding custom layers and 3D import — there’s no clear rule that says your KML file has to represent a real object or building. And KML search will have a hard time distinguishing between real and fictional search results (even as they add PageRank-style features). The problem is that fictional results pollute real-world uses, and if they get out of hand, Google will have to somehow segregate those realms. I definitely don’t want to search for directions to the Home Depot and find that my path is blocked by a giant robot or a self-replicating pile of poo.

And for SL, the opposite problem exists — growth of the landscape was designed to be geometric and ad hoc, not mirroring any real-world geography. There are some exceptions to that as well. One, which I suggested to an SL-oriented colleague a while back, is that a single SL sim could serve as a stand-in for the entire earth, in miniature, linking via teleports to any number of other 1:1 scale sims that flesh out the real-earth geography, albeit with giant gaps — meaning you’d have to pop back to the “hub” to get from place to place — so you can see some of the difficulties.

Direct Integration – In this pure hypothetical, we’re talking about the two companies working together (e.g., collaboration, or outright purchase/merger) or GE open sourcing their code so a third party can do the mashup. I don’t think that’ll happen anytime soon. Now, if Google can pay $1.7B for YouTube, I think anything is possible on the merger front. Money seems to be no object, but they’re not being irrational. For the price Linden might demand, they’ll need to show, say, 10-20M regular visitors and a revenue model that Google thinks will integrate with their core business and scale indefinitely.

I’d also guess, purely based on rumors and the engineering personality type, that Google would be more likely to try rolling their own social VW before buying an established company like Linden. If Google does that, and they do as well as Google Video did against YouTube, that’s when I think buying Linden would make more sense. I don’t expect it to happen for several years, if at all. An IBM-style purchase is more likely, IMO. But what do I know?

Mashups – This is a bit less speculative. What Google could do fairly easily in the near term is release a closed-source version of GE that is more like a toolkit or library, closer to the Microsoft model for their Virtual Earth offering, though designed for real-time rendering. A toolkit would allow someone to build a virtual environment that had both the real earth and whatever avatar systems they wanted to throw in. It wouldn’t necessarily be SL, but it would be the first social VW based on the real/mirror Earth. More likely than that, though, is a specific licensing deal for the GE technology if the price is right (I don’t know what that price is).

It would also be technically feasible for Google to do the reverse — embed Google Earth’s OpenGL rendering code into the open-sourced SL client, such that anyone using that new SL client could create an instance of a GE globe inside SL as an in-world application as opposed to the actual terrain you stand on. The only problem would be licensing, if that’s an issue at all. But without the analogous GE source or any license to use GE’s umpteen terabytes of data, neither Linden nor any 3rd parties would be in any position to do this from their end.

Client / Server – The one mashup that pundits seem to call for the most — a horde of real, active, chatting Second Life avatars inside one big shared Google Earth — is unlikely to happen in the near term for several technical reasons. One is that SL is much more than the PC client you install. Without the requisite number of SL servers arranged into a lat/long map of the earth, or at least the parts we care about, none of your avatars would be able to interact. So besides mashing up the clients, you’d also need to handle both SL and GE servers at the same time, preferably in some coordinated fashion. For example, if GE holds the terrain and building data, SL servers would need to know about those to perform the physical part of the simulation, collision detection and so on. So there’s at least a three-way dialog going on, not just an integrated SL/GE client. And that makes us engineers very unhappy.

Putting just the bodies in GE is easier, although we’re talking the cheesy, lame, unsupported version with very little interactivity. You can capture your avatar in a static pose from SL using an OpenGL-based interceptor, convert that to Collada format and then to KML, and import it into GE. Animating it is trickier. Forget about dancing or even moving your limbs. The best you can probably hope for without extensive kludging is using KML’s “network update” feature to fetch your avatar’s current location (and that of anyone else near you) from a special server you’d supply. In this case, you also won’t get collision detection with 3D buildings unless you do it yourself. And similarly, you’ll have to handle all interactions among avatars. Essentially, you’ll be recreating a kludgey version of SL’s simulators and using GE + KML pseudo-scripting as the client rendering engine. It’s not something I’d want to spend my time on.
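As a rough illustration of that kludge, here is how the KML side might look — a NetworkLink that polls a position server on an interval, plus the per-avatar Placemark with a Collada Model that such a server might return. The URL, refresh interval, and model path are all invented for illustration; nothing here is an official SL or GE interface.

```python
# Sketch of the KML plumbing for crude avatar position updates.
# The server URL and file names are hypothetical.

def avatar_network_link(name, href, interval_s=2):
    """KML that makes GE re-fetch avatar positions every few seconds."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <NetworkLink>
    <name>{name}</name>
    <Link>
      <href>{href}</href>
      <refreshMode>onInterval</refreshMode>
      <refreshInterval>{interval_s}</refreshInterval>
    </Link>
  </NetworkLink>
</kml>"""

def avatar_placemark(name, lon, lat, model_href):
    """One avatar as a static Collada model at a lat/long position."""
    return f"""<Placemark>
  <name>{name}</name>
  <Model>
    <Location>
      <longitude>{lon}</longitude>
      <latitude>{lat}</latitude>
    </Location>
    <Link><href>{model_href}</href></Link>
  </Model>
</Placemark>"""
```

Loading the first file in GE would re-fetch the second every couple of seconds — crude position updates with no animation and no collision, exactly the limitations described above.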

Scale – Given the land area of the earth is reportedly 150,000,000 km², and each SL server currently handles 256×256 m (16 servers per km²), a quick calculation reveals roughly 2.4 billion servers would be needed for the land alone. If we just did Manhattan (87.46 km²), it would take about 1,400 servers, or roughly 1/4th of their current total just for one densely populated island. And we’re not even talking about the limits on concurrent users yet.
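The arithmetic is easy to check; a quick sketch, with the numbers taken straight from the paragraph above:

```python
# Back-of-envelope check of the server counts quoted above.
LAND_AREA_KM2 = 150_000_000   # approximate land area of the earth
SERVERS_PER_KM2 = 16          # one SL sim per 256 m x 256 m region

total_servers = LAND_AREA_KM2 * SERVERS_PER_KM2
print(f"{total_servers:,}")   # 2,400,000,000 -- about 2.4 billion sims

MANHATTAN_KM2 = 87.46
print(round(MANHATTAN_KM2 * SERVERS_PER_KM2))  # ~1400 sims for one island
```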

Clearly, if GE and SL were ever to mate, SL would need to move from a rigid grid to something more adaptable — for example, a quad-tree that gets subdivided depending on where the people virtually are, or better yet, based on who is interacting with whom at any given time. I worked briefly with a company (they never got their funding) that was pursuing something like this to handle 1 million simultaneous users. It’s not a trivial problem, but it would be necessary to scale SL up to a full planet, if they ever see the need.
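A density-driven quad-tree like that could be sketched as follows: a cell splits into four children only when it holds more avatars than one server can handle. The capacity and minimum cell size are made-up numbers, not anything Linden actually uses.

```python
from dataclasses import dataclass, field

MAX_AVATARS_PER_NODE = 40  # assumed per-server avatar capacity (made up)
MIN_NODE_SIZE = 16.0       # stop subdividing below this edge length (made up)

@dataclass
class Node:
    x: float               # lower-left corner of this square region
    y: float
    size: float            # edge length
    avatars: list = field(default_factory=list)
    children: list = field(default_factory=list)

    def insert(self, ax, ay):
        """Place an avatar, splitting this cell if it gets too crowded."""
        if self.children:
            self._child_for(ax, ay).insert(ax, ay)
            return
        self.avatars.append((ax, ay))
        if len(self.avatars) > MAX_AVATARS_PER_NODE and self.size > MIN_NODE_SIZE:
            self._split()

    def _split(self):
        h = self.size / 2
        self.children = [Node(self.x + dx * h, self.y + dy * h, h)
                         for dx in (0, 1) for dy in (0, 1)]
        for ax, ay in self.avatars:   # re-home the existing avatars
            self._child_for(ax, ay).insert(ax, ay)
        self.avatars = []

    def _child_for(self, ax, ay):
        h = self.size / 2
        return self.children[(2 if ax >= self.x + h else 0) +
                             (1 if ay >= self.y + h else 0)]
```

Empty wilderness stays as one big cheap cell; a crowded club spawns ever-smaller cells, each of which could be assigned to its own server.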

The next issue of scale is that of “levels of detail.” GE was designed from the ground up to seamlessly zoom through 20 or more powers of two (2^20 : 1 scaling) when zooming from way out in space down to a spot on the ground. SL was designed to let you walk, teleport, or fly relatively close to the ground. It essentially has four levels of detail — near, far, and off, plus we’ll count the nice 2D overhead map. But SL isn’t designed to handle viewing the entire world at so many different scales, smoothly moving through all of them. Without big architectural changes, it would be very difficult to do the kind of flying and zipping around one does in GE.
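To put a number on that zoom range, here is a sketch assuming a standard power-of-two tile pyramid, where each level doubles linear resolution. The ~156,543 m/pixel level-0 figure is the usual 256-pixel equatorial web-map constant, not a published GE internal.

```python
# Ground resolution per level of detail in a power-of-two tile pyramid.
BASE_M_PER_PX = 156543.0  # ~level-0 resolution for 256 px tiles at the equator

def resolution_m_per_px(level):
    """Approximate meters per pixel at a given zoom level."""
    return BASE_M_PER_PX / (2 ** level)

# Spanning 20 levels means roughly a million-to-one range of scales:
ratio = resolution_m_per_px(0) / resolution_m_per_px(20)
print(ratio)  # 1048576.0, i.e. 2**20 : 1
```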

One final note under the category of “scale.” The Collada file format may have some built-in features to properly convey the scale of objects, but it’s rarely used or even obeyed from what I can tell. All 3D models have built-in assumptions about scale. They’re just a bunch of numbers after all, not smart about their context or nature. If someone exports data in centimeters and another program assumes those are feet or meters, you’re going to have to write yet another program to convert everything to match up, or your avatar may be as tall as the Empire State Building, or too small to see.
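A defensive importer can at least refuse to guess. The sketch below assumes the file actually declares its unit (Collada’s unit element allows this, though exporters often leave the default); the conversion table is illustrative.

```python
# Guarding against unit mismatches when importing 3D models.
# Collada can declare a unit scale, but treat it as a hint, not truth.
UNIT_TO_METERS = {"meter": 1.0, "centimeter": 0.01, "foot": 0.3048}

def to_meters(value, unit):
    """Convert one model-space measurement to meters, or fail loudly."""
    try:
        return value * UNIT_TO_METERS[unit]
    except KeyError:
        raise ValueError(f"unknown unit {unit!r}; refusing to guess") from None

# A 180 cm avatar exported in centimeters comes out ~1.8 m tall, not 180 m:
print(to_meters(180, "centimeter"))
```

Failing loudly on an unknown unit is the point: a silent wrong guess is how your avatar ends up as tall as the Empire State Building.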

Other issues – I haven’t even touched on the issues of converting from procedural or parametric objects to polygons, which is something I have a lot of experience with. That’s a whole other discussion, which I could spend hours on. Suffice it to say, it’s a hard problem, but one with an easy solution — if it’s designed into the system.

But for now, let’s worry more about whether or not people want this and then get around to fixing the “nitty gritties.”

The How’s — What’s really possible?

1. If Google is so inclined, I’d love to see them create a free or licensed DLL that encapsulates the OpenGL rendering code from Google Earth in a form that could be invoked inside of someone else’s OpenGL-based 3D engine. It would at a minimum require functions to set the viewing position, layer state, etc… and probably most things you would already do from the UI. This would at least allow someone to put a Google Earth inside Second Life, as a 3D object one could poke at. It would not be a regular place in SL, but even that could be improved with time. If Google figures out how to monetize GE with ads, such a DLL might require showing those ads as well, or we could see it as a one-off license to some well-funded company.
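To make the idea concrete, here is a purely hypothetical sketch, in Python, of the kind of API surface such a toolkit might expose. None of these names exist in any real Google Earth SDK, and a real DLL would issue draw calls into the host engine’s OpenGL context rather than return strings.

```python
class EarthRenderer:
    """Imagined embeddable globe renderer, driven by a host 3D engine."""

    def __init__(self):
        # Start fully zoomed out; the layer names are invented examples.
        self.lat, self.lon, self.alt_m = 0.0, 0.0, 6_000_000.0
        self.layers = {"terrain": True, "roads": False, "buildings": False}

    def set_view(self, lat, lon, alt_m):
        """Point the virtual camera, mirroring GE's fly-to behavior."""
        self.lat, self.lon, self.alt_m = lat, lon, alt_m

    def set_layer(self, name, enabled):
        """Toggle a data layer, like the checkboxes in GE's sidebar."""
        self.layers[name] = enabled

    def render(self):
        # A real implementation would issue OpenGL draw calls inside the
        # host engine's current context; this stub just reports state.
        return f"view=({self.lat:.4f}, {self.lon:.4f}, {self.alt_m:.0f} m)"
```

The host engine would call set_view each frame from its own camera, then render, letting the globe appear as just another object in its scene.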

2. If Linden is so inclined, they could work on a new kind of “grid” that could scale up to a full earth-sized geography, solving the problem of zooming in and out as a method of managing that new expansive scope. FWIW, I think it’s worth their time, because teleporting is a bad way to work around the problems of walking and flying. GE’s zooming gets you there just as fast, but without losing your geo-spatial awareness. In other words, the continuous pan across the earth keeps your brain working on the relative positions of the places you visit, as in a big mental map. Teleporting loses the advantages of having one big shared space — you might as well have a bunch of small connected rooms at that point, which some people are working on. If Linden does this, they could be in a position to build an “Earth” app like the one in Snow Crash — one that they can even zoom into as a method of getting around. This need not be a literal mirror of the earth, but the technology I outlined is still important to make it work, regardless of the source data: satellites or users.

The Why’s — What will people actually want or need?

Here’s the big question, saved for the end. With all the “wouldn’t it be cool” talk, people seem to miss the fundamental issue with new technology: people do the dumbest things. It’s not that people are necessarily dumb. It’s more an issue of finding the “lowest-energy” solutions rather than the most elegant ones.

MySpace is a good example of a “low-energy” solution, as compared to many more elegant ones that were presented. And there is no doubt that there will be a host of 3D MySpaces within the next 12 months to further test the theory (I’ve consulted for several but I’m not promoting any). All of them will face a fundamental problem: What is 3D good for? More specifically, why would I want to visit your virtual living room if you’re not there? Those questions have real answers, but perhaps not the most obvious ones.

Now, there are also some great reasons to want avatars in Google Earth. The more GE becomes a destination, rather than a tool to pick your destination, the more likely we are to say “meet me on 5th avenue for shopping,” and we mean the virtual version of each. This requires the avatars to interact. And if we go a little further to an application where I and a paid designer can work on my new house, you can see a need for GE to add in-world building tools to the mix.

As for a virtual economy and social networking tools, I don’t know. As long as Google doesn’t have to host your content themselves or care about bandwidth for user data, there really is no cost to them, and no big reason to quantify value for 3D objects. The “layer” approach makes it much easier to simply turn off content that’s just too slow. But for all of these, I see them being so tightly integrated with Google’s advertising economy and information delivery mission that I can only imagine them doing this in-house, slowly but surely adding more metaverse-like components that are more or less strictly tied to the real world.

Similarly, the path for SL is to make a bigger and bigger world and scale their way towards more and more simultaneous users. The “why” of making an entire earth in Second Life seems obvious — but the old joke “it’s a small world, but I wouldn’t want to paint it” comes to mind. If SL users really had 2.4 billion servers to populate, could 10–20 million users even do it in one second lifetime? Not by hand, I think.

GE at least has automated tools for capturing a world that’s already out there. Imagine if people really had to build a second version of the whole world by hand, with the level of detail SL desires?

It’s unlikely. But that’s where procedural modeling comes in. It’s not going to recreate the real world either, but there are known and waiting methods for building out entire planets full of detail. However, that’s clearly a third way — not user generated, and not mirroring the real world. And that discussion is best left for another day.

  1. #1 by Wade Roush on June 19, 2007 - 2:47 pm

    Hi Avi,

    Wow, this is the most detailed technical discussion I’ve ever seen of the “hows” and “whys” of a Google Earth – Second Life mashup. Thanks for linking to my Technology Review story on the subject.

    Unsurprisingly, neither the Linden Lab folks nor the Google folks wanted to talk with me much about how GE might take on SL-like features or vice versa. At Google, they play everything close to the vest, and at Linden Lab, they have enough trouble just keeping the existing grid running.

    But I just wanted to note that my story is not premised on the idea of a LITERAL Google Earth – Second Life mashup. The two services are simply the leading examples of mirror worlds and virtual worlds, which are bound to start overlapping. The easiest way to think and speculate about that merger is through the examples we already know.

    Great post, keep up the amazing work here at Brownian.

  2. #2 by Mark Roest on July 8, 2007 - 11:02 pm

    Glad to see this; would like to see a GIS overlay on GE that shows all the information we can gather about the environment and human activities and artifacts. Then bring in SL to support 2 simulations: 1 of where we are going with current behaviors, and 1 where those of us who want to change things can do so, plugging in new technologies that exist (or modeling R&D on those that could), and combining that with collaboration within and across cultures, guided by the community of scientists supporting communities, for the well-being of all people, all our relations, and the earth herself. The culture created would be highly communicative, altruistic, and creative, reaching into traditions, spiritual and cultural traditions, and the rest of what is important to people. Unity and diversity, and very place-based. Life imitating art – use a real-world digital earth portal for the reference material and the images and 3D reverse-engineering construction engines to model infrastructure, based on any assumptions or real information on the materials used to make the real things being portrayed. We could show how Captive Column bridges and wind turbines and buildings would use less materials, be quicker and cheaper to build, and last longer, including in earthquakes. We could calculate and show the social, economic and environmental impacts of switching to hypercars (Rocky Mountain Institute) run not on fuel cells, but new (existing) battery systems, or superflywheels (full electric), steam engines, or hydrogen-powered Wankels (from splitting water, not from methane), all driven by wind, sun, truly surplus biomass (no making people hungry to power your car!), etc., on a global basis. We could also model using biointensive gardening (growbiointensive.org) and permaculture and Yoruba heap gardening to produce abundant high-quality food for our health, and sequester carbon in the 6-8 inches of humus that also traps water, so you use 1/6 as much. And so forth! 
    We can wind up with thousands or millions of people actually checking out what works, researching it to model it, and incorporating it into the simulation. Same thing for natural health care practices to complement the healthy diet and the exercise of everyone putting in some time to grow food (what a concept! Just like the whole history of humanity until the 1950s!)
    Where did this idea come from? Aside from over 40 years of research, it came from realizing that if children play to learn to be functional adults, we all can play to learn how, individually and collectively, to save our planet, all our relations, and humanity from the consequences of our actions in the last century.

  3. #3 by DEO DAS on October 15, 2009 - 2:59 am

    I want see all clear present moment.

  4. #4 by James on May 25, 2011 - 7:06 am

    “However, that’s clearly a third way — not user generated, and not mirroring the real world. And that discussion is best left for another day.”

    I think you should check out this project: http://www.outerra.com

    Very exciting.
