Google’s Virtual World, Redux

Rumors persist that Google is in the process of turning Google Earth into a virtual world. Well, I hate to burst anyone’s bubble, but GE already is a virtual world. It’s a virtual earth. It has all of the features of a virtual world (spatiality, point of view, presence, information modeling), minus a few we’ve come to expect from a game or socially-oriented space (seeing yourself, seeing other people, and directly interacting together).

First, a step back. Regular visitors know that I know a bit about the internal workings of Second Life and Google Earth, though, as I always repeat, I don’t know any of their current plans. What you don’t know is that I’ve also consulted for and even considered roles in some of the newer crop of social MMOs. I think I have a pretty good sense of the field, both good and bad. And I’ve also heard the rumor that Google wants to get into this space.

I can’t speak for anyone at Google, but I know this: they’re certainly capable of it if they invest the bucks. John Hanke was the business/marketing guy behind Meridian 59, one of the first 3D online multiplayer games. He’s now in charge of Google Earth. So I’m sure he has a passion for this space and could find great designers and technologists to help him pull it off. But the big question I have is one of fit with Google’s overall mission to organize the world’s information, especially after their "better products, not more" mandate came down.

The thing about GE is that it’s a so-called "mirror world." The whole point was always for GE to accurately and compellingly reflect information about the real world. Opening up 3D content development via SketchUp and COLLADA import allows one to put virtually anything on the planet. That’s extremely useful, even if the information is speculative (like a new home plan or a proposed stadium). But the point is always to relate even the most speculative information back to the overall context: the real world.

So what happens if/when a purely fictional data layer is intentionally introduced? Does GE become a big open sandbox with a nice, but vestigial picture of the Earth on the floor? Is it Second Life on a sphere? [edit: in case it’s not clear, I think mixing fictional and real content is a mistake, unless it gets its own distinct context, like a game. Right now this separation is regulated by what gets promoted to an official layer. In a free-form SL-like world, perhaps not so much.]

People talk a lot about the technical challenges. But that’s the easy part. Adding avatars certainly wouldn’t be hard. It would require a new server infrastructure. It would require the client to be improved somewhat, mostly to hide communication latency and to handle thousands or even millions of active objects (esp. those pesky moving avatars).

Some have said that "resolution" is the limiting factor — this is true for real-world imagery, though it’s more a data-availability problem than a technical limitation (1mm pixels are not out of the question). The system could probably support very detailed 3D models for buildings as well. But relying on users to create these may not produce good results in the near term. It takes a lot of work, and there’s not yet much reason to do it. Procedural tools, unlike today’s SketchUp, would be essential. Without paid artists making these things, competition and collaborative rating/filtering of content are also essential.
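To make the procedural-tools point concrete, here’s a toy sketch in Python. Every name and number here is invented for illustration — this isn’t any real SketchUp or GE API — but it shows the core idea: a handful of parameters expanding into geometry, instead of a user modeling every wall by hand.

```python
# Toy procedural building generator: a 2D footprint plus a couple
# of parameters expands into stacked rings of 3D vertices. All
# names are illustrative, not from any real GE or SketchUp API.

def extrude_building(footprint, floors, floor_height=3.0):
    """Extrude a footprint (list of (x, y) points) upward, returning
    one ring of (x, y, z) vertices per floor line, roof included."""
    rings = []
    for level in range(floors + 1):  # +1 for the roof line
        z = level * floor_height
        rings.append([(x, y, z) for (x, y) in footprint])
    return rings

# Four corner points and two numbers stand in for hours of hand modeling.
tower = extrude_building([(0, 0), (10, 0), (10, 10), (0, 10)], floors=20)
print(len(tower))        # 21 rings of vertices
print(tower[-1][0][2])   # roof height: 60.0
```

Scale that up with grammar rules for windows, roofs, and facades and you get plausible cityscapes from almost no user effort — which is the only way user-generated 3D cities get filled in anytime soon.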

And so let’s say Google does add whatever is needed and suddenly you can see and even chat with all of the other users of GE in your virtual proximity. That’s cool. But what then? I mean, the key thing for any experience is that it must either be fun or useful (or, ideally, both). So what would make it fun or useful? That’s the hard part. And as history shows, simply having the implicit marketing muscle of Google is only enough to get people in the door.

Here’s a short list of some good (and bad) applications of this strictly potential technology:

Collaborative editing — work together on models in-world. Minimally requires SketchUp functionality to be merged into GE (which is possible, but not at all easy). Initially, it could just be used for guided tours, like for selling real estate. That’s something, but it’s still fairly niche.

Socializing — To talk to people, you first need to find them. Say you fly to NY and see a hot-looking avatar nearby. What do you say? "Hey, I see you’re also searching for French restaurants near 42nd st. Ooh, la la." Second Life (and others) offer the concept of personal spaces, or what I’d call HomeSpaces (like home pages on the web). Where is yours in a social GE? Is it tied to your real home? Do we invite people to come visit our HomeSpace, full of virtual furniture from Ikea and appliances from Sears? Yawn. Beyond the basic 3D MySpace everyone wants to do, the key to socializing is sharing context and doing (hopefully fun) things together.

Networking — It might be cool to discover likable real-world neighbors (assuming knocking on doors is too intimidating). But apart from the obvious privacy issues, I’m not sure you need access to a whole virtual planet to meet the kid next door. The "dating" angle could certainly be made to work, after the privacy issues are solved. Adding a social network like SL has might ultimately allow you to find new friends easily, if Google works on the profiling and discovery tools. But what then? Social Networking by itself is fickle. Again, people need something to do together, or at least a purpose for spending time and forking over their personal information to a big corporation, even Google.

Creative Exploration — Ah, here’s something. Say a group of people get together to turn a bit of empty virtual real estate into a hub of creativity, like Burning Man, where people of similar (or similarly altered) minds know to come. GE then becomes more of a showcase tool. Here, adding scripting to the client would be essential. Just looking at 3D models gets boring. They need to come alive, perhaps even with physical simulation. Go a little further and you have games (some of which are already done as mashups with Google Maps — but these could live inside the system, not outside). This is what SL seeks to do. So could Google do it better?

Alternate Reality — AR is usually about overlaying fictional worlds onto the real one. But why not add fictional places to GE’s map too? So a group of people take the NY skyline and turn it into a fantasy land (Middle Ages, futuristic, etc.). That might be fun to build and explore. And there could even be a few games built there. But apart from the interesting juxtaposition of an Elven Forest across the Hudson from Jersey City, why does this need to live on a map of the real world? Certainly, GE wouldn’t want people searching for French restaurants to wind up with unreal results. (I can just see the mapping directions now: turn left at the big oak tree, down the rabbit hole, and 1.2 mi across the swamp of eternal tears…). They may need some better separation between these two products, without sacrificing fun accidental discoveries. This has always been an issue for the "layers" approach.

Frankly, the most profound thing Google could do with Google Earth right now is what they did for Maps: enable 3D mashups. Any and all of the ideas above would get developed, tried, and tested by others. But for 3D applications like GE, this is probably the most difficult technical hurdle of any I’ve mentioned. Had Intrinsic Graphics (the makers of the 3D rendering layer inside GE) survived, I imagine it would be easier to have a nice, free Google Earth Toolkit for building new 3D apps, using GE and its powerful servers under the hood. But that didn’t happen, at least not yet.

On the other hand, two of the founders of Intrinsic Graphics are now at Google. So rather than have Google try to solve all these virtual worldly problems (as they tried with Orkut for social networks), I’d much rather see them open up the system in the way Microsoft has for its VE offering, as a component that others can build on or integrate, for free, but with ad revenue flowing back to Google, of course.

The risk to Google is much lower. They can still make gobs of money. And the potential wins are much greater than going it alone. At least, that’s what I would do.


The iPhone

I watched Jobs’ keynote today with great excitement, refreshing the live Engadget blog post about a hundred times. Apple has truly revolutionized the cell phone by including concepts no one has ever dreamed of — music, internet, even PIM-type applications, all with a touch-screen interface that works seemingly like magic. And they’re doing it at the pocket-friendly price point of $500, after locking you into a 2-year unknown-price-for-voice-plus-data contract with the Empire itself, AT&T.

If you’re not sensing the sarcasm, you don’t know me. Look, I have no doubt that the user interface and form factor will blow away everything currently on the market. I have no doubt this is ultimately good for everyone, as Nokia and the others will in turn be forced to push the cell phone carriers to finally drop the “less features for more money or else” mafia mentality. But I’m not yet drinking the Kool-Aid. Why? Because my cell phone is not my life.

In the end, Cingular/AT&T will certainly get a boost from the usual Mac hordes who, like Emeril Lagasse’s audience, gasp with excitement if Steve Jobs puts sugar on his waffles. Apple will sell more music. And the other phone makers will get their improved products out, probably at better price points given the premium Apple fans will pay. But I have no intention of ever using AT&T service for anything given their track record on privacy. And I have no intention of paying dues to Apple simply to be part of its cult, especially given Apple’s “200 patent” threat and their own track record on DRM. I can respect good design. But that doesn’t make me buy.

Whenever Apple has the best product/features/prices/service, I’ll consider them. But next up in the decision chain is hearing anything about processor specs, RAM, standby time (none of which were in the tech specs on Apple’s site), and a whole host of user-experience results, most importantly running generic OS X apps. Personally, I’d have no problem waiting a year or two before making such a leap. And by then, I think my choices will be quite diverse. So I’m happy that the logjam will finally break. But I’m not convinced that Apple will be the main beneficiary.

A Second Life for Old Code

Official Linden Blog » Blog Archive » Embracing the Inevitable

If you haven’t heard, Linden Labs, makers of Second Life, “open sourced” its client code today (some of which I contributed to, way back when). This could be a bigger move for openness than when they decided to revert content rights back to their in-world authors. On the other hand, it could be more disruptive than when they opened free registration and swelled their numbers to two million virtual virtual inhabitants (the number of real virtual inhabitants, i.e., those who visit once or more and contribute to the economy, being much lower).

The first thing this does is permit anyone with a C++ compiler (which is to say, anyone who knows how to use a C++ compiler) to make cool new clients that do cool new things. The Linden protocols had already been reverse engineered, at least as of this version. There were hints that the next version would be entirely different anyway. But making cool new things is what innovation is all about, and having the source makes that much, much easier. Companies like Electric Sheep are now much freer to customize, build new and better user interfaces, add special features, and so on. It could be incredibly exciting.

But the danger is fracture. What happens when custom client A supports one cool new feature and B supports another? Ideally, one could pick and choose the best of both. But tell that to proponents of the various flavors of Linux in active development, or the user interfaces that vie for attention and acceptance. Camps emerge and later compromise becomes more difficult. And what happens when client A supports a cool new feature that’s totally invisible to anyone using client B? The world itself becomes more and more solipsistic. And that seemed to be the one thing Linden Labs was always trying to avoid.
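At bottom, the fracture problem is a protocol-versioning problem, and one standard mitigation is a capability handshake: each client advertises the optional features it supports, and a session only enables what both sides share. Here’s a minimal sketch in Python — the feature names are invented, not real SL protocol capabilities:

```python
# Sketch of a capability handshake between divergent clients.
# Each side advertises its optional features; the session enables
# only the intersection, so neither client renders content the
# other can't see. Feature names here are hypothetical.

def negotiate(features_a, features_b):
    """Return the set of features both clients can safely use."""
    return set(features_a) & set(features_b)

client_a = {"voice-chat", "flexi-prims", "custom-shaders"}
client_b = {"voice-chat", "flexi-prims"}

shared = negotiate(client_a, client_b)
print(sorted(shared))  # ['flexi-prims', 'voice-chat']
```

It doesn’t solve the social problem of camps forming, but it at least keeps client A’s invisible features from silently degrading client B’s view of the world.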

What would actually do more good than the literal source is for Linden to create a system whereby improvements in all of the emergent clients can be aggregated and distributed to users in a straightforward manner — a kind of roll-your-own client kit, with an official channel for improvements to migrate back to the community.

Plugins do that for browsers like Mozilla. You can download the source for Mozilla and make changes, and ideally release them back to the wild. But if you want to drive adoption of your changes, the best route is to make a plugin for the common trunk that everyone runs. For Linux, Linus Torvalds does a lot of the work of picking and choosing which new developments make it into the core, blessed version. I’m not sure that Linden has signed up for that, or if they’re just dumping a big tarball and saying, “here you go.”
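The plugin route can be sketched in a few lines: the common trunk exposes named hooks, and third-party improvements register handlers against them instead of forking the whole client. This is a toy illustration in Python, with invented hook names — not anything from the actual Linden codebase:

```python
# Minimal plugin registry, the "roll-your-own client kit" idea:
# the trunk defines named hooks, plugins attach handlers, and the
# core fires each hook in registration order. Hook names invented.

class PluginRegistry:
    def __init__(self):
        self.hooks = {}

    def register(self, hook_name, handler):
        """A plugin attaches a handler to a named extension point."""
        self.hooks.setdefault(hook_name, []).append(handler)

    def fire(self, hook_name, *args):
        """The trunk invokes every handler registered for the hook."""
        return [handler(*args) for handler in self.hooks.get(hook_name, [])]

trunk = PluginRegistry()
# A third-party improvement ships as a handler, not a fork.
trunk.register("on_chat_message", lambda msg: msg.upper())
print(trunk.fire("on_chat_message", "hello"))  # ['HELLO']
```

Everyone still runs the same trunk, so improvements compose instead of fracturing into incompatible clients — which is exactly what the tarball-dump approach doesn’t guarantee.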

But I’m eager to see where it leads.