Google Earth 4.0


Google just released Google Earth 4.0

It’s still in beta, but I just downloaded it and played for a while. The new interface looks very good, and the other improvements are certainly worth noting. As usual, my comments come from the point of view of someone who knows the space fairly well, but no longer works in it and doesn’t know Google’s internal plans.

Picasa integration (gearth blog) —  [disclaimer: I wrote the original compositing and 3D rendering engine for a Picasa competitor, Picaboo, though I have no stake in it.] — I’ve only seen a screen-shot, but it looks like Picasa images can be linked in GE inside pop-up 2D bubbles, like placemarks. The goal seems to be to get people to georeference their photos. Excellent. That’s what integration is all about. It certainly adds a new dimension to showing off your travel pics if you can recreate the entire trip in 3D.

The next small step I’d love to see is the ability to orient the photos in 3D (or on a 2D plane in 3D), mapped as closely as possible to the original point of view. When that happens, with a little extra image magic, you could start to see the reciprocal of QuickTime VR panoramas on the globe: fine-grained image overlays created from thousands of digital cameras in true distributed fashion.

I’d also love to see this kind of thing linked to TV news, where a more accurate position/orientation sensor (6DOF) on a remote camera (esp. those helicopter-mounted ones) could feed into GE at home and on CNN to show the exact orientation of the live video, ideally placed on the earth. As the camera pans around, if you stitch the frames together just so, you could get an instant panorama of what it’s like on the ground, up to the minute. Neither of those is probably a big money-maker, but they’d be well received.

Collada import (google) — Rather than just supporting SketchUp’s file format, Collada import is critical to bringing a bigger world of 3D objects into GE. Collada is set to be the next standard (better than X3D in many respects), so if you read my previous post on "semantic modeling," note that this tries to solve the first-order babel problem of too many file formats. Collada standardizes the verbs and nouns, though I’d still argue the set is too narrowly focused on rendering features, points and polygons. Perhaps it can be extended to semantics. [And note: when I mentioned that I was "working on something like that as part of something else," I should have added that I’d love for Google or someone else to beat me to it. It’s just a building block I need.]
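For the curious, the hook into GE is that the new KML can reference a Collada (.dae) file directly via a Model element. Here’s a minimal sketch; the coordinates, orientation, and the file path models/building.dae are all hypothetical placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/2.1">
  <Placemark>
    <name>Textured building (hypothetical)</name>
    <Model>
      <altitudeMode>relativeToGround</altitudeMode>
      <!-- where on the globe the model's origin is planted -->
      <Location>
        <longitude>-122.0839</longitude>
        <latitude>37.4219</latitude>
        <altitude>0</altitude>
      </Location>
      <!-- rotate/scale the model into place -->
      <Orientation>
        <heading>45</heading><tilt>0</tilt><roll>0</roll>
      </Orientation>
      <Scale><x>1</x><y>1</y><z>1</z></Scale>
      <!-- the Collada geometry itself, exported from SketchUp or any
           other tool that speaks .dae -->
      <Link>
        <href>models/building.dae</href>
      </Link>
    </Model>
  </Placemark>
</kml>
```

The nice part is that the geometry, materials, and textures all live in the standard .dae file, so any Collada-capable tool can feed GE without a format-specific exporter.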

Level of Detail (same) — Level of detail is critical to being able to handle a planet’s worth of 3D and other layer data. One reason GE is so fast is how cleverly it chops the world up and serves it predictably, at the right resolution for any given view. To better understand the problem, we’re talking about a difference in scale of around 1,000,000 : 1 between being zoomed out to space and standing on the ground. In other words, if the world is populated with 3D objects at their best resolution, when you zoom out, without any LOD, you’d render approximately a million times more polygons than you need or could handle. So simplifying objects and "culling" them out (removing them when they don’t contribute significantly to the rendered image) is critical.

It looks like the new KML goes a long way towards handling that, but it leaves most of the decisions to the end-user, for better or worse. I’m hoping that the 3rd-party export tools will be as smart as the GE developers with the actual slicing and dicing of objects and layers, but I expect a range of experiences. I’d hope to see more automatic LOD management tools from Google in the future, optimizing both the number of polygons drawn in a given view and the number of pixels over-drawn (which is more likely what’s causing the 3D buildings layer to run slowly on my 3-year-old laptop).
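To make the "decisions left to the end-user" concrete: the new KML lets you wrap a feature in a Region with an Lod block, so GE only fetches and draws it when its bounding box covers enough screen pixels. A minimal sketch, with hypothetical bounds and thresholds:

```xml
<Placemark>
  <name>LOD-gated feature (hypothetical)</name>
  <Region>
    <!-- geographic bounding box of this feature -->
    <LatLonAltBox>
      <north>37.425</north><south>37.418</south>
      <east>-122.080</east><west>-122.088</west>
    </LatLonAltBox>
    <Lod>
      <!-- only activate once the region projects to at least
           256 pixels on screen -->
      <minLodPixels>256</minLodPixels>
      <!-- -1 means no upper bound: never cull for being too close -->
      <maxLodPixels>-1</maxLodPixels>
    </Lod>
  </Region>
  <Point>
    <coordinates>-122.084,37.421,0</coordinates>
  </Point>
</Placemark>
```

Nesting coarse regions (with low-res stand-ins) around finer ones is how you hand-roll the pyramid that GE builds automatically for its own imagery, which is exactly the part I’d like to see tooling automate.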

Updating (same) — It looks to me like there’s enough power in the new KML spec to animate objects (such as avatars) with the <update> features. I don’t know how efficient that would be for more than a few, but it seems possible for those of you who want GE to approach the popular conception of the metaverse. As I mentioned, I think it takes more than 3D objects and avatars for a metaverse, but have at it.
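The mechanism here is that a NetworkLink can return a NetworkLinkControl whose Update block patches features already loaded from an earlier fetch. A sketch of moving one placemark, where the URL and the targetId "avatar42" are made-up examples:

```xml
<kml xmlns="http://earth.google.com/kml/2.1">
  <NetworkLinkControl>
    <Update>
      <!-- the previously loaded KML file whose features we're patching -->
      <targetHref>http://example.com/avatars.kml</targetHref>
      <Change>
        <!-- replace the geometry of the placemark with id "avatar42" -->
        <Placemark targetId="avatar42">
          <Point>
            <coordinates>-122.0840,37.4220,2</coordinates>
          </Point>
        </Placemark>
      </Change>
    </Update>
  </NetworkLinkControl>
</kml>
```

Each refresh only ships the deltas rather than re-sending the whole layer, which is why animating a handful of objects this way seems plausible; whether it scales to crowds is the open question.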

The best thing GE could do to facilitate that near-term vision of seeing other GE users in a shared earth-centric space would be to add a dynamic update layer and a new parallel server system (if they don’t already have this in the enterprise stuff) for streaming thousands of 3D positional updates alongside the earth and layer data. It would certainly be useful for companies who want to use GE to track packages and trucks or for strapping a GPS collar on your teenager (just kidding). And as long as it’s anonymous, I think it would be amazing to see a hundred thousand dots representing the eye-points of all GE users at any given time. If the updates include orientation and you add some LOD (as above) to see only, say, 100 of the closest avatars as avatars, then it could easily be done with something more compelling than dots.
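Short of a dedicated server system, the closest existing building block is a NetworkLink polling a feed on a short interval. A sketch, with a hypothetical feed URL and a 2-second interval:

```xml
<NetworkLink>
  <name>Live positions (hypothetical feed)</name>
  <Link>
    <href>http://example.com/positions.kml</href>
    <!-- re-fetch the feed on a fixed timer -->
    <refreshMode>onInterval</refreshMode>
    <refreshInterval>2</refreshInterval>
  </Link>
</NetworkLink>
```

Polling like this is fine for trucks and packages; pushing a hundred thousand eye-points would want the parallel streaming server described above rather than every client hammering one KML URL.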

As for avatars interacting with each other and the world, that’d be a bit harder to do in the current streaming model (I imagine) without adding simulation and latency-reduction functionality. The rumors that Google hired a bunch of former There.com developers could be a step in this direction, or it could be more along the lines of 3D "avatars" for their Talk group or for mobile games. I would have expected Google to just buy the company-formerly-known-as-There or Linden Research for their people and technology if rushing to build a Metaverse was really Google’s goal. So I’d actually put my money on the idea that they’re moving into mobile 3D games, which is huge in and of itself.

But, as I mentioned to Jerry, I’m pretty sure that one of the reasons Neal Stephenson isn’t all gung ho about The Metaverse is that Snow Crash is a bit of a dystopian vision. I’m not sure why so many people are rushing out to copy it. But no one ever accused us engineers of having the best reading comprehension skills.

But enough of my opinions. Enjoy Google Earth as it truly comes into its own.

  1. #1 by Keith on November 28, 2006 - 12:46 pm

    Hi All Experts,
    Does anyone use Google Earth images as ground image planes for use in aerial scenes? I know how to stitch them together, but are there any tools or tricks to make sure that the images are at the same height, angle and such to make sure they stitch well? I know in the pro version you can get bigger images, but I’m not going to pay for the pro version when I could stitch multiple images together…

  2. #2 by avi on November 29, 2006 - 10:26 am

    Hey Keith,

    The terms of service for using GE won’t allow you to use it this way. You’ll have to use some of the free image sources out there, or make a mashup with GoogleMaps that uses their API to fetch the image tiles.
