Panoramas in Google Earth?


(Via Ogle Earth: “2007 Preview: Public-Private partnerships, 3D US Civil War re-enactments”)

Stefan has the story about the idea of adding gigapixel panoramas to Google Earth as part of a public-private partnership, starting with Pennsylvania for some Civil War re-enactments. I wanted to add a few technology notes to the story, not that I know what Google is up to or what technology they’re actually using, but to explain why 360° panoramas and virtual earths are a natural technological fit, and where the problems lie.

As an exercise, imagine yourself inside Google Earth’s globe, assuming it’s hollow and the imagery you usually see sits on a thin glass shell that can be viewed from inside as well as outside the earth. If you’re in the center of the sphere, what you’re seeing is essentially the same as a virtual panorama. The same mouse controls that let you spin the Earth now let you spin the panorama from the inside, grabbing that shell like you’d grab the ground. The main difference between this and flying up above the Earth, apart from the imagery and its source, is just the point of view of the observer.
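To make that concrete, here’s the shared piece of math in miniature (a Python sketch with illustrative names, nothing from GE itself): the conversion that places a lat/lon point on the shell doubles as the view direction that finds the same pixel from a camera parked at the center.

```python
import math

def latlon_to_shell_direction(lat_deg, lon_deg):
    """Unit vector from the globe's center toward a point on the imagery shell.

    From outside, this is the surface point you orbit around and look down at;
    from a camera sitting at the center, it is simply the view direction that
    puts the same pixel in front of you. The two modes share the lookup.
    """
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    return (x, y, z)
```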

[Aside: one of the first fictional virtual earths I dreamed up for my first novel (written in 1994, needs another draft) worked this way. The mirror world — the part of a virtual world with a direct correlation to reality — would appear as if flying above the Earth, just like GE. The more abstract part of the virtual world, like a metaverse, would exist on the inside of the same sphere, our heads aimed inwards, with a totally different map to explore. Jumping back and forth would take a simple reorientation of one’s perspective, up/down, in/out…]

Anyway, it’s pretty easy to see how the same technology that can stream several terabytes of sphere-mapped Earth imagery can be used to stream a sphere of imagery with you virtually inside it. It’s a natural fit. The only hard part is the transition from outside to inside, such that there is minimal distortion. That Photosynth software from Microsoft solves the ever-present distortion issue by turning their hundreds of separate images off, except for some ghost registration points, showing an image only when its perspective closely matches the viewer’s. GE could probably do something similar, fading these panoramas away when their point of view is no longer correct.
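A toy version of that fade, as I imagine it (a Python sketch with made-up distance thresholds; I have no idea what Google would actually do), would simply drop a panorama’s opacity as the viewer strays from the point where it was captured:

```python
def panorama_opacity(viewer_pos, pano_center,
                     full_opacity_radius=2.0, fade_out_radius=25.0):
    """Opacity of a gigapixel panorama given how far the viewer has strayed
    from its capture point. Distances are in meters; both radii are
    illustrative guesses, not values from any shipping product.
    """
    dx = viewer_pos[0] - pano_center[0]
    dy = viewer_pos[1] - pano_center[1]
    dz = viewer_pos[2] - pano_center[2]
    dist = (dx * dx + dy * dy + dz * dz) ** 0.5
    if dist <= full_opacity_radius:
        return 1.0
    if dist >= fade_out_radius:
        return 0.0
    # Linear falloff between the two radii.
    return 1.0 - (dist - full_opacity_radius) / (fade_out_radius - full_opacity_radius)
```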

What would be more interesting to me is if they can use some of the more sophisticated image-based rendering approaches, especially the idea of plenoptic warping of images — e.g., take a gigapixel sphere and warp it such that its image data could be used for a much larger set of POVs with a more correct 3D feel. I doubt they’ll do anything so fancy at first. But there’s plenty of room to grow. My guess is that the first version will show some iconic form of the gigapixel spheres that you can click on and fly “into,” taking over your mouse controls until you click out and back to the Earth. Even with just that much, it’ll make for a good experience if they handle the controls properly.
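To give a flavor of what that warping involves (again a hypothetical Python sketch, not any real pipeline): if each pixel of the sphere also carried a depth, you could push it out to a 3D point and re-project it toward a nearby viewpoint, one pixel at a time.

```python
import math

def warp_spherical_pixel(u, v, depth, new_cam_offset):
    """Forward-warp one pixel of a depth-augmented spherical panorama to a
    nearby viewpoint -- a toy take on the plenoptic/image-based warping idea.
    A real system would splat or mesh the whole image and fill holes.

    u, v           -- equirectangular coordinates in [0, 1)
    depth          -- distance from the original capture point, in meters
    new_cam_offset -- (x, y, z) of the new viewpoint relative to the old one
    """
    # Pixel -> 3D point, relative to the original capture position.
    lon = u * 2.0 * math.pi - math.pi
    lat = v * math.pi - math.pi / 2.0
    x = depth * math.cos(lat) * math.cos(lon)
    y = depth * math.cos(lat) * math.sin(lon)
    z = depth * math.sin(lat)

    # The same point as seen from the shifted viewpoint.
    rx = x - new_cam_offset[0]
    ry = y - new_cam_offset[1]
    rz = z - new_cam_offset[2]
    r = math.sqrt(rx * rx + ry * ry + rz * rz)

    # 3D point -> pixel in the warped panorama.
    new_lon = math.atan2(ry, rx)
    new_lat = math.asin(rz / r)
    return (new_lon + math.pi) / (2.0 * math.pi), (new_lat + math.pi / 2.0) / math.pi
```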

Speaking of good experiences, I noticed that NASA WorldWind put out a video showing the capabilities of the new version, especially with respect to realistic sun lighting of the Earth. It does look nice, especially with the new Earth imagery available. But I have some experience with these features, and I’d hesitate to call them “the next generation.” That’s not to diminish their other accomplishments, but I think there’s some missing information here.

The original version of EarthViewer, back in 1999/2000, had realistic sun lighting by default, including a nice atmospheric halo/corona that could be front- or backlit, and even a simulated cloud layer at one point. We left both out of the product for the same reason: usability.

Though it seems a nice idea to have a true day/night cycle, the difficulty is not in rendering. It’s a data problem, even worse than obscuring your view of the Earth with clouds (which, in the world of satellite imagery, people pay good money to avoid). When all you have available are daytime images, you can perhaps apply some nice color filters to make them look like night images. But not quite. Shadows are naturally baked into the images, and of course street lights, headlights, and house lights are not turned on. And while you might not care, forced night colors look odd and offer little for usability.
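For concreteness, the usual trick (sketched here in Python, with an assumed Earth-fixed frame; this is the kind of filter I mean, not something Keyhole or Google shipped) is to blend a day texture toward a night one by how directly the sun hits each point:

```python
import math

def day_night_blend(lat_deg, lon_deg, sun_direction):
    """Blend weight between night and day imagery for a point on the globe:
    0.0 means full night, 1.0 means full day. sun_direction is a unit vector
    toward the sun in the same Earth-fixed frame as the lat/lon conversion;
    the twilight width is an arbitrary choice.
    """
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Surface normal at this point on the sphere.
    nx = math.cos(lat) * math.cos(lon)
    ny = math.cos(lat) * math.sin(lon)
    nz = math.sin(lat)
    cos_sun = nx * sun_direction[0] + ny * sun_direction[1] + nz * sun_direction[2]
    # Soften the terminator over a narrow band instead of a hard cut.
    twilight = 0.1
    return max(0.0, min(1.0, (cos_sun + twilight) / (2.0 * twilight)))
```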

To do it right, you’d really want to use images taken at night, blended cleverly into the night-time areas. But the daylight parts then look even more odd, when your new virtual dynamic sun comes from the east while the shadows captured in the imagery were cast from the west. So what you’d need to do is start with images containing NO shadows (similarly NO clouds, for a good dynamic cloud layer), and then re-add the shadows on the fly based on the current sun angle. That’s quite hard. It generally requires having true 3D data for every pixel in the image, and then some. That work can be heavily pre-processed, similar to how we do fast self-shadowing on 3D hardware nowadays. But it still takes quite a bit of data and work.
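To make the shadow step concrete, here’s a brute-force sketch (Python; the heightfield inputs, names, and sun conventions are all mine) of casting terrain shadows by marching each pixel toward the sun, exactly the kind of thing you’d want to precompute rather than run live:

```python
import numpy as np

def heightfield_shadow_mask(heights, cell_size, sun_azimuth_deg, sun_elevation_deg,
                            max_distance=2000.0):
    """Per-pixel hard shadows for a terrain heightfield, by marching from each
    cell toward the sun and checking whether any terrain rises above the sun
    ray. A stand-in for the "true 3D data for every pixel" step; a real
    pipeline would precompute horizon angles or use shadow maps instead.

    heights   -- 2D numpy array of elevations in meters
    cell_size -- ground spacing of the grid in meters
    Returns a 2D boolean array, True where the cell is shadowed.
    """
    az = np.radians(sun_azimuth_deg)
    el = np.radians(sun_elevation_deg)
    step = cell_size                           # march one cell at a time
    dx, dy = np.sin(az) * step, np.cos(az) * step
    rise_per_step = np.tan(el) * step          # height the sun ray gains per step

    rows, cols = heights.shape
    shadow = np.zeros_like(heights, dtype=bool)
    n_steps = int(max_distance / step)

    for r in range(rows):
        for c in range(cols):
            x, y, h = c * cell_size, r * cell_size, heights[r, c]
            for i in range(1, n_steps):
                sc = int(round((x + dx * i) / cell_size))
                sr = int(round((y + dy * i) / cell_size))
                if not (0 <= sr < rows and 0 <= sc < cols):
                    break
                if heights[sr, sc] > h + rise_per_step * i:
                    shadow[r, c] = True
                    break
    return shadow
```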

The Microsoft product makes a good first attempt, but only for buildings that were added as separate 3D objects, and even then, I’m not sure how dynamic they currently allow the sun angle to be. However, at least you can see the difference a more correct shadow makes.

I suppose Keyhole could have compromised and used a realistic day/night Earth when you zoomed all the way out (there’s a nice “earth at night” image out there) and then switched to “day everywhere” when you zoomed in. But the point of the product was always to be more useful than pretty.
