Archive for December, 2006

How Do Panoramas in Google Earth Look?

Here’s a preview of what the next Google Earth feature might look like (I don’t actually know).

I’m impressed with how good a simple sphere of imagery looks just floating there above the earth. The warping effect has a sort of natural bubbleness. And as the POV flies in towards the center of the sphere, you can see the distortion effect I mentioned last time. It’s not as jarring as I expected, but it’s certainly noticeable. Fading these spheres in as you approach may improve things a bit. The thing to consider is what happens when there are a thousand of these in close proximity.
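To make the fade idea concrete, here’s a throwaway sketch of the kind of distance-based alpha ramp I have in mind. The function name and thresholds are mine, not anything from GE:

```python
import numpy as np

def panorama_alpha(camera_pos, sphere_center, sphere_radius,
                   fade_start=10.0, fade_end=2.0):
    """Fade a panorama sphere in as the camera approaches.

    Returns 0.0 (invisible) when the camera is farther than
    fade_start sphere-radii from the center, 1.0 (fully opaque)
    within fade_end radii, and a linear ramp in between.
    """
    dist = np.linalg.norm(np.asarray(camera_pos, float) -
                          np.asarray(sphere_center, float))
    d = dist / sphere_radius  # distance measured in sphere radii
    if d >= fade_start:
        return 0.0
    if d <= fade_end:
        return 1.0
    return (fade_start - d) / (fade_start - fade_end)
```

Something like this would also help with the thousand-spheres case: anything past the fade_start distance contributes nothing and can be culled outright.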

What this really needs, though, is better controls. The video seems to be using a 6DOF controller like the 3DConnexion one we blogged about last month. I did buy one, by the way, though I don’t use it as much as I’d like just yet. Be warned that the initial settings are bad; you’ll need to adjust the various sensitivities before feeling really comfortable with it. The company would do well to add a training app that lets you steer a cube around, with on-screen sliders to adjust the action until it feels just right. But at least SketchUp has a way to invert the sense of the controls between flying and moving the model. The device also suffers from the “volume problem” I blogged about a while back: its sensitivity adjustments multiply with those of each application, making universal tuning difficult.

Anyway, what these panoramas really need, especially for mouse users, is a way to link the normal earth-navigating controls to the viewing of the spheres. Here’s one way: click on a sphere and auto-pilot the viewer inside it. The center is the sweet spot for seeing minimal distortion (zero, if your eyes are a little “fishy”). But with a mouse alone, there’s no good way to pan and tilt your view within the sphere, as you can in QuickTime VR.

That leads to the second, better approach. Currently, when you click on the earth, you grab the spot you click, and the rotation (of the whole earth, or of you around the earth) is computed from where you drag that spot, such that the earth stays locked to the mouse, like your finger on a globe. GE could do the same thing for these spheres, treating the center of the sphere as your new, temporary center of rotation.
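The “locked to the mouse” math is the same no matter where the rotation center sits: take the direction from the center to the spot you grabbed and the direction to the spot now under the cursor, then rotate one onto the other. A rough sketch, with my own naming (this is the general trackball idea, not GE’s actual code):

```python
import numpy as np

def grab_rotation(center, grabbed_point, point_under_cursor):
    """Axis-angle rotation about `center` that carries the originally
    grabbed surface point onto the point currently under the mouse
    ray, so the surface stays glued to the cursor while dragging."""
    d0 = np.asarray(grabbed_point, float) - np.asarray(center, float)
    d1 = np.asarray(point_under_cursor, float) - np.asarray(center, float)
    d0 /= np.linalg.norm(d0)
    d1 /= np.linalg.norm(d1)
    axis = np.cross(d0, d1)
    n = np.linalg.norm(axis)
    if n < 1e-9:                      # no drag, or a degenerate 180-degree flip
        return np.array([0.0, 0.0, 1.0]), 0.0
    angle = np.arccos(np.clip(np.dot(d0, d1), -1.0, 1.0))
    return axis / n, angle
```

Swap in the earth’s center and you get today’s globe dragging; swap in the panorama sphere’s center and you get the teacup spin described below.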

So, for example, you see a sphere nearby and click within it. GE figures out that you clicked the sphere, not the ground. It sees how much you move the mouse after you click and computes a new rotation around the image sphere itself, as if the sphere were a kind of handle you could spin yourself around (like the teacups at Disneyland) or a mini-earth that’s glued down. Zooming would now move you in and out of the center of the sphere, right up to the sweet spot and back out again. When you grab the earth, or perhaps when you release the mouse button, you’re back to the normal controls. Simple and effective, but for now it’s something GE itself would have to implement.
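In code terms, the mode switch only needs a hit test and a clamped zoom. Again a hypothetical sketch, assuming the same vector conventions as above:

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """True if the mouse ray hits the panorama sphere, i.e. the click
    grabbed the sphere rather than the ground behind it."""
    oc = np.asarray(origin, float) - np.asarray(center, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    # ray-sphere quadratic: t^2 + 2t(oc.d) + (oc.oc - r^2) = 0
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False
    return -b + np.sqrt(disc) >= 0.0  # at least one hit in front of us

def zoom_toward_center(camera_pos, center, zoom_amount):
    """Move the camera along the line to the sphere's center, stopping
    at the center itself (the minimal-distortion sweet spot)."""
    p = np.asarray(camera_pos, float)
    c = np.asarray(center, float)
    offset = p - c
    dist = np.linalg.norm(offset)
    if dist < 1e-9:
        return c                      # already at the sweet spot
    new_dist = max(dist - zoom_amount, 0.0)
    return c + offset * (new_dist / dist)
```

When ray_hits_sphere says you grabbed a sphere, drive grab_rotation with that sphere’s center; on mouse release, fall back to the earth’s center and the normal controls return for free.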

As always, I have no knowledge of Google’s plans or what’s currently going on inside the app. Just my educated guesses from past experience.

Afterthought: using a literal sphere textured with panoramas is the simple approach, but there’s another that would work quite well. Almost all 3D video cards support cubic environment maps: just like a panorama sphere, except the shape is a cube, which makes it more efficient for the hardware. Transforming a sphere map to a cube map is not hard. And once in cube-map form, you could use geometry of any shape and get similar results. For example, you could make a big magnifying glass that shows the panorama as if seen through the glass. You could make a mirror ball that reflects the panorama instead of mapping it flat. And you could more easily warp the geometry as you fly closer, going from a billboard (a plane) to a full cube or sphere as a function of distance. But again, it’s something GE would have to support internally.
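The sphere-to-cube transform really is straightforward resampling: for every texel on every cube face, build the 3D direction that texel represents and look that direction up in the equirectangular source. A slow but self-contained sketch (the face orientation conventions here are my arbitrary choice, and a real implementation would vectorize the loops):

```python
import numpy as np

def face_direction(face, u, v):
    """3D direction for a cube-map face texel, with u, v in [-1, 1]."""
    return {
        '+x': np.array([ 1.0,  -v,  -u]),
        '-x': np.array([-1.0,  -v,   u]),
        '+y': np.array([  u,  1.0,   v]),
        '-y': np.array([  u, -1.0,  -v]),
        '+z': np.array([  u,  -v,  1.0]),
        '-z': np.array([ -u,  -v, -1.0]),
    }[face]

def sphere_to_cube_face(pano, face, size):
    """Resample an equirectangular panorama (H x W x channels) into
    one cube-map face of size x size pixels, nearest-neighbor."""
    h, w = pano.shape[:2]
    out = np.empty((size, size, pano.shape[2]), dtype=pano.dtype)
    for row in range(size):
        for col in range(size):
            u = 2.0 * (col + 0.5) / size - 1.0
            v = 2.0 * (row + 0.5) / size - 1.0
            d = face_direction(face, u, v)
            d = d / np.linalg.norm(d)
            lon = np.arctan2(d[0], d[2])     # longitude in [-pi, pi]
            lat = np.arcsin(d[1])            # latitude in [-pi/2, pi/2]
            x = int((lon / (2.0 * np.pi) + 0.5) * (w - 1))
            y = int((0.5 - lat / np.pi) * (h - 1))
            out[row, col] = pano[y, x]
    return out
```

Run it six times (once per face) and you have a cube map the hardware can apply to a cube, a billboard, or that mirror ball.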


Panoramas in Google Earth?


Stefan at Ogle Earth has the story about the idea of adding gigapixel panoramas to Google Earth as part of a public-private partnership, starting with Pennsylvania for some Civil War re-enactments. I wanted to add a few technology notes to the story, not that I know what Google is up to or what they’re actually using tech-wise, but to explain why 360-degree panoramas and virtual earths are a natural technological fit, and where the problems lie.

As an exercise, imagine yourself inside Google Earth, assuming it’s hollow and the imagery you usually see sits on a thin glass shell that can be viewed from inside as well as outside the earth. If you’re at the center of the sphere, what you see is essentially the same as a virtual panorama. The same mouse controls that let you spin the Earth now let you spin the panorama from the inside, grabbing that shell as you’d grab the ground. The main difference between this and flying up above the Earth, apart from the imagery and its source, is just the point of view of the observer.

[Aside: one of the first fictional virtual earths I dreamed up for my first novel (written in 1994; it needs another draft) worked this way. The mirror world, the part of a virtual world with a direct correspondence to reality, would appear as if you were flying above the Earth, just like GE. The more abstract part of the virtual world, like a metaverse, would exist on the inside of the same sphere, our heads aimed inward, with a totally different map to explore. Jumping back and forth would take a simple reorientation of one’s perspective: up/down, in/out…]

Anyway, it’s pretty easy to see how the same technology that can stream several terabytes of sphere-mapped Earth imagery can be used to stream a sphere of imagery with you virtually inside it. It’s a natural fit. The only hard part is the transition from outside to inside, such that there’s minimal distortion. Microsoft’s Photosynth solves the ever-present distortion issue by turning its hundreds of separate images off, except for some ghost registration points, showing an image only when its perspective closely matches the viewer’s. GE could probably do something similar, fading these panoramas away when their point of view is no longer correct.
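I don’t know Photosynth’s actual matching metric, but the shape of the idea is simple: fade an image in only when the viewer’s pose is close to the pose it was captured from. A hypothetical sketch, with arbitrary thresholds:

```python
import numpy as np

def view_match_alpha(eye, view_dir, capture_pos, capture_dir,
                     max_offset=5.0, max_angle_deg=30.0):
    """Show an image fully only when the viewer stands near its
    capture point AND looks roughly the way the camera pointed;
    fade it toward invisible as either condition degrades."""
    offset = np.linalg.norm(np.asarray(eye, float) -
                            np.asarray(capture_pos, float))
    v = np.array(view_dir, float)
    v /= np.linalg.norm(v)
    c = np.array(capture_dir, float)
    c /= np.linalg.norm(c)
    angle = np.degrees(np.arccos(np.clip(np.dot(v, c), -1.0, 1.0)))
    pos_term = max(0.0, 1.0 - offset / max_offset)
    ang_term = max(0.0, 1.0 - angle / max_angle_deg)
    return pos_term * ang_term   # 1.0 = perfect match, 0.0 = hidden
```

For a full panorama sphere the angle term drops out (it covers every direction), leaving just the distance falloff from the capture point.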

What would be more interesting to me is if they could use some of the more sophisticated image-based rendering approaches, especially the idea of plenoptic warping of images: take a gigapixel sphere, say, and warp it such that its image data can serve a much larger set of POVs with a more correct 3D feel. I doubt they’ll do anything so fancy at first, but there’s plenty of room to grow. My guess is that the first version will show some iconic form of the gigapixel spheres that you can click on and fly “into,” taking over your mouse controls until you click out and back to the Earth. Even with just that much, it’ll make for a good experience if they handle the controls properly.

Speaking of good experiences, I noticed that NASA World Wind put out a video showing the capabilities of the new version, especially its realistic sun lighting of the Earth. It does look nice, especially with the new Earth images available. But I have some experience with these features, and I’d hesitate to call them “the next generation.” That’s not to diminish the team’s other accomplishments, but I think there’s some missing information here.

The original version of Earthviewer, back in 1999/2000, had realistic sun lighting by default, including a nice atmospheric halo/corona that could be front- or backlit, and even a simulated cloud layer at one point. We left both out of the shipping product for the same reason: usability.

Though a true day/night cycle seems like a nice idea, the difficulty is not in rendering. It’s a data problem, even worse than obscuring your view of the Earth with clouds (which, in the world of satellite imagery, people pay good money to avoid). When all you have available are daytime images, you can perhaps apply some nice color filters to make them look like night images. But not quite. Shadows are baked into the images, and of course the street lights, headlights, and house lights are not turned on. And while you might not care, forced night colors look odd and offer little for usability.

To do it right, you’d really want to use images actually taken at night, blended cleverly into the nighttime areas. But the daylight parts then look even odder, when your new dynamic virtual sun shines from the east while the shadows captured in the imagery were cast from the west. So what you’d need to do is start with images containing NO shadows (and similarly NO clouds, for a good dynamic cloud layer), and then re-add the shadows on the fly based on the current sun angle. That’s quite hard. It generally requires having true 3D data for every pixel in the image, and then some. That work can be heavily pre-processed, similar to how we do fast self-shadowing on 3D hardware nowadays, but it still takes quite a bit of data and work.
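To see why “true 3D data for every pixel, and then some” is the requirement, here’s a brute-force toy: given a heightfield and a sun direction, march each pixel’s ray toward the sun and mark it shadowed if terrain rises above the ray. This is an illustration of the principle, not how a shipping product would do it; real systems precompute (horizon maps and the like) instead of marching live:

```python
import numpy as np

def sun_shadow_mask(height, sun_dir, step=1.0, max_steps=200):
    """Recompute cast shadows for a heightfield from a sun direction.

    height:  2D array, one surface height per pixel.
    sun_dir: 3-vector toward the sun (x = columns, y = rows, z = up),
             with z > 0 so the sun is above the horizon.
    Returns a boolean mask, True where a pixel is in shadow.
    """
    s = np.asarray(sun_dir, float)
    s = s / np.linalg.norm(s)
    rows, cols = height.shape
    shadow = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            x, y, z = float(c), float(r), float(height[r, c])
            for _ in range(max_steps):
                x += s[0] * step
                y += s[1] * step
                z += s[2] * step
                ri, ci = int(round(y)), int(round(x))
                if not (0 <= ri < rows and 0 <= ci < cols):
                    break                  # marched off the tile: lit
                if float(height[ri, ci]) > z:
                    shadow[r, c] = True    # terrain blocks the sun
                    break
    return shadow
```

Even this toy is O(pixels x steps) per sun position, and it assumes you already have clean, shadow-free height and color data for every pixel, which is exactly the data that’s so hard to come by.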

The Microsoft product makes a good first attempt, but only for buildings that were added as separate 3D objects, and even then, I’m not sure how dynamic they currently allow the sun angle to be. However, at least you can see the difference a more correct shadow makes.

I suppose Keyhole could have compromised and used a realistic day/night Earth when you zoomed all the way out (there’s a nice “earth at night” image out there) and then switched to “day everywhere” when you zoomed in. But the point of the product was always to be more useful than pretty.
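That compromise is cheap to express, too. A hypothetical per-pixel blend, with a made-up switch_altitude for where the effect turns off:

```python
import numpy as np

def day_night_blend(day_px, night_px, surface_normal, sun_dir,
                    altitude, switch_altitude=1e6):
    """Blend day imagery with an 'earth at night' image along the
    terminator when zoomed way out, fading back to plain daytime
    imagery as the camera descends below switch_altitude."""
    n = np.asarray(surface_normal, float)
    n = n / np.linalg.norm(n)
    s = np.asarray(sun_dir, float)
    s = s / np.linalg.norm(s)
    # cosine of sun elevation, with a slightly softened terminator
    daylight = float(np.clip(np.dot(n, s) / 0.2 + 0.5, 0.0, 1.0))
    blended = (daylight * np.asarray(day_px, float) +
               (1.0 - daylight) * np.asarray(night_px, float))
    # fade the whole effect out as you zoom in
    zoom_t = min(max(altitude / switch_altitude, 0.0), 1.0)
    return zoom_t * blended + (1.0 - zoom_t) * np.asarray(day_px, float)
```

The baked-shadow mismatch never shows up, because by the time you’re close enough to see individual shadows, you’re back to pure daytime imagery.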
