How Google Earth [Really] Works

Introduction

After reading an article called "How Google Earth Works" on the great site HowStuffWorks.com, I realized that the article was more of a "how cool it is" and "here's how to use it" piece than a "how Google Earth [really] works."

So I thought there might be some interest, and despite some valid intellectual property concerns, here we are, explaining how at least part of Google Earth works.

Keep in mind, those IP issues are real. Keyhole (now known as Google Earth) was attacked once already with claims that they copied someone else’s inferior (IMO) technology. The suit was completely dismissed by a judge, but only after many years of pain. Still, it highlights one problem of even talking about this stuff. Anything one says could be fodder for some troll to claim he invented what you did because it "sounds similar." The judge in the Skyline v. Google case understood that "sounding similar" is not enough to prove infringement. Not all judges do.

Anyway, the solution to discussing "How Google Earth [Really] Works" is to stick to information that has already been disclosed in various forms, especially in Google’s own patents, of which there are relatively few. Fewer software patents is better for the world. But in this case, more patents would mean we could talk more openly about the technology, which, btw, was one of the original goals of patents — a trade of limited monopoly rights in exchange for a real public benefit: disclosure. But I digress…

For the more technically inclined, you may want to read these patents directly. Be warned: lawyers and technologists sometimes emulsify to form a sort of linguistic mayonnaise, a soul-deadening substance known as Patent English, or Painglish for short. If you're brave, or masochistic, here you go:

1. Asynchronous Multilevel Texture Pipeline
2. Server for geospatially organized flat file data

There are also a few more loosely related Google patents. I don’t know why these are shouting, but perhaps because they’re very important to the field. I’ll hopefully get to these in more detail in future articles:

3. DIGITAL MAPPING SYSTEM
4. GENERATING AND SERVING TILES IN A DIGITAL MAPPING SYSTEM
5. DETERMINING ADVERTISEMENTS USING USER INTEREST INFORMATION AND MAP-BASED LOCATION INFORMATION
6. ENTITY DISPLAY PRIORITY IN A DISTRIBUTED GEOGRAPHIC INFORMATION SYSTEM (this one will be huge)

And there is this more informative technical paper from SGI (PDF) on hardware "clipmapping," which we’ll refer to later on. Michael Jones, btw, is one of the driving forces behind Google Earth, and as CTO, is still advancing the technology.

I’m going to stick closely to what’s been disclosed or is otherwise common technical knowledge. But I will hopefully explain it in a way that most humans can understand and maybe even appreciate. At least that’s my goal. You can let me know.

Big Caveat: the Google Earth code base has probably been rewritten several times since I was involved with Keyhole, and perhaps even after these patents were submitted. Suffice it to say, the latest implementations may have changed significantly. And even my explanations are going to be so broad (and potentially out-dated) that no one should use this article as the basis for anything except intellectual curiosity and understanding.

Also note: we’re going to proceed in reverse, strange as it may seem, from the instant the 3D Earth is drawn on your screen, and later trace back to the time the data is served. I believe this will help explain why things are done as they are and why some other approaches don’t work nearly as well.

Part 1, The Result: Drawing a 3D Virtual Globe

There are two principal differences between Google Maps and Earth that inform how things should ideally work under the hood. The first is the difference between fixed-view (often top-down) 2D & free-perspective 3D rendering. The second is between real-time and pre-rendered graphics. These two distinctions are fading away as the products improve and converge. But they highlight important differences, even today.

What both have in common is that they begin with traditional digital photography — lots of it — basically one giant high-resolution (or multi-resolution) picture of the Earth. How they differ is largely in how they render that data.

Consider: The Earth is approximately 40,000 km around the waist. Whoever says it's a small world is being cute. If you stored only one pixel of color data for every square kilometer of surface, a whole-earth image (flattened out in, say, a Mercator projection) would be about 40,000 pixels wide and roughly half as tall. That's far more than most 3D graphics hardware can handle today. We're talking about an image of 800 megapixels and 2.4 gigabytes at least. Many PCs today don't even have 2GB of main memory. And in terms of video RAM, needed to render, a typical PC has maybe 128MB, with a high-end gaming rig having upwards of 512MB.

And remember, this is just your basic run-of-the-mill one-kilometer-per-pixel whole-earth image. The smallest feature you could resolve with such an image is about 2 kilometers wide (thank you, Mr. Nyquist) — no buildings, rivers, roads, or people would be apparent. But for most major US cities, Google Earth deals in resolutions that can resolve objects as small as half a meter or less, at least four thousand times denser, or sixteen million times more storage than the above example.
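To make those numbers concrete, here's a quick back-of-the-envelope calculation in Python. The 3-bytes-per-pixel (plain RGB) figure and the helper name are my own illustrative assumptions, not anything about how the imagery is actually stored:

```python
# Back-of-the-envelope sizes for a single whole-earth image.
# Assumes 3 bytes (plain RGB) per pixel -- an illustrative assumption,
# not Google's actual storage format.
EARTH_CIRCUMFERENCE_M = 40_000_000  # roughly 40,000 km "around the waist"

def whole_earth_image(meters_per_pixel, bytes_per_pixel=3):
    width = EARTH_CIRCUMFERENCE_M / meters_per_pixel  # pixels around the equator
    height = width / 2                                # flattened out, roughly half as tall
    pixels = width * height
    return width, pixels, pixels * bytes_per_pixel

w, px, size = whole_earth_image(1000)   # 1 km per pixel
print(w, px / 1e6, size / 1e9)          # 40,000 px wide, ~800 megapixels, ~2.4 GB

w, px, size = whole_earth_image(0.25)   # 25 cm per pixel resolves ~0.5 m features
print(px / 1e12, size / 1e12)           # ~12,800 terapixels, tens of thousands of TB
                                        # if that detail existed everywhere
```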

We’re talking about images that would (and do) literally take many terabytes to store. There is no way that such a thing could ever be drawn on today’s PCs, especially not in real-time.

And yet it happens every time you run Google Earth.

Consider: In a true 3D virtual globe, you can arbitrarily tilt and rotate your view to pretty much look anywhere (except perhaps underground — and even that’s possible if we had the data). In all 3D globes, there exists some source data, typically, a really high-resolution image of the whole earth’s surface, or at least the parts for which the company bought data. That source data needs to be delivered to your monitor, mapped onto some virtual sphere or ideally onto small 3D surfaces (triangles, etc..) that mimic the real terrain, mountains, rivers and so on.

If you, as a software designer, decide not to allow your view of the Earth to ever tilt or rotate, then congrats, you've simplified the engineering problem and can take some time off. But then you don't have Google Earth.

Now, various schemes exist to allow one to "roam" part of this ridiculously large texture. Other mapping applications solve this in their own way, and often with significant limitations or visual artifacts. Most of them simply cut their huge Earth up into small regular tiles, perhaps arranged in a quadtree, and draw a certain number of those tiles on your screen at any given time, either in 2D (like Google Maps) or in 3D, like Microsoft’s Virtual Earth apparently does.

But the way Google Earth solved the problem was truly novel, and worthy of a software patent (and I am generally opposed to software patents). To explain it, we’ll have to build up a few core concepts. A background in digital signal theory and computer graphics never hurts, but I hope this will be easy enough that that won’t be necessary.

I’m not going to explain how 3D rendering works — that’s covered elsewhere. But I am going to focus on texture mapping and texture filtering in particular, because the details are vital to making this work. The progression from basic concepts to the more advanced texture filtering will also help you understand why things work this way, and just how amazing this technology really is. If you have the patience, here’s a very quick lesson in texture filtering.

The Basics

The problem of scaling, rotating and warping basic 2D images was solved a long time ago. The most common solution is called Bilinear Filtering. All that really means is that for each new (rotated, scaled, etc..) pixel you want to compute, you take the four "best" pixels from your source image and blend them together. It’s "bilinear" because it linearly blends two pixels at a time (along one axis), and then linearly blends those two results (along the other axis) for the final answer.

[A "linear blend," in case it’s not clear, is butt simple: take 40% of color A, and 60% of color B and add them together. The 40/60 split is variable, depending on how "important" each contributor is, as long as the total adds up to 100%.]
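Here's a minimal sketch of that four-pixel blend in Python/NumPy, just to make the arithmetic concrete. The function name and the clamping details are mine, for illustration only, not anyone's production filtering code:

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Blend the four source pixels nearest to the fractional position (x, y).
    img is an H x W x channels array; x runs along width, y along height."""
    h, w = img.shape[:2]
    x0, y0 = min(int(x), w - 2), min(int(y), h - 2)
    fx, fy = min(x - x0, 1.0), min(y - y0, 1.0)  # fractional position inside the pixel cell
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x0 + 1]          # linear blend along x (top pair)
    bot = (1 - fx) * img[y0 + 1, x0] + fx * img[y0 + 1, x0 + 1]  # linear blend along x (bottom pair)
    return (1 - fy) * top + fy * bot                             # then blend those two along y

# e.g., sampling a small 11x11 test image at a rotated, fractional coordinate:
img = np.random.rand(11, 11, 3)
print(bilinear_sample(img, 4.3, 7.8))
```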

That functionality is built into your 3D graphics hardware such that your computer can nowadays do literally billions of these calculations per second. Don’t ask me why your favorite paint program is so slow.

The problem being addressed can be visualized pretty easily — that’s what I love about computer graphics. It turns out, whenever we map some source pixels onto different (rotated, scaled, tilted, etc…) output pixels, visual information is lost.

The problem is called "aliasing" and it occurs because we digitally sampled the original image one way, at some given frequency (aka resolution), and now we’re re-sampling that digital data in some other way that doesn’t quite match up.

 

[color4.png] 1. A simple low-res (11×11 pixel) image is about to be rotated. (The grid lines merely delineate pixels.)

[colorrot.png] 2. Each pixel in the destination grid overlaps multiple pixels from the rotating original.

[colorrotzoom.png] 3. Close-up of one output pixel. Bilinear interpolation averages the "best" four source pixels for each new destination pixel (shown as a black border with white dots) based on their relative importance (ideally: fractional area).

[color-smallrot1.png] 4. After bilinear interpolation, the resulting rotated image has some clear (or rather blurry) issues.


Now, when we talk about output pixels and destinations, it doesn’t much matter if the destination is a bitmap in a paint program or the 3D application window that shows the Earth. Aliasing happens whenever the output pixels do not line up with the sampling interval (frequency, resolution) of the source image. And aliasing makes for poor visual results. Dealing with aliasing is about half of what texture mapping is all about. The rest is mostly memory management. And the constraints of both inform how Google Earth works.

The mission then is to minimize aliasing through cleverness and good design. The best way to do this is to get as close as possible to a 1:1 correspondence between input and output pixels, or at least to generate so many extra pixels that we can safely down-sample the output to minimize aliasing (also known as "anti-aliasing"). We often do both.

Consider: for resizing images, it only gets worse — each pixel in your destination image might correspond to hundreds of pixels of source imagery, or vice-versa. Bilinear interpolation, remember, will only pick the best four source pixels and ignore the rest. So it can therefore skip right over important pixels, like edges, shadows, or highlights. If some such pixel is picked for blending during one frame and skipped over subsequently, you’ll get an ugly "pixel-popping" or scintillation effect. I’m sure you’ve seen it in some video games. Now you know why.
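If you'd like to see that pixel-popping in numbers rather than pictures, here's a small, purely illustrative experiment (all names and sizes are arbitrary choices of mine): a one-pixel-wide bright line either survives or vanishes entirely depending on a sub-pixel shift, because plain 4-sample bilinear minification skips most of the source pixels.

```python
import numpy as np

def bilinear_sample_gray(img, x, y):
    """4-sample bilinear lookup on a grayscale image at fractional (x, y)."""
    h, w = img.shape
    x0, y0 = min(int(x), w - 2), min(int(y), h - 2)
    fx, fy = min(x - x0, 1.0), min(y - y0, 1.0)
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x0 + 1]
    bot = (1 - fx) * img[y0 + 1, x0] + fx * img[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bot

def shrink(img, out_size, offset=0.0):
    """Downsample using only 4 source samples per output pixel (no mip-maps, no averaging)."""
    step = img.shape[0] / out_size
    return np.array([[bilinear_sample_gray(img, i * step + offset, j * step + offset)
                      for i in range(out_size)] for j in range(out_size)])

src = np.zeros((64, 64))
src[:, 31] = 1.0   # a one-pixel-wide bright line: exactly the detail naive filtering can miss

print(shrink(src, 8, offset=0.0).max())  # 0.0 -- every sample misses the line; it vanishes
print(shrink(src, 8, offset=7.5).max())  # 0.5 -- a small sub-pixel shift and it pops back in
```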

Tilting images (or any 3D transformation) is even more problematic, because now we have elements of scaling and rotation, but also a great variation in pixel density across rendered surfaces. For example, in the "near" part of a scene, your nice high-res ground image might be scaled up such that the pixels look blurry. In the "far" part of the scene, your image might appear scintillated (as above) because simple 2×2 bilinear interpolation is necessarily skipping important visual details from time to time.

[blurry.png; copyright Microsoft Virtual Earth] An example of where a certain kind of texture filtering causes poor results. The text labels are hardly readable (why they're painted into the terrain image at all is another issue).

Better Filtering, Revealed

Most consumer 3D hardware already supports what's called "tri-linear" filtering. With tri-linear and a closely coupled technique called mip-mapping, the hardware computes and stores a series of lower resolution versions of your source image or texture map. Each mip level is automatically down-sampled by a factor of 2 from the one before it, repeatedly, until we reach a 1×1 pixel image whose color is the average of all source image pixels.

So, for example, if you provided the hardware with a nice 512×512 source image, it would compute and store 9 extra mip-levels for you (256, 128, 64, 32, 16, 8, 4, 2, and 1 pixel square). If you stacked those vertically, you might more easily visualize the "mip-stack" as an upside down pyramid, where each mip-level (each horizontal slice) is always 1/2 the width of the one above.
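A tiny sketch of that mip chain, plus the handy fact that the whole stack of extra levels costs only about a third more memory than the base image (this is generic mip-mapping arithmetic, not anything specific to Google Earth):

```python
def mip_chain(size):
    """Widths of all mip levels for a square texture, halving down to 1x1."""
    levels = [size]
    while size > 1:
        size //= 2
        levels.append(size)
    return levels

print(mip_chain(512))   # [512, 256, 128, 64, 32, 16, 8, 4, 2, 1] -- 9 levels below the base
# All the extra levels together cost only about 1/3 more memory than the base image:
print(sum(s * s for s in mip_chain(512)[1:]) / 512**2)   # ~0.333
```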

 
[Drawing of a mip-map pyramid (not to scale). The X depicts trilinear filtering sampling two mip-levels for an in-between pixel to reduce aliasing.]

During 3D rendering, mip-mapping and tri-linear filtering take each destination pixel, pick the two most appropriate mip-levels, essentially do a bi-linear blend on both, and then blend those two results again (linearly) for the final tri-linear answer.

So for example, say the next pixel would have no aliasing if only the source image had a resolution of 47.5 pixels across. The system has stored power of two mip maps (16, 32, 64…). So the hardware will cleverly use the 64×64 and 32×32 pixel versions closest to the desired sampling of 47.5, compute a bilinear (4-sample) result for each, and then take those two results and blend them a third time.

That's tri-linear filtering in a nutshell, and along with mip-mapping, it goes a great distance toward minimizing aliasing for many common cases of 3D transformations.
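Here's roughly how that level selection might look in code for the 47.5-pixel example above. The exact rounding and blend-weight conventions vary by hardware, so treat this as a sketch with names of my own choosing:

```python
import math

def trilinear_levels(desired_width, base_size=512):
    """Pick the two mip levels bracketing the desired sampling resolution, plus the
    blend fraction between them (the third 'linear' in tri-linear filtering).
    Assumes desired_width <= base_size; the conventions here are a sketch, not any
    particular GPU's exact rules."""
    level = math.log2(base_size / desired_width)   # fractional mip level below the base
    lo, hi = math.floor(level), math.ceil(level)
    return base_size >> lo, base_size >> hi, level - lo

finer, coarser, frac = trilinear_levels(47.5)
print(finer, coarser, frac)   # 64, 32, ~0.43: blend ~43% of the way toward the 32x32 level
```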

Remember: so far, we've been talking about nice, small images, like 512×512 pixels. Our whole-earth image will need to be millions of pixels across. So one might consider making a giant mip-map of our whole-earth image, at say one meter resolution. No problem, right? But you'll realize fairly soon that would require a mip-map pyramid 26 levels deep, where the highest resolution mip-level is some 67 million pixels across. That simply won't fit on any 3D video card on the market, at least not in this decade.
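The arithmetic behind that claim, for the curious (one meter per pixel around a 40,000 km equator, rounded up to the next power of two):

```python
import math

width_px = 40_000_000                      # ~40,000 km of equator at 1 meter per pixel
levels = math.ceil(math.log2(width_px))    # 26 -- so the pyramid is 26 levels deep
print(levels, 2 ** levels)                 # 26, and 67,108,864 pixels across at the finest level
```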

I’m guessing Microsoft’s Virtual Earth gets around this limit by cutting their giant earth texture into many smaller distinct tiles of, say, 256 pixels square, where each gets mip-mapped individually. That approach would work to an extent, but it would be relatively slow and give some of the visual artifacts, like the blurring we see above, and a popping in and out of large square areas as you zoom in and out.

There’s one last concept about mip-maps to understand before we move on to the meat of the issue. Imagine for a moment that the pixels in the mip-map pyramid are actually color-coded as I’ve indicated above, with an entire layer colored red, another yellow, etc.. Drawing this on a tilted plane (like the Earth’s ground "plane") would then seem to "slice through" the pyramid at an interesting angle, using only those parts of the pyramid that are needed for this view.

It’s this characteristic of mip-mapping that allows Google Earth to exist, as we’ll see in a minute.

[tour_driving4.jpg] A typical tilted Google Earth image (copyright & courtesy Google).

[tour_driving3.jpg] The same view, using color to show which mip levels inform which pixels.

The first image shows a normal 3D scene from Google Earth; the second is a rough diagram showing where in the mip-stack a 3D hardware system might find the best source pixels, if they were so colorized.

The nearer area gets filled from the highest-resolution mip-level (red), dropping off to lower and lower resolutions as we get farther from the virtual point of view. This helps avoid the scintillation and other aliasing problems we talked about earlier, and looks quite nice. We get as close as possible to a 1:1 correspondence between source and destination, pixel for pixel, so aliasing is minimized.

Better still, 3D graphics hardware with tri-linear filtering has been further improved with something called anisotropic filtering (a simple preference option in Google Earth), which uses essentially the same core idea as the previous examples, but with non-square filters beyond the basic 2×2. This is very important for visual quality, because even with fancy mip-mapping, if you tilt a textured polygon to a very oblique angle, the hardware must choose a low-resolution mip-level to avoid scintillation on the narrow axis. And that means the whole polygon is sampled at too low a resolution, when it's only one direction that needed to dip down to the low-res stuff. Suffice it to say, if your hardware supports anisotropic filtering, turn it on for best results. It's worth every penny.

Now, to the meat of the issue

We still have to solve the problem of how to mip-map a texture with millions of pixels in either dimension. Universal Texture (in the Google Earth patent) solves the problem while still providing high quality texture filtering. It creates one giant multi-terabyte whole-earth virtual-texture in an extremely clever way. I can say that since I didn’t actually invent it. Chris Tanner figured out a way to do on your PC what had only ever been done on expensive graphics supercomputers with custom circuitry, called Clip Mapping (see SGI’s pdf paper, also by Chris, Michael, et al., for a lot more depth on the original hardware implementation). That technology is essentially what made Google Earth possible. And my very first job on this project was making that work over an internet connection, way back when.

So how does it actually work?

Well, instead of loading and drawing that giant whole-earth texture all at once — which is impossible on most current hardware — and instead of chopping it up into millions of tiles and thereby losing the better filtering and efficiency we want, recall from just above that we typically only ever use a narrow slice or column of our full mip-map pyramid at any given time. The angle and height of this virtual column changes quite a bit depending on our current 3D perspective. And this usage pattern is fairly straightforward for a clever algorithm to compute or infer, knowing where you are and what the application is trying to draw.

A Universal Texture is a mip-map plus a software-emulated clip stack, meaning it can mimic a mip-map with many more levels and far greater ultimate resolution than can fit in any real hardware.


[Diagram: the clip stack atop the base mip-map pyramid.] Note: though this diagram doesn't depict it as precisely as the paper, the clip stack's "angle" shifts around to best keep the column centered.

So this clever algorithm figures out which sections of the larger virtual texture it needs at any given time and pages only those from system memory to your graphics card’s dedicated texture memory, where it can be drawn very efficiently, even in real-time.

The main modification to basic mip-mapping, from a conceptual point of view, is that the upside down pyramid is no longer just a pyramid, but is now much, much taller, containing a clipped stack of textures, called, oddly enough, a "clip stack," perhaps 16 to 30+ levels high. Conceptually, it's as if you had a giant mip-map pyramid that's 16-30 levels deep and millions to billions of pixels wide, but you clipped off the sides — i.e., the parts you don't need right now.

Imagine the Washington Monument, upside down, and you'll get the idea. In fact, imagine that tower leaning this way or that, like the one in Pisa, and you'll be even closer. The tower leans in such a way that the pixels inside the tower are what you need for rendering right now. The rest is ignored.

Each clip-level is still twice the resolution of the one "below" it, like all mip-maps, and nice quality filtering still works as before. But since the clip stack is limited to a fixed but roaming footprint, say 512×512 pixels (another preference in Google Earth), each clip-level is both twice the effective resolution and half the coverage area of the previous. That's exactly what we want. We get all the benefits of a giant mip-map, with only the parts relevant to any given view.
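Here's a toy sketch of that "fixed but roaming footprint" idea: each clip level keeps only a same-sized window of pixels, centered as near as possible to the current focal point, so every finer level covers half the ground extent of the one before it. The function and its conventions are illustrative only, not Google Earth's actual code:

```python
def resident_region(focus_x, focus_y, level, footprint=512):
    """Which part of a (virtual) clip level actually lives in texture memory: a
    fixed-size window, centered as closely as possible on the focal point.
    focus_x/focus_y are in [0, 1) whole-texture coordinates; the result is in this
    level's own pixel grid. Purely illustrative, not Google Earth's actual code."""
    level_width = footprint * (2 ** level)   # each finer clip level doubles the virtual size
    cx, cy = focus_x * level_width, focus_y * level_width
    half = footprint / 2
    x0 = min(max(cx - half, 0), level_width - footprint)
    y0 = min(max(cy - half, 0), level_width - footprint)
    return (x0, y0, x0 + footprint, y0 + footprint)

# The same 512x512 window at three successive clip levels: identical pixel counts,
# each covering half the ground extent of the level before it.
for lvl in (1, 2, 3):
    print(lvl, resident_region(0.51, 0.27, lvl))
```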

Put another way, Google Earth cleverly and progressively loads high-res information for what’s at the focal "center" of your view (the red part above), and resolution drops off by powers of two from there. As you tilt and fly and watch the land run towards the horizon, Universal Texture is optimally sending only the best and most useful levels of detail to the hardware at any given time. What isn’t needed, isn’t even touched. That’s one thing that makes it ultra-efficient.

It’s also very memory-efficient. The total texture memory for an earth-sized texture is now (assuming this 512 wide base mip-map, and say 20 extra clip-levels of data) only about 17 megabytes, not the dozens to hundreds of terabytes we were threatened with before. It’s actually doable, and worked in 1999 on 3D hardware that had only 32 MB or less. Other techniques are only now becoming possible with bigger and bigger 3D cards.

In fact, with only 20 clip-levels (plus 9 mip levels for the base pyramid), we see that 2^29 yields a virtual texture capable of up to 536 million pixels in either dimension. Multiplying that by roughly 1/2 vertically gives a virtual image well over a hundred petapixels in area, or enough excess capacity to represent features as small as 0.15 meters (about 5 inches), wherever the data is available. And that's not the actual limit. I simply picked 20 clip levels as a reasonable number. And you thought the race for more megapixels on digital cameras was challenging. Multiply that by a million and you're in the planetary ballpark.
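Those memory and resolution figures are easy to reproduce. Here's a small sketch that arrives at roughly the same ~17 MB and ~15 cm numbers; the 512-pixel footprint and 3-byte RGB pixels are assumptions for illustration, not known implementation details:

```python
def universal_texture_budget(clip_levels=20, footprint=512, bytes_per_pixel=3):
    """Rough memory and resolution figures for a clip-mapped whole-earth virtual texture.
    The 512-pixel footprint and 3-byte RGB pixels are assumptions for illustration."""
    base_levels = footprint.bit_length() - 1                 # 9 for a 512-wide base pyramid
    base_pixels = sum((footprint >> i) ** 2 for i in range(base_levels + 1))  # 512^2 + ... + 1
    clip_pixels = clip_levels * footprint * footprint        # each clip level is a fixed window
    memory_mb = (base_pixels + clip_pixels) * bytes_per_pixel / 1e6
    virtual_width = 2 ** (base_levels + clip_levels)         # 2**29 ~= 536 million pixels
    meters_per_pixel = 40_000_000 / virtual_width            # ~0.075 m/px -> ~0.15 m features
    return memory_mb, virtual_width, meters_per_pixel

print(universal_texture_budget())   # roughly (16.8 MB, 536870912, 0.0745)
```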

Fortunately, for now, Google only really has to store a few dozen terapixels of imagery. The other beauty of the system is that the highest levels of resolution need not exist everywhere for this to work. Wherever the resolution is more limited, wherever there are gaps, missing data, etc.. the system only draws what it has. If there is higher resolution data available, it is fetched and drawn too. If not, the system uses the next lower resolution version of that data (see mip-mapping above) rather than drawing a blank. That’s exactly why you can zoom into some areas and see only a big blur, where other areas are nice and crisp. It’s all about data availability, not any hard limit on the 3D rendering. If the data were available, you could see centimeter resolution in the middle of the ocean.

The key then to making this all work is that, as you roam around the 3D Earth, the system can efficiently page new texture data from your local disk cache and system memory into your graphics texture memory. (We’ll cover some of how stuff gets into your local cache next time). You’ve literally been watching that texture uploading happen without necessarily realizing it. Hopefully, now you will appreciate all the hard work that went into making this all work so smoothly — like feeding an entire planet piecewise through a straw.

Finally, there's one other item of interest before we move on. The reason this patent emphasizes asynchronous behavior is that these texture bits take some small but cumulative time to upload to your 3D hardware continuously, and that's time taken away from drawing 3D images in a smooth, jitter-free fashion and from handling user input responsively — not to mention that the hardware is typically busy with its own demanding schedule.

To achieve a steady 60 frames per second on most hardware, the texture uploading is divided into small, thin slices that very quickly update graphics video memory with the source data for whatever area you’re viewing, hopefully just before you need it, but at worst, just after. What’s really clever is that the system needs only upload the smallest parts of these textures that are needed and it does it without making anyone wait. That means rendering can be smooth and the user interface can be as fluid as possible. Without this asynchronicity, forget about those nice parabolic arcs from coast to coast.
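Conceptually, the frame loop might look something like the toy sketch below: a small per-frame budget of sub-rectangle uploads, with rendering never blocked on the rest of the queue. Everything here (names, budget, the stub upload call) is illustrative, not the actual Google Earth code:

```python
import collections

# A toy sketch of budgeted, asynchronous texture updates: each frame, upload only a few
# small sub-rectangles ("thin slices") of pending clip-level data, so rendering never
# stalls waiting for a full texture transfer. All names and numbers here are illustrative.
UPLOAD_BUDGET_PER_FRAME = 4            # max sub-rectangle uploads per rendered frame
pending_uploads = collections.deque()  # filled elsewhere by the streaming/cache system

def upload_subrect(region):
    """Stand-in for the real call that copies one small tile of pixels into video
    memory (a glTexSubImage2D-style update, in OpenGL terms)."""
    pass

def render_frame(draw_scene):
    for _ in range(min(UPLOAD_BUDGET_PER_FRAME, len(pending_uploads))):
        upload_subrect(pending_uploads.popleft())   # slice off a little upload work...
    draw_scene()                                    # ...then draw without waiting for the rest

# Each frame: render_frame(draw_globe)
```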

Now, other virtual globes can also virtualize the whole-earth texture, perhaps by cutting it into tiles, and even use multiple power-of-two resolutions like GE does. But without the Universal Texturing component or something better, they'll either be limited to 2D top-down rendering, or they'll do 3D rendering with unsatisfying results: blurring, scintillation, and not nearly as good performance for streaming the data from the cache into texture memory for rendering.

And that’s probably more than you ever wanted to know about how the whole Earth is drawn on your screen each frame.

93 thoughts on "How Google Earth [Really] Works"

  1. I just want to thank you for all the fascinating information you’re passing along. In particular, I thought your article about combining Google Earth and Second Life a while back was superb. Yours is one of the few blogs I’ll be checking regularly. Have you ever inquired about doing an article for How Stuff Works? If you have any more time between your job and this blog, that is. I have only one tiny complaint; that blue ball floating in the background on this new site bothers me a little as I read. Otherwise it’s all fantastic. Keep up the great work.

  2. Thanks. We’ll see how this series goes and if it helps open any doors. I might like to write a non-fiction book actually.

    On the blue ball, I kind of like it, but would prefer it not ever overlap the article text. I’ll try leaving it to scroll with the page and see how I like it.

  4. “2. Each pixel in the destination grid touches up to 4 pixels from the rotating original.”

    I may have missed something, but if pixels are square, each pixel in the destination grid touches up to 6 pixels from the original.

  5. Thanks, Sam. It gets complicated. But I changed the text to be more accurate. Any number of source pixels might overlap each destination pixel depending on the transformation, not 4 or 6 or 8. But only 4 are used in bilinear filtering. Often, these are the 4 source pixels directly hitting the 4 corners of the destination pixel.

  7. Great article. It's amazing what can be done with algorithms like clipmapping. There is a project out there which has the goal of generating a procedural universe. You can fly from deep space directly onto a planet's surface, and the whole terrain and textures are generated on the fly on your CPU/GPU. I think they use the same algorithm for rendering their planets.

    The project's web site: http://www.fl-tw.com/Infinity/
    Here's a nice video: http://video.google.com/videoplay?docid=8426566575107987989

  13. lol, thought it was going to be explained in plain form so folks would understand; very informative, but worse than buying an HD TV…

  16. Great article Avi. I’m curious whether you plan to discuss the choice of rendering the Earth on a perfect sphere. I realize the data itself is accurately mapped to the WGS84 reference, so why am I raising this issue? When displaying aerospace content at high altitudes (trajectories, line-of-sight, comm links, sensor intersections) I discovered the geometry across large distances did not account for Earth’s flattening. This concern applies to an infinitesimal fraction of virtual globe users but I thought it would be an interesting note whether this choice was ever debated (as opposed to an oblate spheroid). I discuss this issue in detail in a white paper at: http://www.agi.com/downloads/support/productSupport/literature/pdfs/whitePapers/2007-05-24_SpaceVehiclesinVirtualGlobes.pdf

  17. It’s an interesting issue, Benno. Originally, the UI code couldn’t handle anything but a sphere, for purposes of grabbing and spinning the Earth. But I’d be willing to bet that using the more accurate shape wouldn’t be too hard nowadays, at least when you have "terrain" turned on. Putting a continent at 500m is as easy as putting it at 5, except for issues of subdivision (which I probably will cover, but tangentially to terrain). I’ll see if I can somehow work it in to the next article, but no promises.

  19. Patents do not prevent you from talking about them. Part of the "contract" for a patent is that you are making this information publicly available, in exchange for a limited monopoly. This is supposed to be better for society than a company simply keeping its technology secret, which then dies with the people or the company.

    That’s why patent applications are publicly available.

    If you have not entered into a non-disclosure agreement with Google or anyone else, and you are not obtaining your information from illegal reverse engineering (assuming that EULAs can do that which is a whole different discussion), you are perfectly safe and shouldn’t feel like you need to hold back.

    [editorial: Yes, but who said the patents prevented me from talking? It’s just the opposite….   I emailed Jeremy before approving his comment, but didn’t get a response… ]

  21. You lost me when you started talking about clip stacks.

    We only have a narrow slice of the mip-map pyramid in use at any given time; sure, fair enough. Presumably that slice could be in any position, slicing at any angle, depending on where the camera is.

    How exactly can the clip stack give you this flexibility? What does each of these 20 extra levels correspond to?

  22. I want to know: using this technology, can Google Earth draw more than one texture, like World Wind does? World Wind's way is not good and the rendered result is blurry, but it can draw many image sources, so users can add their own big images very easily via a simple web application.

  23. Laurie, that’s good feedback. Check back in 20 minutes. I’ll add a sentence or two to that section to try to make it clearer. Let me know if that doesn’t help explain it.

    Mars, yes. GE added something called "overlays," which resemble what you’re talking about. They can go anywhere and be any size. Each overlay, I’m guessing, would be mip-mapped separately. But there’s something called "Super Overlays" that’s worth looking into.

    BTW, for any diggers, this is now back on digg, with the URL: http://digg.com/software/How_Google_Earth_Really_Works_2 If you want to digg this too.

  26. GOOGLE EARTH MAPPING IS NOT UP TO DATE. I FOUND MY HOME, THE PICTURE IS FROM THE 1980’S, BECAUSE CERTAIN HOMES HAVENT BEEN THERE IN OVER 20 YEARS AND NEW HOMES ARE’NT BEING SHOWN. SO THE GOOGLE EARTH MAP SHOWS AS FAR BACK AS 1983.

  27. Thanks, Avi. It is a great article, but there are still lots of things you did not cover, like how does Google Earth handle the terrain data? How does the cache system work? I am curious….

    Thanks

  30. I had posted the following question to the Keyhole group with no success… I'm not strong enough in math to feel sure about what the correct approach could be…
    Q:
    What's the math transformation used to map the 3D globe lat/lon into the (-1,1)(1,-1) viewport rect?
    In other words, for example, I want to move the focus point, drawing a circle on the terrain, in terms of GetPointOnTerrainFromScreenCoords…

    Can you give us some hints about it? I don't expect a detailed math formula explanation; I'm just trying to make sure I use the correct math transform Google Earth uses.
    Thanks

  31. Luca, it’s probably too complicated to answer in a comment.

    Generally speaking, what you would want to do is take your point in screen-space (mouse etc..) and convert that point to something on the virtual image plane — a plane in the current 3D viewing coordinates that, if drawn, would map to the boundaries of your window in the -1,1 range.

    Once you have that point, you shoot a ray from the eye point through that point and intersect it with the world, see what it hits first. There's no 1:1 mathematical relationship, because you will often hit more than one object. And the result will depend on the tilt of the earth, the height of terrain, and a lot of other factors. To give the best answer, the code would need to test that ray against a number of on-screen triangles, even buildings if you wanted to be totally correct.

    That said, I don’t know what GetPointOnTerrain… does under the hood or if it does the ideal thing. But what you really want is a function to cast a ray into the scene and see what it hits. Forming that ray should be straightforward, if you know the viewing parameters.

  32. Hi Folks,

    thanks a lot for Avi’s effort on clarifying these items. It did help me a lot understanding how the Google Earth engine works.

    a free GE equivalent earth viewing experience is under construction now
    with the TerraLib project (www.terralib.org) and should be available within
    a couple of weeks for download.

    thanks a lot,

    Mário.

  36. It's a very intelligent way of drawing; I mean the passage borrowed from the document above:
    “So how does it actually work?

    Well, instead of loading and drawing that giant whole-earth texture all at once — which is impossible on most current hardware — and instead of chopping it up into millions of tiles and thereby losing the better filtering and efficiency we want, recall from just above that we typically only ever use a narrow slice or column of our full mip-map pyramid at any given time. The angle and height of this virtual column changes quite a bit depending on our current 3D perspective. And this usage pattern is fairly straightforward for a clever algorithm to compute or infer, knowing where you are and what the application is trying to draw.”

    Get on Same Way.. Br Mika.

  46. Although I didn’t understand much of your article, you simplified the concepts enough for me to grasp the basics. I am in awe of what great minds can accomplish. Thanks!

  47. Interesting, but could anybody translate it to Spanish?
    Thanks!

    Fernando from Argentina.

  49. A good service has been offered to humanity.

    A wonderful invention, letting people see places where they have never seen people living humanely.

    Thanks for everything.

    Sami AKGÜL
    TÜRKİYE CUMHURİYETİ – Yıldırım /BURSA

  50. It is really fascinating to know these details… I would say this is a really valuable resource for the folks dealing with GIS, as well as those who want to understand how Google's Google Earth really functions.

    Regards SBL- GIS Solutions

  56. I have a question: are the pictures of the Earth's surface that are used to generate the digital maps always retained, and are they
    time and date stamped to show when they were taken, not just where they were taken?

    regards bazza

  57. Barry, I imagine so. There are even some cool information layers you can turn on to show you some of that capture data.

  58. That's a good question, Vernon.

    Precision should be under 1mm when fully zoomed in. (The difference between accuracy and precision is that precision is how many decimal places you can discern, where accuracy is how correct your answer is. Saying PI = 3 is more accurate than saying PI = 4.13159, though the latter is more precise.)

    The accuracy will depend in any given area on how well registered (aligned) the imagery, terrain, and earth oblation are. The precision will depend on your current level of zoom, I imagine, since you’d be normally placing it on whole pixel boundaries.

  59. Hello everyone!

    with the address

    http://mavoe.de/GE_Mat.htm#GE_wirklich

    (same as in the heading) I published a German Translation of this article.

    In my translation I changed the following Links:
    Patent 1: http://www.patentstorm.us/patents/6618053.html
    Patent 4: http://www.wipo.int/pctdb/en/wo.jsp?IA=US2005009538&DISPLAY=STATUS
    Patent 5: http://www.wipo.int/pctdb/en/wo.jsp?wo=2007070358
    Dismissal: http://www.realityprime.com/news/%20skyline-patent-infringement-suit-agains-google-earth-dismissed/
    Also I changed all Links to the English Wikipedia to equivalents in the German Wikipedia.

    In the past I led GE workshops twice. At the moment I have no continuation planned for this activity.

    With GE-happy greetings

    mavoe

  62. Great article! Do you know which programming language is used? Are they able to use any frameworks or was all this image handling stuff done on their own?

  64. I have a few questions. First, I realised the satellite feed on most Google Earth programs is not up to date. Why is that? Secondly, I can't get the newer or professional Google Earth without a licence. Is the professional version only for crime-solving investigations or companies? Next, I would like to address that the Google Earth software is a great tool for research and such, but couldn't stalkers keep track of where people live on there? (I think this might be why they don't keep it up to date.) That's a lot.

  67. This article is way too verbose. I can tell from it that you are a true nerd. Just get to the facts without talking so much. I have a PhD in CS and I can explain something to the point. You talk too much and BS too much.

  68. This is really very useful and prolific article for me, because i was not aware about the google earth. Thanks for sharing some astounding information.

  69. This article is so stupid.
    Google Earth also uses a quadtree to display terrain.
    Mipmapping is very ordinary tech.

  70. Would you like to explain some more details about the data organization of Google Earth and the flow chart of its spatial data index?
    Thank you!

  71. Question:

    Is this clipmapping similar to the Megatexture tech that id software announced a while back?

    It appears pretty similar, and knowing those folks I wouldn’t be surprised if the underlying tech is derived from this.

    • Hi Chris,

      Yes, they are similar. The main difference from my point of view is that clipmapping (universal texture) focuses on a single very large virtual texture that has a single point of highest resolution and tapers off appropriate to a viewing frustum. MegaTexture has some nice additions that allow it to handle multiple spatially-proximate textures that can be applied to terrain + buildings, etc.. and is more generalizable to games and rich 3D environments. Doing that is something we talked about at Keyhole when we discussed adding 3D building facades, but we didn’t go very far in that direction. So I’d describe MegaTexture as a great generalization of similar foundational ideas.

      To Carmack's credit and the world's benefit, he doesn't try to patent or protect any of his technology. However, to be clear, Chris Tanner's early software clipmapping and SGI's earlier hardware versions definitely preceded MegaTexture as far as I know. Carmack is actually friends with some of the same ex-SGI folks, which also relates to his preference for OpenGL.

  72. im eager to know the concept behind the google earth . coz im going to make a project on basis of this. my project includes a machine which captures the live vision of people on the earth, and detects our person .this is named as smart cam cob. will be used only by the crime investigators

  73. If i interpreted this correctly – I had no idea that Google used the Keyhole Satellite for the purposes of Google Earth. As I recall that was the military’s super satellite. If this is the case then there must be a newer more amazing device.

    • I don’t think the KH series of satellites was still around by the time Keyhole started, but there are commercial companies selling aerial data using much newer versions of the same principles.

  74. Thanks for letting us know about how Google earth works.Now I have a good idea about it.

  75. A lot of thanks for providing this valuable information.Never gave it a thought as to how very high resolution images are available on desktops with ease.

    It was mind-boggling and at times I was in awe.Moreover,there is tinge of humor in your writing which makes the experience even better.

  76. This is all very informative, and I’m sure in the fullness of time I’ll have the IT-speak to understand most of it: my question is very simple, if someone might break this down for me. Every time I use google on my (rather slw) connection, images “download” and get clearer, so the next time I dont need to wait, they are already clear. When I lose my laptop, or upgrade, as I have done once or twice, my images are “lost” and I must start the process again. Are these images stored on my my places.kmz file? Is that what I need to back up? It seems rather small, only 7kb and I have all of Nairobi on my PC! Help pleez!
