The Media Lab’s video holograms appear to float above a piece of frosted glass. An electronic device behind the glass, called a light modulator, reproduces interference patterns that encode information about the pictured object. Laser light striking the modulator scatters just as it would if it were reflecting off the object at different angles.
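To make the interference-pattern idea concrete, here is a minimal sketch of the fringe pattern for the simplest possible "object": a single point source behind the glass. This is my own illustration, not the Media Lab's actual computation, and the wavelength and depth are assumed values.

```python
import math

WAVELENGTH_M = 633e-9   # red HeNe laser line (assumed)
DEPTH_M = 0.1           # point source 10 cm behind the modulator (assumed)

def fringe_intensity(r_m):
    """Interference of a plane reference beam with light from the point.

    At radius r from the axis, the extra path length from the point source
    is approximately r^2 / (2 * depth) (the Fresnel approximation), giving
    concentric rings -- a Fresnel zone plate. Laser light striking this
    pattern diffracts as if it were diverging from the original point.
    """
    phase = math.pi * r_m ** 2 / (WAVELENGTH_M * DEPTH_M)
    return math.cos(phase) ** 2  # normalized intensity, 0..1

# The rings get finer as the radius grows; that fine structure is why
# holographic modulators need such enormous pixel counts.
for r_mm in (0.0, 0.5, 1.0):
    print(round(fringe_intensity(r_mm * 1e-3), 3))
```

Each point of a real object contributes its own ring pattern; the modulator reproduces their sum.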
Nothing can beat true holography for responsiveness, accommodation, and presence. Systems that track your head to draw two, or any small number of, "correct" views will always have some latency: it takes time to read the sensors, and the rendering pipeline takes at least a frame or two more to act on the new data. A sixtieth of a second may not sound like much, but it's noticeable as visual lag.
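The latency arithmetic is easy to sketch. Assuming, for illustration, a 2 ms sensor read and a two-frame pipeline at 60 Hz (hypothetical figures, not measurements of any particular system):

```python
# Rough motion-to-photon latency estimate for a head-tracked display.
# All figures are illustrative assumptions, not measured values.

def motion_to_photon_ms(sensor_read_ms, pipeline_frames, refresh_hz):
    """Sensor read time plus the rendering pipeline's frame delay."""
    frame_ms = 1000.0 / refresh_hz
    return sensor_read_ms + pipeline_frames * frame_ms

# Two frames of pipeline delay at 60 Hz is already over 33 ms, before
# the sensor read is even counted.
latency = motion_to_photon_ms(sensor_read_ms=2.0, pipeline_frames=2, refresh_hz=60)
print(round(latency, 1))  # 35.3
```

A true hologram sidesteps this entirely: every viewing angle is physically present in the light field at once, so there is nothing to re-render when your head moves.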
As for accommodation, I'm referring to the whole set of adjustments your eyes make to natural light coming in: they tilt, they converge or diverge, they focus on near or far imagery. Without measuring each person's facial geometry, there's no way to perfectly adjust a stereo pair of images. And the fundamental problem with non-holographic systems (except perhaps volumetric ones) is that no matter where the virtual object seems to be, the pixels are generated on a 2D screen, so your eyes get confused about where to focus.
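The focus/convergence mismatch can be put in rough numbers. Assuming a typical 63 mm interpupillary distance and a hypothetical screen whose focal plane sits at 2 m (both illustrative figures):

```python
import math

IPD_M = 0.063  # typical interpupillary distance, ~63 mm (assumed)

def vergence_deg(distance_m):
    """Angle between the two eyes' lines of sight for a target at distance_m."""
    return math.degrees(2 * math.atan(IPD_M / 2 / distance_m))

def focal_demand_diopters(distance_m):
    """Accommodation the eye's lens must supply, in diopters (1/m)."""
    return 1.0 / distance_m

# A virtual object placed at 0.5 m on a screen focused at 2 m: the eyes
# converge for 0.5 m but must keep their lenses focused at 2 m.
print(round(vergence_deg(0.5), 2))  # 7.21 degrees of convergence
mismatch = focal_demand_diopters(0.5) - focal_demand_diopters(2.0)
print(round(mismatch, 2))  # 1.5 diopters of conflict
```

In natural viewing (and in a true hologram) those two cues always agree; on a fixed-focus stereo display they disagree, and that disagreement is what confuses the eye.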
As for presence, you can read this for some of the issues. Peripheral vision may be more important than accommodation and less important than latency, but every little bit helps.
What remains to be seen is how the consumer version of this system will handle these issues. Will it be more like typical stereoscopic rendering, drawing just two, four, or a few dozen different views and subject to the constraints above? Or will the ideal images for a quality experience be sitting there, ready before you move your eyes, for a nearly seamless result?
I can’t wait to find out.