Stefan at OgleEarth has [apart from stirring up some minor controversy over just how many Google Earth users there are -- my take: 400M "unique installations" is an amazing metric, but the numbers I'd love to know (and probably never will) are min, max, and mean concurrent users...] done a great job rounding up multi-touch interface tech over the past few months.
The latest offering is a spherical multi-touch interface, as shown:
Multi-touch is almost certainly going to be the next big thing in PC (incl. Mac) user interfaces. One pointing device is good. Two gives you an extra degree of freedom. But ten is only slightly better than two, for most applications, although the collaborative aspect of 20 or 40 fingers on one video wall is not to be dismissed lightly. My arms get tired from just watching those Jeff Han videos.
The thing about this latest spherical display Stefan covers is: what about zooming in? I’ve always thought virtual globes should act like the real ones we played with as kids: put your fingers on it to spin, and so on. But what happens when you’re zoomed all the way to ground level? Does the plastic sphere flatten out to match the terrain’s actual curvature? No. Of course not. Your screen full of imagery still wraps around the sphere. The applications are therefore somewhat limited, as they are for even the best multi-touch walls and surfaces.
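To make the spin-the-globe idea concrete, here’s a minimal sketch (my own illustration, not taken from any shipping globe) of the classic arcball technique: a finger drag on the screen is projected onto a virtual sphere, and the two projected points give an axis and angle to rotate the globe by.

```python
import math

def project_to_sphere(x, y, radius=1.0):
    # Map a 2D touch point (screen coords normalized to [-1, 1]) onto a
    # virtual sphere, falling back to a hyperbolic sheet near the edges
    # so the mapping stays smooth when the finger leaves the sphere.
    d = math.hypot(x, y)
    if d < radius * 0.70710678:                 # inside the sphere
        z = math.sqrt(radius * radius - d * d)
    else:                                       # outside: use the hyperbola
        z = (radius * radius / 2.0) / d
    return (x, y, z)

def drag_to_rotation(p0, p1):
    # Axis-angle rotation that carries the projection of touch point p0
    # to the projection of touch point p1.
    a = project_to_sphere(*p0)
    b = project_to_sphere(*p1)
    axis = (a[1] * b[2] - a[2] * b[1],          # cross product a x b
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
    na = math.sqrt(sum(c * c for c in a))
    nb = math.sqrt(sum(c * c for c in b))
    dot = sum(x * y for x, y in zip(a, b)) / (na * nb)
    angle = math.acos(max(-1.0, min(1.0, dot)))
    return axis, angle

# Dragging a finger to the right spins the globe about its vertical axis.
axis, angle = drag_to_rotation((0.0, 0.0), (0.3, 0.0))
print(axis, angle)
```

A real globe app would accumulate these rotations frame to frame (usually as quaternions, to avoid drift), and use pinch distance between two fingers to drive zoom. Which is exactly where the physical sphere runs into trouble: rotation maps naturally onto it, but zoom has no physical analogue.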
So, looking beyond multi-touch, the key problem someone needs to solve is projective tactile feedback.
Why? Because the ultimate "multi-touch" display is one you never actually touch (call it "zero-touch"): a holographic, auto-stereoscopic, or pick-your-favorite-free-space-rendering display that lets you put your hands inside the 3D graphics, ideally touching the objects as if they were at least partly physical.
The display part is almost solved, even today. Accurately tracking your fingers in a 3D volume is within 2 years of being ready. But how then do we let you feel the objects you touch? That’s the kicker.
Red Dwarf (the great British SF comedy) solved the problem with what they called "hard light." Star Trek solved it with some sort of programmable holodeck matter, from what I can tell. What the actual solution is, I don’t know.
But I’m going to go with a combination of muscular and peripheral sensory neural stimulation as the best practical bet.
Why create any physical effect at all when you can trick your body into feeling something that’s not really there? The sensory part makes you feel as if you’re touching something virtual. The muscular element makes it so that you can’t move your arm through a virtual object, whether you feel it or not.
I’m guessing that this can be solved in the next 3-10 years without requiring surgery. The muscular part is already being worked on to restore motor control to paraplegics. That’s a big deal with many offshoots for the rest of us. The sensory-stimulation aspect is newer, but will come about from game tech anyway, at least for sensing different kinds of surfaces and obstacles for your avatar or tank. Today, that’s limited to cheap vibratory devices, but it works remarkably well.
The future looks bright, and quite solid, from where I sit.
P.S. I’m going to stop adding these little footnotes after a few weeks, but keep in mind: this post does not reflect anything I know of or expect to personally work on at Microsoft, and the usual disclaimer applies (see About). In fact, I only just went through orientation and don’t even have my blue badge yet. I did, however, get to play with Surface™ in the lounge when I was waiting for interviews last month.