In previous posts like this and this from back in the depths of 2008, I made some aggressive 3-5 year predictions about available technology. Here are some mid-2011 updates on what works so far and what doesn’t.
Facial Expressions captured (3-5 years from 2008 = 2011-2013): True. Look at Avatar Kinect, coming soon. While limiting the rendering of those facial expressions to Rankin-Bass-style Xbox avatars was not my personal preference, the result still works well. The team has done an impressive job. However, it's not as portable as the mobile AR solution I was hoping for in my original article, so a few points off for me.
Laser Retinal Scanners (2-3 years from 2008 = 2010-2011). False. Well, the tech apparently does exist, but it clearly hasn't been commercialized to any degree, except in the form of pico projectors. Dim. Also, I'm now not at all sure this tech is going to win in the end. See below.
3D Rendering is Photo-Realistic (2011). True. High-end video game makers have been focusing on things other than realism in recent years. The result of their 3D engine work is certainly not indistinguishable from reality, but it works well enough that you don't notice. The physics of light and materials works really well. It tends to be things like cracks, dust, and dirt that graphics programmers and designers overlook. The worlds tend to be too perfect: hyperrealistic rather than photorealistic.
Virtual Humans pass the Uncanny Valley (2013). Still possible, but doubtful. I've seen some examples of CGI still shots that could not only pass the uncanny valley but easily fool a human into thinking they were real. I have not seen any 3D animation pass the uncanny valley, though, except where it uses motion capture of a real human. And even then it's close, but typically doesn't capture enough information to pass: the subtlety, the deformations of fatty layers of skin, the fluid dynamics. Even things like "eye gaze" are still a problem in 2011. For example, there's good evidence we can accurately tell where someone is looking by the glint of their eyes, which changes with the subtle deformation of their eyeballs as they shift focus from "near" to "far". Virtual actor "gaze" looked far more believable in Monsters, Inc., for example, than in Beowulf. These are all solvable problems, but no one has put it all together yet, IMO. Avatar mostly worked on this front, but it intentionally wasn't depicting humans. We get two years until I need to admit the delay…
AR Contact Lenses (5-7 years from 2008 = 2013-2015 for sure; 2011-2013 is possible at great expense). It's still possible! Well, forget the 2011 option, but this is looking better all the time. The University of Washington contact lens work is still 5-10 years out, alas. But the combination of a cheap, mostly inert contact lens plus simple AR glasses is a potential game changer (assuming people who don't need the lenses will choose to wear them). It's certainly still possible for 2013, given Innovega's recent public announcements. And Vuzix has made some impressive announcements of its own, from what I hear.
Google Earth for the Human Body (no date): True! With similar benefits and flaws to those we discussed. I don't see the "mapping problem" solved yet, but it's really hard. Give it time. As a teaching tool, it's a great start. And it's all done in WebGL as a bonus. My one complaint was about the sad state of Web3D standards, but that too may be about to change.
So as of today, I’d give myself about a 60% rating, with some items still TBD. Not a bad batting average for such aggressive estimates, but hopefully I’ll do much better next time.