Display Technology

I got my professional start working in Virtual Reality in 1992 for a small start-up in Seattle, where we built big immersive displays from wood and plastic. Later, at Disney, I got to work with both custom head-mounted-displays and wall-sized projection displays (we built the world’s first hexagonal CAVE, a good 5000 pixels across). I’ve researched and endlessly pondered the issue of where display technology is heading. Let’s see if my answers are in line with yours.

The State of View

Right now, there’s a bit of a disconnect between what’s marketed to consumers and what’s best. Many of you have already gone out and bought your swanky 42-inch or better plasma/LCD/DLP TV sets. Better make them last 4-5 years: five years out, the "must have" will be wall-sized displays, and you’ll probably want to shell out once again. Fortunately, you can’t go much bigger than a wall without rebuilding your house.

For those willing to do a little home engineering today, ceiling-mounted projectors are bright enough to easily scale to 10- or 12-foot (180-inch diagonal) displays without dimming the lights. At reasonable price points (say, the price of your LCD TV), resolution is limited to 1024×768 (comparable to 720p), but full HDTV resolution and higher starts at around $8000 today and will be down to the low thousands within 3-5 years.


With HDTV still slowly coming to life, the next trend will be immersion, not more pixels (the last thing the industry needs now is another standards fight). Larger screens (10 feet and up) are a decent substitute for the movie theater experience. But for immersion, you’ll want some peripheral vision, aka wrap-around with at least half the resolution of the center screen. And you’ll have two main options. Projection can always fill in in a pinch, using a curved screen (with software correction or corrective optics) or three flat, angled screens. Three projectors would be fitted into a single "emitter" device, and the screens might even fold to save space (at increased cost). But get ready to reconfigure your living room in any event. The simplest possible version of this takes the "ambient lighting" colored-LED gimmick to its logical conclusion: two small projectors at the edges of your TV project additional, lower-resolution peripheral color onto your wall (or two slide-out screens), extending your TV significantly.


We already see the start of this with multiple-monitor setups for PC games. It’s still niche, but it’ll go mainstream once the networks (especially cable) realize they can use their spare bandwidth to offer extended panoramic views as a premium option. In other words, they’ll program for three screens simultaneously: the main center channel and two optional sides, all linked to form a seamless panorama.

Watch for this in sports first. In fact, the trick could be done today, perhaps in a sports bar with three big TVs or a theater with a single very wide screen. The main technical obstacle for the networks is bundling three cameras on one pivot (not hard) and dedicating the bandwidth. There are trickier issues, like how to handle "zoom," but those are solvable with software, or at least avoidable.
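As a rough sketch of the geometry involved (the 54° horizontal field of view below is an assumed figure, not from any broadcast spec), three identical cameras on one pivot tile into a panorama like this:

```python
# Rough geometry for a hypothetical three-camera panoramic rig.
# Assumption: all three cameras share one pivot point and have the same
# horizontal field of view, so adjacent view frusta tile edge-to-edge.

def panorama_yaws(h_fov_deg):
    """Yaw angle for each camera (left, center, right), in degrees."""
    return [-h_fov_deg, 0.0, h_fov_deg]

def total_coverage(h_fov_deg):
    """Total horizontal coverage of the stitched panorama, in degrees."""
    return 3 * h_fov_deg

# Assuming a lens covering about 54 degrees horizontally:
print(panorama_yaws(54.0))   # [-54.0, 0.0, 54.0]
print(total_coverage(54.0))  # 162.0 -- well into peripheral vision
```

Even a modest lens gets you most of the way around the viewer's field of view, which is why the side channels can afford to be lower resolution.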


The more exciting displays will come with any of a dozen variations on the flexible display surface, using electronic ink or any suitable medium such as flexible OLED. On the 5-15 year horizon (depending on many factors), you can expect to buy a "roll" of active display material, say 7 feet high and 20 feet long, which you can arrange anywhere from flat, to a semicircle around your couch, to a full 360° circle of video around you, standing in your immersive environment.
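It’s worth checking the arithmetic on that roll; a full circle of 20 feet of material is snugger than it might sound:

```python
import math

# How big a cylinder does a 20-foot roll of display material make
# when bent into a full 360-degree circle? (circumference = 2 * pi * r)
ROLL_FT = 20.0
radius_ft = ROLL_FT / (2 * math.pi)
print(round(radius_ft, 2))  # 3.18 -- a cozy one- or two-person cylinder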


The current best choice for the 3D effect is the parallax barrier, essentially just another, opaque display in front of your color screen that ensures each eye sees a slightly different picture. That should scale to flexible technology as well, so [thicker] 3D wallpaper isn’t too far-fetched. And there’s always the good old lenticular lens (bending the light rays so each eye sees every other pixel column). These are still expensive to produce, but with holographic lenses (wavelength-scale prints which mimic the properties of real lenses), the price and thickness may come down to almost nothing.
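A minimal sketch of the column interleaving a parallax barrier relies on (the "images" here are just nested lists standing in for pixel rows):

```python
# Minimal sketch of parallax-barrier interleaving: the barrier's opaque
# slits make even columns visible only to one eye and odd columns only
# to the other, so the panel stores the two views woven together.

def interleave_views(left, right):
    """Combine left/right views column-by-column for a barrier display."""
    out = []
    for lrow, rrow in zip(left, right):
        row = []
        for x in range(len(lrow)):
            # Even columns pass to the left eye, odd to the right
            # (which eye gets which depends on barrier alignment).
            row.append(lrow[x] if x % 2 == 0 else rrow[x])
        out.append(row)
    return out

L = [["L0", "L1", "L2", "L3"]]
R = [["R0", "R1", "R2", "R3"]]
print(interleave_views(L, R))  # [['L0', 'R1', 'L2', 'R3']]
```

Note the cost this makes obvious: each eye only ever sees half the panel's horizontal resolution, which is why barrier displays want very dense screens to start with.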


Head-mounted displays, even when shrunk to the size of sunglasses, are suboptimal for entertainment, mainly due to the inevitable latency when you move or turn your head. It feels a lot like being drunk, even when the lag is only 1/30th of a second. Fixed displays (projection or anything equivalent and wall-sized) only have head-tracking latency issues when you significantly move your head, not when you turn it, and you can get away without any head-tracking at all as long as you don’t need to interact with virtual objects closer than about twice your screen size. By adding parallax barriers to fixed-mounted displays, the perceived latency can be dropped to zero, as long as enough channels (slices, each ready for an eye to see) are predicted and rendered in time.
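To see why even 1/30th of a second is so noticeable, a back-of-envelope calculation helps (the 100°/s head-turn rate below is my assumed, fairly conservative figure; quick turns can exceed 300°/s):

```python
# How far the virtual world "slips" during one interval of tracking latency.
# Assumption: a casual head turn runs around 100 degrees per second.

def latency_error_deg(turn_rate_deg_s, latency_s):
    """Angular registration error accumulated before the display catches up."""
    return turn_rate_deg_s * latency_s

print(round(latency_error_deg(100.0, 1 / 30), 2))  # 3.33 degrees of slip
```

Several degrees of swim on every head turn is well past what the visual system tolerates, which is the "feels like being drunk" effect.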


But HMDs have one big advantage in portability. Augmented reality is the overlaying of virtual information on the real world. And it’s already used, for example, in aircraft maintenance. For entertainment, imagine a melding of the real environment with the virtual in alternate reality games, dragons in your back yard. This area is having some trouble commercializing right now, but the goal is to have a heads-up display for anything you might find today on your cell phone or PDA, including automatic recognition of objects, people, and places. At some point, your phone call will include a life-sized image (a "ghost") of the person on the other end of the line, walking beside you down the street (I’m ignoring how to get that image, just how to display it).


But that’s still a ways away. I imagine HMDs will become more common in 5-10 years with further miniaturization. Some companies are even using lasers to beam the image into your eyes without a screen at all (I tried it 10 years ago in the lab, and I hope it’s improved). But the big sea change will happen when miniaturization takes HMDs down to the size of contact lenses in 10-20 years (certainly needing on-chip lasers or holographics to make it work). Then you’ll see a dual interface. Open your eyes and you get an overlay of the virtual on the real (objects, text, highlighting, even magnification). Close your eyes and you’re in a complete virtual world.


The more mundane vision is of the virtual office, which people are pushing for today the same way they pushed for the paperless office in the 90s. Sure, I’d love to have a real holographic desk surface full of virtual 3D widgets I can slide around, virtual paper instead of a monitor, Google Earth floating on my desk — that sort of thing. But will the benefits be worth the expense? Not for some time to come. Studies have shown that more screen area can equate to higher productivity (up to 30%, iirc), but I haven’t seen any study showing a benefit for a virtual floating piece of paper over a 2D window on a monitor. I’m a big fan of 3D desktops, but mainly for aesthetic reasons. It’s going to take some enterprising software development to find a real use for the 3D desktop.


So for the easiest path through all this chaos, I’m imagining the next push will be for multiple or simply bigger screens (possibly multiple channels of television side by side), followed by wrap-around displays for the home. In 5-10 years, we’ll see augmented reality take off for the office and the street. And the virtual contact lens brings it all together in a nice little package by the time I’m ready to retire. Which is a good thing too, since I may really need glasses by then.

See page 2 for further thoughts. Please feel free to comment, share your thoughts, and I’d be happy to go into more depth on any given area in future posts. This stuff used to be my bread and butter as a consultant, but I’ve moved on to other areas of R&D these days.

13 thoughts on “Display Technology”

  1. Just as an example of the projector vs. TV issue, I’ll use my own home theater. Rather than spend $$$ on a good HDTV, I bought a nice 1024×768, 3000-lumen projector for about $2000 two years ago. For the best image quality, I run it off a pure digital (DVI) feed from my computer, where I play DVDs and have an HDTV tuner card. With a 10-foot screen, it’s hard to find even a $6000 HDTV that comes close, and I’m running the projector at only 50% brightness to extend bulb life.

    The only downside is shadows if you walk in front. But with a roll-up screen, it’s even flatter than the flattest plasma screen: it totally disappears until it’s time for movies.

  2. I know Vernor Vinge has written about contact-lens-sized projectors that overlay a virtual world over the real world (and he envisions them arriving within the next twenty years, too), but I confess I’m skeptical.

    How do you imagine these things are powered? Is the CPU within the lens itself (and if so, how much heat is generated)? Or is the CPU somewhere else, sending data to the lenses via infrared? And how are the two lenses kept in sync? Do they have to maintain line-of-sight with a transmitter that you’re wearing? (If so, how well would that work when your eyes are closed?) And all this in a lens safe enough to wear on your eyeball?

  3. Vinge is my hero. I hadn’t realized, or maybe I forgot. Which story was that? Anyway, I imagine it would need to be seriously everything-on-a-chip, lasers or micro-mirrors, power, CPU, networking, positional sensors. Very tight integration all around to make it work.

    For power, I was thinking of a couple of options. Battery is simplest: charge at night. Capacitors aren’t so small, but there’s a relatively large area outside the pupil to work with. Solar is an option too; most of the lens surface can be photovoltaic and need not be transparent, though I don’t know if that’s enough power. Heat dissipation is an issue as well, so consumption would need to be low in any event. We may also be talking biological processing here, in which case we might be able to pull power from the living eye cells (the cornea and lens are made of hollowed-out cells, iirc, but there’s more to it than that).

    That’s about the limit of my hardware skills. From a software point of view, having the CPU on board is best, since latency is critical. Each lens could act independently in rendering (each might rotate or slide on your eye anyway), but maybe just send and receive image data to a remote CPU on the body somewhere. A usable wavelength is proportional to the size of the antenna, so a lens-sized antenna means a wavelength too short to go through walls well, but probably fine for getting from your eyes to your hip over a low-power PAN. How much of a Bluetooth earpiece is just speaker and battery?

    Anyway, not in the next 10-15 years, I expect.

  4. They feature prominently in Vinge’s “Fast Times at Fairmont High,” and I gather his recent novel RAINBOWS END is set in the same milieu.

    How could each lens act independently in rendering? It seems like the slightest discrepancy between the two would give the wearer headaches (not to mention ruin any stereoscopic effects).

    I hadn’t thought about photovoltaic as an option. So these lenses simultaneously act as projectors, cameras, antennae (assuming that the CPU is elsewhere), and solar collectors, while remaining flexible, hydrophilic and oxygen-permeable? Wow.

    And for what? Just so people don’t have to wear glasses? It just seems that external eyewear would be so much simpler.

  5. The two eyes are going to need to render two independent views no matter what. Both would need positional sensors to account for where your eyes are looking (which, with convergence, accommodation, etc., is almost never in true parallel), how the lens is situated (rotation), and so on.

    While communication between them is needed for determining focal distance (easier than measuring the eye’s own lens), waiting for one "right" view direction would be worse than the perceived error of two separate ones. Filtering both would better solve that and smooth out any jitter in the sensors. Our vision systems have some built-in image stabilization too, especially in accounting for eye movements. And I wonder if the contact lens could image reflections off the retina as well as project onto it, for extremely precise calibration and image registration? That would be quite cool.

    One of the causes of headaches for HMDs is the fact that the HMD assumes both eyes are in parallel gaze, focused at a fixed depth, and generally not moving. The best systems use eye tracking, but even these have a hard time pre-convolving the image so the eyes get the right focal information in the right parts of the images to avoid headaches (same problem as looking at a blurry image — worse if it’s blurry due to the wrong kind of filtering).

    Anyway, the contact lenses, using lasers or holographic rendering, would theoretically be able to avoid all that, or at least, they require us to solve all those problems to make them work.

    The benefits of contact lenses over glasses are, first: NO GLASSES. But all of the above specifics about differential vision apply too. The experience would have to be better because of how the lenses work: better registration of real and virtual objects, better eye tracking, better focal accommodation. Then there’s what you get with your eyes closed, which glasses can’t do. I personally like the idea of popping in and out of a virtual world vs. the real one by closing and opening my eyes.

    I’m not sure the photovoltaics need to be so advanced as you say. As long as they’re thin and flexible, they can be sealed in a thin coat of polymer but with enough O2-sized holes in the lens itself.

  6. Yes, of course the two lenses have to render different images all the time, but computing the difference between them requires accurate information about the relative positions, etc., of the eyes at all times. Glasses could get that information more easily than contact lenses; it’s easier to track eye movements and focal depth with a laser than with some positional sensor.

    As for the “NO GLASSES” argument: I think that by the time materials technology reaches the point that all the necessary functionality can be combined in a contact-lens-sized object, we’ll have some form of direct stimulation of the optic nerve. The former task seems at least as difficult as the latter, if not more so.

  7. Yeah. I’d agree that, right now, putting eye-tracking on a frame is better than any wireless motion-sensing technique, which is to say neither approach is quite there yet. I don’t believe optical tracking from the eye’s point of view would be so much harder than optically tracking two pupils from a frame, as long as the cameras had instant range information as well as color. You might not need to find correspondences between the two eyes’ cameras at all (just a reasonably static scene). But, yes, VR sunglasses will precede any sort of VR contact lenses by a few years at least. I just don’t think they’ll be as good or as popular 20 years from now.

    Anyway, if by "direct stimulation of the optic nerve" you mean a device implanted and trained for individual neurological differences, then yes, that’s likely in 5-10 years, though not desirable for the casual user and far from video fidelity. For people with impaired vision it’ll be a great option though.

    The technology for focusing EM waves from an external (casually worn) source to control individual neurons (assuming we even know which ones and how) is likely more than 20 years out. It’s not the same as zapping a tumor. I’ll post more on that later. My wife is a neuroscientist, as it happens, so I’ll do more research on that and post again soon.

  8. VR sunglasses will precede any sort of VR contact lenses by a few years at least.

    A few decades, more likely. We have rudimentary VR sunglasses now. Current contact lenses are completely passive.

    I just don’t think they’ll be as good or as popular 20 years from now.

    As good or as popular as what?

    Sure, direct stimulation of neurons to produce high-resolution vision for the casual user is a long ways off. But so is having a contact lens that acts as both a high-resolution camera and retinal projector.

  9. "as good or as popular" refers to glasses vs contact lenses. I think once they work, the lenses will be much more popular for the reasons I mentioned above.

    Maybe some people like wearing glasses, but once vision problems are routinely repaired, glasses should become somewhat niche. For the nearer term, I think ubiquitous 3D display surfaces are also going to keep VR glasses somewhat niche.

    Sunglasses are popular for outdoors (at least on the west coast, not so much in NY), but I’d also much rather have a single display tech that goes with my indoor and outdoor life.

  10. I’m not saying people enjoy wearing glasses. Most people don’t put lenses against their eyeballs for fun, either. Once vision problems are routinely repaired, contact lenses will become niche, too.

    I agree that 3D display surfaces are going to be more popular than VR glasses. They’ll also be more popular than VR contact lenses for the same reasons. But assuming that people want private visual displays at all, glasses will be the better (perhaps only) choice for a long, long time.

    Consider: sunglasses that darken in the sun have been around for decades. Do we have contact lenses that do the same? How long will it be before people routinely choose to wear photochromic contact lenses instead of sunglasses? I expect a similar lag time for any subsequent technological advances in glasses.

  11. I think people would love self-tinting contact lenses if they were readily available and cheap. I found one site at least: http://www.vstk.com/
    Personally, I only buy cheap sunglasses because I break or scratch every pair. But we digress.

    I promise a highly debatable premise for my next Brain Interface vs. External Sensory VR post.

  12. Concerning the power supply of the contacts:
    The output of the lasers is about 100 µW. It’s possible to supply power of that magnitude via radio – same as RFID. If the lifetime of the contacts is short, like a day or so, a tiny battery should suffice. Power supply shouldn’t be a big deal.

    I can’t see how one can control all the laser beams, though. Does anyone know?

  13. On steering, I can imagine a layered chip that has both coherent light generation and a DMD zone to steer it.

    The optical geometry seems a bit tricky: making the contact lens flat enough while still getting a wide enough range of angles for the lasers to reach the whole retina. But I’m sure some smart person can figure that part out. Maybe there’s an equivalent of the DMD that can individually refract the beams instead of reflecting them?

    It’s a bit out of my expertise. But I’d be happy to write the software if anyone wants to help build this. 🙂
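The power-supply estimate in comment 12 invites a quick back-of-envelope budget. The 1 mW total draw (laser output plus electronics overhead) and the battery energy density are my assumptions, not figures from the thread:

```python
# Back-of-envelope daily energy budget for a contact-lens display.
# Assumption: ~100 uW of laser output plus electronics overhead,
# for a total draw on the order of 1 mW.
DRAW_W = 1e-3                     # assumed total draw, watts
HOURS = 16                        # one waking day
energy_j = DRAW_W * HOURS * 3600  # joules needed per day
print(energy_j)  # 57.6 J

# A thin-film battery stores very roughly 1 J per mm^3 (order-of-magnitude
# assumption), so a full day needs tens of mm^3 of cell.
volume_mm3 = energy_j / 1.0
print(volume_mm3)  # 57.6
```

Tens of cubic millimeters is more volume than a contact lens offers, so under these assumptions the RFID-style harvesting or a much lower duty cycle looks necessary rather than optional.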
