The Unauthorized History of Virtual Worlds


I wrote the following essay to help get us going on a review paper for a major comp-sci journal. ‘Us’ in this case meant Blaise Aguera y Arcas, one of the founders of Photosynth and Virtual Earth’s new architect, and Jaron Lanier, one of the pioneers of VR, who thought of pretty much everything before I became conscious of the world.

Now, I should caution that Blaise didn’t ultimately want to use this text, and Jaron equally had issues with it. The tone is all wrong for an academic journal, plus Jaron disputes some of the dates I recorded in my research (he may well know better). But I felt it might at least be entertaining to RP readers, so I’m posting it for you to enjoy. Still, don’t take any of it as official — it’s just me being a smart-ass.

 

 

October 20th, 2008 marked the 30th anniversary of the MUD[1], or Multi-User Dungeon, widely recognized as the world’s first multi-participant text-based virtual world. Only three years later, a somewhat less interactive work, True Names[2] by Vernor Vinge, imagined full multi-sensory worlds with millions of participants. The film TRON debuted only a year after that, popularizing (if not actually monetizing) computer-mediated virtual worlds as full-on alternate realities — places with lives unto themselves. But before any of these were even conceived, The Veldt[3] by Ray Bradbury envisioned “The HappyLife Home,” a fully immersive CAVE-like space, consuming parents and kids alike way back in 1950.

The history of virtual worlds is a complex mesh of fact and fiction, weaving pioneers, dreamers, authors and critics into a quest to define a grand vision and to meet an ancient need, dating back to the days of burnt charcoal on cold cave walls. That need is to communicate, to share and persist what is otherwise ephemeral, isolated and ultimately bound to the lifespan of memory: our thoughts, our ideas and our stories about life, real and imagined. It is perhaps fitting that these visionary fictions are themselves conveyed to audiences new and old as print and film-mediated virtual worlds[4], just as the CAVE acronym[5] is itself a recursive allusion to Plato’s Cave, with its illusions playing on those same torch-lit walls.

That grand vision goes beyond mere communication. The desire for ubiquitous virtual worlds is an understandable manifestation of our collective (if not universal) longing to overcome the rules that contain us, to propel our minds over the limits of matter, space, and even death, of which the “mortality of thought” is just one example. The end goal for many is to construct alternate realities so malleable, so perfectly adapted to our innate desires, that we could fairly call the results magic.[6] In those brave new worlds there need be no scarcity, no ugliness, no pain. We can be whatever we imagine or wish ourselves to be and suffer none of the limits of ordinary mundane life.

Then again, in reality as in fiction the stories never quite seem to work out that way. Reality has a way of always winning in the end. And as with the previous allegories, this grand quest of ours is so cyclical, so inflated and self-referential as to often resemble the construction of a Klein bottle. The true history of virtual worlds is one of visionary and often impervious genius, promises made (and made again) and the search for the most elusive human-computer interface ever envisioned: the one that disappears.

One of the early visionary geniuses in the field was the legendary ACM Fellow, Ivan Sutherland. His ground-breaking Sketchpad system plotted the lines along which future computer graphics interfaces would be drawn. He invented the head-mounted display, heavy as it was, to better couple the virtual imagery to our actual head motions and in so doing laid the foundation for untold future systems that place one or more virtual interactive cameras in a computer-mediated reality.

But just 10 years earlier and only a few years after Bradbury’s fictional exploration of virtual family values, Walt Disney more literally broke ground on a site in Anaheim that would become, for a significant period of time, the world’s largest physical virtual world.

Until then, the best example of a “fantasy land” was the haunted house — a grotesque and distorted reflection of reality, not entirely unlike some early experiments in VR. Disney’s theme parks provided a comfortable level of immersion for many people — not nearly as inescapably immersive as, say, Korea or Vietnam, and not nearly as interactive as the Renaissance Pleasure Faire and “RenFaires” since, but safe and fun for the whole family.

The 1960s also saw an explosion of experimentation in the media of the time, including Morton Heilig’s Sensorama, which combined stereo visuals, smells, sounds and even haptics. Filmic experiments in 3D and surround visuals abounded, with Disney’s Circle*Vision[7] being one of the most widely viewed. The addition of sturdy handrails to keep guests from falling over (in a stationary theater) is a testament to the power of these experiences to move and consume us, for better or worse.

While we’re not sure if Sutherland knew Disney or Heilig in his day, we do know that Evans and Sutherland computers were used to help animate Disney’s TRON in 1982. And ten years after TRON failed to make a dent in any critical or commercial sense, Disney Imagineering’s VR Studio would similarly begin its real-time interactive VR experiments on Evans and Sutherland image generators.[8] The obvious choice of movie content gave way to a failed Rocketeer attraction and later Aladdin’s Magic Carpet Ride, sporting heavy HMDs counterbalanced much like Sutherland’s ceiling-hung “Sword of Damocles” display.

One of Sutherland’s students, Ed Catmull, also found inspiration in Disney animation. He embarked on a 30-year quest to reinvent the art and science of animation, finally coming full circle in the new millennium as the head of Disney’s animation studios, after Disney finally married Pixar.[9]

The hard road towards better computer-mediated storytelling (in HMDs and on the silver screen) merely proved that obstacles are there to surmount. Randy Pausch, the scientist most famous for his profound “last lecture” at CMU, observed that obstacles “give us a chance to show how badly we want something” — or, perhaps, to make us take the time to understand why. Dr. Pausch worked with that same Disney VR Studio, on CMU’s Alice software, and with other members of this story, all of whom badly wanted to solve the problems of more easily building and experiencing computer-mediated virtual worlds.

But reinventing the world takes time — often a generation or two. Ivan Sutherland was reportedly influenced by Vannevar Bush’s 1945 atomic-age essay “As We May Think,”[10] which is also widely credited as inspiration for the World Wide Web. That it took 50-plus years to produce a Google and an MSN to make the web more tractable merely reflects the difficulty we have in taking grand ideas and rendering them in a form that works for ordinary people: the proper convergence of money and market, timing and technology, and, more importantly, a better understanding of just what it is we should be asking for in the first place.

Well before we had many cogent glimpses of that sort of revelation, the 1970s saw the exploration, largely on university mainframes, of true multi-user environments. Maze War, in 1974, was the first known multi-user environment — and graphical to boot — but its influence was limited. MUD, in 1978, had much more success. The lack of graphics proved to be no deterrent, and in fact arguably improved interactivity and depth, simplifying some very hard problems to a more manageable scope.

The 1980s saw a new push into real-time graphical interfaces, built on cheaper and more available commodity hardware. On the desktop, Apple and Microsoft pushed 2D windows, icons and mice to great effect. In movies, CGI went from niche to boutique to a mainstay of visual effects. And VPL Research pushed the envelope in full-sensory virtual reality (and even the term itself), providing visual programming elements and a grab bag of more natural user interfaces. Their RB2 (Reality Built for Two), shipped in 1989, was the first commercial VR system.

The Achilles’ heel of commercialization is the requirement of making money. VPL ultimately saw itself and its patents sold to Sun in 1998. Still, in 1989 Mattel released the PowerGlove, which simplified the VPL DataGlove concept into something mass-marketable. It unfortunately failed to spawn the kind of kinetically addictive games that the Nintendo Wii (and now Microsoft’s Kinect) enjoys, despite many similar capabilities. And while the visual programming language VPL developed broke new ground, it fell to Lego and Microsoft’s Robotics Studio, years later, to truly push some of these concepts to wider audiences.

The 1980s can be thought of as the early adolescence of virtual worlds, in which the core technological concepts came to be expressed, tested and propagated to anyone who would listen. Habitat, for example, emerged in 1986 on the popular Commodore 64 platform, and was arguably the first mass-market virtual world, presaging mega-hits like Habbo Hotel, Club Penguin and even EverQuest by over a decade, but never quite gaining the market validation it sought.

If the 1980s are the early teens, the 1990s represent the often-raucous teen-to-adult transition. Just as it is difficult for any child actor to grow up amid the hype and spotlight of Hollywood, so too were overwhelming expectations a part of VR technology’s downfall. Movies like The Lawnmower Man promised an effectively supernatural decoupling of VR’s espoused benefits from any actual truth.

Publications like Wired treated VR visionaries (or indeed, anyone who seemed to have a good futuristic idea) like rock stars. In fact, it’s the expression of difficult computer science concepts in natural human metaphors that is both the greatest strength and the ultimate weakness of VR – anyone describing it to lay audiences can use language that evokes sweeping images and expectations that are currently impossible to meet.

Meanwhile, with all of the attention on and potential of VR technologies for computer-mediated virtual worlds, research funding increased dramatically. Among the efforts, the Human Interface Technology Lab at the University of Washington pushed new technologies to solve some of the core interface challenges in VR, leading to devices like the Virtual Retinal Display, a laser-based display that is today most closely related to pocket-sized low-power projectors. UNC pushed the envelope on haptics and core technologies, while universities like Utah, Ohio State, Brown, and Stanford pushed ahead primarily on 3D rendering. Carolina Cruz-Neira and others pushed the envelope in projected virtual environments, culminating in 1992 with the CAVE at the University of Illinois at Chicago’s Electronic Visualization Laboratory. Meanwhile, continued military investment in visual simulation drove investment in graphics supercomputers, ultimately leading to cheap commodity 3D hardware acceleration for PCs.

Virtual Reality went more or less the way of the space program. Massive investments produced no influx of civilians going into space, no personal robots, no flying cars. It certainly got a lot of public attention, anxiety and perhaps disappointment after some early climaxes. But the technologists and dreamers marched on. And the core technologies produced did in fact wind up in everyday consumer devices, from cell phones to gaming consoles.

While true Virtual Reality devices failed to take over the world in any meaningful way, the offshoots of the very same work wedged themselves into our daily lives. While Disney Imagineers worked on million-dollar SGI hardware to build an immersive Aladdin VR attraction in 1994, they played Doom and Quake in the office for fun. Giant dinosaurs gave way to nimble mammals, just waiting for their chance. John Carmack’s games simultaneously spawned an entire genre of blood-splattering Demon/Nazi/Zombie-shattering realism and helped blast PC-level 3D graphics into the mainstream.

The gaming community can be credited with seeing some of the most lucrative uses of 3D graphics and interactivity, and of turning it all into a sustainable, even thriving business[11]. Gaming has turned into such a serious business that a whole branch of Serious Games has emerged[12].

But while game companies were having fun and making money, the internet boom took full effect, with a much stronger emphasis on the latter over anything else. SGI famously launched Cosmo, an offshoot of VRML 2.0, to take the “Metaverse” by storm. The “dot com” bubble, which dwarfed any investment in games or VR by orders of magnitude, pushed home broadband adoption but ultimately left US network infrastructure — and networked virtual worlds — bobbing in the wake of the April 2000 tsunami.

Philip Rosedale was one survivor of that bubble. He was one of the inventors of a video codec that made RealVideo hum, having sold his company to Progressive Networks in 1996. He founded Linden Lab in 1999 with the proceeds and some help from friends in physics and video games. And while the evolution of massively multiplayer online games can be traced from Maze War to XPilot to Meridian 59 and World of Warcraft, Second Life differentiated itself in several important ways: first, there was to be only one shared world, not many shards or instances to split the load; second, the world is malleable and subject to users’ whims; and third, it’s not a game but an alternate existence unto itself, tied into real-world currency and the web itself but separate just the same. Second Life also rejected the standard VRML view of the world, of scenegraphs and polygons, in favor of a completely dynamic soup of what are effectively virtual Legos. The end result is something that is both powerful and still, eight years later, limited by its own design to a walled garden and an only linearly scalable topology.

Another survivor of the dot-com implosion was busy building a fundamentally different kind of virtual world, right in the bubble’s wake. Keyhole was a spinoff of the defunct game technology company Intrinsic Graphics, involving many former engineers from SGI who had worked with VR early adopters from Disney Imagineering to NIMA and the NGA[13]. The key technology could stream an entire planet’s worth of visual information over a standard network connection and render the relevant window in real time on any PC. But the true advantage over contemporary systems, GIS and otherwise, was how simple and intuitive the interface was. In 2004 Google acquired the company and renamed its product Google Earth. And today, it defines what a mirror world is supposed to be — an accurate reflection of the real world, equally navigable from your desktop or mobile phone.
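The streaming trick described above is, in spirit, a level-of-detail scheme over a quadtree of imagery tiles: only tiles that appear large enough on screen get fetched and refined further. The sketch below is a toy illustration of that general idea — the function name, the projected-size estimate and the thresholds are all made up for illustration, not Keyhole’s actual implementation:

```python
def visible_tiles(level, x, y, camera_dist, max_level=20, detail=1.0):
    """Recursively select which tiles of a quadtree imagery pyramid
    to stream, refining only where the view demands more detail."""
    tile_size = 1.0 / (1 << level)        # world extent of a tile at this level
    screen_size = tile_size / camera_dist  # crude projected on-screen size
    if level == max_level or screen_size < detail:
        return [(level, x, y)]             # coarse enough: stream this tile
    tiles = []
    for dx in (0, 1):                      # otherwise refine into the
        for dy in (0, 1):                  # four child tiles
            tiles += visible_tiles(level + 1, 2 * x + dx, 2 * y + dy,
                                   camera_dist, max_level, detail)
    return tiles
```

Because refinement stops as soon as a tile’s projected size drops below the detail threshold, the number of tiles actually streamed stays small and roughly constant no matter how large the underlying dataset is — which is what makes “a planet on any PC” plausible at all.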

There was yet another kind of mirror bubbling to the surface around this time, one much more grass-roots. Bloggers, with unfettered access to the Web, began posting about their daily lives, their interests and their friends. Social networks began to take root in 2003, in a second wave of web startups, the so-called Web 2.0. They developed “what’s new” feeds meant to keep people in the loop. Sites like Twitter emerged later to simplify the process of posting one’s status to the net. And more recently, efforts like Foursquare and others seek to leverage real-time location on mobile devices to build up a view of the mirror world that is both personal and reflective of our individual realities.

A distinct kind of virtual world was also just coming into focus around the new millennium. The earliest examples of Augmented Reality can be traced to systems like Myron Krueger’s VideoPlace in 1975. These combined video streams of real people and real or artificial places to simulate a sort of detached immersion. Much as with a meteorologist on the local news, participants could see themselves acting in a virtual environment only in the third person — on monitors, for example — versus the first-person points of view obtained with HMDs and CAVEs. Eventually, technology was added to accurately track and emulate camera lenses and to seamlessly register computer imagery into the video. Today, augmented reality typically refers to live video captured from a person’s point of view, overlaid with relevant CGI and fed back to some display device, perhaps even a head-mounted display.
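That registration step — drawing computer imagery so it lines up with the tracked camera — ultimately reduces to projecting virtual 3D points through the same lens model as the real one. A toy pinhole-camera sketch of the idea (illustrative names and numbers; real AR systems must also estimate camera rotation and lens distortion, omitted here):

```python
def project(point, cam_pos, focal, cx, cy):
    """Project a 3D point (camera looking along +z) to pixel
    coordinates with a simple pinhole model."""
    x = point[0] - cam_pos[0]      # express the point relative
    y = point[1] - cam_pos[1]      # to the tracked camera position
    z = point[2] - cam_pos[2]
    if z <= 0:
        return None                # behind the camera: not drawn
    u = cx + focal * x / z         # perspective divide maps world
    v = cy + focal * y / z         # units to pixels at the image center
    return (u, v)
```

If the tracked `cam_pos` and `focal` match the real lens well, a virtual object projected this way lands on the same pixels as its real-world anchor, which is exactly what “seamless registration” means in practice.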

The promise of augmented reality is best epitomized in Vernor Vinge’s novel Rainbows End, where VR contact lenses provide continuous access to the augmented world, both indoors and out. Haptic interfaces provide the missing sense of touch needed to manipulate these virtual objects and make them real. But the state of the art is nowhere near this threshold. Problems with tracking and registration of objects remain, but the principal challenge remains the display device: making it small, unobtrusive, and yet high enough in fidelity that a person can walk around and actually use it without getting hurt by the real world, which hasn’t gone away.

In the development of Virtual Worlds technology, we’ve seen displays go from a few vectors to thousands, millions, and lately billions of pixels, and from a few dozen to a few billion polygons per second as well. We’ve seen haptic interfaces shrink from room-sized devices, reminiscent of the Inquisition, to gloves and even direct neural stimulation with no mass at all.[14] We’ve seen input devices go from mechanical arms to wired magnetic sensors to wireless and even optical motion capture. And we’ve seen networks go from slow drizzles of bits to full-on torrents, with sophisticated methods of prediction to cover latency and errors.

And yet Virtual Worlds, despite such advancements, and despite the adoption of their technology and methodology in many fields, are still widely seen as a future technology, not relevant to our everyday lives — a walk down the street, a trip to the mall, a day in the office. Web 3D is derided as unnecessary and indeed cumbersome, given the success and simplicity of the current 2D Web. Second Life is seen as niche, despite many businesses trying, for a time, to set up shop in one of the best Metaverses around.

Clearly something is missing in the equation.

After forty years of research and development since the ground-breaking days of Ivan Sutherland, we’re finally beginning to come to a realization. People like escapism and fantasy worlds. Kids love to play in the modestly dimensional clubs of penguins and hotels. And companies are rushing in to tame the wild 3D west and pan their virtual gold. But the vast majority of human beings still spend the vast majority of their time immersed in a much more compelling and far less escapable 3D environment, which we tend to call “real life.”

Computer-mediated virtual worlds are just beginning to catch up with what we do out here in the real world. We buy and sell things. We get and share information. We communicate and “get stuff done.” We can increasingly do all of that in a virtual world too. But we can’t do it nearly as well.

Though we bemoan the limitations of 2D displays and table-top mice that have overstayed their welcome by a decade or more, the reason we don’t typically do these things in a virtual world is not about the interface, or the speed of CPUs or GPUs. It’s because we live out here, in the real world. No one likes to commute across an international — or inter-dimensional — border on their way to work.

A significant push in modern immersive environments research, then, focuses not as much on how to bring us to Neverland, but on how to bring Neverland to us – to make The Virtual a part of our lives in a way that benefits us beyond the hype of paperless offices and social (read: peer pressure) networks.

What the new Virtual is all about is reality but augmented, mirrored, and hyper-realistic, giving benefits beyond VR, AR, and even AI (which has similarly vanished into the lattice of modern life).

But what does it take to make this vision real? How do we mirror the real world in a way that we can make it interactive, turn it into a trellis for overlaying whatever whimsical and/or beneficial fictions we can dream up? Once we have that trellis, how do we interact with these hybrid real/virtual objects? How do we trade them, when their substance costs no more than the electricity they consume?

Only one thing is certain: the story of the next 25 years will be written in the full light of day.

[1] MUD was written by Roy Trubshaw at Essex University in 1978. It was the first adventure game to permit multiple users.

[2] True Names (©1981) first appeared in Dell Binary Star #5, and again in True Names and Other Dangers (©1987, ISBN 0-671-65363-6).

[3] Published 23 September, 1950, in The Saturday Evening Post, and again in the anthology The Illustrated Man in 1951

[4] As a matter of definition, virtual worlds can be said to exist in our minds first and foremost, as we never directly experience any objective reality – everything is mediated by our senses, and our memories in large part. Computer-mediated virtual worlds are a novel extension of the same idea, adding a level of depth that film often lacks. Textual virtual worlds (e.g., books), on the other hand, are still the world’s most effective form of conveyance, considering price/performance and effective bandwidth, though the worlds they create in our minds can be said to be a highly lossy decompression of whatever the author had in mind.

[5] CAVE in fact stands for CAVE Automatic Virtual Environment – the recursion has indeed been attributed to a Platonic influence.

[6] From Arthur C. Clarke’s third law: “Any sufficiently advanced technology is indistinguishable from magic.”

[7] Originally named Circarama, and renamed to Circle*Vision in 1967.

[8] E&S systems were used to prototype the Rocketeer virtual reality ride, which was never released, and were later replaced with more powerful and programmable supercomputers from SGI for the Aladdin ride. In a strange repetition of history, the VR Studio chose to suspend their HMD from the ceiling, much as Sutherland’s first HMD in 1968 had been. (International Conference on Computer Graphics and Interactive Techniques, 1996, Proceedings)

[9] Or vice-versa.

[10] As We May Think, The Atlantic Monthly, July 1945.

[11] Insert $xxxB estimate

[12] This market estimated at $9B alone.

[13] The National Geospatial-Intelligence Agency, known by various other acronyms over its several lifetimes, has been the prime consumer of digital earth imagery. In conjunction with Al Gore’s Digital Earth initiative in the late 1990s, it is responsible for the exponential increase in availability (and reduction in price) of aerial imagery that makes Google Earth and Virtual Earth possible.

[14] Presently limited to surgical limb-replacement procedures, the non-invasive stimulation of muscles and sensory receptors is still quite nascent.

  1. #1 by Damon on March 23, 2009 - 10:31 pm

    Great article of the [unofficial] History of Virtual Worlds Avi. This is definitely a piece that offers insight into where it all came from. Now back to building in the full light. :)

  2. #2 by Tish Shute on March 24, 2009 - 6:11 am

    A great article. Thanks Avi – we forget these histories so quickly. I agree with your conclusion. And I look forward to the next chapter as the interdimensional border – virtual/real – dissolves.

  3. #3 by David A. Smith on March 24, 2009 - 12:59 pm

    Hi Avi,
    You might want to note the NASA efforts that I think really jump started the 80′s including directly influencing VPL. (see the paper “Virtual Environment Display System” Scott Fisher, et al). I think this was the first important work in truly interactive environments that was done after Sutherland. It had a huge impact on my work.
    David

  4. #4 by Ken on May 22, 2009 - 7:25 am

    ‘Computer-mediated virtual worlds are just beginning to catch up with what we do out here in the real world. We buy and sell things. We get and share information. We communicate and “get stuff done.” We can increasingly do all of that in a virtual world too. But we can’t do it nearly as well.’

    My concern is that computer-mediated virtual worlds are contributing heavily to the loss of refined sensibility, the full use of our senses, of our body. The ‘Indian’ in us – smelling, moving graciously, experiencing nature with all his senses – killed completely and irrevocably. I don’t mean this in a sentimental, romantic way. Your short list of what we do in the real world, is about manipulating and changing it only, I miss in your summing up our ability to appreciate, sense and enjoy natural beauty _as is_.

    The essential question in my view is: how to rediscover our natural sensibilities and joys – and thereby the wonder and beauty of the natural, ‘given’ world, which comes to light _by itself_ in all its complexity – hand in hand with the virtual technoworlds that may immerse us in an unprecedented fashion. The great danger is, that all our sense-perceptions will be _manufactured_, our interests and joys _programmed_ and _marketed_.

    I see a link here with your excellent remarks in your post ‘Designer Babies’ and would appreciate very much to read (more of) your thoughts on this.

    [I apologize for my Englisch, I'm not a native speaker].

  5. #5 by Ken on May 22, 2009 - 7:43 am

    Just now I came across this post on the blog Transumanar.com: http://tinyurl.com/pqjjm9

    ‘On the IEET and Sentient Development blogs there is an interesting article by Athena Andreadis on “If I Can’t Dance, I Don’t Want to Be Part of Your Revolution!”. Athena says: “Both [transhumanism and cyberpunk] are deeply anhedonic, hostile to physicality and the pleasures of the body, from enjoying wine to playing in an orchestra. I wondered why it had taken me so long to figure this out. After all, many transhumanists use the repulsive (and misleading) term “meat cage” to describe the human body, which they deem a stumbling block, an obstacle in the way of the mind… However, we demean the body at our peril. It’s not the passive container of our mind; it is its major shaper and inseparable partner.’

    Athena is making the point I try to make above much more eloquently.

    Needless to say, that Giulio’s view (see link above) does not convince me a bit. And it’s clear to me that ‘debate’ on this issue of the primal and essential value of the (sensibilities and capabilities of) what Giulo calls ‘biology’, but I would rather call ‘nature’ (the real world as it is emerging by itself and in all its forms, with the beauty not only _in the emerging_ but also _in the emerged_ as is), especially our body – made as disease free as possible! -, is very, very hard. It may be that the core of the debate is a ‘being’, a ‘phenomenon’ which is _evident_ to you, or not…

    Of course I don’t want to suggest that ‘computer-mediated virtual worlds’ always and ultimately boil down to ‘disembodiment’ or processes which are detrimental to our natural sensibilities and capabilities. My concern and key interest is how to _not_ let it come down to that.

  6. #6 by Bob Crispen on June 3, 2009 - 12:22 am

    Amusing. Because the people who were there at the time are still here.
