Archive for October, 2012

The Force

What can be said about the Disney acquisition of Lucasfilm that hasn’t already been said?

Sure, it’s a great deal for Disney, even at $4.05B. And there’s no “George Lucas Media” waiting to sue over the rights. It’s a great deal for Lucas, who owned the company outright and didn’t want to endure any more criticism over his movies.

[update: Lucas will apparently donate much of his wealth to charity, making Star Wars, Indiana Jones, etc... even the critically panned installments, some of the most socially redeeming entertainment of all time... Well done!]

It’s probably a good deal for his employees, since Disney has better pay/benefits/stock, last I checked. And they have the bandwidth to keep the factories humming for eternity + 50 years, which is how long copyright will last, as long as Disney has any say in it.

It’s an unknown future for Star Wars fans, since Disney has proven they can be both much better and much worse than Lucas himself at making great movies. For me, no Disney film has ever reached the emotional quality of episodes IV and V, even as a kid (these are, after all, my #1 and #2 favorite movies). But most Disney films, especially the more recent ones, have far exceeded the plot, character and tone of episodes I, II and III. And I’m including The Black Hole.

At least we get to find out, with more movies now being planned. I’m excited and optimistic, especially if they do what they’ve done with the Marvel universe. Perhaps they’ll even re-do the prequels, with a Star Trek-like reboot that forgets all about midi-chlorians and trade disputes with quasi-racist cartoon characters…

What I’m most excited about is the potential fusion between ILM, Pixar, and Disney R&D, assuming there isn’t some deal to split the Lucas properties before sale. If Disney does at least as well integrating these companies as it did with Pixar, this could become the biggest wonderland among all high-tech creative places to work.

Here’s the lost opportunity though: Lucas supposedly loved the fan-made works in his universe (if not the critics). What would have happened if Lucas had crowd-sourced and then produced the next Star Wars film instead? Who better to protect the franchise than the fans? It’s a pipe dream, I guess.

There’s one thing that’s virtually guaranteed now. Star Wars will be going in and out of “The Vault” every 6-10 years. Disney has perfected the art of drumming up nostalgia by withholding your “precious” until you beg for it at any price.


AR is Dead

I haven’t written much about Augmented Reality since joining Microsoft in 2008. I can’t really say why that is. It’s been a topic of some passion for me for over 20 years. But I think it’s time to start sharing some of my thoughts on the subject. Here’s a first installment.

1. AR is Dead (long live AR!)

Think about this. Virtual Reality was the “it’ll change everything!” buzzword of the early 1990s, well before the dot.com boom made and broke so many millionaires. Today, how many virtual reality experiences do you see a day, a week, a year?

None? Maybe IMVU and Second Life still count. So… two, if you’re so inclined?

Actually, Virtual Reality is a hundred-billion dollar industry, if not more. It’s just not called VR anymore, that’s all.

It’s called: Online Multiplayer Games, Simulation, Virtual Training, Telepresence, and even Natural User Interaction. It made the movie Avatar feasible. Kinect just sold a zillion units in its first six minutes and, yes, when you’re playing Kinectimals on your TV, you’re participating in a virtual world.

Boom. Some marketing person lied to you.

It’s fine. It’s their job to sell stuff by whatever name works best.

(Oculus Rift being the notable exception to this rule — can they revive the original technical term?)

Q: So, given the same trend of a crappy buzzword that will likely die with the downward hype cycle, what will AR actually be called in five years?

A: Shopping, Advertising, Training, Education, Art, NUI, Spatial Games, Location Based Games, Situational Awareness, Geoimmersion, Ambient Computing, SoLoMo, and even Holographic Telepresence, to name a few…

“AR” as a buzzword is near the end of its life. Especially when you see something like this:

Ikea AR App

See How IKEA Furniture Fits Into Your House With New AR App

The app consists of a live camera image with a set of downloaded 3D models of furniture superimposed. There is no AR tracking whatsoever, not even using the compass, as far as I can tell. Moving your iPad slightly from side to side throws off your careful manual alignment. So it’s basically the same quality as holding up an acetate sheet with a picture on it, although you can manually rotate the object.

Now, I don’t want to bash what’s actually a decently useful app for in-situ furniture visualization (even if it is not actually an IKEA-sponsored app, as noted in the fine print). I have to seriously commend the three guys who built this in 4 weeks for truly grokking the concept of “minimum viable product.”

But this is “augmented reality” like I’m a supermodel.

A basic definition of AR says there needs to be at least some computational connection to the real world. You must sense the world in order to augment it, right?
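To make that distinction concrete, here is a minimal sketch (hypothetical function names, not any real app’s code) of the difference between a plain overlay and an experience that actually senses the world each frame:

```js
// Hypothetical names throughout; a sketch, not any real app's code.

// Open-loop "overlay" (the IKEA-style app): the chair stays wherever you
// dragged it on screen, no matter how the camera actually moves.
function drawOverlayFrame(ctx, cameraImage, chairSprite, screenPos) {
  ctx.drawImage(cameraImage, 0, 0);
  ctx.drawImage(chairSprite, screenPos.x, screenPos.y);
}

// Closed-loop AR: the chair is anchored in world coordinates, and every frame
// we re-project it using whatever pose the device can sense (compass, IMU,
// vision tracking). sensePose() and projectToScreen() stand in for the
// sensing step that the app above skips entirely.
function drawARFrame(ctx, cameraImage, chairSprite, chairWorldPos, sensePose, projectToScreen) {
  var pose = sensePose(); // camera position + orientation in the world
  var screenPos = projectToScreen(chairWorldPos, pose);
  ctx.drawImage(cameraImage, 0, 0);
  ctx.drawImage(chairSprite, screenPos.x, screenPos.y);
}
```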

2. What the hell is AR anyway?

This is probably a moot point if the term is going away. It’s really whatever people say it is. Who am I to try to make up definitions that stick?

But, it turns out, there is already a kind of basic accepted definition for AR: an experience that captures a user’s view of the world and selectively augments it. But what does “augments it” even mean?

This is where the rubber hits the road. Just overlaying something on a video stream is not AR. That could better be called real-time CGI video effects, video keying, and basic videographics.

Myron Krueger was doing this in stride when I was just becoming aware of real-time anything, and he was already somewhat jaded by that point. At the time, I didn’t really understand why. Myron totally got that it wasn’t just the green-screen that did it, but the interaction between the virtual stuff and the real person that mattered.

Consider this: to augment the world is to change the world.

In other words, AR is not AR without impact, even if that impact is only on you or other people who inhabit the world in some limited space. Tagging a wall with a can of spray paint is a better example of augmented reality in a more literal sense. So let’s move away from the simple definition and get down to brass tacks (which, btw, was another form of physical augmented reality, circa 1863).

3. Augmented Perception

Most of the AR that people talk about today makes no change to the world itself, only to the sensory input streams of the user, and indirectly at that. In fact, the world has no idea we even exist, and it’s far from augmented by anything we say or do. We are simply using technology to mediate and improve our perception of the world. So let’s call it augmented perception then. Try it on for size.

Such a service may add a desirable superpower, in the form of seeing information that’s normally hidden from human view. Let’s call this clairvoyance, or perhaps “digital clairvoyance” to distinguish it from the paranormal variety. We can see information about real things (names, birthdays, reviews of favorite restaurants) or purely fanciful overlays, like ghosts or zombies chasing you downtown.

The internet itself is an abstract information space, but it has a natural spatial projection on the physical world that we will soon see for ourselves. But as with search engines today, The Google or The Bing, we don’t want to see the internet in its naturally incomprehensible multi-dimensional hyper-space, but rather some arbitrarily filtered view of what we might find useful to see right now.

Indeed, the goal of Augmented Perception is not, as Will Wright correctly pointed out, to dump more information on our already-overloaded senses. It’s not to take Times Square and square it again with even more sensory/information overload, as in “to everything there is a thought bubble, turn, turn, turn.”

Augmented Perception is there to help us understand the world more fully, more readily, and more profoundly.

“Augmented Reality can make us more aware of our immediate environment rather than distract us from it…there is value in proximity information…tap into the collective commune memory…situational awareness is what I am dreaming of”

This is increasingly being called Ambient Computing and/or Situational Awareness. Visual representations are just one way to achieve these goals. I’m interested in many others. And indeed, one may need to remove information, to dim parts of reality, blur the backgrounds and generally reduce ambient noise, in order to actually enhance the signal, i.e. improve people’s perception of that which is important to them right now.

The AR marketers didn’t tell you that either.

4. Augmented Interaction

The conceptual “dual” of Augmented Perception is found in our ability to interact with the real world in enhanced ways. That’s another flavor of mediated reality, focused on intent and action. One’s thoughts may jump straight to the most commonly wished-for superpowers: flying, time travel, super strength. For some, like my wife, it’s invisibility, but that’s beyond simple explanations. Those are all nice to want, but there’s something much more profound coming right around the corner, in the form of apps like Google Now, Facebook mobile, and more.

Just think of what it means to drive over a pothole, wish it was fixed, and a road crew suddenly shows up to fix it. There’s an app for that, cliche complete.

This is a kind of “loop-closing” narrative that reduces the friction from thought to action, bringing us that much closer to god-like powers of thinking something and seeing it happen in front of us. Well, perhaps potholes and city utilities are not the best example of “instant action,” but every little bit helps.

Just think of what it means to see a product you like, then “like” or “want” it via some pinned photo or phone button, and then magically find an ad or “buy now” button appear right next to it. Near-instant gratification and wish fulfillment is what that is. That’s certainly high on the marketer’s own wish list, if not yours. The holy grail of marketing requires knowing what you all might want at any given time and offering it to you right when it’s most convenient. That’s not an advertising business model as much as it is a credit card business model — guaranteed revenue from every transaction. This is where Google is heading, btw.

[Though to be totally honest, the subsequent invoice on your impulse buys does kind of ruin the supernatural aspect of this superpower, like buying yourself a birthday surprise. Oh, look! Thanks! I love me...]

What will happen when we have Augmented Perception and Augmented Interaction bound together for a truly symmetric read/write mediated reality? We’ll be able to create Second Life in the Real World, for starters. Why would we (perhaps not) want to? That’s a subject for a whole other post.

5. AR is Dead (long live AR)

We’re at the cusp of amazing things like Google Glass, upcoming Vuzix holographic lenses, and Oakley AR ski goggles (btw, Oakley is owned by Luxottica — along with almost everything else — forming a natural monopoly on most of your vision needs. So you’d better be interested in whatever they do in this space).

And who can say what other companies are lurking on the horizon?

How on earth could we say AR is already dead? It’s clearly and inescapably about to explode into a prismatic rainbow of possibilities.

Yes. This is all true. But as with Virtual Reality, the future of AR is not wallowing in the history of great ideas that might have been. The history of AR will be written by whomever makes the most successful products, the killer hardware and software experiences, picked up by millions, if not billions of people.

It’ll be called whatever they decide to call it. But it won’t be “AR.”

Because they’ll have a huge incentive to distance themselves from all the crappy experiences that came before. “We’re not AR,” they’ll say, “That’s old. That was all about cell phones and jittery graphics, about fiducial markers and 3D chairs floating in space. What we’ve done is unleash interactive magic on the world, increase human potential, and usher in a world of augmented and yet totally natural human perception and interaction.”

Augmented Perception and Interaction. That’s not half bad. API would be a good acronym, if it wasn’t already taken. But whatever we call it, for better or worse, it will be your computer-mediated interface to the world.


Panoramas on Bing.com

I’m proud to report that a feature my team originally built as part of Read/Write World back in the fall of 2011 is going live on Bing.com to display panoramas from Photosynth (and, in this case especially, from 360 Cities).

Kudos to Bing and the Photosynth team for making it all come together.

FWIW, I led the small team and architected the original technology that could render high-detail 3D surfaces, including panoramas, across most browsers (via WebGL, Canvas, Silverlight and even Flash if needed) with the same high-level Javascript running everywhere. The hard work was done primarily by Peter Sibley, Hansong Zhang, and Adam Mitchell. Kudos to the PM, David Gedye, for tirelessly working to get this technology on the Bing.com homepage. And special thanks to Blaise for supporting it and the whole sprawling vision.
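For the curious, the general pattern looked roughly like the sketch below. This is purely illustrative (not the actual Read/Write World code), and hasSilverlight() and hasFlash() are hypothetical plugin probes, but it shows the idea of picking the best available backend once and keeping one high-level API on top:

```js
// Illustrative only: probe for the best available rendering backend, then
// expose one high-level API so the same application Javascript runs everywhere.
function createRenderer(canvas, hasSilverlight, hasFlash) {
  var backend = null;
  if (canvas.getContext && canvas.getContext('webgl')) {
    backend = 'webgl';
  } else if (canvas.getContext && canvas.getContext('2d')) {
    backend = 'canvas2d';
  } else if (hasSilverlight()) {
    backend = 'silverlight';
  } else if (hasFlash()) {
    backend = 'flash';
  }
  if (!backend) {
    throw new Error('No supported rendering backend');
  }

  return {
    name: backend,
    // Same high-level call regardless of backend; only the code behind it differs.
    drawPanorama: function (panoramaTiles, viewParams) {
      // ...dispatch to backend-specific drawing code here...
    }
  };
}
```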

bing photosynth

Of course, this is just a very small part of what Read/Write World means. It represented just a portion of our team and time, but a useful bit of work nonetheless.


LA Driving

If you wanted to know how to drive a space shuttle through Los Angeles, here’s the best video I’ve come across. Short answer: pre-visualize the entire route with precise LIDAR mapping and then mow down everything in your path. It’s friggin laser beams!

[yes, I know the laser didn't do any actual cutting -- but if you imagine the sensor (LIDAR) and effector (chainsaw, socket wrench, bulldozer) as a single system, it's conceptually a giant shuttle-shaped 3D lawn mower -- like the ultimate road rage in meticulous premeditated detail.]

With all the attention and admiration this got, one has to wonder who will be the first celebrity to drive up to the Oscars or Grammy awards in their very own space shuttle. It cost only $10M for a 12-mile drive, so it’s well within reach for some.


Disney VR

It’s been about 18 1/2 years since I joined Disney for a super-secret VR project in Glendale.

The first experience we built simulated Aladdin’s Magic Carpet Ride in first person at 60 FPS, as if you were flying the magic carpet above the streets of Agrabah. The giant system that ran it was a three-headed (think SLI/Crossfire) SGI graphics supercomputer called the RealityEngine2, big as a refrigerator and producing ten times the heat.

We knew half a million dollars per user wasn’t exactly a scalable for-profit experience. Today, an Xbox is about 1-2x as powerful for 2,000 times less money. A Tegra3 chipset is in the same ballpark, but for about 25,000 times less. So the smart idea was to buy our way into the future and get a big head start on solving VR, on only a $20M budget!
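A quick back-of-the-envelope check of those ratios, using my own rough price assumptions rather than anything official:

```js
// Prices are rough assumptions: ~$500k per RealityEngine2 seat in 1994,
// ~$250 for an Xbox-class console, ~$20 for a Tegra3-class chipset.
var re2PerSeat = 500000;
var xboxPrice  = 250;
var tegraPrice = 20;

console.log(re2PerSeat / xboxPrice);  // 2000  -> "2,000 times less money"
console.log(re2PerSeat / tegraPrice); // 25000 -> "about 25,000 times less"
```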

  • BTW, apparently, in 1994, we could not take good screenshots of anything, either that or bit rot. But here are some photos I recently recovered. It kills me that I had an actual HD video of the ride but I brought it to a gig and lost it — curse you Zombie! 

The bulky HMD was the best I’d ever seen (note: my Oculus is still on order, too soon to compare) and the experience rivaled what you could get even today, except it was only meant to last 5 minutes vs. 100+ hours of Fallout 2 I’m still finishing. We put 100,000 people through this experience at Epcot Center in Florida.

I was there for the first 2-3 months, tuning the ride on-site. I actually had to work in a glass-walled room while thousands of civilians filed by — “a programmer in his natural environment” the sign could have said, “please do not tap glass.”

We learned quickly that people were cognitively overloaded in VR and they tended to simply blow right past the great interactive 3D animated characters we built for them, like drunken drivers. So we invented a system called “weenie wires” that made the ride a cross between something totally free-form and a dark-ride where you’re on rails. You could always choose where to go, but we’d nudge you in the right direction and help you have more fun.
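I won’t reproduce the real thing here, but the flavor of it is something like the toy sketch below (names and math are mine, not the original ride code): blend the rider’s free-form steering with a gentle pull toward the next point of interest, so you can always go where you want but tend to drift toward the good stuff.

```js
// nudgeStrength: 0 = fully free-form, 1 = fully on rails.
function steer(userDir, carpetPos, nextWaypoint, nudgeStrength) {
  var toWaypoint = normalize({
    x: nextWaypoint.x - carpetPos.x,
    y: nextWaypoint.y - carpetPos.y,
    z: nextWaypoint.z - carpetPos.z
  });
  // Weighted blend of what the rider asked for and where we'd like them to go.
  return normalize({
    x: (1 - nudgeStrength) * userDir.x + nudgeStrength * toWaypoint.x,
    y: (1 - nudgeStrength) * userDir.y + nudgeStrength * toWaypoint.y,
    z: (1 - nudgeStrength) * userDir.z + nudgeStrength * toWaypoint.z
  });
}

function normalize(v) {
  var len = Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z) || 1;
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}
```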

I’m reminded of this because my friend JWalt, who did so much work on the Weenie Wires, went on to win an Academy Award for his work in previsualizing movies. He now tours the country doing a one-man interactive show. He and I also built what might be the first 60fps full-body motion capture system for puppeteering “skeletally retargeted” 3D characters in real-time. Kinect uses a nice improvement on the idea — and it finally gets rid of those damn wires.

Speaking of a Who’s Who of Disney VR, I have fond memories of the whole crew: JWalt as mentioned above; Scott Watson, who is now CTO of Disney R&D; Jon Snoddy, whom I rejoined later in life for a startup called BigStage (making digital 3D copies of users) and who is now back at Disney along with another cool guy, Eric Rice. The always brilliant Randy Pausch did research in VR using our platform (if you haven’t watched his “last lecture,” stop everything and please do so now). I managed Jesse Schell for a brief time, when he was just a junior show programmer fresh out of CMU, and look at what amazing things he’s gone on to do without even asking if it was okay. Aaron Pulkka was in the same boat as Jesse and has also done extremely well.

Many of the other folks from that time went on to become VPs of Disney or other companies. It was quite an amazing bunch of people to work with.

Anyway, after Aladdin VR, I went over to the main Disney R&D building and continued to hack away at new tech ideas.

The first thing I did was take a cool motion base they had built and make it fully interactive. This thing used six air-cells that we programmed via MIDI to simulate an undulating water surface. We put a raft on it, gave everyone an oar (with 6DOF tracking — think Kinect, but with wires) and even modulated the exhaust to blow wind and squirt water in people’s faces on command — yes, I got to douse Michael Eisner. This became the Virtual Jungle Cruise ride in DisneyQuest, when it was still around.
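If you’re wondering what “programmed via MIDI” means in practice, a rough sketch is below. The cell layout, controller numbers, and the sendControlChange function are all my assumptions, not the actual ride code; the point is just that each cell gets a phase-shifted wave so the raft pitches and rolls like it’s riding a swell.

```js
var CELL_CONTROLLERS = [20, 21, 22, 23, 24, 25]; // one MIDI CC per air cell (assumed)

function updateWaterSurface(timeSec, sendControlChange) {
  for (var i = 0; i < CELL_CONTROLLERS.length; i++) {
    var phase = (i / CELL_CONTROLLERS.length) * 2 * Math.PI;      // spread cells around the wave
    var height = 0.5 + 0.5 * Math.sin(2 * Math.PI * 0.3 * timeSec + phase); // 0..1
    sendControlChange(CELL_CONTROLLERS[i], Math.round(height * 127)); // MIDI CC range is 0..127
  }
}
```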

I was also fascinated with ways to do VR/AR without an HMD. So we worked hard on projected virtual environments and audio, trying to put pixels and sounds everywhere we looked. The most successful was the 6-sided CAVE environment that showed up in part in the Hercules 3D ride. The least successful, from an “internal politics” POV, was the multi-bright projector (shown at right).

True story: our top level VP of R&D had a pet project to build a HD movie projector pumping out so much light that he could use black velvet as a screen, just to get the purest blacks imaginable. It cost $100k or more and harnessed what I think was a singularity for power. I saw it and quickly realized I could tile 6 small off-the-shelf projectors at 1/2 the price and twice the brightness. I showed it to my boss, he liked it, and told me to never show it again. Ever. This may have been the first time I realized that fitting into a large company might take some getting used to.

Mostly, I was very interested in what we’d today call NUI. The image at left above depicts a user holding a simple stick or baton that we tracked in 6 degrees of freedom to navigate/fly through an environment. Any object worked fine, even a hat. The CAVE above was meant to allow us to point, using a wand, at any 3D object even in stereo 3D. The oars in the rowing ride were another example of multi-participant 3D spatial controls. Just think how hard it is for 4 people to drive one car. That’s the key problem to solve. Oars work very well, since it’s just like in the real world. You have to communicate or you go in circles.
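That dynamic is easy to see in toy form (my own math, not the DisneyQuest code): the raft only goes straight when the paddlers agree, and spins when they don’t, so the group has to talk.

```js
function raftMotion(leftPaddleForce, rightPaddleForce) {
  return {
    forward: (leftPaddleForce + rightPaddleForce) / 2, // agreement moves you ahead
    spin: rightPaddleForce - leftPaddleForce           // disagreement turns you
  };
}

console.log(raftMotion(1, 1)); // { forward: 1, spin: 0 }    -> straight ahead
console.log(raftMotion(1, 0)); // { forward: 0.5, spin: -1 } -> mostly circles
```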

One interesting outcome from that particular line of research was a game called “Crazy Taxi.” Strangely enough, my research into the idea of a whole family simultaneously shouting directions (“go left! look out!”) at a virtual taxi driver was the initial seed idea for a “Wild Taxi” ride (think: “Mr. Toad’s Wild Ride” meets the famous Blues Brothers car chase) that we showed to Sega. Then Sega (taking one of our creative directors) went off and built the arcade version for a single player using a joystick and produced billions of dollars of profit and several lawsuits, the latter of which I’ve previously ridiculed.

I’d still love to see the full-size car mock-up of the original Disney Ride. I’m sure someone will do this for Kinect at some point.

All in all, my 4 years at Disney were extremely fun and rewarding. A+, would do it again.
