TL;DR: To succeed commercially, AR devices must become an object of desire. If a device gives one person power over 1,000 others but returns no benefit to those 1,000, you’ve generated animosity instead of desire.
Consider: if Google Glass were able to make everyone it photographed famous (in a good way), then people would be lining up to simply be around people wearing Glass, the king-maker of a new generation. But in reality, making people look good is hard. It takes makeup, lighting, writers, directors, composers, fashion designers, and cinematographers to make actors and politicians seem more interesting than the rest of us schlumps. And CEOs have whole companies behind them to make them look good. The rest of us are so un-photogenic and mundane that we look like rejects from a casting call for Dumb and Dumber meets The Elephant Man. Or at least we feel that way most of the time, which is what counts.
On the issue of privacy, history has shown that people are willing to give up their privacy if they receive a real benefit. Yes, Google, you can have my email and my contact list and my location history. No, Facebook, you can’t. The shift to what some of us call “LifeStreaming”, and its consumption-side counterpart “LifeSurfing” (i.e., viewing other people’s lifestreams), is happening already. But it’s a generational shift, as always. Eventually, a good portion of the population will be broadcasting their lives. Some never will.
The benefit to society of all that sharing will be an increased ability to relate to other people, to see the world through different eyes. The carrot for the people broadcasting their lives, however, will be a level of narcissistic attention that promises to make them feel more important, more loved. Unfortunately, it is a shallow and unattainable kind of love. The narcissistic urge is one that can never be filled, which is why some people try so hard…
The benefit to a company like Google or Facebook is obvious. They need to crowd-source the world. And their main benefit is connecting people to interesting content. So the more of a mess it all is, the more you need them to intermediate and filter. And the more they can learn about you in the process.
The lesson, then, is that companies are in it for their own ends, but they have to focus first on providing ample value to their customers and (often) the people around them. For AR devices, which are inherently “first person” solo experiences, this is doubly hard to pull off. Kudos to Google for trying, but there is obviously ample room for improvement.
It’s amazing to see everyone getting involved in VR development. This used to be relegated to a small set of academics and foolhardy startup veterans like me.
I especially can’t wait to try the Vive in conjunction with what they’re here calling environment mapping. It’s the closest thing we have to high quality AR right now.
I have a few nits with the article though, based on some 20 years’ experience with VR. Most of these made it into the Metaverse Roadmap doc.
Here’s an old glossary of VR that summed up the research as of 10 years ago, when I wrote it. In general, I think folks are overloading “presence” too much and only just beginning to grok what else is missing.
What’s really going on is a virtuous cycle. It starts with Presence and requires Interactivity, so that people can affect the world. The extent to which the world is mutated by your interaction, or by your mere presence, is called Reflectivity. That includes simply seeing your own body and seeing it move the way you imagined, or vice versa (also called proprioception, which is greatly hindered by wearing opaque goggles). If we get this far, we can develop a kind of Resonance (a self-reinforcing signal) that builds with each positive turn of the cycle. OTOH, every time the cycle is broken, like when you put your hand through a wall, or something doesn’t move when you expect it to, the resonance degrades and the effect erodes, sometimes severely.
When it works, we get a two-way synchronization between the entirely computational virtual world inside the computer and the entirely mental virtual world in your brain. This mimics what happens naturally, when we model the real world and our models are validated or refined by each interaction.
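The cycle above can be sketched as a toy feedback model. To be clear, this is my own illustrative sketch, not anything from the glossary or the roadmap: the class name, the gain, and the break penalty are all made-up assumptions chosen just to show resonance building slowly with confirmed interactions and collapsing quickly when the cycle breaks.

```python
# Toy sketch of the Presence -> Interactivity -> Reflectivity -> Resonance
# cycle. All names and numbers here are illustrative assumptions.

class ResonanceModel:
    def __init__(self, gain=0.1, break_penalty=0.3):
        self.resonance = 0.0                # 0 = no immersion, 1 = fully "there"
        self.gain = gain                    # boost per confirmed interaction
        self.break_penalty = break_penalty  # cost of one broken expectation

    def interact(self, world_reflected_it: bool) -> float:
        """One turn of the cycle: the user acts; did the world visibly respond?"""
        if world_reflected_it:
            # Reflectivity confirmed the mental model: resonance builds,
            # with diminishing returns as it approaches 1.0.
            self.resonance += self.gain * (1.0 - self.resonance)
        else:
            # A hand through a wall, an object that didn't move: the cycle
            # breaks, and a single break erodes much of the accumulated effect.
            self.resonance = max(0.0, self.resonance - self.break_penalty)
        return self.resonance

model = ResonanceModel()
for _ in range(20):        # twenty confirmed interactions build resonance
    model.interact(True)
built = model.resonance
model.interact(False)      # one broken expectation
print(f"built up: {built:.2f}, after one break: {model.resonance:.2f}")
```

The asymmetry between the small per-interaction gain and the large break penalty is the point: it takes many good turns of the cycle to build the effect, and only one or two broken ones to severely erode it.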
It’s not much more complicated than that.
But for those working on latency, resolution, etc., that’s only the first 25% of the job. Physical interaction is also key, eventually including haptics. But so is making the virtual world mutable enough for participants to impact it, so their interactions are credible across the low- and high-level systems in our brains. Sticking to how things work in nature is always a good starting place.
It’s not often one gets to work on a product with a billion anything. I’m extremely fortunate to have had the opportunity to design/invent/iterate in the earliest days of at least three products in that league, and I’ve learned from others who have had more.
Here, I’m going to focus on Google Earth, which has well over a billion users, thanks to Google’s scale. I’ll pick examples from my experience to help illustrate the lessons.
I am proud and excited that Microsoft has finally announced a project I started working on in 2010, before anyone called it “Fortaleza” or “720” or “HoloLens”. When I started architecting AR systems and designing the very first experiences around what we first called “Screen Zero”, I didn’t care about credit (lucky me, because I wouldn’t get any). I just wanted to help change the world.
And so it will…
One of the most fortuitous aspects of this is that I now get to work (at Amazon) with two of my original teammates from the early Screen Zero days: Rudy and Sheridan. Andy is still at Microsoft, maybe Katie too. But there are many more veterans working with us now. Amazing people make amazing things!
Currently, the most common testing technique (for men) is what’s known as penile plethysmography. This involves placing a ring-style sensor around the offender’s penis, then measuring any changes in its circumference as they’re subjected to a variety of visual or auditory stimuli. One problem with this approach is that subjects can skew the results by diverting their eyes from the images.
Holy Clockwork Orange, Batman. I can understand the wish to determine if sex offenders are likely to offend again, when determining their parole. But what we’re talking about here is effectively using VR to enable unavoidable thought crime, and entrapment thereof. Look away from the virtual child and you can still be guilty, because what we want is not what you did (legally or otherwise) but what you will do, aka your latent intent, even if you don’t consciously know it.
I had a lot of fun helping to judge several dozen lively entries in the SF VR Hackathon this Sunday. I guess I’m at a point in my career where I’m qualified to judge but don’t have enough time to actually create fun projects (outside of work).
For “only” three days of work, there were some amazing entries. A number of them won prizes. I wish we’d had more categories and prizes to give out to some of the other notable efforts, like a mind-bending recursive/immersive zombie game from one of the sponsor teams and a really interesting virtual world built entirely in a pixel shader.
The grand prize went to a Ghostbusters riff that did an amazing job solving (for narrow use cases) user input in VR, which I think is still one of the biggest unsolved challenges. Here’s some video of their experience. The controller had great haptic feedback, and the weak cardboard backbone connecting the two pieces actually added more value than it took away.
Nicely done, but I can’t figure out how to embed properly:
The only real conclusion one can draw from this article is that marketers are really excited about VR’s ability to attract attention. Here are 7 reasons to think harder:
1. “Movie theaters” full of HMDs are unlikely (even ignoring hygiene & robustness issues). The economics don’t make sense for the equivalent of having 300 people watch the same thing on one expensive big screen.
Even factoring in the cost of a new PC or console, we’d more likely see the equivalent of “internet/game cafes” for those who can’t afford their own VR setup at home (plus rentable airline equivalents), more as a niche and trailing edge.
2. Hollywood + VR movies already exist. They’re called games. Now, game developers generally put substantial effort into making their cinematic intros and cut scenes. But even with higher production values, most people watch these a few times and then skip right to the gameplay. The gameplay must be better than the intro movie or the investment will only succeed on YouTube, if at all.
Take-away: interactivity is key.
3. Physical interactivity in VR is not yet ready for prime time. The reason: the closer you get to the human body when sensing its movement, the more proprioceptive skill we have there and the less tolerant we are of noise and other errors.
So in the short-term, the level of interactivity will range from “almost none” to the equivalent of a gaming controller. Designers have to work around those limitations. A good example is to skip touch entirely and use voice to control things.
4. In UX research, we found that people’s levels of comprehension of things like story and character in VR were very poor, probably due to information overload and not knowing where to look for cues. Movies solve this by leading the horse to the water, so to speak, with expert cinematography and more. So the chances of a subtle cinematic narrative are slim until we develop those muscles in VR over many years.
Think more “TV Soap Opera” than “Gosford Park.” And in terms of Presence, think more “Saving Private Ryan” than “The Man from Earth.”
5. Movies are an inherently social experience, esp. going to the theater (which we said isn’t helped by VR). Perfect, you say, because Face/Rift is a social network. Actually, FB today is more a social experience of last resort. It is most social when you don’t have a better way to interact. Just imagine a group of six friends hanging out, noses down, all browsing FB on their so-called smartphones. I know it happens a lot, and they certainly think they’re being social, but who believes it? It’s at its best when it’s connecting people who won’t otherwise see each other.
6. VR movies will initially be more of a solo experience that we can talk about and retroactively construct the social element, like talking about the latest episode of “Lost” or “Game of Thrones” the next day. We can feel like we watched them together for some “social backfill.” I’m guessing that the more presence we feel in the VR experience, the harder it is to later backfill in those missing friends, but the more we’ll want to try, leading to more of the feeling that we’re losing real human connections by going so virtual. Prediction.
7. Someone will therefore add avatars to these immersive VR movies to solve this. Good thinking. If captured with high fidelity, this will be a little closer to the quality of being together in person, and there’s always the cool new immersive milieu to explore together.
But here’s the dilemma. If your movie is interactive, you have to solve the holy grail of immersive interactive 3D storytelling, which the fictional Holodeck didn’t even get right. Tony n’ Tina’s Wedding (the interactive play) is probably our best model, but that’s all about the actors making it work.
If your movie is not so interactive but you still add friends and family to the scene, the greater degree of presence ironically makes it more awkward to see them unless they’re transformed into the story, a stark reminder that you’re not actually there. It’d be good for Jurassic time travel, but not so great for Star Wars, where seeing my mother standing next to Darth Vader would change the experience a bit.
Not surprisingly, VR will likely work better for participants who are more physically remote than in the same room — exactly like FB does today. It adds to social interactions where distance makes it harder, but caps it where real proximity would make it easier. It’s no wonder FB likes this view of the future.
For Hollywood, it’s about the business of monetizing attention on one level, and the art of storytelling on another. On second thought, maybe they’re not that different after all.
I just got outed on Techcrunch. So I’ll come clean.
I’ve recently (April 2014) rejoined Amazon as a manager and developer on the Prime Air team.
We’ve set up a new team in downtown SF to focus on some interesting aspects of the project. We’re growing rapidly. If you’re interested in the project and love the Bay Area, feel free to reach out or apply directly via the Amazon website.
So why did I re-join Amazon?
The simplest answer is that I really admire this team, this project, and this company. I’m not one to gush or blush — if anything I excel at finding fault. But this job is really fun. We have trained professionals who love to do the stuff I don’t.
The project doesn’t need any more hype from me. JeffB already talked about it on 60 Minutes. You may have heard me talk about various superpowers in another context… This is a similar level of game-changer IMO.
Speaking personally, this project meets a number of important requirements for me:
First, it needs to be fairly green-field. I did early AR/VR in the 90s. We built an entire Earth in 2000. I worked on massive multiplayer worlds and avatars after that. I moved on to robotic parachutes in 2004, designed geo-social-mobile apps in 2008, then telepresence and more stuff I can’t talk about after that.
I like to learn fast, often by making mistakes, with a whole lot of guessing and path-finding until the way is clear. By the time 100,000 people are working on something, there are up to 100,000 people who are potentially way smarter than me, plus ample documentation on the right and wrong ways to do anything.
Second, I want to work on projects that use new technology in the most positive ways, sometimes as an antidote to the other negative ones out there. I’ve left companies on that principle alone…
I’ve both given and received some criticism over this – even been called a “hippie.” But I didn’t inhale that sort of sentiment. I just moved on. At the end of the day, I always try to do the right thing and help people wherever I can.
That’s based on what I like to think of as “principles.” Many of the reasons I like Amazon as a company are due to its principles.
At Amazon, I saw these principles come up almost every day on the job and I was suitably impressed. Naturally, they’re used as a kind of lens for job candidates, esp. as a way to efficiently discuss their leadership skills. But these concepts are used and reinforced almost daily for things like professional feedback and taking responsibility, above and beyond our job specs.
I’ve seen senior leaders uphold the “vocally self-critical” principle in meetings, where at other companies such behavior might be called a “career-limiting” move. This principle alone meant that even in my earliest interviews, I could be blunt about learning from my past mistakes without worrying if I should say things like “my biggest fault is that I work too hard.” What a relief.
The first Amazon value on the list is, of course, “customer obsession.” There’s no other value that rises above this, not expedience or profit. And in my opinion it shows.
Companies that stick to their principles tend to be consistent and well-trusted. Having clear, understandable principles, reinforcing them, and even working through the cases where they seem to be in internal conflict leads to better decisions overall, and helps avoid really bad ones.
That’s especially true when you don’t have the luxury of seeing the full repercussions of your choices in advance. These principles are there for when the choices are hard or unclear, not just when they’re easy.
I believe that companies that get this, and especially those that put their customers first, are the ones that will succeed.
I have to admit, even with 25+ years’ experience with computer graphics, on first viewing I thought The Lego Movie was mostly done with stop-motion photography.
I figured maybe 80% physical and 20% virtual. Turns out it was closer to 99% CGI, with some real Legos thrown in for good measure. Other than the live-action scenes, I couldn’t tell you where the real Legos sat.
There were some things, like the water, explosions and more, that looked way too procedural to be done by hand. But still, the rendering, shading, and animation were so close to perfect, so physically correct down to subsurface scattering and extreme depth of field, that it was almost impossible to tell.
Amazing job. And especially impressive given how well they could tell the story without relying on the usual tricks of animation and CG, staying true to only what real Legos can do.
The real tip-off about the CGI was in the lighting, which allowed certain Legos to emit light, or light to come from no actual source. That would be pretty hard to do in reality without a really complex effects pipeline on top.
Here’s a longer video that explains how it was done: