Archive for category Uncategorized
Here are the slides from my AWE 2015 talk and a link to the video on YouTube. See below for the original speech in prose form. Thanks again to Ori and Dave for inviting me. And I was totally humbled to be sharing the stage with some of my heroes that day.
Funny story. I’d practiced the whole speech for a week or more. I was totally relaxed backstage. But I somehow got nervous in the moment and the speech escaped my brain about 20 seconds in. Embarrassing!
Without a teleprompter or any notes, I had to wing the whole thing. So a big thank you to Travis and the A/V team for giving me a new clicker to buy time and cover for my fumble. Totally cool move.
Let me know which version you like better, “as written” or “as delivered.”
Lesson: next time I’m going to just do it more spontaneously, since that’s how it may wind up anyway.
The Original Speech:
 In the last 23 years, I’ve worked for some really big companies and some really small ones. I’m not here to represent any of them. I’m here with the simple job title of Person. And I’m here to hopefully inspire some of you to take action, and others to at least understand what needs to be done.
We’re all here today because we recognize the game-changing potential of AR/VR. This technology brings magic into the world. It gives us superpowers. How can that not be game changing? But this new magic is so powerful, and the potential is so big, that some of the biggest companies are already vying for control.
 So what happens when big companies – with a variety of business models – bring what we might call “big magic” into the world?
I was a little worried about using such bold words until I heard David Brin talk so eloquently this morning. I’ll sum up. The danger zone of any big new technology is when it’s still unevenly distributed. We saw this from fire to radio to books to TNT. There is no such thing as a purely good technology. It’s all in how you use it.
The good news is we get to decide how this goes down. We’re the creators, but also the customers. We can shape the world we want.
 I gave a talk here two years ago equating AR/VR to a host of new human superpowers. I’m pleased to see the theme of the conference this year.
That talk is on-line if you’re interested. But even then, these ideas had been percolating for a long time and I was just dying to talk about it.
 In 2010, I’d joined a secret project inside Microsoft to reboot the next-gen Xbox…
Leadership had concluded that cramming 10x more of everything wasn’t enough. They wanted something fundamentally more game changing, something where they could spend, say, a billion dollars to buy a strong lead. They wanted something that would normally scare them (and everyone else) from even trying.
 I had a few ideas…
I’ve been very lucky in my career to work with amazing people on amazing opportunities.
I got to work on Disney’s $20M Aladdin VR ride, helped craft Google Earth and Second Life. I was recruited to Microsoft in 2008 to help build social AR-like experiences into Bing. We called the project “First Life.” Alas, some folks didn’t think mobile was going to be a big deal and it stalled. So I switched tracks to work on communications, social avatars, and then interactive video holography.
That led me to join Xbox Incubations, with perfect timing, to propose and build the very first HoloLens prototypes and concept definitions, and invent about 20 new ideas in the first six months.
 Just to clarify:
TP is Telepresence. Holographic toilet paper == worst idea ever. The use (some might say abuse) of the word Hologram came from popular fiction, like Star Wars.
Hundreds, if not thousands of people worked on HoloLens after me, solving some very hard problems. Many of the original team have moved on. They ALL deserve credit.
 So AR is really coming. It’s only taken 47 years since Ivan Sutherland built the first prototype.
 But all of a sudden VR is exploding again. Yes. I want my holodeck too. But since my Disney VR days, I’ve come to realize that early VR is going to be mostly “Dark Rides.” Think Pirates of the Caribbean. You’ll sit in a chair and experience an exhilarating, magical, evocative but not-very-relevant journey.
On the whole, VR is:
✓ High Presence and Immersion
✓ Low Relevance to Your Daily Life
Not that there’s anything wrong with a little escapism, from time to time.
 The fundamental difference between AR and VR is not hardware. Same tech will eventually do both easily. The fundamental difference is that AR builds on Context. In other words, it’s about You and Your World. And context goes to one kind of monetization.
Mixed Reality, as a reminder, is that whole spectrum from AR to VR. You could look at it as a spectrum of reality vs. fantasy, but it’s more instructive to see it as a “Spectrum of Relevance.”
 Why are highly relevant experiences worth an order of magnitude more?
1) Because we spend so much more time and money in the real world
2) Because we care so much more about the real world
All good so far. AR is a goldmine of reality. VR is a goldmine of creativity.
 But, Beware the Dark Side
 You knew there had to be a dark side somewhere, right?
Fact: the more you can be swayed by a given ad, the more that ad is worth. Companies want to track your desires, your purchasing intent, and your ultimate transactions to (as they say) “close the loop.” The world is moving from analyzing your clickstreams (on the web), to analyzing your communication-streams (as in chat, voice, email), and eventually to studying your thought-streams.
How do they obtain your thought streams and mine your personality without literally reading your mind?
It’s not like people would ever treat other people like lab rats…
 Oops. And Facebook is not alone, not by a long shot.
Note: scientific experiments are often very positive. They rely on this thing called “informed consent.”
And no, EULAs and privacy notices don’t count. Let’s stop pretending people read those. Informed consent means informed consent.
 In 1995, I had the honor of working with Dr. Randy Pausch at Disney Imagineering to help study, with informed consent, how people experienced VR… We continuously recorded people’s gaze vectors – hundreds of thousands of people — as they flew their magic carpets through the world of Aladdin, to study which parts of our storytelling worked best.
BTW, we found that while men averaged a head angle of “straight ahead,” women, on the whole, looked 15 degrees to the left. What?
We figured out that the motorcycle-like seat of our physical VR rig forced people wearing skirts to sit side-saddle. So, statistically speaking and unintentionally, the data told us if you were wearing a skirt.
 More recently, VR helped reveal dangerous sex offenders before their release, even where the offender believes he’s been cured. They were shown risky scenes. I won’t elaborate on how their responses were measured…
But with coming face capture, eye tracking, EEGs, muscle sensors, skin conduction, pulse and more built into new HMDs, imagine what kind of latent inclinations can be teased out of you. Companies like Facebook and Google, betting on VR, will be able to show you something and tell instantly how you feel about it, no Like Button necessary.
 Did you look at the woman in the red dress? We know you did.
The thing about the Matrix is: the whole humans as batteries trope is kind of silly. But if you imagine people as wallets and credit cards connected to the internet, that seems to be exactly how some companies look at their customers.
But for the record, I don’t think we’re in danger of being grown in vats anytime soon.
 Tobii is a leader in using eye tracking to help understand user behavior.
The picture on the left is of a woman wearing glasses that track her gaze as she shops. The person with the tablet is studying her behavior.
Another study on the upper right tracked men and women’s gaze over various photos. Conclusion: men have no idea what they’re staring at most of the time. These are involuntary reactions. Stimulus and response.
To the extent AR or other devices track what we see and do, companies will be able to monitor our sensory inputs and emotions as we pursue our day. The thing about AR is it now gives us a compelling reason to wear it all day long.
 The point of all this is not to get scared, feel powerless and withdraw.
The point is that we have control. We always did.
Nothing in the world is free. You’re going to pay for stuff one way or another.
Companies that sell things can and should be the most customer-focused, protecting privacy and curbing abuses. That’s in their core business interest.
Companies that sell user data, sell ads, sell you, well, they have every incentive to keep pushing the envelope on this front and keep you ignorant of it.
It’s all about their business models, not you personally. You can steer this by simply choosing who you do business with.
 Case in point, Apple lately has one of the better takes on user privacy, responding to latent fears over just how much data they’re collecting. They’re a product company, and even their iAd product is more privacy-friendly than most.
But can Apple bring it home? The next thing I want to hear from Apple is: “You OWN your data. You made it. It’s about you. Can we help put it to work for you, please?”
HealthKit is the closest thing to that so far, with opt-in studies. And it’s great to see them trying to figure this out.
I’d also give Cortana kudos for the notebook feature, letting you easily see and edit what Microsoft knows about you. That comes from consumer demand.
 Recapping so far:
Big Companies are bringing “Big Magic” to the world
Big Magic can either Liberate or Enslave us
We get to pick. Here’s how…
 Basically, we need to build the AR equivalent of the World Wide Web. And I don’t mean just boxes in space.
You own your content, your little part of the graph.
You create the world you want to live in.
All of these statements may be true, to some extent. But they don’t have to be true. We’ve also let developers of web technologies largely off the hook. We can demand parity between browsers and native experiences. Apple and Microsoft have for years let their browsers, especially on mobile, lag the native side.
Now, it’s true that having a free and open web today doesn’t guarantee privacy or lack of exploitation. Just look at web bugs and cookies and Facebook. And security is the primary reason cited for the lack of features in web tech.
But having a free and open web does at least make it very hard for any one big company (or government) to eliminate your choice unilaterally. You get more options the more open the field is. And you get more voice. That’s the point. Just look at the fight over net neutrality. Could that have happened if AT&T provided everyone’s internet service? No way.
So consider what made the WWW a winner. Why didn’t the web take off as a series of native “apps” and walled gardens, when those are clearly safer and more capable?
✓ Content is device independent
✓ Content is dynamically and neutrally served
✓ Content is viewable, copyable, mashable
 Same for the next phase of evolution.
 Content is going to need to adapt based on the chosen device, its resolution, perf, field of view, depth of field. And for AR it’s going to also need to adapt to real-world location, people and activity.
Baking this all into native code and statically packaged data is problematic. It has to be adaptable, reactive at its core.
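To make that concrete, here’s a minimal, purely illustrative sketch, in Python with made-up field names (this reflects no real standard), of content that declares multiple representations and lets the viewing device pick the right one at view time instead of baking in one static asset:

```python
# Hypothetical sketch: AR content declares constraints and representations
# instead of shipping one fixed, statically packaged asset.
# All field names here are invented for this example.

def adapt_content(content, device):
    """Pick the richest representation the device can afford."""
    candidates = [
        rep for rep in content["representations"]
        if rep["min_fov_deg"] <= device["fov_deg"]
        and rep["tri_count"] <= device["tri_budget"]
    ]
    if not candidates:
        # Fall back to the cheapest representation available.
        return min(content["representations"], key=lambda r: r["tri_count"])
    return max(candidates, key=lambda r: r["tri_count"])

chair = {
    "representations": [
        {"name": "billboard", "min_fov_deg": 0,  "tri_count": 2},
        {"name": "low_poly",  "min_fov_deg": 30, "tri_count": 5_000},
        {"name": "full_mesh", "min_fov_deg": 60, "tri_count": 200_000},
    ]
}

phone = {"fov_deg": 40, "tri_budget": 50_000}
hmd   = {"fov_deg": 90, "tri_budget": 1_000_000}

print(adapt_content(chair, phone)["name"])  # low_poly
print(adapt_content(chair, hmd)["name"])    # full_mesh
```

The point of the sketch is the shape of the solution: the content carries its own adaptation rules, so a phone, an HMD, and whatever comes next can all serve the same neutral content.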
There are millions of self-taught web developers out there who live and die by View Source and Stack Exchange. It will take an army of AR/VR enthusiasts to likewise capture the real world and build new worlds that we want to see.
Or it could follow TV, Movies, Games and big Media down a content-controlled narrow mind-numbing path. I hope not.
 In AR, content has to adapt to the user’s environment, including other people in view.
Here we see just the furniture playing a role. That’s pretty cool to see in action.
Mapping the world is far less invasive than mapping our brains.
 Business instincts will naturally drive companies to have app stores, to protect all IP and mediate access from the irrational mob, i.e., you.
Resist the urge. It’s not good for them and it’s not good for you.
The value of copying and remixing content far outweighs the loss of control. Look at YouTube vs. the App Store.
I look at App Stores and see more clones and less inspiration. DRM doesn’t prevent copying. It just makes everything suck.
 Most Importantly: We need a way to link people, places & things into a truly open meta graph.
Here, I’ll praise Facebook for Open Graph and Microsoft for following with their own kind of graph. What we need next is the meta-version of these that spans companies to build a secure graph of all things or GOAT.
Open experiences need to understand the dynamic relationships among people, places, and things. But information about people should be considered private, privileged, and protected. Those links can’t even be seen without authentication, authorization and auditing, aka user’s informed consent.
Users will live in a world where they subscribe to layers of AR based on levels of trust. Do I like Facebook’s view of the world? If yes, then I can see it. Do I like Microsoft’s. Ok, then that’s visible too. Do I trust Facebook with my data? If yes, then they can see me too.
We can build this. We built the web. It need not be owned by any one company. And we have just enough time to get it right.
 This is the key. You already own the content. Copyright is implicit in the US and beyond. If you published it, you own it.
If you express yourself on Facebook, right now, they own it, or at least can use it any way they want. That’s because you clicked a EULA. But that’s not the natural state of affairs.
We need a markup language for reality, letting us describe what IS in a semantically rich way.
We also need an expression language for content, that lets this content adapt to the environment.
There are some great starts in open standards. We can build the next steps on top of those.
 Ask yourselves: why are we doing this AR/VR stuff? For the technology itself? For the money?
It’s not an internet of things or a web of sites or a graph of places. It’s about people.
We do this because we ARE those people, building amazing things for ourselves and others to enjoy. And the things we build next are going to knock their socks off.
So our focus must always be on the people, our customers, and how to help and not hurt them. Because, even if we’re selfish, they and we are one and the same and our choices matter.
 We live in and make up an internet of people.
Thank you for inviting me here today.
It’s amazing to see everyone getting involved in VR development. This used to be relegated to a small set of academics and foolhardy startup veterans like me.
I especially can’t wait to try the Vive in conjunction with what they’re here calling environment mapping. It’s the closest thing we have to high quality AR right now.
I have a few nits with the article though, based on some 20 years experience with VR. Most of these made it into the Metaverse Roadmap doc.
Here’s an old glossary of VR that summed up the research as of 10 years ago (when I wrote it). In general, I think folks are overloading “presence” too much and are only just beginning to grok what else is missing.
What’s really going on is a virtuous cycle that starts with Presence and requires Interactivity, such that people can affect the world. The extent to which the world is mutated by your interaction or mere presence is called Reflectivity. This includes just seeing your own body and seeing it move the way you imagined, or vice versa (also called proprioception, which is greatly hindered by having opaque goggles on).

If we get this far, we can develop a kind of Resonance (a self-reinforcing signal) that builds with each positive interaction in this cycle. OTOH, every time this cycle is broken, like when you put your hand through a wall or something doesn’t move when you expect it to, it degrades the resonance and erodes the effect, sometimes severely.
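As a toy illustration of that cycle (the build and erode rates here are invented for the example, not measured), resonance compounds with each confirmed interaction and drops sharply on each break, so one late glitch can undo most of a session:

```python
# Toy model of the presence/interactivity/reflectivity cycle described above.
# The rates are illustrative numbers, not measurements.

def run_session(events, build=0.1, erode=0.4):
    """events: sequence of True (cycle confirmed) / False (cycle broken)."""
    resonance = 0.0
    for ok in events:
        if ok:
            # Each confirmed interaction closes part of the remaining gap.
            resonance = min(1.0, resonance + build * (1.0 - resonance))
        else:
            # A single break erodes far more than one success builds.
            resonance = max(0.0, resonance - erode)
    return resonance

smooth = run_session([True] * 20)
glitchy = run_session([True] * 18 + [False, True])
print(round(smooth, 2), round(glitchy, 2))
```

The asymmetry between `build` and `erode` is the whole point: credibility is slow to earn and fast to lose, which is why a single hand-through-a-wall moment hurts so much.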
When it works, we get a two-way synchronization between the entirely computational virtual world inside the computer and the entirely mental virtual world in your brain. This mimics what happens naturally, when we model the real world and our models are validated or refined by each interaction.
It’s not much more complicated than that.
But for those working on latency, resolution, etc.., that’s only the first 25% of the job. Physical interaction is also key, eventually haptic. But so is making the virtual world mutable enough for participants to impact it, so their interactions are credible across low and high-level systems in our brains. Sticking to how things work in nature is always a good starting place.
Currently, the most common testing technique (for men) is what’s known as penile plethysmography. This involves placing a ring-style sensor around the offender’s penis, then measuring any changes in its circumference as they’re subjected to a variety of visual or auditory stimuli. One problem with this approach is that subjects can skew the results by diverting their eyes from the images.
Holy Clockwork Orange, Batman. I can understand the wish to determine if sex offenders are likely to offend again, when determining their parole. But what we’re talking about here is effectively using VR to enable unavoidable thought crime, and entrapment thereof. Look away from the virtual child and you can still be guilty, because what we want is not what you did (legally or otherwise) but what you will do, aka your latent intent, even if you don’t consciously know it.
Nicely done, but I can’t figure out how to embed properly:
The only real conclusion one can draw from this article is that marketers are really excited about VR’s ability to attract attention. Here are 7 reasons to think harder:
1. “Movie theaters” full of HMDs are unlikely (even ignoring hygiene & robustness issues). The economics don’t make sense for the equivalent of having 300 people watch the same thing on one expensive big screen.
Even factoring in the cost of a new PC or console, we’d more likely see the equivalent of “internet/game cafes” for those who can’t afford their own VR setup at home (plus rentable airline equivalents), more as a niche and trailing edge.
2. Hollywood + VR movies already exist. They’re called games. Now, game developers generally put substantial effort into making their cinematic intros and cut scenes. But even with higher production values, most people watch these a few times and then skip right to the gameplay. The gameplay must be better than the intro movie or the investment will only succeed on YouTube, if at all.
Take-away: interactivity is key.
3. Physical interactivity in VR is not yet ready for prime time. Reason being: the closer you get to the human body for sensing our movement, the more proprioceptive skill we have and the less tolerant we are of noise and other errors.
So in the short-term, the level of interactivity will range from “almost none” to the equivalent of a gaming controller. Designers have to work around those limitations. A good example is to skip touch entirely and use voice to control things.
4. In UX research, we found that people’s levels of comprehension of things like story and character in VR were very poor, probably due to information overload and not knowing where to look for cues. Movies solve this by leading the horse to the water, so to speak, with expert cinematography and more. So the chances of a subtle cinematic narrative are slim until we develop those muscles in VR over many years.
Think more “TV Soap Opera” than “Gosford Park.” And in terms of Presence, think more “Saving Private Ryan” than “The Man from Earth.”
5. Movies are an inherently social experience, esp. going to the theater (which we said isn’t helped by VR). Perfect, you say, because Face/Rift is a social network. Actually, FB today is more a social experience of last resort. It is most social when you don’t have a better way to interact. Just imagine a group of six friends hanging out, noses down, all browsing FB on their so-called smartphones. I know it happens a lot, and they certainly think they’re being social, but who believes it? It’s at its best when it’s connecting people who won’t otherwise see each other.
6. VR movies will initially be more of solo experience that we can talk about and retroactively construct the social element, like talking about the latest episode of “Lost” or “Game of Thrones” the next day. We can feel like we watched them together for some “social backfill.” I’m guessing that the more presence we feel in the VR experience, the harder it is to later backfill in those missing friends, but the more we’ll want to try, leading to more of the feeling that we’re losing real human connections by going so virtual. Prediction.
7. Someone will therefore add avatars to these immersive VR movies to solve this. Good thinking. If captured with high fidelity, this will be a little closer to the quality of being together in person, and there’s always the cool new immersive milieu to explore together.
But here’s the dilemma. If your movie is interactive, you have to solve the holy grail of immersive interactive 3D storytelling, which the fictional Holodeck didn’t even get right. Tony and Tina’s wedding (the interactive play) is probably our best model, but that’s all about the actors making it work.
If your movie is not so interactive but you still add friends and family to the scene, the greater degree of presence ironically makes it more awkward to see them unless they’re transformed into the story, a stark reminder that you’re not actually there. It’d be good for Jurassic time travel, but not so great for Star Wars, where seeing my mother standing next to Darth Vader would change the experience a bit.
Not surprisingly, VR will likely work better for participants who are more physically remote than in the same room — exactly like FB does today. It adds to social interactions where distance makes it harder, but caps it where real proximity would make it easier. It’s no wonder FB likes this view of the future.
For Hollywood, it’s about the business of monetizing attention on one level, and the art of storytelling on another. On second thought, maybe they’re not that different after all.
I have to admit, even with 25+ years of experience with computer graphics, on first viewing I thought The Lego Movie was mostly done with stop-motion photography.
I figured maybe 80% physical and 20% virtual. Turns out it was closer to 99% CGI with some real legos thrown in for good measure. Other than the live action scenes, I couldn’t tell you where the real legos sat.
There were some things, like the water, explosions and more, that looked way too procedural to be done by hand. But still the rendering, shading and animation were so close to perfect, so physically correct down to sub-surface scattering and extreme depth of field, that it was almost impossible to tell.
Amazing job. And especially impressive given how well they could tell the story without relying on the usual tricks of animation and CG, staying true to only what real legos can do.
The real tip-off about the CGI was in the lighting, which allowed for certain legos to emit light or light to come from no actual source. That would be pretty hard to do in reality without a really complex effects pipeline on top.
Here’s a longer video that explains how it was done:
This is a brilliant idea and striking demonstration of the effects of rising sea levels.
It has only one small problem. It doesn’t take altitude into account. It shows the same nicely rendered water level no matter where you go.
I don’t think it would be hard to do a lookup of the altitude of any address and move the water level up or down. But given the very rough depth map from Street View and the apparent lack of an “up” vector, it might be hard to properly intersect the water with the scene.
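The altitude correction itself is trivial; the hard part is the data source. Here’s a sketch where the elevation lookup is just a stub table standing in for a real digital elevation model or geocoding query (the addresses and elevations below are made up for illustration):

```python
# Sketch of the missing altitude correction. get_elevation_m is a stub:
# in practice it would query a digital elevation model or geocoding service.

def get_elevation_m(address):
    # Stub lookup table standing in for a real DEM/geocoder query.
    sample = {
        "Embarcadero, San Francisco": 3.0,
        "Oakland Hills, Oakland": 300.0,
    }
    return sample[address]

def local_water_depth_m(address, sea_level_rise_m):
    """Water depth to render at this address; 0 means it stays dry."""
    return max(0.0, sea_level_rise_m - get_elevation_m(address))

print(local_water_depth_m("Embarcadero, San Francisco", 5.0))  # 2.0
print(local_water_depth_m("Oakland Hills, Oakland", 5.0))      # 0.0
```

With that one subtraction per address, the rendered water line would at least vary with terrain instead of showing the same level everywhere.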
Here’s a more accurate example, without the nice immersive visuals: http://geology.com/sea-level-rise/
Still better than trying to depict a 1000ft water level near the top of the Oakland Hills. If that happens, we’d be long gone.
For at least 20 years, I’ve been telling this to anyone who might conceivably take this idea and run with it as a business. Form a real-time sphere (or cubemap) of video around an airplane using an array of cameras. Give anyone who wants one a VR HMD and index their chosen POV into said sphere of video. Voilà: an invisible airplane, at least to those wearing the HMDs. It would be a much better way to pass several hours in flight than watching movies, IMO.
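Indexing a passenger’s chosen POV into that sphere of video is standard cube-mapping math. A minimal sketch (the face names and camera conventions here are my own choices, not any particular rig’s):

```python
# Map a passenger's view direction onto one of six outward-facing cameras.
# Standard cube-mapping: pick the dominant axis, then project onto that face.

def cubemap_lookup(x, y, z):
    """Map a view direction (x, y, z) to (face, u, v), with u, v in [0, 1]."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:          # left/right cameras
        face = "right" if x > 0 else "left"
        u, v = (-z / ax if x > 0 else z / ax), -y / ax
    elif ay >= az:                     # up/down cameras
        face = "up" if y > 0 else "down"
        u, v = x / ay, (z / ay if y > 0 else -z / ay)
    else:                              # front/back cameras
        face = "front" if z > 0 else "back"
        u, v = (x / az if z > 0 else -x / az), -y / az
    # Remap from [-1, 1] to [0, 1] texture coordinates.
    return face, (u + 1) / 2, (v + 1) / 2

print(cubemap_lookup(0, 0, 1))    # straight ahead: center of the front feed
print(cubemap_lookup(1, 0, 0))    # looking right: center of the right feed
```

Run this per eye at the HMD’s head-tracked orientation and each passenger gets their own window through the fuselage, all from one shared ring of cameras.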
Anyway, alas, if I want to do this today, I’d need to drive a tank. Thanks, Norway.
Scoble’s love affair for Google Glass could apparently only last so long. This underscores some of the problems with developing a product out in public, or at least half-way out. Long-lead technical challenges (battery, size, cost) are still hidden below the core design and marketing issues (utility, fashion, desire).
Google at least did a good job of starting the conversation, even if they haven’t yet figured out how to finish it.
Down in the comments, he elaborates on why he doesn’t like them any more and what he thinks Google did wrong:
1. They launched it with WAY too much fanfare that the product simply can’t live up to. Jumping out of a blimp and doing live video on the way down (along with a ton of really well produced videos that promised it would be a great assistant as you walked around) set expectations VERY high and the product hasn’t gotten to that fit and finish yet, even two years later. The product, for instance, still doesn’t do live video (at least not by itself).
2. The team started out very public, with very nice collaborative team members. Then they turned secretive and can’t tell us what’s coming. It’s almost like someone told the team “turn Apple.” That secrecy made developers and influencers feel like they were no longer part of the process of developing this into a real product, which it still is not.
3. Because it was launched at a developer conference (Google IO) expectations were set that this would be mostly for developers, and that a great API would come soon. The API did come, but 18 months later. There still is no real store. The UI is way too simplistic to handle dozens or hundreds of apps. Google refused to answer questions about sensors at the next Google IO.
4. Two years ago the price was announced as $1,500. Today the price is still $1,500. How many tech products stay the same price for two years? Not many. The pricing is now at the point where it seems just wrong. The team even admits it’s artificially high to ensure “only people who really want one get one.” That creates weird distortions in belief about the product too.
5. When people got theirs a year ago we expected a ton of REAL updates to both UI and functionality. The updates we’ve gotten just haven’t met expectations. Compare my drone, which lets me take a photo, see it on my iPhone, AND post it to Facebook and Instagram WHILE IT IS FLYING to Glass. I still can’t post photos to Facebook or Instagram without plugging the Glass in and putting them elsewhere (er, Google+) first.
6. We expected new designs by now and updated electronics. I’m holding out hope that we’ll see a much better design (battery life sucks, it cuts my ears, it is starting to look quite dated, video is poorly compressed, etc etc) by now.
When I say this was launched poorly Google set way too many expectations on this product and it has failed to meet them. It should have launched much quieter and set expectations that it was only for vertical markets. Then as those showed up, they could have expanded expectations. If Google had done that then the early adopters could have said “hey, these are designed for use in very specific places, like surgery rooms.” That would have kept people from freaking out.
The other thing that sucks is that many of the explorer program members I’ve talked with are tired of being asked to demonstrate them. Google is getting US to pay to test, provide PR, AND demonstrate them to the public. THAT is NOT empathetic at all and is actually quite arrogant on Google’s behalf (a company that makes billions of dollars every year).
It’s not shocking to me that most Google employees I know that have them aren’t wearing them around anymore and that many in the community are grumbling behind the scenes (most won’t write about their concerns because they need to have good relationships with Google).