Disney VR: Redux

A few years ago, I documented some of the cool experiences I worked on at Disney Imagineering starting in 1994. Now, inspired by John Carmack’s exploration of Scheme as the language of VR for Oculus, I figured it would be helpful to talk about the software stack a bit. And I’ll finish with a few thoughts on Scheme for VR in the future.

First, as always, I suck at taking credit in the company of such amazing co-workers. So for the real kudos, please thank Scott Watson (now CTO of Disney R&D), JWalt Adamczyk (Oscar winner and amazing solo VR artist/hacker), and our whole team for building much of this system before I even arrived. Thant Tessman especially deserves credit for the Scheme bindings and interop layer.

This Disney gig was my first “big company” job after college, not counting my internships at Bell Labs. My one previous startup, Worldesign, tried to be a cutting-edge VR concept studio about 20 years too early. But Peter Wong and I managed to scrape together a pretty killer CAVE experience (a hot air balloon time-travel ride) for only $30,000, which represented exactly all of the money in the world to us. The startup went broke before we even started that work. But because we’d borrowed ample SGI equipment, it did get me noticed by this secret and amazing Disney*Vision Aladdin VR project I knew nothing about.

I had to join on faith.

I quickly learned that Disney was using multiple SGI “Onyx” supercomputers, each costing about a million dollars, to render VR scenes for just one person apiece. Each “rack” (think refrigerator-sized computer case) had about the same rendering power as an Xbox, using the equivalent of today’s “SLI” to couple three RealityEngine 3D graphics cards (each card holding dozens of i860 CPUs) in series, each rendering 20fps for a combined 60fps per VR participant. In theory, anyway.

Disney was really buying themselves a peek ahead of Moore’s Law, roughly 10 years, and they knew it. This was a research project, for sure, but using hundreds of thousands of live “guests” in the park to tell us if we were onto something. (Guests are what Disney calls humans who don’t work there…)

I talked previously about the optically-excellent-but-quite-heavy HMD (driven by Eric Haseltine and others). Remember this was an ultra-low-latency system, using monochrome CRTs to avoid any hint of pixels or screen doors. So let’s dive into the software environment that inspired me for another 20 years.

Even with supercomputers boasting 4-8 beefy CPUs each (yes, that sounds like nothing today), it took a while to re-compile the C++ core of the ride. “SGI Doom” and “Tron 3D lightcycles” filled some of those lapses in productivity…

This code was built on top of the excellent SGI Performer 3D engine/library written by Michael Jones, Remi Arnaud, John Rohlf, Chris Tanner and others, with customizations to handle that 3-frame latency introduced by the “TriClops” (SLI) approach. The SGI folks were early masters of multi-core asynchronous programming, and we later went on to build Intrinsic Graphics games-middleware and then Google Earth. But let’s focus on the Scheme part here.

Above the C++ performance layer, Scott, Thant, JWalt and team had built a nice “show programming” layer with C++ bindings to send data back and forth. Using Scheme, the entire show could be programmed, with functions prototyped and later ported to C++ as needed. But the coolest thing about it was that the show never stopped (you know the old saying…) unless you wanted to recompile the low-level core. The VR experience kept running at 60fps while you interactively defined Scheme functions or commands to change any aspect of the show.

So imagine using Emacs (or your favorite editor), writing a cool real-time particle system function to match the scarab’s comet-like tail from the Aladdin movie, and hitting two keys to send that function into the world. Voilà, the particle system I wrote was running instantly on my screen or HMD. When I wanted to tweak it, I just sent the new definition down and saw the change just as fast. Debugging was similar: I could write code to inspect values and get the result back in my Emacs session, or depict it visually with objects in-world. I prototyped new control filters in Scheme and ported them to C++ when performance became an issue, getting the best of both worlds.
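
For flavor, here’s roughly what such a live-coded behavior looked like. To be clear, this is a from-memory sketch; names like add-behavior! and spawn-particle! are hypothetical stand-ins for an API that’s long gone. The shape is what matters: define a per-frame function, send it to the running interpreter, see it on the next frame.

```scheme
;; Hypothetical sketch of a live-coded show behavior. Names like
;; spawn-particle!, object-position, and add-behavior! are stand-ins;
;; the real Disney API is long gone.
(define (scarab-tail-emit scarab dt)
  ;; Called once per show frame: trail comet-like particles
  ;; behind the scarab.
  (spawn-particle! (object-position scarab)
                   (random-velocity 0.5)  ; a little scatter
                   2.0))                  ; lifetime in seconds

;; Re-evaluating this buffer from Emacs swaps in the new definition
;; without the show dropping a frame.
(add-behavior! 'scarab-tail scarab-tail-emit)
```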

The Scheme layer was nearly incapable of crashing the C++ side (an achievement that took much effort, to be honest). So for me, this kind of system became the gold standard for rapid prototyping on all future projects. Thant even managed to get multi-threading working in Scheme using continuations, so we were able to escape the single-threaded nature of the thing.
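
Thant’s implementation is his story to tell, but the underlying trick is a Scheme classic: build cooperative (green) threads out of first-class continuations. A bare-bones sketch of the idea, not his actual code:

```scheme
;; Cooperative threads from continuations -- the classic trick,
;; not Thant's actual implementation.
(define ready-queue '())

(define (enqueue! thunk)
  (set! ready-queue (append ready-queue (list thunk))))

(define (next!)
  ;; Run the next ready thread, or finish if none remain.
  (if (null? ready-queue)
      'done
      (let ((thunk (car ready-queue)))
        (set! ready-queue (cdr ready-queue))
        (thunk))))

(define (spawn! thunk)
  (enqueue! (lambda () (thunk) (next!))))

(define (yield!)
  ;; Capture "the rest of this thread," put it at the back of the
  ;; line, then let some other thread run.
  (call-with-current-continuation
   (lambda (k)
     (enqueue! (lambda () (k #f)))
     (next!))))

;; Usage: (spawn! job-a) (spawn! job-b) (next!)
;; Each job calls (yield!) periodically to share the processor.
```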

Thant and I also worked a bit on a hierarchical control structure for code and data to serve as a real-time “registry” for all show contents: something to hang an entire virtual world off of, so everyone could reference the same data in an organized fashion. That work later led me to build what became KML at Keyhole, now a geographic standard (but forget the XML part; our original JSON-like syntax was superior).
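
The concept is simple enough to sketch: one global tree of named values, addressed by hierarchical path, that any subsystem (or programmer) can read and write. A toy version, with invented names:

```scheme
;; Toy hierarchical show registry: values addressed by path so every
;; subsystem shares one organized namespace. Illustrative only.
(define registry '())

(define (registry-set! path value)
  ;; Newest binding shadows older ones; assoc returns the first match.
  (set! registry (cons (cons path value) registry)))

(define (registry-get path)
  (let ((entry (assoc path registry)))
    (and entry (cdr entry))))

;; Example:
(registry-set! '(show scarab tail length) 4.5)
(registry-get '(show scarab tail length))  ; => 4.5
```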

BTW, apart from programming the actual Aladdin show, my first real contribution to this work was getting it all to run at 60fps. That required inventing some custom occlusion culling, because the million-dollar hardware was severely constrained in pixel fill rate. We went from 20fps to 60fps in about two weeks with some cute hacks, though the Scheme part always stayed at 24fps, as I recall. Similarly, animating complex 3D characters was also too slow for 60fps, so I rewrote that system to beef it up and eventually separated those three graphics cards so each could run its own show, about a 10x performance improvement in six months.

The original three-frame latency increased the nausea factor, not surprisingly. So we worked extra hard to make something not far from Carmack’s “time warp” method, sans programmable shaders. We rendered a wider FOV than needed and set the head angle at the very last millisecond in the pipeline, thanks to some hacks SGI made for us. That, plus a bunch of smoothing and prediction on the 60fps portions of the show, made for a very smooth ride, all told.
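
The actual filters are lost to time, but the flavor is easy to reconstruct: exponential smoothing on the raw tracker data plus a linear prediction out to the display latency, exactly the kind of thing I’d prototype in Scheme and port to C++ later. A rough sketch, with made-up constants:

```scheme
;; Rough sketch of a head-tracking filter: exponential smoothing plus
;; linear prediction to cover display latency. Constants are made up,
;; and the real filters were surely more sophisticated.
(define smoothed-yaw 0.0)  ; filtered yaw in degrees
(define yaw-velocity 0.0)  ; degrees per second

(define (filter-yaw raw-yaw dt latency)
  (let ((alpha 0.3)                ; smoothing factor, 0..1
        (prev smoothed-yaw))
    (set! smoothed-yaw (+ (* alpha raw-yaw)
                          (* (- 1.0 alpha) smoothed-yaw)))
    (set! yaw-velocity (/ (- smoothed-yaw prev) dt))
    ;; Predict where the head will be when the frame finally displays.
    (+ smoothed-yaw (* yaw-velocity latency))))
```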

(I do recall getting the then Senate Majority Leader visibly nauseated during one demo in particular, but only because we broke the ride controls that day and I used my mouse to mirror his steering motions, with 2-3 seconds of human-induced latency as a result.)

This Disney project attracted and redistributed some amazing people also worth mentioning. I got to work with Dr. Randy Pausch. Jesse Schell (also in his first real gig, as a junior show programmer) went on to great fame in the gaming world, and Aaron Pulkka went on to an excellent career as well. And I’m barely even mentioning the people on the art and creative leadership side, who helped produce a VR demo that is still better than at least half of what I see today.

Further Thoughts

So can Scheme help Carmack and company make it easier to build VR worlds? Absolutely. A dynamic language is exactly what VR needs, especially one strong in the ways of Functional Reactive Programming, closures, monads, and more.

Is it the right language? If you asked my wise friend Brian Beckman, he’d probably recommend Clojure for any lisp-derived syntax today, since it benefits from the JVM for easy interoperability with Java, Scala and more. Brian is the one who got me turned onto Functional Reactive Programming in the first place, and with Scott Isaacs, helped inspire Read/Write World at Microsoft, which was solving a similar problem to John’s, but for the real world…

If you asked me, today I’d have to go with JavaScript as the scripting language for VR. It’s come a long way from the 90s, especially with ES6. And, like Thant 20 years ago with Scheme, I can now make JS look like anything I want with very little performance penalty but lots of flexibility. But the single biggest benefit is the community: there is just so much MIT-licensed code for NodeJS and browsers. For rapid prototyping, nothing saves time like the code you don’t need to write.

Syntactically, Lisp derivatives aren’t that hard to learn IMO, but it does take some brain warping to get right. I worked with CS legend Danny Hillis for a time, and he tried to get me to write the next VR system in Lisp directly. He told me he could write Lisp that outperformed C++, and I believed him. But I balked at the learning curve for doing that myself. If other young devs balk at Scheme due to simple inertia, that’s a downside, unfortunately.

Erik Meijer once taught me that JavaScript is the assembly language of the internet. With asm.js and WebAssembly, that’s become literally true. There really isn’t anything more appropriate right now for a language to build Cyberspace.

Palmer Luckey Wants to Build the Matrix

It’s worth remembering that virtual reality has never only been about gaming. Any real virtual reality enthusiast can look back at VR science fiction. It’s not about playing games … “The Matrix,” “Snow Crash,” all this fiction was not about sitting in a room playing video games. It’s about being in a parallel digital world that exists alongside our own, communicating with other people, playing with other people.

Source: Oculus Rift Inventor Palmer Luckey

Palmer Luckey wants to build The Matrix. I can totally relate. I wanted to do something similar back in my 20s, when it was called “Cyberspace,” the “Metaverse,” “the Other Plane,” or, for me, “Reality Prime.”

He’s a bit off base with a few things, though. It’s not so much that VR died in the 90s, around the time he was born. The hype certainly died down, so there weren’t many media artifacts to review later. But VR thrived in many forms, including making billions in MMO gaming, sans HMDs. Immersive VR also survived at the very high end (e.g., CAVEs) for oil exploration, simulation, and more.

What really happened, apart from the hardware remaining too expensive for mass adoption until cell phone demand drove component prices down, is that a lot of people working in VR realized there were better ways to serve the world. In other words, we moved on to bigger and better things.

It’s nice, btw, that he gives a shout-out to his time at ICT. World-class inventor Mark Bolas’s open-sourced HMD design was apparently instrumental in defining the first Oculus Rifts. Palmer may be aware of more design differences between his and Mark’s inventions than I am. The Rift has certainly come a long way since then. It’s quite nice, if not quite done.

But what about this “Matrix” thing?

In the film, people are plugged into the global AI network, their realities (and bodies) controlled by mysterious AI entities with varying motives, all centered on control. We’re a long way from that in real life. But still, the analogy may hold for what Facebook, Oculus’ new benefactor, is already doing.

In the movie, there was a weak (IMO) plot device where the AIs were secretly exploiting humans as batteries. It’s weak because: thermodynamics. People are relatively poor transformers of food into energy. What about alternatives in geothermal, nuclear or fusion power, you ask? You have to just accept this bit of superficial fiction on faith. Fair enough.

However, if you replaced the idea of “people as batteries” with “people as wallets” connected to the grid, now you’re onto something, allegorically speaking.

It’s not energy that people collectively produce to benefit the AIs, but rather new/monetizable value, which can be dollars, attention or even new ideas and intellectual property, all fungible.

To many people working in big internet technologies, customers are already fairly abstract entities, never seen directly, but more like “wallets” and “personal data” plugged in at arbitrary endpoints. These customers somehow make things (real or virtual, doesn’t matter), make money, and charge up their bank accounts, almost like batteries.

That’s not Facebook’s concern, for the most part, as it’s beyond their corporate capability to create so much original content and value at this scale. They can collect and connect it very well, though.

But affecting how people spend their money and time is Facebook’s core business model. That is: influencing your “brand thinking,” consumption, and spending habits with targeted and personalized content, especially ads, or selling your data to third parties who will do the same. That’s their bread and butter.

To get these “internet attached wallets” to open up for advertisers for the maximum return on investment, Facebook needs your “personal data” to get to know you better. For that, they provide a socially compelling service that gets you to share your life freely without worrying your pretty little head about who owns that data you created or where it goes next.

So yes, in a strong sense, Facebook is a lot like those AIs who provide an immersive world for the humans to blithely live their lives while unwittingly producing a commodity the AIs need. The main difference is that unlike the Matrix, we don’t spend all of our time in Facebook — yet. But Facebook would very much like to improve that metric, using VR and companion mobile devices (chat, text, voice) as the medium.

In the near future, Facebook will know what you like (and want) simply by how you look at things or how you react emotionally, with no manual “like” button needed. They could continue to experiment on you, as they’ve done before, to mine your personality, and potentially even control you, most subtly, by conveniently filtering and mediating your interactions, social and otherwise.

If that doesn’t seem plausible, read my previous post about how it works. This isn’t science fiction. And it doesn’t require anyone with “bad intentions” either, just bad business models, and it will happen. The result is inevitable without adequate constraints, given the push to always make more money with a bounded set of people, roughly 7 billion. It will take much longer until Facebook gets into the business of making more customers. (I’m kidding, I hope).

To be clear, I think very highly of Facebook and Oculus’ engineering talent, product designers, and leadership. I have a healthy respect for their achievements and capabilities, which only adds to these concerns — if they succeed. They don’t seem to want (or believe) this outcome as conscientious individuals, and yet they’re already building it collectively, brick by brick.

So when people openly throw around that they’re inspired by and building towards “The Matrix,” then I think we need to ask even more emphatically about social impact and ethics and demand to know who will ultimately control this new power we’re unleashing.

Palmer is right. This is not about games. The stakes are so much higher than that.

What do you think?

The Internet of People: A Call to Arms

Here are the slides from my AWE 2015 talk and a link to the video on YouTube. See below for the original speech in prose form. Thanks again to Ori and Dave for inviting me. And I was totally humbled to be sharing the stage with some of my heroes that day.

Funny story: I’d practiced the whole speech for a week or more. I was totally relaxed backstage. But I somehow got nervous in the moment, and the speech escaped my brain about 20 seconds in. Embarrassing!

Without a teleprompter or any notes, I had to wing the whole thing. So a big thank you to Travis and the A/V team for giving me a new clicker to buy time and cover for my fumble. Totally cool move.

Let me know which version you like better, “as written” or “as delivered.”

Lesson: next time I’m going to just do it more spontaneously, since that’s how it may wind up anyway.

The Original Speech:

[1] In the last 23 years, I’ve worked for some really big companies and some really small ones. I’m not here to represent any of them. I’m here with the simple job title of Person. And I’m here to hopefully inspire some of you to take action, and others to at least understand what needs to be done.

We’re all here today because we recognize the game-changing potential of AR/VR. This technology brings magic into the world. It gives us superpowers. How can that not be game changing? But this new magic is so powerful, and the potential is so big, that some of the biggest companies are already vying for control.

[2] So what happens when big companies – with a variety of business models – bring what we might call “big magic” into the world?

I was a little worried about using such bold words until I heard David Brin talk so eloquently this morning. I’ll sum up: the danger zone of any big new technology is when it’s still unevenly distributed. We saw this with everything from fire to radio to books to TNT. There is no such thing as a purely good technology. It’s all in how you use it.

The good news is we get to decide how this goes down. We’re the creators, but also the customers. We can shape the world we want.

[3] I gave a talk here two years ago equating AR/VR to a host of new human superpowers. I’m pleased to see the theme of the conference this year.

That talk is online if you’re interested. But even then, these ideas had been percolating for a long time, and I was just dying to talk about them.

[4] In 2010, I’d joined a secret project inside Microsoft to reboot the next-gen Xbox…

Leadership had concluded that cramming 10x more of everything wasn’t enough. They wanted something fundamentally more game changing, something where they could spend, say, a billion dollars to buy a strong lead. They wanted something that would normally scare them (and everyone else) from even trying.

[5] I had a few ideas…

I’ve been very lucky in my career to work with amazing people on amazing opportunities.

I got to work on Disney’s $20M Aladdin VR ride and helped craft Google Earth and Second Life. I was recruited to Microsoft in 2008 to help build social AR-like experiences into Bing. We called the project “First Life.” Alas, some folks didn’t think mobile was going to be a big deal, and it stalled. So I switched tracks to work on communications, social avatars, and then interactive video holography.

That led me to join Xbox Incubations, with perfect timing, to propose and build the very first HoloLens prototypes and concept definitions, and to invent about 20 new ideas in the first six months.

[6] Just to clarify:

TP is Telepresence. Holographic toilet paper == worst idea ever. The use (some might say abuse) of the word Hologram came from popular fiction, like Star Wars.

Hundreds, if not thousands of people worked on HoloLens after me, solving some very hard problems. Many of the original team have moved on. They ALL deserve credit.

[7] So AR is really coming. It’s only taken 47 years since Ivan Sutherland built the first prototype.

[8] But all of a sudden VR is exploding again. Yes. I want my holodeck too. But since my Disney VR days, I’ve come to realize that early VR is going to be mostly “Dark Rides.” Think Pirates of the Caribbean. You’ll sit in a chair and experience an exhilarating, magical, evocative but not-very-relevant journey.

On the whole, VR is:

✓ High Presence and Immersion

✓ Low Relevance to Your Daily Life

Not that there’s anything wrong with a little escapism, from time to time.

[9] The fundamental difference between AR and VR is not hardware. Same tech will eventually do both easily. The fundamental difference is that AR builds on Context. In other words, it’s about You and Your World. And context goes to one kind of monetization.

Mixed Reality, as a reminder, is that whole spectrum from AR to VR. You could look at it as a spectrum of reality vs. fantasy, but it’s more instructive to see it as a “Spectrum of Relevance.”

[10] Why are highly relevant experiences worth an order of magnitude more?

1) Because we spend so much more time and money in the real world

2) Because we care so much more about the real world

All good so far. AR is a goldmine of reality. VR is a goldmine of creativity.

[11] But, Beware the Dark Side

[12] You knew there had to be a dark side somewhere, right?

Fact: the more you can be swayed by a given ad, the more that ad is worth. Companies want to track your desires, your purchasing intent, and your ultimate transactions to (as they say) “close the loop.” The world is moving from analyzing your clickstreams (on the web) to analyzing your communication streams (chat, voice, email) and eventually to studying your thought-streams.

How do they obtain your thought streams and mine your personality without literally reading your mind?

It’s not like people would ever treat other people like lab rats…

[13] Oops. And Facebook is not alone, not by a long shot.

Note: scientific experiments are often very positive. They rely on this thing called “informed consent.”

And no, EULAs and privacy notices don’t count. Let’s stop pretending people read those. Informed consent means informed consent.

[14] In 1995, I had the honor of working with Dr. Randy Pausch at Disney Imagineering to help study, with informed consent, how people experienced VR… We continuously recorded the gaze vectors of hundreds of thousands of people as they flew their magic carpets through the world of Aladdin, to learn which parts of our storytelling worked best.

BTW, we found that while men averaged a head angle of “straight ahead,” women, on the whole, looked 15 degrees to the left. What?

We figured out that the motorcycle-like seat of our physical VR rig forced people wearing skirts to sit side-saddle. So, statistically speaking and unintentionally, the data told us if you were wearing a skirt.

[15] More recently, VR helped reveal dangerous sex offenders before their release, even when the offender believed he’d been cured. They were shown risky scenes. I won’t elaborate on how their responses were measured…

But with coming face capture, eye tracking, EEGs, muscle sensors, skin conduction, pulse and more built into new HMDs, imagine what kind of latent inclinations can be teased out of you. Companies like Facebook and Google, betting on VR, will be able to show you something and tell instantly how you feel about it, no Like Button necessary.

[16] Did you look at the woman in the red dress? We know you did.

The thing about the Matrix is: the whole humans-as-batteries trope is kind of silly. But if you imagine people as wallets and credit cards connected to the internet, that seems to be exactly how some companies look at their customers.

But for the record, I don’t think we’re in danger of being grown in vats anytime soon.

[17] Tobii is a leader in using eye tracking to help understand user behavior.

The picture on the left is of a woman wearing glasses that track her gaze as she shops. The person with the tablet is studying her behavior.

Another study on the upper right tracked men and women’s gaze over various photos. Conclusion: men have no idea what they’re staring at most of the time. These are involuntary reactions. Stimulus and response.

To the extent AR or other devices track what we see and do, companies will be able to monitor our sensory inputs and emotions as we go about our day. The thing about AR is that it now gives us a compelling reason to wear it all day long.

[18] The point of all this is not to get scared, feel powerless and withdraw.

The point is that we have control. We always did.

Nothing in the world is free. You’re going to pay for stuff one way or another.

Companies that sell things can and should be the most customer-focused, protecting privacy and curbing abuses. That’s in their core business interest.

Companies that sell user data, sell ads, sell you: they have every incentive to keep pushing the envelope on this front and to keep you ignorant of it.

It’s all about their business models, not you personally. You can steer this by simply choosing who you do business with.

[19] Case in point: Apple lately has one of the better takes on user privacy, responding to latent fears over just how much data they’re collecting. They’re a product company, and even their iAd product is more privacy-friendly than most.

But can Apple bring it home? The next thing I want to hear from Apple is: “You OWN your data. You made it. It’s about you. Can we help put it to work for you, please?”

HealthKit is the closest thing to that so far, with opt-in studies. And it’s great to see them trying to figure this out.

I’d also give Cortana kudos for the notebook feature, letting you easily see and edit what Microsoft knows about you. That comes from consumer demand.

[20] Recapping so far:

Big Companies are bringing “Big Magic” to the world

Big Magic can either Liberate or Enslave us

We get to pick. Here’s how…

[21] Basically, we need to build the AR equivalent of the World Wide Web. And I don’t mean just boxes in space.

You own your content, your little part of the graph.

You create the world you want to live in.

[22] All of these statements may be true, to some extent. But they don’t have to be true. We’ve also let developers of web technologies largely off the hook. We can demand parity between browsers and native experiences. Apple and Microsoft have for years let their browsers, especially on mobile, lag the native side.

Now, it’s true that having a free and open web today doesn’t guarantee privacy or lack of exploitation. Just look at web bugs and cookies and Facebook. And security is the primary reason cited for the lack of features in web tech.

But having a free and open web does at least make it very hard for any one big company (or government) to eliminate your choice unilaterally. You get more options the more open the field is. And you get more voice. That’s the point. Just look at the fight over net neutrality. Could that have happened if AT&T provided everyone’s internet service? No way.

[23] So consider what made the WWW a winner. Why didn’t the web arrive as a series of native “apps” and walled gardens, when those are clearly much safer and more capable?

✓ Content is device independent

✓ Content is dynamically and neutrally served

✓ Content is viewable, copyable, mashable

[24] Same for the next phase of evolution.

[25] Content is going to need to adapt to the chosen device: its resolution, performance, field of view, depth of field. And for AR, it’s also going to need to adapt to real-world location, people, and activity.

Baking this all into native code and statically packaged data is problematic. It has to be adaptable, reactive at its core.

There are millions of self-taught web developers out there who live and die by View Source and Stack Exchange. It will take an army of AR/VR enthusiasts to likewise capture the real world and build new worlds that we want to see.

Or it could follow TV, Movies, Games and big Media down a content-controlled narrow mind-numbing path. I hope not.

[26] In AR, content has to adapt to the user’s environment, including other people in view.

Here we see just the furniture playing a role. That’s pretty cool to see in action.

Mapping the world is far less invasive than mapping our brains.

[27] Business instincts will naturally drive companies to build app stores, to protect all IP, and to mediate access for the irrational mob, i.e., you.

Resist the urge. It’s not good for them and it’s not good for you.

The value of copying and remixing content far outweighs the loss of control. Look at YouTube vs. the App Store.

I look at App Stores and see more clones and less inspiration. DRM doesn’t prevent copying. It just makes everything suck.

[28] Most Importantly: We need a way to link people, places & things into a truly open meta graph.

Here, I’ll praise Facebook for Open Graph, and Microsoft for following with their own kind of graph. What we need next is the meta-version of these, spanning companies, to build a secure graph of all things, or GOAT.

Open experiences need to understand the dynamic relationships among people, places, and things. But information about people should be considered private, privileged, and protected. Those links can’t even be seen without authentication, authorization, and auditing, a.k.a. the user’s informed consent.

Users will live in a world where they subscribe to layers of AR based on levels of trust. Do I like Facebook’s view of the world? If yes, then I can see it. Do I like Microsoft’s? OK, then that’s visible too. Do I trust Facebook with my data? If yes, then they can see me too.

We can build this. We built the web. It need not be owned by any one company. And we have just enough time to get it right.

[29] This is the key. You already own the content. Copyright is implicit in the US and beyond. If you published it, you own it.

If you express yourself on Facebook right now, they own it, or at least can use it any way they want. That’s because you clicked a EULA. But that’s not the natural state of affairs.

We need a markup language for reality, letting us describe what IS in a semantically rich way.

We also need an expression language for content, one that lets content adapt to the environment.

There are some great starts in open standards. We can build the next steps on top of those.

[30] Ask yourselves: why are we doing this AR/VR stuff? For the technology itself? For the money?

It’s not an internet of things or a web of sites or a graph of places. It’s about people.

We do this because we ARE those people, building amazing things for ourselves and others to enjoy. And the things we build next are going to knock their socks off.

So our focus must always be on the people, our customers, and how to help and not hurt them. Because, even if we’re selfish, they and we are one and the same and our choices matter.

[31] We live in and make up an internet of people.

Thank you for inviting me here today.

People Actually Care About Privacy

Key findings on American consumers include:

91% disagree (77% of them strongly) that “If companies give me a discount, it is a fair exchange for them to collect information about me without my knowing.”

71% disagree (53% of them strongly) that “It’s fair for an online or physical store to monitor what I’m doing online when I’m there, in exchange for letting me use the store’s wireless internet, or Wi-Fi, without charge.”

55% disagree (38% of them strongly) that “It’s okay if a store where I shop uses information it has about me to create a picture of me that improves the services they provide for me.”

Source: The Online Privacy Lie Is Unraveling | TechCrunch

I’ve had this same argument for years.

The “smart money” says that people no longer care about privacy. They point to millennials who post tons of embarrassing crap about themselves on Facebook. They say it’s a cultural shift from my generation to the next. Privacy is dead or dying.

I say that teenagers are generally reckless, nonchalant about their own futures, almost as a rite of passage. However, teenagers, for the most part, grow up, become responsible, and have concerns like the rest of us. So I figured the pendulum would swing back towards privacy as soon as these kids got older and saw the pitfalls. The new kids would become the reckless ones.

This study shows that people do actually care about privacy. But there’s a third factor to consider: cynicism about how much power we have to protect it. If people are resigned to losing their privacy, defending it feels less vital. It doesn’t mean they care less or are any less harmed. If people felt more empowered, they might even fight for their rights.

For me, this is pretty simple. If I create data by my activities, it’s the same as creating a work of art. It doesn’t matter that my phone is the tool vs. a paint brush or keyboard. This data would not exist except for my actions. I made it and I own it, unless I choose to sell it.

It’s perfectly fine for any adult to trade or sell their own data, as long as there is informed consent and people are in control of their own information.


VR Takeaways

HTC Vive vs. Oculus Crescent Bay: My 10 VR Takeaways – Tested.

It’s amazing to see everyone getting involved in VR development. This used to be relegated to a small set of academics and foolhardy startup veterans like me.

I especially can’t wait to try the Vive in conjunction with what they’re here calling environment mapping. It’s the closest thing we have to high quality AR right now.

I have a few nits with the article, though, based on some 20 years’ experience with VR. Most of these made it into the Metaverse Roadmap doc.

Here’s an old glossary of VR that summed up the research as of 10 years ago (when I wrote it). In general, I think folks are overloading “presence” too much and only just beginning to grok what else is missing.

What’s really going on is a virtuous cycle that starts with Presence and requires Interactivity, so that people can affect the world. The extent to which the world is mutated by your interaction or mere presence is called Reflectivity. This includes just seeing your own body and seeing it move the way you imagined, or vice versa (also called proprioception, which is greatly hindered by wearing opaque goggles). If we get this far, we can develop a kind of Resonance (a self-reinforcing signal) that builds with each positive interaction in this cycle. OTOH, every time this cycle is broken, like when you put your hand through a wall or something doesn’t move when you expect it to, it degrades the resonance and erodes the effect, sometimes severely.

When it works, we get a two-way synchronization between the entirely computational virtual world inside the computer and the entirely mental virtual world in your brain. This mimics what happens naturally, when we model the real world and our models are validated or refined by each interaction.

It’s not much more complicated than that.

But for those working on latency, resolution, and the like, that’s only the first 25% of the job. Physical interaction is also key, eventually including haptics. But so is making the virtual world mutable enough for participants to impact it, so their interactions are credible across the low- and high-level systems in our brains. Sticking to how things work in nature is always a good starting place.


How to Design a Product for Billions: 7 Lessons

It’s not often one gets to work on a product with a billion anything. I’m extremely fortunate to have had the opportunity to design/invent/iterate in the earliest days of at least three products in that league, and I’ve learned from others who have had more.

Here, I’m going to focus on Google Earth, which has well over a billion users, thanks to Google’s scale. I’ll pick examples from my experience to help illustrate the lessons.



Microsoft HoloLens

I am proud and excited that Microsoft has finally announced a project I started working on in 2010, before anyone called it “Fortaleza” or “720” or “HoloLens.” When I started architecting AR systems and designing the very first experiences around what we first called “Screen Zero,” I didn’t care about credit (lucky me, because I wouldn’t get any). I just wanted to help change the world.

And so it will…

One of the most fortuitous aspects of this is that I now get to work (at Amazon) with two of my original teammates from the early Screen Zero days: Rudy and Sheridan. Andy is still at Microsoft, maybe Katie too. But there are many more veterans working with us now. Amazing people make amazing things!


Oculus Sift

Virtual reality may find use in assessing sex offenders.

Currently, the most common testing technique (for men) is what’s known as penile plethysmography. This involves placing a ring-style sensor around the offender’s penis, then measuring any changes in its circumference as they’re subjected to a variety of visual or auditory stimuli. One problem with this approach is that subjects can skew the results by diverting their eyes from the images.

Holy Clockwork Orange, Batman. I can understand the wish to determine if sex offenders are likely to offend again, when determining their parole. But what we’re talking about here is effectively using VR to enable unavoidable thought crime, and entrapment thereof. Look away from the virtual child and you can still be guilty, because what we want is not what you did (legally or otherwise) but what you will do, aka your latent intent, even if you don’t consciously know it.



VR Hackathon 2014

I had a lot of fun helping to judge several dozen lively entries in the SF VR Hackathon this Sunday. I guess I’m at a point in my career where I’m qualified to judge but don’t have enough time to actually create fun projects (outside of work).

For “only” three days of work, there were some amazing entries. A number of them won prizes. I wish we’d had more categories and prizes to give out to some of the other notable efforts, like a mind-bending recursive/immersive zombie game from one of the sponsor teams and a really interesting virtual world built entirely in a pixel shader.

The grand prize went to a Ghostbusters riff that did an amazing job solving (for narrow use cases) user input in VR, which I think is still one of the biggest unsolved challenges. Here’s some video of their experience. The controller had great haptic feedback, and the weak cardboard backbone connecting the two pieces actually added more value than it took away.

via Who You Gonna Call – VR Hackathon 2014 – YouTube.


Miku

Nicely done, but I can’t figure out how to embed properly:

http://ak.c.ooyala.com/h4OTd4azoOLRlJhJ1B55LxeICRMbb3rU/DOcJ-FxaFrRg4gtDMwOjFpaDowODE7X4
