
Disney VR: Redux

A few years ago, I documented some of the cool experiences I worked on at Disney Imagineering starting in 1994. Now, inspired by John Carmack exploring Scheme as the language of VR for Oculus, I figured it would be helpful to talk about the software stack a bit. And I’ll finish with a few thoughts on Scheme for VR in the future.

First, as always, I suck at taking credit in the company of such amazing co-workers. So for the real kudos, please thank Scott Watson (now CTO of Disney R&D) and JWalt Adamczyk (Oscar winner and amazing solo VR artist/hacker) and our whole team for building much of this system before I even arrived. Thant Tessman esp. deserves credit for the Scheme bindings and interop layer.

This Disney gig was my first “big company” job after college, not counting my internships at Bell Labs. My one previous startup, Worldesign, tried to be a cutting edge VR concept studio about 20 years too early. But Peter Wong and I managed to scrape together a pretty killer CAVE experience (a hot air balloon time travel ride) for only $30,000, which represented exactly all of the money in the world to us. The startup went broke before we even started that work. But because we’d borrowed ample SGI equipment, it did get me noticed by this secret and amazing Disney*Vision Aladdin VR project I knew nothing about.

I had to join on faith.

I quickly learned that Disney was using multiple SGI “Onyx” supercomputers, each costing about a million dollars to render VR scenes for just one person each. Each “rack” (think refrigerator-sized computer case) had about the same rendering power as an Xbox, using the equivalent of today’s “SLI” to couple three RealityEngine 3D graphics cards (each card holding dozens of i860 CPUs) in series to render just 20fps each for a total of 60fps for each VR participant. In theory, anyway.

Disney was really buying themselves a peek ahead of Moore’s Law, roughly 10 years, and they knew it. This was a research project, for sure, but using hundreds of thousands of live “guests” in the park to tell us if we were onto something. (Guests are what Disney calls humans who don’t work there…)

I talked previously about the optically-excellent-but-quite-heavy HMD (driven by Eric Haseltine and others). Remember this was an ultra-low-latency system, using monochrome CRTs to avoid any hint of pixels or screen doors. So let’s dive into the software environment that inspired me for another 20 years.

Even with supercomputers with 4-8 beefy CPUs each (yes, sounds like nothing today), it took a while to re-compile the C++ core of the ride. “SGI Doom” and “Tron 3D lightcycles” filled some of those lapses in productivity…

This code was built on top of the excellent SGI Performer 3D engine/library written by Michael Jones, Remi Arnaud, John Rohlf, Chris Tanner and others, with customizations to handle that 3-frame latency introduced by the “TriClops” (SLI) approach. The SGI folks were early masters of multi-core asynchronous programming, and we later went on to build Intrinsic Graphics games-middleware and then Google Earth. But let’s focus on the Scheme part here.

Above the C++ performance layer, Scott, Thant, JWalt and team had built a nice “show programming” layer with C++ bindings to send data back and forth. Using Scheme, the entire show could be programmed and functions prototyped, then later ported to C++ as needed. But the coolest thing about it was that the show never stopped (you know the old saying…) unless you wanted to recompile the low-level code. The VR experience continued to run at 60fps while you interactively defined Scheme functions or commands to change any aspect of the show.

So imagine using Emacs (or your favorite editor), writing a cool real-time particle system function to match the scarab’s comet-like tail from the Aladdin movie, and hitting two keys to send that function into the world. Voilà: the particle system I wrote was running instantly on my screen or HMD. When I wanted to tweak it, I just sent the new definition down and I’d see it just as fast. Debugging was similar. I could write code to inspect values and get the result back to my emacs session, or visually depict it with objects in-world. I prototyped new control filters in Scheme and ported them to C++ when performance became an issue, getting the best of both worlds.
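
To make that workflow concrete, here’s a minimal sketch of the pattern in modern JavaScript rather than the original Scheme (all names are hypothetical): the render loop only ever calls show functions through a mutable registry, so evaluating a new definition from your editor changes the running show without ever stopping it.

```javascript
// Minimal sketch of "the show never stops," in JavaScript for
// illustration (the real system was Scheme). Names are hypothetical.
const show = new Map(); // registry of live, replaceable show functions

// Initial particle-tail update, as sent from the editor.
show.set('scarabTail', (particles, dt) => {
  for (const p of particles) {
    p.x += p.vx * dt;
    p.y += p.vy * dt;
    p.life -= dt;
  }
});

// The 60fps loop never holds a direct reference to the function, only
// to the registry, so a new definition takes effect on the next frame.
function frame(particles, dt) {
  show.get('scarabTail')(particles, dt);
}

// A keystroke in the editor just evaluates a new definition while the
// loop keeps running:
show.set('scarabTail', (particles, dt) => {
  for (const p of particles) {
    p.vy -= 9.8 * dt; // tweak: add a touch of gravity
    p.x += p.vx * dt;
    p.y += p.vy * dt;
    p.life -= dt;
  }
});
```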

The Scheme layer was nearly incapable of crashing the C++ side (you could manage it, but only with much effort, to be honest). So for me, this kind of system became the gold standard for rapid prototyping on all future projects. Thant even managed to get multi-threading working in Scheme using continuations, so we were able to escape the single-threaded nature of the thing.
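
As a rough analog (JavaScript generators standing in for Scheme continuations, purely for illustration), cooperative threads look something like this: each “thread” yields control back to a scheduler that steps every thread once per frame, so nothing can block the show.

```javascript
// Rough analog of continuation-based cooperative threads, sketched
// with JavaScript generators (the original used Scheme continuations).
function* blinkLight() {
  while (true) {
    console.log('light on');  yield; // hand control back to the scheduler
    console.log('light off'); yield;
  }
}

function* driftFog() {
  let x = 0;
  while (true) {
    x += 0.1; // advance a little each frame
    yield;
  }
}

const threads = [blinkLight(), driftFog()];

// Called once per frame: every thread runs one step, no preemption.
function schedulerTick() {
  for (const t of threads) t.next();
}

schedulerTick();
schedulerTick();
```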

Thant and I also worked a bit on a hierarchical control structure for code and data to serve as a real-time “registry” for all show contents — something to hang an entire virtual world off so everyone can reference the same data in an organized fashion. That work later led me to build what became KML at Keyhole, now a geographic standard (but forget the XML part — our original JSON-like syntax is superior).
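
In spirit (and with entirely made-up names and syntax), the idea was something like this: one tree holds the whole show, and every subsystem addresses the same data by path.

```javascript
// Hypothetical sketch of a hierarchical show "registry," in the
// JSON-like spirit described above (not our actual syntax).
const world = {
  agrabah: {
    marketplace: {
      scarab: {
        position: [12.0, 1.5, -3.2],
        tail: { emitter: 'scarabTail', rate: 200 },
      },
    },
    palace: { gates: { open: false } },
  },
};

// Everyone references the same data by path, KML/DOM style.
function lookup(root, path) {
  return path
    .split('/')
    .reduce((node, key) => (node ? node[key] : undefined), root);
}

console.log(lookup(world, 'agrabah/marketplace/scarab').position);
// [12, 1.5, -3.2]
```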

BTW, apart from programming the actual Aladdin show, my first real contribution to this work was getting it all to run at 60fps. That required inventing some custom occlusion culling, because the million dollar hardware was severely constrained in terms of the pixel fill complexity. We went from 20fps to 60fps in about two weeks with some cute hacks, though the Scheme part always stayed at 24fps, as I recall. Similarly, animating complex 3D characters was also too slow for 60fps, so I rewrote that system to beef it up and eventually separated those 3 graphics cards so each could run its own show, about a 10x performance improvement in six months.

The original three-frame latency increased the nausea factor, not surprisingly. So we worked extra hard to make something not far from Carmack’s “time warp” method, sans programmable shaders. We rendered a wider FOV than needed and set the head angle at the very last millisecond in the pipeline, thanks to some hacks SGI made for us. That, plus a bunch of smoothing and prediction on the 60fps portions of the show, made for a very smooth ride, all told.
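
The core trick, sketched below in JavaScript with hypothetical helper names (SGI’s actual hooks were driver-level and nothing like this), is to render with a predicted head angle and then correct the crop window with the freshest tracker sample at the last possible moment.

```javascript
// Hedged sketch of the late-correction idea. All helpers and names
// here are hypothetical stand-ins, not SGI's real API.
function renderWideFov(scene, headAngle, overdrawDegrees) {
  return { scene, headAngle, overdrawDegrees }; // pretend framebuffer
}
function cropAndRotate(image, angleError) {
  return { ...image, cropOffset: angleError }; // shift the visible window
}

const OVERDRAW_DEGREES = 10; // extra FOV rendered on each side

function renderFrame(scene, predictedHeadAngle) {
  // Rendered ~3 frames before display, so the angle is a prediction.
  return renderWideFov(scene, predictedHeadAngle, OVERDRAW_DEGREES);
}

function presentFrame(image, readLatestHeadAngle) {
  // Just before scanout, re-read the tracker and correct by the
  // prediction error, hiding most of the pipeline latency.
  const error = readLatestHeadAngle() - image.headAngle;
  return cropAndRotate(image, error);
}

// Usage: present with the freshest sample available.
const frame1 = renderFrame({ name: 'cave' }, /* predicted */ 30.0);
console.log(presentFrame(frame1, () => 31.5)); // corrects by +1.5 degrees
```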

(I do recall getting the then-Senate-majority leader visibly nauseated under the collar for one demo in particular, but only because we broke the ride controls that day and I used my mouse to mirror his steering motions, with 2-3 seconds of human-induced latency as a result).

This Disney project attracted and redistributed some amazing people also worth mentioning. I got to work with Dr. Randy Pausch. Jesse Schell (then in his first real gig as a jr. show programmer) went on to great fame in the gaming world, and Aaron Pulkka went on to an excellent career as well. I’m barely even mentioning the people on the art and creative leadership side, whose work resulted in a VR demo that is still better than at least half of what I see today.

Further Thoughts

So can Scheme help Carmack and company make it easier to build VR worlds? Absolutely. A dynamic language is exactly what VR needs, esp. one strong in the ways of Functional Reactive Programming, closures, monads, and more.

Is it the right language? If you asked my wise friend Brian Beckman, he’d probably recommend Clojure for any lisp-derived syntax today, since it benefits from the JVM for easy interoperability with Java, Scala and more. Brian is the one who got me turned onto Functional Reactive Programming in the first place, and with Scott Isaacs, helped inspire Read/Write World at Microsoft, which was solving a similar problem to John’s, but for the real world…

If you asked me, today I’d have to go with Javascript as the scripting language for VR. It’s come a long way from the 90s, esp. with ES6. And, like Thant 20 years ago with Scheme, I can now make JS look like anything I want, with very little performance penalty but lots of flexibility. But the single biggest benefit is that there is just so much MIT-licensed code for NodeJS and browsers. The community wins out in the end. For rapid prototyping, nothing saves time like the code you don’t need to write.
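
For example (a toy sketch of my own, not anything from a real engine), ES6 makes it cheap to build a little show-scripting layer that barely looks like JavaScript at all:

```javascript
// Toy sketch: a tiny fluent "show script" layer in plain ES6.
// Every name here is hypothetical.
const cues = [];
const at = (t) => ({
  play(name)     { cues.push({ t, action: 'play', name });     return this; },
  move(name, to) { cues.push({ t, action: 'move', name, to }); return this; },
});

at(0.0).play('fanfare').move('carpet', [0, 2, 5]);
at(3.5).play('genieLaugh');

console.log(cues); // a timeline any runtime loop could execute
```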

Syntactically, lisp-derivatives aren’t that hard to learn IMO, but it does take some brain warping to get right. I worked with CS legend Danny Hillis for a time and he tried to get me to write the next VR system in Lisp directly. He told me he could write lisp that outperformed C++, and I believed him. But I balked at the learning curve for doing that myself. If other young devs balk at Scheme due to simple inertia, that’s a downside, unfortunately.

Erik Meijer once taught me that Javascript is the assembly language of the internet. With asm.js and WebAssembly, that’s become literally true. There really isn’t anything more appropriate right now as a language for building Cyberspace.



People Actually Care About Privacy

Key findings on American consumers include that:

- 91% disagree (77% of them strongly) that “If companies give me a discount, it is a fair exchange for them to collect information about me without my knowing.”

- 71% disagree (53% of them strongly) that “It’s fair for an online or physical store to monitor what I’m doing online when I’m there, in exchange for letting me use the store’s wireless internet, or Wi-Fi, without charge.”

- 55% disagree (38% of them strongly) that “It’s okay if a store where I shop uses information it has about me to create a picture of me that improves the services they provide for me.”

Source: The Online Privacy Lie Is Unraveling | TechCrunch

I’ve had this same argument for years.

The “smart money” says that people no longer care about privacy. They point to millennials who post tons of embarrassing crap about themselves on Facebook. They say it’s a cultural shift from my generation to the next. Privacy is dead or dying.

I say that teenagers are generally reckless and nonchalant about their own futures, almost as a rite of passage. However, teenagers, for the most part, grow up, become responsible, and have concerns like the rest of us. So I figured the pendulum would swing back towards privacy as soon as these kids got older and saw the pitfalls. The new kids would become the reckless ones.

This study shows that people do actually care about privacy. But cynicism about how much power we have to protect it is a third factor to consider. If people are resigned to losing their privacy, fighting for it feels less vital. It doesn’t mean they care less or are any less harmed. If people felt more empowered, they might even fight for their rights.

For me, this is pretty simple. If I create data by my activities, it’s the same as creating a work of art. It doesn’t matter that my phone is the tool vs. a paint brush or keyboard. This data would not exist except for my actions. I made it and I own it, unless I choose to sell it.

It’s perfectly fine for any adult to trade or sell their own data, as long as there is informed consent and people are in control of their own information.


Why I Re-joined Amazon

I just got outed on Techcrunch. So I’ll come clean. :)

I’ve recently (April 2014) rejoined Amazon as a manager and developer on the Prime Air team.

We’ve set up a new team in downtown SF to focus on some interesting aspects of the project. We’re growing rapidly. If you’re interested in the project and love the Bay Area, feel free to reach out or apply directly via the Amazon website (here or here)

So why did I re-join Amazon?

The simplest answer is that I really admire this team, this project, and this company. I’m not one to gush or blush — if anything I excel at finding fault. But this job is really fun. We have trained professionals who love to do the stuff I don’t.

The project doesn’t need any more hype from me. JeffB already talked about it on 60 Minutes. You may have heard me talk about various superpowers in another context… This is a similar level of game-changer IMO.

Speaking personally, this project meets a number of important requirements for me:

First, it needs to be fairly green-field. I did early AR/VR in the 90s. We built an entire Earth in 2000. I worked on massive multiplayer worlds and avatars after that. I moved on to robotic parachutes in 2004, designed geo-social-mobile apps in 2008, then telepresence and more stuff I can’t talk about after that.

I like to learn fast, often by making mistakes, with a whole lot of guessing and path-finding until the way is clear. By the time 100,000 people are working on something, there are up to 100,000 people who are potentially way smarter than me, plus ample documentation on the right and wrong ways to do anything.

Second, I want to work on projects that use new technology in the most positive ways, sometimes as an antidote to the other negative ones out there. I’ve left companies on that principle alone…

I’ve both given and received some criticism over this – even been called a “hippie.” But I didn’t inhale that sort of sentiment. I just moved on. At the end of the day, I always try to do the right thing and help people wherever I can.

That’s based on what I like to think of as “principles.” Many of the reasons I like Amazon as a company are due to its principles.

At Amazon, I saw these principles come up almost every day on the job and I was suitably impressed. Naturally, they’re used as a kind of lens for job candidates, esp. as a way to efficiently discuss their leadership skills. But these concepts are used and reinforced almost daily for things like professional feedback and taking responsibility, above and beyond our job specs.

I’ve seen senior leaders uphold the “vocally self critical” principle in meetings, where at other companies such behavior might be called a “career-limiting” move. This principle alone meant that even in my earliest interviews, I could be blunt about learning from my past mistakes without worrying if I should say things like “my biggest fault is that I work too hard.” What a relief.

The first Amazon value on the list is, of course, “customer obsession.” There’s no other value that rises above this, not expedience or profit. And in my opinion it shows.

Companies that stick to their principles tend to be consistent and well-trusted. Having clear and understandable principles, reinforcing them, and even working through the times when they seem to be in internal conflict leads to better decisions overall and helps avoid really bad ones.

That’s especially true when you don’t have the luxury of seeing the full repercussions of your choices in advance. These principles are there for when the choices are hard or unclear, not just when they’re easy.

I believe that companies that get this, and especially those that put their customers first, are the ones that will succeed.


BTW, there’s still some perception out there that “the FAA nixed Prime Air.” Here are a few articles that addressed that question directly.


Why I donated to MayOne.us

If Money equals Speech, as the Supreme Court believes, then Money trumps Votes, in terms of sheer influence and power.

Except… there are more of us that can vote with our wallets than all the vested moneyed interests can muster.

First, we have to be united to combat money in politics. That means us collectively spending enough to turn the ship.

It doesn’t matter what you believe in, left or right. If you ever want it to count, then make yourself count here.


Unreal4 runs in the browser. Epic!

Epic Partners With Mozilla To Port Unreal Engine 4 To The Web | TechCrunch.

Using the amazingly cool emscripten and asm.js, it’s now possible (with minimal effort) to port original C++/OpenGL code to run in Javascript/WebGL in many web browsers. That’s amazing. And it almost completely fulfills the vision we discussed many years ago with the likes of Vlad Vukicevic and others. Even the 1.5x performance penalty is not a big deal, considering most of the work in a 3D app is done by the same hardware whether it’s driven via a browser or a native OS. If you write it correctly, it should run at essentially full speed.

This could be catastrophic for companies like Unity3D, who solve cross-platform 3D by making you work in their time-tested little sandbox. The one thing protecting that model is the poor level of support WebGL has on mobile. That’s the last remaining bottleneck to real Web 3D.

Apple only officially supports WebGL inside iAds, proving it’s not a technical problem at least. Android support is variable, but within reach. These conditions are mostly IMO functions of the current lucrative business model for apps, not any lingering hardware or security limits. Consider: if mobile browsers improve, then cool 3D apps are once again free, unchained by “app” and “play” stores and their up to 30% markups.

On the other hand, the web is what built the digital economy that’s fueled mobile growth. Mobile phones have gone back to the pre-browser era to make some money, but it’s inevitable that we’ll all return to a more open ecosystem, esp. on Android. Closed ecosystems like AOL only lasted until people found the door.

Nicely done, Vlad, Tony, Ken, and more.


In the shadow of Google Glass

Here’s a fun Verge article from last spring that mentioned me nicely. I don’t seek out much press, but it’s nice to get a good review.

If you haven’t seen the video, you can watch me battle the stage lights below.

In an inspirational speech, Avi Bar-Zeev of Syntertainment, a startup in stealth mode, suggests that [Augmented Reality] could change the world.

“Every game-changing technology can be recast as a human superpower,” he suggests, likening the television to primitive clairvoyance, the telephone to telepathy, and the wheel to telekinesis. “If I decide I want that rock to move, I have the power to make it move with much less effort,” he says. But if you could reshape your reality at will, could “teleport” elsewhere, he asks, what would it mean to be in jail? Bar-Zeev also points out that the difference between augmented reality and virtual reality is purely semantic if you imagine screens built into contact lenses. “What’s the difference between AR and VR? Open and close your eyes. That’s it.”


Microsoft shows off WebGL for IE11

This is awesome news.

Microsoft shows off WebGL, touch-capable features in Internet Explorer 11 | Ars Technica.

As Will Wright noted earlier about a different “reversal,” it’s always a good thing when the world’s largest software company actively listens to its users and what they want.

Great things will happen as a result.


Google’s Michael Jones on How Maps Became Personal

Here’s a great interview with my former CEO/CTO, the brilliant Michael T. Jones, in The Atlantic. Link goes to the extended version.

[BTW, The Atlantic seems to be on a tear about Google lately (in a good way), with John Hanke and Niantic last month and lots on Glass recently as well. With that and Google getting out of federal antitrust hot water, it seems they’re definitely doing something right on the PR front.]

Quoting Michael on the subject of personal maps:

The major change in mapping in the past decade, as opposed to in the previous 6,000 to 10,000 years, is that mapping has become personal.

It’s not the map itself that has changed. You would recognize a 1940 map and the latest, modern Google map as having almost the same look. But the old map was a fixed piece of paper, the same for everybody who looked at it. The new map is different for everyone who uses it. You can drag it where you want to go, you can zoom in as you wish, you can switch modes (traffic, satellite), you can fly across your town, even ask questions about restaurants and directions. So a map has gone from a static, stylized portrait of the Earth to a dynamic, interactive conversation about your use of the Earth.

I think that’s officially the Big Change, and it’s already happened, rather than being ahead.

It’s a great article and interview, but I’m not so sure about the “already happened” bit. I think there’s still a lot more to do. From what I see and can imagine, maps are not that personal yet. Maps are still mostly objective today. Making maps more personal ultimately means making them more subjective, which is quite challenging but not beyond what Google could do.

He’s of course 100% correct that things like layers, dynamic point of view (e.g., 2D pan, 3D zoom) and the like have made maps much more customized and personally useful than a typical 1940s paper map, such that a person can make them more personal on demand. But we also have examples from the 1940s and even the 1640s that are way more personal than today.

For example, consider the classic pirate treasure map at right, or an architectural blueprint of a home, or an X-ray that a surgeon marks up to plan an incision (not to mention the lines drawn ON the patient — can’t get much more personal than that).

Michael is right that maps will become even more personal, but only after one or two likely things happen next IMO: companies like Google know enough about you to truly personalize your world for you automatically, AND/OR someone solves personalization with you, collaboratively, such that you have better control of your personal data and your world.

This last bit goes to the question of the “conversation,” which I’ll get to by the end.

First up, we should always honor the value that Google’s investments in “Ground Truth” have brought us, where other companies have knowingly devolved or otherwise strangled their own mapping projects, despite the efforts of a few brave souls (e.g., to make maps cheaper to source and/or more personal to deliver). But “Ground Truth” is, by its very nature, objective. It’s one truth for everyone, at least thus far.

We might call the more personalized form of truth “Personal Truth” — hopefully not to confuse it with religion or metaphysics about things we can’t readily resolve. It concerns “beliefs” much of the time, but beliefs about the world vs. politics or philosophy. It’s no less grounded in reality than ground truth. It’s just a ton more subjective, using more personal filters to view narrow and more personally-relevant slices of the same [ultimately objective] ground truth. In other words, someone else’s “personal truth” might not be wrong to you, but wrong for you.

Right now, let’s consider what a more personal map might mean in practice.

A theme park map may be one of the best modern (if not cutting edge) examples of a personal map in at least one important sense: not that it’s unique per visitor (yet), but that it conveys extra, personally useful information to one or more people, though certainly not to everyone.

It works like this. You’re at the theme park. You want to know what’s fun and where to go. Well, here’s a simplified depiction of what’s fun and where to go, leaving out the crowds, the lines, the hidden grunge and the entire real world outside the park.  It answers your biggest contextual questions without reflecting “ground truth” in any strict sense of the term.

Case in point: the “Indiana Jones” ride above is actually contained in a big square building outside the central ring of the park you see here. Yet you only see the entrance/exit temple. The distance you travel to get to and from the ride is just part of the normal (albeit long) line. So Disney safely elides that seemingly irrelevant fact.

Who wants to bet that the ground-truth scale of the Knott’s Berry Farm map is anywhere near reality?

Now imagine that the map can be dynamically customized to reveal only what you’d like or want to see right now. You have toddlers in tow? Let’s shrink most of the rollercoasters and instead blow up the kiddie land in more detail. You’re hungry? Let’s enhance the images of food pavilions with yummy food photos. For those into optimizing their experience, let’s also show the crowds and queues visually, perhaps in real-time.
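
As a toy illustration of that idea (all weights and names invented for the sketch), the map could scale each feature by how much the current visitor cares about it:

```javascript
// Toy sketch: scale each park feature by the visitor's current
// interests. All numbers and field names are hypothetical.
const interests = { kiddieRides: 0.9, rollercoasters: 0.1, food: 0.6 };

const features = [
  { name: 'Kiddie Land',   kind: 'kiddieRides',    baseScale: 1.0 },
  { name: 'Big Coaster',   kind: 'rollercoasters', baseScale: 1.0 },
  { name: 'Food Pavilion', kind: 'food',           baseScale: 1.0 },
];

function personalScale(feature) {
  const w = feature.kind in interests ? interests[feature.kind] : 0.5;
  return feature.baseScale * (0.5 + w); // shrink the ignored, enlarge the loved
}

for (const f of features) {
  console.log(f.name, personalScale(f).toFixed(2));
}
// Kiddie Land 1.40, Big Coaster 0.60, Food Pavilion 1.10
```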

A Personal Map of The World is one that similarly shows “your world” — the places and people you care most about or are otherwise relevant to you individually, or at least people like you, collectively.

Why do you need to see the art museum on your map if you don’t like seeing art? Why do you need to see the mall if you’re not going shopping or hanging out?

The answer, I figure, is that Google doesn’t really know what you do or don’t care about today or tomorrow, at least not yet. You might actually want to view fine art or go shopping, or plan an outing with someone else who does. That’s often called “a date.” No one wants to “bubble” you, I hope. So you currently get the most conservative and broadest view possible.

How would Google find out what you plan to do with a friend or spouse unless you searched for it? Well, you could manually turn on a layer: like “art” or “shopping” or “fun stuff.” But a layer is far more like a query than a conversation IMO — “show me all of the places that sell milk or cheese” becomes the published “dairy layer” that’s both quite objective and not much more personal than whether someone picks Google or Bing as their search engine.

Just having more choices about how to get information isn’t what makes something personal. It makes it more customized perhaps… For truly personal experiences, you might think back to the treasure map at the top. It’s about the treasure. The map exists to find it.

Most likely, you want to see places on the map that Google could probably already guess you care about: your home, your friends’ homes, your favorite places to go. You’d probably want to see your work and the best commute options with traffic at the appropriate times, plus what’s interesting near those routes, like places that sell milk or flowers on the way home.

Are those more personal than an art layer or even a dairy layer? Perhaps.

Putting that question aside for a moment, an important and well known information design technique focuses on improving “signal to noise” by not just adding information but more importantly removing things of lesser import. You can’t ever show everything on a map, so best to show what matters and make it clear, right?

City labels, for example, usually suppress adjacent labels of less importance (e.g., neighborhoods) so that the important ones stand out. You can actually see a ring of “negative space” around an important label if it’s done properly.
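
A minimal version of that decluttering rule (my own sketch, not any particular map renderer) is simply: place labels in priority order, and drop any label that falls inside a more important label’s negative space.

```javascript
// Sketch of priority decluttering: a label is drawn only if no more
// important label already sits within its "negative space" radius.
function declutter(labels, radius) {
  const placed = [];
  for (const label of [...labels].sort((a, b) => b.priority - a.priority)) {
    const blocked = placed.some(
      (p) => Math.hypot(p.x - label.x, p.y - label.y) < radius
    );
    if (!blocked) placed.push(label);
  }
  return placed;
}

const labels = [
  { text: 'San Francisco',    priority: 10, x: 0, y: 0 },
  { text: 'Mission District', priority: 3,  x: 5, y: 4 },
];

console.log(declutter(labels, 20)); // only the city label survives here
```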

In the theme park map example, we imagined some places enlarged and stylized to better convey their meaning to you, like with the toddler-friendly version we looked at. That’s another way to enhance signal over noise — make it more personally relevant. Perhaps, in the general case, your house is not just a literal photo of the structure from above, but rather represented by a collage of your family, some great dinners you remember, your comfy bed or big TV, or all of the above — whatever means the most to you.

That’s also more personal, is it not?

Another key set of tools in this quest concerns putting you in charge of your data, so you can edit that map to suit and even pick from among many different contexts.

Google already has a way to edit in their “my maps” feature. But even with the vast amount of information they collect about us, it’s largely a manual or right-click-to-add kind of effort. Why couldn’t they draw an automatic “my maps” based on what they know about us already? Why isn’t that our individual “base layer” whenever we’re signed in, collecting up our searches in an editable visual history of what we seem to care about most?

Consider also, why don’t they show subjective distances instead of objective ones, esp. on your mobile devices? This is another dimension of “one size fits all” vs. the truly personal experience to which we aspire.

A “subjective distance” map also mirrors the theme park examples above. If you’re driving on a highway, places of interest (say gas stations) six miles down the road but near an off-ramp are really much “closer” than something that’s perhaps only 15 feet off the highway, but 20 feet below, behind a sound wall and a maze of local streets and speed bumps.
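
One way to make that concrete (a sketch with invented numbers and field names, not how any shipping router works) is to rank places by estimated travel effort from your current route rather than by straight-line distance:

```javascript
// Sketch of "subjective distance": rank places by travel effort from
// the route, not by Cartesian distance. All values are hypothetical.
function subjectiveDistance(place) {
  // Effort in minutes: reaching an exit, local streets, plus a hassle
  // penalty for sound walls, speed bumps, and the like.
  return place.minutesToExit + place.minutesOnLocalRoads + place.hasslePenalty;
}

const gasStations = [
  { name: 'six miles ahead, right at the off-ramp',
    minutesToExit: 6, minutesOnLocalRoads: 1, hasslePenalty: 0 },
  { name: '15 feet away, behind the sound wall',
    minutesToExit: 10, minutesOnLocalRoads: 12, hasslePenalty: 5 },
];

gasStations.sort((a, b) => subjectiveDistance(a) - subjectiveDistance(b));
console.log(gasStations.map((g) => g.name)); // the "farther" one ranks first
```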

How do you depict that visually? Well, for one, you need to start playing more loosely with real world coordinates and scale, as those cartoon maps above already do quite well. Google doesn’t seem to play with scale yet (not counting the coolness of continuous zoom — the third dimension). I’m not saying it’s easy, given how tiled map rendering works today. But it’s certainly possible and likely desirable, especially with “vector” and semantic techniques.

For a practical and well known example, consider subway maps. They show time-distance and conceptual-distance while typically discarding Cartesian relationships (which is the usual mode for most maps we use today).

I have no idea where these places (below) are in the real world, and yet I could use this to estimate travel time and get somewhere interesting. And in this case, I don’t even need a translator.

Consider next the role of context. Walking is a very different context from driving when computing and depicting more personalized distance relationships. If I’m walking, I want to see where I can easily walk and what else is on the way. I almost certainly don’t want to walk two hours past lunch to reach a better restaurant. I’m hungry now. And I took the train to work today, don’t you remember?

Google must certainly know most of that by Now (and by “Now” I mean “Google Now”). So why restrict its presence to impersonal pop up cards?

Similarly, restaurants nearby shouldn’t be filtered by Cartesian distance, but rather by what’s in this neighborhood, in my interest graph, and near something else I might also want to walk to (e.g., dinner, movie, coffee == date), based on the kinds of places we (my wife and I) might like.

Context is everything in the realm of personal maps. And it seems context must be solicited in some form. It’s extremely hard to capture automatically partly because we often have more than one active context at a time — I’m a husband, a father, a programmer, a designer, a consumer, a commuter, and a friend all at the same time. So what do I want right now?

Think about how many times you’ve bought a one-time gift on Amazon only to see similar items come up in future recommendations. That’s due to an unfortunate lack of context about why I bought that and what I want right now. On the other hand, when I finish reading a book on my Kindle, Amazon wisely assumes I’m in the mood to buy another one and makes solid recommendations. That’s also using personal context, by design.

The trick, it turns out, is figuring out how to solicit this information in a way that is not creepy, leaky, or invasive. That same “fun factor” Michael talks about that made Google Earth so compelling is very useful for addressing this problem too.

Given what we’ve seen, I think Google is probably destined to go the route of its “Now” product to address this question. Rather than have a direct conversation with users to learn their real-time context and intent, and thus truly personalize maps, search, ads, etc., Google will use every signal and machine learning trick they can to more passively sift that information from the cumulative data streams around you — your mails, your searches, your location, and so on.

I don’t mean to be crude, but it’s kind of like learning what I like to eat from living in my sewer pipes. Why not just ask me, inspector?

I mean, learning where my house is from watching my phone’s GPS is a nice machine learning trick, but I’m also right there in the phone book. Or again, just ask me if you think you can provide me with better service by using that information. If you promise not to sell it or share it, and to delete it when I want you to, I’m more than happy to share, esp. if it improves my view of the world.

So why not just figure out how to better ask and get answers from people, like other people do?

If the goal is to make us smarter, then why not start with what WE, the users, already know, individually and collectively?

And more importantly, is it even possible to make more personal maps without making the whole system more personal, more human?

The answer to what Google can and will do probably comes down to a mix of their company culture, science, and the very idea of ground truth. Data is more factual than opinions, by definition. Algorithms are more precise than dialog. It’s hard to gauge, test, and improve based on anyone’s opinions or anything subjective like what someone “means” or “wants” vs. what they “did” based on the glimpses one can collect. Google would need a way of “indexing” people, perhaps in real-time, which is not likely to happen for some time. Or will it?

When it comes to “Personal Truth” vs. “Ground Truth,” the perception and context of users are what matter most. And the best way to learn and represent that information is without a doubt to engage people more directly, more humanely, with personalized information on the way in and on the way out.

This, I think, is what Michael is driving at when he uses the word “conversation.” But with complete respect to Michael, the Geo team, and Google as a whole, I think it’s still quite early days — and I’m looking forward to what comes next.

via Google’s Michael Jones on How Maps Became Personal – James Fallows – The Atlantic.


Mapping The Entertainment Ecosystems of Apple, Microsoft, Google & Amazon


Without commenting on the companies themselves, this is definitely worth reading for yourselves (Thanks Daniel!):

A convergence towards Apple’s business model

It’s interesting to note that all four of the companies listed have various different core business models (hardware, search, retail, software) but they have all in recent years come to create personal computing devices with their own operating system running on top of the device and additionally these entertainment ecosystems. Five years ago, Apple was the only one doing this complete trio of device + OS + entertainment services.

Mapping The Entertainment Ecosystems of Apple, Microsoft, Google & Amazon.


3D Photo Booth

The ultimate vision among 3D printing enthusiasts is the Replicator from Star Trek (perhaps combined with the Teleporter for the live scanning part, if not the “beaming” itself). For others, it’s all a big fax machine or laser printer, just in 3D, designed to save us time, travel, and money. For most of us, it’s a way to build things that never existed before, a supreme reification of intangible ideas into physical reality.

The state of the art is still somewhat short of all of those goals, but advancing rapidly, focusing on cost, speed, resolution, and even articulation of parts. Making 3D figurines of you and your loved ones is an interesting stop along the way.

The truth is that people have thought about 3D scanning and printing for decades, and this is often a top request (I can’t tell you how many people thought they came up with this idea).

The devil is always in the details, at least for now. For example, how does the 3D printer in this Japanese “3D photo booth” apply subtle color gradations to make your skin look real? Some affordable commercial 3D printers can do a small number of matte colors, one at a time. High-end full-color 3D printers are coming down in price. How does the software stitch a solid 3D likeness from multiple stereoscopic images? (Hint: they say you need to stand still while they take multiple photos or video.)

But it doesn’t really matter, as long as it’s economical and people want to buy these at some price, which I figure they will. FWIW, 32,000 yen = about $400 by my math. What would you pay?

Process | OMOTE 3D SHASIN KAN. (via Gizmodo)
