Archive for November, 2008
People wonder why Google pulled Lively after such a short run. After all, it couldn’t have been too expensive to run, at least not compared to Google’s monthly income. And at least some people were using it.
As a learning sandbox, I thought it would actually last longer, providing valuable intelligence and insight into whether, how, and why people use virtual worlds to interact and spend time.
In the end, I’d speculate that’s why it was killed. They learned that griefing sucks. They learned that teenagers only hang out at the mall because it’s where their friends go and–let’s face it–they have nothing better to do. And they learned that user behavior in a semantically-limited virtual world can’t easily be mined for clues as to which ads will make the most money.
People wonder whether this withdrawal signals a downturn for virtual worlds. I hate to break it to you. The economy signals a downturn for virtual worlds — at least for those companies that require cash to operate. Google could have afforded to let this experiment run for 1000 years. But Google, like most companies, probably likes to focus on fundamentals when they anticipate a material change in the bottom line.
Google did, after all, announce they’d focus on their core businesses well before Lively even launched. There was apparently a sense that too many diverse projects were pulling the company in too many directions. I take it that ongoing projects at least got to finish and maybe launch. But the bar got that much higher for the more speculative projects, I figure, which frankly happens at companies like Microsoft too…
From my perspective, the time to invest in speculative projects is exactly during a downturn — but only if you have the cash. This is your prime advantage if your competitors are stuck making ends meet or failing. Why take a pit stop if you have the gas? On the other hand, when your competitors pull back and slow down, even the leading Formula 1 car might ease up and coast for a while.
Bottom line, and just to be clear: virtual worlds do have a future. Unfortunately, it’s still in the future. Those that find their niche to survive deserve kudos. But no one — no one — has yet cracked the code on making virtual worlds ubiquitous and, frankly, useful, in the sense that cell phones, sneakers, or even shoe laces are.
The tension between engineering and design is felt in software development in one place more than any other: user experience and user interface design. The fact that the discipline is called ‘design’ shows the bias clearly. UIs are voodoo art, they say. Users are entirely unlike computer hardware — fuzzy, irrational, and wet — and no compiler will validate your design for you.
That much is true. But like much of software engineering, experimentation, validation, and testing is the name of the game. And even the best design firms in the world do not resort to one genius-level artist, alone in a tower, handing tablets from on high. They mock-up designs and test them out internally and with impartial human testers, which I’ll argue is far more science than art.
Which is exactly why engineers can be great interface designers too — if they learn the language. Freehand drawing skills and a photographic eye may be largely innate. Design skills, on the other hand, can be learned, both directly and through experience — though many of us never do.
CNN’s "Holographic Interview" technology seems like a lot of fluff — all visual tricks no doubt, and not at all the magic Princess Leia hologram we all expect. Right? So goes the blogosphere.
But first of all, the Princess Leia hologram — even assuming Star Wars really happened (a long, long time ago…) — is also just a trick of light. What people should realize is that there is some genuinely cool technology going on here.
Remember that experiment at the Super Bowl a few years back, where they could virtually rotate the view using a fixed array of cameras? Well, it kind of sucked: the cameras weren’t aligned very well, and the interpolation didn’t really work.
But with this technology, the same approach seems to actually work right. The companies doing the work here are Vizrt and SportVU, which means it may involve my old friend Ran Yakir (hi, Ran).
From what I understand, the way it works is to place 35 cameras in a circle at the transmitting end. A single virtual viewpoint, based on the real-time studio camera position, is synthesized from those 35 cameras and sent to the CNN studio for compositing into the final image.
Does Wolf really see the Hologram? Yes, probably, but not in open space as we’d expect. It’s probably on a monitor. That part is the trick, as with virtual sets in general. But the real-time interpolation of a circle of 35 cameras is still very cool.
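To make that interpolation concrete, here’s a minimal sketch of how a virtual viewpoint on a 35-camera ring might be indexed: pick the two cameras that bracket the desired viewing angle and compute crossfade weights between them. The camera count and even spacing follow the description above; the linear-blend scheme is purely my own illustration. Real view synthesis would warp pixels using scene geometry, not just crossfade whole frames.

```python
import math

NUM_CAMERAS = 35  # per the described rig; even spacing is assumed

def blend_weights(view_angle, n=NUM_CAMERAS):
    """For a virtual viewpoint at view_angle (radians), return the
    indices of the two bracketing cameras on the ring and the linear
    blend weight for each: (cam_a, cam_b, weight_a, weight_b)."""
    step = 2 * math.pi / n            # angular spacing between cameras
    a = view_angle % (2 * math.pi)    # normalize into [0, 2*pi)
    cam_a = int(a // step)            # camera just before the viewpoint
    cam_b = (cam_a + 1) % n           # camera just after (wraps the ring)
    t = (a - cam_a * step) / step     # fractional position between them
    return cam_a, cam_b, 1.0 - t, t
```

A viewpoint exactly on camera 0 yields weights (1.0, 0.0); a viewpoint halfway between cameras 0 and 1 yields (0.5, 0.5).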
Why not 36 cameras, you ask, such that each slice is exactly 10 degrees? No one knows for sure. But since the Hebrew calendar has 13 months instead of 12, maybe this circle has 350 degrees to compensate… (kidding)
Anyway, what would be even cooler is if they could send those 35 channels of HD video as a bundle and have your home PC do the interpolation. With some head tracking and maybe some stereoscopic viewing, you too could see a cool floating 3D-ish hologram, with the synthesized views indexed to your actual viewing angle(s). 35 HD video channels is all it takes, and a really fast PC. No problem!
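For a sense of what “35 HD channels” actually means on the wire, here’s a back-of-envelope calculation. The per-stream bitrates are my own assumptions, not figures from any announcement: a rough 2008-era compressed HD broadcast rate, and the approximate SMPTE 292M HD-SDI rate for uncompressed.

```python
# Back-of-envelope bandwidth for shipping 35 HD channels to a home PC.
# Both per-stream bitrates below are assumptions for illustration.
streams = 35
mbps_compressed = 8        # rough H.264-era HD broadcast rate (assumed)
mbps_uncompressed = 1485   # approx. SMPTE 292M HD-SDI serial rate

total_compressed_mbps = streams * mbps_compressed             # 280 Mbps
total_uncompressed_gbps = streams * mbps_uncompressed / 1000  # ~52 Gbps

print(f"compressed:   ~{total_compressed_mbps} Mbps")
print(f"uncompressed: ~{total_uncompressed_gbps:.0f} Gbps")
```

Even generously compressed, that’s hundreds of megabits per second into the home — which, in 2008, rather underlines the “No problem!” sarcasm.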