Archive for July, 2008
Stefan at OgleEarth has [apart from stirring up some minor controversy over just how many Google Earth users there are — my take: 400M "unique installations" is an amazing metric, but the numbers I’d love to know (and probably never will) are min, max, and mean concurrent users…] done a great job rounding up multi-touch interface tech over the past few months.
Randy once said that he was grateful that with all of the world-wide attention he’d received, everything that could possibly be done to prolong his life was being done and that he had no regrets.
My deepest condolences to his family.
Last fall, I told you about my family’s big move from New York to Pasadena so I could join a cutting-edge startup. Well, this spring, I experienced Job Search 2: Electric Boogaloo.
It’s not easy to do two potentially life-changing job searches in any given year. And it isn’t exactly what I’d planned when I started out. But, as always, things seem to work out well in the end. So here’s the even bigger announcement…
Call me a cynic, but I have still not bought into the notion that 3D Avatar Chat (called the 3D internet by some) will, by itself, change how we live. Still, futurlologists (a typo I’m going to keep) like to point to investment in virtual worlds as a barometer of how big they’ll be. Big, huge, and 3D!
Well, I take investment in virtual worlds as a much more basic barometer of how much money people think they can make in the next 2 years, assuming, of course, their particular project turns into the 3D equivalent of MySpace, Facebook, or even Orkut (at the lower end of success, at least in the US). I don’t know how many people invest $7-10m in a 3D startup hoping it’ll become the next BlabNote, for example, though many wind up that way — goofy and marginalized.
In writing about Google Lively, I’ll go a little further and explain what I think is coming from Google and others in the next few years. I’ll call this service "Google Me," which is the natural extension of behavioral advertising. Here’s how it’ll probably work:
By using this service (or related services), you’ll give the company permission to maintain and improve an "anonymized" version of you. It’s anonymized in name only, because the point of the system is to know you inside and out — what you like, what you do, and what you’re going to do. It’s a virtual you, way beyond your 3D avatar or search history. It can even predict your future, or at least your choices, which is not that far off.
The mythical Star Trek Tricorder would seem to need a few basic technologies to be brought into existence. This is one of them. And it’s being miniaturized to the point where field-diagnosis might be possible. It could theoretically detect any molecule for which it has trained nanoparticles. Couple this with other nanoparticles that could attach to the bad matter and serve as a lightning rod for a strong RF field (as is being done for cancer research), and you have yourself an external immune system to detect and treat a host of illnesses on the spot.
Fascinating, isn’t it, that the first basic applications of nanotech simply use nanoparticles as glue. Imagine what’ll be possible when the nanoparticles give way to nanomachines, capable of actually repairing and/or changing basic biological processes, and then changing matter itself.
In conventional NMR spectroscopy machines, powerful fields are necessary to line up individual nuclei.
However, Ralph Weissleder at Harvard Medical School in Cambridge, Massachusetts, US, and colleagues have found that magnetic nanoparticles generate a much larger signal than single nuclei, and can thus be detected using the weaker fields from small permanent magnets.
The trick that Weissleder and colleagues have perfected is to coat these nanoparticles with molecules that bind to specific biomolecules, or bacteria and viruses.
This binding process causes the nanoparticles to clump together, producing a measurable change in the signal they produce. In this way, the team says it can identify a large variety of biological targets.
The team has squeezed the electronics that detect and interpret the signals onto a chip just 2 millimetres square (pdf format).
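The detection scheme above boils down to: free-floating nanoparticles relax nearby nuclei one way, target-bound clumps another, and the difference in decay is the readout. Here’s a minimal toy model of that logic, assuming (as a simplification) that aggregation around a target shortens the effective T2 relaxation time; the specific numbers and threshold are made up for illustration:

```python
import math

def nmr_signal(t_ms, t2_ms):
    """Transverse NMR signal amplitude after time t, simple exponential T2 decay."""
    return math.exp(-t_ms / t2_ms)

def detect_target(t2_baseline_ms, t2_sample_ms, echo_ms=50.0, threshold=0.05):
    """Flag a sample as positive if its signal at the echo time differs from the
    particle-only baseline by more than `threshold`.
    (Toy model: clumping around a target shortens the effective T2.)"""
    delta = abs(nmr_signal(echo_ms, t2_sample_ms) - nmr_signal(echo_ms, t2_baseline_ms))
    return delta > threshold

# Hypothetical values: free nanoparticles give T2 = 100 ms,
# target-bound clumps give T2 = 40 ms.
print(detect_target(100.0, 40.0))   # True: clumping shifted the signal
print(detect_target(100.0, 98.0))   # False: no meaningful change
```

The real device measures this shift with tiny permanent magnets precisely because the nanoparticles amplify the signal so much; the sketch only captures the thresholding logic, not the physics.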
[I decided to move this update from the previous post to its own post, since most of you come here via RSS and might not see those edits]
I got a few emails and saw lots of comments on various blogs essentially saying "WTF?" about why Google is releasing this virtual world. Seems nuts, right? It’s not search. It’s not organizing the world’s information. Some people are thinking about whether this will hurt Second Life or not (my take: Second Life doesn’t need any assistance in the positive or negative sense — it’s up to them to succeed or fail since they’re still one of a kind).
Here’s the deal, and why this is a smart move for Google (not that I’m always going to defend them). First, their game is to sell advertising. They do talk about "organizing the world’s information." And this fits — if you count advertising as information that needs to be organized…
Recall, they went from clever and minimalistic "contextual" advertising to more useful "behavioral," increasing their CPMs and market share. The next step is to have complete virtual versions of you (behavioral models, and ones that you directly control) that they can exploit to help advertisers sell you stuff.
The most basic form of that is what SceneCaster, Vivaty, and IMVU try to do already — product marketing — offering virtual versions of real-world objects you can buy, e.g., a La-Z-Boy couch or a pair of Ray-Ban sunglasses (Big Stage was doing that too), or entire sets designed to sell you some brand identity. It’s not particularly smart in a computer-science way — people simply self-select the content they like and that’s that. But the bigger step is to have enough of an active user base to be able to extract trends and behavior on a personal, network (social), and aggregate level, which Google can then use to better target ads at each individual user and make more money.
For example, they could track the adoption of some cool, new virtual accessory through their worlds and see who are the trend setters and who are the sheep — perhaps even tracking what you look at on your computer screen (and for how long) against what you buy. Imagine what you could do with that information.
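To make the trend-setters-vs-sheep idea concrete, here’s a minimal sketch of one obvious heuristic: score each user by how many of their friends adopted the item after they did. The data, names, and scoring rule are all hypothetical; this is not anything Google has described, just an illustration of how little machinery the basic idea needs:

```python
def trendsetter_scores(adopted_at, friends):
    """Score each adopter by how many of their friends adopted *after* them.
    adopted_at: {user: adoption timestamp}; friends: {user: set of friends}.
    High scorers look like trend setters; zero scorers look like followers.
    (Illustrative heuristic only.)"""
    scores = {}
    for user, t in adopted_at.items():
        later = [f for f in friends.get(user, ())
                 if f in adopted_at and adopted_at[f] > t]
        scores[user] = len(later)
    return scores

# Toy adoption data for a new virtual accessory (made-up users and times):
adopted_at = {"ann": 1, "bob": 2, "cat": 3, "dan": 3}
friends = {"ann": {"bob", "cat"}, "bob": {"ann", "dan"},
           "cat": {"ann"}, "dan": {"bob"}}
print(trendsetter_scores(adopted_at, friends))
# {'ann': 2, 'bob': 1, 'cat': 0, 'dan': 0} — ann led the trend
```

A real system would obviously fold in gaze time, purchases, and network structure, but the point stands: once you have the adoption timeline and the social graph, ranking influencers is trivial.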
There’s a reason Google unilaterally keeps your personally identifiable data for 18 months when they shouldn’t be keeping it at all as far as I’m concerned. It’s their bread and butter, though it sucks for us (for reasons that should now be obvious, given the Viacom suit).
Anyway, lots of people have thought about how to monetize free virtual worlds. But no one has the secret sauce to making non-game virtual worlds fun (for everyone), not even Linden Lab, and they’ve been at this almost as long as ActiveWorlds now. Google has an opportunity, a big sandbox, to figure out what to do next. But I’m still skeptical — trying to make compelling virtual worlds today is like trying to read a book through a straw — interface is still going to limit everything.
Still, the more they integrate this product with their other "free" services, like chat, mail, apps, and social networking, the more likely they are to figure it out.
[Update 2: a very intelligent post by Virtual World News with lots of good info]
Ah, finally, we get to see the long-rumored Google Virtual World, or one of them anyway. There’s also rumored to be another one with a remarkably similar aesthetic that’s only 2D and doesn’t require a plugin or install. [I imagine they could use this 3D technology on the server side to flatten their avatars into isometric sprites that could be rendered in Flash or AJAX, à la Google Maps on the client. Any bets?]
Well, I have to say, I like the cartoony look a lot. I think that aspect is a winner, if not entirely new. Kudos to the art directors and modelers. Interaction still needs some work though. And, as with most avatar chat worlds, I quickly got bored.
Alas, that’s going to be the big test of all of the new web-based 3D worlds (and for some reason, many of them decided to go open beta today).
I never quite got into Second Life on a personal-time-spending level. But you have to admit, being a long-lived virtual theme park/carnival has afforded it a few solid years to build up a wealth of dynamic content.
Whether any one of these new 3D worlds will do the same or better will depend on penetration among the target audience (I imagine: people who grew too cool for Club Penguin or Habbo) — whether they can attract the numbers of people that will in turn attract the talent they need to make the community vibrant.
It’s the same problem as with any night club, I suppose, and a surprisingly similar experience (minus the alcohol). Expect a lot of the same solutions: Mass Marketing (or the opposite, Hidden Doorways for the really cool kids), Celebrity Gravity, Product Tie-ins, Special Events, ad nauseam.
And don’t forget ye olde Happy Hour. Ladies drink free. Or perhaps not.
P.S. I wonder if the name Lively will rile Microsoft’s trademark ire. It’s not like we’d ever see a Lively.Live.com moniker due to the sheer awkwardness (or maybe we would? — just kidding). Strange too that competitors like Vivaty are all riffing on the same "alive" or "vibrant" sounding names when their products are not quite as "alive" or "vibrant" or "lively" as one would hope. Perhaps it’s just premature, or perhaps it’s just wishful thinking.
Here’s a decent Guardian article about work being done on the VR contact lens — a magical mythical interface device I’ve been writing about since 1994 at least.
Raph Koster, at the Virtual Worlds Summit last Feb, used a story about this device to say "look how fast science fiction becomes reality." Except he missed a few key facts (he’s a friend, so I can tease him): the experimental device doesn’t actually work — it doesn’t even have display elements yet — and has only been tested on rabbits for wearability, i.e., did the rabbit successfully wear a circuit-laden device for 10 minutes without dying…
I used the exact same technology example for the opposite intent — to point out how long it takes for this stuff to actually become reality — 10-15 years quite often. People were just starting to think about it back in the 90s and the circuit printing technology wasn’t even close to being viable until a few years ago. In 1994, the Laser VRD (a low-power laser shined directly onto your retina) was the size of a coffee table. I had a demo of it back then at the University of Washington. It was one color, just a simple grid. And if I close my eyes, I can still see the ghost image…
I’m kidding. Anyway, that stuff led to Microvision, which is now about to make money apparently on pico-projectors for cell phones etc., using an offshoot of the same tiny laser technology.
Speaking of lasers, the article goes on and on about the difficulties of focusing LEDs sitting right above the cornea down onto the retina. Duh. Let’s not waste time on known issues. The solution is coherent light, in the form of an on-chip laser with micro-mirrors, or, at worst, that same LED array, but with an under-layer of holographic film to simulate a physical lens assembly, collimating the display. The benefit of the laser is its effectively infinite resolution, limited only by the speed of the mirrors.
That’s not even the biggest hurdle, alas. Power is a big problem. But simple eye movement is going to cause major headaches for decent AR registration. Contact lenses float around the eye constantly, sliding, rotating. The lenses would need at least two precisely tracked points with very low latency to properly align virtual light with real images and account for rapid eye movements (some very rapid, some very slow). And that’s assuming any drift in the display element from the pupil center can be tolerated.
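Why at least two tracked points? Because a lens sliding and rotating on the eye is (to a first approximation) a 2D rigid motion, and two fiducials pin down a rigid motion exactly: one point gives you translation, the pair gives you rotation. Here’s a minimal sketch of the recovery math, with made-up drift numbers; a real system would need this at very low latency, plus pupil tracking on top:

```python
import math

def rigid_from_two_points(p1, p2, q1, q2):
    """Estimate the rotation (radians) and translation mapping reference
    fiducials p1, p2 to their currently observed positions q1, q2.
    Two points determine a 2D rigid motion exactly, which is why the lens
    needs at least two tracked markers."""
    # Rotation: angle between the inter-fiducial vectors.
    ang_p = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    ang_q = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    theta = ang_q - ang_p
    # Translation: where p1 lands after rotation vs where it was observed.
    c, s = math.cos(theta), math.sin(theta)
    tx = q1[0] - (c * p1[0] - s * p1[1])
    ty = q1[1] - (s * p1[0] + c * p1[1])
    return theta, (tx, ty)

# Simulate a lens that slid by (0.3, -0.1) mm and rotated 5 degrees:
theta = math.radians(5)
c, s = math.cos(theta), math.sin(theta)
def drift(p):
    return (c * p[0] - s * p[1] + 0.3, s * p[0] + c * p[1] - 0.1)

p1, p2 = (1.0, 0.0), (0.0, 1.0)
est_theta, (tx, ty) = rigid_from_two_points(p1, p2, drift(p1), drift(p2))
print(round(math.degrees(est_theta), 3), round(tx, 3), round(ty, 3))
# recovers roughly 5.0 0.3 -0.1
```

Once you have theta and the translation, you apply the inverse transform to the virtual imagery every frame so it stays registered against the world despite the lens floating around.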
This all boils down to 5-7 years of work or some major investment by a big company, which could reduce it to maybe 3-5 years. However, it’d be worth it. I don’t wear contacts today, but I would if they gave me what this device promises. VR glasses can’t even match them in some ways — if you close your eyes, you see your eyelids. But contacts would still be visible, providing a natural distinction to jump between the augmented world and the purely fictional metaverse. Just close your eyes, and you’ll be transported.
A senior executive at Electronic Arts (EA), the company which owns the Sims franchise, said that in light of the popularity of virtual worlds and other computer games which allow players to compete with each other via the web, the Sims may soon become a multi-player game.
It has to be a joke, right? Nancy Smith, if anyone, should be familiar with the now-euthanized EA-Land, née The Sims Online, which was, in fact, an MMO experience based on The Sims.
But I’m thinking maybe the reporter missed the story here. I’m thinking that the point is not to turn the Sims 2 or 3 into another MMO, but to take a page from Facebook and Spore and make the Sims more of a massively single player experience, but with extensive social elements.
In other words, the doll houses are not all in the same shared town — you might visit a friend’s house, but don’t expect shared public spaces, since that’s most likely where TSO went wrong. The point is to put our creations in a shared, explorable experience (not necessarily one big space), but leave the personal drama to someone else.
Which only goes to bolster my "anti-social" theory of on-line fun. Yes, some people want to discover strangers randomly. But given how badly strangers can and do act in on-line spaces, ranging from Usenet to Second Life, I think my theory works pretty well for the other 90% of us who prefer to meet new friends more through mutual interests, activities, and introductions, than via common coordinates and/or curious costumes.