Archive for May, 2008
Turns out, the API is extremely expansive, letting you control most functions of the app. And while that gives you many options for building your own web-based mashups with GE, what it also theoretically does is let you embed GE into other PC apps, e.g., games.
You could load your 3D objects via Collada/KML, make them move via the API, and even see them interact with each other, if you maintain game state in your own code. But beyond that, it gets a little tricky. The cute Milk Truck example has my truck disappear or flicker when I jump high enough — this is most likely due to a near clipping plane being set without considering the dynamic object so close to the viewer. Expect lots of those sorts of issues to work around, as best you can.
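On the loading side, GE's native route for your own objects is a KML placemark whose Model element points at a Collada (.dae) file. As a rough sketch (the model path and coordinates here are invented for illustration), such a snippet can be generated like so:

```python
def model_placemark_kml(name, dae_href, lon, lat, alt=0.0):
    """Build a minimal KML Placemark that loads a Collada model.

    GE fetches the .dae from the <href> and places it at the given
    location. The href and coordinates below are placeholders.
    """
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Model>
      <altitudeMode>relativeToGround</altitudeMode>
      <Location>
        <longitude>{lon}</longitude>
        <latitude>{lat}</latitude>
        <altitude>{alt}</altitude>
      </Location>
      <Link><href>{dae_href}</href></Link>
    </Model>
  </Placemark>
</kml>"""

# Hypothetical truck model near San Francisco, 10 m above ground.
kml = model_placemark_kml("truck", "models/truck.dae", -122.4, 37.8, 10.0)
```

From there, moving the object is a matter of rewriting the Location (or an equivalent API call) every frame.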
But the one 3D mashup people may really clamor for is the famed Second Earth, or Google Earth-meets-Second-Life, mashup, as previously discussed. I think it would be much easier to build now, at least at the basic level.
Here’s the general outline:
1. Wrap the new GE DLL in your own pseudo-browser, as above (preferably using a compiled language like C++).
2. Combine that with the now-open SL client code, sitting "outside" and running the show.
3. Intercept the calls in the SL client code that create objects, and add export code so that those calls now save the geometry (prims and avatars, for the most part) as Collada.
4. Have GE re-load those into the world (preferably asynchronously).
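The export step is the crux. As a sketch of that side only (a bare-bones Collada writer for already-tessellated geometry; real SL prims would first have to be turned into triangles, and materials, normals and UVs are omitted entirely):

```python
def collada_mesh(positions, indices):
    """Serialize one triangle mesh as a minimal Collada (.dae) document.

    positions: flat list [x0, y0, z0, x1, y1, z1, ...]
    indices:   flat list of vertex indices, three per triangle

    Sketch only: no materials, normals, UVs, or visual scene, all of
    which a real exporter feeding GE would need to fill in.
    """
    pos = " ".join(str(v) for v in positions)
    idx = " ".join(str(i) for i in indices)
    return f"""<?xml version="1.0" encoding="utf-8"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1">
  <asset><up_axis>Z_UP</up_axis></asset>
  <library_geometries>
    <geometry id="mesh0">
      <mesh>
        <source id="pos">
          <float_array id="pos-array" count="{len(positions)}">{pos}</float_array>
          <technique_common>
            <accessor source="#pos-array" count="{len(positions) // 3}" stride="3">
              <param name="X" type="float"/>
              <param name="Y" type="float"/>
              <param name="Z" type="float"/>
            </accessor>
          </technique_common>
        </source>
        <vertices id="verts"><input semantic="POSITION" source="#pos"/></vertices>
        <triangles count="{len(indices) // 3}">
          <input semantic="VERTEX" source="#verts" offset="0"/>
          <p>{idx}</p>
        </triangles>
      </mesh>
    </geometry>
  </library_geometries>
</COLLADA>"""

# One triangle, for illustration.
dae = collada_mesh([0, 0, 0, 1, 0, 0, 0, 1, 0], [0, 1, 2])
```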
It would be great if there were a more direct path to get gobs of 3D user data into GE, but I didn't see one yet. You'd also intercept the regular SL spatial network updates from the server for objects and avatars and turn those into positional updates in GE. Better yet if you can grab the final (post-smoothed, extrapolated) positions, since you don't want to see the latency, do you now?
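The extrapolation itself is simple dead reckoning. A minimal sketch (the function and argument names are mine, not the SL client's):

```python
def extrapolate(pos, vel, update_time, now):
    """Dead-reckon a position forward from the last network update.

    pos, vel: (x, y, z) position and velocity reported by the server
    at update_time. Between updates, the viewer keeps objects moving
    along their last-known velocity, which is what hides the latency.
    """
    dt = now - update_time
    return tuple(p + v * dt for p, v in zip(pos, vel))

# Half a second after an update: an object moving 2 m/s along x.
pos = extrapolate((10.0, 5.0, 0.0), (2.0, 0.0, 0.0), 100.0, 100.5)
```

Feeding these extrapolated values into GE each frame, rather than the raw packet positions, is what keeps the motion smooth.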
Trickier would be handling the animation of avatar faces, skin and joints, since that requires real-time bone and morph support in GE, which I'm sure isn't exposed. You'd be left with loading a replacement avatar geometry file every frame, which would absolutely be slow. You'd probably settle for avatars that can move, but not animate.
In fact, the whole thing would be too slow for my tastes, but it’d most likely work as a proof of concept, or at least a cool demo.
I give it a month or two before someone shows a working prototype. Just putting a dozen or so avatars in the world this way would be pretty simple. But don’t try a thousand or more unless you’re a glutton for punishment.
For mashups, this will be an amazing enabler. Expect to see it anywhere you might see Maps today, assuming it's popular, and why not? And I'm sure this is meant as a swift kick in the pants to Microsoft's VE, which already runs in the browser and does similar mashups.
So today was my official last day working for Big Stage. I’d given notice about a month ago, but had some loose ends to tie up to make sure stuff I’d been directing was working well for the upcoming launch.
I’m not going to get into any details publicly. But things are as amicable as they can be. I enjoyed working with the folks there, had a lot of fun, and wish them all the greatest success in this product and in all future endeavors.
As for me, I’m planning on starting some interesting short-term consulting work this week, which could lead to more. I’ll continue to consult while I diligently research a number of interesting full-time and startup possibilities. I’m hoping to get into something very cutting edge, as always, and to play a foundational and/or leadership role.
I’ll let you all know when the time comes, but it could take a while to find the right opportunity this time around.
In the meantime, I might have some interesting and unrelated news to blog about soon.
I did an interview with Jeremy Crampton last fall for the very respectable journal Cartographica, and it's just come out.
It seems to cost $12 to view, but Jeremy has kindly provided a direct link to the PDF for free. Enjoy.
It covers some history with Keyhole, my thoughts about GIS, and even Net Neutrality (though I don't know how that came up).
Adobe Drops Licensing Fees, Gives Away Flash For Devices | Compiler from Wired.com
Well, maybe Queen was overstating it a bit. It’s not even Savior of the Web3D just yet. But Adobe is making some very important moves this month. First was the news that Flash — the format — will be opened to anyone, royalty free. Adobe will make its money off the development tools, not servers and license fees. The code may or may not be opened as well. There was some talk of donating the JIT compiler code to the Mozilla foundation.
Second, they’ve put out a pre-release version of Flash 10, which contains native 3D rendering. Download and try out the demos.
What this means is that companies that already put their eggs in the Flash basket for delivering 3D to the web have been fairly well vindicated, vs. the ones that painfully went with their own proprietary ActiveX controls and whatnot.
Will Flash 10 be as fast as compiled C++ code? Not a chance. But for pushing lots of polygons, it won’t matter as much anymore, as long as we can send big vertex arrays in one call (let’s not ask about physics and simulation though) — the card does all the work. I’ll be curious to see if they allow shaders and therefore GPGPU code, but that’s a side point right now.
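To illustrate the batching point: the win comes from flattening the scene into one contiguous vertex array up front, so per-call overhead is paid once per batch rather than once per triangle. A sketch of the packing step (Python stands in for ActionScript here; names are mine):

```python
from array import array

def pack_triangles(triangles):
    """Flatten triangles (each three (x, y, z) tuples) into a single
    contiguous float array, ready to be handed to the renderer in one
    draw call instead of a call per triangle."""
    flat = array("f")  # 32-bit floats, as a GPU vertex buffer expects
    for tri in triangles:
        for x, y, z in tri:
            flat.extend((x, y, z))
    return flat

# Two triangles forming a quad: 18 floats in one buffer.
batch = pack_triangles([
    ((0, 0, 0), (1, 0, 0), (0, 1, 0)),
    ((1, 0, 0), (1, 1, 0), (0, 1, 0)),
])
```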
The key thing is, if you want to deliver a 3D app to the largest number of customers without a new download and install, Flash is certainly an attractive option, especially compared to Java and Silverlight. If it becomes part of the browser, as I expect it will in the next few years, even more so.
Back in 1997, I left a fun job at Disney R&D making VR rides and interfaces to move back to Seattle at the behest of an old friend of mine. He had a "hot internet startup." Frankly, I wasn’t too keen on internet startups, even at that point — I figured the bubble would burst "any second" (it took three and a half more years) — but the promise was to use the "proceeds" from their "revolutionary" internet "load balancing" "product" to "spin-off" a "VR company," of which I’d be a "co-founder."
The quotes in that last sentence were all discovered after I’d made the move, which gives you an idea of how it turned out.
However, if things had actually turned out as planned, the first thing I was to do was to write the "Sea floor Visualizer," based on a demo that friend had written on the old Kubota 3D workstation, which I was going to greatly expand for the PC and the first real crop of 3D video cards. The idea was to let you virtually fly over underwater terrain, at least for a very small swath of sea floor, given lidar and other reconstructed "elevation" data. Cool stuff, and very cutting edge for 1997.
So my friend Cory Ondrejka (co-creator of Second Life) started an interesting thread last week that I didn't see covered as widely as it should have been. Here are his slides; alas, I didn't get to hear the narration that went with them, but I can guess.
What he seems to be describing is apparently not too far from what I’ve been writing about for a while. The part I’m still skeptical about is the life-logging, and probably because of my own preference for privacy. You’ll notice I don’t twitter. I have a hard time believing anyone would even care to follow what I do from moment to moment. And I think careful editing is the secret to any compelling narrative. I just don’t want to put gigabytes of sub-standard, often mundane, prose out there into the digital firmament.
But putting that aside, the germ (and/or gem) of what he’s saying, and the part I totally agree with, is this notion of a pervasive synthesis of augmented, mirror, and alternate realities — no need to distinguish between those arbitrary categories. Turns out, there’s an old word for this which I think we can now safely revive to summarize the intent: