Archive for November, 2006
If you don’t know what 6DOF means, it’s shorthand for Six Degrees of Freedom — there are the basic six that correspond to the three axes of movement (along X, Y and Z), plus the three axes of rotation (around X, Y and Z). Your mouse, for example, has two: horizontal and vertical (X and Y).
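To make the six concrete, here's a toy sketch of what a 6DOF pose looks like as data (the names here are mine, just for illustration):

```python
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    # Three translational degrees of freedom
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    # Three rotational degrees of freedom (Euler angles, in radians)
    roll: float = 0.0   # rotation around X
    pitch: float = 0.0  # rotation around Y
    yaw: float = 0.0    # rotation around Z

# A mouse only ever fills in two of the six fields:
mouse_pose = Pose6DOF(x=120.0, y=45.0)
```

A 6DOF controller reports all six fields at once; everything a mouse-based UI does with modifier keys is an attempt to fake the other four.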
Many pseudo-religious wars have been fought among game and UI designers about the best way to map the mouse’s two limited degrees of freedom to the proper six, using combo buttons and control keys and everything under the sun to make those two act like more. But in reality, it can never work perfectly. Too much thought is involved and users invariably get lost — until they become old pros, at which point the software becomes obsolete. So UI designers wind up copying older software just because people have gotten used to it, not because it’s better (hence the wars).
What we really need is hardware with all six degrees of freedom — translation and rotation. And until recently, that was just too expensive. I remember using a fairly clunky, rigid strain-gauge device called a SpaceBall in 1994 that cost $3000 and offered no tactile feedback at all. They got the price down to a couple hundred dollars eventually. And now 3DConnexion offers a conceptually similar product that costs only $60 for the non-commercial version, using optical sensors to make it cheaper and more natural-feeling than its ancestors. That price is low enough for it to really take off. Stefan has a review. And I will be buying two.
The Nintendo Wii takes a different approach, putting the sensor in your hand in mid-air — the Wiimote. It too can move in all six degrees of freedom, though the precision of some motions may not be high enough for all tasks. The Wiimote is meant primarily as a gestural device: it can tell what kind of motion you’re making and how well or how strongly you’re making it. I’ll reserve judgment until I see it applied to CAD, but in theory, a high-precision version of the idea could work there too.
There was a similar product using ultrasound that augmented a standard-looking mouse, letting you pick it up and orient it in six degrees of freedom. The difference with the Wii is that you can easily use one Wiimote in each hand (two or four come standard), for a dramatic increase in abilities.
The big change from when I started working in VR is now people are genuinely excited. The Wii is a huge hit, partly due to its controller and partly due to its simplicity and ease of use. And that’s going to translate to PCs and virtual worlds sometime soon. Graphics and networking were once bottlenecks, but lately, the user interface has really been the only thing holding virtual worlds back.
The big question is whether 6DOF is enough. I’ve hinted that using both hands is important. Why? For manipulating 3D objects, you want to move and rotate them, but also scale them and change their properties. If one hand is used for precise control, the other can be used for choosing actions. Single-handed, you have to drop an object while you go to a menu somewhere, and that slows the workflow. Both hands need not necessarily be 6DOF.
But I’m curious to see how these 3DConnexion drivers work. I was guessing they’re meant to work solo, which would be bad for what I have in mind, but not impossible. (The CNET version has the info — the device is meant to be used side-by-side with the mouse, not to replace it, which in this case is good.)
By the way, if you’re curious as to what advances this sort of technology allows, consider that the Virtual Jungle Cruise (a Disney ride) was made possible by 6DOF controllers. The challenge there was taking a motion platform made for pre-recorded movies and making it interactive with 3D graphics. And the problem was that we had four people sitting in a rubber raft at once. You’d be hard-pressed to find a single steering wheel that works for four people. But four oars in virtual water worked well for collaborative steering. And those oars were enabled by 6DOF controllers taped to each handle (more like the Wii kind than the 3DConnexion kind).
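The core of collaborative steering like that is just combining several people's inputs into one vehicle command. Here's a minimal sketch of one way it could work (my own toy model, not Disney's actual code):

```python
def combine_oars(oars):
    """Combine per-oar inputs into a single vehicle command.

    Each oar is a (side, stroke_speed) pair: side is -1 for port,
    +1 for starboard; stroke_speed is how hard that rower paddles.
    """
    # Everyone's effort contributes to forward thrust...
    thrust = sum(speed for _, speed in oars) / len(oars)
    # ...while an imbalance between sides turns the raft.
    turn = sum(side * speed for side, speed in oars) / len(oars)
    return thrust, turn

# Four rowers; the starboard pair paddles harder, so the raft turns.
thrust, turn = combine_oars([(-1, 0.5), (-1, 0.5), (+1, 1.0), (+1, 1.0)])
```

The nice property is that no single rider controls the raft; the group has to cooperate, which is exactly what made it fun.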
BTW, with the popularity of the Wii, I’m bracing myself for some young researcher or Nintendo patent attorney claiming to have invented collaborative steering controls using two or four Wiimotes to drive one shared vehicle. That’s more Disney’s problem, as they really hosed my patent application as I was leaving. But memories are short.
Consider also that all of those cool multi-touch displays we’ve seen videos of use two hands. But two hands give you more than 12 apparent degrees of freedom (depending on how they’re combined, the degrees overlap or augment each other). It engages your brain a bit differently than just using a mouse. A mouse uses a little bit of our natural proprioception, but the position of the hands relative to our body is really what kicks our natural spatial sense into high gear. The old cybergloves tried to take advantage of this, with moderate success. They not only had six degrees per hand, but also some extra overlapping degrees of freedom from the flexing of the fingers and gestural modes.
Anyway, to make a long story short, what we really need is to kick the mouse off the desktop. A $60 6DOF controller can do that. But it can also spur those who have been waiting for the landscape to shift to move into high gear too. It’ll be as if the ice has melted and summer has finally arrived.
P.S. The one issue people never seem to address with these controllers is RSI. There is both an advantage and a disadvantage to using a mouse in a narrow space. Our wrists can take a lot of repetitive motions better than, say, our backs. But that narrow movement is what induces repetitive motions in the first place. The best thing is to design an input device that avoids repetitive motions and mixes it up a bit over time. 6DOF controllers can be better at that. But now you have to deal with stresses on your shoulder and back from lifting your arm in the air. We’ve already seen injuries from using the Wiimote — mostly collisions, but perhaps some backs or elbows have been wrenched too. Tennis has tennis elbow; I’m guessing we’ll settle on Wii-shoulder, though it could be anything. Still, better to have us moving around than sitting still for eight hours a day. I’ll be buying a Wii too.
Previously: Second Life and the Post Scarcity World
I’m not convinced a reputation or peer-pressure-based system plus timestamps is enough to stop digital cloning of objects in Second Life. I hope they’re doing more than that. The Wired article cites fashion and cuisine as areas where there is very little IP protection, but innovation is still high. The thing is, new dresses are created all the time. New food is made for each meal. That creativity is going to happen anyway. And it’s a mark of pride for these artists to innovate. It’s a way of life.
I guess the same could be said for any art form — as long as the artists are the ones creating, they generally care about being original, even when they knowingly or unknowingly copy from other influences. The difference is that they can’t simply copy designs in digital form — they can only emulate, so it’s just as expensive work-wise to make something new as it is to copy (thought-wise, copying is easier), and more rewarding to innovate.
The people who do the mass copying are not the artists, but the consumers and wholesalers of art. If cuisine could be digitally copied, the threat would come from mass producers and consumers in the supermarket, not from a fine restaurant. And for fashion it already does: popular dresses for each season are copied and fakes are sold wholesale. To say the designers don’t copy each other is to miss one of the key problems with sweatshops in Asia (apart from the labor issues): they really don’t care about peer pressure or reputation, and not even that much about the law.
So these changes for SL are worth a shot. But it’s a good idea to start planning the next steps too. I think it’s more likely that Second Life can kill copyright and replace it with something better than merely saving it. But it’s not going to be easy by any means.
The SL blog is down, but I’ll include a link to Cory’s post about CopyBot when I can. This is interesting.
There’s some open source software called LibSL that can effectively act like the Second Life PC client, but which also allows subtle changes to the data it handles, like removing the bits that say “you can’t copy this.”
The result is that a LibSL client can effectively stream objects back into the shared virtual world that are perfect copies of anything, albeit without the original copy protection.
Why does this matter? It’s a microcosm of the coming post-scarcity world.
This French company is getting a lot of press lately, so let’s talk about procedural textures for a bit. The first thing to note is that the company would be the first to admit that procedural textures aren’t new, so let’s take the hype down a notch. Some of us have been using them since before Photoshop was around, often out of sheer necessity rather than personal preference.
What this company claims to have built is “better math” to represent more complex, organic phenomena — wavelets rather than Fourier transforms, I gather. That’s good. But the thing that typically holds procedural anything back isn’t the math, it’s the art tools, and those I haven’t seen. Perhaps they’ve solved that too. But that’s what they need to be showing the world, not the benefits of small game sizes.
There’s no reason someone couldn’t make a great Photoshop-like app that computes everything procedurally. I mean, Photoshop is actually somewhat procedural already, though the procedures are very basic, like “apply-airbrush-at-(100,200)” repeated a million times. It’s only for convenience and speed that we save files in JPEG form. Even JPEG is procedural, in the sense that the format stores the parameters needed to reconstruct your image, not the raw image bits. The compression magic actually happens on those parameters as well.
The tradeoff, as with any kind of compression, is saving space vs. spending time. And one way to re-balance that equation is to have a game where textures change gradually over time. With standard texturing, you’d have a hard time of it, perhaps needing hundreds of variations built ahead of time. With procedural methods, you could expose certain parameters (it’s better called parametric texturing then) and get vastly different results with very little work.
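To illustrate what exposing parameters buys you, here's a deliberately tiny parametric texture — a made-up example of mine, nothing like the company's actual math — where sliding one parameter over time replaces hundreds of pre-baked variations:

```python
import math

def ring_texture(u, v, frequency=8.0, warp=0.0, phase=0.0):
    """Evaluate a tiny parametric ring texture at (u, v) in [0, 1].

    Returns an intensity in [0, 1]. Changing frequency, warp, or
    phase yields very different images from the same small program.
    """
    r = math.hypot(u - 0.5, v - 0.5)
    # A little sinusoidal warp makes the rings look more organic.
    r += warp * math.sin(12.0 * u) * math.sin(12.0 * v)
    return 0.5 + 0.5 * math.sin(2.0 * math.pi * frequency * r + phase)

# Animate by sliding the phase each frame instead of storing
# hundreds of pre-rendered variations:
frame_0 = ring_texture(0.25, 0.25, phase=0.0)
frame_1 = ring_texture(0.25, 0.25, phase=0.5)
```

The stored asset is just the function plus a handful of numbers — that's the whole space-for-time trade in miniature.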
The question is: how many games need to morph their textures on the fly? And the answer is that this is useful not only for time-variant parameters but also space-variant ones — for example, creating instant diversity, making a forest where no two leaves are alike. Ultimately, though, you come up against the limitations of the hardware, where even procedural textures need to be expanded and cached as big textures for optimal rendering, and only so many can be stored and/or produced on the fly. (There’s another drawback in that most hardware now supports one or more pre-compressed texture formats, which are incompatible with procedural texturing, so we waste valuable texture memory too.)
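The space-variant trick is simple in principle: derive each instance's parameters from something unique about it, like its position. A toy sketch (my own, just to show the idea):

```python
import hashlib

def leaf_params(x, y, z):
    """Derive stable per-leaf texture parameters from a leaf's position.

    Hashing the position gives every leaf its own parameter set with
    zero stored data, so no two leaves in the forest need be alike —
    and the same leaf always looks the same on every visit.
    """
    digest = hashlib.md5(f"{x:.3f},{y:.3f},{z:.3f}".encode()).digest()
    hue_shift = digest[0] / 255.0            # in [0, 1]
    vein_frequency = 4.0 + digest[1] / 32.0  # in [4, ~12)
    return hue_shift, vein_frequency

a = leaf_params(1.0, 2.0, 3.0)
b = leaf_params(1.0, 2.0, 3.5)  # a nearby leaf gets its own parameters
```

The determinism matters: you get endless variety without shipping or storing a single extra byte per leaf.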
However, even with all of those technical issues, my thesis is that procedural modeling tools are indeed the future, as they solve a lot of important problems. But my sense is that the art tools aren’t quite there yet.
Making smaller textures may not matter much for games loaded from disk, since disk capacities keep growing and the CPU time spent expanding a texture may closely match the load time of a less compressed version. Where procedural texturing makes the most sense is where textures need to be dynamic, or where they need to fit in a really small space or through a narrow pipe — on-line delivery, for example. That’s one reason some on-line worlds use procedural objects already: the CPU cost is worth it because bandwidth is the bottleneck.
Ultimately, what artists really want is control of the final image quality, because that’s what they do for a living. So if you really want to sell them on the technology, the thing to emphasize is how they can re-use their work and do more in a day. If I can make 2X the number of textures because some of them are derived from others, that’s a big time saver, as long as I like the results.
So I arrived home from Toronto today and got a chance to test drive the new Microsoft 3D Earth. As with all Microsoft products, the first version isn’t quite usable. But there are some very promising elements. First among those are the textured/shadowed buildings, which look amazing. It looks like Microsoft is working with Harris Corp., a military contractor specializing in communications. I understand they’ve done some good work on 3D reconstruction from satellite photos, which makes me wonder what part the Microsoft Vexcel acquisition has played… But either way, the buildings look great from a fly-over distance. There’s no level-of-detail to zoom in close yet, so don’t expect good street-level views in the near term.
This takes me back a few years. I remember designing and trying to sell something like this back in 1993. Personally, I think this company could make the “boxes” a lot thinner using an extra mirror bounce. Rear-screen TVs have a much better ratio of screen size to depth, but that requires special lenses and very good mirrors (usually mylar). I don’t see whether they put angle sensors between the boxes or require manual calibration to adjust the rendering to match the screen configuration, but either way, that’s a no-brainer.
The reason I bring this up is that it’s along the lines of what I think the immersive home theater experience will look like in ten years. I hope this company didn’t try to patent the idea — there’s plenty of prior art.
Here’s how it might look. Imagine these boxes as super-thin — like a centimeter, mostly just for support. And imagine they fold at a minimum of two crease points, but preferably everywhere, so as to convert from flat to cylindrical and anywhere in between. In flat or near-flat mode, you could maximize the number of viewers. In cylindrical mode, one person could stand inside the cylinder and be immersed in a full 360 degrees. The most common configuration would be somewhere in between.
The reason the cylindrical form is ultimately better is mainly that the vertical creases create a visual artifact. Right now, with rear-projection, brightness isn’t perfectly uniform from the edges to the center, there are refraction-angle issues, and more importantly, light bleeds from one screen to the next once it leaves the screens. Those problems can be compensated for eventually, and cubes and cylinders can be made mathematically equivalent for rendering purposes, but long-term, cylindrical sections are the way to go.