Archive for November, 2012

Carbomorph


Why do we need one kind of printer to make microchips and another kind to make plastic housings, with robots or cheap labor to assemble the results, when we could simply print the whole device, circuits and all? Brilliant.

“Carbomorph” material to enable 3D printing of custom personal electronics.

No Comments

Star Trek Comm Badges

There’s much to admire about the Star Trek technological imagination. There are some glaring problems too.

Take the Comm Badge. Apart from turning your collarbone into a public speakerphone, the fictional Star Trek badge has another unseen problem that I hope this real badge (pictured above) doesn’t — there’s a natural delay that makes the conversation not quite as “real-time” as the show perhaps depicted.

No matter how good your technology is, if you’re waiting for someone to finish saying “Geordi” before you open the connection to say “Geordi” to Geordi, your conversation is going to be somewhat asynchronous. It’s not a deal breaker, though. We can play clever tricks, like speeding up playback to catch up to real time, and maybe no one would notice most of the time.
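To put rough numbers on that catch-up trick: if the connection opens some fraction of a second late and we play the buffered audio back slightly fast, the backlog drains at the difference between playback rate and real time. A back-of-the-envelope sketch (the 5% speedup and half-second name length are just illustrative numbers, not anything from the article):

```python
def catchup_seconds(delay, speedup=1.05):
    """Wall-clock time to drain a `delay`-second audio backlog when playing
    buffered audio at `speedup` x real time: the backlog shrinks at
    (speedup - 1) seconds of content per second of wall time."""
    assert speedup > 1.0, "must play faster than real time to catch up"
    return delay / (speedup - 1.0)

# Waiting out "Geordi" (~0.5 s) at a barely noticeable 5% speedup:
print(catchup_seconds(0.5))  # 10 seconds to get back to true real time
```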

Now, a Star Trek invention with truly insurmountable problems is the Transporter. Why in the world would we destroy the original copy of YOU when all we really need to do is beam, or otherwise cause to exist, a remotely operated copy of you — i.e., a robotic and/or biological “remotely operated person” (ROP)? There’s so much less risk to your person, say, if you’re wearing a red shirt and/or you’ve been recently written off the show.

We can solve sensing and effecting for all of the ROP’s systems remotely, lag notwithstanding. But can we ever prove or disprove that destroying the original copy of a person during beaming isn’t, you know, death? I don’t care if the copy swears it’s you and can answer trivia questions. We’re still destroying the original.

I’m with McCoy on this one.

By the way, why does the woman in the picture look so unhappy? She’s not wearing a red shirt.

CommBadge Bluetooth wearable smartphone speaker invokes Star Trek.

No Comments

Software Enables Avatar to Reproduce Our Emotion in Real Time – YouTube

Software Enables Avatar to Reproduce Our Emotion in Real Time – YouTube.

No Comments

Neuroscience + VR + Real-Time Modeling


Neuroscience Research Technology – Dr. Eve Edelstein – YouTube.

No Comments

Death to Poly

Polygons/triangles/quads are great for efficient low-level 2D/3D rendering — they’re butt simple graphics primitives that don’t require overly complex shaders or hierarchical composition languages to represent and render.

However (and no offense to the legendary Hugues Hoppe), they truly suck for representing dynamic levels of detail, such as you need when zooming significantly closer or farther. They don’t compress as well as other representations because the detail is far too explicit and often wrongly expressed for the need at hand.

It’s like trying to express the function a*sin(b*x) as a long series of undulating (x, y) sample points instead of, well, just “a*sin(b*x).” The sample points are invariably not the ones you’d ideally want for reconstructing the curve. And god help you if you want to change the key parameters to alter the waveform on the fly. The sample points are missing the essential mathematical (trig, in this case) relationship.
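A minimal Python sketch of the contrast — the parameters a and b are the whole point, and the sampled version has baked them in and thrown them away:

```python
import numpy as np

def sine_wave(a, b, x):
    """Parametric form: two numbers (a, b) fully define the curve."""
    return a * np.sin(b * x)

x = np.linspace(0, 2 * np.pi, 1000)

# Sampled form: 50 fixed (x, y) points baked from one choice of a=2, b=3.
xs = np.linspace(0, 2 * np.pi, 50)
baked = np.column_stack([xs, 2.0 * np.sin(3.0 * xs)])

# Changing the waveform parametrically is a one-liner...
y_new = sine_wave(a=5.0, b=7.0, x=x)

# ...but the baked samples encode no (a, b) relationship: to "edit" them
# you must guess the parameters back out (curve fitting), then resample.
print(baked.shape, y_new.shape)
```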

Mostly, polygons suck for later editing the objects we create. It takes years of training to get good results in the first place, versus the functional equivalent of Play-Doh, where anyone should be able to do it. And parametric approaches, as above, are more easily mutated on the fly, which is the key element for easy editing.

The most success I’ve had to date in my 20-year dream to obsolete polygons was with Second Life, where I wrote their 3D prim generation system, still in use today. I wanted to do much more than the simple convolution volumes we ultimately shipped, but it was a good step in the right direction, and at least it proved the approach viable. However, one doesn’t create technology for its own sake — you always need to do what’s right for the product.
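Second Life’s actual prim system does considerably more than this, but the core idea — generating a surface by sweeping a 2D profile curve along a path, so the whole model stays a handful of editable parameters — can be sketched roughly like so (all names here are illustrative, not SL’s real code):

```python
import numpy as np

def sweep(profile, path_fn, steps=32):
    """Sweep a 2D profile curve along a path to produce a 3D point grid.

    profile: (N, 2) array of cross-section points.
    path_fn: t in [0, 1] -> (origin (3,), frame (3, 3)) along the path.
    Returns a (steps, N, 3) array of surface points; adjacent rows
    connect into quads to form the final surface.
    """
    rows = []
    for t in np.linspace(0.0, 1.0, steps):
        origin, frame = path_fn(t)
        # Lift each 2D profile point into the path's local frame.
        pts = origin + profile[:, 0:1] * frame[0] + profile[:, 1:2] * frame[1]
        rows.append(pts)
    return np.array(rows)

# Example: a circular profile swept up a straight vertical path = a cylinder.
theta = np.linspace(0, 2 * np.pi, 24, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])

def straight_path(t):
    origin = np.array([0.0, 0.0, t])  # rise along z
    frame = np.eye(3)                 # no twist, taper, or shear
    return origin, frame

cylinder = sweep(circle, straight_path)
print(cylinder.shape)  # (32, 24, 3)
```

The editing win is that twist, taper, and path shape stay live parameters: change `straight_path` and the whole surface regenerates, instead of hand-dragging thousands of vertices.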

The ultimate vision I was hoping for then was more like what Uformia is now doing — giving us the ability to mash up and blend 3D models with ease. And fortunately for all of us, Uformia has found a real use case that obviously needs true volume modeling: 3D printing.

3D printing is notoriously hampered (and certainly not pampered) by the polygonal meshes one tries to feed these systems. Polygons have zero volume and can cut, tear, and interpenetrate each other without violating any rules of physics. Real material is just the opposite. Using polygons is like trying to make a tasty vodka martini using only origami (and even then, paper has real volume, even if we don’t think of it that way).
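To make the zero-volume problem concrete: a printable solid has to satisfy closure constraints that arbitrary triangle soup simply doesn’t. One necessary (though not sufficient) condition is that every edge be shared by exactly two triangles. A quick sketch of that check — this is generic mesh hygiene, not anything specific to Uformia’s validation:

```python
from collections import Counter

def is_watertight(triangles):
    """Necessary condition for a closed, printable mesh: every undirected
    edge appears in exactly two triangles. Soup with cuts or tears fails.

    triangles: list of (i, j, k) vertex-index triples.
    """
    edges = Counter()
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            edges[frozenset((a, b))] += 1
    return all(count == 2 for count in edges.values())

# A lone triangle has zero volume and three boundary edges: not printable.
print(is_watertight([(0, 1, 2)]))                      # False

# A tetrahedron's four faces close off a real volume.
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tet))                              # True
```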

Uformia can apparently prove their models are viable, and even aid in building supporting micro-structures. I’m guessing they do some sort of guided parametric evolution to fit their model to the input polygons, but it could easily be smarter than that. I ordered my $100 copy, so I intend to find out.

The main downside of procedural/parametric modeling is, as always, the quality and availability of the tools. So I fully support this company giving a run at getting that part right.

What’s the next step? Blending arbitrary models is a good start, though not entirely unseen in ye olde polygonal modelers. The real kicker comes when we can take two models and say “make A more like B, right here in this part but not that other part.” If we solve that, we can then imagine a real open ecosystem for 3D designs that truly credits (and rewards) creators’ original designs while allowing easy mashups of the results.
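I have no idea what operators Uformia actually ships, but with implicit (signed-distance) volume models, “make A more like B right here” falls out almost for free: blend the two distance fields with a weight that varies over space. A toy sketch:

```python
import numpy as np

def sphere_sdf(center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    center = np.asarray(center, dtype=float)
    return lambda p: np.linalg.norm(p - center, axis=-1) - radius

def local_blend(sdf_a, sdf_b, region_center, region_radius):
    """Blend model B into model A only near region_center: weight is 1 at
    the center (pure B) and falls to 0 outside the region (pure A)."""
    region_center = np.asarray(region_center, dtype=float)
    def blended(p):
        d = np.linalg.norm(p - region_center, axis=-1)
        w = np.clip(1.0 - d / region_radius, 0.0, 1.0)
        return (1.0 - w) * sdf_a(p) + w * sdf_b(p)
    return blended

a = sphere_sdf(center=[0.0, 0.0, 0.0], radius=1.0)
b = sphere_sdf(center=[0.5, 0.0, 0.0], radius=0.7)

# "Make A more like B, right here": blend only near the point (1, 0, 0).
model = local_blend(a, b, region_center=[1.0, 0.0, 0.0], region_radius=0.5)
print(model(np.array([1.0, 0.0, 0.0])))  # pure B's distance at the center
```

The resulting field is still a well-defined solid everywhere, which is exactly what polygon mashups can’t promise.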

(Evolver is a good example of that trend for humanoid avatars at least. I met those guys maybe 7 years ago when they were just deciding to form a company.)

I’ve long been hoping I wouldn’t have to write this stuff myself — it’s quite hard, probably way over my head — and I just want to use it for some future projects I have in mind. Also, it’s notoriously difficult to make money selling 3D modeling tools. The most successful business model to date is “sell to Autodesk and let them figure it out.” But I’m rooting for this one to get the UX right and hit it out of the park.

MeshUp: Mashup for meshes by Uformia — Kickstarter.


1 Comment

Mapping The Entertainment Ecosystems of Apple, Microsoft, Google & Amazon


Without commenting on the companies themselves, this is definitely worth reading for yourselves (Thanks Daniel!):

A convergence towards Apple’s business model

It’s interesting to note that the four companies listed have quite different core business models (hardware, search, retail, software), yet in recent years they have all come to create personal computing devices with their own operating systems running on top, plus these entertainment ecosystems. Five years ago, Apple was the only one doing the complete trio of device + OS + entertainment services.

Mapping The Entertainment Ecosystems of Apple, Microsoft, Google & Amazon.

No Comments

3D Photo Booth

The ultimate vision among 3D printing enthusiasts is the Replicator from Star Trek (perhaps combined with the Transporter for the live scanning part, if not the “beaming” itself). For others, it’s all a big fax machine or laser printer, just in 3D, designed to save us time, travel, and money. For most of us, it’s a way to build things that never existed before, a supreme reification of intangible ideas into physical reality.

The state of the art is still somewhat short of all of those goals, but advancing rapidly, focusing on cost, speed, resolution, and even articulation of parts. Making 3D figurines of you and your loved ones is an interesting stop along the way.

The truth is that people have thought about 3D scanning and printing for decades, and this is often a top request (I can’t tell you how many people thought they came up with this idea).

The devil is always in the details, at least for now. For example, how does the 3D printer in this Japanese “3D photo booth” apply subtle color gradations to make your skin look real? Some affordable commercial 3D printers can do a small number of matte colors, one at a time, and high-end full-color 3D printers are coming down in price. How does the software stitch a solid 3D likeness from multiple stereoscopic images? (Hint: they say you need to stand still while they take multiple photos or video.)

But it doesn’t really matter, as long as it’s economical and people want to buy these at some price, which I figure they will. FWIW, 32,000 yen = about $400 by my math. What would you pay?

Process | OMOTE 3D SHASIN KAN. (via Gizmodo)

No Comments

Congenitally blind learn to see and read with soundscapes

We put so much emphasis on visual AR that we often ignore the power of the other senses to convey information. This certainly matters for anyone who is visually impaired, but in time it will also translate into better methods of conveying spatial awareness for everyone else.

link: Congenitally blind learn to see and read with soundscapes | KurzweilAI.

No Comments

Indoor GPS/Mapping Advances

Meridian seems to be doing some interesting work with easy-to-make indoor geospatial experiences for museums, tours, shopping, and so on. Their site is quite sparse on exactly how the tracking part works, but I’d guess it’s the usual Wi-Fi triangulation with some accelerometer-driven “dead reckoning,” or they’d be bragging about it.
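Since Meridian isn’t saying, here’s the skeleton of the combination I’m guessing at: integrate the accelerometer between fixes, then pull the drifting estimate back toward each noisy, infrequent Wi-Fi position as it arrives. The names and the blending constant are mine, purely illustrative:

```python
import numpy as np

def dead_reckon_step(pos, vel, accel, dt):
    """Integrate accelerometer readings between Wi-Fi fixes.
    Accurate over seconds; drifts badly over minutes."""
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

def fuse_wifi_fix(pos, wifi_pos, alpha=0.3):
    """Pull the dead-reckoned estimate toward a Wi-Fi fix when one arrives.
    alpha trades trust: higher = trust Wi-Fi more, inertial less."""
    return (1.0 - alpha) * pos + alpha * np.asarray(wifi_pos, dtype=float)

pos = np.zeros(2)  # position in meters, building frame
vel = np.zeros(2)
pos, vel = dead_reckon_step(pos, vel, accel=np.array([0.2, 0.0]), dt=0.1)
pos = fuse_wifi_fix(pos, wifi_pos=[0.5, 0.1])
print(pos)
```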

The good news for AR enthusiasts is that the more attention paid to solving more precise location and orientation, indoor and out, the easier it will be to augment our perception of and interaction with the world, regardless of what device it’s rendered on. A rising tide floats all boats here.

This also intersects nicely with where I’d always hoped KML would go — we desperately need a standard markup language for the real world that’s location-aware, and not just lat/long. Indoor location demonstrates the need for “location” to work in multiple different coordinate reference systems, not just the usual WGS-84 coordinates or Mercator-projected maps.
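To make the multiple-CRS point concrete: any such markup needs locations tagged with their reference system, plus a surveyed transform between frames. A hypothetical sketch — the frame name, anchor coordinates, and flat-earth conversion are all illustrative, not any real standard:

```python
import math
from dataclasses import dataclass

@dataclass
class Location:
    """A position tagged with its coordinate reference system. 'crs' might
    be 'WGS-84' or a per-building local frame, which is the whole point."""
    crs: str
    coords: tuple

def building_to_wgs84(local_xy, anchor_lat, anchor_lon, heading_deg):
    """Convert (x, y) meters in a building's local frame to approximate
    WGS-84 lat/lon, given the building's surveyed anchor point and
    orientation. Flat-earth approximation: fine at room scale."""
    h = math.radians(heading_deg)
    east = local_xy[0] * math.cos(h) - local_xy[1] * math.sin(h)
    north = local_xy[0] * math.sin(h) + local_xy[1] * math.cos(h)
    lat = anchor_lat + north / 111_320.0  # ~meters per degree of latitude
    lon = anchor_lon + east / (111_320.0 * math.cos(math.radians(anchor_lat)))
    return Location("WGS-84", (lat, lon))

kiosk = Location(crs="some-store/floor-3", coords=(12.0, 4.5))
print(building_to_wgs84(kiosk.coords, 40.7508, -73.9885, heading_deg=29.0))
```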

I’m glad to see ARML 2.0 is moving along in that direction as well and, from what I can see, asking for comments. But now imagine when things like Meridian’s editor can output some standard markup that any web app or maps app could consume and render without requiring a dedicated app. Now we’re talking!

link: Never Get Lost In Macy’s With Its New Indoor GPS App – PSFK.

No Comments

Optical camouflage turns car’s back seat transparent

The goal of making the car invisible is a great one. Blind spots create significant hazards. For example, just last night, I almost opened my car door into a speeding biker in SF (a headlight on the bike might have helped).

But I think this is one of those applications that may work better with HMDs than projectors. Imagine what happens when you put kids in the back seat. Imagine just trying to keep these seats clean enough or clear enough so the projection works properly.

Sure, it’ll be a few years before HMDs are acceptable enough to drive with, but the near-term version is something much simpler to prove out. Here’s the idea. [I originally spec’d this out back at Worldesign 20 years ago and have always wanted it purely for the thrill. It’s not entirely economical on its own, but it will happen someday, I guarantee.]

Imagine an airplane fitted with a full 360×180 degrees of video capture, more or less like these researchers want to do, such that we can digitize a complete spherical video feed. That’s a few wide-angle lenses with sufficient overlap, plus computing hardware to stitch the views in real time.

Imagine an HMD per (willing) passenger that can index into that video based on where you’re looking. Boom. Your airplane is now invisible. You’re flying free and alone at 35,000 feet. With mixed reality, we can exclude or occlude your body, your family, and maybe the seats beneath you from the video composition to enhance the realism without ruining the view.
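The per-passenger view indexing is straightforward once the stitch exists. Assuming an equirectangular spherical frame (one common stitch format; nothing in the article specifies it), mapping a gaze direction to the frame is a couple of lines. A real renderer would crop and distortion-correct a window per eye, but this is the core lookup:

```python
import math

def panorama_pixel(yaw, pitch, width, height):
    """Map a gaze direction (yaw, pitch in radians) to the pixel at the
    center of that view in an equirectangular 360x180-degree frame.

    yaw in [-pi, pi), pitch in [-pi/2, pi/2].
    """
    u = (yaw + math.pi) / (2.0 * math.pi)  # 0..1 across the frame
    v = (math.pi / 2.0 - pitch) / math.pi  # 0..1 top to bottom
    return int(u * (width - 1)), int(v * (height - 1))

# Looking straight ahead lands mid-frame; the HMD crops a view-sized
# window around this point as the head turns.
print(panorama_pixel(yaw=0.0, pitch=0.0, width=4096, height=2048))  # (2047, 1023)
```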

The same trick will work better for the car, once HMDs are legal to drive with. Forget the back seat. Let’s make the whole car invisible, enhance the road while we’re at it, and reduce any other visual noise that might distract you from driving well.

Now, even wearing earphones while driving is presently illegal in most places — their purpose is to distract the listener from the world, not enhance their driving. Practically speaking, it’s hard to hear that oncoming semi to your left when you’re blasting “Highway to Hell.”

See-through HMDs can, and no doubt should, be aware of whether you’re driving and limit your activities to only the helpful ones. No “AR Tetris” for you unless the car is on autopilot. The whole point is to actually improve your situational awareness, not to diminish it, so don’t expect this to be a literal view of the world so much as a visually enhanced one. Such a system should help you become aware of that same semi, finding obstacles, warning of dangers, highlighting your path, and so on. Blade Runner had it half right. There’s no reason to limit this kind of display to a monitor.

Of course, we may have those self-driving cars before then. Personally, I’m a bigger believer in augmenting people’s abilities instead of putting us in the back seat, so to speak. But we can and will have both, I expect.

Optical camouflage turns car’s back seat transparent. (via gizmag)


No Comments