Archive for October, 2006

Wired Roundup

Two interesting stories from Wired this week:

Wired News: Runner-Up Takes on YouTube

This story, coming on the heels of another one yesterday about teens fleeing MySpace for Facebook because MySpace is no longer "cool," makes me really wonder how the Google-YouTube acquisition will look a year or two from now. Did Google pay $1.65B for YouTube’s technology, but not its market share? Of course not. They wanted the eyeballs they were lacking for Google Video.

The problem, of course, is that once YouTube has money, there’s real meat for copyright holders to go after. And the pressure has already begun. YouTube has started removing tens of thousands of videos — most of them the kinds of videos I went to YouTube to watch (Daily Show clips especially). And the problem with that is that YouTube’s success was built on ignoring copyright. If the purges drive users away, Google may get stuck, like News Corp, holding an empty, dated bag.

But they’re not quite that naive. The $1.65B figure is thrown about like it’s a big deal, and it is. But remember, it’s an all-stock transaction. This means that if Google’s share price goes down, YouTube’s investors and founders lose money too. The deal reads almost as if Google had backdated YouTube’s launch, treating it as Google’s own spin-off video service from two years ago, and took the resulting financial charge (unlike some other companies and their execs). That makes the $1.65B seem somewhat more reasonable, given Google’s stock growth (~6x since IPO), and makes it more like $280 million in my mind.
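
For what it’s worth, here’s the back-of-the-envelope arithmetic behind that number, as a quick Python sketch (the ~6x growth is the post’s own estimate, not an exact figure):

    # Rough check of the "more like $280 million" claim above.
    # Assumes ~6x growth in Google's share price since the 2004 IPO,
    # as estimated in the post -- not an exact figure.
    deal_value = 1.65e9      # all-stock deal value, in USD
    stock_growth = 6.0       # approximate appreciation since IPO

    # The same number of shares, priced at IPO-era levels:
    ipo_equivalent = deal_value / stock_growth
    print(f"${ipo_equivalent / 1e6:.0f} million")   # -> $275 million, call it ~$280M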

Wired News: Feds Leapfrog RFID Privacy Study

Either people get it by now or they don’t. The companies implementing RFID, like those making election machines, have shown that they simply don’t care about your rights, or that they are too ignorant of good security practices to do anything about it. We are being treated just like any bottle of shampoo. We’re essentially just products who also happen to be consumers, as if GE sold a "buy it, eat it, trash it" machine that sometimes breaks down or sometimes dares to think for itself.

This Wired story highlights some work within the government that was meant to fix the problems before they become widespread. But the powers that be would rather sweep it all under the rug. The study, like the media-ownership study at the FCC, gets buried or ignored.

And when the privacy apocalypse arrives, when crooks with gray-market scanners will be able to track you and even BE you at their whim, those same powers that be will undoubtedly claim "how could we have foreseen these problems?" just as they claimed "how could we have foreseen the levees breaking?"

The only solution is to throw them out of power.

No Comments

Wiki Earth

webkuehn.de – Homepage of Stefan Kühn

I came across this interesting bit of work. The author has collected a bunch of geocoded Wikipedia entries and created a single big KML file to load those into Google Earth. So if you sift through the listings, you can jump right to any wiki entry on the map, and one more click brings up the Wiki page.
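
For the curious, here’s a minimal Python sketch of how such a KML file could be generated; the article and coordinates below are made-up examples, not Stefan Kühn’s actual data or code:

    # Minimal sketch of generating a Wiki-KML file like the one described
    # above. The article and coordinates are illustrative examples only.
    def wiki_placemark(title, lat, lon):
        url = "http://en.wikipedia.org/wiki/" + title.replace(" ", "_")
        return (f"  <Placemark>\n"
                f"    <name>{title}</name>\n"
                f'    <description><![CDATA[<a href="{url}">Open the Wiki page</a>]]></description>\n'
                f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
                f"  </Placemark>")

    entries = [("Brandenburg Gate", 52.5163, 13.3777)]  # (title, lat, lon)

    kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
           '<kml xmlns="http://earth.google.com/kml/2.0">\n<Document>\n'
           + "\n".join(wiki_placemark(*e) for e in entries)
           + "\n</Document>\n</kml>")

    with open("wikipedia.kml", "w") as f:  # load this file in Google Earth
        f.write(kml)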

This points to how Google and others can continue to integrate the web (esp. web searches) with the Earth. But the next step is for the Wiki entries themselves to contain the geocoded references that could drive GE (or another Virtual Earth, if you prefer) to the right spot to provide geospatial context for each article. That same idea applies to Google Search, which could easily add a “see it on Earth” link for any geocoded results (news especially).

That two-way link is key. Ideally, the Wiki entry could show in a pop-up bubble over the 3D world, though that’s currently limited by browser technology (see the series of articles on Web3D).

The ultimate integration is to have the 3D Earth and the 2D web live in the same big application space, synchronizing both space and time (for historical entries, for example), growing and shrinking in visual importance based on relevance to the task.


Here are some more links, including a version of the above Wiki-KML that streams the data as needed to avoid loading it all at once.

1 Comment

How to Reform the Patent System in 3 Easy Steps

With all of the changes Congress has pushed and all of the rhetoric flying around, one could be forgiven for thinking the problem with patents is that they’re either too hard to get or too costly to defend. I have a slightly different analysis, which is based more on what patents are supposed to accomplish, and how they are currently failing us as a society.

The original purpose of patents was to grant an inventor a limited monopoly on a useful invention for a fixed period of time, to prevent someone else from coming in and simply copying a novel technology. In exchange for this protection (and this is critical), the inventor would serve the public good by disclosing the invention in sufficient detail that a third party could legally license it to make their own product, thus benefiting the inventor, the competitor, and society, and serving both creativity and commerce. It was a powerful idea then, and it can be once again.

What happens today is that patents are granted for abstract ideas, inventions with no working prototypes, or inventions whose details are kept secret, more as a way to stifle competition by throwing up roadblocks than as a reward to the original inventors. This serves neither commerce nor creativity. Lone inventors can’t risk being sued over any of a million vague patents lurking out there. And a company wishing to research a new product may be told by its lawyers to actively ignore the vast patent database, for fear of treble damages for willful infringement. The approach instead is to treat any "submarine" patents that later emerge as a risk of doing business.

If those patents do emerge (and they increasingly do), the threat of expensive litigation often forces some outcome other than justice, because it comes down to who has more money to fight. And with the changes Congress favors, it will also come down to who files a patent first, not who actually invents first. That last item is the most troubling to me, because rushed patents can’t possibly be complete enough to serve as anything more than a placeholder, stifling competition in favor of the company with the best lawyers, not the best inventors. And again, society is ill served.

So what do we do about it, other than throwing out the whole system? Here are three easy steps.

Step one requires that all new patents must meet three basic tests, derived from the original purpose of patents:

  1. Is the invention novel and non-obvious to a person skilled in the art? In other words, is it really new? Just as scientific papers are peer reviewed before publication, require peer review under NDA. The goal is to prevent any company from patenting something that is already common knowledge to people in the domain.
  2. Does the patented invention actually exist? Is there a working prototype that solves the problems claimed? The goal here is to prevent overly broad patents or patents that are still highly speculative. [such as the Sony patent on mind-control using sound waves or a few others I could mention.]
  3. Does the application include all of the information that a licensee would need to recreate the invention? This question presumes the purpose of patenting is to actually license technology, which should always be the case. So the licensor must, at some point, fully disclose the invention and give detailed guidelines for recreating it. This test, then, simply requires that this work be made public when the final monopoly is granted. It does not compel licensing; those terms remain subject to market forces.

Step two retroactively applies these rules to existing patents, starting with the most recent patents first. Companies should be given at least a year’s notice to begin revising their original patent application to conform to the new rules and another year to finish. If they don’t complete this task, they would lose patent protection after two years. Patents that are set to expire within two years of such notice would be exempt. If a patent exists and has been licensed, then meeting all of these tests should be relatively easy. Detailed licensee instructions should already be available. For patents that haven’t ever been licensed or which have been sold to IP holding companies, there may be some added work, but it is for the public good. And any company that feels disclosing the full IP would damage them could withdraw the patent at any point.

Step three, coincident with step two, publishes clear guidelines for new inventors as to what has been patented already and what technology is available for license. A simple keyword search is not sufficient. It really needs to be a database of ideas, techniques, and problem domains. The goal would be something that doesn’t just tell an inventor what not to re-invent, but also how to obtain those existing pieces and at what price. Inventors and business people can then sit down and judge how to best approach the introduction of a new product or service using both novel and licensed technology as appropriate. This database should also include systems for peer review of existing patents, such that concerns can be publicly logged (especially concerns that a disclosed patent doesn’t meet its claims) and the status of any legal actions regarding individual patents can be seen by all.

These are the steps to a sane patent system, one that helps inventors, protects investments in R&D, and advances the public good by fostering creativity and a more level playing field for inventors and innovative companies.

No Comments

Google Co-Op

If you’ve done a search on Brownian Emotion recently, you noticed a new look. I switched over to using Google Co-Op for intra-BE searches. The results show up in the main blog window, above the blog entries. The downside is that Google inserts a few text ads, and I’d hoped to avoid ads unless the site got expensive (it costs me only $6/mo now). But I think we can live with ads only on the search results, since Google is providing me/us with a useful service and the WordPress blog-search feature kind of sucks.

You can give it a shot in the "google custom search" box at right. It works just like Google, but with the nice Brownian theme.

I thought briefly about adding some AdSense ads for people who come here via regular Google searches, since most of those are for “NVidia Dawn Nude Hack” or “American Boobs” or some equally silly search that ends a millisecond later. I don’t think ads would help anyone much in that case. But I decided to leave it alone. If people want to visit for only a second before cutting away, that’s fine by me.

The only thing I ever objected to was that MySpace guy who deep-linked to my doctored image of a chicken for about 10,000 hits. And my only complaint about Google Co-Op is that I can’t seem to set the font size slightly smaller.

No Comments

Google Gets Political

Google Gets Political – Post I.T. – A Technology Blog From The Washington Post – (washingtonpost.com)

As Stefan notes, this is more of a political story than an IT one. I’ve been hoping for something like this for some time. Many thanks to the young googlers who bubbled this up in their 20% time. The Earth is a perfect metaphor for visualizing just about any geospatial information, and voting districts are certainly one important kind.

Back in June, I tried to challenge game and 3D-world developers to go one step further and use the kind of cutting-edge UI design and visualization found in games (esp. strategy games) to connect the dots between votes and fund-raising. I think the GE version will be an amazing tool to start. But for the next step, we should realize this is often more abstract than geography. Tobacco companies used to be fairly regional, and you could see their influence in a heat map of the mid-Atlantic US. But that’s less true now, as they expand farther into foreign markets with less protective laws. Oil and gas companies, meanwhile, are already multi-national. So seeing their effects in Congress may not make as much sense on a literal map. Geography is an important piece of the overall puzzle, but only a piece. And visualization can help us see the more abstract relationships too.

What I’d love to see next (perhaps from Google, perhaps not) is a dynamic charting app (2D or 3D) that can group officials by both regional and abstract issues, where brand new maps are created based on the best available information.

For example, if we clicked on Energy policy, we could see officials sorted spatially by how much money they receive from various Energy interests (imagine each lobby or issue given a corner of the screen, with each candidate’s position computed as the weighted average of those corners, weighted by relative contributions; see the sketch below). We’d see a picture of two or more sides of an issue from the money-flow perspective, which would also show how one side dwarfs the other in scale. And then, if we organized the data by voting record instead of money, we could see how far each official’s icon moves on screen. That is a visual measure of how "independent" he or she is. An official who receives money but doesn’t vote in lock step may be one we can tolerate. An official who votes with the money every time is essentially a paid puppet. We might be able to combine these two views and show, in a single glance, how much each official is working for the voters vs. the sponsors.
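
Here’s a small Python sketch of that placement math (the interests, corner assignments, and dollar figures are all invented for illustration):

    # Sketch of the placement idea above: each lobby/issue anchors a corner
    # of the screen, and each official is drawn at the weighted average of
    # those corners, weighted by the share of money from each interest.
    # All names and dollar figures below are invented for illustration.
    corners = {"Oil & Gas": (0.0, 0.0), "Renewables": (1.0, 0.0),
               "Coal":      (0.0, 1.0), "Nuclear":    (1.0, 1.0)}

    def position(contributions):
        """Screen position as the contribution-weighted average of corners."""
        total = sum(contributions.values())
        x = sum(corners[k][0] * v for k, v in contributions.items()) / total
        y = sum(corners[k][1] * v for k, v in contributions.items()) / total
        return (x, y)

    # A hypothetical official funded mostly by Oil & Gas lands near that corner:
    print(position({"Oil & Gas": 90000, "Renewables": 5000, "Coal": 5000}))
    # -> (0.05, 0.05)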

Anyway, those young googlers are certainly right that the web-based UI for this political information isn’t working as well as it should. But that doesn’t mean Google can’t also go further with this approach. I’ve always believed the "zoom and pan a sphere" UI that powers GE would work well for exploring any set of information, even more abstract relationships.

For example, consider the beautiful Budget Graph we’ve all probably seen. Now, it’s very densely packed with information, and the placement of each line item is hand-massaged to make it all fit. But imagine this same data wrapped onto a non-geographic Virtual Earth, and relaxed a bit to allow for more detail in the blank spaces between the data. Imagine zooming into this at lower and lower levels, flying around the data like some alien geography. Visual size on the new graph would probably still represent budget size. But now we could perhaps move circles around with the mouse to compare and contrast. Imagine being able to reorganize the data on the fly automatically too, while still keeping that nice pan-and-zoom paradigm.

That’s what I’d really like to see. We may yet come up with the killer app for politics: one that shows at a glance where your money is going and who’s putting it there, in a UI easy enough for anyone to use.

No Comments

The Gravity Train

Damn Interesting » The Gravity Express

This is one of those old ideas that’s just too interesting to let go. Tunnel through the Earth to link any two points. A vehicle (say, a train) released at one end will fall through and pop up at the other end about 42 minutes later, regardless of the distance and regardless of the weight of the train. The only energy added is to compensate for friction (incl. drag). Everything else is free.
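
The 42-minute figure comes from treating the trip as simple harmonic motion through a uniform-density Earth (an idealization; the real Earth’s density varies). A quick sketch of the standard calculation:

    import math

    # One-way trip time for a frictionless gravity train through a straight
    # tunnel, assuming a uniform-density Earth. The gravitational pull along
    # the tunnel is then proportional to displacement, so the train undergoes
    # simple harmonic motion with period T = 2*pi*sqrt(R/g) for ANY chord,
    # which is why the trip time is independent of both distance and mass.
    R = 6.371e6   # Earth's mean radius, m
    g = 9.81      # surface gravity, m/s^2

    one_way = math.pi * math.sqrt(R / g)  # half the SHM period
    print(f"{one_way / 60:.1f} minutes")  # -> 42.2 minutes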

Unfortunately, tunneling through the Earth is not only impractical, given our understanding of the core, it’s downright impossible. The heat and pressure are only the first problems to encounter. Radiation and magnetic flux are others. The linked article doesn’t go into the current best understanding that the outer core is molten, moving, and incredibly hazardous (the inner core, under even greater pressure, is solid). Imagine trying to support a rigid tunnel in a swirling field of viscous molten metal. And keeping a vacuum in the tunnel would be key, because the air pressure, and therefore the drag, would be much higher than at the surface, even though gravity at the center would go to zero.

However, the principle could work without such drastic tunnels. A tunnel from NY to Washington wouldn’t need to go very deep, perhaps not even past the outer crust. And a gravity tunnel from 14th St. to 42nd St. in NY would be extremely shallow. Of course, taking 42 minutes for that last trip would be a bit of a setback, so a steeper slope would be advised. Our subways could use only the energy they need to overcome friction. They’d work just like a roller-coaster, without the loops.
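
Just how shallow? A quick estimate of the deepest point of a straight-chord tunnel (the surface distances here are approximate):

    import math

    # Maximum depth of a straight-chord tunnel, given the (approximate)
    # great-circle distance between the two endpoints.
    R = 6371.0  # Earth's radius, km

    def max_depth_km(surface_km):
        theta = surface_km / R                # central angle, radians
        return R * (1 - math.cos(theta / 2))  # sagitta: depth at the midpoint

    print(f"NY to DC (~330 km): {max_depth_km(330):.1f} km")               # ~2.1 km, well within the crust
    print(f"14th to 42nd St (~2.2 km): {max_depth_km(2.2) * 1000:.2f} m")  # ~0.09 m -- barely a dip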

In wondering why they didn’t do that in the first place, I came up with only two reasons: one, many subway tunnels are actually just sub-streets, not bored into rock and earth (Broadway, for example, was excavated and then re-covered at street level); and two, putting two trains in the same gravity well is dangerous, since a train that loses power and can’t make up the friction losses to reach the next station would swing back and forth like a pendulum, setting up a potential collision. I think backup brakes could prevent that, but it’s a risk. Of course, I guess the third issue is that the tracks are so bad in NY that a constant-but-low speed is better than a sprint to the middle of the gravity well.

No Comments

Toshiba’s Head Mounted Washing Machine

The system exhibits a wide viewing angle of 120 degrees horizontally by 70 degrees vertically without head tracking, and 360 degrees x 360 degrees with head tracking. We assume the head tracking feature is afforded by the fact that it sits right over your head.

We jokingly came up with a similar idea at my first startup. We called it the "VR Lampshade." I can’t believe anyone’s actually doing it. It combines the worst of HMDs and projection environments in one inconvenient package.

One benefit of big projection environments is that the imagery is always there; even 360 degrees of video is possible. And there’s zero latency for simply turning your head (linear movement is another matter). HMDs, for their part, can have the advantage of portability. But this combination makes no sense in the long term.

BTW, the augmented reality version of this same technology would be to put a projector on your head that puts virtual imagery on whatever you’re looking at. I wonder if they’ll try that next.

No Comments

Do virtual globes distort the Earth?

Ogle Earth: A blog about Google Earth. « Do virtual globes distort the Earth? »

Stefan tagged an interesting question about whether the 3D globe on your desktop (say, Google Earth) is a truly accurate portrayal of the real world, or whether distortion is introduced by the computer graphics. The answer is that, ignoring sampling quality (e.g., of the satellite and aerial photos) and output image quality (from your 3D hardware), the 3D math forms a near-perfect reconstruction of reality, better, IMO, than any 2D map can manage.

2 Comments

Cheoptics360

YouTube – Cheoptics360 show Holographic Ads

I think we may need a new word, because “holographic” has been co-opted to mean “floating in air” as opposed to a true 3D reconstruction of an object’s light.

The way this deceive (oops, typo: device) seems to work is that four standard projectors on the big truss arms aim into the "empty" space and hit a nearly invisible screen, probably some sort of thin wire mesh shaped cleverly so that you can walk around it and not see it. So the major achievement is that you can watch 2D video from any angle on a mostly invisible screen (seemingly in mid-air), albeit inside a fairly bulky truss.

As for the claims of this being volumetric or 3D, I don’t see any evidence of it. To be 3D, it needs to deliver a different image to each eye. And to be truly 3D, those images would need to vary depending on where you stood: if you walk around a real 3D car, you don’t keep seeing the same angle, stereoscopic or not. That’s hard to tell in a gootube video. But as the camera moved around, I noticed the objects (though generally moving, spinning, etc.) followed us as if our perspective didn’t matter. You can see it if you look carefully.

There are systems on the market that can do true 3D (though still not true holography) using rapidly spinning mirrors, very fast projectors, and a big-ass computer to pump all that imagery through. But the speed we’re talking about is something like 10,000 RPM, and the four projectors would need to update at at least 1/4th of that rate. There’s a reason those systems are small and contained inside a protective glass hemisphere, probably with a vacuum pulled to reduce turbulence. Something this big would create a mini tornado if it spun at anything close to that speed.
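
Running the post’s own ballpark figures through the arithmetic (these are guesses from the text, plus an assumed sweep radius, not the specs of any actual product):

    import math

    # Rough numbers implied by the ballpark figures above -- the post's
    # guesses, not the specs of any actual product.
    rpm = 10_000
    revs_per_sec = rpm / 60
    print(f"{revs_per_sec:.0f} revolutions per second")            # ~167 rev/s

    # "Update at at least 1/4th of that rate" works out to:
    print(f"{revs_per_sec / 4:.0f} fresh frames/s per projector")  # ~42 fps

    # Why a truss-sized version is absurd: assume a 2 m sweep radius
    # (a pure guess at the scale), and the edge of the spinning surface moves at
    tip_speed = 2 * math.pi * 2.0 * revs_per_sec
    print(f"tip speed ~{tip_speed:.0f} m/s")  # ~2094 m/s, several times the speed of sound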

This system seems like it would be nice in the middle of a convention center or big retail store to attract attention, but not much more.

No Comments

Reality Check: Teleportation: Going Nowhere Fast

Howstuffworks “How Teleportation Will Work”

I’m a big fan of anything that reduces travel time, and of "super-high-tech" in general. But teleportation, as it’s popularly portrayed, is a very silly concept. And the popularization of the modern scientific experiments in the quantum realm has taken on the proportions of a mass delusion. It’s time for a reality check.

The linked article is a perfect example. The author glosses over the key problem with teleportation experiments thus far (and even with the theoretical constructs): you can’t use this phenomenon to send useful information. Scientists can show that two or more entangled particles remain entangled and seem to "send" information about their state instantaneously. But when the remote observers eagerly discover the spin of their particle, it does them little good, because we can’t control the source particle’s state without destroying the entanglement. In other words, we can’t use this device to send even a one-bit telegram. And that’s not changing anytime soon.

But assuming that problem can be solved eventually, the article also glosses over the idea that copying the roughly 10^28 (his estimate) atoms in a human body would make a perfect copy. After scanning, you most importantly have to re-assemble those atoms into the original configuration. If we can do that reliably and quickly, we have much more important technology than a teleporter (see below). Moreover, we are each more than a collection of atoms. There is a dynamic electro-chemical flow throughout our bodies that must be captured and recreated, or else the teleported copy is quite literally a corpse. So it’s more like 10^28 atoms plus who knows how many buzzing electrons, in four dimensions, not three, plus field effects and perhaps other unknown processes floating about. Good luck.
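
As it happens, that 10^28 figure is at least the right order of magnitude; a sanity check with round numbers:

    # Order-of-magnitude sanity check on the ~10^28 atoms estimate.
    # By atom count a body is mostly hydrogen (thanks to all the water), so
    # the average mass per atom is low -- call it ~7 atomic mass units.
    # These are round numbers for a rough check, nothing more.
    body_mass_kg = 70
    avg_atom_mass_kg = 7 * 1.66e-27   # ~7 u per atom, converted to kg

    atoms = body_mass_kg / avg_atom_mass_kg
    print(f"~{atoms:.0e} atoms")      # -> ~6e+27, i.e. about 10^28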

Even assuming we solve that problem and make a living human copy at the other end, it’s still, after all, a copy. We haven’t done anything to transfer your consciousness (not to mention your soul, if you believe in it). Now, the copy may look like you. If we solve the electrical problem above, it might even think it’s you. But if you two ever met, you’d be left to duke it out over the property rights and who gets the wife. Plus, call me cowardly, but I have no intention of destroying my original body simply to go traveling. The only way I can imagine using such a device is to escape an exploding building or planet, where I’d likely die anyway. Would you allow your original body to be destroyed just because your perfect copy says he or she is really you? Without experiencing the trip myself, I’d say no.

But there’s an even more fundamental flaw in the popular delusion of future teleportation: if we have the ability to send instantaneous messages over great distances and can rapidly arrange that many atoms at once, then why would we ever want to go anywhere?

I mean, that same technology, plus some clever software, is more than good enough to build the best possible holodeck we can imagine. Why not just send holographic "cameras" to remote destinations and beam the scene back to our living rooms using our nice instant communication? That would undoubtedly seem as real to us as being there. Plus, if we wanted, we could still send some atomic body-arrangers to the remote site to create a walking avatar of ourselves: not a distinct living person, but simply a virtual reflection of us and our actions, in real time.

To the extent we allow the remote environment to affect that avatar, and us through it, we are at some potential risk. But if we limit our exposure to the normal sensorium and, for example, prevent bullets and rock slides from affecting our source bodies, that’s a hell of a lot safer than teleporting there, and by any reasonable measure it’s just as good as being there. I imagine some combination of sensor and avatar that we could send in a very small package. Perhaps we could teleport that (or at least mail it, or rent one on site) and let it represent us and send the scene back to us.

That’s what teleportation will look like, IMO: not beaming us down to the planet like in Star Trek, but sending our virtual selves out to explore at minimal risk, and at much less cost than dying every time we step on the pad. One would think that even Captain Kirk would see the benefits of sending a virtual Kirk down to the dangerous planet surface instead of himself.

2 Comments