Archive for February, 2009

Facebook Sputters

Amidst the backlash against Facebook’s new terms of service, the company tries desperately to clarify that it never intended to claim ownership of its users’ content. It makes analogies to email, where a copy of a message would be kept even if the sender wished for it to be deleted. Read the rest of this entry »

2 Comments

Only 39% Believe in Evolution?

According to a Gallup Poll, only 39% of Americans believe in evolution. There’s no number cited for how many, including the Gallup pollsters, don’t understand that evolution is not a belief, not even a theory, but an observable fact. Saying you don’t believe in evolution is like saying you don’t believe the sun came up today.

One can believe that a supreme being exists and, at any point in time (including now), created the known physical universe as we find it. But one cannot easily deny that the universe we find is physical, or that it follows laws that we can study and understand.

Evolution is fact. We can observe its effects, and even watch it happen. What is theorized about evolution are the many mechanisms by which it works. We understand quite a bit. But we don’t and probably never will understand every last minute detail. Our understanding itself evolves over time.

But you don’t need to count every gaseous molecule to tell you there’s an atmosphere. And you don’t need a Ph.D. in history or education to tell you that these statistics may be indicative of a tendency toward devolution as well.

 

No Comments

Key Study Linking Autism and MMR Vaccine May be Fraudulent

The [British] Sunday Times reports:

THE doctor who sparked the scare over the safety of the MMR vaccine for children changed and misreported results in his research, creating the appearance of a possible link with autism, a Sunday Times investigation has found.

If true, the research paper should be formally retracted, and, I would hope, the doctors and scientists involved sued into oblivion for gross malpractice.

Read the rest of this entry »

2 Comments

Real-Time Privacy

There’s some significant buzz about Google’s new Latitude service, and not all of it good. Personally, I think real-time location services have a lot of potential. But my concerns go way beyond those of these privacy groups, and beyond what happens if a specific person tracks me without my permission — people can already do that quasi-legally, though it isn’t anywhere close to free.

First, one of the good tidbits is that they claim the service will be coming to the iPhone soon — that’s great if it means iPhone background services (IBS?) are on the way. I mean, location-service apps aren’t really useful if my location stops broadcasting when the app closes or changes, or the phone goes to sleep. I would think they’d want to wake the phone only periodically, so as not to waste my battery completely. There’s some speculation that Apple will abandon its Push technology push and move background apps off the back burner, but perhaps only for a blessed few, of which I’d normally expect Google to be one.

Of course, the other way Google can do this is if it has a hard-to-get deal with AT&T. If it had that, it wouldn’t need any software on the phone, just my phone number and a way to time packets from specific cell towers near me. Seems unlikely, as AT&T could just do that itself, if it wanted.

Anyway, here’s the main concern that even the privacy groups fail to grok. If my location history is stored for 1 week or 18-24 months, or if someone, say a government agency, has "physical" access to the raw packets Google uses to send my location around, then what we have is a way for the government to track its citizens, whether or not we invite it as a friend.

That’s probably a risk in any event, if you’re the subject of a warrant. There have been cases of GPS devices being placed on cars of suspected criminals. My concern is not with that, since it’s court-ordered and theoretically for the best. My concern is with methods that catch everyone else in their net, like the warrantless wiretapping scandal from the last administration that never quite got resolved. I just don’t see how Google or anyone could prevent mass spying on people via this kind of service, even with encryption and obfuscation — if AT&T is letting the NSA scoop everything, then that includes my location updates. And if Google stores anything, it’s subject to seizure without notifying you, just like your search histories (the seizure of which Google publicly defied — but I didn’t hear if it really won in the end…).

I do see a way out of this mess, legal and political. The key thing Google (or anyone doing this sort of thing) should do IMO is set up its terms of service such that the information it’s collecting is considered private (owned) property of the person being tracked. It can carve out whatever exemptions and waivers it needs to not get sued — I don’t care. But we need to establish that any information I create, whether directly or indirectly, is owned by me, as much as any artwork, or even this blog post (which automatically gets full copyright protection the instant I create it). Google can broker it for me, make money on it for me, taking a cut. But it’s real property, the loss of which does actual harm.

Government can of course still seize our property, but there’s a much higher bar. Google has a tremendous opportunity to help set precedent through a simple terms of service change. This is the same ToS change that Second Life made, and it’s arguably one of the most important things they ever did for their business.

But I doubt Google will follow suit. The main point of services like this is to obtain data, and guarding it matters only insofar as that helps make money (making money is not evil, in and of itself). Google may agree never to divulge your current location without your permission, at least to the extent it can help it, as above. But did it agree never to tell advertisers which of their ads resulted in more people going into their physical stores? Because if it can determine which ads are more effective, or which resulted in real, almost, or likely sales, it can charge more for each ad shown — which is its bread and butter, after all.

Connecting a specific person to a specific sale might be even more lucrative, but that would more clearly count as "divulging your location" in my book at least. And it wouldn’t strictly require your real-time location if the ad view and credit card transaction can be connected by your name, for example.

More likely is that the [now-cliche] "Starbucks coupon" scenario may appear at some point — by tracking your location, you may someday agree to see ads or inducements targeted to your location, such that your view-to-purchase cycles can more anonymously be aggregated into more significant cash flow, all via your phone.

The bottom line of all these philosophical wanderings is that for a real-time-location feature to be successful, for people to (currently) trade personal privacy for some great benefit, that benefit must be pretty strong. I guess Google will find out in the next year or so just how strong.

Hey, if I had all the answers, I wouldn’t need to work for a living.

 

1 Comment

The Inevitable Clarification

Frank Taylor of the fine Google Earth Blog writes (first link goes back to my previous post):

Changes to KML – to enable some of the new features in GE 5, Google had to make extensions to the standard KML. However, some have expressed concerns that Google has done this, saying it will be harder for other companies to adopt the standard if Google changes it. Google has heavily documented the changes, they say they’ve used provisions provided by the KML standard to make changes. And, they’ve updated libkml for the new features. I would expect they would like to see these new extensions become a part of the standard in a future update.

My concerns are not about extending the standard — standards are made to be improved over time. I’ve also not accused Google of violating protocol by adding extensions or using namespaces, or of providing insufficient documentation after the fact. I hope I was amply clear about those points, and that they need no further clarification.

I should also add that my only aim here is to see more companies, including Microsoft, more readily adopt KML, not just for import into siloed apps, but to build a living, breathing geoweb. I went through a lot of pain to write KML’s crappy unnamed predecessor at a time when it was not a very safe or popular thing to do. The code and schemas may be new, and my power to affect things is greatly diminished, but the goal remains the same, and I’d imagine that all of Google shares it by now.

However, here’s my concern with some of the more defensive comments I’ve read. As I understand it, Google proposed this new OGC MMWC fast-path last fall, and the body, including Microsoft, btw, accepted. Perhaps it’s the best path, given how hard it is for OpenGL, for example, to advance collaboratively. I’m sure it’s at least logical and can be defended on the merits. But implying it’s the only possible path, or one the mysterious OGC dictated against Google’s wishes, just seems wrong to me, and not a very effective way to meet criticism. Own the decision and convince people, if you want to convince people.

The kind of path that I would personally recommend, not that anyone asked, is one in which companies seeking to improve standards (in general) first put out their ideas as public requests for comment, hold a review period before changes launch, and, unless it violates some vital corporate security interest, just stay more open about the process, which will hopefully put their collaborators/competitors more at ease. The other downside of the current approach is that there’ll be lots of new KML that will need to get tossed, partially deprecated, or somehow converted, whether or not the new features get debated and adopted. Worse yet if three or four companies each want to add new features. The secrecy doesn’t seem necessary, given that lots of people are already on the same page and happy to help.

I’d love to hear an argument as to why RFCs won’t work, or why this new KML documentation, for example, couldn’t have been released before the new version of the app came out, even without asking for comments. I mean, I wrote about the new ocean data last spring, and yet it still made a big splash (pun intended) at official launch this week. Few people care about this stuff until it’s released.

Let’s just leave it at this deep ponderable thought: if you swapped ‘Microsoft’ for ‘Google’ in this scenario, what would the slashdot commentary have read?

Microsoft unilaterally alters ‘open’ geospatial standard to support proprietary new features, blames standards body for process.

Google should be grateful, in many ways, that it doesn’t have Microsoft’s baggage to carry around. But why start to collect?

 

No Comments

Earth5

Congrats to the Google Earth team on the launch of GE5. The ocean feature is especially nice, though I’m not sure why anyone would be surprised. I remember stories from last spring (I reflected on this then, so I won’t repeat) about Google collecting the data, and this is the logical conclusion of that work. The water rendering is very clean, and frankly, the only thing it’s missing is sound effects and a dry towel. Mars is also very welcome.

The only thing I’m frankly not too excited about is the gx: namespace for new KML features. Why? Well, I (and others at MS) have been trying to encourage my friends on the various Virtual Earth teams (2D, 3D, 20D…) to more fully embrace and support KML, open standard that it is. Speaking only for myself, I think this makes it harder for MS and other companies to do so, not that I’m speculating on official future support either.

To be clear, it’s legitimate for any company to add custom extensions to its own implementations of KML, for its own users, when it feels that’s the best or only option, and namespaces are the best way to do that. Also to be clear, I believe MS voted to support a recent Google-backed rule change at the OGC that would fast-track these sorts of optional incompatibilities to standardization, retro-facto. This is pretty much what I guessed we’d see as a result, though maybe not so readily. I would have hoped for a more collaborative approach to advancing the spec.
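To illustrate the mechanics, here’s a minimal Python sketch (standard library only) of how an XML extension namespace like gx: coexists with standard KML: a namespace-aware parser can read the standard elements and simply skip extension elements it doesn’t understand, which is exactly why namespaces are the sanctioned way to extend the format. The specific element and values below are just an example fragment, not anything from GE5’s actual output.

```python
import xml.etree.ElementTree as ET

# Standard KML 2.2 namespace, plus Google's "gx" extension namespace.
KML_NS = "http://www.opengis.net/kml/2.2"
GX_NS = "http://www.google.com/kml/ext/2.2"

# A small KML fragment mixing standard elements with a gx: extension
# element (gx:altitudeMode is one of the GE5-era extensions).
doc = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="{KML_NS}" xmlns:gx="{GX_NS}">
  <Placemark>
    <name>Extension demo</name>
    <LookAt>
      <longitude>-122.08</longitude>
      <latitude>37.42</latitude>
      <gx:altitudeMode>relativeToSeaFloor</gx:altitudeMode>
    </LookAt>
  </Placemark>
</kml>"""

root = ET.fromstring(doc)
ns = {"kml": KML_NS, "gx": GX_NS}

# Standard elements resolve in the KML namespace as usual...
name = root.find(".//kml:Placemark/kml:name", ns).text

# ...while extension elements live in their own namespace, so a client
# that doesn't know gx: can ignore them without breaking.
alt_mode = root.find(".//gx:altitudeMode", ns).text
print(name, alt_mode)
```

The namespace URI, not the `gx:` prefix, is what identifies the extension, so documents stay unambiguous even if another vendor picks the same prefix for its own additions.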

Anyway, it’s not earth-shattering, so to speak. In fact, the new work I’m doing has very little to do with any of this (I don’t ever want to do the same thing twice, in any event). So I’m quite happy to see Google continue to push the envelope in delivering an increasingly perfect mirror of the natural world, and an impressive structure for hanging all of your important geospatial data.

Congrats again to the whole GE team.

 

 [edited middle section for clarity]

4 Comments