Researchers at IBM Zurich Research Laboratory claim a novel approach to accessing patient medical records — using the human body as the 3D framework in the same way that Google Earth uses the Earth as a framework to fuse and navigate geospatial information. Spin the body, click on a body part, and zoom in closer to get more information.
The Big Picture
Just to be clear, I’m going to move the discussion beyond the IBM system above to talk about the "big picture" of what’s coming in the future. Despite the press release, the IBM system is not quite "Google Earth" just yet, because GE is much more than spinning a globe with URLs to click — it’s really a framework for fusing just about any geospatial data together, especially user-generated content, as well as a 3D search engine for the same. But building a true "Google Earth" for the human body turns out to be a challenge many times harder than building the actual Google Earth itself (which was not easy either). Any naive implementation is doomed to be a flash in the pan, a cute but limited toy, if it works at all.
How can I be so bold in my prediction? Because I’ve been researching this beast for a number of years and I have a pretty good idea of what it will take to build it.
For starters, the human body is volumetric, and Google Earth isn’t — it’s only mostly 3D. Most things live on the surface of our lumpy oblate spheroid, divided up into essentially 2D zones, with altitude added on top. You can’t (yet) fly inside the Earth to see the layers (crust, mantle, and so on) or add any data there. You can’t even see a cross-section, though at least that would be easy enough to add with some graphics-engine tricks.
But the biggest problem, beyond dimensionality, scalability, data storage and streaming, and even beyond 3D navigation metaphors (which are all hard enough on their own) is the one most people don’t even think about until they sit down and try to actually build something like this — topology, geography, cartography — coming up with the equivalent of latitude and longitude or even X,Y,Z, for the human body. Oh, that.
Think about it. Cartography in the geographic context has been developing for many centuries, first in a crude approximation ("here, there be monsters…") and growing closer and closer to an accurate representation of the real world ("here, there be McDonalds…").
Cartography has had time to mature, to work out solutions for problems such as: how do we best project our lumpy oblate spheroid onto a 2D piece of paper to most accurately convey relative size and distance? It turns out there are a dozen or so significant coordinate systems and projections that answer that very question, for various purposes and to varying degrees of success. And now 3D virtual globes bring us back to a more spherical, real-time reality, and the field evolves (some would say, is revolutionized) yet again.
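As a tiny illustration of what one of those coordinate systems looks like in practice, here is a sketch of the standard spherical Web Mercator projection used by most online maps (this is textbook cartography, not anything specific to Google Earth's internal implementation):

```python
import math

def web_mercator(lat_deg: float, lon_deg: float) -> tuple[float, float]:
    """Project latitude/longitude onto the Web Mercator plane.

    Returns (x, y) in meters, using the standard spherical formula
    with Earth radius R = 6378137 m. Like every projection, it makes
    a trade-off: area is badly distorted near the poles.
    """
    R = 6378137.0  # WGS84 equatorial radius, in meters
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# The corner of Broadway and Canal St. in Manhattan:
x, y = web_mercator(40.7193, -74.0005)
```

A human-body equivalent would need an entire family of such mappings, one per reference frame, and they would have to vary per person and per moment.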
But "human body mapping" faces a technically much harder problem: there is only one Earth, but there isn’t even one official human body to refer to — more like seven billion unique ones, and not just superficially, in terms of height, weight, or sex. The location, orientation, shape, and size of body parts, organs, even blood vessels, can vary even within a single person over time: as we move or sit, are injured or sick, pregnant or tense, and especially as we age. The biospatial mapping problem is effectively four-dimensional, not three.
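To make that four-dimensional nature concrete, here is a minimal sketch of what a biospatial coordinate might have to carry. All the names here are hypothetical, invented for illustration; no such standard exists yet. The point is that, unlike latitude and longitude, a body position only means something relative to a per-subject anatomical reference frame at a particular time:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BiospatialPoint:
    """A hypothetical 4D body coordinate (illustrative only).

    The reference frame and the timestamp must travel with the
    coordinates, because the same tissue moves between scans and
    no two bodies share a global coordinate system.
    """
    subject_id: str  # which of the ~7 billion unique bodies
    frame: str       # e.g. "liver-local", an anatomical landmark frame
    x: float         # millimeters within that frame
    y: float
    z: float
    t: float         # seconds since some reference scan

def displacement(a: BiospatialPoint, b: BiospatialPoint) -> tuple[float, float, float]:
    """Naive displacement; only meaningful if subject and frame match."""
    if (a.subject_id, a.frame) != (b.subject_id, b.frame):
        raise ValueError("points are not comparable across frames")
    return (b.x - a.x, b.y - a.y, b.z - a.z)
```

Even this toy version shows the core difficulty: most pairs of points are simply not comparable until someone solves the frame-to-frame registration problem.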
The general solution medicine has come up with to deal with that dynamism and individuality is an overly crude and ambiguous mapping system using words like anterior and posterior, medial and lateral with simple counting systems (C1, D6), paired with names that are the stuff of nightmares for first year med students, patients, and software engineers alike. Anatomy textbooks typically resort to artistically rendered, idealized drawings and a few sample photos to help teach what goes where. And then students spend time with actual cadavers getting their gloves dirty to really understand.
Here’s an analogy to help you understand how coarse and confusing the current mapping system can be. It would be like trying to find a specific apartment at the plexus of the posteromedial canal of the Islets of Manna-hata and the posterior broad main artery of New Amsterdam. Sure, you could show up at the corner of Broadway and Canal St. in Manhattan, given that description… and a Latin-to-English dictionary… and perhaps a history book… But without an actual street address and apartment number — or the equivalent of latitude, longitude, and altitude — you’d still have to look around for a while until you found it, and then you’d have to remember its location for next time.
However, that and a magic marker are how many surgeons specify where they’ll place incisions on your body after looking at a few 2D x-rays and doing some hefty mental gymnastics. That’s one reason surgeons get paid so much, and also why there are as many dumb mistakes as there are (which goes back to cost). Just as a map of the coast could save your royal armada from doom, a correct digital dynamic map of the human body would save many lives and who knows how much money.
And the problem is inordinately harder when you talk about the human nervous system. Compared to the brain, most of the body is relatively uniform from person to person, even accounting for size — and simple, too. The cortical folds, or convolutions (known as gyri and sulci), are as unique as any fingerprint, and apparently their topology is functionally significant as well. And if there’s any place where precision really matters, it’s in the brain. We don’t want neurosurgeons stomping around our brains like conquistadors, exploring, if you will, to determine whether they’re planting the national flag in the correct tissue.
So researchers have been trying to come up with ways to map the brain that can apply to any number of people, despite our many differences and the complexity of the information. It’s an active area of research, with the ultimate goal of solving that important problem for the whole body too, from birth to death and moment to moment. It can and will be done — and hopefully, in an open way, especially if the goal is to unify doctor-patient communication, medical teaching curricula, and even scientific discourse under one framework and with one free-to-use application.
Now, that won’t stop a few eager startups from offering up solutions under the dream of being "The Next Google Earth," hoping to be snatched up by Google or to go public. That can’t be avoided, I suppose. But when we start talking about mapping your MRIs, CAT scans, x-rays, surgeries, dental work, and so on, managing all of your health information in a way that works from person to person and time to time, we really need to do it right from the get-go. Lives are on the line, and trillions of dollars of vested interest are looking on intently. The problem is just too big and too complex for any single entity to solve, much less own, even Google, Microsoft, or IBM — and they know it, or at least they say they do. Time will tell.
Remember, Keyhole and later Google only patented certain key technical features of how it works, not the fundamental "how to map the Earth" solution. And Google’s goal is now to make KML an open standard, which is the right approach to building such a comprehensive framework to unify all geospatial data. The same kind of approach would need to apply to biospatial data, with even more work on filtering and privacy controls for the personal specifics.
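By analogy with KML, a biospatial annotation document might look something like the sketch below. The element vocabulary is entirely made up for illustration (it is not the actual HML schema or any real standard), built and parsed here with Python's standard XML tooling just to show the KML-like shape such a format might take:

```python
import xml.etree.ElementTree as ET

# A hypothetical KML-style body annotation; all element names invented.
doc = """\
<bml>
  <bodymark id="annotation-1">
    <name>Suspected lesion</name>
    <organ>liver</organ>
    <frame>liver-local</frame>
    <point x="12.4" y="-3.1" z="40.2" t="0"/>
  </bodymark>
</bml>
"""

root = ET.fromstring(doc)
mark = root.find("bodymark")
organ = mark.find("organ").text       # which organ the annotation attaches to
point = mark.find("point").attrib     # coordinates within the named frame
```

The open-standard part is the hard part, just as it was with KML: everyone's tools have to agree on what `frame` and `point` mean before annotations become portable between bodies, scanners, and vendors.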
But when the time comes that researchers and hobbyists alike find easy access to that fully-annotatable Google-Earth-like application for the human body, on which they can post HML ("human markup language," which has so far been focused on superficial, external, or expressed traits) or find answers, you’ll really have something special. You’ll see the same kind of emergence and new market potential we see with Google Earth and the like.
So, for example… The software might show you and your doctor your virtual body, with your own medical history of course, but now with all sorts of added information re-mapped from external data sources to match your personal details. Click on your liver, and you’ll bring up expert systems to help diagnose concerns, or reveal the latest experimental treatments, drug suggestions, or links to Alcoholics Anonymous. Medical students would have access to the standard coursework as well as the latest research, contradicting the 30-year-old information that’s still being taught (and occasionally killing people) around the world.
I know several of the leading academic researchers who are working on the biospatial problem (outside of Google, of course) — especially in the brain — and it’s been a keen area of interest for me and my wife (who is a neuroscientist and neuroanatomy lecturer, as it turns out). But it will take time and patience and hard work to see the results from the various teams and companies and to work through the inevitable phases of secrecy and hype to get to the other side.
While it’s powerful and timely, the IBM Zurich project is not exactly a new idea. Their claim is somewhat of a stretch, as is the comparison to Google Earth, at least for now, though their progress thus far is still impressive. In reality, some of the same people who were thinking ahead about using the Earth as the most intuitive interface for geospatial data ten or fifteen years ago were well aware of (and even doing research on) how the human body would also make the ideal interface to things like… say, patient medical records…
And that was before the general public started seeing the benefits of a 3D spatial browser like Google Earth. But what’s described from IBM research is only a sliver of the potential win here. I would consider this demo the "low-hanging fruit" compared to what’s possible and coming in the next 3-5 years…
Keep in mind, when I wrote the old code to draw names on curving roads and near placemarks for the very first version of Keyhole’s software, I could read a few basic textbooks on cartography and apply my engineering skills to make it work in a dynamic, real-time context. When I wrote the early precursor to KML (mostly for adding UI elements and new features), I chose to make up a simple lexical grammar because XML was barely known and not yet standard. In the case of mapping the human body, we’re still in 1492: very little is settled, and it’s still a whole new world we’re trying to conquer and understand.
P.S. Apologies to Native Americans for being on the metaphorical butt-end of my Columbus references.