Now here’s a camera I’d love to buy. It’s certainly not the first in the field of computational photography. But it’s nice to see companies start supporting this at the application level too.
Now, there are lots of ways to get 3D images via digital photography. You can fire lasers or IR pulses from your camera and measure distance per pixel from the round-trip time. You can take stereo pairs and infer depth from disparity. But this method goes much further: it captures many similar pictures from slightly different angles and can then synthesize a view of the scene from any nearby point.
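The first two approaches come down to simple geometry. A quick sketch of both formulas, with hypothetical numbers (the function names and values here are just for illustration):

```python
# Speed of light in m/s, used for time-of-flight depth.
C = 299_792_458.0

def tof_depth(round_trip_seconds):
    """Time-of-flight: a pulse travels to the object and back,
    so depth is half the round-trip distance."""
    return C * round_trip_seconds / 2

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Stereo pair: depth = f * B / d, where d is the horizontal
    shift (disparity) of the same point between the two images,
    f the focal length in pixels, and B the camera separation."""
    return focal_px * baseline_m / disparity_px

# A 20 ns round trip puts the object about 3 m away.
print(tof_depth(20e-9))
# 1000 px focal length, 10 cm baseline, 25 px disparity: 4 m.
print(stereo_depth(1000, 0.10, 25))
```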
In other words, it’s as close to real-time digital holography as we can currently get. And the result is a photograph that not only knows the distance to each pixel, but can also be tilted in 3D to see around objects, or have its focal plane shifted across the image.
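That refocusing trick is usually done by shift-and-sum: each sub-aperture view is translated in proportion to its offset from the lens center, then all the views are averaged. A minimal sketch, assuming you already have the views and their (u, v) offsets as arrays (every name here is hypothetical, not any vendor’s actual API):

```python
import numpy as np

def refocus(subaperture_views, offsets_uv, alpha):
    """Shift-and-sum refocusing sketch.

    subaperture_views: list of 2D arrays, one per viewpoint.
    offsets_uv: matching list of (u, v) integer offsets from
                the lens center.
    alpha: scales the shifts; varying it moves the synthetic
           focal plane through the scene.
    """
    acc = np.zeros_like(subaperture_views[0], dtype=float)
    for view, (u, v) in zip(subaperture_views, offsets_uv):
        # Shift each view by alpha * its angular offset, then sum.
        dy, dx = int(round(alpha * v)), int(round(alpha * u))
        acc += np.roll(view, (dy, dx), axis=(0, 1))
    return acc / len(subaperture_views)
```

Points at the chosen focal plane line up across the shifted views and stay sharp; everything else lands at different pixels in each view and averages into blur.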
And when companies start using cameras like these, along with computational photography techniques, to compute novel viewing positions (i.e., positions that sit between or outside the actual samples), we’ll have a truly 3D photographic world to explore.