Imaging with vectors

Even though it took way too long (I had been admiring it for quite some time), I recently became the first kid on the block to own a Lytro. The Lytro, if you haven't heard, is sort of like a camera, except that it definitely isn't. Apart from a viewfinder on one end, a piece of glass on the other, and a shutter release button on top, it doesn't really look or feel like a point-and-shoot or SLR either. It actually bears a closer resemblance to a pocket-sized telescope. So don't you dare call it a camera. Indeed, the thing that the Lytro is built to do is what makes it completely different from any camera, and this, perhaps, is the best mark of its identity. It captures not only the intensity of the light rays hitting the sensor (or film), but also the directionality of those light rays.

So what, right? What does this mean? Why is this interesting? It means that with a light-field camera, the focal point and depth of field are parameters that can be controlled by the viewer. It is interesting because it frees up space, and the physical atoms of hardware, by deliberately removing the motorized auto-focus mechanism and placing that control instead into the capable and powerful hands of software. I find it particularly elegant that this technology was achieved by harnessing light's true nature better than any camera that came before it. A device designed to record light as light is: a physical property defined by both a magnitude and a direction.
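To make that concrete, here is a minimal sketch of what recording direction as well as intensity means as data, assuming the common two-plane (u, v, s, t) parameterization of a light field. The array shapes and names are mine, for illustration only, not Lytro's:

```python
import numpy as np

# A conventional sensor records one intensity per pixel: a 2D array.
# A light-field sensor also records direction, so the measurement is,
# conceptually, a 4D array indexed by ray direction (u, v) and sensor
# position (s, t). These shapes are arbitrary.
U, V, S, T = 9, 9, 480, 640
light_field = np.zeros((U, V, S, T))

# A conventional photograph is what remains after the directional
# axes are integrated away -- the vector information is discarded.
conventional_image = light_field.sum(axis=(0, 1))
```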

How do I interact with this picture? 

Normally this would be a weird question to ask, but with the Lytro the viewer can take part in the imaging process in three ways. Try it out on the samples above:

  • Point to focus: collecting the light field from a scene is a technical thing. Creating images by deciding what to focus on, and what not to focus on, is an artistic thing. It is an interpretive thing. It's a narrative that the viewer has with the data. The goal of the light-field camera is not to impose a narrative, but to get entirely out of the way. (There's a sketch of how refocusing can work after this list.)
  • Extended focus: for artistic reasons, the viewer might want some parts of the image in focus and other parts out of focus. It's how our eyes work, with our peripheral vision. But in cases where you want the full depth of field, where everything is in focus, the software has an algorithm for that (to try it out, press 'E' on your keyboard).
  • Stereo viewing: this speaks to the multidimensional nature of the vector field data. In the real world, when we move our head, the foreground moves faster than the background. So too with light-field images: you can simulate parallax by moving your cursor, and better understand the spatial relationships between objects in the scene.
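For the curious, here is a rough sketch of how all three of these interactions could be implemented on the 4D array from the earlier sketch. This is a plain shift-and-add scheme, not Lytro's actual algorithm; the function name and the slope parameter are my own inventions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
U, V, S, T = 9, 9, 64, 64           # a toy light field
light_field = rng.random((U, V, S, T))

def refocus(lf, slope):
    """Shift-and-add refocusing of a 4D light field.

    lf    : array of shape (U, V, S, T) -- intensity indexed by ray
            direction (u, v) and sensor position (s, t).
    slope : refocus parameter; each directional view is shifted in
            proportion to its angular offset before averaging, which
            brings one depth plane into focus ('point to focus').
    """
    U, V, S, T = lf.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    image = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(slope * (u - cu)))
            dv = int(round(slope * (v - cv)))
            image += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return image / (U * V)

# 'Extended focus' can be built on top of this: compute refocused
# images over a range of slopes, then keep, at each pixel, the value
# from whichever slope gives the highest local contrast.
focal_stack = [refocus(light_field, s) for s in np.linspace(-2, 2, 9)]

# Parallax needs almost no computation: each sub-aperture image
# lf[u, v] is the scene from a slightly different viewpoint, so
# sweeping u or v moves the foreground faster than the background.
center_view = light_field[U // 2, V // 2]
```

The point of the sketch is the division of labour: the device simply records the 4D array, and everything the viewer does with it afterwards happens in software.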

These capabilities aren't just components of the device; they are technological paradigms embodied by the device. That, to me, is what is so incredibly beautiful about this technology. It's the best example of what technology should be: a material thing that improves the work of the mind.

A call to the seismic industry

The seismic wavefield is what we should be giving to the interpreter. This probably means engineering a seismic system in which less of the work is done by the processor, and more control is handed to the interpreter through software that does the heavy lifting. Interpreters need direct, immediate feedback from the medium they are interpreting. How does seismic have to change to allow that narrative?