
Light Field Technology – the future of photography?

A graphic representation of light field photography: light rays are intercepted by an array of microlenses before hitting the digital sensor. The latter measures the intensity of the light, while the microlenses detect the direction of the light. These two sets of data can be processed by computational photography algorithms with amazing results. Image credit: PictureCorrect

I got intrigued by light field photography some 10 years ago. The concept is quite old, going back to the 1930s, although the first camera exploiting the idea was a prototype built in 1992. The first commercial camera that I am aware of goes back to 2011 (see clip), when Lytro presented its strange device. That was what caught my attention.

The idea is easy to grasp. A normal camera has a sensor (in the past the chemical grains of a film, now digital pixels) that measures the intensity of the light at a specific point. The lens works by concentrating all the ray beams reflected by an object onto a very specific point on the sensor. In general, the more points on the sensor, the better the resolution (detail) we get. The resulting image is a flat representation of a 3D space in which only one plane of that space is in complete focus; the further we move from that plane, the fewer details can be made out (defocus). In light field photography the camera, in addition to capturing the light intensity, also captures the direction of the incoming light rays. This cannot be done at the point where the sensor detects the intensity, since there all the rays are merged into one. Additional elements are needed (the microlens array of the figure above, placed at a small distance from the intensity-measuring sensor) to capture the light direction.
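To make the geometry concrete, here is a minimal sketch, in Python, of the standard two-plane light field representation: each microlens encodes a spatial position, while the position of a pixel under that microlens encodes the ray's direction. The function name, block size and array shapes are illustrative assumptions, not Lytro's actual data layout.

```python
import numpy as np

def raw_to_lightfield(raw: np.ndarray, block: int) -> np.ndarray:
    """Rearrange a raw plenoptic sensor image into a 4D light field L[u, v, s, t].

    raw   -- 2D intensity image, shape (S*block, T*block)
    block -- number of pixels under each (square) microlens
    (s, t): microlens index = spatial position of the ray
    (u, v): pixel index under the microlens = direction of the ray
    """
    S = raw.shape[0] // block
    T = raw.shape[1] // block
    # Split the sensor into one block of pixels per microlens ...
    lf = raw.reshape(S, block, T, block)
    # ... then reorder the axes so direction (u, v) comes first.
    return lf.transpose(1, 3, 0, 2)  # shape (block, block, S, T)

# Example: a simulated 8x8-pixel block under each of 100x150 microlenses.
raw = np.random.rand(100 * 8, 150 * 8)
L = raw_to_lightfield(raw, block=8)
print(L.shape)  # (8, 8, 100, 150): u, v, s, t
```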

Lytro was able to create a portable camera that, by using light field photography and computational photography, made it possible to take one photo and then set the focal plane afterward. In other words, when you took a photo with the Lytro everything was “in focus” and you could decide afterward what focal plane you wanted! That was an amazing result for photographers, who had spent most of their time practising the art of focusing. Now it didn’t matter anymore.
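The refocusing trick itself is conceptually simple: shift each directional view of the 4D light field in proportion to its angular offset and sum them all. Below is a hedged sketch of that shift-and-add idea from the light field literature, reusing the L[u, v, s, t] layout from the sketch above; the integer-pixel shifts and the alpha parameterization are simplifying assumptions, not Lytro's production algorithm.

```python
def refocus(L: np.ndarray, alpha: float) -> np.ndarray:
    """Synthetic refocusing by shift-and-add over the 4D light field.

    alpha selects the virtual focal plane (alpha = 1 reproduces the
    plane the camera was focused on at capture time).
    """
    U, V, S, T = L.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each directional sub-image in proportion to its
            # angular offset from the aperture centre, then average.
            du = int(round((1 - 1 / alpha) * (u - U / 2)))
            dv = int(round((1 - 1 / alpha) * (v - V / 2)))
            out += np.roll(L[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Rendering the same capture at two different focal planes:
near = refocus(L, alpha=0.8)
far = refocus(L, alpha=1.2)
```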

However, the Lytro camera did not deliver the kind of quality (resolution, sensitivity, contrast…) photographers have come to expect, and the company went out of business in 2018. But, and it is a big but, most of its key researchers ended up at Google, because their expertise was very important for the blooming field of computational photography.

By moving a smartphone camera on the three axes, left-right, up-down, forward-backward, it is possible to capture the same scene from different viewpoints and recreate a 3D representation using computational photography. Image credit: Apple

Now Apple has applied for a patent on a device (read: smartphone!) that can use computational photography to recreate light field photography in software.

Now, if you think this is turning the problem upside down, that is exactly what it is. And it is ingenious!

Rather than using additional optics to capture the direction of the ray beams, Apple is proposing to apply computational photography to a stream of images captured as the “photographer” moves the camera (the smartphone). The movements can be of any type, as long as they span a 3D space. The computational algorithm uses the positioning data (and accelerometer data) provided by the smartphone’s sensors to tag each image with its precise point of capture. That provides the data required to create a 3D representation of the space in front of the camera. Using this representation it is possible to take any kind of snapshot you may want: changing the focal plane as desired, looking at the scene from above, and even looking behind an object (provided the movements made during the capture were wide enough for some of the images to record what lies behind it).
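As a rough illustration of the capture side described above, here is a sketch that pairs each frame of the stream with the pose reported by the device's motion sensors. PoseTaggedFrame and capture_stream are hypothetical names invented for this sketch, not Apple APIs, and the actual pipeline in the patent is certainly more sophisticated.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PoseTaggedFrame:
    image: np.ndarray        # H x W (or H x W x 3) frame from the camera stream
    position: np.ndarray     # estimated 3D camera position at capture time
    orientation: np.ndarray  # camera orientation, e.g. a 3x3 rotation matrix

def capture_stream(frames, poses):
    """Tag each frame with the pose derived from positioning/accelerometer data.

    frames -- iterable of images from the camera stream
    poses  -- iterable of (position, orientation) pairs from the motion sensors
    """
    return [PoseTaggedFrame(f, p, R) for f, (p, R) in zip(frames, poses)]

# With the tagged frames in hand, a multi-view reconstruction step (not
# shown here) can triangulate scene points seen from several viewpoints
# and build the 3D representation from which any focal plane, or even a
# view from above, can be rendered.
```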

Of course, this is not just about the possibility of selecting a specific image rendering; rather, it is about capturing a 3D space. Once you have it, you can overlay augmented reality streams as well as create real models for virtual reality spaces. It is not surprising that Apple is pursuing this evolution. It is in sync with their recent inclusion of LiDAR in the new iPad Pro and their strong interest in AR, very possibly the next “Big Thing”!

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.