
I got intrigued by light field photography some 10 years ago. The concept is quite old, going back to the 1930s, although the first camera exploiting the idea was a prototype built in 1992. The first commercial camera that I am aware of goes back to 2011 (see clip), when Lytro presented its strange device. That is what caught my attention.
The idea is easy to grasp. A normal camera has a sensor (in the past it was the chemical grain of a film, today it is a digital sensor made of pixels) that measures the intensity of the light at a specific point. The lens works by concentrating all the ray beams reflected by a point on an object onto a specific point on the sensor. In general, the more points on the sensor, the better the resolution (detail) we get. The resulting image is a flat representation of a 3D space in which only one plane of that space is in complete focus; the farther we move from that plane, the less detail can be made out (defocus). In light field photography the camera captures, in addition to the light intensity, the direction of the incoming light rays. This cannot be done at the sensor that detects their intensity, since at that point all the rays are merged into one. Additional sensors are needed (placed at some distance from the intensity-measuring sensors) to capture the light direction.
Lytro was able to create a portable camera that, by combining light field photography and computational photography, made it possible to take one photo and set the focal plane afterwards. In other words, when you took a photo with a Lytro everything was “in focus” and you could decide afterwards which focal plane you wanted! That was an amazing result for photographers, who had spent most of their time practicing the art of focusing. Now it didn’t matter anymore.
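To make the “refocus after the shot” idea concrete, here is a minimal sketch of the classic shift-and-add refocusing technique applied to light field data. It is not Lytro’s actual pipeline; the 4D array layout, the alpha parameter and the nearest-pixel shifts are simplifying assumptions for illustration.

```python
import numpy as np

def refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
    """Synthetically refocus a 4D light field L[u, v, s, t].

    (u, v) index the viewpoint (direction of the incoming rays),
    (s, t) index the pixel position in each sub-aperture view.
    alpha is the relative depth of the new focal plane
    (1.0 = the plane the lens was physically focused on).
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each viewpoint proportionally to its offset from the
            # optical axis; summing the shifted views brings the chosen
            # plane into focus while blurring everything else.
            du = int(round((u - U / 2) * (1.0 - 1.0 / alpha)))
            dv = int(round((v - V / 2) * (1.0 - 1.0 / alpha)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# One capture, two different focal planes chosen afterwards.
lf = np.random.rand(5, 5, 64, 64)   # toy light field: 5x5 views of 64x64 pixels
near = refocus(lf, alpha=0.8)
far = refocus(lf, alpha=1.3)
```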
However, the Lytro camera did not deliver the kind of quality (resolution, sensitivity, contrast…) photographers have come to expect, and the company went out of business in 2018. But, and it is a big but, most of its key researchers ended up at Google, because their expertise was very valuable for the blooming field of computational photography.

Now Apple has applied for a patent on a device (read: a smartphone!) that can use computational photography to recreate light field photography in software.
If you think this is turning the problem upside down, that is exactly what it is. And it is ingenious!
Rather than using additional sensors to capture the direction of the ray beams, Apple is proposing to apply computational photography to a stream of images captured as the “photographer” moves the camera (the smartphone). The movements can be of any type, as long as they occur in 3D space. The computational algorithm uses the positioning data (and accelerometer data) provided by the smartphone’s sensors to tag each image with its precise point of capture. That provides the data required to create a 3D representation of the space in front of the camera. Using this representation it is possible to take any kind of snapshot you may want: changing the focal plane as desired, looking at the scene from above, and even looking behind an object (provided the movements made during the capture were sufficiently spaced to let some of the images capture what is behind that object).
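As a rough illustration of that first step, here is a minimal sketch (assuming nothing about Apple’s actual patent or APIs) of how a stream of frames could be tagged with an estimated capture position obtained by double-integrating accelerometer samples. The data structures and the dead-reckoning approach are hypothetical; a real pipeline would fuse gyroscope and visual tracking to keep drift under control.

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class ImuSample:
    t: float              # timestamp in seconds
    accel: np.ndarray     # linear acceleration in m/s^2, gravity removed

@dataclass
class PosedFrame:
    t: float              # frame timestamp
    image: np.ndarray     # H x W x 3 pixels
    position: np.ndarray  # estimated camera position in metres

def tag_frames(frames: List[Tuple[float, np.ndarray]],
               imu: List[ImuSample]) -> List[PosedFrame]:
    """Attach an estimated capture position to every frame of the stream."""
    velocity = np.zeros(3)
    position = np.zeros(3)
    trajectory = []                              # (timestamp, position) pairs
    for prev, cur in zip(imu, imu[1:]):
        dt = cur.t - prev.t
        velocity = velocity + prev.accel * dt    # acceleration -> velocity
        position = position + velocity * dt      # velocity -> position
        trajectory.append((cur.t, position.copy()))

    tagged = []
    for t, image in frames:
        # Use the position estimate closest in time to the frame.
        nearest = min(trajectory, key=lambda p: abs(p[0] - t))
        tagged.append(PosedFrame(t=t, image=image, position=nearest[1]))
    return tagged
```

From the tagged frames, standard multi-view geometry (structure-from-motion style triangulation) can then recover the 3D representation from which any focal plane, or viewpoint, is rendered.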
Of course, this is not just about the possibility of selecting a specific image rendering; rather, it is about capturing a 3D space. Once you have that, you can overlay augmented reality streams as well as create real models for virtual reality spaces. It is not surprising that Apple is pursuing this evolution. It is in sync with the recent inclusion of LiDAR in the new iPad Pro and with Apple’s strong interest in AR, very possibly the next “Big Thing”!