I wrote a post some time ago on the shift towards computational photography. Now I have stumbled onto an article pointing out what computational photography might bring in the coming decade (and we are just a year away from it!).
Computational photography means using the data collected by the camera's sensor to build an image, a photo. This has been done since the advent of digital photography through a variety of post-processing applications. Photoshop was the iconic one: an extension of the original graphics program that grew over the years to tackle digital photography, reaching such a level of sophistication that the photography part was eventually spun off into a new application, Lightroom, specifically targeting photographers. In a way, however, applications like Lightroom were the translation of darkroom processes into the digital space (tweaking the exposure, correcting the white balance, masking…).
New applications, like Luminar and Photolemur, take a different approach. They use artificial intelligence to detect (guess?) what the photo might be about and then work out the best strategy to render the data. Could that be a sky? Well, let's make it a nice deep blue. Are those trees? Let's add a bit of contrast to the leaves so that they stand out… And of course these applications still give you, the photographer, tools to adjust the amount of blue or contrast to your taste. The big difference is that these applications have some knowledge about the photo; Photoshop, Lightroom and Aperture do not (although they provide you with tools to decide what to do with the sky, since you can tell them where the sky is).
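To make the idea concrete, here is a deliberately naive sketch (not how Luminar or Photolemur actually work, and the heuristic is my own invention): guess which pixels are "sky" from a crude blue-dominance rule, then deepen only those pixels while leaving the rest of the image alone.

```python
import numpy as np

def enhance_sky(image, boost=1.3):
    """Toy content-aware edit: find blue-dominant pixels (a crude
    stand-in for real AI scene detection) and deepen only those.
    image: H x W x 3 float array with values in [0, 1]."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    sky = (b > r) & (b > g) & (b > 0.4)           # naive "is this sky?" guess
    out = image.copy()
    out[..., 2][sky] = np.clip(b[sky] * boost, 0.0, 1.0)  # deepen the blue
    out[..., 0][sky] *= 0.95                               # mute red slightly
    return out
```

A real application would replace that one-line mask with a trained model, but the principle is the same: first decide what the pixels represent, then pick the adjustment.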
Computational photography is more than smart, intelligent post-processing. It is sneaking into the camera itself, before you take a photo. As an example, an image-recognition feature can tell the camera there is a person in the frame, and that the person is now smiling: time to take the picture! I guess the next step will be to check whether the eyes are open (a good percentage of photos are taken just as the subject blinks, and that is not good for the photo!).
Now, as pointed out in the article, computational photography may extend to the lens itself. A digital model of the lens can be used by the camera (it is already being used by some post-processing applications) to correct optical distortion, vignetting, and loss of contrast at the edges.
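Vignetting correction is the simplest case of such a lens model, and a short sketch shows the principle (this assumes an idealised radial falloff I chose for illustration; real lens profiles, like the ones shipped with Lightroom, are measured per lens): if the profile tells you how brightness drops towards the corners, you can simply divide that falloff back out.

```python
import numpy as np

def correct_vignetting(image, falloff=0.3):
    """Profile-based vignetting correction (illustrative only):
    assume brightness drops towards the corners following a simple
    radial model, 1 - falloff * r^2, and divide it back out.
    image: H x W float array (single channel), values in [0, 1]."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)  # 0 at centre, 1 at corners
    gain = 1.0 / (1.0 - falloff * r**2)                # inverse of the modelled falloff
    return np.clip(image * gain, 0.0, 1.0)
```

The same divide-out-the-model idea extends to distortion and edge softness, just with more elaborate profiles.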
In the coming years, machine learning can be used to connect post-processing back to the camera, so that over time the camera learns how I render (correct) my photos and gradually performs the same corrections at shooting time, giving me the photo as I like it straight away. This connection of the digital model of the lens with the monitoring, and learning, of my post-processing is basically creating a digital twin.
The Digital Twin of the lens, along with the Digital Twin of the camera (and, why not, my own Digital Twin), can work out, for each scene, the best way to capture the data and then render them into a pleasing image. As an example, the scene of my grandkid blowing out the candles on his birthday cake would require a different setting from a photo inside a cathedral lit by candles… In the latter you want to convey the darkness of the ambience; in the former you want the grandkid's face well exposed…
An interesting twist to this approach is that I will be able to lend my (photographic) Digital Twin's know-how to a friend, so that he can take photos as I would have taken them, or I could buy the services of a great photographer's Digital Twin to take an amazing shot of a waterfall…
Computational photography has just started, and we are only beginning to see, and imagine, its implications for us as photographers and for the business. Think about it: in ten years' time any lens may become optically perfect, through software. The market for expensive lenses may be disrupted by software companies. Likewise, a large sensor (like the one in a reflex camera) may no longer be needed, since computational photography can remove, at shooting time, the sensor noise (far greater in the small sensors used by smartphones). The ISO number, the darling of professional photographers, may become irrelevant, and that will disrupt companies whose business today is built on digital cameras.
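One of the tricks actually used by smartphones to beat sensor noise is worth sketching: instead of one exposure, take several quick frames and average them. Random sensor noise averages towards zero while the scene stays put, so noise drops roughly with the square root of the number of frames (a minimal sketch, ignoring the frame alignment a real phone must also do).

```python
import numpy as np

def stack_frames(frames):
    """Multi-frame noise reduction by averaging: random per-pixel
    noise cancels out across exposures while the signal does not.
    frames: list of H x W float arrays of the same scene."""
    return np.mean(np.stack(frames), axis=0)
```

With 16 frames, the noise level falls by roughly a factor of four, which is how a tiny smartphone sensor can start to rival a much larger one in low light.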
Get ready for more and more intelligent snapshots in the coming years…