Post-processing lets you tweak your digital photos to improve the image, whether that means making it as close as possible to the scene you actually saw or enhancing certain aspects to steer the viewer’s perception where you want. It is called “post-processing” because you do it after you have taken the photo.
Now researchers at MIT have decided you could just as well do your post-processing before taking the shot, saving you time.
They developed a machine learning system that tweaks the image captured by the digital camera sensor in real time, presenting the result on the screen of the smartphone you are using to take the picture. This lets you preview the end result, and of course you can adjust the processing to get the look you like.
Reality is usually a complex mix of shadow and light, with different objects reflecting light in different ways (not just in terms of wavelength, i.e. colour, but also in terms of quantity), so each object would ideally require a different exposure. Our brain handles this processing naturally, but a digital camera cannot: it simply tries to average the exposure over the whole image. With post-processing you can finely tweak the exposure of individual objects and fine-tune the white balance.

The colour of an object depends on the temperature of the light illuminating it, and this temperature differs at different times of day, in different weather conditions and, of course, under artificial light. Again, our brain is pretty good at this white balancing, and it does so by applying “semantics”: at sunset the light is warmer, reddish, and our brain does not compensate for it because, well, it is sunset time. A digital camera, on the other hand, will detect the reddish light and compensate, generating an image that lacks the “right” colours we expect in a sunset scene.
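To give a flavour of what blind, non-semantic white balancing looks like, here is a minimal sketch of the classic “gray world” heuristic, which assumes the average colour of a scene should be neutral gray. This is a deliberately simple stand-in for illustration only, not the learned approach the MIT team uses:

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world white balance: scale each colour channel so its
    mean matches the overall mean brightness. A simple heuristic,
    shown here for illustration; not the MIT learned approach."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)  # mean of R, G, B
    gray = channel_means.mean()                      # target neutral level
    gains = gray / channel_means                     # per-channel correction
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# A warm (reddish) test image, e.g. a sunset-lit scene
warm = np.zeros((2, 2, 3), dtype=np.uint8)
warm[..., 0] = 200  # red boosted
warm[..., 1] = 120  # green
warm[..., 2] = 80   # blue
balanced = gray_world_white_balance(warm)  # all channels pulled to neutral
```

Applied to a real sunset shot, exactly this kind of blind correction would drain the warm cast the scene is supposed to have, which is why a semantic, learned approach matters.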
These compensations and tweaks require “intelligence” and an “understanding” of what we, as end users, expect from a picture. The team of researchers trained their post-processing algorithms to learn what we expect to see by having them learn from thousands of raw and retouched photos.
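As a toy illustration of learning a retouching style from before/after pairs, the sketch below fits a single global colour matrix to raw/retouched pixel pairs by least squares. The actual MIT system learns a far richer, locally varying transform with a neural network, so this is only a stand-in to show the idea of supervised learning from example photos:

```python
import numpy as np

# Synthetic "training data": raw pixels and their retouched targets.
# In the real system these would come from thousands of photo pairs.
rng = np.random.default_rng(0)
raw = rng.uniform(0, 1, size=(1000, 3))          # raw RGB pixels
style = np.array([[1.2, 0.0, 0.0],               # hypothetical retoucher's
                  [0.0, 1.0, 0.0],               # colour transform, used
                  [0.0, 0.1, 0.9]])              # only to generate targets
retouched = raw @ style.T                        # "retouched" pixels

# Least-squares fit: find the matrix that best maps raw -> retouched
learned, *_ = np.linalg.lstsq(raw, retouched, rcond=None)
prediction = raw @ learned                       # apply the learned style
```

The point is just the workflow: given enough input/output examples, the transform never has to be specified by hand; it is recovered from the data.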
The overall process is quite time-consuming and computation-intensive, as anyone involved in post-processing knows well. The challenge for the MIT team was to find a solution that could run in real time using the computing power of a smartphone. And they succeeded (see clip).
What I find interesting is that taking a simple photo can involve such complex processing, and that we are using artificial intelligence and machine learning to train the digital camera on how we perceive a picture and on what we consider a “good” picture.
We have really come a long way since the first photos taken on film. Now it is all about bits and how to manipulate them. A completely different game!