
Changing the past … creating the one that fits you now

The just-announced Cinematic mode on the new iPhones lets you change the focus point and exposure after you take the shot! Image credit: Apple

Lady Macbeth said "What's done is done" and, immediately after, to clarify the point, "What's done cannot be undone." That was in the early 17th century; technology has since progressed to the point that, at least in some specific situations, you can change the past!

This is what Apple has done by creating the Cinematic mode on the new iPhone 13.

Basically, you can shoot a clip focusing on a specific point, like a person, with a certain exposure. Afterwards, you watch the clip and wonder whether it wouldn't have been better to focus on a different person, with a different exposure, to change the "mood" of the clip. Well, no problem. You just touch a point on the screen and it becomes the new focus; you move a slider to change the exposure until you get what you want.

How is this possible? The magic is all in computational photography. The iPhone 13 has several cameras that capture the scene in parallel, even though you see the clip as if it had been shot only by the camera you selected. The other cameras, having different focal lengths, provide information that allows the software inside the iPhone to compute the depth of the scene and, using the images taken by those cameras, render it as if it had been shot with the focal point indicated by your finger (likewise adjusting the exposure).

This magic is unbelievably complex: it requires placing the spot you indicate in focus and defocusing everything else, depending on how far it is from the new focal plane. Of course, this has to be repeated in real time for every frame of the clip (30 to 60 per second). Fifteen years ago it would have taken a supercomputer, and even that probably would not have been enough, since you also need AI software (to understand the scene) that did not exist at the time.
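Apple has not published its algorithm, but the basic idea described above can be sketched very roughly: given a per-pixel depth map (which the multiple cameras make possible), blur each pixel in proportion to its distance from the chosen focal plane. The toy function below, with its names and parameters invented for illustration, assumes the depth map already exists and uses a simple box blur in place of a realistic lens model.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def refocus(image, depth, focus_depth, max_blur=3):
    """Toy synthetic refocus: render `image` (2-D grayscale array) as if
    focused at `focus_depth`, given a per-pixel `depth` map of the same
    shape. Pixels far from the focal plane get a larger blur radius."""
    # Map each pixel's distance from the focal plane to a blur level.
    rel = np.abs(depth.astype(float) - focus_depth)
    if rel.max() > 0:
        levels = np.clip((rel / rel.max() * max_blur).astype(int), 0, max_blur)
    else:
        levels = np.zeros(depth.shape, dtype=int)

    # Pre-blur the whole frame once per level, then pick, per pixel,
    # the copy matching that pixel's blur level (level 0 = sharp).
    out = np.empty(image.shape, dtype=float)
    for level in range(max_blur + 1):
        mask = levels == level
        if mask.any():
            blurred = uniform_filter(image.astype(float), size=2 * level + 1)
            out[mask] = blurred[mask]
    return out
```

Running something like this 30 to 60 times per second on full-resolution video, with a far more sophisticated blur model and an AI-derived depth map, gives a sense of why the feat was out of reach not long ago.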

The Lytro Cinema is massive. The sensor is housed in the black box behind the orange strut, which appears to be at least a foot wide. It is thermally cooled and comes with its own traveling server to deal with the 300 GB/s data rates. Processing takes place in the cloud, where Google spools up thousands of CPUs for each operation while you work with real-time proxies. Image credit: Lytro, 2016

The iPhone is not the first device capable of this kind of cinematic shooting.

In 2016, at NAB, Lytro presented its Cinema camera, which was capable of doing exactly that, with a caveat: the required data crunching was too big to be managed inside the (very big) camera, so the digital movie was sent to the Google cloud to take advantage of thousands of CPUs.

If you were eager to get that camera, Lytro was not selling it, only renting it, at $125,000 a month (roughly the time a movie company would need to process a digital film).

Now you get the same type of performance at an infinitesimal fraction of the price, in a device, the iPhone, that fits in your pocket. An amazing demonstration of the evolution of technology, in both performance and price.

According to the very first reactions this Cinematic feature stands to revolutionise filming technology.

I am just amazed, and I should say I am probably much more amazed than most people. When I spoke in awe to some of my friends, I discovered that they considered this new feature nice but nothing to be surprised about.

The fact is, the more you work in technology, the more amazed you are at its progress.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node, and he was then head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the Industry Advisory Board within the Future Directions Committee and co-chairs the Digital Reality Initiative. He teaches a Master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.