How many times have you wanted to take a photo of a building, remembered the golden rule of keeping vertical lines parallel to your lens, and then realised that the building would not fit in the frame? Sometimes, with a smartphone, I would take advantage of the “panorama” mode, tilting the phone upwards to capture the entire building, but that does not work either: as soon as you start the vertical panning, your lines get slanted.
In the end what you do is accept the slanted lines, capture the whole building, and then go back to Photoshop to straighten them. You get your lines straight, fine, but you also distort the image (depending on how you approach the correction you can end up with a taller, or more often a squatter, building), and you have to work a bit more, with some creativity, to restore the proportions. Another side effect of the deformation is that parts of the canvas remain empty. Sometimes I turn again to Photoshop tricks, invoking the content-aware fill that “creates” ghost pixels to patch the void. Sometimes these make sense (even though they are fake); other times I have to fix them manually to get an acceptable photo.
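Under the hood, that straightening step is a projective transform (a homography): the converging vertical edges of the building are mapped back to parallel lines, which is also why the corrected image no longer fills its rectangular canvas. Here is a minimal sketch of the idea in Python with NumPy; the corner coordinates are made up purely for illustration, not taken from any real photo:

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 projective transform mapping src -> dst
    (four point pairs) via the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null space of A holds the 9 homography coefficients
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    return vt[-1].reshape(3, 3)

def apply_h(H, pts):
    """Apply a homography to Nx2 points (with homogeneous divide)."""
    pts = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# A building shot while tilting the phone up: its vertical edges converge
slanted = [(100, 800), (900, 800), (250, 100), (750, 100)]
# Where we want those corners to land: a proper rectangle
straight = [(100, 800), (900, 800), (100, 100), (900, 100)]

H = homography(slanted, straight)
corrected = apply_h(H, np.array(slanted, dtype=float))
print(np.round(corrected))  # corners now form a rectangle: verticals are parallel
```

Warping every pixel of the photo with this transform stretches the top of the frame outwards, and the regions it pulls in from outside the original frame are exactly the empty areas that content-aware fill has to invent.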
Obviously the resulting photo is no longer a… photo, just a loose representation of reality, with some fake parts and some distorted ones.
Here comes the news I stumbled upon while reading the latest on DPReview. The new iPhones (and probably also older models with two back cameras, like the iPhone X, although the article does not specify) can use their cameras in parallel as you take a shot. You think you are capturing the image you see on the iPhone screen, but the iPhone actually takes one (or two) more shots using the other available cameras. By doing this it captures a bigger scene, and its software can use that bigger scene to modify the photo you took. As shown in the figure, when you move to the iPhone’s editing app you will see a bigger canvas than the one you used to take the picture. The app recognises the presence of a building, and the fact that it is slanted one way or another, and will straighten the image using information from the other photos.
This is the magic of computational photography. It started with panoramas and moved up a notch with HDR (blending several photos together to preserve both highlights and dark areas); then it created long exposures by merging specific parts of a photo (like a waterfall, or the lights of cars at night). More recently it has provided a way to de-focus part of the photo so that portraits stand out (creating the bokeh). Now it is time to provide correct images of architecture. Actually, the latest feature of computational photography is so smart that it can recognise a person in your shot who has been partly cut off and, by taking information from the other camera frames, restore the full person.
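The HDR blending mentioned above boils down to combining several exposures of the same scene, weighting each pixel by how well exposed it is. Here is a toy version of that exposure-fusion idea in Python with NumPy (a drastically simplified take on Mertens-style fusion, not Apple's actual pipeline; the tiny frames and the sigma value are illustrative assumptions):

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Blend a stack of differently exposed grayscale images (values in 0..1).
    Each pixel is weighted by its 'well-exposedness': closeness to mid-grey."""
    stack = np.stack(images).astype(float)
    # Gaussian weight peaking at 0.5: crushed shadows and blown
    # highlights get a low weight, well-exposed pixels a high one
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)

# Two toy 2x2 frames of the same scene: one underexposed, one overexposed
dark = np.array([[0.05, 0.10], [0.40, 0.05]])
bright = np.array([[0.55, 0.60], [0.95, 0.55]])
result = fuse_exposures([dark, bright])
# At each pixel the better-exposed frame dominates the blend
```

Real pipelines do this per frequency band on aligned full-resolution frames, with extra weights for contrast and saturation, but the principle is the same: every output pixel is borrowed from whichever shot rendered it best.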
For other examples of computational photography, in this case applied to photo editing, look at the new features of Photoshop and Premiere 2020.
What’s next? I would expect the capability to “open the eyes” of people in a photo (when you take a photo of a group of people you can almost be certain that someone was blinking as you took the shot). By taking a few photos in sequence, the software will be able to replace the closed eyelids with open ones…
And of course there will be more. There are already a few apps that record a whole day by taking thousands of photos and identifying the ones most significant to tell the story of that day. The adoption of artificial intelligence in image recognition is already a reality, and it will grow much more in the coming decade. Your app will get to know you (through machine learning) and will be able to divine what matters to you, and please you!