
Looking around the corner: non line-of-sight imaging

Schematic representation of non line-of-sight imaging. A laser beam bounces off a wall, scattering light in all directions. The object “behind” the corner reflects some of that scattered light back to the point of origin, where an image sensor captures it. Computational photography does the rest. Image credit: Stanford Computational Imaging Lab

Several years ago I was amazed to see at MIT a photo camera (well, actually, it was a set of equipment) able to take pictures of what was around a corner, using scattered light reflected by objects hidden from my line of sight. It looked like pure magic. It didn’t actually result in a good image, but it was enough to give a general idea of what lay around the corner.

Now I have seen that researchers at Northwestern University have invented a system that can take high-resolution photos “’round the corner” using laser beams, holography and computational photography.

The technique is not called, as I just did, “’round the corner photography” but “non line-of-sight imaging”. You can take a look at what this means in the clip below. In particular, the technology used by the Northwestern University researchers is called Synthetic Wavelength Holography.

Basically, you harvest photons that are reflected by the surrounding environment and try, through computational photography, to make sense of them, using to build the picture only those photons that are reflected by the object that is not directly visible.
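To make the synthetic-wavelength idea a bit more concrete, here is a minimal numerical sketch (my own toy illustration with made-up numbers, not the Northwestern team’s actual implementation). Two lasers at nearly identical wavelengths both pick up the same random phase scrambling from the rough wall; because the scrambling is nearly identical, it cancels in the phase difference, which behaves as if it were measured at a much longer “synthetic” wavelength:

```python
import numpy as np

# Toy sketch of the synthetic-wavelength idea (illustrative values only).
lambda1 = 854.0e-9   # first laser wavelength (m) -- hypothetical choice
lambda2 = 854.1e-9   # second, slightly detuned wavelength (m)
lambda_synth = lambda1 * lambda2 / abs(lambda2 - lambda1)
print(f"synthetic wavelength: {lambda_synth * 1e3:.1f} mm")  # ~7.3 mm

rng = np.random.default_rng(0)
depth = 1.234e-3                                # hidden-object depth to recover (m)
scatter = rng.uniform(0, 2 * np.pi, size=1000)  # random phase added by the rough wall

# Measured phase at each wavelength: a round-trip path term plus the SAME
# scattering term (a fair approximation when the wavelengths are this close).
phi1 = (4 * np.pi * depth / lambda1 + scatter) % (2 * np.pi)
phi2 = (4 * np.pi * depth / lambda2 + scatter) % (2 * np.pi)

# The scattering term cancels in the difference, which lives at the much
# longer synthetic wavelength and is therefore usable despite the rough wall.
dphi = (phi1 - phi2) % (2 * np.pi)
recovered = dphi.mean() * lambda_synth / (4 * np.pi)
print(f"recovered depth: {recovered * 1e3:.3f} mm (true: {depth * 1e3:.3f} mm)")
```

The point of the sketch is the cancellation: the scrambling added by the wall is common to both wavelengths, so it drops out of the difference.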

This is not a mere curiosity; it has very practical applications. The system is based on scattered light, and there are several “media” that scatter light, not just surfaces. Fog is one example (and since our brain does not use computational photography to process images, in fog we cannot focus on anything!); skin is another. If you shine a beam of light onto your hand, most of it is reflected by your skin (and you see your hand!), but a portion enters the skin and is scattered back by the dermis and even (more and more faintly) by deeper tissues and organs.

A system like the one created by the Northwestern University researchers would be able to interpret this scattered light, creating an image of what the fog or the skin hides from our eyes (and brain): it is able to see through fog and through skin. This comes in handy in robotic vision, in self-driving cars and in healthcare.

Actually, the laser beams make it possible to visualise movement, like a beating heart: a doctor would be able to look at your heart without opening your chest (and you wouldn’t want her to open your chest, would you?!).
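Sticking with the toy model above (same made-up numbers, not the actual system), the heartbeat example boils down to tracking the synthetic-wavelength phase over time: a sub-millimetre periodic motion shows up directly as a periodic phase change:

```python
import numpy as np

# Follow-on toy sketch: frame-to-frame phase at the synthetic wavelength
# maps directly to sub-millimetre displacement, e.g. a beating heart.
lambda_synth = 7.3e-3                          # synthetic wavelength from the sketch above (m)
t = np.linspace(0.0, 2.0, 200)                 # two seconds of "frames"
motion = 0.4e-3 * np.sin(2 * np.pi * 1.2 * t)  # 0.4 mm oscillation at 1.2 Hz (~72 bpm, made up)

phase = 4 * np.pi * motion / lambda_synth      # round-trip phase at the synthetic wavelength
wrapped = np.angle(np.exp(1j * phase))         # what an interferometric measurement reports
displacement = np.unwrap(wrapped) * lambda_synth / (4 * np.pi)
print(f"peak displacement: {displacement.max() * 1e3:.2f} mm")  # ~0.40 mm
```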

It is a sort of X-ray vision, but on a completely different level (and it is not dangerous at all). Amazing what technology (and researchers’ ingenuity) can deliver.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of the IEEE, where he leads the Industry Advisory Board within the Future Directions Committee and co-chairs the Digital Reality Initiative. He teaches a Master’s course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.