
Splitting pixels for faster autofocus

Rendering of pixels on a camera sensor, each split in two parts so that the electronic circuit in the sensor can compare the incoming light beams and drive the camera lens to perfect focus. Image credit: Samsung

I still remember my first film camera (I was really proud of having a camera!). It had an optical focusing aid: as I turned the focus ring on the lens, a detail of the frame appeared split in two, and the two halves recomposed into a single image once focus was achieved. I should say it was pretty accurate; the only drawback was the time it took to (slowly) rotate the ring to the point where focus was reached.

Current digital cameras use a very similar approach to focus: phase detection. Samsung has just perfected phase detection by working at the camera sensor level. Each pixel is split into two parts; the light received by each part is processed by a circuit that looks at phase differences and drives the focus ring of the lens to a position where the incoming beams (in a specific area) overlap, meaning they are reflected from an object that is now perfectly in focus. This mechanism was originally developed by Canon.

What Samsung announced is a refined way of detecting phase by splitting some pixels diagonally and others vertically (Canon splits pixels only vertically). The problem with splitting pixels only vertically is that if you are shooting a surface that has only horizontal lines, the vertically split pixels are not able to detect any phase displacement (i.e. they get confused and cannot focus). If, on the other hand, you have both vertically and diagonally split pixels, you can rest assured that either one or the other type will detect a phase difference and hence be able to focus. (I had the same problem with my old camera; in those few cases where focus was tricky, I rotated the camera to get the focus and then rotated it back to take the photo.)
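The effect of the split direction can be sketched numerically. Below is a toy Python model (my own illustration, not Samsung's actual algorithm): the two half-pixel signals are cross-correlated to estimate the phase shift. A scene made only of horizontal lines is featureless along the horizontal direction that vertically split pixels compare, so the shift is undetectable there, while a diagonal sampling direction recovers it.

```python
import numpy as np

def phase_offset(left, right):
    """Estimate the shift (in samples) between the two half-pixel signals
    via cross-correlation; the lens would be driven until it reaches ~0."""
    l = left - left.mean()
    r = right - right.mean()
    if l.std() < 1e-9 or r.std() < 1e-9:
        return None  # featureless along this direction: phase undetectable
    corr = np.correlate(l, r, mode="full")
    return np.argmax(corr) - (len(left) - 1)

# A scene with only horizontal lines: intensity varies along y, constant along x.
scene = np.tile(np.sin(np.linspace(0, 6 * np.pi, 64))[:, None], (1, 64))

# Vertically split pixels compare signals sampled along x, where this
# scene is constant: the simulated 3-pixel defocus shift is invisible.
row = scene[10, :]
print(phase_offset(row, np.roll(row, 3)))    # None: cannot focus

# A diagonal sampling direction crosses the horizontal lines, so the
# same scene does vary along it and the shift is recovered.
diag = np.diagonal(scene)
print(phase_offset(diag, np.roll(diag, 3)))  # -3: shift detected
```

The sign of the recovered offset tells the autofocus which way to drive the lens; a real sensor repeats this on many small areas of the frame in parallel.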

You can get a clear explanation of how the new Samsung Dual Pixel Pro sensor's focus works by watching the clip.

Another interesting feature of this new chip is that the 50 Mpixels of the sensor can be clustered in groups of four (thus decreasing the overall resolution to 12.5 Mpixels) to harvest more light, something that comes in handy in situations, like night photography, where there is very little light and noise becomes an issue.

There is more! Since each pixel is actually split in two parts, the sensor also supports the transfer of data from each half, thus providing the equivalent of a 100 Mpixel sensor (but, of course, the signal to noise ratio gets worse, so it makes sense only when there is plenty of light). Interestingly, both the clustering of four pixels into one and the splitting of one into two are accomplished by the sensor circuit, so they do not affect the performance of the camera electronics. This is really important, since the sensor is designed for smartphone cameras, where the size of the sensor is constrained by the size (thickness) of the smartphone.
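The light-versus-resolution trade-off behind these two modes can be made concrete with a little shot-noise arithmetic. Here is a toy Python sketch (illustrative assumptions of mine, not Samsung's specifications): in dim light each full pixel collects around 10 photoelectrons with Poisson noise, and we compare binned, full, and half-pixel readout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy low-light sensor: each full pixel collects ~10 photoelectrons,
# with Poisson shot noise (assumed numbers, for illustration only).
h, w, signal = 100, 100, 10.0
frame = rng.poisson(signal, size=(h, w)).astype(float)

# Clustering 2x2 pixels into one: a quarter of the resolution, but four
# times the light per output pixel, so SNR improves by sqrt(4) = 2.
binned = frame.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Reading the two halves of every pixel separately: double the sample
# count (the "100 Mpixel" mode), but each half sees half the light, so
# per-sample SNR drops by sqrt(2) -- only worthwhile in bright light.
halves = rng.poisson(signal / 2, size=(h, w, 2)).astype(float)

snr = lambda x: x.mean() / x.std()
print(snr(binned) / snr(frame))   # ~2: binning gain
print(snr(frame) / snr(halves))   # ~1.41: half-pixel penalty
```

The square-root behaviour follows directly from Poisson statistics (noise grows as the square root of the collected light), which is why binning pays off at night and half-pixel readout pays off in daylight.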
Really impressive.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.