Xiaomi, one of the big Chinese smartphone makers, has announced its intention to deliver a 108 MPixel smartphone in 2021 (and a 64 MPixel one in 2020; both sensors are produced by Samsung). I looked around at the comments, and most of them point out the absolute nonsense of pushing the envelope to these levels. An Instagram photo is stored by their servers at 1000×1000 pixel resolution, i.e. 1 MPixel, and it is shown on your mobile device at an even lower resolution. Indeed, pushing the resolution up to 108 MPixels would seem nonsense.
On the other hand, from a marketing point of view you often hear that bigger is better. Hence, if that is what people feel, why not push a bigger MPixel product at them? In digital cameras, however, bigger is not necessarily better.
The problem is that your digital sensor (the one used to capture the photons that will be processed into an image) has a fixed size that basically depends on the lens. A bigger digital sensor requires a bigger lens (which in turn will be bigger in size and heavier). Moreover, the bigger the lens, the more complicated it is to deliver good quality across the whole lens surface, and this leads to bulkier, pricier lenses using corrective glass elements.
In the case of smartphones you are also constrained by the thickness of the phone: bigger sensor + bigger lens = thicker smartphone (there have been some attempts to bend the light via a prism and use the length of the phone instead, but they never really succeeded).
Making a long story short: the physical size of the digital sensor is fixed, and if you want more pixels squeezed into it, they have to be smaller. The problem is that as pixels get smaller, the number of photons reaching each of them decreases and the noise (both electronic and ambient) becomes significant, so that in the end your image quality decreases.
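The effect of shrinking pixels can be sketched with a bit of arithmetic. This is a hypothetical illustration, not real sensor data: it assumes a fixed photon budget for the whole sensor and uses the fact that photon arrival follows Poisson statistics, so shot noise equals the square root of the signal.

```python
# Hypothetical illustration: photon shot noise vs. pixel count.
# Assumes the same sensor area (and the same total light) is divided
# into more, smaller pixels, so each pixel collects fewer photons.
import math

def shot_noise_snr(photons_per_pixel):
    # Poisson statistics: noise = sqrt(signal), so SNR = sqrt(signal).
    return math.sqrt(photons_per_pixel)

# Illustrative photon budget for the whole sensor in one exposure.
total_photons = 1_200_000_000

for megapixels in (12, 48, 108):
    per_pixel = total_photons / (megapixels * 1_000_000)
    print(f"{megapixels:>3} MPixel sensor: {per_pixel:6.1f} photons/pixel, "
          f"SNR ~ {shot_noise_snr(per_pixel):.2f}")
```

With the same light budget, the 108 MPixel split leaves each pixel with roughly a ninth of the photons a 12 MPixel pixel would get, and the per-pixel SNR drops by a factor of three.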
This is why for several years the best professional cameras did not have as many pixels as amateur cameras: to ensure a high signal-to-noise ratio.
What is the advantage of having more pixels? Basically two: increasing the resolution (but this is counter-balanced by the increasing noise) and using a smaller part of the sensor to increase the (apparent) focal length (electronic zoom). The latter is possible with all of today's smartphones, but the problem with electronic zoom is that as you increase the apparent focal length you decrease the number of pixels being used (since you use only the central part of the digital sensor). Clearly, by having many more pixels you can zoom and still get an image with significant resolution.
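The arithmetic of electronic zoom is simple: a 2x zoom keeps half the width and half the height of the sensor, i.e. a quarter of the pixels. A minimal sketch (the numbers are illustrative):

```python
# Effective resolution left after a centre crop ("electronic zoom").
# Zooming by a factor z keeps 1/z of the width and 1/z of the height,
# so the pixel count drops by z squared.
def cropped_megapixels(sensor_megapixels, zoom_factor):
    return sensor_megapixels / (zoom_factor ** 2)

for mp in (12, 108):
    for zoom in (2, 3):
        print(f"{mp:>3} MPixel sensor at {zoom}x zoom -> "
              f"{cropped_megapixels(mp, zoom):5.1f} MPixel image")
```

A 12 MPixel sensor at 3x zoom leaves you with only about 1.3 MPixels, while a 108 MPixel sensor at the same zoom still delivers 12 MPixels.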
Now, all this blabbering was (almost) true when the digital sensor was used as the alternative to film. However, we are now rapidly shifting towards a new paradigm, that of computational photography. In computational photography you use the signals received by the sensor as raw data and process them to create an image.
As an example, one could use a 100 MPixel sensor to take 10 photos a few microseconds apart using adjacent pixels (look at the sensor as a matrix and think about using pixels 1, 11, 21, … for the first photo, 2, 12, 22, … for the second, and so on). The pixels are so close to one another that there is really no difference due to position (and software can easily compensate for it). Getting 10 photos of the same subject rather than one allows the computational software to get rid of noise, i.e. to increase the image quality. So, this is an example of using a 100 MPixel sensor that makes good sense and can lead to much higher quality photos.
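The noise-reduction step can be demonstrated with a toy simulation. This is a minimal sketch, assuming 10 noisy captures of the same static scene with zero-mean noise: averaging N frames reduces the noise standard deviation by roughly the square root of N.

```python
# Toy demonstration of multi-frame noise reduction ("stacking").
# A flat synthetic scene is corrupted with Gaussian noise 10 times;
# averaging the frames shrinks the noise by about sqrt(10).
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 128.0)   # idealised "true" image
noise_sigma = 10.0

frames = [scene + rng.normal(0.0, noise_sigma, scene.shape)
          for _ in range(10)]
single = frames[0]
stacked = np.mean(frames, axis=0)

print(f"noise in one frame:   {np.std(single - scene):.2f}")
print(f"noise after stacking: {np.std(stacked - scene):.2f}")
```

The residual noise after stacking comes out near 10/sqrt(10), about a third of the single-frame noise, which is exactly the quality gain the multi-capture trick buys.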
The point I am making is that we can no longer evaluate a digital sensor independently of the software that uses it. When you move to computational photography, the more data you have the better, and this is always true, all the rest being equal.
So welcome to 100 MPixel photography, which in the end may result in amazing quality for the 1 MPixel image you share on Instagram!