Researchers at the Stanford Computational Imaging Lab have developed a 4D camera to help autonomous vehicles and, more generally, robots assess their environment.
The "4D" refers to the fact that the system captures a 2D image, creates multiple focus planes (the third dimension) using light field technology, and provides precise information on the distance (the fourth dimension) of the various objects in a broad field of view, or FOV (see the clip). The system captures a 138° FOV, wider than the roughly 114° our two eyes cover, with a single camera mounted on a rotating arm. The rotating arm is what enables the capture of 3D images (software processes the parallax between viewpoints), while the light field technology creates several focus planes that can be processed to estimate the distance of the objects in the field.
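To give a feel for the parallax processing mentioned above, here is a minimal sketch of the textbook relation between image shift and distance when a camera moves between two viewpoints. This is an illustration of the general principle only, not the Stanford system's actual pipeline; the function name and the parameter values (focal length in pixels, baseline in metres) are hypothetical.

```python
def depth_from_parallax(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Estimate the distance (metres) of a point whose image shifts by
    `disparity_px` pixels between two viewpoints separated by `baseline_m`.

    Uses the standard stereo relation Z = f * B / d: the smaller the shift,
    the farther the point. Illustrative sketch, not the lab's algorithm.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (the point must shift between views)")
    return focal_px * baseline_m / disparity_px

# Example with assumed values: a 1000 px focal length and a 0.1 m arm
# displacement; a point that shifts by 20 px lies at 1000 * 0.1 / 20 = 5 m.
print(depth_from_parallax(20, 1000.0, 0.1))  # 5.0
```

Nearby objects shift a lot between viewpoints and distant ones barely move, which is why a larger baseline (here, the arm's sweep) helps resolve depth for far-away objects.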
The challenge they had to overcome was to use a wide-angle lens (to cover the 138°) while still working out the distance of objects across the FOV: the broader the FOV, the more distances are compressed and the harder it becomes to separate the various objects.
With this system it is possible to provide a robot (or a self-driving car) with both a large FOV and depth information. This dramatically reduces the processing the robot needs to "understand" the image in terms of objects and their relative positions.
In the industrialised version they aim to replace the moving arm with multiple sensors using micro-lens arrays. That will further simplify the system, making it cheaper and more compact, and hence more usable in a self-driving car or a robot.