Driving through fog can be tricky even for the best human driver. But a new camera could help vehicles navigate through tough weather on their own.
Built for just $500, the camera uses off-the-shelf LEDs that strobe at nanosecond periods. It relies on time-of-flight photography, which measures how long it takes a light signal to reflect back to the camera in order to determine an object's distance, a bit like a bat surveying a cave with sonar.
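The basic time-of-flight calculation is simple: light travels out and back, so the target's distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name and values here are illustrative, not from the MIT camera's software):

```python
# Toy time-of-flight range calculation: distance follows from the
# round-trip travel time of a reflected light pulse.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to a reflector, given the pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

# A pulse returning after roughly 6.67 nanoseconds implies a target
# about one metre away -- hence the need for nanosecond-scale strobing.
print(round(tof_distance(6.67e-9), 2))
```

The nanosecond strobing matters because at these speeds a one-metre change in distance shifts the return time by only a few billionths of a second.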
Developed by a team led by Ramesh Raskar of the Camera Culture group at MIT's Media Lab, the 3D "nano-camera" uses a wave of strobing light to sweep across a scene, and then its software differentiates between light reflected off opaque surfaces and light moving through things like rain or glass.
Conventional motion sensors assume that only one path of light is feeding the camera. A device like that would have trouble rendering, for example, both a window and the scene behind it. The nano-camera, by contrast, accounts for light that reflects off both the glass itself and an object behind it.
Unequal paths
The camera distinguishes between direct light and light that has been scattered or diffracted, so it could be used in vehicle collision-avoidance systems or medical imaging. Rain, fog and body tissue all scatter light, but because the camera can accurately render the depth of translucent materials, it could give a clearer view of tissue structure or pick out a car's bumper through a blizzard.
"Not all optical paths are created equal," says team member Achuta Kadambi, a computer vision researcher at the MIT Media Lab. To select which paths of light contribute to a pixel and eliminate the smearing that can happen when several rays of light converge, the team programmed the camera to pick out only a few of the finite number of photons acting on a pixel at one time.
"Our camera exploits the mathematical concept of sparsity, where we rely only on a few dominant optical paths," says signal-processing researcher Ayush Bhandari. "Our algorithms selectively choose the light that acts on a pixel to more clearly render a 3D image."
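The sparsity idea Bhandari describes can be sketched in a few lines: model each pixel's signal as a handful of delayed echoes and keep only the strongest few, discarding weak scattered light. This is a simplified stand-in for illustration, not the team's actual algorithm:

```python
# Sparsity sketch: a pixel receives several echoes, each a
# (delay_in_nanoseconds, amplitude) pair. Keeping only the k dominant
# echoes suppresses smearing from weak scattered light.

def dominant_paths(echoes, k=2):
    """Return the k strongest (delay, amplitude) echoes at a pixel."""
    return sorted(echoes, key=lambda e: e[1], reverse=True)[:k]

# Hypothetical echoes: window glass, an object behind it, and two
# faint fog-scatter returns that should be discarded.
pixel = [(3.2, 0.4), (6.7, 0.9), (4.1, 0.05), (5.5, 0.03)]
print(dominant_paths(pixel))  # keeps the object and glass reflections
```

In this toy version the two strongest returns (the object and the glass) survive, while the faint scatter is dropped, which is the intuition behind relying on "a few dominant optical paths".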
Compared to previous work in the field of computational photography, this camera "takes a clever and different approach", says Steve Marschner, who studies computer graphics and computational photography at Cornell University in Ithaca, New York. "This is an exciting technology with a lot of potential that I expect will find application in surprising areas in the future."