MIT’s Depth-Sensing Imaging System Can See Through Dense Fog

Guy Satat, a graduate student in the MIT Media Lab, who led the new study. (Credit: Melanie Gonick/MIT)

Researchers at MIT have built a system that can produce images of objects hidden by fog so thick that human vision cannot penetrate it. The system can also gauge the objects’ distance.

The inability to handle foggy driving conditions has been one of the main obstacles to the development of autonomous vehicle navigation systems that use visible light, which are preferable to radar-based systems for their high resolution and their ability to read road signs and track lane markers. The MIT system could therefore be a crucial step toward self-driving cars.

The team tested the system using a small tank of water with the vibrating motor from a humidifier immersed in it. In fog so dense that human vision could penetrate only 36 centimeters, the system was able to resolve images of objects and gauge their depth at a range of 57 centimeters.

Fifty-seven centimeters is not a great distance, but the fog produced for the study was far denser than any a human driver would have to contend with; in the real world, a typical fog might afford visibility of around 30 to 50 meters. The crucial point is that the system performed better than human vision, whereas most imaging systems perform far worse. A navigation system that was even as good as a human driver at driving in fog would be a huge breakthrough.

I decided to take on the challenge of developing a system that can see through actual fog. We’re dealing with realistic fog, which is dense, dynamic, and heterogeneous. It is constantly moving and changing, with patches of denser or less-dense fog. Other methods are not designed to cope with such realistic scenarios.

Guy Satat, Graduate Student, Media Lab, MIT

Satat and his colleagues will present a paper describing their system at the International Conference on Computational Photography in May. Satat is the paper’s first author, joined by his thesis advisor, Ramesh Raskar, an associate professor of media arts and sciences, and by Matthew Tancik, who was a graduate student in electrical engineering and computer science when the work was done.

Playing the odds

Like many of the projects carried out in Raskar’s Camera Culture Group, the new system uses a time-of-flight camera, which fires ultrashort bursts of laser light into a scene and measures the time it takes their reflections to return.

On a clear day, the light’s return time accurately indicates the distances of the objects that reflected it. But fog causes light to “scatter,” or bounce around in random ways. In foggy weather, most of the light that reaches the camera’s sensor will have been reflected by airborne water droplets, not by the kinds of objects that autonomous vehicles need to avoid. And even the light that does reflect from potential obstacles will arrive at different times, having been deflected by water droplets on both the way out and the way back.
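
As a rough illustration of the clear-weather case described above (a sketch, not code from the paper), an object’s distance follows directly from the pulse’s round-trip time:

```python
# Minimal sketch (not from the paper): in clear air, an object's distance
# follows directly from the round-trip time of a light pulse.
SPEED_OF_LIGHT = 3.0e8  # meters per second, approximate

def distance_from_return_time(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, assuming an unscattered round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A reflection that returns after 3.8 nanoseconds implies an object about 0.57 m away.
print(distance_from_return_time(3.8e-9))
```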

The MIT system gets around this problem by using statistics. The patterns produced by fog-reflected light vary with the density of the fog: on average, light penetrates less deeply into a thick fog than into a light fog. But the MIT team was able to show that, no matter how thick the fog, the arrival times of the reflected light follow a statistical pattern known as a gamma distribution.

Gamma distributions are slightly more complex than Gaussian distributions, the common distributions that produce the familiar bell curve: they can be asymmetrical, and they can take on a wider variety of shapes. But like Gaussian distributions, they are completely described by two variables. The MIT system estimates the values of those variables on the fly and uses the resulting distribution to filter fog reflection out of the light signal that reaches the time-of-flight camera’s sensor.

Crucially, the system calculates a separate gamma distribution for each of the 1,024 pixels in the sensor. That is why it can handle the variations in fog density that foiled earlier systems: it can cope with circumstances in which each pixel sees a different type of fog.
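
As a hedged sketch of what such a per-pixel fit could look like, the Python below estimates a two-parameter gamma model for each of 1,024 simulated pixels using simple method-of-moments formulas; the sensor layout, the simulated arrival times, and the estimator itself are illustrative assumptions, not the researchers’ actual method:

```python
import numpy as np

# Hedged sketch: estimate a separate two-parameter gamma model for every pixel.
# The simulated arrival times and the moment-based estimator are assumptions
# for illustration, not the researchers' data or algorithm.
rng = np.random.default_rng(0)
n_pixels = 1024                                   # e.g., a 32 x 32 sensor
arrival_times = rng.gamma(shape=2.5, scale=1.2e-9, size=(n_pixels, 5000))

# A gamma distribution is fully described by two parameters (shape k, scale theta).
# Method-of-moments estimates: k = mean^2 / variance, theta = variance / mean.
means = arrival_times.mean(axis=1)
variances = arrival_times.var(axis=1)
shape_k = means ** 2 / variances                  # one shape estimate per pixel
scale_theta = variances / means                   # one scale estimate per pixel

print(shape_k[:3], scale_theta[:3])               # per-pixel fog parameters
```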

Signature shapes

The camera counts the number of photons (light particles) that reach it every 56 picoseconds, or trillionths of a second. The MIT system uses those raw counts to produce a histogram, essentially a bar graph in which the heights of the bars represent the photon counts for each interval. It then finds the gamma distribution that best fits the shape of the bar graph and simply subtracts the associated photon counts from the measured totals. What remain are slight spikes at the distances that correspond to physical obstacles.
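
The following Python sketch walks through that pipeline for a single pixel on simulated data: it bins photon arrivals into a 56-picosecond histogram, fits a gamma distribution as the fog background, subtracts it, and converts the time of the remaining spike into a depth. The simulated photon counts and the particular fitting routine are assumptions for illustration, not the paper’s implementation:

```python
import numpy as np
from scipy import stats

# Single-pixel sketch on simulated data (not the authors' implementation):
# histogram photon arrivals in 56 ps bins, fit a gamma distribution as the fog
# background, subtract it, and read off the depth of the surviving spike.
BIN_WIDTH = 56e-12          # seconds per histogram bin, as described above
SPEED_OF_LIGHT = 3.0e8      # meters per second, approximate

rng = np.random.default_rng(1)
fog_times = rng.gamma(shape=2.0, scale=0.8e-9, size=20000)        # fog-scattered returns
object_times = rng.normal(loc=3.8e-9, scale=30e-12, size=1500)    # returns from a real object
times = np.concatenate([fog_times, object_times])

edges = np.arange(0.0, 8e-9, BIN_WIDTH)
counts, _ = np.histogram(times, bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit the gamma background and scale its density to expected counts per bin.
shape_k, loc, scale_theta = stats.gamma.fit(times, floc=0)
fog_counts = stats.gamma.pdf(centers, shape_k, loc=loc, scale=scale_theta) * times.size * BIN_WIDTH

residual = counts - fog_counts               # what remains are spikes from obstacles
peak_time = centers[np.argmax(residual)]     # round-trip time of the strongest spike
depth = SPEED_OF_LIGHT * peak_time / 2       # convert time to one-way distance
print(f"estimated depth: {depth:.2f} m")     # roughly 0.57 m with these simulated numbers
```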

What’s nice about this is that it’s pretty simple. If you look at the computation and the method, it’s surprisingly not complex. We also don’t need any prior knowledge about the fog and its density, which helps it to work in a wide range of fog conditions.

Guy Satat, Graduate Student, Media Lab, MIT

Satat tested the system using a fog chamber a meter long. Inside the chamber, he mounted regularly spaced distance markers, which provided a rough measure of visibility. He also placed a series of small objects (wooden blocks, a wooden figurine, silhouettes of letters) that the system was able to image even when they were invisible to the naked eye.

There are different ways to measure visibility, however: objects with different colors and textures are visible through fog at different distances. So, to assess the system’s performance, he used a more rigorous metric called optical depth, which describes the amount of light that penetrates the fog.

Optical depth is independent of distance, so the system’s performance on fog with a given optical depth at a range of 1 meter should be a good predictor of its performance on fog with the same optical depth at a range of 30 meters. In fact, the system may even perform better at longer distances, as the differences between photons’ arrival times will be greater, which could make for more accurate histograms.
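
For readers unfamiliar with the term, optical depth is conventionally defined as the negative logarithm of the fraction of light transmitted through a medium; the short sketch below illustrates that standard definition (the paper may compute it differently):

```python
import math

# General-optics sketch (a common convention, not necessarily the paper's exact
# measure): optical depth is the negative log of the fraction of light that
# passes through the fog, so it is dimensionless and independent of distance.

def optical_depth(transmitted_fraction: float) -> float:
    return -math.log(transmitted_fraction)

print(optical_depth(0.10))   # ~2.3 when only 10% of the light gets through
print(optical_depth(0.01))   # ~4.6 when only 1% gets through
```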

Bad weather is one of the big remaining hurdles to address for autonomous driving technology. Guy and Ramesh’s innovative work produces the best visibility enhancement I have seen at visible or near-infrared wavelengths and has the potential to be implemented on cars very soon.

Srinivasa Narasimhan, Professor, Computer Science, Carnegie Mellon University

Seeing through fog (Credit: Melanie Gonick/MIT)
