
A Low-Cost Method Allows Autonomous Cars to Detect 3D Objects with High Accuracy

The current generation of laser sensors used for detecting 3D objects in the paths of autonomous cars is costly, energy-inefficient, and bulky, yet highly precise.

These are light detection and ranging (LiDAR) sensors that are attached to the roofs of cars, where they tend to increase wind drag, a major drawback for electric cars. These sensors can add about $10,000 to a car’s cost. However, despite their disadvantages, a majority of experts believe that LiDAR sensors offer the only reasonable means for autonomous vehicles to safely perceive various hazards on the road, including cars and pedestrians.

Now, a research team at Cornell University has found that a simpler technique, utilizing a pair of low-cost cameras located on both sides of the windshield, can potentially identify objects with almost the same precision as the LiDAR sensor but at only a fraction of the cost.

The scientists noted that when the captured images are analyzed from a bird’s-eye view instead of the more customary frontal view, their accuracy more than tripled, making stereo cameras a feasible and cost-effective alternative to LiDAR sensors.

One of the essential problems in self-driving cars is to identify objects around them – obviously that’s crucial for a car to navigate its environment.

Kilian Weinberger, Study Senior Author and Associate Professor, Department of Computer Science, Cornell University

The paper, “Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving,” will be presented at the 2019 Conference on Computer Vision and Pattern Recognition, which will be held in Long Beach, California, from June 15th to 21st.

“The common belief is that you couldn’t make self-driving cars without LiDARs,” stated Weinberger. “We’ve shown, at least in principle, that it’s possible.”

Yan Wang, a doctoral student in computer science, is the paper’s first author.

LiDAR sensors use lasers to produce 3D point maps of their surroundings, measuring the distance to objects by timing how long the reflected light takes to return. Stereo cameras, which estimate depth from a pair of perspectives much as human eyes do, have long appeared promising. However, their accuracy in object detection has been disappointingly low, and the conventional wisdom was that they were too inaccurate to be useful.
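The two-camera depth estimate mentioned above rests on the standard pinhole-stereo relation, depth = focal length × baseline / disparity. A minimal sketch follows; the focal length, baseline, and disparity values are illustrative placeholders, not figures from the study:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Pinhole stereo relation: depth = f * B / d.

    disparity_px: horizontal pixel shift of a point between the two cameras.
    focal_length_px: camera focal length, in pixels.
    baseline_m: distance between the two cameras, in meters.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_length_px * baseline_m / disparity_px

# A point that shifts 35 px between cameras 0.54 m apart,
# imaged with a 721 px focal length (hypothetical values):
depth_m = depth_from_disparity(35.0, 721.0, 0.54)
print(round(float(depth_m), 1))  # ~11.1 m
```

Note the inverse relationship: as disparity shrinks for distant objects, small pixel errors translate into large depth errors, which is one reason stereo depth has historically lagged LiDAR at long range.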

Wang and collaborators then examined the stereo camera data closely and found, unexpectedly, that it was almost as accurate as LiDAR’s. The gap in accuracy, they discovered, emerged in the way the stereo camera data was being analyzed.

In the case of a majority of autonomous cars, the data recorded by sensors or cameras are analyzed with the help of convolutional neural networks—a form of machine learning that detects objects in images by applying filters that recognize patterns associated with them. Convolutional neural networks like these have been shown to be excellent at detecting objects in regular color photographs; however, they tend to distort 3D data if it is represented from the front. Hence, the accuracy more than tripled when Wang and coworkers changed the representation from a frontal perspective to a point cloud seen from a bird’s-eye view.
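The representation change described above can be sketched as follows: each pixel of an estimated depth map is back-projected into 3D camera coordinates, and the resulting "pseudo-LiDAR" point cloud is then viewed from above rather than from the front. This is a simplified illustration assuming pinhole-camera intrinsics (fx, fy, cx, cy); it is not the authors' exact pipeline:

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project each pixel (u, v) with depth z into camera coordinates:
        x = (u - cx) * z / fx   (lateral)
        y = (v - cy) * z / fy   (vertical)
        z = depth               (forward)
    Returns an (N, 3) array of 3D points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grids, shape (h, w)
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy 2x2 depth map, every pixel 10 m away, with hypothetical intrinsics:
cloud = depth_map_to_point_cloud(np.full((2, 2), 10.0), fx=700, fy=700, cx=1, cy=1)

# Bird's-eye view: keep lateral (x) and forward (z), discard height (y).
bird_eye = cloud[:, [0, 2]]
```

Viewed in the x-z plane, objects keep their true metric shape and separation instead of being blurred into the background, which is why detectors trained on this representation fare so much better.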

When you have camera images, it’s so, so, so tempting to look at the frontal view, because that’s what the camera sees. But there also lies the problem, because if you see objects from the front then the way they’re processed actually deforms them, and you blur objects into the background and deform their shapes.

Kilian Weinberger, Study Senior Author and Associate Professor, Department of Computer Science, Cornell University

Ultimately, stereo cameras could serve as a primary means of detecting objects in lower-priced cars, or as a backup in higher-end cars that are also fitted with LiDAR, said Weinberger.

“The self-driving car industry has been reluctant to move away from LiDAR, even with the high costs, given its excellent range accuracy—which is essential for safety around the car,” stated Mark Campbell, the John A. Mellowes ’60 Professor and S.C. Thomas Sze Director of the Sibley School of Mechanical and Aerospace Engineering and the paper’s co-author. “The dramatic improvement of range detection and accuracy, with the bird’s-eye representation of camera data, has the potential to revolutionize the industry.”

The outcomes have implications beyond autonomous cars, stated Bharath Hariharan, co-author and assistant professor of computer science.

There is a tendency in current practice to feed the data as-is to complex machine learning algorithms under the assumption that these algorithms can always extract the relevant information. Our results suggest that this is not necessarily true, and that we should give some thought to how the data is represented.

Bharath Hariharan, Study Co-Author and Assistant Professor, Department of Computer Science, Cornell University

Cornell postdoctoral researcher Wei-Lun Chao and Divyansh Garg ’20 also contributed to the study.

The study was partly supported by grants from the National Science Foundation, the Bill and Melinda Gates Foundation, and the Office of Naval Research.
