Study Reveals Industry-Standard Two-Camera LIDAR Sensors on Autonomous Vehicles Can Be Fooled

Researchers at Duke University have shown the first attack strategy that can trick industry-standard autonomous vehicle sensors into thinking that nearby objects are closer (or further) than they appear without being detected.

Researchers have shown that a popular method for securing LiDAR sensors against "naive attacks" only works at short distances and remains vulnerable at longer ones. Here, a LiDAR system is fooled into thinking a car is somewhere else until it is too late to avoid a sudden and drastic course correction. Image Credit: Duke University.

The study suggests that adding optical three-dimensional (3D) capabilities or the ability to share data with nearby cars may be necessary to fully protect autonomous cars against attacks. The findings will be presented at the 2022 USENIX Security Symposium, a leading venue in the field, between August 10th and 12th.

One of the greatest challenges for scientists designing autonomous driving systems is guarding against attacks. A common safety strategy is to cross-check data from separate sensing systems against one another to ensure their measurements make sense together.

The most common localization approach used by today's autonomous car companies fuses 3D data from LiDAR, which is essentially laser-based radar, with two-dimensional (2D) data from cameras. This combination has proven very robust against a wide range of attacks that attempt to fool the visual system into perceiving the world incorrectly.

That robustness, however, turns out to have limits.

Our goal is to understand the limitations of existing systems so that we can protect against attacks. This research shows how adding just a few data points in the 3D point cloud, ahead of or behind where an object actually is, can confuse these systems into making dangerous decisions.

Miroslav Pajic, Dickinson Family Associate Professor of Electrical and Computer Engineering, Duke University

The new attack strategy works by shining a laser into a car's LiDAR sensor to inject false data points into its perception. Earlier research has shown that if those data points conflict with what the car's camera sees, the system can detect the attack.

But the new study from Pajic and his colleagues demonstrates that 3D LiDAR data points strategically placed within a specific area of a camera's 2D field of view can fool the system.

This vulnerable area extends out in front of a camera's lens in the shape of a frustum, i.e., a 3D pyramid with its tip cut off. For a forward-facing camera mounted on a car, this means that a few data points placed in front of or behind another nearby car can shift the system's perception of that car's position by several meters.
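The geometric idea can be sketched with a simple pinhole-camera model (this is an illustration, not the authors' code; the camera intrinsics and tolerance below are hypothetical). A spoofed LiDAR point placed anywhere along the viewing ray through a real object projects onto the same image region, so a 2D consistency check against the camera cannot tell the two apart:

```python
import numpy as np

# Hypothetical intrinsics for a forward-facing camera (illustrative values).
FX = FY = 1000.0          # focal lengths in pixels
CX, CY = 640.0, 360.0     # principal point

def project(point_3d):
    """Project a 3D point (camera frame, z pointing forward) to pixel coords."""
    x, y, z = point_3d
    return np.array([FX * x / z + CX, FY * y / z + CY])

def in_same_frustum(real_pt, spoof_pt, tol_px=5.0):
    """True if both points land on (nearly) the same pixel, i.e. the spoofed
    point lies on the viewing ray of the real object."""
    return np.linalg.norm(project(real_pt) - project(spoof_pt)) < tol_px

real_car = np.array([0.0, 0.0, 20.0])   # real vehicle 20 m ahead
spoofed  = real_car * 1.5               # injected point 30 m out, same ray
off_ray  = np.array([2.0, 0.0, 20.0])   # a point off to the side

print(in_same_frustum(real_car, spoofed))   # True: passes the 2D check
print(in_same_frustum(real_car, off_ray))   # False: camera would flag this
```

Any depth along that ray passes the camera check, which is exactly why the vulnerable region is a frustum rather than a single point.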

This so-called frustum attack can fool adaptive cruise control into thinking a vehicle is slowing down or speeding up. And by the time the system can figure out there’s an issue, there will be no way to avoid hitting the car without aggressive maneuvers that could create even more problems.

Miroslav Pajic, Dickinson Family Associate Professor of Electrical and Computer Engineering, Duke University
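To see why a shifted range estimate is dangerous, consider a toy proportional cruise controller (a deliberately simplified sketch; real adaptive cruise control is far more sophisticated, and the gain and gap values here are made up):

```python
def acc_command(range_m, desired_gap_m=30.0, gain=0.5):
    """Toy proportional controller: positive output means accelerate,
    negative means brake. Illustrative only."""
    return gain * (range_m - desired_gap_m)

true_gap = 20.0      # the real car ahead is already too close
spoofed_gap = 35.0   # a frustum attack shifts the perceived position back 15 m

print(acc_command(true_gap))     # -5.0: the car should be braking
print(acc_command(spoofed_gap))  #  2.5: instead it accelerates toward the real car
```

The controller is doing the right thing given its input; the attack corrupts the input itself, so by the time other sensors reveal the true gap, only an aggressive maneuver can avoid a collision.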

Pajic explains that there is not much risk of somebody spending time setting up lasers on a car or roadside object to dupe individual vehicles traveling on the highway. That risk intensifies, however, in military scenarios where single vehicles can be extremely high-value targets.

If hackers were to discover a way of forming these false data points virtually rather than requiring physical lasers, numerous vehicles could be attacked simultaneously.

Pajic says the route to safeguarding against these attacks is added redundancy. For instance, if cars had "stereo cameras" with overlapping fields of view, they could estimate distances more accurately and flag LiDAR data that does not match their perception.

Stereo cameras are more likely to be a reliable consistency check, though no software has been sufficiently validated for how to determine if the LIDAR/stereo camera data are consistent or what to do if it is found they are inconsistent. Also, perfectly securing the entire vehicle would require multiple sets of stereo cameras around its entire body to provide 100% coverage.

Spencer Hallyburton, PhD Candidate and Study Lead Author, Cyber-Physical Systems Lab, Duke University

Another option, Pajic proposes, is to develop systems in which cars in close proximity share some of their data. Physical attacks are unlikely to affect many cars at once, and because different car brands will have different operating systems, a cyberattack is unlikely to compromise all cars with a single blow.
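One simple way such shared data could be used, sketched here under assumed names and thresholds that are not from the study, is an outlier check: each nearby car reports its own range estimate to a shared object, and any estimate far from the group median is treated as suspect:

```python
import statistics

def suspect_estimates(estimates_m, tol_m=3.0):
    """Return the reported range estimates that deviate from the group
    median by more than tol_m meters (an illustrative threshold)."""
    med = statistics.median(estimates_m)
    return [e for e in estimates_m if abs(e - med) > tol_m]

# Three honest cars see the target at ~20 m; one spoofed sensor reports 35 m.
print(suspect_estimates([19.5, 20.0, 20.5, 35.0]))  # [35.0]
```

A median-based check tolerates a minority of spoofed reports, which matches the intuition that a physical attack is unlikely to hit every nearby car at once.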

With all of the work that is going on in this field, we will be able to build systems that you can trust your life with. It might take 10+ years, but I’m confident that we will get there.

Miroslav Pajic, Dickinson Family Associate Professor of Electrical and Computer Engineering, Duke University

This study received support from the Office of Naval Research (N00014-20-1-2745), the National Science Foundation (CNS-1652544, CNS-2112562), and the Air Force Office of Scientific Research (FA9550-19-1-0169).
