Editorial Feature

Are Today's Sensors Ready for Fully Autonomous Systems?

The promise of fully autonomous systems, such as self-driving cars and unmanned aerial vehicles, rests on a fundamental requirement: machines must perceive the world as accurately and reliably as a trained human operator. Sensors are the eyes and ears of these systems, and whether current sensor technology meets this standard has significant implications for public safety, infrastructure planning, and the pace of autonomous deployment worldwide.

Illustration of the sensors used in autonomous cars. Image Credit: Oselote/Shutterstock.com

The research shows that while sensor fusion helps, current sensor technology still falls short of what seamless, fully autonomous operation requires.


After the mass 'systematic failure' of robotaxis seen in Wuhan, China, are fully automated systems really ready for use yet?

Modern autonomous vehicles use a variety of sensors to understand their surroundings, and each sensor has a specific job (a short illustrative sketch of these nominal figures follows the list):

  • Ultrasonic sensors are used to detect objects within a range of up to 2 meters
  • Cameras capture important visual details like lane markings and traffic signs from as far as 250 meters away
  • Radar measures the speed of moving objects and works well in bad weather, reaching distances of up to 200 meters
  • Light detection and ranging (LiDAR) creates detailed 3D maps of the environment with high accuracy, usually within 2 to 5 centimeters
  • Global Navigation Satellite Systems (GNSS) provide precise, real-time location data to on-board computers for advanced driver-assistance systems (ADAS) and fully autonomous driving1,2 
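
As a rough illustration, these nominal figures can be captured in a simple data structure. The Python sketch below is purely illustrative: the names and fields are assumptions, and the LiDAR entry uses the roughly 100-meter figure quoted later for 905 nm units, since the list above gives only its accuracy.

    from dataclasses import dataclass

    @dataclass
    class SensorSpec:
        """Nominal figures only; real products vary widely by model and conditions."""
        name: str
        max_range_m: float   # approximate usable detection range
        role: str            # what each sensor contributes, per the list above

    # Illustrative catalogue built from the nominal values quoted in the list above.
    SENSOR_SUITE = [
        SensorSpec("ultrasonic", 2, "close-range object detection"),
        SensorSpec("camera", 250, "lane markings, traffic signs, object appearance"),
        SensorSpec("radar", 200, "velocity measurement, robust in bad weather"),
        SensorSpec("lidar", 100, "3D mapping, 2-5 cm accuracy (905 nm units)"),
    ]

    def sensors_covering(distance_m: float) -> list[str]:
        """Which modalities can, nominally, even see a target at this distance?"""
        return [s.name for s in SENSOR_SUITE if s.max_range_m >= distance_m]

    print(sensors_covering(150))  # ['camera', 'radar']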

This combination of sensors aims to provide a complete and reliable view of the vehicle's surroundings. Automation is classified from Level 0 to 5, according to the Society of Automotive Engineers (SAE).

At Level 0, the driver is fully in control, while at Level 5, the vehicle is fully automated and requires no human input. Current technology still struggles to perform consistently in all environments.1,2
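
In software, these levels are often encoded directly. A minimal sketch of the SAE J3016 scale as a Python enumeration is shown below; the helper function and its name are illustrative, not part of the standard.

    from enum import IntEnum

    class SAELevel(IntEnum):
        """SAE J3016 driving-automation levels referenced in the text."""
        NO_AUTOMATION = 0           # human performs the entire driving task
        DRIVER_ASSISTANCE = 1       # steering or speed support, not both
        PARTIAL_AUTOMATION = 2      # steering and speed support, driver supervises
        CONDITIONAL_AUTOMATION = 3  # system drives, driver must take over on request
        HIGH_AUTOMATION = 4         # no takeover needed within a defined operating domain
        FULL_AUTOMATION = 5         # no human input needed anywhere

    def driver_must_supervise(level: SAELevel) -> bool:
        # At Levels 0-2 the human remains responsible for monitoring the road.
        return level <= SAELevel.PARTIAL_AUTOMATION

    print(driver_must_supervise(SAELevel(2)))  # True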

Where Each Sensor Falls Short

A Waymo autonomous vehicle, recognizable by its distinct sensor array, parked on a residential street. Image Credit: bluestork/Shutterstock.com

No single sensor currently available can achieve Level 5 performance on its own.

LiDAR excels in fair-weather conditions, but its laser pulses scatter in rain, snow, and dense fog. This scattering causes significant performance loss precisely when reliable sensing matters most. Highly reflective surfaces pose another challenge, as laser beams can bounce off them, producing unusable range data and leading to inaccuracies in spatial mapping.1,2

Radar, while reliable in adverse weather, suffers from a fundamental resolution deficit: it cannot classify objects by appearance, and its spatial resolution is far inferior to that of cameras or LiDAR. As more vehicles are equipped with frequency-modulated continuous-wave (FMCW) radars operating in the 76-81 GHz band, shared-frequency interference among nearby vehicles is also becoming a measurable operational hazard.1,2

Cameras also have weaknesses. They're highly vulnerable to extreme lighting, rain, condensation, and fog. Degraded camera images fed into AI perception models have been directly linked to simulated autonomous-vehicle collisions, even when basic mitigation measures were in place.

Additionally, GNSS is generally accurate for global positioning, but its accuracy degrades in dense urban areas, and it is susceptible to jamming and spoofing, both of which compromise navigation.1,2

Could Sensor Fusion Be the Answer?

To address these collective issues, the industry uses sensor fusion, which combines data from different types of sensors to create a more reliable model of the environment.

The most widely used combinations in autonomous systems are camera-radar (CR), camera-LiDAR-radar (CLR), and camera-LiDAR (CL). The CR combination delivers high-resolution visual data alongside accurate velocity measurements, while CLR adds precise 3D distance mapping through LiDAR point clouds.1,3,4
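
To make the idea concrete, the Python sketch below shows object-level (late) camera-radar fusion: each camera detection is matched to the radar target closest in bearing and inherits its range and speed. The class names, gating threshold, and matching rule are illustrative assumptions, not any vendor's pipeline.

    from dataclasses import dataclass

    @dataclass
    class CameraDetection:
        bearing_deg: float   # azimuth of the detection in the vehicle frame
        label: str           # e.g. "car" or "pedestrian" from the vision model

    @dataclass
    class RadarTarget:
        bearing_deg: float
        range_m: float
        radial_speed_mps: float  # from the Doppler measurement

    def fuse_cr(cams: list[CameraDetection], radars: list[RadarTarget],
                max_bearing_gap_deg: float = 2.0):
        """Object-level CR fusion: each camera label inherits the range and speed
        of the radar target nearest in bearing, provided it falls within the gate."""
        fused = []
        for cam in cams:
            best = min(radars, key=lambda r: abs(r.bearing_deg - cam.bearing_deg),
                       default=None)
            if best and abs(best.bearing_deg - cam.bearing_deg) <= max_bearing_gap_deg:
                fused.append((cam.label, best.range_m, best.radial_speed_mps))
        return fused

    objs = fuse_cr([CameraDetection(10.2, "car")],
                   [RadarTarget(10.0, 55.0, -3.1), RadarTarget(40.0, 20.0, 0.0)])
    print(objs)  # [('car', 55.0, -3.1)]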

But fusion comes with its own challenges. Different sensors may operate at different speeds, causing timing issues. If a LiDAR unit scans at one frequency while a camera captures images at another, the mismatch can produce position errors, especially in dynamic settings. Over time, the clocks in different sensors can also drift, compounding the problem.
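
In practice, a first step is usually to bring all measurements onto a common timebase, for example by interpolating the faster stream onto the slower stream's timestamps. The Python sketch below is a toy illustration under assumed rates (30 Hz camera, 10 Hz LiDAR) and an assumed 4-millisecond clock offset; it is not a production time-synchronization scheme.

    import numpy as np

    cam_t = np.arange(0.0, 1.0, 1 / 30)            # camera frame timestamps (s), 30 Hz
    cam_x = 20.0 * cam_t                           # object position seen by the camera (m), closing at 20 m/s
    lidar_t = np.arange(0.0, 1.0, 1 / 10) + 0.004  # LiDAR sweep timestamps with a 4 ms clock offset

    # Resample the camera track onto the LiDAR timestamps so both streams refer to
    # the same instants; np.interp performs simple linear interpolation between frames.
    cam_x_at_lidar = np.interp(lidar_t, cam_t, cam_x)

    # An uncorrected 4 ms offset on an object closing at 20 m/s already corresponds
    # to an 8 cm position error, which is why clock drift matters in dynamic scenes.
    print(cam_x_at_lidar[:3])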

Mid-level feature fusion is efficient but insufficient for achieving higher levels of autonomy because it loses important context needed to understand complex scenes. While deep learning fusion methods can improve performance in busy urban areas, they require large training datasets and can be hard to test in unusual or extreme situations.1,3,4

Redundancy for Safety

Given the well-documented failure modes of both individual sensors and fusion pipelines, leading autonomous programs now treat hardware and software redundancy as a fundamental safety requirement. BMW’s autonomous driving system, for example, uses three separate Automated Driving (AD) channels that run in parallel. These channels share no hardware, software, or sensors.1

Research simulating three AD channels in parallel found that no single channel performed best in every scenario. Each failed under conditions where the others succeeded, while the combined multichannel system outperformed any one channel alone.
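
The arbitration idea behind such multichannel designs can be sketched very simply: each independent channel proposes a manoeuvre, and a supervisor defaults to the safest option whenever the channels disagree. The Python toy below is an assumption for illustration only; it is not BMW's actual arbitration logic.

    from collections import Counter

    def arbitrate(channel_outputs: list[str]) -> str:
        """Toy arbitration over independent AD channels.
        Any channel requesting 'brake' wins outright; otherwise a strict majority
        is required, and the system falls back to braking when there is none."""
        if "brake" in channel_outputs:
            return "brake"  # safety-first: one dissenting channel is enough
        winner, count = Counter(channel_outputs).most_common(1)[0]
        return winner if count > len(channel_outputs) // 2 else "brake"

    print(arbitrate(["proceed", "proceed", "brake"]))    # 'brake'
    print(arbitrate(["proceed", "proceed", "proceed"]))  # 'proceed'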

The major takeaway from these findings is that readiness for full autonomy cannot be determined by a single sensor or a single fusion pipeline. What matters is the performance of the perception system as a whole.

Multiple channels, sensor diversity, and rigorous calibration all contribute to maintaining safe operation across varied conditions. Redundancy, then, is a safeguard and a core design principle for systems aiming for Level 5 autonomy, in which vehicles operate without human intervention.1


Emerging Technologies and Persistent Gaps

Several research projects are working to narrow this gap in performance.

The move from 905 nm LiDAR to the 1550 nm wavelength band has increased the reliable detection range from roughly 100 meters to 300 meters, while also easing the eye-safety power limits that constrained earlier systems.

High-resolution 4D imaging radar is also improving object classification, making radar more useful in poor weather. Event cameras add another advantage: instead of capturing full frames at fixed intervals, they record pixel-level brightness changes asynchronously. That approach eliminates motion blur and delivers an effective temporal resolution above 1,000 frames per second, making these cameras especially effective in fast-moving, dynamic environments.1,2,5,6
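
The event-camera behaviour described above follows a standard model: a pixel fires an ON or OFF event whenever its log-intensity changes by more than a contrast threshold, rather than waiting for the next full frame. The Python sketch below is an idealized illustration (real devices stream events asynchronously; here two snapshots are simply compared, and the threshold value is an assumption).

    import numpy as np

    def events_between(prev_frame: np.ndarray, new_frame: np.ndarray,
                       contrast_threshold: float = 0.2):
        """Idealized event-camera model: a pixel fires an ON/OFF event when its
        log-intensity change exceeds the contrast threshold."""
        eps = 1e-6
        dlog = np.log(new_frame + eps) - np.log(prev_frame + eps)
        on = np.argwhere(dlog > contrast_threshold)    # brightness increased
        off = np.argwhere(dlog < -contrast_threshold)  # brightness decreased
        return on, off

    prev = np.full((4, 4), 0.5)
    new = prev.copy()
    new[1, 2] = 0.9                      # one pixel brightens sharply
    on_events, off_events = events_between(prev, new)
    print(on_events)                     # [[1 2]] -> only the changed pixel fires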

Despite these improvements, current sensors still struggle with high false detection rates. The processing costs for managing dense multi-modal sensor streams in real time can limit large-scale use. Some manufacturers opt for vision-based systems that eliminate LiDAR to cut costs. However, these systems require advanced algorithms and extensive training data, and their accuracy decreases in challenging conditions like rain or backlighting.

Experts agree that no single sensor can guarantee full autonomy in all situations. However, when combined in well-designed fusion systems, they can achieve full autonomy in specific, well-mapped areas. The main challenge is still developing technology that performs reliably across all environments and situations.1,2,5,6

References and Further Reading

  1. Matos, F. et al. (2024). A Survey on Sensor Failures in Autonomous Vehicles: Challenges and Solutions. Sensors, 24(16), 5108. DOI:10.3390/s24165108. https://www.mdpi.com/1424-8220/24/16/5108
  2. Mohammad, S. et al. (2025). Perception Technologies for Autonomous Transportation: A Comparative Analysis of LiDAR, Radar, Camera, and Sonar. CRPASE: Transactions of Civil and Environmental Engineering, 11(4). DOI:10.82042/crpase.11.4.2967. https://crpase.com/viewmore.php?pid=340
  3. Xie, W. et al. (2024). Timely Fusion of Surround Radar/Lidar for Object Detection in Autonomous Driving Systems. IEEE Xplore. DOI:10.1109/RTCSA62462.2024.00014. https://ieeexplore.ieee.org/document/10695646
  4. Wang, M. et al. (2025). Adaptive Fusion of LiDAR Features for 3D Object Detection in Autonomous Driving. Sensors, 25(13). DOI:10.3390/s25133865. https://www.mdpi.com/1424-8220/25/13/3865
  5. Liu, W. (2025). Review of Automotive Sensors Based on Autonomous Driving. Proceedings of the 2025 2nd International Conference on Electrical Engineering and Intelligent Control (EEIC 2025), Advances in Engineering Research 279. DOI:10.2991/978-94-6463-864-6_59. https://www.atlantis-press.com/proceedings/eeic-25/126016753
  6. Yang, C. (2024). New Trends in Sensors for Autonomous Driving Perception Systems. Omdia. https://omdia.tech.informa.com/blogs/2024/mar/new-trends-in-sensors-for-autonomous-driving-perception-systems


Written by Ankit Singh

Ankit is a research scholar based in Mumbai, India, specializing in neuronal membrane biophysics. He holds a Bachelor of Science degree in Chemistry and has a keen interest in building scientific instruments. He is also passionate about content writing and can adeptly convey complex concepts. Outside of academia, Ankit enjoys sports, reading books, and exploring documentaries, and has a particular interest in credit cards and finance. He also finds relaxation and inspiration in music, especially songs and ghazals.
