After the mass 'systematic failure' of robotaxis reported in Wuhan, China, are fully automated driving systems really ready for public use?
Modern autonomous vehicles use a variety of sensors to understand their surroundings, and each sensor has a specific job.
- Ultrasonic sensors are used to detect objects within a range of up to 2 meters
- Cameras capture important visual details like lane markings and traffic signs from as far as 250 meters away
- Radar measures the speed of moving objects and works well in bad weather, reaching distances of up to 200 meters
- Light detection and ranging (LiDAR) creates detailed 3D maps of the environment with high accuracy, usually within 2 to 5 centimeters
- Global Navigation Satellite Systems (GNSS) provide precise, real-time location data to on-board computers for advanced driver-assistance systems (ADAS) and fully autonomous driving.1,2
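For readers who prefer to see those figures side by side, a minimal Python sketch (with illustrative field names, not a vendor specification) might capture the suite like this:

```python
from dataclasses import dataclass

@dataclass
class SensorSpec:
    """Nominal figures restated from the list above; illustrative only."""
    name: str
    max_range_m: float   # typical maximum useful range
    all_weather: bool    # keeps working in rain, snow, and fog
    role: str

SENSOR_SUITE = [
    SensorSpec("ultrasonic", 2,   True,  "near-field obstacle detection"),
    SensorSpec("camera",     250, False, "lane markings, signs, object classification"),
    SensorSpec("radar",      200, True,  "relative velocity of moving objects"),
    SensorSpec("lidar",      100, False, "3D mapping, roughly 2-5 cm range accuracy"),
]
# GNSS is omitted here because it supplies position, not perception range.
# The 100 m LiDAR figure assumes a 905 nm unit, as discussed later in the article.

# Example query: which sensors can still cover a target 150 m ahead in heavy rain?
usable = [s.name for s in SENSOR_SUITE if s.max_range_m >= 150 and s.all_weather]
print(usable)  # ['radar']
```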
This combination of sensors aims to provide a complete and reliable view of the vehicle's surroundings. Automation is classified from Level 0 to 5, according to the Society of Automotive Engineers (SAE).
At Level 0, the driver is fully in control, while at Level 5, the vehicle is fully automated and requires no human input. Current technology still struggles to perform consistently in all environments.1,2
Where Each Sensor Falls Short
No single sensor currently available can achieve Level 5 performance on its own.
LiDAR excels in fair-weather conditions, but its laser pulses scatter in rain, snow, and dense fog. This scattering causes significant performance loss precisely when reliable sensing matters most. Highly reflective surfaces pose another challenge, as laser beams can bounce off them, producing unusable range data and leading to inaccuracies in spatial mapping.1,2
Radar, while reliable in adverse weather, offers spatial resolution far below that of cameras or LiDAR and cannot classify objects by appearance. As more vehicles are equipped with frequency-modulated continuous-wave (FMCW) radars operating in the 76-81 GHz band, shared-frequency interference among nearby vehicles is also becoming a measurable operational hazard.1,2
Cameras also have weaknesses. They're highly vulnerable to extreme lighting, rain, condensation, and fog. Degraded camera images fed into AI perception models have been directly linked to simulated autonomous-vehicle collisions, even when basic mitigation measures were in place.
Additionally, GNSS is generally accurate for global positioning, but its signals are reflected or blocked by tall buildings in dense urban areas, and it is susceptible to jamming and spoofing, all of which compromise navigation accuracy.1,2
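One way to picture how these shortfalls interact is a toy weighting scheme in which each sensor's trust score is discounted whenever its known failure conditions are present. The numbers and condition names below are invented for illustration and are not drawn from any production stack.

```python
# Toy illustration (not a production scheme): down-weight each sensor's
# trust score according to the failure modes described above.
BASE_TRUST = {"lidar": 0.9, "radar": 0.7, "camera": 0.85, "gnss": 0.8}

# Hypothetical penalty factors per condition, per affected sensor.
PENALTIES = {
    "rain":         {"lidar": 0.5, "camera": 0.6},   # laser scatter, droplets on lens
    "fog":          {"lidar": 0.4, "camera": 0.5},
    "glare":        {"camera": 0.4},                  # extreme lighting
    "urban_canyon": {"gnss": 0.5},                    # multipath / blocked satellites
}

def effective_trust(conditions: list[str]) -> dict[str, float]:
    trust = dict(BASE_TRUST)
    for cond in conditions:
        for sensor, factor in PENALTIES.get(cond, {}).items():
            trust[sensor] *= factor
    return trust

# In rain inside an urban canyon, LiDAR, camera, and GNSS are all discounted,
# while radar keeps its baseline trust.
print(effective_trust(["rain", "urban_canyon"]))
```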
Could Sensor Fusion be the Answer?
To address these collective issues, the industry uses sensor fusion, which combines data from different types of sensors to create a more reliable model of the environment.
The most widely used combinations in autonomous systems are camera-radar (CR), camera-LiDAR-radar (CLR), and camera-LiDAR (CL). The CR combination delivers high-resolution visual data alongside accurate velocity measurements, while CLR adds precise 3D distance mapping through LiDAR point clouds.1,3,4
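As a rough sketch of how object-level camera-radar (CR) fusion can work in principle, the snippet below pairs each camera detection with the radar return closest in bearing, so the fused track carries the camera's class label and the radar's range and velocity. The data and field names are invented for illustration; real pipelines associate detections far more carefully.

```python
# Hypothetical object-level (late) camera-radar fusion.
camera_detections = [  # bearing in degrees from the vehicle's heading
    {"label": "car",        "bearing_deg": -4.8},
    {"label": "pedestrian", "bearing_deg": 12.3},
]
radar_returns = [       # range in metres, radial velocity in m/s
    {"bearing_deg": -5.1, "range_m": 62.0, "velocity_mps": -3.2},
    {"bearing_deg": 11.9, "range_m": 18.5, "velocity_mps":  0.4},
]

def fuse(cams, radars, max_gap_deg=2.0):
    fused = []
    for det in cams:
        # Associate by nearest bearing; reject pairs that are too far apart.
        best = min(radars, key=lambda r: abs(r["bearing_deg"] - det["bearing_deg"]))
        if abs(best["bearing_deg"] - det["bearing_deg"]) <= max_gap_deg:
            fused.append({**det,
                          "range_m": best["range_m"],
                          "velocity_mps": best["velocity_mps"]})
    return fused

for track in fuse(camera_detections, radar_returns):
    print(track)  # camera class label plus radar range and velocity
```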
But fusion comes with its own challenges. Different sensors operate at different update rates, causing timing issues: if a LiDAR unit scans at one frequency while a camera captures images at another, misaligned measurements produce position errors, especially in dynamic scenes. Over time, the clocks in different sensors can also drift, worsening these problems.
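The toy example below shows the timing problem in isolation: a 10 Hz LiDAR stream is paired with the nearest frames from a roughly 30 Hz camera purely by timestamp. The residual offset, which grows as the two clocks drift, translates directly into position error for fast-moving objects. The timestamps are synthetic.

```python
# Synthetic timestamps: a 10 Hz LiDAR stream and a ~30 Hz camera stream.
lidar_stamps  = [i * 0.100  for i in range(5)]    # 0.0, 0.1, ... 0.4 s
camera_stamps = [i * 0.0333 for i in range(15)]   # ~30 Hz over the same window

for t_lidar in lidar_stamps:
    # Pair each LiDAR sweep with the camera frame closest in time.
    t_cam = min(camera_stamps, key=lambda t: abs(t - t_lidar))
    offset_ms = abs(t_cam - t_lidar) * 1000
    # At 20 m/s relative speed, a 15 ms offset is already a 0.3 m position error.
    print(f"lidar @ {t_lidar:.3f} s -> camera @ {t_cam:.4f} s (offset {offset_ms:.1f} ms)")
```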
Mid-level feature fusion is efficient but insufficient for achieving higher levels of autonomy because it loses important context needed to understand complex scenes. While deep learning fusion methods can improve performance in busy urban areas, they require large training datasets and can be hard to test in unusual or extreme situations.1,3,4
Redundancy for Safety
Given the well-documented failure modes of both individual sensors and fusion pipelines, leading autonomous programs now treat hardware and software redundancy as a fundamental safety requirement. BMW’s autonomous driving system, for example, uses three separate Automated Driving (AD) channels that run in parallel. These channels share no hardware, software, or sensors.1
Research simulating three AD channels in parallel found that no single channel performed best in every scenario. Each failed under conditions where the others succeeded, while the combined multichannel system outperformed any one channel alone.
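The simplest way to exploit that kind of diversity is arbitration between channels. The 2-out-of-3 majority vote sketched below is a generic illustration of the idea, not a description of BMW's actual architecture.

```python
from collections import Counter

# Generic 2-out-of-3 arbitration between independent driving channels
# (illustrative only; not any specific manufacturer's design).
def arbitrate(channel_outputs: list[str]) -> str:
    votes = Counter(channel_outputs)
    decision, count = votes.most_common(1)[0]
    if count >= 2:                       # at least two channels agree
        return decision
    return "minimal_risk_maneuver"       # no majority: fall back to a safe stop

print(arbitrate(["proceed", "proceed", "brake"]))   # -> proceed
print(arbitrate(["proceed", "brake", "swerve"]))    # -> minimal_risk_maneuver
```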
The major takeaway from these findings is that readiness for full autonomy cannot be determined by a single sensor or a single fusion pipeline. What matters is the performance of the perception system as a whole.
Multiple channels, sensor diversity, and rigorous calibration all contribute to maintaining safe operation across varied conditions. Redundancy, then, is not just a safeguard but a core design principle for systems aiming at Level 5 autonomy, in which vehicles operate without human intervention.1
Emerging Technologies and Persistent Gaps
Several research projects are working to narrow this gap in performance.
The move from 905 nm LiDAR to the 1550 nm wavelength band has increased the reliable detection range from roughly 100 meters to 300 meters, while also easing the eye-safety power limits that constrained earlier systems.
High-resolution 4D imaging radar is also improving object classification, making radar more useful in poor weather. Event cameras add another advantage: instead of capturing full frames at fixed intervals, they record pixel-level brightness changes asynchronously. That approach eliminates motion blur and delivers output above 1,000 frames per second, making these cameras especially effective in fast-moving, dynamic environments.1,2,5,6
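To make the contrast with frame-based cameras concrete, an event stream can be thought of as a list of (timestamp, pixel, polarity) tuples that downstream code slices at whatever time resolution it needs. The events below are synthetic and purely illustrative.

```python
# Event-camera output is an asynchronous stream of per-pixel brightness
# changes rather than full frames. Synthetic events, for illustration only:
# (timestamp_s, x, y, polarity)  where polarity +1 = brighter, -1 = darker.
events = [
    (0.0002, 41, 17, +1),
    (0.0005, 42, 17, +1),
    (0.0011, 42, 18, -1),
    (0.0014, 43, 18, +1),
]

def events_in_window(stream, t_start, t_end):
    """Slice the stream at arbitrary (sub-millisecond) time resolution."""
    return [e for e in stream if t_start <= e[0] < t_end]

# A 1 ms 'virtual frame' -- far finer than a 30 Hz camera's 33 ms frame interval.
print(events_in_window(events, 0.000, 0.001))
```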
Despite these improvements, current sensors still struggle with high false detection rates. The processing costs for managing dense multi-modal sensor streams in real time can limit large-scale use. Some manufacturers opt for vision-based systems that eliminate LiDAR to cut costs. However, these systems require advanced algorithms and extensive training data, and their accuracy decreases in challenging conditions like rain or backlighting.
Experts agree that no single sensor can guarantee full autonomy in all situations. However, when combined in well-designed fusion systems, they can achieve full autonomy in specific, well-mapped areas. The main challenge is still developing technology that performs reliably across all environments and situations.1,2,5,6
References and Further Reading
- Matos, F. et al. (2024). A Survey on Sensor Failures in Autonomous Vehicles: Challenges and Solutions. Sensors, 24(16), 5108. DOI:10.3390/s24165108. https://www.mdpi.com/1424-8220/24/16/5108
- Mohammad, S. et al. (2025). Perception Technologies for Autonomous Transportation: A Comparative Analysis of LiDAR, Radar, Camera, and Sonar. CRPASE: Transactions of Civil and Environmental Engineering, 11 (4). DOI:10.82042/crpase.11.4.2967. https://crpase.com/viewmore.php?pid=340
- Xie, W. et al. (2024). Timely Fusion of Surround Radar/Lidar for Object Detection in Autonomous Driving Systems. IEEE Xplore. DOI:10.1109/RTCSA62462.2024.00014. https://ieeexplore.ieee.org/document/10695646
- Wang, M. et al. (2025). Adaptive Fusion of LiDAR Features for 3D Object Detection in Autonomous Driving. Sensors, 25(13). DOI:10.3390/s25133865. https://www.mdpi.com/1424-8220/25/13/3865
- Liu, W. (2025). Review of Automotive Sensors Based on Autonomous Driving. Proceedings of the 2025 2nd International Conference on Electrical Engineering and Intelligent Control (EEIC 2025), Advances in Engineering Research 279. DOI:10.2991/978-94-6463-864-6_59. https://www.atlantis-press.com/proceedings/eeic-25/126016753
- Yang, C. (2024). New Trends in Sensors for Autonomous Driving Perception Systems. Omdia. https://omdia.tech.informa.com/blogs/2024/mar/new-trends-in-sensors-for-autonomous-driving-perception-systems