LiDAR sensors deliver direct, geometrically precise measurements of the 3D environment around a car in real time, providing range, shape, and motion cues that support object detection, planning, and control in diverse traffic and weather conditions.
Automotive LiDAR systems use pulsed or continuous laser light to measure how long it takes emitted photons to return from surrounding surfaces. The sensor records this round-trip timing and converts it to distance using:
d = c · Δt / 2, where c is the speed of light and Δt is the round-trip time of flight.
The sensor collects thousands to millions of these measurements per second and converts them into a dense point cloud that captures the 3D geometry of the roads, vehicles, pedestrians, and infrastructure surrounding the autonomous car.1,2
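As a rough illustration of this pipeline, the Python sketch below converts round-trip timings and beam angles into Cartesian points using the formula above. The function names, the simple spherical beam model, and the omission of intensity, multiple returns, and calibration are simplifying assumptions rather than any vendor's actual processing.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def tof_to_range(delta_t_s):
    """Convert round-trip time of flight (seconds) to one-way range (meters): d = c * Δt / 2."""
    return C * np.asarray(delta_t_s) / 2.0

def spherical_to_point_cloud(ranges_m, azimuth_rad, elevation_rad):
    """Convert per-return range and beam angles into Cartesian (x, y, z) points."""
    x = ranges_m * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = ranges_m * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = ranges_m * np.sin(elevation_rad)
    return np.stack([x, y, z], axis=-1)

# Example: a return arriving after 0.4 µs corresponds to roughly 60 m of range.
print(tof_to_range(0.4e-6))  # ~59.96 m
```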
A typical automotive LiDAR system for autonomous vehicles comprises several functional blocks that must operate in concert.
The laser transmitter generates eye-safe near-infrared beams, while receiver optics, photodetectors, and timing electronics capture extremely short return pulses and convert them into digital range measurements with centimeter-level precision.1
Beam steering is the second critical block and can rely on rotating mechanical assemblies, micro-electro-mechanical systems (MEMS) mirrors, optical phased arrays, or hybrid arrangements, each with distinct implications for the field of view, angular resolution, and reliability.
The system also integrates thermal management, embedded processing, and automotive-grade communication interfaces that feed perception and localization stacks at frame rates that often exceed 10 Hz in highway scenarios.1,2
LiDAR Sensor Types in Autonomous Vehicles
In autonomous vehicles, LiDAR is categorized into two types: mechanically spinning 360° units and compact solid-state devices like MEMS scanners, flash LiDAR, and frequency-modulated continuous-wave (FMCW) systems.
Spinning units were standard in early self-driving prototypes due to their wide field of view and thorough coverage. However, they also introduced packaging challenges and required more maintenance for production vehicles.1,2
A recent study published in Sensors evaluated several solid-state sensors, including Livox Horizon, Robosense M1, Velodyne Velarray H800, and Innoviz Pro, in both static and dynamic driving scenarios. The results show that performance differs significantly among these devices.
Key factors affecting performance include detection range, measurement accuracy, edge-response behavior, and sensitivity to the scan pattern, assessed during both standstill tests and motion at realistic speeds.2
The choice of sensor type for a given vehicle platform depends on field-of-view requirements, integration constraints in bumpers or windshields, and cost targets set by original equipment manufacturers. Design teams must also weigh the benefits of higher vertical resolution and point density against power consumption, thermal load, and the number of sensors needed to achieve near-complete environmental coverage at highway speeds.1,2,3
Perception, Fusion, and End-to-End Driving
LiDAR systems in autonomous vehicles support perception tasks that identify and monitor vehicles, pedestrians, cyclists, lane boundaries, and traffic signs.
Processing usually transforms raw point clouds into usable features using methods such as voxelization, pillar-based encodings, or range images. Deep neural networks then handle tasks such as object detection, semantic segmentation, and motion prediction.4,5,6
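As a concrete, minimal example of one of these encodings, the sketch below voxelizes a point cloud by quantizing coordinates into fixed-size cells. It is an illustrative simplification, not the preprocessing pipeline of any particular detector from the cited work.

```python
import numpy as np

def voxelize(points_xyz, voxel_size=0.2):
    """Assign each point to a voxel by quantizing its coordinates, then
    return the set of occupied voxels and the point count per voxel."""
    indices = np.floor(points_xyz / voxel_size).astype(np.int64)
    voxels, counts = np.unique(indices, axis=0, return_counts=True)
    return voxels, counts

# Example: three points, two of which fall into the same 0.2 m voxel.
pts = np.array([[1.01, 2.02, 0.10],
                [1.05, 2.08, 0.12],
                [5.00, 0.00, 1.00]])
print(voxelize(pts))
```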
Several studies have tested LiDAR-only and LiDAR-camera fusion strategies in practical driving contexts.
Experiments with LiDAR “images” derived from sensors such as the Ouster OS1-128 show that these depth-rich views can assist with road-following tasks on complex rural roads with performance comparable to camera-based models while improving robustness to illumination and weather variations.7,8
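One way to picture these LiDAR "images" is a spherical projection of the point cloud onto a 2D range image that standard convolutional networks can consume, as sketched below. The resolution and field-of-view parameters are illustrative assumptions, not the specification of the Ouster OS1-128 or any other sensor.

```python
import numpy as np

def to_range_image(points_xyz, h=64, w=1024, fov_up_deg=15.0, fov_down_deg=-25.0):
    """Project a point cloud onto an H x W range image via spherical coordinates."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    r = np.linalg.norm(points_xyz, axis=1)
    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-6))  # elevation
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = np.clip(((fov_up - pitch) / (fov_up - fov_down) * h).astype(int), 0, h - 1)
    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = r                               # keep the last return per pixel
    return img
```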
Other work explores sensor fusion architectures that combine LiDAR point clouds with stereo or monocular camera data to improve 3D object recognition. This is especially useful for detecting distant or partially obscured targets in low-light conditions. Tests on datasets like KITTI show that fusion can improve detection accuracy.
However, some studies suggest that camera-only models perform well only under certain adverse weather conditions, highlighting the need for application-specific fusion designs.6,7
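One common building block in such fusion pipelines, shown here as a generic sketch rather than the method of any cited study, is projecting LiDAR points into the camera image so that image-space detections can be associated with measured depth. The extrinsic transform and intrinsic matrix are assumed to come from prior calibration.

```python
import numpy as np

def project_points_to_image(points_lidar, T_cam_from_lidar, K):
    """Project 3D LiDAR points into camera pixel coordinates using a
    4x4 extrinsic transform and a 3x3 intrinsic matrix K."""
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])   # N x 4 homogeneous points
    cam = (T_cam_from_lidar @ homog.T).T[:, :3]           # points in the camera frame
    in_front = cam[:, 2] > 0                              # keep points ahead of the camera
    uvw = (K @ cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                         # perspective division to pixels
    return uv, in_front
```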
Performance Outside of R&D
LiDAR systems for autonomous vehicles are designed to operate effectively across a variety of lighting and weather conditions that may challenge other sensor types.
Unlike passive cameras, which can struggle in low-light or high-glare conditions, LiDAR uses active illumination, enabling consistent range measurements at night and under varying ambient lighting.5,9
Adverse weather can significantly degrade LiDAR performance: rain, snow, and fog scatter and absorb the laser light and can produce false returns. Research shows that heavy rain reduces the number of valid target points and alters intensity data, introducing noise that complicates the detection and tracking of objects.5,9
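A frequently used mitigation, sketched below as an assumption rather than the procedure used in the cited experiments, is to discard very low-intensity returns and statistically isolated points, which often correspond to rain or fog droplets.

```python
import numpy as np

def filter_weather_noise(points_xyz, intensity, min_intensity=5, k=8, std_ratio=2.0):
    """Drop very low-intensity returns, then remove statistical outliers whose
    mean distance to their k nearest neighbours is unusually large."""
    pts = points_xyz[intensity >= min_intensity]
    # Brute-force k-NN mean distances (fine for small clouds; use a KD-tree in practice).
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    inliers = knn_mean < knn_mean.mean() + std_ratio * knn_mean.std()
    return pts[inliers]
```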
Additionally, the material properties of objects influence LiDAR performance in autonomous vehicles. Experiments that vary target material, distance, and speed report that reflective surfaces, such as retroreflective films, preserve point counts across weather conditions.
In contrast, low-reflectivity objects produce fewer returns, making detection more difficult.5
Comparative assessments of cameras, radar, and LiDAR in adverse conditions reveal that each sensor type has unique failure modes and complementary advantages. Radar can penetrate fog and precipitation, but provides lower spatial resolution.
Cameras deliver rich texture but perform poorly under challenging illumination and weather conditions. LiDAR offers accurate geometry but is primarily affected by atmospheric scattering and optical contamination.9
Trends and Future Directions in Autonomous Driving
Recent research on LiDAR technology for autonomous driving has identified several active areas of investigation, spanning from hardware developments to perception algorithms.
On the hardware side, current trends include higher component integration, reduced sensor costs, improved scanning patterns for better coverage uniformity, and the exploration of FMCW LiDAR for estimating both range and radial velocity.1,4
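For intuition on the FMCW point, the idealized single-target sketch below shows how the up- and down-chirp beat frequencies of a triangular waveform separate into range and radial-velocity components. Parameter names and sign conventions are illustrative assumptions.

```python
C = 299_792_458.0  # speed of light in m/s

def fmcw_range_velocity(f_beat_up, f_beat_down, chirp_bandwidth, chirp_duration, wavelength):
    """Recover range and radial velocity from up- and down-chirp beat frequencies
    of a triangular FMCW waveform (idealized, single-target case)."""
    f_range = (f_beat_up + f_beat_down) / 2.0       # range-induced beat component
    f_doppler = (f_beat_down - f_beat_up) / 2.0     # Doppler-induced beat component
    distance = C * f_range * chirp_duration / (2.0 * chirp_bandwidth)
    radial_velocity = wavelength * f_doppler / 2.0  # sign convention assumed: positive = approaching
    return distance, radial_velocity
```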
In algorithmic research, studies focused on LiDAR-based place recognition aim to enable vehicles to identify familiar locations, aiding in mapping and maintaining accurate positioning despite changes in viewpoint or appearance.
Work in this area focuses on learning compact descriptors from point clouds and integrating them into complete systems that link perception with localization and control.4
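As a toy example of such a descriptor, loosely inspired by ring-based methods in this literature rather than taken from any cited paper, the sketch below compresses a scan into the maximum point height per concentric range ring and compares two scans with a simple Euclidean distance.

```python
import numpy as np

def ring_descriptor(points_xyz, num_rings=20, max_range=80.0):
    """Compress a scan into a compact, rotation-invariant descriptor:
    the maximum point height observed in each concentric range ring."""
    r = np.linalg.norm(points_xyz[:, :2], axis=1)
    ring = np.clip((r / max_range * num_rings).astype(int), 0, num_rings - 1)
    desc = np.full(num_rings, -np.inf)
    np.maximum.at(desc, ring, points_xyz[:, 2])
    return np.where(np.isfinite(desc), desc, 0.0)

def descriptor_distance(d1, d2):
    """Smaller distance suggests the two scans were captured near the same place."""
    return np.linalg.norm(d1 - d2)
```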
The broader research community is also studying how LiDAR systems in autonomous vehicles can perform well despite sensor interference, changes in traffic infrastructure, and mixed fleets of human-driven and automated vehicles.
Findings from multi-robot navigation emphasize the importance of reliable environmental perception, sensor calibration, and weather-adaptive algorithms to expand LiDAR use beyond controlled tests into busy urban areas.
Overall, LiDAR technology is evolving from bulky, experimental devices to integrated safety systems that help vehicles understand and navigate roads, with continuous improvements in design and algorithms.1,3,4
References and Further Reading
- Li, Y. et al. (2020). Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems. IEEE Signal Processing Magazine, 37(4), 50-61. DOI:10.1109/MSP.2020.2973615. https://ieeexplore.ieee.org/document/9127855
- Schulte-Tigges, J. et al. (2022). Benchmarking of Various LiDAR Sensors for Use in Self-Driving Vehicles in Real-World Environments. Sensors, 22(19). DOI:10.3390/s22197146. https://www.mdpi.com/1424-8220/22/19/7146
- Li, Y. (2025). LiDAR Technology and Its Development in Autonomous Driving. Journal of Engineering Research and Reports, 27(9), 207-217. DOI:10.9734/jerr/2025/v27i91635. https://www.journaljerr.com/index.php/JERR/article/view/1635
- Zhang, Y., Shi, P., & Li, J. (2024). LiDAR-Based Place Recognition For Autonomous Driving: A Survey. ACM Computing Surveys. DOI:10.1145/3707446. https://dl.acm.org/doi/10.1145/3707446
- Kim, J., Park, B. J., & Kim, J. (2023). Empirical Analysis of Autonomous Vehicle’s LiDAR Detection Performance Degradation for Actual Road Driving in Rain and Fog. Sensors, 23(6). DOI:10.3390/s23062972. https://www.mdpi.com/1424-8220/23/6/2972
- Khatab, E. et al. (2021). Vulnerable objects detection for autonomous driving: A review. Integration, 78, 36-48. DOI:10.1016/j.vlsi.2021.01.002. https://www.sciencedirect.com/science/article/abs/pii/S0167926021000055
- Liu, H., Wu, C., & Wang, H. (2023). Real time object detection using LiDAR and camera fusion for autonomous driving. Scientific Reports, 13(1), 8056. DOI:10.1038/s41598-023-35170-z. https://www.nature.com/articles/s41598-023-35170-z
- Tampuu, A. et al. (2023). LiDAR-as-Camera for End-to-End Driving. Sensors, 23(5). DOI:10.3390/s23052845. https://www.mdpi.com/1424-8220/23/5/2845
- Zhang, Y. et al. (2023). Perception and sensing for autonomous vehicles under adverse weather conditions: A survey. ISPRS Journal of Photogrammetry and Remote Sensing, 196, 146-177. DOI:10.1016/j.isprsjprs.2022.12.021. https://www.sciencedirect.com/science/article/pii/S0924271622003367