Researchers at Fuzhou University in China have developed a machine vision sensor that uses quantum dots to mimic key behaviors of the human eye, adjusting to drastic changes in light in about 40 seconds, far faster than the human eye can. Their findings, published in Applied Physics Letters by AIP Publishing, could improve autonomous vehicle safety and robotic vision.
A device fabricated from nanoscale light-sensitive materials, known as quantum dots, reacts to changes in light faster than the human eye and could improve autonomous vehicles. Image Credit: Lin et al.
Human eyes can adjust to severe lighting conditions, whether blindingly bright or pitch-black, within minutes. The human vision system, which includes the eyes, neurons, and brain, can also learn and remember settings, allowing individuals to adjust more quickly the next time they face comparable lighting conditions.
“Quantum dots are nano-sized semiconductors that efficiently convert light to electrical signals. Our innovation lies in engineering quantum dots to intentionally trap charges, like water held in a sponge, then release them when needed, similar to how eyes store light-sensitive pigments for dark conditions.”
Yun Ye, Study Corresponding Author, Fuzhou University
The sensor’s high adaptation speed is due to its distinctive design, which includes lead sulfide quantum dots embedded in polymer and zinc oxide layers. The device responds dynamically by trapping or releasing electric charges based on the illumination conditions, similar to how eyes store energy to adapt to darkness.
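The trap-and-release behavior described above can be pictured with a toy simulation. The sketch below is purely illustrative and is not the authors' device model: the trapped charge `q` drifts toward the ambient light level with an assumed time constant `tau` (a stand-in inspired by the roughly 40-second adaptation the team reports), and the output is the photoresponse relative to that adapted baseline, so a sudden brightness step produces a strong response that fades as the device adapts.

```python
def simulate_adaptation(light_levels, tau=40.0, dt=1.0):
    """Toy model of an adaptive sensor (illustrative only).

    light_levels -- sequence of ambient light intensities, one per time step
    tau          -- assumed adaptation time constant in seconds (hypothetical)
    dt           -- time step in seconds
    """
    q = 0.0          # trapped charge: the baseline the sensor has adapted to
    responses = []
    for light in light_levels:
        q += (light - q) * (dt / tau)   # traps fill or drain toward ambient
        responses.append(light - q)     # response relative to adapted baseline
    return responses

# A step from darkness (0.0) to bright light (1.0): the response spikes,
# then decays toward zero as the simulated device adapts.
resp = simulate_adaptation([0.0] * 10 + [1.0] * 200)
```

Running this, the response immediately after the light step is near 1.0 and has decayed to nearly zero by the end of the bright interval, mirroring the adaptation behavior described in the article.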
The layered design, along with specific electrodes, proved extremely successful in recreating human vision and optimizing light responses for peak performance.
“The combination of quantum dots, which are light-sensitive nanomaterials, and bio-inspired device structures allowed us to bridge neuroscience and engineering,” added Ye.
Their device design responds dynamically to both bright and dim lighting. It also surpasses existing machine vision systems by reducing the vast quantity of redundant data those systems produce.
Ye added, “Conventional systems process visual data indiscriminately, including irrelevant details, which wastes power and slows computation. Our sensor filters data at the source, similar to the way our eyes focus on key objects, and our device preprocesses light information to reduce the computational burden, just like the human retina.”
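The idea of filtering visual data at the source can be sketched in a few lines. The example below is a simplified illustration, not the paper's circuit: as in an event camera or the retina, only pixels whose brightness has changed noticeably since the previous frame are transmitted, and unchanged (redundant) pixels are dropped before any downstream computation.

```python
def filter_frame(prev, curr, threshold=0.1):
    """Return (pixel_index, new_value) pairs for pixels that changed
    by more than `threshold` between two frames (illustrative sketch)."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr))
            if abs(c - p) > threshold]

prev_frame = [0.5, 0.5, 0.5, 0.5]
curr_frame = [0.5, 0.9, 0.5, 0.5]   # only one pixel changed

events = filter_frame(prev_frame, curr_frame)
# Only the changed pixel is passed on; the other three are discarded,
# cutting the data volume before any processing happens.
```

Here a four-pixel frame is reduced to a single event, which is the kind of source-side data reduction Ye describes.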
The research group intends to improve the device by pairing it with larger sensor arrays and edge-AI chips, which perform AI data processing directly on the sensor, and by integrating it with other smart devices in smart cars for broader applicability in autonomous driving.
Ye concluded, “Immediate uses for our device are in autonomous vehicles and robots operating in changing light conditions like going from tunnels to sunlight, but it could potentially inspire future low-power vision systems. Its core value is enabling machines to see reliably where current vision sensors fail.”
Journal Reference:
Lin, X., et al. (2025) A back-to-back structured bionic visual sensor for adaptive perception. Applied Physics Letters. https://doi.org/10.1063/5.0268992.