
Machine-Learning Technique Shown to Reduce Sensor Drift in Electronic Noses

A machine-learning approach known as knowledge distillation can significantly reduce sensor drift in electronic noses, researchers say, improving the reliability of gas-sensing systems used in industry, healthcare, and environmental monitoring.

Image: a set of sensors for methane, carbon monoxide, and CO2, alongside ultrasonic and infrared position sensors. Study: Sensor-Drift Compensation in Electronic-Nose-Based Gas Recognition Using Knowledge Distillation. Image Credit: Paolo De Gasperis/Shutterstock.com

In a study published in the journal Informatics, the team reports that the method consistently outperformed a widely used benchmark technique when tested under long-term, real-world-simulated conditions using a public dataset.


Electronic noses, or e-noses, use arrays of chemical sensors and pattern-recognition algorithms to identify gases and volatile organic compounds. They are widely used for tasks such as air-quality monitoring, food quality assessment, and medical breath analysis.

But their performance tends to degrade over time. Changes in sensor materials, contamination, and environmental factors such as temperature and humidity can cause sensor drift, shifting the statistical properties of the data the system relies on.

As a result, models trained on early measurements often struggle to classify gases accurately months or years later.

The problem is particularly difficult because drift in multi-sensor arrays is often non-linear and evolves gradually, making it hard to correct using simple recalibration methods.

Testing Drift Over Three Years

To study the problem, the researchers used the UCI Gas Sensor Array Drift Dataset, a standard benchmark collected over 36 months.

The dataset contains measurements from a 16-sensor array exposed to six gases (ammonia, acetaldehyde, acetone, ethylene, ethanol, and toluene), recorded in ten sequential batches.

The team designed two testing scenarios intended to reflect real deployment.

In one, models were trained on the first batch only and then tested on all later batches, mimicking a system calibrated once and deployed long-term. In the other, the training data was updated continuously using all previously collected batches to predict the next one.
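The two protocols can be sketched in Python. This is an illustrative outline, not the authors' code: `batches` (a list of `(X, y)` pairs, one per collection batch) and `make_model` (a callable returning a fresh scikit-learn-style classifier) are hypothetical names.

```python
import numpy as np

def fixed_calibration(batches, make_model):
    """Scenario 1: train once on the first batch, test on every later batch."""
    X0, y0 = batches[0]
    model = make_model().fit(X0, y0)
    return [model.score(X, y) for X, y in batches[1:]]

def cumulative_calibration(batches, make_model):
    """Scenario 2: retrain on all batches seen so far, then test on the next one."""
    scores = []
    for k in range(1, len(batches)):
        X_tr = np.vstack([X for X, _ in batches[:k]])
        y_tr = np.concatenate([y for _, y in batches[:k]])
        model = make_model().fit(X_tr, y_tr)
        scores.append(model.score(*batches[k]))
    return scores
```

Under drift, `fixed_calibration` scores typically decay across later batches, which is the degradation the study sets out to compensate.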

Teaching One Model to Learn from Another

The study compared three approaches. The first was knowledge distillation, a semi-supervised method in which a “teacher” model trained on labelled data produces probability-based outputs, known as soft targets.

A second “student” model is then trained to match these outputs using both labelled source data and unlabelled drifted data.
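The teacher-student mechanism can be sketched with NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: the linear student, the temperature value, and the plain gradient-descent loop are all illustrative choices.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_student(X, teacher_logits, n_classes, T=2.0, lr=0.5, steps=500):
    """Fit a linear student to the teacher's soft targets by gradient
    descent on the softened cross-entropy (distillation) loss."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    b = np.zeros(n_classes)
    p = softmax(teacher_logits, T)      # soft targets from the teacher
    for _ in range(steps):
        q = softmax(X @ W + b, T)
        grad = (q - p) / len(X)         # loss gradient w.r.t. logits (up to a factor of T)
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b
```

In the semi-supervised setting described above, `X` would mix labelled source samples with unlabelled drifted samples, for which only the teacher's soft targets are available as supervision.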

The researchers compared this approach with Domain-Regularized Component Analysis (DRCA), a commonly used technique that attempts to align source and target data by projecting them into a shared feature space. They also tested an exploratory hybrid method combining DRCA with knowledge distillation.

Results Backed by Statistical Testing

Unlike many earlier studies, the experiments were repeated 30 times using different random data splits. Performance was assessed using accuracy, precision, recall, and F1-score, and statistical tests were applied to determine whether observed improvements were significant.
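A paired comparison over repeated runs can be sketched as follows. The Wilcoxon signed-rank test shown here is a common choice for paired, non-normal metric comparisons; this summary does not specify which test the authors used, so treat it as an illustrative assumption.

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_methods(scores_a, scores_b, alpha=0.05):
    """Paired significance test over repeated runs of two methods.

    scores_a, scores_b: per-run metric values (e.g. accuracy or F1)
    obtained on the same random splits. Returns the p-value and
    whether the difference is significant at level alpha."""
    stat, p = wilcoxon(scores_a, scores_b)
    return p, p < alpha
```

Pairing the runs by split matters: it removes split-to-split variance from the comparison, so smaller but consistent gains reach significance.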

Across both testing scenarios, knowledge distillation delivered the most consistent gains. The method achieved relative improvements of up to 18% in accuracy and 15% in F1-score compared with a baseline model.

On test data, it produced statistically significant improvements in 15 of 36 task-specific comparisons, and in 24 of 72 comparisons when all tasks and metrics were considered.

DRCA showed significant gains in far fewer cases, while the hybrid approach did not outperform knowledge distillation on its own.

Why This Learning Approach Works Well

According to the researchers, the advantage of knowledge distillation lies in its use of soft targets, which encode uncertainty and relationships between gas classes rather than forcing hard decisions.

This acts as a form of regularisation, helping the model generalise better as sensor behaviour changes over time.

By contrast, DRCA relies on a linear transformation of the data. While this can reduce differences between domains, it may also remove information that is important for distinguishing between gases when drift is complex or uneven.
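DRCA itself is not reproduced here, but a simpler linear alignment in the same spirit (a CORAL-style transform that matches the drifted batch's mean and covariance to the source batch's) illustrates both the idea and its limitation: the correction is global and linear, so class-discriminative structure can be distorted when drift is non-linear or uneven.

```python
import numpy as np

def linear_align(Xs, Xt, eps=1e-6):
    """Map target-domain samples Xt onto the source domain Xs by
    whitening Xt and re-colouring it with the source mean and
    covariance (a CORAL-style linear alignment, not DRCA itself)."""
    def msqrt(C, inv=False):
        # symmetric matrix square root (or inverse square root) via eigh
        w, V = np.linalg.eigh(C)
        w = np.clip(w, eps, None)
        return (V * w ** (-0.5 if inv else 0.5)) @ V.T

    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    Xt_c = Xt - Xt.mean(axis=0)
    return Xt_c @ msqrt(Ct, inv=True) @ msqrt(Cs) + Xs.mean(axis=0)
```

After this transform the first two moments of the drifted batch match the source batch, but any within-class structure that the linear map cannot express is left uncorrected.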

Looking Ahead at e-Nose Sensing

The authors describe the study as the first statistically rigorous demonstration of semi-supervised knowledge distillation for sensor-drift compensation on a widely used electronic-nose benchmark.

They caution, however, that the work focuses on a single dataset dominated by long-term temporal drift. Future studies will need to test whether the approach generalises to other sensor systems and to drift caused by changing environmental or operational conditions.

For now, the results suggest that model-level adaptation techniques could play an important role in extending the working life of electronic-nose systems used outside the laboratory.

Journal Reference

Lin, J., & Zhan, X. (2026). Sensor-Drift Compensation in Electronic-Nose-Based Gas Recognition Using Knowledge Distillation. Informatics, 13(1), 15. DOI: 10.3390/informatics13010015

Written by

Dr. Noopur Jain

Dr. Noopur Jain is an accomplished Scientific Writer based in the city of New Delhi, India. With a Ph.D. in Materials Science, she brings a depth of knowledge and experience in electron microscopy, catalysis, and soft materials. Her scientific publishing record is a testament to her dedication and expertise in the field. Additionally, she has hands-on experience in the field of chemical formulations, microscopy technique development and statistical analysis.    

