
Smarter Neural Network Model for More Efficient Wearable Devices

Researchers have developed a lightweight U-Net model using depthwise separable convolutions. This model enables real-time, high-accuracy human activity recognition on wearable devices with minimal computational load.

Image: A runner checks a smartwatch while exercising on a treadmill. Image Credit: pics five/Shutterstock.com

A recent study published in Applied Sciences introduces a lightweight U-Net model that can be used for human activity recognition (HAR).

A U-Net is a deep neural network designed for dense prediction, that is, for assigning a label to every element in a sequence or image. The newly developed model does this without overwhelming limited hardware resources, an essential requirement for real-world applications.

Raw Signals to Smart Insights

HAR systems rely on sensor-equipped devices to track and interpret physical activities through multi-dimensional time-series data. Benchmark datasets such as MHEALTH, PAMAP2, and WISDM are frequently used to train and evaluate these systems, each offering a different mix of devices, sensor placements, and sampling rates.

Traditional models required handcrafted features and struggled to capture subtle or rapidly changing activities. Deep learning shifted the landscape, with convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) offering better performance by learning features directly from raw signals.

One standout architecture is the U-Net. Originally created for medical image segmentation, it was adapted for HAR to assign an activity label to each time step in a sequence. Its skip connections help recover information lost during downsampling, which makes it particularly effective at identifying brief or overlapping motions.

This performance comes with a caveat, however. Standard U-Nets are computationally intensive and can overload devices, limiting their use in wearables such as smartwatches and fitness trackers.


Rethinking U-Net for Wearables

To overcome this computational overload, the researchers restructured the U-Net by replacing standard convolutions with depthwise separable convolutions, a method that significantly reduces the number of parameters.

This technique breaks the convolution operation into two parts: a depthwise pass that processes each input channel separately, followed by a pointwise pass that combines them. The result is a leaner model that uses far less memory and processing power, while still capturing the complex temporal patterns required for accurate HAR.
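To make the idea concrete, a depthwise separable block for 1D sensor data can be sketched in a few lines of PyTorch. The layer sizes and input shapes below are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch of a depthwise separable 1D convolution block
# (hypothetical layer sizes; not the paper's exact configuration).
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        # Depthwise pass: one filter per input channel (groups=in_channels).
        self.depthwise = nn.Conv1d(
            in_channels, in_channels, kernel_size,
            padding=kernel_size // 2, groups=in_channels
        )
        # Pointwise pass: a 1x1 convolution that mixes the channels.
        self.pointwise = nn.Conv1d(in_channels, out_channels, kernel_size=1)
        self.norm = nn.BatchNorm1d(out_channels)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, sensor_channels, time)
        return self.act(self.norm(self.pointwise(self.depthwise(x))))

# Example: 9 sensor channels (e.g., 3-axis accelerometer, gyroscope, magnetometer)
block = DepthwiseSeparableConv1d(in_channels=9, out_channels=64)
windows = torch.randn(8, 9, 128)   # batch of 8 windows, 128 time steps each
print(block(windows).shape)        # torch.Size([8, 64, 128])
```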

Their model is designed to handle raw inputs from sensors like accelerometers, gyroscopes, magnetometers, and ECGs, with a preprocessing pipeline that ensures generalizability through subject-independent data splits.

Signals are normalized and divided into overlapping time windows, then fed into the network for dense prediction, assigning activity labels to each time step.
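A minimal sketch of this kind of windowing step is shown below; the window length, overlap, and normalization choices are assumptions for illustration rather than the paper's exact settings.

```python
# Minimal sketch of the preprocessing described above: per-channel
# normalization followed by overlapping sliding windows.
import numpy as np

def make_windows(signal: np.ndarray, labels: np.ndarray,
                 window: int = 128, step: int = 64):
    """signal: (time, channels); labels: (time,) activity ID per time step."""
    # Z-score normalization per sensor channel.
    signal = (signal - signal.mean(axis=0)) / (signal.std(axis=0) + 1e-8)

    x_windows, y_windows = [], []
    for start in range(0, len(signal) - window + 1, step):
        x_windows.append(signal[start:start + window])
        # Dense prediction: keep a label for every time step, not one per window.
        y_windows.append(labels[start:start + window])
    return np.stack(x_windows), np.stack(y_windows)

# Example with synthetic data: 10,000 samples of 9-channel sensor data.
x, y = make_windows(np.random.randn(10_000, 9),
                    np.random.randint(0, 12, size=10_000))
print(x.shape, y.shape)  # (155, 128, 9) (155, 128)
```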

The model's ability to retain high temporal resolution allows it to recognize not only steady-state activities but also the transitions and brief actions that traditional models often miss, which is why it performs so well despite its compact size.

Lighter and Faster, but Just as Accurate

Performance testing across multiple benchmark datasets showed strong results. On the MHEALTH dataset, the new model achieved 92.59% accuracy, surpassing older architectures like FCN and earlier U-Net variants by over 7%. F1-scores also showed balanced performance, even in datasets with uneven class distributions.

The redesign reduced model parameters by over 50% without sacrificing accuracy. This means the model can run efficiently on devices with limited processing power, such as smartphones or wearables.
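A back-of-the-envelope comparison shows where these savings come from. For a single convolutional layer with an illustrative kernel size of 3 and 64 input and output channels (assumed values, not figures from the paper), the depthwise separable version needs roughly a third of the parameters:

```python
# Parameter count for one layer: standard vs. depthwise separable convolution
# (biases ignored; sizes are illustrative assumptions).
kernel, c_in, c_out = 3, 64, 64

standard = kernel * c_in * c_out            # standard 1D convolution
separable = kernel * c_in + c_in * c_out    # depthwise pass + pointwise pass

print(standard, separable, 1 - separable / standard)
# 12288 4288 ~0.65 -> roughly 65% fewer parameters for this layer
```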

The network’s dense prediction capabilities proved especially useful in identifying short-duration or transitional activities, scenarios where many HAR models struggle. Ablation studies found that a four-layer depth strikes the best balance between accuracy and efficiency.
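Putting the pieces together, a compact 1D U-Net with depthwise separable blocks and a four-level encoder/decoder might look like the sketch below. The channel widths, window length, and class count are assumptions for illustration, not the published architecture.

```python
# A minimal 1D U-Net sketch for dense activity labelling, built from
# depthwise separable blocks with a four-level encoder/decoder.
# All sizes are illustrative, not the authors' exact configuration.
import torch
import torch.nn as nn

def ds_conv(c_in: int, c_out: int, k: int = 3) -> nn.Sequential:
    """Depthwise separable convolution: per-channel filter, then 1x1 channel mix."""
    return nn.Sequential(
        nn.Conv1d(c_in, c_in, k, padding=k // 2, groups=c_in),
        nn.Conv1d(c_in, c_out, 1),
        nn.BatchNorm1d(c_out),
        nn.ReLU(),
    )

class LiteUNet1d(nn.Module):
    def __init__(self, in_channels: int = 9, num_classes: int = 12,
                 widths=(32, 64, 128, 256)):
        super().__init__()
        self.encoders = nn.ModuleList()
        c = in_channels
        for w in widths:                           # four encoder levels
            self.encoders.append(ds_conv(c, w))
            c = w
        self.pool = nn.MaxPool1d(2)
        self.bottleneck = ds_conv(widths[-1], widths[-1])
        self.ups = nn.ModuleList()
        self.decoders = nn.ModuleList()
        for w in reversed(widths):                 # four decoder levels
            self.ups.append(nn.ConvTranspose1d(c, w, kernel_size=2, stride=2))
            self.decoders.append(ds_conv(w * 2, w))  # *2 from the skip connection
            c = w
        self.head = nn.Conv1d(c, num_classes, 1)   # per-time-step class scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)                        # keep features for skip connections
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))   # recover detail lost in pooling
        return self.head(x)                        # (batch, num_classes, time)

model = LiteUNet1d()
out = model(torch.randn(4, 9, 128))               # 128-step windows, 9 sensor channels
print(out.shape)                                  # torch.Size([4, 12, 128])
print(sum(p.numel() for p in model.parameters()), "parameters")
```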

Smarter HAR for Everyday Tech

This study shows how targeted architectural changes, in this case swapping in depthwise separable convolutions, can make deep learning models like U-Net viable outside the lab. It is a practical advancement that supports more nuanced and accurate motion tracking, whether for health monitoring, fitness tracking, or everyday context-aware computing.

As wearable technologies become more integrated into daily life, efficient HAR models like this could be key for delivering responsive, low-power applications that don’t compromise precision.

Journal Reference

Lee, Y.-K., et al. (2025). Depthwise-Separable U-Net for Wearable Sensor-Based Human Activity Recognition. Applied Sciences, 15(16), 9134. https://doi.org/10.3390/app15169134


Written by

Dr. Noopur Jain

Dr. Noopur Jain is an accomplished Scientific Writer based in the city of New Delhi, India. With a Ph.D. in Materials Science, she brings a depth of knowledge and experience in electron microscopy, catalysis, and soft materials. Her scientific publishing record is a testament to her dedication and expertise in the field. Additionally, she has hands-on experience in the field of chemical formulations, microscopy technique development and statistical analysis.    

