
Self-Supervised Learning on Wearable Sensor Data

In a recent article published in the journal npj Digital Medicine, researchers showcased the application of self-supervised learning for human activity recognition using a substantial dataset of wearable data.

Advancing Human Activity Recognition through Self-Supervised Learning
Figure: Overview of the proposed self-supervised learning pipeline. Step 1: multi-task self-supervised learning on 700,000 person-days of data from the UK Biobank. Step 2: the pre-trained network is evaluated on eight benchmark human activity recognition datasets via transfer learning.

This study highlighted the importance of accurately monitoring physical activity to understand its impact on health and well-being. The researchers tackled challenges common to real-world applications of pre-trained models, such as domain and task shifts, through a thorough assessment of self-supervised human activity recognition in practical settings. They also highlighted the public release of their pre-trained models to support the digital health research community, particularly in fields constrained by data scarcity.


This study introduces multi-task self-supervised learning, employed to pre-train a deep convolutional neural network on a vast dataset of free-living accelerometer data sourced from the UK Biobank. The current challenges in human activity recognition stem from the scarcity of large, annotated datasets.

Traditionally, methodologies in this field have been hampered by the use of small, artificial datasets gathered under controlled lab conditions, severely limiting the models' ability to generalize. The difficulties associated with training on extensive and high-dimensional sensor data underscore the necessity for advanced techniques to fully exploit the potential of deep learning in constructing groundbreaking activity recognition models.

Moreover, having employed multi-task self-supervision on this substantial dataset, the researchers needed to demonstrate that the pre-trained models generalize across various downstream activity recognition datasets, a capability that is pivotal for clinical and health applications. The adoption of wearable sensor data and self-supervised learning methods represents a significant shift towards more robust and scalable approaches in human activity recognition research.

The Current Study

In this study, researchers used weighted single-task training to tackle convergence issues within individual pretext tasks, observing notable performance enhancements through weighted sampling. Subsequently, they assessed various self-supervision configurations across three downstream datasets of differing sizes, showcasing the efficacy of multi-task self-supervised learning. The pre-trained network then underwent fine-tuning on human activity recognition benchmarks to evaluate representation quality across different activity types and demographics.

The study employed a rigorous methodology to leverage self-supervised learning for human activity recognition using a large-scale dataset of wearable accelerometer data. This dataset comprised accelerometer data from wrist-worn activity trackers, capturing acceleration across three axes at a high sampling rate of 100 Hz. The signals were segmented into windows of equal duration and labeled with activity classes for supervised learning.
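The windowing step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the 10-second window length and non-overlapping segmentation are assumptions for the example; the paper's exact window duration may differ.

```python
import numpy as np

def segment_windows(signal, fs=100, window_sec=10):
    """Split a (n_samples, 3) tri-axial accelerometer trace into
    fixed-length, non-overlapping windows of shape (window, 3)."""
    win = fs * window_sec
    n = (len(signal) // win) * win      # drop the trailing partial window
    return signal[:n].reshape(-1, win, 3)

# One minute of synthetic 100 Hz tri-axial data -> six 10 s windows
trace = np.random.randn(60 * 100, 3)
windows = segment_windows(trace)
print(windows.shape)  # (6, 1000, 3)
```

Each resulting window is then either paired with an activity label (for supervised fine-tuning) or left unlabeled (for self-supervised pre-training).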

At the heart of the methodology was multi-task self-supervised learning on the unlabeled UK Biobank dataset, encompassing roughly 700,000 person-days of free-living activity data. A deep convolutional neural network was pre-trained on this expansive dataset, with weighted single-task training employed to optimize convergence behavior.
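Weighted single-task training can be illustrated as follows: at each optimization step, one pretext task is drawn according to fixed weights and only that task's objective is computed. The two toy pretext tasks below (time reversal and chunk permutation) and the 0.7/0.3 weights are illustrative assumptions, not the paper's exact task set or weighting.

```python
import numpy as np

rng = np.random.default_rng(42)

def reverse_task(window):
    """Pretext task: label 1 if the window was time-reversed."""
    flipped = rng.random() < 0.5
    return (window[::-1] if flipped else window), int(flipped)

def shuffle_task(window, n_chunks=4):
    """Pretext task: label 1 if the window's chunks were shuffled."""
    shuffled = rng.random() < 0.5
    if shuffled:
        chunks = np.array_split(window, n_chunks)
        rng.shuffle(chunks)
        window = np.concatenate(chunks)
    return window, int(shuffled)

tasks = [reverse_task, shuffle_task]
weights = np.array([0.7, 0.3])  # hypothetical per-task sampling weights

def sample_step(window):
    """Draw one pretext task per optimization step, weighted."""
    idx = rng.choice(len(tasks), p=weights)
    return tasks[idx](window)

window = rng.standard_normal((1000, 3))   # one 10 s window at 100 Hz
augmented, label = sample_step(window)
```

The network is then trained to predict each binary label from the (possibly transformed) window, learning motion features without any human annotation.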

The models were fine-tuned on eight benchmark datasets to evaluate representation quality and generalizability. The release of pre-trained models aimed to facilitate knowledge sharing and collaboration within the research community, enabling the development of high-performing activity recognition models for diverse applications.
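The transfer-learning step can be sketched with a linear probe: a frozen feature extractor stands in for the pre-trained network, and only a lightweight classifier is fit on the labeled downstream data. The summary-statistic "encoder" and ridge-regression probe below are deliberate simplifications of the paper's deep-network fine-tuning, used only to show the pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrained_encoder(windows):
    """Stand-in for a frozen pre-trained CNN: summary statistics
    (mean/std/min/max per axis) as a fixed feature extractor."""
    return np.concatenate([windows.mean(1), windows.std(1),
                           windows.min(1), windows.max(1)], axis=1)

def fit_linear_probe(features, labels, n_classes, l2=1e-3):
    """Ridge regression on one-hot labels: a cheap proxy for
    training only the classification head."""
    X = np.hstack([features, np.ones((len(features), 1))])  # bias column
    Y = np.eye(n_classes)[labels]
    return np.linalg.solve(X.T @ X + l2 * np.eye(X.shape[1]), X.T @ Y)

def predict(W, features):
    X = np.hstack([features, np.ones((len(features), 1))])
    return (X @ W).argmax(1)

# Toy downstream dataset: 200 windows, 2 hypothetical activity classes
windows = rng.standard_normal((200, 1000, 3))
labels = rng.integers(0, 2, 200)
W = fit_linear_probe(pretrained_encoder(windows), labels, n_classes=2)
preds = predict(W, pretrained_encoder(windows))
```

In the full approach, the pre-trained convolutional layers themselves would also be updated during fine-tuning rather than kept frozen.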

Results and Discussion

The study revealed significant advancements in human activity recognition through the application of self-supervised learning on a large-scale wearable accelerometer dataset. The pre-trained models consistently outperformed baselines on external datasets, showcasing their robustness and generalizability across diverse populations and activity types.

Weighted single-task training proved crucial in enhancing convergence behavior, with notable performance improvements observed across pretext tasks. The evaluation of benchmark datasets highlighted the effectiveness of multi-task self-supervised learning, with the pre-trained models demonstrating superior representation quality.

The study's findings underscore the importance of explainable AI models in capturing meaningful features for activity recognition, enabling the identification of relevant patterns in raw data. By releasing pre-trained models to the research community, the study aims to empower digital health researchers to develop high-performing models in domains with limited labeled data.

Overall, the authors emphasize the transformative potential of self-supervised learning in advancing human activity recognition research and its implications for personalized healthcare and population health studies.


In conclusion, the authors emphasized the practical utility of self-supervised learning for human activity recognition, especially in scenarios with limited labeled data. The release of pre-trained models to the research community aims to facilitate the development of high-performing models tailored to specific use cases.

The study's contributions in leveraging tera-scale wearables data and addressing technical challenges in training on large datasets underscore the potential of self-supervised learning in advancing activity recognition models.

Journal Reference

Yuan, H., Chan, S., Creagh, A. P., et al. Self-supervised learning for human activity recognition using 700,000 person-days of wearable data. npj Digital Medicine 7, 91 (2024).

Written by

Dr. Noopur Jain

Dr. Noopur Jain is an accomplished Scientific Writer based in the city of New Delhi, India. With a Ph.D. in Materials Science, she brings a depth of knowledge and experience in electron microscopy, catalysis, and soft materials. Her scientific publishing record is a testament to her dedication and expertise in the field. Additionally, she has hands-on experience in the field of chemical formulations, microscopy technique development and statistical analysis.    


Please use the following format to cite this article in your essay, paper or report:

Jain, Noopur. (2024, May 01). Self-Supervised Learning on Wearable Sensor Data. AZoSensors.
