
    A machine vision approach to human activity recognition using photoplethysmograph sensor data

    Human activity recognition (HAR) is an active area of research concerned with the classification of human motion. Cameras are the gold standard in this area, but they have well-documented scalability and privacy issues. HAR studies have also been conducted with wearable devices consisting of inertial sensors. Perhaps the most common wearable, the smart watch, comprising inertial and optical sensors, allows for scalable, non-obtrusive studies. We seek to simplify this wearable approach further by determining whether wrist-mounted optical sensing, usually used for heart rate determination, can also provide useful data for relevant activity recognition. If successful, this could eliminate the need for the inertial sensor and thereby simplify the technological requirements of wearable HAR. We adopt a machine vision approach to activity recognition based on plots of the optical signals, so as to produce classifications that are easily explainable and interpretable by non-technical users. Specifically, time-series images of photoplethysmography signals are used to retrain the penultimate layer of a pretrained convolutional neural network, leveraging the concept of transfer learning. Our results demonstrate an average accuracy of 75.8%. This illustrates the feasibility of implementing an optical-sensor-only solution for a coarse activity and heart rate monitoring system. Using only an optical sensor in the design of these wearables leads to a trade-off in classification performance but, in turn, offers the potential to simplify the overall design of activity monitoring and classification systems in the future.
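    The pipeline this abstract describes begins by rendering each photoplethysmography window as an image before it reaches the pretrained CNN. A minimal numpy sketch of that rasterization step (the 64×64 resolution, synthetic signal, and function name are illustrative assumptions, not the paper's actual preprocessing):

```python
import numpy as np

def series_to_image(signal, height=64, width=64):
    """Rasterize a 1-D time series into a binary image (time on x, amplitude on y)."""
    cols = np.linspace(0, len(signal) - 1, width).astype(int)  # resample to `width` columns
    s = signal[cols]
    s = (s - s.min()) / (s.max() - s.min() + 1e-9)             # normalize to [0, 1]
    rows = ((1.0 - s) * (height - 1)).astype(int)              # invert so peaks sit at the top
    img = np.zeros((height, width), dtype=np.uint8)
    img[rows, np.arange(width)] = 1                            # one lit pixel per column
    return img

# Synthetic PPG-like window: pulse rhythm plus slow baseline wander.
t = np.linspace(0, 8, 800)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 0.2 * t)
img = series_to_image(ppg)
print(img.shape)  # → (64, 64)
```

    Images produced this way would then be fed to a pretrained CNN whose final layer is retrained for the activity classes, per the transfer-learning setup the abstract names.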

    PhysioGait: Context-Aware Physiological Context Modeling for Person Re-identification Attack on Wearable Sensing

    Person re-identification is a critical privacy breach in publicly shared healthcare data. We investigate the possibility of a new type of privacy threat on publicly shared, privacy-insensitive, large-scale wearable sensing data. In this paper, we investigate user-specific biometric signatures in terms of two contextual biometric traits: physiological (photoplethysmography and electrodermal activity) and physical (accelerometer) contexts. In this regard, we propose PhysioGait, a context-aware physiological signal model that consists of a Multi-Modal Siamese Convolutional Neural Network (mmSNN), which learns the spatial and temporal information individually and performs sensor fusion with a Siamese cost, with the objective of predicting a person's identity. We evaluated the PhysioGait attack model using four real-time collected datasets (three collected under IRB #HP-00064387 and one publicly available) and two combined datasets, achieving 89%-93% accuracy in re-identifying persons. Comment: Accepted in IEEE MSN 2022. arXiv admin note: substantial text overlap with arXiv:2106.1190
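    The Siamese objective at the core of a model like mmSNN can be illustrated with the standard contrastive loss: pull embeddings of the same identity together, push different identities at least a margin apart. A numpy sketch (the embeddings and margin below are synthetic stand-ins, not the paper's learned sensor-fusion features):

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_person, margin=1.0):
    """Contrastive Siamese cost on a pair of embeddings."""
    d = np.linalg.norm(emb_a - emb_b)
    if same_person:
        return 0.5 * d ** 2                       # penalize distance between matching pairs
    return 0.5 * max(0.0, margin - d) ** 2        # penalize closeness of non-matching pairs

a = np.array([0.10, 0.20])
b = np.array([0.10, 0.25])  # near a: plausibly the same person
print(contrastive_loss(a, b, True) < contrastive_loss(a, b, False))  # → True
```

    Trained this way, nearest-neighbor lookup in embedding space becomes the re-identification attack the abstract evaluates.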

    Wearables, smartphones, and artificial intelligence for digital phenotyping and health

    Ubiquitous progress in wearable sensing and mobile computing technologies, alongside growing diversity in sensor modalities, has created new pathways for the collection of health and well-being data outside of laboratory settings, in a longitudinal fashion. Wearable and mobile devices have the potential to provide low-cost, objective measures of physical activity, clinically relevant data for patient assessment, and scalable behavior monitoring in large populations. These data can be used in both interventional and observational studies to derive insights regarding the links between behavior, health, and disease, as well as to advance the personalization and effectiveness of commercial wellness applications. Today, over 400,000 participants have had their behavior tracked prospectively using accelerometers for epidemiological studies across the globe. Traditionally, epidemiologists and clinicians have relied upon self-report measures of physical activity and sleep which, while valuable in the absence of alternatives, are subject to bias and often provide partial, incomplete information. Physical behavior data extracted from wearable devices are being used to derive sensor-assessed, objective measures of physical behaviors, overcoming the limitations of self-report, with the aim of relating these to clinical endpoints and eventually applying the findings to preventive and predictive medicine. Moreover, the application of artificial intelligence (AI), sensor fusion, and signal processing to wearable sensor data has led to improved human activity recognition and behavioral phenotyping. Here, we review the state of the art in wearable and mobile sensing technology in epidemiology and clinical medicine and discuss how AI is changing the field.

    Accurate and Robust Heart Rate Sensor Calibration on Smartwatches using Deep Learning

    Heart rate (HR) monitoring has been the foundation of much research and many applications in the fields of health care, sports and fitness, and physiology. With the development of affordable non-invasive optical heart rate monitoring technology, continuous monitoring of heart rate and related physiological parameters is increasingly possible. While this allows continuous access to heart rate information, its potential is severely constrained by the inaccuracy of the optical sensor that provides the signal for deriving heart rate information. Among all the factors influencing sensor performance, hand motion is a particularly significant source of error. In this thesis, we first quantify the robustness and accuracy of the wearable heart rate monitor under everyday scenarios, demonstrating its vulnerability to different kinds of motion. Consequently, we developed DeepHR, a deep learning-based calibration technique, to improve the quality of heart rate measurements on smart wearables. DeepHR associates the motion features captured by the accelerometer and gyroscope on the wearable with a reference sensor, such as a chest-worn HR monitor. Once pre-trained, DeepHR can be deployed on smart wearables to correct the errors caused by motion. Through rigorous and extensive benchmarks, we demonstrate that DeepHR significantly improves the accuracy and robustness of HR measurements on smart wearables, being superior to standard fully connected deep neural network models. In our evaluation, DeepHR is capable of generalizing across different activities and users, demonstrating that having a general pre-trained and pre-deployed model for various individual users is possible.
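    DeepHR itself is a deep model; as a minimal stand-in, an ordinary least-squares fit over synthetic data illustrates the calibration setup the abstract describes: correcting motion-corrupted optical HR against a chest-strap reference using motion features (all data, coefficients, and the linear form below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
motion = rng.uniform(0, 1, size=(n, 3))      # synthetic accelerometer/gyroscope features
true_hr = rng.uniform(60, 160, size=n)       # reference chest-strap HR (bpm)
# Optical HR corrupted by a motion-dependent error plus sensor noise.
optical_hr = true_hr + 25 * motion[:, 0] - 10 * motion[:, 1] + rng.normal(0, 2, n)

X = np.column_stack([optical_hr, motion, np.ones(n)])  # design matrix with bias term
w, *_ = np.linalg.lstsq(X, true_hr, rcond=None)        # fit calibration weights
corrected = X @ w

raw_mae = np.mean(np.abs(optical_hr - true_hr))
cal_mae = np.mean(np.abs(corrected - true_hr))
print(raw_mae > cal_mae)  # → True: calibration reduces the error
```

    DeepHR replaces this linear map with a deep network, which is what lets it generalize across activities and users rather than a single motion regime.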

    High-Resolution Motor State Detection in Parkinson's Disease Using Convolutional Neural Networks

    Patients with advanced Parkinson's disease regularly experience unstable motor states. Objective and reliable monitoring of these fluctuations is an unmet need. We used deep learning to classify motion data from a single wrist-worn IMU sensor recording in unscripted environments. For validation purposes, patients were accompanied by a movement disorder expert, and their motor state was passively evaluated every minute. We acquired a dataset of 8,661 minutes of IMU data from 30 patients, with annotations about the motor state (OFF, ON, DYSKINETIC) based on the MDS-UPDRS global bradykinesia item and the AIMS upper limb dyskinesia item. Using a 1-minute window size as input for a convolutional neural network trained on data from a subset of patients, we achieved a three-class balanced accuracy of 0.654 on data from previously unseen subjects. This corresponds to detecting the OFF, ON, or DYSKINETIC motor state at a sensitivity/specificity of 0.64/0.89, 0.67/0.67, and 0.64/0.89, respectively. On average, the model outputs were highly correlated with the annotation on a per-subject scale (r = 0.83/0.84; p < 0.0001), and remained so for the highly resolved time windows of 1 minute (r = 0.64/0.70; p < 0.0001). Thus, we demonstrate the feasibility of long-term motor-state detection in a free-living setting with deep learning using motion data from a single IMU.
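    The three-class balanced accuracy reported above is the mean of the per-class recalls, which keeps an imbalanced class (e.g. rare DYSKINETIC windows) from being drowned out. A small sketch with synthetic window labels (the label counts below are illustrative, not the study's data):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, classes=("OFF", "ON", "DYSKINETIC")):
    """Mean per-class recall over the three motor states."""
    recalls = []
    for c in classes:
        mask = y_true == c
        recalls.append(np.mean(y_pred[mask] == y_true[mask]))  # recall for class c
    return float(np.mean(recalls))

# 30 synthetic 1-minute windows, 10 per state; misclassify 3 OFF windows as ON.
y_true = np.array(["OFF"] * 10 + ["ON"] * 10 + ["DYSKINETIC"] * 10)
y_pred = y_true.copy()
y_pred[:3] = "ON"
print(round(balanced_accuracy(y_true, y_pred), 3))  # → 0.9
```

    Here plain accuracy would also be 0.9, but with imbalanced state frequencies the two metrics diverge, which is why the study reports the balanced variant for unseen subjects.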