AI Modeling Approaches for Detecting, Characterizing, and Predicting Brief Daily Behaviors such as Toothbrushing using Wrist Trackers.
Continuous advancements in wrist-worn sensors have opened up exciting possibilities for real-time monitoring of individuals' daily behaviors, with the aim of promoting healthier, more organized, and efficient lives. Understanding the duration of specific daily behaviors has become of interest to individuals seeking to optimize their lifestyles. However, there is still a research gap when it comes to monitoring short-duration behaviors that have a significant impact on health using wrist-worn inertial sensors in natural environments. These behaviors often involve repetitive micro-events that last only a few seconds or less, making their detection and analysis challenging. Furthermore, these micro-events are often surrounded by non-repetitive boundary events, further complicating the identification process. Effective detection and timely intervention during these short-duration behaviors are crucial for designing personalized interventions that can positively impact individuals' lifestyles. To address these challenges, this dissertation introduces three models: mORAL, mTeeth, and Brushing Prompt. These models leverage wrist-worn inertial sensors to accurately infer short-duration behaviors, identify repetitive micro-behaviors, and provide timely interventions related to oral hygiene. The dissertation's contributions extend beyond the development of these models. First, precise and detailed labels for each brief and micro-repetitive behavior were acquired to train and validate the models effectively. This involved meticulous marking of the exact start and end times of each event, including any intervening pauses, at second-level granularity. A comprehensive scientific research study was conducted to collect such data from participants in their free-living natural environments. Second, a solution is proposed to address the issue of sensor placement variability.
Given the different positions of the sensor within a wristband and variations in wristband placement on the wrist, the model needs to accurately determine the relative configuration of the inertial sensor; this relative positioning with respect to the wrist is crucial for inferring the orientation of the hand. Additionally, time synchronization errors between sensor data and associated video, despite both being collected on the same smartphone, are addressed through the development of an algorithm that tightly synchronizes the two data sources without relying on an explicit anchor event. Furthermore, an event-based approach is introduced to identify candidate segments of data for applying machine learning models, outperforming the traditional fixed window-based approach. These candidate segments enable reliable detection of brief daily behaviors in a computationally efficient manner suitable for real-time use. The dissertation also presents a computationally lightweight method for identifying anchor events using wrist-worn inertial sensors. Anchor events play a vital role in assigning unambiguous labels in a fixed-length window-based approach to data segmentation and in effectively demarcating transitions between micro-repetitive events. Significant features are extracted, and explainable machine learning models are developed to ensure reliable detection of brief daily and micro-repetitive behaviors. Lastly, the dissertation addresses the crucial factor of the opportune moment for intervention during brief daily behaviors using wrist-worn inertial sensors. By leveraging these sensors, users can receive timely and personalized interventions to enhance their performance and improve their lifestyles. Overall, this dissertation makes substantial contributions to the field of real-time monitoring of short-duration behaviors.
It tackles various technical challenges, provides innovative solutions, and demonstrates the potential for wrist-worn sensors to facilitate effective interventions and promote healthier behaviors. By advancing our understanding of these behaviors and optimizing intervention strategies, this research has the potential to significantly impact individuals' well-being and contribute to the development of personalized health solutions.
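The event-based candidate-segment idea described in this abstract can be sketched as follows. This is an illustrative toy, not the dissertation's actual mORAL pipeline: the activity threshold, merge gap, and sampling rate are all assumptions.

```python
import numpy as np

def candidate_segments(accel_mag, fs=50, thresh=1.5, merge_gap=2.0):
    """Illustrative event-based segmentation: find samples whose acceleration
    magnitude (m/s^2) deviates from gravity by more than `thresh`, then merge
    events closer than `merge_gap` seconds into candidate segments that a
    downstream classifier would inspect (instead of sliding fixed windows
    over the entire stream)."""
    active = np.abs(accel_mag - 9.81) > thresh      # candidate event samples
    idx = np.flatnonzero(active)
    if idx.size == 0:
        return []
    gap = int(merge_gap * fs)                       # max within-segment gap
    segments, start, prev = [], idx[0], idx[0]
    for i in idx[1:]:
        if i - prev > gap:                          # gap too large: close segment
            segments.append((int(start), int(prev)))
            start = i
        prev = i
    segments.append((int(start), int(prev)))
    return segments
```

Only the returned segments need feature extraction, which is what makes the approach computationally attractive for real-time use.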
Unstructured Handwashing Recognition using Smartwatch to Reduce Contact Transmission of Pathogens
Current guidelines from the World Health Organization indicate that the SARS-CoV-2 coronavirus, which causes the novel coronavirus disease (COVID-19), is transmitted through respiratory droplets or by contact. Contact transmission occurs when contaminated hands touch the mucous membranes of the mouth, nose, or eyes, so hand hygiene is extremely important to prevent the spread of SARS-CoV-2 as well as of other pathogens. The vast proliferation of wearable devices such as smartwatches, containing acceleration, rotation, and magnetic field sensors, together with modern artificial intelligence technologies such as machine learning and, more recently, deep learning, allows the development of accurate applications for the recognition and classification of human activities such as walking, climbing stairs, running, clapping, sitting, and sleeping. In this work, we evaluate the feasibility of a machine learning based system which, starting from inertial signals collected from wearable devices such as current smartwatches, recognizes when a subject is washing or rubbing their hands. Preliminary results, obtained over two different datasets, show a classification accuracy of about 95% and about 94% for deep and standard learning techniques, respectively.
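The "standard learning" side of such a system typically starts from windowed statistical features. The sketch below is a hypothetical illustration of that first step (the window length and feature set are assumptions, not this paper's implementation); its output would feed a conventional classifier.

```python
import numpy as np

def window_features(acc, fs=50, win_s=2.0):
    """Split a tri-axial accelerometer stream (N x 3 array) into fixed
    non-overlapping windows and compute per-axis mean and standard
    deviation plus the signal magnitude area (SMA) -- simple features
    commonly used for activity classification."""
    win = int(win_s * fs)                 # samples per window
    n = len(acc) // win
    feats = []
    for k in range(n):
        w = acc[k * win:(k + 1) * win]
        mean = w.mean(axis=0)             # 3 values
        std = w.std(axis=0)               # 3 values
        sma = np.abs(w).sum() / win       # 1 value: overall motion intensity
        feats.append(np.concatenate([mean, std, [sma]]))
    return np.array(feats)
```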
A 'one-size-fits-most' walking recognition method for smartphones, smartwatches, and wearable accelerometers
The ubiquity of personal digital devices offers unprecedented opportunities
to study human behavior. Current state-of-the-art methods quantify physical
activity using 'activity counts,' a measure which overlooks specific types of
physical activities. We proposed a walking recognition method for sub-second
tri-axial accelerometer data, in which activity classification is based on the
inherent features of walking: intensity, periodicity, and duration. We
validated our method against 20 publicly available, annotated datasets on
walking activity data collected at various body locations (thigh, waist, chest,
arm, wrist). We demonstrated that our method can estimate walking periods with
high sensitivity and specificity: average sensitivity ranged between 0.92 and
0.97 across various body locations, and average specificity for common daily
activities was typically above 0.95. We also assessed the method's algorithmic
fairness to demographic and anthropometric variables and measurement contexts
(body location, environment). Finally, we have released our method as
open-source software in MATLAB and Python.
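The periodicity criterion named above can be illustrated with a simple autocorrelation check. This is a sketch with an assumed step-frequency band and peak threshold, not the authors' released MATLAB/Python code.

```python
import numpy as np

def walking_periodicity(acc_mag, fs=50, f_lo=1.4, f_hi=2.3):
    """Check whether an acceleration-magnitude window is periodic at a
    plausible step frequency: autocorrelate the de-meaned signal and look
    for a dominant peak at a lag in the f_lo..f_hi Hz band.
    Returns (is_periodic, dominant_frequency_hz)."""
    x = acc_mag - acc_mag.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    ac = ac / (ac[0] + 1e-12)                           # normalize to ac[0] = 1
    lo, hi = int(fs / f_hi), int(fs / f_lo)             # lag range in samples
    lag = lo + np.argmax(ac[lo:hi + 1])
    return bool(ac[lag] > 0.4), fs / lag
```

Intensity and duration, the other two features the paper relies on, would be checked alongside this before a window is labeled as walking.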
Recognition of quotidian activities in support of independent living using a single wrist-worn inertial measurement unit
The field of Ambient Assisted Living (AAL) has gained increasing attention from the research community in recent years, driven by the rapid present and future ageing of the population worldwide. This problem has been widely recognised, as has the need to address it from both an economic and a societal perspective. Assisted living environments incorporate technological solutions to create a better condition of life for older adults. However, in order to create such a condition, it is crucial to understand the specific needs of each individual. In this regard, self-assessment of daily activities has been shown to be subjective and variable, presenting important discrepancies with assessments performed by clinicians.
The above challenges have fostered the search for alternative monitoring solutions, increasing the research efforts upon the field of Human Activity Recognition (HAR). A vast array of sensing devices, including ambient sensors, video cameras and wearable devices, has been employed for the automatic monitoring of a person in a home environment. However, the research focus is shifting towards wearable solutions, which avoid the privacy concerns related to the use of video cameras in a home environment while providing more intrinsic information about the user than ambient devices.
The focus of this research is the investigation of signal processing and machine learning techniques for the recognition of quotidian activities concerning self-neglect (a behavioural condition in which individuals, generally older people, disregard, intentionally or unintentionally, their basic needs). More precisely, the targeted group of activities includes those concerning personal hygiene, namely hand washing and teeth brushing, as well as those directly related to dietary behaviour, namely eating and drinking.
The work undertaken in this thesis is divided into three stages. First, given the continuous quasi-periodic behaviour of hand washing and teeth brushing, these are studied alongside a group of other quotidian activities which also exhibit continuity during their performance. These studies include the investigation of informative features for activity recognition as well as relevant classification models and signal processing techniques. In addition, a novel multi-level refinement approach is proposed as a way to improve the classification rate of those activities with a lower inter-activity classification rate.
Second, a novel framework for fluid and food intake gesture recognition is developed. As opposed to the above activities, the nature of eating and drinking activities is neither static nor quasi-periodic. Instead, they are composed of sparsely occurring motions or gestures in continuous data streams. Given this characteristic, a novel signal segmentation technique, namely the Crossings-based Adaptive Segmentation Technique (CAST), is proposed to identify potential eating and drinking gestures while filtering out the remaining unwanted segments of the signals. In addition, various feature descriptors, namely a Soft Dynamic Time Warping (DTW) gesture discrepancy measure and time-series-to-image encoding techniques, as well as various deep learning architectures, are explored to overcome the notable similarity between eating and drinking gestures.
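A crossings-based segmenter in this spirit can be sketched in a few lines. Note the published CAST technique is adaptive; the simple mean-crossing rule and fixed length bounds below are simplifying assumptions for illustration only.

```python
import numpy as np

def crossing_segments(signal, min_len=25, max_len=300):
    """Cut a 1-D motion signal at the points where it crosses its own mean,
    and keep only the inter-crossing segments whose length is plausible for
    an intake gesture (min_len..max_len samples); everything else is
    filtered out as an unwanted segment."""
    centered = signal - signal.mean()
    sb = np.signbit(centered)
    # indices i where the sign flips between sample i and i+1
    crossings = np.flatnonzero(sb[:-1] != sb[1:])
    segs = []
    for a, b in zip(crossings[:-1], crossings[1:]):
        if min_len <= b - a <= max_len:
            segs.append((int(a), int(b)))
    return segs
```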
The third stage of the work aims at the identification of meal periods through the analysis of the distribution of eating gestures over time, using low-computational-cost signal processing techniques, including a moving average and an entropy measure.
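The moving-average part of this third stage can be sketched as follows. The bin width, smoothing kernel, and rate threshold are assumptions for illustration, and the thesis's entropy measure is not reproduced here.

```python
import numpy as np

def meal_periods(gesture_times, horizon_s=3600, win_s=300, rate_thresh=2):
    """Bin detected eating-gesture timestamps (seconds) into fixed windows,
    smooth the per-window counts with a 3-tap moving average, and flag
    windows whose smoothed gesture rate exceeds `rate_thresh` gestures per
    window as belonging to a meal period."""
    bins = np.arange(0, horizon_s + win_s, win_s)
    counts, _ = np.histogram(gesture_times, bins=bins)
    kernel = np.ones(3) / 3.0                       # simple moving average
    smooth = np.convolve(counts, kernel, mode="same")
    flagged = smooth > rate_thresh
    return [(int(bins[i]), int(bins[i + 1])) for i in np.flatnonzero(flagged)]
```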
The novel computational solutions and the results presented in this thesis demonstrate a significant contribution towards the recognition of quotidian activities in support of independent living.
Behavioral Privacy Risks and Mitigation Approaches in Sharing of Wearable Inertial Sensor Data
Wrist-worn inertial sensors in activity trackers and smartwatches are increasingly being used for daily tracking of activity and sleep. Wearable devices, with their onboard sensors, provide an appealing mobile health (mHealth) platform that can be leveraged for continuous and unobtrusive monitoring of an individual in their daily life. As a result, the adoption of wrist-worn devices in many applications (such as health, sport, and recreation) is increasing. Additionally, an increasing number of sensory datasets consisting of motion sensor data from wrist-worn devices are becoming publicly available for research. However, releasing or sharing these wearable sensor data creates serious privacy concerns for the user. First, in many application domains (such as mHealth, insurance, and health providers), user identity is an integral part of the shared data. In such settings, instead of identity privacy preservation, the focus is more on the behavioral privacy problem, that is, the disclosure of sensitive behaviors from the shared sensor data. Second, different datasets usually focus on only a select subset of these behaviors. But, in the event that users can be re-identified from accelerometry data, different databases of motion data (contributed by the same user) can be linked, resulting in the revelation of sensitive behaviors or health diagnoses of a user that were neither originally declared by the data collector nor consented to by the user. The contributions of this dissertation are multifold. First, to show the behavioral privacy risk in sharing raw sensor data, this dissertation presents a detailed case study of detecting cigarette smoking in the field. It proposes a new machine learning model, called puffMarker, that achieves a false positive rate of 1/6 (or 0.17) per day, with a recall rate of 87.5%, when tested in a field study with 61 newly abstinent daily smokers. Second, it proposes a model-based data substitution mechanism, namely mSieve, to protect behavioral privacy.
It evaluates the efficacy of the scheme using 660 hours of collected raw sensor data and demonstrates that it is possible to retain meaningful utility, in terms of inference accuracy (90%), while simultaneously preserving the privacy of sensitive behaviors. Third, it analyzes the risks of user re-identification from wrist-worn sensor data, even after applying mSieve for protecting behavioral privacy. It presents a deep learning architecture that can identify the unique micro-movement patterns in each wearer's wrists. A new consistency-distinction loss function is proposed to train the deep learning model for open set learning, so as to maximize re-identification consistency for known users and amplify distinction from any unknown user. In 10 weeks of daily sensor wearing by 353 participants, we show that a known user can be re-identified with a 99.7% true matching rate while keeping the false acceptance rate to 0.1% for an unknown user. Finally, for mitigation, we show that injecting even a low level of Laplace noise into the data stream can limit the re-identification risk. This dissertation creates new research opportunities for understanding and mitigating the risks and ethical challenges associated with behavioral privacy.
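The Laplace-noise mitigation mentioned above can be illustrated in a few lines. The noise scale here is arbitrary; the dissertation's calibration of noise level against re-identification risk and utility is not reproduced.

```python
import numpy as np

def laplace_perturb(samples, scale=0.1, seed=None):
    """Add i.i.d. Laplace noise to each inertial sample before sharing.
    `scale` is the Laplace diversity b (noise standard deviation is
    b * sqrt(2)); larger b means stronger perturbation and lower
    re-identification risk at the cost of downstream inference utility."""
    rng = np.random.default_rng(seed)
    return samples + rng.laplace(loc=0.0, scale=scale, size=np.shape(samples))
```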
Automated Tracking of Hand Hygiene Stages
The European Centre for Disease Prevention and Control (ECDC) estimates that 2.5 million cases of Hospital Acquired Infections (HAIs) occur each year in the European Union. Hand hygiene is regarded as one of the most important preventive measures for HAIs. If it is implemented properly, hand hygiene can reduce the risk of cross-transmission of an infection in the healthcare environment. Good hand hygiene is not only important for healthcare settings. The recent ongoing coronavirus pandemic has highlighted the importance of hand hygiene practices in our daily lives, with governments and health authorities around the world promoting good hand hygiene practices. The WHO has published guidelines on hand hygiene stages to promote good hand washing practices. A significant amount of existing research has focused on the problem of tracking hands to enable hand gesture recognition. In this work, gesture tracking devices and image processing are explored in the context of the hand washing environment. Hand washing videos of professional healthcare workers were carefully observed and analyzed in order to recognize hand features associated with hand hygiene stages that could be extracted automatically. Selected hand features, such as palm shape (flat or curved), palm orientation (palms facing or not), and hand trajectory (linear or circular movement), were then extracted and tracked with the help of a 3D gesture tracking device, the Leap Motion Controller. These features were further coupled together to detect the execution of a required WHO hand hygiene stage, 'Rub hands palm to palm', with the help of the Leap sensor in real time. In certain conditions, the Leap Motion Controller enables a clear distinction to be made between the left and right hands.
However, whenever the two hands came into contact with each other, sensor data from the Leap, such as palm position and palm orientation, was lost for one of the two hands. Hand occlusion was found to be a major drawback with the application of the device to this use case. Therefore, RGB digital cameras were selected for further processing and tracking of the hands. An image processing technique, using a skin detection algorithm, was applied to extract instantaneous hand positions for further processing, to enable various hand hygiene poses to be detected. Contour and centroid detection algorithms were further applied to track the hand trajectory in hand hygiene video recordings. In addition, feature detection algorithms were applied to a hand hygiene pose to extract the useful hand features. The video recordings did not suffer from occlusion as is the case for the Leap sensor, but the segmentation of one hand from another was identified as a major challenge with images, because the contour detection resulted in a continuous mass when the two hands were in contact. For future work, the data from gesture trackers, such as the Leap Motion Controller, and cameras (with image processing) could be combined to make a robust hand hygiene gesture classification system.
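The centroid-tracking step, together with a crude linear-versus-circular trajectory test, can be sketched as follows. The residual-ratio rule is an assumption for illustration, not the method used in this work.

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a binary skin mask (H x W bool array),
    tracked frame by frame to build a hand trajectory."""
    ys, xs = np.nonzero(mask)
    return (ys.mean(), xs.mean()) if ys.size else None

def trajectory_is_circular(points, tol=0.2):
    """Classify a centroid track as circular vs linear: fit the principal
    axis of the centered points via SVD and call the motion circular when
    the off-axis spread is large relative to the on-axis spread."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)
    _, s, _ = np.linalg.svd(pts, full_matrices=False)
    return bool(s[1] / (s[0] + 1e-12) > tol)
```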