36 research outputs found

    A Practical Approach for Recognizing Eating Moments With Wrist-Mounted Inertial Sensing

    Copyright ©2015 ACM. DOI: 10.1145/2750858.2807545
    Recognizing when eating activities take place is one of the key challenges in automated food intake monitoring. Despite progress over the years, most proposed approaches have remained largely impractical for everyday use, requiring multiple on-body sensors or specialized devices such as neck collars for swallow detection. In this paper, we describe the implementation and evaluation of an approach for inferring eating moments from 3-axis accelerometry collected with a popular off-the-shelf smartwatch. Trained on data collected in a semi-controlled laboratory setting with 20 subjects, our system recognized eating moments in two free-living studies (7 participants, 1 day; 1 participant, 31 days), with F-scores of 76.1% (66.7% precision, 88.8% recall) and 71.3% (65.2% precision, 78.6% recall). This work represents a contribution towards a practical, automated system for everyday food intake monitoring, with applicability in areas ranging from health research to food journaling.
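    The reported F-scores can be checked against the stated precision/recall pairs with the standard harmonic-mean formula; a minimal sketch:

    ```python
    # Verify the abstract's F-scores from its precision/recall pairs.
    def f_score(precision: float, recall: float) -> float:
        """F1 = 2 * P * R / (P + R), the harmonic mean of precision and recall."""
        return 2 * precision * recall / (precision + recall)

    # Study 1: 66.7% precision, 88.8% recall -> ~76.1-76.2% (rounding)
    print(round(f_score(0.667, 0.888) * 100, 1))
    # Study 2: 65.2% precision, 78.6% recall -> 71.3%
    print(round(f_score(0.652, 0.786) * 100, 1))
    ```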

    A Wearable Sensing Framework for Improving Personal and Oral Hygiene for People with Developmental Disabilities

    People with developmental disabilities often face difficulties coping with daily activities, and many require constant support. One of the major health issues for people with developmental disabilities is personal hygiene. Many lack the ability, memory, or attention needed to carry out normal daily activities such as brushing teeth and washing hands, and poor personal hygiene may result in increased susceptibility to infection and other health issues. To enable independent living and improve the quality of care for people with developmental disabilities, this paper proposes a new wearable sensing framework for monitoring personal hygiene. Based on a smartwatch, the framework is designed as a pervasive monitoring and learning tool that provides detailed evaluation and feedback to the user on hand washing and tooth brushing. A preliminary study was conducted to assess the performance of the approach, and the results showed the reliability and robustness of the framework in quantifying and assessing hand washing and tooth brushing activities.
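    The abstract does not detail the feedback logic, but one plausible sketch (not the paper's implementation) is: once per-second windows have been classified as hand washing or not, compare the longest continuous washing bout against the commonly cited 20-second guideline:

    ```python
    # Hypothetical feedback rule over per-second activity labels (1 = washing).
    def longest_bout(labels: list) -> int:
        """Longest run of consecutive 1s, i.e. seconds of continuous washing."""
        best = cur = 0
        for v in labels:
            cur = cur + 1 if v else 0
            best = max(best, cur)
        return best

    def washing_feedback(labels: list, target_s: int = 20) -> str:
        bout = longest_bout(labels)
        return "OK" if bout >= target_s else f"keep going ({bout}/{target_s}s)"

    print(washing_feedback([1] * 25))                 # OK
    print(washing_feedback([1] * 8 + [0] + [1] * 5))  # keep going (8/20s)
    ```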

    Eating and Exercise Detection with Continuous Glucose Monitors

    Eating and exercise detection using continuous glucose monitor (CGM) signals is key to providing recommendations for a healthy lifestyle. However, this can be challenging given imbalanced data and other confounding contexts. Previous works have used accelerometers, gyroscopes, glucose monitors, and other sensors, but rarely all of these combined. Therefore, I aim to build a model by testing various techniques, combining glucose with other body measurements such as electrodermal activity, heart rate, blood volume, accelerometer, and gyroscope readings. A sliding window is used to extract statistical measures from each body measurement, such as standard deviation, mean, and range, to look for patterns correlated with eating and exercise. I select an extreme gradient boosted decision tree algorithm with the Synthetic Minority Oversampling Technique (SMOTE). I compare the performance of using glucose alone against adding more sensor data and find no consistent change in performance. I also adjust the window length and overlap to compare eating detection performance and find no concrete impact. Furthermore, I perform exercise detection with and without CGM; there appears to be no significant performance difference. In addition to eating detection, I also examine the correlation between glucose variation and exercise moments. I conclude that eating detection is not feasible with my current methods. Exercise detection produces better results than eating detection, but my current method for detecting correlations between glucose levels and exercise moments can be further improved.
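    The sliding-window feature extraction described above can be sketched in a few lines; the window length and step here are illustrative, not the thesis's actual settings. Each window's features would then feed an imbalance-aware classifier such as the SMOTE-plus-gradient-boosting pipeline the abstract names:

    ```python
    # Slide a fixed-length window (with overlap) over one signal stream and
    # emit (mean, standard deviation, range) per window.
    from statistics import mean, pstdev

    def window_features(signal, win=10, step=5):
        """Return one (mean, std, range) tuple per overlapping window."""
        feats = []
        for start in range(0, len(signal) - win + 1, step):
            w = signal[start:start + win]
            feats.append((mean(w), pstdev(w), max(w) - min(w)))
        return feats

    # 20 samples, window 10, step 5 -> windows starting at 0, 5, 10.
    feats = window_features(list(range(20)), win=10, step=5)
    print(len(feats))      # 3
    print(feats[0])        # first window 0..9: mean 4.5, range 9
    ```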

    MirrorGen Wearable Gesture Recognition using Synthetic Videos

    In recent years, deep learning systems have outperformed traditional machine learning systems in most domains. There has recently been extensive research in hand gesture recognition using wearable sensors, due to the numerous advantages these systems have over vision-based ones. However, the lack of extensive datasets and the nature of Inertial Measurement Unit (IMU) data make it difficult to apply deep learning techniques to them. Although many machine learning models achieve good accuracy, most assume that training data is available for every user, while works that do not require per-user data have lower accuracies. MirrorGen is a technique that uses wearable sensor data to generate synthetic videos of hand movements, mitigating the traditional challenges of vision-based recognition such as occlusion, lighting restrictions, lack of viewpoint variation, and environmental noise. In addition, MirrorGen allows user-independent recognition with minimal human effort during data collection. It also leverages advances in vision-based recognition through techniques such as optical flow extraction and 3D convolution. Projecting the orientation (IMU) information onto a video recovers position information for the hands. To validate these claims, we perform entropy analysis on various configurations: raw data, a stick model, a hand model, and real video. The human hand model is found to have an optimal entropy that helps achieve user-independent recognition, and it serves as a more pervasive option than video-based recognition. An average user-independent recognition accuracy of 99.03% was achieved on a sign language dataset with 59 different users and 20 different signs with 20 repetitions each, for a total of roughly 23k training instances. Moreover, synthetic videos can be used to augment real videos to improve recognition accuracy.
    Dissertation/Thesis: Masters Thesis, Computer Science, 201
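    The abstract does not specify how the entropy analysis was computed; one minimal, purely illustrative sketch is Shannon entropy over a discrete value distribution, which could be applied to pixel intensities of raw-data, stick-model, hand-model, or real-video frames to compare their information content:

    ```python
    # Shannon entropy (in bits) of the empirical distribution of `values`.
    from collections import Counter
    from math import log2

    def shannon_entropy(values) -> float:
        counts = Counter(values)
        n = len(values)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    print(shannon_entropy([0, 0, 1, 1]))  # 1.0 bit: uniform over two symbols
    print(shannon_entropy([7, 7, 7, 7]))  # zero entropy: no uncertainty
    ```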