1,035 research outputs found

    Food intake gesture monitoring system based-on depth sensor

    Food intake gesture technology is a new strategy that helps people with obesity manage their health care while saving time and money. This approach combines face and hand joint points to monitor a user's food intake using the Kinect Xbox One camera sensor. Rather than counting calories, scientists at Brigham Young University found that dieters who reduced their number of daily bites by 20 to 30 percent lost around two kilograms a month, regardless of what they ate [1]. Research studies showed that most methods used to count bites are worn devices, which have a high false-alarm ratio; the current trend is toward non-wearable devices. The Kinect sensor captures skeletal data of the user while eating, and the data are used to train the system to recognize eating motion and movement. Specific joints are captured, such as the jaw face point and the wrist roll joint. Overall accuracy is around 94%, reflecting an increase in the overall recognition rate of the system.
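    The joint-based idea in this abstract can be sketched minimally: given per-frame 3D positions of the jaw and wrist joints, a bite can be approximated as the wrist entering and leaving a small distance threshold around the jaw. The threshold value and the frame data below are invented for illustration, not taken from the paper.

    ```python
    import math

    def count_bites(jaw, wrist, threshold=0.15):
        """Count bites as wrist-to-jaw distance dips below a threshold.

        jaw, wrist: lists of (x, y, z) joint positions per frame.
        threshold: distance (metres) treated as 'hand at mouth' (assumed value).
        A bite is counted on each transition from far to near.
        """
        bites = 0
        near = False
        for j, w in zip(jaw, wrist):
            d = math.dist(j, w)
            if d < threshold and not near:
                bites += 1
                near = True
            elif d >= threshold:
                near = False
        return bites

    # Hypothetical frames: the wrist approaches the jaw twice.
    jaw_frames = [(0.0, 1.6, 0.0)] * 6
    wrist_frames = [
        (0.0, 1.0, 0.0),   # far
        (0.0, 1.55, 0.0),  # near -> bite 1
        (0.0, 1.0, 0.0),   # far again
        (0.0, 1.58, 0.0),  # near -> bite 2
        (0.0, 1.57, 0.0),  # still near, not counted again
        (0.0, 1.0, 0.0),   # far
    ]
    print(count_bites(jaw_frames, wrist_frames))  # 2
    ```

    A real system would smooth the joint trajectories and add dwell-time constraints to suppress the false alarms the abstract mentions.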

    Using Hidden Markov Models to Segment and Classify Wrist Motions Related to Eating Activities

    Advances in body sensing and mobile health technology have created new opportunities for empowering people to take a more active role in managing their health. Measurements of dietary intake are commonly used for the study and treatment of obesity. However, the most widely used tools rely upon self-report and require considerable manual effort, leading to underreporting of consumption, non-compliance, and discontinued use over the long term. We are investigating the use of wrist-worn accelerometers and gyroscopes to automatically recognize eating gestures. In order to improve recognition accuracy, we studied the sequential dependency of actions during eating. In chapter 2 we first undertook the task of finding a set of wrist motion gestures which were small and descriptive enough to model the actions performed by an eater during consumption of a meal. We found a set of four actions: rest, utensiling, bite, and drink; any alternative gesture is referred to as the "other" gesture. The stability of the gesture definitions was evaluated using an inter-rater reliability test. Later, in chapter 3, 25 meals were hand labeled and used to study the existence of sequential dependence among the gestures. To study this, three types of classifiers were built: 1) a K-nearest neighbor (KNN) classifier, which uses no sequential context; 2) a hidden Markov model (HMM), which captures the sequential context of sub-gesture motions; and 3) HMMs that model inter-gesture sequential dependencies. We built first-order to sixth-order HMMs to evaluate the usefulness of increasing amounts of sequential dependence in aiding recognition. The first two were our baseline algorithms. We found that adding knowledge of the sequential dependence of gestures achieved an accuracy of 96.5%, an improvement of 20.7% and 12.2% over the KNN and the sub-gesture HMM, respectively.
Lastly, in chapter 4, we automatically segmented a continuous wrist motion signal and assessed classification performance for each of the three classifiers. Again, knowledge of sequential dependence enhanced the recognition of gestures in unsegmented data, achieving 90% accuracy, an improvement of 30.1% and 18.9% over the KNN and the sub-gesture HMM, respectively.
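The inter-gesture sequential dependence exploited in this abstract can be illustrated with a first-order HMM: the four gesture labels are hidden states, per-segment classifier scores are emissions, and Viterbi decoding finds the most likely gesture sequence. The transition and emission probabilities below are invented for illustration, not estimated from the thesis data.

```python
import math

STATES = ["rest", "utensiling", "bite", "drink"]

# Illustrative first-order transition probabilities (each row sums to 1);
# real values would be estimated from hand-labeled meals.
TRANS = {
    "rest":       {"rest": 0.6, "utensiling": 0.3, "bite": 0.05, "drink": 0.05},
    "utensiling": {"rest": 0.2, "utensiling": 0.3, "bite": 0.45, "drink": 0.05},
    "bite":       {"rest": 0.5, "utensiling": 0.3, "bite": 0.1,  "drink": 0.1},
    "drink":      {"rest": 0.7, "utensiling": 0.2, "bite": 0.05, "drink": 0.05},
}

def viterbi(emissions):
    """Decode the most likely gesture sequence.

    emissions: list of dicts mapping state -> P(observation | state),
    e.g. scores from a per-segment motion classifier. Uniform prior.
    """
    score = {s: math.log(emissions[0][s] / len(STATES)) for s in STATES}
    back = []
    for em in emissions[1:]:
        new_score, ptr = {}, {}
        for s in STATES:
            # Best predecessor for a path ending in state s.
            prev = max(STATES, key=lambda p: score[p] + math.log(TRANS[p][s]))
            new_score[s] = score[prev] + math.log(TRANS[prev][s]) + math.log(em[s])
            ptr[s] = prev
        back.append(ptr)
        score = new_score
    # Backtrack from the best final state.
    path = [max(STATES, key=score.get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Hypothetical scores: clear rest, clear utensiling, then an ambiguous segment.
em_rest = {"rest": 0.7, "utensiling": 0.1, "bite": 0.1,  "drink": 0.1}
em_lift = {"rest": 0.1, "utensiling": 0.7, "bite": 0.1,  "drink": 0.1}
em_ambg = {"rest": 0.1, "utensiling": 0.3, "bite": 0.35, "drink": 0.25}
print(viterbi([em_rest, em_lift, em_ambg]))  # ['rest', 'utensiling', 'bite']
```

Note how the high utensiling-to-bite transition probability resolves the ambiguous final segment in favor of "bite", which is the kind of gain over a context-free KNN that the abstract reports.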

    Individualized Wrist Motion Models for Detecting Eating Episodes Using Deep Learning

    This thesis considers the problem of detecting eating episodes, such as meals and snacks, by tracking wrist motion using a smartwatch. Previous work by our group trained a wrist motion classifier on a large data set collected from 351 people to learn general eating behaviors; we call this a group model. This thesis investigates training a classifier with the same model architecture on new data collected from 8 people, training an individualized classifier separately for each person; we call these individual models. The main goal of this work is to determine whether individual models provide higher accuracy in detecting eating episodes, with fewer false positives, compared to the group model. By comparing their performance, we can also determine whether the improvement from individual models varies per individual. Two data sets were used. The first is the individual data set, collected from 8 participants, with each participant providing at least 10 days of 6-axis wrist motion timeseries data; it contains 115 days, 1,064.5 hours, and 246 meals in total. The second is the group data set, called the Clemson All-day Data set (CAD), collected in previous work from 351 participants; it contains 354 days, 1,133 meals, 250 eating hours, and 4,680 hours in total. Both data sets were first processed using smoothing and normalization techniques and then cut along time by a sliding window to generate training and testing samples. In model training and evaluation, all models used the same convolutional neural network architecture. A single group model was trained on the CAD group data set and compared against all of the individual models. We used 5-fold cross validation to train and evaluate 5 individual models per individual.
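    The sliding-window step described above can be sketched as follows; the window length and 50% stride are assumed values for illustration, not the parameters used in the thesis.

    ```python
    def sliding_windows(samples, window_len, stride):
        """Cut a multi-axis timeseries into fixed-length training windows.

        samples: list of per-timestep readings (e.g. 6-axis accel + gyro
                 tuples), assumed already smoothed and normalized.
        window_len: samples per window; stride: step between window starts.
        """
        return [samples[i:i + window_len]
                for i in range(0, len(samples) - window_len + 1, stride)]

    # 10 timesteps of fake 6-axis data, 4-sample windows, 50% overlap.
    data = [(t,) * 6 for t in range(10)]
    windows = sliding_windows(data, window_len=4, stride=2)
    print(len(windows))  # 4 windows, starting at t = 0, 2, 4, 6
    ```

    Each window would then be labeled eating or non-eating from the ground-truth meal times and fed to the convolutional network as one training sample.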
In model evaluation, we selected weighted accuracy (WAcc) as the time metric to measure the models' ability to classify each window sample as eating or non-eating. We also selected true positive rate (TPR) and the ratio of false positives over true positives (FP/TP) as episode metrics to measure a model's ability to detect each meal episode. TPR measures how many true eating episodes are detected correctly, and FP/TP measures the ratio of wrong detections to true detections; hence the model performs better when TPR is larger and FP/TP is smaller. WAcc, TPR, and FP/TP were measured by cross validation. On the time metric, we found that over the 8 participants the average WAcc of the individual models is 0.819, versus 0.780 for the group model; on average, the individual models outperform the group model. Moreover, the improvement of individual models over the group model varies per individual. For example, on one individual data set, individual models with a WAcc of 0.897 show an obvious improvement over the group model's WAcc of 0.774, while on another, the individual models' WAcc of 0.958 is very close to the group model's 0.956. On the episode metrics, we found that before tuning the hyper-parameters Ts and Te, individual models had an average improvement over the group model of 8.6% on TPR but -14.4% on FP/TP. After tuning Ts and Te, individual models had an average improvement of 10.1% on TPR and 33.2% on FP/TP. Tuning Ts and Te thus improves the individual models' episode metrics.
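The episode metrics can be sketched by matching detected eating intervals against ground-truth meals. The matching rule below (any overlap counts as a hit) is an assumption for illustration, since the abstract does not define the exact matching criterion.

```python
def episode_metrics(true_episodes, detected):
    """Compute TPR and FP/TP for eating-episode detection.

    true_episodes, detected: lists of (start, end) time intervals.
    A detection overlapping any true episode is a TP; unmatched
    detections are FPs; TPR = matched true episodes / all true episodes.
    """
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]

    tp = sum(any(overlaps(d, t) for t in true_episodes) for d in detected)
    fp = len(detected) - tp
    hit = sum(any(overlaps(t, d) for d in detected) for t in true_episodes)
    tpr = hit / len(true_episodes) if true_episodes else 0.0
    fp_over_tp = fp / tp if tp else float("inf")
    return tpr, fp_over_tp

# Hypothetical intervals in minutes: two true meals, one spurious detection.
meals = [(60, 90), (200, 230)]
found = [(65, 85), (100, 110), (205, 225)]
print(episode_metrics(meals, found))  # (1.0, 0.5)
```

Under this rule, both meals are found (TPR = 1.0) and one of three detections is spurious (FP/TP = 0.5), matching the abstract's point that a better model has higher TPR and lower FP/TP.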