
    Food intake gesture monitoring system based-on depth sensor

    Food intake gesture technology is a new strategy that helps people with obesity manage their health care while saving time and money. The approach combines face and hand joint points to monitor a user's food intake using a Kinect Xbox One camera sensor. Rather than counting calories, scientists at Brigham Young University found that dieters who reduced their number of daily bites by 20 to 30 percent lost around two kilograms a month, regardless of what they ate [1]. Research studies have shown that most existing bite-counting methods rely on worn devices, which have a high false-alarm ratio, and the current trend is toward non-wearable devices. The sensor captures skeletal data of the user while eating, and the data are used to train the system to recognize eating motions and movements. Specific joints are captured, such as the jaw face point and the wrist roll joint. Overall accuracy is around 94%, which improves the overall recognition rate of the system.
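    The abstract gives no code, but the core idea — watching the distance between the jaw and the wrist shrink and recover as the hand moves to the mouth — is easy to sketch. Below is a minimal illustration, assuming the Kinect skeletal stream has already been exported as per-frame 3D joint coordinates; the function name and threshold values are hypothetical, not taken from the paper.

```python
import numpy as np

def count_bites(jaw_xyz, wrist_xyz, near=0.12, far=0.25):
    """Count hand-to-mouth gestures from exported Kinect joint tracks.

    jaw_xyz, wrist_xyz: (N, 3) arrays of per-frame joint positions in metres.
    near/far: illustrative hysteresis thresholds on the jaw-wrist distance.
    A bite is counted each time the wrist approaches within `near` of the
    jaw and then retreats beyond `far`.
    """
    dist = np.linalg.norm(jaw_xyz - wrist_xyz, axis=1)
    bites, at_mouth = 0, False
    for d in dist:
        if not at_mouth and d < near:    # hand arrives at the mouth
            at_mouth = True
        elif at_mouth and d > far:       # hand retreats: one bite completed
            at_mouth = False
            bites += 1
    return bites
```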

    An Assessment of the Accuracy of an Automated Bite Counting Method in a Cafeteria Setting

    Advances in body-worn sensors and mobile health technology have created new opportunities for empowering people to take a more active role in managing their health. Obesity has been recognized as a target of opportunity that could particularly benefit from this approach. Self-monitoring of dietary intake is critical for weight loss/management, but currently used tools such as food diaries require users to manually estimate and record energy intake, making them subjective, prone to error, and difficult to use for long periods of time. Our group is developing a new tool called the 'bite counter' that automates the monitoring of caloric intake. The device is worn like a watch and uses sensors to track wrist motion during a meal. Previous studies have shown that our method accurately counts bites during controlled and uncontrolled meals in the lab. This thesis describes a study to evaluate the accuracy of the method in a cafeteria setting. A cafeteria booth that can seat 1 to 4 people was instrumented with tethered wrist motion trackers, embedded scales, and video cameras, to enable recording of wrist motion, changes in food weight, and actual activities during eating. A total of 276 subjects were recorded eating uncontrolled meals. The data was manually reviewed and the times of all actual bites taken were recorded as 'ground truth'. The wrist motion data was then analyzed using the automated bite counting method to determine the times of automated bite detections. These were compared against the ground truth to evaluate the accuracy of the bite counting method. In total, 22,383 bites were evaluated, consisting of 380 different foods, eaten using 4 different utensils from 4 different containers, across a variety of subject demographics. Results show that the method varied in accuracy from 39% (for ice cream cones) to 88% (for salad bar) across the 39 most commonly eaten foods (≥100 bite occurrences in the data set). The average accuracy found across all bites was 76% with a positive predictive value of 87%. A second test of the bite counting method using modified timing thresholds resulted in 82% accuracy with an 82% positive predictive value. These results indicate that the method works well across a wide variety of foods, utensils, containers, and subject demographics. The results also indicate that eating rate may be the most important variable to consider in the search for improvements to the method.
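    The bite counting method evaluated here detects a characteristic roll of the wrist during a bite. As a rough, hedged illustration only — the published thresholds and timing constants are not reproduced, so the values and function below are placeholders — a threshold scheme of that kind can be sketched as follows:

```python
import numpy as np

def detect_bites(roll_velocity, fs, t1=10.0, t2=-10.0, min_gap=2.0):
    """Threshold-style bite detection on wrist roll velocity (deg/s).

    A bite event is flagged when roll velocity exceeds t1 and later drops
    below t2, with at least `min_gap` seconds between detections.
    fs is the sampling rate in Hz; all constants are illustrative.
    """
    events, armed, last = [], False, -np.inf
    for i, v in enumerate(roll_velocity):
        t = i / fs
        if not armed and v > t1 and t - last >= min_gap:
            armed = True          # first phase of the wrist roll seen
        elif armed and v < t2:
            events.append(t)      # counter-roll completes the bite
            armed, last = False, t
    return events
```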

    Egocentric vision-based passive dietary intake monitoring

    Egocentric (first-person) perception captures and reveals how people perceive their surroundings. This unique perceptual view enables passive and objective monitoring of human-centric activities and behaviours. In capturing egocentric visual data, wearable cameras are used. Recent advances in wearable technologies have enabled wearable cameras to be lightweight, accurate, and with long battery life, making long-term passive monitoring a promising solution for healthcare and human behaviour understanding. In addition, recent progress in deep learning has provided an opportunity to accelerate the development of passive methods to enable pervasive and accurate monitoring, as well as comprehensive modelling of human-centric behaviours. This thesis investigates and proposes innovative egocentric technologies for passive dietary intake monitoring and human behaviour analysis. Compared to conventional dietary assessment methods in nutritional epidemiology, such as 24-hour dietary recall (24HR) and food frequency questionnaires (FFQs), which heavily rely on subjects’ memory to recall the dietary intake, and trained dietitians to collect, interpret, and analyse the dietary data, passive dietary intake monitoring can ease such burden and provide more accurate and objective assessment of dietary intake. Egocentric vision-based passive monitoring uses wearable cameras to continuously record human-centric activities with a close-up view. This passive way of monitoring does not require active participation from the subject, and records rich spatiotemporal details for fine-grained analysis. Based on egocentric vision and passive dietary intake monitoring, this thesis proposes: 1) a novel network structure called PAR-Net to achieve accurate food recognition by mining discriminative food regions. PAR-Net has been evaluated with food intake images captured by wearable cameras as well as those non-egocentric food images to validate its effectiveness for food recognition; 2) a deep learning-based solution for recognising consumed food items as well as counting the number of bites taken by the subjects from egocentric videos in an end-to-end manner; 3) in light of privacy concerns in egocentric data, this thesis also proposes a privacy-preserved solution for passive dietary intake monitoring, which uses image captioning techniques to summarise the image content and subsequently combines image captioning with 3D container reconstruction to report the actual food volume consumed. Furthermore, a novel framework that integrates food recognition, hand tracking and face recognition has also been developed to tackle the challenge of assessing individual dietary intake in food sharing scenarios with the use of a panoramic camera. Extensive experiments have been conducted. Tested with both laboratory (captured in London) and field study data (captured in Africa), the above proposed solutions have proven the feasibility and accuracy of using egocentric camera technologies with deep learning methods for individual dietary assessment and human behaviour analysis.
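    None of the thesis code is shown here. Purely as an illustration of the frame-level food recognition step, the sketch below runs a generic image classifier over egocentric video frames; the checkpoint path, label list, video filename, and one-frame-per-second sampling are all hypothetical, and a stock ResNet-50 stands in for the proposed PAR-Net.

```python
import cv2
import torch
from torchvision import models, transforms

# Hypothetical fine-tuned checkpoint and label list (stand-ins, not PAR-Net).
FOOD_CLASSES = ["rice", "stew", "bread", "fruit"]
model = models.resnet50(num_classes=len(FOOD_CLASSES))
model.load_state_dict(torch.load("food_classifier.pt", map_location="cpu"))
model.eval()

prep = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture("egocentric_meal.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30
frame_idx, predictions = 0, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % int(fps) == 0:        # classify one frame per second
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            logits = model(prep(rgb).unsqueeze(0))
        predictions.append(FOOD_CLASSES[int(logits.argmax())])
    frame_idx += 1
cap.release()
```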

    Bite detection and differentiation using templates of wrist motion

    We introduce a new algorithm for bite detection during an eating activity based on template matching. The algorithm uses a template to model the motion of the wrist over a 6-second window centered on the time when a person takes a bite. We also determine whether different types of bites (for example, food vs. drink, or bites taken with different types of utensils) have different wrist motion templates. The method was applied to 22,383 bites, and 5 different types of templates were built. We then describe a method to recognize different types of bites using the set of templates; the obtained accuracy was 46%. Finally, we describe a method to detect bites using the set of templates and compare its accuracy to the original threshold-based algorithm, obtaining a positive predictive value of 75% and a true positive rate of 47% across all bites.
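    The template-matching step lends itself to a compact sketch. Below, each stored template and each candidate window is a fixed-length, z-normalized wrist-motion segment (e.g. 6 s of one sensor axis), and a window is assigned to whichever template correlates with it best. The similarity measure and names are illustrative, not the paper's exact formulation.

```python
import numpy as np

def znorm(x):
    """Z-normalize a 1-D segment so correlation ignores offset and scale."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (x.std() + 1e-8)

def classify_window(window, templates):
    """Match a 6-second wrist-motion window against bite-type templates.

    window: 1-D array (one sensor axis, fixed length).
    templates: dict mapping bite type (e.g. "fork", "drink") to a
    same-length template. Returns (best_type, similarity in [-1, 1]).
    """
    w = znorm(window)
    scores = {name: float(np.dot(w, znorm(t)) / len(w))
              for name, t in templates.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```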

    USING THE BITE COUNTER TO OVERCOME THE EFFECT OF PLATE SIZE ON FOOD INTAKE

    According to a recent National Health and Nutrition Examination Survey, overweight and obesity have reached epidemic levels in the United States (Flegal et al., 2010; NHANES, 2010). There are many treatments for overweight and obesity, the most popular being behavioral interventions (Berkel et al., 2005). Self-monitoring is one of the most important factors of successful behavioral interventions (Baker & Kirschenbaum, 1993). The Bite Counter is a newly developed tool for weight loss that aids in the self-monitoring process (Dong et al., 2011). The purpose of the current study was to determine if bite count feedback and an instruction on the number of bites to take could overcome the known environmental cue of plate size, where eating from larger plates causes individuals to eat more (Wansink, 2004). Data were collected from 112 participants eating a meal of macaroni and cheese in a laboratory setting. In a 2x2 design, the participants were assigned to one of four conditions: instruction given and small plate, instruction given and large plate, instruction not given and small plate, or instruction not given and large plate. Grams consumed and bites taken were measured post meal as the main dependent variables. A 2x2 ANOVA of grams consumed revealed a main effect of INSTRUCTION (F(1,104) = 5.297, p = .023, η² = .048) such that those given an instruction to take 22 bites consumed more macaroni and cheese, a main effect of PLATE SIZE (F(1,104) = 5.798, p = .018, η² = .053) such that those eating from a large plate consumed more macaroni and cheese, and an interaction (F(1,104) = 7.695, p = .007, η² = .069) such that the given instruction partially overcame the effect of plate size on grams consumed. A 2x2 ANOVA of bites taken revealed a main effect of INSTRUCTION (F(1,104) = 7.47, p = .007, η² = .067) such that those given an instruction to take 22 bites took more bites, a main effect of PLATE SIZE (F(1,104) = 14.264, p < .001, η² = .121) such that those eating from a large plate took more bites, and an interaction (F(1,104) = 14.964, p < .001, η² = .126) such that the given instruction partially overcame the effect of plate size on number of bites taken. The results suggest that an instruction on the number of bites to take, along with feedback on the number of bites taken, can partially overcome the known environmental cue of plate size.
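    For readers who want to reproduce this style of analysis, a 2x2 between-subjects ANOVA with an interaction term can be run in a few lines with statsmodels; the file and column names below are hypothetical stand-ins for the study's variables.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical layout: one row per participant, with grams consumed and
# the two categorical factors (instruction: yes/no, plate_size: small/large).
df = pd.read_csv("plate_study.csv")

# Main effects of instruction and plate size, plus their interaction.
model = ols("grams ~ C(instruction) * C(plate_size)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II sums of squares
```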

    Detecting Periods of Eating in Everyday Life by Tracking Wrist Motion — What is a Meal?

    Eating is one of the most basic activities observed in sentient animals, a behavior so natural that humans often eat without giving the activity a second thought. Unfortunately, this often leads to consuming more calories than expended, which can cause weight gain, a leading cause of disease and death. This proposal describes research in methods to automatically detect periods of eating by tracking wrist motion so that calorie consumption can be tracked. We first briefly discuss how obesity is caused by an imbalance in calorie intake and expenditure. Calorie consumption and expenditure can be tracked manually using tools like paper diaries; however, it is well known that human bias can affect the accuracy of such tracking. Researchers in the upcoming field of automated dietary monitoring (ADM) are attempting to track diet using electronic methods in an effort to mitigate this bias. We attempt to replicate a previous algorithm that detects eating by tracking wrist motion electronically. The previous algorithm was evaluated on data collected from 43 subjects using an iPhone as the sensor. Periods of time are segmented first, and then classified using a naive Bayesian classifier (a minimal stand-in is sketched below). For replication, we describe the collection of the Clemson all-day data set (CAD), a free-living eating activity dataset containing 4,680 hours of wrist motion collected from 351 participants - the largest of its kind known to us. We learn that while different sensors are available to log wrist acceleration data, no unified convention exists, so this data must be transformed between conventions. We learn that the performance of the eating detection algorithm is affected by changes in the sensors used to track wrist motion, increased variability in behavior due to a larger participant pool, and the ratio of eating to non-eating in the dataset. We learn that commercially available acceleration sensors contain noise in their reported readings, which affects wrist tracking specifically because of the low magnitude of wrist acceleration. Commercial accelerometers can have noise up to 0.06g, which is acceptable in applications like automobile crash testing or pedestrian indoor navigation, but not in ones using wrist motion. We quantify linear acceleration noise in our free-living dataset. We explain sources of noise, a method to mitigate it, and also evaluate the effect of this noise on the eating detection algorithm. By visualizing periods of eating in the collected dataset we learn that people often conduct secondary activities while eating, such as walking, watching television, working, and doing household chores. These secondary activities cause wrist motions that obscure the wrist motions associated with eating, which increases the difficulty of detecting periods of eating (meals). Subjects reported conducting secondary activities in 72% of meals. Analysis of wrist motion data revealed that the wrist was resting 12.8% of the time during self-reported meals, compared to only 6.8% of the time in a cafeteria dataset. Walking motion was found during 5.5% of the time during meals in free-living, compared to 0% in the cafeteria. Augmenting an eating detection classifier to include walking and resting detection improved the average per person accuracy from 74% to 77% on our free-living dataset (t[353] = 7.86, p < 0.001). This suggests that future data collections for eating activity detection should also collect detailed ground truth on secondary activities being conducted during eating.
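    A minimal stand-in for that segment-then-classify pipeline, using scikit-learn's Gaussian naive Bayes on simple per-window statistics of the acceleration magnitude. The feature set, window length, sampling rate, and synthetic data are illustrative choices, not the original paper's exact configuration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def window_features(acc, fs, win_s=60):
    """Summarize tri-axial acceleration (N, 3) into per-window features:
    mean and std of the acceleration magnitude over `win_s`-second windows."""
    mag = np.linalg.norm(acc, axis=1)
    n = int(win_s * fs)
    windows = mag[: len(mag) // n * n].reshape(-1, n)
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1)])

# Synthetic demo: 10 minutes of low motion vs. 10 minutes of livelier motion.
rng = np.random.default_rng(0)
fs = 15  # Hz, illustrative sampling rate
quiet = rng.normal(0, 0.02, size=(fs * 600, 3))
eating = rng.normal(0, 0.08, size=(fs * 600, 3))
X = np.vstack([window_features(quiet, fs), window_features(eating, fs)])
y = np.array([0] * 10 + [1] * 10)   # one label per 60 s window
clf = GaussianNB().fit(X, y)
print(clf.predict(X[:3]))           # expected: non-eating for quiet windows
```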
Finally, learning from this data collection, we describe a convolutional neural network (CNN) to detect periods of eating by tracking wrist motion during everyday life. Eating uses hand-to-mouth gestures for ingestion, each of which lasts approximately 1-5 seconds. The novelty of our new approach is that we analyze a much longer window (0.5-15 min) that can contain other gestures related to eating, such as cutting or manipulating food, preparing foods for consumption, and resting between ingestion events. The context of these other gestures can improve the detection of periods of eating. We found that accuracy at detecting eating increased by 15% in longer windows compared to shorter windows. Overall results on CAD were 89% detection of meals with 1.7 false positives for every true positive (FP/TP), and a time-weighted accuracy of 80%.
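    As a hedged sketch of the longer-window idea: a small 1-D CNN in PyTorch that consumes several minutes of tri-axial wrist motion at once and outputs an eating / non-eating score. The architecture, window length, and sampling rate are illustrative; the thesis's actual network is not reproduced here.

```python
import torch
import torch.nn as nn

class EatingCNN(nn.Module):
    """Toy 1-D CNN over a long wrist-motion window (channels = x, y, z)."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_ch, 16, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),        # summarize the whole window
        )
        self.classifier = nn.Linear(64, 2)  # eating vs. non-eating logits

    def forward(self, x):                   # x: (batch, 3, samples)
        return self.classifier(self.features(x).squeeze(-1))

# Six-minute windows of 15 Hz tri-axial data (both values illustrative).
window = torch.randn(8, 3, 15 * 360)
logits = EatingCNN()(window)
print(logits.shape)  # torch.Size([8, 2])
```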