
    Statistical models for meal-level estimation of mass and energy intake using features derived from video observation and a chewing sensor

    Accurate and objective assessment of energy intake remains an ongoing problem. We used features derived from annotated video observation and a chewing sensor to predict mass and energy intake during a meal without participant self-report. Thirty participants each consumed four different meals in a laboratory setting and wore a chewing sensor while being videotaped. Subject-independent models were derived from bite, chew, and swallow features obtained from either video observation or information extracted from the chewing sensor. A forward selection procedure with multiple regression analysis was used to choose the best model. The best estimates of meal mass and energy intake had (mean ± standard deviation) absolute percentage errors of 25.2% ± 18.9% and 30.1% ± 33.8%, respectively, and mean ± standard deviation estimation errors of −17.7 ± 226.9 g and −6.1 ± 273.8 kcal, using features derived from both video observations and sensor data. Both video annotation and sensor-derived features may be utilized to objectively quantify energy intake. Funding: DK10079604 - Foundation for the National Institutes of Health (Foundation for the National Institutes of Health, Inc.). Published version.
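
    The modeling step described above (multiple regression with forward feature selection over bite, chew, and swallow features) can be illustrated with a minimal sketch. The code below is not the authors' implementation; the feature matrix, the target (meal mass in grams), the selection cap, and the use of scikit-learn cross-validation are assumptions made for illustration.

        # Minimal sketch: forward feature selection with multiple linear regression.
        # X: rows = meals, columns = hypothetical bite/chew/swallow features;
        # y: meal mass in grams (or energy in kcal).
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        def forward_select(X, y, max_features=5):
            """Greedily add the feature that most improves cross-validated R^2."""
            selected, remaining, best = [], list(range(X.shape[1])), -np.inf
            while remaining and len(selected) < max_features:
                score, j = max(
                    (cross_val_score(LinearRegression(), X[:, selected + [k]], y,
                                     cv=5, scoring="r2").mean(), k)
                    for k in remaining)
                if score <= best:
                    break                      # no further improvement; stop adding features
                best = score
                selected.append(j)
                remaining.remove(j)
            model = LinearRegression().fit(X[:, selected], y)
            return selected, model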

    Early Detection of the Initiation of Sit-to-Stand Posture Transitions Using Orthosis-Mounted Sensors

    Assistance during sit-to-stand (SiSt) transitions for frail elderly individuals may be provided by powered orthotic devices. Control of a powered orthosis may be performed by means of electromyography (EMG), which requires direct contact of measurement electrodes with the skin. The purpose of this study was to determine whether a non-EMG-based method, using inertial sensors placed at different positions on the orthosis and a lightweight pattern recognition algorithm, can accurately identify SiSt transitions without false positives. A novel method is proposed to eliminate false positives based on a two-stage design: stage one detects the sitting posture; stage two recognizes the initiation of a SiSt transition from a sitting position. The method was validated using data from 10 participants who performed 34 different activities and posture transitions. Features were obtained from the sensor signals and combined into lagged epochs. A reduced feature set was selected using the minimum-redundancy-maximum-relevance (mRMR) algorithm followed by forward feature selection. To obtain a recognition model with low computational complexity, we compared an extreme learning machine (ELM) and a multilayer perceptron (MLP) for both stages of the recognition algorithm. Both classifiers accurately identified all posture transitions with no false positives. The average detection time was 0.19 ± 0.33 s for the ELM and 0.13 ± 0.32 s for the MLP. The MLP classifier exhibited lower time complexity in the recognition phase than the ELM; however, the ELM presented lower computational demands in the training phase. The results demonstrate that the proposed algorithm could potentially be adopted to control a powered orthosis.
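
    A minimal sketch of the two-stage gating idea follows; it is not the authors' implementation. The epoch size, feature layout, class labels, and classifier settings are assumptions, and scikit-learn's MLPClassifier stands in for both the MLP and the ELM discussed above (trained here on synthetic data only to make the sketch runnable).

        # Minimal sketch of a two-stage detector: stage 1 recognizes the sitting
        # posture; stage 2 is consulted only while sitting, which suppresses
        # false positives during other activities.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        SITTING, SIST_INIT = 1, 1              # hypothetical class labels

        def lagged_epochs(frames, lags=3):
            """Stack each feature vector with the preceding `lags` vectors."""
            return np.asarray([np.concatenate(frames[i - lags:i + 1])
                               for i in range(lags, len(frames))])

        stage1 = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000)  # sitting vs. other
        stage2 = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000)  # SiSt initiation vs. seated

        def detect_sist(epoch):
            """Flag a sit-to-stand initiation only when both stages agree."""
            x = epoch.reshape(1, -1)
            if stage1.predict(x)[0] != SITTING:
                return False                   # stage 1 gate: ignore non-sitting epochs
            return bool(stage2.predict(x)[0] == SIST_INIT)

        # Synthetic training data (stand-in for labeled lagged epochs), used only
        # so the sketch runs end to end: 9-axis features stacked over 4 frames.
        rng = np.random.default_rng(0)
        X_demo = rng.normal(size=(200, 36))
        stage1.fit(X_demo, rng.integers(0, 2, 200))
        stage2.fit(X_demo, rng.integers(0, 2, 200))
        print(detect_sist(X_demo[0]))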

    Validation of Sensor-Based Food Intake Detection by Multicamera Video Observation in an Unconstrained Environment

    Video observation has been widely used to provide ground truth for wearable food intake monitoring systems in controlled laboratory conditions; however, it requires participants to be confined to a defined space. The purpose of this analysis was to test an alternative approach for establishing activity types and food intake bouts in a relatively unconstrained environment. The accuracy of a wearable system for assessing food intake was compared with that of video observation, and the inter-rater reliability of annotation was also evaluated. Forty participants were enrolled. Multiple participants were simultaneously monitored in a four-bedroom apartment using six cameras for three days each. Participants could leave the apartment overnight and for short periods during the day, during which time monitoring did not take place. A wearable system (Automatic Ingestion Monitor, AIM) was used to detect and monitor participants’ food intake at a resolution of 30 s using a neural network classifier. Two food intake detection models were tested: one trained on data from an earlier study and the other on the current study’s data using leave-one-out cross-validation. Three trained human raters annotated the videos for major activities of daily living, including eating, drinking, resting, walking, and talking, and further annotated individual bites and chewing bouts for each food intake bout. For inter-rater reliability, the raters achieved an average (± standard deviation (STD)) kappa of 0.74 (±0.02) for activity annotation and an average kappa (Light’s kappa) of 0.82 (±0.04) for food intake annotation. For validity, AIM food intake detection matched human video-annotated food intake with kappas of 0.77 (±0.10) for activity annotation and 0.78 (±0.12) for food intake bout annotation. A one-way ANOVA suggested no statistically significant differences among the average eating durations estimated from the raters’ annotations and the AIM predictions (p-value = 0.19). These results suggest that the AIM provides accuracy comparable to video observation and may be used to reliably detect food intake in multi-day observational studies.
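
    Epoch-level agreement of the kind reported above can be expressed as Cohen's kappa. The sketch below is illustrative only; the 30 s label sequences are synthetic stand-ins for the AIM classifier output and a rater's video annotation.

        # Minimal sketch: agreement between video-annotated and AIM-predicted
        # food intake, one label per 30 s epoch (1 = intake, 0 = no intake).
        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        video_labels = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0, 0])  # rater annotation (synthetic)
        aim_labels   = np.array([0, 0, 1, 1, 0, 0, 0, 1, 0, 0])  # AIM output (synthetic)

        kappa = cohen_kappa_score(video_labels, aim_labels)
        print(f"AIM vs. video annotation kappa: {kappa:.2f}")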

    Meal Microstructure Characterization from Sensor-Based Food Intake Detection

    Wearable sensors can be used to avoid the pitfalls of self-reported dietary intake. Many food ingestion sensors can automatically detect food intake at time resolutions ranging from 23 ms to 8 min, but there is no defined standard time resolution for accurately measuring ingestive behavior or meal microstructure. This paper aims to estimate the time resolution needed to accurately represent the microstructure of meals, such as the duration of an eating episode, the duration of actual ingestion, and the number of eating events. Twelve participants wore the Automatic Ingestion Monitor (AIM) and kept a standard diet diary to report their food intake in free-living conditions for 24 h. As a reference, participants were also asked to mark food intake with a push button sampled every 0.1 s. The duration of eating episodes, duration of ingestion, and number of eating events were computed from the food diary, the AIM, and the push button resampled at different time resolutions (0.1–30 s). ANOVA and multiple comparison tests showed that the duration of eating episodes estimated from the diary differed significantly from that estimated by the AIM and the push button (p-value < 0.001). There were no significant differences in the number of eating events for push button resolutions of 0.1, 1, and 5 s, but there were significant differences at resolutions of 10–30 s (p-value < 0.05). The results suggest that the time resolution of sensor-based food intake detection should be ≤5 s to accurately detect meal microstructure. Furthermore, the AIM provides a more accurate measurement of eating episode duration than the diet diary.
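
    The resolution analysis above can be illustrated by downsampling a push-button reference and recomputing the microstructure metrics at each epoch length. The sketch below is not the study pipeline; the button signal is synthetic, and the rule of marking an epoch as ingestion when any press falls inside it is an assumption.

        # Minimal sketch: resample a 0.1 s push-button signal to coarser epochs
        # and recompute ingestion duration and number of eating events.
        import numpy as np

        def resample(button, base_dt=0.1, epoch_dt=5.0):
            """Mark an epoch as ingestion if the button was pressed at any point in it."""
            n = int(round(epoch_dt / base_dt))
            trimmed = button[: len(button) // n * n].reshape(-1, n)
            return trimmed.max(axis=1)

        def microstructure(epochs, epoch_dt):
            """Total ingestion time and number of eating events (runs of consecutive 1s)."""
            ingestion_s = float(epochs.sum()) * epoch_dt
            events = int(np.sum(np.diff(np.concatenate(([0], epochs))) == 1))
            return ingestion_s, events

        rng = np.random.default_rng(0)
        button = (rng.random(6000) < 0.02).astype(int)     # 10 min of 0.1 s samples (synthetic)
        for dt in (0.1, 1, 5, 10, 30):
            dur, ev = microstructure(resample(button, epoch_dt=dt), dt)
            print(f"{dt:>4} s resolution: {dur:6.1f} s ingestion, {ev:3d} events")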