
    Evaluation of Chewing and Swallowing Sensors for Monitoring Ingestive Behavior

    Monitoring Ingestive Behavior (MIB) of individuals is of special importance to identify and treat eating patterns associated with obesity and eating disorders. Current methods for MIB require subjects to report every meal consumed, which is burdensome and tends to increase reporting bias over time. This study presents an evaluation of the burden imposed by two wearable sensors for MIB during unrestricted food intake: a strain sensor to detect chewing events and a throat microphone to detect swallowing sounds. A total of 30 healthy subjects with various levels of adiposity participated in experiments involving the consumption of four meals in four different visits. A questionnaire was handed to subjects at the end of the last visit to evaluate the burden of the sensors in terms of the comfort levels experienced. Results showed that the sensors presented high comfort levels, as subjects indicated that the way they ate their meal was not considerably affected by the presence of the sensors. A statistical analysis showed that the chewing sensor presented significantly higher comfort levels than the swallowing sensor. The outcomes of this study confirmed the suitability of the chewing and swallowing sensors for MIB and highlighted important aspects of comfort that should be addressed to obtain acceptable and less burdensome wearable sensors for MIB.
    Affiliations: Fontana, Juan Manuel (University of Alabama, United States; Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina); Sazonov, Edward S. (University of Alabama, United States)
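    The abstract does not name the statistical test used to compare comfort between the two sensors; the sketch below illustrates one reasonable choice, a paired Wilcoxon signed-rank test on per-subject comfort ratings. All values are made up for illustration.

```python
# Hypothetical sketch: paired comparison of per-subject comfort ratings for the
# chewing and swallowing sensors. The test choice and the ratings below are
# assumptions, not taken from the paper.
import numpy as np
from scipy.stats import wilcoxon

# comfort_chewing[i] and comfort_swallowing[i]: rating given by subject i
comfort_chewing = np.array([5, 4, 5, 5, 4, 5, 3, 5, 4, 5])      # illustrative values
comfort_swallowing = np.array([4, 3, 4, 5, 3, 4, 3, 4, 3, 4])   # illustrative values

stat, p_value = wilcoxon(comfort_chewing, comfort_swallowing)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Comfort differs significantly between the two sensors.")
```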

    A novel approach for food intake detection using electroglottography

    Many methods for monitoring diet and food intake rely on subjects self-reporting their daily intake. These methods are subjective, potentially inaccurate, and need to be replaced by more accurate and objective methods. This paper presents a novel approach that uses an electroglottograph (EGG) device for objective and automatic detection of food intake. Thirty subjects participated in a four-visit experiment involving the consumption of meals with self-selected content. Variations in the electrical impedance across the larynx caused by the passage of food during swallowing were captured by the EGG device. To compare the performance of the proposed method with a well-established acoustical method, a throat microphone was used for monitoring swallowing sounds. Both signals were segmented into non-overlapping epochs of 30 s and processed to extract wavelet features. Subject-independent classifiers were trained, using artificial neural networks, to identify periods of food intake from the wavelet features. Results from leave-one-out cross-validation showed an average per-epoch classification accuracy of 90.1% for the EGG-based method and 83.1% for the acoustic-based method, demonstrating the feasibility of using an EGG for food intake detection.
    Affiliations: Farooq, Muhammad (University of Alabama, United States); Fontana, Juan Manuel (University of Alabama, United States; Universidad Nacional de Río Cuarto, Facultad de Ingeniería, Departamento de Mecánica, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Científico Tecnológico Conicet - Córdoba, Argentina); Sazonov, Edward (University of Alabama, United States)
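    A minimal sketch of the described pipeline, assuming a PyWavelets/scikit-learn stack (the abstract does not specify the tools): the EGG signal is split into non-overlapping 30 s epochs, wavelet-energy features are extracted per epoch, and a subject-independent neural network is evaluated with leave-one-subject-out cross-validation. Sampling rate, wavelet, and network size are assumptions.

```python
# Sketch of epoch segmentation + wavelet features + ANN with leave-one-subject-out CV.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

FS = 1000          # assumed sampling rate in Hz (not stated here)
EPOCH_SEC = 30     # non-overlapping epoch length from the paper

def wavelet_features(epoch, wavelet="db4", level=5):
    """Energy of each wavelet decomposition level as a feature vector."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def epochs(signal, fs=FS, sec=EPOCH_SEC):
    """Split a 1-D signal into non-overlapping epochs of `sec` seconds."""
    step = fs * sec
    return [signal[i:i + step] for i in range(0, len(signal) - step + 1, step)]

def build_dataset(signals, epoch_labels, subject_ids):
    """signals: one EGG recording per visit; epoch_labels: intake label per epoch;
    subject_ids: subject identifier per recording (for subject-independent CV)."""
    X, y, groups = [], [], []
    for sig, labs, sid in zip(signals, epoch_labels, subject_ids):
        for ep, lab in zip(epochs(sig), labs):
            X.append(wavelet_features(ep))
            y.append(lab)
            groups.append(sid)
    return np.array(X), np.array(y), np.array(groups)

# With real data:
# X, y, groups = build_dataset(signals, epoch_labels, subject_ids)
# clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000)
# scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
```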

    Automatic food intake detection based on swallowing sounds

    This paper presents a novel, fully automatic food intake detection methodology, an important step toward objective monitoring of ingestive behavior. The aim of such monitoring is to improve our understanding of eating behaviors associated with obesity and eating disorders. The proposed methodology consists of two stages. First, acoustic detection of swallowing instances is performed based on mel-scale Fourier spectrum features and classification using support vector machines. Principal component analysis and a smoothing algorithm are used to improve swallowing detection accuracy. Second, the frequency of swallowing is used as a predictor for the detection of food intake episodes. The proposed methodology was tested on data collected from 12 subjects with various degrees of adiposity. Average accuracies of >80% and >75% were obtained for intra-subject and inter-subject models, respectively, with a temporal resolution of 30 s. Results obtained on 44.1 h of data with a total of 7305 swallows show that detection accuracies are comparable for obese and lean subjects. They also suggest the feasibility of food intake detection based on swallowing sounds and the potential of the proposed methodology for automatic monitoring of ingestive behavior. Based on a wearable, non-invasive acoustic sensor, the proposed methodology may potentially be used in free-living conditions.
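    A rough sketch of the two-stage approach, assuming librosa and scikit-learn as the tooling (not stated in the paper): mel-spectrum features feed a PCA + SVM swallow detector, the frame-level decisions are smoothed, and the swallow count within each 30 s window serves as the food-intake predictor. The threshold and window parameters are illustrative, not the paper's values.

```python
# Two-stage sketch: (1) frame-level swallow detection from mel-spectrum features,
# (2) intake detection from swallowing frequency per 30 s window.
import numpy as np
import librosa
from scipy.signal import medfilt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mel_features(frame, sr):
    """Mean log mel spectrum of one audio frame."""
    mel = librosa.feature.melspectrogram(y=frame, sr=sr, n_mels=40)
    return np.log(mel + 1e-10).mean(axis=1)

# Stage 1: PCA + SVM swallow detector over per-frame mel features
swallow_detector = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
# swallow_detector.fit(X_train_frames, y_train_frames)

def detect_intake(frame_predictions, frames_per_30s, swallow_threshold=3):
    """Stage 2: smooth the binary frame-level decisions, then flag each 30 s window
    whose swallow count exceeds a (hypothetical) threshold as food intake."""
    smoothed = medfilt(np.asarray(frame_predictions, dtype=float), kernel_size=5)
    n_windows = len(smoothed) // frames_per_30s
    intake = []
    for w in range(n_windows):
        window = smoothed[w * frames_per_30s:(w + 1) * frames_per_30s]
        intake.append(int(window.sum() >= swallow_threshold))
    return np.array(intake)
```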

    Automatic identification of the number of food items in a meal using clustering techniques based on the monitoring of swallowing and chewing

    The number of distinct foods consumed in a meal is of significant clinical concern in the study of obesity and other eating disorders. This paper proposes the use of information contained in chewing and swallowing sequences for meal segmentation by food type. Data collected from experiments with 17 volunteers were analyzed using two different clustering techniques. First, an unsupervised clustering technique, Affinity Propagation (AP), was used to automatically identify the number of segments within a meal. Second, the performance of the unsupervised AP method was compared to a supervised learning approach based on Agglomerative Hierarchical Clustering (AHC). While the AP method was able to obtain 90% accuracy in predicting the number of food items, AHC achieved an accuracy >95%. Experimental results suggest that the proposed models of automatic meal segmentation may be utilized as part of an integrated application for objective Monitoring of Ingestive Behavior in free-living conditions.
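    An illustrative sketch using the scikit-learn implementations of both clustering methods (the paper does not name a library): Affinity Propagation infers the number of food items on its own, while Agglomerative Hierarchical Clustering is given the number of clusters to form. The event features below are hypothetical placeholders for the chewing/swallowing sequence descriptors.

```python
# Clustering chew/swallow events within one meal into per-food segments.
import numpy as np
from sklearn.cluster import AffinityPropagation, AgglomerativeClustering

# X: one feature vector per swallow event within a meal, e.g.
# [time of the event, number of preceding chews, chewing duration] (placeholders)
rng = np.random.default_rng(0)
X = rng.random((60, 3))

# Unsupervised: the number of exemplars found is the estimated number of foods
ap = AffinityPropagation(random_state=0).fit(X)
n_foods_ap = len(ap.cluster_centers_indices_)

# Supervised alternative: the cluster count is supplied (e.g. learned from
# training meals); AHC only assigns events to segments
ahc = AgglomerativeClustering(n_clusters=3, linkage="ward").fit(X)
segment_labels = ahc.labels_

print(f"AP estimated {n_foods_ap} food items; AHC segment labels: {segment_labels}")
```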

    Reproducibility of Dietary Intake Measurement From Diet Diaries, Photographic Food Records, and a Novel Sensor Method

    Objective: No data currently exist on the reproducibility of photographic food records compared to diet diaries, two commonly used methods to measure dietary intake. Our aim was to examine the reproducibility of diet diaries, photographic food records, and a novel electronic sensor method, consisting of counts of chews and swallows obtained using wearable sensors and video analysis, for estimating energy intake. Method: This was a retrospective analysis of data from a previous study, in which 30 participants (15 female), aged 29 ± 12 years and with a BMI of 27.9 ± 5.5 kg/m², consumed three identical meals on different days. Four different methods were used to estimate total mass and energy intake on each day: (1) weighed food record; (2) photographic food record; (3) diet diary; and (4) a novel mathematical model based on counts of chews and swallows (CCS models) obtained via electronic sensors and a video monitoring system. The study staff conducted weighed food records for all meals, took pre- and post-meal photographs, and ensured that diet diaries were completed by participants at the end of each meal. All methods were compared against the weighed food record, which was used as the reference method. Results: Reproducibility was significantly different between the diet diary and the photographic food record for total energy intake (p = 0.004). The photographic record had greater reproducibility than the diet diary for all parameters measured. For total energy intake, the novel sensor method exhibited good reproducibility [repeatability coefficient (RC) = 59.9 (45.9, 70.4)], which was better than that of the diet diary [RC = 79.6 (55.5, 103.3)] but not as repeatable as the photographic method [RC = 43.4 (32.1, 53.9)]. Conclusion: Photographic food records offer superior precision to the diet diary and, therefore, would be valuable for longitudinal studies with repeated measures of dietary intake. A novel electronic sensor also shows promise for the collection of longitudinal dietary intake data.
    Affiliations: Fontana, Juan Manuel (Universidad Nacional de Río Cuarto, Instituto para el Desarrollo Agroindustrial y de la Salud - Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Científico Tecnológico Conicet - Córdoba, Argentina); Pan, Zhaoxing (University of Colorado, United States); Sazonov, Edward S. (University of Alabama, United States); McCrory, Megan A. (Boston University, United States); Thomas, J. Graham (Brown University, United States); McGrane, Kelli S. (University of Colorado, United States); Marden, Tyson (University of Colorado, United States); Higgins, Janine A. (University of Colorado, United States)
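    The study reports repeatability coefficients (RC) but does not print its formula; a common convention is RC = 1.96 · √2 · within-subject SD, with the within-subject SD estimated across the repeated identical meals. The sketch below uses that convention with made-up numbers, so it may differ from the paper's exact calculation.

```python
# Hedged sketch of a repeatability coefficient for repeated intake estimates
# of the same meal, using the common RC = 1.96 * sqrt(2) * within-subject SD.
import numpy as np

def repeatability_coefficient(measurements):
    """measurements: array of shape (n_subjects, n_repeats), e.g. energy intake
    (kcal) estimated by one method for three identical meals per subject."""
    measurements = np.asarray(measurements, dtype=float)
    within_subject_var = measurements.var(axis=1, ddof=1).mean()
    return 1.96 * np.sqrt(2.0 * within_subject_var)

# Illustrative use with made-up numbers (three repeated meals, four subjects):
intake_kcal = [[650, 700, 690], [540, 520, 585], [810, 790, 820], [430, 470, 455]]
print(f"RC = {repeatability_coefficient(intake_kcal):.1f} kcal")
```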

    Statistical models for meal-level estimation of mass and energy intake using features derived from video observation and a chewing sensor

    Accurate and objective assessment of energy intake remains an ongoing problem. We used features derived from annotated video observation and a chewing sensor to predict mass and energy intake during a meal without participant self-report. Thirty participants each consumed four different meals in a laboratory setting and wore a chewing sensor while being videotaped. Subject-independent models were derived from bite, chew, and swallow features obtained from either video observation or information extracted from the chewing sensor. With multiple regression analysis, a forward selection procedure was used to choose the best model. The best estimates of meal mass and energy intake had (mean ± standard deviation) absolute percentage errors of 25.2% ± 18.9% and 30.1% ± 33.8%, respectively, and mean ± standard deviation estimation errors of −17.7 ± 226.9 g and −6.1 ± 273.8 kcal using features derived from both video observations and sensor data. Both video annotation and sensor-derived features may be utilized to objectively quantify energy intake.
    Funding: DK10079604 - Foundation for the National Institutes of Health, Inc. Published version.
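    A sketch of forward-selection multiple regression using scikit-learn (the statistical software used in the paper is not specified). The bite/chew/swallow/duration features and the synthetic target are hypothetical stand-ins for the video- and sensor-derived predictors described above.

```python
# Forward feature selection followed by a linear regression on the chosen features.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

# X: per-meal features (e.g. bites, chews, swallows, duration); y: meal mass in grams.
rng = np.random.default_rng(0)
X = rng.random((120, 4))                                  # 30 subjects x 4 meals, 4 features
y = 200 + 300 * X[:, 1] + 150 * X[:, 2] + rng.normal(0, 20, 120)  # synthetic target

selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=2,
                                     direction="forward", cv=5)
selector.fit(X, y)
model = LinearRegression().fit(X[:, selector.get_support()], y)

pred = model.predict(X[:, selector.get_support()])
mape = np.mean(np.abs(pred - y) / np.abs(y)) * 100        # mean absolute percentage error
print(f"Selected features: {selector.get_support()}, MAPE = {mape:.1f}%")
```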

    Egocentric Image Captioning for Privacy-Preserved Passive Dietary Intake Monitoring

    Camera-based passive dietary intake monitoring is able to continuously capture the eating episodes of a subject, recording rich visual information, such as the type and volume of food being consumed, as well as the eating behaviours of the subject. However, there is currently no method that is able to incorporate these visual cues and provide a comprehensive context of dietary intake from passive recording (e.g., whether the subject is sharing food with others, what food the subject is eating, and how much food is left in the bowl). On the other hand, privacy is a major concern when egocentric wearable cameras are used for capturing. In this paper, we propose a privacy-preserved, secure solution (i.e., egocentric image captioning) for dietary assessment with passive monitoring, which unifies food recognition, volume estimation, and scene understanding. By converting images into rich text descriptions, nutritionists can assess individual dietary intake based on the captions instead of the original images, reducing the risk of privacy leakage from images. To this end, an egocentric dietary image captioning dataset has been built, which consists of in-the-wild images captured by head-worn and chest-worn cameras in field studies in Ghana. A novel transformer-based architecture is designed to caption egocentric dietary images. Comprehensive experiments have been conducted to evaluate the effectiveness and to justify the design of the proposed architecture for egocentric dietary image captioning. To the best of our knowledge, this is the first work that applies image captioning to dietary intake assessment in real-life settings.
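    The paper's transformer architecture is novel and not reproduced here; the minimal PyTorch sketch below only illustrates the general encoder-decoder idea of captioning an egocentric dietary image, with a CNN backbone standing in for whatever visual encoder the authors actually used. All layer sizes are illustrative.

```python
# Generic image-captioning sketch: CNN features -> transformer decoder -> token logits.
import torch
import torch.nn as nn
import torchvision.models as models

class DietaryCaptioner(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=4):
        super().__init__()
        backbone = models.resnet50(weights=None)            # placeholder visual encoder
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.proj = nn.Linear(2048, d_model)                # map CNN channels to d_model
        self.embed = nn.Embedding(vocab_size, d_model)      # caption token embeddings
        decoder_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images)                        # (B, 2048, H', W')
        memory = self.proj(feats.flatten(2).transpose(1, 2))  # (B, H'*W', d_model)
        tgt = self.embed(captions)                          # (B, T, d_model)
        T = captions.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool,
                                       device=captions.device), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(hidden)                             # per-token vocabulary logits

# Example shapes only:
# logits = DietaryCaptioner(vocab_size=5000)(torch.randn(2, 3, 224, 224),
#                                            torch.randint(0, 5000, (2, 12)))
```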
