2,619 research outputs found

    Image-based food classification and volume estimation for dietary assessment: a review.

    A daily dietary assessment method, the 24-hour dietary recall, has commonly been used in nutritional epidemiology studies to capture detailed information about the food eaten by participants and to help understand their dietary behaviour. However, in this self-reporting technique, the food types and portion sizes reported depend heavily on users' subjective judgement, which can lead to biased and inaccurate dietary analysis results. As a result, a variety of visual-based dietary assessment approaches have been proposed recently. While these methods show promise in tackling issues in nutritional epidemiology studies, several challenges and forthcoming opportunities, as detailed in this study, still exist. This study provides an overview of the computing algorithms, mathematical models and methodologies used in the field of image-based dietary assessment. It also provides a comprehensive comparison of state-of-the-art approaches to food recognition and volume/weight estimation in terms of processing speed, model accuracy, efficiency and constraints. This is followed by a discussion of deep learning methods and their efficacy in dietary assessment. After a comprehensive exploration, we found that integrated dietary assessment systems combining different approaches could be a potential solution to the challenges of accurate dietary intake assessment.

    Formative evaluation of a mobile liquid portion size estimation interface for people with varying literacy skills

    Chronically ill people, especially those with low literacy skills, often have difficulty estimating portion sizes of liquids to help them stay within their recommended fluid limits. There is a plethora of mobile applications that can help people monitor their nutritional intake, but unfortunately these applications require the user to have high literacy and numeracy skills for portion size recording. In this paper, we present two studies in which the low- and high-fidelity versions of a portion size estimation interface, designed using the cognitive strategies adults employ for portion size estimation during diet recall studies, were evaluated by a chronically ill population with varying literacy skills. The low-fidelity interface was evaluated by ten patients, all of whom were able to accurately estimate portion sizes of various liquids with the interface. Eighteen participants did an in situ evaluation of the high-fidelity version, incorporated in a diet and fluid monitoring mobile application, for 6 weeks. Although the accuracy of the estimation could not be confirmed in the second study, the participants who actively interacted with the interface showed better health outcomes by the end of the study. Based on these findings, we provide recommendations for designing the next iteration of an accurate and low-literacy-accessible liquid portion size estimation mobile interface.

    A Survey on Automated Food Monitoring and Dietary Management Systems

    A healthy diet with balanced nutrition is key to the prevention of life-threatening diseases such as obesity, cardiovascular disease, and cancer. Recent advances in smartphone and wearable sensor technologies have led to a proliferation of food monitoring applications based on automated food image processing and eating episode detection, with the goal of overcoming the drawbacks of traditional manual food journaling, which is time-consuming, inaccurate, prone to underreporting, and suffers from low adherence. To provide users with nutritional information accompanied by insightful dietary advice, various techniques built on key computational learning principles have been explored. This survey presents a variety of methodologies and resources on this topic, along with unsolved problems, and closes with a perspective on the broader implications of this field.

    Quantification of energy intake using food image analysis

    Obtaining real-time and accurate estimates of energy intake while people reside in their natural environment is technically and methodologically challenging. The goal of this project is to estimate energy intake accurately in real-time, free-living conditions. In this study, we propose a computer vision based system that estimates energy intake from food pictures taken and emailed by subjects participating in the experiment. The system introduces a reference card inclusion procedure, which is used for geometric and photometric corrections. Image classification and segmentation methods are also incorporated into the system to enable fully automated decision making.
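The geometric side of a reference-card correction can be sketched simply: once a card of known physical size is detected in the image, its pixel width yields a metric scale factor. The following is an illustrative sketch only (function names and the calibration flow are assumptions, not the paper's actual code):

```python
# Standard ID-1 card width (credit-card sized), in millimetres.
CARD_WIDTH_MM = 85.6

def metric_scale(card_width_px: float) -> float:
    """Millimetres represented by one pixel, given the card's detected pixel width."""
    if card_width_px <= 0:
        raise ValueError("card width in pixels must be positive")
    return CARD_WIDTH_MM / card_width_px

def to_millimetres(length_px: float, card_width_px: float) -> float:
    """Convert any pixel length measured in the same image plane to millimetres."""
    return length_px * metric_scale(card_width_px)
```

In practice the card would first be rectified to the image plane via a homography; this sketch assumes that rectification has already been done.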

    Automatic Food Intake Assessment Using Camera Phones

    Obesity is becoming an epidemic phenomenon in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. It is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions. The accuracy of the results relies heavily on uncertain factors such as the user's memory, food knowledge, and portion estimation, and is therefore often compromised. Accurate and convenient dietary assessment methods are still lacking and are needed by both the general population and the research community. In this thesis, an automatic food intake assessment method using the cameras and inertial measurement units (IMUs) on smartphones was developed to help people foster a healthy lifestyle. With this method, users use their smartphones before and after a meal to capture images or videos of the meal. The smartphone recognizes food items, calculates the volume of the food consumed, and provides the results to users. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation. This thesis comprises five publications that address four specific goals: (1) to develop a prototype system with existing methods in order to review the literature, find the drawbacks of existing methods, and explore the feasibility of developing novel ones; (2) based on the prototype system, to investigate new food classification methods that improve recognition accuracy to a field application level; (3) to design indexing methods for large-scale image databases to facilitate the development of new food image recognition and retrieval algorithms; (4) to develop novel, convenient and accurate food volume estimation methods using only smartphones with cameras and IMUs. A prototype system was implemented to review existing methods.
An image feature detector and descriptor were developed, and a nearest-neighbor classifier was implemented to classify food items. A credit card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regular-shaped food items. To further increase accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was enhanced by reducing reliance on the marker and introducing IMUs. Sensor fusion techniques combining measurements from cameras and IMUs were explored to infer the metric scale of the 3D model as well as to reduce noise from these sensors.
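One common way camera/IMU fusion recovers metric scale, as described above, is to compare the IMU-integrated displacement (in metres) with the up-to-scale displacement from visual reconstruction. A minimal sketch of this general idea (an illustration, not the thesis's actual algorithm; names are assumptions):

```python
import math

def metric_scale_factor(visual_t, imu_t):
    """Scale of an up-to-scale 3D model.

    visual_t: camera translation between two views, in arbitrary
              reconstruction units (e.g. from structure-from-motion).
    imu_t:    the same motion measured by double-integrating IMU
              accelerations, in metres.
    """
    v = math.sqrt(sum(c * c for c in visual_t))
    m = math.sqrt(sum(c * c for c in imu_t))
    if v == 0.0:
        raise ValueError("visual displacement must be non-zero")
    return m / v  # metres per reconstruction unit
```

Real systems filter many such measurements (and the IMU noise) jointly, e.g. with a Kalman filter, rather than taking a single ratio.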

    GourmetNet: Food Segmentation Using Multi-Scale Waterfall Features With Spatial and Channel Attention

    Deep learning and computer vision are extensively used to solve problems in a wide range of domains, from automotive and manufacturing to healthcare and surveillance. Research in deep learning for food images has mainly been limited to food identification and detection. Food segmentation is an important problem, as it is the first step toward nutrition monitoring and food volume and calorie estimation. This research is intended to expand the horizons of deep learning and semantic segmentation by proposing a novel single-pass, end-to-end trainable network for food segmentation. Our novel architecture incorporates both channel attention and spatial attention information in an expanded multi-scale feature representation using the WASPv2 module. The refined features are processed with the advanced multi-scale waterfall module, which combines the benefits of cascade filtering and pyramid representations without requiring a separate decoder or postprocessing.
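Channel and spatial attention, in their simplest form, re-weight a feature map along different axes. The following is a heavily simplified numpy illustration of the two mechanisms (not GourmetNet's actual modules, which are learned layers inside the network):

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Re-weight each channel of a (C, H, W) map by a squashed global average."""
    weights = _sigmoid(feat.mean(axis=(1, 2)))   # (C,) one weight per channel
    return feat * weights[:, None, None]

def spatial_attention(feat):
    """Re-weight each spatial location by a squashed cross-channel average."""
    weights = _sigmoid(feat.mean(axis=0))        # (H, W) one weight per pixel
    return feat * weights[None, :, :]
```

In a trained network the pooling statistics feed small learned layers instead of a bare sigmoid, but the re-weighting structure is the same.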

    Collaborative design and feasibility assessment of computational nutrient sensing for simulated food-intake tracking in a healthcare environment

    One in four older adults (65 years and over) is living with some form of malnutrition. This increases their odds of hospitalization four-fold and is associated with decreased quality of life and increased mortality. In long-term care (LTC), residents have more complex care needs, and the proportion affected is a staggering 54%, primarily due to low intake. Tracking intake is important for monitoring whether residents are meeting their nutritional needs; however, current methods are time-consuming, subjective, and prone to large margins of error. This reduces the utility of the tracked data and makes it challenging to identify at-risk individuals in a timely fashion. While technologies exist for tracking food intake, they have not been designed for use within the LTC context and impose a large time burden on the user. Especially in light of the machine learning boom, there is a great opportunity to harness learnings from this domain and apply them to the field of nutrition for enhanced food-intake tracking. Additionally, current approaches to food-intake tracking are limited by the nutritional database to which they are linked, making generalizability a challenge. Drawing inspiration from current methods, the desires of end-users (primary users: personal support workers, registered staff, dietitians), and machine learning approaches suitable for this context in which limited data are available, we investigated novel methods for assessing needs in this environment and imagined an alternative approach. We leveraged image processing and machine learning to remove subjectivity while increasing accuracy and precision to support higher-quality food-intake tracking. This thesis presents the ideation, design, development, evaluation, and feasibility assessment of a collaboratively designed computational nutrient sensing system for simulated food-intake tracking in the LTC environment.
We sought to remove potential barriers to uptake through collaborative design and ongoing end-user engagement, developing solution concepts for a novel Automated Food Imaging and Nutrient Intake Tracking (AFINI-T) system while implementing the technology in parallel. More specifically, we demonstrated the effectiveness of applying a modified participatory iterative design process, modeled on the Google Sprint framework, in the LTC context, which identified priority areas and established functional criteria for usability and feasibility. Concurrently, we developed the novel AFINI-T system through the co-integration of image processing and machine learning, guided by the application of food-intake tracking in LTC, to address three questions: (1) where is there food? (i.e., food segmentation) and (2) how much food was consumed? (i.e., volume estimation), using a fully automatic imaging system for quantifying food intake. We proposed a novel deep convolutional encoder-decoder food network with depth-refinement (EDFN-D) using an RGB-D camera for quantifying a plate's remaining food volume relative to reference portions in whole and modified-texture foods. To determine (3) what foods are present (i.e., feature extraction and classification), we developed a convolutional autoencoder to learn meaningful food-specific features, and developed classifiers that leverage a priori information about when certain foods would be offered and the level of texture modification prescribed, to apply the real-world constraints of LTC. We sought to address real-world complexity by assessing a wide variety of food items through the construction of a simulated food-intake dataset emulating various degrees of food intake and modified textures (regular, minced, puréed). To ensure feasibility-related barriers to uptake were mitigated, we conducted a feasibility assessment using the collaboratively designed prototype.
Finally, this thesis explores the feasibility of applying biophotonic principles to food as a first step toward enhancing food database estimates. Motivated by a theoretical optical dilution model, a novel deep neural network (DNN) was evaluated for estimating the relative nutrient density of commercially prepared purées. For deeper analysis, we describe the link between colour and two optically active nutrients, vitamin A and anthocyanins, and suggest it may be feasible to use the optical properties of foods to enhance nutritional estimation. This research demonstrates a transdisciplinary approach to designing and implementing a novel food-intake tracking system that addresses several shortcomings of the current method. Upon translation, this system may provide additional insights to support more timely nutritional interventions through enhanced monitoring of nutritional intake status among LTC residents.
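The depth-based volume idea behind RGB-D intake quantification can be sketched as a height-map integral: subtract the with-food depth map from an empty-plate reference and sum the heights over each pixel's footprint. This is an illustrative sketch under simplifying assumptions (aligned, calibrated depth maps; uniform pixel footprint), not the EDFN-D pipeline itself:

```python
import numpy as np

def remaining_volume_ml(depth_empty_cm, depth_food_cm, pixel_area_cm2):
    """Approximate food volume (ml) from two aligned overhead depth maps.

    depth_empty_cm: camera-to-surface distance for the empty plate (cm)
    depth_food_cm:  same view with food present; food raises the surface,
                    so its depth values are smaller
    pixel_area_cm2: physical footprint of one pixel on the plate plane
    """
    height_cm = np.clip(depth_empty_cm - depth_food_cm, 0.0, None)
    return float(height_cm.sum()) * pixel_area_cm2  # 1 cm^3 == 1 ml
```

Consumed volume would then be the difference between the pre- and post-meal estimates.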

    A Systematic Literature Review With Bibliometric Meta-Analysis Of Deep Learning And 3D Reconstruction Methods In Image Based Food Volume Estimation Using Scopus, Web Of Science And IEEE Database

    Purpose: Estimation of food portions is necessary in image-based dietary monitoring techniques. The purpose of this systematic survey is to identify peer-reviewed literature on image-based food volume estimation methods in the Scopus, Web of Science and IEEE databases. It further presents a bibliometric analysis of image-based food volume estimation methods using 3D reconstruction and deep learning techniques. Design/methodology/approach: The Scopus, Web of Science and IEEE citation databases were used to gather the data. Using advanced keyword search and the PRISMA approach, relevant papers were extracted, selected and analyzed. The bibliographic data of the articles published in journals over the past twenty years were extracted. A deeper analysis was performed using bibliometric indicators with Microsoft Excel and VOSviewer. A comparative analysis of the most cited works in deep learning and 3D reconstruction methods is performed. Findings: This review summarizes the results from the extracted literature and traces research directions in food volume estimation methods. The bibliometric analysis and PRISMA search results suggest a broader taxonomy of image-based methods to estimate food volume in dietary management systems and projects. Deep learning and 3D reconstruction methods show better accuracy in the estimates than other approaches. The work also discusses the importance of diverse and robust image datasets for training accurate learning models for food volume estimation. Practical implications: The bibliometric analysis and systematic review give researchers, dieticians and practitioners insight into the research trends in the estimation of food portions and their accuracy. It also discusses the challenges in building food volume estimator models using deep learning and opens new research directions.
Originality/value: This study presents an overview of research on food volume estimation using deep learning and 3D reconstruction methods, covering works from 1995 to 2020. The findings present the five popular methods that have been used in image-based food volume estimation and show the research trends associated with the emerging 3D reconstruction and deep learning methodologies. Additionally, the work emphasizes the challenges in the use of these approaches and the need to develop more diverse benchmark image datasets for food volume estimation, including raw food and cooked food in all states, served in different containers.

    Validation Study of a Passive Image-Assisted Dietary Assessment Method with Automated Image Analysis Process

    Background: Image-assisted dietary assessment is being developed to enhance the accuracy of dietary assessment. This study validated a passive image-assisted dietary assessment method, with an emphasis on examining whether food shape and complexity influenced the results. Methods: A 2x2x2x2x3 mixed factorial design was used, with a between-subject factor of meal order and within-subject factors of food shape, food complexity, meal, and method of measurement, to validate the passive image-assisted dietary assessment method. Thirty men and women (22.7 ± 1.6 kg/m², 25.1 ± 6.6 years, 46.7% White) wore the Sony SmartEyeglass, which automatically took images while two meals containing four foods representing four food categories were consumed. Images from the first 5 minutes of each meal were coded and then compared to DietCam output for food identification. The comparison produced four outcomes: DietCam identifying food correctly in an image (True Positive), DietCam incorrectly identifying food in an image (False Positive), DietCam not identifying food in an image (False Negative), or DietCam correctly identifying that food is not in the image (True Negative). Participants' feedback about the Sony SmartEyeglass was obtained by a survey. Results: A total of 36,412 images were coded by raters and analyzed by DietCam, with raters coding that 92.4% of images contained food and DietCam coding that 76.3% of images contained food. A mixed factorial analysis of covariance revealed a significant main effect of percent agreement between DietCam and the raters' coded images [F(3,48) = 8.5, p < 0.0001]. The overall mean True Positive rate was 22.2 ± 3.6%, False Positive 1.2 ± 0.4%, False Negative 19.6 ± 5.0%, and True Negative 56.8 ± 7.2%. True Negative was significantly (p < 0.0001) different from all other percent agreement categories. No main effects of food shape or complexity were found.
Participants reported that they were not willing to wear the Sony SmartEyeglass in different types of dining experiences. Conclusion: DietCam is most accurate in identifying images that do not contain food. The platform from which the images are collected needs to be modified to enhance consumer acceptance.
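The four percent-agreement outcomes defined in the study can be tallied with a short routine over per-image (rater, DietCam) judgements. A minimal sketch (illustrative only; names are assumptions, not the study's code):

```python
def agreement_percentages(pairs):
    """pairs: iterable of (rater_sees_food, dietcam_sees_food) booleans,
    one pair per coded image. Returns percentages for the four outcomes."""
    counts = {"TP": 0, "FP": 0, "FN": 0, "TN": 0}
    for rater, dietcam in pairs:
        if rater and dietcam:
            counts["TP"] += 1     # both agree food is present
        elif dietcam:
            counts["FP"] += 1     # DietCam sees food the rater did not code
        elif rater:
            counts["FN"] += 1     # rater coded food that DietCam missed
        else:
            counts["TN"] += 1     # both agree no food is present
    total = sum(counts.values()) or 1
    return {k: 100.0 * v / total for k, v in counts.items()}
```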