173 research outputs found
Smartphone-based Calorie Estimation From Food Image Using Distance Information
Personal assistive systems for diet control can play a vital role in combating obesity. As smartphones have become inseparable companions for a large number of people around the world, a smartphone-based system is perhaps the best choice at the moment. Using such a system, people can take an image of their food right before eating and learn the calorie content based on the food items on the plate. In this paper, we propose a simple method that ensures both user flexibility and high accuracy at the same time. The proposed system captures food images from a fixed posture and estimates the volume of the food using simple geometry. Real-world experiments on arbitrarily chosen food items show that the proposed system works well for both regular and liquid food items.
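The distance-based volume idea above can be sketched with a pinhole camera model: a pixel measurement plus the camera-to-plate distance yields a real-world length, from which a simple solid approximates the volume. This is an illustrative sketch under assumed constants (focal length, cylinder shape), not the paper's exact method.

```python
import math

def pixel_to_real_length(pixel_length, distance_mm, focal_length_px):
    """Convert a measured pixel length to a real-world length (mm)
    using similar triangles: real = pixel * distance / focal."""
    return pixel_length * distance_mm / focal_length_px

def cylinder_volume_ml(diameter_mm, height_mm):
    """Approximate a plated food item as a cylinder; 1 ml = 1000 mm^3."""
    radius = diameter_mm / 2.0
    return math.pi * radius ** 2 * height_mm / 1000.0

# Hypothetical example: an item spanning 400 px, captured at 300 mm
# with an assumed focal length of 1500 px.
diameter = pixel_to_real_length(400, 300, 1500)  # 80.0 mm
volume = cylinder_volume_ml(diameter, 30)        # ~150.8 ml
print(diameter, round(volume, 1))
```

The fixed capture posture in the paper is what makes the distance term known without extra sensors.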
A Multi-Task Learning Approach for Meal Assessment
Balanced nutrition together with a proper diet plays a key role in the prevention of diet-related chronic diseases. Conventional dietary assessment methods are time-consuming, expensive, and prone to errors. New technology-based methods that provide reliable and convenient dietary assessment have emerged during the last decade. Advances in the field of computer vision permit the use of meal images to assess nutrient content, usually through three steps: food segmentation, recognition, and volume estimation. In this paper, we propose using a single RGB meal image as input to a multi-task learning based Convolutional Neural Network (CNN). The proposed approach achieved outstanding performance, and a comparison with state-of-the-art methods indicates that it exhibits a clear advantage in accuracy, along with a massive reduction of processing time.
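The three-step pipeline described above (segmentation, recognition, volume estimation) is what a multi-task CNN folds into one forward pass: a shared backbone feeds three task heads. The sketch below (PyTorch) uses assumed layer sizes and a toy backbone, not the paper's architecture.

```python
# Illustrative multi-task CNN: one shared feature extractor, three heads for
# segmentation, food recognition, and volume regression. All dimensions are
# assumptions for demonstration.
import torch
import torch.nn as nn

class MultiTaskMealNet(nn.Module):
    def __init__(self, num_classes=20):
        super().__init__()
        # Shared convolutional feature extractor (toy stand-in backbone)
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)  # per-pixel labels
        self.cls_head = nn.Linear(64, num_classes)     # food recognition
        self.vol_head = nn.Linear(64, 1)               # volume regression

    def forward(self, x):
        f = self.backbone(x)
        pooled = f.mean(dim=(2, 3))  # global average pooling over H, W
        return self.seg_head(f), self.cls_head(pooled), self.vol_head(pooled)

x = torch.randn(2, 3, 64, 64)          # batch of 2 RGB meal images
seg, cls, vol = MultiTaskMealNet()(x)
print(seg.shape, cls.shape, vol.shape)
```

Sharing the backbone is where the reported processing-time reduction comes from: the image is encoded once for all three tasks.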
Analysis & Numerical Simulation of Indian Food Image Classification Using Convolutional Neural Network
Recognition of Indian food can be considered a fine-grained visual task owing to the recognition properties of its many food classes. It is therefore important to provide an optimized approach to segmentation and classification for different applications based on food recognition. Food computation mainly utilizes a computer science approach that needs food data from various outlets such as real-time images, social platforms, food journaling, and food datasets, for different modalities. In order to consider Indian food images for a number of applications, we need a proper analysis of food images with state-of-the-art techniques. Appropriate segmentation and classification methods are required to produce relevant and up-to-date analysis. As accurate segmentation leads to proper recognition and identification, we have focused on the segmentation of food items from images. In the basic convolutional neural network (CNN) model, edge and shape constraints influence the segmentation outcome near edges, so approaches that handle edges need to be developed; we therefore employ an edge-adaptive CNN (EA-CNN). Having addressed food segmentation with the CNN, we also face the difficulty of classifying food, which is important for various types of applications. Food analysis is the primary component of health-related applications and is needed in our day-to-day life. The network has the proficiency to directly predict the score function from image pixels: the input layer produces tensor outputs, and the convolution layers learn their kernels through back-propagation. In this method, feature extraction and max-pooling are applied over multiple layers, and outputs are obtained using a softmax function. The proposed implementation achieves 92.89% accuracy on data drawn from the Yummly dataset and our own prepared dataset. Consequently, further improvement in food image classification is still needed.
We therefore take the segmented features of the EA-CNN and concatenate them with the features of our custom Inception-V3 to provide an optimized classification. This enhances the capacity of important features for the subsequent classification process. As an extension, we considered South Indian food classes with our own collected food image dataset and achieved 96.27% accuracy. The obtained accuracy for the considered dataset compares very well with our previous method and state-of-the-art techniques.
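The feature-fusion step above can be sketched as simple vector concatenation followed by a softmax classifier. The feature dimensions and the random projection below are illustrative stand-ins for the learned EA-CNN and Inception-V3 features, not the paper's trained networks.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
ea_cnn_feat = rng.standard_normal((1, 256))      # stand-in EA-CNN features
inception_feat = rng.standard_normal((1, 2048))  # stand-in Inception-V3 features

# Concatenate the two feature vectors, then classify with a linear + softmax layer
fused = np.concatenate([ea_cnn_feat, inception_feat], axis=1)  # (1, 2304)
W = rng.standard_normal((2304, 10)) * 0.01                     # 10 food classes (assumed)
probs = softmax(fused @ W)
print(fused.shape, float(probs.sum()))  # probabilities sum to 1
```

Concatenation lets segmentation-aware edge features and generic Inception features contribute jointly, which is the stated reason for the accuracy gain.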
Narrative Review: Food Image Use for Machine Learnings’ Function in Dietary Assessment and Real Time Nutrition Feedback and Education
Technology has played a key role in advancing the health and agriculture sectors to improve obesity rates, disease control, food waste, and overall health disparities. However, these health and lifestyle determinants continue to plague the United States population. While new technologies have been and are currently being developed to address these concerns, they may not be practical for the general population. Utilizing machine learning advances in food recognition using smartphone technology may be a means to improve the dietary component of nutrition assessments while providing valuable nutrition feedback. This narrative review was conducted to assess the current state of the literature on nutrition technology using image recognition for practical applications, while also proposing theoretical uses for the technology to improve quality of life through dietary feedback.
AI4Food-NutritionFW: A Novel Framework for the Automatic Synthesis and Analysis of Eating Behaviours
Nowadays, millions of images are shared on social media and web platforms. In particular, many of them are food images taken from a smartphone over time, providing information related to the individual's diet. On the other hand, eating behaviours are directly related to some of the most prevalent diseases in the world. Exploiting recent advances in image processing and Artificial Intelligence (AI), this scenario represents an excellent opportunity to: i) create new methods that analyse the individuals' health from what they eat, and ii) develop personalised recommendations to improve nutrition and diet under specific circumstances (e.g., obesity or COVID). Having tunable tools for creating food image datasets that facilitate research in both lines is very much needed.
This paper proposes AI4Food-NutritionFW, a framework for the creation of food image datasets according to configurable eating behaviours. AI4Food-NutritionFW simulates a user-friendly and widespread scenario where images are taken using a smartphone. In addition to the framework, we also provide and describe a unique food image dataset that includes 4,800 different weekly eating behaviours from 15 different profiles and 1,200 subjects. Specifically, we consider profiles that comply with actual lifestyles, from healthy eating behaviours (according to established knowledge) and variable profiles (e.g., eating out, holidays) to unhealthy ones (e.g., an excess of fast food or sweets). Finally, we automatically evaluate a healthy index of the subject's eating behaviours using multidimensional metrics based on guidelines for healthy diets proposed by international organisations, achieving promising results (99.53% and 99.60% accuracy and sensitivity, respectively). We also release to the research community a software implementation of our proposed AI4Food-NutritionFW and the food image dataset created with it.
Comment: 10 pages, 5 figures, 4 tables
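The healthy-index idea above can be illustrated by scoring a weekly eating profile against guideline ranges. The food groups and thresholds below are invented placeholders for illustration only; they are not the AI4Food-NutritionFW metrics or any organisation's actual guidelines.

```python
# Illustrative guideline ranges (servings per week) -- assumed values, not
# the framework's real multidimensional metrics.
GUIDELINE_RANGES = {
    "vegetables": (21, 35),
    "fruit": (14, 28),
    "fast_food": (0, 2),
    "sweets": (0, 3),
}

def healthy_index(weekly_servings):
    """Fraction of food groups whose weekly servings fall inside the
    guideline range for that group."""
    in_range = sum(
        lo <= weekly_servings.get(group, 0) <= hi
        for group, (lo, hi) in GUIDELINE_RANGES.items()
    )
    return in_range / len(GUIDELINE_RANGES)

# A profile that eats out too often: three of four groups are within range
print(healthy_index({"vegetables": 25, "fruit": 15, "fast_food": 6, "sweets": 1}))  # 0.75
```

A framework-generated dataset lets such a score be validated against the profile labels that produced each simulated week, which is how the reported accuracy and sensitivity figures can be computed.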
GourmetNet: Food Segmentation Using Multi-Scale Waterfall Features With Spatial and Channel Attention
Deep learning and computer vision are extensively used to solve problems in a wide range of domains, from automotive and manufacturing to healthcare and surveillance. Research in deep learning for food images has mainly been limited to food identification and detection. Food segmentation is an important problem as the first step toward nutrition monitoring and food volume and calorie estimation. This research is intended to expand the horizons of deep learning and semantic segmentation by proposing a novel single-pass, end-to-end trainable network for food segmentation. Our novel architecture incorporates both channel attention and spatial attention information in an expanded multi-scale feature representation using the WASPv2 module. The refined features are then processed by the advanced multi-scale waterfall module, which combines the benefits of cascade filtering and pyramid representations without requiring a separate decoder or post-processing.
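The channel- and spatial-attention reweighting mentioned above can be sketched in a few lines of NumPy. The real GourmetNet/WASPv2 modules are learned convolutional blocks; this only illustrates the idea of multiplying a feature map by per-channel and per-location weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Weight each channel by a squashed global-average descriptor (C,)."""
    w = sigmoid(feat.mean(axis=(1, 2)))
    return feat * w[:, None, None]

def spatial_attention(feat):
    """Weight each spatial location by a squashed cross-channel mean (H, W)."""
    m = sigmoid(feat.mean(axis=0))
    return feat * m[None, :, :]

# Toy feature map in (C, H, W) layout; sizes are arbitrary assumptions
feat = np.random.default_rng(1).standard_normal((64, 16, 16))
refined = spatial_attention(channel_attention(feat))
print(refined.shape)
```

Channel attention emphasises *what* kind of feature matters (e.g. food-like textures), while spatial attention emphasises *where* on the plate it matters; both leave the tensor shape unchanged.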