FoodNet: Recognizing Foods Using Ensemble of Deep Networks
In this work we propose a methodology for an automatic food classification
system which recognizes the contents of the meal from the images of the food.
We developed a multi-layered deep convolutional neural network (CNN)
architecture that takes advantage of features from other deep networks and
improves efficiency. Numerous classical handcrafted features and approaches
are explored, among which CNNs are chosen as the best performing features.
Networks are trained and fine-tuned using preprocessed images and the filter
outputs are fused to achieve higher accuracy. Experimental results on the
largest real-world food recognition database ETH Food-101 and newly contributed
Indian food image database demonstrate the effectiveness of the proposed
methodology as compared to many other benchmark deep-learned CNN frameworks.
Comment: 5 pages, 3 figures, 3 tables, IEEE Signal Processing Letters
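The fusion of filter outputs described above can be sketched as a simple late fusion of per-network class scores. The averaging rule, the toy scores, and the class names below are illustrative assumptions, not the paper's exact fusion scheme.

```python
# Hedged sketch: combine class-score vectors from several CNNs by
# averaging, then predict the highest-scoring class. The two "networks"
# here are just hand-written softmax outputs for three food classes.

def fuse_scores(per_network_scores):
    """Average class-score vectors, one list of scores per network."""
    n_nets = len(per_network_scores)
    n_classes = len(per_network_scores[0])
    return [sum(net[c] for net in per_network_scores) / n_nets
            for c in range(n_classes)]

def predict(per_network_scores, class_names):
    """Fuse the scores and return the name of the top class."""
    fused = fuse_scores(per_network_scores)
    return class_names[max(range(len(fused)), key=fused.__getitem__)]

scores = [
    [0.7, 0.2, 0.1],   # network A softmax output (toy values)
    [0.4, 0.5, 0.1],   # network B softmax output (toy values)
]
print(predict(scores, ["pizza", "samosa", "noodles"]))  # → pizza
```

Averaging is only one possible fusion rule; weighted sums or a learned fusion layer are common alternatives when one network is known to be stronger.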
Dietary assessment and obesity avoidance system based on vision: A review
Using technology to recognize food objects and estimate their calories is very useful for spreading food culture and awareness in an age of obesity driven by bad food-consumption habits and a wide range of inappropriate food products. Image-based sensing for such a system is very promising given the wide availability of camera-equipped portable devices such as smartphones, tablets, and laptops. Over the past decade, researchers have worked on developing reliable image-based systems for food recognition and calorie estimation, tackling the problem from different aspects. This paper reviews the state of the art of this application and presents its experimental results. Future research directions are also presented to guide new researchers toward tracks that could bring more maturity and reliability to this application.
Fine-grained Image Classification by Exploring Bipartite-Graph Labels
Given a food image, can a fine-grained object recognition engine tell "which
restaurant which dish" the food belongs to? Such ultra-fine grained image
recognition is the key for many applications like search by images, but it is
very challenging because it needs to discern subtle difference between classes
while dealing with the scarcity of training data. Fortunately, the ultra-fine
granularity naturally brings rich relationships among object classes. This
paper proposes a novel approach to exploit the rich relationships through
bipartite-graph labels (BGL). We show how to model BGL in an overall
convolutional neural network, and the resulting system can be optimized through
back-propagation. We also show that it is computationally efficient in
inference thanks to the bipartite structure. To facilitate the study, we
construct a new food benchmark dataset, which consists of 37,885 food images
collected from 6 restaurants, covering 975 menus in total. Experimental results on
this new food dataset and three other datasets demonstrate that BGL advances previous work
in fine-grained object recognition. An online demo is available at
http://www.f-zhou.com/fg_demo/
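The core intuition behind bipartite-graph labels, as described above, is that each ultra-fine class ("which restaurant, which dish") links to coarser classes, so fine-class scores can be pooled over the graph. The sketch below illustrates that pooling; the graph, class names, and scores are invented for illustration and are not the paper's dataset or exact formulation.

```python
# Hedged sketch: a bipartite edge set links each fine class
# (restaurant + dish) to a coarse class (the restaurant). Summing
# fine-class probabilities along the edges yields coarse-class scores.

EDGES = {
    "cafeA_ramen": "cafeA",   # invented example classes
    "cafeA_gyoza": "cafeA",
    "cafeB_ramen": "cafeB",
}

def coarse_scores(fine_scores, edges):
    """Sum each fine-class probability into its linked coarse class."""
    out = {}
    for fine, p in fine_scores.items():
        coarse = edges[fine]
        out[coarse] = out.get(coarse, 0.0) + p
    return out

fine = {"cafeA_ramen": 0.5, "cafeA_gyoza": 0.2, "cafeB_ramen": 0.3}
print(coarse_scores(fine, EDGES))  # → {'cafeA': 0.7, 'cafeB': 0.3}
```

In the actual BGL formulation this structure is built into the network's loss so that fine and coarse labels regularize each other; the pooling above only shows why inference over the bipartite structure stays cheap.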
Machine learning approaches for dietary assessment
For individuals with dietary limitations, automatic food recognition and assessment are paramount. Smartphone-oriented applications are convenient and handy when a dish and the elements inside it must be recognized. Machine learning (deep learning) applied to image recognition, alongside other classification techniques (for example, bag-of-words), are possible approaches to this problem. Deep learning is currently the most promising approach to the classification problem: it requires heavy computation for training, but the resulting classifier is extremely fast and computationally light. Since classifiers should be as accurate as possible, human performance must also be considered as a baseline classifier. This work tests and compares deep-learning methods, bag-of-words applied to computer vision, and the human visual system. Results show that deep learning is better with a low number of food categories; however, with more food categories, the human overcomes the machine algorithms.
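One of the compared techniques, the bag-of-(visual-)words front end, can be sketched roughly as follows: each local descriptor is assigned to its nearest codebook entry, and the image is summarized as a codeword histogram that a classifier then consumes. The codebook and descriptors below are toy 2-D vectors, not real image features.

```python
# Hedged sketch: quantize local descriptors against a small codebook
# and build the per-image codeword histogram used by bag-of-words.

def nearest(codebook, desc):
    """Index of the codeword closest to `desc` (squared Euclidean)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(codebook[i], desc)))

def bow_histogram(codebook, descriptors):
    """Count how many descriptors fall on each codeword."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest(codebook, d)] += 1
    return hist

codebook = [(0.0, 0.0), (1.0, 1.0)]           # toy 2-codeword vocabulary
descs = [(0.1, 0.2), (0.9, 1.1), (1.2, 0.8)]  # toy local descriptors
print(bow_histogram(codebook, descs))  # → [1, 2]
```

In a real pipeline the codebook is learned by clustering (e.g. k-means over SIFT descriptors) and the histogram feeds an SVM; deep networks replace this whole hand-built stage with learned features.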
Grocery Shopping Assistant Using OpenCV
In this paper we present an Android mobile application that lets users keep track of food products and grocery items bought during each shopping trip, along with their nutrient information. The application provides nutrient information for a product or grocery item from just a photo. Product matching is performed using SURF feature detection followed by FLANN feature matching. The table is extracted from the nutrient-fact table image using erosion, dilation, and contour detection. Grocery items are classified through object categorization using bag-of-words (BOW) and SVM machine learning. The application comprises three main subsystems: client (Android), server (Node.js), and image processing (OpenCV).
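After SURF descriptors are extracted, FLANN-style matching typically returns the two nearest database descriptors per query descriptor, and a ratio test keeps only distinctive matches. The abstract does not state the exact filtering rule, so the sketch below shows Lowe's ratio test, a common companion to FLANN matching; the (index, distance) pairs are toy values, not real SURF/FLANN output.

```python
# Hedged sketch: filter k-nearest-neighbour matches with Lowe's ratio
# test. Each element of `knn_pairs` is ((best_index, best_distance),
# (second_index, second_distance)) for one query descriptor.

def ratio_test(knn_pairs, ratio=0.75):
    """Keep a match only if its best distance clearly beats the runner-up."""
    good = []
    for best, second in knn_pairs:
        if best[1] < ratio * second[1]:
            good.append(best)
    return good

pairs = [
    ((3, 0.20), (7, 0.90)),   # distinctive: 0.20 < 0.75 * 0.90 -> kept
    ((5, 0.60), (2, 0.65)),   # ambiguous:   0.60 >= 0.75 * 0.65 -> dropped
]
print(ratio_test(pairs))  # → [(3, 0.2)]
```

Matches surviving the ratio test would then vote for a product in the database, with the product accumulating the most good matches reported to the user.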