Using deep learning for food recognition
Bachelor's Thesis (Treballs Finals de Grau) in Computer Engineering, Faculty of Mathematics, Universitat de Barcelona, Year: 2020, Advisors: Petia Radeva and Bhalaji Nagarajan. Image recognition is a challenging and important problem in computer vision, and food image classification is one of its most difficult branches.
In real-world scenarios, it is common for a food image to contain more than one food item. As a result, the multi-label classification problem has attracted significant interest in recent years. However, multi-label recognition is a much harder object recognition task than single-label recognition. In this work, we study the multi-label food recognition problem using deep learning algorithms, specifically Convolutional Neural Networks. We show how redefining the loss function and augmenting the training dataset can improve multi-label food recognition. Extensive validation is presented to show the strengths and limitations of multi-label food recognition.
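The abstract does not say how the loss is redefined; a common choice for multi-label recognition (an assumption here, not a detail taken from the thesis) is to replace softmax cross-entropy with a per-class sigmoid plus binary cross-entropy, so several food items can be positive in the same image. A minimal sketch:

```python
import math

def multilabel_bce(logits, targets):
    """Per-class binary cross-entropy for multi-label recognition.

    Unlike single-label softmax cross-entropy, each class gets an
    independent sigmoid, so several food items can be positive at once.
    """
    loss = 0.0
    for z, y in zip(logits, targets):
        p = 1.0 / (1.0 + math.exp(-z))  # sigmoid probability for this class
        loss += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return loss / len(logits)

# A hypothetical image containing the first two foods but not the third:
loss = multilabel_bce([2.0, 1.5, -3.0], [1, 1, 0])
```

Confident, correct predictions drive the loss toward zero; an uncertain logit of 0 contributes log 2 per class.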
FoodNet: Recognizing Foods Using Ensemble of Deep Networks
In this work we propose a methodology for an automatic food classification
system which recognizes the contents of the meal from the images of the food.
We developed a multi-layered deep convolutional neural network (CNN)
architecture that takes advantage of the features from other deep networks and
improves efficiency. Numerous classical handcrafted features and approaches
are explored, among which CNN features are chosen as the best performing.
Networks are trained and fine-tuned using preprocessed images and the filter
outputs are fused to achieve higher accuracy. Experimental results on the
largest real-world food recognition database ETH Food-101 and newly contributed
Indian food image database demonstrate the effectiveness of the proposed
methodology as compared to many other benchmark deep-learned CNN frameworks.
Comment: 5 pages, 3 figures, 3 tables, IEEE Signal Processing Letters
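The fusion rule is not given in the abstract; one simple reading of "filter outputs are fused" (an assumption, not necessarily the paper's exact scheme) is to L2-normalize each network's descriptor and concatenate them, so that no single network dominates the fused representation by scale alone:

```python
import math

def l2_normalize(vec):
    """Scale a feature vector to unit L2 norm (zero vectors pass through)."""
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def fuse_features(per_network_features):
    """Concatenate the normalized descriptors produced by several CNNs
    into one fused representation for a downstream classifier."""
    fused = []
    for vec in per_network_features:
        fused.extend(l2_normalize(vec))
    return fused

# Two hypothetical networks emitting 2-D descriptors:
fused = fuse_features([[3.0, 4.0], [0.0, 2.0]])  # → [0.6, 0.8, 0.0, 1.0]
```

A linear classifier trained on the fused vector then sees every network's evidence on an equal footing.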
A deep representation for depth images from synthetic data
Convolutional Neural Networks (CNNs) trained on large scale RGB databases
have become the secret sauce in the majority of recent approaches for object
categorization from RGB-D data. Thanks to colorization techniques, these
methods exploit the filters learned from 2D images to extract meaningful
representations in 2.5D. Still, the perceptual signature of these two kinds of
images is very different: the first is usually strongly characterized by
textures, and the second mostly by the silhouettes of objects. Ideally, one would
like to have two CNNs, one for RGB and one for depth, each trained on a
suitable data collection, able to capture the perceptual properties of each
channel for the task at hand. This has not been possible so far, due to the
lack of a suitable depth database. This paper addresses this issue, proposing
to opt for synthetically generated images rather than collecting a large-scale
2.5D database by hand. While clearly a proxy for real data, synthetic images
allow quality to be traded for quantity, making it possible to generate a
virtually infinite amount of data. We show that training the very same
architecture typically used on visual data on such a collection yields very
different filters, resulting in depth features (a) able to
better characterize the different facets of depth images, and (b) complementary
with respect to those derived from CNNs pre-trained on 2D datasets. Experiments
on two publicly available databases show the power of our approach.
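The colorization step the abstract alludes to can take many forms; as a minimal hedged sketch (real pipelines often use richer schemes such as jet colormaps or HHA encodings), a depth map can be min-max normalized and replicated into three channels so that filters pretrained on RGB images accept it:

```python
def colorize_depth(depth):
    """Turn a single-channel depth map (list of rows of floats) into a
    3-channel 8-bit image by min-max normalization and channel replication,
    so filters pretrained on RGB images can consume 2.5D input."""
    lo = min(min(row) for row in depth)
    hi = max(max(row) for row in depth)
    scale = (hi - lo) or 1.0  # guard against a constant depth map
    return [[(int(255 * (v - lo) / scale),) * 3 for v in row] for row in depth]

# A tiny 2x2 depth map; the farthest pixel maps to (255, 255, 255):
img = colorize_depth([[0.0, 1.0], [0.5, 1.0]])
```

Channel replication preserves silhouettes, which the abstract identifies as the dominant signature of depth images, while keeping the input shape RGB networks expect.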
Food Recognition using Fusion of Classifiers based on CNNs
With the arrival of convolutional neural networks, the complex problem of
food recognition has experienced an important improvement in recent years. The
best results have been obtained with methods based on very deep convolutional
neural networks, which show that the deeper the model, the better the
classification accuracy obtained. However, very deep neural networks may
suffer from the overfitting problem. In this paper, we propose a combination of
multiple classifiers based on different convolutional models that complement
each other and thus, achieve an improvement in performance. The evaluation of
our approach is done on two public datasets: Food-101 as a dataset with a wide
variety of fine-grained dishes, and Food-11 as a dataset of high-level food
categories, where our approach outperforms the independent CNN models.
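The combination rule for the complementary classifiers is not spelled out in the abstract; a standard late-fusion sketch (an assumption, not necessarily the paper's exact method) averages the softmax distributions of the individual CNNs and predicts the argmax of the fused probabilities:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_predictions(per_model_logits):
    """Average the softmax distributions of several CNNs and return the
    index of the highest fused probability (simple late fusion)."""
    probs = [softmax(logits) for logits in per_model_logits]
    n_models, n_classes = len(probs), len(probs[0])
    fused = [sum(p[c] for p in probs) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__)

# Model A mildly favors class 0; model B strongly favors class 1:
pred = fuse_predictions([[2.0, 0.0, 0.0], [1.0, 3.0, 0.0]])  # → 1
```

Averaging probabilities rather than raw logits lets a confident model outvote an uncertain one, which is one way complementary classifiers can beat each independent CNN.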
Lemon Classification Using Deep Learning
Abstract: Background: Vegetable agriculture is very important to continued human existence and remains a key driver of
many economies worldwide, especially in underdeveloped and developing economies. Objectives: Given the increasing
demand for food and cash crops, driven by the growing world population and the challenges imposed by climate
change, there is an urgent need to increase plant production while reducing costs. Methods: In this paper, a lemon
classification approach is presented with a dataset that contains approximately 2,000 images belonging to 3 species at a few
developmental phases. Convolutional Neural Network (CNN) algorithms, a deep learning technique extensively applied to
image recognition, were used for this task. Results: We found that CNN-driven lemon classification applications, when used
in farming automation, have the potential to enhance crop harvests and improve output and productivity when designed
properly. The trained model achieved an accuracy of 99.48% on a held-out test set, demonstrating the feasibility of this
approach.
- …