Food Ingredients Recognition through Multi-label Learning
Automatically constructing a food diary that tracks the ingredients consumed can help people follow a healthy diet. We tackle food ingredients recognition as a multi-label learning problem. We propose a method for adapting a high-performing state-of-the-art CNN so that it acts as a multi-label predictor, learning recipes in terms of their lists of ingredients. We show that, given a picture, our model is able to predict its list of ingredients, even if the recipe corresponding to the picture has never been seen by the model. We make public two new datasets suitable for this purpose. Furthermore, we show that a model trained with a high variability of recipes and ingredients generalizes better on new data, and we visualize how the model specializes each of its neurons to different ingredients.
Comment: 8 pages
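The core idea of casting ingredient recognition as multi-label learning, rather than single-label classification, can be illustrated with a minimal numpy sketch (not the authors' code): each output unit gets an independent sigmoid, and every ingredient whose probability clears a threshold is kept. The vocabulary and logits below are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_ingredients(logits, vocab, threshold=0.5):
    """Multi-label decoding: an independent sigmoid per ingredient,
    keeping every label whose probability clears the threshold.
    (Contrast with softmax, which would force exactly one label.)"""
    probs = sigmoid(np.asarray(logits, dtype=float))
    return [ing for ing, p in zip(vocab, probs) if p >= threshold]

vocab = ["tomato", "basil", "flour", "egg"]        # hypothetical vocabulary
logits = np.array([2.0, 1.5, -3.0, 0.1])           # hypothetical CNN outputs
print(predict_ingredients(logits, vocab))          # ['tomato', 'basil', 'egg']
```

Because each sigmoid is independent, the model can emit ingredient sets it never saw together at training time, which is what allows recognition of unseen recipes.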
Exploring Food Detection using CNNs
One of the most common critical factors directly related to the development of chronic disease is an unhealthy diet. In this sense, building an automatic system for food analysis could allow a better understanding of the nutritional information of the food eaten and thus help in taking corrective actions toward a better diet. The Computer Vision community has focused its efforts on several areas of visual food analysis, such as food detection, food recognition, food localization, and portion estimation, among others. For food detection, the best results reported in the state of the art were obtained using Convolutional Neural Networks. However, the results of these different approaches were obtained on different datasets and are therefore not directly comparable. This article proposes an overview of the latest advances in food detection and an optimal model based on the GoogLeNet Convolutional Neural Network, principal component analysis, and a support vector machine that outperforms the state of the art on two public food/non-food datasets.
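The CNN-features, then PCA, then SVM pipeline described above can be sketched in numpy (a minimal illustration under assumed shapes, not the paper's implementation): deep features are centered and projected onto their top principal components before being handed to a classifier such as scikit-learn's `SVC`. The random features below stand in for GoogLeNet activations.

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project high-dimensional CNN features onto their top principal
    components, as a preprocessing step before an SVM classifier."""
    centered = features - features.mean(axis=0)
    # SVD of the centered data matrix; rows of vt are principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 1024))   # stand-in for 1024-d GoogLeNet features
reduced = pca_reduce(feats, 64)
print(reduced.shape)                   # (100, 64)
# The reduced matrix would then be fed to an SVM, e.g. sklearn.svm.SVC().
```

Reducing dimensionality this way keeps the SVM training tractable while discarding directions of low variance in the feature space.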
CuisineNet: Food Attributes Classification using Multi-scale Convolution Network
The diversity of food and its attributes reflects the culinary habits of people from different countries. This paper addresses the problem of identifying the food culture of people around the world, and its flavor, by classifying two main food attributes: cuisine and flavor. A deep learning model based on multi-scale convolutional networks is proposed for extracting more accurate features from the input images. The aggregation of multi-scale convolution layers with different kernel sizes is also used to weight the features resulting from the different scales. In addition, a joint loss function based on the Negative Log-Likelihood (NLL) is used to fit the model probabilities to the multi-labeled classes in this multi-modal classification task. Furthermore, this work provides a new dataset for food attributes, called Yummly48K, extracted from the popular food website Yummly. Our model is assessed on the constructed Yummly48K dataset. The experimental results show that our proposed method yields 65% and 62% average F1 scores on the validation and test sets, outperforming state-of-the-art models.
Comment: 8 pages, submitted to CCIA 201
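A joint NLL loss over two attribute heads, as described above, typically sums one NLL term per attribute. The numpy sketch below is a minimal illustration of that idea (not CuisineNet's code); the logits and class counts are hypothetical.

```python
import numpy as np

def log_softmax(x):
    """Numerically stable log-softmax over a 1-D logit vector."""
    x = np.asarray(x, dtype=float)
    z = x - x.max()
    return z - np.log(np.exp(z).sum())

def nll(log_probs, target):
    """Negative log-likelihood of the target class."""
    return -log_probs[target]

def joint_loss(cuisine_logits, flavor_logits, cuisine_y, flavor_y):
    # One NLL term per attribute head, summed into a single training loss
    return (nll(log_softmax(cuisine_logits), cuisine_y)
            + nll(log_softmax(flavor_logits), flavor_y))

cuisine_logits = [2.0, 0.5, -1.0]   # hypothetical 3-class cuisine head
flavor_logits = [0.3, 1.2]          # hypothetical 2-class flavor head
loss = joint_loss(cuisine_logits, flavor_logits, cuisine_y=0, flavor_y=1)
print(round(loss, 4))
```

Summing the two terms lets a single backward pass train the shared feature extractor on both attributes at once.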
Analyzing First-Person Stories Based on Socializing, Eating and Sedentary Patterns
First-person stories can be analyzed by means of egocentric pictures acquired throughout the whole active day with wearable cameras. This manuscript presents an egocentric dataset with more than 45,000 pictures from four people in different environments, such as working or studying. All the images were manually labeled to identify three patterns of interest regarding people's lifestyle: socializing, eating, and sedentary behavior. Additionally, two different approaches are proposed to classify egocentric images into one of the 12 target categories defined to characterize these three patterns. The approaches are based on machine learning and deep learning techniques, including traditional classifiers and state-of-the-art convolutional neural networks. The experimental results obtained when applying these methods to the egocentric dataset demonstrate their adequacy for the problem at hand.
Comment: Accepted at the First International Workshop on Social Signal Processing and Beyond, 19th International Conference on Image Analysis and Processing (ICIAP), September 201
FoodNet: Recognizing Foods Using Ensemble of Deep Networks
In this work we propose a methodology for an automatic food classification system that recognizes the contents of a meal from images of the food. We developed a multi-layered deep convolutional neural network (CNN) architecture that takes advantage of features from other deep networks and improves efficiency. Numerous classical handcrafted features and approaches are explored, among which CNN features are chosen as the best performing. Networks are trained and fine-tuned using preprocessed images, and the filter outputs are fused to achieve higher accuracy. Experimental results on the largest real-world food recognition database, ETH Food-101, and a newly contributed Indian food image database demonstrate the effectiveness of the proposed methodology compared to many other benchmark deep CNN frameworks.
Comment: 5 pages, 3 figures, 3 tables, IEEE Signal Processing Letters
NMT-Keras: a Very Flexible Toolkit with a Focus on Interactive NMT and Online Learning
We present NMT-Keras, a flexible toolkit for training deep learning models, with a particular emphasis on the development of advanced applications of neural machine translation systems, such as interactive-predictive translation protocols and long-term adaptation of the translation system via continuous learning. NMT-Keras is based on an extended version of the popular Keras library, and it runs on Theano and TensorFlow. State-of-the-art neural machine translation models are deployed and used following the high-level framework provided by Keras. Given its high modularity and flexibility, it has also been extended to tackle different problems, such as image and video captioning, sentence classification, and visual question answering.
Much of our Keras fork and the Multimodal Keras Wrapper libraries were developed together with Marc Bolaños. We also acknowledge the rest of the contributors to these open-source projects. The research leading to this work received funding from grants PROMETEO/2018/004 and CoMUN-HaT - TIN2015-70924-C2-1-R. We finally acknowledge NVIDIA Corporation for the donation of the GPUs used in this work.
Peris-Abril, Á.; Casacuberta Nolla, F. (2018). NMT-Keras: a Very Flexible Toolkit with a Focus on Interactive NMT and Online Learning. The Prague Bulletin of Mathematical Linguistics, 111:113-124. https://doi.org/10.2478/pralin-2018-0010