FoodNet: Recognizing Foods Using Ensemble of Deep Networks
In this work we propose a methodology for an automatic food classification
system which recognizes the contents of the meal from the images of the food.
We developed a multi-layered deep convolutional neural network (CNN)
architecture that takes advantage of features from other deep networks and
improves efficiency. Numerous classical handcrafted features and approaches
are explored, among which CNN features are chosen as the best performing.
Networks are trained and fine-tuned using preprocessed images and the filter
outputs are fused to achieve higher accuracy. Experimental results on the
largest real-world food recognition database ETH Food-101 and newly contributed
Indian food image database demonstrate the effectiveness of the proposed
methodology as compared to many other benchmark deep-learned CNN frameworks.
Comment: 5 pages, 3 figures, 3 tables, IEEE Signal Processing Letters
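The fusion of filter outputs described above can be sketched as feature-level fusion. This is a minimal illustration, assuming each network yields a flat feature vector that is L2-normalized before concatenation; the function name and dimensions are illustrative, not the paper's actual architecture.

```python
# Hedged sketch of feature-level fusion: L2-normalize each network's
# feature vector, then concatenate (dimensions here are illustrative).
import math

def fuse_features(feature_sets):
    """Concatenate L2-normalized feature vectors from several networks."""
    fused = []
    for features in feature_sets:
        norm = math.sqrt(sum(x * x for x in features)) or 1.0
        fused.extend(x / norm for x in features)
    return fused

# Two hypothetical backbones with different feature dimensions.
fused = fuse_features([[3.0, 4.0], [1.0, 0.0, 0.0]])
```

Normalizing per network before concatenation keeps one backbone's larger feature magnitudes from dominating the fused representation.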
Complex-valued Iris Recognition Network
In this work, we design a complex-valued neural network for the task of iris
recognition. Unlike the problem of general object recognition, where
real-valued neural networks can be used to extract pertinent features, iris
recognition depends on the extraction of both phase and amplitude information
from the input iris texture in order to better represent its stochastic
content. This necessitates the extraction and processing of phase information
that cannot be effectively handled by a real-valued neural network. In this
regard, we design a complex-valued neural network that can better capture the
multi-scale, multi-resolution, and multi-orientation phase and amplitude
features of the iris texture. We show a strong correspondence of the proposed
complex-valued iris recognition network with Gabor wavelets that are used to
generate the classical IrisCode; however, the proposed method enables automatic
complex-valued feature learning that is tailored for iris recognition.
Experiments conducted on three benchmark datasets - ND-CrossSensor-2013,
CASIA-Iris-Thousand and UBIRIS.v2 - show the benefit of the proposed network
for the task of iris recognition. Further, the generalization capability of the
proposed network is demonstrated by training and testing it across different
datasets. Finally, visualization schemes are used to convey the type of
features being extracted by the complex-valued network in comparison to
classical real-valued networks. The results of this work are likely to be
applicable in other domains, where complex Gabor filters are used for texture
modeling.
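The correspondence with Gabor wavelets can be illustrated with a toy example: a complex Gabor filter applied to a 1-D texture signal, followed by an IrisCode-style quantization that keeps only the phase quadrant of each response. This is a sketch of the classical pipeline the network is compared against, not the learned network itself; parameter values are illustrative.

```python
# Hedged sketch: complex Gabor filtering of a 1-D signal, showing why
# phase extraction needs complex arithmetic (parameters illustrative).
import cmath
import math

def gabor_kernel(size, freq, sigma):
    """Complex Gabor: Gaussian envelope times a complex sinusoid."""
    half = size // 2
    return [math.exp(-(x * x) / (2 * sigma * sigma)) *
            cmath.exp(1j * 2 * math.pi * freq * x)
            for x in range(-half, half + 1)]

def complex_response(signal, kernel):
    """Valid-mode correlation returning complex filter responses."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def phase_bits(responses):
    """IrisCode-style quantization: sign of real and imaginary parts."""
    return [(z.real >= 0, z.imag >= 0) for z in responses]

signal = [math.sin(0.4 * i) for i in range(32)]
responses = complex_response(signal, gabor_kernel(7, 0.25, 2.0))
bits = phase_bits(responses)
```

The two sign bits per response encode the phase quadrant; amplitude is discarded, which is precisely the information a real-valued network cannot separate out in the same way.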
Constrained Design of Deep Iris Networks
Despite the promise of recent deep neural networks in the iris recognition
setting, the classic IrisCode retains vital properties that current deep iris
networks have largely failed to achieve: model compactness and a small number
of computing operations (FLOPs). This paper re-models the
iris network design process as a constrained optimization problem which takes
model size and computation into account as learning criteria. On one hand, this
allows us to fully automate the network design process to search for the best
iris network confined to the computation and model compactness constraints. On
the other hand, it allows us to investigate the optimality of the classic
IrisCode and recent iris networks. It also allows us to learn an optimal iris
network and demonstrate state-of-the-art performance with less computation and
memory requirements.
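Taking model size and computation into account as learning criteria can be sketched as a penalized objective over candidate architectures. This is a simplified illustration of the idea, not the paper's actual search procedure; the penalty weight, budgets, and candidate numbers are all assumptions.

```python
# Hedged sketch: score candidate architectures by accuracy minus a
# penalty for exceeding FLOPs/parameter budgets (numbers illustrative).
def constrained_score(accuracy, flops, params,
                      flop_budget, param_budget, penalty=10.0):
    """Accuracy, penalized proportionally to budget overruns."""
    over_flops = max(0.0, flops / flop_budget - 1.0)
    over_params = max(0.0, params / param_budget - 1.0)
    return accuracy - penalty * (over_flops + over_params)

candidates = [
    {"name": "tiny",  "acc": 0.93, "flops": 0.8e9, "params": 1.2e6},
    {"name": "large", "acc": 0.96, "flops": 4.0e9, "params": 9.0e6},
]
best = max(candidates, key=lambda c: constrained_score(
    c["acc"], c["flops"], c["params"],
    flop_budget=1e9, param_budget=2e6))
```

Under these budgets the slightly less accurate but far more compact candidate wins, which is the trade-off a constrained design process makes explicit.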
Ensemble of convolutional neural networks for bioimage classification
This work presents a system based on an ensemble of Convolutional Neural Networks (CNNs) and descriptors for bioimage classification that has been validated on different datasets of color images. The proposed system represents a very simple yet effective way of boosting the performance of trained CNNs by composing multiple CNNs into an ensemble and combining scores by sum rule. Several types of ensembles are considered, with different CNN topologies along with different learning parameter sets. The proposed system not only exhibits strong discriminative power but also generalizes well over multiple datasets thanks to the combination of multiple descriptors based on different feature types, both learned and handcrafted. Separate classifiers are trained for each descriptor, and the entire set of classifiers is combined by sum rule. Results show that the proposed system obtains state-of-the-art performance across four different bioimage and medical datasets. The MATLAB code of the descriptors will be available at https://github.com/LorisNanni
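Combining scores by sum rule, as the abstract describes, amounts to averaging the class-probability distributions produced by each classifier. A minimal sketch, assuming each classifier in the ensemble outputs class logits; the logit values below are illustrative, not from the paper.

```python
# Hedged sketch of the sum rule: average per-classifier softmax
# probabilities, then predict the class with the highest combined score.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sum_rule(per_classifier_logits):
    """Average class probabilities across all classifiers."""
    probs = [softmax(l) for l in per_classifier_logits]
    n = len(probs)
    return [sum(p[c] for p in probs) / n for c in range(len(probs[0]))]

# Two classifiers disagree; the fused score decides.
fused = sum_rule([[2.0, 1.0, 0.1], [0.5, 2.5, 0.2]])
predicted = max(range(len(fused)), key=fused.__getitem__)
```

Because the second classifier is more confident in class 1 than the first is in class 0, the summed scores favor class 1, illustrating how the sum rule lets strongly confident ensemble members outvote weaker ones.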
Combining deep and handcrafted image features for MRI brain scan classification
Progress in the areas of artificial intelligence, machine learning, and medical imaging technologies has allowed the medical image processing field to develop, with some astonishing results over the last two decades. These innovations enable clinicians to view the human body in high-resolution or three-dimensional cross-sectional slices, which has increased diagnostic accuracy and allows patients to be examined non-invasively. The fundamental step for MRI brain scan classifiers is their ability to extract meaningful features. As a result, many works have proposed different feature extraction methods to classify abnormal growths in brain MRI scans. More recently, the application of deep learning algorithms to medical imaging has led to impressive performance enhancements in classifying and diagnosing complicated pathologies such as brain tumors. In this study, a deep learning feature extraction algorithm is proposed to extract the relevant features from MRI brain scans. In parallel, handcrafted features are extracted using the modified grey level co-occurrence matrix (MGLCM) method. Subsequently, the extracted deep features are combined with the handcrafted features to improve the classification of MRI brain scans, with SVM used as the classifier. The obtained results show that combining the deep learning approach with the handcrafted MGLCM features improves the classification accuracy of the SVM classifier up to 99.30%.
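The handcrafted side of such a pipeline can be illustrated with standard grey level co-occurrence matrix (GLCM) statistics. Note this sketches the classical GLCM, not the paper's modified MGLCM variant, whose details are not given here; the offset, gray levels, and example image are assumptions for illustration.

```python
# Hedged sketch: standard GLCM with two texture statistics (contrast,
# energy); the paper's MGLCM differs, this only illustrates the idea.
def glcm(image, levels, dx=1, dy=0):
    """Normalized co-occurrence of gray-level pairs at offset (dx, dy)."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y][x]][image[y + dy][x + dx]] += 1
    total = sum(sum(row) for row in m) or 1
    return [[count / total for count in row] for row in m]

def glcm_features(p):
    """Contrast (local intensity variation) and energy (uniformity)."""
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy = sum(p[i][j] ** 2 for i in range(n) for j in range(n))
    return [contrast, energy]

# Tiny 2x2 example image with 4 gray levels.
features = glcm_features(glcm([[0, 1], [2, 3]], levels=4))
```

In a combined pipeline, a vector like this would simply be concatenated with the deep features before being passed to the SVM.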