Automatic Classification of Bright Retinal Lesions via Deep Network Features
Diabetic retinopathy is diagnosed in a timely manner by experienced
ophthalmologists from color eye fundus images, in order to recognize
potential retinal features and identify early cases of blindness. In this
paper, it is proposed to extract deep features from the last
fully-connected layer of four different pre-trained convolutional neural
networks. These features are then fed into a non-linear classifier to
discriminate three-class diabetic cases, i.e., normal, exudates, and
drusen. Averaged across 1113 color retinal images collected from six
publicly available annotated datasets, the deep-features approach performs
better than the classical bag-of-words approach. The proposed approaches
have an average accuracy between 91.23% and 92.00%, more than a 13%
improvement over traditional state-of-the-art methods.
Comment: Preprint submitted to Journal of Medical Imaging | SPIE (Jul 28, 2017)
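The pipeline described above (features from the last fully-connected layer of a pre-trained CNN, fed to a non-linear classifier) can be sketched as follows. The Gaussian blobs below are a stand-in for real deep features extracted from fundus images, and the RBF-SVM is one plausible choice of non-linear classifier; the abstract does not name the exact classifier, so treat this as an illustrative assumption:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for deep features from the last fully-connected layer of a
# pre-trained CNN; in the real pipeline these come from a forward pass
# on each fundus image.
n_per_class, dim = 100, 64
X = np.vstack([rng.normal(loc=3.0 * k, scale=1.0, size=(n_per_class, dim))
               for k in range(3)])           # normal / exudates / drusen
y = np.repeat(np.arange(3), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", gamma="scale")       # a non-linear classifier
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"3-class accuracy: {acc:.2f}")
```

On real deep features the interesting part is feature extraction; the classification step itself is this simple once the features are fixed.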
Joint Geometrical and Statistical Alignment for Visual Domain Adaptation
This paper presents a novel unsupervised domain adaptation method for
cross-domain visual recognition. We propose a unified framework that reduces
the shift between domains both statistically and geometrically, referred to as
Joint Geometrical and Statistical Alignment (JGSA). Specifically, we learn two
coupled projections that project the source domain and target domain data into
low dimensional subspaces where the geometrical shift and distribution shift
are reduced simultaneously. The objective function can be solved efficiently in
a closed form. Extensive experiments have verified that the proposed method
significantly outperforms several state-of-the-art domain adaptation methods on
a synthetic dataset and three different real world cross-domain visual
recognition tasks.
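JGSA learns two coupled projections in closed form, reducing geometric and distribution shift jointly. As a simplified illustration of the geometric half of that idea only, the NumPy sketch below performs plain subspace alignment: it learns a matrix M that rotates the source PCA basis toward the target basis. The toy data and the alignment method are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy source/target domains: same signal, target rotated in the first two
# coordinates, mimicking a purely geometric domain shift.
n, d, k = 200, 10, 3
Xs = rng.normal(size=(n, d))
theta = 0.5
R = np.eye(d)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]
Xt = Xs @ R.T

def pca_basis(X, k):
    X = X - X.mean(0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T                      # d x k orthonormal basis

Ps, Pt = pca_basis(Xs, k), pca_basis(Xt, k)
M = Ps.T @ Pt                            # closed-form alignment matrix
Zs = (Xs - Xs.mean(0)) @ Ps @ M          # source data in aligned subspace
Zt = (Xt - Xt.mean(0)) @ Pt              # target data in its own subspace
gap_before = np.linalg.norm(Ps - Pt)
gap_after = np.linalg.norm(Ps @ M - Pt)
print(gap_before, gap_after)
```

Because M = PsᵀPt is the least-squares minimizer of ||Ps M − Pt||_F, the subspace gap can only shrink; JGSA additionally penalizes the statistical (distribution) mismatch, which this sketch omits.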
Differential geometric regularization for supervised learning of classifiers
We study the problem of supervised learning for both binary and multiclass classification from a unified geometric perspective. In particular, we propose a geometric regularization technique to find the submanifold corresponding to an estimator of the class probability P(y|\vec x). The regularization term measures the volume of this submanifold, based on the intuition that overfitting produces rapid local oscillations and hence a large volume of the estimator. This technique can be applied to regularize any classification function that satisfies two requirements: firstly, an estimator of the class probability can be obtained; secondly, first and second derivatives of the class probability estimator can be calculated. In experiments, we apply our regularization technique to standard loss functions for classification; our RBF-based implementation compares favorably to widely used regularization methods for both binary and multiclass classification.
http://proceedings.mlr.press/v48/baia16.pdf (published version)
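The volume intuition can be made concrete in one dimension, where the graph of an estimator f(x) is a curve of length ∫ sqrt(1 + f'(x)²) dx. The sketch below, with made-up smooth and oscillating estimators, shows that the oscillating (overfit-like) one has the larger volume; it illustrates the intuition only, not the paper's actual regularizer:

```python
import numpy as np

# 1-D illustration: volume (arc length) of the graph of an estimator f(x)
# is the integral of sqrt(1 + f'(x)^2). Rapid oscillation inflates it.
x = np.linspace(0.0, 1.0, 1001)

def graph_volume(f_vals, x):
    df = np.gradient(f_vals, x)                 # finite-difference f'(x)
    integrand = np.sqrt(1.0 + df**2)
    # trapezoidal rule for the integral
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)))

smooth = 1.0 / (1.0 + np.exp(-10 * (x - 0.5)))   # smooth probability fit
wiggly = smooth + 0.05 * np.sin(60 * np.pi * x)  # oscillating, overfit-like
print(graph_volume(smooth, x), graph_volume(wiggly, x))
```

Penalizing this volume therefore pushes the learned class-probability surface toward smooth, low-oscillation solutions.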
Automatic Emphysema Detection using Weakly Labeled HRCT Lung Images
A method for automatically quantifying emphysema regions using
High-Resolution Computed Tomography (HRCT) scans of patients with chronic
obstructive pulmonary disease (COPD) that does not require manually annotated
scans for training is presented. HRCT scans of controls and of COPD patients
with diverse disease severity are acquired at two different centers. Textural
features from co-occurrence matrices and Gaussian filter banks are used to
characterize the lung parenchyma in the scans. Two robust versions of multiple
instance learning (MIL) classifiers, miSVM and MILES, are investigated. The
classifiers are trained with the weak labels extracted from the forced
expiratory volume in one second (FEV1) and diffusing capacity of the lungs
for carbon monoxide (DLCO). At test time, the classifiers output a patient
label indicating overall COPD diagnosis and local labels indicating the
presence of emphysema. The classifier performance is compared with manual
annotations by two radiologists, a classical density based method, and
pulmonary function tests (PFTs). The miSVM classifier performed better than
MILES on both patient and emphysema classification. The classifier has a
stronger correlation with PFT than the density based method, the percentage of
emphysema in the intersection of annotations from both radiologists, and the
percentage of emphysema annotated by one of the radiologists. The correlation
between the classifier and the PFT is only outperformed by the second
radiologist. The method is therefore promising for facilitating assessment of
emphysema and reducing inter-observer variability.
Comment: Accepted at PLoS ONE
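The miSVM alternation used above can be sketched on toy data: instances first inherit their bag's weak label, then instances in positive bags are iteratively relabeled by the SVM itself, keeping at least one positive per positive bag. The Gaussian "patch features", bag sizes, and stopping rule below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Toy MIL data: each "scan" is a bag of patch features; a bag is positive
# (weak label, analogous to a PFT-derived label) iff it contains at least
# one emphysema-like instance.
def make_bag(positive, n_inst=20, d=5):
    X = rng.normal(0.0, 1.0, size=(n_inst, d))
    if positive:
        X[: n_inst // 4] += 3.0          # a few truly positive instances
    return X

bags = [make_bag(i % 2 == 1) for i in range(40)]
bag_labels = np.array([i % 2 for i in range(40)])

# miSVM-style alternation: instances start with their bag's label.
X = np.vstack(bags)
y = np.concatenate([np.full(len(b), l) for b, l in zip(bags, bag_labels)])
clf = SVC(kernel="linear")
for _ in range(5):
    clf.fit(X, y)
    scores = clf.decision_function(X)
    y_new, start = y.copy(), 0
    for b, l in zip(bags, bag_labels):
        end = start + len(b)
        if l == 1:                        # relabel only positive bags
            inst = (scores[start:end] > 0).astype(int)
            if inst.sum() == 0:           # enforce >= 1 positive per bag
                inst[np.argmax(scores[start:end])] = 1
            y_new[start:end] = inst
        start = end
    if np.array_equal(y_new, y):
        break
    y = y_new

# Bag prediction: positive if any instance is classified positive.
pred = np.array([int((clf.decision_function(b) > 0).any()) for b in bags])
acc = (pred == bag_labels).mean()
print(f"bag-level accuracy: {acc:.2f}")
```

The same trained instance classifier yields both outputs described in the abstract: local (instance-level) labels for emphysema regions and a max-rule bag label for overall diagnosis.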
Kernel and Classifier Level Fusion for Image Classification.
Automatic understanding of visual information is one of the main requirements for a complete artificial intelligence system and an essential component of autonomous robots. State-of-the-art image recognition approaches are based on different local descriptors, each capturing some properties of the image, such as intensity, color, and texture. Each set of local descriptors is represented by a codebook and gives rise to a separate feature channel. For classification, the feature channels are combined using multiple kernel learning (MKL), early fusion, or classifier-level fusion approaches. Due to the importance of complementary information in fusion techniques, there is an increasing demand for diverse feature channels. The first part of the thesis focuses on ways to encode information from images that is complementary to the state-of-the-art local features. To address this issue, we present a novel image representation which can encode the structure of an object and propose three descriptors based on this representation. In state-of-the-art recognition systems the kernels are often computed independently of each other and may thus be highly informative yet redundant. Proper selection and fusion of the kernels is therefore crucial to maximize performance and to address efficiency issues in visual recognition applications. We address this issue in the second part of the thesis, where we propose novel techniques to fuse feature channels for object and pattern recognition. We present an extensive evaluation of the fusion methods on four object recognition datasets and achieve state-of-the-art results on all of them. We also present results on four bioinformatics datasets to demonstrate that the proposed fusion methods work for a variety of pattern recognition problems, provided that we have multiple feature channels.
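Kernel-level fusion of feature channels can be sketched in its simplest fixed-weight form: compute one kernel per channel and average them before training a single SVM (uniform weighting is the trivial special case of MKL, which learns the weights instead). The two synthetic "channels" below, each carrying complementary halves of the class signal, are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Two toy feature channels (stand-ins for e.g. colour and texture codebook
# histograms); each carries partial, complementary class information.
n = 300
y = rng.integers(0, 2, size=n)
ch1 = rng.normal(size=(n, 8)) + 1.5 * y[:, None] * (np.arange(8) < 4)
ch2 = rng.normal(size=(n, 8)) + 1.5 * y[:, None] * (np.arange(8) >= 4)

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Kernel-level fusion: uniform average of the per-channel RBF kernels.
K = 0.5 * rbf_kernel(ch1) + 0.5 * rbf_kernel(ch2)
svm = SVC(kernel="precomputed")
svm.fit(K[np.ix_(idx_tr, idx_tr)], y[idx_tr])
acc_fused = svm.score(K[np.ix_(idx_te, idx_tr)], y[idx_te])
print(f"fused-kernel accuracy: {acc_fused:.2f}")
```

Classifier-level fusion would instead train one SVM per channel and combine their decision scores; MKL sits between the two, learning the kernel weights jointly with the classifier.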