2D Face Recognition System Based on Selected Gabor Filters and Linear Discriminant Analysis LDA
We present a new approach to 2D face recognition. The method extracts
face-image features using a subset of non-correlated, orthogonal Gabor
filters instead of the whole Gabor filter bank, then compresses the
output feature vector using Linear Discriminant Analysis (LDA). The face image
is first enhanced with a multi-stage image-processing technique that normalizes
it and compensates for illumination variation. Experimental results show that the
proposed system achieves both effective dimensionality reduction and good
recognition performance compared to the complete Gabor filter bank. The system
has been tested on the CASIA, ORL and Cropped YaleB 2D face image databases and
achieved an average recognition rate of 98.9%.
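A minimal sketch of the filter-bank stage described above, with purely illustrative parameters (the paper's actual criterion for selecting the non-correlated, orthogonal subset, and its LDA compression step, are not reproduced here):

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lambd, gamma=0.5, psi=0.0):
    # Real part of a 2D Gabor filter; parameter values are hypothetical,
    # not the subset selected in the paper.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / lambd + psi)

def gabor_features(image, kernels):
    # Convolve the image with each selected kernel (via FFT) and
    # concatenate the response magnitudes into one feature vector.
    feats = []
    for k in kernels:
        pad = np.zeros_like(image, dtype=float)
        pad[:k.shape[0], :k.shape[1]] = k
        resp = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad)))
        feats.append(np.abs(resp).ravel())
    return np.concatenate(feats)
```

The resulting vector would then be projected to a low-dimensional discriminative subspace, e.g. with scikit-learn's `LinearDiscriminantAnalysis`, as the abstract's compression step suggests.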
Few-Shot Single-View 3-D Object Reconstruction with Compositional Priors
The impressive performance of deep convolutional neural networks in
single-view 3D reconstruction suggests that these models perform non-trivial
reasoning about the 3D structure of the output space. However, recent work has
challenged this belief, showing that complex encoder-decoder architectures
perform similarly to nearest-neighbor baselines or simple linear decoder models
that exploit large amounts of per-category data in standard benchmarks. On the
other hand, settings where 3D shape must be inferred for new categories from few
examples are more natural and require models that generalize across shapes. In
this work we demonstrate experimentally that naive baselines do not apply when
the goal is to learn to reconstruct novel objects from very few examples, and
that in a \emph{few-shot} learning setting the network must learn concepts
that can be applied to new categories, avoiding rote memorization. To address
deficiencies in existing approaches to this problem, we propose three
approaches that efficiently integrate a class prior into a 3D reconstruction
model, allowing it to account for intra-class variability and imposing an implicit
compositional structure that the model should learn. Experiments on the popular
ShapeNet database demonstrate that our method significantly outperforms existing
baselines on this task in the few-shot setting.
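Purely as illustration of the idea of conditioning a reconstruction decoder on a class prior (the paper's three specific integration schemes are not detailed in the abstract and are not reproduced here), one generic option is to concatenate a class-prototype embedding, e.g. the mean latent of the few support shapes, with the image encoding before decoding:

```python
import numpy as np

def class_prototype(support_latents):
    # Mean of the few-shot support embeddings for the novel class
    # (a simple stand-in for a learned class prior).
    return support_latents.mean(axis=0)

def decode(image_latent, prototype, W, b):
    # Hypothetical single linear decoding layer: concatenate the image
    # latent with the class prototype, then map to a flattened 4x4x4
    # occupancy grid squashed to (0, 1) with a sigmoid.
    z = np.concatenate([image_latent, prototype])
    logits = W @ z + b
    return 1.0 / (1.0 + np.exp(-logits))
```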
A model of brain morphological changes related to aging and Alzheimer's disease from cross-sectional assessments
In this study we propose a deformation-based framework to jointly model the
influence of aging and Alzheimer's disease (AD) on the brain's morphological
evolution. Our approach combines a spatio-temporal description of both
processes into a generative model. A reference morphology is deformed along
specific trajectories to match subject-specific morphologies. It is used to
define two imaging progression markers: 1) a morphological age and 2) a disease
score. These markers can be computed locally in any brain region. The approach
is evaluated on brain structural magnetic resonance images (MRI) from the ADNI
database. The generative model is first estimated on a control population;
then, for each subject, the markers are computed for each acquisition. The
longitudinal evolution of these markers is then studied in relation to the
clinical diagnosis of the subjects and used to generate possible morphological
evolutions. In the model, the morphological changes associated with normal aging
are mainly found around the ventricles, while the Alzheimer's disease specific
changes are located more in the temporal lobe and the hippocampal area. The
statistical analysis of these markers highlights differences between clinical
conditions even though the inter-subject variability is quite high. In this
context, the model can be used to generate plausible morphological trajectories
associated with the disease. Our method gives two interpretable scalar imaging
biomarkers assessing the effects of aging and disease on brain morphology at
the individual and population level. These markers confirm an acceleration of
apparent aging for Alzheimer's subjects and can help discriminate clinical
conditions even in prodromal stages. More generally, the joint modeling of
normal and pathological evolutions shows promising results for describing
age-related brain diseases over long time scales.
Comment: NeuroImage, Elsevier, in press.
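A toy sketch of the two-marker idea, assuming the subject's deformation field can be decomposed onto an aging direction and a disease-specific direction estimated from the control population (names and the least-squares decomposition are illustrative, not the authors' actual registration-based estimation):

```python
import numpy as np

def fit_markers(deformation, aging_dir, disease_dir):
    # Illustrative least-squares projection:
    #   deformation ≈ morphological_age * aging_dir + disease_score * disease_dir
    # The two coefficients play the role of the abstract's morphological
    # age and disease score for one acquisition.
    A = np.stack([aging_dir.ravel(), disease_dir.ravel()], axis=1)
    coef, *_ = np.linalg.lstsq(A, deformation.ravel(), rcond=None)
    morphological_age, disease_score = coef
    return morphological_age, disease_score
```

Computing the pair per region (rather than over the whole field) mirrors the abstract's point that the markers can be evaluated locally in any brain region.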
Visual Representations: Defining Properties and Deep Approximations
Visual representations are defined in terms of minimal sufficient statistics
of visual data, for a class of tasks, that are also invariant to nuisance
variability. Minimal sufficiency guarantees that we can store a representation
in lieu of raw data with the smallest complexity and no performance loss on the
task at hand. Invariance guarantees that the statistic is constant with respect
to uninformative transformations of the data. We derive analytical expressions
for such representations and show that they are related to feature descriptors
commonly used in computer vision, as well as to convolutional neural networks.
This link highlights the assumptions and approximations tacitly made by
these methods and explains empirical practices such as clamping, pooling and
joint normalization.
Comment: UCLA CSD TR140023, Nov. 12, 2014; revised April 13, 2015, November
13, 2015, February 28, 201
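The three defining properties named in the abstract can be written compactly in standard information-theoretic notation (the symbols below are assumed for illustration, not fixed by the abstract):

```latex
% Notation assumed: x = raw visual data, y = task variable,
% G = nuisance group, I = mutual information, H = entropy,
% \phi = the representation.
\begin{align*}
  \text{Sufficiency:}\quad & I(\phi(x);\, y) = I(x;\, y) \\
  \text{Minimality:}\quad  & \phi \in \arg\min_{\psi\ \text{sufficient}} H(\psi(x)) \\
  \text{Invariance:}\quad  & \phi(g \cdot x) = \phi(x) \quad \forall\, g \in G
\end{align*}
```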