Unconstrained Face Recognition
Although face recognition has been actively studied over the past
decade, the state-of-the-art recognition systems yield
satisfactory performance only under controlled scenarios and
recognition accuracy degrades significantly when confronted with
unconstrained situations due to variations such as illumination,
pose, etc. In this dissertation, we propose novel approaches that
are able to recognize human faces under unconstrained situations.
Part I presents algorithms for face recognition under
illumination/pose variations. For face recognition across
illuminations, we present a generalized photometric stereo
approach by modeling all face appearances belonging to all humans
under all lighting conditions. Using a linear generalization, we
achieve a factorization of the observation matrix consisting of
face appearances of different individuals, each under a different
illumination. We resolve ambiguities in factorization using
surface integrability and symmetry constraints. In addition, an
illumination-invariant identity descriptor is provided to perform
face recognition across illuminations. We further extend the
generalized photometric stereo approach to an illuminating light
field approach, which is able to recognize faces under pose and
illumination variations.
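The core factorization step can be sketched as a rank-3 decomposition of the observation matrix. The toy example below uses a synthetic Lambertian-style matrix and an SVD; it is a minimal illustration, not the dissertation's full algorithm, which additionally resolves the linear ambiguity via the integrability and symmetry constraints mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observation matrix: each column is one face appearance (a vectorized
# image) under one lighting condition. Under a Lambertian model, appearances
# lie near a low-rank subspace, so M ~ S @ L, where S holds surface/albedo
# structure (pixels x 3) and L holds per-image light directions (3 x images).
n_pixels, n_images = 200, 12
S_true = rng.standard_normal((n_pixels, 3))
L_true = rng.standard_normal((3, n_images))
M = S_true @ L_true

# Rank-3 factorization via SVD. This recovers the subspace only up to an
# invertible 3x3 ambiguity; the dissertation resolves that ambiguity with
# integrability/symmetry constraints, which this sketch omits.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
S_hat = U[:, :3] * s[:3]
L_hat = Vt[:3, :]

residual = np.linalg.norm(M - S_hat @ L_hat) / np.linalg.norm(M)
print(round(residual, 6))  # near 0 for exactly rank-3 data
```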
Face appearance lies in a high-dimensional nonlinear manifold. In
Part II, we introduce machine learning approaches based on
reproducing kernel Hilbert space (RKHS) to capture higher-order
statistical characteristics of the nonlinear appearance manifold.
In particular, we analyze principal components of the RKHS in a
probabilistic manner and compute distances, such as the Chernoff
distance and the Kullback-Leibler divergence, between two Gaussian
densities in RKHS.
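Kernel principal component analysis is the standard route to principal components in an RKHS: center the Gram matrix and take its top eigenvectors. A minimal sketch, assuming an RBF kernel and toy data on a circular manifold (the kernel choice and parameters are hypothetical, and the probabilistic analysis and Chernoff/KL computations of Part II are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(X, gamma=0.5):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

# Toy "appearance" samples on a nonlinear (circular) manifold.
theta = rng.uniform(0, 2 * np.pi, 60)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((60, 2))

K = rbf_kernel(X)
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n   # centering in feature space
Kc = H @ K @ H

# Principal components of the RKHS correspond to the top eigenvectors of the
# centered Gram matrix (eigh returns ascending order, so reverse it).
evals, evecs = np.linalg.eigh(Kc)
evals, evecs = evals[::-1], evecs[:, ::-1]
explained = evals[:2].sum() / evals[evals > 0].sum()
print(round(explained, 3))  # fraction of variance in the top two components
```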
Part III is on face tracking and recognition from video. We first
present an enhanced tracking algorithm that models online
appearance changes in a video sequence using a mixture model and
produces good tracking results in various challenging scenarios.
For video-based face recognition, while conventional approaches
treat tracking and recognition separately, we present a
simultaneous tracking-and-recognition approach. This simultaneous
approach, solved using the sequential importance sampling
algorithm, improves accuracy in both tracking and recognition.
Finally, we propose a unifying framework called probabilistic
identity characterization able to perform face recognition under
registration/illumination/pose variation and from a still image,
a group of still images, or a video sequence.
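Sequential importance sampling (particle filtering) underlies the simultaneous tracking-and-recognition approach. A minimal 1-D tracking sketch with a random-walk motion model and a Gaussian observation likelihood (toy parameters; the dissertation's joint state over position and identity is not modeled here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hidden state follows a random walk; observations are noisy measurements.
T, n_particles = 30, 500
true_x = np.cumsum(rng.standard_normal(T))      # hidden trajectory
obs = true_x + 0.5 * rng.standard_normal(T)     # noisy measurements

particles = np.zeros(n_particles)
estimates = []
for t in range(T):
    particles = particles + rng.standard_normal(n_particles)  # propagate
    # Importance weights from the Gaussian observation likelihood.
    w = np.exp(-0.5 * ((obs[t] - particles) / 0.5) ** 2)
    w /= w.sum()
    estimates.append(np.dot(w, particles))      # posterior-mean estimate
    # Resample to combat weight degeneracy.
    particles = rng.choice(particles, size=n_particles, p=w)

rmse = np.sqrt(np.mean((np.array(estimates) - true_x) ** 2))
print(round(rmse, 3))  # should be close to the observation noise level
```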
Graph-based classification of multiple observation sets
We consider the problem of classification of an object given multiple
observations that possibly include different transformations. The possible
transformations of the object generally span a low-dimensional manifold in the
original signal space. We propose to take advantage of this manifold structure
for the effective classification of the object represented by the observation
set. In particular, we design a low complexity solution that is able to exploit
the properties of the data manifolds with a graph-based algorithm. Hence, we
formulate the computation of the unknown label matrix as a smoothing process on
the manifold under the constraint that all observations represent an object of
one single class. This yields a discrete optimization problem, which can be
solved by an efficient and low complexity algorithm. We demonstrate the
performance of the proposed graph-based algorithm in the classification of sets
of multiple images. Moreover, we show its high potential in video-based face
recognition, where it outperforms state-of-the-art solutions that fall short of
exploiting the manifold structure of the face image data sets.
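A label-smoothing sketch in the spirit of this graph-based formulation: propagate labels over a Gaussian-weighted graph, then pool the scores of the observation set to enforce the constraint that all observations share one class. This uses a standard closed-form smoothing scheme rather than the paper's exact discrete algorithm, and all data and parameters are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two labeled reference classes plus an unlabeled observation set that must
# share a single class (a toy stand-in for multiple views of one object).
class_a = rng.normal(0.0, 0.3, (10, 2))
class_b = rng.normal(3.0, 0.3, (10, 2))
queries = rng.normal(0.0, 0.3, (5, 2))      # all drawn from class A
X = np.vstack([class_a, class_b, queries])

# Gaussian-weighted graph and symmetric normalization.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)
np.fill_diagonal(W, 0.0)
Dinv = np.diag(1.0 / np.sqrt(W.sum(1)))
S = Dinv @ W @ Dinv

# Label matrix: rows 0-9 are class A, 10-19 class B, queries unlabeled.
Y = np.zeros((25, 2))
Y[:10, 0] = 1.0
Y[10:20, 1] = 1.0

# Closed-form label smoothing, then enforce the single-class constraint by
# pooling the smoothed scores over the whole observation set.
alpha = 0.9
F = np.linalg.solve(np.eye(25) - alpha * S, Y)
label = int(np.argmax(F[20:].sum(0)))
print(label)  # 0 => the whole set is assigned class A
```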
Damage to Association Fiber Tracts Impairs Recognition of the Facial Expression of Emotion
An array of cortical and subcortical structures have been implicated in the recognition of emotion from facial expressions. It remains unknown how these regions communicate as parts of a system to achieve recognition, but white matter tracts are likely critical to this process. We hypothesized that (1) damage to white matter tracts would be associated with recognition impairment and (2) the degree of disconnection of association fiber tracts [inferior longitudinal fasciculus (ILF) and/or inferior fronto-occipital fasciculus (IFOF)] connecting the visual cortex with emotion-related regions would negatively correlate with recognition performance. One hundred three patients with focal, stable brain lesions mapped onto a reference brain were tested on their recognition of six basic emotional facial expressions. Association fiber tracts from a probabilistic atlas were coregistered to the reference brain. Parameters estimating disconnection were entered in a general linear model to predict emotion recognition impairments, accounting for lesion size and cortical damage. Damage associated with the right IFOF significantly predicted an overall facial emotion recognition impairment and specific impairments for sadness, anger, and fear. One subject had a pure white matter lesion in the location of the right IFOF and ILF. He presented specific, unequivocal emotion recognition impairments. Additional analysis suggested that impairment in fear recognition can result from damage to the IFOF and not the amygdala. Our findings demonstrate the key role of white matter association tracts in the recognition of the facial expression of emotion and identify specific tracts that may be most critical
Automatic Classification of Human Epithelial Type 2 Cell Indirect Immunofluorescence Images using Cell Pyramid Matching
This paper describes a novel system for automatic classification of images
obtained from Anti-Nuclear Antibody (ANA) pathology tests on Human Epithelial
type 2 (HEp-2) cells using the Indirect Immunofluorescence (IIF) protocol. The
IIF protocol on HEp-2 cells has been the hallmark method to identify the
presence of ANAs, due to its high sensitivity and the large range of antigens
that can be detected. However, it suffers from numerous shortcomings, such as
being subjective as well as time and labour intensive. Computer Aided
Diagnostic (CAD) systems have been developed to address these problems, which
automatically classify a HEp-2 cell image into one of its known patterns (e.g.
speckled, homogeneous). Most of the existing CAD systems use handpicked
features to represent a HEp-2 cell image, which may only work in limited
scenarios. We propose a novel automatic cell image classification method termed
Cell Pyramid Matching (CPM), which is comprised of regional histograms of
visual words coupled with the Multiple Kernel Learning framework. We present a
study of several variations of generating histograms and show the efficacy of
the system on two publicly available datasets: the ICPR HEp-2 cell
classification contest dataset and the SNPHEp-2 dataset.
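Regional histograms of visual words, the building block CPM aggregates, can be sketched as a spatial pyramid over a precomputed visual-word map. The grid sizes and vocabulary size below are hypothetical, and the Multiple Kernel Learning stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)

n_words = 8
# Toy "image": a grid holding the visual-word index assigned to each patch.
word_map = rng.integers(0, n_words, (16, 16))

def pyramid_histogram(word_map, levels=(1, 2)):
    # Concatenate L1-normalized word histograms over a g x g grid of regions
    # at each pyramid level.
    feats = []
    h, w = word_map.shape
    for g in levels:
        for i in range(g):
            for j in range(g):
                cell = word_map[i*h//g:(i+1)*h//g, j*w//g:(j+1)*w//g]
                hist = np.bincount(cell.ravel(), minlength=n_words).astype(float)
                feats.append(hist / hist.sum())
    return np.concatenate(feats)

feat = pyramid_histogram(word_map)
print(feat.shape)  # (1 + 4 regions) x 8 words = 40 dimensions
```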
Multimodal person recognition for human-vehicle interaction
Next-generation vehicles will undoubtedly feature biometric person recognition as part of an effort to improve the driving experience. Today's technology prevents such systems from operating satisfactorily under adverse conditions. A proposed framework for achieving person recognition successfully combines different biometric modalities, borne out in two case studies
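Score-level fusion is one simple way to combine biometric modalities in such a framework. A minimal sketch with hypothetical face and voice match scores and weights (illustrative only, not the article's actual framework):

```python
import numpy as np

def fuse(scores, weights):
    # Weighted-sum score fusion with weights normalized to sum to 1.
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(scores @ (weights / weights.sum()))

# Hypothetical per-modality match scores for two enrolled drivers.
face_scores = {"alice": 0.9, "bob": 0.4}
voice_scores = {"alice": 0.7, "bob": 0.8}
fused = {who: fuse([face_scores[who], voice_scores[who]], [0.6, 0.4])
         for who in face_scores}
best = max(fused, key=fused.get)
print(best)  # alice: 0.9*0.6 + 0.7*0.4 = 0.82 vs bob at 0.56
```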