Beyond Gauss: Image-Set Matching on the Riemannian Manifold of PDFs
State-of-the-art image-set matching techniques typically implicitly model
each image-set with a Gaussian distribution. Here, we propose to go beyond
these representations and model image-sets as probability distribution
functions (PDFs) using kernel density estimators. To compare and match
image-sets, we exploit Csiszar f-divergences, which bear strong connections to
the geodesic distance defined on the space of PDFs, i.e., the statistical
manifold. Furthermore, we introduce valid positive definite kernels on the
statistical manifolds, which let us make use of more powerful classification
schemes to match image-sets. Finally, we introduce a supervised dimensionality
reduction technique that learns a latent space where f-divergences reflect the
class labels of the data. Our experiments on diverse problems, such as
video-based face recognition and dynamic texture classification, demonstrate the
benefits of our approach over state-of-the-art image-set matching methods.
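As a concrete illustration of the idea, the sketch below models two toy "image sets" with SciPy kernel density estimators and compares them via a Monte Carlo estimate of the KL divergence, one member of the Csiszár f-divergence family. The data, dimensions, and sample counts are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Two toy "image sets": rows are feature vectors extracted from images.
set_a = rng.normal(0.0, 1.0, size=(200, 2))
set_b = rng.normal(0.5, 1.2, size=(200, 2))

# Model each image set as a PDF with a kernel density estimator.
p = gaussian_kde(set_a.T)  # gaussian_kde expects shape (dims, n_samples)
q = gaussian_kde(set_b.T)

def kl_divergence_mc(p, q, n=500):
    # Monte Carlo estimate of KL(p || q), a Csiszar f-divergence,
    # using samples drawn from p.
    x = p.resample(n)
    return float(np.mean(np.log(p(x) / q(x))))

d_ab = kl_divergence_mc(p, q)  # dissimilarity between the two image sets
d_aa = kl_divergence_mc(p, p)  # exactly zero: log(p/p) = 0 pointwise
```

In practice one would use a symmetrized divergence (or the kernels on the statistical manifold described above) so that the comparison is order-independent.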
2D Face Recognition System Based on Selected Gabor Filters and Linear Discriminant Analysis (LDA)
We present a new approach to face recognition. The method extracts 2D face
image features using a subset of non-correlated, orthogonal Gabor filters
instead of the whole Gabor filter bank, then compresses the output feature
vector using Linear Discriminant Analysis (LDA). The face image is first
enhanced with a multi-stage image processing technique to normalize it and
compensate for illumination variation. Experimental results show that the
proposed system achieves both effective dimension reduction and good
recognition performance compared to the complete Gabor filter bank. The system
has been tested on the CASIA, ORL and Cropped YaleB 2D face image databases
and achieved an average recognition rate of 98.9%.
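The pipeline above (selected Gabor filters, then LDA compression) can be sketched as follows. The filter parameters, the three-orientation "selected" bank, and the synthetic two-subject data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gabor_kernel(ksize, sigma, theta, lam):
    # Real part of a 2D Gabor filter; one (scale, orientation) choice
    # out of what would normally be a larger bank.
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

# A small selected subset of filters instead of a full 40-filter bank.
bank = [gabor_kernel(15, 3.0, t, 6.0) for t in (0.0, np.pi / 4, np.pi / 2)]

def features(img):
    # Concatenate downsampled magnitude responses of the selected filters.
    return np.concatenate(
        [np.abs(convolve(img, k))[::4, ::4].ravel() for k in bank])

# Toy data: two "subjects" as synthetic 32x32 images with different means.
rng = np.random.default_rng(0)
X = np.array([features(rng.normal(c, 1.0, (32, 32)))
              for c in (0, 1) for _ in range(10)])
y = np.repeat([0, 1], 10)

# LDA compresses the Gabor feature vector to at most (n_classes - 1) dims.
lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
Z = lda.transform(X)
```

With two classes, LDA yields a single discriminant dimension, which is the "compression" step the abstract refers to.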
Extrinsic Methods for Coding and Dictionary Learning on Grassmann Manifolds
Sparsity-based representations have recently led to notable results in
various visual recognition tasks. In a separate line of research, Riemannian
manifolds have been shown useful for dealing with features and models that do
not lie in Euclidean spaces. With the aim of building a bridge between the two
realms, we address the problem of sparse coding and dictionary learning over
the space of linear subspaces, which form Riemannian structures known as
Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into
the space of symmetric matrices by an isometric mapping. This in turn enables
us to extend two sparse coding schemes to Grassmann manifolds. Furthermore, we
propose closed-form solutions for learning a Grassmann dictionary, atom by
atom. Lastly, to handle non-linearity in data, we extend the proposed Grassmann
sparse coding and dictionary learning algorithms through embedding into Hilbert
spaces.
Experiments on several classification tasks (gender recognition, gesture
classification, scene analysis, face recognition, action recognition and
dynamic texture classification) show that the proposed approaches achieve
considerable improvements in discrimination accuracy, in comparison to
state-of-the-art methods such as kernelized Affine Hull Method and
graph-embedding Grassmann discriminant analysis.
Comment: Appearing in International Journal of Computer Vision
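The core construction, embedding the Grassmannian into the space of symmetric matrices via the projection mapping U → UU^T, can be sketched as below. This is a minimal illustration of the embedding and its induced (chordal) distance, not the paper's sparse coding or dictionary learning code; sizes and names are assumptions.

```python
import numpy as np

def orthonormal_basis(A):
    # A point on the Grassmann manifold is represented by an
    # orthonormal basis of the subspace (here via reduced QR).
    q, _ = np.linalg.qr(A)
    return q

def embed(U):
    # Embedding into symmetric matrices: U -> U U^T, the orthogonal
    # projection matrix onto span(U).
    return U @ U.T

def projection_distance(U, V):
    # Chordal distance induced by the embedding: Frobenius norm of
    # the difference of the projection matrices.
    return np.linalg.norm(embed(U) - embed(V), "fro")

rng = np.random.default_rng(0)
U = orthonormal_basis(rng.normal(size=(10, 3)))  # a 3-dim subspace of R^10
V = orthonormal_basis(rng.normal(size=(10, 3)))

# A basis change U -> U R (R orthogonal) spans the same subspace, so the
# embedding, and hence the distance, must be invariant to it.
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
```

Because the embedded points are ordinary symmetric matrices, Euclidean sparse coding machinery (e.g. solving for codes under the Frobenius norm) can be applied directly in the embedded space, which is the bridge the abstract describes.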
Positive Semidefinite Metric Learning Using Boosting-like Algorithms
The success of many machine learning and pattern recognition methods relies
heavily upon the identification of an appropriate distance metric on the input
data. It is often beneficial to learn such a metric from the input training
data, instead of using a default one such as the Euclidean distance. In this
work, we propose a boosting-based technique, termed BoostMetric, for learning a
quadratic Mahalanobis distance metric. Learning a valid Mahalanobis distance
metric requires enforcing the constraint that the matrix parameter to the
metric remains positive definite. Semidefinite programming is often used to
enforce this constraint, but does not scale well and is not easy to implement.
BoostMetric is instead based on the observation that any positive semidefinite
matrix can be decomposed into a linear combination of trace-one rank-one
matrices. BoostMetric thus uses rank-one positive semidefinite matrices as weak
learners within an efficient and scalable boosting-based learning process. The
resulting methods are easy to implement, efficient, and can accommodate various
types of constraints. We extend traditional boosting algorithms in that the
weak learner is a positive semidefinite matrix with trace one and rank one,
rather than a classifier or regressor. Experiments on various datasets
demonstrate that the proposed algorithms compare favorably to those
state-of-the-art methods in terms of classification accuracy and running time.Comment: 30 pages, appearing in Journal of Machine Learning Researc
Disturbance Grassmann Kernels for Subspace-Based Learning
In this paper, we focus on subspace-based learning problems, where data
elements are linear subspaces instead of vectors. To handle this kind of data,
Grassmann kernels were proposed to measure the space structure and used with
classifiers, e.g., Support Vector Machines (SVMs). However, the existing
discriminative algorithms mostly ignore the instability of subspaces, which
can cause classifiers to be misled by disturbed instances. We therefore propose
accounting for potential disturbances of subspaces in the learning process to
obtain more robust classifiers. First, we derive the dual optimization of
linear classifiers with disturbance subject to a known distribution, resulting
in a new kernel, the Disturbance Grassmann (DG) kernel. Second, we investigate
two kinds of disturbance, relevant to the subspace matrix and singular values
of bases, with which we extend the Projection kernel on Grassmann manifolds to
two new kernels. Experiments on action data indicate that the proposed kernels
outperform state-of-the-art subspace-based methods, even under worse
conditions.
Comment: This paper includes 3 figures and 10 pages, and has been accepted to
SIGKDD'18
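For reference, the Projection kernel that the abstract's DG kernels extend can be sketched as below: k(U, V) = ||U^T V||_F^2, the inner product of the embedded projection matrices UU^T and VV^T. This shows only the base kernel, not the disturbance-averaged DG variant; dimensions and names are illustrative.

```python
import numpy as np

def projection_kernel(U, V):
    # Projection (Grassmann) kernel: k(U, V) = ||U^T V||_F^2, i.e. the
    # trace inner product <U U^T, V V^T> of projection matrices.
    return np.linalg.norm(U.T @ V, "fro") ** 2

def gram(subspaces):
    # Kernel (Gram) matrix over a set of subspaces; usable directly
    # with kernel classifiers such as SVMs.
    n = len(subspaces)
    return np.array([[projection_kernel(subspaces[i], subspaces[j])
                      for j in range(n)] for i in range(n)])

# Five random 2-dimensional subspaces of R^8, each given by an
# orthonormal basis from a reduced QR factorization.
rng = np.random.default_rng(0)
subs = [np.linalg.qr(rng.normal(size=(8, 2)))[0] for _ in range(5)]
K = gram(subs)
```

For an orthonormal basis U of a d-dimensional subspace, k(U, U) = ||I_d||_F^2 = d, and the resulting Gram matrix is symmetric positive semidefinite, which is what makes the kernel valid for SVM training.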