Segmented Mixture-of-Gaussian Classification for Hyperspectral Image Analysis
Abstract—The same high dimensionality of hyperspectral imagery that facilitates detection of subtle differences in spectral response due to differing chemical composition also hinders the deployment of traditional statistical pattern-classification procedures, particularly when relatively few training samples are available. Traditional approaches to addressing this issue, which typically employ dimensionality reduction based on either projection or feature selection, are at best suboptimal for hyperspectral classification tasks. A divide-and-conquer algorithm is proposed to exploit the high correlation between successive spectral bands and the resulting block-diagonal correlation structure to partition the hyperspectral space into approximately independent subspaces. Subsequently, dimensionality reduction based on a graph-theoretic locality-preserving discriminant analysis is combined with classification driven by Gaussian mixture models independently in each subspace. The locality-preserving discriminant analysis preserves the potentially multimodal statistical structure of the data, which the Gaussian mixture model classifier learns in the reduced-dimensional subspace. Experimental results demonstrate that the proposed system significantly outperforms traditional classification approaches, even when few training samples are employed. Index Terms—Hyperspectral data, information fusion
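The per-subspace classification step described above can be illustrated with a minimal sketch. This is not the paper's system: it uses synthetic data in place of a hyperspectral subspace, and fits one class-conditional Gaussian per class (the one-component special case of the Gaussian mixture classifier named in the abstract), assigning each sample to the class with the highest log-likelihood.

```python
import numpy as np

# Synthetic 2-class data in a low-dimensional "subspace" (a stand-in
# for the locality-preserving projection described in the abstract).
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(200, 3))
X1 = rng.normal(4.0, 1.0, size=(200, 3))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

def fit_gaussian(Z):
    """Maximum-likelihood mean and covariance for one class."""
    return Z.mean(axis=0), np.cov(Z, rowvar=False)

def log_density(Z, mu, cov):
    """Log of the multivariate normal density at each row of Z."""
    d = Z.shape[1]
    diff = Z - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    maha = np.einsum('ij,jk,ik->i', diff, inv, diff)
    return -0.5 * (maha + logdet + d * np.log(2 * np.pi))

# One Gaussian per class; classify by the larger class log-likelihood.
params = {c: fit_gaussian(X[y == c]) for c in (0, 1)}
scores = np.column_stack([log_density(X, *params[c]) for c in (0, 1)])
pred = scores.argmax(axis=1)
accuracy = (pred == y).mean()
```

Extending this to a true mixture per class (and running it independently in each band-partitioned subspace) recovers the structure the abstract describes.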
On the exact minimization of saturated loss functions for robust regression and subspace estimation
This paper deals with robust regression and subspace estimation and more
precisely with the problem of minimizing a saturated loss function. In
particular, we focus on computational complexity issues and show that an exact
algorithm with polynomial time-complexity with respect to the number of data
can be devised for robust regression and subspace estimation. This result is
obtained by adopting a classification point of view and relating the problems
to the search for a linear model that can approximate the maximal number of
points with a given error. Approximate variants of the algorithm based on
random sampling are also discussed, and experiments show that they offer an
accuracy gain over the traditional RANSAC for a similar algorithmic simplicity. Comment: Pattern Recognition Letters, Elsevier, 201
Multi-View Face Recognition From Single RGBD Models of the Faces
This work takes important steps towards solving the following problem of current interest: Assuming that each individual in a population can be modeled by a single frontal RGBD face image, is it possible to carry out face recognition for such a population using multiple 2D images captured from arbitrary viewpoints? Although the general problem as stated above is extremely challenging, it encompasses subproblems that can be addressed today. The subproblems addressed in this work relate to: (1) Generating a large set of viewpoint-dependent face images from a single RGBD frontal image for each individual; (2) using hierarchical approaches based on view-partitioned subspaces to represent the training data; and (3) based on these hierarchical approaches, using a weighted voting algorithm to integrate the evidence collected from multiple images of the same face as recorded from different viewpoints. We evaluate our methods on three datasets: a dataset of 10 people that we created and two publicly available datasets which include a total of 48 people. In addition to providing important insights into the nature of this problem, our results show that we are able to successfully recognize faces with accuracies of 95% or higher, outperforming existing state-of-the-art face recognition approaches based on deep convolutional neural networks.
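The weighted voting step (3) can be sketched as follows. The scores, weights, and fusion rule here are illustrative assumptions, not the paper's exact scheme: each view contributes its per-identity similarity scores, scaled by a per-view confidence weight, and the fused scores are argmax'd.

```python
import numpy as np

def weighted_vote(view_scores, weights):
    """Fuse per-view identity scores by a confidence-weighted sum.

    view_scores: (n_views, n_identities) similarity scores
    weights:     (n_views,) per-view confidence weights
    Returns the index of the winning identity.
    """
    view_scores = np.asarray(view_scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    fused = (weights[:, None] * view_scores).sum(axis=0)
    return int(fused.argmax())

# Three views voting over four identities; view 1 is most confident.
scores = [[0.2, 0.5, 0.2, 0.1],
          [0.1, 0.7, 0.1, 0.1],
          [0.4, 0.3, 0.2, 0.1]]
winner = weighted_vote(scores, weights=[1.0, 2.0, 0.5])  # identity 1 wins
```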
Parsimonious Mahalanobis Kernel for the Classification of High Dimensional Data
The classification of high dimensional data with kernel methods is considered
in this article. Exploiting the emptiness property of high dimensional
spaces, a kernel based on the Mahalanobis distance is proposed. The computation
of the Mahalanobis distance requires the inversion of a covariance matrix. In
high dimensional spaces, the estimated covariance matrix is ill-conditioned and
its inversion is unstable or impossible. Using a parsimonious statistical
model, namely the High Dimensional Discriminant Analysis model, the specific
signal and noise subspaces are estimated for each considered class making the
inverse of the class specific covariance matrix explicit and stable, leading to
the definition of a parsimonious Mahalanobis kernel. A SVM based framework is
used for selecting the hyperparameters of the parsimonious Mahalanobis kernel
by optimizing the so-called radius-margin bound. Experimental results on three
high dimensional data sets show that the proposed kernel is suitable for
classifying high dimensional data, providing better classification accuracies
than the conventional Gaussian kernel.
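The core construction above — a Mahalanobis-distance kernel whose covariance inverse is kept stable — can be sketched with a shrinkage-regularized covariance, a simpler stand-in (an assumption of this sketch, not the paper's method) for the High Dimensional Discriminant Analysis signal/noise subspace model.

```python
import numpy as np

def mahalanobis_kernel(X, Z, cov, gamma=1.0, shrink=0.1):
    """Gaussian kernel on squared Mahalanobis distances.

    The sample covariance is shrunk toward a scaled identity so its
    inverse exists even when n_samples < n_features (the ill-conditioned
    regime discussed in the abstract).
    """
    d = cov.shape[0]
    reg_cov = (1 - shrink) * cov + shrink * (np.trace(cov) / d) * np.eye(d)
    inv = np.linalg.inv(reg_cov)
    # Squared Mahalanobis distance between every pair (x, z).
    diff = X[:, None, :] - Z[None, :, :]
    d2 = np.einsum('abi,ij,abj->ab', diff, inv, diff)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 10))           # 5 samples in 10 dimensions
cov = np.cov(X, rowvar=False)          # rank-deficient sample covariance
K = mahalanobis_kernel(X, X, cov)      # valid 5x5 kernel matrix
```

In the paper, the class-specific HDDA model makes the inverse explicit per class; the shrinkage here only demonstrates why some form of structured regularization is needed.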