Multiple Data-Dependent Kernel Fisher Discriminant Analysis for Face Recognition
Kernel Fisher discriminant analysis (KFDA) has demonstrated its success in extracting facial features for face recognition. Compared to linear techniques, it can better describe the complex and nonlinear variations of face images. However, a single kernel is not always suitable for face recognition applications that contain data from multiple, heterogeneous sources, such as face images under large variations of pose, illumination, and facial expression. To improve the performance of KFDA in face recognition, a novel algorithm named multiple data-dependent kernel Fisher discriminant analysis (MDKFDA) is proposed in this paper. The constructed multiple data-dependent kernel (MDK) is a combination of several base kernels with a data-dependent kernel constraint on their weights. By solving an optimization problem based on the Fisher criterion and maximizing the margin criterion, the parameters of the data-dependent kernel and the multiple base kernels are jointly optimized. Experimental results on three face databases validate the effectiveness of the proposed algorithm.
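A minimal sketch of the two ingredients described above, assuming binary labels y in {0, 1} and illustrative names throughout: several base RBF kernels are combined with nonnegative weights, and a standard binary kernel Fisher discriminant is solved in the space induced by the combined kernel. The paper's data-dependent kernel construction and its analytic weight optimization are not reproduced; the weights here would have to be chosen externally (e.g., by maximizing the Fisher ratio over a small grid).

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(X, Y, gammas, weights):
    """Weighted combination of base RBF kernels (weights >= 0)."""
    return sum(w * rbf_kernel(X, Y, g) for w, g in zip(weights, gammas))

def kfda_fit(K, y, mu=1e-3):
    """Binary kernel Fisher discriminant in the space induced by K:
    solves (N + mu*I) alpha = m1 - m0, where m_c are the class kernel
    means and N is the within-class scatter in feature space."""
    n = K.shape[0]
    means, N = [], np.zeros((n, n))
    for c in (0, 1):
        idx = np.flatnonzero(y == c)
        Kc = K[:, idx]                                   # n x n_c block
        means.append(Kc.mean(axis=1))
        H = np.eye(len(idx)) - np.full((len(idx),) * 2, 1.0 / len(idx))
        N += Kc @ H @ Kc.T                               # centered class scatter
    return np.linalg.solve(N + mu * np.eye(n), means[1] - means[0])

# Illustrative usage: project test points Xt with the combined kernel.
# K = combined_kernel(X, X, gammas=[0.1, 1.0, 10.0], weights=[0.2, 0.5, 0.3])
# alpha = kfda_fit(K, y)
# scores = combined_kernel(Xt, X, [0.1, 1.0, 10.0], [0.2, 0.5, 0.3]) @ alpha
```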
A Simple Iterative Algorithm for Parsimonious Binary Kernel Fisher Discrimination
By applying recent results in optimization theory, variously known as optimization transfer or majorize/minimize (MM) algorithms, an algorithm for binary kernel Fisher discriminant analysis is introduced that uses a non-smooth penalty on the coefficients to provide a parsimonious solution. The problem is converted into a smooth optimization that can be solved iteratively with no greater overhead than iteratively re-weighted least squares. The result is simple, easily programmed, and is shown to perform, in terms of both accuracy and parsimony, as well as or better than a number of leading machine learning algorithms on two well-studied and substantial benchmarks.
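A hedged sketch of the majorize/minimize idea, under the common least-squares formulation of Fisher discrimination with class-coded targets: the non-smooth L1 penalty is majorized by a quadratic at the current iterate, so each MM step reduces to a re-weighted ridge solve, matching the iteratively re-weighted least-squares overhead the abstract mentions. The paper's exact penalty, majorizer, and bias handling may differ; all names are illustrative. (The same sketch covers the closely related algorithm in the next entry.)

```python
import numpy as np

def sparse_kfd_mm(K, y, lam=1.0, n_iter=50, eps=1e-8):
    """Parsimonious binary kernel Fisher discrimination via MM: the
    least-squares view of FDA (regression on class-coded targets) plus
    an L1 penalty, whose quadratic majorizer at the current iterate
    turns each MM step into a re-weighted ridge solve."""
    n = K.shape[0]
    n1, n0 = (y == 1).sum(), (y == 0).sum()
    t = np.where(y == 1, n / n1, -n / n0)     # class-coded regression targets
    a = np.linalg.solve(K @ K + lam * np.eye(n), K @ t)   # ridge warm start
    for _ in range(n_iter):
        W = lam / (np.abs(a) + eps)           # |a| <= (a^2/|a0| + |a0|)/2
        a = np.linalg.solve(K @ K + np.diag(W), K @ t)
    a[np.abs(a) < 1e-6] = 0.0                 # prune: parsimonious solution
    return a
```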
Parsimonious Kernel Fisher Discrimination
By applying recent results in optimization transfer, a new algorithm for kernel Fisher discriminant analysis is provided that uses a non-smooth penalty on the coefficients to provide a parsimonious solution. The algorithm is simple, easily programmed, and is shown to perform as well as or better than a number of leading machine learning algorithms on a substantial benchmark. It is then applied to a set of extremely small-sample-size problems in virtual screening, where it is found to be less accurate than a currently leading approach but still comparable in a number of cases.
A Bi-level Nonlinear Eigenvector Algorithm for Wasserstein Discriminant Analysis
Much like classical Fisher linear discriminant analysis, Wasserstein discriminant analysis (WDA) is a supervised linear dimensionality reduction method that seeks a projection matrix to maximize the dispersion between different data classes and minimize the dispersion within each class. Unlike its classical counterpart, however, WDA can account for both global and local interconnections between data classes using a regularized Wasserstein distance. WDA is formulated as a bi-level nonlinear trace-ratio optimization. In this paper, we present a bi-level nonlinear eigenvector (NEPv) algorithm, called WDA-nepv. The inner kernel of WDA-nepv, which computes the optimal transport matrix of the regularized Wasserstein distance, is formulated as an NEPv, and the outer kernel, for the trace-ratio optimization, is formulated as another NEPv. Consequently, both kernels can be computed efficiently via self-consistent-field iterations and modern solvers for linear eigenvalue problems. Compared with existing algorithms for WDA, WDA-nepv is derivative-free and surrogate-model-free. The computational efficiency and classification accuracy of WDA-nepv are demonstrated on synthetic and real-life datasets.
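The two inner computations the abstract names, sketched under simplifying assumptions (uniform marginals, symmetric scatter matrices; the actual NEPv formulations of WDA-nepv are not reproduced): a Sinkhorn fixed-point iteration for the entropic-regularized transport plan, and a self-consistent-field iteration for the trace-ratio optimization.

```python
import numpy as np

def sinkhorn_plan(Xi, Xj, reg=0.1, n_iter=200):
    """Entropic-regularized optimal transport plan between two point
    clouds with uniform marginals, via Sinkhorn fixed-point iterations
    (the self-consistent inner solve of the regularized Wasserstein
    distance)."""
    C = ((Xi[:, None, :] - Xj[None, :, :]) ** 2).sum(-1)  # squared-distance cost
    G = np.exp(-C / reg)
    a = np.full(len(Xi), 1.0 / len(Xi))
    b = np.full(len(Xj), 1.0 / len(Xj))
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u = a / (G @ v)
        v = b / (G.T @ u)
    return u[:, None] * G * v[None, :]        # transport plan

def trace_ratio(B, W, p, n_iter=30):
    """Self-consistent-field iteration for max_P tr(P'BP)/tr(P'WP)
    over orthonormal n x p matrices P: alternate an eigenvector solve
    on B - rho*W with an update of the ratio rho."""
    rho = 0.0
    for _ in range(n_iter):
        _, vecs = np.linalg.eigh(B - rho * W)
        P = vecs[:, -p:]                      # top-p eigenvectors
        rho = np.trace(P.T @ B @ P) / np.trace(P.T @ W @ P)
    return P, rho
```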
Neural Class-Specific Regression for face verification
Face verification is a problem approached in the literature mainly using nonlinear class-specific subspace learning techniques. While kernel-based class-specific discriminant analysis has been shown to provide excellent performance in small- and medium-scale face verification problems, its application to today's large-scale problems is difficult due to its training-space and computational requirements. In this paper, generalizing our previous work on kernel-based class-specific discriminant analysis, we show that class-specific subspace learning can be cast as a regression problem. This allows us to derive linear, (reduced) kernel, and neural network-based class-specific discriminant analysis methods using efficient batch and/or iterative training schemes suited to large-scale learning problems. We test the performance of these methods on two datasets describing medium- and large-scale face verification problems.
Comment: 9 pages, 4 figures
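One plausible minimal rendering of the "subspace learning as regression" view, in the linear case only: regress class-centered data onto client/impostor target codes and use the ridge solution as a class-specific discriminant direction. The target coding and centering here are illustrative choices, not the paper's derivation.

```python
import numpy as np

def class_specific_regression(X, y, target_class, lam=1e-2):
    """Linear class-specific discriminant as ridge regression: center
    the data on the target class and regress onto client/impostor
    codes; the solution spans a class-specific discriminant direction."""
    pos = (y == target_class)
    # Clients positive, impostors negative, scaled so each group
    # contributes equally (an illustrative target coding).
    t = np.where(pos, 1.0 / pos.sum(), -1.0 / (~pos).sum())
    Xc = X - X[pos].mean(axis=0)              # center on the class of interest
    w = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]), Xc.T @ t)
    return w                                  # verification score: (x - mean) @ w
```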
Confident Kernel Sparse Coding and Dictionary Learning
In recent years, kernel-based sparse coding (K-SRC) has received particular attention due to its efficient representation of nonlinear data structures in the feature space. Nevertheless, existing K-SRC methods suffer from a lack of consistency between their training and test optimization frameworks. In this work, we propose a novel confident K-SRC and dictionary learning algorithm (CKSC) that focuses on the discriminative reconstruction of the data based on its representation in the kernel space. CKSC reconstructs each data sample via weighted contributions that are confident in its corresponding class of data. We employ novel discriminative terms to apply this scheme to both the training and test frameworks of our algorithm. This specific design increases the consistency of these optimization frameworks and improves the discriminative performance in the recall phase. In addition, CKSC directly employs the supervised information in its dictionary learning framework to enhance the discriminative structure of the dictionary. For empirical evaluation, we apply our CKSC algorithm to multivariate time-series benchmarks such as DynTex++ and UTKinect. Our claims regarding the superior performance of the proposed algorithm are justified by comparing its classification results to state-of-the-art K-SRC algorithms.
Comment: 10 pages, ICDM 2018 conference
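For orientation, a sketch of the plain kernel sparse-coding core that CKSC builds on, without its confidence weights, discriminative terms, or dictionary learning: coding one sample by ISTA, using only the Gram matrix K and the kernel vector k_x (names illustrative).

```python
import numpy as np

def kernel_sparse_code(K, k_x, lam=0.1, n_iter=200):
    """Kernel sparse coding of one sample by ISTA: minimize
    0.5*||phi(x) - Phi(X) s||^2 + lam*||s||_1 in feature space, which
    touches the data only through K = Phi'Phi and k_x = Phi'phi(x)."""
    eta = 1.0 / np.linalg.eigvalsh(K)[-1]     # step size 1/L, L = ||K||_2
    s = np.zeros(K.shape[0])
    for _ in range(n_iter):
        s = s - eta * (K @ s - k_x)           # gradient step on the smooth part
        s = np.sign(s) * np.maximum(np.abs(s) - eta * lam, 0.0)  # soft-threshold
    return s
```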
Nonlinear Supervised Dimensionality Reduction via Smooth Regular Embeddings
The recovery of the intrinsic geometric structure of data collections is an important problem in data analysis. Supervised extensions of several manifold learning approaches have been proposed in recent years. However, existing methods primarily focus on the embedding of the training data, and the generalization of the embedding to initially unseen test data is largely ignored. In this work, we build on recent theoretical results on the generalization performance of supervised manifold learning algorithms. Motivated by these performance bounds, we propose a supervised manifold learning method that computes a nonlinear embedding while constructing a smooth and regular interpolation function that extends the embedding to the whole data space in order to achieve satisfactory generalization. The embedding and the interpolator are learnt jointly such that Lipschitz regularity of the interpolator is imposed while ensuring separation between different classes. Experimental results on several image datasets show that the proposed method outperforms traditional classifiers and comparable supervised dimensionality reduction algorithms in terms of classification accuracy in most settings.
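A minimal sketch of the out-of-sample extension step only, assuming the nonlinear training embedding has already been computed: a smooth RBF interpolator is fitted to the embedding, with a ridge term standing in for the Lipschitz regularity control (the paper learns the embedding and interpolator jointly; this sketch does not).

```python
import numpy as np

def fit_rbf_interpolator(X_train, Y_embed, gamma=1.0, lam=1e-3):
    """Extend a precomputed training embedding Y_embed (n x p) to the
    whole space with a smooth RBF interpolator f(x) = sum_j c_j k(x, x_j).
    The ridge term lam bounds the coefficient norm, which together with
    the kernel bandwidth controls the regularity of f."""
    d2 = ((X_train[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    C = np.linalg.solve(K + lam * np.eye(len(X_train)), Y_embed)
    def embed(X_new):
        # Evaluate f at new points: cross-kernel times coefficients.
        d2n = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2n) @ C
    return embed
```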