Discriminative Features via Generalized Eigenvectors
Representing examples in a way that is compatible with the underlying
classifier can greatly enhance the performance of a learning system. In this
paper we investigate scalable techniques for inducing discriminative features
by taking advantage of simple second order structure in the data. We focus on
multiclass classification and show that features extracted from the generalized
eigenvectors of the class conditional second moments lead to classifiers with
excellent empirical performance. Moreover, these features have attractive
theoretical properties, such as inducing representations that are invariant to
linear transformations of the input. We evaluate classifiers built from these
features on three different tasks, obtaining state-of-the-art results.
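To make the construction concrete, here is a minimal Python sketch of the
idea (not the authors' code): per-class second moment matrices are estimated
from data, and the generalized eigenvectors of a class pair serve as
projection directions. The function names and the ridge regularizer are
illustrative assumptions; `scipy.linalg.eigh` handles the symmetric
generalized eigenproblem.

```python
import numpy as np
from scipy.linalg import eigh

def class_second_moments(X, y):
    """Uncentered per-class second moments E[x x^T | y = c]."""
    return {c: X[y == c].T @ X[y == c] / (y == c).sum()
            for c in np.unique(y)}

def gem_features(X, y, pair=(0, 1), k=3, ridge=1e-6):
    """Project onto the top-k generalized eigenvectors of one class pair.

    Solves C_a v = lam * C_b v (symmetric generalized eigenproblem); the
    resulting directions emphasize second-order structure that separates
    class `a` from class `b`.
    """
    M = class_second_moments(X, y)
    a, b = pair
    d = X.shape[1]
    Ca = M[a] + ridge * np.eye(d)  # ridge keeps both sides well-conditioned
    Cb = M[b] + ridge * np.eye(d)
    vals, V = eigh(Ca, Cb)         # eigenvalues in ascending order
    return X @ V[:, -k:]           # features from the top-k directions
```

For a full multiclass system one would extract such directions for each class
pair and feed the stacked projections to a linear classifier, in the spirit
of the abstract.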
DPCA: Dimensionality Reduction for Discriminative Analytics of Multiple Large-Scale Datasets
Principal component analysis (PCA) has well-documented merits for data
extraction and dimensionality reduction. PCA deals with a single dataset at a
time, and it is challenged when it comes to analyzing multiple datasets. Yet in
certain setups, one wishes to extract the most significant information of one
dataset relative to other datasets. Specifically, the interest may lie in
identifying and extracting features that are specific to a single target
dataset but not to the others. This paper develops a novel approach for such
so-termed discriminative data analysis, and establishes its optimality in the
least-squares (LS) sense under suitable data modeling assumptions. The
criterion reveals linear combinations of variables by maximizing the ratio of
the variance of the target data to that of the remaining datasets. The novel approach
solves a generalized eigenvalue problem by performing SVD just once. Numerical
tests using synthetic and real datasets showcase the merits of the proposed
approach relative to competing alternatives.
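As a minimal illustration of the ratio-of-variances criterion (not the
paper's single-SVD solver), the sketch below solves the equivalent
generalized eigenvalue problem C_t u = lam * C_b u directly with
`scipy.linalg.eigh`; the function name `dpca` and the ridge term are
assumptions made for the example.

```python
import numpy as np
from scipy.linalg import eigh

def dpca(X_target, X_background, k=2, ridge=1e-6):
    """Directions maximizing target variance relative to background variance.

    Solves C_t u = lam * C_b u, the generalized eigenproblem behind the
    ratio-of-variances criterion; larger lam means more target-specific.
    """
    Ct = np.cov(X_target, rowvar=False)       # rows are observations
    Cb = np.cov(X_background, rowvar=False)
    Cb += ridge * np.eye(Cb.shape[0])         # keep the denominator positive definite
    vals, U = eigh(Ct, Cb)                    # ascending generalized eigenvalues
    return X_target @ U[:, -k:]               # top-k discriminative components
```

The single-SVD route mentioned in the abstract presumably amounts to
whitening by the background covariance and taking one SVD of the whitened
target data; the direct eigensolver above expresses the same criterion.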
Kernel Manifold Alignment
We introduce a kernel method for manifold alignment (KEMA) and domain
adaptation that can match an arbitrary number of data sources without needing
corresponding pairs, just a few labeled examples in all domains. KEMA has
interesting properties: 1) it generalizes other manifold alignment methods, 2)
it can align manifolds of very different complexities, performing a sort of
manifold unfolding plus alignment, 3) it can define a domain-specific metric to
cope with multimodal specificities, 4) it can align data spaces of different
dimensionality, 5) it is robust to strong nonlinear feature deformations, and
6) it is invertible in closed form, which allows cross-domain transfer and data
synthesis. We also present a reduced-rank version for computational efficiency
and discuss the generalization performance of KEMA under Rademacher principles
of stability. KEMA exhibits very good performance over competing methods in
synthetic examples, visual object recognition, and facial expression
recognition tasks.
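A deliberately simplified sketch of the flavor of method the abstract
describes (not the authors' KEMA formulation) is given below: two domains
with fully labeled samples, block-diagonal RBF kernels, and a generalized
eigenproblem that contracts same-class pairs while separating
different-class pairs. The geometry-preserving term of true manifold
alignment and the semi-supervised handling of unlabeled samples are
omitted, and all names and the ridge term are assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.metrics.pairwise import rbf_kernel

def align_two_domains(Xs, ys, Xt, yt, k=2, ridge=1e-6):
    """Toy kernelized alignment of a source and a target domain.

    Builds a block-diagonal kernel (one RBF kernel per domain, no
    cross-domain entries) and solves K Ls K a = lam K Ld K a, where
    Ls/Ld are Laplacians of the same-class / different-class graphs;
    the smallest eigenvalues give projections that pull matching
    classes together across domains.
    """
    ns, nt = len(ys), len(yt)
    K = np.zeros((ns + nt, ns + nt))
    K[:ns, :ns] = rbf_kernel(Xs)                   # domain-specific kernels
    K[ns:, ns:] = rbf_kernel(Xt)
    y = np.concatenate([ys, yt])
    Ws = (y[:, None] == y[None, :]).astype(float)  # same-class adjacency
    Wd = 1.0 - Ws                                  # different-class adjacency
    Ls = np.diag(Ws.sum(axis=1)) - Ws              # graph Laplacians
    Ld = np.diag(Wd.sum(axis=1)) - Wd
    A = K @ Ls @ K + ridge * np.eye(ns + nt)
    B = K @ Ld @ K + ridge * np.eye(ns + nt)
    vals, Alpha = eigh(A, B)                       # ascending eigenvalues
    return K @ Alpha[:, :k]                        # shared latent coordinates
```

The rows of the returned matrix place samples from both domains in a shared
latent space, so a classifier trained on projected source samples can be
applied directly to projected target samples.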