Non-Redundant Spectral Dimensionality Reduction
Spectral dimensionality reduction algorithms are widely used in numerous
domains, including recognition, segmentation, tracking, and visualization.
However, despite their popularity, these algorithms suffer from a major
limitation known as the "repeated eigen-directions" phenomenon. That is, many
of the embedding coordinates they produce typically capture the same direction
along the data manifold. This leads to redundant and inefficient
representations that do not reveal the true intrinsic dimensionality of the
data. In this paper, we propose a general method for avoiding redundancy in
spectral algorithms. Our approach replaces the orthogonality constraints
underlying those methods with unpredictability constraints.
Specifically, we require that each embedding coordinate be unpredictable (in
the statistical sense) from all previous ones. We prove that these constraints
necessarily prevent redundancy, and provide a simple technique to incorporate
them into existing methods. As we illustrate on challenging high-dimensional
scenarios, our approach produces significantly more informative and compact
representations, which improve visualization and classification tasks.
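
The unpredictability idea lends itself to a simple greedy construction. The sketch below is a minimal illustration, not the paper's algorithm: it builds a pool of spectral coordinates and keeps only those that a k-NN regressor cannot predict from the coordinates already selected. The candidate pool size, the k-NN predictor, and the R-squared threshold are all illustrative assumptions.

    # Hedged sketch: greedy selection of non-redundant spectral coordinates.
    # The r2_max threshold and the k-NN predictor are illustrative choices,
    # not the paper's exact formulation of unpredictability.
    import numpy as np
    from sklearn.manifold import SpectralEmbedding
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.metrics import r2_score

    def non_redundant_embedding(X, n_components=2, n_candidates=10, r2_max=0.9):
        # Compute a pool of candidate spectral coordinates (eigenvectors).
        candidates = SpectralEmbedding(n_components=n_candidates).fit_transform(X)
        selected = [0]  # the leading coordinate is always kept
        for j in range(1, n_candidates):
            if len(selected) == n_components:
                break
            # Try to predict candidate j from the coordinates chosen so far.
            prev = candidates[:, selected]
            reg = KNeighborsRegressor(n_neighbors=10).fit(prev, candidates[:, j])
            r2 = r2_score(candidates[:, j], reg.predict(prev))
            # Keep the coordinate only if it is (approximately) unpredictable.
            if r2 < r2_max:
                selected.append(j)
        return candidates[:, selected]
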
Sparse canonical correlation analysis from a predictive point of view
Canonical correlation analysis (CCA) describes the associations between two
sets of variables by maximizing the correlation between linear combinations of
the variables in each data set. However, in high-dimensional settings where the
number of variables exceeds the sample size or when the variables are highly
correlated, traditional CCA is no longer appropriate. This paper proposes a
method for sparse CCA. Sparse estimation produces linear combinations of only a
subset of variables from each data set, thereby increasing the interpretability
of the canonical variates. We consider the CCA problem from a predictive point
of view and recast it into a regression framework. By combining an alternating
regression approach with a lasso penalty, we induce sparsity in the
canonical vectors. We compare the performance with other sparse CCA techniques
in different simulation settings and illustrate its usefulness on a genomic
data set.
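
A minimal sketch of the alternating-regression idea follows, under assumptions not fixed by the abstract: a single canonical pair, a fixed lasso penalty alpha, and a fixed iteration count. The paper's exact normalization and stopping rule may differ.

    # Hedged sketch of sparse CCA via alternating lasso regressions.
    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_cca(X, Y, alpha=0.05, n_iter=50):
        # Initialize the canonical variate on the Y side at random.
        v = np.random.default_rng(0).standard_normal(Y.shape[1])
        eta = Y @ v
        eta /= np.linalg.norm(eta) + 1e-12
        for _ in range(n_iter):
            # Regress the current Y-variate on X with an l1 penalty -> sparse u.
            u = Lasso(alpha=alpha).fit(X, eta).coef_
            xi = X @ u
            xi /= np.linalg.norm(xi) + 1e-12
            # Regress the current X-variate on Y with an l1 penalty -> sparse v.
            v = Lasso(alpha=alpha).fit(Y, xi).coef_
            eta = Y @ v
            eta /= np.linalg.norm(eta) + 1e-12
        return u, v  # sparse canonical vectors
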
A review of domain adaptation without target labels
Domain adaptation has become a prominent problem setting in machine learning
and related fields. This review asks the question: how can a classifier learn
from a source domain and generalize to a target domain? We present a
categorization of approaches, divided into what we refer to as sample-based,
feature-based, and inference-based methods. Sample-based methods focus on
weighting individual observations during training based on their importance to
the target domain. Feature-based methods revolve around mapping, projecting,
and representing features such that a source classifier performs well on the
target domain. Inference-based methods incorporate adaptation into the
parameter estimation procedure, for instance through constraints on the
optimization procedure. Additionally, we review a number of conditions that
allow for formulating bounds on the cross-domain generalization error. Our
categorization highlights recurring ideas and raises questions important to
further research.
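
As an example of the sample-based family, one common way to obtain importance weights is to train a discriminator between source and target inputs and use its odds as a density-ratio estimate. The sketch below assumes this logistic-discriminator variant; it is one instance of the family, not the review's sole method.

    # Hedged sketch of sample-based adaptation via importance weighting.
    # A logistic source-vs-target discriminator estimates the density ratio
    # p_target(x) / p_source(x), up to the source/target size ratio.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def importance_weights(X_source, X_target):
        # Label source as 0 and target as 1, fit a domain discriminator.
        X = np.vstack([X_source, X_target])
        d = np.r_[np.zeros(len(X_source)), np.ones(len(X_target))]
        disc = LogisticRegression(max_iter=1000).fit(X, d)
        p = disc.predict_proba(X_source)[:, 1]
        # Odds of "target" given x approximate the density ratio.
        return p / (1.0 - p + 1e-12)

    # Usage: weight source samples when fitting the label classifier.
    # clf = LogisticRegression().fit(
    #     X_source, y_source,
    #     sample_weight=importance_weights(X_source, X_target))
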