Eigenvector continuation with subspace learning
A common challenge faced in quantum physics is finding the extremal
eigenvalues and eigenvectors of a Hamiltonian matrix in a vector space so large
that linear algebra operations on general vectors are not possible. Numerous
efficient methods have been developed for this task, but they generally fail
when a control parameter in the Hamiltonian matrix exceeds some threshold
value. In this work we present a new technique called eigenvector continuation
that can extend the reach of these methods. The key insight is that while an
eigenvector resides in a linear space with enormous dimensions, the eigenvector
trajectory generated by smooth changes of the Hamiltonian matrix is well
approximated by a very low-dimensional manifold. We prove this statement using
analytic function theory and propose an algorithm to solve for the extremal
eigenvectors. We benchmark the method using several examples from quantum
many-body theory.
Comment: Version to appear in Physical Review Letters, 4 + 6 pages (main + supplemental materials), 1 + 6 figures (main + supplemental materials)
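To make the algorithm concrete, here is a minimal sketch of eigenvector continuation in NumPy/SciPy. It assumes a Hamiltonian family of the illustrative form H(c) = H0 + c*H1; the linear parametrization, the sample points, and all names are assumptions for illustration, not specifics from the paper. Eigenvectors computed at a few training values of c span a low-dimensional subspace, and the target Hamiltonian is diagonalized in that subspace via a generalized eigenvalue problem with the basis overlap matrix.

    import numpy as np
    from scipy.linalg import eigh

    def eigenvector_continuation(H0, H1, train_cs, target_c):
        # Collect extremal (lowest) eigenvectors at the training parameter values.
        basis = []
        for c in train_cs:
            _, vecs = np.linalg.eigh(H0 + c * H1)
            basis.append(vecs[:, 0])
        B = np.column_stack(basis)          # subspace basis, shape (N, m), m small

        # Project the target Hamiltonian into the low-dimensional subspace.
        H = H0 + target_c * H1
        H_proj = B.T @ H @ B                # m x m projected Hamiltonian
        N_proj = B.T @ B                    # overlap (Gram) matrix of the basis

        # Generalized eigenvalue problem in the subspace; in practice the basis
        # should be orthogonalized if the training vectors are nearly parallel.
        evals, evecs = eigh(H_proj, N_proj)
        return evals[0], B @ evecs[:, 0]    # approximate extremal pair at target_c

    # Example use: train at couplings where standard methods still converge,
    # then extrapolate past the threshold value:
    # E, v = eigenvector_continuation(H0, H1, train_cs=[0.0, 0.1, 0.2], target_c=1.0)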
Robust Subspace Learning: Robust PCA, Robust Subspace Tracking, and Robust Subspace Recovery
PCA is one of the most widely used dimension reduction techniques. A related
easier problem is "subspace learning" or "subspace estimation". Given
relatively clean data, both are easily solved via singular value decomposition
(SVD). The problem of subspace learning or PCA in the presence of outliers is
called robust subspace learning or robust PCA (RPCA). For long data sequences,
if one tries to use a single lower-dimensional subspace to represent the data,
the required subspace dimension may end up being quite large. For such data, a
better model is to assume that it lies in a low-dimensional subspace that can
change over time, albeit gradually. The problem of tracking such data (and the
subspaces) while being robust to outliers is called robust subspace tracking
(RST). This article provides a magazine-style overview of the entire field of
robust subspace learning and tracking. In particular, solutions for three
problems are discussed in detail: RPCA via sparse+low-rank matrix decomposition
(S+LR), RST via S+LR, and "robust subspace recovery (RSR)". RSR assumes that an
entire data vector is either an outlier or an inlier. The S+LR formulation
instead assumes that outliers occur on only a few data vector indices and hence
are well modeled as sparse corruptions.
Comment: To appear, IEEE Signal Processing Magazine, July 2018
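As a concrete instance of the S+LR formulation, the sketch below solves the principal component pursuit problem, minimize ||L||_* + lambda*||S||_1 subject to L + S = M, with a basic ADMM loop. The weight lambda and step size mu are common default heuristics from the RPCA literature, not values prescribed by this article.

    import numpy as np

    def soft_threshold(X, tau):
        # Proximal operator of the l1 norm (elementwise shrinkage).
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svt(X, tau):
        # Singular value thresholding: proximal operator of the nuclear norm.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def rpca_pcp(M, n_iter=200):
        m, n = M.shape
        lam = 1.0 / np.sqrt(max(m, n))          # standard PCP weight
        mu = m * n / (4.0 * np.abs(M).sum())    # common step-size heuristic
        L = np.zeros_like(M)
        S = np.zeros_like(M)
        Y = np.zeros_like(M)
        for _ in range(n_iter):
            L = svt(M - S + Y / mu, 1.0 / mu)             # low-rank update
            S = soft_threshold(M - L + Y / mu, lam / mu)  # sparse update
            Y = Y + mu * (M - L - S)                      # dual ascent on L+S=M
        return L, S    # low-rank part and sparse outliers, with L + S ≈ M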
Joint & Progressive Learning from High-Dimensional Data for Multi-Label Classification
Despite the fact that nonlinear subspace learning techniques (e.g., manifold
learning) have been successfully applied to data representation, there is still room
for improvement in explainability (explicit mapping), generalization
(out-of-samples), and cost-effectiveness (linearization). To this end, a novel
linearized subspace learning technique is developed in a joint and progressive
way, called the joint and progressive learning strategy (J-Play), with
application to multi-label
classification. J-Play learns high-level, semantically meaningful
feature representations from high-dimensional data by 1) jointly performing
multiple subspace learning and classification to find a latent subspace where
samples are expected to be better classified; 2) progressively learning
multi-coupled projections to linearly approach the optimal mapping bridging the
original space with the most discriminative subspace; 3) locally embedding
manifold structure in each learnable latent subspace. Extensive experiments are
performed to demonstrate the superiority and effectiveness of the proposed
method in comparison with previous state-of-the-art methods.
Comment: Accepted to ECCV 2018
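The joint part of this strategy can be illustrated with a toy sketch that alternates between a closed-form ridge-regression update of a classifier W and a gradient step on a single linear projection P. This collapses J-Play's multi-coupled progressive projections and manifold regularization into one projection, and all hyperparameters (alpha, lr) are illustrative assumptions rather than the paper's settings.

    import numpy as np

    def joint_subspace_classifier(X, Y, d, n_iter=100, alpha=1e-2, lr=1e-4):
        # X: (features, samples); Y: (classes, samples) one-hot label matrix.
        D, n = X.shape
        rng = np.random.default_rng(0)
        P = 0.01 * rng.standard_normal((d, D))   # projection to latent subspace
        for _ in range(n_iter):
            Z = P @ X                            # latent representation
            # Classifier update: ridge regression in the latent subspace.
            W = Y @ Z.T @ np.linalg.inv(Z @ Z.T + alpha * np.eye(d))
            # Projection update: gradient step on ||Y - W P X||_F^2 + alpha ||P||_F^2,
            # driving the subspace toward better classification. Step size lr is
            # illustrative and may need tuning for a given dataset.
            R = Y - W @ Z
            P -= lr * (-2.0 * W.T @ R @ X.T + 2.0 * alpha * P)
        return P, W

    # Prediction: project a test sample, then take the largest regression score:
    # y_hat = np.argmax(W @ (P @ X_test), axis=0)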
On the Sample Complexity of Subspace Learning
A large number of algorithms in machine learning, from principal component
analysis (PCA), and its non-linear (kernel) extensions, to more recent spectral
embedding and support estimation methods, rely on estimating a linear subspace
from samples. In this paper we introduce a general formulation of this problem
and derive novel learning error estimates. Our results rely on natural
assumptions on the spectral properties of the covariance operator associated to
the data distribution, and hold for a wide class of metrics between
subspaces. As special cases, we discuss sharp error estimates for the
reconstruction properties of PCA and spectral support estimation. Key to our
analysis is an operator theoretic approach that has broad applicability to
spectral learning methods.
Comment: Extended version of a conference paper
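The estimation problem the paper analyzes can be stated compactly in code: estimate the top-k eigenspace of the empirical covariance and measure the error with a metric between subspaces, here the operator-norm distance between orthogonal projectors, one member of the class of metrics considered. The generating model in the toy experiment below is an assumption for illustration.

    import numpy as np

    def estimate_subspace(X, k):
        # X: (n_samples, dim). Top-k eigenspace of the empirical covariance (PCA).
        C = X.T @ X / X.shape[0]
        _, vecs = np.linalg.eigh(C)          # eigenvalues in ascending order
        return vecs[:, -k:]                  # columns span the estimated subspace

    def subspace_distance(U, V):
        # Operator-norm distance between orthogonal projectors onto the spans.
        return np.linalg.norm(U @ U.T - V @ V.T, ord=2)

    # Toy experiment: the estimation error shrinks as the sample size grows.
    rng = np.random.default_rng(0)
    dim, k = 50, 3
    U_true, _ = np.linalg.qr(rng.standard_normal((dim, k)))
    for n in (50, 500, 5000):
        X = rng.standard_normal((n, k)) @ U_true.T + 0.1 * rng.standard_normal((n, dim))
        print(n, subspace_distance(estimate_subspace(X, k), U_true))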
Neural Class-Specific Regression for face verification
Face verification is a problem approached in the literature mainly using
nonlinear class-specific subspace learning techniques. While it has been shown
that kernel-based Class-Specific Discriminant Analysis is able to provide
excellent performance in small- and medium-scale face verification problems,
its application in today's large-scale problems is difficult due to its
training-time memory and computational requirements. In this paper, generalizing our
previous work on kernel-based class-specific discriminant analysis, we show
that class-specific subspace learning can be cast as a regression problem. This
allows us to derive linear, (reduced) kernel and neural network-based
class-specific discriminant analysis methods using efficient batch and/or
iterative training schemes, suited for large-scale learning problems. We test
the performance of these methods on two datasets describing medium- and
large-scale face verification problems.
Comment: 9 pages, 4 figures
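The core reformulation, class-specific discriminant analysis cast as regression, can be sketched for the linear case as a ridge regression from samples to class-membership targets. The +1/-1 target coding and the one-dimensional discriminant below are simplifying assumptions, not necessarily the exact coding or subspace dimension used in the paper.

    import numpy as np

    def class_specific_regression(X, is_client, lam=1e-2):
        # X: (features, samples); is_client: boolean mask of client-class samples.
        # Targets separate the client class from all impostor samples.
        t = np.where(is_client, 1.0, -1.0)
        D = X.shape[0]
        # Closed-form ridge regression: min_w ||w^T X - t||^2 + lam ||w||^2.
        w = np.linalg.solve(X @ X.T + lam * np.eye(D), X @ t)
        return w    # class-specific discriminant direction

    # Verification: accept a test face x if w @ x exceeds a threshold tuned on
    # held-out data; the kernel and neural variants described in the paper
    # replace the linear map w^T x with a nonlinear one.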