Randomized Riemannian Preconditioning for Orthogonality Constrained Problems
Optimization problems with (generalized) orthogonality constraints are
prevalent across science and engineering. For example, in computational science
they arise in the symmetric (generalized) eigenvalue problem, in nonlinear
eigenvalue problems, and in electronic structure computations, to name a few.
In statistics and machine learning, they arise, for example, in
canonical correlation analysis and in linear discriminant analysis. In this
article, we consider using randomized preconditioning in the context of
optimization problems with generalized orthogonality constraints. Our proposed
algorithms are based on Riemannian optimization on the generalized Stiefel
manifold equipped with a non-standard preconditioned geometry, which
necessitates development of the geometric components necessary for developing
algorithms based on this approach. Furthermore, we perform asymptotic
convergence analysis of the preconditioned algorithms which help to
characterize the quality of a given preconditioner using second-order
information. Finally, for the problems of canonical correlation analysis and
linear discriminant analysis, we develop randomized preconditioners along with
corresponding bounds on the relevant condition number
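As a concrete baseline for the canonical correlation analysis problem mentioned above, the following numpy sketch solves CCA by the classical whitening reduction to an SVD and checks the generalized orthogonality constraint $u^\top \Sigma_{xx} u = 1$ explicitly. This illustrates the constrained problem the abstract targets, not the paper's randomized-preconditioning algorithm; all data sizes and variable names are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 500, 4, 3
X = rng.standard_normal((n, p))
Y = X[:, :q] + 0.5 * rng.standard_normal((n, q))  # two correlated views

# Sample covariance blocks.
Sxx = X.T @ X / n
Syy = Y.T @ Y / n
Sxy = X.T @ Y / n

def inv_sqrt(S):
    # Inverse square root of a symmetric positive definite matrix.
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

# Whitening reduction: the top singular pair of M gives the leading
# canonical directions, normalized so that u^T Sxx u = v^T Syy v = 1
# (the generalized orthogonality constraint).
M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
U, s, Vt = np.linalg.svd(M)
u = inv_sqrt(Sxx) @ U[:, 0]
v = inv_sqrt(Syy) @ Vt[0]
rho = u @ Sxy @ v  # leading canonical correlation, equals s[0]
```

The whitening step requires forming and inverting $\Sigma_{xx}^{1/2}$ and $\Sigma_{yy}^{1/2}$, which is exactly the expense that randomized preconditioning aims to avoid.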
Dimensionality Reduction for k-Means Clustering and Low Rank Approximation
We show how to approximate a data matrix $A$ with a much smaller
sketch $\tilde{A}$ that can be used to solve a general class of
constrained $k$-rank approximation problems to within $(1+\epsilon)$ error.
Importantly, this class of problems includes $k$-means clustering and
unconstrained low rank approximation (i.e. principal component analysis). By
reducing data points to just $O(k)$ dimensions, our methods generically
accelerate any exact, approximate, or heuristic algorithm for these ubiquitous
problems.
For $k$-means dimensionality reduction, we provide $(1+\epsilon)$ relative
error results for many common sketching techniques, including random row
projection, column selection, and approximate SVD. For approximate principal
component analysis, we give a simple alternative to known algorithms that has
applications in the streaming setting. Additionally, we extend recent work on
column-based matrix reconstruction, giving column subsets that not only `cover'
a good subspace for $A$, but can be used directly to compute this
subspace.
Finally, for $k$-means clustering, we show how to achieve a $(9+\epsilon)$
approximation by Johnson-Lindenstrauss projecting data points to just
$O(\log k / \epsilon^2)$ dimensions. This gives the first result that leverages the
specific structure of $k$-means to achieve dimension independent of input size
and sublinear in $k$.
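A minimal numpy illustration of the generic random-projection route (not the sharper $O(\log k/\epsilon^2)$ construction above): because the $k$-means cost of a fixed partition is determined by squared Euclidean distances, a dense Gaussian JL map preserves that cost up to a small multiplicative factor. The sizes and the random partition below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, k = 300, 1000, 200, 5
A = rng.standard_normal((n, d))

# Dense Gaussian JL map reducing d=1000 to m=200 dimensions.
G = rng.standard_normal((d, m)) / np.sqrt(m)
B = A @ G

def kmeans_cost(P, labels, k):
    # Cost of a fixed partition: squared distances to cluster centroids.
    return sum(((P[labels == j] - P[labels == j].mean(0)) ** 2).sum()
               for j in range(k))

labels = rng.integers(0, k, size=n)  # an arbitrary fixed clustering
ratio = kmeans_cost(B, labels, k) / kmeans_cost(A, labels, k)
# JL distance preservation keeps the cost ratio close to 1, so any
# (approximately) optimal clustering of B is near-optimal for A.
```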
Optimality of the Johnson-Lindenstrauss Lemma
For any integers $n, d \geq 2$ and a wide range of $\epsilon \in (0,1)$, we show
the existence of a set of $n$ vectors $X \subset \mathbb{R}^d$ such that any
embedding $f : X \to \mathbb{R}^m$ satisfying
$(1-\epsilon)\|x-y\|_2^2 \leq \|f(x)-f(y)\|_2^2 \leq (1+\epsilon)\|x-y\|_2^2$ for all $x, y \in X$
must have $m = \Omega(\epsilon^{-2} \log n)$. This lower bound matches the upper bound given by the Johnson-Lindenstrauss
lemma [JL84]. Furthermore, our lower bound holds for nearly the full range of
$\epsilon$ of interest, since there is always an isometric embedding into
dimension $\min(d, n)$ (either the identity map, or projection onto
$\mathrm{span}(X)$).
Previously such a lower bound was only known to hold against linear maps $f$,
and not for such a wide range of parameters [LN16]. The
best previously known lower bound for general $f$ was
$m = \Omega(\epsilon^{-2} \log n / \log(1/\epsilon))$ [Wel74, Lev83, Alo03], which
is suboptimal for any $\epsilon = o(1)$.
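The $m = \Theta(\epsilon^{-2} \log n)$ trade-off can be observed empirically: with a Gaussian map, the worst-case pairwise distortion shrinks roughly like $1/\sqrt{m}$ as the target dimension grows. A small numpy experiment (point counts and dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 500
X = rng.standard_normal((n, d))

def sqdists(P):
    # Pairwise squared Euclidean distances via the Gram matrix.
    G = P @ P.T
    sq = np.diag(G)
    return sq[:, None] + sq[None, :] - 2 * G

def max_distortion(X, m, seed=0):
    # Embed with a random Gaussian map and return the worst multiplicative
    # error over all pairwise squared distances.
    r = np.random.default_rng(seed)
    Y = X @ (r.standard_normal((X.shape[1], m)) / np.sqrt(m))
    iu = np.triu_indices(len(X), 1)
    return np.abs(sqdists(Y)[iu] / sqdists(X)[iu] - 1).max()

# Raising m shrinks the worst-case distortion (roughly like 1/sqrt(m)),
# matching the eps^-2 dependence in the JL bound.
eps_64 = max_distortion(X, 64)
eps_1024 = max_distortion(X, 1024)
```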
Coresets - Methods and History: A Theoretician's Design Pattern for Approximation and Streaming Algorithms
We present a technical survey of state-of-the-art approaches to data reduction and the coreset framework. These include geometric decompositions, gradient methods, random sampling, sketching, and random projections. We further outline their importance for the design of streaming algorithms and give a brief overview of lower bounding techniques.
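The simplest instance of the random-sampling idea surveyed here is a uniform-sampling "coreset": sample $m$ points and re-weight each by $n/m$, so the weighted cost is an unbiased estimate of the full cost for any candidate center. (Practical coresets use sensitivity-based importance sampling for worst-case guarantees; this numpy sketch, with made-up sizes, only shows the re-weighting mechanics.)

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, m = 100_000, 10, 2_000
P = rng.standard_normal((n, d)) + 3.0  # full data set

def cost(points, weights, c):
    # Weighted 1-means cost: sum_i w_i * ||p_i - c||^2
    return (weights * ((points - c) ** 2).sum(axis=1)).sum()

# Uniform-sampling coreset: m sampled points, each re-weighted by n/m.
idx = rng.choice(n, size=m, replace=False)
S, w = P[idx], np.full(m, n / m)

c = np.zeros(d)  # an arbitrary candidate center
rel_err = abs(cost(S, w, c) - cost(P, np.ones(n), c)) / cost(P, np.ones(n), c)
```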
Toward a unified theory of sparse dimensionality reduction in Euclidean space
Let $\Phi \in \mathbb{R}^{m \times n}$ be a sparse Johnson-Lindenstrauss
transform [KN14] with $s$ non-zeroes per column. For $T$ a subset of the unit
sphere and $\epsilon \in (0, 1/2)$ given, we study settings for $m, s$ required to
ensure $\mathbb{E} \sup_{x \in T} \left| \|\Phi x\|_2^2 - 1 \right| < \epsilon$,
i.e. so that $\Phi$ preserves the norm of every $x \in T$
simultaneously and multiplicatively up to $1 \pm \epsilon$. We
introduce a new complexity parameter, which depends on the geometry of $T$, and
show that it suffices to choose $m$ and $s$ such that this parameter is small.
Our result is a sparse analog of Gordon's theorem, which was concerned with a
dense $\Phi$ having i.i.d. Gaussian entries. We qualitatively unify several
results related to the Johnson-Lindenstrauss lemma, subspace embeddings, and
Fourier-based restricted isometries. Our work also implies new results in using
the sparse Johnson-Lindenstrauss transform in numerical linear algebra,
classical and model-based compressed sensing, manifold learning, and
constrained least squares problems such as the Lasso.
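A common way to realize such a sparse transform is to place $s$ entries of $\pm 1/\sqrt{s}$ in random rows of each column, so that applying $\Phi$ touches only $s$ rows per non-zero coordinate of $x$ instead of $m$. The numpy sketch below builds one such matrix (stored densely for simplicity; the dimensions are illustrative, and no claim is made that these $m, s$ meet the paper's bounds):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, s = 2_000, 256, 8  # ambient dim, target dim, non-zeroes per column

# Sparse JL transform: each column gets s entries of +-1/sqrt(s) placed
# in s distinct random rows; each column has unit norm, so E||Phi x||^2
# equals ||x||^2.
Phi = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, size=s, replace=False)
    Phi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

# Applying Phi costs O(s * nnz(x)) rather than O(m * nnz(x)).
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
err = abs(np.linalg.norm(Phi @ x) ** 2 - 1.0)  # norm distortion on one vector
```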
Impossibility of dimension reduction in the nuclear norm
Let $S_1$ (the Schatten--von Neumann trace class) denote the Banach
space of all compact linear operators $T : \ell_2 \to \ell_2$ whose nuclear norm
$\|T\|_{S_1} = \sum_{j=1}^{\infty} \sigma_j(T)$ is finite, where
$\sigma_1(T) \geq \sigma_2(T) \geq \cdots$ are the singular values of $T$. We prove that
for arbitrarily large $n \in \mathbb{N}$ there exists a subset $\mathcal{C} \subseteq S_1$
with $|\mathcal{C}| = n$ that cannot be
embedded with bi-Lipschitz distortion $O(1)$ into any $n^{o(1)}$-dimensional
linear subspace of $S_1$. Moreover, $\mathcal{C}$ is not even an $O(1)$-Lipschitz
quotient of any subset of any $n^{o(1)}$-dimensional linear subspace of
$S_1$. Thus, $S_1$ does not admit a dimension reduction
result à la Johnson and Lindenstrauss (1984), which complements the work of
Harrow, Montanaro and Short (2011) on the limitations of quantum dimension
reduction under the assumption that the embedding into low dimensions is a
quantum channel. Such a statement was previously known with $S_1$
replaced by the Banach space $\ell_1$ of absolutely summable sequences via the
work of Brinkman and Charikar (2003). In fact, the above set $\mathcal{C}$ can
be taken to be the same set as the one that Brinkman and Charikar considered,
viewed as a collection of diagonal matrices in $S_1$. The challenge is
to demonstrate that $\mathcal{C}$ cannot be faithfully realized in an arbitrary
low-dimensional subspace of $S_1$, while Brinkman and Charikar
obtained such an assertion only for subspaces of $S_1$ that consist of
diagonal operators (i.e., subspaces of $\ell_1$). We establish this by proving
that the Markov 2-convexity constant of any finite dimensional linear subspace
$X$ of $S_1$ is at most a universal constant multiple of $\sqrt{\log \mathrm{dim}(X)}$.
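The isometric copy of $\ell_1$ inside $S_1$ via diagonal operators, which is what lets the Brinkman-Charikar point set be reused here, can be checked numerically: the singular values of $\mathrm{diag}(v)$ are $|v_i|$, so its nuclear norm equals $\|v\|_1$. A toy numpy illustration (not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(5)
v = rng.standard_normal(6)

# The singular values of diag(v) are |v_i|, so the nuclear norm of the
# diagonal operator diag(v) is exactly the l1 norm of v: l1 embeds
# isometrically into S_1 as the diagonal operators.
D = np.diag(v)
nuclear = np.linalg.svd(D, compute_uv=False).sum()
```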