Optimality of the Johnson-Lindenstrauss Lemma
For any integers $n, d \geq 2$ and $\varepsilon \in (0, 1)$, we show the existence of a set $X$ of
$n$ vectors in $\mathbb{R}^d$ such that any embedding $f : X \to \mathbb{R}^m$ satisfying
$$\forall x, y \in X,\quad (1 - \varepsilon)\|x - y\|_2^2 \le \|f(x) - f(y)\|_2^2 \le (1 + \varepsilon)\|x - y\|_2^2$$
must have $m = \Omega(\varepsilon^{-2} \log n)$. This lower bound matches the upper bound given by the Johnson-Lindenstrauss
lemma [JL84]. Furthermore, our lower bound holds for nearly the full range of $\varepsilon$
of interest, since there is always an isometric embedding into
dimension $\min\{n, d\}$ (either the identity map, or projection onto $\mathrm{span}(X)$).
Previously such a lower bound was only known to hold against linear maps $f$,
and not for such a wide range of parameters $\varepsilon$ [LN16]. The
best previously known lower bound for general $f$ was $m = \Omega(\varepsilon^{-2} \log n / \log(1/\varepsilon))$ [Wel74, Lev83, Alo03], which
is suboptimal for any $\varepsilon = o(1)$.
Comment: v2: simplified proof, also added reference to Lev83
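As a concrete illustration of the guarantee this lower bound matches, the following sketch (assuming NumPy and SciPy are available; the constant 8 in the target dimension is an illustrative choice, not taken from the paper) embeds a set of points with a scaled Gaussian matrix into roughly eps^-2 log n dimensions and reports how well pairwise squared distances are preserved.

# Minimal numerical illustration of the Johnson-Lindenstrauss guarantee:
# a random Gaussian map into m ~ eps^-2 log n dimensions preserves nearly all
# (typically all) pairwise squared distances to within a (1 +/- eps) factor.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n, d, eps = 200, 1000, 0.25
m = int(np.ceil(8 * np.log(n) / eps**2))       # target dimension ~ eps^-2 * log n (constant is illustrative)

X = rng.standard_normal((n, d))                # n arbitrary points in R^d
A = rng.standard_normal((m, d)) / np.sqrt(m)   # scaled Gaussian projection
Y = X @ A.T                                    # embedded points in R^m

# Ratios of embedded to original squared pairwise distances; they concentrate around 1.
ratios = pdist(Y, 'sqeuclidean') / pdist(X, 'sqeuclidean')
bad = np.mean((ratios < 1 - eps) | (ratios > 1 + eps))
print(f"m = {m}, distortion range [{ratios.min():.3f}, {ratios.max():.3f}], "
      f"fraction of pairs outside 1±eps: {bad:.4f}")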
Almost Optimal Explicit Johnson-Lindenstrauss Families
The Johnson-Lindenstrauss lemma is a fundamental result in probability with several applications
in the design and analysis of algorithms. Constructions of linear embeddings satisfying the
Johnson-Lindenstrauss property necessarily involve randomness, and much attention has been given
to obtaining explicit constructions that minimize the number of random bits used. In this work we
give explicit constructions with an almost optimal use of randomness: for any $0 < \varepsilon, \delta < 1/2$
we construct a family of matrices $A$ such that for any fixed vector $x \in \mathbb{R}^d$,
$\Pr\bigl[\,\bigl|\|Ax\|_2^2 - \|x\|_2^2\bigr| > \varepsilon \|x\|_2^2\,\bigr] \le \delta$, with seed length
$r = O\bigl(\log d + \log(1/\delta) \cdot \log(\log(1/\delta)/\varepsilon)\bigr)$. In particular, for
$\delta = 1/\mathrm{poly}(d)$ and fixed $\varepsilon > 0$, we obtain seed length $O((\log d)(\log\log d))$.
Previous constructions required $\Omega(\log^2 d)$ random bits to obtain polynomially small error.
We also give a new elementary proof of the optimality of the JL lemma, showing a lower bound of
$\Omega(\log(1/\delta)/\varepsilon^2)$ on the embedding dimension. Previously, Jayram and Woodruff [9]
used communication complexity techniques to show a similar bound.
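The distributional property this abstract refers to can be checked empirically. The sketch below (a rough illustration, assuming NumPy) draws the embedding matrix fully at random with i.i.d. Gaussian entries, i.e. it spends on the order of m*d random bits; the point of the paper above is an explicit family achieving the same guarantee from a short seed, which is not reproduced here.

# Empirical check of the distributional JL property:
# Pr_A[ | ||Ax||_2^2 - 1 | > eps ] <= delta for a fixed unit vector x.
import numpy as np

rng = np.random.default_rng(1)
d, eps, delta = 512, 0.25, 0.05
m = int(np.ceil(8 * np.log(1 / delta) / eps**2))   # m = O(log(1/delta)/eps^2), constant illustrative

x = rng.standard_normal(d)
x /= np.linalg.norm(x)                             # fixed unit vector

trials, failures = 400, 0
for _ in range(trials):
    A = rng.standard_normal((m, d)) / np.sqrt(m)   # fresh random embedding each trial
    if abs(np.linalg.norm(A @ x) ** 2 - 1.0) > eps:
        failures += 1
print(f"m = {m}, empirical failure rate = {failures / trials:.4f} (target <= {delta})")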
Random projections for linear programming
Random projections are random linear maps, sampled from appropriate
distributions, that approximately preserve certain geometrical invariants so
that the approximation improves as the dimension of the space grows. The
well-known Johnson-Lindenstrauss lemma states that there are random matrices
with surprisingly few rows that approximately preserve pairwise Euclidean
distances among a set of points. This is commonly used to speed up algorithms
based on Euclidean distances. We prove that these matrices also preserve other
quantities, such as the distance to a cone. We exploit this result to devise a
probabilistic algorithm to solve linear programs approximately. We show that
this algorithm can approximately solve very large randomly generated LP
instances. We also showcase its application to an error correction coding
problem.
Comment: 26 pages, 1 figure
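To make the general idea concrete, the following sketch (assuming NumPy and SciPy; the data, dimensions, and choice of projection are illustrative, and this is not the paper's algorithm or its approximation guarantee) replaces a large system of equality constraints Ax = b, x >= 0 by the much smaller projected system (TA)x = Tb for a random Gaussian matrix T, and compares the two LP optima.

# Sketch of the "project the constraints" idea: solve the reduced LP and compare.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m, n, k = 800, 1000, 200          # original constraints m, variables n, projected constraints k

A = rng.standard_normal((m, n))
x_feas = rng.random(n)            # a known nonnegative feasible point
b = A @ x_feas
c = rng.random(n)                 # nonnegative costs, so both LPs are bounded below

T = rng.standard_normal((k, m)) / np.sqrt(k)    # random projection of the constraints

full = linprog(c, A_eq=A, b_eq=b, bounds=(0, None))
proj = linprog(c, A_eq=T @ A, b_eq=T @ b, bounds=(0, None))

print("full LP objective:     ", full.fun)
print("projected LP objective:", proj.fun)   # a relaxation, hence a lower bound
print("constraint violation of projected solution:", np.linalg.norm(A @ proj.x - b))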
Dimensionality Reduction for k-Means Clustering and Low Rank Approximation
We show how to approximate a data matrix $\mathbf{A}$ with a much smaller
sketch $\tilde{\mathbf{A}}$ that can be used to solve a general class of
constrained $k$-rank approximation problems to within $(1+\epsilon)$ error.
Importantly, this class of problems includes $k$-means clustering and
unconstrained low rank approximation (i.e. principal component analysis). By
reducing data points to a small number of dimensions depending only on $k$ and
$\epsilon$, our methods generically accelerate any exact, approximate, or
heuristic algorithm for these ubiquitous problems.
For $k$-means dimensionality reduction, we provide $(1+\epsilon)$ relative
error results for many common sketching techniques, including random row
projection, column selection, and approximate SVD. For approximate principal
component analysis, we give a simple alternative to known algorithms that has
applications in the streaming setting. Additionally, we extend recent work on
column-based matrix reconstruction, giving column subsets that not only `cover'
a good subspace for $\mathbf{A}$, but can be used directly to compute this
subspace.
Finally, for $k$-means clustering, we show how to achieve a $(9+\epsilon)$
approximation by Johnson-Lindenstrauss projecting data points to just
$O(\log k / \epsilon^2)$ dimensions. This gives the first result that leverages the
specific structure of $k$-means to achieve dimension independent of input size
and sublinear in $k$.
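The pipeline studied here (project the points down, cluster, then evaluate the induced clustering in the original space) is easy to sketch. The example below assumes NumPy and scikit-learn; the synthetic data, the constant 8, and the target dimension of order log(k)/eps^2 are illustrative choices, and the paper's analysis (e.g. the $(9+\epsilon)$ guarantee) is not reproduced.

# Sketch: Johnson-Lindenstrauss project, run k-means on the low-dimensional points,
# then measure the cost of the induced clustering on the original data.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_cost(X, labels):
    """Sum of squared distances of each point to the mean of its assigned cluster."""
    return sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
               for c in np.unique(labels))

rng = np.random.default_rng(3)
n, d, k, eps = 2000, 500, 10, 0.5
centers = 5 * rng.standard_normal((k, d))
X = centers[rng.integers(k, size=n)] + rng.standard_normal((n, d))   # toy mixture data

m = int(np.ceil(8 * np.log(k) / eps**2))        # target dimension ~ log(k)/eps^2
G = rng.standard_normal((d, m)) / np.sqrt(m)    # JL projection
X_low = X @ G

labels_full = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
labels_low = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_low)

print("cost of full-dimensional k-means:  ", kmeans_cost(X, labels_full))
print("cost induced by projected k-means: ", kmeans_cost(X, labels_low))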
Randomized Dimensionality Reduction for k-means Clustering
We study the topic of dimensionality reduction for $k$-means clustering.
Dimensionality reduction encompasses the union of two approaches: \emph{feature
selection} and \emph{feature extraction}. A feature selection based algorithm
for $k$-means clustering selects a small subset of the input features and then
applies $k$-means clustering on the selected features. A feature extraction
based algorithm for $k$-means clustering constructs a small set of new
artificial features and then applies $k$-means clustering on the constructed
features. Despite the significance of $k$-means clustering as well as the
wealth of heuristic methods addressing it, provably accurate feature selection
methods for $k$-means clustering are not known. On the other hand, two provably
accurate feature extraction methods for $k$-means clustering are known in the
literature; one is based on random projections and the other is based on the
singular value decomposition (SVD).
This paper makes further progress towards a better understanding of
dimensionality reduction for $k$-means clustering. Namely, we present the first
provably accurate feature selection method for $k$-means clustering and, in
addition, we present two feature extraction methods. The first feature
extraction method is based on random projections and it improves upon the
existing results in terms of time complexity and number of features needed to
be extracted. The second feature extraction method is based on fast approximate
SVD factorizations and it also improves upon the existing results in terms of
time complexity. The proposed algorithms are randomized and provide
constant-factor approximation guarantees with respect to the optimal $k$-means
objective value.
Comment: IEEE Transactions on Information Theory, to appear
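The two paradigms contrasted in this abstract can be illustrated side by side on toy data. The sketch below (assuming NumPy and scikit-learn) compares feature extraction via the top-$k$ right singular vectors against a simple feature selection baseline that samples columns by their leverage scores in that subspace; it illustrates the two paradigms only, not the paper's algorithms or their guarantees.

# Feature extraction (SVD/PCA sketch) vs. feature selection (leverage-score column sampling),
# both evaluated by the k-means cost they induce on the original data.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_cost(X, labels):
    return sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
               for c in np.unique(labels))

rng = np.random.default_rng(4)
n, d, k = 1000, 300, 8
centers = 4 * rng.standard_normal((k, d))
X = centers[rng.integers(k, size=n)] + rng.standard_normal((n, d))   # toy mixture data

# Feature extraction: project onto the top-k right singular vectors of X.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
X_extract = X @ Vt[:k].T                       # n x k matrix of new artificial features

# Feature selection: sample r original coordinates with probability proportional
# to their leverage scores in the top-k right singular subspace (illustrative choice).
r = 4 * k
lev = (Vt[:k] ** 2).sum(axis=0)
cols = rng.choice(d, size=r, replace=False, p=lev / lev.sum())
X_select = X[:, cols]                          # n x r matrix of original features

for name, Z in [("full data", X), ("SVD extraction", X_extract), ("column selection", X_select)]:
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Z)
    print(f"{name:16s}: k-means cost in original space = {kmeans_cost(X, labels):.1f}")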