Fast Label Embeddings via Randomized Linear Algebra
Many modern multiclass and multilabel problems are characterized by
increasingly large output spaces. For these problems, label embeddings have
been shown to be a useful primitive that can improve computational and
statistical efficiency. In this work we utilize a correspondence between rank
constrained estimation and low dimensional label embeddings that uncovers a
fast label embedding algorithm which works in both the multiclass and
multilabel settings. The result is a randomized algorithm whose running time is
exponentially faster than naive algorithms. We demonstrate our techniques on
two large-scale public datasets, from the Large Scale Hierarchical Text
Challenge and the Open Directory Project, where we obtain state-of-the-art results.
Comment: To appear in the proceedings of the ECML/PKDD 2015 conference. Reference implementation available at https://github.com/pmineiro/randembe
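For intuition, here is a hedged sketch of the kind of randomized linear algebra primitive involved: a randomized range finder plus a small exact SVD applied to the feature-label cross-covariance. The function name and shapes are illustrative assumptions; the paper's actual algorithm embeds a rank-constrained least-squares estimator, so this is not the authors' exact method.

```python
import numpy as np

def random_label_embedding(X, Y, k, seed=0):
    """Illustrative: X is (n, d) features, Y is (n, L) label indicators;
    returns an (L, k) label embedding via a randomized range finder."""
    rng = np.random.default_rng(seed)
    C = X.T @ Y                                  # (d, L) cross-covariance proxy
    Omega = rng.standard_normal((C.shape[0], k + 5))       # oversampled probe
    Q, _ = np.linalg.qr(C.T @ Omega)             # (L, k+5) basis for range(C^T)
    _, _, Vt = np.linalg.svd(C @ Q, full_matrices=False)   # small exact SVD
    return Q @ Vt[:k].T                          # (L, k) embedding

# Usage: regress X onto the projected targets Y @ E, then decode predictions
# back to labels by nearest embedded label.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 50))
Y = (rng.random((1000, 200)) < 0.02).astype(float)
E = random_label_embedding(X, Y, k=16)
```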
Sparser Johnson-Lindenstrauss Transforms
We give two different and simple constructions for dimensionality reduction in $\ell_2$ via linear mappings that are sparse: only an $O(\varepsilon)$-fraction of entries in each column of our embedding matrices are non-zero to achieve distortion $1+\varepsilon$ with high probability, while
still achieving the asymptotically optimal number of rows. These are the first
constructions to provide subconstant sparsity for all values of parameters,
improving upon previous works of Achlioptas (JCSS 2003) and Dasgupta, Kumar,
and Sarl\'{o}s (STOC 2010). Such distributions can be used to speed up
applications where dimensionality reduction is used.
Comment: v6: journal version, minor changes, added Remark 23; v5: modified abstract, fixed typos, added open problem section; v4: simplified section 4 by giving 1 analysis that covers both constructions; v3: proof of Theorem 25 in v2 was written incorrectly, now fixed; v2: added another construction achieving the same upper bound, and added proof of a near-tight lower bound for the DKS scheme
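For intuition, here is a minimal sketch of such a sparse embedding (assumed parameters, and a dense container for readability; the paper's constructions additionally structure where the nonzeros land): each column gets exactly s random signs scaled by 1/sqrt(s).

```python
import numpy as np

def sparse_jl(m, n, s, seed=0):
    """Illustrative sparse JL matrix: s nonzeros per column, values +-1/sqrt(s)."""
    rng = np.random.default_rng(seed)
    Pi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)       # s nonzero positions
        Pi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return Pi

# With m ~ eps^-2 log(1/delta) rows and s ~ eps^-1 log(1/delta) nonzeros per
# column, ||Pi x||_2 = (1 +/- eps)||x||_2 with probability at least 1 - delta.
x = np.random.default_rng(1).standard_normal(10_000)
Pi = sparse_jl(m=400, n=x.size, s=20)
print(np.linalg.norm(Pi @ x) / np.linalg.norm(x))         # should be close to 1
```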
Optimal approximate matrix product in terms of stable rank
We prove, using the subspace embedding guarantee in a black box way, that one
can achieve the spectral norm guarantee for approximate matrix multiplication
with a dimensionality-reducing map having $m = O(\tilde{r}/\varepsilon^2)$ rows. Here $\tilde{r}$ is the maximum stable rank, i.e., the squared ratio of
Frobenius and operator norms, of the two matrices being multiplied. This is a
quantitative improvement over previous work of [MZ11, KVZ14], and is also
optimal for any oblivious dimensionality-reducing map. Furthermore, due to the
black box reliance on the subspace embedding property in our proofs, our
theorem can be applied to a much more general class of sketching matrices than
what was known before, in addition to achieving better bounds. For example, one
can apply our theorem to efficient subspace embeddings such as the Subsampled
Randomized Hadamard Transform or sparse subspace embeddings, or even with
subspace embedding constructions that may be developed in the future.
Our main theorem, via connections with spectral error matrix multiplication
shown in prior work, implies quantitative improvements for approximate least
squares regression and low rank approximation. Our main result has also already
been applied to improve dimensionality reduction guarantees for $k$-means
clustering [CEMMP14], and implies new results for nonparametric regression
[YPW15].
We also separately point out that the proof of the "BSS" deterministic
row-sampling result of [BSS12] can be modified to show that for any matrices $A, B$ of stable rank at most $\tilde{r}$, one can achieve the spectral norm guarantee for approximate matrix multiplication of $A^T B$ by deterministically sampling $O(\tilde{r}/\varepsilon^2)$ rows that can be found in polynomial time. The original result of [BSS12] was for rank instead of stable rank. Our
observation leads to a stronger version of a main theorem of [KMST10].
Comment: v3: minor edits; v2: fixed one step in proof of Theorem 9 which was wrong by a constant factor (see the new Lemma 5 and its use; final theorem unaffected)
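For intuition, here is a hedged sketch of the sketch-and-multiply pattern the theorem analyzes, instantiated with a Gaussian map for concreteness (the theorem itself applies to any subspace embedding); sizes and constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2, eps = 5_000, 50, 40, 0.25
A = rng.standard_normal((n, d1)) / np.sqrt(n)
B = rng.standard_normal((n, d2)) / np.sqrt(n)

def stable_rank(M):
    """Squared ratio of Frobenius and operator norms."""
    return (np.linalg.norm(M, 'fro') / np.linalg.norm(M, 2)) ** 2

r = max(stable_rank(A), stable_rank(B))
m = int(np.ceil(r / eps ** 2))               # rows ~ stable rank / eps^2
S = rng.standard_normal((m, n)) / np.sqrt(m)

err = np.linalg.norm((S @ A).T @ (S @ B) - A.T @ B, 2)
bound = eps * np.linalg.norm(A, 2) * np.linalg.norm(B, 2)
print(err, bound)   # spectral error is typically on the order of the bound
```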
Coresets-Methods and History: A Theoreticians Design Pattern for Approximation and Streaming Algorithms
We present a technical survey of state-of-the-art approaches to data reduction and the coreset framework. These include geometric decompositions, gradient methods, random sampling, sketching, and random projections. We further outline their importance for the design of streaming algorithms and give a brief overview of lower bounding techniques.
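As a toy instance of the random-sampling technique mentioned above (a hedged sketch with assumed parameters, not any specific construction from the survey), a uniformly drawn weighted sample already approximates the 1-means cost of a fixed query center; sensitivity-based sampling refines the probabilities to handle all queries simultaneously.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((100_000, 2))        # full point set
m = 1_000                                    # coreset size
idx = rng.choice(len(P), size=m, replace=False)
C, w = P[idx], np.full(m, len(P) / m)        # sample points and uniform weights

c = np.array([0.5, -0.25])                   # a fixed query center
full_cost = ((P - c) ** 2).sum()
coreset_cost = (w * ((C - c) ** 2).sum(axis=1)).sum()
print(full_cost, coreset_cost)               # unbiased; typically within a few percent
```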
Towards Lightweight and Automated Representation Learning System for Networks
We propose LIGHTNE 2.0, a cost-effective, scalable, automated, and
high-quality network embedding system that scales to graphs with hundreds of
billions of edges on a single machine. In contrast to the mainstream belief that a distributed architecture and GPUs are needed for large-scale network embedding with good quality, we demonstrate that we can achieve higher quality, better scalability, lower cost, and faster runtime with a shared-memory, CPU-only architecture. LIGHTNE 2.0 combines two theoretically grounded embedding methods, NetSMF and ProNE. We introduce the following techniques to network embedding
for the first time: (1) a newly proposed downsampling method to reduce the
sample complexity of NetSMF while preserving its theoretical advantages; (2) a
high-performance parallel graph processing stack GBBS to achieve high memory
efficiency and scalability; (3) a sparse parallel hash table to aggregate and
maintain the matrix sparsifier in memory; (4) a fast randomized singular value
decomposition (SVD) enhanced by power iteration and fast orthonormalization to
improve vanilla randomized SVD in terms of both efficiency and effectiveness;
(5) Intel MKL for the proposed fast randomized SVD and spectral propagation; and
(6) a fast and lightweight AutoML library FLAML for automated hyperparameter
tuning. Experimental results show that LIGHTNE 2.0 can be up to 84X faster than
GraphVite, 30X faster than PBG, and 9X faster than NetSMF while delivering better performance. LIGHTNE 2.0 can embed a very large graph with 1.7 billion nodes and 124 billion edges in half an hour on a CPU server, while other baselines cannot handle graphs of this scale.
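For context on technique (4), here is a minimal numpy sketch of randomized SVD with power iteration and per-step re-orthonormalization (illustrative only; not LIGHTNE 2.0's MKL-tuned implementation).

```python
import numpy as np

def randomized_svd(A, k, n_iter=4, oversample=10, seed=0):
    """Rank-k truncated SVD via a randomized range finder with power iteration."""
    rng = np.random.default_rng(seed)
    Q = rng.standard_normal((A.shape[1], k + oversample))  # random probe
    Q, _ = np.linalg.qr(A @ Q)
    for _ in range(n_iter):                  # power iteration sharpens the
        Q, _ = np.linalg.qr(A.T @ Q)         # captured subspace; QR at each
        Q, _ = np.linalg.qr(A @ Q)           # step keeps the basis orthonormal
    B = Q.T @ A                              # small (k+oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

U, s, Vt = randomized_svd(np.random.default_rng(1).standard_normal((2000, 500)), k=32)
```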
OSNAP: Faster Numerical Linear Algebra Algorithms via Sparser Subspace Embeddings
An oblivious subspace embedding (OSE) given some parameters $\varepsilon, d$ is a distribution over matrices $\Pi \in \mathbb{R}^{m \times n}$ such that for any linear subspace $W \subseteq \mathbb{R}^n$ with $\dim(W) = d$, $\Pr_{\Pi}(\forall x \in W,\ (1-\varepsilon)\|x\|_2 \le \|\Pi x\|_2 \le (1+\varepsilon)\|x\|_2) > 2/3$. We show that a certain class of distributions, Oblivious Sparse Norm-Approximating Projections (OSNAPs), provides OSEs with $m = O(d^{1+\gamma}/\varepsilon^2)$ and where every matrix in the support of the OSE has only $s = O_\gamma(1/\varepsilon)$ non-zero entries per column, for $\gamma > 0$ any desired constant. Plugging OSNAPs into known algorithms for approximate least squares regression, $\ell_p$ regression, low rank approximation, and approximating leverage scores implies faster algorithms for all these problems. Our main result is essentially a Bai-Yin type theorem in random matrix theory and is likely to be of independent interest: we show that for any fixed $U \in \mathbb{R}^{n \times d}$ with orthonormal columns and random sparse $\Pi$, all singular values of $\Pi U$ lie in $[1-\varepsilon, 1+\varepsilon]$ with good probability. This can be seen as a generalization of the sparse Johnson-Lindenstrauss lemma, which was concerned with $d = 1$. Our methods also recover a slightly sharper version of a main result of [Clarkson-Woodruff, STOC 2013], with a much simpler proof. That is, we show that OSNAPs give an OSE with $m = O(d^2/\varepsilon^2)$, $s = 1$.
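As an illustration of the s = 1 regime (a hedged CountSketch-style sketch with assumed sizes, not the paper's general construction), the embedding can be applied in O(nnz(A)) time without ever materializing the sketching matrix, and least squares is then solved on the small sketched problem.

```python
import numpy as np

def osnap_s1_apply(A, m, seed=0):
    """Apply an m x n OSNAP with s = 1 nonzero (+-1) per column to A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    rows = rng.integers(0, m, size=n)        # the single nonzero row per column
    signs = rng.choice([-1.0, 1.0], size=n)
    SA = np.zeros((m, A.shape[1]))
    np.add.at(SA, rows, signs[:, None] * A)  # scatter-add signed rows of A
    return SA

rng = np.random.default_rng(1)
A = rng.standard_normal((100_000, 20))
b = rng.standard_normal(100_000)
SAb = osnap_s1_apply(np.column_stack([A, b]), m=4_000)   # m ~ d^2/eps^2 for s = 1
x, *_ = np.linalg.lstsq(SAb[:, :-1], SAb[:, -1], rcond=None)
# x approximately minimizes ||Ax - b||_2; compare np.linalg.lstsq(A, b, rcond=None).
```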