Provable Deterministic Leverage Score Sampling
We explain theoretically a curious empirical phenomenon: "Approximating a
matrix by deterministically selecting a subset of its columns with the
corresponding largest leverage scores results in a good low-rank matrix
surrogate". To obtain provable guarantees, previous work requires randomized
sampling of the columns with probabilities proportional to their leverage
scores.
In this work, we provide a novel theoretical analysis of deterministic
leverage score sampling. We show that such deterministic sampling can be
provably as accurate as its randomized counterparts, if the leverage scores
follow a moderately steep power-law decay. We support this power-law assumption
by providing empirical evidence that such decay laws are abundant in real-world
data sets. We then demonstrate empirically the performance of deterministic
leverage score sampling, which often matches or outperforms the
state-of-the-art techniques.
Comment: 20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
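As an illustration of the rule this abstract studies, here is a minimal NumPy sketch of deterministic leverage-score column selection; the target rank k and the number of kept columns c are illustrative choices, not values from the paper.

```python
# Minimal sketch of deterministic leverage-score column selection;
# k and c are illustrative choices, not values from the paper.
import numpy as np

def deterministic_leverage_sampling(A, k, c):
    """Keep the c columns of A with the largest rank-k leverage scores."""
    # Rows of Vt are the right singular vectors of A.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    # Rank-k leverage score of column j: squared norm of the j-th
    # column of the top-k right singular vector block.
    scores = np.sum(Vt[:k, :] ** 2, axis=0)
    # Deterministic step: take the c largest scores, no randomness.
    cols = np.argsort(scores)[::-1][:c]
    return A[:, cols], cols

# Toy usage on a matrix whose spectrum decays like a power law.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50)) * np.arange(1, 51) ** -1.0
C, cols = deterministic_leverage_sampling(A, k=5, c=10)
```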
Feature Selection for Linear SVM with Provable Guarantees
We give two provably accurate feature-selection techniques for the linear
SVM. One of the algorithms is deterministic and the other randomized. Our
algorithms can be used in an unsupervised or supervised setting. The supervised
approach is based on sampling features from support vectors. We prove that the
margin in the feature space is preserved to within $\epsilon$-relative error of
the margin in the full feature space in the worst-case. In the unsupervised
setting, we also provide worst-case guarantees of the radius of the minimum
enclosing ball, thereby ensuring comparable generalization as in the full
feature space and resolving an open problem posed in Dasgupta et al. We present
extensive experiments on real-world datasets to support our theory and to
demonstrate that our method is competitive and often better than prior
state-of-the-art, for which there are no known provable guarantees.
Comment: Appearing in Proceedings of the 18th AISTATS, JMLR W&CP, vol 38, 2015
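The abstract does not spell out the sampling distribution, so the following is a hedged sketch of one plausible reading of "sampling features from support vectors": fit a linear SVM, score each feature via the support-vector matrix, and sample features by those scores. The leverage-score rule here is an assumption for illustration, not necessarily the paper's distribution.

```python
# Hedged sketch: score features via the support-vector matrix and
# sample accordingly. The scoring rule is an illustrative assumption,
# not necessarily the paper's sampling distribution.
import numpy as np
from sklearn.svm import SVC

def svm_feature_sample(X, y, t, seed=0):
    """Randomly keep t of X's features, with probabilities driven by
    leverage scores of the support-vector matrix."""
    svm = SVC(kernel="linear").fit(X, y)
    S = svm.support_vectors_                     # rows = support vectors
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    scores = np.sum(Vt ** 2, axis=0)             # one score per feature
    probs = scores / scores.sum()
    rng = np.random.default_rng(seed)
    keep = rng.choice(X.shape[1], size=t, replace=False, p=probs)
    return X[:, keep], keep
```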
Improved Practical Matrix Sketching with Guarantees
Matrices have become essential data representations for many large-scale
problems in data analytics, and hence matrix sketching is a critical task.
Although much research has focused on improving the error/size tradeoff under
various sketching paradigms, the many forms of error bounds make these
approaches hard to compare in theory and in practice. This paper attempts to
categorize and compare most known methods under row-wise streaming updates with
provable guarantees, and then to tweak some of these methods to gain practical
improvements while retaining guarantees.
For instance, we observe that a simple heuristic, iSVD, with no guarantees,
tends to outperform all known approaches in terms of size/error trade-off. We
modify FrequentDirections, the best performing method with guarantees under the
size/error trade-off, to match the performance of iSVD while retaining its
guarantees. We also demonstrate some adversarial datasets where iSVD performs
quite poorly. In comparing techniques in the time/error trade-off, techniques
based on hashing or sampling tend to perform better. In this setting we modify
the most studied sampling regime to retain error guarantee but obtain dramatic
improvements in the time/error trade-off.
Finally, we provide easy replication of our studies on APT, a new testbed
which makes available not only code and datasets, but also a computing platform
with fixed environmental settings.
Comment: 27 pages
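For context, here is a compact NumPy version of the basic FrequentDirections update (the doubled-buffer variant); the paper's tweaked methods modify this shrinking step further.

```python
# Compact NumPy version of the basic FrequentDirections update (the
# doubled-buffer variant); the paper's modified methods tweak this
# shrinking step further.
import numpy as np

class FrequentDirections:
    def __init__(self, d, ell):
        self.ell = ell
        self.B = np.zeros((2 * ell, d))   # extra space, shrink when full
        self.next = 0                     # next free row

    def update(self, row):
        if self.next == self.B.shape[0]:
            self._shrink()
        self.B[self.next] = row
        self.next += 1

    def _shrink(self):
        _, s, Vt = np.linalg.svd(self.B, full_matrices=False)
        # Subtract the ell-th largest squared singular value from all of
        # them; entries ell-1 and beyond become zero, freeing those rows.
        s2 = np.maximum(s ** 2 - s[self.ell - 1] ** 2, 0.0)
        self.B = np.zeros_like(self.B)
        self.B[: len(s2)] = np.sqrt(s2)[:, None] * Vt
        self.next = self.ell - 1

    def sketch(self):
        return self.B[: self.next]

# Usage: stream 500 rows of A, keep a small sketch B; the classic
# guarantee is of the form ||A^T A - B^T B||_2 <= ||A||_F^2 / ell.
fd = FrequentDirections(d=30, ell=8)
for row in np.random.default_rng(1).standard_normal((500, 30)):
    fd.update(row)
B = fd.sketch()
```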
Learning loopy graphical models with latent variables: Efficient methods and guarantees
The problem of structure estimation in graphical models with latent variables
is considered. We characterize conditions for tractable graph estimation and
develop efficient methods with provable guarantees. We consider models where
the underlying Markov graph is locally tree-like, and the model is in the
regime of correlation decay. For the special case of the Ising model, the
number of samples $n$ required for structural consistency of our method scales
as $n = \Omega(\theta_{\min}^{-\delta \eta (\eta + 1) - 2} \log p)$, where $p$ is the
number of variables, $\theta_{\min}$ is the minimum edge potential, $\delta$ is
the depth (i.e., distance from a hidden node to the nearest observed nodes),
and $\eta$ is a parameter which depends on the bounds on node and edge
potentials in the Ising model. Necessary conditions for structural consistency
under any algorithm are derived and our method nearly matches the lower bound
on sample requirements. Further, the proposed method is practical to implement
and provides flexibility to control the number of latent variables and the
cycle lengths in the output graph.
Comment: Published at http://dx.doi.org/10.1214/12-AOS1070 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
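To make the reconstructed scaling above concrete, here is a toy evaluation of how the sample requirement reacts to the edge potential and the depth; the input values are illustrative only, not from the paper.

```python
# Toy evaluation of the reconstructed scaling
# n = Omega(theta_min^{-delta*eta*(eta+1) - 2} * log p); the inputs
# below are illustrative, not values from the paper.
import math

def sample_scaling(p, theta_min, delta, eta):
    return theta_min ** (-delta * eta * (eta + 1) - 2) * math.log(p)

# Weaker edge potentials and deeper hidden nodes inflate the requirement:
print(sample_scaling(p=1000, theta_min=0.3, delta=1, eta=1))  # ~8.5e2
print(sample_scaling(p=1000, theta_min=0.3, delta=2, eta=1))  # ~9.5e3
```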
Randomized and Deterministic Attention Sparsification Algorithms for Over-parameterized Feature Dimension
Large language models (LLMs) have shown their power in different areas.
Attention computation, as an important subroutine of LLMs, has also attracted
interest in theory. Recently, the static computation and dynamic maintenance of
the attention matrix have been studied by [Alman and Song 2023] and [Brand, Song
and Zhou 2023] from both the algorithmic and the hardness perspective. In this
work, we consider the sparsification of the attention problem. We make one
simplification: the logit matrix is symmetric. Let $n$ denote the length of the
sentence and let $d$ denote the embedding dimension. Given a matrix $X \in
\mathbb{R}^{n \times d}$, suppose $d \gg n$ and $\| X X^\top \|_{\infty} \leq r$
with $r \in (0, 0.1)$, then we aim for finding $Y \in \mathbb{R}^{n \times m}$
(where $m \ll d$) such that \begin{align*} \| D(Y)^{-1} \exp( Y Y^\top ) -
D(X)^{-1} \exp( X X^\top) \|_{\infty} \leq O(r) \end{align*} We provide two
results for this problem.
Our first result is a randomized algorithm. It runs in
$\widetilde{O}(\mathrm{nnz}(X) + n^{\omega})$ time, has $1 - \delta$ success
probability, and chooses $m = O(n \log(n/\delta))$. Here $\mathrm{nnz}(X)$
denotes the number of non-zero entries in $X$. We use $\omega$ to denote the
exponent of matrix multiplication. Currently $\omega \approx 2.373$.
Our second result is a deterministic algorithm. It runs in
$\widetilde{O}(\min\{\sum_{i \in [d]} \mathrm{nnz}(X_i)^2, d n^{\omega+1}\})$
time and chooses $m = O(n)$. Here $X_i$ denotes the $i$-th column
of matrix $X$.
Our main findings have the following implication for applied LLM tasks: for
any super large feature dimension, we can reduce it down to a size nearly
linear in the length of the sentence.
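To make the objective concrete, the following sketch measures the stated $\ell_\infty$ error for one natural candidate, $Y = XS/\sqrt{m}$ with Gaussian $S$. This illustrates the objective only; it is not the paper's algorithm, whose guarantees are stated above.

```python
# Sketch that measures the stated objective
# || D(Y)^{-1} exp(Y Y^T) - D(X)^{-1} exp(X X^T) ||_inf for one natural
# candidate Y = X S / sqrt(m) with Gaussian S. This illustrates the
# objective only; it is not the paper's algorithm.
import numpy as np

def softmax_attention(X):
    """Row-normalized exp(X X^T), i.e., D(X)^{-1} exp(X X^T)."""
    A = np.exp(X @ X.T)
    return A / A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, d, m = 32, 4096, 256                       # d >> n, target m << d
X = rng.standard_normal((n, d))
X *= np.sqrt(0.05 / np.abs(X @ X.T).max())    # enforce ||X X^T||_inf <= r
S = rng.standard_normal((d, m)) / np.sqrt(m)  # E[(XS)(XS)^T] = X X^T
Y = X @ S
err = np.abs(softmax_attention(Y) - softmax_attention(X)).max()
print(err)  # entrywise error; shrinks as the target dimension m grows
```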
- …