
    Provable Deterministic Leverage Score Sampling

    We explain theoretically a curious empirical phenomenon: "Approximating a matrix by deterministically selecting a subset of its columns with the corresponding largest leverage scores results in a good low-rank matrix surrogate". To obtain provable guarantees, previous work requires randomized sampling of the columns with probabilities proportional to their leverage scores. In this work, we provide a novel theoretical analysis of deterministic leverage score sampling. We show that such deterministic sampling can be provably as accurate as its randomized counterparts, if the leverage scores follow a moderately steep power-law decay. We support this power-law assumption by providing empirical evidence that such decay laws are abundant in real-world data sets. We then demonstrate empirically the performance of deterministic leverage score sampling, which often matches or outperforms state-of-the-art techniques. Comment: 20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
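    The selection rule lends itself to a short sketch: compute the rank-k leverage scores of the columns and deterministically keep the columns with the largest scores. The snippet below is a minimal NumPy illustration of that idea; the rank parameter k, the number of selected columns c, and the function name are choices made for the example, not the paper's code.

```python
# Minimal sketch of deterministic leverage-score column selection (illustrative only).
import numpy as np

def deterministic_leverage_column_select(A, k, c):
    """Keep the c columns of A with the largest rank-k leverage scores."""
    # Rank-k leverage score of column j: squared Euclidean norm of the j-th
    # entry pattern in the top-k right singular vectors of A.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    scores = np.sum(Vt[:k, :] ** 2, axis=0)         # one score per column of A
    chosen = np.sort(np.argsort(scores)[::-1][:c])  # top-c scores, chosen deterministically
    return chosen, A[:, chosen]

# Usage: approximate a 500 x 100 matrix with decaying spectrum by 20 of its own columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 100)) @ np.diag(np.linspace(1.0, 0.01, 100))
cols, C = deterministic_leverage_column_select(A, k=10, c=20)
```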

    Feature Selection for Linear SVM with Provable Guarantees

    We give two provably accurate feature-selection techniques for the linear SVM. The algorithms are deterministic and randomized, respectively. Our algorithms can be used in an unsupervised or supervised setting. The supervised approach is based on sampling features from support vectors. We prove that the margin in the feature space is preserved to within $\epsilon$-relative error of the margin in the full feature space in the worst case. In the unsupervised setting, we also provide worst-case guarantees on the radius of the minimum enclosing ball, thereby ensuring generalization comparable to that in the full feature space and resolving an open problem posed in Dasgupta et al. We present extensive experiments on real-world datasets to support our theory and to demonstrate that our method is competitive with and often better than prior state-of-the-art, for which there are no known provable guarantees. Comment: Appearing in Proceedings of 18th AISTATS, JMLR W&CP, vol 38, 2015
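    As a rough illustration of randomized feature sampling for a linear SVM, the sketch below samples feature indices with probability proportional to rank-k leverage scores and rescales them; the score definition, the rescaling, and the use of scikit-learn's LinearSVC are assumptions made for the example, not the paper's exact procedure.

```python
# Hedged sketch: leverage-score feature sampling followed by a linear SVM fit.
import numpy as np
from sklearn.svm import LinearSVC

def sample_features(X, k, r, rng):
    """Sample r features of X with probability proportional to rank-k leverage scores."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    p = np.sum(Vt[:k, :] ** 2, axis=0)
    p /= p.sum()                                   # sampling probabilities over features
    idx = rng.choice(X.shape[1], size=r, replace=True, p=p)
    scale = 1.0 / np.sqrt(r * p[idx])              # standard importance-sampling rescaling
    return idx, X[:, idx] * scale

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1000))
y = np.where(X[:, 0] + 0.1 * rng.standard_normal(200) > 0, 1, -1)
idx, X_small = sample_features(X, k=5, r=100, rng=rng)
clf = LinearSVC().fit(X_small, y)                  # train on the reduced feature space
```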

    Improved Practical Matrix Sketching with Guarantees

    Matrices have become essential data representations for many large-scale problems in data analytics, and hence matrix sketching is a critical task. Although much research has focused on improving the error/size trade-off under various sketching paradigms, the many forms of error bounds make these approaches hard to compare in theory and in practice. This paper attempts to categorize and compare most known methods under row-wise streaming updates with provable guarantees, and then to tweak some of these methods to gain practical improvements while retaining guarantees. For instance, we observe that a simple heuristic, iSVD, with no guarantees, tends to outperform all known approaches in terms of the size/error trade-off. We modify FrequentDirections, the best performing method with guarantees under the size/error trade-off, to match the performance of iSVD while retaining its guarantees. We also demonstrate some adversarial datasets where iSVD performs quite poorly. When comparing techniques under the time/error trade-off, techniques based on hashing or sampling tend to perform better. In this setting we modify the most studied sampling regime to retain its error guarantee while obtaining dramatic improvements in the time/error trade-off. Finally, we provide easy replication of our studies on APT, a new testbed which makes available not only code and datasets, but also a computing platform with fixed environmental settings. Comment: 27 pages
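    For reference, the un-tweaked FrequentDirections baseline under row-wise streaming updates is only a few lines; the sketch below is the textbook per-row-shrink variant (not the modified method the paper proposes), with a sketch size ell chosen arbitrarily for the example.

```python
# Textbook FrequentDirections: maintain an ell x d sketch B of a row stream A
# with the guarantee ||A^T A - B^T B||_2 <= ||A||_F^2 / ell.
import numpy as np

def frequent_directions(rows, ell):
    d = rows.shape[1]
    B = np.zeros((ell, d))
    for row in rows:
        B[-1, :] = row                              # the last sketch row is always free (zero)
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        delta = s[-1] ** 2                          # smallest squared singular value
        s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
        B = np.diag(s) @ Vt                         # shrink every direction; last row becomes zero
    return B

A = np.random.default_rng(1).standard_normal((1000, 50))
B = frequent_directions(A, ell=10)                  # 10 x 50 sketch of a 1000 x 50 stream
```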

    Learning loopy graphical models with latent variables: Efficient methods and guarantees

    The problem of structure estimation in graphical models with latent variables is considered. We characterize conditions for tractable graph estimation and develop efficient methods with provable guarantees. We consider models where the underlying Markov graph is locally tree-like, and the model is in the regime of correlation decay. For the special case of the Ising model, the number of samples $n$ required for structural consistency of our method scales as $n = \Omega(\theta_{\min}^{-\delta\eta(\eta+1)-2}\log p)$, where $p$ is the number of variables, $\theta_{\min}$ is the minimum edge potential, $\delta$ is the depth (i.e., the distance from a hidden node to the nearest observed nodes), and $\eta$ is a parameter which depends on the bounds on node and edge potentials in the Ising model. Necessary conditions for structural consistency under any algorithm are derived, and our method nearly matches the lower bound on sample requirements. Further, the proposed method is practical to implement and provides flexibility to control the number of latent variables and the cycle lengths in the output graph. Comment: Published at http://dx.doi.org/10.1214/12-AOS1070 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
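    To get a feel for the stated sample-complexity scaling, one can plug illustrative numbers into the bound; the values below are arbitrary choices for the example, not parameters from the paper.

```python
# Back-of-the-envelope evaluation of n = Omega(theta_min^(-delta*eta*(eta+1) - 2) * log p).
import math

theta_min, delta, eta, p = 0.2, 2, 1.5, 1000         # illustrative values only
exponent = -delta * eta * (eta + 1) - 2              # exponent on theta_min in the bound
n_scaling = theta_min ** exponent * math.log(p)
print(f"sample-size scaling ~ {n_scaling:.3e}")      # grows quickly as theta_min shrinks
```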

    Randomized and Deterministic Attention Sparsification Algorithms for Over-parameterized Feature Dimension

    Large language models (LLMs) have shown their power in different areas. Attention computation, as an important subroutine of LLMs, has also attracted interest in theory. Recently, the static computation and dynamic maintenance of the attention matrix have been studied by [Alman and Song 2023] and [Brand, Song and Zhou 2023] from both the algorithmic and the hardness perspectives. In this work, we consider sparsification of the attention problem. We make one simplification: the logit matrix is symmetric. Let $n$ denote the length of the sentence and let $d$ denote the embedding dimension. Given a matrix $X \in \mathbb{R}^{n \times d}$, suppose $d \gg n$ and $\| X X^\top \|_{\infty} < r$ with $r \in (0, 0.1)$; then we aim to find $Y \in \mathbb{R}^{n \times m}$ (where $m \ll d$) such that $\| D(Y)^{-1} \exp(Y Y^\top) - D(X)^{-1} \exp(X X^\top) \|_{\infty} \leq O(r)$. We provide two results for this problem. $\bullet$ Our first result is a randomized algorithm. It runs in $\widetilde{O}(\mathrm{nnz}(X) + n^{\omega})$ time, succeeds with probability $1 - \delta$, and chooses $m = O(n \log(n/\delta))$. Here $\mathrm{nnz}(X)$ denotes the number of non-zero entries in $X$, and $\omega$ denotes the exponent of matrix multiplication; currently $\omega \approx 2.373$. $\bullet$ Our second result is a deterministic algorithm. It runs in $\widetilde{O}(\min\{\sum_{i \in [d]} \mathrm{nnz}(X_i)^2, d n^{\omega-1}\} + n^{\omega+1})$ time and chooses $m = O(n)$. Here $X_i$ denotes the $i$-th column of the matrix $X$. Our main findings have the following implication for applied LLM tasks: for any super large feature dimension, we can reduce it down to a size nearly linear in the length of the sentence.
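    As a hedged sketch of the randomized direction, one can replace X by Y = XS for a random sketching matrix S with m = O(n log n) columns and compare the row-normalized matrices D(.)^{-1} exp(..^T); the Gaussian sketch and all constants below are illustrative assumptions, not the paper's algorithm.

```python
# Illustration: sketch the feature dimension from d >> n down to m = O(n log n)
# and measure the entrywise change in the normalized attention matrix.
import numpy as np

def attention(Z):
    """Row-normalized exp(Z Z^T), i.e. D(Z)^{-1} exp(Z Z^T)."""
    A = np.exp(Z @ Z.T)
    return A / A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, d = 32, 4096                                      # sentence length n, feature dimension d >> n
X = rng.standard_normal((n, d)) / np.sqrt(d * n)     # keeps ||X X^T||_inf small, as the abstract assumes
m = int(n * np.log(n))                               # target dimension m = O(n log n)
S = rng.standard_normal((d, m)) / np.sqrt(m)         # Gaussian sketch, so Y Y^T approximates X X^T
Y = X @ S

err = np.max(np.abs(attention(Y) - attention(X)))
print(f"entrywise error of the normalized attention matrix: {err:.2e}")
```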