
    Linear Programming Bounds for Randomly Sampling Colorings

    Here we study the problem of sampling random proper colorings of a bounded degree graph. Let $k$ be the number of colors and let $d$ be the maximum degree. In 1999, Vigoda showed that the Glauber dynamics is rapidly mixing for any $k > \frac{11}{6} d$. It turns out that there is a natural barrier at $\frac{11}{6}$, below which there is no one-step coupling that is contractive, even for the flip dynamics. We use linear programming and duality arguments to guide our construction of a better coupling. We fully characterize the obstructions to going beyond $\frac{11}{6}$. These examples turn out to be quite brittle, and even starting from one, they are likely to break apart before the flip dynamics changes the distance between two neighboring colorings. We use this intuition to design a variable length coupling that shows that the Glauber dynamics is rapidly mixing for any $k \ge \left(\frac{11}{6} - \epsilon_0\right) d$, where $\epsilon_0 \geq 9.4 \cdot 10^{-5}$. This is the first improvement to Vigoda's analysis that holds for general graphs.
    Comment: 30 pages, 3 figures; fixed some typos
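
    For reference, the chain being analyzed is the Glauber dynamics on proper colorings: at each step, pick a uniformly random vertex and recolor it with a color chosen uniformly among those not used by its neighbors. The sketch below illustrates only this baseline dynamics (not the paper's flip dynamics or variable-length coupling); the graph representation and parameter names are illustrative.

```python
import random

def glauber_step(graph, coloring, k):
    """One Glauber dynamics step: recolor a random vertex with a
    uniformly random color not currently used by its neighbors."""
    v = random.choice(list(graph))
    blocked = {coloring[u] for u in graph[v]}
    allowed = [c for c in range(k) if c not in blocked]
    if allowed:  # always nonempty when k exceeds the maximum degree
        coloring[v] = random.choice(allowed)
    return coloring

def run_glauber(graph, k, steps, seed=0):
    """Run the chain from a greedy initial proper coloring (assumes k > max degree)."""
    random.seed(seed)
    coloring = {}
    for v in graph:
        used = {coloring[u] for u in graph[v] if u in coloring}
        coloring[v] = min(c for c in range(k) if c not in used)
    for _ in range(steps):
        glauber_step(graph, coloring, k)
    return coloring

# Toy example: a 5-cycle with k = 4 colors (maximum degree d = 2, so k > (11/6) d)
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(run_glauber(cycle, k=4, steps=1000))
```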

    Beyond the Low-Degree Algorithm: Mixtures of Subcubes and Their Applications

    We introduce the problem of learning mixtures of $k$ subcubes over $\{0,1\}^n$, which contains many classic learning theory problems as a special case (and is itself a special case of others). We give a surprising $n^{O(\log k)}$-time learning algorithm based on higher-order multilinear moments. It is not possible to learn the parameters because the same distribution can be represented by quite different models. Instead, we develop a framework for reasoning about how multilinear moments can pinpoint essential features of the mixture, like the number of components. We also give applications of our algorithm to learning decision trees with stochastic transitions (which also capture interesting scenarios where the transitions are deterministic but there are latent variables). Using our algorithm for learning mixtures of subcubes, we can approximate the Bayes optimal classifier within additive error $\epsilon$ on $k$-leaf decision trees with at most $s$ stochastic transitions on any root-to-leaf path in $n^{O(s + \log k)} \cdot \text{poly}(1/\epsilon)$ time. In this stochastic setting, the classic Occam algorithms for learning decision trees with zero stochastic transitions break down, while the low-degree algorithm of Linial et al. inherently has a quasipolynomial dependence on $1/\epsilon$. In contrast, as we will show, mixtures of $k$ subcubes are uniquely determined by their degree $2 \log k$ moments and hence provide a useful abstraction for simultaneously achieving the polynomial dependence on $1/\epsilon$ of the classic Occam algorithms for decision trees and the flexibility of the low-degree algorithm in being able to accommodate stochastic transitions. Using our multilinear moment techniques, we also give the first improved upper and lower bounds since the work of Feldman et al. for the related but harder problem of learning mixtures of binary product distributions.
    Comment: 62 pages; to appear in STOC 2019
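
    To make the model concrete, the following is a minimal sketch (component encodings and names are illustrative, not the paper's algorithm) of sampling from a mixture of subcubes over $\{0,1\}^n$: each component fixes some coordinates and leaves the rest uniform, and the multilinear moments $\mathbb{E}[\prod_{i \in S} x_i]$ that drive the approach can be estimated empirically from samples.

```python
import itertools
import random

def sample_mixture_of_subcubes(weights, components, n):
    """Draw one sample from a mixture of subcubes over {0,1}^n.
    Each component is a dict {coordinate: fixed bit}; unfixed coordinates are uniform."""
    comp = random.choices(components, weights=weights, k=1)[0]
    return [comp.get(i, random.randint(0, 1)) for i in range(n)]

def empirical_multilinear_moment(samples, S):
    """Estimate the multilinear moment E[prod_{i in S} x_i] from samples."""
    return sum(all(x[i] for i in S) for x in samples) / len(samples)

# Toy example: n = 4 with two subcube components
n = 4
components = [{0: 1, 1: 0}, {2: 1, 3: 1}]   # fixed coordinates per component
weights = [0.6, 0.4]                         # mixing weights
random.seed(0)
samples = [sample_mixture_of_subcubes(weights, components, n) for _ in range(20000)]
for S in itertools.combinations(range(n), 2):
    print(S, round(empirical_multilinear_moment(samples, S), 3))
```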

    Provably learning a multi-head attention layer

    The multi-head attention layer is one of the key components of the transformer architecture that sets it apart from traditional feed-forward models. Given a sequence length $k$, attention matrices $\mathbf{\Theta}_1,\ldots,\mathbf{\Theta}_m \in \mathbb{R}^{d\times d}$, and projection matrices $\mathbf{W}_1,\ldots,\mathbf{W}_m \in \mathbb{R}^{d\times d}$, the corresponding multi-head attention layer $F: \mathbb{R}^{k\times d} \to \mathbb{R}^{k\times d}$ transforms length-$k$ sequences of $d$-dimensional tokens $\mathbf{X} \in \mathbb{R}^{k\times d}$ via $F(\mathbf{X}) \triangleq \sum^m_{i=1} \mathrm{softmax}(\mathbf{X}\mathbf{\Theta}_i\mathbf{X}^\top)\mathbf{X}\mathbf{W}_i$. In this work, we initiate the study of provably learning a multi-head attention layer from random examples and give the first nontrivial upper and lower bounds for this problem:
    - Provided $\{\mathbf{W}_i, \mathbf{\Theta}_i\}$ satisfy certain non-degeneracy conditions, we give a $(dk)^{O(m^3)}$-time algorithm that learns $F$ to small error given random labeled examples drawn uniformly from $\{\pm 1\}^{k\times d}$.
    - We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable.
    We focus on Boolean $\mathbf{X}$ to mimic the discrete nature of tokens in large language models, though our techniques naturally extend to standard continuous settings, e.g. Gaussian. Our algorithm, which is centered around using examples to sculpt a convex body containing the unknown parameters, is a significant departure from existing provable algorithms for learning feedforward networks, which predominantly exploit algebraic and rotation invariance properties of the Gaussian distribution. In contrast, our analysis is more flexible as it primarily relies on various upper and lower tail bounds for the input distribution and "slices" thereof.
    Comment: 105 pages, comments welcome
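
    A direct transcription of the layer $F$ defined above, written in NumPy, is sketched below; the row-wise convention for the softmax and the random toy shapes are the only assumptions beyond the stated formula.

```python
import numpy as np

def softmax_rows(A):
    """Row-wise softmax applied to each k x k attention score matrix."""
    A = A - A.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    E = np.exp(A)
    return E / E.sum(axis=-1, keepdims=True)

def multi_head_attention(X, Thetas, Ws):
    """F(X) = sum_i softmax(X Theta_i X^T) X W_i for X in R^{k x d}."""
    return sum(softmax_rows(X @ Theta @ X.T) @ X @ W
               for Theta, W in zip(Thetas, Ws))

# Toy instance: m = 2 heads, sequence length k = 5, token dimension d = 3,
# with Boolean inputs X in {+1, -1}^{k x d} as in the learning setup above.
rng = np.random.default_rng(0)
m, k, d = 2, 5, 3
Thetas = [rng.standard_normal((d, d)) for _ in range(m)]
Ws = [rng.standard_normal((d, d)) for _ in range(m)]
X = rng.choice([-1.0, 1.0], size=(k, d))
print(multi_head_attention(X, Thetas, Ws).shape)   # (5, 3)
```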

    A faster and simpler algorithm for learning shallow networks

    We revisit the well-studied problem of learning a linear combination of $k$ ReLU activations given labeled examples drawn from the standard $d$-dimensional Gaussian measure. Chen et al. [CDG+23] recently gave the first algorithm for this problem to run in $\text{poly}(d, 1/\varepsilon)$ time when $k = O(1)$, where $\varepsilon$ is the target error. More precisely, their algorithm runs in time $(d/\varepsilon)^{\mathrm{quasipoly}(k)}$ and learns over multiple stages. Here we show that a much simpler one-stage version of their algorithm suffices, and moreover its runtime is only $(d/\varepsilon)^{O(k^2)}$.
    Comment: 14 pages
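
    For concreteness, the target class here is $x \mapsto \sum_{i=1}^k a_i \,\mathrm{relu}(\langle w_i, x\rangle)$ with standard Gaussian inputs. The sketch below only generates labeled examples from such a network (parameter names are illustrative; it is not the learning algorithm itself).

```python
import numpy as np

def relu_combination(X, weights, coeffs):
    """Evaluate f(x) = sum_i coeffs[i] * relu(<weights[i], x>) on the rows of X."""
    return np.maximum(X @ weights.T, 0.0) @ coeffs

def sample_gaussian_examples(weights, coeffs, num_samples, rng):
    """Labeled examples (x, f(x)) with x drawn from the standard d-dimensional Gaussian."""
    d = weights.shape[1]
    X = rng.standard_normal((num_samples, d))
    return X, relu_combination(X, weights, coeffs)

# Toy instance with k = 3 ReLUs in d = 10 dimensions
rng = np.random.default_rng(0)
k, d = 3, 10
weights = rng.standard_normal((k, d))   # the w_i, one per row
coeffs = rng.standard_normal(k)         # the a_i
X, y = sample_gaussian_examples(weights, coeffs, num_samples=5, rng=rng)
print(y)
```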

    Futility and utility of a few ancillas for Pauli channel learning

    In this paper we revisit one of the prototypical tasks for characterizing the structure of noise in quantum devices, estimating the eigenvalues of an $n$-qubit Pauli noise channel. Prior work (Chen et al., 2022) established exponential lower bounds for this task for algorithms with limited quantum memory. We first improve upon their lower bounds and show: (1) Any algorithm without quantum memory must make $\Omega(2^n/\epsilon^2)$ measurements to estimate each eigenvalue within error $\epsilon$. This is tight and implies the randomized benchmarking protocol is optimal, resolving an open question of (Flammia and Wallman, 2020). (2) Any algorithm with $\le k$ ancilla qubits of quantum memory must make $\Omega(2^{(n-k)/3})$ queries to the unknown channel. Crucially, unlike in (Chen et al., 2022), our bound holds even if arbitrary adaptive control and channel concatenation are allowed. In fact these lower bounds, like those of (Chen et al., 2022), hold even for the easier hypothesis testing problem of determining whether the underlying channel is completely depolarizing or has exactly one other nontrivial eigenvalue. Surprisingly, we show that: (3) With only $k = 2$ ancilla qubits of quantum memory, there is an algorithm that solves this hypothesis testing task with high probability using a single measurement. Note that (3) does not contradict (2), as the protocol concatenates exponentially many queries to the channel before the measurement. This result suggests a novel mechanism by which channel concatenation and $O(1)$ qubits of quantum memory could work in tandem to yield striking speedups for quantum process learning that are not possible for quantum state learning.
    Comment: 35 pages, 1 figure
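
    As background for what "eigenvalues of a Pauli channel" means here: a Pauli channel $\Lambda(\rho) = \sum_a p_a P_a \rho P_a$ has the Pauli operators as eigenvectors, $\Lambda(P_b) = \lambda_b P_b$, with $\lambda_b = \sum_a (-1)^{\langle a, b\rangle} p_a$, where $\langle a, b\rangle$ records whether $P_a$ and $P_b$ anticommute. The single-qubit sketch below just evaluates this identity; it is not any of the protocols or lower-bound constructions analyzed in the paper.

```python
import numpy as np

# Single-qubit Paulis, indexed 0..3 as I, X, Y, Z
PAULIS = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli_channel_eigenvalues(probs):
    """Eigenvalues lambda_b = sum_a (-1)^{<a,b>} p_a of the channel
    rho -> sum_a p_a P_a rho P_a, where <a,b> = 1 iff P_a and P_b anticommute."""
    eigs = []
    for Pb in PAULIS:
        signs = [1 if np.allclose(Pa @ Pb, Pb @ Pa) else -1 for Pa in PAULIS]
        eigs.append(sum(s * p for s, p in zip(signs, probs)))
    return eigs

# Example: mild Pauli noise with error probabilities (p_I, p_X, p_Y, p_Z)
print(pauli_channel_eigenvalues([0.85, 0.05, 0.05, 0.05]))
# -> [1.0, 0.8, 0.8, 0.8]; the completely depolarizing channel (all p_a = 1/4)
#    instead gives eigenvalues [1, 0, 0, 0].
```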