### Linear Programming Bounds for Randomly Sampling Colorings

Here we study the problem of sampling random proper colorings of a bounded
degree graph. Let $k$ be the number of colors and let $d$ be the maximum
degree. In 1999, Vigoda showed that the Glauber dynamics is rapidly mixing for
any $k > \frac{11}{6} d$. It turns out that there is a natural barrier at
$\frac{11}{6}$, below which there is no one-step coupling that is contractive,
even for the flip dynamics.
We use linear programming and duality arguments to guide our construction of
a better coupling. We fully characterize the obstructions to going beyond
$\frac{11}{6}$. These examples turn out to be quite brittle, and even starting
from one, they are likely to break apart before the flip dynamics changes the
distance between two neighboring colorings. We use this intuition to design a
variable length coupling that shows that the Glauber dynamics is rapidly mixing
for any $k\ge \left(\frac{11}{6} - \epsilon_0\right)d$ where $\epsilon_0 \geq
9.4 \cdot 10^{-5}$. This is the first improvement to Vigoda's analysis that
holds for general graphs.
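
For readers less familiar with the chain under analysis, the following is a minimal sketch of the single-site (heat-bath) Glauber dynamics on proper colorings: at each step a uniformly random vertex is recolored with a uniformly random color not appearing among its neighbors. The graph, starting coloring, and function names are illustrative assumptions, not taken from the paper.

```python
import random

def glauber_step(coloring, adj, k):
    """One heat-bath step: recolor a uniformly random vertex with a uniformly
    random color not used by any of its neighbors."""
    v = random.randrange(len(coloring))
    forbidden = {coloring[u] for u in adj[v]}
    # The list of allowed colors is nonempty whenever k >= max degree + 1.
    coloring[v] = random.choice([c for c in range(k) if c not in forbidden])
    return coloring

# Toy run on a 5-cycle with k = 3 colors (assumed example).
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [0, 3]}
coloring = [0, 1, 0, 1, 2]  # any proper starting coloring
for _ in range(1000):
    coloring = glauber_step(coloring, adj, 3)
print(coloring)
```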

### Beyond the Low-Degree Algorithm: Mixtures of Subcubes and Their Applications

We introduce the problem of learning mixtures of $k$ subcubes over
$\{0,1\}^n$, which contains many classic learning theory problems as a special
case (and is itself a special case of others). We give a surprising $n^{O(\log
k)}$-time learning algorithm based on higher-order multilinear moments. It is
not possible to learn the parameters because the same distribution can be
represented by quite different models. Instead, we develop a framework for
reasoning about how multilinear moments can pinpoint essential features of the
mixture, like the number of components.
We also give applications of our algorithm to learning decision trees with
stochastic transitions (which also capture interesting scenarios where the
transitions are deterministic but there are latent variables). Using our
algorithm for learning mixtures of subcubes, we can approximate the Bayes
optimal classifier within additive error $\epsilon$ on $k$-leaf decision trees
with at most $s$ stochastic transitions on any root-to-leaf path in $n^{O(s +
\log k)}\cdot\text{poly}(1/\epsilon)$ time. In this stochastic setting, the
classic Occam algorithms for learning decision trees with zero stochastic
transitions break down, while the low-degree algorithm of Linial et al.
inherently has a quasipolynomial dependence on $1/\epsilon$.
In contrast, as we will show, mixtures of $k$ subcubes are uniquely
determined by their degree $2 \log k$ moments and hence provide a useful
abstraction for simultaneously achieving the polynomial dependence on
$1/\epsilon$ of the classic Occam algorithms for decision trees and the
flexibility of the low-degree algorithm in being able to accommodate stochastic
transitions. Using our multilinear moment techniques, we also give the first
improved upper and lower bounds since the work of Feldman et al. for the
related but harder problem of learning mixtures of binary product
distributions.
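
As a concrete illustration of the multilinear moments driving the algorithm, the sketch below samples from a small mixture of subcubes and compares an empirical moment with the closed form $\mathbb{E}[\prod_{j\in S} x_j] = \sum_i w_i \prod_{j\in S} \mu_{i,j}$. The representation (mixing weights $w_i$ and coordinate marginals $\mu_i \in \{0, 1/2, 1\}^n$) and all parameter values are toy assumptions, not taken from the paper.

```python
import math, random

def multilinear_moment(w, mu, S):
    """Exact moment: E[prod_{j in S} x_j] = sum_i w_i * prod_{j in S} mu_i[j]."""
    return sum(wi * math.prod(m[j] for j in S) for wi, m in zip(w, mu))

def sample(w, mu):
    """Draw one point: pick a component, then each coordinate independently
    with its marginal probability (0, 1/2, or 1 for a subcube)."""
    m = random.choices(mu, weights=w)[0]
    return [1 if random.random() < p else 0 for p in m]

# Two-component toy mixture over {0,1}^3.
w, mu = [0.5, 0.5], [[1, 0.5, 0], [0, 0.5, 1]]
S = (0, 1)
samples = [sample(w, mu) for _ in range(20000)]
est = sum(math.prod(x[j] for j in S) for x in samples) / len(samples)
print(multilinear_moment(w, mu, S), est)  # both close to 0.25
```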

### Provably learning a multi-head attention layer

The multi-head attention layer is one of the key components of the
transformer architecture that sets it apart from traditional feed-forward
models. Given a sequence length $k$, attention matrices
$\mathbf{\Theta}_1,\ldots,\mathbf{\Theta}_m\in\mathbb{R}^{d\times d}$, and
projection matrices $\mathbf{W}_1,\ldots,\mathbf{W}_m\in\mathbb{R}^{d\times
d}$, the corresponding multi-head attention layer $F: \mathbb{R}^{k\times d}\to
\mathbb{R}^{k\times d}$ transforms length-$k$ sequences of $d$-dimensional
tokens $\mathbf{X}\in\mathbb{R}^{k\times d}$ via $F(\mathbf{X}) \triangleq
\sum^m_{i=1}
\mathrm{softmax}(\mathbf{X}\mathbf{\Theta}_i\mathbf{X}^\top)\mathbf{X}\mathbf{W}_i$.
In this work, we initiate the study of provably learning a multi-head attention
layer from random examples and give the first nontrivial upper and lower bounds
for this problem:
- Provided $\{\mathbf{W}_i, \mathbf{\Theta}_i\}$ satisfy certain
non-degeneracy conditions, we give a $(dk)^{O(m^3)}$-time algorithm that learns
$F$ to small error given random labeled examples drawn uniformly from $\{\pm
1\}^{k\times d}$.
- We prove computational lower bounds showing that in the worst case,
exponential dependence on $m$ is unavoidable.
We focus on Boolean $\mathbf{X}$ to mimic the discrete nature of tokens in
large language models, though our techniques naturally extend to standard
continuous settings, e.g. Gaussian. Our algorithm, which is centered around
using examples to sculpt a convex body containing the unknown parameters, is a
significant departure from existing provable algorithms for learning
feedforward networks, which predominantly exploit algebraic and rotation
invariance properties of the Gaussian distribution. In contrast, our analysis
is more flexible as it primarily relies on various upper and lower tail bounds
for the input distribution and "slices" thereof.
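
To make the target function class concrete, here is a direct sketch of the layer $F$ defined above. The row-wise application of the softmax and the toy problem sizes are assumptions for illustration, not fixed by the abstract.

```python
import numpy as np

def softmax_rows(A):
    """Row-wise softmax of a k x k score matrix (standard attention convention)."""
    A = A - A.max(axis=1, keepdims=True)
    E = np.exp(A)
    return E / E.sum(axis=1, keepdims=True)

def multi_head_attention(X, Thetas, Ws):
    """F(X) = sum_i softmax(X Theta_i X^T) X W_i."""
    return sum(softmax_rows(X @ Th @ X.T) @ X @ W for Th, W in zip(Thetas, Ws))

# Toy instance: m = 2 heads, k = 4 tokens, d = 3 dimensions (assumed sizes).
rng = np.random.default_rng(0)
m, k, d = 2, 4, 3
Thetas = [rng.standard_normal((d, d)) for _ in range(m)]
Ws = [rng.standard_normal((d, d)) for _ in range(m)]
X = rng.choice([-1.0, 1.0], size=(k, d))  # Boolean +/-1 tokens, as in the learning setup
print(multi_head_attention(X, Thetas, Ws).shape)  # (k, d)
```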

### A faster and simpler algorithm for learning shallow networks

We revisit the well-studied problem of learning a linear combination of $k$
ReLU activations given labeled examples drawn from the standard $d$-dimensional
Gaussian measure. Chen et al. [CDG+23] recently gave the first algorithm for
this problem to run in $\text{poly}(d,1/\varepsilon)$ time when $k = O(1)$,
where $\varepsilon$ is the target error. More precisely, their algorithm runs
in time $(d/\varepsilon)^{\mathrm{quasipoly}(k)}$ and learns over multiple
stages. Here we show that a much simpler one-stage version of their algorithm
suffices, and moreover its runtime is only $(d/\varepsilon)^{O(k^2)}$.
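
For reference, a minimal sketch of the concept class and example distribution in question: labels are produced by a linear combination of $k$ ReLU activations on standard Gaussian inputs. The toy sizes and random parameters are assumed for illustration only.

```python
import numpy as np

def relu_combination(X, W, a):
    """Target class: f(x) = sum_{i=1}^k a_i * max(<w_i, x>, 0)."""
    return np.maximum(X @ W.T, 0.0) @ a

rng = np.random.default_rng(1)
d, k, n = 10, 3, 5
W = rng.standard_normal((k, d))   # rows w_1, ..., w_k
a = rng.standard_normal(k)        # combination coefficients
X = rng.standard_normal((n, d))   # examples drawn from the standard Gaussian measure
y = relu_combination(X, W, a)     # labels the learner observes
print(y)
```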

### Futility and utility of a few ancillas for Pauli channel learning

In this paper we revisit one of the prototypical tasks for characterizing the
structure of noise in quantum devices, estimating the eigenvalues of an
$n$-qubit Pauli noise channel. Prior work (Chen et al., 2022) established
exponential lower bounds for this task for algorithms with limited quantum
memory. We first improve upon their lower bounds and show:
(1) Any algorithm without quantum memory must make $\Omega(2^n/\epsilon^2)$
measurements to estimate each eigenvalue within error $\epsilon$. This is tight
and implies the randomized benchmarking protocol is optimal, resolving an open
question of (Flammia and Wallman, 2020).
(2) Any algorithm with $\le k$ ancilla qubits of quantum memory must make
$\Omega(2^{(n-k)/3})$ queries to the unknown channel. Crucially, unlike in
(Chen et al., 2022), our bound holds even if arbitrary adaptive control and
channel concatenation are allowed.
In fact these lower bounds, like those of (Chen et al., 2022), hold even for
the easier hypothesis testing problem of determining whether the underlying
channel is completely depolarizing or has exactly one other nontrivial
eigenvalue. Surprisingly, we show that:
(3) With only $k=2$ ancilla qubits of quantum memory, there is an algorithm
that solves this hypothesis testing task with high probability using a single
measurement.
Note that (3) does not contradict (2) as the protocol concatenates
exponentially many queries to the channel before the measurement. This result
suggests a novel mechanism by which channel concatenation and $O(1)$ qubits of
quantum memory could work in tandem to yield striking speedups for quantum
process learning that are not possible for quantum state learning.
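
As background for what the eigenvalues of a Pauli channel are, the sketch below computes them from the channel's Pauli error probabilities: for $\Lambda(\rho) = \sum_P p_P P\rho P^\dagger$, each Pauli $Q$ is an eigenvector with eigenvalue $\lambda_Q = \sum_P p_P(-1)^{\langle P,Q\rangle}$, where $\langle P,Q\rangle$ records whether $P$ and $Q$ commute. The bit-string encoding and toy error rates are illustrative assumptions, not from the paper.

```python
import itertools

def symplectic(p, q):
    """0 if the two Paulis commute, 1 if they anticommute.
    Each Pauli is encoded as a pair of bit-tuples (a, b), meaning X^a Z^b."""
    (a1, b1), (a2, b2) = p, q
    return (sum(x * z for x, z in zip(a1, b2)) +
            sum(x * z for x, z in zip(a2, b1))) % 2

def pauli_eigenvalues(error_probs, n):
    """lambda_Q = sum_P p_P * (-1)^{<P,Q>} for the channel sum_P p_P P rho P^dagger."""
    bits = list(itertools.product((0, 1), repeat=n))
    paulis = list(itertools.product(bits, bits))
    return {q: sum(error_probs.get(p, 0.0) * (-1) ** symplectic(p, q) for p in paulis)
            for q in paulis}

# Single-qubit toy channel (assumed rates): identity with prob 0.9, X error with prob 0.1.
I, X, Z = ((0,), (0,)), ((1,), (0,)), ((0,), (1,))
lam = pauli_eigenvalues({I: 0.9, X: 0.1}, 1)
print(lam[Z])  # 0.8: X errors anticommute with Z, so the Z eigenvalue shrinks
```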