Linear Programming Bounds for Randomly Sampling Colorings
Here we study the problem of sampling random proper colorings of a bounded
degree graph. Let $k$ be the number of colors and let $\Delta$ be the maximum
degree. In 1999, Vigoda showed that the Glauber dynamics is rapidly mixing for
any $k > \frac{11}{6}\Delta$. It turns out that there is a natural barrier at
$\frac{11}{6}\Delta$, below which there is no one-step coupling that is contractive,
even for the flip dynamics.
We use linear programming and duality arguments to guide our construction of
a better coupling. We fully characterize the obstructions to going beyond
$\frac{11}{6}\Delta$. These examples turn out to be quite brittle, and even starting
from one, they are likely to break apart before the flip dynamics changes the
distance between two neighboring colorings. We use this intuition to design a
variable length coupling that shows that the Glauber dynamics is rapidly mixing
for any $k \geq \left(\frac{11}{6} - \epsilon_0\right)\Delta$, where $\epsilon_0 > 0$ is a small absolute constant. This is the first improvement to Vigoda's analysis that
holds for general graphs.
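As a point of reference, here is a minimal Python sketch of a single step of the (heat-bath) Glauber dynamics on proper colorings; the data structures and parameter names are illustrative assumptions, not taken from the paper.

```python
import random

def glauber_step(coloring, adj, k):
    """One heat-bath Glauber step on proper k-colorings.

    coloring: dict mapping each vertex to its color in {0, ..., k-1}
    adj: dict mapping each vertex to a list of its neighbors
    Assumes k exceeds the maximum degree, so a valid color always exists.
    """
    v = random.choice(list(coloring))            # pick a uniformly random vertex
    blocked = {coloring[u] for u in adj[v]}      # colors currently used by neighbors of v
    available = [c for c in range(k) if c not in blocked]
    coloring[v] = random.choice(available)       # recolor v uniformly among allowed colors
    return coloring

# Toy usage: a 4-cycle with k = 3 colors.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
coloring = {0: 0, 1: 1, 2: 0, 3: 1}
for _ in range(100):
    coloring = glauber_step(coloring, adj, 3)
print(coloring)
```

When the number of colors exceeds the maximum degree, at least one color is always available, so the step above is well defined.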
Beyond the Low-Degree Algorithm: Mixtures of Subcubes and Their Applications
We introduce the problem of learning mixtures of $k$ subcubes over
$\{0,1\}^n$, which contains many classic learning theory problems as a special
case (and is itself a special case of others). We give a surprising $n^{O(\log k)}$-time learning algorithm based on higher-order multilinear moments. It is
not possible to learn the parameters because the same distribution can be
represented by quite different models. Instead, we develop a framework for
reasoning about how multilinear moments can pinpoint essential features of the
mixture, like the number of components.
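To illustrate the objects involved, here is a small Python sketch, with illustrative names and parameters of our own choosing, that samples from a mixture of subcubes over $\{0,1\}^n$ and empirically estimates one multilinear moment $\mathbb{E}[\prod_{i\in S} x_i]$.

```python
import random

def sample_mixture_of_subcubes(weights, centers, n):
    """Draw one sample from a mixture of subcubes over {0,1}^n.

    weights: mixing weights of the components (sum to 1)
    centers: one list per component; entry i is 0, 1, or None (None = coordinate i is uniform)
    """
    j = random.choices(range(len(weights)), weights=weights)[0]
    return [random.randint(0, 1) if centers[j][i] is None else centers[j][i]
            for i in range(n)]

def multilinear_moment(samples, S):
    """Empirical estimate of E[prod_{i in S} x_i]."""
    return sum(all(x[i] == 1 for i in S) for x in samples) / len(samples)

# Toy example: a mixture of two subcubes over {0,1}^3.
weights = [0.5, 0.5]
centers = [[1, None, 0], [0, 1, None]]
samples = [sample_mixture_of_subcubes(weights, centers, 3) for _ in range(20000)]
print(multilinear_moment(samples, [0, 1]))  # ~ 0.5 * 0.5 + 0.5 * 0 = 0.25
```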
We also give applications of our algorithm to learning decision trees with
stochastic transitions (which also capture interesting scenarios where the
transitions are deterministic but there are latent variables). Using our
algorithm for learning mixtures of subcubes, we can approximate the Bayes
optimal classifier within additive error $\epsilon$ on $k$-leaf decision trees
with at most $s$ stochastic transitions on any root-to-leaf path in $n^{O(s + \log k)} \cdot \mathrm{poly}(1/\epsilon)$ time. In this stochastic setting, the
classic Occam algorithms for learning decision trees with zero stochastic
transitions break down, while the low-degree algorithm of Linial et al.
inherently has a quasipolynomial dependence on $1/\epsilon$.
In contrast, as we will show, mixtures of subcubes are uniquely
determined by their degree $2\log k$ moments and hence provide a useful
abstraction for simultaneously achieving the polynomial dependence on
$1/\epsilon$ of the classic Occam algorithms for decision trees and the
flexibility of the low-degree algorithm in being able to accommodate stochastic
transitions. Using our multilinear moment techniques, we also give the first
improved upper and lower bounds since the work of Feldman et al. for the
related but harder problem of learning mixtures of binary product
distributions.
Provably learning a multi-head attention layer
The multi-head attention layer is one of the key components of the
transformer architecture that sets it apart from traditional feed-forward
models. Given a sequence length $k$, attention matrices
$\mathbf{\Theta}_1,\ldots,\mathbf{\Theta}_m \in \mathbb{R}^{d\times d}$, and
projection matrices $\mathbf{W}_1,\ldots,\mathbf{W}_m \in \mathbb{R}^{d\times d}$, the corresponding multi-head attention layer $F: \mathbb{R}^{k\times d} \to \mathbb{R}^{k\times d}$ transforms length-$k$ sequences of $d$-dimensional
tokens $\mathbf{X} \in \mathbb{R}^{k\times d}$ via $F(\mathbf{X}) = \sum_{i=1}^{m} \mathrm{softmax}(\mathbf{X}\mathbf{\Theta}_i\mathbf{X}^\top)\,\mathbf{X}\mathbf{W}_i$, with the softmax applied row-wise.
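For concreteness, a minimal NumPy sketch of this map is given below; the shapes and toy values are purely illustrative, not taken from the paper.

```python
import numpy as np

def softmax_rows(A):
    """Row-wise softmax."""
    A = A - A.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    E = np.exp(A)
    return E / E.sum(axis=-1, keepdims=True)

def multi_head_attention(X, Thetas, Ws):
    """F(X) = sum_i softmax(X Theta_i X^T) X W_i  for X of shape (k, d)."""
    return sum(softmax_rows(X @ Theta @ X.T) @ X @ W
               for Theta, W in zip(Thetas, Ws))

# Toy usage with m = 2 heads, sequence length k = 4, dimension d = 3.
rng = np.random.default_rng(0)
k, d, m = 4, 3, 2
X = rng.choice([-1.0, 1.0], size=(k, d))           # Boolean tokens
Thetas = [rng.standard_normal((d, d)) for _ in range(m)]
Ws = [rng.standard_normal((d, d)) for _ in range(m)]
print(multi_head_attention(X, Thetas, Ws).shape)   # (4, 3)
```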
In this work, we initiate the study of provably learning a multi-head attention
layer from random examples and give the first nontrivial upper and lower bounds
for this problem:
- Provided $\{\mathbf{W}_i, \mathbf{\Theta}_i\}_{i=1}^{m}$ satisfy certain
non-degeneracy conditions, we give an algorithm, with runtime polynomial in $d$ and $k$ but exponential in the number of heads $m$, that learns $F$
to small error given random labeled examples drawn uniformly from $\{\pm 1\}^{k\times d}$.
- We prove computational lower bounds showing that in the worst case,
exponential dependence on the number of heads $m$ is unavoidable.
We focus on Boolean $\mathbf{X}$ to mimic the discrete nature of tokens in
large language models, though our techniques naturally extend to standard
continuous settings, e.g. Gaussian. Our algorithm, which is centered around
using examples to sculpt a convex body containing the unknown parameters, is a
significant departure from existing provable algorithms for learning
feedforward networks, which predominantly exploit algebraic and rotation
invariance properties of the Gaussian distribution. In contrast, our analysis
is more flexible as it primarily relies on various upper and lower tail bounds
for the input distribution and "slices" thereof.
A faster and simpler algorithm for learning shallow networks
We revisit the well-studied problem of learning a linear combination of $k$
ReLU activations given labeled examples drawn from the standard $d$-dimensional
Gaussian measure. Chen et al. [CDG+23] recently gave the first algorithm for
this problem to run in $\mathrm{poly}(d, 1/\epsilon)$ time when $k = O(1)$,
where $\epsilon$ is the target error. More precisely, their algorithm runs
in $(d/\epsilon)^{\mathrm{quasipoly}(k)}$ time and learns over multiple
stages. Here we show that a much simpler one-stage version of their algorithm
suffices, and moreover its runtime is only $(d/\epsilon)^{O(k^2)}$.
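To make the learning setup concrete, here is a short Python sketch of how labeled examples from such a target could be generated; the names and parameters are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def sample_examples(weights, directions, num_samples, rng):
    """Draw (x, y) pairs with x ~ N(0, I_d) and y = sum_j w_j * relu(<v_j, x>)."""
    d = directions.shape[1]
    X = rng.standard_normal((num_samples, d))           # standard d-dimensional Gaussian inputs
    Y = np.maximum(X @ directions.T, 0.0) @ weights     # ReLU features, then linear combination
    return X, Y

rng = np.random.default_rng(1)
k, d = 3, 5
directions = rng.standard_normal((k, d))    # the k hidden ReLU directions
weights = rng.standard_normal(k)             # their linear-combination coefficients
X, Y = sample_examples(weights, directions, 1000, rng)
print(X.shape, Y.shape)                       # (1000, 5) (1000,)
```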
Futility and utility of a few ancillas for Pauli channel learning
In this paper we revisit one of the prototypical tasks for characterizing the
structure of noise in quantum devices, estimating the eigenvalues of an
$n$-qubit Pauli noise channel. Prior work (Chen et al., 2022) established
exponential lower bounds for this task for algorithms with limited quantum
memory. We first improve upon their lower bounds and show:
(1) Any algorithm without quantum memory must make $\Omega(2^n/\epsilon^2)$
measurements to estimate each eigenvalue within error $\epsilon$. This is tight
and implies the randomized benchmarking protocol is optimal, resolving an open
question of (Flammia and Wallman, 2020).
(2) Any algorithm with $k$ ancilla qubits of quantum memory must make
$2^{\Omega(n-k)}$ queries to the unknown channel. Crucially, unlike in
(Chen et al., 2022), our bound holds even if arbitrary adaptive control and
channel concatenation are allowed.
In fact these lower bounds, like those of (Chen et al., 2022), hold even for
the easier hypothesis testing problem of determining whether the underlying
channel is completely depolarizing or has exactly one other nontrivial
eigenvalue. Surprisingly, we show that:
(3) With only a constant number of ancilla qubits of quantum memory, there is an algorithm
that solves this hypothesis testing task with high probability using a single
measurement.
Note that (3) does not contradict (2) as the protocol concatenates
exponentially many queries to the channel before the measurement. This result
suggests a novel mechanism by which channel concatenation and a few qubits of
quantum memory could work in tandem to yield striking speedups for quantum
process learning that are not possible for quantum state learning.
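For background, the eigenvalues in question are often called the Pauli fidelities of the channel. A small Python sketch (a toy illustration under our own naming, not the paper's protocol) computing them from a channel's Pauli error rates:

```python
from itertools import product

def commutation_sign(p, q):
    """+1 if Pauli strings p and q commute, -1 if they anticommute."""
    anti = sum(1 for a, b in zip(p, q) if a != 'I' and b != 'I' and a != b)
    return -1 if anti % 2 else 1

def pauli_eigenvalues(error_rates, n):
    """Eigenvalue lambda_q = sum_p error_rates[p] * commutation_sign(p, q)."""
    paulis = [''.join(s) for s in product('IXYZ', repeat=n)]
    return {q: sum(rate * commutation_sign(p, q) for p, rate in error_rates.items())
            for q in paulis}

# Toy single-qubit example: no error with prob 0.9, X/Y/Z errors with prob 0.1/3 each.
rates = {'I': 0.9, 'X': 0.1 / 3, 'Y': 0.1 / 3, 'Z': 0.1 / 3}
print(pauli_eigenvalues(rates, 1))
# lambda_I = 1.0 and lambda_X = lambda_Y = lambda_Z = 0.9 - 0.1/3 ~ 0.867
```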