A Birthday Repetition Theorem and Complexity of Approximating Dense CSPs
A $(k \times l)$-birthday repetition $G^{k \times l}$ of a two-prover game $G$ is a game in which the two provers are sent random sets of questions from $G$ of sizes $k$ and $l$ respectively. These two sets are sampled independently and uniformly among all sets of questions of those particular sizes. We prove the following birthday repetition theorem: when $G$ satisfies some mild conditions, $val(G^{k \times l})$ decreases exponentially in $\Omega(kl/n)$, where $n$ is the total number of questions. Our result positively resolves an open question posed by Aaronson, Impagliazzo and Moshkovitz (CCC 2014).
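In symbols, and with the dependence on $val(G)$ and the "mild conditions" suppressed (our paraphrase, not the paper's exact statement): for a game $G$ whose value is bounded away from $1$,

\[ val(G^{k \times l}) \le 2^{-\Omega(kl/n)}. \]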
As an application of our birthday repetition theorem, we obtain new
fine-grained hardness of approximation results for dense CSPs. Specifically, we
establish a tight trade-off between running time and approximation ratio for
dense CSPs by showing conditional lower bounds, integrality gaps and
approximation algorithms. In particular, for any sufficiently large $i$ and for
every $\epsilon > 0$, we show the following results:
- We exhibit an $O(q^{1/i})$-approximation algorithm for dense Max $k$-CSPs
with alphabet size $q$ via $O_k(i)$-level of Sherali-Adams relaxation.
- Through our birthday repetition theorem, we obtain an integrality gap of
$q^{1/i}$ for $\tilde\Omega_k(i)$-level Lasserre relaxation for fully-dense Max
$k$-CSP.
- Assuming that there is a constant $\epsilon > 0$ such that Max 3SAT cannot
be approximated to within $(1 - \epsilon)$ of the optimal in sub-exponential
time, our birthday repetition theorem implies that any algorithm that
approximates fully-dense Max $k$-CSP to within a $q^{1/i}$ factor takes
$(nq)^{\tilde\Omega_k(i)}$ time, almost tightly matching the algorithmic
result based on Sherali-Adams relaxation.
Conspiracies Between Learning Algorithms, Circuit Lower Bounds, and Pseudorandomness
We prove several results giving new and stronger connections between learning theory, circuit complexity and pseudorandomness. Let C be any typical class of Boolean circuits, and C[s(n)] denote n-variable C-circuits of size <= s(n). We show:
Learning Speedups: If C[poly(n)] admits a randomized weak learning algorithm under the uniform distribution with membership queries that runs in time 2^n/n^{omega(1)}, then for every k >= 1 and epsilon > 0 the class C[n^k] can be learned to high accuracy in time O(2^{n^epsilon}). There is epsilon > 0 such that C[2^{n^{epsilon}}] can be learned in time 2^n/n^{omega(1)} if and only if C[poly(n)] can be learned in time 2^{(log(n))^{O(1)}}.
Equivalences between Learning Models: We use learning speedups to obtain equivalences between various randomized learning and compression models, including sub-exponential time learning with membership queries, sub-exponential time learning with membership and equivalence queries, probabilistic function compression and probabilistic average-case function compression.
A Dichotomy between Learnability and Pseudorandomness: In the non-uniform setting, there is non-trivial learning for C[poly(n)] if and only if there are no exponentially secure pseudorandom functions computable in C[poly(n)].
Lower Bounds from Nontrivial Learning: If for each k >= 1, (depth-d)-C[n^k] admits a randomized weak learning algorithm with membership queries under the uniform distribution that runs in time 2^n/n^{omega(1)}, then for each k >= 1, BPE is not contained in (depth-d)-C[n^k]. If for some epsilon > 0 there are P-natural proofs useful against C[2^{n^{epsilon}}], then ZPEXP is not contained in C[poly(n)].
Karp-Lipton Theorems for Probabilistic Classes: If there is a k > 0 such that BPE is contained in i.o.Circuit[n^k], then BPEXP is contained in i.o.EXP/O(log(n)). If ZPEXP is contained in i.o.Circuit[2^{n/3}], then ZPEXP is contained in i.o.ESUBEXP.
Hardness Results for MCSP: All functions in non-uniform NC^1 reduce to the Minimum Circuit Size Problem via truth-table reductions computable by TC^0 circuits. In particular, if MCSP is in TC^0, then NC^1 = TC^0.
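To make the problem in the last item concrete: MCSP takes the full truth table of a boolean function plus a size bound, and asks whether some circuit within the bound computes the function. Here is a brute-force sketch for tiny instances (our illustration, not the paper's; the AND/OR/NOT gate basis and gate-count measure are simplifying choices):

```python
from itertools import combinations

def mcsp_bruteforce(truth_table: int, n: int, max_gates: int) -> bool:
    """Decide by exhaustive search whether some circuit with at most
    `max_gates` AND/OR/NOT gates over inputs x_0..x_{n-1} computes the
    given truth table. Functions are encoded as bitmasks whose bit a is
    the function's value on input assignment a."""
    mask = (1 << (1 << n)) - 1  # all-ones over the 2^n input rows
    # Truth table of each input variable x_i, as a bitmask.
    inputs = tuple(
        sum(1 << a for a in range(1 << n) if (a >> i) & 1) for i in range(n)
    )

    def search(funcs: tuple, gates_left: int) -> bool:
        if truth_table in funcs:
            return True
        if gates_left == 0:
            return False
        candidates = set()
        for f in funcs:                  # NOT gate
            candidates.add(mask & ~f)
        for f, g in combinations(funcs, 2):
            candidates.add(f & g)        # AND gate
            candidates.add(f | g)        # OR gate
        return any(
            search(funcs + (c,), gates_left - 1)
            for c in candidates if c not in funcs
        )

    return search(inputs, max_gates)

# XOR on two variables (rows 00,01,10,11 -> 0,1,1,0) needs several gates:
xor_tt = 0b0110
print(mcsp_bruteforce(xor_tt, n=2, max_gates=1))  # False
print(mcsp_bruteforce(xor_tt, n=2, max_gates=4))  # True
```

Packing truth tables into integers makes a circuit's behavior on all 2^n inputs a single bitwise operation; on real instances this search explodes, which is exactly the kind of hardness the reductions above speak to.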
An Atypical Survey of Typical-Case Heuristic Algorithms
Heuristic approaches often do so well that they seem to pretty much always
give the right answer. How close can heuristic algorithms get to always giving
the right answer, without inducing seismic complexity-theoretic consequences?
This article first discusses how a series of results by Berman, Buhrman,
Hartmanis, Homer, Longpré, Ogiwara, Schöning, and Watanabe, from the
early 1970s through the early 1990s, explicitly or implicitly limited how well
heuristic algorithms can do on NP-hard problems. In particular, many desirable
levels of heuristic success cannot be obtained unless severe, highly unlikely
complexity class collapses occur. Second, we survey work initiated by Goldreich
and Wigderson, who showed how under plausible assumptions deterministic
heuristics for randomized computation can achieve a very high frequency of
correctness. Finally, we consider formal ways in which theory can help explain
the effectiveness of heuristics that solve NP-hard problems in practice.
Consistency of circuit lower bounds with bounded theories
Proving that there are problems in $P^{NP}$ that require
boolean circuits of super-linear size is a major frontier in complexity theory.
While such lower bounds are known for larger complexity classes, existing
results only show that the corresponding problems are hard on infinitely many
input lengths. For instance, proving almost-everywhere circuit lower bounds is
open even for problems in $MAEXP$. Given the notorious difficulty of
proving lower bounds that hold for all large input lengths, we ask the
following question: Can we show that a large set of techniques cannot prove
that $NP$ is easy infinitely often? Motivated by this and related
questions about the interaction between mathematical proofs and computations,
we investigate circuit complexity from the perspective of logic.

Among other results, we prove that for any parameter $k \geq 1$ it is
consistent with theory $T$ that computational class $C \not\subseteq
i.o.SIZE(n^k)$, where $(T, C)$ is one of
the pairs: $T^1_2$ and $P^{NP}$, $S^1_2$ and $NP$, and
$PV$ and $P$. In other words, these theories cannot establish
infinitely often circuit upper bounds for the corresponding problems. This is
of interest because the weaker theory $PV$ already formalizes
sophisticated arguments, such as a proof of the PCP Theorem. These consistency
statements are unconditional and improve on earlier theorems of [KO17] and
[BM18] on the consistency of lower bounds with $PV$.
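Unpacking the logic (our notation, not necessarily the paper's): for each listed pair $(T, C)$ and every $k \geq 1$, the claim is

\[ T \nvdash \; C \subseteq i.o.SIZE(n^k), \]

which, by classical logic, is the same as saying that $T$ together with the statement $C \not\subseteq i.o.SIZE(n^k)$ is a consistent theory.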
Quantum de Finetti Theorems under Local Measurements with Applications
Quantum de Finetti theorems are a useful tool in the study of correlations in
quantum multipartite states. In this paper we prove two new quantum de Finetti
theorems, both showing that under tests formed by local measurements one can
get a much improved error dependence on the dimension of the subsystems. We
also obtain similar results for non-signaling probability distributions. We
give the following applications of the results:
We prove the optimality of the Chen-Drucker protocol for 3-SAT, under the
exponential time hypothesis.
We show that the maximum winning probability of free games can be estimated
in polynomial time by linear programming. We also show that 3-SAT with m
variables can be reduced to obtaining a constant error approximation of the
maximum winning probability under entangled strategies of O(m^{1/2})-player
one-round non-local games, in which the players communicate O(m^{1/2}) bits all
together.
We show that the optimization of certain polynomials over the hypersphere can
be performed in quasipolynomial time in the number of variables n by
considering O(log(n)) rounds of the Sum-of-Squares (Parrilo/Lasserre) hierarchy
of semidefinite programs. As an application to entanglement theory, we find a
quasipolynomial-time algorithm for deciding multipartite separability.
We consider a result due to Aaronson -- showing that given an unknown n qubit
state one can perform tomography that works well for most observables by
measuring only O(n) independent and identically distributed (i.i.d.) copies of
the state -- and relax the assumption of having i.i.d. copies of the state to
merely the ability to select subsystems at random from a quantum multipartite
state.
The proofs of the new quantum de Finetti theorems are based on information
theory, in particular on the chain rule of mutual information.
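The chain rule mentioned in the last paragraph is the standard identity

\[ I(A : B_1 \cdots B_k) = \sum_{i=1}^{k} I(A : B_i \mid B_1 \cdots B_{i-1}), \]

which guarantees that some conditional term is at most $I(A : B_1 \cdots B_k)/k$; isolating such a small term is the usual engine of information-theoretic de Finetti arguments (the identity itself is textbook; the paper's actual use of it is more involved).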