Average-Case Complexity
We survey the average-case complexity of problems in NP.
We discuss various notions of good-on-average algorithms, and present
completeness results due to Impagliazzo and Levin. Such completeness results
establish the fact that if a certain specific (but somewhat artificial) NP
problem is easy-on-average with respect to the uniform distribution, then all
problems in NP are easy-on-average with respect to all samplable distributions.
Applying the theory to natural distributional problems remains an outstanding
open question. We review some natural distributional problems whose
average-case complexity is of particular interest and that do not yet fit into
this theory.
A major open question is whether the existence of hard-on-average problems in NP
can be based on the P $\neq$ NP assumption or on related worst-case assumptions.
We review negative results showing that certain proof techniques cannot prove
such a result. While the relation between worst-case and average-case
complexity for general NP problems remains open, there has been progress in
understanding the relation between different ``degrees'' of average-case
complexity. We discuss some of these ``hardness amplification'' results.
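For concreteness, one standard way to make ``easy-on-average'' precise is Levin's notion of average polynomial time; the following is a paraphrase from general knowledge, not a quotation from the survey. An algorithm $A$ with running time $t_A$ solves a distributional problem $(L, D)$ in average polynomial time if there is an $\varepsilon > 0$ such that
\[
  \mathop{\mathbb{E}}_{x \sim D_n}\bigl[\, t_A(x)^{\varepsilon} \,\bigr] \;=\; O(n),
\]
so superpolynomially long runs are allowed, but only on inputs whose probability mass under $D_n$ is correspondingly small. This avoids the failure mode of the naive definition, where a negligible fraction of pathological inputs can blow up the expectation of $t_A(x)$ itself.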
A Casual Tour Around a Circuit Complexity Bound
I will discuss the recent proof that the complexity class NEXP
(nondeterministic exponential time) lacks nonuniform ACC circuits of polynomial
size. The proof will be described from the perspective of someone trying to
discover it.
Comment: 21 pages, 2 figures. An earlier version appeared in SIGACT News, September 2011.
Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal
The Odd Cycle Transversal problem (OCT) asks whether a given graph can be
made bipartite by deleting at most $k$ of its vertices. In a breakthrough
result, Reed, Smith, and Vetta (Operations Research Letters, 2004) gave a
$O(4^k \cdot kmn)$ time algorithm for it, the first algorithm with polynomial
runtime of uniform degree for every fixed $k$. It is known that this implies a
polynomial-time compression algorithm that turns OCT instances into equivalent
instances of size at most $O(4^k)$, a so-called kernelization. Since then
the existence of a polynomial kernel for OCT, i.e., a kernelization with size
bounded polynomially in $k$, has turned into one of the main open questions in
the study of kernelization.
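As a reminder of the underlying predicate (a minimal, self-contained Python sketch of the decision check itself, not part of any kernelization), a set $S$ is an odd cycle transversal exactly when deleting it leaves a 2-colorable graph:

from collections import deque

def is_bipartite_after_deletion(adj, deleted):
    """adj: dict vertex -> set of neighbours; deleted: set of vertices."""
    color = {}
    for start in adj:
        if start in deleted or start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in deleted:
                    continue
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False  # an odd cycle survives the deletion
    return True

# A 5-cycle is not bipartite, but deleting any single vertex makes it so.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(is_bipartite_after_deletion(c5, set()))  # False
print(is_bipartite_after_deletion(c5, {0}))    # True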
This work provides the first (randomized) polynomial kernelization for OCT.
We introduce a novel kernelization approach based on matroid theory, where we
encode all relevant information about a problem instance into a matroid with a
representation of size polynomial in $k$. For OCT, the matroid is built to
allow us to simulate the computation of the iterative compression step of the
algorithm of Reed, Smith, and Vetta, applied (for only one round) to an
approximate odd cycle transversal which it is aiming to shrink to size $k$. The
process is randomized with one-sided error exponentially small in $k$, where
the result can contain false positives but no false negatives, and the size
guarantee is cubic in the size of the approximate solution. Combined with an
$O(\sqrt{\log n})$-approximation (Agarwal et al., STOC 2005), we get a
reduction of the instance to size $O(k^{4.5})$, implying a randomized
polynomial kernelization.
Comment: Minor changes to agree with the SODA 2012 version of the paper.
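For intuition, the exponent $4.5$ stated above is consistent with the following back-of-the-envelope calculation (my reconstruction, not quoted from the paper). Instances with $k \ge \log n$ are the interesting case, since otherwise the $O(4^k \cdot kmn)$ algorithm itself runs in polynomial time; the approximate solution $X$ then has size
\[
  |X| \;=\; O\bigl(k\sqrt{\log n}\bigr) \;=\; O\bigl(k^{1.5}\bigr),
  \qquad\text{so}\qquad
  O\bigl(|X|^{3}\bigr) \;=\; O\bigl(k^{4.5}\bigr).
\]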
Strong ETH Breaks With Merlin and Arthur: Short Non-Interactive Proofs of Batch Evaluation
We present an efficient proof system for Multipoint Arithmetic Circuit
Evaluation: for every arithmetic circuit $C$ of size $s$ and
degree $d$ over a field $\mathbb{F}$, and any inputs $a_1, \ldots, a_K \in \mathbb{F}^n$,
the Prover sends the Verifier the values $C(a_1), \ldots, C(a_K) \in \mathbb{F}$ and a proof of $\tilde{O}(K \cdot d + s)$ length, and
the Verifier tosses $\mathrm{poly}(\log(dKs/\varepsilon))$ coins and can check the proof in about $\tilde{O}(K(n + d) + s)$ time, with probability of error less than $\varepsilon$.
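The following toy Python sketch (my illustration under simplifying assumptions, not the paper's protocol verbatim) shows the classic curve-based batching idea that such proof systems build on: the Prover sends $Q(t) = C(\varphi(t))$ for a low-degree curve $\varphi$ through the $K$ inputs, and the Verifier compares $Q$ against the claimed values and against one direct circuit evaluation at a random point. The tiny example circuit and all names are hypothetical, and efficiency is ignored.

import random

P = 2**31 - 1  # a prime modulus; arithmetic is over the field F_P

def circuit(x):
    # a toy arithmetic circuit of degree DEGREE = 2 in N_VARS = 3 variables
    return (x[0] * x[1] + x[2] * x[2]) % P

N_VARS, DEGREE = 3, 2

def lagrange_eval(xs, ys, t):
    """Evaluate the unique polynomial through (xs[i], ys[i]) at t (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * ((t - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def curve(inputs, t):
    """phi(t): the degree-(K-1) curve with phi(i) = inputs[i-1]."""
    xs = list(range(1, len(inputs) + 1))
    return [lagrange_eval(xs, [a[j] for a in inputs], t) for j in range(N_VARS)]

def prover(inputs):
    """Send claimed outputs plus Q = C(phi(.)), given as deg(Q)+1 evaluations."""
    K = len(inputs)
    claims = [circuit(a) for a in inputs]
    ts = list(range(DEGREE * (K - 1) + 1))
    return claims, (ts, [circuit(curve(inputs, t)) for t in ts])

def verifier(inputs, claims, proof):
    ts, q_evals = proof
    # 1) consistency: Q(i) must equal the i-th claimed output
    for i, y in enumerate(claims, start=1):
        if lagrange_eval(ts, q_evals, i) != y:
            return False
    # 2) spot-check: Q(r) must equal C(phi(r)) at a random point r
    r = random.randrange(P)
    return lagrange_eval(ts, q_evals, r) == circuit(curve(inputs, r))

inputs = [[random.randrange(P) for _ in range(N_VARS)] for _ in range(4)]
claims, proof = prover(inputs)
print(verifier(inputs, claims, proof))  # True for an honest prover

Completeness is immediate; soundness holds because a false claim forces the Prover's polynomial to differ from the true $Q$, and two distinct polynomials of degree $d(K-1)$ agree on only a tiny fraction of a large field.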
For small degree $d$, this "Merlin-Arthur" proof system (a.k.a. MA-proof
system) runs in nearly-linear time, and has many applications. For example, we
obtain MA-proof systems that run in $c^n$ time (for various $c < 2$) for the
Permanent, $\#$Circuit-SAT for all sublinear-depth circuits, counting
Hamiltonian cycles, and infeasibility of $0$-$1$ linear programs. In general,
the value of any polynomial in Valiant's class $\mathsf{VNP}$ can be certified
faster than "exhaustive summation" over all possible assignments. These results
strongly refute a Merlin-Arthur Strong ETH and an Arthur-Merlin Strong ETH posed
by Russell Impagliazzo and others.
We also give a three-round (AMA) proof system for quantified Boolean formulas
running in $2^{2n/3 + o(n)}$ time, nearly-linear time MA-proof systems for
counting orthogonal vectors in a collection and finding Closest Pairs in the
Hamming metric, and an MA-proof system running in $n^{k/2 + O(1)}$ time for
counting $k$-cliques in graphs.
We point to some potential future directions for refuting the
Nondeterministic Strong ETH.
Comment: 17 pages.
The Quantum Adiabatic Algorithm applied to random optimization problems: the quantum spin glass perspective
Among various algorithms designed to exploit the specific properties of
quantum computers with respect to classical ones, the quantum adiabatic
algorithm is a versatile proposition to find the minimal value of an arbitrary
cost function (ground state energy). Random optimization problems provide a
natural testbed to compare its efficiency with that of classical algorithms.
These problems correspond to mean field spin glasses that have been extensively
studied in the classical case. This paper reviews recent analytical works that
extended these studies to incorporate the effect of quantum fluctuations, and
also presents some original results in this direction.
Comment: 151 pages, 21 figures.
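As background to the review above, the interpolation driving the quantum adiabatic algorithm can be written in the standard textbook form (a generic formulation from general knowledge, not necessarily this paper's notation):
\[
  \hat{H}(t) \;=\; \Bigl(1 - \tfrac{t}{T}\Bigr)\,\hat{H}_B \;+\; \tfrac{t}{T}\,\hat{H}_P,
  \qquad 0 \le t \le T,
\]
where $\hat{H}_B$ is a mixing term (typically a transverse field) with an easily prepared ground state, and the ground state of $\hat{H}_P$ encodes the minimum of the cost function. The adiabatic theorem guarantees success when the runtime $T$ is large compared to an inverse power of the minimum spectral gap along the path, which is why the gap's scaling on random instances is the central quantity studied in this line of work.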
On the Complexity of Random Satisfiability Problems with Planted Solutions
The problem of identifying a planted assignment given a random $k$-SAT
formula consistent with the assignment exhibits a large algorithmic gap: while
the planted solution becomes unique and can be identified given a formula with
$O(n \log n)$ clauses, there are distributions over clauses for which the best
known efficient algorithms require $n^{k/2}$ clauses. We propose and study a
unified model for planted $k$-SAT, which captures well-known special cases. An
instance is described by a planted assignment $\sigma$ and a distribution on
clauses with $k$ literals. We define its distribution complexity as the largest
$r$ for which the distribution is not $r$-wise independent ($r \le k$ for
any distribution with a planted assignment).
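To make the definition concrete, the following Python sketch (my illustration via the standard Fourier-analytic characterization, not code from the paper) tests $r$-wise independence of a clause distribution: a distribution on sign patterns in $\{-1,+1\}^k$ is $r$-wise independent exactly when all of its Fourier coefficients of degree $1$ through $r$ vanish.

from fractions import Fraction
from itertools import combinations, product
from math import prod

def fourier_coefficient(Q, S):
    """\\hat{Q}(S) = E_{x ~ Q}[ prod_{i in S} x_i ]."""
    return sum(p * prod(x[i] for i in S) for x, p in Q.items())

def is_r_wise_independent(Q, k, r):
    """True iff all marginals on at most r coordinates are uniform,
    i.e. every Fourier coefficient of degree 1..r vanishes."""
    return all(fourier_coefficient(Q, S) == 0
               for deg in range(1, r + 1)
               for S in combinations(range(k), deg))

k = 3
# Uniform over sign patterns satisfying the all-true assignment
# (at least one +1 literal): biased already on single coordinates.
sat = [x for x in product((-1, 1), repeat=k) if 1 in x]
Q_sat = {x: Fraction(1, len(sat)) for x in sat}
# A parity-style distribution, uniform over patterns with x0*x1*x2 = +1:
# 2-wise independent but not 3-wise.
par = [x for x in product((-1, 1), repeat=k) if prod(x) == 1]
Q_par = {x: Fraction(1, len(par)) for x in par}

for name, Q in (("satisfying", Q_sat), ("parity", Q_par)):
    fails = [r for r in range(1, k + 1) if not is_r_wise_independent(Q, k, r)]
    print(name, "fails r-wise independence for r in", fails)
# -> the satisfying distribution fails already at r = 1 (easy instances),
#    while the parity distribution fails only at r = 3 (the hard regime).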
Our main result is an unconditional lower bound, tight up to logarithmic
factors, for statistical (query) algorithms [Kearns 1998; Feldman et al. 2012],
matching known upper bounds which, as we show, can be implemented using a
statistical algorithm. Since known approaches for problems over distributions
have statistical analogues (spectral, MCMC, gradient-based, convex optimization,
etc.), this lower bound provides a rigorous explanation of the observed
algorithmic gap. The proof introduces a new general technique for the analysis
of statistical query algorithms. It also points to a geometric paring
phenomenon in the space of all planted assignments.
We describe consequences of our lower bounds for Feige's refutation hypothesis
[Feige 2002] and for lower bounds on general convex programs that solve planted
$k$-SAT. Our bounds also extend to other planted $k$-CSP models and, in
particular, provide concrete evidence for the security of Goldreich's one-way
function and the associated pseudorandom generator when used with a
sufficiently hard predicate [Goldreich 2000].
Comment: Extended abstract appeared in STOC 2015.
Node-Max-Cut and the Complexity of Equilibrium in Linear Weighted Congestion Games
In this work, we seek a more refined understanding of the complexity of local optimum computation for Max-Cut and pure Nash equilibrium (PNE) computation for congestion games with weighted players and linear latency functions. We show that computing a PNE of linear weighted congestion games is PLS-complete either for very restricted strategy spaces, namely when player strategies are paths on a series-parallel network with a single origin and destination, or for very restricted latency functions, namely when the latency on each resource is equal to the congestion. Our results reveal a remarkable gap regarding the complexity of PNE in congestion games with weighted and unweighted players, since in the case of unweighted players, a PNE can be easily computed by either a simple greedy algorithm (for series-parallel networks) or any better-response dynamics (when the latency is equal to the congestion).
For the latter of the results above, we first need to show that computing a local optimum of a natural restriction of Max-Cut, which we call Node-Max-Cut, is PLS-complete. In Node-Max-Cut, the input graph is vertex-weighted and the weight of each edge is equal to the product of the weights of its endpoints. Due to the very restricted nature of Node-Max-Cut, the reduction requires a careful combination of new gadgets with ideas and techniques from previous work.
We also show how to efficiently compute a $(1+\varepsilon)$-approximate equilibrium for Node-Max-Cut, if the number of different vertex weights is constant.
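To illustrate the object whose computation is shown PLS-complete, here is a minimal FLIP-style local search for Node-Max-Cut (a Python sketch of generic better-response dynamics, assuming nothing from the paper; the vertex names and weights are made up). A vertex moves across the cut whenever that increases the total weight of cut edges, and the loop halts exactly at a local optimum:

def node_max_cut_local_search(w, edges):
    side = {v: 0 for v in w}                   # start with everything on side 0
    adj = {v: [] for v in w}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def gain(u):
        # change in cut value if u switches sides: same-side neighbours
        # become cut, cross-side neighbours stop being cut
        same = sum(w[u] * w[v] for v in adj[u] if side[v] == side[u])
        cross = sum(w[u] * w[v] for v in adj[u] if side[v] != side[u])
        return same - cross

    improved = True
    while improved:
        improved = False
        for u in w:
            if gain(u) > 0:                    # a better response exists
                side[u] = 1 - side[u]
                improved = True
    return side

# A triangle with one heavy vertex; the local (here also global) optimum
# cuts the two heaviest edges, of weight 3*1 each.
w = {"a": 3, "b": 1, "c": 1}
edges = [("a", "b"), ("b", "c"), ("a", "c")]
print(node_max_cut_local_search(w, edges))     # {'a': 1, 'b': 0, 'c': 0}

Each improving step is cheap and local optimality is easy to verify; the PLS-completeness result above indicates that, despite this, no polynomial-time algorithm for actually finding such a local optimum is known.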