Complexity of Distributions and Average-Case Hardness
We address the following question in average-case complexity: does there exist a language L such that for all easy distributions D the distributional problem (L, D) is easy on average, while there exists some harder distribution D' such that (L, D') is hard on average? We consider two complexity measures of distributions: the complexity of sampling and the complexity of computing the distribution function.
For the complexity of sampling a distribution, we establish a connection between the above question and the hierarchy theorem for sampling distributions recently studied by Thomas Watson. Using this connection we prove that for every 0 < a < b there exist a language L, an ensemble of distributions D samplable in n^{log^b n} steps, and a linear-time algorithm A such that for every ensemble of distributions F samplable in n^{log^a n} steps, A correctly decides L on all inputs from {0, 1}^n except for a set of infinitely small F-measure, while for every algorithm B there are infinitely many n such that the set of all elements of {0, 1}^n on which B correctly decides L has infinitely small D-measure.
In the case of the complexity of computing the distribution function, we prove the following tight result: for every a > 0 there exist a language L, an ensemble of polynomial-time computable distributions D, and a linear-time algorithm A such that for every ensemble of distributions F computable in n^a steps, A correctly decides L on all inputs from {0, 1}^n except for a set of F-measure at most 2^{-n/2}, and for every algorithm B there are infinitely many n such that the set of all elements of {0, 1}^n on which B correctly decides L has D-measure at most 2^{-n+1}.
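The central question can be sketched in quantifier form (our own restatement, with "easy" and "hard" standing in informally for the paper's precise complexity measures of distributions):

```latex
\exists L \;\; \Big[\, \forall D \ \text{``easy''}: (L, D) \ \text{is easy on average} \,\Big]
\;\wedge\;
\Big[\, \exists D' \ \text{``hard''}: (L, D') \ \text{is hard on average} \,\Big]
```

The two results above instantiate "easy" and "hard" by sampling time (n^{log^a n} vs. n^{log^b n}) and by the time to compute the distribution function (n^a vs. polynomial), respectively.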
Average-Case Complexity
We survey the average-case complexity of problems in NP.
We discuss various notions of good-on-average algorithms, and present
completeness results due to Impagliazzo and Levin. Such completeness results
establish the fact that if a certain specific (but somewhat artificial) NP
problem is easy-on-average with respect to the uniform distribution, then all
problems in NP are easy-on-average with respect to all samplable distributions.
Applying the theory to natural distributional problems remains an outstanding
open question. We review some natural distributional problems whose
average-case complexity is of particular interest and that do not yet fit into
this theory.
A major open question is whether the existence of hard-on-average problems in NP
can be based on the P ≠ NP assumption or on related worst-case assumptions.
We review negative results showing that certain proof techniques cannot prove
such a result. While the relation between worst-case and average-case
complexity for general NP problems remains open, there has been progress in
understanding the relation between different ``degrees'' of average-case
complexity. We discuss some of these ``hardness amplification'' results.
Complexity-theoretic foundations of BosonSampling with a linear number of modes
BosonSampling is the leading candidate for demonstrating quantum
computational advantage in photonic systems. While we have recently seen many
impressive experimental demonstrations, there is still a formidable distance
between the complexity-theoretic hardness arguments and current experiments.
One of the largest gaps involves the ratio of photons to modes: all current
hardness evidence assumes a "high-mode" regime in which the number of linear
optical modes scales at least quadratically in the number of photons. By
contrast, current experiments operate in a "low-mode" regime with a linear
number of modes. In this paper we bridge this gap, bringing the hardness
evidence for the low-mode experiments to the same level as had been previously
established for the high-mode regime. This involves proving a new
worst-to-average-case reduction for computing the Permanent that is robust to
large numbers of row repetitions and also to distributions over matrices with
correlated entries.
Comment: 26 pages, 3 figures, to appear at QIP 202
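The Permanent at the heart of the worst-to-average-case reduction can be computed exactly (in exponential time) by Ryser's formula; the following is our own illustrative sketch of that formula, not the paper's reduction:

```python
from itertools import combinations

def permanent(A):
    """Permanent of a square matrix via Ryser's formula:
    perm(A) = (-1)^n * sum over column subsets S of
              (-1)^|S| * prod_i sum_{j in S} A[i][j].
    Runs in O(2^n * n^2) time; the Permanent is #P-hard,
    so no polynomial-time algorithm is known."""
    n = len(A)
    total = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in S)
            total += (-1) ** k * prod
    return (-1) ** n * total

# For a 2x2 matrix, perm = a*d + b*c (like the determinant, but without signs).
print(permanent([[1, 2], [3, 4]]))  # -> 10
```

The hardness evidence discussed above concerns permanents of random matrices (e.g., with Gaussian or correlated entries), where even average-case approximation is conjectured to be hard.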
Cryptography from Information Loss
© Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod.
Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is “lossy” reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into “useful” hardness, namely cryptography.
Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information I(X; C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006).
We then proceed to show several consequences of lossy reductions:
1. We say that a language L has an f-reduction to a language L′ for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x1, ..., xm), with each xi ∈ {0, 1}^n, and outputs a string z such that, with high probability, L′(z) = f(L(x1), L(x2), ..., L(xm)). Suppose a language L has an f-reduction C to L′ that is t-lossy.
Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds: (a) f is the OR function, t ≤ m/100, and L′ is the same as L; (b) f is the Majority function and t ≤ m/100; (c) f is the OR function, t ≤ O(m log n), and the reduction has no error. This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which yield only auxiliary-input one-way functions.
2. Our second result is about the stronger notion of t-compressing f-reductions – reductions that output only t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses).
Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above and which may be of independent interest.
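For a deterministic reduction C, the lossiness condition I(X; C(X)) ≤ t reduces to a bound on the output entropy H(C(X)), since H(C(X) | X) = 0. A toy illustration of our own (not from the paper), measuring how little a one-bit OR-style map retains about a uniform input:

```python
from itertools import product
from math import log2

def mutual_information_deterministic(inputs, C):
    """I(X; C(X)) for X uniform over `inputs` and a deterministic map C.
    For deterministic C this equals the output entropy H(C(X))."""
    counts = {}
    for x in inputs:
        z = C(x)
        counts[z] = counts.get(z, 0) + 1
    n = len(inputs)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A one-bit "OR map": sends a 3-tuple of bits to their OR.
bits = list(product([0, 1], repeat=3))
mi = mutual_information_deterministic(bits, lambda x: int(any(x)))
print(round(mi, 4))  # H(Bernoulli(7/8)) ≈ 0.5436 bits -- far below the 3 input bits
```

So this map is t-lossy for any t ≥ 0.55, even though its input carries 3 bits; the paper's results kick in when t is small relative to the number of instances m.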
Phase Transition and Network Structure in Realistic SAT Problems
A fundamental question in Computer Science is understanding when a specific
class of problems goes from being computationally easy to hard. Because of its
generality and applications, the problem of Boolean Satisfiability (aka SAT) is
often used as a vehicle for investigating this question. A signal result from
these studies is that the hardness of SAT problems exhibits a dramatic
easy-to-hard phase transition with respect to the problem constrainedness. Past
studies have however focused mostly on SAT instances generated using uniform
random distributions, where all constraints are independently generated, and
the problem variables are all considered of equal importance. These assumptions
are unfortunately not satisfied by most real problems. Our project aims for a
deeper understanding of hardness of SAT problems that arise in practice. We
study two key questions: (i) How does easy-to-hard transition change with more
realistic distributions that capture neighborhood sensitivity and
rich-get-richer aspects of real problems and (ii) Can these changes be
explained in terms of the network properties (such as node centrality and
small-worldness) of the clausal networks of the SAT problems. Our results,
based on extensive empirical studies and network analyses, provide important
structural and computational insights into realistic SAT problems. Our
extensive empirical studies show that SAT instances from realistic
distributions do exhibit phase transition, but the transition occurs sooner (at
lower values of constrainedness) than the instances from uniform random
distribution. We show that this behavior can be explained in terms of their
clausal network properties such as eigenvector centrality and small-worldness
(measured indirectly in terms of the clustering coefficients and average node
distance).
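The uniform-random easy-to-hard transition that serves as the paper's baseline can be observed with a small brute-force experiment (our own illustrative sketch; the satisfiability threshold near clause-to-variable ratio ≈ 4.27 is the standard figure for random 3-SAT):

```python
import random
from itertools import product

def random_3sat(n_vars, n_clauses, rng):
    """Uniform random 3-SAT: each clause picks 3 distinct variables,
    each negated independently with probability 1/2."""
    clauses = []
    for _ in range(n_clauses):
        vs = rng.sample(range(n_vars), 3)
        clauses.append([(v, rng.random() < 0.5) for v in vs])
    return clauses

def satisfiable(n_vars, clauses):
    """Brute force over all 2^n assignments -- fine for tiny n.
    A literal (v, neg) is true when assign[v] != neg."""
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[v] != neg for v, neg in cl) for cl in clauses):
            return True
    return False

rng = random.Random(0)
n, trials = 12, 20
frac = {}
for ratio in (2.0, 4.27, 6.0):
    m = int(ratio * n)
    frac[ratio] = sum(satisfiable(n, random_3sat(n, m, rng))
                      for _ in range(trials)) / trials
print(frac)  # fraction satisfiable drops sharply around ratio ~4.27
```

The paper's finding is that under more realistic, non-uniform distributions this transition occurs at lower constrainedness than in the uniform model sketched here.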
The Power of Quantum Fourier Sampling
A line of work initiated by Terhal and DiVincenzo, and by Bremner, Jozsa, and
Shepherd, shows that quantum computers can efficiently sample from probability
distributions that cannot be exactly sampled efficiently on a classical
computer, unless the PH collapses. Aaronson and Arkhipov take this further by
considering a distribution that can be sampled efficiently by linear optical
quantum computation but that, under two plausible conjectures, cannot even be
approximately sampled classically within bounded total variation distance,
unless the PH collapses.
In this work we use Quantum Fourier Sampling to construct a class of
distributions that can be sampled by a quantum computer. We then argue that
these distributions cannot be approximately sampled classically, unless the PH
collapses, under variants of the Aaronson and Arkhipov conjectures.
In particular, we show a general class of quantumly samplable distributions,
each of which is based on an "Efficiently Specifiable" polynomial, for which a
classical approximate sampler implies an average-case approximation. This class
of polynomials contains the Permanent but also includes, for example, the
Hamiltonian Cycle polynomial, and many other familiar #P-hard polynomials.
Although our construction, unlike that proposed by Aaronson and Arkhipov,
likely requires a universal quantum computer, we are able to use this
additional power to weaken the conjectures needed to prove approximate sampling
hardness results.
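As a concrete instance of the #P-hard polynomials mentioned above, the Hamiltonian Cycle polynomial sums, over all permutations forming a single n-cycle, the product of the corresponding matrix entries. A brute-force sketch of our own, exponential time by design:

```python
from itertools import permutations

def hamiltonian_cycle_poly(A):
    """ham(A) = sum over permutations sigma that form one n-cycle
    of prod_i A[i][sigma(i)]. Like the Permanent, but restricted to
    full-cycle permutations; evaluating it is #P-hard in general."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        # Keep sigma only if following it from 0 visits all n indices.
        seen, i = 0, 0
        while True:
            i = sigma[i]
            seen += 1
            if i == 0:
                break
        if seen != n:
            continue
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += prod
    return total

# All-ones 3x3 matrix: counts Hamiltonian cycles of the complete digraph on 3 nodes.
print(hamiltonian_cycle_poly([[1] * 3 for _ in range(3)]))  # -> 2
```

On a 0/1 adjacency matrix this counts directed Hamiltonian cycles, which is the sense in which it, like the Permanent, encodes a #P-hard counting problem.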