Average-Case Complexity
We survey the average-case complexity of problems in NP.
We discuss various notions of good-on-average algorithms, and present
completeness results due to Impagliazzo and Levin. Such completeness results
establish the fact that if a certain specific (but somewhat artificial) NP
problem is easy-on-average with respect to the uniform distribution, then all
problems in NP are easy-on-average with respect to all samplable distributions.
Applying the theory to natural distributional problems remains an outstanding
open question. We review some natural distributional problems whose
average-case complexity is of particular interest and that do not yet fit into
this theory.
A major open question is whether the existence of hard-on-average problems in NP
can be based on the P ≠ NP assumption or on related worst-case assumptions.
We review negative results showing that certain proof techniques cannot prove
such a result. While the relation between worst-case and average-case
complexity for general NP problems remains open, there has been progress in
understanding the relation between different ``degrees'' of average-case
complexity. We discuss some of these ``hardness amplification'' results
On Statistical Query Sampling and NMR Quantum Computing
We introduce a ``Statistical Query Sampling'' model, in which the goal of an
algorithm is to produce an element in a hidden set S ⊆ {0, 1}^n with
reasonable probability. The algorithm gains information about S through
oracle calls (statistical queries), where the algorithm submits a query
function g and receives an approximation to Pr_{x ∈ S}[g(x) = 1]. We
show how this model is related to NMR quantum computing, in which only
statistical properties of an ensemble of quantum systems can be measured, and
in particular to the question of whether one can translate standard quantum
algorithms to the NMR setting without putting all of their classical
post-processing into the quantum system. Using Fourier analysis techniques
developed in the related context of statistical query learning, we prove a
number of lower bounds (both information-theoretic and cryptographic) on the
ability of algorithms to produce an element of S, even when the set S is fairly
simple. These lower bounds point out a difficulty in efficiently applying NMR
quantum computing to algorithms such as Shor's and Simon's algorithms that
involve significant classical post-processing. We also explicitly relate the
notion of statistical query sampling to that of statistical query learning.
An extended abstract appeared in the 18th Annual IEEE Conference on
Computational Complexity (CCC 2003).
Keywords: statistical query, NMR quantum computing, lower bound. Comment: 17 pages, no figures.
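To make the query model concrete, here is a small illustrative sketch, in Python, of the kind of oracle interface described above; the function names, the tolerance parameter tau, and the explicit even-parity set are hypothetical choices for illustration and are not taken from the paper.

    import random

    def sq_oracle(S, g, tau=0.05):
        """Return an additive tau-approximation to Pr_{x in S}[g(x) = 1]."""
        p = sum(1 for x in S if g(x)) / len(S)
        # The oracle only has to answer within +/- tau; model that slack
        # with a random perturbation, clamped back into [0, 1].
        return min(1.0, max(0.0, p + random.uniform(-tau, tau)))

    if __name__ == "__main__":
        n = 4
        # Hidden set S: all n-bit strings (as bit tuples) with even parity.
        S = [tuple((j >> k) & 1 for k in range(n))
             for j in range(2 ** n)
             if bin(j).count("1") % 2 == 0]
        # One statistical query: roughly what fraction of S has first bit 1?
        print(sq_oracle(S, lambda x: x[0] == 1))

A sampling algorithm in this model would have to combine many such approximate answers to output some x in S with non-trivial probability, which is exactly what the lower bounds above constrain.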
Adiabatic Quantum State Generation and Statistical Zero Knowledge
The design of new quantum algorithms has proven to be an extremely difficult
task. This paper considers a different approach to the problem, by studying the
problem of 'quantum state generation'. This approach provides intriguing links
between many different areas: quantum computation, adiabatic evolution,
analysis of spectral gaps and groundstates of Hamiltonians, rapidly mixing
Markov chains, the complexity class statistical zero knowledge, quantum random
walks, and more.
We first show that many natural candidates for quantum algorithms can be cast
as a state generation problem. We define a paradigm for state generation,
called 'adiabatic state generation' and develop tools for adiabatic state
generation which include methods for implementing very general Hamiltonians and
ways to guarantee non-negligible spectral gaps. We use our tools to prove that
adiabatic state generation is equivalent to state generation in the standard
quantum computing model, and finally we show how to apply our techniques to
generate interesting superpositions related to Markov chains. Comment: 35 pages, two figures
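For orientation, the adiabatic paradigm mentioned above can be summarized in its standard form; this is a sketch of the usual setting rather than the paper's exact formulation. The target of quantum state generation is typically a superposition such as

    \[
      |\psi_S\rangle \;=\; \frac{1}{\sqrt{|S|}} \sum_{x \in S} |x\rangle ,
    \]

and adiabatic state generation prepares it as the ground state of a final Hamiltonian H_1, starting from the easy-to-prepare ground state of H_0 and following the interpolation

    \[
      H(s) \;=\; (1-s)\,H_0 \;+\; s\,H_1 , \qquad s : 0 \to 1 .
    \]

The adiabatic theorem keeps the system close to the instantaneous ground state provided the total evolution time grows polynomially in the inverse of the minimum spectral gap of H(s), which is why guaranteeing non-negligible gaps is central to the tools developed here.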
Complexity cores in average-case complexity theory
In average-case complexity theory, one of the interesting questions is
whether the existence of worst-case hard problems in NP implies the
existence of problems in NP that are hard on average. In other words, `If P
≠ NP, then NP is not a subset of Average-P'. It is not known whether such a
worst-case to average-case connection exists for NP. However, it is known
that such connections exist for complexity classes such as EXP and PSPACE.
These worst-case to average-case connections for classes such as EXP and
PSPACE are obtained via random self-reductions. There is evidence that
techniques used to obtain worst-case to average-case connections for EXP and
PSPACE do not work for NP.
In this thesis, we present an approach that may be helpful in establishing a
worst-case to average-case connection for NP. Our approach is based on the
notion of complexity cores. The main result is `If P ≠ NP and there is a
language in NP whose complexity core belongs to NP, then NP is not a subset
of Average-P'. Thus, to exhibit a worst-case to average-case connection for
NP, it suffices to show the existence of a language whose core is in NP.
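For reference, the standard notion of a polynomial complexity core (due to Lynch) can be sketched as follows; the thesis may work with a variant of this definition.

    % C \subseteq \{0,1\}^* is a polynomial complexity core for L if every
    % machine M deciding L exceeds every polynomial time bound on all but
    % finitely many elements of C:
    \[
      \forall M \text{ deciding } L,\ \forall \text{ polynomials } p:\quad
      \bigl|\{\, x \in C \;:\; \mathrm{time}_M(x) \le p(|x|) \,\}\bigr| \;<\; \infty .
    \]

Under this reading, ``a complexity core that belongs to NP'' asks for such a set C that is itself an NP language.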
Extracting Randomness from Samplable Distributions
The standard notion of a randomness extractor is a procedure which converts any weak source of randomness into an almost uniform distribution. The conversion necessarily uses a small amount of pure randomness, which can be eliminated by complete enumeration in some, but not all, applications.
Here, we consider the problem of deterministically converting a weak source of randomness into an almost uniform distribution. Previously, deterministic extraction procedures were known only for sources satisfying strong independence requirements. In this paper, we look at sources which are samplable, i.e., can be generated by an efficient sampling algorithm. We seek an efficient deterministic procedure that, given a sample from any samplable distribution of sufficiently large min-entropy, gives an almost uniformly distributed output. We explore the conditions under which such deterministic extractors exist.
We observe that no deterministic extractor exists if the sampler is allowed to use more computational resources than the extractor. On the other hand, if the extractor is allowed (polynomially) more resources than the sampler, we show that deterministic extraction becomes possible. This is true unconditionally in the nonuniform setting (i.e., when the extractor can be computed by a small circuit), and (necessarily) relies on complexity assumptions in the uniform setting.
One of our uniform constructions is as follows: assuming that there are problems in E = DTIME(2^{O(n)}) that are not solvable by subexponential-size circuits with Sigma_6 gates, there is an efficient extractor that transforms any samplable distribution of length n and min-entropy (1-gamma)n into an output distribution of length (1-O(gamma))n, where gamma is any sufficiently small constant. The running time of the extractor is polynomial in n and the circuit complexity of the sampler. These extractors are based on a connection between deterministic extraction from samplable distributions and hardness against nondeterministic circuits, and on the use of nondeterminism to substantially speed up "list decoding" algorithms for error-correcting codes such as multivariate polynomial codes and Hadamard-like codes.
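As background, the object being constructed here can be stated in the standard form of a deterministic extractor for a class of sources; the notation below is the usual one and is given only for orientation.

    % Min-entropy: H_\infty(X) = -\log \max_x \Pr[X = x].
    % Ext : \{0,1\}^n \to \{0,1\}^m is a deterministic \epsilon-extractor for a
    % class \mathcal{C} of sources if for every X \in \mathcal{C} with
    % H_\infty(X) \ge k,
    \[
      \Delta\bigl(\mathrm{Ext}(X),\, U_m\bigr) \;\le\; \epsilon ,
    \]
    % where \Delta is statistical (total variation) distance and U_m is the
    % uniform distribution on \{0,1\}^m. Here \mathcal{C} is the class of
    % distributions generated by samplers of bounded circuit complexity.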
Complexity of Distributions and Average-Case Hardness
We address the following question in average-case complexity: does there exist a language L such that for all easy distributions D the distributional problem (L, D) is easy on the average, while there exists some harder distribution D' such that (L, D') is hard on the average? We consider two complexity measures of distributions: the complexity of sampling and the complexity of computing the distribution function.
For the complexity of sampling a distribution, we establish a connection between the above question and the hierarchy theorem for sampling distributions recently studied by Thomas Watson. Using this connection we prove that for every 0 < a < b there exist a language L, an ensemble of distributions D samplable in n^{log^b n} steps, and a linear-time algorithm A such that for every ensemble of distributions F samplable in n^{log^a n} steps, A correctly decides L on all inputs from {0, 1}^n except for a set that has infinitely small F-measure, and for every algorithm B there are infinitely many n such that the set of all elements of {0, 1}^n for which B correctly decides L has infinitely small D-measure.
In the case of the complexity of computing the distribution function, we prove the following tight result: for every a > 0 there exist a language L, an ensemble of polynomial-time computable distributions D, and a linear-time algorithm A such that for every ensemble of distributions F computable in n^a steps, A correctly decides L on all inputs from {0, 1}^n except for a set that has F-measure at most 2^{-n/2}, and for every algorithm B there are infinitely many n such that the set of all elements of {0, 1}^n for which B correctly decides L has D-measure at most 2^{-n+1}.
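One plausible way to write the error notion used informally in both statements (not necessarily the authors' exact definition): an algorithm A decides L with error of F-measure at most mu(n) when

    \[
      F_n\bigl(\{\, x \in \{0,1\}^n \;:\; A(x) \ne L(x) \,\}\bigr) \;\le\; \mu(n)
      \qquad \text{for every } n ,
    \]

with mu(n) = 2^{-n/2} in the algorithmic clause above; the hardness clauses instead bound the measure of the set on which B answers correctly.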