
    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy on average with respect to the uniform distribution, then all problems in NP are easy on average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the $\mathrm{P} \neq \mathrm{NP}$ assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
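    For reference, the formal notion this survey builds on (Levin's definition of "polynomial on average," paraphrased here rather than quoted from the abstract): an algorithm $A$ with running time $t_A$ is polynomial on $\mu$-average if

    $$\exists\, \varepsilon > 0: \quad \sum_{x} \mu(x)\, \frac{t_A(x)^{\varepsilon}}{|x|} \;<\; \infty,$$

    which, for an ensemble $\{\mu_n\}$, is equivalent to requiring $\mathbb{E}_{x \sim \mu_n}[t_A(x)^{\varepsilon}] = O(n)$ for some fixed $\varepsilon > 0$. A distributional problem $(L, \mu)$ is then easy on average if some such $A$ decides $L$.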

    Average-Case Quantum Query Complexity

    We compare classical and quantum query complexities of total Boolean functions. It is known that for worst-case complexity, the gap between quantum and classical can be at most polynomial. We show that for average-case complexity under the uniform distribution, quantum algorithms can be exponentially faster than classical algorithms. Under non-uniform distributions the gap can even be super-exponential. We also prove some general bounds for average-case complexity and show that the average-case quantum complexity of MAJORITY under the uniform distribution is nearly quadratically better than the classical complexity.
    Comment: 14 pages, LaTeX. Some parts rewritten. This version to appear in the Journal of Physics
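    For orientation, the measure being compared is the standard one (stated here for context, not quoted from the paper): given a distribution $\mu$ on $\{0,1\}^n$, the average-case query complexity of a function $f$ is the expected number of queries of the best algorithm that computes $f$ (with bounded error) on every input,

    $$\mathrm{avg}_{\mu}(f) \;=\; \min_{A \text{ computes } f} \; \sum_{x \in \{0,1\}^n} \mu(x)\, T_A(x),$$

    where $T_A(x)$ is the (expected) number of queries $A$ makes on input $x$; the classical and quantum versions differ only in the model of the algorithm $A$.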

    Average-Case Complexity of Shellsort

    We prove a general lower bound on the average-case complexity of Shellsort: the average number of data movements (and comparisons) made by a $p$-pass Shellsort for any incremental sequence is $\Omega(p n^{1 + 1/p})$ for all $p \leq \log n$. Using similar arguments, we analyze the average-case complexity of several other sorting algorithms.
    Comment: 11 pages. Submitted to ICALP'9
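    To make the quantity concrete, here is a minimal Python sketch (an illustration, not code from the paper) that counts the data movements of a $p$-pass Shellsort on random permutations; the paper's result lower-bounds the expectation of this count by $\Omega(p n^{1+1/p})$ for every increment sequence:

    ```python
    import random

    def shellsort_moves(a, gaps):
        """p-pass Shellsort with the given increment sequence.
        Returns the total number of data movements (element shifts)."""
        moves = 0
        for gap in gaps:                       # one pass per increment
            for i in range(gap, len(a)):
                x, j = a[i], i
                while j >= gap and a[j - gap] > x:
                    a[j] = a[j - gap]          # shift one gap to the right
                    j -= gap
                    moves += 1
                a[j] = x
        return moves

    # Empirical average over random permutations vs. the lower-bound scale
    # p * n**(1 + 1/p) (constants omitted), for an arbitrary 3-pass sequence.
    n, p, trials = 4096, 3, 20
    gaps = [256, 16, 1]
    avg = sum(shellsort_moves(random.sample(range(n), n), gaps)
              for _ in range(trials)) / trials
    print(f"avg moves: {avg:.0f}, p*n^(1+1/p): {p * n ** (1 + 1/p):.0f}")
    ```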

    From average case complexity to improper learning complexity

    The basic problem in the PAC model of computational learning theory is to determine which hypothesis classes are efficiently learnable. There is presently a dearth of results showing hardness of learning problems. Moreover, the existing lower bounds fall short of the best known algorithms. The biggest challenge in proving complexity results is to establish hardness of improper learning (a.k.a. representation-independent learning). The difficulty in proving lower bounds for improper learning is that the standard reductions from $\mathbf{NP}$-hard problems do not seem to apply in this context. There is essentially only one known approach to proving lower bounds on improper learning. It was initiated in (Kearns and Valiant 89) and relies on cryptographic assumptions. We introduce a new technique for proving hardness of improper learning, based on reductions from problems that are hard on average. We put forward a (fairly strong) generalization of Feige's assumption (Feige 02) about the complexity of refuting random constraint satisfaction problems. Combining this assumption with our new technique yields far-reaching implications. In particular: 1. Learning $\mathrm{DNF}$s is hard. 2. Agnostically learning halfspaces with a constant approximation ratio is hard. 3. Learning an intersection of $\omega(1)$ halfspaces is hard.
    Comment: 34 pages
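    For context, Feige's assumption (paraphrased from Feige 02; the paper posits a stronger generalization to other random CSPs): call a polynomial-time algorithm $A$ a refuter for random 3SAT at clause density $\Delta$ if it never outputs "unsatisfiable" on a satisfiable formula, yet

    $$\Pr_{\phi \sim F(n, \Delta n)}\big[\,A(\phi) = \text{"unsatisfiable"}\,\big] \;\ge\; \tfrac{1}{2},$$

    where $F(n, \Delta n)$ denotes a uniformly random 3CNF with $n$ variables and $\Delta n$ clauses. The hypothesis asserts that for every constant $\Delta$, no such refuter exists.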

    Subsampling Mathematical Relaxations and Average-case Complexity

    We initiate a study of when the value of mathematical relaxations such as linear and semidefinite programs for constraint satisfaction problems (CSPs) is approximately preserved when restricting the instance to a sub-instance induced by a small random subsample of the variables. Let $C$ be a family of CSPs such as 3SAT, Max-Cut, etc., and let $\Pi$ be a relaxation for $C$, in the sense that for every instance $P \in C$, $\Pi(P)$ is an upper bound on the maximum fraction of satisfiable constraints of $P$. Loosely speaking, we say that subsampling holds for $C$ and $\Pi$ if for every sufficiently dense instance $P \in C$ and every $\epsilon > 0$, if we let $P'$ be the instance obtained by restricting $P$ to a sufficiently large constant number of variables, then $\Pi(P') \in (1 \pm \epsilon)\Pi(P)$. We say that weak subsampling holds if the above guarantee is replaced with $\Pi(P') = 1 - \Theta(\gamma)$ whenever $\Pi(P) = 1 - \gamma$. We show: 1. Subsampling holds for the BasicLP and BasicSDP programs. BasicSDP is a variant of the relaxation considered by Raghavendra (2008), who showed it gives an optimal approximation factor for every CSP under the unique games conjecture. BasicLP is the linear programming analog of BasicSDP. 2. For tighter versions of BasicSDP obtained by adding additional constraints from the Lasserre hierarchy, weak subsampling holds for CSPs of unique games type. 3. There are non-unique CSPs for which even weak subsampling fails for the above tighter semidefinite programs. Also, there are unique CSPs for which subsampling fails for the Sherali-Adams linear programming hierarchy. As a corollary of our weak subsampling for strong semidefinite programs, we obtain a polynomial-time algorithm to certify that random geometric graphs (of the type considered by Feige and Schechtman, 2002) of max-cut value $1 - \gamma$ have a cut value at most $1 - \gamma/10$.
    Comment: Includes several more general results that subsume the previous version of the paper
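    As a toy illustration of the phenomenon (my sketch, using the exact Max-Cut value in place of a relaxation $\Pi$): on a dense instance, the optimum of a random induced sub-instance is already close to the optimum of the full instance, mirroring what the paper proves for LP/SDP values:

    ```python
    import itertools, random

    def maxcut_fraction(n, edges):
        """Exact Max-Cut value as a fraction of edges (brute force)."""
        best = 0
        for bits in itertools.product([0, 1], repeat=n - 1):
            side = (0,) + bits                 # fix vertex 0 to halve the search
            best = max(best, sum(side[u] != side[v] for u, v in edges))
        return best / len(edges)

    random.seed(0)
    n = 14                                     # full instance: dense random graph
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if random.random() < 0.8]

    k = 10                                     # sub-instance induced on k vertices
    S = random.sample(range(n), k)
    idx = {v: i for i, v in enumerate(S)}
    sub = [(idx[u], idx[v]) for u, v in edges if u in idx and v in idx]

    print(maxcut_fraction(n, edges), maxcut_fraction(k, sub))
    # On dense instances the two fractions are close, in the spirit of
    # Pi(P') in (1 +- eps) * Pi(P) for the relaxations studied in the paper.
    ```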

    Average case complexity of linear multivariate problems

    We study the average case complexity of a linear multivariate problem (LMP) defined on functions of $d$ variables. We consider two classes of information. The first, $\Lambda^{\mathrm{std}}$, consists of function values, and the second, $\Lambda^{\mathrm{all}}$, of all continuous linear functionals. Tractability of an LMP means that the average case complexity is $O((1/\varepsilon)^p)$ with $p$ independent of $d$. We prove that tractability of an LMP in $\Lambda^{\mathrm{std}}$ is equivalent to tractability in $\Lambda^{\mathrm{all}}$, although the proof is not constructive. We provide a simple condition to check tractability in $\Lambda^{\mathrm{all}}$. We also address the optimal design problem for an LMP by using a relation to the worst case setting. We find the order of the average case complexity and optimal sample points for multivariate function approximation. The theoretical results are illustrated for the folded Wiener sheet measure.
    Comment: 7 pages

    Structural Average Case Complexity

    Levin introduced an average-case complexity measure, based on a notion of "polynomial on average," and defined "average-case polynomial-time many-one reducibility" among randomized decision problems. We generalize his notions of average-case complexity classes, Random-NP and Average-P. Ben-David et al. use the notation 〈C, F〉 to denote the set of randomized decision problems (L, μ) such that L is a set in C and μ is a probability density function in F. This paper introduces Aver〈C, F〉 as the class of randomized decision problems (L, μ) such that L is computed by a type-C machine on μ-average and μ is a density function in F. These notations capture all known average-case complexity classes, as, for example, Random-NP = 〈NP, P-comp〉 and Average-P = Aver〈P, ∗〉, where P-comp denotes the set of density functions whose distributions are computable in polynomial time, and ∗ denotes the set of all density functions. Mainly studied are polynomial-time reductions between randomized decision problems: many-one, deterministic Turing, and nondeterministic Turing reductions, and the average-case versions of them. Based on these reducibilities, structural properties of average-case complexity classes are discussed. We give average-case analogues of concepts in worst-case complexity theory; in particular, the polynomial-time hierarchy and Turing self-reducibility, and we show that all known complete sets for Random-NP are Turing self-reducible. A new notion of "real polynomial-time computations" is introduced, based on average polynomial-time computations for arbitrary distributions from a fixed set, and it is used to characterize the worst-case complexity classes $\Delta^p_k$ and $\Sigma^p_k$ of the polynomial-time hierarchy.