22,375 research outputs found

    A New View on Worst-Case to Average-Case Reductions for NP Problems

    We study the result of Bogdanov and Trevisan (FOCS, 2003), who show that, under reasonable assumptions, there is no non-adaptive worst-case to average-case reduction that bases the average-case hardness of an NP problem on the worst-case complexity of an NP-complete problem. We replace the hiding protocol and the heavy-samples protocol of [BT03] by employing the histogram verification protocol of Haitner, Mahmoody and Xiao (CCC, 2010), which proves to be very useful in this context. Once the histogram is verified, our hiding protocol is directly public-coin, whereas the intuition behind the original protocol inherently relies on private coins.
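
    For context, the kind of reduction ruled out can be sketched as follows (a standard formulation, paraphrased rather than quoted from [BT03]): a non-adaptive worst-case to average-case reduction from L ∈ NP to a distributional problem (L', D) is a probabilistic polynomial-time oracle machine R that produces all of its queries before seeing any oracle answer and, for every oracle A agreeing with L' on a 1 − 1/poly(n) fraction of length-n inputs drawn from D, decides L on every input:

```latex
\[
  \Pr\bigl[\, R^{A}(x) = L(x) \,\bigr] \;\ge\; \tfrac{2}{3}
  \qquad \text{for every } x \in \{0,1\}^{*},
\]
```

    where the probability is taken over the internal coins of R.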

    Complexity cores in average-case complexity theory

    In average-case complexity theory, one of the interesting questions is whether the existence of worst-case hard problems in NP implies the existence of problems in NP that are hard on average. In other words, "If P ≠ NP, then NP is not a subset of Average-P." It is not known whether such a worst-case to average-case connection exists for NP. However, it is known that such connections exist for complexity classes such as EXP and PSPACE. These worst-case to average-case connections for classes such as EXP and PSPACE are obtained via random self-reductions. There is evidence that the techniques used to obtain worst-case to average-case connections for EXP and PSPACE do not work for NP. In this thesis, we present an approach which may be helpful in establishing a worst-case to average-case connection for NP. Our approach is based on the notion of complexity cores. The main result is: "If P ≠ NP and there is a language in NP whose complexity core belongs to NP, then NP is not a subset of Average-P." Thus, to exhibit a worst-case to average-case connection for NP, it suffices to show the existence of a language whose core is in NP.
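
    For reference (the abstract does not spell out the underlying notion), the standard definition of a polynomial complexity core, going back to Lynch, is roughly the following:

```latex
\[
  C \subseteq \{0,1\}^{*} \text{ is a complexity core for } L
  \iff
  \forall M \text{ deciding } L,\ \forall \text{ polynomial } p:\
  \bigl|\{\, x \in C : \mathrm{time}_{M}(x) \le p(|x|) \,\}\bigr| < \infty .
\]
```

    In words: every algorithm for L must exceed every polynomial time bound on all but finitely many elements of the core.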

    Structural Average Case Complexity

    Levin introduced an average-case complexity measure, based on a notion of "polynomial on average," and defined "average-case polynomial-time many-one reducibility" among randomized decision problems. We generalize his notions of average-case complexity classes, Random-NP and Average-P. Ben-David et al. use the notation of 〈C, F〉 to denote the set of randomized decision problems (L, μ) such that L is a set in C and μ is a probability density function in F. This paper introduces Aver〈C, F〉 as the class of randomized decision problems (L, μ) such that L is computed by a type-C machine on μ-average and μ is a density function in F. These notations capture all known average-case complexity classes as, for example, Random-NP = 〈NP, P-comp〉 and Average-P = Aver〈P, ∗〉, where P-comp denotes the set of density functions whose distributions are computable in polynomial time, and ∗ denotes the set of all density functions. Mainly studied are polynomial-time reductions between randomized decision problems: many-one, deterministic Turing and nondeterministic Turing reductions, and the average-case versions of them. Based on these reducibilities, structural properties of average-case complexity classes are discussed. We give average-case analogues of concepts in worst-case complexity theory, in particular the polynomial-time hierarchy and Turing self-reducibility, and we show that all known complete sets for Random-NP are Turing self-reducible. A new notion of "real polynomial-time computations" is introduced, based on average polynomial-time computations for arbitrary distributions from a fixed set, and it is used to characterize the worst-case complexity classes Δ^p_k and Σ^p_k of the polynomial-time hierarchy.
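
    For reference, Levin's notion of "polynomial on average", which underlies the Aver〈C, F〉 classes above, is standardly formalized as follows (a sketch; conventions for the empty string and for normalization vary slightly across papers):

```latex
\[
  t \text{ is polynomial on } \mu\text{-average}
  \iff
  \exists\, \varepsilon > 0 \ \text{such that}\
  \sum_{x \neq \lambda} \mu(x)\,\frac{t(x)^{\varepsilon}}{|x|} \;<\; \infty .
\]
```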

    Cryptographic Hardness Under Projections for Time-Bounded Kolmogorov Complexity

    A version of time-bounded Kolmogorov complexity, denoted KT, has received attention in the past several years, due to its close connection to circuit complexity and to the Minimum Circuit Size Problem MCSP. Essentially all results about the complexity of MCSP hold also for MKTP (the problem of computing the KT complexity of a string). Both MKTP and MCSP are hard for SZK (Statistical Zero Knowledge) under BPP-Turing reductions; neither is known to be NP-complete. Recently, some hardness results for MKTP were proved that are not (yet) known to hold for MCSP. In particular, MKTP is hard for DET (a subclass of P) under nonuniform ≤^{NC^0}_m reductions. In this paper, we improve this, to show that the complement of MKTP is hard for the (apparently larger) class NISZK_L under not only ≤^{NC^0}_m reductions but even under projections. Also, the complement of MKTP is hard for NISZK under ≤^{P/poly}_m reductions. Here, NISZK is the class of problems with non-interactive zero-knowledge proofs, and NISZK_L is the non-interactive version of the class SZK_L that was studied by Dvir et al. As an application, we provide several improved worst-case to average-case reductions to problems in NP, and we obtain a new lower bound on MKTP (which is currently not known to hold for MCSP).
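
    For reference, the KT measure mentioned above is roughly the following (paraphrasing the usual definition; the exact conventions for the universal machine U vary):

```latex
\[
  \mathrm{KT}(x) \;=\; \min\Bigl\{\, |d| + t \;:\;
    \forall\, i \le |x|+1,\ \forall\, b \in \{0,1,\ast\},\
    U^{d}(i,b) \text{ accepts within } t \text{ steps} \iff x_i = b \,\Bigr\},
\]
```

    where x_i denotes the i-th bit of x and x_{|x|+1} = ∗ marks the end of the string.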

    On the Structure of Learnability Beyond P/Poly

    Motivated by the goal of showing stronger structural results about the complexity of learning, we study the learnability of strong concept classes beyond P/poly, such as PSPACE/poly and EXP/poly. We show the following:
    1) (Unconditional Lower Bounds for Learning) Building on [Adam R. Klivans et al., 2013], we prove unconditionally that BPE/poly cannot be weakly learned in polynomial time over the uniform distribution, even with membership and equivalence queries.
    2) (Robustness of Learning) For the concept classes EXP/poly and PSPACE/poly, we show unconditionally that worst-case and average-case learning are equivalent, that PAC-learnability and learnability over the uniform distribution are equivalent, and that membership queries do not help in either case.
    3) (Reducing Succinct Search to Decision for Learning) For the decision problems R_{Kt} and R_{KS}, capturing the complexity of learning EXP/poly and PSPACE/poly respectively, we show a succinct search-to-decision reduction: each of these problems is in BPP iff there is a probabilistic polynomial-time algorithm computing circuits encoding proofs for positive instances of the problem. This is shown via a more general result giving succinct search-to-decision reductions for PSPACE, EXP and NEXP, which might be of independent interest.
    4) (Implausibility of Oblivious Strongly Black-Box Reductions showing NP-hardness of learning NP/poly) We define a natural notion of hardness of learning with respect to oblivious strongly black-box reductions. We show that learning PSPACE/poly is PSPACE-hard with respect to oblivious strongly black-box reductions. On the other hand, if learning NP/poly is NP-hard with respect to oblivious strongly black-box reductions, the Polynomial Hierarchy collapses.
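
    For reference, the measures behind R_{Kt} and R_{KS} are roughly the following (paraphrasing the usual definitions; the exact thresholds defining the "random strings" problems vary by paper):

```latex
\[
  \mathrm{Kt}(x) = \min\bigl\{\, |d| + \lceil \log t \rceil : U(d) \text{ outputs } x \text{ within } t \text{ steps} \,\bigr\},
  \qquad
  \mathrm{KS}(x) = \min\bigl\{\, |d| + s : U(d) \text{ outputs } x \text{ in space at most } s \,\bigr\},
\]
```

    with R_{Kt} and R_{KS} consisting of the strings whose Kt (respectively KS) value exceeds a fixed threshold in |x|.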

    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
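
    For reference, two notions used above can be sketched as follows (one standard formalization among several in the literature): an ensemble D = {D_n} is polynomial-time samplable if some probabilistic polynomial-time algorithm outputs, on input 1^n, a sample distributed according to D_n; and an algorithm with running time t_A solves a distributional problem (L, D) in time polynomial on average if

```latex
\[
  \exists\, \varepsilon > 0 :\qquad
  \mathbb{E}_{x \sim D_n}\bigl[\, t_A(x)^{\varepsilon} \,\bigr] \;=\; O(n).
\]
```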

    NP-hardness of circuit minimization for multi-output functions

    Can we design efficient algorithms for finding fast algorithms? This question is captured by various circuit minimization problems, and algorithms for the corresponding tasks have significant practical applications. Following the work of Cook and Levin in the early 1970s, a central question is whether minimizing the circuit size of an explicitly given function is NP-complete. While this is known to hold in restricted models such as DNFs, making progress with respect to more expressive classes of circuits has been elusive. In this work, we establish the first NP-hardness result for circuit minimization of total functions in the setting of general (unrestricted) Boolean circuits. More precisely, we show that computing the minimum circuit size of a given multi-output Boolean function f : {0,1}^n → {0,1}^m is NP-hard under many-one polynomial-time randomized reductions. Our argument builds on a simpler NP-hardness proof for the circuit minimization problem for (single-output) Boolean functions under an extended set of generators. Complementing these results, we investigate the computational hardness of minimizing communication. We establish that several variants of this problem are NP-hard under deterministic reductions. In particular, unless P = NP, no polynomial-time computable function can approximate the deterministic two-party communication complexity of a partial Boolean function up to a polynomial. This has consequences for the class of structural results that one might hope to show about the communication complexity of partial functions.
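
    For reference, the multi-output problem shown NP-hard here can be phrased roughly as follows (a paraphrase of the natural extension of MCSP to multiple outputs, not the paper's exact parameterization):

```latex
\[
  \text{Multi-MCSP} \;=\; \bigl\{\, (T_f,\, s) \;:\;
    T_f \text{ is the truth table of some } f : \{0,1\}^n \to \{0,1\}^m
    \text{ admitting a circuit of size at most } s \,\bigr\}.
\]
```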

    Hardness Amplification of Optimization Problems

    In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that, given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships on α(n) and t(n), there is a distribution D' over instances of Π of size O(n⋅α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'. As a consequence of the above theorem, we show hardness amplification of problems in various classes, such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication, and even problems in TFNP such as Factoring and computing Nash equilibrium.
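
    As a concrete illustration of direct product feasibility, the sketch below (with hypothetical helper names, not code from the paper, and not necessarily the aggregation the paper uses) combines k Max-Clique instances via the graph join: an optimal clique of the joined graph projects back to an optimal clique of every original instance.

```python
# Illustrative sketch: Max-Clique is direct product feasible via the graph join.
from itertools import combinations

def join_instances(graphs):
    """Aggregate k graphs, each given as (num_vertices, set_of_edges), into one.

    Vertex v of instance i becomes (i, v); every pair of vertices from
    different instances is made adjacent, so any clique of the joined graph
    is a union of one clique per instance.
    """
    vertices = [(i, v) for i, (n, _) in enumerate(graphs) for v in range(n)]
    edges = {((i, u), (i, v)) for i, (_, e) in enumerate(graphs) for (u, v) in e}
    edges |= {(a, b) for a, b in combinations(vertices, 2) if a[0] != b[0]}
    return vertices, edges

def split_solution(num_instances, clique):
    """Project an optimal clique of the joined graph back onto each instance.

    Each projection is a maximum clique of its own instance: otherwise a larger
    clique of that instance could be swapped in (all cross-instance edges are
    present), contradicting the optimality of the joined clique.
    """
    parts = {i: set() for i in range(num_instances)}
    for (i, v) in clique:
        parts[i].add(v)
    return parts

# Usage: joining two triangles gives a complete graph on 6 labelled vertices,
# and splitting its maximum clique recovers a maximum clique of each triangle.
triangle = (3, {(0, 1), (1, 2), (0, 2)})
vertices, edges = join_instances([triangle, triangle])
print(split_solution(2, set(vertices)))   # {0: {0, 1, 2}, 1: {0, 1, 2}}
```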

    Inapproximability of Combinatorial Optimization Problems

    We survey results on the hardness of approximating combinatorial optimization problems.