    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
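
    As a point of reference for the notions discussed above (a standard textbook-style formalization, not a quotation from the survey), "easy on average" is often captured by errorless heuristic schemes: a distributional problem (L, D), with D = {D_n} an ensemble of input distributions, lies in AvgP if there is an algorithm A and a constant c such that for all n and all δ > 0,

    \[
      A(x;\delta) \in \{L(x), \bot\}
      \qquad\text{and}\qquad
      \Pr_{x \sim \mathcal{D}_n}\bigl[A(x;\delta) = \bot\bigr] \le \delta ,
    \]

    with A(x; δ) running in time O((|x|/δ)^c); "hard on average" means that no such scheme exists for the distributional problem.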

    Pseudorandomness and the Minimum Circuit Size Problem

    On Nonadaptive Security Reductions of Hitting Set Generators

    One of the central open questions in the theory of average-case complexity is to establish the equivalence between the worst-case and average-case complexity of the Polynomial-time Hierarchy (PH). One general approach is to show that there exists a PH-computable hitting set generator whose security is based on some NP-hard problem. We present the limits of such an approach, by showing that there exists no exponential-time-computable hitting set generator whose security can be proved by using a nonadaptive randomized polynomial-time reduction from any problem outside AM ∩ coAM, which significantly improves the previous upper bound BPP^NP of Gutfreund and Vadhan (RANDOM/APPROX 2008 [Gutfreund and Vadhan, 2008]). In particular, any security proof of a hitting set generator based on some NP-hard problem must use either an adaptive or non-black-box reduction (unless the polynomial-time hierarchy collapses). To the best of our knowledge, this is the first result that shows limits of black-box reductions from an NP-hard problem to some form of a distributional problem in DistPH. Based on our results, we argue that the recent worst-case to average-case reduction of Hirahara (FOCS 2018 [Hirahara, 2018]) is inherently non-black-box, without relying on any unproven assumptions. On the other hand, combining the non-black-box reduction with our simulation technique of black-box reductions, we exhibit the existence of a "non-black-box selector" for GapMCSP, i.e., an efficient algorithm that solves GapMCSP given as advice two circuits, one of which is guaranteed to compute GapMCSP.
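
    For orientation (the stretch and density threshold below are the standard illustrative choices, not parameters taken from the paper), a hitting set generator is a family G = {G_n : {0,1}^{s(n)} → {0,1}^n} with s(n) < n such that every sufficiently dense test is hit by some output:

    \[
      \Pr_{x \sim \{0,1\}^n}\bigl[D(x) = 1\bigr] \ge \tfrac{1}{2}
      \;\Longrightarrow\;
      \exists z \in \{0,1\}^{s(n)} :\ D\bigl(G_n(z)\bigr) = 1 .
    \]

    An adversary D "breaks" G if it accepts at least half of all n-bit strings yet rejects every string in the range of G_n; a security reduction of the kind ruled out above would have to solve an NP-hard problem given nonadaptive oracle access to an arbitrary such D.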

    Smoothed Complexity Theory

    Smoothed analysis is a new way of analyzing algorithms introduced by Spielman and Teng (J. ACM, 2004). Classical methods like worst-case or average-case analysis have accompanying complexity classes, like P and AvgP, respectively. While worst-case or average-case analysis gives us a means to talk about the running time of a particular algorithm, complexity classes allow us to talk about the inherent difficulty of problems. Smoothed analysis is a hybrid of worst-case and average-case analysis and compensates for some of their drawbacks. Despite its success for the analysis of single algorithms and problems, there is no embedding of smoothed analysis into computational complexity theory, which is necessary to classify problems according to their intrinsic difficulty. We propose a framework for smoothed complexity theory, define the relevant classes, and prove some first hardness results (of bounded halting and tiling) and tractability results (binary optimization problems, graph coloring, satisfiability). Furthermore, we discuss extensions and shortcomings of our model and relate it to semi-random models. Comment: to be presented at MFCS 201
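
    For context, the quantity underlying smoothed analysis (in the spirit of Spielman and Teng; the Gaussian perturbation shown here is only the most common instantiation) is

    \[
      \mathrm{smoothed}_A(n,\sigma) \;=\; \max_{x \in \mathbb{R}^n,\ \|x\| \le 1}\ \mathbb{E}_{g \sim \mathcal{N}(0,\sigma^2 I)}\bigl[\, T_A(x+g) \,\bigr],
    \]

    where T_A is the running time of algorithm A: an adversary picks a worst-case instance, nature perturbs it slightly, and the cost is measured in expectation. Letting σ → 0 recovers worst-case analysis, while large σ approaches average-case analysis; roughly speaking, the smoothed complexity classes proposed in the paper are built on a discrete analogue of this idea.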

    On the Structure of Learnability Beyond P/Poly

    Motivated by the goal of showing stronger structural results about the complexity of learning, we study the learnability of strong concept classes beyond P/poly, such as PSPACE/poly and EXP/poly. We show the following: 1) (Unconditional Lower Bounds for Learning) Building on [Adam R. Klivans et al., 2013], we prove unconditionally that BPE/poly cannot be weakly learned in polynomial time over the uniform distribution, even with membership and equivalence queries. 2) (Robustness of Learning) For the concept classes EXP/poly and PSPACE/poly, we show unconditionally that worst-case and average-case learning are equivalent, that PAC-learnability and learnability over the uniform distribution are equivalent, and that membership queries do not help in either case. 3) (Reducing Succinct Search to Decision for Learning) For the decision problems R_{Kt} and R_{KS} capturing the complexity of learning EXP/poly and PSPACE/poly respectively, we show a succinct search-to-decision reduction: for each of these problems, the problem is in BPP iff there is a probabilistic polynomial-time algorithm computing circuits encoding proofs for positive instances of the problem. This is shown via a more general result giving succinct search-to-decision results for PSPACE, EXP and NEXP, which might be of independent interest. 4) (Implausibility of Oblivious Strongly Black-Box Reductions showing NP-hardness of learning NP/poly) We define a natural notion of hardness of learning with respect to oblivious strongly black-box reductions. We show that learning PSPACE/poly is PSPACE-hard with respect to oblivious strongly black-box reductions. On the other hand, if learning NP/poly is NP-hard with respect to oblivious strongly black-box reductions, the Polynomial Hierarchy collapses.
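
    For readers unfamiliar with the meta-complexity problems in item 3, R_{Kt} and R_{KS} ask, roughly, whether a given string has small Levin-Kt complexity or small space-bounded (KS) Kolmogorov complexity; the standard definition of Levin's measure is

    \[
      \mathrm{Kt}(x) \;=\; \min\bigl\{\, |p| + \log t \;:\; U(p) \text{ outputs } x \text{ within } t \text{ steps} \,\bigr\}
    \]

    for a fixed universal machine U, with KS obtained by charging for the space used instead of log t (the exact conventions vary slightly between papers).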

    On Basing Auxiliary-Input Cryptography on NP-Hardness via Nonadaptive Black-Box Reductions

    Constructing one-way functions based on NP-hardness is a central challenge in theoretical computer science. Unfortunately, Akavia et al. [Akavia et al., 2006] presented strong evidence that a nonadaptive black-box (BB) reduction is insufficient to solve this challenge. However, should we give up such a central proof technique even for an intermediate step? In this paper, we turn our eyes from standard cryptographic primitives to weaker cryptographic primitives that are allowed to take auxiliary input, and continue to explore the capability of nonadaptive BB reductions to base auxiliary-input primitives on NP-hardness. Specifically, we prove the following: - if we base an auxiliary-input pseudorandom generator (AIPRG) on NP-hardness via a nonadaptive BB reduction, then the polynomial hierarchy collapses; - if we base an auxiliary-input one-way function (AIOWF) or auxiliary-input hitting set generator (AIHSG) on NP-hardness via a nonadaptive BB reduction, then an (i.o.-)one-way function also exists based on NP-hardness (via an adaptive BB reduction). These theorems extend our knowledge of nonadaptive BB reductions beyond the current worst-case-to-average-case framework. The first result provides new evidence that nonadaptive BB reductions are insufficient to base AIPRG on NP-hardness. The second result also yields a weaker but still surprising consequence of nonadaptive BB reductions, i.e., a one-way function based on NP-hardness. In fact, the second result can be interpreted in the following two opposite ways. Pessimistically, it shows that basing AIOWF or AIHSG on NP-hardness via nonadaptive BB reductions is harder than constructing a one-way function based on NP-hardness, which can be regarded as a negative result. Note that AIHSG is a weak primitive implied even by the hardness of learning; thus, this pessimistic view provides conceptually stronger limitations than the currently known limitations on nonadaptive BB reductions. Optimistically, it offers a new hope: a breakthrough construction of auxiliary-input primitives might also yield constructions of standard cryptographic primitives. This optimistic view enhances the significance of further investigation on constructing auxiliary-input or other intermediate cryptographic primitives instead of standard cryptographic primitives.
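
    As background (the quantifier structure below is representative; precise definitions differ slightly across papers), an auxiliary-input one-way function is a polynomial-time computable family {f_z} that no efficient algorithm can invert on all auxiliary inputs:

    \[
      \forall\, \text{PPT } A\ \ \exists\, \text{infinitely many } z :\quad
      \Pr_{x}\Bigl[ f_z\bigl(A(z, f_z(x))\bigr) = f_z(x) \Bigr] \le \mathrm{negl}(|z|),
    \]

    where x is drawn uniformly from the domain of f_z; AIPRG and AIHSG are obtained analogously by letting pseudorandom generators and hitting set generators be indexed by an auxiliary input z.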

    Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions

    What is a minimal worst-case complexity assumption that implies non-trivial average-case hardness of NP or PH? This question is well motivated by the theory of fine-grained average-case complexity and fine-grained cryptography. In this paper, we show that several standard worst-case complexity assumptions are sufficient to imply non-trivial average-case hardness of NP or PH: - NTIME[n] cannot be solved in quasi-linear time on average if UP ⊈ DTIME[2^{Õ(√n)}]. - Σ₂TIME[n] cannot be solved in quasi-linear time on average if Σ_kSAT cannot be solved in time 2^{Õ(√n)} for some constant k. Previously, it was not known whether even average-case hardness of Σ₃SAT implies the average-case hardness of Σ₂TIME[n]. - Under the Exponential-Time Hypothesis (ETH), there is no average-case n^{1+ε}-time algorithm for NTIME[n] whose running time can be estimated in time n^{1+ε} for some constant ε > 0. Our results are given by generalizing the non-black-box worst-case-to-average-case connections presented by Hirahara (STOC 2021) to the settings of fine-grained complexity. To do so, we construct quite efficient complexity-theoretic pseudorandom generators under the assumption that nondeterministic linear time is easy on average, which may be of independent interest.
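
    For reference (this is the standard formulation, not one specific to the paper), the Exponential-Time Hypothesis invoked in the last item asserts that

    \[
      \exists\, \delta > 0 :\quad \text{3-SAT on } n \text{ variables cannot be solved in time } 2^{\delta n},
    \]

    and "quasi-linear time on average" refers, informally, to heuristic algorithms running in time n · polylog(n) that succeed on most inputs drawn from the relevant distribution.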

    NP-hardness of circuit minimization for multi-output functions

    Can we design efficient algorithms for finding fast algorithms? This question is captured by various circuit minimization problems, and algorithms for the corresponding tasks have significant practical applications. Following the work of Cook and Levin in the early 1970s, a central question is whether minimizing the circuit size of an explicitly given function is NP-complete. While this is known to hold in restricted models such as DNFs, making progress with respect to more expressive classes of circuits has been elusive. In this work, we establish the first NP-hardness result for circuit minimization of total functions in the setting of general (unrestricted) Boolean circuits. More precisely, we show that computing the minimum circuit size of a given multi-output Boolean function f : {0,1}^n → {0,1}^m is NP-hard under many-one polynomial-time randomized reductions. Our argument builds on a simpler NP-hardness proof for the circuit minimization problem for (single-output) Boolean functions under an extended set of generators. Complementing these results, we investigate the computational hardness of minimizing communication. We establish that several variants of this problem are NP-hard under deterministic reductions. In particular, unless P = NP, no polynomial-time computable function can approximate the deterministic two-party communication complexity of a partial Boolean function up to a polynomial. This has consequences for the class of structural results that one might hope to show about the communication complexity of partial functions.
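
    Spelled out (with the encoding details simplified), the decision problem shown NP-hard here is: given the full truth table of a multi-output function together with a size bound, decide whether a small circuit exists, i.e.

    \[
      \text{MultiMCSP} \;=\; \bigl\{\, (\mathrm{tt}(f),\, 1^{s}) \;:\; f\colon \{0,1\}^n \to \{0,1\}^m \text{ admits a Boolean circuit of size at most } s \,\bigr\},
    \]

    where tt(f) lists all 2^n output vectors of f. The hardness holds under randomized polynomial-time many-one reductions, in contrast to the single-output case, for which NP-hardness remains open.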

    Non-Disjoint Promise Problems from Meta-Computational View of Pseudorandom Generator Constructions

    The Complexity of Relating Quantum Channels to Master Equations

    Completely positive, trace preserving (CPT) maps and Lindblad master equations are both widely used to describe the dynamics of open quantum systems. The connection between these two descriptions is a classic topic in mathematical physics. One direction was solved by the now famous result due to Lindblad, Kossakowski, Gorini and Sudarshan, who gave a complete characterisation of the master equations that generate completely positive semi-groups. However, the other direction has remained open: given a CPT map, is there a Lindblad master equation that generates it (and if so, can we find its form)? This is sometimes known as the Markovianity problem. Physically, it asks how one can deduce underlying physical processes from experimental observations. We give a complexity theoretic answer to this problem: it is NP-hard. We also give an explicit algorithm that reduces the problem to integer semi-definite programming, a well-known NP problem. Together, these results imply that resolving the question of which CPT maps can be generated by master equations is tantamount to resolving P versus NP: any efficiently computable criterion for Markovianity would imply P = NP, whereas a proof that P = NP would imply that our algorithm already gives an efficiently computable criterion. Thus, unless P does equal NP, there cannot exist any simple criterion for determining when a CPT map has a master equation description. However, we also show that if the system dimension is fixed (relevant for current quantum process tomography experiments), then our algorithm scales efficiently in the required precision, allowing an underlying Lindblad master equation to be determined efficiently from even a single snapshot in this case. Our work also leads to similar complexity-theoretic answers to a related long-standing open problem in probability theory. Comment: V1: 43 pages, single column, 8 figures. V2: title changed; added proof-overview and accompanying figure; 50 pages, single column, 9 figures
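
    For reference, the Lindblad (GKSL) form referred to above is the standard one: a generator produces a completely positive trace-preserving semigroup exactly when it can be written as

    \[
      \frac{d\rho}{dt} \;=\; \mathcal{L}(\rho) \;=\; -\,i\,[H,\rho] \;+\; \sum_k \Bigl( L_k\,\rho\,L_k^{\dagger} \;-\; \tfrac{1}{2}\bigl\{ L_k^{\dagger} L_k,\ \rho \bigr\} \Bigr),
    \]

    with H Hermitian and the L_k arbitrary operators; the Markovianity problem shown NP-hard asks, roughly, whether a given CPT map E can be written as e^{\mathcal{L}} for some generator \mathcal{L} of this form.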