
    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
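
    As a point of reference for the terminology in this abstract, one common formalization of "easy-on-average" is the errorless-heuristic class AvgP for a distributional problem (L, D). The following is only a sketch of that definition; exact conventions vary across surveys, and the names A, p, and δ below are illustrative:

```latex
% Sketch: (L, D) is a distributional problem, where D = {D_n} is an ensemble of input distributions.
% (L, D) is in AvgP if there are an algorithm A and a polynomial p such that, for every n and every delta > 0,
% A(x; delta) runs in time p(|x| / delta) and outputs either L(x) or a failure symbol "bot", with
\[
  \Pr_{x \sim \mathcal{D}_n}\bigl[\, A(x;\delta) = \bot \,\bigr] \;\le\; \delta .
\]
```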

    An Atypical Survey of Typical-Case Heuristic Algorithms

    Heuristic approaches often do so well that they seem to pretty much always give the right answer. How close can heuristic algorithms get to always giving the right answer, without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpré, Ogiwara, Schöning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be obtained unless severe, highly unlikely complexity class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how, under plausible assumptions, deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice. (Note: this article is scheduled to appear in the December 2012 issue of SIGACT News.)

    On One-way Functions and Kolmogorov Complexity

    We prove the equivalence of two fundamental problems in the theory of computing. For every polynomial t(n) ≥ (1+ε)n, ε > 0, the following are equivalent: - One-way functions exist (which in turn is equivalent to the existence of secure private-key encryption schemes, digital signatures, pseudorandom generators, pseudorandom functions, commitment schemes, and more); - t-time-bounded Kolmogorov complexity, K^t, is mildly hard-on-average (i.e., there exists a polynomial p(n) > 0 such that no PPT algorithm can compute K^t for more than a 1 - 1/p(n) fraction of n-bit strings). In doing so, we present the first natural, and well-studied, computational problem characterizing the feasibility of the central private-key primitives and protocols in Cryptography.
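
    For readers unfamiliar with the measure, the following sketch records the standard definition of t-time-bounded Kolmogorov complexity and the mild average-case hardness notion used above. Here U denotes a fixed universal machine; exact conventions vary slightly across papers.

```latex
% K^t(x): length of the shortest program that prints x within t(|x|) steps on a fixed
% universal machine U (a standard definition; conventions differ in minor details).
\[
  K^{t}(x) \;=\; \min\bigl\{\, |\Pi| \;:\; U(\Pi) \text{ outputs } x \text{ within } t(|x|) \text{ steps} \,\bigr\}
\]
% "Mildly hard-on-average", as used in the abstract: there is a polynomial p such that no
% probabilistic polynomial-time algorithm computes K^t(x) correctly for more than a
% 1 - 1/p(n) fraction of n-bit strings x.
```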

    An Improved BKW Algorithm for LWE with Applications to Cryptography and Lattices

    In this paper, we study the Learning With Errors problem and its binary variant, where secrets and errors are binary or taken in a small interval. We introduce a new variant of the Blum, Kalai and Wasserman algorithm, relying on a quantization step that generalizes and fine-tunes modulus switching. In general this new technique yields a significant gain in the constant in front of the exponent in the overall complexity. We illustrate this by solving, within half a day, an LWE instance with dimension n = 128, modulus q = n^2, Gaussian noise α = 1/(√(n/π) log^2 n) and binary secret, using 2^28 samples, while the previous best result based on BKW claims a time complexity of 2^74 with 2^60 samples for the same parameters. We then introduce variants of BDD, GapSVP and UniqueSVP, where the target point is required to lie in the fundamental parallelepiped, and show how the previous algorithm is able to solve these variants in subexponential time. Moreover, we also show how the previous algorithm can be used to solve the BinaryLWE problem with n samples in subexponential time 2^{(ln 2/2 + o(1)) n / log log n}. This analysis does not require any heuristic assumption, contrary to other algebraic approaches; instead, it uses a variant of an idea by Lyubashevsky to generate many samples from a small number of samples. This makes it possible to asymptotically and heuristically break the NTRU cryptosystem in subexponential time (without contradicting its security assumption). We are also able to solve subset sum problems in subexponential time for density o(1), which is of independent interest: for such density, the previous best algorithm requires exponential time. As a direct application, we can solve in subexponential time the parameters of a cryptosystem based on this problem proposed at TCC 2010.
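
    To make the structure of BKW-type algorithms concrete, here is a minimal sketch of one plain reduction stage of the classical Blum-Kalai-Wasserman algorithm on toy parameters. It shows only the basic bucketing-and-subtracting step, not the quantization/modulus-switching refinement introduced in the paper; all function names and parameters below are illustrative.

```python
import numpy as np
from collections import defaultdict

def bkw_reduction_step(samples, q, block):
    """One plain BKW reduction stage (textbook version, not the paper's quantized
    variant): bucket LWE samples by the value of the coordinates in `block`, and
    subtract pairs inside each bucket so that those coordinates become zero.
    The error roughly doubles with each such stage.
    `samples` is a list of (a, b) with a a length-n numpy array mod q."""
    buckets = defaultdict(list)
    reduced = []
    for a, b in samples:
        key = tuple(a[block] % q)
        if buckets[key]:
            a2, b2 = buckets[key].pop()
            reduced.append(((a - a2) % q, (b - b2) % q))
        else:
            buckets[key].append((a, b))
    return reduced

# Toy usage: n = 6, q = 17, zero out the first block of 2 coordinates.
rng = np.random.default_rng(0)
n, q = 6, 17
s = rng.integers(0, q, n)                    # secret
samples = []
for _ in range(200):
    a = rng.integers(0, q, n)
    e = rng.integers(-1, 2)                  # tiny noise, for illustration only
    samples.append((a, (a @ s + e) % q))
reduced = bkw_reduction_step(samples, q, slice(0, 2))
assert all((a[:2] % q == 0).all() for a, _ in reduced)
```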

    Improved Learning from Kolmogorov Complexity

    Carmosino, Impagliazzo, Kabanets, and Kolokolova (CCC, 2016) showed that the existence of natural properties in the sense of Razborov and Rudich (JCSS, 1997) implies PAC learning algorithms in the sense of Valiant (Comm. ACM, 1984), for Boolean functions in P/poly, under the uniform distribution and with membership queries. It is still an open problem to obtain, from natural properties, learning algorithms that do not rely on membership queries but rather use randomly drawn labeled examples. Natural properties may be understood as an average-case version of MCSP, the problem of deciding the minimum size of a circuit computing a given truth-table. Problems related to MCSP include those concerning time-bounded Kolmogorov complexity. MKTP, for example, asks for the KT-complexity of a given string. KT-complexity is a relaxation of circuit size, as it does away with the requirement that a short description of a string be interpreted as a Boolean circuit. In this work, under the assumption that MKTP and the related problem MK^tP are easy on average, we obtain learning algorithms for Boolean functions in P/poly that - work over any distribution D samplable by a family of polynomial-size circuits (given explicitly in the case of MKTP), - only use randomly drawn labeled examples from D, and - are agnostic (do not require the target function to belong to the hypothesis class). Our results build upon the recent work of Hirahara and Nanashima (FOCS, 2021), who showed similar learning consequences under the stronger assumption that NP is easy on average.
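
    For context, the following is a sketch of the usual definition of KT-complexity, in one common formulation; papers differ in small conventions (for instance, how the end of the string is signalled). Here U is a fixed universal machine given random access to the description d:

```latex
% KT(x): minimize description length plus the time needed to answer local queries about x.
% This charges for query time rather than insisting that d encode a Boolean circuit,
% which is the relaxation of circuit size mentioned in the abstract.
\[
  \mathrm{KT}(x) \;=\; \min_{d,\,t}\Bigl\{\, |d| + t \;:\;
     U^{d}(i) \text{ outputs } x_i \text{ within } t \text{ steps, for every } i \le |x| \,\Bigr\}
\]
```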

    On the Symmetries of and Equivalence Test for Design Polynomials

    In a Nisan-Wigderson design polynomial (in short, a design polynomial), every pair of monomials shares only a few common variables. A useful example of such a polynomial, introduced in [Neeraj Kayal et al., 2014], is the following: NW_{d,k}({x}) = sum_{h in F_d[z], deg(h) <= k} prod_{i=0}^{d-1} x_{i, h(i)}, where d is a prime, F_d is the finite field with d elements, and k << d. The degree of the gcd of every pair of monomials in NW_{d,k} is at most k. For concreteness, we fix k = ceil(sqrt(d)). The family of polynomials NW := {NW_{d,k} : d is a prime} and close variants of it have been used as hard explicit polynomial families in several recent arithmetic circuit lower bound proofs. But, unlike the permanent, very little is known about the various structural and algorithmic/complexity aspects of NW beyond the fact that NW is in VNP. Is NW_{d,k} characterized by its symmetries? Is it circuit-testable, i.e., given a circuit C, can we check efficiently whether C computes NW_{d,k}? What is the complexity of equivalence testing for NW, i.e., given black-box access to an f in F[{x}], can we check efficiently whether there exists an invertible linear transformation A such that f = NW_{d,k}(A * {x})? Characterization of polynomials by their symmetries plays a central role in the geometric complexity theory program. Here, we answer the first two questions and partially answer the third. We show that NW_{d,k} is characterized by its group of symmetries over C, but not over R. We also show that NW_{d,k} is characterized by circuit identities, which implies that NW_{d,k} is circuit-testable in randomized polynomial time. As another application of this characterization, we obtain the "flip theorem" for NW. We give an efficient equivalence test for NW in the case where the transformation A is a block-diagonal permutation-scaling matrix. The design of this algorithm is facilitated by an almost complete understanding of the group of symmetries of NW_{d,k}: we show that if A is in the group of symmetries of NW_{d,k}, then A = D * P, where D and P are diagonal and permutation matrices respectively. This is proved by completely characterizing the Lie algebra of NW_{d,k}, and using an interplay between the Hessian of NW_{d,k} and the evaluation dimension.
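
    Since the defining formula above is fully explicit, here is a small sketch that enumerates the monomials of NW_{d,k} for a toy prime d and spot-checks the design property (any two distinct monomials share at most k variables). The representation of monomials as tuples of (row, column) index pairs is ours:

```python
from itertools import product
from math import isqrt

def nw_monomials(d):
    """Enumerate the monomials of NW_{d,k} for a prime d and k = ceil(sqrt(d)),
    following the formula in the abstract.  Each univariate polynomial h over F_d
    of degree <= k contributes the monomial prod_{i=0}^{d-1} x_{i, h(i)},
    represented here as the tuple ((0, h(0)), ..., (d-1, h(d-1)))."""
    r = isqrt(d)
    k = r if r * r == d else r + 1                      # ceil(sqrt(d))
    for coeffs in product(range(d), repeat=k + 1):      # h(z) = c_0 + c_1 z + ... + c_k z^k
        yield tuple((i, sum(c * pow(i, j, d) for j, c in enumerate(coeffs)) % d)
                    for i in range(d))

# Toy spot-check for d = 5, k = 3: two distinct polynomials of degree <= k agree on
# at most k points of F_d, so distinct monomials share at most k variables.
mons = list(set(nw_monomials(5)))
k = 3
for m1 in mons[:50]:
    for m2 in mons[:50]:
        if m1 != m2:
            assert len(set(m1) & set(m2)) <= k
```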

    A polynomial lower bound for testing monotonicity

    We show that every algorithm for testing n-variate Boolean functions for monotonicity has query complexity Ω(n^{1/4}). All previous lower bounds for this problem were designed for nonadaptive algorithms and, as a result, the best previous lower bound for general (possibly adaptive) monotonicity testers was only Ω(log n). Combined with the query complexity of the non-adaptive monotonicity tester of Khot, Minzer, and Safra (FOCS 2015), our lower bound shows that adaptivity can result in at most a quadratic reduction in the query complexity for testing monotonicity. By contrast, we show that there is an exponential gap between the query complexity of adaptive and non-adaptive algorithms for testing regular linear threshold functions (LTFs) for monotonicity. Chen, De, Servedio, and Tan (STOC 2015) recently showed that non-adaptive algorithms require almost Ω(n^{1/2}) queries for this task. We introduce a new adaptive monotonicity testing algorithm which has query complexity O(log n) when the input is a regular LTF.
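
    To fix intuitions about what a monotonicity tester queries, here is a minimal sketch of the classical non-adaptive edge tester (illustration only; it is neither the adaptive O(log n) tester nor the lower-bound construction from this work, and the function names and query budget below are ours):

```python
import random

def edge_tester(f, n, queries=200, seed=0):
    """Textbook non-adaptive edge tester for monotonicity of f: {0,1}^n -> {0,1}.
    Pick a random point and a random coordinate, and check whether flipping that
    coordinate from 0 to 1 ever makes f decrease.  Always accepts monotone
    functions; rejects functions far from monotone with probability that grows
    with the number of queries."""
    rng = random.Random(seed)
    for _ in range(queries):
        x = [rng.randint(0, 1) for _ in range(n)]
        i = rng.randrange(n)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1
        if f(tuple(lo)) > f(tuple(hi)):       # violated edge along coordinate i
            return False                       # reject
    return True                                # accept: no violation found

# Usage: majority is monotone; parity on 3 bits is far from monotone.
maj = lambda x: int(2 * sum(x) > len(x))
par = lambda x: sum(x) % 2
print(edge_tester(maj, 9), edge_tester(par, 3))   # expected: True False
```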

    Probabilistic Kolmogorov complexity with applications to average-case complexity

    Understanding the relationship between the worst-case and average-case complexities of NP and of other subclasses of PH is a long-standing problem in complexity theory. Over the last few years, much progress has been achieved on this front through the investigation of meta-complexity: the complexity of problems that refer to the complexity of the input string x (e.g., given a string x, estimate its time-bounded Kolmogorov complexity). In particular, [Shuichi Hirahara, 2021] employed techniques from meta-complexity to show that if DistNP ⊆ AvgP then UP ⊆ DTIME[2^{O(n/log n)}]. While this and related results [Shuichi Hirahara and Mikito Nanashima, 2021; Lijie Chen et al., 2022] offer exciting progress after a long gap, they do not survive in the setting of randomized computations: roughly speaking, "randomness" is the opposite of "structure", and upper bounding the amount of structure (time-bounded Kolmogorov complexity) of different objects is crucial in recent applications of meta-complexity. This limitation is significant, since randomized computations are ubiquitous in algorithm design and give rise to a more robust theory of average-case complexity [Russell Impagliazzo and Leonid A. Levin, 1990]. In this work, we develop a probabilistic theory of meta-complexity, by incorporating randomness into the notion of complexity of a string x. This is achieved through a new probabilistic variant of time-bounded Kolmogorov complexity that we call pK^t complexity. Informally, pK^t(x) measures the complexity of x when shared randomness is available to all parties involved in a computation. By porting key results from meta-complexity to the probabilistic domain of pK^t complexity and its variants, we are able to establish new connections between worst-case and average-case complexity in the important setting of probabilistic computations: - If DistNP ⊆ AvgBPP, then UP ⊆ RTIME[2^{O(n/log n)}]. - If DistΣ^P_2 ⊆ AvgBPP, then AM ⊆ BPTIME[2^{O(n/log n)}]. - In the fine-grained setting [Lijie Chen et al., 2022], we get UTIME[2^{O(√(n log n))}] ⊆ RTIME[2^{O(√(n log n))}] and AMTIME[2^{O(√(n log n))}] ⊆ BPTIME[2^{O(√(n log n))}] from stronger average-case assumptions. - If DistPH ⊆ AvgBPP, then PH ⊆ BPTIME[2^{O(n/log n)}].
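
    For orientation, the following is a sketch of the shared-randomness measure described informally above, in one formulation used in this line of work; the threshold 2/3 and other conventions may differ from the paper's exact definition. Here U is a fixed universal machine and w is the shared random string:

```latex
% Sketch of pK^t(x): the smallest length bound k such that, for most choices of the
% shared randomness w, some program of length at most k produces x from w within t steps.
\[
  \mathrm{pK}^{t}(x) \;=\; \min\Bigl\{\, k \;:\;
    \Pr_{w}\bigl[\;\exists\,\Pi,\ |\Pi| \le k,\ U(\Pi, w) \text{ outputs } x \text{ within } t \text{ steps}\;\bigr] \;\ge\; \tfrac{2}{3}
  \,\Bigr\}
\]
```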