
    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms and present completeness results due to Impagliazzo and Levin. Such completeness results establish that if a certain specific (but somewhat artificial) NP problem is easy on average with respect to the uniform distribution, then all problems in NP are easy on average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the $P \neq NP$ assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
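    To make "good on average" concrete, here is a minimal sketch of one standard formulation (following Levin); the notation is illustrative and may differ from the survey's exact conventions.

```latex
% A distributional problem is a pair (L, D), where L is a language and
% D = {D_n} is an ensemble of distributions over inputs of length n.
% An algorithm A with running time t_A runs in polynomial time on average
% with respect to D if
\[
  \exists\, \varepsilon > 0 \;:\;
  \mathop{\mathbb{E}}_{x \sim D_n}\!\bigl[\, t_A(x)^{\varepsilon} \,\bigr] \;=\; O(n).
\]
% Every worst-case polynomial-time algorithm satisfies this, but so does an
% algorithm that is exponentially slow on a sufficiently rare set of inputs.
```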

    Pseudorandomness and the Minimum Circuit Size Problem


    A fundamental threat to quantum cryptography: gravitational attacks

    An attack on the "Bennett-Brassard 84" (BB84) quantum key-exchange protocol is described, in which Eve exploits the action of gravitation to infer information about the quantum-mechanical state of the qubit exchanged between Alice and Bob. It is demonstrated that the known laws of physics do not suffice to describe the attack. Without making assumptions that are not based on broad consensus, the as-yet-unknown laws of quantum gravity would be needed even for an approximate treatment. Therefore, it is currently not possible to predict with any confidence whether information gained in this attack would allow BB84 to be broken. Contrary to previous belief, a proof of the perfect security of BB84 cannot yet be based on the assumption that the known laws of physics are strictly correct. Comment: Eur. Phys. J. D (2006), in print.
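    For context, the attack targets BB84's qubit exchange; the following is a minimal classical simulation of the protocol's prepare/measure/sift steps over an ideal channel (no noise, no eavesdropper; names and parameters are illustrative, not taken from the paper).

```python
import random

def bb84_sift(n_qubits=1000, seed=0):
    """Toy simulation of BB84 key sifting over an ideal, noiseless channel.

    Eavesdropping, channel noise, and error correction are deliberately
    omitted; this only illustrates the basis-matching (sifting) step.
    """
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_qubits)]  # 0: Z basis, 1: X basis
    bob_bases   = [rng.randint(0, 1) for _ in range(n_qubits)]

    sifted_key = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:
            # Matching bases: Bob's ideal measurement reproduces Alice's bit.
            sifted_key.append(bit)
        # Mismatched bases give a uniformly random outcome and are discarded.
    return sifted_key

if __name__ == "__main__":
    key = bb84_sift()
    print(f"sifted {len(key)} key bits out of 1000 transmitted qubits")
```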

    Classical Cryptographic Protocols in a Quantum World

    Cryptographic protocols, such as protocols for secure function evaluation (SFE), have played a crucial role in the development of modern cryptography. The extensive theory of these protocols, however, deals almost exclusively with classical attackers. If we accept that quantum information processing is the most realistic model of physically feasible computation, then we must ask: which classical protocols remain secure against quantum attackers? Our main contribution is showing the existence of classical two-party protocols for the secure evaluation of any polynomial-time function under reasonable computational assumptions (for example, it suffices that the learning with errors problem be hard for quantum polynomial time). Our result shows that the basic two-party feasibility picture from classical cryptography remains unchanged in a quantum world. Comment: Full version of an old paper in Crypto'11. Invited to IJQI. This is the authors' copy with different formatting.
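    The assumption cited as an example is the quantum hardness of learning with errors (LWE); the sketch below shows what an LWE instance looks like (the parameters n, m, q, and the error bound are illustrative placeholders, not the paper's choices).

```python
import numpy as np

def lwe_samples(n=256, m=512, q=3329, error_bound=3, seed=0):
    """Generate toy LWE samples (A, b = A @ s + e mod q).

    Parameters are illustrative only; real schemes choose n, q, and the
    error distribution to meet concrete security targets.
    """
    rng = np.random.default_rng(seed)
    s = rng.integers(0, q, size=n)                            # secret vector
    A = rng.integers(0, q, size=(m, n))                       # public random matrix
    e = rng.integers(-error_bound, error_bound + 1, size=m)   # small noise
    b = (A @ s + e) % q
    return A, b, s

if __name__ == "__main__":
    A, b, s = lwe_samples()
    # Distinguishing (A, b) from (A, uniform) is the decision-LWE problem,
    # conjectured hard even for quantum polynomial-time attackers.
    print(A.shape, b.shape)
```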

    On the Cryptographic Hardness of Local Search

    We show new hardness results for the class of Polynomial Local Search problems (PLS):
    - Hardness of PLS based on a falsifiable assumption on bilinear groups introduced by Kalai, Paneth, and Yang (STOC 2019), and on the Exponential Time Hypothesis for randomized algorithms. Previous standard-model constructions relied on non-falsifiable and non-standard assumptions.
    - Hardness of PLS relative to random oracles. The construction is essentially different from previous constructions and, in particular, is unconditionally secure. It also demonstrates the hardness of parallelizing local search.
    The core observation behind the results is that the unique-proofs property of incrementally verifiable computations, previously used to demonstrate hardness in PLS, can be traded for a simple incremental completeness property.
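    To illustrate the kind of object a PLS problem asks for (a generic textbook-style local-search interface, not the paper's construction), consider:

```python
def local_search(start, neighbors, value):
    """Generic PLS-style local search: move to a better neighbor until stuck.

    `neighbors(x)` returns candidate successors of x and `value(x)` is the
    potential being maximized; the returned point is a local optimum, which
    is exactly the kind of solution a PLS problem asks for.
    """
    current = start
    improved = True
    while improved:
        improved = False
        for y in neighbors(current):
            if value(y) > value(current):
                current, improved = y, True
                break
    return current

# Toy example: maximize -(x - 7)^2 over the integers by moving +/- 1.
if __name__ == "__main__":
    opt = local_search(
        start=0,
        neighbors=lambda x: [x - 1, x + 1],
        value=lambda x: -(x - 7) ** 2,
    )
    print(opt)  # 7
```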

    A New View on Worst-Case to Average-Case Reductions for NP Problems

    We study the result of Bogdanov and Trevisan (FOCS 2003), who show that, under reasonable assumptions, there is no non-adaptive worst-case to average-case reduction that bases the average-case hardness of an NP problem on the worst-case complexity of an NP-complete problem. We replace the hiding and heavy-samples protocols of [BT03] with the histogram verification protocol of Haitner, Mahmoody and Xiao (CCC 2010), which proves to be very useful in this context. Once the histogram is verified, our hiding protocol is directly public-coin, whereas the intuition behind the original protocol inherently relies on private coins.
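    As a rough illustration of the object in question (only the histogram itself, not the interactive verification protocol from the paper): the histogram of a samplable distribution buckets its outputs by probability mass, which can be approximated by Monte Carlo sampling as in this hypothetical sketch.

```python
import math
import random
from collections import Counter

def empirical_histogram(sampler, n_samples=100_000, n_bins=12, seed=0):
    """Approximate the histogram of a samplable distribution.

    Bin i collects the total empirical probability mass of outputs whose
    observed probability lies in (2^-(i+1), 2^-i]. This is only a Monte
    Carlo illustration; the paper's protocol verifies such a histogram
    interactively without trusting the prover.
    """
    rng = random.Random(seed)
    counts = Counter(sampler(rng) for _ in range(n_samples))
    hist = [0.0] * n_bins
    for _, c in counts.items():
        p = c / n_samples
        i = min(n_bins - 1, int(-math.log2(p)))
        hist[i] += p
    return hist

if __name__ == "__main__":
    # Toy sampler: a fair six-sided die; all mass lands in the (2^-3, 2^-2] bin.
    print(empirical_histogram(lambda rng: rng.randint(1, 6)))
```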

    From average case complexity to improper learning complexity

    The basic problem in the PAC model of computational learning theory is to determine which hypothesis classes are efficiently learnable. There is presently a dearth of results showing hardness of learning problems. Moreover, the existing lower bounds fall short of the best known algorithms. The biggest challenge in proving complexity results is to establish hardness of improper learning (a.k.a. representation-independent learning). The difficulty in proving lower bounds for improper learning is that the standard reductions from $\mathbf{NP}$-hard problems do not seem to apply in this context. There is essentially only one known approach to proving lower bounds on improper learning. It was initiated by Kearns and Valiant (1989) and relies on cryptographic assumptions. We introduce a new technique for proving hardness of improper learning, based on reductions from problems that are hard on average. We put forward a (fairly strong) generalization of Feige's assumption (Feige 2002) about the complexity of refuting random constraint satisfaction problems. Combining this assumption with our new technique yields far-reaching implications. In particular: 1. Learning $\mathrm{DNF}$s is hard. 2. Agnostically learning halfspaces with a constant approximation ratio is hard. 3. Learning an intersection of $\omega(1)$ halfspaces is hard. Comment: 34 pages.
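    Feige's assumption concerns refuting random constraint satisfaction problems; the sketch below samples the kind of distribution involved (random 3-SAT at a large clause-to-variable ratio; the ratio and sizes are illustrative, not the paper's parameters).

```python
import random

def random_3sat(n_vars, clause_ratio=50, seed=0):
    """Sample a random 3-SAT instance with m = clause_ratio * n_vars clauses.

    At a large constant clause_ratio almost every instance is unsatisfiable;
    Feige-style assumptions say no efficient algorithm can certify
    ("refute") unsatisfiability for most such instances.
    """
    rng = random.Random(seed)
    clauses = []
    for _ in range(clause_ratio * n_vars):
        vars_ = rng.sample(range(1, n_vars + 1), 3)          # 3 distinct variables
        clause = tuple(v if rng.random() < 0.5 else -v for v in vars_)
        clauses.append(clause)
    return clauses

if __name__ == "__main__":
    inst = random_3sat(n_vars=20)
    print(len(inst), "clauses; first clause:", inst[0])
```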

    P?=NP as minimization of degree 4 polynomial, integration or Grassmann number problem, and new graph isomorphism problem approaches

    While the P vs NP problem is mainly approached from the point of view of discrete mathematics, this paper proposes reformulations in the language of abstract algebra, geometry, Fourier analysis, and continuous global optimization, whose advanced tools might bring new perspectives and approaches to this question. The first is the equivalence of satisfiability of a 3-SAT instance with the question of whether a nonnegative degree-4 multivariate polynomial (a sum of squares) attains zero, which could be tested from the perspective of algebra by using the discriminant. It could also be approached as a continuous global optimization problem inside $[0,1]^n$, for example in physical realizations like adiabatic quantum computers; however, the number of local minima usually grows exponentially. Reducing to a degree-2 polynomial plus the constraint of lying in $\{0,1\}^n$, we obtain geometric formulations: the question of whether a plane or sphere intersects $\{0,1\}^n$. Some non-standard perspectives on Subset-Sum are also presented, such as through the convergence of a series, or the zeroing of the Fourier-type integral $\int_0^{2\pi} \prod_i \cos(\varphi k_i)\, d\varphi$ for some natural $k_i$. The last approach discussed uses anti-commuting Grassmann numbers $\theta_i$, making $(A \cdot \mathrm{diag}(\theta_i))^n$ nonzero only if $A$ has a Hamiltonian cycle; hence, the $P \neq NP$ assumption implies exponential growth of the matrix representation of Grassmann numbers. A promising-looking algebraic/geometric approach to the graph isomorphism problem is also discussed, tested to successfully distinguish strongly regular graphs with up to 29 vertices. Comment: 19 pages, 8 figures.
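    As a small numerical illustration of how such a Fourier-type integral can detect a combinatorial event (the exact formulation in the paper may differ), note that $\int_0^{2\pi} \prod_i \cos(\varphi k_i)\, d\varphi = \frac{2\pi}{2^n}\,\#\{\varepsilon \in \{\pm 1\}^n : \sum_i \varepsilon_i k_i = 0\}$, so it is nonzero exactly when the $k_i$ admit a balanced signing; the hypothetical snippet below checks this numerically.

```python
import itertools
import numpy as np

def cosine_integral(ks, n_grid=4096):
    """Evaluate int_0^{2pi} prod_i cos(phi * k_i) dphi by the rectangle rule.

    For a trigonometric polynomial the rule is exact once n_grid exceeds the
    largest frequency present (here at most sum(ks)).
    """
    phi = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    integrand = np.prod(np.cos(np.outer(ks, phi)), axis=0)
    return float(integrand.sum() * (2.0 * np.pi / n_grid))

def has_zero_sum_signing(ks):
    """Brute force: can signs eps_i in {+1, -1} make sum(eps_i * k_i) == 0?"""
    return any(sum(e * k for e, k in zip(eps, ks)) == 0
               for eps in itertools.product((1, -1), repeat=len(ks)))

if __name__ == "__main__":
    for ks in ([3, 5, 8], [3, 5, 7]):
        print(ks, round(cosine_integral(ks), 6), has_zero_sum_signing(ks))
    # [3, 5, 8] -> integral ~ pi/2, signing exists (3 + 5 - 8 = 0)
    # [3, 5, 7] -> integral ~ 0, no zero-sum signing exists
```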

    Why Philosophers Should Care About Computational Complexity

    One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory (the field that studies the resources, such as time, space, and randomness, needed to solve computational problems) leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction, Goodman's grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis. Comment: 58 pages, to appear in "Computability: Gödel, Turing, Church, and beyond," MIT Press, 2012. Some minor clarifications and corrections; new references added.