22 research outputs found

    Separating complexity classes using autoreducibility


    Errorless Versus Error-Prone Average-Case Complexity

    We consider the question of whether errorless and error-prone notions of average-case hardness are equivalent, and make several contributions. First, we study this question in the context of hardness for NP, and connect it to the long-standing open question of whether there are instance checkers for NP. We show that there is an efficient non-uniform non-adaptive reduction from errorless to error-prone heuristics for NP if and only if there is an efficient non-uniform average-case non-adaptive instance-checker for NP. We also suggest an approach to proving equivalence of the two notions of average-case hardness for PH. Second, we show unconditionally that error-prone average-case hardness is equivalent to errorless average-case hardness for P against NC¹ and for UP ∩ coUP against P. Third, we apply our results about errorless and error-prone average-case hardness to get new equivalences between hitting set generators and pseudo-random generators.
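    For reference, the two notions contrasted above are usually defined as follows (a standard parameterization, stated here only for orientation; D_n is the input distribution and δ(n) the failure bound). An errorless heuristic A for (L, D) satisfies
    \[ \forall x:\ A(x) \in \{L(x), \bot\} \qquad \text{and} \qquad \Pr_{x \sim D_n}[A(x) = \bot] \le \delta(n), \]
    that is, A may refuse to answer but never answers incorrectly, whereas an error-prone heuristic is only required to satisfy
    \[ \Pr_{x \sim D_n}[A(x) \ne L(x)] \le \delta(n). \]
    Every errorless heuristic is trivially error-prone (replace each ⊥ output by an arbitrary answer); the abstract concerns reductions in the converse direction.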

    On Basing Private Information Retrieval on NP-Hardness

    The possibility of basing the security of cryptographic objects on the (minimal) assumption that NP ⊈ BPP is at the very heart of complexity-theoretic cryptography. Most known results along these lines are negative, showing that assuming widely believed complexity-theoretic conjectures, there are no reductions from an NP-hard problem to the task of breaking certain cryptographic schemes. We make progress along this line of inquiry by showing that the security of single-server single-round private information retrieval schemes cannot be based on NP-hardness, unless the polynomial hierarchy collapses. Our main technical contribution is in showing how to break the security of a PIR protocol given an SZK oracle. Our result is tight in terms of both the correctness and the privacy parameter of the PIR scheme.
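    For orientation, recall the object being attacked: in a single-server single-round private information retrieval (PIR) scheme the server holds a database x ∈ {0,1}^n, the client holds an index i ∈ [n], and the protocol consists of one query and one response (the algorithm names Q, A, R below are illustrative, not the paper's notation). Correctness with error ε requires
    \[ \Pr\big[\, R(a,\mathrm{st}) = x_i \,\big] \ge 1 - \varepsilon, \qquad \text{where } (q,\mathrm{st}) \leftarrow Q(1^n, i),\ a \leftarrow A(x, q), \]
    and privacy with gap δ requires that for all indices i, j and every polynomial-size distinguisher D that sees only the query q,
    \[ \big|\Pr[D(Q(1^n,i)) = 1] - \Pr[D(Q(1^n,j)) = 1]\big| \le \delta(n). \]
    The tightness claim in the abstract refers to these two parameters.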

    Lower Bounds on Assumptions behind Indistinguishability Obfuscation

    Since the seminal work of Garg et al. (FOCS'13), in which they proposed the first candidate construction for indistinguishability obfuscation (iO for short), iO has become a central cryptographic primitive with numerous applications. The security of the proposed construction of Garg et al. and its variants is proved based on multi-linear maps (Garg et al., Eurocrypt'13) and their idealized model called the graded encoding model (Brakerski and Rothblum, TCC'14, and Barak et al., Eurocrypt'14). Whether or not iO could be based on standard and well-studied hardness assumptions has remained an elusive open question. In this work we prove lower bounds on the assumptions that imply iO in a black-box way, based on computational assumptions. Note that any lower bound for iO needs to somehow rely on computational assumptions, because if P = NP then statistically secure iO does exist. Our results are twofold: 1. There is no fully black-box construction of iO from (exponentially secure) collision-resistant hash functions unless the polynomial hierarchy collapses. Our lower bound extends to (separate iO from) any primitive implied by a random oracle in a black-box way. 2. Let P be any primitive that exists relative to random trapdoor permutations, the generic group model for any finite abelian group, or the degree-O(1) graded encoding model for any finite ring. We show that achieving a black-box construction of iO from P is as hard as basing public-key cryptography on one-way functions. In particular, for any such primitive P we present a constructive procedure that takes any black-box construction of iO from P and turns it into a construction of semantically secure public-key encryption from any one-way function. Our separations hold even if the construction of iO from P is semi-black-box (Reingold, Trevisan, and Vadhan, TCC'04) and the security reduction could access the adversary in a non-black-box way.
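    For context, the standard indistinguishability obfuscation requirement can be stated as follows (informally; the circuit class and size parameters are omitted). An efficient algorithm iO is an indistinguishability obfuscator if it preserves functionality,
    \[ \forall C,\ \forall x:\ iO(C)(x) = C(x), \]
    and, for every pair of equal-size circuits C_0, C_1 computing the same function, iO(C_0) and iO(C_1) are computationally indistinguishable. The abstract's remark that statistically secure iO exists if P = NP reflects the standard observation that, in that case, one can efficiently map any circuit to a canonical equivalent circuit (e.g., the lexicographically first one of the same size), which satisfies both conditions information-theoretically.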

    Approximation Complexity of Optimization Problems : Structural Foundations and Steiner Tree Problems

    In this thesis we study the approximation complexity of the Steiner Tree Problem and related problems as well as foundations in structural complexity theory. The Steiner Tree Problem is one of the most fundamental problems in combinatorial optimization. It asks for a shortest connection of a given set of points in an edge-weighted graph. This problem and its numerous variants have applications ranging from electrical engineering, VLSI design and transportation networks to internet routing. It is closely connected to the famous Traveling Salesman Problem and serves as a benchmark problem for approximation algorithms. We give a survey on the Steiner Tree Problem, obtaining lower bounds for approximability of the (1,2)-Steiner Tree Problem by combining hardness results of Berman and Karpinski with reduction methods of Bern and Plassmann. We present approximation algorithms for the Steiner Forest Problem in graphs and bounded hypergraphs, the Prize Collecting Steiner Tree Problem and related problems where prizes are given for pairs of terminals. These results are based on the Primal-Dual method and the Local Ratio framework of Bar-Yehuda. We study the Steiner Network Problem and obtain combinatorial approximation algorithms with reasonable running time for two special cases, namely the Uniform Uncapacitated Case and the Prize Collecting Uniform Uncapacitated Case. For the general case, Jain's algorithm obtains an approximation ratio of 2, based on the Ellipsoid Method. We obtain polynomial time approximation schemes for the Dense Prize Collecting Steiner Tree Problem, the Dense k-Steiner Problem and the Dense Class Steiner Tree Problem based on the methods of Karpinski and Zelikovsky for approximating the Dense Steiner Tree Problem. Motivated by the question of which parameters make the Steiner Tree Problem hard to solve, we make an excursion into Fixed Parameter Complexity, focusing on structural aspects of the W-Hierarchy. We prove a Speedup Theorem for the classes FPT and SP, and versions of Levin's Lower Bound Theorem for the class SP as well as for Randomized Space Complexity. Starting from the approximation schemes for the dense Steiner Tree problems, we deal with the efficiency of polynomial time approximation schemes in general. We separate the class EPTAS from PTAS under a reasonable complexity-theoretic assumption. The same separation was achieved by Cesati and Trevisan under an assumption from Fixed Parameter Complexity. We construct an oracle under which our assumption holds but that of Cesati and Trevisan does not, which implies that using relativizing proof techniques one cannot show that our assumption implies theirs.
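    As a concrete illustration of why the Steiner Tree Problem serves as a benchmark for approximation algorithms, the following is a minimal sketch of the classical distance-network (metric-closure/MST) heuristic, which produces a Steiner tree of weight at most 2 − 2/t times optimal for t terminals. This is the textbook baseline, not one of the algorithms developed in the thesis, and the adjacency-list graph encoding is an assumption made for the example.

```python
import heapq
from itertools import combinations

def dijkstra(graph, src):
    """Shortest-path distances and predecessors from src.
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist, pred = {src: 0}, {src: None}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], pred[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, pred

def steiner_tree_2approx(graph, terminals):
    """Distance-network heuristic: build the metric closure on the
    terminals, take its minimum spanning tree, and expand each closure
    edge back into a shortest path. Returns a set of graph edges
    (as frozensets) of total weight at most (2 - 2/t) * OPT."""
    terminals = list(terminals)
    # 1. Shortest paths from every terminal (metric closure).
    sp = {t: dijkstra(graph, t) for t in terminals}
    closure = {(s, t): sp[s][0][t] for s, t in combinations(terminals, 2)}

    # 2. Minimum spanning tree of the closure (Prim's algorithm).
    in_tree, mst_edges = {terminals[0]}, []
    while len(in_tree) < len(terminals):
        s, t = min(
            (e for e in closure if (e[0] in in_tree) != (e[1] in in_tree)),
            key=lambda e: closure[e],
        )
        mst_edges.append((s, t))
        in_tree.update((s, t))

    # 3. Expand each closure edge into its shortest path in the graph.
    tree_edges = set()
    for s, t in mst_edges:
        pred = sp[s][1]
        v = t
        while pred[v] is not None:
            tree_edges.add(frozenset((pred[v], v)))
            v = pred[v]
    return tree_edges
```

    For example, on a connected graph encoded as {u: [(v, w), ...]} with symmetric edge lists, steiner_tree_2approx(graph, {a, b, c}) returns the edge set of the approximate Steiner tree spanning the three terminals.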