    Cryptography from Information Loss

    © Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod Vaikuntanathan. Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is “lossy” reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into “useful” hardness, namely cryptography.

    Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information satisfies I(X; C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions, such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006). We then show several consequences of lossy reductions:

    1. We say that a language L has an f-reduction to a language L′ for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x_1, …, x_m), with each x_i ∈ {0,1}^n, and outputs a string z such that, with high probability, L′(z) = f(L(x_1), L(x_2), …, L(x_m)). Suppose a language L has an f-reduction C to L′ that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds:
    - f is the OR function, t ≤ m/100, and L′ is the same as L;
    - f is the Majority function, and t ≤ m/100;
    - f is the OR function, t ≤ O(m log n), and the reduction has no error.
    This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which yield only auxiliary-input one-way functions.

    2. Our second result concerns the stronger notion of t-compressing f-reductions, i.e., reductions that output only t bits. We show that if there is an average-case hard language L that has a t-compressing Majority-reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction for SAT (which is now known to be false unless the polynomial hierarchy collapses).

    Along the way, we define a non-standard, one-sided notion of average-case hardness, which is the notion of hardness used in the second result above and which may be of independent interest.
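    To make the t-lossiness measure concrete, the following small, self-contained Python sketch (not from the paper; all names are illustrative) computes I(X; C(X)) exactly for a toy deterministic map C that outputs a single OR bit — the extreme case of a compressing OR-reduction:

```python
# Toy illustration of the t-lossy condition I(X; C(X)) <= t (illustrative only).
import math
from collections import Counter
from itertools import product

def entropy(dist):
    """Shannon entropy in bits of a {outcome: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def mutual_information(joint):
    """I(X; Z) = H(X) + H(Z) - H(X, Z) for a joint {(x, z): prob} table."""
    px, pz = Counter(), Counter()
    for (x, z), p in joint.items():
        px[x] += p
        pz[z] += p
    return entropy(px) + entropy(pz) - entropy(joint)

m = 8  # number of input bits
C = lambda x: int(any(x))  # toy "reduction": output just the OR of the bits
# X uniform over {0,1}^m; C is deterministic, so I(X; C(X)) = H(C(X)).
joint = {(x, C(x)): 2.0 ** -m for x in product((0, 1), repeat=m)}
print(mutual_information(joint))  # ~0.037 bits, below t = m/100 = 0.08
```

    Since C throws away everything except one bit, the mutual information between the input tuple and the output is tiny; the paper's results concern exactly such reductions, where the output instance carries at most t bits of information about the input.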

    PCPs and Instance Compression from a Cryptographic Lens

    Modern cryptography fundamentally relies on the assumption that the adversary trying to break the scheme is computationally bounded. This assumption lets us construct cryptographic protocols and primitives that are known to be impossible otherwise. In this work we explore the effect of bounding the adversary's power in other information-theoretic proof systems, and show how to use this assumption to bypass impossibility results.

    We first consider the question of constructing succinct PCPs. These are PCPs whose length is polynomial only in the length of the original NP witness (in contrast to standard PCPs, whose length is proportional to the non-deterministic verification time). Unfortunately, succinct PCPs are known to be impossible to construct under standard complexity assumptions. Assuming the sub-exponential hardness of the learning with errors (LWE) problem, we construct succinct probabilistically checkable arguments, or PCAs (Zimand 2001, Kalai and Raz 2009), which are PCPs in which soundness is guaranteed against efficiently generated false proofs. Our PCA construction works for every NP relation that can be verified by a small-depth circuit (e.g., SAT, Clique, TSP, etc.) and, in contrast to prior work, is publicly verifiable and has constant query complexity. Curiously, we also show, as a proof of concept, that such publicly verifiable PCAs can be used to derive hardness-of-approximation results.

    Second, we consider the notion of instance compression (Harnik and Naor, 2006). An instance compression scheme lets one compress, for example, a CNF formula φ on m variables and n ≫ m clauses to a new formula φ′ with only poly(m) clauses, so that φ is satisfiable if and only if φ′ is satisfiable. Instance compression has been shown to be closely related to succinct PCPs and is similarly highly unlikely to exist. We introduce a computational analog of instance compression in which we require that if φ is unsatisfiable then φ′ is effectively unsatisfiable, in the sense that it is computationally infeasible to find a satisfying assignment for φ′ (although such an assignment may exist). Assuming the same sub-exponential LWE assumption, we construct such computational instance compression schemes for every bounded-depth NP relation. As an application, this lets one compress k formulas φ_1, …, φ_k into a single short formula φ that is effectively satisfiable if and only if at least one of the original formulas was satisfiable.
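    As a rough sketch of the contract a computational instance compression scheme provides — the names and CNF representation below are assumptions for illustration, and the actual LWE-based construction is far more involved — in Python:

```python
# Hypothetical interface for a computational instance compression scheme.
from dataclasses import dataclass

@dataclass
class CNF:
    num_vars: int   # m variables
    clauses: list   # n >> m clauses, each a list of signed variable indices

def compress(phi: CNF) -> CNF:
    """Map phi to a formula phi' with only poly(m) clauses such that:
    - if phi is satisfiable, then phi' is satisfiable (completeness), and
    - if phi is unsatisfiable, then no efficient algorithm can find a
      satisfying assignment for phi', even though one may exist
      (computational soundness, i.e. "effectively unsatisfiable").
    """
    raise NotImplementedError("stands in for the paper's LWE-based scheme")

def compress_or(formulas: list) -> CNF:
    """OR-compression of phi_1, ..., phi_k: the single short output formula
    is effectively satisfiable iff at least one phi_i is satisfiable."""
    raise NotImplementedError
```

    The key difference from classical instance compression is in the second bullet: soundness is relaxed from "phi' is unsatisfiable" to "no efficient algorithm finds a satisfying assignment", which is what lets the construction sidestep the known impossibility results.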

    Batch Proofs are Statistically Hiding

    Batch proofs are proof systems that convince a verifier that x_1, …, x_t ∈ L, for some NP language L, with communication that is much shorter than sending the t witnesses. In the case of statistical soundness (where the cheating prover is unbounded but the honest prover is efficient given the witnesses), interactive batch proofs are known for UP, the class of NP languages with unique witnesses. In the case of computational soundness (a.k.a. arguments, where both honest and dishonest provers are efficient), non-interactive solutions are now known for all of NP under standard cryptographic assumptions. We study the necessary conditions for the existence of batch proofs in these two settings. Our main results are as follows.

    1. Statistical soundness: the existence of a statistically sound batch proof for L implies that L has a statistically witness indistinguishable (SWI) proof, with inverse-polynomial SWI error and a non-uniform honest prover. The implication is unconditional for obtaining honest-verifier SWI, or for obtaining full-fledged SWI from public-coin protocols, whereas for private-coin protocols full-fledged SWI is obtained assuming one-way functions. This poses a barrier to achieving batch proofs beyond UP (where witness indistinguishability is trivial). In particular, assuming that NP does not have SWI proofs, batch proofs for all of NP do not exist.

    2. Computational soundness: the existence of batch arguments (BARGs) for NP, together with one-way functions, implies the existence of statistical zero-knowledge (SZK) arguments for NP with roughly the same number of rounds, an inverse-polynomial zero-knowledge error, and a non-uniform honest prover. Thus, constant-round interactive BARGs from one-way functions would yield constant-round SZK arguments from one-way functions. This would be surprising, as SZK arguments are currently known only assuming constant-round statistically hiding commitments (which in turn are unlikely to follow from one-way functions).

    3. Non-interactive: the existence of non-interactive BARGs for NP, together with one-way functions, implies non-interactive statistical zero-knowledge arguments (NISZKA) for NP, with negligible soundness error, inverse-polynomial zero-knowledge error, and a non-uniform honest prover. Assuming also lossy public-key encryption, the statistical zero-knowledge error can be made negligible and the honest prover can be made uniform.

    All of our results stem from a common framework showing how to transform a batch protocol for a language L into an SWI protocol for L.
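    To make the communication requirement concrete, here is a minimal Python sketch of the shape of a non-interactive batch argument; the class, method, and CRS parameter below are assumptions for illustration, not an API from the paper. Roughly, a proof much shorter than the t witnesses cannot information-theoretically encode all of them, which is the intuition the SWI results above formalize.

```python
# Hypothetical shape of a non-interactive batch argument (BARG);
# names and types are illustrative, not taken from the paper.
from typing import List

Instance = bytes
Witness = bytes
Proof = bytes

class BatchArgument:
    def prove(self, crs: bytes, xs: List[Instance], ws: List[Witness]) -> Proof:
        """Produce one proof that every xs[i] is in the NP language L.

        Succinctness: len(proof) should grow like poly(|x|, log t), i.e. be
        much shorter than the t witnesses it replaces.
        """
        raise NotImplementedError

    def verify(self, crs: bytes, xs: List[Instance], proof: Proof) -> bool:
        """Computational soundness: an efficient cheating prover cannot make
        this accept when some xs[i] is not in L, except with negligible
        probability."""
        raise NotImplementedError
```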
