
    Cryptography from Information Loss

    © Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod Vaikuntanathan.

    Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is "lossy" reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into "useful" hardness, namely cryptography.

    Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information I(X; C(X)) ≤ t (restated formally after this abstract). Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006).

    We then proceed to show several consequences of lossy reductions:

    1. We say that a language L has an f-reduction to a language L' for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x_1, ..., x_m), with each x_i ∈ {0,1}^n, and outputs a string z such that, with high probability, L'(z) = f(L(x_1), L(x_2), ..., L(x_m)). Suppose a language L has an f-reduction C to L' that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds:
    * f is the OR function, t ≤ m/100, and L' is the same as L;
    * f is the Majority function, and t ≤ m/100;
    * f is the OR function, t ≤ O(m log n), and the reduction has no error.
    This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which result only in auxiliary-input one-way functions.

    2. Our second result is about the stronger notion of t-compressing f-reductions – reductions that only output t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses).

    Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above, that may be of independent interest.
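    The two definitions above can be restated compactly. The following is our sketch in the abstract's notation, not text from the paper; in particular, the error parameter δ standing in for "with high probability" is our own label:

    \[
      \text{(t-lossy)}\qquad \forall X:\quad I\bigl(X;\, C(X)\bigr) \le t
    \]
    \[
      \text{(f-reduction)}\qquad \Pr\Bigl[L'(z) = f\bigl(L(x_1),\dots,L(x_m)\bigr)\Bigr] \ge 1-\delta, \qquad z \gets C(x_1,\dots,x_m),\; x_i \in \{0,1\}^n
    \]

    Read together, the results say that a reduction which simultaneously computes f on the membership bits and retains only few bits of information about its input is already enough structure to build one-way functions from worst-case hardness.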

    On the Relationship between Statistical Zero-Knowledge and Statistical Randomized Encodings

    Statistical zero-knowledge proofs (Goldwasser, Micali and Rackoff, SICOMP 1989) allow a computationally-unbounded server to convince a computationally-limited client that an input x is in a language Π without revealing any additional information about x that the client cannot compute by herself. Randomized encoding (RE) of functions (Ishai and Kushilevitz, FOCS 2000) allows a computationally-limited client to publish a single (randomized) message, Enc(x), from which the server learns whether x is in Π and nothing else. It is known that SRE, the class of problems that admit statistically private randomized encoding with polynomial-time client and computationally-unbounded server, is contained in the class SZK of problems that have a statistical zero-knowledge proof. However, the exact relation between these two classes, and in particular the possibility of equivalence, was left as an open problem. In this paper, we explore the relationship between SRE and SZK, and derive the following results:

    * In a non-uniform setting, statistical randomized encoding with one-sided privacy (1RE) is equivalent to non-interactive statistical zero-knowledge (NISZK). These variants were studied in the past as natural relaxations/strengthenings of the original notions. Our theorem shows that proving SRE = SZK is equivalent to showing that 1RE = RE and SZK = NISZK (see the schematic after this abstract). The latter is a well-known open problem (Goldreich, Sahai, Vadhan, CRYPTO 1999).

    * If SRE is non-trivial (not in BPP), then infinitely-often one-way functions exist. The analogous hypothesis for SZK yields only auxiliary-input one-way functions (Ostrovsky, Structure in Complexity Theory, 1991), which is believed to be a significantly weaker implication.

    * If there exists an average-case hard language with a perfect randomized encoding, then collision-resistant hash functions (CRH) exist. Again, a similar assumption for SZK implies only constant-round statistically-hiding commitments, a primitive which seems weaker than CRH.

    We believe that our results sharpen the relationship between SRE and SZK and illuminate the core differences between these two classes.
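    A schematic summary of the first result's logical structure, in the non-uniform setting of the theorem. The containment chain is our reading of the abstract (SRE ⊆ 1RE holds because one-sided privacy only relaxes full statistical privacy; NISZK ⊆ SZK is standard), not a formula copied from the paper:

    \[
      SRE \;\subseteq\; 1RE \;=\; NISZK \;\subseteq\; SZK,
    \]
    \[
      \text{hence}\qquad SRE = SZK \;\iff\; \bigl(SRE = 1RE\bigr) \;\wedge\; \bigl(NISZK = SZK\bigr).
    \]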

    Quantum Simulation for High Energy Physics

    This is the first time that Quantum Simulation for High Energy Physics (HEP) has been studied in the U.S. decadal particle-physics community planning; indeed, until recently, this was not considered a mainstream topic in the community. This speaks to a remarkable rate of growth of this subfield over the past few years, stimulated by the impressive advancements in Quantum Information Sciences (QIS) and associated technologies over the past decade, and by the significant investment in this area by the government and private sectors in the U.S. and other countries. High-energy physicists have quickly identified problems of importance to our understanding of nature at the most fundamental level, from the tiniest distances to cosmological extents, that are intractable with classical computers but may benefit from quantum advantage. They have initiated, and continue to carry out, a vigorous program in theory, algorithm, and hardware co-design for simulations of relevance to the HEP mission. This community whitepaper is an attempt to bring this exciting and yet challenging area of research into the spotlight, and to elaborate on its promises, requirements, challenges, and potential solutions over the next decade and beyond.

    Comment: This is a whitepaper prepared for the topical groups CompF6 (Quantum computing), TF05 (Lattice Gauge Theory), and TF10 (Quantum Information Science) within the Computational Frontier and Theory Frontier of the U.S. Community Study on the Future of Particle Physics (Snowmass 2021). 103 pages and 1 figure.

    From Private Simultaneous Messages to Zero-Information Arthur-Merlin Protocols and Back

    Göös, Pitassi and Watson (ITCS, 2015) have recently introduced the notion of Zero-Information Arthur-Merlin Protocols (ZAM). In this model, which can be viewed as a private version of the standard Arthur-Merlin communication complexity game, Alice and Bob hold a pair of inputs x and y respectively, and Merlin, the prover, attempts to convince them that some public function f evaluates to 1 on (x, y). In addition to standard completeness and soundness, Göös et al. require an additional "zero-knowledge" property which asserts that on each yes-input, the distribution of Merlin's proof leaks no information about the inputs (x, y) to an external observer. In this paper, we relate this new notion to the well-studied model of Private Simultaneous Messages (PSM) that was originally suggested by Feige, Kilian and Naor (STOC, 1994). Roughly speaking, we show that the randomness complexity of ZAM essentially corresponds to the communication complexity of PSM, and that the communication complexity of ZAM essentially corresponds to the randomness complexity of PSM (see the schematic after this abstract). This relation holds in both directions, with different variants of PSM used in each direction. Consequently, we derive better upper bounds on the communication complexity of ZAM for arbitrary functions. As a secondary contribution, we reveal new connections between different variants of PSM protocols which we believe to be of independent interest.
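    The correspondence can be drawn schematically. Here R(·) and CC(·) denote randomness and communication complexity respectively; the ≈ hides the "essentially" of the abstract, i.e., polynomial slack and the particular PSM variant used in each direction:

    \[
      R_{\mathrm{ZAM}}(f) \;\approx\; CC_{\mathrm{PSM}}(f),
      \qquad
      CC_{\mathrm{ZAM}}(f) \;\approx\; R_{\mathrm{PSM}}(f).
    \]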

    Amplifying the Security of Functional Encryption, Unconditionally

    Security amplification is a fundamental problem in cryptography. In this work, we study security amplification for functional encryption (FE). We show two main results:

    1) For any constant ε in (0,1), we can amplify any FE scheme for P/poly which is ε-secure against all polynomial-sized adversaries to a fully secure FE scheme for P/poly, unconditionally.

    2) For any constant ε in (0,1), we can amplify any FE scheme for P/poly which is ε-secure against subexponential-sized adversaries to a fully subexponentially secure FE scheme for P/poly, unconditionally.

    Furthermore, both of our amplification results preserve compactness of the underlying FE scheme. Previously, amplification results for FE were known only assuming subexponentially secure LWE. Along the way, we introduce a new form of homomorphic secret sharing called set homomorphic secret sharing that may be of independent interest. Additionally, we introduce a new technique which allows one to argue security amplification of nested primitives, and prove a general theorem that can be used to analyze the security amplification of parallel repetitions (illustrated after this abstract).
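    To convey the flavor of the parallel-repetition analysis that the general theorem supports, here is the heuristic direct-product calculation it is meant to justify; the bound and the choice of k are our illustration, not the paper's statement. If breaking a combination of k independent copies requires breaking every copy, and each copy resists attackers with advantage at most ε, one hopes for

    \[
      \mathrm{Adv}^{\text{combined}}(\mathcal{A}) \;\lesssim\; \varepsilon^{k} + \mathrm{negl}(\lambda),
    \]

    so that a constant ε < 1 and k = ω(log λ) already push the advantage to negligible. Making such direct-product bounds rigorous for nested primitives is precisely what the general theorem addresses.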

    The Round Complexity of Perfect MPC with Active Security and Optimal Resiliency

    In STOC 1988, Ben-Or, Goldwasser, and Wigderson (BGW) established an important milestone in the fields of cryptography and distributed computing by showing that every functionality can be computed with perfect (information-theoretic and error-free) security in the presence of an active (aka Byzantine) rushing adversary that controls up to n/3 of the parties. We study the round complexity of general secure multiparty computation in the BGW model. Our main result shows that every functionality can be realized in only four rounds of interaction, and that some functionalities cannot be computed in three rounds. This completely settles the round complexity of perfect actively-secure optimally-resilient MPC, resolving a long line of research. Our lower bound is based on a novel round-reduction technique that allows us to lift existing three-round lower bounds for verifiable secret sharing to four-round lower bounds for general MPC. To prove the upper bound, we develop new round-efficient protocols for computing degree-2 functionalities over large fields (of the schematic form shown after this abstract), and establish the completeness of such functionalities. The latter result extends the recent completeness theorem of Applebaum, Brakerski and Tsabary (TCC 2018, Eurocrypt 2019) that was limited to the binary field.
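    For concreteness, a degree-2 functionality over a finite field F is one whose every output is a polynomial of total degree at most 2 in the parties' joint inputs; schematically (this generic form is our illustration, not the paper's exact definition):

    \[
      f_j(x_1,\dots,x_n) \;=\; \sum_{i \le k} a^{(j)}_{i,k}\, x_i x_k \;+\; \sum_{i} b^{(j)}_{i}\, x_i \;+\; c^{(j)},
      \qquad a^{(j)}_{i,k},\, b^{(j)}_{i},\, c^{(j)} \in \mathbb{F}.
    \]

    The completeness theorem reduces arbitrary functionalities to such degree-2 computations, so a round-efficient protocol for the latter yields one for the former.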

    35th Symposium on Theoretical Aspects of Computer Science: STACS 2018, February 28-March 3, 2018, Caen, France


    Actively Secure Garbled Circuits with Constant Communication Overhead in the Plain Model

    We consider the problem of constant-round secure two-party computation in the presence of active (malicious) adversaries. We present the first protocol that has only a constant multiplicative communication overhead compared to Yao's protocol for passive adversaries, and that can be implemented in the plain model by making only black-box use of (parallel) oblivious transfer and a pseudo-random generator. This improves over the polylogarithmic overhead of the previous best protocol (see the schematic after this abstract). A similar result could previously be obtained only in an amortized setting, using preprocessing, or by assuming bit-oblivious-transfer as an ideal primitive with constant cost. We present two variants of this result, one aimed at minimizing the number of oblivious transfers and another aimed at optimizing concrete efficiency. Our protocols are based on a novel combination of previous techniques together with a new efficient protocol to certify that pairs of strings transmitted via oblivious transfer satisfy a global relation. Settling for "security with correlated abort", the concrete communication complexity of the second variant of our protocol can beat the best previous protocols with the same kind of security, even for realistic values of the circuit size and the security parameter. This variant is particularly attractive in the offline-online setting, where the online cost is dominated by a single evaluation of an authenticated garbled circuit, and it can also be made non-interactive using the Fiat-Shamir heuristic.
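    The headline overhead claim, stated schematically; CC(·) denotes communication complexity, and the asymptotic forms are our paraphrase of "constant multiplicative overhead" versus the prior "polylogarithmic overhead" (whose argument the abstract leaves unspecified):

    \[
      \frac{CC_{\text{this work}}(C)}{CC_{\text{Yao}}(C)} \;=\; O(1),
      \qquad
      \frac{CC_{\text{prior best}}(C)}{CC_{\text{Yao}}(C)} \;=\; \mathrm{polylog}(\cdot).
    \]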