
    Placing Conditional Disclosure of Secrets in the Communication Complexity Universe

    In the conditional disclosure of secrets (CDS) problem (Gertner et al., J. Comput. Syst. Sci., 2000) Alice and Bob, who hold n-bit inputs x and y respectively, wish to release a common secret z to Carol (who knows both x and y) if and only if the input (x,y) satisfies some predefined predicate f. Alice and Bob are allowed to send a single message to Carol which may depend on their inputs and some shared randomness, and the goal is to minimize the communication complexity while providing information-theoretic security. Despite the growing interest in this model, very few lower bounds are known. In this paper, we relate the CDS complexity of a predicate f to its communication complexity under various communication games. For several basic predicates our results yield tight, or almost tight, lower bounds of Ω(n) or Ω(n^{1-ε}), providing an exponential improvement over previous logarithmic lower bounds. We also define new communication complexity classes that correspond to different variants of the CDS model and study the relations between them and their complements. Notably, we show that allowing for imperfect correctness can significantly reduce communication - a seemingly new phenomenon in the context of information-theoretic cryptography. Finally, our results show that proving explicit super-logarithmic lower bounds for imperfect CDS protocols is a necessary step towards proving explicit lower bounds against the class AM, or even AM ∩ coAM - a well known open problem in the theory of communication complexity. Thus imperfect CDS forms a new minimal class which is placed just beyond the boundaries of the "civilized" part of the communication complexity world for which explicit lower bounds are known.
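    To make the model concrete, here is a minimal sketch of the folklore single-message CDS protocol for the equality predicate over a prime field; the modulus and variable names are illustrative choices, and this is background material rather than a construction from the paper.

```python
# Toy conditional disclosure of secrets (CDS) for the equality predicate
# f(x, y) = 1 iff x == y. Alice and Bob share randomness (a, b); each sends
# one field element to Carol. If x == y, Carol recovers the secret z; if
# x != y, the pair of messages is uniform and independent of z.
import secrets

P = 2**61 - 1  # prime modulus; the field size here is an arbitrary choice

def shared_randomness():
    return secrets.randbelow(P), secrets.randbelow(P)

def alice_message(x, z, rand):
    a, b = rand
    return (a * x + b + z) % P    # depends on Alice's input and the secret

def bob_message(y, rand):
    a, b = rand
    return (a * y + b) % P        # depends only on Bob's input

def carol_reconstruct(m_alice, m_bob):
    return (m_alice - m_bob) % P  # equals z exactly when x == y

rand = shared_randomness()
z = secrets.randbelow(P)
assert carol_reconstruct(alice_message(7, z, rand), bob_message(7, rand)) == z
```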

    Instance-Hiding Interactive Proofs

    In an Instance-Hiding Interactive Proof (IHIP) [Beaver et al. CRYPTO 90], an efficient verifier with a _private_ input x interacts with an unbounded prover to determine whether x is contained in a language L. In addition to completeness and soundness, the instance-hiding property requires that the prover should not learn anything about x in the course of the interaction. Such proof systems capture natural privacy properties, and may be seen as a generalization of the influential concept of Randomized Encodings [Ishai et al. FOCS 00, Applebaum et al. FOCS 04, Agrawal et al. ICALP 15], and as a counterpart to Zero-Knowledge proofs [Goldwasser et al. STOC 89]. We investigate the properties and power of such instance-hiding proofs, and show the following: 1. Any language with an IHIP is contained in AM/poly and coAM/poly. 2. If an average-case hard language has an IHIP, then One-Way Functions exist. 3. There is an oracle with respect to which there is a language that has an IHIP but not an SZK proof. 4. IHIPs are closed under composition with any efficiently computable function. We further study a stronger version of IHIP (that we call Strong IHIP) where the view of the honest prover can be efficiently simulated. For these, we obtain stronger versions of some of the above: 5. Any language with a Strong IHIP is contained in AM and coAM. 6. If a _worst-case_ hard language has a Strong IHIP, then One-Way Functions exist.

    Secret Sharing and Statistical Zero Knowledge

    We show a general connection between various types of statistical zero-knowledge (SZK) proof systems and (unconditionally secure) secret sharing schemes. Viewed through the SZK lens, we obtain several new results on secret-sharing: Characterizations: We obtain an almost-characterization of access structures for which there are secret-sharing schemes with an efficient sharing algorithm (but not necessarily efficient reconstruction). In particular, we show that for every language L ∈ SZK_L (the class of languages that have statistical zero knowledge proofs with log-space verifiers and simulators), a (monotonized) access structure associated with L has such a secret-sharing scheme. Conversely, we show that such secret-sharing schemes can only exist for languages in SZK. Constructions: We show new constructions of secret-sharing schemes with efficient sharing and reconstruction for access structures that are in P, but are not known to be in NC, namely Bounded-Degree Graph Isomorphism and constant-dimensional lattice problems. In particular, this gives us the first combinatorial access structure that is conjectured to be outside NC but has an efficient secret-sharing scheme. Previous such constructions (Beimel and Ishai; CCC 2001) were algebraic and number-theoretic in nature. Limitations: We show that universally-efficient secret-sharing schemes, where the complexity of computing the shares is a polynomial independent of the complexity of deciding the access structure, cannot exist for all (monotone languages in) P, unless there is a polynomial q such that P ⊆ DSPACE(q(n)).
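    For readers less familiar with the objects involved, the following is a minimal sketch of standard threshold (Shamir) secret sharing, where both sharing and reconstruction are efficient; it is textbook background only, not one of the constructions or access structures studied in the paper.

```python
# Shamir t-out-of-n secret sharing over a prime field: shares are evaluations
# of a random degree-(t-1) polynomial with the secret as constant term, and
# reconstruction is Lagrange interpolation at 0. Illustrative sketch only.
import secrets

P = 2**61 - 1  # prime field modulus (illustrative choice)

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 using any t shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(secret=42, t=3, n=5)
assert reconstruct(shares[:3]) == 42
```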

    Control, Confidentiality, and the Right to be Forgotten

    Recent digital rights frameworks give users the right to delete their data from systems that store and process their personal information (e.g., the "right to be forgotten" in the GDPR). How should deletion be formalized in complex systems that interact with many users and store derivative information? We argue that prior approaches fall short. Definitions of machine unlearning (Cao and Yang [2015]) are too narrowly scoped and do not apply to general interactive settings. The natural approach of deletion-as-confidentiality (Garg et al. [2020]) is too restrictive: by requiring secrecy of deleted data, it rules out social functionalities. We propose a new formalism: deletion-as-control. It allows users' data to be freely used before deletion, while also imposing a meaningful requirement after deletion, thereby giving users more control. Deletion-as-control provides new ways of achieving deletion in diverse settings. We apply it to social functionalities, and give a new unified view of various machine unlearning definitions from the literature. This is done by way of a new adaptive generalization of history independence. Deletion-as-control also provides a new approach to the goal of machine unlearning, that is, to maintaining a model while honoring users' deletion requests. We show that publishing a sequence of updated models that are differentially private under continual release satisfies deletion-as-control. The accuracy of such an algorithm does not depend on the number of deleted points, in contrast to the machine unlearning literature.
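    As a point of reference for the last claim, the sketch below shows the classic binary-tree mechanism for counting under continual release (a standard differential-privacy construction, not the paper's deletion-as-control algorithm); each stream element affects only O(log T) noisy partial sums, which is what makes continual-release privacy possible.

```python
# Binary-tree mechanism for differentially private counting under continual
# release: keep noisy sums over dyadic blocks and answer each prefix count
# as a sum of at most O(log T) noisy blocks. Illustrative sketch only.
import math
import random

def laplace(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_prefix_counts(stream, epsilon):
    """Noisy prefix sums of a 0/1 stream, one released per time step."""
    T = len(stream)
    levels = max(1, math.ceil(math.log2(T)) + 1)  # each item lies in <= `levels` blocks
    scale = levels / epsilon                      # basic composition across levels
    noisy = {}                                    # (level, start) -> noisy block sum

    def noisy_block(level, start):                # dyadic block [start, start + 2^level)
        if (level, start) not in noisy:
            hi = min(start + (1 << level), T)
            noisy[(level, start)] = sum(stream[start:hi]) + laplace(scale)
        return noisy[(level, start)]

    releases = []
    for t in range(1, T + 1):
        total, start = 0.0, 0
        while start < t:                          # cover [0, t) by maximal dyadic blocks
            level = 0
            while start % (1 << (level + 1)) == 0 and start + (1 << (level + 1)) <= t:
                level += 1
            total += noisy_block(level, start)
            start += 1 << level
        releases.append(total)
    return releases

print(dp_prefix_counts([1, 0, 1, 1, 0, 1, 1, 0], epsilon=1.0))
```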

    Collision-Resistance from Multi-Collision-Resistance

    Collision-resistant hash functions (CRH) are a fundamental and ubiquitous cryptographic primitive. Several recent works have studied a relaxation of CRH called t-way multi-collision-resistant hash functions (t-MCRH). These are families of functions for which it is computationally hard to find a t-way collision, even though such collisions are abundant (and even (t-1)-way collisions may be easy to find). The case of t=2 corresponds to standard CRH, but it is natural to study t-MCRH for larger values of t. Multi-collision-resistance seems to be a qualitatively weaker property than standard collision-resistance. Nevertheless, in this work we show a non-black-box transformation of any moderately shrinking t-MCRH, for t in {2,4}, into an (infinitely often secure) CRH. This transformation is non-constructive - we can prove the existence of a CRH but cannot explicitly point out a construction. Our result partially extends to larger values of t. In particular, we show that for suitable values of t > t', we can transform a t-MCRH into a t'-MCRH, at the cost of reducing the shrinkage of the resulting hash function family and settling for infinitely often security. This result utilizes the list-decodability properties of Reed-Solomon codes.
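    As a quick illustration of the underlying notion (the definition only, not the paper's transformation): for a function that shrinks heavily, t-way collisions exist by pigeonhole and can be found by bucketing inputs by their output value.

```python
# Brute-force t-way collision finder for a toy 8-bit "hash": bucket inputs by
# output until some bucket holds t distinct inputs. This only illustrates why
# multi-collisions are abundant for shrinking functions; it is not a secure
# hash and is unrelated to the paper's non-black-box transformation.
import hashlib
from collections import defaultdict
from itertools import count

def toy_hash(x: int) -> int:
    """8-bit output, so many collisions exist (NOT a cryptographic hash)."""
    return hashlib.sha256(x.to_bytes(8, "big")).digest()[0]

def find_t_way_collision(t: int) -> list[int]:
    buckets = defaultdict(list)
    for x in count():
        h = toy_hash(x)
        buckets[h].append(x)
        if len(buckets[h]) == t:
            return buckets[h]

print(find_t_way_collision(4))  # four distinct inputs with the same output
```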

    Cryptography from Information Loss

    Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is "lossy" reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into "useful" hardness, namely cryptography. Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information I(X; C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006). We then proceed to show several consequences of lossy reductions: 1. We say that a language L has an f-reduction to a language L0 for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x_1, ..., x_m), with each x_i ∈ {0,1}^n, and outputs a string z such that with high probability, L0(z) = f(L(x_1), L(x_2), ..., L(x_m)). Suppose a language L has an f-reduction C to L0 that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds: (a) f is the OR function, t ≤ m/100, and L0 is the same as L; (b) f is the Majority function, and t ≤ m/100; or (c) f is the OR function, t ≤ O(m log n), and the reduction has no error. This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993) that result in auxiliary-input one-way functions. 2. Our second result is about the stronger notion of t-compressing f-reductions - reductions that only output t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, STOC 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses). Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above, that may be of independent interest.
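    One small step worth making explicit (a standard information-theoretic chain of inequalities, not a claim taken verbatim from the paper): any t-compressing reduction, whose output is at most t bits long, is automatically t-lossy under the mutual-information definition above, since

```latex
% Mutual information is bounded by the output entropy, which is bounded by the output length:
I\bigl(X;\, C(X)\bigr) \;\le\; H\bigl(C(X)\bigr) \;\le\; \log_2 \bigl|\{0,1\}^{t}\bigr| \;=\; t .
```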

    Fine-grained Cryptography

    Fine-grained cryptographic primitives are ones that are secure against adversaries with a-priori bounded polynomial resources (time, space or parallel-time), where the honest algorithms use less resources than the adversaries they are designed to fool. Such primitives were previously studied in the context of time-bounded adversaries (Merkle, CACM 1978), space-bounded adversaries (Cachin and Maurer, CRYPTO 1997) and parallel-time-bounded adversaries (Håstad, IPL 1987). Our goal is to show unconditional security of these constructions when possible, or base security on widely believed separation of worst-case complexity classes. We show: NC^1-cryptography: Under the assumption that NC^1 ≠ ⊕L/poly, we construct one-way functions, pseudo-random generators (with sub-linear stretch), collision-resistant hash functions and most importantly, public-key encryption schemes, all computable in NC^1 and secure against all NC^1 circuits. Our results rely heavily on the notion of randomized encodings pioneered by Applebaum, Ishai and Kushilevitz, and crucially, make non-black-box use of randomized encodings for logspace classes. AC^0-cryptography: We construct (unconditionally secure) pseudo-random generators with arbitrary polynomial stretch, weak pseudo-random functions, secret-key encryption and perhaps most interestingly, collision-resistant hash functions, computable in AC^0 and secure against all AC^0 circuits. Previously, one-way permutations and pseudo-random generators (with linear stretch) computable in AC^0 and secure against AC^0 circuits were known from the works of Håstad and Braverman.

    Doubly-Efficient Batch Verification in Statistical Zero-Knowledge

    A sequence of recent works, concluding with Mu et al. (Eurocrypt, 2024) has shown that every problem Π admitting a non-interactive statistical zero-knowledge proof (NISZK) has an efficient zero-knowledge batch verification protocol. Namely, an NISZK protocol for proving that x_1, ..., x_k ∈ Π with communication that only scales poly-logarithmically with k. A caveat of this line of work is that the prover runs in exponential-time, whereas for NP problems it is natural to hope to obtain a doubly-efficient proof - that is, a prover that runs in polynomial-time given the k NP witnesses. In this work we show that every problem in NISZK ∩ UP has a doubly-efficient interactive statistical zero-knowledge proof with communication poly(n, log(k)) and poly(log(k), log(n)) rounds. The prover runs in time poly(n, k) given access to the k UP witnesses. Here n denotes the length of each individual input, and UP is the subclass of NP relations in which YES instances have unique witnesses. This result yields doubly-efficient statistical zero-knowledge batch verification protocols for a variety of concrete and central cryptographic problems from the literature.

    The Planted k-SUM Problem: Algorithms, Lower Bounds, Hardness Amplification, and Cryptography

    In the average-case k-SUM problem, given r integers chosen uniformly at random from {0, ..., M-1}, the objective is to find a set of k numbers that sum to 0 modulo M (this set is called a solution). In the related k-XOR problem, given r uniformly random Boolean vectors of length log M, the objective is to find a set of k of them whose bitwise-XOR is the all-zero vector. Both of these problems have widespread applications in the study of fine-grained complexity and cryptanalysis. The feasibility and complexity of these problems depend on the relative values of k, r, and M. The dense regime of M ≤ r^k, where solutions exist with high probability, is quite well-understood and we have several non-trivial algorithms and hardness conjectures here. Much less is known about the sparse regime of M ≫ r^k, where solutions are unlikely to exist. The best answers we have for many fundamental questions here are limited to whatever carries over from the dense or worst-case settings. We study the planted k-SUM and k-XOR problems in the sparse regime. In these problems, a random solution is planted in a randomly generated instance and has to be recovered. As M increases past r^k, these planted solutions tend to be the only solutions with increasing probability, potentially becoming easier to find. We show several results about the complexity and applications of these problems. Conditional Lower Bounds. Assuming established conjectures about the hardness of average-case (non-planted) k-SUM when M = r^k, we show non-trivial lower bounds on the running time of algorithms for planted k-SUM when r^k ≤ M ≤ r^{2k}. We show the same for k-XOR as well. Search-to-Decision Reduction. For any M > r^k, suppose there is an algorithm running in time T that can distinguish between a random k-SUM instance and a random instance with a planted solution, with success probability (1 - o(1)). Then, for the same M, there is an algorithm running in time Õ(T) that solves planted k-SUM with constant probability. The same holds for k-XOR as well. Hardness Amplification. For any M ≥ r^k, if an algorithm running in time T solves planted k-XOR with success probability Ω(1/polylog(r)), then there is an algorithm running in time Õ(T) that solves it with probability (1 - o(1)). We show this by constructing a rapidly mixing random walk over k-XOR instances that preserves the planted solution. Cryptography. For some M ≤ 2^{polylog(r)}, the hardness of the k-XOR problem can be used to construct Public-Key Encryption (PKE) assuming that the Learning Parity with Noise (LPN) problem with constant noise rate is hard for 2^{n^{0.01}}-time algorithms. Previous constructions of PKE from LPN needed either a noise rate of O(1/√n), or hardness for 2^{n^{0.5}}-time algorithms. Algorithms. For any M ≥ 2^{r^2}, there is a constant c (independent of k) and an algorithm running in time r^c that, for any k, solves planted k-SUM with success probability Ω(1/8^k). We get this by showing an average-case reduction from planted k-SUM to the Subset Sum problem. For r^k ≤ M ≪ 2^{r^2}, the best known algorithms are still the worst-case k-SUM algorithms running in time r^{⌈k/2⌉ - o(1)}.
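    To make the planted distribution concrete, here is a small sketch: it plants a solution by fixing the last of k randomly chosen positions so that the k values sum to 0 mod M, and then recovers a solution by exhaustive search. The planting step is one natural choice and the parameter values are illustrative; none of this is code from the paper.

```python
# Generate a planted k-SUM instance (r values mod M, with k positions forced
# to sum to 0 mod M) and recover a solution by brute force over k-subsets.
# Feasible only for tiny r and k; efficient algorithms are the paper's subject.
import random
from itertools import combinations

def planted_ksum_instance(r, k, M):
    values = [random.randrange(M) for _ in range(r)]
    planted = sorted(random.sample(range(r), k))   # positions of the planted solution
    last = planted[-1]
    values[last] = -sum(values[i] for i in planted[:-1]) % M
    return values, planted

def brute_force_ksum(values, k, M):
    for subset in combinations(range(len(values)), k):
        if sum(values[i] for i in subset) % M == 0:
            return list(subset)
    return None

# Sparse regime: M = 10**6 is much larger than r^k = 20**3.
values, planted = planted_ksum_instance(r=20, k=3, M=10**6)
found = brute_force_ksum(values, k=3, M=10**6)
print("planted:", planted, "found:", found)  # these usually coincide in the sparse regime
```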

    Public-Coin Statistical Zero-Knowledge Batch Verification against Malicious Verifiers

    Suppose that a problem Π has a statistical zero-knowledge (SZK) proof with communication complexity m. The question of batch verification for SZK asks whether one can prove that k instances x_1, ..., x_k all belong to Π with a statistical zero-knowledge proof whose communication complexity is better than k·m (which is the complexity of the trivial solution of executing the original protocol independently on each input). In a recent work, Kaslasi et al. (TCC, 2020) constructed such a batch verification protocol for any problem having a non-interactive SZK (NISZK) proof-system. Two drawbacks of their result are that their protocol is private-coin and is only zero-knowledge with respect to the honest verifier. In this work, we eliminate these two drawbacks by constructing a public-coin malicious-verifier SZK protocol for batch verification of NISZK. Similarly to the aforementioned prior work, the communication complexity of our protocol is (k + poly(m)) · polylog(k, m).