
    Placing Conditional Disclosure of Secrets in the Communication Complexity Universe

In the conditional disclosure of secrets (CDS) problem (Gertner et al., J. Comput. Syst. Sci., 2000), Alice and Bob, who hold $n$-bit inputs $x$ and $y$ respectively, wish to release a common secret $z$ to Carol (who knows both $x$ and $y$) if and only if the input $(x,y)$ satisfies some predefined predicate $f$. Alice and Bob are each allowed to send a single message to Carol, which may depend on their inputs and some shared randomness, and the goal is to minimize the communication complexity while providing information-theoretic security. Despite the growing interest in this model, very few lower bounds are known. In this paper, we relate the CDS complexity of a predicate $f$ to its communication complexity under various communication games. For several basic predicates our results yield tight, or almost tight, lower bounds of $\Omega(n)$ or $\Omega(n^{1-\epsilon})$, providing an exponential improvement over previous logarithmic lower bounds. We also define new communication complexity classes that correspond to different variants of the CDS model and study the relations between them and their complements. Notably, we show that allowing for imperfect correctness can significantly reduce communication, a seemingly new phenomenon in the context of information-theoretic cryptography. Finally, our results show that proving explicit super-logarithmic lower bounds for imperfect CDS protocols is a necessary step towards proving explicit lower bounds against the class $\mathsf{AM}$, or even $\mathsf{AM} \cap \mathsf{coAM}$, a well-known open problem in the theory of communication complexity. Thus, imperfect CDS forms a new minimal class placed just beyond the boundaries of the "civilized" part of the communication complexity world for which explicit lower bounds are known.
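
As background for the model (our sketch, not a construction from the paper): the textbook one-round CDS protocol for the equality predicate shows how shared randomness gates the secret on the predicate. The field size and the encoding below are illustrative choices.

```python
import secrets

# A minimal sketch of a textbook one-round CDS protocol for the
# EQUALITY predicate f(x, y) = [x == y], over a prime field.
# Standard background for the CDS model, not a construction from
# the paper; the field size p and the encoding are illustrative.

P = 2**61 - 1  # a Mersenne prime, large enough to embed the inputs

def shared_randomness():
    """Alice and Bob agree on (a, b) in advance; Carol never sees them."""
    a = secrets.randbelow(P - 1) + 1  # uniform nonzero field element
    b = secrets.randbelow(P)
    return a, b

def alice_message(x, rand):
    a, b = rand
    return (a * x + b) % P            # masks x with the shared pad b

def bob_message(y, z, rand):
    a, b = rand
    return (a * y + b + z) % P        # same pad, plus the secret z

def carol_reconstruct(x, y, m_alice, m_bob):
    """Carol knows x and y; she recovers z only when x == y."""
    if x != y:
        return None  # m_bob - m_alice = a*(y-x) + z, masked by random a
    return (m_bob - m_alice) % P

# Usage: the secret is revealed iff the predicate holds.
rand = shared_randomness()
z = 42
m_a = alice_message(7, rand)
m_b = bob_message(7, z, rand)
assert carol_reconstruct(7, 7, m_a, m_b) == z
```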

    Collision-Resistance from Multi-Collision-Resistance

Collision-resistant hash functions (CRH) are a fundamental and ubiquitous cryptographic primitive. Several recent works have studied a relaxation of CRH called t-way multi-collision-resistant hash functions (t-MCRH). These are families of functions for which it is computationally hard to find a t-way collision, even though such collisions are abundant (and even (t-1)-way collisions may be easy to find). The case of t=2 corresponds to standard CRH, but it is natural to study t-MCRH for larger values of t. Multi-collision-resistance seems to be a qualitatively weaker property than standard collision-resistance. Nevertheless, in this work we show a non-blackbox transformation of any moderately shrinking t-MCRH, for t in {2,4}, into an (infinitely often secure) CRH. This transformation is non-constructive: we can prove the existence of a CRH but cannot explicitly point to a construction. Our result partially extends to larger values of t. In particular, we show that for suitable values of t > t', we can transform a t-MCRH into a t'-MCRH, at the cost of reducing the shrinkage of the resulting hash function family and settling for infinitely-often security. This result utilizes the list-decodability properties of Reed-Solomon codes.
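
To see why such collisions are abundant for any shrinking function, here is a toy brute-force search (ours, not from the paper); by pigeonhole, any function compressing more than (t-1)·2^16 inputs into 2^16 outputs must have a t-way collision, and for a random-looking hash a linear scan finds one quickly. The 16-bit truncated SHA-256 below is of course not an MCRH.

```python
import hashlib
from collections import defaultdict

# Toy illustration of the abundance of t-way collisions for shrinking
# functions. Purely illustrative parameters; not from the paper.

def toy_hash(x: int) -> int:
    digest = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:2], "big")   # truncate to 16 bits

def find_t_collision(t: int):
    buckets = defaultdict(list)
    x = 0
    while True:
        h = toy_hash(x)
        buckets[h].append(x)
        if len(buckets[h]) == t:
            return buckets[h]                  # t distinct inputs, one output
        x += 1

three_way = find_t_collision(3)
assert len({toy_hash(x) for x in three_way}) == 1
print("3-way collision:", three_way)
```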

    Cryptography from Information Loss

Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is "lossy" reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into "useful" hardness, namely cryptography. Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction $C$ is $t$-lossy if, for any distribution $X$ over its inputs, the mutual information satisfies $I(X; C(X)) \leq t$. Our treatment generalizes a variety of seemingly related but distinct notions, such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006). We then proceed to show several consequences of lossy reductions:

1. We say that a language $L$ has an $f$-reduction to a language $L'$ for a Boolean function $f$ if there is a (randomized) polynomial-time algorithm $C$ that takes an $m$-tuple of strings $X = (x_1, \ldots, x_m)$, with each $x_i \in \{0,1\}^n$, and outputs a string $z$ such that, with high probability, $L'(z) = f(L(x_1), L(x_2), \ldots, L(x_m))$. Suppose a language $L$ has an $f$-reduction $C$ to $L'$ that is $t$-lossy. Our first result is that one-way functions exist if $L$ is worst-case hard and one of the following conditions holds:
- $f$ is the OR function, $t \leq m/100$, and $L'$ is the same as $L$;
- $f$ is the Majority function, and $t \leq m/100$;
- $f$ is the OR function, $t \leq O(m \log n)$, and the reduction has no error.
This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which yield only auxiliary-input one-way functions.

2. Our second result concerns the stronger notion of $t$-compressing $f$-reductions, i.e., reductions that output only $t$ bits. We show that if there is an average-case hard language $L$ that has a $t$-compressing Majority reduction to some language for $t = m/100$, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses).

Along the way, we define a non-standard, one-sided notion of average-case hardness, which is the notion used in the second result above and may be of independent interest.
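
To make the lossiness measure concrete, here is a small worked check (ours, not from the paper): for a deterministic reduction $C$, $I(X; C(X)) = H(C(X))$, so the information the $m$-bit OR retains about a uniform input can be computed exactly by brute force.

```python
from collections import Counter
from itertools import product
from math import log2

# Worked example of the lossiness measure: for deterministic C,
# I(X; C(X)) = H(C(X)) since H(C(X)|X) = 0. We check how few bits
# the m-bit OR retains about a uniform input.

def mutual_information_deterministic(inputs, C):
    """I(X; C(X)) for X uniform over `inputs` and deterministic C."""
    n = len(inputs)
    out_counts = Counter(C(x) for x in inputs)
    return -sum((c / n) * log2(c / n) for c in out_counts.values())

m = 10
cube = list(product([0, 1], repeat=m))
or_info = mutual_information_deterministic(cube, lambda x: int(any(x)))
id_info = mutual_information_deterministic(cube, tuple)

print(f"I(X; OR(X)) = {or_info:.4f} bits")   # about 0.0112 bits: very lossy
print(f"I(X; X)     = {id_info:.4f} bits")   # m = 10 bits: lossless
```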

    Fine-grained Cryptography

Fine-grained cryptographic primitives are ones that are secure against adversaries with a-priori bounded polynomial resources (time, space or parallel-time), where the honest algorithms use fewer resources than the adversaries they are designed to fool. Such primitives were previously studied in the context of time-bounded adversaries (Merkle, CACM 1978), space-bounded adversaries (Cachin and Maurer, CRYPTO 1997) and parallel-time-bounded adversaries (Håstad, IPL 1987). Our goal is to show unconditional security of these constructions when possible, or to base their security on widely believed separations of worst-case complexity classes. We show:

$\mathsf{NC}^1$-cryptography: Under the assumption that $\mathsf{NC}^1 \neq \oplus\mathsf{L}/\mathsf{poly}$, we construct one-way functions, pseudo-random generators (with sub-linear stretch), collision-resistant hash functions and, most importantly, public-key encryption schemes, all computable in $\mathsf{NC}^1$ and secure against all $\mathsf{NC}^1$ circuits. Our results rely heavily on the notion of randomized encodings pioneered by Applebaum, Ishai and Kushilevitz, and crucially make non-black-box use of randomized encodings for logspace classes.

$\mathsf{AC}^0$-cryptography: We construct (unconditionally secure) pseudo-random generators with arbitrary polynomial stretch, weak pseudo-random functions, secret-key encryption and, perhaps most interestingly, collision-resistant hash functions, computable in $\mathsf{AC}^0$ and secure against all $\mathsf{AC}^0$ circuits. Previously, one-way permutations and pseudo-random generators (with linear stretch) computable in $\mathsf{AC}^0$ and secure against $\mathsf{AC}^0$ circuits were known from the works of Håstad and Braverman.
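
For background on the fine-grained paradigm, here is a toy rendering (ours, not a construction from this paper) of the time-bounded example the abstract cites as (Merkle, CACM 1978): honest parties do roughly $N$ work each, while an eavesdropper who sees only the puzzles and the announced identifier must solve about $N/2$ puzzles, i.e. roughly $N^2$ work.

```python
import hashlib, secrets

# Toy Merkle puzzles; tiny, purely illustrative parameters.
N = 400  # number of puzzles = per-puzzle brute-force cost

def crack(puzzle: bytes) -> bytes:
    """Brute-force the N-bounded pad hiding a puzzle's payload."""
    for pad in range(N):
        key = hashlib.sha256(b"pad" + pad.to_bytes(4, "big")).digest()
        payload = bytes(a ^ b for a, b in zip(puzzle, key))
        if payload.startswith(b"OKOK"):        # marker validates the guess
            return payload
    raise ValueError("not solvable within the N bound")

def make_puzzle(ident: bytes, secret_key: bytes) -> bytes:
    pad = secrets.randbelow(N)
    key = hashlib.sha256(b"pad" + pad.to_bytes(4, "big")).digest()
    payload = b"OKOK" + ident + secret_key
    return bytes(a ^ b for a, b in zip(payload, key))

# Alice: N puzzles, each hiding a random (ident -> key) pair, shuffled.
table = {secrets.token_bytes(8): secrets.token_bytes(8) for _ in range(N)}
puzzles = [make_puzzle(i, k) for i, k in table.items()]
secrets.SystemRandom().shuffle(puzzles)

# Bob: solve ONE random puzzle (~N work), announce only the ident.
payload = crack(puzzles[secrets.randbelow(len(puzzles))])
ident, bob_key = payload[4:12], payload[12:20]

assert table[ident] == bob_key  # Alice looks the key up by ident
```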

The Planted $k$-SUM Problem: Algorithms, Lower Bounds, Hardness Amplification, and Cryptography

In the average-case $k$-SUM problem, given $r$ integers chosen uniformly at random from $\{0,\ldots,M-1\}$, the objective is to find a set of $k$ numbers that sum to $0$ modulo $M$ (this set is called a solution). In the related $k$-XOR problem, given $r$ uniformly random Boolean vectors of length $\log M$, the objective is to find a set of $k$ of them whose bitwise-XOR is the all-zero vector. Both of these problems have widespread applications in the study of fine-grained complexity and cryptanalysis. The feasibility and complexity of these problems depend on the relative values of $k$, $r$, and $M$. The dense regime of $M \leq r^k$, where solutions exist with high probability, is quite well-understood, and we have several non-trivial algorithms and hardness conjectures here. Much less is known about the sparse regime of $M \gg r^k$, where solutions are unlikely to exist. The best answers we have for many fundamental questions here are limited to whatever carries over from the dense or worst-case settings.

We study the planted $k$-SUM and $k$-XOR problems in the sparse regime. In these problems, a random solution is planted in a randomly generated instance and has to be recovered. As $M$ increases past $r^k$, these planted solutions tend to be the only solutions with increasing probability, potentially becoming easier to find. We show several results about the complexity and applications of these problems.

Conditional Lower Bounds. Assuming established conjectures about the hardness of average-case (non-planted) $k$-SUM when $M = r^k$, we show non-trivial lower bounds on the running time of algorithms for planted $k$-SUM when $r^k \leq M \leq r^{2k}$. We show the same for $k$-XOR as well.

Search-to-Decision Reduction. For any $M > r^k$, suppose there is an algorithm running in time $T$ that can distinguish between a random $k$-SUM instance and a random instance with a planted solution, with success probability $1-o(1)$. Then, for the same $M$, there is an algorithm running in time $\tilde{O}(T)$ that solves planted $k$-SUM with constant probability. The same holds for $k$-XOR as well.

Hardness Amplification. For any $M \geq r^k$, if an algorithm running in time $T$ solves planted $k$-XOR with success probability $\Omega(1/\mathrm{polylog}(r))$, then there is an algorithm running in time $\tilde{O}(T)$ that solves it with probability $1-o(1)$. We show this by constructing a rapidly mixing random walk over $k$-XOR instances that preserves the planted solution.

Cryptography. For some $M \leq 2^{\mathrm{polylog}(r)}$, the hardness of the $k$-XOR problem can be used to construct public-key encryption (PKE), assuming that the Learning Parity with Noise (LPN) problem with constant noise rate is hard for $2^{n^{0.01}}$-time algorithms. Previous constructions of PKE from LPN needed either a noise rate of $O(1/\sqrt{n})$ or hardness against $2^{n^{0.5}}$-time algorithms.

Algorithms. For any $M \geq 2^{r^2}$, there is a constant $c$ (independent of $k$) and an algorithm running in time $r^c$ that, for any $k$, solves planted $k$-SUM with success probability $\Omega(1/8^k)$. We get this by showing an average-case reduction from planted $k$-SUM to the Subset Sum problem. For $r^k \leq M \ll 2^{r^2}$, the best known algorithms remain the worst-case $k$-SUM algorithms, which run in time $r^{\lceil k/2 \rceil - o(1)}$.
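
A minimal sketch of the planted setup (ours, with toy parameters far below the regimes studied in the paper): plant $k$ values summing to $0$ modulo $M$ among $r$ uniform ones, then recover a solution by brute force.

```python
import random
from itertools import combinations

# Toy planted k-SUM: generate an instance with a planted solution and
# recover one by brute force. Illustrative only; the paper's parameter
# regimes and algorithms are far beyond this.

def planted_ksum_instance(r, k, M):
    """r values mod M, k of which are planted to sum to 0 mod M."""
    values = [random.randrange(M) for _ in range(r - k)]
    planted = [random.randrange(M) for _ in range(k - 1)]
    planted.append((-sum(planted)) % M)    # forces the planted sum to 0
    values += planted
    random.shuffle(values)
    return values

def solve_ksum(values, k, M):
    """Brute force in time ~r^k; meet-in-the-middle gets ~r^(k/2)."""
    for combo in combinations(values, k):
        if sum(combo) % M == 0:
            return combo
    return None

r, k, M = 30, 3, 10**9   # sparse toy regime: M >> r^k = 27000
inst = planted_ksum_instance(r, k, M)
sol = solve_ksum(inst, k, M)
assert sol is not None and sum(sol) % M == 0
print("recovered solution:", sol)
```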

    Public-Coin Statistical Zero-Knowledge Batch Verification against Malicious Verifiers

Suppose that a problem $\Pi$ has a statistical zero-knowledge (SZK) proof with communication complexity $m$. The question of batch verification for SZK asks whether one can prove that $k$ instances $x_1,\ldots,x_k$ all belong to $\Pi$ with a statistical zero-knowledge proof whose communication complexity is better than $k \cdot m$ (which is the complexity of the trivial solution of executing the original protocol independently on each input). In a recent work, Kaslasi et al. (TCC, 2020) constructed such a batch verification protocol for any problem having a non-interactive SZK (NISZK) proof-system. Two drawbacks of their result are that their protocol is private-coin and is only zero-knowledge with respect to the honest verifier. In this work, we eliminate these two drawbacks by constructing a public-coin malicious-verifier SZK protocol for batch verification of NISZK. Similarly to the aforementioned prior work, the communication complexity of our protocol is $(k + \mathrm{poly}(m)) \cdot \mathrm{polylog}(k, m)$.

    Average-Case Fine-Grained Hardness

We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL '94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems, namely Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP), in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require $n^{2-o(1)}$ time to compute on average, and that of APSP gives us a function that requires $n^{3-o(1)}$ time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained one-way functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable, in a sense similar to that of (Naor, CRYPTO '03).
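
The average-case hard function obtained from OV in this line of work is a low-degree polynomial that counts orthogonal pairs over a large prime field. The following sketch is our simplified rendering (function names and parameters are ours); the paper's exact formulation may differ in details.

```python
import random

# The degree-2d polynomial over F_p counting orthogonal pairs among two
# lists of d-dimensional 0/1 vectors. On Boolean inputs it solves
# counting-OV; evaluating it on random field elements is the conjectured
# average-case hard task.

P = 2**61 - 1  # prime much larger than the maximum count n^2

def f_ov(U, V, p=P):
    """F(U, V) = sum_{i,j} prod_l (1 - U[i][l] * V[j][l]) mod p."""
    total = 0
    for u in U:
        for v in V:
            term = 1
            for ul, vl in zip(u, v):
                term = term * (1 - ul * vl) % p
            total = (total + term) % p
    return total

n, d = 8, 6
U = [[random.randint(0, 1) for _ in range(d)] for _ in range(n)]
V = [[random.randint(0, 1) for _ in range(d)] for _ in range(n)]

# On 0/1 inputs the polynomial counts orthogonal pairs exactly:
count = sum(all(ul * vl == 0 for ul, vl in zip(u, v)) for u in U for v in V)
assert f_ov(U, V) == count % P
```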

    Proofs of Useful Work

We give Proofs of Work (PoWs) whose hardness is based on a wide array of computational problems, including Orthogonal Vectors, 3SUM, All-Pairs Shortest Path, and any problem that reduces to them (this includes deciding any graph property that is expressible in first-order logic). This results in PoWs whose completion does not waste energy but instead is useful for the solution of computational problems of practical interest. The PoWs that we propose are based on delegating the evaluation of low-degree polynomials originating from the study of average-case fine-grained complexity. We prove that, beyond being hard on average (based on worst-case hardness assumptions), the task of evaluating our polynomials cannot be amortized across multiple instances. For applications such as Bitcoin, which uses PoWs on a massive scale, a huge amount of energy is typically wasted; we give a framework that can utilize such otherwise wasteful work. Note: an updated version of this paper is available at https://eprint.iacr.org/2018/559. The update accommodates the fact (pointed out by anonymous reviewers) that the definition of Proof of Useful Work in this paper is already satisfied by a generic naive construction.

    Proofs of Work from Worst-Case Assumptions

We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of (Ball et al., STOC '17), which presents PoWs based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. Those, however, were presented as a 'proof of concept' of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs could not be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold, and further show that the PoWs' structure can be exploited in ways previous heuristic PoWs could not be. This yields full PoWs that are provably hard from worst-case assumptions (previously, PoWs were based either on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS '16)) while still retaining significant structure that enables extra properties. Namely, we show that the PoWs of (Ball et al., STOC '17) can be modified to have much faster verification time, can be proved in zero knowledge, and more. Finally, as our PoWs are based on evaluating low-degree polynomials originating from average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. For our context, this implies the required non-amortizability of our PoWs.
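
Fast verification of claims about such polynomial evaluations is typically achieved with sumcheck-style interaction (Lund, Fortnow, Karloff, and Nisan). The toy below runs the classic sumcheck protocol with a brute-force prover; it does not attempt the efficiency engineering of the papers' actual protocols, and the example polynomial is ours.

```python
import random
from itertools import product

# Classic sumcheck for the claim S = sum over {0,1}^n of g(x), over F_p.
# The prover here is brute force (exponential); only the protocol's
# round structure and the verifier's checks are the point.

P = 2**61 - 1  # field modulus
DEG = 2        # max degree of g in each variable

def g(x):      # example low-degree polynomial (ours)
    return (x[0] * x[0] + 3 * x[0] * x[1] + x[2]) % P

def lagrange_eval(points, at):
    """Evaluate the unique low-degree polynomial through `points` at `at`."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (at - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def prover_round(fixed, n):
    """Send g_i(X) = sum over Boolean suffixes, as DEG+1 evaluations."""
    i = len(fixed)
    evals = []
    for x in range(DEG + 1):
        s = sum(g(fixed + [x] + list(tail))
                for tail in product([0, 1], repeat=n - i - 1))
        evals.append((x, s % P))
    return evals

def sumcheck(n):
    claim = sum(g(list(b)) for b in product([0, 1], repeat=n)) % P
    fixed, running = [], claim
    for _ in range(n):
        msg = prover_round(fixed, n)
        # Verifier: check consistency, then fix a random challenge point.
        assert (lagrange_eval(msg, 0) + lagrange_eval(msg, 1)) % P == running
        r = random.randrange(P)
        running = lagrange_eval(msg, r)
        fixed.append(r)
    assert running == g(fixed)  # final spot-check of g itself
    return claim

print("verified sum:", sumcheck(3))
```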

    A study of efficient secret sharing

S.M. thesis by Prashant Vasudevan, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.

We show a general connection between various types of statistical zero-knowledge (SZK) proof systems and (unconditionally secure) secret sharing schemes. Viewed through the SZK lens, we obtain several new results on secret sharing:

Characterizations: We obtain an almost-characterization of access structures for which there are secret-sharing schemes with an efficient sharing algorithm (but not necessarily efficient reconstruction). In particular, we show that for every language $L \in \mathsf{SZK}_L$ (the class of languages that have statistical zero-knowledge proofs with log-space verifiers and simulators), a (monotonized) access structure associated with $L$ has such a secret-sharing scheme. Conversely, we show that such secret-sharing schemes can exist only for languages in $\mathsf{SZK}$.

Constructions: We show new constructions of secret-sharing schemes with efficient sharing and reconstruction for access structures that are in $\mathsf{P}$ but are not known to be in $\mathsf{NC}$, namely Bounded-Degree Graph Isomorphism and constant-dimensional lattice problems. In particular, this gives us the first combinatorial access structure that is conjectured to be outside $\mathsf{NC}$ but has an efficient secret-sharing scheme. Previous such constructions (Beimel and Ishai; CCC 2001) were algebraic and number-theoretic in nature.

Limitations: We show that universally-efficient secret-sharing schemes, where the complexity of computing the shares is a polynomial independent of the complexity of deciding the access structure, cannot exist for all (monotone languages in) $\mathsf{P}$, unless there is a polynomial $q$ such that $\mathsf{P} \subseteq \mathsf{DSPACE}(q(n))$.
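
For background, the simplest access structure is the $t$-out-of-$n$ threshold, for which Shamir's scheme gives efficient sharing and reconstruction. The following sketch is standard textbook material, not a construction from the thesis.

```python
import random

# Shamir secret sharing for the t-out-of-n threshold access structure.
# Shares and the secret live in F_p; any t shares reconstruct the
# secret, while fewer reveal nothing (information-theoretically).

P = 2**61 - 1  # prime field

def share(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(random.sample(shares, 3)) == 123456789
```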