
    On Zero-Testable Homomorphic Encryption and Publicly Verifiable Non-Interactive Arguments

    We define and study zero-testable homomorphic encryption (ZTHE) -- a semantically secure, somewhat homomorphic encryption scheme equipped with a weak zero test that can identify trivial zeros: ciphertexts that result from homomorphically evaluating an arithmetic circuit computing the zero polynomial over the integers. This is a relaxation of the (strong) zero test provided by the notion of graded encodings, which identifies all encodings of zero. We show that ZTHE can suffice for powerful applications. Based on any ZTHE scheme that additionally satisfies correctness on adversarial ciphertexts and multi-key homomorphism, we construct publicly verifiable non-interactive arguments for delegating computation. Such arguments were previously constructed from indistinguishability obfuscation or based on so-called knowledge assumptions. The arguments we construct are adaptively sound, based on an efficiently falsifiable assumption, and only make black-box use of the underlying cryptographic primitives. We also show that a ZTHE scheme sufficient for our application can be constructed based on an efficiently falsifiable assumption over so-called clean graded encodings.
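
    As a rough illustration of the syntax (a sketch of ours, not the paper's construction; every name below is hypothetical), a ZTHE scheme can be viewed as the following interface, where the weak zero test is only guaranteed to recognize trivial zeros:

        # Hypothetical interface for zero-testable homomorphic encryption (ZTHE).
        # This only illustrates the syntax described in the abstract; it is not
        # the paper's construction, and all names are invented for exposition.
        from typing import Any, List, Tuple

        class ZTHE:
            def keygen(self, security_param: int) -> Tuple[Any, Any]:
                """Return (pk, sk); the scheme is semantically secure."""
                raise NotImplementedError

            def encrypt(self, pk: Any, m: int) -> Any:
                raise NotImplementedError

            def decrypt(self, sk: Any, ct: Any) -> int:
                raise NotImplementedError

            def eval(self, pk: Any, circuit: Any, cts: List[Any]) -> Any:
                """Homomorphically evaluate a low-degree arithmetic circuit
                (the scheme is only *somewhat* homomorphic)."""
                raise NotImplementedError

            def weak_zero_test(self, pk: Any, ct: Any) -> bool:
                """Must accept 'trivial zeros': ciphertexts produced by
                evaluating a circuit that computes the identically-zero
                polynomial over the integers. Unlike the strong zero test
                of graded encodings, it need not recognize every
                encryption of zero."""
                raise NotImplementedError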

    How to Delegate Computations Publicly

    We construct a delegation scheme for all polynomial-time computations. Our scheme is publicly verifiable and completely non-interactive in the common reference string (CRS) model. Our scheme is based on an efficiently falsifiable decisional assumption on groups with bilinear maps. Prior to this work, publicly verifiable non-interactive delegation schemes were only known under knowledge assumptions (or in the Random Oracle model) or under non-standard assumptions related to obfuscation or multilinear maps. We obtain our result in two steps. First, we construct a scheme with a long CRS (polynomial in the running time of the computation) by following the blueprint of Paneth and Rothblum (TCC 2017). Then we bootstrap this scheme to obtain a short CRS. Our bootstrapping theorem exploits the fact that our scheme can securely delegate certain non-deterministic computations.

    Incrementally Verifiable Computation via Incremental PCPs

    If I commission a long computation, how can I check that the result is correct without redoing the computation myself? This is the question that efficient verifiable computation deals with. In this work, we address the issue of verifying the computation as it unfolds. That is, at any intermediate point in the computation, I would like to see a proof that the current state is correct. Ideally, these proofs should be short, non-interactive, and easy to verify. In addition, the proof at each step should be generated efficiently by updating the previous proof, without recomputing the entire proof from scratch. This notion, known as incrementally verifiable computation, was introduced by Valiant [TCC 08] about a decade ago. Existing solutions follow the approach of recursive proof composition and can be based on strong and non-falsifiable cryptographic assumptions (so-called ``knowledge assumptions''). In this work, we present a new framework for constructing incrementally verifiable computation schemes in both the publicly verifiable and designated-verifier settings. Our designated-verifier scheme is based on somewhat homomorphic encryption (which can be based on Learning with Errors) and our publicly verifiable scheme is based on the notion of zero-testable homomorphic encryption, which can be constructed from ideal multi-linear maps [Paneth and Rothblum, TCC 17]. Our framework is anchored around the new notion of a probabilistically checkable proof (PCP) with incremental local updates. An incrementally updatable PCP proves the correctness of an ongoing computation, where after each computation step, the value of every symbol can be updated locally without reading any other symbol. This update results in a new PCP for the correctness of the next step in the computation. Our primary technical contribution is constructing such an incrementally updatable PCP. We show how to combine updatable PCPs with recently suggested (ordinary) verifiable computation schemes to obtain our results.
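
    To make the local-update property concrete, here is a schematic interface of ours for an incrementally updatable PCP (all names are hypothetical; the real construction is the paper's main technical contribution):

        # Schematic interface for a PCP with incremental local updates.
        # All names are hypothetical; this is not the paper's construction.
        from typing import List

        class IncrementallyUpdatablePCP:
            def __init__(self, initial_proof: List[int]):
                # pi proves "the computation is correct after step 0"
                self.pi = list(initial_proof)
                self.step = 0

            def update_symbol(self, i: int) -> int:
                """Return the new value of symbol i of the proof for step
                self.step + 1, computed from pi[i] alone -- a *local*
                update that reads no other symbol of the proof."""
                raise NotImplementedError

            def advance(self) -> None:
                """Turn the PCP for step t into a PCP for step t + 1 by
                updating each symbol independently."""
                self.pi = [self.update_symbol(i) for i in range(len(self.pi))]
                self.step += 1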

    On Publicly Verifiable Delegation From Standard Assumptions

    We construct a publicly verifiable non-interactive delegation scheme for log-space uniform bounded-depth computations in the common reference string (CRS) model, where the CRS is long (as long as the time it takes to do the computation). The soundness of our scheme relies on the assumption that there exists a group with a bilinear map such that, given group elements $g, h, h^t, h^{t^2}$, it is hard to output $g^a, g^b, g^c$ and $h^a, h^b, h^c$ such that $a \cdot t^2 + b \cdot t + c = 0$ but $a, b, c$ are not all zero. Previously, such a result was only known under knowledge assumptions (or in the Random Oracle model), or under non-standard assumptions related to obfuscation or zero-testable homomorphic encryption. We obtain our result by converting the interactive delegation scheme of Goldwasser, Kalai and Rothblum (J. ACM 2015) into a publicly verifiable non-interactive one. As a stepping stone, we give a publicly verifiable non-interactive version of the sum-check protocol of Lund, Fortnow, Karloff, and Nisan (J. ACM 1992).
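
    For intuition (an illustration of ours, not text from the paper): the relation in this assumption is exactly the kind of statement a bilinear map $e : G \times G \to G_T$ lets anyone check publicly. Writing $A = g^a$, $B = g^b$, $C = g^c$, bilinearity gives

        \[
          e(A, h^{t^2}) \cdot e(B, h^{t}) \cdot e(C, h)
            = e(g, h)^{a t^2 + b t + c},
        \]

    so the product of pairings equals the identity of $G_T$ precisely when $a \cdot t^2 + b \cdot t + c = 0$; the companion elements $h^a, h^b, h^c$ can likewise be checked for consistency via $e(g^a, h) = e(g, h^a)$, and so on.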

    Privately Constrained Testable Pseudorandom Functions

    Privately Constrained Pseudorandom Functions allow a PRF key to be delegated to some evaluator in a constrained manner, such that the key’s functionality is restricted with respect to some secret predicate. Variants of Privately Constrained Pseudorandom Functions have been applied to rich applications such as Broadcast Encryption and Secret-Key Functional Encryption. Recently, this primitive has also been instantiated from standard assumptions. We extend its functionality to a new tool we call Privately Constrained Testable Pseudorandom Functions. For any predicate C, the holder of a secret key sk can produce a delegatable key constrained on C, denoted sk[C]. Evaluations on inputs x produced using the constrained key differ from unconstrained evaluations according to the result of C(x). Given an output y evaluated using sk[C], the holder of the unconstrained key sk can verify whether the input x used to produce y satisfies the predicate C. That is, given y, they learn whether C(x) = 1 without needing to evaluate the predicate themselves, and without requiring the original input x. We define two inequivalent security models for this new primitive: a stronger indistinguishability-based definition and a weaker simulation-based definition. Under the indistinguishability-based definition, we show that the new primitive implies Designated-Verifier Non-Interactive Zero-Knowledge Arguments for NP in a black-box manner. Under the simulation-based definition, we construct a provably secure instantiation of the primitive from lattice assumptions. We leave the study of the gap between the definitions, and of techniques to reconcile it, as future work.
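
    Schematically, the syntax above corresponds to the following interface (a hypothetical sketch of ours; no concrete scheme is intended):

        # Hypothetical interface for a privately constrained testable PRF,
        # mirroring the syntax in the abstract; all names are invented.
        from typing import Any, Callable

        class PCTPRF:
            def keygen(self, security_param: int) -> Any:
                """Return the unconstrained secret key sk."""
                raise NotImplementedError

            def constrain(self, sk: Any, C: Callable[[bytes], bool]) -> Any:
                """Return sk[C], a key constrained on predicate C; sk[C]
                should hide C from its holder (private constraining)."""
                raise NotImplementedError

            def eval(self, key: Any, x: bytes) -> bytes:
                """Evaluate with either sk or sk[C]; the two evaluations
                agree or diverge on x depending on the value of C(x)."""
                raise NotImplementedError

            def test(self, sk: Any, y: bytes) -> bool:
                """Given an output y produced under sk[C], the holder of sk
                learns whether C(x) = 1 for the input x behind y, without
                seeing x and without evaluating C directly."""
                raise NotImplementedError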

    On Non-Black-Box Simulation and the Impossibility of Approximate Obfuscation

    The introduction of a non-black-box simulation technique by Barak (FOCS 2001) has been a major landmark in cryptography, breaking the previous barriers of black-box impossibility. Barak's technique has given rise to various powerful applications and it is a key component in all known protocols with non-black-box simulation. We present the first non-black-box simulation technique that does not rely on Barak's technique (or on nonstandard assumptions). Invoking this technique, we obtain new and improved protocols resilient to various resetting attacks. These improvements include weaker computational assumptions and better round complexity. A prominent feature of our technique is its compatibility with rewinding techniques from classic black-box zero-knowledge protocols. The combination of rewinding with non-black-box simulation has proven instrumental in coping with challenging goals such as simultaneously-resettable zero-knowledge, proofs of knowledge, and resettable security from one-way functions. While previous works required tailored modifications to Barak's technique, we give a general recipe for combining our technique with rewinding. This yields simplified resettable protocols in the above settings, as well as improvements in round complexity and required computational assumptions. The main ingredient in our technique is a new impossibility result for general program obfuscation. The results extend the impossibility result of Barak et al. (CRYPTO 2001) to the case of obfuscation with approximate functionality, thus settling a question left open by Barak et al. In the converse direction, we show a generic transformation from any resettably-sound zero-knowledge protocol to a family of functions that cannot be obfuscated.

    On the Impossibility of Approximate Obfuscation and Applications to Resettable Cryptography

    The traditional notion of program obfuscation requires that an obfuscation $\tilde{f}$ of a program $f$ computes the exact same function as $f$, but beyond that, the code of $\tilde{f}$ should not leak any information about $f$. This strong notion of virtual black-box security was shown by Barak et al. (CRYPTO 2001) to be impossible to achieve for certain unobfuscatable function families. The same work raised the question of approximate obfuscation, where $\tilde{f}$ is only required to approximate $f$; that is, $\tilde{f}$ only agrees with $f$ on some input distribution. We show that, assuming trapdoor permutations, there exist families of robust unobfuscatable functions for which even approximate obfuscation is impossible. That is, obfuscation is impossible even if $\tilde{f}$ only agrees with $f$ with probability slightly more than $\frac{1}{2}$, on a uniformly sampled input (below $\frac{1}{2}$-agreement, the function obfuscated by $\tilde{f}$ is not uniquely defined). Additionally, we show that, assuming only one-way functions, we can rule out approximate obfuscation where $\tilde{f}$ is not allowed to err, but may refuse to compute $f$ with probability close to $1$. We then demonstrate the power of robust unobfuscatable functions by exhibiting new implications for resettable protocols that have so far been out of our reach. Concretely, we obtain a new non-black-box simulation technique that reduces the assumptions required for resettably-sound zero-knowledge protocols to one-way functions, and also reduces round complexity. We also present a new, simplified construction of simultaneously resettable zero-knowledge protocols that does not rely on collision-resistant hashing. Finally, we construct a three-message simultaneously resettable witness-indistinguishable argument of knowledge (with a non-black-box knowledge extractor). Our constructions are based on a special kind of ``resettable slots'' that are useful for a non-black-box simulator, but not for a resetting prover.

    State of the Art Report: Verified Computation

    This report describes the state of the art in verifiable computation. The problem being solved is the following: The Verifiable Computation Problem. Suppose we have two computing agents. The first agent is the verifier, and the second agent is the prover. The verifier wants the prover to perform a computation. The verifier sends a description of the computation to the prover. Once the prover has completed the task, the prover returns the output to the verifier, together with a proof. The verifier can use this proof to check whether the prover computed the output correctly. The check is not required to verify the algorithm used in the computation; rather, it checks that the prover computed the output using the computation specified by the verifier. The effort required for the check should be much less than that required to perform the computation. This state-of-the-art report surveys 128 papers from the literature comprising more than 4,000 pages. Other papers and books were surveyed but were omitted. The papers surveyed were overwhelmingly mathematical. We have summarised the major concepts that form the foundations for verifiable computation. The report contains two main sections. The first, larger section covers the theoretical foundations for probabilistically checkable and zero-knowledge proofs. The second section contains a description of the current practice in verifiable computation. Two further reports will cover (i) military applications of verifiable computation and (ii) a collection of technical demonstrators. The first of these is intended to be read by those who want to know what applications are enabled by the current state of the art in verifiable computation. The second is for those who want to see practical tools and conduct experiments themselves.
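
    In code, the interaction the report studies has roughly this shape (a generic sketch of ours; every function below is a placeholder, not a particular scheme):

        # Generic shape of the verifiable-computation interaction.
        # All functions below are hypothetical placeholders.

        def prover_run(computation, x):
            """Perform the (expensive) computation and produce (output, proof)."""
            raise NotImplementedError

        def verifier_check(computation, x, output, proof) -> bool:
            """Cheap check: did the prover run the *specified* computation
            on x? It does not certify that the computation is the right
            algorithm for the verifier's underlying goal."""
            raise NotImplementedError

        def delegate(computation, x):
            output, proof = prover_run(computation, x)  # prover's heavy work
            # The verifier's check must cost far less than re-running
            # the computation itself.
            if not verifier_check(computation, x, output, proof):
                raise ValueError("proof rejected")
            return output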

    Verifiable Elections That Scale for Free

    In order to guarantee a fair and transparent voting process, electronic voting schemes must be verifiable. Most of the time, however, it is important that elections also be anonymous. The notion of a verifiable shuffle describes how to satisfy both properties at the same time: ballots are submitted to a public bulletin board in encrypted form, verifiably shuffled by several mix servers (thus guaranteeing anonymity), and then verifiably decrypted by an appropriate threshold decryption mechanism. To guarantee transparency, the intermediate shuffles and decryption results, together with proofs of their correctness, are posted on the bulletin board throughout this process. In this paper, we present a verifiable shuffle and threshold decryption scheme in which, for security parameter k, L voters, M mix servers, and N decryption servers, the proof that the end tally corresponds to the original encrypted ballots is only O(k(L + M + N)) bits long. Previous verifiable shuffle constructions had proofs of size O(kLM + kLN), which, for elections with thousands of voters, mix servers, and decryption servers, meant that verifying an election on an ordinary computer in a reasonable amount of time was out of the question. The linchpin of each construction is a controlled-malleable proof (cm-NIZK), which allows each server, in turn, to take a current set of ciphertexts and a proof that the computation done by other servers has proceeded correctly so far. After shuffling or partially decrypting these ciphertexts, the server can also update the proof of correctness, obtaining as a result a cumulative proof that the computation is correct so far. In order to verify the end result, it is therefore sufficient to verify just the proof produced by the last server.
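
    For a sense of scale (illustrative numbers of our choosing, ignoring hidden constants): with $k = 128$, $L = 10{,}000$ voters, and $M = N = 20$ servers,

        \[
          \underbrace{kLM + kLN}_{\text{previous work}}
            = 128 \cdot 10^4 \cdot 40 \approx 5 \times 10^7 \text{ bits},
          \qquad
          \underbrace{k(L + M + N)}_{\text{this work}}
            = 128 \cdot 10{,}040 \approx 1.3 \times 10^6 \text{ bits},
        \]

    a reduction of roughly $40\times$ that grows with the number of servers.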

    Fiat-Shamir From Simpler Assumptions

    We present two new protocols: (1) A succinct publicly verifiable non-interactive argument system for log-space uniform NC computations, under the assumption that any one of a broad class of fully homomorphic encryption (FHE) schemes has almost optimal security against polynomial-time adversaries. The class includes all FHE schemes in the literature that are based on the learning with errors (LWE) problem. (2) A non-interactive zero-knowledge argument system for NP in the common random string model, assuming almost optimal hardness of search-LWE against polynomial-time adversaries. Both results are obtained by applying the Fiat-Shamir transform with explicit, efficiently computable functions (specifically, correlation intractable functions) to certain classes of interactive proofs. We improve over prior work by reducing the security of these protocols to qualitatively weaker computational hardness assumptions. Along the way, we also show that the Fiat-Shamir transform can be soundly applied (in the plain model) to a richer class of protocols than was previously known.
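
    For reference, the Fiat-Shamir transform itself has this generic shape (our sketch; SHA-256 merely stands in for the explicit correlation-intractable functions of the abstract, and the prover/verifier callbacks are placeholders for an underlying public-coin protocol):

        # Generic Fiat-Shamir transform: the verifier's public-coin challenge
        # is replaced by an explicit hash of the transcript so far. SHA-256
        # here is a stand-in for the correlation-intractable function family;
        # the callbacks are hypothetical placeholders.
        import hashlib

        def fs_challenge(transcript: bytes, num_bits: int = 128) -> int:
            digest = hashlib.sha256(transcript).digest()
            return int.from_bytes(digest, "big") >> (256 - num_bits)

        def fs_prove(statement: bytes, first_message, respond):
            a = first_message(statement)      # prover's first message (bytes)
            e = fs_challenge(statement + a)   # challenge = hash of transcript
            z = respond(statement, a, e)      # prover's response
            return a, z                       # the non-interactive proof

        def fs_verify(statement: bytes, proof, accept) -> bool:
            a, z = proof
            e = fs_challenge(statement + a)   # recompute the same challenge
            return accept(statement, a, e, z)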