
    Output Compression, MPC, and iO for Turing Machines

    In this work, we study the fascinating notion of output-compressing randomized encodings for Turing Machines, in a shared randomness model. In this model, the encoder and decoder have access to a shared random string, and the efficiency requirement is that the size of the encoding be independent of the running time and output length of the Turing Machine on the given input, while the length of the shared random string is allowed to grow with the length of the output. We show how to construct output-compressing randomized encodings for Turing Machines in the shared randomness model, assuming iO for circuits and any assumption in the set {LWE, DDH, Nth Residuosity}. We then show interesting implications of the above result for basic feasibility questions in the areas of secure multiparty computation (MPC) and indistinguishability obfuscation (iO):

    1. Compact MPC for Turing Machines in the Random Oracle Model: In the context of MPC, we consider the following basic feasibility question: does there exist a malicious-secure MPC protocol for Turing Machines whose communication complexity is independent of the running time and output length of the Turing Machine when executed on the combined inputs of all parties? We call such a protocol a compact MPC protocol. Hubacek and Wichs [HW15] showed, via an incompressibility argument, that even for the restricted setting of circuits it is impossible to construct a malicious-secure two-party computation protocol in the plain model whose communication complexity is independent of the output length. In this work, we show how to evade this impossibility by compiling any (non-compact) MPC protocol in the plain model into a compact MPC protocol for Turing Machines in the Random Oracle Model, assuming output-compressing randomized encodings in the shared randomness model.

    2. Succinct iO for Turing Machines in the Shared Randomness Model: In all existing constructions of iO for Turing Machines, the size of the obfuscated program grows with a bound on the input length. In this work, we show how to construct an iO scheme for Turing Machines in the shared randomness model where the size of the obfuscated program is independent of a bound on the input length, assuming iO for circuits and any assumption in the set {LWE, DDH, Nth Residuosity}.
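
    The following toy Python sketch (all names hypothetical) illustrates only the syntax and the efficiency requirement of an output-compressing randomized encoding in the shared randomness model: the encoding is as small as the pair (M, x) regardless of M's running time or output length, while the shared random string may grow with the output. It provides no security whatsoever; the paper's actual construction relies on iO for circuits together with LWE, DDH, or Nth Residuosity.

    import secrets
    from typing import Callable, Tuple

    def setup(output_len_bound: int) -> bytes:
        """Shared random string; its length is allowed to grow with the output length."""
        return secrets.token_bytes(output_len_bound)

    def encode(crs: bytes, M: Callable[[bytes], bytes], x: bytes) -> Tuple[Callable[[bytes], bytes], bytes]:
        """Encoding whose size depends only on |M| and |x| (here, trivially the pair itself)."""
        return (M, x)

    def decode(crs: bytes, encoding: Tuple[Callable[[bytes], bytes], bytes]) -> bytes:
        """The decoder may run for as long as M runs on x."""
        M, x = encoding
        return M(x)

    # Example: a machine that repeats its input 1000 times; the encoding stays
    # small even though the output (and the shared random string) is long.
    M = lambda x: x * 1000
    crs = setup(output_len_bound=4000)
    assert decode(crs, encode(crs, M, b"data")) == b"data" * 1000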

    Laconic Function Evaluation for Turing Machines

    Laconic function evaluation (LFE) allows Alice to compress a large circuit C into a small digest d. Given Alice's digest, Bob can encrypt some input x under d in a way that enables Alice to recover C(x), without learning anything beyond that. The scheme is said to be laconic if the size of d, the runtime of the encryption algorithm, and the size of the ciphertext are all sublinear in the size of C. Until now, all known LFE constructions have ciphertexts whose size depends on the depth of the circuit C, akin to the limitation of levelled homomorphic encryption. In this work we close this gap and present the first LFE scheme (for Turing machines) with asymptotically optimal parameters. Our scheme assumes the existence of indistinguishability obfuscation and somewhere statistically binding hash functions. As further contributions, we show how our scheme enables a wide range of new applications, including two previously unknown constructions:
    • Non-interactive zero-knowledge (NIZK) proofs with optimal prover complexity.
    • Witness encryption and attribute-based encryption (ABE) for Turing machines from falsifiable assumptions.
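
    As a rough illustration of the interface described above (hypothetical names, not the paper's scheme), the following Python stand-in shows the LFE syntax and the laconic size requirements; it offers no input privacy, since the "ciphertext" is simply x in the clear. The paper achieves the same interface securely, with asymptotically optimal parameters, from indistinguishability obfuscation and somewhere statistically binding hashing.

    import hashlib
    from typing import Callable

    def compress(circuit_description: bytes) -> bytes:
        """Alice's digest: constant size (32 bytes), hence sublinear in |C|."""
        return hashlib.sha256(circuit_description).digest()

    def encrypt(digest: bytes, x: bytes) -> bytes:
        """Bob's 'ciphertext' under the digest (insecure placeholder: x in the clear)."""
        return x

    def decrypt(C: Callable[[bytes], bytes], digest: bytes, ct: bytes) -> bytes:
        """Alice, who knows C, recovers C(x) and should learn nothing else about x."""
        return C(ct)

    C = lambda x: bytes(reversed(x))        # Alice's (possibly huge) circuit
    d = compress(b"<description of C>")     # short digest published by Alice
    ct = encrypt(d, b"bobs input")          # Bob's short encryption of his input
    assert decrypt(C, d, ct) == bytes(reversed(b"bobs input"))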

    Oblivious Turing Machine

    In the ever-evolving landscape of Information Technologies, private decentralized computing on an honest yet curious server has emerged as a prominent paradigm. While numerous schemes exist to safeguard data during computation, the focus has primarily been on protecting the confidentiality of the data itself, often overlooking the potential information leakage arising from the function evaluated by the server. Recognizing this gap, this article aims to address the issue by presenting and implementing an innovative solution for ensuring the privacy of both the data and the program. We introduce a novel approach that combines the power of Fully Homomorphic Encryption with the Turing Machine model, resulting in the first fully secure, practical, non-interactive oblivious Turing Machine. Our Oblivious Turing Machine construction is based on only three hypotheses: the hardness of the Ring Learning With Errors problem, the ability to homomorphically evaluate non-linear functions, and the capacity to blindly rotate elements of a data structure. Based only on these three assumptions, we propose an implementation of an Oblivious Turing Machine relying on the TFHE cryptosystem and present some implementation results.
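
    The cleartext Python sketch below (purely illustrative, not the paper's construction) mimics one oblivious Turing Machine step: the transition entry is selected with a branch-free multiplexer that touches every table entry regardless of the current state and symbol. This data-independent access pattern is the kind of selection one would evaluate homomorphically (e.g., via TFHE-style blind rotation over an encrypted transition table) so that neither the tape contents nor the program leaks.

    from typing import List, Tuple

    # transition[state][symbol] = (next_state, symbol_to_write, head_move in {-1, +1})
    Transition = Tuple[int, int, int]

    def mux_step(transition: List[List[Transition]], state: int, symbol: int) -> Transition:
        """Select the transition obliviously: every entry is read, exactly one is kept."""
        next_state = write = move = 0
        for s, row in enumerate(transition):
            for a, (ns, w, m) in enumerate(row):
                sel = int(s == state) * int(a == symbol)   # 0/1 selector, computed for all entries
                next_state += sel * ns
                write += sel * w
                move += sel * m
        return next_state, write, move

    # Tiny 1-state machine that flips bits and moves right.
    T = [[(0, 1, +1), (0, 0, +1)]]
    tape, head, state = [1, 0, 1], 0, 0
    for _ in range(3):
        state, written, move = mux_step(T, state, tape[head])
        tape[head] = written
        head += move
    assert tape == [0, 1, 0]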

    On Foundations of Protecting Computations

    Information technology systems have become indispensable to uphold our way of living, our economy, and our safety. Failure of these systems can have devastating effects. Consequently, securing these systems against malicious intentions deserves our utmost attention. Cryptography provides the necessary foundations for that purpose. In particular, it provides a set of building blocks which allow us to secure larger information systems. Furthermore, cryptography develops concepts and techniques towards realizing these building blocks. The protection of computations is one invaluable concept for cryptography which paves the way towards realizing a multitude of cryptographic tools. In this thesis, we contribute to this concept of protecting computations in several ways.

    Protecting computations of probabilistic programs. An indistinguishability obfuscator (IO) compiles (deterministic) code such that it becomes provably unintelligible. This can be viewed as the ultimate way to protect (deterministic) computations. Due to very recent research, such obfuscators enjoy plausible candidate constructions. In certain settings, however, it is necessary to protect probabilistic computations. The only known construction of an obfuscator for probabilistic programs is due to Canetti, Lin, Tessaro, and Vaikuntanathan (TCC, 2015) and requires an indistinguishability obfuscator which satisfies extreme security guarantees. We improve this construction and thereby reduce the requirements on the security of the underlying indistinguishability obfuscator. (Agrikola, Couteau, and Hofheinz, PKC, 2020)

    Protecting computations in cryptographic groups. To facilitate the analysis of building blocks which are based on cryptographic groups, these groups are often overidealized such that computations in the group are protected from the outside. Using such overidealizations allows one to prove secure building blocks which are sometimes beyond the reach of standard model techniques. However, these overidealizations are subject to certain impossibility results. Recently, Fuchsbauer, Kiltz, and Loss (CRYPTO, 2018) introduced the algebraic group model (AGM) as a relaxation which is closer to the standard model but in several aspects preserves the power of said overidealizations. However, their model still suffers from implausibilities. We develop a framework which allows us to transport several security proofs from the AGM into the standard model, thereby evading the above implausibility results, and instantiate this framework using an indistinguishability obfuscator. (Agrikola, Hofheinz, and Kastner, EUROCRYPT, 2020)

    Protecting computations using compression. Perfect compression algorithms have the property that the compressed distribution is truly random, leaving no room for any further compression. This property is invaluable for several cryptographic applications such as “honey encryption” or password-authenticated key exchange. However, perfect compression algorithms only exist for a very small number of distributions. We relax the notion of compression and rigorously study the resulting notion, which we call “pseudorandom encodings”. As a result, we identify various surprising connections between seemingly unrelated areas of cryptography. In particular, we derive novel results for adaptively secure multi-party computation, which allows for protecting computations in distributed settings. Furthermore, we instantiate the weakest version of pseudorandom encodings which suffices for adaptively secure multi-party computation using an indistinguishability obfuscator. (Agrikola, Couteau, Ishai, Jarecki, and Sahai, TCC, 2020)
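
    As a rough sketch of the central notion in symbols (our paraphrase of the abstract; the thesis's formalization may differ), a pseudorandom encoding for a sampleable distribution X is a pair of efficient algorithms (E, D) such that decoding succeeds and encodings look uniformly random:

    \Pr_{x \leftarrow X,\ r}\left[ D(E(x; r)) = x \right] \approx 1
    \qquad \text{and} \qquad
    E(X; r) \ \approx_c \ U_\ell ,

    where U_\ell denotes the uniform distribution over strings of the encoding length. Perfect compression corresponds to the second condition holding exactly; the relaxation studied here only requires indistinguishability from uniform.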

    Succinct Garbling Schemes from Functional Encryption through a Local Simulation Paradigm

    We study a simulation paradigm, referred to as local simulation, in garbling schemes. This paradigm captures simulation proof strategies in which the simulator consists of many local simulators that generate different blocks of the garbled circuit. A useful property of such a simulation strategy is that only a few of these local simulators depend on the input, whereas the rest of the local simulators depend only on the circuit. We formalize this notion by defining locally simulatable garbling schemes. By suitably realizing this notion, we give a new construction of succinct garbling schemes for Turing machines assuming the polynomial hardness of compact functional encryption and standard assumptions (such as either CDH or LWE). Prior constructions of succinct garbling schemes either assumed sub-exponential hardness of compact functional encryption or were designed only for small-space Turing machines. We also show that a variant of locally simulatable garbling schemes can be used to generically obtain adaptively secure garbling schemes for circuits. All prior constructions of adaptively secure garbling that use somewhere equivocal encryption can be seen as instantiations of our construction.
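
    A minimal structural sketch of the paradigm in Python (hypothetical names; no actual garbling is performed): the simulated garbled circuit is assembled block by block, where only a designated few local simulators may look at the input/output, and every other block is produced from the circuit description alone.

    from typing import Callable, Dict, List

    def simulate(blocks: List[str],
                 input_dependent: Dict[int, Callable[[bytes], bytes]],
                 circuit_only: Callable[[str], bytes],
                 output: bytes) -> List[bytes]:
        """Local simulation: block i is simulated either from the output or from the circuit."""
        simulated = []
        for i, block in enumerate(blocks):
            if i in input_dependent:
                simulated.append(input_dependent[i](output))   # few input-dependent local simulators
            else:
                simulated.append(circuit_only(block))          # the rest see only the circuit
        return simulated

    # Example: only block 0 depends on the (simulated) output.
    fake = simulate(["inp", "gate1", "gate2"],
                    {0: lambda out: b"enc(" + out + b")"},
                    lambda blk: blk.encode() + b"-garbled",
                    output=b"1")
    assert len(fake) == 3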

    Adaptively Secure MPC with Sublinear Communication Complexity

    A central challenge in the study of MPC is to balance between security guarantees, hardness assumptions, and resources required for the protocol. In this work, we study the cost of tolerating adaptive corruptions in MPC protocols under various corruption thresholds. In the strongest setting, we consider adaptive corruptions of an arbitrary number of parties (potentially all) and achieve the following results:
    (1) A two-round secure function evaluation (SFE) protocol in the CRS model, assuming LWE and indistinguishability obfuscation (iO). The communication, the CRS size, and the online-computation are sublinear in the size of the function. The iO assumption can be replaced by secure erasures. Previous results required either the communication or the CRS size to be polynomial in the function size.
    (2) Under the same assumptions, we construct a Bob-optimized 2PC (where Alice talks first, Bob second, and Alice learns the output). That is, the communication complexity and total computation of Bob are sublinear in the function size and in Alice's input size. We prove impossibility of Alice-optimized protocols.
    (3) Assuming LWE, we bootstrap adaptively secure NIZK arguments to achieve proof size sublinear in the circuit size of the NP-relation.
    On a technical level, our results are based on laconic function evaluation (LFE) (Quach, Wee, and Wichs, FOCS '18) and shed light on an interesting duality between LFE and FHE. Next, we analyze adaptive corruptions of all-but-one of the parties and show a two-round SFE protocol in the threshold-PKI model (where keys of a threshold FHE scheme are pre-shared among the parties) with communication complexity sublinear in the circuit size, assuming LWE and NIZK. Finally, we consider the honest-majority setting, and show a two-round SFE protocol with guaranteed output delivery under the same constraints. Our results highlight that the asymptotic cost of adaptive security can be reduced to be comparable to, and in many settings almost match, that of static security, with only a little sacrifice to the concrete round complexity and asymptotic communication complexity.

    Maliciously Secure Massively Parallel Computation for All-but-One Corruptions

    The Massively Parallel Computation (MPC) model gained wide adoption over the last decade. By now, it is widely accepted as the right model for capturing the commonly used programming paradigms (such as MapReduce, Hadoop, and Spark) that utilize parallel computation power to manipulate and analyze huge amounts of data. Motivated by the need to perform large-scale data analytics in a privacy-preserving manner, several recent works have presented generic compilers that transform algorithms in the MPC model into secure counterparts, while preserving various efficiency parameters of the original algorithms. The first paper, due to Chan et al. (ITCS ’20), focused on the honest majority setting. Later, Fernando et al. (TCC ’20) considered the dishonest majority setting. The latter work presented a compiler that transforms generic MPC algorithms into ones which are secure against semi-honest attackers that may control all but one of the parties involved. The security of their resulting algorithm relied on the existence of a PKI and also on rather strong cryptographic assumptions: indistinguishability obfuscation and the circular security of certain LWE-based encryption systems. In this work, we focus on the dishonest majority setting, following Fernando et al. In this setting, the known compilers do not achieve the standard security notion called malicious security, where attackers can arbitrarily deviate from the prescribed protocol. In fact, we show that unless very strong setup assumptions are made (such as a programmable random oracle), it is provably impossible to withstand malicious attackers due to the stringent requirements on space and round complexity. As our main contribution, we complement the above negative result by designing the first general compiler for malicious attackers in the dishonest majority setting. The resulting protocols withstand all-but-one corruptions. Our compiler relies on a simple PKI and a (programmable) random oracle, and is proven secure assuming LWE and SNARKs. Interestingly, even with such strong assumptions, it is rather non-trivial to obtain a secure protocol.

    Succinct Garbling Schemes and Applications

    Assuming the existence of iO for P/poly and one-way functions, we show how to succinctly garble bounded-space computations (BSC) M: the size of the garbled program (as well as the time needed to generate the garbling) depends only on the size and space (including the input and output) complexity of M, but not its running time. The key conceptual insight behind this construction is a method for using iO to compress a computation that can be performed piecemeal, without revealing anything about it. As corollaries of our succinct garbling scheme, we demonstrate the following:
    - functional encryption for BSC from iO for P/poly and one-way functions;
    - reusable succinct garbling schemes for BSC from iO for P/poly and one-way functions;
    - succinct iO for BSC from sub-exponentially-secure iO for P/poly and sub-exponentially-secure one-way functions;
    - (Perfect NIZK) SNARGs for bounded space and witness NP from sub-exponentially-secure iO for P/poly and sub-exponentially-secure one-way functions.
    Previously, such primitives were only known to exist based on “knowledge-based” assumptions (such as SNARKs and/or differing-input obfuscation). We finally demonstrate the first (non-succinct) iO for RAM programs with bounded input and output lengths that has poly-logarithmic overhead, based on the existence of sub-exponentially-secure iO for P/poly and sub-exponentially-secure one-way functions.
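
    The following Python sketch (illustrative only, not the garbling scheme itself) shows the sense in which a bounded-space computation can be performed piecemeal: the whole run is the repeated application of a step function to a fixed-size configuration, so an object whose size depends only on the step function and the configuration size, as the garbled program here does, determines the entire (possibly very long) computation.

    from typing import Callable, Tuple

    Config = Tuple[int, int]   # (program counter, accumulator): a fixed-size configuration

    def run(step: Callable[[Config], Config],
            start: Config,
            halted: Callable[[Config], bool]) -> Config:
        """Iterate the small step function; space usage never exceeds the configuration size."""
        cfg = start
        while not halted(cfg):
            cfg = step(cfg)
        return cfg

    # Tiny example: sum the integers 1..10 in constant space.
    step = lambda cfg: (cfg[0] + 1, cfg[1] + cfg[0])
    assert run(step, (1, 0), lambda cfg: cfg[0] > 10) == (11, 55)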