
    Simulating Auxiliary Inputs, Revisited

    For any pair $(X, Z)$ of correlated random variables we can think of $Z$ as a randomized function of $X$. Provided that $Z$ is short, one can make this function computationally efficient by allowing it to be only approximately correct. In folklore this problem is known as \emph{simulating auxiliary inputs}. This idea of simulating auxiliary information turns out to be a powerful tool in computer science, finding applications in complexity theory, cryptography, pseudorandomness and zero-knowledge. In this paper we revisit this problem, achieving the following results: (a) We discuss and compare the efficiency of known results, finding a flaw in the best known bound claimed in the TCC'14 paper "How to Fake Auxiliary Inputs". (b) We present a novel boosting algorithm for constructing the simulator; our technique essentially fixes the flaw. This boosting proof is of independent interest, as it shows how to handle "negative mass" issues when constructing probability measures in descent algorithms. (c) Our bounds are much better than those known so far. To make the simulator $(s, \epsilon)$-indistinguishable we need complexity $O\left(s \cdot 2^{5\ell} \epsilon^{-2}\right)$ in time/circuit size, which is better by a factor $\epsilon^{-2}$ than previous bounds. In particular, with our technique we (finally) get meaningful provable security for the EUROCRYPT'09 leakage-resilient stream cipher instantiated with a standard 256-bit block cipher such as $\mathsf{AES256}$.
    Comment: Some typos present in the previous version have been corrected.
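    For orientation (this restatement is ours, not quoted from the paper; $\ell$ denotes the bit-length of $Z$), the simulation guarantee discussed above can be written as:

```latex
% (s, epsilon)-indistinguishability of the simulator h for Z given X.
% h is a randomized function of X alone; \ell is the bit-length of Z.
\[
  \bigl|\, \Pr[D(X, Z) = 1] - \Pr[D(X, h(X)) = 1] \,\bigr| \le \epsilon
  \qquad \text{for every distinguisher } D \text{ of size at most } s,
\]
\[
  \text{with } h \text{ computable in time/circuit size }
  O\!\left(s \cdot 2^{5\ell} \cdot \epsilon^{-2}\right).
\]
```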

    On the Complexity of Decomposable Randomized Encodings, Or: How Friendly Can a Garbling-Friendly PRF Be?


    Private Outsourcing of Polynomial Evaluation and Matrix Multiplication using Multilinear Maps

    \emph{Verifiable computation} (VC) allows a computationally weak client to outsource the evaluation of a function on many inputs to a powerful but untrusted server. The client invests a large amount of off-line computation and gives an encoding of its function to the server. The server returns both an evaluation of the function on the client's input and a proof such that the client can verify the evaluation using substantially less effort than doing the evaluation on its own. We consider how to privately outsource computations using \emph{privacy-preserving} VC schemes whose executions reveal no information about the client's input or function to the server. We construct VC schemes with \emph{input privacy} for univariate polynomial evaluation and matrix multiplication, and then extend them so that \emph{function privacy} is also achieved. Our constructions rely on the recently developed multilinear maps. The proposed VC schemes can be used to outsource private information retrieval (PIR).
    Comment: 23 pages. A preliminary version appears in the 12th International Conference on Cryptology and Network Security (CANS 2013).
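    To make the verification-cheaper-than-computation idea concrete, here is a minimal sketch of Freivalds' classical probabilistic check for outsourced matrix multiplication. It is not the paper's multilinear-map scheme and provides no input or function privacy; it only illustrates that a client can verify $C = AB$ with $O(n^2)$ work per trial.

```python
import numpy as np

def freivalds_verify(A, B, C, trials=20):
    """Probabilistically check C == A @ B using O(n^2) work per trial.

    Each trial multiplies by a random 0/1 vector; a wrong C survives a
    single trial with probability at most 1/2, so `trials` independent
    repetitions catch a cheating server with probability >= 1 - 2**-trials.
    """
    n = A.shape[0]
    for _ in range(trials):
        r = np.random.randint(0, 2, size=(n, 1))
        if ((A @ (B @ r)) != (C @ r)).any():   # three matrix-vector products
            return False
    return True

rng = np.random.default_rng(1)
n = 300
A = rng.integers(0, 100, (n, n))
B = rng.integers(0, 100, (n, n))
C_honest = A @ B                      # the (expensive) outsourced computation
C_cheat = C_honest.copy()
C_cheat[0, 0] += 1                    # a single tampered entry

print(freivalds_verify(A, B, C_honest))  # True
print(freivalds_verify(A, B, C_cheat))   # False with overwhelming probability
```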

    Correlated Pseudorandomness from the Hardness of Quasi-Abelian Decoding

    Secure computation often benefits from the use of correlated randomness to achieve fast, non-cryptographic online protocols. A recent paradigm put forth by Boyle et al. (CCS 2018, Crypto 2019) showed how pseudorandom correlation generators (PCGs) can be used to generate large amounts of useful forms of correlated (pseudo)randomness, using minimal interaction followed solely by local computation, yielding silent secure two-party computation protocols (protocols where the preprocessing phase requires almost no communication). An additional property called programmability makes it possible to extend this approach to build N-party protocols. However, known constructions of programmable PCGs can only produce OLEs (oblivious linear evaluations) over large fields, and rely on the rather new splittable ring-LPN assumption. In this work, we overcome both limitations. To this end, we introduce the quasi-abelian syndrome decoding problem (QA-SD), a family of assumptions which generalises the well-established quasi-cyclic syndrome decoding assumption. Building upon QA-SD, we construct new programmable PCGs for OLEs over any field $\mathbb{F}_q$ with $q > 2$. Our analysis also sheds light on the security of the ring-LPN assumption used in Boyle et al. (Crypto 2020). Using our new PCGs, we obtain the first efficient N-party silent secure computation protocols for computing general arithmetic circuits over $\mathbb{F}_q$ for any $q > 2$.
    Comment: This is a long version of a paper accepted at CRYPTO'23.
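    For orientation, the target correlation such a PCG expands to can be sampled directly, as in the toy sketch below (a small prime field chosen only for illustration; this shows the OLE correlation itself, not a PCG construction):

```python
import random

q = 7  # a small prime field F_q with q > 2, just for illustration

def sample_ole(q):
    """Sample one OLE correlation over F_q.

    Party A receives (a, c_A) and party B receives (b, c_B) such that
    c_A + c_B = a * b (mod q). A PCG lets the parties expand short
    correlated seeds into many such tuples with only local computation;
    here we simply sample the target correlation directly.
    """
    a, b = random.randrange(q), random.randrange(q)
    c_A = random.randrange(q)
    c_B = (a * b - c_A) % q
    return (a, c_A), (b, c_B)

share_A, share_B = sample_ole(q)
(a, c_A), (b, c_B) = share_A, share_B
assert (c_A + c_B) % q == (a * b) % q
print(share_A, share_B)
```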

    On Foundations of Protecting Computations

    Information technology systems have become indispensable to uphold our way of living, our economy and our safety. Failure of these systems can have devastating effects. Consequently, securing these systems against malicious intentions deserves our utmost attention. Cryptography provides the necessary foundations for that purpose. In particular, it provides a set of building blocks which allow us to secure larger information systems. Furthermore, cryptography develops concepts and techniques towards realizing these building blocks. The protection of computations is one invaluable concept for cryptography which paves the way towards realizing a multitude of cryptographic tools. In this thesis, we contribute to this concept of protecting computations in several ways.

    Protecting computations of probabilistic programs. An indistinguishability obfuscator (IO) compiles (deterministic) code such that it becomes provably unintelligible. This can be viewed as the ultimate way to protect (deterministic) computations. Due to very recent research, such obfuscators enjoy plausible candidate constructions. In certain settings, however, it is necessary to protect probabilistic computations. The only known construction of an obfuscator for probabilistic programs is due to Canetti, Lin, Tessaro, and Vaikuntanathan (TCC, 2015) and requires an indistinguishability obfuscator which satisfies extreme security guarantees. We improve this construction and thereby reduce the requirements on the security of the underlying indistinguishability obfuscator. (Agrikola, Couteau, and Hofheinz, PKC, 2020)

    Protecting computations in cryptographic groups. To facilitate the analysis of building blocks which are based on cryptographic groups, these groups are often overidealized such that computations in the group are protected from the outside. Such overidealizations make it possible to prove secure building blocks which are sometimes beyond the reach of standard model techniques. However, these overidealizations are subject to certain impossibility results. Recently, Fuchsbauer, Kiltz, and Loss (CRYPTO, 2018) introduced the algebraic group model (AGM) as a relaxation which is closer to the standard model but in several aspects preserves the power of said overidealizations. However, their model still suffers from implausibilities. We develop a framework which allows us to transport several security proofs from the AGM into the standard model, thereby evading the above implausibility results, and instantiate this framework using an indistinguishability obfuscator. (Agrikola, Hofheinz, and Kastner, EUROCRYPT, 2020)

    Protecting computations using compression. Perfect compression algorithms admit the property that the compressed distribution is truly random, leaving no room for any further compression. This property is invaluable for several cryptographic applications such as “honey encryption” or password-authenticated key exchange. However, perfect compression algorithms only exist for a very small number of distributions. We relax the notion of compression and rigorously study the resulting notion, which we call “pseudorandom encodings”. As a result, we identify various surprising connections between seemingly unrelated areas of cryptography. In particular, we derive novel results for adaptively secure multi-party computation, which allows for protecting computations in distributed settings. Furthermore, we instantiate the weakest version of pseudorandom encodings which suffices for adaptively secure multi-party computation using an indistinguishability obfuscator. (Agrikola, Couteau, Ishai, Jarecki, and Sahai, TCC, 2020)
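    As a toy numeric illustration of the "perfect compression means uniform output" property mentioned above (the distributions and encodings here are made up for the example and are not tied to the thesis's constructions):

```python
import random
from collections import Counter

def tv_distance_from_uniform(samples, n_outcomes):
    """Total variation distance between an empirical distribution and uniform."""
    counts = Counter(samples)
    return 0.5 * sum(abs(counts.get(x, 0) / len(samples) - 1 / n_outcomes)
                     for x in range(n_outcomes))

random.seed(0)
N = 200_000

# A uniform source over 8 values compresses perfectly: its natural 3-bit
# encoding is already exactly uniform, so the distance is close to 0.
uniform_samples = [random.randrange(8) for _ in range(N)]
print("uniform over 8 :", tv_distance_from_uniform(uniform_samples, 8))

# A biased bit (Pr[1] = 1/4) admits no perfect 1-bit compression: any
# bijective single-bit encoding preserves the bias, so the encoded output
# stays at distance about 0.25 from uniform.
biased_samples = [1 if random.random() < 0.25 else 0 for _ in range(N)]
print("biased bit     :", tv_distance_from_uniform(biased_samples, 2))
```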

    Lists that are smaller than their parts: A coding approach to tunable secrecy

    We present a new information-theoretic definition and associated results, based on list decoding in a source coding setting. We begin by presenting list-source codes, which naturally map a key length (entropy) to a list size. We then show that such codes can be analyzed in the context of a novel information-theoretic metric, $\epsilon$-symbol secrecy, that encompasses both the one-time pad and traditional rate-based asymptotic metrics but, unlike most information-theoretic metrics, can be applied in non-asymptotic settings. We derive fundamental bounds for $\epsilon$-symbol secrecy and demonstrate how these bounds can be achieved with MDS codes when the source is uniformly distributed. We discuss applications and implementation issues of our codes.
    Comment: Allerton 2012, 8 pages.
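    The key-length-to-list-size tradeoff can be illustrated with the following toy scheme (a deliberately simple stand-in, not the paper's MDS-code-based constructions): masking only $t$ symbols of a binary source with a $t$-bit key leaves an eavesdropper with a candidate list of exactly $2^t$ messages.

```python
import itertools
import secrets

def encode(message_bits, key_bits):
    """Toy scheme: XOR the key onto the first len(key_bits) symbols.

    The remaining symbols are sent in the clear, so an eavesdropper can
    only narrow the message down to a list of 2**len(key_bits) candidates.
    """
    t = len(key_bits)
    masked = [m ^ k for m, k in zip(message_bits[:t], key_bits)]
    return masked + message_bits[t:]

def eavesdropper_list(ciphertext, t):
    """All messages consistent with the ciphertext: the decoding 'list'."""
    tail = ciphertext[t:]
    return [list(prefix) + tail for prefix in itertools.product([0, 1], repeat=t)]

n, t = 8, 3                      # 8-bit source, 3-bit key
msg = [secrets.randbits(1) for _ in range(n)]
key = [secrets.randbits(1) for _ in range(t)]
ct = encode(msg, key)

candidates = eavesdropper_list(ct, t)
assert msg in candidates and len(candidates) == 2 ** t
print(f"list size = {len(candidates)} = 2^{t}")
```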

    An Algebraic System for Constructing Cryptographic Permutations over Finite Fields

    In this paper we identify polynomial dynamical systems over finite fields as the central component of almost all iterative block cipher design strategies over finite fields. We propose a generalized triangular polynomial dynamical system (GTDS) and give a generic algebraic definition of iterative (keyed) permutations using the GTDS. Our GTDS-based generic definition is able to describe widely used and well-known design strategies such as the substitution-permutation network (SPN), the Feistel network and their variants, among others. We show that the Lai-Massey design strategy for (keyed) permutations is also described by the GTDS. Our generic algebraic definition of iterative permutations is particularly useful for instantiating and systematically studying block ciphers and hash functions over $\mathbb{F}_p$ aimed at multiparty computation and zero-knowledge-based cryptographic protocols. Finally, we provide a discrepancy analysis, a technique used to measure the (pseudo-)randomness of a sequence, to analyze the randomness of the sequences generated by the generic permutation or block cipher described by the GTDS.
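    The triangular idea can be sketched as follows (a simplified, hypothetical round function over $\mathbb{F}_p$, not the paper's exact GTDS definition): each branch is updated by adding a polynomial of the later branches only, which keeps the round invertible.

```python
p = 2**31 - 1  # a Mersenne prime, chosen only for illustration

def round_function(state, round_key):
    """One toy triangular round over F_p.

    Branch i is mapped to y_i = x_i + f_i(x_{i+1}, ..., x_{n-1}) + k_i (mod p),
    where f_i depends only on the *later* branches.  Because the added value
    never involves x_i itself, the map is a permutation of F_p^n.
    """
    n = len(state)
    y = list(state)
    for i in range(n):
        tail = y[i + 1:]                          # later branches, still original here
        f_i = sum(t * t * t for t in tail) % p    # toy cubic feedback polynomial
        y[i] = (state[i] + f_i + round_key[i]) % p
    return y

def inverse_round(y, round_key):
    """Invert one round, recovering branches from the last to the first."""
    n = len(y)
    x = list(y)
    for i in range(n - 1, -1, -1):
        tail = x[i + 1:]                          # already-recovered later branches
        f_i = sum(t * t * t for t in tail) % p
        x[i] = (y[i] - f_i - round_key[i]) % p
    return x

state = [3, 141, 592, 653]
key = [5, 897, 932, 384]
assert inverse_round(round_function(state, key), key) == state
```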

    Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting

    Machine learning algorithms, when applied to sensitive data, pose a distinct threat to privacy. A growing body of prior work demonstrates that models produced by these algorithms may leak specific private information in the training data to an attacker, either through the models' structure or their observable behavior. However, the underlying cause of this privacy risk is not well understood beyond a handful of anecdotal accounts that suggest overfitting and influence might play a role. This paper examines the effect that overfitting and influence have on the ability of an attacker to learn information about the training data from machine learning models, either through training set membership inference or attribute inference attacks. Using both formal and empirical analyses, we illustrate a clear relationship between these factors and the privacy risk that arises in several popular machine learning algorithms. We find that overfitting is sufficient to allow an attacker to perform membership inference and, when the target attribute meets certain conditions about its influence, attribute inference attacks. Interestingly, our formal analysis also shows that overfitting is not necessary for these attacks and begins to shed light on what other factors may be in play. Finally, we explore the connection between membership inference and attribute inference, showing that there are deep connections between the two that lead to effective new attacks.
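    A minimal sketch of the loss-threshold style of membership inference discussed above (the synthetic data, model choice, and threshold rule are our assumptions for illustration, not the paper's experimental setup):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data; an over-parameterised forest overfits its training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
model.fit(X_train, y_train)

def per_example_loss(model, X, y):
    """Cross-entropy loss of each example under the model's predicted probabilities."""
    probs = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(probs, 1e-12, None))

loss_members = per_example_loss(model, X_train, y_train)      # training set = members
loss_nonmembers = per_example_loss(model, X_test, y_test)     # held-out = non-members

# Guess "member" whenever the loss falls below a threshold (here the overall median).
threshold = np.median(np.concatenate([loss_members, loss_nonmembers]))
tpr = (loss_members < threshold).mean()
fpr = (loss_nonmembers < threshold).mean()
print(f"membership advantage (TPR - FPR) ~ {tpr - fpr:.2f}")  # > 0 indicates leakage
```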