
    Multi-Key FHE from LWE, Revisited

    Traditional fully homomorphic encryption (FHE) schemes only allow computation on data encrypted under a single key. López-Alt, Tromer, and Vaikuntanathan (STOC 2012) proposed the notion of multi-key FHE, which allows homomorphic computation on ciphertexts encrypted under different keys, and also gave a construction based on a (somewhat nonstandard) assumption related to NTRU. More recently, Clear and McGoldrick (CRYPTO 2015), followed by Mukherjee and Wichs (EUROCRYPT 2016), proposed a multi-key FHE that builds upon the LWE-based FHE of Gentry, Sahai, and Waters (CRYPTO 2013). However, unlike the original construction of López-Alt et al., these later LWE-based schemes have the somewhat undesirable property of being "single-hop for keys": all relevant keys must be known at the start of the homomorphic computation, and the output cannot be usefully combined with ciphertexts encrypted under other keys (unless an expensive "bootstrapping" step is performed). In this work we construct two multi-key FHE schemes, based on LWE assumptions, which are multi-hop for keys: the output of a homomorphic computation on ciphertexts encrypted under a set of keys can be used in further homomorphic computation involving additional keys, and so on. Moreover, incorporating ciphertexts associated with new keys is a relatively efficient "native" operation akin to homomorphic multiplication, and does not require bootstrapping (in contrast with all other LWE-based solutions). Our systems also have smaller ciphertexts than the previous LWE-based ones; in fact, ciphertexts in our second construction are simply GSW ciphertexts with no auxiliary data.
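    To make the "multi-hop for keys" property concrete, here is a deliberately trivial Python sketch. It is not the LWE/GSW-based scheme constructed in the paper: it is a one-time additive masking toy that supports only addition and masks one message per key, but its bookkeeping mirrors the multi-hop workflow in which a result computed under keys {A, B} later absorbs a ciphertext under a new key C without any bootstrapping.

```python
import secrets

# Toy illustration (NOT the paper's GSW-based scheme): one-time additive
# masking mod q. A result computed under keys {A, B} later absorbs a
# ciphertext under a new key C; joint decryption then needs all involved keys.

q = 2**61 - 1

def keygen():
    return secrets.randbelow(q)

def enc(pid, sk, m):
    return {"ct": (m + sk) % q, "parties": {pid}}

def add(c1, c2):
    return {"ct": (c1["ct"] + c2["ct"]) % q,
            "parties": c1["parties"] | c2["parties"]}

def dec(c, keys):
    # joint decryption: requires the secret key of every party involved so far
    return (c["ct"] - sum(keys[p] for p in c["parties"])) % q

keys = {"A": keygen(), "B": keygen(), "C": keygen()}
partial = add(enc("A", keys["A"], 5), enc("B", keys["B"], 7))   # hop 1: keys {A, B}
result  = add(partial, enc("C", keys["C"], 30))                 # hop 2: key C joins later
assert dec(result, keys) == 42 and result["parties"] == {"A", "B", "C"}
```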

    A Survey on Homomorphic Encryption Schemes: Theory and Implementation

    Legacy encryption systems depend on sharing a key (public or private) among the peers involved in exchanging an encrypted message. However, this approach poses privacy concerns. Especially with popular cloud services, control over the privacy of sensitive data is lost. Even when the keys are not shared, the encrypted material is shared with a third party that does not necessarily need to access the content. Moreover, untrusted servers, providers, and cloud operators can keep identifying elements of users long after users end the relationship with the services. Indeed, Homomorphic Encryption (HE), a special kind of encryption scheme, can address these concerns as it allows any third party to operate on the encrypted data without decrypting it in advance. Although this extremely useful feature of HE has been known for over 30 years, the first plausible and achievable Fully Homomorphic Encryption (FHE) scheme, which allows any computable function to be performed on encrypted data, was introduced by Craig Gentry in 2009. Even though this was a major achievement, different implementations so far have demonstrated that FHE still needs to be improved significantly to be practical on every platform. First, we present the basics of HE and the details of the well-known Partially Homomorphic Encryption (PHE) and Somewhat Homomorphic Encryption (SWHE) schemes, which are important pillars of achieving FHE. Then, the main FHE families, which have become the base for other follow-up FHE schemes, are presented. Furthermore, the implementations and recent improvements in Gentry-type FHE schemes are surveyed. Finally, further research directions are discussed. This survey is intended to give a clear foundation to researchers and practitioners interested in learning about, applying, and extending state-of-the-art HE, PHE, SWHE, and FHE systems.
    Comment: Updated October 6, 2017. This paper is an early draft of the survey being submitted to ACM CSUR and has been uploaded to arXiv for feedback from stakeholders.
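    As a minimal illustration of the partially homomorphic (PHE) schemes the survey covers, the toy Python snippet below shows the classic multiplicative homomorphism of unpadded RSA. The parameters are far too small for real use and the snippet is not taken from the survey; it only demonstrates the single homomorphic operation that characterizes PHE.

```python
# Textbook (unpadded) RSA is multiplicatively homomorphic:
# Enc(a) * Enc(b) mod n decrypts to a * b mod n. Toy parameters only;
# unpadded RSA with keys this small is completely insecure.

p, q = 61, 53
n = p * q                      # 3233
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 12
product_ct = (enc(a) * enc(b)) % n     # operate on ciphertexts only
assert dec(product_ct) == (a * b) % n  # decrypts to the product of the plaintexts
```

    An additively homomorphic counterpart (e.g., Paillier) supports sums instead of products; SWHE and FHE schemes go further and support both operations, for bounded-depth and arbitrary circuits respectively.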

    On the IND-CCA1 Security of FHE Schemes

    Fully homomorphic encryption (FHE) is a powerful tool in cryptography that allows one to perform arbitrary computations on encrypted material without having to decrypt it first. There are numerous FHE schemes, all of which are expanded from somewhat homomorphic encryption (SHE) schemes, and some of which are considered viable in practice. However, while these FHE schemes are semantically (IND-CPA) secure, the question of their IND-CCA1 security is much less studied, and we therefore provide an overview of the IND-CCA1 security of all acknowledged FHE schemes in this paper. To give this overview, we group the SHE schemes into broad categories based on their similarities and underlying hardness problems. For each category, we show that the SHE schemes are susceptible to either known adaptive key recovery attacks, a natural extension of known attacks, or our proposed attacks. Finally, we discuss the known techniques for achieving IND-CCA1-secure FHE and SHE schemes. We conclude that none of the proposed schemes is IND-CCA1-secure and that the known general constructions all have shortcomings.
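    For readers less familiar with the notion under discussion, the sketch below outlines the IND-CCA1 ("lunchtime attack") experiment: unlike IND-CCA2, the adversary may query the decryption oracle only before receiving the challenge ciphertext. The keygen/enc/dec and adversary interfaces are assumptions introduced purely for illustration and do not come from the paper.

```python
import secrets

# Sketch of the IND-CCA1 ("lunchtime") experiment: decryption-oracle access
# is allowed only BEFORE the challenge ciphertext is produced.

def ind_cca1_game(keygen, enc, dec, adversary):
    pk, sk = keygen()
    dec_oracle = lambda ct: dec(sk, ct)        # usable in phase 1 only

    # Phase 1: adversary sees pk, may query the decryption oracle, and
    # outputs two equal-length challenge messages plus its saved state.
    m0, m1, state = adversary.phase1(pk, dec_oracle)

    # Challenge: encrypt one of the two messages, chosen at random.
    b = secrets.randbelow(2)
    challenge = enc(pk, (m0, m1)[b])

    # Phase 2: no further decryption queries; the adversary must guess b.
    guess = adversary.phase2(challenge, state)
    return guess == b                          # True = the adversary wins this run
```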

    Sanitization of FHE ciphertexts

    By definition, fully homomorphic encryption (FHE) schemes support homomorphic decryption, and all known FHE constructions are bootstrapped from a Somewhat Homomorphic Encryption (SHE) scheme via this technique. Additionally, when a public key is provided, ciphertexts are also re-randomizable, e.g., by adding to them fresh encryptions of 0. From these two operations we devise an algorithm to sanitize a ciphertext by making its distribution canonical. In particular, the distribution of the ciphertext does not depend on the circuit that led to it via homomorphic evaluation, thus providing circuit privacy in the honest-but-curious model. Unlike the previous approach based on noise flooding, our approach does not significantly degrade the security/efficiency trade-off of the underlying FHE scheme. The technique can be applied to all lattice-based FHE schemes proposed so far, without substantially affecting their concrete parameters.
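    The re-randomization step mentioned above ("adding fresh encryptions of 0") can be illustrated with any additively homomorphic scheme. The toy Paillier sketch below is such a stand-in: it is not the paper's lattice-based setting and it omits the homomorphic-decryption part of the sanitization algorithm, but it shows how homomorphically adding a fresh encryption of 0 produces a ciphertext whose randomness no longer depends on its history.

```python
from math import gcd
import secrets

# Toy Paillier encryption (additively homomorphic). In Paillier, homomorphic
# addition is ciphertext multiplication, so "adding a fresh encryption of 0"
# means multiplying by enc(0). Primes are far too small for real use.

p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)

def enc(m):
    while True:
        r = secrets.randbelow(n - 2) + 2       # fresh randomness, invertible mod n
        if gcd(r, n) == 1:
            break
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def rerandomize(c):
    return (c * enc(0)) % n2                   # "add" a fresh encryption of 0

c1 = enc(42)
c2 = rerandomize(c1)
assert c1 != c2 and dec(c1) == dec(c2) == 42   # same plaintext, fresh randomness
```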

    MPC for MPC: Secure Computation on a Massively Parallel Computing Architecture

    Massively Parallel Computation (MPC) is a model of computation widely believed to best capture realistic parallel computing architectures such as large-scale MapReduce and Hadoop clusters. Motivated by the fact that many data analytics tasks performed on these platforms involve sensitive user data, we initiate the theoretical exploration of how to leverage MPC architectures to enable efficient, privacy-preserving computation over massive data. Clearly, if a computation task does not lend itself to an efficient implementation on MPC even without security, then we cannot hope to compute it efficiently on MPC with security. We show, on the other hand, that any task that can be efficiently computed on MPC can also be securely computed with comparable efficiency. Specifically, we show the following results: - any MPC algorithm can be compiled to a communication-oblivious counterpart while asymptotically preserving its round and space complexity, where communication-obliviousness ensures that any network intermediary observing the communication patterns learns no information about the secret inputs; - assuming the existence of Fully Homomorphic Encryption with a suitable notion of compactness and other standard cryptographic assumptions, any MPC algorithm can be compiled to a secure counterpart that defends against an adversary who controls not only intermediate network routers but additionally up to a 1/3 - ε fraction of machines (for an arbitrarily small constant ε); moreover, this compilation preserves the round complexity tightly, and preserves the space complexity up to a multiplicative blowup related to the security parameter. As an initial exploration of this important direction, our work suggests new definitions and proposes novel protocols that blend algorithmic and cryptographic techniques.

    Correlation-Intractable Hash Functions via Shift-Hiding

    A hash function family H is correlation intractable for a t-input relation R if, given a random function h chosen from H, it is hard to find x_1, ..., x_t such that R(x_1, ..., x_t, h(x_1), ..., h(x_t)) is true. Among other applications, such hash functions are a crucial tool for instantiating the Fiat-Shamir heuristic in the plain model, including the only known NIZK for NP based on the learning with errors (LWE) problem (Peikert and Shiehian, CRYPTO 2019). We give a conceptually simple and generic construction of single-input CI hash functions from shift-hiding shiftable functions (Peikert and Shiehian, PKC 2018) satisfying an additional one-wayness property. This results in a clean abstract framework for instantiating CI, and also shows that a previously existing function family (PKC 2018) was already CI under the LWE assumption. In addition, our framework transparently generalizes to other settings, yielding new results: - We show how to instantiate certain forms of multi-input CI under the LWE assumption. Prior constructions either relied on a very strong "brute-force-is-best" type of hardness assumption (Holmgren and Lombardi, FOCS 2018) or were restricted to "output-only" relations (Zhandry, CRYPTO 2016). - We construct single-input CI hash functions from indistinguishability obfuscation (iO) and one-way permutations. Prior constructions relied essentially on variants of fully homomorphic encryption that are impossible to construct from such primitives. This result also generalizes to more expressive variants of multi-input CI under iO and additional standard assumptions.
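    The small Python sketch below restates the single-input correlation-intractability experiment with a toy hash family and relation. The salted-SHA-256 family and the leading-zeros relation are illustrative assumptions, not the shift-hiding construction from the paper; the snippet only pins down what it means for an adversary to "win".

```python
import hashlib, secrets

# Single-input CI experiment with a toy hash family and relation.

def sample_h():
    key = secrets.token_bytes(32)                  # random member of the family
    return lambda x: hashlib.sha256(key + x).digest()

def relation(x, y):                                # R(x, h(x)): output starts with two zero bytes
    return y[:2] == b"\x00\x00"

def ci_experiment(adversary):
    h = sample_h()                                 # adversary gets (a handle to) h
    x = adversary(h)
    return relation(x, h(x))                       # True = a correlation was found

def brute_force(h):                                # succeeds here only because the toy
    i = 0                                          # relation is hit with probability 2**-16
    while True:
        x = i.to_bytes(8, "big")
        if relation(x, h(x)):
            return x
        i += 1

assert ci_experiment(brute_force)                  # easy here; CI demands this be infeasible
```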

    Ad Hoc Multi-Input Functional Encryption

    Consider sources that supply sensitive data to an aggregator. Standard encryption only hides the data from eavesdroppers, but using specialized encryption one can hope to hide the data (to the extent possible) from the aggregator itself. For flexibility and security, we envision schemes that allow sources to supply encrypted data, such that at any point a dynamically chosen subset of sources can allow an agreed-upon joint function of their data to be computed by the aggregator. A primitive called multi-input functional encryption (MIFE), due to Goldwasser et al. (EUROCRYPT 2014), comes close, but has two main limitations: - it requires trust in a third party, who is able to decrypt all the data, and - it requires the function arity to be fixed at setup time and to be equal to the number of parties. To drop these limitations, we introduce a new notion of ad hoc MIFE. In our setting, each source generates its own public key and issues individual, function-specific secret keys to an aggregator. For successful decryption, an aggregator must obtain a separate key from each source whose ciphertext is being computed upon. The aggregator could obtain multiple such secret keys from a user corresponding to functions of varying arity. For this primitive, we obtain the following results: - We show that standard MIFE for general functions can be bootstrapped to ad hoc MIFE for free, i.e., without making any additional assumptions. - We provide a direct construction of ad hoc MIFE for the inner product functionality based on the Learning with Errors (LWE) assumption. This yields the first construction of this natural primitive based on a standard assumption. At a technical level, our results are obtained by combining standard MIFE schemes and two-round secure multiparty computation (MPC) protocols in novel ways, highlighting an interesting interplay between MIFE and two-round MPC.
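    A workflow-only sketch of the interaction pattern described above follows. The names and the in-the-clear "evaluation" are placeholders invented for illustration (a real scheme would hide the payloads); the sketch only shows the trust structure: no central authority, independent per-source key generation, and one function-specific key required from every participating source.

```python
from dataclasses import dataclass

# Workflow-only sketch of ad hoc MIFE; placeholder names, no real cryptography.

@dataclass
class Source:
    name: str
    data: list                                  # this source's private vector

    def setup(self):                            # independent, per-source key generation
        self.sk, self.pk = f"sk-{self.name}", f"pk-{self.name}"

    def encrypt(self):                          # placeholder: payload is not actually hidden
        return {"pk": self.pk, "payload": self.data}

    def keygen_for(self, f_name):               # function-specific key issued to the aggregator
        return {"pk": self.pk, "f": f_name}

def aggregate(f_name, ciphertexts, keys):
    # decryption requires a matching key for f from *every* source being computed on
    assert {c["pk"] for c in ciphertexts} == {k["pk"] for k in keys if k["f"] == f_name}
    if f_name == "sum":
        return sum(sum(c["payload"]) for c in ciphertexts)

alice, bob = Source("alice", [1, 2]), Source("bob", [3, 4])
for s in (alice, bob):
    s.setup()
cts = [alice.encrypt(), bob.encrypt()]
fkeys = [alice.keygen_for("sum"), bob.keygen_for("sum")]
assert aggregate("sum", cts, fkeys) == 10
```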

    Quantum Proofs of Deletion for Learning with Errors

    Quantum information has the property that measurement is an inherently destructive process. This feature is most apparent in the principle of complementarity, which states that mutually incompatible observables cannot be measured at the same time. Recent work by Broadbent and Islam (TCC 2020) builds on this aspect of quantum mechanics to realize a cryptographic notion called certified deletion. While this remarkable notion enables a classical verifier to be convinced that a (private-key) quantum ciphertext has been deleted by an untrusted party, it offers no additional layer of functionality. In this work, we augment the proof-of-deletion paradigm with fully homomorphic encryption (FHE). We construct the first fully homomorphic encryption scheme with certified deletion - an interactive protocol which enables an untrusted quantum server to compute on encrypted data and, if requested, to simultaneously prove data deletion to a client. Our scheme has the desirable property that verification of a deletion certificate is public, meaning anyone can verify that deletion has taken place. Our main technical ingredient is an interactive protocol by which a quantum prover can convince a classical verifier that a sample from the Learning with Errors (LWE) distribution in the form of a quantum state has been deleted. As an application of our protocol, we construct a Dual-Regev public-key encryption scheme with certified deletion, which we then extend towards a (leveled) FHE scheme of the same type. We introduce the notion of Gaussian-collapsing hash functions - a special case of the collapsing hash functions defined by Unruh (EUROCRYPT 2016) - and we prove the security of our schemes under the assumption that the Ajtai hash function satisfies a certain strong Gaussian-collapsing property in the presence of leakage.