Optimal-Rate Non-Committing Encryption in a CRS Model
Non-committing encryption (NCE) implements secure channels under adaptive corruptions in situations where data erasures are not trustworthy. In this paper we are interested in the rate of NCE, i.e., how many bits the sender and receiver need to send per plaintext bit.
In initial constructions (e.g., Canetti, Feige, Goldreich and Naor, STOC '96) the length of both the receiver message, namely the public key, and the sender message, namely the ciphertext, is m · poly(k) for an m-bit message, where k is the security parameter. Subsequent works improved efficiency significantly, achieving rate polylog(k).
We construct the first constant-rate NCE. In fact, our scheme has rate 1+o(1), which is comparable to the rate of plain semantically secure encryption. Our scheme operates in the common reference string (CRS) model. Our CRS has size poly(m, k), but it is reusable for an arbitrary polynomial number of m-bit messages. In addition, it is the first NCE protocol with perfect correctness. We assume one-way functions and indistinguishability obfuscation for circuits.
As an essential step in our construction, we develop a technique for dealing with adversaries that adaptively modify the inputs to the protocol depending on a public key or CRS that contains obfuscated programs, while assuming only standard (polynomial) hardness of the obfuscation mechanism. This technique may well be useful elsewhere.
On Pseudorandom Encodings
We initiate a study of pseudorandom encodings: efficiently computable and decodable encoding functions that map messages from a given distribution to a random-looking distribution. For instance, every distribution that can be perfectly and efficiently compressed admits such a pseudorandom encoding. Pseudorandom encodings are motivated by a variety of cryptographic applications, including password-authenticated key exchange, “honey encryption” and steganography.
The main question we ask is whether every efficiently samplable distribution admits a pseudorandom encoding. Under different cryptographic assumptions, we obtain positive and negative answers for different flavors of pseudorandom encodings, and relate this question to problems in other areas of cryptography. In particular, by establishing a two-way relation between pseudorandom encoding schemes and efficient invertible sampling algorithms, we reveal a connection between adaptively secure multiparty computation for randomized functionalities and questions in the domain of steganography.
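As a toy illustration of the compression example mentioned above (all names in this sketch are hypothetical and not from the paper): a distribution that is uniform over the even integers below 2^(N+1) can be perfectly compressed, so it admits a trivial pseudorandom encoding. Dropping the low bit yields uniformly random bits, and the map is efficiently invertible.

```python
import secrets

N = 16  # bit-length of the encoded string (illustration only)

def sample():
    """Sample from D: uniform over the even integers in [0, 2**(N+1))."""
    return 2 * secrets.randbelow(2**N)

def encode(x):
    """Perfectly compress a sample of D into N uniform bits."""
    assert x % 2 == 0
    return x // 2  # uniform in [0, 2**N) when x is drawn from D

def decode(y):
    """Invert the encoding: map N uniform bits back to a sample of D."""
    return 2 * y

x = sample()
assert decode(encode(x)) == x  # correctness: decoding recovers the message
# encode(sample()) is uniform on [0, 2**N), so the encoding is random-looking
```

The encode/decode pair here is also an invertible sampler for D, hinting at the two-way relation between pseudorandom encodings and invertible sampling discussed in the abstract.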
New and Improved Constructions for Partially Equivocable Public Key Encryption
Non-committing encryption (NCE) is an advanced form of public-key encryption which guarantees the security of a multi-party computation (MPC) protocol in the presence of an adaptive adversary. Brakerski et al. (TCC 2020) recently proposed an intermediate notion, termed Packed Encryption with Partial Equivocality (PEPE), which implies NCE and preserves the ciphertext rate (up to a constant factor). In this work, we propose three new constructions of rate-1 PEPE based on standard assumptions. In particular, we obtain the first constant ciphertext-rate NCE construction from the LWE assumption with polynomial modulus, and from the Subgroup Decision assumption. We also propose an alternative DDH-based construction with guaranteed polynomial running time. Finally, we clarify the requirements of PEPE.
Non-Committing Encryption with Constant Ciphertext Expansion from Standard Assumptions
Non-committing encryption (NCE), introduced by Canetti et al. (STOC '96), is a central tool to achieve multi-party computation protocols secure in the adaptive setting. Recently, Yoshida et al. (ASIACRYPT '19) proposed an NCE scheme based on the hardness of the DDH problem, which has ciphertext expansion and public-key expansion.
In this work, we improve their result and propose a methodology to construct an NCE scheme that achieves constant ciphertext expansion. Our methodology can be instantiated from the DDH assumption and the LWE assumption. These are the first NCE schemes satisfying constant ciphertext expansion without using iO or common reference strings.
Along the way, we define a weak notion of NCE, which satisfies only weak forms of correctness and security. We show how to amplify such a weak NCE scheme into a full-fledged one using wiretap codes with a new security property.
Non-Committing Encryption with Quasi-Optimal Ciphertext-Rate Based on the DDH Problem
Non-committing encryption (NCE) was introduced by Canetti et al. (STOC '96). Informally, an encryption scheme is non-committing if it can generate a dummy ciphertext that is indistinguishable from a real one. The dummy ciphertext can be opened to any message later by producing a secret key and an encryption random coin which “explain” the ciphertext as an encryption of the message. Canetti et al. showed that NCE is a central tool to achieve multi-party computation protocols secure in the adaptive setting. An important measure of the efficiency of NCE is the ciphertext rate, that is, the ciphertext length divided by the message length; previous works studying NCE have focused on constructing NCE schemes with better ciphertext rates.
We propose an NCE scheme satisfying the ciphertext rate based on the decisional Diffie-Hellman (DDH) problem, where is the security parameter. The proposed construction achieves the best ciphertext rate among existing constructions proposed in the plain model, that is, the model without common reference strings. Prior to our work, the NCE scheme with the best ciphertext rate based on the DDH problem was that of Choi et al. (ASIACRYPT '09). Our construction of NCE is similar in spirit to the recent construction of trapdoor functions by Garg and Hajiabadi (CRYPTO '18).
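The equivocation property described in the abstract above, opening a dummy ciphertext to any message later, can be illustrated purely as an analogy by the one-time pad, which is trivially "non-committing" in a symmetric, one-time sense. This sketch is not the paper's DDH-based construction; all names are hypothetical.

```python
import secrets

MSG_BITS = 128

def simulate_ciphertext():
    """Simulator: output a dummy ciphertext, a uniformly random string."""
    return secrets.randbits(MSG_BITS)

def equivocate(dummy_ct, message):
    """Later, 'explain' the dummy ciphertext as an encryption of `message`
    by revealing the key k = ct XOR m (so that m XOR k == ct)."""
    return dummy_ct ^ message

def encrypt(key, message):
    """Real one-time-pad encryption."""
    return key ^ message

dummy = simulate_ciphertext()
for m in (0, 1, 2**64 - 1):
    k = equivocate(dummy, m)
    assert encrypt(k, m) == dummy  # the revealed key explains dummy as Enc(m)

# Ciphertext rate (ciphertext length / message length) of the pad is exactly 1.
```

The interesting question settled by the works above is how close a genuinely public-key NCE scheme can get to this rate-1 behavior.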
Constant Ciphertext-Rate Non-committing Encryption from Standard Assumptions
Non-committing encryption (NCE) is a type of public key encryption which comes with the ability to equivocate ciphertexts to encryptions of arbitrary messages, i.e., it allows one to find coins for key generation and encryption which “explain” a given ciphertext as an encryption of any message. NCE is the cornerstone to construct adaptively secure multiparty computation [Canetti et al. STOC’96] and can be seen as the quintessential notion of security for public key encryption to realize ideal communication channels.
A large body of literature investigates what is the best message-to-ciphertext ratio (i.e., the rate) that one can hope to achieve for NCE. In this work we propose a near-complete resolution to this question and we show how to construct NCE with constant rate in the plain model from a variety of assumptions, such as the hardness of the learning with errors (LWE), the decisional Diffie-Hellman (DDH), or the quadratic residuosity (QR) problem. Prior to our work, constructing NCE with constant rate required a trusted setup and indistinguishability obfuscation [Canetti et al. ASIACRYPT'17].
Two-Round Adaptively Secure MPC from Isogenies, LPN, or CDH
We present a new framework for building round-optimal (two-round) secure MPC. We show that a relatively weak notion of OT, which we call r-iOT, is enough to build two-round, adaptively secure MPC against adversaries in the CRS model. We then show how to construct r-iOT from CDH, LPN, or isogeny-based assumptions that can be viewed as group actions (such as CSIDH and CSI-FiSh). This yields the first constructions of two-round adaptively secure MPC against malicious adversaries from CDH, LPN, or isogeny-based assumptions. We further extend our non-isogeny results to the plain model, achieving (to our knowledge) the first construction of two-round adaptively secure MPC against semi-honest adversaries in the plain model from LPN.
Our results allow us to build two-round adaptively secure MPC against malicious adversaries from essentially all of the well-studied assumptions in cryptography. In addition, our constructions from isogenies or LPN provide the first post-quantum alternatives to LWE-based constructions for round-optimal adaptively secure MPC. Along the way, we show that r-iOT also implies non-committing encryption (NCE), thereby yielding the first constructions of NCE from isogenies or LPN.
On Foundations of Protecting Computations
Information technology systems have become indispensable to uphold our way of living, our economy and our safety. Failure of these systems can have devastating effects. Consequently, securing these systems against malicious intentions deserves our utmost attention.
Cryptography provides the necessary foundations for that purpose. In particular, it provides a set of building blocks which allow one to secure larger information systems. Furthermore, cryptography develops concepts and techniques towards realizing these building blocks. The protection of computations is one invaluable concept for cryptography which paves the way towards realizing a multitude of cryptographic tools. In this thesis, we contribute to this concept of protecting computations in several ways.
Protecting computations of probabilistic programs. An indistinguishability obfuscator (IO) compiles (deterministic) code such that it becomes provably unintelligible. This can be viewed as the ultimate way to protect (deterministic) computations. Due to very recent research, such obfuscators enjoy plausible candidate constructions.
In certain settings, however, it is necessary to protect probabilistic computations. The only known construction of an obfuscator for probabilistic programs is due to Canetti, Lin, Tessaro, and Vaikuntanathan (TCC 2015) and requires an indistinguishability obfuscator which satisfies extreme security guarantees. We improve this construction and thereby reduce the requirements on the security of the underlying indistinguishability obfuscator. (Agrikola, Couteau, and Hofheinz, PKC 2020)
Protecting computations in cryptographic groups. To facilitate the analysis of building blocks which are based on cryptographic groups, these groups are often overidealized such that computations in the group are protected from the outside. Such overidealizations make it possible to prove secure building blocks that are sometimes beyond the reach of standard model techniques. However, these overidealizations are subject to certain impossibility results. Recently, Fuchsbauer, Kiltz, and Loss (CRYPTO 2018) introduced the algebraic group model (AGM) as a relaxation which is closer to the standard model but in several aspects preserves the power of said overidealizations. However, their model still suffers from implausibilities. We develop a framework which allows one to transport several security proofs from the AGM into the standard model, thereby evading the above implausibility results, and we instantiate this framework using an indistinguishability obfuscator. (Agrikola, Hofheinz, and Kastner, EUROCRYPT 2020)
Protecting computations using compression. Perfect compression algorithms have the property that the compressed distribution is truly random, leaving no room for any further compression. This property is invaluable for several cryptographic applications such as “honey encryption” or password-authenticated key exchange. However, perfect compression algorithms only exist for a very small number of distributions. We relax the notion of compression and rigorously study the resulting notion, which we call “pseudorandom encodings”. As a result, we identify various surprising connections between seemingly unrelated areas of cryptography. In particular, we derive novel results for adaptively secure multi-party computation, which allows for protecting computations in distributed settings. Furthermore, we instantiate the weakest version of pseudorandom encodings which suffices for adaptively secure multi-party computation using an indistinguishability obfuscator. (Agrikola, Couteau, Ishai, Jarecki, and Sahai, TCC 2020)