On the Impossibility of Sender-Deniable Public Key Encryption
The primitive of deniable encryption was first introduced by Canetti et al. (CRYPTO, 1997). Deniable encryption is a regular public key encryption scheme with the added feature that after running the protocol honestly and transmitting a message, both Sender and Receiver may produce random coins showing that the transmitted ciphertext was an encryption of any message in the message space. Deniable encryption is a key tool for constructing incoercible protocols, since it allows a party to send one message and later provide apparent evidence to a coercer that a different message was sent. In addition, deniable encryption may be used to obtain \emph{adaptively}-secure multiparty computation (MPC) protocols and is secure under \emph{selective-opening} attacks.
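To make the security goal above concrete, the sketch below spells out the sender-deniability distinguishing experiment; the interface names (coin_len, encrypt, fake_coins, adversary) are our hypothetical placeholders, not the paper's notation.

# Sketch of the sender-deniability game: distinguish a real opening of m2
# from a fake opening of m2 for a ciphertext that actually encrypts m1.
# All scheme methods below are hypothetical placeholders.
import os, random

def deniability_experiment(scheme, m1: bytes, m2: bytes) -> bool:
    """Run the game once; return True if the adversary guesses correctly."""
    pk, _sk = scheme.keygen()
    b = random.randrange(2)
    if b == 0:
        r = os.urandom(scheme.coin_len)               # honest coins
        c = scheme.encrypt(pk, m2, r)                 # c really encrypts m2
    else:
        r_real = os.urandom(scheme.coin_len)
        c = scheme.encrypt(pk, m1, r_real)            # c encrypts m1 ...
        r = scheme.fake_coins(pk, c, m1, r_real, m2)  # ... but sender claims m2
    return scheme.adversary(pk, c, m2, r) == b        # want negligible advantage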
Different flavors such as sender-deniable and receiver-deniable encryption, where only the Sender or Receiver can produce fake random coins, have been considered.
Recently, several open questions regarding the feasibility of deniable encryption have been resolved (cf. O'Neill et al., CRYPTO, 2011; Bendlin et al., ASIACRYPT, 2011). A fundamental remaining open question is whether it is possible to construct sender-deniable encryption schemes with super-polynomial security, where an adversary has negligible advantage in distinguishing real and fake openings.
The primitive of simulatable public key encryption (PKE), introduced by Damgård and Nielsen (CRYPTO, 2000), is a public key encryption scheme with additional properties that allow oblivious sampling of public keys and ciphertexts. It is one of the low-level primitives used to construct adaptively-secure MPC protocols and was used by O'Neill et al. in their construction of bi-deniable encryption in the multi-distributional model (CRYPTO, 2011). Moreover, the original construction of sender-deniable encryption with polynomial security given by Canetti et al. can be instantiated with simulatable PKE. Thus, a natural question to ask is whether it is possible to construct sender-deniable encryption with \emph{super-polynomial security} from simulatable PKE.
In this work, we investigate the possibility of constructing sender-deniable public key encryption from the primitive of simulatable PKE in a black-box manner. We show that, in fact, there is no black-box construction of sender-deniable encryption with super-polynomial security from simulatable PKE. This indicates that the original construction of sender-deniable public key encryption given by Canetti et al. is in some sense optimal, since improving on it will require the use of non-black-box techniques, stronger underlying assumptions, or interaction.
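For context, the simulatable PKE primitive named above can be summarized by its interface; the method names are our placeholder rendering of the standard algorithms, not code from either paper.

# Interface sketch (our naming) for simulatable PKE: ordinary PKE plus
# oblivious sampling of public keys and ciphertexts, with "inverting"
# algorithms that explain honestly generated values as obliviously sampled.
class SimulatablePKE:
    def keygen(self): ...                          # -> (pk, sk)
    def encrypt(self, pk, m, coins): ...           # -> ciphertext
    def decrypt(self, sk, c): ...                  # -> message

    def oblivious_sample_pk(self, coins): ...      # pk without learning sk
    def oblivious_sample_ct(self, pk, coins): ...  # ciphertext without a message
    def invert_pk(self, pk, sk): ...               # coins "explaining" pk
    def invert_ct(self, pk, c, m, coins): ...      # coins "explaining" c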
On Black-Box Complexity and Adaptive, Universal Composability of Cryptographic Tasks
Two main goals of modern cryptography are to identify the minimal assumptions necessary to construct secure cryptographic primitives, and to construct secure protocols in strong and realistic adversarial models. In this thesis, we address both of these fundamental questions.
In the first part of this thesis, we present results on the black-box complexity of two basic cryptographic primitives: non-malleable encryption and optimally-fair coin tossing. Black-box reductions are reductions in which both the underlying primitive and the adversary are accessed only in an input-output (or black-box) manner. Most known cryptographic reductions are black-box. Moreover, black-box reductions are typically more efficient than non-black-box ones. Thus, the black-box complexity of cryptographic primitives is a meaningful and important area of study which allows us to gain insight into the primitive. We study the black-box complexity of non-malleable encryption and optimally-fair coin tossing, showing a positive result for the former and a negative one for the latter.
Non-malleable encryption is a strong security notion for public-key encryption, guaranteeing that it is impossible to "maul" a ciphertext of a message m into a ciphertext of a related message. This security guarantee is essential for many applications such as auctions. We show how to transform, in a black-box manner, any public-key encryption scheme satisfying a weak form of security, semantic security, into a scheme satisfying non-malleability.
Coin tossing is perhaps the most basic cryptographic primitive, allowing two distrustful parties to flip a coin whose outcome is 0 or 1 with probability 1/2. A fair coin-tossing protocol is one in which the output bit is unbiased, even when one of the parties may abort early. However, in the setting where parties may abort early, there is always a strategy for one of the parties to impose bias of $\Omega(1/r)$ in an $r$-round protocol. Thus, achieving bias of $O(1/r)$ in $r$ rounds is optimal, and it was recently shown that optimally-fair coin tossing can be achieved via a black-box reduction to oblivious transfer. We show that it cannot be achieved via a black-box reduction to one-way functions, unless the number of rounds is at least $\Omega(n/\log n)$, where $n$ is the input/output length of the one-way function.
In the second part of this thesis, we present protocols for multiparty computation (MPC) in the Universal Composability (UC) model that are secure against malicious, adaptive adversaries. In the standard model, security is only guaranteed in a stand-alone setting; nothing is guaranteed when multiple protocols are arbitrarily composed. In contrast, the UC model, introduced by (Canetti, 2000), considers the execution of an unbounded number of concurrent protocols in an arbitrary, adversarially controlled network environment. Another drawback of the standard model is that the adversary must decide which parties to corrupt before the execution of the protocol commences. A more realistic model allows the adversary to adaptively choose which parties to corrupt based on its evolving view during the protocol. In our work, we consider the adaptive UC model, which combines these two security requirements by allowing both arbitrary composition of protocols and adaptive corruption of parties.
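As a textbook illustration of the "mauling" attacks discussed above (not a scheme from the thesis), XOR-based encryption is maximally malleable: flipping ciphertext bits flips the same plaintext bits, so an attacker who knows or guesses part of the plaintext can convert an encryption of one message into an encryption of a related one without learning the key.

# Classic malleability of XOR-based (one-time-pad-style) encryption.
# Illustration only; non-malleable encryption is designed to rule this out.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(16)
m = b"bid: $100 please"
c = xor_bytes(m, key)                               # "encrypt" m

# Attacker guesses the original plaintext and retargets it, touching only c:
delta = xor_bytes(b"bid: $100 please", b"bid: $999 please")
c_mauled = xor_bytes(c, delta)

assert xor_bytes(c_mauled, key) == b"bid: $999 please"  # related message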
In our first result, we introduce an improved, efficient construction of non-committing encryption (NCE) with optimal round complexity, from a weaker primitive we introduce called trapdoor-simulatable public key encryption (PKE). NCE is a basic primitive necessary to construct protocols secure under adaptive corruptions and in particular, is used to construct oblivious transfer (OT) protocols secure against semi-honest, adaptive adversaries. Additionally, we show how to realize trapdoor-simulatable PKE from hardness of factoring Blum integers, thus achieving the first construction of NCE from hardness of factoring. In our second result, we present a compiler for transforming an OT protocol secure against a semi-honest, adaptive adversary into one that is secure against a malicious, adaptive adversary. Our compiler achieves security in the UC model, assuming access to an ideal commitment functionality, and improves over previous work achieving the same security guarantee in two ways: it uses black-box access to the underlying protocol and achieves a constant multiplicative overhead in the round complexity. Combining our two results with the work of (Ishai et al., 2008), we obtain the first black-box construction of UC and adaptively secure MPC from trapdoor-simulatable PKE and the ideal commitment functionality.
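Non-committing encryption, central to this part, is easiest to see through its simulator interface; the sketch below uses our own placeholder method names, not code from the thesis.

# Interface sketch (our naming) for non-committing encryption (NCE):
# beyond ordinary (keygen, encrypt, decrypt), a simulator can create a
# message-independent dummy transcript and later open it to any message
# by producing consistent-looking sender and receiver coins.
class NCE:
    def keygen(self): ...                  # -> (pk, sk)
    def encrypt(self, pk, m, coins): ...   # -> ciphertext
    def decrypt(self, sk, c): ...          # -> message

    def simulate(self): ...                # -> (pk, c, trapdoor); no message used
    def open(self, trapdoor, m): ...       # -> (sender_coins, receiver_coins) for m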
Limits to Non-Malleability
There have been many successes in constructing explicit non-malleable codes for various classes of tampering functions in recent years, and strong existential results are also known. In this work we ask the following question:
When can we rule out the existence of a non-malleable code for a tampering class $\mathcal{F}$?
First, we start with some classes where positive results are well-known, and show that when these classes are extended in a natural way, non-malleable codes are no longer possible. Specifically, we show that no non-malleable codes exist for any of the following tampering classes:
- Functions that change d/2 symbols, where d is the distance of the code;
- Functions where each input symbol affects only a single output symbol;
- Functions where each of the n output bits is a function of n-log n input bits.
Furthermore, we rule out constructions of non-malleable codes for certain classes $\mathcal{F}$ via reductions to the assumption that a distributional problem is hard for $\mathcal{F}$, that make black-box use of the tampering functions in the proof. In particular, this yields concrete obstacles for the construction of efficient codes for $\mathsf{NC}$, even assuming average-case variants of $\mathsf{P} \not\subseteq \mathsf{NC}$.
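As a toy illustration of membership in the second class above (ours, not a construction from the paper), here are two tampering functions in which each input symbol affects only a single output symbol:

# Each input symbol affects only a single output symbol (toy examples).
def flip_each_bit(codeword):
    # output bit i depends only on input bit i
    return [1 - b for b in codeword]

def rewire(codeword, pi):
    # input symbol j feeds only output position pi[j] (pi a permutation)
    out = [0] * len(codeword)
    for j, b in enumerate(codeword):
        out[pi[j]] = b
    return out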
Approximate resilience, monotonicity, and the complexity of agnostic learning
A function $f$ is $d$-resilient if all its Fourier coefficients of degree at most $d$ are zero, i.e., $f$ is uncorrelated with all low-degree parities. We study the notion of \emph{approximate resilience} of Boolean functions, where we say that $f$ is $\alpha$-approximately $d$-resilient if $f$ is $\alpha$-close to a $[-1,1]$-valued $d$-resilient function in $\ell_1$ distance. We show that approximate resilience essentially characterizes the complexity of agnostic learning of a concept class $C$ over the uniform distribution. Roughly speaking, if all functions in a class $C$ are far from being $d$-resilient then $C$ can be learned agnostically in time $n^{O(d)}$ and, conversely, if $C$ contains a function close to being $d$-resilient then agnostic learning of $C$ in the statistical query (SQ) framework of Kearns has complexity of at least $n^{\Omega(d)}$. This characterization is based on the duality between $\ell_1$ approximation by degree-$d$ polynomials and approximate $d$-resilience that we establish. In particular, it implies that $\ell_1$ approximation by low-degree polynomials, known to be sufficient for agnostic learning over product distributions, is in fact necessary.
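To fix notation (standard Fourier analysis of Boolean functions; our summary rather than a quotation from the paper): for $f:\{-1,1\}^n \to \{-1,1\}$,
\[
\hat{f}(S) = \mathbb{E}_{x \sim \{-1,1\}^n}\Big[f(x)\prod_{i \in S} x_i\Big], \qquad f \text{ is } d\text{-resilient} \iff \hat{f}(S) = 0 \ \text{ for all } |S| \le d,
\]
and $f$ is $\alpha$-approximately $d$-resilient if there exists a $d$-resilient $g:\{-1,1\}^n \to [-1,1]$ with $\mathbb{E}_x\,|f(x)-g(x)| \le \alpha$.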
Focusing on monotone Boolean functions, we exhibit the existence of near-optimal $\alpha$-approximately $\widetilde{\Omega}(\alpha\sqrt{n})$-resilient monotone functions for all $\alpha > 0$. Prior to our work, it was conceivable even that every monotone function is $\Omega(1)$-far from any $1$-resilient function. Furthermore, we construct simple, explicit monotone functions based on $\mathsf{Tribes}$ and $\mathsf{CycleRun}$ that are close to highly resilient functions. Our constructions are based on a fairly general resilience analysis and amplification. These structural results, together with the characterization, imply nearly optimal lower bounds for agnostic learning of monotone juntas.
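The Tribes function mentioned above has a standard definition (an OR of ANDs over disjoint blocks); a small sketch, with the block width left as a parameter rather than the paper's tuned choice:

# Standard monotone Tribes function: OR of ANDs over disjoint width-w blocks.
def tribes(bits, w):
    assert len(bits) % w == 0
    blocks = [bits[i:i + w] for i in range(0, len(bits), w)]
    return any(all(block) for block in blocks)

# Example: 6 inputs, width 3 -> OR(AND(x1,x2,x3), AND(x4,x5,x6))
print(tribes([True, True, True, False, False, True], 3))  # True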
Non-Malleable Codes for Small-Depth Circuits
We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by small-depth circuits. For constant-depth circuits of polynomial size (i.e. $\mathsf{AC}^0$ tampering functions), our codes have codeword length $k^{1+o(1)}$ for a $k$-bit message. This is an exponential improvement of the previous best construction due to Chattopadhyay and Li (STOC 2017), which had codeword length $2^{O(\sqrt{k})}$. Our construction remains efficient for circuit depths as large as $\Theta(\log(n)/\log\log(n))$ (indeed, our codeword length remains $k^{1+o(1)}$), and extending our result beyond this would require separating $\mathsf{NC}^1$ from $\mathsf{P}$.
We obtain our codes via a new efficient non-malleable reduction from
small-depth tampering to split-state tampering. A novel aspect of our work is
the incorporation of techniques from unconditional derandomization into the
framework of non-malleable reductions. In particular, a key ingredient in our
analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC
2013), a derandomization of the influential switching lemma from circuit
complexity; the randomness-efficiency of this switching lemma translates into
the rate-efficiency of our codes via our non-malleable reduction.
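Split-state tampering, the target class of the reduction above, is simple to state; a minimal sketch (ours, not the paper's code):

# Split-state tampering: the codeword is stored as two halves, and the
# adversary applies two independent functions f and g, one per half,
# with no communication between them.
def split_state_tamper(left, right, f, g):
    return f(left), g(right)

# Example: flip every bit of the left state, leave the right untouched.
tampered = split_state_tamper(
    b"\x00\x01", b"\xff",
    lambda L: bytes(b ^ 0xFF for b in L),
    lambda R: R,
)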
Non-Malleable Codes for Bounded Polynomial-Depth Tampering
Non-malleable codes allow one to encode data in such a way that, after tampering, the modified codeword is guaranteed to decode to either the original message, or a completely unrelated one. Since the introduction of the notion by Dziembowski, Pietrzak, and Wichs (ICS '10 and J. ACM '18), a large body of work has focused on realizing such coding schemes secure against various classes of tampering functions. It is well known that there is no efficient non-malleable code secure against all polynomial size tampering functions. Nevertheless, non-malleable codes in the plain model (i.e., no trusted setup) secure against bounded polynomial size tampering are not known, and obtaining such a code has been a major open problem.
We present the first construction of a non-malleable code secure against polynomial size tampering functions that have bounded polynomial depth. This is an even larger class than all bounded polynomial size functions and, in particular, we capture all functions in non-uniform $\mathsf{NC}$ (and much more). Our construction is in the plain model (i.e., no trusted setup) and relies on several cryptographic assumptions such as keyless hash functions, time-lock puzzles, as well as other standard assumptions. Additionally, our construction has several appealing properties: the complexity of encoding is independent of the class of tampering functions and we obtain sub-exponentially small error.
New Techniques for Zero-Knowledge: Leveraging Inefficient Provers to Reduce Assumptions and Interaction
We present a transformation from NIZK with inefficient provers in the uniform random string (URS) model
to ZAPs (two message witness indistinguishable proofs) with inefficient provers.
While such a transformation was known for the case where the prover is efficient, the security
proof breaks down if the prover is inefficient.
Our transformation is obtained via new applications of Nisan-Wigderson designs, a combinatorial object originally
introduced in the derandomization literature.
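For reference, Nisan-Wigderson designs have a short standard definition (recalled here in our words, not quoted from the paper): a family of sets $S_1,\dots,S_m \subseteq [d]$ is an $(n,k)$-design if $|S_i| = n$ for every $i$ and $|S_i \cap S_j| \le k$ for all $i \neq j$; that is, the sets are individually large but have small pairwise overlap.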
We observe that our transformation is applicable both in the setting of super-polynomial provers/poly-time adversaries, and in a new fine-grained setting, where the prover is polynomial time and the verifier/simulator/zero-knowledge distinguisher are in a lower complexity class, such as $\mathsf{NC}^1$.
We also present $\mathsf{NC}^1$-fine-grained NIZK in the URS model for all of $\mathsf{NP}$ from the worst-case assumption $\oplus L/\mathsf{poly} \not\subseteq \mathsf{NC}^1$.
Our techniques yield the following applications:
1. ZAPs for $\mathsf{AM}$ from Minicrypt assumptions (with super-polynomial time provers),
2. $\mathsf{NC}^1$-fine-grained ZAPs for $\mathsf{NP}$ from worst-case assumptions,
3. Protocols achieving an "offline" notion of NIZK (oNIZK) in the standard (no-CRS) model with uniform soundness in both the super-polynomial setting (from Minicrypt assumptions) and the $\mathsf{NC}^1$-fine-grained setting (from worst-case assumptions). The oNIZK notion is sufficient for use in indistinguishability-based proofs.
Locally Decodable and Updatable Non-Malleable Codes in the Bounded Retrieval Model
In a recent result, Dachman-Soled et al. (TCC '15) proposed a new notion called locally decodable and updatable non-malleable codes, which informally, provides the security guarantees of a non-malleable code while also allowing for efficient random access. They also considered locally decodable and updatable non-malleable codes that are leakage-resilient, allowing for adversaries who continually leak information in addition to tampering.
The bounded retrieval model (BRM) (cf. [Alwen et al., CRYPTO '09] and [Alwen et al., EUROCRYPT '10]) has been studied extensively in the setting of leakage resilience for cryptographic primitives. This threat model assumes that an attacker can learn information about the secret key, subject only to the constraint that the overall amount of leaked information is upper bounded by some value. The goal is then to construct cryptosystems whose secret key length grows with the amount of leakage, but whose runtime (assuming random access to the secret key) is independent of the leakage amount.
In this work, we combine the above two notions and construct locally decodable and updatable non-malleable codes in the split-state model, that are secure against bounded retrieval adversaries. Specifically, given leakage parameter $\ell$, we show how to construct an efficient, 3-split-state, locally decodable and updatable code (with CRS) that is secure against one-time leakage of any polynomial-time, 3-split-state leakage function whose output length is at most $\ell$, and one-time tampering via any polynomial-time, 3-split-state tampering function. The locality we achieve is polylogarithmic in the security parameter.
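The BRM efficiency goal described above (key length scales with the leakage bound, runtime does not) can be visualized with a toy sketch; this is our illustration, not the paper's construction.

# Toy BRM illustration (not the paper's construction): the secret key is a
# large array sized to the leakage bound, but each operation probes only a
# few random positions, so runtime is independent of the key length.
import os, random

LEAKAGE_BOUND = 10**6               # key length grows with allowed leakage
key = os.urandom(LEAKAGE_BOUND)     # stored once; accessed via random probes

def operation(num_probes=64):
    # touch only num_probes positions of the huge key
    return bytes(key[random.randrange(len(key))] for _ in range(num_probes))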
Breaking RSA Generically is Equivalent to Factoring, with Preprocessing
We investigate the relationship between the classical RSA and factoring problems when preprocessing is considered. In such a model, adversaries can use an unbounded amount of precomputation to produce an “advice” string to then use during the online phase, when a problem instance becomes known. Previous work (e.g., [Bernstein, Lange ASIACRYPT ’13]) has shown that preprocessing attacks significantly improve the runtime of the best-known factoring algorithms. Due to these improvements, we ask whether the relationship between factoring and RSA fundamentally changes when preprocessing is allowed. Specifically, we investigate whether there is a superpolynomial gap between the runtime of the best attack on RSA with preprocessing and on factoring with preprocessing.
Our main result rules this out with respect to algorithms in a careful adaptation of the generic ring model [Aggarwal and Maurer, Eurocrypt 2009] to the preprocessing setting. In particular, in this setting we show the existence of a factoring algorithm with polynomially related parameters, for any setting of RSA parameters.
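One direction of the classical (no-preprocessing) equivalence is elementary and helps frame the question: given the factorization of $N$, RSA is broken outright. A standard textbook computation (not the paper's generic-ring argument):

# Given the factors of N, breaking RSA is easy: recover the private
# exponent d from e and phi(N). (Textbook direction of the equivalence.)
p, q, e = 61, 53, 17
N = p * q
phi = (p - 1) * (q - 1)        # Euler's totient of N = pq
d = pow(e, -1, phi)            # e * d = 1 (mod phi(N)); needs Python 3.8+

m = 42
c = pow(m, e, N)               # RSA encryption of m
assert pow(c, d, N) == m       # decryption using the factored key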