    Adaptively Secure MPC with Sublinear Communication Complexity

    A central challenge in the study of MPC is to balance security guarantees, hardness assumptions, and the resources required by the protocol. In this work, we study the cost of tolerating adaptive corruptions in MPC protocols under various corruption thresholds. In the strongest setting, we consider adaptive corruptions of an arbitrary number of parties (potentially all) and achieve the following results: (1) A two-round secure function evaluation (SFE) protocol in the CRS model, assuming LWE and indistinguishability obfuscation (iO). The communication, the CRS size, and the online computation are sublinear in the size of the function. The iO assumption can be replaced by secure erasures. Previous results required either the communication or the CRS size to be polynomial in the function size. (2) Under the same assumptions, we construct a Bob-optimized 2PC (where Alice talks first, Bob second, and Alice learns the output). That is, the communication complexity and the total computation of Bob are sublinear in the function size and in Alice's input size. We prove the impossibility of Alice-optimized protocols. (3) Assuming LWE, we bootstrap adaptively secure NIZK arguments to achieve proof size sublinear in the circuit size of the NP-relation. On a technical level, our results are based on laconic function evaluation (LFE) (Quach, Wee, and Wichs, FOCS '18) and shed light on an interesting duality between LFE and FHE. Next, we analyze adaptive corruptions of all-but-one of the parties and show a two-round SFE protocol in the threshold-PKI model (where keys of a threshold FHE scheme are pre-shared among the parties) with communication complexity sublinear in the circuit size, assuming LWE and NIZK. Finally, we consider the honest-majority setting and show a two-round SFE protocol with guaranteed output delivery under the same constraints. Our results highlight that the asymptotic cost of adaptive security can be reduced to be comparable to, and in many settings almost match, that of static security, with only a small sacrifice in concrete round complexity and asymptotic communication complexity.
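
    As an illustration of the primitive these results build on, the sketch below mocks the laconic function evaluation (LFE) interface in Python: a short digest of a large function is published, inputs are encrypted to that digest, and only the function holder can decrypt, learning f(x). This toy is deliberately insecure (the "ciphertext" hides nothing) and shows only the syntax and information flow, not the sublinearity or security of the Quach-Wee-Wichs scheme; all names here are illustrative.

```python
import hashlib, pickle

# Toy, INSECURE mock of the laconic function evaluation (LFE) interface
# (Quach, Wee, and Wichs, FOCS '18). It illustrates only the syntax and
# who sends what; a real scheme hides x beyond f(x) and keeps the digest
# and ciphertext sublinear in |f|.

def crs_gen(params):
    return {"params": params}                 # public common reference string

def compress(crs, f_table):
    # Short digest of a (large) function, here given as a truth table.
    return hashlib.sha256(pickle.dumps(f_table)).hexdigest()

def enc(crs, digest, x):
    # Encrypt input x "to the digest"; a real scheme would hide x here.
    return {"digest": digest, "payload": x}   # INSECURE placeholder

def dec(crs, f_table, ct):
    # Only the holder of the full function can decrypt, and learns f(x).
    assert compress(crs, f_table) == ct["digest"], "digest mismatch"
    return f_table[ct["payload"]]

# Bob compresses a big function; Alice encrypts her input to the digest.
crs = crs_gen("toy")
f = {x: x * x % 97 for x in range(96)}        # stand-in for a large circuit
digest = compress(crs, f)                     # short message to Alice
ct = enc(crs, digest, 42)                     # short message back to Bob
assert dec(crs, f, ct) == f[42]
```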

    The paradigm of partial erasures

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 137-145). This thesis is a study of erasures in cryptographic protocols. Erasing old data and keys is an important capability of honest parties in cryptographic protocols. It is useful in many settings, including proactive security in the presence of a mobile adversary, adaptive security in the presence of an adaptive adversary, forward security, and intrusion resilience. Some of these settings, such as proactive security, are provably impossible to achieve without some form of erasures. Others, such as designing protocols that are secure against adaptive adversaries, are much simpler to achieve when erasures are allowed. Protocols for all these contexts typically assume the ability to perfectly erase information. Unfortunately, as amply demonstrated in the systems literature, perfect erasures are hard to implement in practice. We propose a model of imperfect or partial erasures where erasure instructions are only partially effective and leave almost all the data intact, thus giving the honest parties only a limited capability to dispose of old data. Nonetheless, we show how to design protocols for all of the above settings (including proactive security, adaptive security, forward security, and intrusion resilience) for which this weak form of erasures suffices. We do not have to invent entirely new protocols; rather, we show how to automatically modify protocols relying on perfect erasures into ones for which partial erasures suffice. Stated most generally, we provide a compiler that transforms any protocol relying on perfect erasures for security into one with the same functionality that remains secure even if the erasures are only partial. The key idea is a new redundant representation of secret data which can still be computed on, and yet is rendered useless when partially erased. We prove that any such compiler must incur a cost in additional storage, and that our compiler is near optimal in terms of its storage overhead. We also give computationally more efficient compilers for a number of special cases: (1) when all the computations on secrets can be done in constant parallel time (NC⁰); (2) for a class of proactive secret sharing protocols where we leave the protocol intact except for changing the representation of the shares of the secret and the instructions that modify the shares (to correspondingly modify the new representation instead). By Dah-Yoh Lim. Ph.D.
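
    To make the key idea concrete, the following toy Python sketch shows one way a partially-erasable representation can look (an illustration under stated assumptions, not the thesis's construction): a short secret is derived from a long random blob, with SHA-256 standing in for a randomness extractor. Destroying even a small fraction of the blob removes enough entropy to make the secret unrecoverable, and the price is the extra storage for the blob.

```python
import os, hashlib

# Toy sketch of a partially-erasable representation. SHA-256 models a
# randomness extractor (an assumption of this sketch). A partial erasure
# that destroys only 10% of the stored blob already makes the derived
# secret unrecoverable, at the cost of storing far more than |secret|.

BLOCKS, BLOCK_SIZE = 64, 32          # 2 KiB of storage for a 32-byte secret

def fresh_representation():
    return [bytearray(os.urandom(BLOCK_SIZE)) for _ in range(BLOCKS)]

def derive_secret(rep):
    h = hashlib.sha256()
    for block in rep:
        h.update(block)
    return h.digest()                # the short secret actually used

def partial_erase(rep, fraction=0.1):
    # Overwrite only a `fraction` of every block: an erasure that is only
    # partially effective and leaves almost all of the data intact.
    k = max(1, int(BLOCK_SIZE * fraction))
    for block in rep:
        block[:k] = b"\x00" * k

rep = fresh_representation()
secret = derive_secret(rep)
partial_erase(rep)                   # 90% of the representation survives...
assert derive_secret(rep) != secret  # ...yet the secret cannot be recomputed,
# and the ~1500 destroyed random bits leave it unpredictable to an attacker
# who later reads the remaining storage.
```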

    On Black-Box Complexity and Adaptive, Universal Composability of Cryptographic Tasks

    Two main goals of modern cryptography are to identify the minimal assumptions necessary to construct secure cryptographic primitives and to construct secure protocols in strong and realistic adversarial models. In this thesis, we address both of these fundamental questions. In the first part of this thesis, we present results on the black-box complexity of two basic cryptographic primitives: non-malleable encryption and optimally-fair coin tossing. Black-box reductions are reductions in which both the underlying primitive and the adversary are accessed only in an input-output (or black-box) manner. Most known cryptographic reductions are black-box. Moreover, black-box reductions are typically more efficient than non-black-box reductions. Thus, the black-box complexity of cryptographic primitives is a meaningful and important area of study which allows us to gain insight into the primitive. We study the black-box complexity of non-malleable encryption and optimally-fair coin tossing, showing a positive result for the former and a negative one for the latter. Non-malleable encryption is a strong security notion for public-key encryption, guaranteeing that it is impossible to "maul" a ciphertext of a message m into a ciphertext of a related message. This security guarantee is essential for many applications such as auctions. We show how to transform, in a black-box manner, any public-key encryption scheme satisfying a weak form of security, namely semantic security, into a scheme satisfying non-malleability. Coin tossing is perhaps the most basic cryptographic primitive, allowing two distrustful parties to flip a coin whose outcome is 0 or 1 with probability 1/2. A fair coin tossing protocol is one in which the output bit is unbiased, even in the case where one of the parties may abort early. However, in the setting where parties may abort early, there is always a strategy for one of the parties to impose bias of Omega(1/r) in an r-round protocol. Thus, achieving bias of O(1/r) in r rounds is optimal, and it was recently shown that optimally-fair coin tossing can be achieved via a black-box reduction to oblivious transfer. We show that it cannot be achieved via a black-box reduction to one-way functions, unless the number of rounds is at least Omega(n/log n), where n is the input/output length of the one-way function. In the second part of this thesis, we present protocols for multiparty computation (MPC) in the Universal Composability (UC) model that are secure against malicious, adaptive adversaries. In the standard model, security is only guaranteed in a stand-alone setting; nothing is guaranteed when multiple protocols are arbitrarily composed. In contrast, the UC model, introduced by Canetti (2000), considers the execution of an unbounded number of concurrent protocols in an arbitrary, adversarially controlled network environment. Another drawback of the standard model is that the adversary must decide which parties to corrupt before the execution of the protocol commences. A more realistic model allows the adversary to adaptively choose which parties to corrupt based on its evolving view during the protocol. In our work, we consider the adaptive UC model, which combines these two security requirements by allowing both arbitrary composition of protocols and adaptive corruption of parties. 
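
    As a worked example of the fairness obstacle (a classic textbook attack, not a result of this thesis): in a commit-and-reveal coin flip, the party who learns the coin first can abort whenever it dislikes the outcome. The Monte-Carlo sketch below, in Python, estimates the resulting constant bias when the honest party falls back to a fresh random bit on abort, which is why driving the bias down to O(1/r) over r rounds is nontrivial.

```python
import random

# Monte-Carlo estimate of the bias from an aborting adversary in a naive
# commit-and-reveal coin flip (a classic example, not from the thesis).
# On abort, the honest party outputs a fresh random bit.

def run_once(alice_wants=1):
    a = random.randrange(2)          # Alice commits to a random bit a
    b = random.randrange(2)          # Bob replies with a random bit b
    coin = a ^ b                     # Alice learns the coin first
    if coin == alice_wants:
        return coin                  # happy: open the commitment honestly
    return random.randrange(2)       # unhappy: abort; Bob outputs a random bit

trials = 200_000
ones = sum(run_once() for _ in range(trials))
print(f"P[output = 1] ~ {ones / trials:.3f}")   # ~0.75, i.e. constant bias 1/4
```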
In our first result, we introduce an improved, efficient construction of non-committing encryption (NCE) with optimal round complexity, from a weaker primitive that we introduce, called trapdoor-simulatable public-key encryption (PKE). NCE is a basic primitive necessary for constructing protocols secure under adaptive corruptions; in particular, it is used to construct oblivious transfer (OT) protocols secure against semi-honest, adaptive adversaries. Additionally, we show how to realize trapdoor-simulatable PKE from hardness of factoring Blum integers, thus achieving the first construction of NCE from hardness of factoring. In our second result, we present a compiler for transforming an OT protocol secure against a semi-honest, adaptive adversary into one that is secure against a malicious, adaptive adversary. Our compiler achieves security in the UC model, assuming access to an ideal commitment functionality, and improves over previous work achieving the same security guarantee in two ways: it uses black-box access to the underlying protocol and achieves a constant multiplicative overhead in the round complexity. Combining our two results with the work of (Ishai et al., 2008), we obtain the first black-box construction of UC and adaptively secure MPC from trapdoor-simulatable PKE and the ideal commitment functionality.
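
    The equivocation property at the heart of NCE can be illustrated with a one-time pad (a folklore observation, not the construction above): a simulator can publish a ciphertext first and later "open" it to any message by choosing the key after the fact. The difficulty the thesis addresses is obtaining this from public-key encryption. A minimal Python sketch:

```python
import os

# One-time-pad illustration of non-committing encryption (folklore, not
# the thesis's construction): the simulated ciphertext commits to nothing,
# since a consistent key can be chosen after the message is known.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Real world: the key is chosen first and determines the ciphertext.
m = b"attack at dawn!"
k = os.urandom(len(m))
ct = xor(m, k)
assert xor(ct, k) == m

# Simulated world: the ciphertext is chosen first; on corruption the
# simulator equivocates by deriving a key consistent with any message.
ct_sim = os.urandom(15)
def open_to(message):
    return xor(ct_sim, message)      # a key that "explains" ct_sim as message

for msg in (b"attack at dawn!", b"retreat at dusk"):
    assert xor(ct_sim, open_to(msg)) == msg
```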

    Born and Raised Distributively: Fully Distributed Non-Interactive Adaptively-Secure Threshold Signatures with Short Shares

    Threshold cryptography is a fundamental distributed computational paradigm for enhancing the availability and the security of cryptographic public-key schemes. It does so by dividing private keys into n shares handed out to distinct servers. In threshold signature schemes, a set of at least t+1 ≤ n servers is needed to produce a valid digital signature. Availability is assured by the fact that any subset of t+1 servers can produce a signature when authorized. At the same time, the scheme should remain robust (in the fault-tolerance sense) and unforgeable (cryptographically) against up to t corrupted servers; i.e., it adds quorum control to traditional cryptographic services and introduces redundancy. Originally, most practical threshold signature schemes have a number of drawbacks: they have been analyzed in a static corruption model (where the set of corrupted servers is fixed at the very beginning of the attack), they require interaction, they assume a trusted dealer in the key generation phase (so that the system is not fully distributed), or they suffer from certain overheads in terms of storage (large share sizes). In this paper, we construct practical fully distributed (the private key is born distributed), non-interactive schemes -- where the servers can compute their partial signatures without communication with other servers -- with adaptive security (i.e., the adversary corrupts servers dynamically based on its full view of the history of the system). Our schemes are very efficient in terms of computation, communication, and scalable storage (with private key shares of size O(1), where certain other solutions incur O(n) storage costs at each server). Unlike other adaptively secure schemes, our schemes are erasure-free (reliable erasure is a property that is hard to assure and hard to administer in actual systems). To the best of our knowledge, such a fully distributed, highly constrained scheme has been an open problem in the area. In particular, and of special interest, is the fact that Pedersen's traditional distributed key generation (DKG) protocol can be safely employed in the initial key generation phase when the system is born, although it is well known not to ensure uniformly distributed public keys. An advantage of this is that the protocol only takes one round optimistically (in the absence of faulty players).
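
    The availability property, that any t+1 of n servers suffice while t reveal nothing, rests on polynomial secret sharing. Below is a minimal Shamir t-of-n sketch in Python over a prime field; it shows only this sharing layer, not the paper's signature scheme, its non-interactive partial signatures, or the DKG.

```python
import random

# Minimal Shamir t-of-n secret sharing over a prime field: the algebra
# behind "any t+1 servers can reconstruct, t or fewer learn nothing".
# Only the sharing layer; not the paper's threshold signature scheme.

P = 2**61 - 1                                   # prime field modulus

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, poly(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

t, n = 2, 5
shares = share(123456789, t, n)
assert reconstruct(random.sample(shares, t + 1)) == 123456789
```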

    On Adaptively Secure Multiparty Computation with a Short CRS

    In the setting of multiparty computation, a set of mutually distrusting parties wish to securely compute a joint function of their private inputs. A protocol is adaptively secure if honest parties might get corrupted after the protocol has started. Recently (TCC 2015), three constant-round adaptively secure protocols were presented [CGP15, DKR15, GP15]. All three constructions assume that the parties have access to a common reference string (CRS) whose size depends on the function to compute, even when facing semi-honest adversaries. It is unknown whether constant-round adaptively secure protocols exist without assuming access to such a CRS. In this work, we study adaptively secure protocols which rely only on a short CRS that is independent of the function to compute. First, we raise a subtle issue relating to the usage of non-interactive non-committing encryption within security proofs in the UC framework, and explain how to overcome it. We demonstrate the problem in the security proof of the adaptively secure oblivious-transfer protocol from [CLOS02] and provide a complete proof of this protocol. Next, we consider the two-party setting where one of the parties has a polynomial-size input domain, yet the other has no constraints on its input. We show that, assuming the existence of adaptively secure oblivious transfer, every deterministic functionality can be computed with adaptive security in a constant number of rounds. Finally, we present a new primitive called non-committing indistinguishability obfuscation, and show that this primitive is complete for constructing adaptively secure protocols with round complexity independent of the function.
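
    To illustrate why a polynomial-size input domain helps (a sketch of the natural structure such a protocol can have, not necessarily the protocol of this paper): the unconstrained party tabulates the function over the other party's small domain, and a single 1-out-of-m oblivious transfer selects the right row. In the Python sketch below the OT is a mock ideal functionality; all security would rest on instantiating it with adaptive security.

```python
# Sketch: two-party evaluation when Bob's input domain is small. Alice
# tabulates f(x_A, y) for every y in Bob's domain; Bob selects his row
# via 1-out-of-m OT. `ideal_ot` is a mock ideal functionality, not a
# secure protocol.

def ideal_ot(table, choice):
    # Ideal 1-out-of-m OT: Bob learns table[choice] and nothing else;
    # Alice learns nothing about `choice`.
    return table[choice]

def two_party_eval(f, alice_input, bob_input, bob_domain):
    table = [f(alice_input, y) for y in bob_domain]   # Alice's one message
    return ideal_ot(table, bob_domain.index(bob_input))

f = lambda x, y: (x * y) % 7
assert two_party_eval(f, alice_input=5, bob_input=3,
                      bob_domain=list(range(4))) == (5 * 3) % 7
```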