
    A High-Assurance Evaluator for Machine-Checked Secure Multiparty Computation

    Secure Multiparty Computation (MPC) enables a group of n distrusting parties to jointly compute a function over private inputs. MPC guarantees correctness of the computation and confidentiality of the inputs as long as no more than a threshold t of the parties are corrupted. Proactive MPC (PMPC) addresses the stronger threat model of a mobile adversary that controls a changing set of parties (but only up to t at any instant) and may eventually corrupt all n parties over a long time. This paper takes a first stab at developing high-assurance implementations of (P)MPC. We formalize in EasyCrypt, a tool-assisted framework for building high-confidence cryptographic proofs, several abstract and reusable variations of secret sharing and of (P)MPC protocols built on them. Using those, we prove a series of abstract theorems for the proactive setting. We implement and perform computer-checked security proofs of concrete instantiations of the required (abstract) protocols in EasyCrypt. We also develop a new tool-chain to extract high-assurance executable implementations of protocols formalized and verified in EasyCrypt. Our tool-chain uses Why3 as an intermediate tool and enables us to extract executable code from our (P)MPC formalizations. We evaluate the extracted executables by comparing their performance to that of versions implemented manually using the Python-based Charm framework for prototyping cryptographic schemes. We argue that the small overhead of our high-assurance executables is a reasonable price to pay for the increased confidence in their correctness and security.
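    To make the secret-sharing building block concrete, here is a minimal Shamir-style sharing with a proactive refresh step, written in Python. The field size, function names, and thresholds are illustrative assumptions; this is a sketch of the general technique, not the EasyCrypt-verified protocols or the extracted code described above.

```python
# Minimal Shamir secret sharing with a proactive refresh step (illustrative
# sketch only; parameters and names are assumptions, not the paper's code).
import random

P = 2**127 - 1  # prime field modulus (assumption)

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the shares at x = 0 to recover the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P  # needs Python 3.8+
    return secret

def refresh(shares, t):
    """Proactive refresh: add a fresh sharing of zero so old shares become useless."""
    zero_shares = share(0, t, len(shares))
    return [(x, (y + z) % P) for (x, y), (_, z) in zip(shares, zero_shares)]

shares = share(42, t=2, n=5)
shares = refresh(shares, t=2)
assert reconstruct(shares[:3]) == 42
```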

    Communication-Efficient Proactive MPC for Dynamic Groups with Dishonest Majorities

    Secure multiparty computation (MPC) has recently been increasingly adopted to secure cryptographic keys in enterprises, cloud infrastructure, and cryptocurrency and blockchain-related settings such as wallets and exchanges. Using MPC in blockchains and other distributed systems highlights the need to consider dynamic settings. In such dynamic settings, parties, and potentially even the parameters of the underlying secret sharing and the corruption tolerance thresholds of sub-protocols, may change over the lifetime of the protocol. In particular, stronger threat models, in which mobile adversaries control a changing set of parties (up to t out of n involved parties at any instant) and may eventually corrupt all n parties over the course of a protocol's execution, are becoming increasingly important for such real-world deployments; secure protocols designed for such models are known as Proactive MPC (PMPC). In this work, we construct the first efficient PMPC protocol for dynamic groups (where the set of parties changes over time) secure against a dishonest majority of parties. Our PMPC protocol only requires O(n^2) (amortized) communication per secret, compared to existing PMPC protocols that require O(n^4) and only consider static groups with dishonest majorities. At the core of our PMPC protocol is a new efficient technique to perform multiplication of secret-shared data (shared using a bivariate scheme) with O(n√n) communication, secure against a dishonest majority and without requiring pre-computation. We also develop a new efficient bivariate batched proactive secret sharing (PSS) protocol for dishonest majorities, which may be of independent interest. This protocol enables multiple dealers to contribute different secrets that are efficiently shared together in one batch; previous batched PSS schemes required all secrets to come from a single dealer.
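    As background on the bivariate sharing mentioned above, the Python sketch below illustrates the basic idea: the dealer samples a symmetric bivariate polynomial F with F(0, 0) equal to the secret and hands party i the row polynomial F(i, ·), so pairwise consistency checks and ordinary Shamir shares (at y = 0) come for free. The modulus, degrees, and names are assumptions for illustration; the paper's batched, multi-dealer PSS protocol is considerably more involved.

```python
# Sketch of symmetric bivariate secret sharing (illustrative assumptions only).
import random

P = 2**61 - 1  # Mersenne prime modulus (assumption)

def random_symmetric_bivariate(secret, t):
    """Random symmetric degree-(t, t) polynomial F with F(0, 0) = secret."""
    coeffs = [[0] * (t + 1) for _ in range(t + 1)]
    for a in range(t + 1):
        for b in range(a, t + 1):
            coeffs[a][b] = coeffs[b][a] = random.randrange(P)
    coeffs[0][0] = secret
    return coeffs

def evaluate(coeffs, x, y):
    return sum(c * pow(x, a, P) * pow(y, b, P)
               for a, row in enumerate(coeffs)
               for b, c in enumerate(row)) % P

def deal(secret, t, n):
    """Party i's share is its row polynomial F(i, .), given here as point samples."""
    F = random_symmetric_bivariate(secret, t)
    return {i: [evaluate(F, i, y) for y in range(n + 1)] for i in range(1, n + 1)}

shares = deal(7, t=2, n=5)
assert shares[2][3] == shares[3][2]  # pairwise consistency: F(2, 3) == F(3, 2)
# shares[i][0] = F(i, 0) is an ordinary (univariate) Shamir share of the secret.
```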

    Composing Timed Cryptographic Protocols: Foundations and Applications

    Time-lock puzzles are unique cryptographic primitives that use computational complexity to keep information secret for some period of time, after which security expires. Unfortunately, current analysis techniques for time-lock primitives provide no sound mechanism to build multi-party cryptographic protocols that use expiring security as a building block. We explain in this paper that all other attempts at this subtle problem lack either composability, a fully consistent analysis, or functionality. The subtle flaws in the existing frameworks reduce to an impossibility result by Mahmoody et al., who showed that time-lock puzzles with super-polynomial gaps (between committer and solver) cannot be constructed from random oracles alone; yet the analyses of algebraic puzzles today still treat the solving process as if each step were a generic or random oracle. This paper presents a new complexity-theoretic framework and new structural theorems to analyze timed primitives with full generality and in composition (the central modular protocol design tool). The framework includes a model of security based on fine-grained circuit complexity, which we call residual complexity, that accounts for possible leakage on timed primitives as they expire. Our definitions for multi-party computation protocols generalize the literature standards by accounting for fine-grained polynomial circuit depth to model computational hardness that expires in feasible time. Our composition theorems incur degradation of (fine-grained) security as items are composed. In our framework, simulators are given a polynomial “budget” for how much time they spend, and in composition these polynomials interact. Finally, we demonstrate via a prototypical auction application how to apply our framework and theorems. For the first time, we show that it is possible to prove, in a way that is fully consistent and under falsifiable assumptions, properties of multi-party applications based on leaky, temporarily secure components.
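    For readers unfamiliar with the timed primitives being composed, the classic Rivest-Shamir-Wagner repeated-squaring time-lock puzzle is sketched below in Python with toy parameters. It is only a standard illustration of security that expires after a prescribed amount of sequential work; it is not the residual-complexity framework or the constructions analyzed in the paper, and the parameters are far too small for real use.

```python
# Toy Rivest-Shamir-Wagner time-lock puzzle: the solver must perform T
# sequential squarings mod N, while the generator shortcuts via phi(N).
# Illustrative sketch only; parameters are assumptions and far too small.
import math
import random

def gen_puzzle(secret, T, p, q):
    """Generator knows p and q, so it can reduce the exponent mod phi(N)."""
    N, phi = p * q, (p - 1) * (q - 1)
    x = random.randrange(2, N)
    while math.gcd(x, N) != 1:
        x = random.randrange(2, N)
    key = pow(x, pow(2, T, phi), N)      # fast path: x^(2^T) mod N via phi(N)
    return N, x, (secret + key) % N      # blind the secret with the key

def solve_puzzle(N, x, blinded, T):
    """Solver has no trapdoor: T squarings, inherently sequential."""
    y = x
    for _ in range(T):
        y = y * y % N
    return (blinded - y) % N

p, q = 1000003, 1000033                  # toy primes (assumption)
N, x, blinded = gen_puzzle(secret=1234, T=10_000, p=p, q=q)
assert solve_puzzle(N, x, blinded, T=10_000) == 1234
```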

    How Byzantine is a Send Corruption?

    Consensus protocols enable n parties, each holding some input string, to agree on a common output even in the presence of corrupted parties. While the problem is well understood in the classic byzantine setting, recent work has pushed to understand the problem when realistic types of failures are considered and a majority of parties may be corrupt. Garay and Perry consider a model with both byzantine and crash faults and develop a corruption-optimal protocol with perfect security tolerating t_c crash faults and t_b byzantine faults for n > t_c + 3t_b. Follow-up work by Zikas, Hauser, and Maurer extends the model by considering receive-corrupt parties that may not receive messages sent to them, and send-corrupt parties whose sent messages may be dropped. Otherwise, receive-corrupt and send-corrupt parties behave honestly, and their inputs and outputs are considered by the security definitions. Zikas, Hauser, and Maurer gave a perfectly secure, linear-round protocol for n > t_r + t_s + 3t_b, where t_r and t_s represent thresholds on the number of parties that are receive- or send-corrupted. In this paper we ask: what are the optimal thresholds in the cryptographic setting that can be tolerated with such mixes of corruptions and faults? We develop an expected-constant-round protocol tolerating n > t_r + 2t_s + 2t_b. We are unable to prove optimality of our protocol's corruption budget in the general case; however, when we constrain the adversary to either drop all or none of a sender's messages in a round, we prove our protocol achieves an optimal threshold of n > t_r + t_s + 2t_b. We denote this weakening of a send corruption a spotty send corruption. In light of this difference in corruption tolerance due to our weakening of a send corruption, we ask: how close (with respect to corruption thresholds) is a send corruption to a byzantine corruption? We provide a treatment of the difficulty of dealing with send corruptions in protocols with sublinear rounds. As an illustrative and surprising example (even though not in sublinear rounds), we show that the classical Dolev-Strong broadcast protocol degrades from tolerating n > t_b corruptions in the byzantine-only model to n > 2t_s + 2t_b when send-corrupt parties' outputs must be consistent with honest parties'; we also show why other recent dishonest-majority broadcast protocols degrade similarly. We leave open the question of optimal corruption tolerance for both send and byzantine corruptions.
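    Since the Dolev-Strong protocol is central to the degradation example above, a compact, idealized simulation of it is sketched below in Python. Signatures are modeled as unforgeable party-name tags, the network runs in lock step, and the `drop` parameter crudely models a send corruption; all names are assumptions, and this is a didactic sketch rather than the protocols or proofs in the paper.

```python
# Idealized Dolev-Strong broadcast over f+1 rounds (didactic sketch only).
# Signatures are modeled as tuples of party ids; drop(rnd, src, dst) -> True
# models a send-corrupted src whose message to dst is lost in round rnd.

def dolev_strong(n, f, sender, sender_value, drop=lambda rnd, src, dst: False):
    extracted = {p: set() for p in range(n)}          # values each party accepted
    # Round 0: the sender signs its value and sends it to everyone.
    inbox = {p: [(sender_value, (sender,))] for p in range(n)
             if not drop(0, sender, p)}
    for rnd in range(1, f + 2):
        next_inbox = {p: [] for p in range(n)}
        for p in range(n):
            for value, sigs in inbox.get(p, []):
                # Accept a value carrying rnd distinct signatures starting with the sender's.
                if sigs[0] == sender and len(set(sigs)) >= rnd and value not in extracted[p]:
                    extracted[p].add(value)
                    for q in range(n):                # forward with our own signature appended
                        if not drop(rnd, p, q):
                            next_inbox[q].append((value, sigs + (p,)))
        inbox = next_inbox
    # Output rule: decide the unique extracted value, otherwise a default.
    return [next(iter(extracted[p])) if len(extracted[p]) == 1 else "DEFAULT"
            for p in range(n)]

# A send-corrupt sender that drops its round-0 message to parties 1 and 2:
outputs = dolev_strong(n=4, f=1, sender=0, sender_value="x",
                       drop=lambda rnd, src, dst: rnd == 0 and src == 0 and dst in (1, 2))
print(outputs)
```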

    On the Hardness of Scheme-Switching Between SIMD FHE Schemes

    Fully homomorphic encryption (FHE) schemes are either lightweight and able to evaluate boolean circuits, or relatively heavyweight and able to evaluate arithmetic circuits on encrypted vectors, i.e., they perform single instruction, multiple data (SIMD) operations. SIMD FHE schemes can either perform exact modular arithmetic, in the case of the Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski-Fan-Vercauteren (BFV) schemes, or approximate arithmetic, in the case of the Cheon-Kim-Kim-Song (CKKS) scheme. While one can homomorphically switch between BGV/BFV and CKKS using the computationally expensive bootstrapping procedure, it is unknown how to switch between these schemes without bootstrapping. Finding methods for converting between these schemes that are more efficient than bootstrapping was stated as an open problem by Halevi and Shoup (Eurocrypt 2015). In this work, we provide strong evidence that homomorphic switching between BGV/BFV and CKKS is as hard as bootstrapping. In more detail, if one could efficiently switch between these SIMD schemes, then one could bootstrap these SIMD FHE schemes using a single call to a homomorphic scheme-switching algorithm, without applying homomorphic linear transformations. Thus, one cannot hope to obtain significant improvements to homomorphic scheme-switching without also significantly improving the state of the art for bootstrapping. As a secondary contribution, we also explore the relative hardness of computing homomorphic comparison in these same SIMD FHE schemes. We show that, given a comparison algorithm, one can bootstrap these schemes using a few calls to the comparison algorithm for typical parameter settings. While we focus on the comparison function in this work, the overall approach for demonstrating the relative hardness of computing specific functions homomorphically extends beyond comparison to other useful functions such as min/max or ReLU.
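    As background on why such a switch is delicate (these are standard textbook decryption invariants, not notation from this paper), the exact and approximate SIMD schemes place the message in different positions relative to the noise:

```latex
% BGV (exact, plaintext modulus p): the error is a multiple of p,
% so reducing modulo p strips it away and recovers m exactly.
[\langle \mathbf{c}, \mathbf{s} \rangle]_q \;=\; m + p \cdot e
\qquad\Longrightarrow\qquad
\mathrm{Dec}(\mathbf{c}) \;=\; [\langle \mathbf{c}, \mathbf{s} \rangle]_q \bmod p \;=\; m.

% CKKS (approximate, scaling factor \Delta): the scaled message and the error
% occupy the same coefficient, so decryption recovers z only up to e / \Delta.
[\langle \mathbf{c}, \mathbf{s} \rangle]_q \;=\; \Delta \cdot z + e
\qquad\Longrightarrow\qquad
\mathrm{Dec}(\mathbf{c}) \;\approx\; z + e / \Delta.
```

    Homomorphically moving a plaintext between these two layouts involves the kind of rounding and digit manipulation that makes bootstrapping expensive, which is roughly the intuition the paper's reduction makes precise.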

    Optimizing Registration Based Encryption

    The recent work of Garg et al. from TCC'18 introduced the notion of registration based encryption (RBE). The principal motivation behind RBE is to remove the key escrow problem of identity based encryption (IBE), where the IBE authority is trusted to generate private keys for all the users in the system. Although RBE has excellent asymptotic properties, it is currently impractical: in our estimate, the ciphertext size would be about 11 terabytes in an RBE deployment supporting 2 billion users. Motivated by this observation, our work attempts to reduce the concrete communication and computation cost of the current state-of-the-art construction. Our contribution is two-fold. First, we replace Merkle trees with crit-bit trees, a form of PATRICIA trie, without relaxing any of the original RBE efficiency requirements introduced by Garg et al. This change reduces the ciphertext size by 15% and the computation cost of decryption by 30%. Second, we observe that increasing RBE's public parameters by a few hundred kilobytes could reduce the ciphertext size by an additional 50%. Overall, our work decreases the ciphertext size by 57.5%.
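    To illustrate the data-structure swap, the Python sketch below shows the core crit-bit (PATRICIA-style) idea: every internal node stores only the index of the first bit where its two subtrees differ, keeping lookup paths short. The names, the fixed key length, and the bit-string representation are assumptions; the paper's construction additionally hashes nodes so membership paths can be verified, which is not shown here.

```python
# Minimal crit-bit trie over fixed-length bit strings (illustrative sketch).
class Leaf:
    def __init__(self, key):
        self.key = key                        # key: string of '0'/'1' characters

class Node:
    def __init__(self, crit, left, right):
        self.crit = crit                      # index of the first differing bit
        self.left, self.right = left, right   # left: bit is '0', right: bit is '1'

def lookup(node, key):
    while isinstance(node, Node):
        node = node.right if key[node.crit] == '1' else node.left
    return node.key == key

def insert(node, key):
    if node is None:
        return Leaf(key)
    probe = node                              # walk down to the nearest leaf
    while isinstance(probe, Node):
        probe = probe.right if key[probe.crit] == '1' else probe.left
    # The critical bit is the first position where the new key differs.
    crit = next((i for i, (a, b) in enumerate(zip(probe.key, key)) if a != b), None)
    if crit is None:
        return node                           # key already present
    def place(cur):
        if isinstance(cur, Node) and cur.crit < crit:
            if key[cur.crit] == '1':
                return Node(cur.crit, cur.left, place(cur.right))
            return Node(cur.crit, place(cur.left), cur.right)
        new_leaf = Leaf(key)
        return Node(crit, cur, new_leaf) if key[crit] == '1' else Node(crit, new_leaf, cur)
    return place(node)

root = None
for k in ["0110", "0111", "1010"]:
    root = insert(root, k)
assert lookup(root, "0111") and not lookup(root, "0001")
```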