
    Robust Secret Sharing Schemes Against Local Adversaries

    We study robust secret sharing schemes in which between one third and one half of the players are corrupted. In this setting, robust secret sharing is possible only with a share size larger than the secret, and only if a positive probability of reconstructing the wrong secret is allowed. In the standard model, it is known that at least $m+k$ bits per share are needed to robustly share a secret of bit-length $m$ with an error probability of $2^{-k}$; however, to the best of our knowledge, the efficient scheme that gets closest to this lower bound has share size $m+\widetilde{O}(n+k)$, where $n$ is the number of players in the scheme. We show that it is possible to obtain schemes with close to minimal share size in a model of local adversaries, i.e., one in which corrupt players cannot communicate between receiving their respective honest shares and submitting corrupted shares to the reconstruction procedure, but may coordinate before the execution of the protocol and can also gather information afterwards. In this limited adversarial model, we prove a lower bound of roughly $m+k$ bits on the minimal share size, which is (somewhat surprisingly) similar to the lower bound in the standard model, where much stronger adversaries are allowed. We then present an efficient secret sharing scheme that essentially meets our lower bound, thereby improving upon the best known constructions in the standard model by removing a linear dependence on the number of players. For our construction, we introduce a novel procedure that compiles an error-correcting code into a new randomized one with the following two properties: a single local portion of a codeword leaks no information about the encoded message itself, and any set of portions of a codeword reconstructs the message with error probability exponentially small in the set size.
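
    The schemes studied here build on threshold secret sharing of the kind introduced by Shamir. As a point of reference, below is a minimal sketch of the (non-robust) Shamir substrate over a prime field; the prime, threshold, and helper names are illustrative choices, not taken from the paper.

```python
# Toy Shamir sharing over GF(p); parameters are illustrative, not the paper's.
import random

p = 2**127 - 1  # a Mersenne prime, large enough for a toy field

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    # Share i is the degree-(t-1) polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, j, p) for j, c in enumerate(coeffs)) % p)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the sharing polynomial at x = 0."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = share(42, t=3, n=5)
assert reconstruct(shares[:3]) == 42  # any 3 of the 5 shares suffice
```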

    Nearly optimal robust secret sharing

    We prove that a known approach to improving Shamir's celebrated secret sharing scheme, i.e., adding an information-theoretic authentication tag to the secret, can make it robust for n parties against any collusion of size δn, for any constant δ ∈ (0, 1/2). This result holds in the so-called "non-rushing" model, in which the n shares are submitted simultaneously for reconstruction. We thus finally obtain a simple, fully explicit, and robust secret sharing scheme in this model that is essentially optimal in all parameters, including the share size, which is k(1 + o(1)) + O(κ), where k is the secret length and κ is the security parameter. As in Shamir's scheme, any set of more than δn honest parties can efficiently recover the secret. Using algebraic geometry codes instead of Reed–Solomon codes, the share length can be decreased to a constant (depending only on δ) while the number of shares n can grow independently. In this case, when n is large enough, the scheme satisfies the "threshold" requirement in an approximate sense: any set of δn(1 + ρ) honest parties, for arbitrarily small ρ > 0, can efficiently reconstruct the secret.
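
    The "known approach" the abstract refers to is authenticating the secret with an information-theoretic tag before sharing it. Below is a minimal sketch of one classic such tag, of the form t = a·s + b over GF(p); the field size and names are our illustrative assumptions, and the full robust scheme would Shamir-share the secret, the key, and the tag together rather than use them directly.

```python
# Information-theoretic one-time MAC: tag = a*s + b over GF(p).
# An adversary who does not know (a, b) forges a valid (secret, tag)
# pair with probability only 1/p, roughly 2^{-61} here.  Toy parameters.
import random

p = 2**61 - 1  # Mersenne prime; field size sets the forgery probability

def tag_key():
    return random.randrange(1, p), random.randrange(p)  # one-time key (a, b)

def tag(secret, key):
    a, b = key
    return (a * secret + b) % p

# In the scheme, (secret, tag) and the key material are Shamir-shared;
# at reconstruction, a candidate secret is accepted only if it verifies.
key = tag_key()
t = tag(42, key)
assert tag(42, key) == t   # the honest secret verifies
assert tag(43, key) != t   # a modified secret fails against the same tag
```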

    Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning

    Federated learning is a distributed framework for training machine learning models over data residing at mobile devices while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users; in particular, the overhead of the state-of-the-art protocols for secure model aggregation grows quadratically with the number of users. In this paper, we propose the first secure aggregation framework, named Turbo-Aggregate, that in a network with $N$ users achieves a secure aggregation overhead of $O(N\log N)$, as opposed to $O(N^2)$, while tolerating a user dropout rate of up to $50\%$. Turbo-Aggregate employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques for injecting aggregation redundancy in order to handle user dropouts while guaranteeing user privacy. We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linearly in the number of users, and provides up to a $40\times$ speedup over the state-of-the-art protocols with up to $N=200$ users. Our experiments also demonstrate the impact of model size and bandwidth on the performance of Turbo-Aggregate.
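
    For intuition on the additive-secret-sharing ingredient, below is a toy sketch of pairwise additive masking, the generic mechanism that lets a server learn only the aggregate of user updates. Note this illustrates the baseline idea, whose naive form has exactly the quadratic overhead the paper attacks, not Turbo-Aggregate's multi-group circular protocol; all names and parameters are ours.

```python
# Pairwise additive masking for secure aggregation (baseline sketch):
# each pair of users agrees on a random mask that cancels in the sum,
# so the server sees only masked updates yet recovers the exact total.
import random

p = 2**31 - 1
users = [3, 1, 4, 1, 5]   # each user's (toy, scalar) model update
n = len(users)
masks = {(i, j): random.randrange(p)
         for i in range(n) for j in range(i + 1, n)}

def masked_update(i):
    y = users[i]
    for j in range(n):
        if i < j:
            y += masks[(i, j)]   # add mask shared with a later user
        elif j < i:
            y -= masks[(j, i)]   # subtract mask shared with an earlier user
    return y % p

aggregate = sum(masked_update(i) for i in range(n)) % p
assert aggregate == sum(users) % p   # all pairwise masks cancel in the sum
```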

    Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments

    Decentralized systems are a subset of distributed systems in which multiple authorities control different components and no single authority is fully trusted by all. This implies that any component in a decentralized system is potentially adversarial. We review fifteen years of research on decentralization and privacy, and provide an overview of key systems, as well as key insights for designers of future systems. We show that decentralized designs can enhance privacy, integrity, and availability, but also require careful trade-offs in terms of system complexity, properties provided, and degree of decentralization. These trade-offs need to be understood and navigated by designers. We argue that a combination of insights from cryptography, distributed systems, and mechanism design, aligned with the development of adequate incentives, is necessary to build scalable and successful privacy-preserving decentralized systems.

    Quantum Cryptography Beyond Quantum Key Distribution

    Quantum cryptography is the art and science of exploiting quantum mechanical effects in order to perform cryptographic tasks. While the most well-known example of this discipline is quantum key distribution (QKD), there exist many other applications, such as quantum money, randomness generation, secure two- and multi-party computation, and delegated quantum computation. Quantum cryptography also studies the limitations and challenges resulting from quantum adversaries, including the impossibility of quantum bit commitment, the difficulty of quantum rewinding, and the definition of quantum security models for classical primitives. In this review article, aimed primarily at cryptographers unfamiliar with the quantum world, we survey the area of theoretical quantum cryptography, with an emphasis on the constructions and limitations beyond the realm of QKD.

    A distributed key establishment scheme for wireless mesh networks using identity-based cryptography

    In this paper, we propose a secure and efficient key establishment scheme designed with respect to the unique requirements of wireless mesh networks. Our security model is based on an identity-based key establishment scheme that does not rely on a trusted authority for private key operations. Rather, this task is performed by a collaboration of users: a threshold number of users come together in a coalition to generate the private key. We performed a simulative performance evaluation in order to show the effect of both the network size and the threshold value. The results show a trade-off between resiliency and efficiency: increasing the threshold value or the number of mesh nodes increases resiliency but negatively affects efficiency. For threshold values smaller than 8 and for between 40 and 100 mesh nodes, at least 90% of the mesh nodes can compute their private keys within at most 70 seconds. On the other hand, at threshold value 8, increasing the number of mesh nodes from 40 to 100 results in a 25% increase in the rate of successful private key generations.
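
    A toy sketch of the threshold idea described above: the master secret of the absent key-generation authority is Shamir-shared among mesh nodes, and any coalition of threshold size jointly derives a node's private key without ever reconstructing the master secret itself. We model the underlying group additively mod p and hash identities to integers; all names and parameters are illustrative assumptions, not the paper's pairing-based construction.

```python
# Threshold private key derivation without a trusted authority (toy model).
import hashlib, random

p = 2**61 - 1
t, n = 3, 7
master = random.randrange(p)                       # never held by any one node
coeffs = [master] + [random.randrange(p) for _ in range(t - 1)]
shares = {i: sum(c * pow(i, j, p) for j, c in enumerate(coeffs)) % p
          for i in range(1, n + 1)}                # node i holds shares[i]

def H(identity):  # toy hash-to-group of an identity string
    return int.from_bytes(hashlib.sha256(identity.encode()).digest(), 'big') % p

def partial_key(i, identity):  # node i's contribution to the private key
    return shares[i] * H(identity) % p

def combine(parts):  # Lagrange coefficients evaluated at x = 0
    key = 0
    for xi, yi in parts.items():
        lam = 1
        for xj in parts:
            if xj != xi:
                lam = lam * (-xj) * pow(xi - xj, -1, p) % p
        key = (key + lam * yi) % p
    return key

coalition = {i: partial_key(i, "node42@mesh") for i in (2, 5, 6)}  # any t nodes
assert combine(coalition) == master * H("node42@mesh") % p
```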

    Revisiting Shared Data Protection Against Key Exposure

    This paper sheds new light on secure data storage inside distributed systems. Specifically, it revisits computational secret sharing in a situation where the encryption key is exposed to an attacker. It makes several contributions. First, it defines a security model for encryption schemes with additional resilience against exposure of the encryption key. Precisely, we ask for (1) indistinguishability of plaintexts under full ciphertext knowledge, and (2) indistinguishability for an adversary who learns the encryption key plus all but one share of the ciphertext. Property (2) relaxes the "all-or-nothing" property to a more realistic setting in which the ciphertext is transformed into a number of shares such that the adversary cannot access one of them; property (1) asks that, unless the user's key is disclosed, no one other than the user can retrieve information about the plaintext. Second, it introduces a new computationally secure encryption-then-sharing scheme that protects the data in the previously defined attacker model. It consists of data encryption followed by a linear transformation of the ciphertext, then its fragmentation into shares, along with secret sharing of the randomness used for encryption. The computational overhead beyond data encryption is reduced by half with respect to the state of the art. Third, it provides, for the first time in this context of key exposure, cryptographic proofs; the security of our scheme relies only on a simple cryptanalysis-resilience assumption for block ciphers in public-key mode: indistinguishability from random of the sequence of differentials of a random value. Fourth, it provides an alternative scheme relying on the more theoretical random permutation model, which consists of encrypting with sponge functions in duplex mode and then, as before, secret-sharing the randomness.
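
    Below is a minimal sketch of the generic encrypt-then-share pattern the paper refines: encrypt, then split the ciphertext into n additive (XOR) shares, so that an adversary missing even one share learns nothing about the ciphertext, and hence nothing about the plaintext even if the key leaks. This is plain n-of-n XOR splitting under our own names, not the paper's specific linear transform.

```python
# Encrypt-then-share, sketched as n-of-n additive (XOR) sharing of the
# ciphertext: any n-1 shares are uniformly random and reveal nothing.
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(ciphertext, n):
    shares = [os.urandom(len(ciphertext)) for _ in range(n - 1)]
    last = ciphertext
    for s in shares:
        last = xor_bytes(last, s)   # XOR of all n shares = ciphertext
    return shares + [last]

def merge(shares):
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out

ct = os.urandom(32)                 # stands in for a real AES ciphertext
shares = split(ct, 5)
assert merge(shares) == ct          # all n shares together recover it
```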

    Lagrange Coded Computing: Optimal Design for Resiliency, Security and Privacy

    We consider a scenario involving computations over a massive dataset stored distributedly across multiple workers, which is at the core of distributed learning algorithms. We propose Lagrange Coded Computing (LCC), a new framework that simultaneously provides (1) resiliency against stragglers that may prolong computations; (2) security against Byzantine (or malicious) workers that deliberately modify the computation for their benefit; and (3) (information-theoretic) privacy of the dataset amidst possible collusion of workers. LCC, which leverages the well-known Lagrange polynomial to create computation redundancy in a novel coded form across workers, can be applied to any computation scenario in which the function of interest is an arbitrary multivariate polynomial of the input dataset, hence covering many computations of interest in machine learning. LCC significantly generalizes prior works beyond linear computations. It also enables secure and private computing in distributed settings, improving the computation and communication efficiency of the state of the art. Furthermore, we prove the optimality of LCC by showing that it achieves the optimal tradeoff between resiliency, security, and privacy, i.e., it tolerates the maximum number of stragglers and adversaries while providing data privacy against the maximum number of colluding workers. Finally, we show via experiments on Amazon EC2 that LCC speeds up the conventional uncoded implementation of distributed least-squares linear regression by up to $13.43\times$, and also achieves a $2.36\times$ to $12.65\times$ speedup over the state-of-the-art straggler mitigation strategies.
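
    To make the Lagrange encoding concrete, here is a toy instance for the polynomial f(x) = x² over a prime field: the K data blocks are interpolated into an encoding polynomial u(z), each worker evaluates f(u(α_j)), and since f(u(z)) has degree deg(f)·(K−1), the master can decode from any deg(f)·(K−1)+1 worker results, treating the rest as stragglers. The field size and evaluation points are illustrative choices.

```python
# Toy Lagrange Coded Computing for f(x) = x^2 over GF(p).
p = 2**31 - 1
K, data = 3, [5, 7, 11]            # K data blocks X_1..X_K
betas = [1, 2, 3]                   # interpolation points encoding the data
alphas = [11, 12, 13, 14, 15, 16]   # N = 6 worker evaluation points

def interpolate(points, x):
    """Lagrange interpolation over GF(p), evaluated at x."""
    total = 0
    for xi, yi in points:
        lam = 1
        for xj, _ in points:
            if xj != xi:
                lam = lam * (x - xj) * pow(xi - xj, -1, p) % p
        total = (total + yi * lam) % p
    return total

def u(z):  # encoder: the unique degree-(K-1) polynomial with u(beta_i) = X_i
    return interpolate(list(zip(betas, data)), z)

# Each worker j evaluates f on its coded input u(alpha_j).
results = [(a, pow(u(a), 2, p)) for a in alphas]

# f(u(z)) has degree 2*(K-1) = 4, so any 5 of the 6 results suffice;
# here worker 6 is dropped as a straggler.
decoded = [interpolate(results[:5], b) for b in betas]
assert decoded == [pow(x, 2, p) for x in data]   # recovers f(X_i) = X_i^2
```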