
    Batch Verification of Elliptic Curve Digital Signatures

    This thesis investigates the efficiency of batching the verification of elliptic curve signatures. The first signature scheme considered is a modification of ECDSA proposed by Antipa et al., along with a batch verification algorithm by Cheon and Yi. Next, Bernstein's EdDSA signature scheme and the Bos-Coster multi-exponentiation algorithm are presented and their asymptotic runtimes are examined. Following background on bilinear pairings, the Camenisch-Hohenberger-Pedersen (CHP) pairing-based signature scheme is presented in the Type 3 setting, along with the derivative BN-IBV due to Zhang, Lu, Lin, Ho and Shen. We proceed to count field operations for each signature scheme and give an exact analysis of the results. When considered in the context of batch verification, we find that the Cheon-Yi and Bos-Coster methods have similar costs in practice (assuming the same curve model). We also find that when batch verifying signatures, CHP is only 11% slower than EdDSA with Bos-Coster, a significant improvement over the gap in single-verification cost between the two schemes.
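
    Since the Bos-Coster method is central to the comparison, a minimal sketch may help: the algorithm computes a multi-scalar multiplication sum(n_i * P_i) by repeatedly replacing the two largest-scalar terms, using the identity n1*P1 + n2*P2 = (n1 - n2)*P1 + n2*(P1 + P2). The Python below is our illustration, not code from the thesis; the integers under addition stand in for elliptic curve points.

        import heapq
        import itertools

        def bos_coster(scalars, points, add, scalar_mul):
            """Compute sum(n_i * P_i) via the Bos-Coster reduction:
            for the two largest scalars n1 >= n2,
            n1*P1 + n2*P2 == (n1 - n2)*P1 + n2*(P1 + P2)."""
            tie = itertools.count()  # tiebreaker so heapq never compares points
            heap = [(-n, next(tie), P) for n, P in zip(scalars, points) if n]
            heapq.heapify(heap)
            while len(heap) > 1:
                n1, _, P1 = heapq.heappop(heap)   # largest scalar (negated)
                n2, _, P2 = heapq.heappop(heap)   # second largest
                n1, n2 = -n1, -n2
                heapq.heappush(heap, (-n2, next(tie), add(P1, P2)))
                if n1 > n2:
                    heapq.heappush(heap, (-(n1 - n2), next(tie), P1))
            n, _, P = heap[0]
            return scalar_mul(-n, P)  # one ordinary scalar multiplication remains

        # Toy check in the additive group of the integers:
        assert bos_coster([7, 5, 3], [10, 20, 30],
                          add=lambda a, b: a + b,
                          scalar_mul=lambda n, p: n * p) == 7*10 + 5*20 + 3*30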

    Fast batched asynchronous distributed key generation

    We present new protocols for threshold Schnorr signatures that work in an asynchronous communication setting, providing robustness and optimal resilience. These protocols provide unprecedented performance in terms of communication and computational complexity. In terms of communication complexity, for each signature, a single party must transmit a few dozen group elements and scalars across the network (independent of the size of the signing committee). In terms of computational complexity, the amortized cost for one party to generate a signature is actually less than that of just running the standard Schnorr signing or verification algorithm (at least for moderately sized signing committees of, say, up to 100 parties). For example, we estimate that with a signing committee of 49 parties, at most 16 of which are corrupt, we can generate 50,000 Schnorr signatures per second (assuming each party can dedicate one standard CPU core and 500 Mb/s of network bandwidth to signing). Importantly, this estimate includes both the cost of an offline precomputation phase (which just churns out message-independent presignatures) and an online signature generation phase. Also, the online signing phase can generate a signature with very little network latency (just one to three rounds, depending on how throughput and latency are balanced). To achieve this result, we provide two new innovations. One is a new secret sharing protocol (again asynchronous, robust and optimally resilient) that allows the dealer to securely distribute shares of a large batch of ephemeral secret keys and to publish the corresponding ephemeral public keys. To achieve better performance, our protocol minimizes public-key operations and, in particular, is based on a novel technique that avoids the traditional approach based on polynomial commitments. The second innovation is a new algorithm to efficiently combine ephemeral public keys contributed by different parties (some possibly corrupt) into a smaller number of secure ephemeral public keys. This algorithm is based on a novel construction of a so-called super-invertible matrix, along with a corresponding highly efficient algorithm for multiplying this matrix by a vector of group elements. As protocols for verifiably sharing a secret key with an associated public key and the technology of super-invertible matrices both play a major role in threshold cryptography and multi-party computation, our two new innovations should have applicability well beyond threshold Schnorr signatures.
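
    To give a feel for the second innovation: a super-invertible matrix is one in which every maximal square submatrix is invertible, the textbook example being a Vandermonde matrix. Applying such an m-by-n matrix "in the exponent" to n contributed ephemeral keys, at most t of them from corrupt parties, yields m = n - t combined keys that all remain secure. The paper's own construction and multiplication algorithm are novel and faster; the toy Python below (our illustration, with tiny made-up parameters) only shows the standard object, in a small Schnorr group standing in for an elliptic curve.

        import random

        # Tiny toy Schnorr group: p = 2q + 1 with q prime; g generates the
        # order-q subgroup of quadratic residues mod p.
        p, q, g = 23, 11, 4

        def vandermonde_apply(elems, m):
            """Multiply the m x n Vandermonde matrix M[i][j] = (j+1)^i (mod q)
            by a vector of group elements, in the exponent:
            out[i] = prod_j elems[j] ** M[i][j]  (mod p)."""
            out = []
            for i in range(m):
                acc = 1
                for j, E in enumerate(elems):
                    acc = acc * pow(E, pow(j + 1, i, q), p) % p
                out.append(acc)
            return out

        # Five parties each contribute an ephemeral public key g^{k_j}; if at
        # most two contributors are corrupt, the three combined keys are secure.
        ephemeral_secrets = [random.randrange(1, q) for _ in range(5)]
        ephemeral_keys = [pow(g, k, p) for k in ephemeral_secrets]
        combined_keys = vandermonde_apply(ephemeral_keys, m=3)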

    Automatic generation of high speed elliptic curve cryptography code

    Apparently, trust is a rare commodity when power, money or life itself are at stake. History is full of examples. Julius Caesar did not trust his generals, so that: "If he had anything confidential to say, he wrote it in cipher, that is, by so changing the order of the letters of the alphabet, that not a word could be made out. If anyone wishes to decipher these, and get at their meaning, he must substitute the fourth letter of the alphabet, namely D, for A, and so with the others." And so cryptography took its first steps. Nowadays, encryption is no longer an emperor's prerogative: it has become an everyday operation. Cryptography is pervasive, ubiquitous and, best of all, completely transparent to the unaware user. Each time we buy something on the Internet, we use it. Each time we search for something on Google, we use it. All without (almost) realizing that it silently protects our privacy and our secrets. Encryption is a very interesting instrument in the "toolbox of security" because it has very few side effects, at least on the user side. A particularly important one is the intrinsic slowdown that its use imposes on communications. High speed cryptography is very important for the Internet, where busy servers proliferate. Being faster is a double advantage: more throughput and less server overhead. In this context, however, public key algorithms start with a big handicap: their performance is very poor compared to that of their symmetric counterparts. For this reason, their use is often reduced to the essential operations, most notably key exchange and digital signatures. High speed public key cryptography is a very practical challenge with serious repercussions in our technocentric world, since using weak algorithms with a reduced key length to increase the performance of a system can lead to catastrophic results. In 1985, Miller and Koblitz independently proposed using the group of rational points of an elliptic curve over a finite field to build an asymmetric algorithm. Elliptic Curve Cryptography (ECC) is based on a problem known as the ECDLP (Elliptic Curve Discrete Logarithm Problem) and offers several advantages over more traditional encryption systems such as RSA and DSA. The main benefit is that it requires smaller keys to provide the same security level, since breaking the ECDLP is much harder. In addition, a good ECC implementation can be very efficient in both time and memory consumption, making it a good candidate for high speed public key cryptography. Moreover, some elliptic curve based techniques, such as SIDH (Supersingular Isogeny Diffie-Hellman), are known to be extremely resilient to quantum computing attacks. Traditional elliptic curve cryptography implementations are optimized by hand, taking into account the mathematical properties of the underlying algebraic structures, the target machine architecture and the compiler facilities. This process is time consuming, requires a high degree of expertise and is, ultimately, error prone. This dissertation's ultimate goal is to automate the whole optimization process of cryptographic code, with a special focus on ECC. The framework presented in this thesis is able to produce high speed cryptographic code by automatically choosing the best algorithms and applying a number of code-improving techniques inspired by compiler theory. Its central component is a flexible and powerful compiler able to translate an algorithm written in a high level language and produce highly optimized C code for a particular algebraic structure and hardware platform. The system is generic enough to accommodate a wide array of number theory related algorithms; however, this document focuses only on optimizing primitives based on elliptic curves defined over binary fields.
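
    As an aside, the cipher Suetonius describes above is the shift-by-three substitution now known as the Caesar cipher. A minimal Python sketch (ours, purely illustrative):

        def caesar(text, shift=3):
            """Shift each letter by `shift` places, wrapping around the alphabet;
            D stands in for A, E for B, and so on. Non-letters pass through."""
            out = []
            for ch in text.upper():
                if ch.isalpha():
                    out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
                else:
                    out.append(ch)
            return ''.join(out)

        assert caesar("ATTACK AT DAWN") == "DWWDFN DW GDZQ"
        assert caesar("DWWDFN DW GDZQ", shift=-3) == "ATTACK AT DAWN"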

    Flamingo: Multi-Round Single-Server Secure Aggregation with Applications to Private Federated Learning

    This paper introduces Flamingo, a system for secure aggregation of data across a large set of clients. In secure aggregation, a server sums up the private inputs of clients and obtains the result without learning anything about the individual inputs beyond what is implied by the final sum. Flamingo focuses on the multi-round setting found in federated learning, in which many consecutive summations (averages) of model weights are performed to derive a good model. Previous protocols, such as that of Bell et al. (CCS '20), were designed for a single round and are adapted to the federated learning setting by repeating the protocol multiple times. Flamingo eliminates the need for the per-round setup of previous protocols and has a new lightweight dropout-resilience protocol to ensure that, if clients leave in the middle of a sum, the server can still obtain a meaningful result. Furthermore, Flamingo introduces a new way to locally choose the so-called client neighborhood introduced by Bell et al. These techniques help Flamingo reduce the number of interactions between clients and the server, resulting in a significant reduction in end-to-end runtime for a full training session relative to prior work. We implement and evaluate Flamingo and show that it can securely train a neural network on the (Extended) MNIST and CIFAR-100 datasets, and that the model converges without a loss in accuracy compared to a non-private federated learning system.
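
    For intuition, single-server aggregation protocols in this line rest on pairwise masking: every pair of clients derives a common random mask that one adds and the other subtracts, so the masks cancel in the server's sum while individual inputs stay hidden. The toy Python below is our illustration of that cancellation only; per-pair PRNG seeds stand in for the key agreement, and dropout handling, neighborhoods and Flamingo's multi-round machinery are omitted.

        import random

        M = 2**32  # all arithmetic over Z_M

        def masked_input(i, x_i, n, pair_seeds):
            """Client i's masked submission: its input plus/minus one shared
            mask per other client; the server never sees x_i itself."""
            y = x_i % M
            for j in range(n):
                if j == i:
                    continue
                r = random.Random(pair_seeds[min(i, j), max(i, j)]).randrange(M)
                y = (y + r) % M if i < j else (y - r) % M
            return y

        n = 4
        inputs = [3, 14, 15, 92]
        pair_seeds = {(i, j): random.randrange(2**64)
                      for i in range(n) for j in range(i + 1, n)}
        # Every mask is added once and subtracted once, so only the sum survives.
        server_sum = sum(masked_input(i, x, n, pair_seeds)
                         for i, x in enumerate(inputs)) % M
        assert server_sum == sum(inputs) % M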

    Diogenes: Lightweight Scalable RSA Modulus Generation with a Dishonest Majority

    In this work, we design and implement the first protocol for RSA modulus construction that can support thousands of parties and offers security against an arbitrary number of corrupted parties. In a nutshell, we design the "best" protocol for this scale that is secure against passive corruption, then amplify it to obtain active security using efficient non-interactive zero-knowledge arguments. Our protocol satisfies a stronger security guarantee whereby a deviating party can be identified when the protocol aborts (referred to as security with identifiable abort) and allows for "public verifiability". Our passively secure protocol extends the recent work of Chen et al. which, in turn, is based on the blueprint introduced in the original Boneh-Franklin protocol (CRYPTO 1997; J. ACM, 2001). Specifically, we reduce the task of sampling a modulus to secure distributed multiplication, which we implement via an efficient threshold additively homomorphic encryption (AHE) scheme based on the Ring-LWE assumption. This results in a protocol where the amortized per-party communication cost grows logarithmically in the number of parties. In order to keep the parties lightweight, we employ an "untrusted" coordinator that is connected to all parties and performs all public and broadcast operations. We amplify this protocol to obtain active security (with identifiable abort) by attaching zero-knowledge proofs. We instantiate our ZK proof system by composing two different types of ZK proof systems: (1) the Ligero sub-linear zero-knowledge proof system (Ames et al., CCS 2017), and (2) a Sigma-protocol for proving knowledge of a discrete logarithm in groups of unknown order (Shoup, Eurocrypt 2000). We implemented both the passive and the active variants of our protocol and ran experiments using 2 to 4,000 parties. This is the first implementation of any MPC protocol that can scale to more than 1,000 parties. For generating a 2048-bit modulus among 1,000 parties, our passive protocol executed in under 4 minutes and the active variant in 22 minutes.
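
    To make the reduction concrete: if each party i holds additive shares p_i and q_i of the two prime candidates, then N = (sum_i p_i)(sum_j q_j) = sum_{i,j} p_i * q_j, so sampling a modulus reduces to pairwise secure multiplications. The toy Python below is our sketch of that algebra only; an ideal stand-in that reshares a product replaces the paper's Ring-LWE threshold-AHE multiplication, and no primality testing or security is modeled.

        import random

        def ideal_secure_mult(a, b, n_parties):
            """Stand-in for the AHE-based protocol: returns fresh additive
            shares of a*b without revealing a or b to any single party."""
            shares = [random.randrange(-10**9, 10**9) for _ in range(n_parties - 1)]
            shares.append(a * b - sum(shares))
            return shares

        n = 5
        p_shares = [random.randrange(1, 1000) for _ in range(n)]
        q_shares = [random.randrange(1, 1000) for _ in range(n)]

        # Each party accumulates one additive share of N; only N itself is opened.
        N_shares = [0] * n
        for i in range(n):
            for j in range(n):
                for k, s in enumerate(ideal_secure_mult(p_shares[i], q_shares[j], n)):
                    N_shares[k] += s

        assert sum(N_shares) == sum(p_shares) * sum(q_shares)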

    Functional encryption based approaches for practical privacy-preserving machine learning

    Machine learning (ML) is increasingly being used in a wide variety of application domains. However, deploying ML solutions poses a significant challenge because of increasing privacy concerns and the requirements imposed by privacy-related regulations. To tackle serious privacy concerns in ML-based applications, significant recent research efforts have focused on developing privacy-preserving ML (PPML) approaches by integrating into the ML pipeline existing anonymization mechanisms or emerging privacy protection approaches such as differential privacy, secure computation and other architectural frameworks. While promising, existing secure computation based approaches have significant computational efficiency issues and hence are not practical. In this dissertation, we address several challenges related to PPML and propose practical secure computation based approaches to solve them. We consider both two-tier cloud-based and three-tier hybrid cloud-edge based PPML architectures, and address both emerging deep learning models and federated learning approaches. The proposed approaches enable us to outsource data or update a locally trained model in a privacy-preserving manner by employing computation over encrypted datasets or local models. Our proposed secure computation solutions are based on functional encryption (FE) techniques. Evaluation of the proposed approaches shows that they are efficient and more practical than existing approaches, and provide strong privacy guarantees. We also address issues related to the trustworthiness of various entities within the proposed PPML infrastructures. These include a third-party authority (TPA), which plays a critical role in the proposed FE-based PPML solutions, and cloud service providers. To ensure that such entities can be trusted, we propose a transparency and accountability framework using blockchain. We show that the proposed transparency framework is effective and guarantees security properties, and experimental evaluation shows that it is efficient.
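
    As a flavor of the underlying primitive, the sketch below gives a toy inner-product functional encryption scheme in the spirit of Abdalla et al. (PKC 2015): a key derived for a vector y lets its holder learn only the inner product <x, y> from an encryption of x, which is exactly the shape of a linear inference or aggregation step. This is our illustration with toy parameters and small plaintexts; the dissertation's actual constructions may differ in their details.

        import random

        P = 2**61 - 1   # toy Mersenne prime modulus (our assumption)
        G = 3           # toy base

        def setup(n):
            msk = [random.randrange(1, P) for _ in range(n)]   # s_1..s_n
            mpk = [pow(G, s, P) for s in msk]                  # h_i = g^{s_i}
            return msk, mpk

        def encrypt(mpk, x):
            r = random.randrange(1, P)
            ct0 = pow(G, r, P)
            cts = [pow(h, r, P) * pow(G, xi, P) % P for h, xi in zip(mpk, x)]
            return ct0, cts

        def keygen(msk, y):
            return sum(s * yi for s, yi in zip(msk, y))        # sk_y = <s, y>

        def decrypt(ct, sk_y, y, bound):
            ct0, cts = ct
            num = 1
            for c, yi in zip(cts, y):
                num = num * pow(c, yi, P) % P
            # num / ct0^{sk_y} = g^{<x, y>}  (pow with a negative exponent
            # computes the modular inverse; requires Python 3.8+)
            target = num * pow(ct0, -sk_y, P) % P
            for v in range(bound):   # brute-force the small discrete log
                if pow(G, v, P) == target:
                    return v
            raise ValueError("inner product out of range")

        msk, mpk = setup(3)
        x, y = [2, 5, 7], [3, 1, 4]
        assert decrypt(encrypt(mpk, x), keygen(msk, y), y, bound=100) == 39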

    Mobile Network and Cloud Based Privacy-Preserving Data Aggregation and Processing

    The emerging technology of mobile devices and cloud computing has brought a new and efficient way for data to be collected, processed and stored by mobile users. With the improved specifications of mobile devices and the various mobile applications provided by cloud servers, mobile users can manage their daily lives through those applications instantaneously, conveniently and productively. However, using such applications may expose user data to unauthorised access when the data is outsourced for processing and storage, and such a setting raises privacy and security issues for mobile users. As a result, mobile users would be reluctant to accept those applications without any guarantee of the safety of their data. The recent breakthrough of Fully Homomorphic Encryption (FHE) has brought a new solution for processing data securely, and several variants of and improvements on the existing methods have been developed to address efficiency problems. Experience of such problems has led us to explore two areas of study, Mobile Sensing Systems (MSS) and Mobile Cloud Computing (MCC). In MSS, the functionality of smartphones is extended to sense and aggregate surrounding data for processing by an Aggregation Server (AS) that may be operated by a Cloud Service Provider (CSP). MCC, on the other hand, allows resource-constrained devices like smartphones to fully leverage the services provided by the powerful, massive servers of CSPs for data processing. To support these two application scenarios, this thesis proposes two novel schemes: an Accountable Privacy-preserving Data Aggregation (APDA) scheme and a Lightweight Homomorphic Encryption (LHE) scheme. An MSS is a kind of wireless sensor network that implements a data aggregation approach to save the battery life of mobile devices. Such an approach can also improve the security of the outsourced data by mixing the data before it is transmitted to the AS, so as to prevent collusion between mobile users and the AS (or its CSP). The exposure of users' data to other mobile users leads to a privacy breach, and existing methods for preserving users' privacy only provide an integrity check on the aggregated data, without being able to identify misbehaving nodes once the integrity check has failed. To overcome these problems, our first scheme, APDA, efficiently preserves privacy and supports accountability of mobile users during data aggregation. APDA is designed in three versions that provide balanced trade-offs between misbehaving-node detection and data aggregation efficiency for different application scenarios. In addition, the aggregated data needs to be accompanied by summary information based on the necessary additive and non-additive functions. To preserve the privacy of mobile users, such summaries could be computed using existing privacy-preserving data aggregation techniques; however, those techniques have limitations in terms of applicability, efficiency and functionality. APDA has therefore been extended to allow maximal-value finding to be computed on ciphertext data, preserving user privacy with good efficiency, and the same approach can be developed for other comparative operations such as Average, Percentile and Histogram. Three versions of maximal-value finding (Max) are introduced and analysed to compare their efficiency and their ability to determine the maximum value in a privacy-preserving manner. The formal security proofs and extensive performance evaluations of our proposed schemes demonstrate that APDA and its extended version achieve stronger security with an efficiency advantage over the state of the art in terms of both computational and communication overheads. For the MCC environment, the new LHE scheme is proposed, with the significant difference that it allows arbitrary functions to be executed on ciphertext data. Such a scheme enables resource-constrained devices to leverage the rich mobile applications provided by CSPs in a privacy-preserving manner. The scheme works as long as the noise (a random number attached to the plaintext for security reasons) is less than the encryption key, which makes it flexible: the flexibility of the key size enables the scheme to accommodate any computation function and still produce an accurate result. In addition, the scheme encrypts integers rather than individual bits, improving its efficiency. With a proposed process that allows three or more parties to communicate securely, the scheme is well suited to the MCC environment owing to its lightweight nature and strong security. The efficacy and efficiency of this scheme are thoroughly evaluated and compared with other schemes; the results show that it achieves stronger security at a reasonable cost.
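
    The description of noise staying below the encryption key is reminiscent of homomorphic encryption over the integers in the DGHV style. The toy symmetric sketch below is ours, not the thesis's actual LHE scheme; it shows why decryption is exact while the accumulated noise term stays smaller than the key, and why each homomorphic operation grows the noise.

        import random

        T = 1000  # plaintext modulus: encrypt integers, not individual bits
        K = random.randrange(2**60, 2**61)   # secret key, much larger than noise

        def enc(m):
            q = random.randrange(2**20, 2**21)   # multiple of the key
            r = random.randrange(0, 2**10)       # small noise
            return m + T * r + K * q

        def dec(c):
            # c mod K strips the key multiple, leaving m + T*r; this is correct
            # exactly as long as m + T*r < K, i.e. the noise stays below the key.
            return (c % K) % T

        a, b = enc(123), enc(456)
        assert dec(a + b) == (123 + 456) % T   # additions add the noise terms
        assert dec(a * b) == (123 * 456) % T   # multiplications multiply them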

    Security and Privacy for Modern Wireless Communication Systems

    This reprint focuses on the latest protocol research, software/hardware development and implementation, and system architecture design addressing emerging security and privacy issues for modern wireless communication networks. Relevant topics include, but are not limited to, the following: deep-learning-based security and privacy design; covert communications; information-theoretical foundations for advanced security and privacy techniques; lightweight cryptography for power-constrained networks; physical layer key generation; prototypes and testbeds for security and privacy solutions; encryption and decryption algorithms for low-latency constrained networks; security protocols for modern wireless communication networks; network intrusion detection; physical layer design with security considerations; anonymity in data transmission; vulnerabilities in security and privacy in modern wireless communication networks; challenges of security and privacy in node-edge-cloud computation; security and privacy design for low-power wide-area IoT networks; security and privacy design for vehicle networks; and security and privacy design for underwater communications networks.