14 research outputs found
Improvements on making BKW practical for solving LWE
The learning with errors (LWE) problem is one of the main mathematical foundations of post-quantum cryptography. One of the main families of algorithms for solving LWE is the Blum-Kalai-Wasserman (BKW) algorithm. This paper presents new improvements to BKW-style algorithms for solving LWE instances. We target minimum concrete complexity, and we introduce a new reduction step where we partially reduce the last position in an iteration and finish the reduction in the next iteration, allowing non-integer step sizes. We also introduce a new procedure for secret recovery by mapping the problem to binary problems and applying the fast Walsh-Hadamard transform. The complexity of the resulting algorithm compares favorably with all other previous approaches, including lattice sieving. We additionally show the steps of implementing the approach for large LWE problem instances. We provide two implementations of the algorithm: a RAM-based approach optimized for speed, and a file-based approach that overcomes RAM limitations by using file-based storage.
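To make the Walsh-Hadamard step concrete, the sketch below is a minimal illustration, not the paper's implementation; the sample format and function names are assumptions. It shows how, for a binary (LPN-style) problem, a single fast Walsh-Hadamard transform scores all 2^k secret candidates at once:

```python
# Minimal sketch of binary secret recovery via the fast Walsh-Hadamard
# transform: for LPN-style samples (a, b) with b = <a, s> + e mod 2,
# one transform scores every candidate secret s in O(k * 2^k) time.
import numpy as np

def fwht(x):
    """In-place fast Walsh-Hadamard transform of a length-2^k array."""
    h, n = 1, len(x)
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                u, v = x[j], x[j + h]
                x[j], x[j + h] = u + v, u - v
        h *= 2
    return x

def recover_binary_secret(samples, k):
    """samples: iterable of (a, b), a an integer bitmask in [0, 2^k), b in {0,1}."""
    table = np.zeros(1 << k, dtype=np.int64)
    for a, b in samples:
        table[a] += 1 - 2 * b          # map b in {0,1} to +1/-1
    scores = fwht(table)               # scores[s] = sum_j (-1)^(b_j + <a_j, s>)
    return int(np.argmax(scores))      # error rate < 1/2 => true s scores highest
```

For samples with error rate below 1/2, the true secret receives the largest score, which is what makes the transform attractive as a solving-phase tool.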
Consistency-or-Die: Consistency for Key Transparency
In this paper we point out the lack of adequate tools for protecting against split-view attacks in Key Transparency protocols, and propose a solution to fill the void. We discuss why current approaches are not suitable and then propose a novel notion, GOD-less broadcast, that solves the issue. Like conventional notions of broadcast, GOD-less broadcast guarantees consistency; however, it does not provide Guaranteed Output Delivery (GOD). We provide an efficient realization of this new notion using a hidden committee of randomly selected and initially undisclosed users which endorse the current view. We also introduce and analyze a new concept of a quorum which ensures consistency among all honest active users. Our GOD-less broadcast protocol is practical and applicable to existing large-scale Key Transparency systems, e.g., the Key Transparency deployment used in WhatsApp.
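As rough intuition for why a quorum yields consistency, consider the toy model below; it is an illustration only, not the paper's protocol, and the committee size, threshold, and endorsement format are assumptions:

```python
# Toy model of quorum-based view endorsement (illustrative assumptions:
# a fixed committee of 11 members and a simple majority threshold).
COMMITTEE_SIZE = 11
THRESHOLD = COMMITTEE_SIZE // 2 + 1  # majority quorum

def accepts(view_digest, endorsements):
    """endorsements: dict mapping committee member id -> digest they endorsed."""
    votes = sum(1 for d in endorsements.values() if d == view_digest)
    return votes >= THRESHOLD
```

Because any two majority quorums of the same committee overlap in at least one member, two conflicting view digests cannot both be accepted as long as the overlapping members endorse only one view.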
PAPR: Publicly Auditable Privacy Revocation for Anonymous Credentials
We study the notion of anonymous credentials with Publicly Auditable Privacy Revocation (PAPR). PAPR credentials simultaneously provide conditional user privacy and auditable privacy revocation. The first property implies that users keep their identity private when authenticating unless and until an appointed authority requests to revoke this privacy, retroactively. The second property enforces that auditors can verify whether or not this authority has revoked privacy from an issued credential (i.e. learned the identity of the user who owns that credential), holding the authority accountable. In other words, the second property enriches conditionally anonymous credential systems with transparency by design, effectively discouraging such systems from being used for mass surveillance. In this work, we introduce the notion of a PAPR anonymous credential scheme, formalize it as an ideal functionality, and present constructions that are provably secure under standard assumptions in the Universal Composability framework. The core tool in our PAPR construction is a mechanism for randomly selecting an anonymous committee, towards which users secret-share their identity information, while hiding the identities of the committee members from the authority. As a consequence, in order to initiate the revocation process for a given credential, the authority is forced to post a request on a public bulletin board, used as a broadcast channel, to contact the anonymous committee that holds the keys needed to decrypt the identity connected to the credential. This mechanism makes user de-anonymization publicly auditable.
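The committee-based escrow can be illustrated with plain Shamir secret sharing. This is a simplified sketch: the actual PAPR construction additionally encrypts shares towards hidden committee members, and the field modulus and function names here are illustrative assumptions:

```python
# Minimal Shamir-sharing sketch: a user splits identity information into
# n shares so that any t committee members can jointly reconstruct it,
# while fewer than t learn nothing.
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus for illustration

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```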
26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3 - Meeting Abstracts - Antwerp, Belgium. 15-20 July 2017
This work was produced as part of the activities of the FAPESP Research, Dissemination and Innovation Center for Neuromathematics (grant 2013/07699-0, S. Paulo Research Foundation). NLK is supported by a FAPESP postdoctoral fellowship (grant 2016/03855-5). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0).
Modeling and simulating the sample complexity of solving LWE using BKW-style algorithms
The Learning with Errors (LWE) problem receives much attention in cryptography, mainly due to its fundamental significance in post-quantum cryptography. Among its solving algorithms, the Blum-Kalai-Wasserman (BKW) algorithm, originally proposed for solving the Learning Parity with Noise (LPN) problem, performs well, especially for certain parameter settings with cryptographic importance. The BKW algorithm consists of two phases, the reduction phase and the solving phase. In this work, we study the performance of distinguishers used in the solving phase. We show that the Fast Fourier Transform (FFT) distinguisher from Eurocrypt'15 has the same sample complexity as the optimal distinguisher, when making the same number of hypotheses. We also show via simulation that it performs much better than previous theory predicts, and we develop a sample complexity model that matches the simulations better. We also introduce an improved, pruned version of the FFT distinguisher. Finally, we indicate, via extensive experiments, that the sample dependency due to both LF2 and sample amplification is limited.
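To illustrate the kind of distinguisher studied here, the following is a minimal single-position sketch under simplifying assumptions, not the exact Eurocrypt'15 algorithm analyzed in the paper:

```python
# Minimal FFT-distinguisher sketch: for reduced samples (a, b) with
# b = a*s + e mod q and small error e, one FFT scores all q hypotheses
# for the remaining secret position s at once.
import numpy as np

def fft_distinguish(samples, q):
    """samples: iterable of (a, b) with a, b in Z_q; returns the best guess for s."""
    table = np.zeros(q, dtype=np.complex128)
    for a, b in samples:
        table[a] += np.exp(2j * np.pi * b / q)
    scores = np.fft.fft(table)          # scores[s] = sum_j e^(2*pi*i*(b_j - a_j*s)/q)
    return int(np.argmax(scores.real))  # small errors e_j push the true s toward +1
```

At the correct hypothesis the exponents collapse to the small errors e_j, so the accumulated unit vectors point near the positive real axis; wrong hypotheses give near-uniform angles that largely cancel.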
On the Sample Complexity of solving LWE using BKW-Style Algorithms
The Learning with Errors (LWE) problem receives much attention in cryptography, mainly due to its fundamental significance in post-quantum cryptography. Among its solving algorithms, the Blum-Kalai-Wasserman (BKW) algorithm, originally proposed for solving the Learning Parity with Noise (LPN) problem, performs well, especially for certain parameter settings with cryptographic importance. The BKW algorithm consists of two phases, the reduction phase and the solving phase. In this work, we study the performance of distinguishers used in the solving phase. We show that the Fast Fourier Transform (FFT) distinguisher from Eurocrypt'15 has the same sample complexity as the optimal distinguisher, when making the same number of hypotheses. We also show that it performs much better than theory predicts and introduce an improvement of it called the pruned FFT distinguisher. Finally, we indicate, via extensive experiments, that the sample dependency due to both LF2 and sample amplification is limited.
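Since this abstract and the previous one both refer to LF2-style reduction, a minimal sketch of one such step may help. It is illustrative only: the sample representation is an assumption, and the defining LF2 choice shown is that all pairs inside a bucket are combined, which roughly preserves the number of samples across steps:

```python
# Minimal sketch of one LF2-style BKW reduction step: samples colliding on
# their last `step` coordinates are subtracted pairwise, cancelling those
# coordinates at the cost of adding the noise terms.
from itertools import combinations

def lf2_step(samples, q, step):
    """samples: list of (a, b) with a a tuple over Z_q; cancel last `step` coords."""
    buckets = {}
    for a, b in samples:
        buckets.setdefault(a[-step:], []).append((a, b))
    reduced = []
    for group in buckets.values():
        for (a1, b1), (a2, b2) in combinations(group, 2):
            a = tuple((x - y) % q for x, y in zip(a1[:-step], a2[:-step]))
            reduced.append((a, (b1 - b2) % q))
    return reduced
```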
A Key Recovery Reaction Attack on QC-MDPC
Algorithms for secure encryption in a post-quantum world are currently receiving a lot of attention in the research community. One of the most promising such algorithms is the code-based scheme called QC-MDPC, which has excellent performance and a small public key size. In this work we present a very efficient key recovery attack on the QC-MDPC scheme, exploiting the fact that decryption uses an iterative decoding step which can fail with some small probability. We identify a dependence between the secret key and failures in decoding. This can be used to build what we refer to as a distance spectrum for the secret key, which is the set of all distances between any two ones in the secret key. In a reconstruction step we then determine the secret key from the distance spectrum. The attack has been implemented and tested on a proposed instance of QC-MDPC for 80-bit security. It successfully recovers the secret key in minutes. A slightly modified version of the attack can be applied to proposed versions of the QC-MDPC scheme that provide IND-CCA security. The attack is somewhat more complex in this case, but its cost remains far below the claimed security level. The reason why we can break schemes with proven CCA security is that the model for these proofs typically does not include the possibility of decoding errors. Finally, we present several algorithms for key reconstruction from an empirical distance spectrum. We first improve the naïve algorithm for key reconstruction by a factor of about 30,000 for the proposed 80-bit security parameters. We further develop the algorithm to deal with errors in the distance spectrum, which ultimately reduces the number of ciphertexts that need to be collected for a successful key recovery.
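The distance spectrum itself is simple to compute once the key is known, which is what makes it a useful reconstruction target. The following sketch follows the definition given above, with the folding convention (cyclic distances reduced to at most r/2) as an assumption on our part:

```python
# Minimal sketch of the distance spectrum: for a circulant key row of
# length r with ones at positions `ones`, collect the cyclic distance
# between every pair of ones, folded to at most r // 2.
def distance_spectrum(ones, r):
    spectrum = set()
    for i in range(len(ones)):
        for j in range(i + 1, len(ones)):
            d = (ones[j] - ones[i]) % r
            spectrum.add(min(d, r - d))
    return spectrum

# Toy example: ones at positions 0, 3, 7, 12 in a length-17 row
print(sorted(distance_spectrum([0, 3, 7, 12], 17)))  # [3, 4, 5, 7, 8]
```

The attack works in the reverse direction: it estimates which distances are present from decoding-failure statistics and then searches for a sparse key whose pairwise distances match that empirical spectrum.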