
    Revisiting the Expected Cost of Solving uSVP and Applications to LWE

    Reducing the Learning with Errors problem (LWE) to the Unique-SVP problem and then applying lattice reduction is a commonly relied-upon strategy for estimating the cost of solving LWE-based constructions. In the literature, two different conditions are formulated under which this strategy is successful: one, widely used, goes back to Gama & Nguyen's work on predicting lattice reduction (Eurocrypt 2008); the other was recently outlined by Alkim et al. (USENIX 2016). Since these two estimates predict significantly different costs for solving LWE parameter sets from the literature, we revisit the Unique-SVP strategy. We present empirical evidence from lattice-reduction experiments exhibiting behaviour in line with the latter estimate. However, we also observe that in some situations lattice reduction behaves somewhat better than expected from Alkim et al.'s work, and we explain this behaviour under standard assumptions. Finally, we show that the security estimates of some LWE-based constructions from the literature need to be revised, and we give refined expected solving costs.
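    The Alkim et al. condition the abstract refers to can be made concrete. Below is a minimal Python sketch, under the Geometric Series Assumption and with illustrative (assumed, not paper-specific) LWE parameters n, q and sigma: BKZ with block size b is predicted to recover the embedded short vector once its projected norm, roughly sqrt(b)*sigma, drops below the predicted Gram-Schmidt length delta^(2b-d-1) * Vol(L)^(1/d).

    from math import sqrt, pi, e

    def delta(b):
        # Root-Hermite factor of BKZ-b (standard asymptotic estimate)
        return ((b / (2 * pi * e)) * (pi * b) ** (1 / b)) ** (1 / (2 * (b - 1)))

    def succeeds(b, d, n, q, sigma):
        # 2016-style success condition under the Geometric Series Assumption:
        # projected embedded error vs. predicted (d-b)-th Gram-Schmidt norm
        lhs = sqrt(b) * sigma
        rhs = delta(b) ** (2 * b - d - 1) * q ** ((d - n - 1) / d)
        return lhs <= rhs

    n, q, sigma = 256, 2 ** 12, 3.0   # toy LWE parameters (assumed)
    d = 2 * n + 1                     # Kannan embedding with m = n samples
    b = next(b for b in range(50, d) if succeeds(b, d, n, q, sigma))
    print("smallest successful block size under this model:", b)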

    On the Hardness of LWE with Binary Error: Revisiting the Hybrid Lattice-Reduction and Meet-in-the-Middle Attack

    The security of many cryptographic schemes has been based on special instances of the Learning with Errors (LWE) problem, e.g., Ring-LWE, LWE with binary secret, or LWE with ternary error. However, recent results show that some subclasses are weaker than expected. In this work we show that LWE with binary error, introduced by Micciancio and Peikert, is one such subclass. We achieve this by applying the Howgrave-Graham attack on NTRU, which combines lattice techniques with a Meet-in-the-Middle approach, to this setting. We show that the attack outperforms all other currently known algorithms for several natural parameter sets. For instance, for the parameter set n = 256, m = 512, q = 256, this attack on LWE with binary error requires only 2^85 operations, while the previously best attack requires 2^117 operations. We additionally present a complete and improved analysis of the attack, using analytic techniques. Finally, based on the attack, we give concrete hardness estimates that can be used to select secure parameters for schemes based on LWE with binary error.
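    The Meet-in-the-Middle ingredient of the attack is easy to illustrate in isolation. The toy Python sketch below (not the paper's hybrid attack, which additionally uses lattice reduction and approximate matching to tolerate noise) recovers a binary secret s with A*s = t (mod q) by splitting s into halves: tabulating all images of the first half costs about 2^(n/2) instead of the 2^n of exhaustive search.

    from itertools import product
    import random

    def mitm_binary(A, t, q):
        n = len(A[0])
        h = n // 2
        A1 = [row[:h] for row in A]   # columns acting on the first half of s
        A2 = [row[h:] for row in A]   # columns acting on the second half

        def image(block, s):
            return tuple(sum(r[i] * s[i] for i in range(len(s))) % q for r in block)

        # Tabulate A1*s1 for all 2^(n/2) first halves ...
        table = {image(A1, s1): s1 for s1 in product((0, 1), repeat=h)}
        # ... then look up t - A2*s2 for every second half.
        for s2 in product((0, 1), repeat=n - h):
            target = tuple((ti - x) % q for ti, x in zip(t, image(A2, s2)))
            if target in table:
                return table[target] + s2
        return None

    random.seed(0)
    n, m, q = 16, 20, 257             # tiny toy parameters (assumed)
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    s = tuple(random.randrange(2) for _ in range(n))
    t = tuple(sum(A[j][i] * s[i] for i in range(n)) % q for j in range(m))
    assert mitm_binary(A, t, q) == s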

    Valid statements by the crowd: statistical measures for precision in crowdsourced mobile measurements

    Crowdsourced network measurements (CNMs) are becoming increasingly popular, as they assess the performance of a mobile network from the end user's perspective on a large scale. Here, network measurements are performed directly on the end users' devices, thus taking advantage of the real-world conditions end users encounter. However, this type of uncontrolled measurement raises questions about its validity and reliability. The problem lies in the nature of this type of data collection: in CNMs, mobile network subscribers are involved to a large extent in the measurement process and collect the data themselves for the operator. Collecting data on user devices in arbitrary locations and at uncontrolled times requires means to ensure validity and reliability. To address this issue, our paper defines concepts and guidelines for analyzing the precision of CNMs; specifically, the number of measurements required to make valid statements. In addition to formally defining this aspect, we illustrate the problem and use an extensive sample data set to show possible assessment approaches. This data set consists of more than 20.4 million crowdsourced mobile measurements from across France, collected by a commercial data provider.
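    The textbook starting point for "how many measurements are enough" is the normal-approximation sample size for estimating a mean. The paper defines its own, more refined precision measures; the Python sketch below only shows this baseline, with assumed example numbers.

    from math import ceil
    from statistics import NormalDist

    def required_samples(sigma, margin, confidence=0.95):
        # Samples needed so the two-sided confidence interval for the mean
        # has half-width at most `margin`, assuming known std dev `sigma`.
        z = NormalDist().inv_cdf(0.5 + confidence / 2)
        return ceil((z * sigma / margin) ** 2)

    # e.g. throughput with std dev 12 Mbit/s, estimated to within 1 Mbit/s:
    print(required_samples(sigma=12.0, margin=1.0))   # -> 554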

    An Unconditionally Hiding and Long-Term Binding Post-Quantum Commitment Scheme

    Commitment schemes are among cryptography's most important building blocks. Besides their basic properties, hiding and binding, for many applications it is important that the schemes support proofs of knowledge. However, all existing solutions that have been proven to provide such protocols are only computationally hiding or are not resistant against quantum adversaries. This is not suitable for long-lived systems, such as long-term archives, where commitments have to provide security also in the long run. Thus, in this work we present a new post-quantum unconditionally hiding commitment scheme that supports (statistical) zero-knowledge protocols and allows the binding property to be refreshed over time. The binding property of our construction relies on the approximate shortest vector problem, a lattice problem that is conjectured to be hard for polynomial approximation factors, even for a quantum adversary. Furthermore, we provide a protocol that allows the committer to prolong the binding property of a given commitment while showing, in a zero-knowledge fashion, that the committed value did not change. In addition, our construction yields two more interesting features: one is the ability to "convert" a Pedersen commitment into a lattice-based one, and the other is the construction of a hybrid approach whose binding property relies on the discrete logarithm and approximate shortest vector problems.
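    The Pedersen commitment mentioned in the abstract is the classical discrete-log construction C = g^m * h^r mod p, which is unconditionally hiding and computationally binding under the discrete logarithm assumption. A toy Python sketch with deliberately tiny, insecure parameters (all values below are illustrative assumptions, not the paper's):

    import random

    p, q = 2039, 1019        # safe prime p = 2q + 1 (toy size, insecure)
    g = 4                    # generator of the order-q subgroup of Z_p^*
    h = pow(g, 777, p)       # in a real setup, nobody may know log_g(h)

    def commit(m, r=None):
        # Commit to m in Z_q with randomness r: C = g^m * h^r mod p
        r = random.randrange(q) if r is None else r
        return (pow(g, m, p) * pow(h, r, p)) % p, r

    def verify(C, m, r):
        return C == (pow(g, m, p) * pow(h, r, p)) % p

    C, r = commit(42)
    assert verify(C, 42, r)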
