
    The Asymptotic Complexity of Coded-BKW with Sieving Using Increasing Reduction Factors

    The Learning with Errors problem (LWE) is one of the main candidates for post-quantum cryptography. At Asiacrypt 2017, coded-BKW with sieving, an algorithm combining the Blum-Kalai-Wasserman algorithm (BKW) with lattice sieving techniques, was proposed. In this paper, we improve that algorithm by using different reduction factors in different steps of the sieving part of the algorithm. In the Regev setting, where q = n^2 and σ = n^{1.5}/(√(2π) log_2^2 n), the asymptotic complexity is 2^{0.8917n}, improving on the previously best complexity of 2^{0.8927n}. When a quantum computer is assumed or the number of samples is limited, we get a similar level of improvement. Comment: Longer version of a paper to be presented at ISIT 2019. Updated after comments from the peer-review process. Includes an appendix with a proof of Theorem
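    To put the exponent change in perspective, the speedup it implies at a given dimension n follows directly from the two constants quoted above (a toy calculation, ignoring polynomial factors; the function name is ours, not from the paper):

```python
# Toy calculation: ratio of the previous best asymptotic cost
# 2^(0.8927 n) to the improved 2^(0.8917 n), ignoring polynomial
# factors. The two constants are taken from the abstract above.

def speedup_factor(n: int, old_exp: float = 0.8927, new_exp: float = 0.8917) -> float:
    """2^(old_exp * n) / 2^(new_exp * n) = 2^((old_exp - new_exp) * n)."""
    return 2.0 ** ((old_exp - new_exp) * n)

# At n = 1000, the exponent difference of 0.001 per dimension
# amounts to a factor of about 2^1 = 2.
print(round(speedup_factor(1000), 6))
```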

    Hydrodynamic modelling and estimation of exchange rates for Bardawil Lagoon, Egypt. An investigation of governing forces and physical processes using numerical models.

    Bardawil Lagoon is a natural lagoon located on the northern coast of the Sinai Peninsula, Egypt. It has three inlets, two of which are man-made, connecting the lagoon with the Mediterranean Sea. The inlets are subject to morphological changes over time due to sediment transport, which may lead to partial or complete closure of either inlet. The sediment transport is governed by longshore currents, and the sediments originate from the eroding Nile delta. Inlet closure adversely affects the lagoon water quality, which in turn has a detrimental effect on the ecosystem. In this dissertation, the dominant coastal processes governing the water exchange between the lagoon and the Mediterranean are studied. Finite-element conceptual computer models are applied to simulate and investigate the hydrodynamics of the inlets and the lagoon itself. Two types of models are applied within the Surface Modelling System (SMS) software: ADCIRC, a regional tidal model, and CMS-Flow, a local circulation model. To estimate the exchange rate of the lagoon, two methods are utilized, based on the net intertidal volume and on the cross-sectional flow through the inlets. The methods show a high degree of correlation. The renewal time was estimated at 9.0 days, which is equivalent to a daily replaced volume of 53×10^6 m^3. Tidal forcing governs the water exchange, while wind is responsible for the internal circulation.
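    The two reported figures are mutually consistent: a renewal time T and a daily replaced volume Q imply an effective exchanged volume of roughly V = T · Q (a back-of-the-envelope check using the numbers above, not a calculation from the thesis):

```python
# Back-of-the-envelope check of the figures quoted above:
# renewal time T (days) times daily replaced volume Q (m^3/day)
# gives the effective exchanged volume V = T * Q.

renewal_time_days = 9.0
daily_volume_m3 = 53e6  # 53 * 10^6 m^3 per day

effective_volume_m3 = renewal_time_days * daily_volume_m3
print(f"{effective_volume_m3:.2e} m^3")  # → 4.77e+08 m^3
```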

    Improved Estimation of Key Enumeration with Applications to Solving LWE

    In post-quantum cryptography (PQC), Learning With Errors (LWE) is one of the dominant underlying mathematical problems. For example, in NIST's PQC standardization process, the Key Encapsulation Mechanism (KEM) protocol chosen for standardization was Kyber, an LWE-based scheme. Recently, the dual attack surpassed the primal attack in terms of concrete complexity for solving the underlying LWE problem for multiple cryptographic schemes, including Kyber. The dual attack consists of a reduction part and a distinguishing part. When estimating the cost of the distinguishing part, one has to estimate the expected cost of enumerating over a certain number of positions of the secret key. Our contribution consists of giving a polynomial-time approach for calculating the expected complexity of such an enumeration procedure. This allows us to revise the complexity of the dual attack on the LWE-based protocols Kyber, Saber and TFHE. For all these schemes we improve upon the total bit-complexity in both the classical and the quantum setting. As our method of calculating the expected cost of enumeration is fairly general, it might be of independent interest in other areas of cryptography or even in other research areas.
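    A much-simplified version of the quantity being estimated is the classical expected guesswork for a single secret position: if candidate values are tried in decreasing order of probability, the expected number of trials is Σ i · p_(i) with the probabilities sorted descending (an illustrative sketch, not the paper's polynomial-time method; names are ours):

```python
# Illustrative sketch (not the paper's method): expected number of
# guesses for one secret position when candidate values are tried
# in decreasing order of probability.

def expected_guesses(probs):
    """E[#guesses] = sum_i i * p_(i) with p_(1) >= p_(2) >= ..."""
    ordered = sorted(probs, reverse=True)
    return sum(i * p for i, p in enumerate(ordered, start=1))

# Example: a small 5-value, binomial-shaped distribution.
p = [1/16, 4/16, 6/16, 4/16, 1/16]
print(expected_guesses(p))  # → 2.1875
```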

    Further Improvements of the Estimation of Key Enumeration with Applications to Solving LWE

    In post-quantum cryptography, Learning With Errors (LWE) is one of the dominant underlying mathematical problems. The dual attack is one of the main strategies for solving the LWE problem, and it has recently attracted significant attention within the research community. A series of studies initially suggested that it might be more efficient than the other main strategy, the primal attack. Then, a subsequent work by Ducas and Pulles (Crypto’23) raised doubts about the estimated complexity of such an approach. The dual attack consists of a reduction part and a distinguishing part. When estimating the cost of the distinguishing part, one has to estimate the expected cost of enumerating over a certain number of positions of the secret key. Our contribution consists of giving a polynomial-time approach for calculating the expected complexity of such an enumeration procedure. This allows us to decrease the estimated cost of this procedure and, hence, of the whole attack, both classically and quantumly. In addition, we explore different enumeration strategies to achieve some further improvements. Our work is independent of the questions raised by Ducas and Pulles, which do not concern the estimation of the enumeration procedure in the dual attack. As our method of calculating the expected cost of enumeration is fairly general, it might be of independent interest in other areas of cryptanalysis or even in other research areas.

    Do Not Bound to a Single Position: Near-Optimal Multi-Positional Mismatch Attacks Against Kyber and Saber

    Misuse resilience is an important security criterion in the evaluation of the NIST Post-quantum cryptography standardization process. In this paper, we propose new key mismatch attacks against Kyber and Saber, NIST's selected scheme for encryption and one of the finalists in the third round of the NIST competition, respectively. Our novel idea is to recover partial information on multiple secret entries in each mismatch oracle call. These multi-positional attacks greatly reduce the expected number of oracle calls needed to fully recover the secret key. They also have significance in side-channel analysis. From the perspective of lower bounds, our new attacks falsify the Huffman bounds proposed in [Qin et al., ASIACRYPT 2021], where a one-positional mismatch adversary is assumed. Our new attacks can be bounded by the Shannon lower bounds, i.e., the entropy of the distribution generating each secret coefficient times the number of secret entries. We call the new attacks near-optimal since their query complexities are close to the Shannon lower bounds.
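    The Shannon lower bound described above is straightforward to evaluate for a concrete distribution. The snippet below uses the centered binomial distribution with parameter η = 2 and 256 entries purely as illustrative assumptions (this distribution appears in several Kyber parameter sets, but the numbers are not claims about the paper's exact targets):

```python
import math

# Shannon lower bound as described above: entropy (in bits) of the
# per-coefficient distribution times the number of secret entries.
# The centered binomial distribution with eta = 2 (values -2..2) and
# 256 entries are illustrative assumptions.

def entropy_bits(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

cbd_eta2 = [1/16, 4/16, 6/16, 4/16, 1/16]
n_entries = 256  # coefficients in one polynomial, for illustration

lower_bound = entropy_bits(cbd_eta2) * n_entries
print(round(entropy_bits(cbd_eta2), 3), "bits/coeff,",
      round(lower_bound, 1), "bits total")
```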

    Agronomic performance, nitrogen acquisition and water-use efficiency of the perennial grain crop Thinopyrum intermedium in a monoculture and intercropped with alfalfa in Scandinavia

    The perennial forage grass Thinopyrum intermedium (Host) Barkworth & Dewey, commonly known as intermediate wheatgrass (IWG) or by the commercial name Kernza (TM), is being developed as a perennial grain crop, i.e. bred for improved agronomic performance and food qualities. Intercropping legumes and grasses is a strategy for improving resource use and sustainability in cropping systems. Here, we show for the first time the agronomic performance of IWG grown as a perennial cereal in a monoculture and as an intercrop (alternate rows, 0.5:0.5) with Medicago sativa L. (alfalfa/lucerne) in southern Sweden. The seeds of cycle 3 IWG were obtained from The Land Institute (TLI) in Salina, Kansas, USA, and used to establish a local seed production plot (in 2014) for the establishment of the perennial systems (in 2016) utilised in this study. Both the monocrop and the intercrop were sown with 25 cm row spacing, with alternate rows of IWG and alfalfa in the intercrop (i.e. a replacement design) and unknown sowing density. Intercropping provided sustained IWG grain production under the dry conditions of 2018, as well as in the following year. This was evidently associated with higher nitrogen accumulation in the intercrop. Thus, intercropping seems to have stabilised IWG grain production in the dry conditions of 2018, when grain production in the intercrop was similar to that of the monocrop in the same year. This result was further supported by the lower discrimination against C-13 (an indicator of water-use efficiency) in the intercrop components compared to the sole crop in 2018. The lower discrimination indicates higher water-use efficiency in the intercropped IWG in comparison to the IWG in monoculture, and we conclude that intercropping perennial cereal grain crops with legumes provides better growing conditions, in terms of nitrogen acquisition and water status, to cope with the more extreme drought spells expected from climate change.

    Improvements on making BKW practical for solving LWE

    The learning with errors (LWE) problem is one of the main mathematical foundations of post-quantum cryptography. One of the main groups of algorithms for solving LWE is based on the Blum–Kalai–Wasserman (BKW) algorithm. This paper presents new improvements of BKW-style algorithms for solving LWE instances. We target minimum concrete complexity, and we introduce a new reduction step where we partially reduce the last position in an iteration and finish the reduction in the next iteration, allowing non-integer step sizes. We also introduce a new procedure for secret recovery, mapping the problem to binary problems and applying the fast Walsh–Hadamard transform. The complexity of the resulting algorithm compares favorably with all other previous approaches, including lattice sieving. We additionally show the steps of implementing the approach for large LWE problem instances. We provide two implementations of the algorithm: a RAM-based approach optimized for speed, and a file-based approach which overcomes RAM limitations by using file-based storage.
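    The binary secret-recovery step can be illustrated with a minimal, self-contained sketch: given noisy parities b = ⟨a, s⟩ + e (mod 2), a fast Walsh–Hadamard transform of the signed count table evaluates the correlation of every candidate secret at once, and the best-scoring candidate is returned (a toy version with a 4-bit secret, not the paper's optimized implementation; all names are ours):

```python
import random

# Toy sketch of binary secret recovery with a fast Walsh-Hadamard
# transform (FWHT). Each sample (a, b) satisfies b = <a, s> + e mod 2
# with noise e; the FWHT of the signed count table computes, for every
# candidate s, the correlation sum of (-1)^(b + <a, s>).

def fwht(t):
    """In-place FWHT of a list whose length is a power of two."""
    h = 1
    while h < len(t):
        for i in range(0, len(t), 2 * h):
            for j in range(i, i + h):
                x, y = t[j], t[j + h]
                t[j], t[j + h] = x + y, x - y
        h *= 2
    return t

def recover_secret(samples, k):
    """samples: list of (a, b) with a a k-bit integer, b in {0, 1}."""
    table = [0] * (1 << k)
    for a, b in samples:
        table[a] += 1 - 2 * b  # +1 if b = 0, -1 if b = 1
    scores = fwht(table)
    return max(range(1 << k), key=lambda s: scores[s])

# Demo: 200 noisy parities of a 4-bit secret, ~10% error rate.
random.seed(1)
k, secret = 4, 0b1011
samples = []
for _ in range(200):
    a = random.randrange(1 << k)
    e = random.random() < 0.1
    b = (bin(a & secret).count("1") + e) % 2
    samples.append((a, b))
print(recover_secret(samples, k) == secret)
```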

    Feasibility Study of FPGA-Based Equalizer for 112-Gbit/s Optical Fiber Receivers

    With ever-increasing demands on spectral efficiency, complex modulation schemes are being introduced in fiber communication. However, these schemes are challenging to implement, as they drastically increase the computational burden at the fiber receiver's end. We perform a feasibility study of implementing a 16-QAM 112-Gbit/s decision-directed equalizer on a state-of-the-art FPGA platform. An FPGA offers the reconfigurability needed to allow for modulation-scheme updates; however, its clock rate is limited. To cope with this limitation, we introduce a new phase correction technique that significantly relaxes the delay requirement on the critical phase-recovery feedback loop.

    Belief Propagation Meets Lattice Reduction: Security Estimates for Error-Tolerant Key Recovery from Decryption Errors

    In LWE-based KEMs, observed decryption errors leak information about the secret key in the form of equations or inequalities. Several practical fault attacks have already exploited such leakage, either by directly applying a fault or by enabling a chosen-ciphertext attack using a fault. When the leaked information is in the form of inequalities, recovery of the secret key is not trivial. Recent methods use either statistical or algebraic techniques (but not both), with some being able to handle incorrect information. Since the integration of side-channel information is a crucial part of several classes of implementation attacks on LWE-based schemes, it is an important question whether statistically processed information can be successfully integrated into lattice reduction algorithms. We answer this question positively by proposing an error-tolerant combination of statistical and algebraic methods that makes use of the advantages of both approaches. The combination enables us to improve upon existing methods: we both use fewer inequalities and are more resistant to errors. We further provide precise security estimates based on the number of available inequalities. Our recovery method applies to several types of implementation attacks in which decryption errors are used in a chosen-ciphertext attack. We practically demonstrate the improved performance of our approach in a key-recovery attack against Kyber with fault-induced decryption errors.
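    As a minimal illustration of the kind of information such inequalities carry, a candidate secret can be scored by how many of them it satisfies; tolerating a few violated inequalities gives the kind of error tolerance discussed above (a toy scoring sketch, not the paper's combined statistical/algebraic method; all names and numbers are ours):

```python
# Toy sketch (not the paper's method): each decryption error yields an
# inequality <c, s> >= bound on the secret s. A candidate can be scored
# by the number of inequalities it satisfies; allowing a few violations
# makes the scoring tolerant to incorrect inequalities.

def satisfied(ineqs, candidate):
    """Count inequalities (c, bound) with <c, candidate> >= bound."""
    return sum(
        1
        for c, bound in ineqs
        if sum(ci * si for ci, si in zip(c, candidate)) >= bound
    )

# Two toy inequalities on a 3-coefficient secret.
ineqs = [((1, -1, 2), 3), ((0, 2, -1), 1)]
print(satisfied(ineqs, (2, 1, 1)))  # → 2 (both satisfied)
print(satisfied(ineqs, (1, 0, 1)))  # → 1 (only the first)
```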

    Update on the EFFECTS study of fluoxetine for stroke recovery: a randomised controlled trial in Sweden

    Studies have suggested that fluoxetine might improve neurological recovery after stroke, but the results remain inconclusive. The EFFECTS (Efficacy oF Fluoxetine – a randomisEd Controlled Trial in Stroke) trial reached its recruitment target of 1500 patients in June 2019. The purpose of this article is to present all amendments to the protocol and describe how we formed the EFFECTS trial collaboration in Sweden. Methods: In this investigator-led, multicentre, parallel-group, randomised, placebo-controlled trial, we enrolled non-depressed stroke patients aged 18 years or older between 2 and 15 days after stroke onset. The patients had a clinical diagnosis of stroke (ischaemic stroke or intracerebral haemorrhage) with persisting focal neurological deficits. Patients were randomised to fluoxetine 20 mg or matching placebo capsules once daily for 6 months. Results: Seven amendments were made, including clarification of the drug interaction between fluoxetine and metoprolol and of the use of metoprolol for severe heart failure as an exclusion criterion, inclusion of data from central Swedish registries and the Swedish Stroke Register, changes in informed consent from patients, and clarification of the design of some sub-studies. EFFECTS recruited 1500 patients at 35 centres in Sweden between 20 October 2014 and 28 June 2019. We plan to unblind the data in January 2020 and report the primary outcome in May 2020. Conclusion: EFFECTS will provide data on the safety and efficacy of 6 months of treatment with fluoxetine after stroke in a Swedish health-system setting. The data from EFFECTS will also contribute to an individual patient data meta-analysis.