
    A physical study of the LLL algorithm

    This paper presents a study of the LLL algorithm from the perspective of statistical physics. Based on our experimental and theoretical results, we suggest that interpreting LLL as a sandpile model may help understand much of its mysterious behavior. In the language of physics, our work presents evidence that LLL and certain 1-d sandpile models with simpler toppling rules belong to the same universality class. This paper consists of three parts. First, we introduce sandpile models whose statistics imitate those of LLL with compelling accuracy, which leads to the idea that there must exist a meaningful connection between the two. Indeed, on those sandpile models, we are able to prove analogues of some of the most desired statements for LLL, such as the existence of the gap between the theoretical and the experimental RHF bounds. Furthermore, we test the formulas from finite-size scaling theory (FSS) against the LLL algorithm itself, and find that they are in excellent agreement. This in particular explains and refines the geometric series assumption (GSA), and allows one to extrapolate various quantities of interest to the large-dimension limit. In particular, we predict that the empirical average RHF converges to $\approx 1.02265$ as the dimension goes to infinity.
    Comment: Augmented version of 1804.03285; expect some overlap.
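
    As a rough illustration of the kind of model the abstract alludes to, below is a minimal 1-d sandpile sketch in Python. The toppling rule, threshold, and parameters are illustrative assumptions, not the paper's actual construction; the point is only to show the drive/relax loop whose avalanche statistics such studies compare against LLL.

```python
# Minimal 1-d sandpile sketch (illustrative rule, not the paper's exact model):
# grains are added at a closed left wall, a column topples when its local slope
# exceeds a threshold, and the size of each resulting avalanche is recorded.
def run_sandpile(n_sites=50, n_drops=10000, threshold=2):
    h = [0] * n_sites                    # column heights; the right edge is an open boundary
    avalanche_sizes = []
    for _ in range(n_drops):
        h[0] += 1                        # drive: drop one grain at the left wall
        size = 0
        unstable = True
        while unstable:                  # relax until every local slope is at most `threshold`
            unstable = False
            for i in range(n_sites):
                right = h[i + 1] if i + 1 < n_sites else 0   # height beyond the edge is 0
                if h[i] - right > threshold:
                    h[i] -= 1                                # topple: one grain moves downhill
                    if i + 1 < n_sites:
                        h[i + 1] += 1                        # ... or falls off the open edge
                    size += 1
                    unstable = True
        avalanche_sizes.append(size)
    return avalanche_sizes

sizes = run_sandpile()
print("largest avalanche:", max(sizes), "mean size:", sum(sizes) / len(sizes))
```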

    Second order statistical behavior of LLL and BKZ

    The LLL algorithm (from Lenstra, Lenstra and Lovász) and its generalization BKZ (from Schnorr and Euchner) are widely used in cryptanalysis, especially for lattice-based cryptography. Precisely understanding their behavior is crucial for deriving appropriate key sizes for cryptographic schemes subject to lattice-reduction attacks. Current models, e.g. the Geometric Series Assumption and Chen-Nguyen's BKZ simulator, have provided a decent first-order analysis of the behavior of LLL and BKZ. However, they only focused on the average behavior and were not perfectly accurate. In this work, we initiate a second-order analysis of this behavior. We confirm and quantify discrepancies between models and experiments, in particular in the head and tail regions, and study their consequences. We also provide statistics on the variations around the mean and on correlations, and study their impact. While mostly based on experiments, by pointing at and quantifying unaccounted-for phenomena, our study lays the ground for a theoretical and predictive understanding of the performance of LLL and BKZ at the second order.
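
    The Geometric Series Assumption mentioned above says that the Gram-Schmidt log-norms of a reduced basis decrease roughly linearly with the index, so a second-order analysis looks at how the actual profile fluctuates around that line (notably in the head and tail). Below is a small hedged sketch of such a fit in Python; the function name and interface are illustrative, not taken from the paper.

```python
# Fit the GSA line to a log Gram-Schmidt profile and return the residuals, which capture
# the "second order" deviations (head/tail effects, fluctuations around the mean slope).
def gsa_fit(log_profile):
    n = len(log_profile)                 # log_profile[i] = log ||b_i*|| of the reduced basis
    xs = list(range(n))
    mean_x = (n - 1) / 2
    mean_y = sum(log_profile) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, log_profile)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    residuals = [y - (intercept + slope * x) for x, y in zip(xs, log_profile)]
    return slope, intercept, residuals

# Example with a synthetic, perfectly geometric profile: residuals are all ~0.
slope, intercept, residuals = gsa_fit([-0.03 * i for i in range(100)])
print(slope, max(abs(r) for r in residuals))
```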

    Terminating BKZ

    Strong lattice reduction is the key element for most attacks against lattice-based cryptosystems. Between the strongest but impractical HKZ reduction and the weak but fast LLL reduction, there have been several attempts to find efficient trade-offs. Among them, the BKZ algorithm introduced by Schnorr and Euchner [FCT'91] seems to achieve the best time/quality compromise in practice. However, no reasonable complexity upper bound is known for BKZ, and Gama and Nguyen [Eurocrypt'08] observed experimentally that its practical runtime seems to grow exponentially with the lattice dimension. In this work, we show that BKZ can be terminated long before its completion, while still providing bases of excellent quality. More precisely, we show that if given as inputs a basis $(b_i)_{i\leq n} \in Q^{n \times n}$ of a lattice $L$ and a block-size $\beta$, and if terminated after $\Omega\left(\frac{n^3}{\beta^2}(\log n + \log \log \max_i \|\vec{b}_i\|)\right)$ calls to a $\beta$-dimensional HKZ-reduction (or SVP) subroutine, then BKZ returns a basis whose first vector has norm $\leq 2 \gamma_{\beta}^{\frac{n-1}{2(\beta-1)}+\frac{3}{2}} \cdot (\det L)^{\frac{1}{n}}$, where $\gamma_{\beta} \leq \beta$ is the maximum of Hermite's constants in dimensions $\leq \beta$. To obtain this result, we develop a completely new elementary technique based on discrete-time affine dynamical systems, which could lead to the design of improved lattice reduction algorithms.
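
    To make the stated guarantee concrete, here is a small sketch that evaluates the output-quality bound and the order of magnitude of the SVP-call bound for given parameters. The function names are made up for illustration, the crude estimate $\gamma_\beta \leq \beta$ is used for Hermite's constant, and the $\Omega(\cdot)$ expression hides constants, so the call count is only an asymptotic shape, not a concrete prediction.

```python
import math

def terminating_bkz_norm_bound(n, beta, det_L):
    """Upper bound on ||b_1|| from the abstract, using gamma_beta <= beta."""
    gamma_beta = beta
    exponent = (n - 1) / (2 * (beta - 1)) + 1.5
    return 2 * gamma_beta ** exponent * det_L ** (1.0 / n)

def svp_call_shape(n, beta, max_basis_norm):
    """Asymptotic shape of the Omega(...) bound on SVP-oracle calls (constants unspecified)."""
    return (n ** 3 / beta ** 2) * (math.log(n) + math.log(math.log(max_basis_norm)))

# Example: a dimension-100 lattice of determinant 1, reduced with block-size 20.
print(terminating_bkz_norm_bound(n=100, beta=20, det_L=1.0))
print(svp_call_shape(n=100, beta=20, max_basis_norm=2.0 ** 50))
```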

    25 Years of Self-Organized Criticality: Solar and Astrophysics

    Shortly after the seminal paper "Self-Organized Criticality: An explanation of 1/f noise" by Bak, Tang, and Wiesenfeld (1987), the idea was applied to solar physics, in "Avalanches and the Distribution of Solar Flares" by Lu and Hamilton (1991). In the following years, an inspiring cross-fertilization from complexity theory to solar and astrophysics took place, where the SOC concept was initially applied to solar flares, stellar flares, and magnetospheric substorms, and later extended to the radiation belt, the heliosphere, lunar craters, the asteroid belt, the Saturn ring, pulsar glitches, soft X-ray repeaters, blazars, black-hole objects, cosmic rays, and boson clouds. The application of SOC concepts has been carried out with numerical cellular automaton simulations, with analytical calculations of statistical (power-law-like) distributions based on physical scaling laws, and with observational tests of theoretically predicted size distributions and waiting-time distributions. Attempts have been undertaken to import physical models into the numerical SOC toy models, such as the discretization of magneto-hydrodynamic (MHD) processes. The novel applications also stimulated vigorous debates about the discrimination between SOC models, SOC-like, and non-SOC processes, such as phase transitions, turbulence, random-walk diffusion, percolation, branching processes, network theory, chaos theory, fractality, multi-scale, and other complexity phenomena. We review SOC studies from the last 25 years and highlight new trends, open questions, and future challenges, as discussed during two recent ISSI workshops on this theme.
    Comment: 139 pages, 28 figures. Review based on the ISSI workshops "Self-Organized Criticality and Turbulence" (2012, 2013, Bern, Switzerland).
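
    The prototypical numerical cellular automaton behind many of these SOC applications is the original Bak-Tang-Wiesenfeld sandpile. Purely as a reminder of how such a toy model works (the grid size, drive, and boundary handling below are illustrative choices, not taken from the review), a minimal 2-d version looks like this:

```python
import random

def btw_step(grid):
    """Drop one grain at a random site of a 2-d BTW sandpile and return the avalanche size."""
    n = len(grid)
    i, j = random.randrange(n), random.randrange(n)
    grid[i][j] += 1                                  # drive: add a single grain
    toppled = 0
    unstable = [(i, j)]
    while unstable:                                  # relaxation: topple until all sites < 4
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4                              # topple: send one grain to each neighbour
        toppled += 1
        if grid[x][y] >= 4:
            unstable.append((x, y))                  # still unstable after one toppling
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n:          # grains crossing the boundary are lost
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return toppled

grid = [[0] * 32 for _ in range(32)]
sizes = [btw_step(grid) for _ in range(20000)]       # avalanche-size statistics become power-law-like
```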

    Towards realistic interactive sand : a GPU-based framework

    Many real-time computer games contain virtual worlds built upon terrestrial landscapes, in particular "sandy" terrains such as deserts and beaches. These terrains often contain large quantities of granular material, including sand, soil, rubble, and gravel. Allowing other environmental elements, such as trees or bodies of water, as well as players, to interact naturally and realistically with sand is an important milestone for achieving realism in games. In the past, game developers have resorted to approximating sand with flat, textured surfaces that are static, non-granular, and do not behave like the physical material they model. A reasonable expectation is that sand be granular in its composition and governed by the laws of physics in its behaviour. However, for a single PC user, physics-based models are too computationally expensive to simulate and animate in real time. An alternative is to use computer clusters to handle the numerically intensive simulation, but at the loss of single-user affordability and real-time interactivity. Instead, we propose a GPU-based simulation framework that exploits the massive computational parallelism of a modern GPU to achieve interactive frame rates on a single PC. We base our method on a discrete-element approach that represents each sand granule as a rigid arrangement of particles. Our model shows highly dynamic phenomena, such as splashing and avalanching, as well as static dune formation. Moreover, by utilising standard metrics taken from granular material science, we show that the simulated sand behaves in accordance with previous numerical and experimental research. We also support general rigid bodies in the simulation by automated particle-based sampling of their surfaces. This allows sand to interact naturally with its environment without extensive modification to the underlying physics engine. The generality of our physics framework also allows for real-time physically-based rigid-body simulation sans sand, as demonstrated in our testing. Finally, we describe an accelerated real-time method for lighting sand that supports both self-shadowing and environmental shadowing effects.
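
    At the heart of a discrete-element approach of this kind is a per-contact force law between overlapping particles. The sketch below shows one common choice, a linear spring-dashpot normal force; the stiffness and damping constants and the function interface are illustrative assumptions, not the thesis' GPU implementation.

```python
import math

def contact_normal_force(p1, p2, v1, v2, radius, k_n=1.0e4, c_n=5.0):
    """Normal contact force on particle 1 from particle 2 (linear spring-dashpot model)."""
    d = [b - a for a, b in zip(p1, p2)]              # vector from particle 1 to particle 2
    dist = math.sqrt(sum(c * c for c in d))
    overlap = 2 * radius - dist
    if dist == 0 or overlap <= 0:
        return [0.0, 0.0, 0.0]                       # not in contact
    n = [c / dist for c in d]                        # unit normal, pointing from 1 towards 2
    rel_v = [a - b for a, b in zip(v1, v2)]          # velocity of 1 relative to 2
    vn = sum(a * b for a, b in zip(rel_v, n))        # approach speed along the normal
    magnitude = max(0.0, k_n * overlap + c_n * vn)   # spring repulsion plus velocity damping
    return [-magnitude * c for c in n]               # pushes particle 1 away from particle 2

# Example: two unit-radius particles overlapping by 0.1 and approaching head-on.
print(contact_normal_force([0, 0, 0], [1.9, 0, 0], [0.1, 0, 0], [-0.1, 0, 0], radius=1.0))
```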

    The General Sieve Kernel and New Records in Lattice Reduction

    We propose the General Sieve Kernel (G6K, pronounced /ʒe.si.ka/), an abstract stateful machine supporting a wide variety of lattice reduction strategies based on sieving algorithms. Using the basic instruction set of this abstract stateful machine, we first give concise formulations of previous sieving strategies from the literature and then propose new ones. We then also give a light variant of BKZ exploiting the features of our abstract stateful machine. This encapsulates several recent suggestions (Ducas at Eurocrypt 2018; Laarhoven and Mariano at PQCrypto 2018) to move beyond treating sieving as a black-box SVP oracle and to utilise strong lattice reduction as preprocessing for sieving. Furthermore, we propose new tricks to minimise the sieving computation required for a given reduction quality, with mechanisms such as recycling vectors between sieves, on-the-fly lifting, and flexible insertions akin to Deep LLL and recent variants of Random Sampling Reduction. Moreover, we provide a highly optimised, multi-threaded and tweakable implementation of this machine, which we make open source. We then illustrate the performance of this implementation of our sieving strategies by applying G6K to various lattice challenges. In particular, our approach allows us to solve previously unsolved instances of the Darmstadt SVP (151, 153, 155) and LWE (e.g. (75, 0.005)) challenges. Our solution for the SVP-151 challenge was found 400 times faster than the time reported for the SVP-150 challenge, the previous record. For exact SVP, we observe a performance crossover between G6K and FPLLL's state-of-the-art implementation of enumeration at dimension 70.
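
    As a purely structural sketch of the "abstract stateful machine" idea (the class layout, instruction names, and toy pump loop below are illustrative assumptions, not G6K's actual interface), a reduction strategy is expressed as a program over a small instruction set acting on a sieving context and a database of short vectors:

```python
class SieveMachine:
    """Toy skeleton: state = basis, a sieving context [l, r), and a vector database."""
    def __init__(self, basis):
        self.basis = basis
        self.l = self.r = len(basis)   # current context: projected sublattice over indices [l, r)
        self.db = []                   # database of short (projected) vectors

    def extend_left(self):
        self.l -= 1                    # grow the context; existing db vectors can be recycled

    def shrink_left(self):
        self.l += 1                    # shrink the context, keeping the best lifted vectors

    def sieve(self):
        pass                           # placeholder: run a sieve inside the current context

    def insert(self, position):
        pass                           # placeholder: insert a short db vector into the basis

def pump(machine, kappa, beta):
    """Toy 'pump': sieve in progressively larger contexts, then insert on the way back down."""
    machine.l = machine.r = kappa + beta
    while machine.l > kappa:
        machine.extend_left()
        machine.sieve()
    for position in range(kappa, kappa + beta):
        machine.insert(position)
        machine.shrink_left()
```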

    Topics in Lattice Sieving


    Measurement of granular contact forces using frequency scanning interferometry

    The propagation of stress within a granular material has been studied for many years, but only recently have models and theories focused on the micromechanical (single grain) level. Experiments at this level are still rather limited in number. For this reason, a system using optical techniques has been developed. The substrate on which the granular bed is assembled is a double-layer elastic substrate, with a high-modulus epoxy constituting the top layer and silicone rubber the bottom layer. A gold coating between the two layers acts as a reflective film. To design the substrate, a finite element analysis package called LUSAS was used. By performing a non-linear contact analysis, the design of the substrate was optimised so as to give a linear response, high stiffness, deflection in the measurable range, and negligible cross-talk between neighbouring grains. Fabrication and inspection techniques were developed to enable samples to be manufactured to this design. The deformation of the gold interface layer is measured using interferometry. The interferometer utilised a frequency-tunable laser which acts both as the light source and the phase-shifting device. The optical arrangement is based on the Fizeau set-up. This has removed several problems, such as multiple reflections and sensitivity to vibration, that occurred when using a Mach-Zehnder configuration. A fifteen-frame phase-shifting algorithm based on a Hanning window was developed, which allows the phase difference map to be obtained. This is then unwrapped in order to obtain the indentation profile. The deflection profile is then converted to a single indentation depth value by fitting a Lorentzian curve to the measured data. Calibration of the substrate is carried out by loading at 9 different locations simultaneously. Spatial and temporal variations of the calibration constants are found to be of order 10-15%. Results are presented showing contact force distributions under both piles of sand and under face-centred cubic arrangements of stainless steel balls. Reasonable agreement was obtained in the latter case with both the expected mean force and the probability density function predicted by the so-called 'q' model. The experimental techniques are able to measure small displacements down to a few nanometres. To the best of my knowledge, these experiments are the first to employ the interferometer method in attempting to measure the contact force distribution at the base of a granular bed.
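
    The phase extraction step described above follows the general pattern of a windowed phase-shifting (synchronous detection) calculation. The sketch below shows that generic pattern with a Hanning window; the thesis' specific fifteen-frame coefficients are not given in the abstract, so the frame count, phase step, and interface here are assumptions for illustration only.

```python
import math

def hanning_window(n_frames):
    """Hanning weights applied to the intensity frames before synchronous detection."""
    return [0.5 * (1 - math.cos(2 * math.pi * k / (n_frames - 1))) for k in range(n_frames)]

def wrapped_phase(intensities, step=math.pi / 2):
    """Estimate the wrapped phase at one pixel from a stack of phase-shifted intensities."""
    w = hanning_window(len(intensities))
    num = sum(wk * ik * math.sin(k * step) for k, (wk, ik) in enumerate(zip(w, intensities)))
    den = sum(wk * ik * math.cos(k * step) for k, (wk, ik) in enumerate(zip(w, intensities)))
    return math.atan2(-num, den)       # wrapped to (-pi, pi]; spatial unwrapping is a later step

# Example: 15 frames of an ideal background-free cosine fringe with true phase 0.7 rad,
# shifted by pi/2 per frame; prints approximately 0.7.
frames = [5 * math.cos(0.7 + k * math.pi / 2) for k in range(15)]
print(wrapped_phase(frames))
```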