
    Terminating BKZ

    Strong lattice reduction is the key element for most attacks against lattice-based cryptosystems. Between the strongest but impractical HKZ reduction and the weak but fast LLL reduction, there have been several attempts to find efficient trade-offs. Among them, the BKZ algorithm introduced by Schnorr and Euchner [FCT'91] seems to achieve the best time/quality compromise in practice. However, no reasonable complexity upper bound is known for BKZ, and Gama and Nguyen [Eurocrypt'08] observed experimentally that its practical runtime seems to grow exponentially with the lattice dimension. In this work, we show that BKZ can be terminated long before its completion, while still providing bases of excellent quality. More precisely, we show that if given as inputs a basis $(b_i)_{i \leq n} \in \mathbb{Q}^{n \times n}$ of a lattice $L$ and a block-size $\beta$, and if terminated after $\Omega\left(\frac{n^3}{\beta^2}(\log n + \log \log \max_i \|b_i\|)\right)$ calls to a $\beta$-dimensional HKZ-reduction (or SVP) subroutine, then BKZ returns a basis whose first vector has norm $\leq 2 \gamma_{\beta}^{\frac{n-1}{2(\beta-1)}+\frac{3}{2}} \cdot (\det L)^{\frac{1}{n}}$, where $\gamma_{\beta} \leq \beta$ is the maximum of Hermite's constants in dimensions $\leq \beta$. To obtain this result, we develop a completely new elementary technique based on discrete-time affine dynamical systems, which could lead to the design of improved lattice reduction algorithms.
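    The two bounds stated in this abstract are easy to evaluate numerically. The sketch below is illustrative only: it uses the crude bound gamma_beta <= beta on Hermite's constant, omits the hidden constant in the Omega(.) term, and the function names are my own, not from the paper.

    ```python
    import math

    def bkz_norm_bound(n, beta, det_L=1.0):
        # Bound on ||b_1|| after early-terminated BKZ:
        #   2 * gamma_beta^((n-1)/(2(beta-1)) + 3/2) * det(L)^(1/n),
        # instantiated with the crude bound gamma_beta <= beta.
        gamma_beta = beta
        exponent = (n - 1) / (2 * (beta - 1)) + 1.5
        return 2.0 * gamma_beta ** exponent * det_L ** (1.0 / n)

    def bkz_call_bound(n, beta, max_basis_norm):
        # Order of magnitude (leading constant omitted) of the number of
        # SVP-subroutine calls after which BKZ may be terminated:
        #   (n^3 / beta^2) * (log n + log log max_i ||b_i||).
        return (n ** 3 / beta ** 2) * (math.log(n) + math.log(math.log(max_basis_norm)))
    ```

    As expected from the exponent (n-1)/(2(beta-1)), increasing the blocksize beta shrinks the guaranteed norm bound while the number of required SVP calls drops as 1/beta^2 (though each call becomes far more expensive).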

    Shortest vector from lattice sieving: A few dimensions for free

    Asymptotically, the best known algorithms for solving the Shortest Vector Problem (SVP) in a lattice of dimension n are sieve algorithms, which have heuristic complexity estimates ranging from (4/3)^{n+o(n)} down to (3/2)^{n/2+o(n)} when Locality Sensitive Hashing techniques are used. Sieve algorithms are however outperformed by pruned enumeration algorithms in practice by several orders of magnitude, despite the larger super-exponential asymptotic complexity 2^{Θ(n log n)} of the latter. In this work, we show a concrete improvement of sieve-type algorithms. Precisely, we show that a few calls to the sieve algorithm in lattices of dimension less than n - d solve SVP in dimension n, where d = Θ(n / log n). Although our improvement is only sub-exponential, its practical effect in relevant dimensions is quite significant. We implemented it over a simple sieve algorithm with (4/3)^{n+o(n)} complexity, and it outperforms the best sieve algorithms from the literature by a factor of 10 in dimensions 70-80. It performs less than an order of magnitude slower than pruned enumeration in the same range. By design, this improvement can also be applied to most other variants of sieve algorithms, including LSH sieve algorithms and tuple-sieve algorithms. In this light, we may expect sieve techniques to outperform pruned enumeration in practice in the near future.
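    The "dimensions for free" count d = Θ(n / log n) and the resulting heuristic speedup over a full (4/3)^{n+o(n)} sieve can be sketched as follows. The constant c, the neglected o(n) terms, and the function names are assumptions for illustration, not values from the paper.

    ```python
    import math

    def free_dimensions(n, c=1.0):
        # d = Theta(n / log n) dimensions "for free": sieving in dimension
        # n - d suffices to solve SVP in dimension n. The constant c is a
        # placeholder; the paper tunes it, this sketch does not.
        return int(c * n / math.log(n))

    def sieve_speedup(n, c=1.0):
        # Heuristic gain from running a (4/3)^n-type sieve in dimension
        # n - d instead of n: roughly a factor (4/3)^d, which is
        # sub-exponential in n yet large in practical dimensions.
        d = free_dimensions(n, c)
        return (4.0 / 3.0) ** d
    ```

    In dimension 80, for instance, d is around 18 under these toy assumptions, so the saved factor (4/3)^d already exceeds two orders of magnitude, consistent with the abstract's claim that the sub-exponential improvement is significant in relevant dimensions.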

    Discrete-Time Quantum Field Theory and the Deformed Super Virasoro Algebra

    We show that the deformations of the Virasoro and super Virasoro algebras, constructed earlier on an abstract mathematical background, emerge after Wick rotation, within an exact treatment of discrete-time free field models on a circle. The deformation parameter is $e^\lambda$, where $\lambda = \tau/\rho$ is the ratio of the discrete-time scale $\tau$ and the radius $\rho$ of the compact space.

    Improved Progressive BKZ Algorithms and their Precise Cost Estimation by Sharp Simulator

    In this paper, we investigate a variant of the BKZ algorithm, called progressive BKZ, which performs BKZ reductions by starting with a small blocksize and gradually switching to larger blocks as the process continues. We discuss techniques to accelerate the progressive BKZ algorithm by optimizing the following parameters: the blocksize, the searching radius and pruning probability of the local enumeration algorithm, and the constant in the geometric series assumption (GSA). We then propose a simulator for predicting the lengths of the Gram-Schmidt basis vectors obtained from the BKZ reduction. We also present a model for estimating the computational cost of the proposed progressive BKZ by considering efficient implementations of the local enumeration algorithm and the LLL algorithm. Finally, we compare the cost of the proposed progressive BKZ with that of other algorithms using instances from the Darmstadt SVP Challenge. The proposed algorithm is approximately 50 times faster than BKZ 2.0 (proposed by Chen and Nguyen) for solving the SVP Challenge up to dimension 160.
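    The progressive strategy described here, running BKZ tours with a gradually increasing blocksize, can be sketched as below. The schedule (linear steps), the start/step values, and the bkz_tour callback are illustrative assumptions; the paper's contribution is precisely the optimized choice of these parameters, which this sketch does not reproduce.

    ```python
    def progressive_bkz(basis, target_beta, bkz_tour, start_beta=20, step=4):
        # Run BKZ tours from a small blocksize upward: cheap early tours
        # pre-reduce the basis so the expensive large-blocksize tours
        # start from a much better-conditioned input (illustrative schedule).
        beta = start_beta
        while beta <= target_beta:
            basis = bkz_tour(basis, beta)
            beta += step
        return basis
    ```

    Any BKZ implementation exposing a single-tour routine can be dropped in as bkz_tour; the quality/time trade-off then reduces to choosing the blocksize schedule, which is what the paper's simulator and cost model optimize.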

    Practical, Predictable Lattice Basis Reduction

    Lattice reduction algorithms are notoriously hard to predict, both in terms of running time and output quality, which poses a major problem for cryptanalysis. While easy-to-analyze algorithms with good worst-case behavior exist, previous experimental evidence suggests that they are outperformed in practice by algorithms whose behavior is still not well understood, despite more than 30 years of intensive research. This has led to a situation where a rather complex simulation procedure seems to be the most common way to predict the result of their application to an instance. In this work we present new algorithmic ideas towards bridging this gap between theory and practice. We report on an extensive experimental study of several lattice reduction algorithms, both novel and from the literature, which shows that theoretical algorithms are in fact surprisingly practical and competitive. In light of our results we conclude that, in order to predict lattice reduction, simulation is superfluous and can be replaced by a closed formula using weaker assumptions. One key technique to achieving this goal is a novel algorithm to solve the Shortest Vector Problem (SVP) in the dual without computing the dual basis. Our algorithm enjoys the same practical efficiency as the corresponding primal algorithm and can easily be added to an existing implementation of it.

    Extremal Non-Compactness of Composition Operators with Linear Fractional Symbol

    We realize the norms of most composition operators acting on the Hardy space with linear fractional symbol as roots of hypergeometric functions. This realization leads to simple necessary and sufficient conditions on the symbol to exhibit extremal non-compactness, establishes the equivalence of cohyponormality and cosubnormality of composition operators with linear fractional symbol, and yields a complete classification of those linear fractional symbols that induce composition operators whose norms are determined by the action of the adjoint on the normalized reproducing kernels in the Hardy space.