
    Counting Co-Cyclic Lattices

    There is a well-known asymptotic formula, due to W. M. Schmidt (1968), for the number of full-rank integer lattices of index at most V in Z^n. This set of lattices L can naturally be partitioned with respect to the factor group Z^n/L. Accordingly, we count the number of full-rank integer lattices L ⊆ Z^n such that Z^n/L is cyclic and of order at most V, and deduce that these co-cyclic lattices are dominant among all integer lattices: their natural density is (ζ(6) · ∏_{k=4}^n ζ(k))^(-1) ≈ 85%. The problem is motivated by complexity theory, namely worst-case to average-case reductions for lattice problems.
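    A quick numerical sanity check of the stated density, as a minimal sketch: it evaluates (ζ(6) · ∏ ζ(k))^(-1) with the product truncated at a large k, assuming the mpmath library for the zeta function.

```python
# Minimal numerical check of the density formula quoted above, assuming
# the mpmath library; the product over zeta(k) is truncated because the
# factors converge to 1 extremely fast.
from mpmath import mp, zeta

mp.dps = 30  # working precision in decimal digits

density_inv = zeta(6)
for k in range(4, 200):  # zeta(k) - 1 shrinks like 2^(-k), so truncating is safe
    density_inv *= zeta(k)

print(1 / density_inv)  # ~0.85, i.e. the roughly 85% density from the abstract
```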

    Approximating the densest sublattice from Rankin's inequality

    Proceedings of Algorithmic Number Theory Symposium XI, GyeongJu, Korea, 6-11 August 2014. We present a higher-dimensional generalization of the Gama-Nguyen algorithm (STOC '08) for approximating the shortest vector problem in a lattice. This generalization approximates the densest sublattice by using a subroutine solving the exact problem in low dimension, such as the Dadush-Micciancio algorithm (SODA '13). Our approximation factor corresponds to a natural inequality on Rankin's constant derived from Rankin's inequality.
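    For context, a sketch of the inequality in question, in the standard normalization of Rankin's constant γ_{n,m}; this statement is recalled from the literature rather than from the abstract, so take the exact form as an assumption:

```latex
% Rankin's inequality relating Rankin's constants, for 1 <= m <= r <= n.
% Here \gamma_{n,m} denotes Rankin's constant: the supremum, over
% n-dimensional lattices L of unit covolume, of the squared minimal
% volume of a rank-m sublattice of L.
\gamma_{n,m} \;\le\; \gamma_{r,m} \cdot \gamma_{n,r}^{\,m/r}
```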

    Sieve algorithms for the shortest vector problem are practical

    The most famous lattice problem is the Shortest Vector Problem (SVP), which has many applications in cryptology. The best approximation algorithms known for SVP in high dimension rely on a subroutine for exact SVP in low dimension. In this paper, we assess the practicality of the best (theoretical) algorithm known for exact SVP in low dimension: the sieve algorithm proposed by Ajtai, Kumar and Sivakumar (AKS) in 2001. AKS is a randomized algorithm of time and space complexity 2^(O(n)), which is theoretically much lower than the super-exponential complexity of all alternative SVP algorithms. Surprisingly, no implementation and no practical analysis of AKS have ever been reported. It was in fact widely believed that AKS was impractical: for instance, Schnorr claimed in 2003 that the constant hidden in the 2^(O(n)) complexity was at least 30. In this paper, we show that AKS can actually be made practical: we present a heuristic variant of AKS whose running time is (4/3+ε)^n polynomial-time operations, and whose space requirement is (4/3+ε)^(n/2) polynomially many bits. Our implementation can experimentally find shortest lattice vectors up to dimension 50, but is slower than classical alternative SVP algorithms in these dimensions.
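    To make the sieving idea concrete, here is a toy sketch of the pairwise-reduction principle underlying heuristic sieve algorithms; it is closer in spirit to later Gauss-sieve style variants than to AKS itself, and all names and the tiny example are illustrative:

```python
# Toy sketch of the pairwise-reduction idea behind heuristic sieving:
# keep a list of lattice vectors and shorten each new vector against the
# list until it stabilizes. Not AKS itself, purely illustrative.

def norm2(v):
    return sum(x * x for x in v)

def reduce_against(v, ws):
    """Shorten v by adding/subtracting list vectors until it stabilizes."""
    changed = True
    while changed:
        changed = False
        for w in ws:
            for s in (-1, 1):
                cand = tuple(a + s * b for a, b in zip(v, w))
                if 0 < norm2(cand) < norm2(v):   # strictly shorter, nonzero
                    v, changed = cand, True
    return v

pool = [(7, 3), (5, 2), (4, 9), (1, 1)]   # vectors of some lattice (here Z^2)
short = []
for v in pool:
    v = reduce_against(v, short)
    if norm2(v) > 0:
        short.append(v)
print(short)   # the surviving, pairwise-reduced (shorter) vectors
```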

    Floating-Point LLL Revisited

    Everybody knows the Lenstra-Lenstra-Lovász lattice basis reduction algorithm (LLL), which has proved invaluable in public-key cryptanalysis and in many other fields. Given an integer d-dimensional lattice basis whose vectors have norms smaller than B, LLL outputs a so-called LLL-reduced basis in time O(d^6 log^3 B), using arithmetic operations on integers of bit-length O(d log B). This worst-case complexity is problematic for lattices arising in cryptanalysis, where d and/or log B are often large. As a result, the original LLL is almost never used in practice. Instead, one applies floating-point variants of LLL, where the long-integer arithmetic required by Gram-Schmidt orthogonalisation (central in LLL) is replaced by floating-point arithmetic. Unfortunately, this is known to be unstable in the worst case: the usual floating-point LLL is not even guaranteed to terminate, and the output basis may not be LLL-reduced at all. In this article, we introduce the L^2 algorithm, a new and natural floating-point variant of LLL which provably outputs LLL-reduced bases in polynomial time O(d^5 (d + log B) log B). This is the first LLL algorithm whose running time provably grows only quadratically with respect to log B without fast integer arithmetic, like the famous Gaussian and Euclidean algorithms. The growth is cubic for all other known LLL algorithms.
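    For reference, a minimal textbook LLL in exact rational arithmetic, i.e. the slow exact variant the abstract contrasts with; this sketch is not the L^2 algorithm, only a compact illustration of size reduction and the Lovász condition:

```python
# A minimal textbook LLL in exact rational arithmetic (fractions.Fraction).
# Deliberately simple and slow (Gram-Schmidt is recomputed from scratch),
# shown only to make "LLL-reduced" concrete; this is NOT the floating-point
# L^2 algorithm from the paper.
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(x) * Fraction(y) for x, y in zip(u, v))

def gram_schmidt(b):
    """Return GS vectors b* and coefficients mu (b_i = b*_i + sum_j mu_ij b*_j)."""
    n = len(b)
    bstar = [list(map(Fraction, v)) for v in b]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
            bstar[i] = [x - mu[i][j] * y for x, y in zip(bstar[i], bstar[j])]
    return bstar, mu

def lll(b, delta=Fraction(3, 4)):
    b = [list(v) for v in b]
    n, k = len(b), 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size-reduce b_k against b_j
            _, mu = gram_schmidt(b)
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt(b)
        # Lovasz condition: ||b*_k||^2 >= (delta - mu_{k,k-1}^2) * ||b*_{k-1}||^2
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k - 1], b[k] = b[k], b[k - 1]   # swap and step back
            k = max(k - 1, 1)
    return b

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```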

    Random Sampling Revisited: Lattice Enumeration with Discrete Pruning

    In 2003, Schnorr introduced random sampling to find very short lattice vectors, as an alternative to enumeration. An improved variant has been used in the past few years by Kashiwabara et al. to solve the largest Darmstadt SVP challenges. However, the behaviour of random sampling and its variants is not well understood: all analyses so far rely on a questionable heuristic assumption, namely that the lattice vectors produced by some algorithm are uniformly distributed over certain parallelepipeds. In this paper, we introduce lattice enumeration with discrete pruning, which generalizes random sampling and its variants, and provides a novel geometric description based on partitions of the n-dimensional space. We obtain what is arguably the first sound analysis of random sampling, by showing how discrete pruning can be rigorously analyzed under the well-known Gaussian heuristic, in the same model as the Gama-Nguyen-Regev analysis of pruned enumeration from EUROCRYPT '10, albeit using different tools: we show how to efficiently compute the volume of the intersection of a ball with a box, and to efficiently approximate a large sum of many such volumes, based on statistical inference. Furthermore, we show how to select good parameters for discrete pruning by enumerating integer points in an ellipsoid. Our analysis is backed up by experiments and makes it possible, for the first time, to reasonably estimate the success probability of random sampling and its variants, and to make comparisons with previous forms of pruned enumeration. Our work unifies random sampling and pruned enumeration and shows that they are complementary to each other: both have different characteristics and offer different trade-offs to speed up enumeration.
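    The central geometric quantity, the volume of the intersection of a ball with a box, can be made concrete with a naive Monte Carlo estimate; the paper computes these volumes efficiently rather than by sampling, so this brute-force sketch is purely illustrative:

```python
# Naive Monte Carlo estimate of vol(ball ∩ box). The paper computes this
# quantity efficiently; this sampling version only illustrates the object.
import random

def ball_box_volume(radius, box, samples=200_000):
    """Estimate the volume of {x : ||x|| <= radius} ∩ prod_i [0, box[i]]."""
    box_vol = 1.0
    for t in box:
        box_vol *= t
    hits = 0
    for _ in range(samples):
        x = [random.uniform(0.0, t) for t in box]
        if sum(c * c for c in x) <= radius * radius:
            hits += 1
    return box_vol * hits / samples

# Example: a ball of radius 1 against the unit cube in dimension 3;
# the exact answer is one octant of the unit ball, pi/6 ≈ 0.5236.
print(ball_box_volume(1.0, [1.0, 1.0, 1.0]))
```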

    A Complete Analysis of the BKZ Lattice Reduction Algorithm

    We present the first rigorous dynamic analysis of BKZ, the most widely used lattice reduction algorithm besides LLL. Previous analyses were either heuristic or only applied to variants of BKZ. Specifically, we provide guarantees on the quality of the current lattice basis during execution. Our analysis extends to a generic BKZ algorithm where the SVP-oracle is replaced by an approximate oracle and/or the basis update is not necessarily performed by LLL. Interestingly, it also provides currently the best and simplest bounds for both the output quality and the running time. As an application, we observe that in certain approximation regimes, it is more efficient to use BKZ with an approximate rather than an exact SVP-oracle.
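    A structural sketch of one tour of such a generic BKZ, with the SVP-oracle and the basis-update routine left as parameters; both callables are placeholders mirroring the abstract's abstraction, not a concrete implementation:

```python
# Structural sketch of one BKZ tour, parameterized by an SVP oracle and a
# basis-update routine, matching the "generic BKZ" view in the abstract:
# the oracle may be approximate, and the update need not be LLL. The
# arguments svp_oracle and update are placeholders to be supplied.

def bkz_tour(basis, beta, svp_oracle, update):
    """One tour: for each block of size <= beta, insert the oracle's short
    vector for the local block, then re-reduce the basis with `update`."""
    n = len(basis)
    for k in range(n - 1):
        block = min(beta, n - k)
        # svp_oracle examines the (projected) block of indices [k, k+block)
        # and returns integer coefficients of a short vector in it.
        coeffs = svp_oracle(basis, k, k + block)
        v = combine(basis, k, coeffs)    # lift coefficients to a lattice vector
        basis = update(basis, k, v)      # insert v at position k, re-reduce
    return basis

def combine(basis, k, coeffs):
    """Form sum_i coeffs[i] * basis[k+i], coordinate by coordinate."""
    return [sum(c * basis[k + i][j] for i, c in enumerate(coeffs))
            for j in range(len(basis[0]))]
```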

    Low-dimensional lattice basis reduction revisited

    Lattice reduction is a geometric generalization of the problem of computing greatest common divisors. Most of the interesting algorithmic problems related to lattice reduction are NP-hard as the lattice dimension increases. This article deals with the low-dimensional case. We study a greedy lattice basis reduction algorithm for the Euclidean norm, which is arguably the most natural lattice basis reduction algorithm, because it is a straightforward generalization of an old two-dimensional algorithm of Lagrange, usually known as Gauss' algorithm, and which is very similar to Euclid's gcd algorithm. Our results are two-fold. From a mathematical point of view, we show that up to dimension four, the output of the greedy algorithm is optimal: the output basis reaches all the successive minima of the lattice. However, as soon as the lattice dimension is strictly higher than four, the output basis may be arbitrarily bad, as it may not even reach the first minimum. More importantly, from a computational point of view, we show that up to dimension four, the bit-complexity of the greedy algorithm is quadratic without fast integer arithmetic, just like Euclid's gcd algorithm. This was already proved by Semaev up to dimension three using rather technical means, but it was previously unknown whether or not the algorithm was still polynomial in dimension four. We propose two different analyses: a global approach based on the geometry of the current basis when the length decrease stalls, and a local approach showing directly that a significant length decrease must occur every O(1) consecutive steps. Our analyses simplify Semaev's analysis in dimensions two and three, and unify the cases of dimensions two to four. Although the global approach is much simpler, we also present the local approach because it gives further information on the behavior of the algorithm.
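    The two-dimensional ancestor mentioned above, Lagrange's algorithm (usually credited to Gauss), is short enough to state in full; a minimal integer-arithmetic sketch, with its Euclid-like division step marked:

```python
# Lagrange's (a.k.a. Gauss') two-dimensional lattice reduction: the
# Euclid-like special case that the greedy algorithm generalizes.

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def iround(a, b):
    """Nearest integer to a/b for b > 0, in exact integer arithmetic."""
    return (2 * a + b) // (2 * b)

def lagrange_reduce(u, v):
    """Reduce a 2D lattice basis; the output reaches both successive minima."""
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        # One Euclid-like division step: remove the best integer multiple
        # of the shorter vector u from v.
        q = iround(dot(u, v), dot(u, u))
        r = (v[0] - q * u[0], v[1] - q * u[1])
        if dot(r, r) >= dot(u, u):
            return u, r
        v, u = u, r

print(lagrange_reduce((1, 0), (777, 1)))  # -> ((1, 0), (0, 1))
```

    On the basis ((1, 0), (777, 1)) it recovers ((1, 0), (0, 1)) in one division step, much as Euclid's algorithm collapses gcd(777, 1).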

    Sound-Dr: Reliable Sound Dataset and Baseline Artificial Intelligence System for Respiratory Illnesses

    As the burden of respiratory diseases continues to fall on society worldwide, this paper proposes a high-quality and reliable dataset of human sounds for studying respiratory illnesses, including pneumonia and COVID-19. It consists of coughing, mouth breathing, and nose breathing sounds together with metadata on related clinical characteristics. We also develop a proof-of-concept system for establishing baselines and benchmarking against multiple datasets, such as Coswara and COUGHVID. Our comprehensive experiments show that the Sound-Dr dataset has richer features, better performance, and is more robust to dataset shifts in various machine learning tasks. It is promising for a wide range of real-time applications on mobile devices. The proposed dataset and system will serve as practical tools to support healthcare professionals in diagnosing respiratory disorders. The dataset and code are publicly available here: https://github.com/ReML-AI/Sound-Dr/.

    The H-1 and C-13 chemical shifts of 5-5 lignin model dimers : An evaluation of DFT functionals

    The calculations of H-1 and C-13 NMR chemical shifts were performed on three 5-5 lignin dimers, prominent substructures in softwood lignins, to compare with experimental data. Initially, 10 DFT functionals (B3LYP, B3PW91, BPV86, CAM-B3LYP, HCTH, HSEH1PBE, mPW1PW91, PBEPBE, TPSSTPSS, and ωB97XD) combined with the gauge-including atomic orbital (GIAO) method and the basis set 6-31G(d,p) were tested on 3,3'-(6,6'-dihydroxy-5,5'-dimethoxy-[1,1'-biphenyl]-3,3'-diyl)dipropionic acid (1), efficiently synthesized from ferulic acid. HSEH1PBE, mPW1PW91, and ωB97XD were found to be the three best-performing functionals, with strong correlations (r^2 ≥ 0.9988) and low errors (CMAEs).
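    The selection criterion above boils down to the squared Pearson correlation between computed and experimental shifts; a minimal sketch with made-up placeholder values (not data from the paper), assuming numpy:

```python
# Illustration of the evaluation criterion: squared Pearson correlation r^2
# between computed and experimental chemical shifts. The shift values below
# are made-up placeholders, NOT data from the paper.
import numpy as np

experimental = np.array([10.2, 55.9, 112.4, 130.1, 148.3, 172.0])  # ppm
computed = np.array([10.8, 54.7, 113.9, 131.5, 147.2, 173.8])      # ppm

r = np.corrcoef(experimental, computed)[0, 1]
print(f"r^2 = {r**2:.4f}")
```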