
    Faster tuple lattice sieving using spherical locality-sensitive filters

    To overcome the large memory requirement of classical lattice sieving algorithms for solving hard lattice problems, Bai-Laarhoven-Stehlé [ANTS 2016] studied tuple lattice sieving, where tuples instead of pairs of lattice vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017] recently improved upon their results for arbitrary tuple sizes, for example showing that a triple sieve can solve the shortest vector problem (SVP) in dimension $d$ in time $2^{0.3717d + o(d)}$, using a technique similar to locality-sensitive hashing for finding nearest neighbors. In this work, we generalize the spherical locality-sensitive filters of Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near neighbor searching on dense data sets, and we apply these techniques to tuple lattice sieving to obtain even better time complexities. For instance, our triple sieve heuristically solves SVP in time $2^{0.3588d + o(d)}$. For practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this shows that a triple sieve uses less space and less time than the current best near-linear space double sieve.
    Comment: 12 pages + references, 2 figures. Subsumed/merged into Cryptology ePrint Archive 2017/228, available at https://ia.cr/2017/122
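    To make the filtering idea concrete, the following minimal Python sketch buckets unit vectors by random spherical caps and only compares vectors that share a cap. It is a toy, unstructured version with illustrative names and parameters (build_filters, alpha); the construction in the paper uses structured filters and asymptotically tuned parameters, which is what gives the stated exponents.

        import numpy as np

        rng = np.random.default_rng(1)

        def normalize(v):
            return v / np.linalg.norm(v)

        def build_filters(dim, n_filters):
            # Random filter centers on the unit sphere.  (BDGL16 uses structured
            # product codes so the relevant filters can be located in sublinear time;
            # this sketch just scans all of them.)
            return np.array([normalize(rng.standard_normal(dim)) for _ in range(n_filters)])

        def relevant_filters(v, centers, alpha):
            # A vector survives a filter if it lies in the spherical cap <v, c> >= alpha.
            return np.nonzero(centers @ v >= alpha)[0]

        def near_pairs(vectors, centers, alpha, ip_goal=0.5):
            # Bucket vectors by the filters they survive, then only compare vectors
            # that share a bucket -- the locality-sensitive filtering idea.
            buckets = {}
            for i, v in enumerate(vectors):
                for f in relevant_filters(v, centers, alpha):
                    buckets.setdefault(int(f), []).append(i)
            pairs = set()
            for idxs in buckets.values():
                for a in range(len(idxs)):
                    for b in range(a + 1, len(idxs)):
                        i, j = idxs[a], idxs[b]
                        if vectors[i] @ vectors[j] >= ip_goal:    # e.g. angle at most 60 degrees
                            pairs.add((min(i, j), max(i, j)))
            return pairs

        dim, n = 20, 500
        vecs = np.array([normalize(rng.standard_normal(dim)) for _ in range(n)])
        centers = build_filters(dim, 200)
        print(len(near_pairs(vecs, centers, alpha=0.3)))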

    Statistical Pruning for Near-Maximum Likelihood Decoding

    In many communications problems, maximum-likelihood (ML) decoding reduces to finding the closest (skewed) lattice point in $N$ dimensions to a given point $x \in \mathbb{C}^N$. In its full generality, this problem is known to be NP-complete. Recently, the expected complexity of the sphere decoder, a particular algorithm that solves the ML problem exactly, has been computed. An asymptotic analysis of this complexity has also been done where it is shown that the required computations grow exponentially in $N$ for any fixed SNR. At the same time, numerical computations of the expected complexity show that there are certain ranges of rates, SNRs and dimensions $N$ for which the expected computation (counted as the number of scalar multiplications) involves no more than $N^3$ computations. However, when the dimension of the problem grows too large, the required computations become prohibitively large, as expected from the asymptotic exponential complexity. In this paper, we propose an algorithm that, for large $N$, offers substantial computational savings over the sphere decoder, while maintaining performance arbitrarily close to ML. We statistically prune the search space to a subset that, with high probability, contains the optimal solution, thereby reducing the complexity of the search. Bounds on the error performance of the new method are proposed. The complexity of the new algorithm is analyzed through an upper bound. The asymptotic behavior of the upper bound for large $N$ is also analyzed, which shows that the upper bound is also exponential but much lower than that of the sphere decoder. Simulation results show that the algorithm is much more efficient than the original sphere decoder for smaller dimensions as well, and does not sacrifice much in terms of performance.
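    As a rough illustration of the kind of search being pruned, here is a toy Python sketch of a depth-first sphere decoder in which every search depth has its own squared-radius budget. The names and the flat budgets are illustrative only; the paper's statistical pruning derives the per-level budgets from the noise statistics so that the optimum survives with high probability.

        import numpy as np

        def sphere_decode(R, y, budgets):
            """Depth-first search for the integer x minimizing ||y - R x||, R upper triangular.

            budgets[t] bounds the squared partial distance after fixing t coordinates
            (x_{n-1}, ..., x_{n-t}); budgets[n] is the overall squared search radius.
            Choosing budgets[t] < budgets[n] for t < n is the statistical-pruning idea:
            branches unlikely to lead to the optimum are cut early.
            """
            n = R.shape[0]
            best = [None, budgets[n]]            # [best x found so far, its squared distance]

            def dfs(k, x, d2):                   # k runs from n-1 down to -1
                if k < 0:
                    if d2 < best[1]:
                        best[0], best[1] = x.copy(), d2
                    return
                r = y[k] - R[k, k + 1:] @ x[k + 1:]
                center = r / R[k, k]
                for step in range(8):            # small search window for this toy sketch
                    for xk in {int(np.floor(center)) - step, int(np.ceil(center)) + step}:
                        nd2 = d2 + (r - R[k, k] * xk) ** 2
                        if nd2 <= budgets[n - k] and nd2 < best[1]:   # prune with the level budget
                            x[k] = xk
                            dfs(k - 1, x, nd2)

            dfs(n - 1, np.zeros(n, dtype=int), 0.0)
            return best

        # toy instance: y = H x + noise, solved after a QR decomposition of H
        rng = np.random.default_rng(0)
        n = 6
        H = rng.standard_normal((n, n))
        x_true = rng.integers(-3, 4, n)
        y = H @ x_true + 0.05 * rng.standard_normal(n)
        Q, R = np.linalg.qr(H)
        budgets = np.full(n + 1, 1.0)            # flat budgets = a plain sphere decoder
        x_hat, _ = sphere_decode(R, Q.T @ y, budgets)
        print(x_true, x_hat)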

    Estimation of the Success Probability of Random Sampling by the Gram-Charlier Approximation

    The lattice basis reduction algorithm is a method for solving the Shortest Vector Problem (SVP) on lattices. There are many variants of the lattice basis reduction algorithm, such as LLL, BKZ, and RSR. Though BKZ has been used most widely, it has recently been shown that some variants of RSR are quite efficient for solving a high-dimensional SVP (they achieved many best scores in the TU Darmstadt SVP challenge). RSR alternates between generating new very short lattice vectors from the current basis (we call this procedure "random sampling") and improving the current basis by utilizing the generated very short lattice vectors. Therefore, to investigate and improve RSR, it is important to estimate the success probability of finding very short lattice vectors by combining vectors of the current basis. In this paper, we propose a new method for estimating the success probability by the Gram-Charlier approximation, which is a basic asymptotic expansion of a probability distribution in terms of higher-order cumulants such as the skewness and the kurtosis. The proposed method uses a "parametric" model for estimating the probability, which gives a closed-form expression with a few parameters. Therefore, the proposed method is much more efficient than the previous methods based on non-parametric estimation. This enables us to investigate the lattice basis reduction algorithm intensively in various situations and clarify its properties. Numerical experiments verified that the Gram-Charlier approximation can estimate the actual distribution quite accurately. In addition, we investigated RSR and its variants by the proposed method. The results showed that weighted random sampling is useful for generating shorter lattice vectors. They also showed that it is crucial for solving the SVP to improve the current basis periodically.
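    For reference, the Gram-Charlier A series gives such a closed-form estimate directly from the first few cumulants. The sketch below is a generic implementation of that textbook formula, not the paper's fitted model; all numbers in the example call are made up.

        import math

        def gram_charlier_cdf(x, mean, var, skew, ex_kurt):
            """Gram-Charlier A approximation of P(X <= x) from the first four cumulants.

            Phi and phi are the standard normal cdf/pdf, He2 and He3 are probabilists'
            Hermite polynomials, and skew and ex_kurt are the standardized third and
            fourth cumulants (skewness and excess kurtosis).
            """
            z = (x - mean) / math.sqrt(var)
            phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
            Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
            He2 = z * z - 1
            He3 = z ** 3 - 3 * z
            return Phi - phi * (skew / 6 * He2 + ex_kurt / 24 * He3)

        # toy usage: probability that a sampled squared norm falls below a bound
        print(gram_charlier_cdf(x=95.0, mean=100.0, var=40.0, skew=0.3, ex_kurt=0.1))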

    The Quantum Frontier

    The success of the abstract model of computation, in terms of bits, logical operations, programming language constructs, and the like, makes it easy to forget that computation is a physical process. Our cherished notions of computation and information are grounded in classical mechanics, but the physics underlying our world is quantum. In the early 80s researchers began to ask how computation would change if we adopted a quantum mechanical, instead of a classical mechanical, view of computation. Slowly, a new picture of computation arose, one that gave rise to a variety of faster algorithms, novel cryptographic mechanisms, and alternative methods of communication. Small quantum information processing devices have been built, and efforts are underway to build larger ones. Even apart from the existence of these devices, the quantum view on information processing has provided significant insight into the nature of computation and information, and a deeper understanding of the physics of our universe and its connections with computation. We start by describing aspects of quantum mechanics that are at the heart of a quantum view of information processing. We give our own idiosyncratic view of a number of these topics in the hopes of correcting common misconceptions and highlighting aspects that are often overlooked. A number of the phenomena described were initially viewed as oddities of quantum mechanics. It was quantum information processing, first quantum cryptography and then, more dramatically, quantum computing, that turned the tables and showed that these oddities could be put to practical effect. It is these applications we describe next. We conclude with a section describing some of the many questions left for future work, especially the mysteries surrounding where the power of quantum information ultimately comes from.
    Comment: Invited book chapter for Computation for Humanity - Information Technology to Advance Society, to be published by CRC Press. Concepts clarified and style made more uniform in version 2. Many thanks to the referees for their suggestions for improvement.

    Approximate Voronoi cells for lattices, revisited

    We revisit the approximate Voronoi cells approach for solving the closest vector problem with preprocessing (CVPP) on high-dimensional lattices, and settle the open problem of Doulgerakis-Laarhoven-De Weger [PQCrypto, 2019] of determining exact asymptotics on the volume of these Voronoi cells under the Gaussian heuristic. As a result, we obtain improved upper bounds on the time complexity of the randomized iterative slicer when using less than $2^{0.076d + o(d)}$ memory, and we show how to obtain time-memory trade-offs even when using less than $2^{0.048d + o(d)}$ memory. We also settle the open problem of obtaining a continuous trade-off between the size of the advice and the query time complexity, as the time complexity with subexponential advice in our approach scales as $d^{d/2 + o(d)}$, matching worst-case enumeration bounds, and achieving the same asymptotic scaling as average-case enumeration algorithms for the closest vector problem.
    Comment: 18 pages, 1 figure
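    For intuition, here is a minimal Python sketch of the randomized iterative slicer analyzed in this line of work: the preprocessed advice is a list of short lattice vectors, and each query reduces a rerandomized target with that list. The toy basis, list, and parameters are illustrative; the paper's contribution is the asymptotic analysis of how large the list must be, not this routine itself.

        import numpy as np

        rng = np.random.default_rng(0)

        def iterative_slice(target, short_vectors):
            # Keep subtracting any list vector that shortens the residual.  The final
            # residual t' is congruent to the target modulo the lattice, so target - t'
            # is a lattice vector close to the target.
            t = target.copy()
            improved = True
            while improved:
                improved = False
                for v in short_vectors:
                    for w in (v, -v):
                        if np.linalg.norm(t - w) < np.linalg.norm(t):
                            t = t - w
                            improved = True
            return t

        def cvpp(target, short_vectors, basis, rounds=20):
            # Randomized iterative slicer: rerandomize the target with a random lattice
            # shift each round and keep the best (shortest) residual found.
            best_len, best_vec = None, None
            for _ in range(rounds):
                shift = basis.T @ rng.integers(-2, 3, basis.shape[0])
                r = iterative_slice(target + shift, short_vectors)
                if best_len is None or np.linalg.norm(r) < best_len:
                    best_len, best_vec = np.linalg.norm(r), target - r
            return best_vec

        basis = np.array([[3, 1, 0, 0],
                          [1, 3, 1, 0],
                          [0, 1, 3, 1],
                          [0, 0, 1, 3]], dtype=float)
        shorts = [basis[i] for i in range(4)] + [basis[i] - basis[j]
                                                 for i in range(4) for j in range(4) if i != j]
        target = np.array([7.3, -2.1, 0.4, 5.9])
        v = cvpp(target, shorts, basis)
        print(v, np.linalg.norm(target - v))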

    Lattice Enumeration with Discrete Pruning: Improvement, Cost Estimation and Optimal Parameters

    Lattice enumeration is a linear-space algorithm for solving the shortest lattice vector problem (SVP). Extreme pruning is a practical technique for accelerating lattice enumeration and has a mature theoretical analysis and practical implementations; such analyses and implementations remain to be done for discrete pruning. In this paper, we improve discrete pruned enumeration (DP enumeration) and give a solution to the problem posed by Léo Ducas and Damien Stehlé about the cost estimation of discrete pruning. Our contribution covers the following three aspects. First, we refine the algorithm from both theoretical and practical perspectives. Discrete pruning with the natural number representation relies on a randomness assumption about the distribution of lattice points, and the original analysis of this assumption contains an obvious paradox. We rectify the assumption to fix the problem and correspondingly modify some details of DP enumeration. We also improve the binary search algorithm for the cell enumeration radius, giving it polynomial time complexity, and refine the cell decoding algorithm. Besides, we propose to use a truncated lattice reduction algorithm, k-tours-BKZ, as the reprocessing method when a round of enumeration fails. Second, we propose a cost estimation simulator for DP enumeration. Based on an investigation of lattice basis stability during reprocessing, we give a method to quickly simulate the squared lengths of the Gram-Schmidt orthogonalized basis vectors, and we give fitted cost estimation formulae for the sub-algorithms, in CPU cycles, obtained through intensive experiments. The success probability model is also modified based on the rectified assumption. We verify the cost estimation simulator on medium-size SVP challenge instances, and the simulation results are very close to the actual performance of DP enumeration. Third, we give a method to calculate the optimal parameter setting that minimizes the running time of DP enumeration. We compare the efficiency of our optimized DP enumeration with extreme pruning enumeration in solving SVP challenge instances. The experimental results in medium dimensions and simulation results in high dimensions both show that the discrete pruning method can outperform extreme pruning. An open-source implementation of DP enumeration with its simulator is also provided.
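    To make the cell-decoding step concrete, the following Python sketch maps a natural-number tag to a lattice point with a nearest-plane-style pass, choosing the (t_i+1)-th closest plane at level i. This is a generic illustration of discrete-pruning cells under the usual natural-number representation, with made-up helper names and a toy basis; the paper's refinements (radius search, rectified randomness assumption, cost simulator) sit on top of such a decoder.

        import numpy as np

        def kth_closest_int(x, k):
            # Integers ordered by distance from x: round(x) first, then alternating outward.
            c = int(round(x))
            order = [c]
            step = 1
            while len(order) <= k:
                lo, hi = c - step, c + step
                order += [hi, lo] if abs(hi - x) < abs(lo - x) else [lo, hi]
                step += 1
            return order[k]

        def gram_schmidt(B):
            n = B.shape[0]
            Bstar = B.astype(float).copy()
            for i in range(n):
                for j in range(i):
                    mu = (Bstar[j] @ B[i]) / (Bstar[j] @ Bstar[j])
                    Bstar[i] = Bstar[i] - mu * Bstar[j]
            return Bstar

        def decode_cell(B, tag):
            # Nearest-plane-style pass, except that at level i we take the (tag[i]+1)-th
            # closest plane instead of the closest one; tag = (0, ..., 0) decodes to 0.
            Bstar = gram_schmidt(B)
            n = B.shape[0]
            v = np.zeros(n)
            for i in range(n - 1, -1, -1):
                coeff = (v @ Bstar[i]) / (Bstar[i] @ Bstar[i])
                c = kth_closest_int(-coeff, tag[i])
                v = v + c * B[i]
            return v

        B = np.array([[5, 1, 0], [1, 6, 1], [0, 1, 7]], dtype=float)
        for tag in [(1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1)]:
            v = decode_cell(B, tag)
            print(tag, v, round(np.linalg.norm(v), 2))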

    The White-Box Adversarial Data Stream Model

    We study streaming algorithms in the white-box adversarial model, where the stream is chosen adaptively by an adversary who observes the entire internal state of the algorithm at each time step. We show that nontrivial algorithms are still possible. We first give a randomized algorithm for the $L_1$-heavy hitters problem that outperforms the optimal deterministic Misra-Gries algorithm on long streams. If the white-box adversary is computationally bounded, we use cryptographic techniques to reduce the memory of our $L_1$-heavy hitters algorithm even further and to design a number of additional algorithms for graph, string, and linear algebra problems. The existence of such algorithms is surprising, as the streaming algorithm does not even have a secret key in this model, i.e., its state is entirely known to the adversary. One algorithm we design is for estimating the number of distinct elements in a stream with insertions and deletions achieving a multiplicative approximation and sublinear space; such an algorithm is impossible for deterministic algorithms. We also give a general technique that translates any two-player deterministic communication lower bound to a lower bound for randomized algorithms robust to a white-box adversary. In particular, our results show that for all $p \ge 0$, there exists a constant $C_p > 1$ such that any $C_p$-approximation algorithm for $F_p$ moment estimation in insertion-only streams with a white-box adversary requires $\Omega(n)$ space for a universe of size $n$. Similarly, there is a constant $C > 1$ such that any $C$-approximation algorithm in an insertion-only stream for matrix rank requires $\Omega(n)$ space with a white-box adversary. Our algorithmic results based on cryptography thus show a separation between computationally bounded and unbounded adversaries. (Abstract shortened to meet arXiv limits.)
    Comment: PODS 202
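    For reference, the deterministic Misra-Gries baseline mentioned above fits in a few lines of Python; this is the standard textbook summary, not the paper's white-box-robust algorithm.

        def misra_gries(stream, k):
            """Deterministic Misra-Gries summary with k counters.

            Any element occurring more than len(stream)/(k+1) times is guaranteed to
            survive among the returned candidates; stored counts undercount the true
            frequencies by at most len(stream)/(k+1).
            """
            counters = {}
            for x in stream:
                if x in counters:
                    counters[x] += 1
                elif len(counters) < k:
                    counters[x] = 1
                else:
                    for key in list(counters):       # decrement-all step
                        counters[key] -= 1
                        if counters[key] == 0:
                            del counters[key]
            return counters

        print(misra_gries([1, 2, 1, 3, 1, 2, 1, 4, 1, 5, 1, 2], k=2))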

    CRYSTALS - Kyber: A CCA-secure Module-Lattice-Based KEM

    Rapid advances in quantum computing, together with the announcement by the National Institute of Standards and Technology (NIST) to define new standards for digital-signature, encryption, and key-establishment protocols, have created significant interest in post-quantum cryptographic schemes. This paper introduces Kyber (part of CRYSTALS - Cryptographic Suite for Algebraic Lattices - a package submitted to the NIST post-quantum standardization effort in November 2017), a portfolio of post-quantum cryptographic primitives built around a key-encapsulation mechanism (KEM), based on hardness assumptions over module lattices. Our KEM is most naturally seen as a successor to the NEWHOPE KEM (Usenix 2016). In particular, the key and ciphertext sizes of our new construction are about half as large, the KEM offers CCA security instead of only passive security, the security is based on a more general (and flexible) lattice problem, and our optimized implementation results in essentially the same running time as the aforementioned scheme. We first introduce a CPA-secure public-key encryption scheme, apply a variant of the Fujisaki-Okamoto transform to create a CCA-secure KEM, and eventually construct, in a black-box manner, CCA-secure encryption, key exchange, and authenticated-key-exchange schemes. The security of our primitives is based on the hardness of Module-LWE in the classical and quantum random oracle models, and our concrete parameters conservatively target more than 128 bits of post-quantum security.
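    The CPA-to-CCA step can be illustrated with a heavily simplified FO-style transform using re-encryption and implicit rejection. The Python sketch below uses a toy, insecure placeholder PKE and arbitrary hash and length choices purely so it runs; it does not follow Kyber's actual specification.

        import hashlib, os

        # Toy placeholder for a CPA-secure PKE (NOT Kyber's Module-LWE encryption and not
        # secure; it only exists so the transform below runs).  Passing the coins r
        # explicitly makes encryption deterministic, which the re-encryption check needs.
        def toy_keygen():
            sk = os.urandom(16)
            return hashlib.sha3_256(sk).digest(), sk

        def toy_enc(pk, m, r):
            pad = hashlib.shake_128(pk + r).digest(len(m))
            return r + bytes(a ^ b for a, b in zip(m, pad))

        def toy_dec(sk, c):
            pk = hashlib.sha3_256(sk).digest()
            r, body = c[:32], c[32:]
            pad = hashlib.shake_128(pk + r).digest(len(body))
            return bytes(a ^ b for a, b in zip(body, pad))

        def encaps(pk):
            m = os.urandom(32)                                   # random message
            g = hashlib.sha3_512(m + hashlib.sha3_256(pk).digest()).digest()
            kbar, r = g[:32], g[32:]                             # key seed and encryption coins
            c = toy_enc(pk, m, r)
            return c, hashlib.shake_256(kbar + hashlib.sha3_256(c).digest()).digest(32)

        def decaps(sk, pk, c, z):
            m = toy_dec(sk, c)
            g = hashlib.sha3_512(m + hashlib.sha3_256(pk).digest()).digest()
            kbar, r = g[:32], g[32:]
            if toy_enc(pk, m, r) == c:                           # re-encrypt and compare
                return hashlib.shake_256(kbar + hashlib.sha3_256(c).digest()).digest(32)
            # implicit rejection: a pseudorandom key derived from the secret value z
            return hashlib.shake_256(z + hashlib.sha3_256(c).digest()).digest(32)

        pk, sk = toy_keygen()
        z = os.urandom(32)
        c, k1 = encaps(pk)
        assert decaps(sk, pk, c, z) == k1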

    Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures

    We show direct and conceptually simple reductions between the classical learning with errors (LWE) problem and its continuous analog, CLWE (Bruna, Regev, Song and Tang, STOC 2021). This allows us to bring to bear the powerful machinery of LWE-based cryptography to the applications of CLWE. For example, we obtain the hardness of CLWE under the classical worst-case hardness of the gap shortest vector problem. Previously, this was known only under quantum worst-case hardness of lattice problems. More broadly, with our reductions between the two problems, any future developments to LWE will also apply to CLWE and its downstream applications. As a concrete application, we show an improved hardness result for density estimation for mixtures of Gaussians. In this computational problem, given sample access to a mixture of Gaussians, the goal is to output a function that estimates the density function of the mixture. Under the (plausible and widely believed) exponential hardness of the classical LWE problem, we show that Gaussian mixture density estimation in $\mathbb{R}^n$ with roughly $\log n$ Gaussian components given $\mathsf{poly}(n)$ samples requires time quasi-polynomial in $n$. Under the (conservative) polynomial hardness of LWE, we show hardness of density estimation for $n^{\epsilon}$ Gaussians for any constant $\epsilon > 0$, which improves on Bruna, Regev, Song and Tang (STOC 2021), who show hardness for at least $\sqrt{n}$ Gaussians under polynomial (quantum) hardness assumptions. Our key technical tool is a reduction from classical LWE to LWE with $k$-sparse secrets where the multiplicative increase in the noise is only $O(\sqrt{k})$, independent of the ambient dimension $n$.
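    For readers less familiar with the objects involved, the following toy Python sketch generates LWE samples with a k-sparse secret, the kind of instance the reduction targets. All parameters are illustrative and are not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def lwe_samples(n, m, q, sigma, secret):
            # m samples (A, b = A s + e mod q) with rounded Gaussian errors
            A = rng.integers(0, q, size=(m, n))
            e = np.rint(rng.normal(0, sigma, size=m)).astype(int)
            return A, (A @ secret + e) % q

        n, m, q, sigma, k = 32, 64, 3329, 2.0, 4
        s = np.zeros(n, dtype=int)                 # k-sparse secret: only k nonzero coordinates
        support = rng.choice(n, size=k, replace=False)
        s[support] = rng.choice([-1, 1], size=k)
        A, b = lwe_samples(n, m, q, sigma, s)
        print(((b - A @ s) % q)[:5])               # the errors, reduced mod q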

    ์žก์Œํ‚ค๋ฅผ ๊ฐ€์ง€๋Š” ์‹ ์›๊ธฐ๋ฐ˜ ๋™ํ˜•์•”ํ˜ธ์— ๊ด€ํ•œ ์—ฐ๊ตฌ

    ํ•™์œ„๋…ผ๋ฌธ(๋ฐ•์‚ฌ)--์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› :์ž์—ฐ๊ณผํ•™๋Œ€ํ•™ ์ˆ˜๋ฆฌ๊ณผํ•™๋ถ€,2020. 2. ์ฒœ์ •ํฌ.ํด๋ผ์šฐ๋“œ ์ƒ์˜ ๋ฐ์ดํ„ฐ ๋ถ„์„ ์œ„์ž„ ์‹œ๋‚˜๋ฆฌ์˜ค๋Š” ๋™ํ˜•์•”ํ˜ธ์˜ ๊ฐ€์žฅ ํšจ๊ณผ์ ์ธ ์‘์šฉ ์‹œ๋‚˜๋ฆฌ์˜ค ์ค‘ ํ•˜๋‚˜์ด๋‹ค. ๊ทธ๋Ÿฌ๋‚˜, ๋‹ค์–‘ํ•œ ๋ฐ์ดํ„ฐ ์ œ๊ณต์ž์™€ ๋ถ„์„๊ฒฐ๊ณผ ์š”๊ตฌ์ž๊ฐ€ ์กด์žฌํ•˜๋Š” ์‹ค์ œ ํ˜„์‹ค์˜ ๋ชจ๋ธ์—์„œ๋Š” ๊ธฐ๋ณธ์ ์ธ ์•”๋ณตํ˜ธํ™”์™€ ๋™ํ˜• ์—ฐ์‚ฐ ์™ธ์—๋„ ์—ฌ์ „ํžˆ ํ•ด๊ฒฐํ•ด์•ผ ํ•  ๊ณผ์ œ๋“ค์ด ๋‚จ์•„์žˆ๋Š” ์‹ค์ •์ด๋‹ค. ๋ณธ ํ•™์œ„๋…ผ๋ฌธ์—์„œ๋Š” ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์—์„œ ํ•„์š”ํ•œ ์—ฌ๋Ÿฌ ์š”๊ตฌ์‚ฌํ•ญ๋“ค์„ ํฌ์ฐฉํ•˜๊ณ , ์ด์— ๋Œ€ํ•œ ํ•ด๊ฒฐ๋ฐฉ์•ˆ์„ ๋…ผํ•˜์˜€๋‹ค. ๋จผ์ €, ๊ธฐ์กด์˜ ์•Œ๋ ค์ง„ ๋™ํ˜• ๋ฐ์ดํ„ฐ ๋ถ„์„ ์†”๋ฃจ์…˜๋“ค์€ ๋ฐ์ดํ„ฐ ๊ฐ„์˜ ์ธต์œ„๋‚˜ ์ˆ˜์ค€์„ ๊ณ ๋ คํ•˜์ง€ ๋ชปํ•œ๋‹ค๋Š” ์ ์— ์ฐฉ์•ˆํ•˜์—ฌ, ์‹ ์›๊ธฐ๋ฐ˜ ์•”ํ˜ธ์™€ ๋™ํ˜•์•”ํ˜ธ๋ฅผ ๊ฒฐํ•ฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์‚ฌ์ด์— ์ ‘๊ทผ ๊ถŒํ•œ์„ ์„ค์ •ํ•˜์—ฌ ํ•ด๋‹น ๋ฐ์ดํ„ฐ ์‚ฌ์ด์˜ ์—ฐ์‚ฐ์„ ํ—ˆ์šฉํ•˜๋Š” ๋ชจ๋ธ์„ ์ƒ๊ฐํ•˜์˜€๋‹ค. ๋˜ํ•œ ์ด ๋ชจ๋ธ์˜ ํšจ์œจ์ ์ธ ๋™์ž‘์„ ์œ„ํ•ด์„œ ๋™ํ˜•์•”ํ˜ธ ์นœํ™”์ ์ธ ์‹ ์›๊ธฐ๋ฐ˜ ์•”ํ˜ธ์— ๋Œ€ํ•˜์—ฌ ์—ฐ๊ตฌํ•˜์˜€๊ณ , ๊ธฐ์กด์— ์•Œ๋ ค์ง„ NTRU ๊ธฐ๋ฐ˜์˜ ์•”ํ˜ธ๋ฅผ ํ™•์žฅํ•˜์—ฌ module-NTRU ๋ฌธ์ œ๋ฅผ ์ •์˜ํ•˜๊ณ  ์ด๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ์‹ ์›๊ธฐ๋ฐ˜ ์•”ํ˜ธ๋ฅผ ์ œ์•ˆํ•˜์˜€๋‹ค. ๋‘˜์งธ๋กœ, ๋™ํ˜•์•”ํ˜ธ์˜ ๋ณตํ˜ธํ™” ๊ณผ์ •์—๋Š” ์—ฌ์ „ํžˆ ๋น„๋ฐ€ํ‚ค๊ฐ€ ๊ด€์—ฌํ•˜๊ณ  ์žˆ๊ณ , ๋”ฐ๋ผ์„œ ๋น„๋ฐ€ํ‚ค ๊ด€๋ฆฌ ๋ฌธ์ œ๊ฐ€ ๋‚จ์•„์žˆ๋‹ค๋Š” ์ ์„ ํฌ์ฐฉํ•˜์˜€๋‹ค. ์ด๋Ÿฌํ•œ ์ ์—์„œ ์ƒ์ฒด์ •๋ณด๋ฅผ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋ณตํ˜ธํ™” ๊ณผ์ •์„ ๊ฐœ๋ฐœํ•˜์—ฌ ํ•ด๋‹น ๊ณผ์ •์„ ๋™ํ˜•์•”ํ˜ธ ๋ณตํ˜ธํ™”์— ์ ์šฉํ•˜์˜€๊ณ , ์ด๋ฅผ ํ†ตํ•ด ์•”๋ณตํ˜ธํ™”์™€ ๋™ํ˜• ์—ฐ์‚ฐ์˜ ์ „ ๊ณผ์ •์„ ์–ด๋Š ๊ณณ์—๋„ ํ‚ค๊ฐ€ ์ €์žฅ๋˜์ง€ ์•Š์€ ์ƒํƒœ๋กœ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ์•”ํ˜ธ์‹œ์Šคํ…œ์„ ์ œ์•ˆํ•˜์˜€๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ, ๋™ํ˜•์•”ํ˜ธ์˜ ๊ตฌ์ฒด์ ์ธ ์•ˆ์ „์„ฑ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๊ณ ๋ คํ•˜์˜€๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๋™ํ˜•์•”ํ˜ธ๊ฐ€ ๊ธฐ๋ฐ˜ํ•˜๊ณ  ์žˆ๋Š” ์ด๋ฅธ๋ฐ” Learning With Errors (LWE) ๋ฌธ์ œ์˜ ์‹ค์ œ์ ์ธ ๋‚œํ•ด์„ฑ์„ ๋ฉด๋ฐ€ํžˆ ๋ถ„์„ํ•˜์˜€๊ณ , ๊ทธ ๊ฒฐ๊ณผ ๊ธฐ์กด์˜ ๊ณต๊ฒฉ ์•Œ๊ณ ๋ฆฌ์ฆ˜๋ณด๋‹ค ํ‰๊ท ์ ์œผ๋กœ 1000๋ฐฐ ์ด์ƒ ๋น ๋ฅธ ๊ณต๊ฒฉ ์•Œ๊ณ ๋ฆฌ์ฆ˜๋“ค์„ ๊ฐœ๋ฐœํ•˜์˜€๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ํ˜„์žฌ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋Š” ๋™ํ˜•์•”ํ˜ธ ํŒŒ๋ผ๋ฏธํ„ฐ๊ฐ€ ์•ˆ์ „ํ•˜์ง€ ์•Š์Œ์„ ๋ณด์˜€๊ณ , ์ƒˆ๋กœ์šด ๊ณต๊ฒฉ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ํ†ตํ•œ ํŒŒ๋ผ๋ฏธํ„ฐ ์„ค์ • ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด์„œ ๋…ผํ•˜์˜€๋‹ค.Secure data analysis delegation on cloud is one of the most powerful application that homomorphic encryption (HE) can bring. As the technical level of HE arrive at practical regime, this model is also being considered to be a more serious and realistic paradigm. In this regard, this increasing attention requires more versatile and secure model to deal with much complicated real world problems. First, as real world modeling involves a number of data owners and clients, an authorized control to data access is still required even for HE scenario. Second, we note that although homomorphic operation requires no secret key, the decryption requires the secret key. That is, the secret key management concern still remains even for HE. Last, in a rather fundamental view, we thoroughly analyze the concrete hardness of the base problem of HE, so-called Learning With Errors (LWE). In fact, for the sake of efficiency, HE exploits a weaker variant of LWE whose security is believed not fully understood. For the data encryption phase efficiency, we improve the previously suggested NTRU-lattice ID-based encryption by generalizing the NTRU concept into module-NTRU lattice. 
    Moreover, we design a novel method that decrypts the resulting ciphertext with a noisy key. This enables the decryptor to use its own noisy source, in particular a biometric one, and hence fundamentally solves the key-management problem. Finally, by considering further improvements to existing LWE-solving algorithms, we propose new algorithms that show much faster performance. Consequently, we argue that HE parameter choices should be updated with respect to our attacks in order to maintain the currently claimed security level.
    Contents:
    1 Introduction
      1.1 Access Control based on Identity
      1.2 Biometric Key Management
      1.3 Concrete Security of HE
      1.4 List of Papers
    2 Background
      2.1 Notation
      2.2 Lattices
        2.2.1 Lattice Reduction Algorithm
        2.2.2 BKZ cost model
        2.2.3 Geometric Series Assumption (GSA)
        2.2.4 The Nearest Plane Algorithm
      2.3 Gaussian Measures
        2.3.1 Kullback-Leibler Divergence
      2.4 Lattice-based Hard Problems
        2.4.1 The Learning With Errors Problem
        2.4.2 NTRU Problem
      2.5 One-way and Pseudo-random Functions
    3 ID-based Data Access Control
      3.1 Module-NTRU Lattices
        3.1.1 Construction of MNTRU lattice and trapdoor
        3.1.2 Minimize the Gram-Schmidt norm
      3.2 IBE-Scheme from Module-NTRU
        3.2.1 Scheme Construction
        3.2.2 Security Analysis by Attack Algorithms
        3.2.3 Parameter Selections
      3.3 Application to Signature
    4 Noisy Key Cryptosystem
      4.1 Reusable Fuzzy Extractors
      4.2 Local Functions
        4.2.1 Hardness over Non-uniform Sources
        4.2.2 Flipping local functions
        4.2.3 Noise stability of predicate functions: Xor-Maj
      4.3 From Pseudorandom Local Functions
        4.3.1 Basic Construction: One-bit Fuzzy Extractor
        4.3.2 Expansion to multi-bit Fuzzy Extractor
        4.3.3 Indistinguishable Reusability
        4.3.4 One-way Reusability
      4.4 From Local One-way Functions
    5 Concrete Security of Homomorphic Encryption
      5.1 Albrecht's Improved Dual Attack
        5.1.1 Simple Dual Lattice Attack
        5.1.2 Improved Dual Attack
      5.2 Meet-in-the-Middle Attack on LWE
        5.2.1 Noisy Collision Search
        5.2.2 Noisy Meet-in-the-middle Attack on LWE
      5.3 The Hybrid-Dual Attack
        5.3.1 Dimension-error Trade-off of LWE
        5.3.2 Our Hybrid Attack
      5.4 The Hybrid-Primal Attack
        5.4.1 The Primal Attack on LWE
        5.4.2 The Hybrid Attack for SVP
        5.4.3 The Hybrid-Primal attack for LWE
        5.4.4 Complexity Analysis
      5.5 Bit-security estimation
        5.5.1 Estimations
        5.5.2 Application to PKE
    6 Conclusion
    Abstract (in Korean)
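    The noisy-key decryption idea can be illustrated with the classical code-offset fuzzy extractor. The Python sketch below uses a simple repetition code and made-up parameters; it is a generic textbook construction, not the reusable extractor from pseudorandom local functions that the dissertation builds.

        import hashlib
        import numpy as np

        rng = np.random.default_rng(0)
        REP = 7                     # repetition factor: tolerates < REP/2 bit flips per key bit

        def gen(w):
            # Enrollment: sample a random key, hide its repetition encoding inside the
            # noisy reading w, and publish the offset as helper data.
            k = rng.integers(0, 2, size=len(w) // REP)
            helper = np.repeat(k, REP) ^ w
            return hashlib.sha3_256(np.packbits(k.astype(np.uint8)).tobytes()).digest(), helper

        def rep(w_noisy, helper):
            # Reproduction: a close reading plus the helper recovers the codeword up to
            # a few bit flips, which the per-block majority vote corrects.
            codeword = helper ^ w_noisy
            k = (codeword.reshape(-1, REP).sum(axis=1) > REP // 2).astype(np.uint8)
            return hashlib.sha3_256(np.packbits(k).tobytes()).digest()

        w = rng.integers(0, 2, size=16 * REP)      # enrollment reading (e.g. biometric bits)
        key, helper = gen(w)
        w2 = w.copy()
        w2[[0, 1, 2, 7, 14]] ^= 1                  # a noisy re-reading with 5 flipped bits
        assert rep(w2, helper) == key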
    • โ€ฆ
    corecore