Faster tuple lattice sieving using spherical locality-sensitive filters
To overcome the large memory requirement of classical lattice sieving
algorithms for solving hard lattice problems, Bai-Laarhoven-Stehlé [ANTS
2016] studied tuple lattice sieving, where tuples instead of pairs of lattice
vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017]
recently improved upon their results for arbitrary tuple sizes, for example
showing that a triple sieve can solve the shortest vector problem (SVP) in dimension $d$ in time $2^{0.3717d + o(d)}$, using a technique similar to
locality-sensitive hashing for finding nearest neighbors.
In this work, we generalize the spherical locality-sensitive filters of
Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near
neighbor searching on dense data sets, and we apply these techniques to tuple
lattice sieving to obtain even better time complexities. For instance, our
triple sieve heuristically solves SVP in time $2^{0.3588d + o(d)}$. For practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this shows that a triple sieve uses less space and less time than the current best near-linear space double sieve.
Comment: 12 pages + references, 2 figures. Subsumed/merged into Cryptology ePrint Archive 2017/228, available at https://ia.cr/2017/122
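As a rough illustration of the spherical filtering idea used above (a minimal sketch, not the paper's construction; the filter count `k` and threshold `alpha` are illustrative assumptions), vectors are bucketed by random spherical caps, and reductions are attempted only between vectors sharing a cap:

```python
import numpy as np

def build_filters(k, d, rng=np.random.default_rng(0)):
    """Sample k random unit vectors; each defines a spherical cap."""
    f = rng.normal(size=(k, d))
    return f / np.linalg.norm(f, axis=1, keepdims=True)

def relevant_filters(v, filters, alpha):
    """Indices of caps containing v: filters f with <v/||v||, f> >= alpha."""
    return np.where(filters @ (v / np.linalg.norm(v)) >= alpha)[0]

# Each list vector is inserted into the buckets of its relevant filters;
# candidate pairs (or tuples) are then drawn only from shared buckets.
```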
Statistical Pruning for Near-Maximum Likelihood Decoding
In many communications problems, maximum-likelihood (ML) decoding reduces to finding the closest (skewed) lattice point in N dimensions to a given point $x \in \mathbb{C}^N$. In its full generality, this problem is known to be NP-complete. Recently, the expected complexity of the sphere decoder, a particular algorithm that solves the ML problem exactly, has been computed. An asymptotic analysis of this complexity has also been done, where it is shown that the required computations grow exponentially in N for any fixed SNR. At the same time, numerical computations of the expected complexity show that there are certain ranges of rates, SNRs, and dimensions N for which the expected computation (counted as the number of scalar multiplications) involves no more than $N^3$ computations. However, when the dimension of the problem grows too large, the required computations become prohibitively large, as expected from the asymptotic exponential complexity. In this paper, we propose an algorithm that, for large N, offers substantial computational savings over the sphere decoder, while maintaining performance arbitrarily close to ML. We statistically prune the search space to a subset that, with high probability, contains the optimal solution, thereby reducing the complexity of the search. Bounds on the error performance of the new method are proposed. The complexity of the new algorithm is analyzed through an upper bound. The asymptotic behavior of the upper bound for large N is also analyzed, which shows that the upper bound is also exponential but much lower than that of the sphere decoder. Simulation results show that the algorithm is much more efficient than the original sphere decoder for smaller dimensions as well, and does not sacrifice much in terms of performance.
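To make the pruning idea concrete, here is a minimal depth-first sphere decoder over the reals in which each search depth k has its own radius; a constant radius array recovers the classical decoder, while statistical pruning chooses growing per-depth radii so the optimum survives with high probability (a hedged sketch under those assumptions, not the paper's exact pruning rule):

```python
import numpy as np

def sphere_decode(H, x, alphabet, radii):
    """min ||x - H s||^2 over s with entries in `alphabet`, via
    depth-first search; radii[k-1] bounds the partial metric after
    k symbols are decided (per-depth pruning)."""
    N = H.shape[1]
    Q, R = np.linalg.qr(H)       # reduce to an upper-triangular system
    y = Q.T @ x
    best = [np.inf, None]

    def search(level, metric, s):
        if level < 0:            # complete candidate vector found
            if metric < best[0]:
                best[0], best[1] = metric, s.copy()
            return
        depth = N - level        # symbols decided after this step
        for cand in alphabet:
            s[level] = cand
            resid = y[level] - R[level, level:] @ s[level:]
            m = metric + resid ** 2
            if m <= min(radii[depth - 1], best[0]):   # prune branch
                search(level - 1, m, s)

    search(N - 1, 0.0, np.zeros(N))
    return best                  # [metric, argmin]
```

With `radii = np.full(N, r_max**2)` this is the standard sphere decoder; replacing it with statistically chosen increasing radii yields the pruned search described above.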
Estimation of the Success Probability of Random Sampling by the Gram-Charlier Approximation
The lattice basis reduction algorithm is a method for solving the
Shortest Vector Problem (SVP) on lattices. There are many variants of
the lattice basis reduction algorithm such as LLL, BKZ, and RSR. Though
BKZ has been used most widely, it has recently been shown that some variants of RSR are quite efficient for solving high-dimensional SVP instances (they
achieved many of the best scores in the TU Darmstadt SVP challenge). RSR alternates between generating new very short lattice vectors from the current basis (a procedure called "random sampling") and improving the current basis using the generated short vectors. Therefore, estimating the success probability of finding very short lattice vectors from the current basis is important for investigating and improving RSR. In this paper,
we propose a new method for estimating the success probability by the
Gram-Charlier approximation, which is a basic asymptotic expansion of a probability distribution utilizing higher-order cumulants such as the skewness and the kurtosis. The proposed method uses a "parametric" model for estimating the probability, which gives a
closed-form expression with a few parameters. Therefore, the proposed
method is much more efficient than previous methods based on non-parametric estimation. This enables us to investigate the lattice
basis reduction algorithm intensively in various situations and clarify
its properties. Numerical experiments verified that the Gram-Charlier
approximation can estimate the actual distribution quite accurately.
In addition, we investigated RSR and its variants using the proposed method. The results showed that weighted random sampling is useful for generating shorter lattice vectors, and that periodically improving the current basis is crucial for solving the SVP.
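For concreteness, the Gram-Charlier A-series corrects a Gaussian reference density with skewness and excess-kurtosis terms via probabilists' Hermite polynomials; the sketch below shows the standard fourth-order truncation (a generic textbook form, not the paper's full estimator for lattice-vector lengths):

```python
import numpy as np

def gram_charlier_pdf(x, mu, sigma, skew, ex_kurt):
    """Fourth-order Gram-Charlier A-series approximation of a pdf from
    its first four cumulants (mean, variance, skewness, excess kurtosis)."""
    z = (x - mu) / sigma
    phi = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2.0 * np.pi))
    he3 = z**3 - 3*z             # probabilists' Hermite polynomials
    he4 = z**4 - 6*z**2 + 3
    return phi * (1.0 + skew / 6.0 * he3 + ex_kurt / 24.0 * he4)
```

With `skew = ex_kurt = 0` this reduces to the Gaussian density; the closed form in a few parameters is what makes the "parametric" estimation fast compared to non-parametric alternatives.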
The Quantum Frontier
The success of the abstract model of computation, in terms of bits, logical
operations, programming language constructs, and the like, makes it easy to
forget that computation is a physical process. Our cherished notions of
computation and information are grounded in classical mechanics, but the
physics underlying our world is quantum. In the early 80s researchers began to
ask how computation would change if we adopted a quantum mechanical, instead of
a classical mechanical, view of computation. Slowly, a new picture of
computation arose, one that gave rise to a variety of faster algorithms, novel
cryptographic mechanisms, and alternative methods of communication. Small
quantum information processing devices have been built, and efforts are
underway to build larger ones. Even apart from the existence of these devices,
the quantum view on information processing has provided significant insight
into the nature of computation and information, and a deeper understanding of
the physics of our universe and its connections with computation.
We start by describing aspects of quantum mechanics that are at the heart of
a quantum view of information processing. We give our own idiosyncratic view of
a number of these topics in the hopes of correcting common misconceptions and
highlighting aspects that are often overlooked. A number of the phenomena
described were initially viewed as oddities of quantum mechanics. It was
quantum information processing, first quantum cryptography and then, more
dramatically, quantum computing, that turned the tables and showed that these
oddities could be put to practical effect. It is these applications that we describe
next. We conclude with a section describing some of the many questions left for
future work, especially the mysteries surrounding where the power of quantum
information ultimately comes from.
Comment: Invited book chapter for Computation for Humanity - Information
Technology to Advance Society to be published by CRC Press. Concepts
clarified and style made more uniform in version 2. Many thanks to the
referees for their suggestions for improvement
Approximate Voronoi cells for lattices, revisited
We revisit the approximate Voronoi cells approach for solving the closest
vector problem with preprocessing (CVPP) on high-dimensional lattices, and
settle the open problem of Doulgerakis-Laarhoven-De Weger [PQCrypto, 2019] of
determining exact asymptotics on the volume of these Voronoi cells under the
Gaussian heuristic. As a result, we obtain improved upper bounds on the time
complexity of the randomized iterative slicer when using less than memory, and we show how to obtain time-memory trade-offs even when using
less than memory. We also settle the open problem of
obtaining a continuous trade-off between the size of the advice and the query
time complexity, as the time complexity with subexponential advice in our
approach scales as , matching worst-case enumeration bounds,
and achieving the same asymptotic scaling as average-case enumeration
algorithms for the closest vector problem.
Comment: 18 pages, 1 figure
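For intuition, the core loop of the iterative slicer that these bounds analyze reduces a target vector by a preprocessed list of short lattice vectors until it cannot be shortened further (a minimal sketch; the randomized slicer additionally rerandomizes the target over many restarts):

```python
import numpy as np

def slice_once(target, short_vectors):
    """Reduce `target` by list vectors while this shortens it; the
    result lands in the approximate Voronoi cell defined by the list."""
    t = np.asarray(target, dtype=float)
    improved = True
    while improved:
        improved = False
        for v in short_vectors:
            if np.linalg.norm(t - v) < np.linalg.norm(t):
                t = t - v        # subtracting v brings t closer to 0
                improved = True
    return t                     # closest-vector guess: target - t
```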
Lattice Enumeration with Discrete Pruning: Improvement, Cost Estimation and Optimal Parameters
Lattice enumeration is a linear-space algorithm for solving the shortest lattice vector problem (SVP). Extreme pruning is a practical technique for accelerating lattice enumeration, for which mature theoretical analysis and practical implementations exist. However, such analysis and implementations remain to be done for discrete pruning. In this paper, we improve discrete pruned enumeration (DP enumeration) and give a solution to the problem proposed by Léo Ducas and Damien Stehlé about the cost estimation of discrete pruning. Our contributions are in the following three aspects:
First, we refine the algorithm from both theoretical and practical aspects. Discrete pruning using the natural number representation relies on a randomness assumption about the lattice point distribution, which contains an obvious paradox in the original analysis. We rectify this assumption to fix the problem and correspondingly modify some details of DP enumeration. We also improve the binary search algorithm for the cell enumeration radius, giving it polynomial time complexity, and refine the cell decoding algorithm. Besides, we propose to use a truncated lattice reduction algorithm -- k-tours-BKZ -- as the reprocessing method when a round of enumeration fails.
Second, we propose a cost-estimation simulator for DP enumeration. Based on an investigation of lattice basis stability during reprocessing, we give a method to quickly simulate the squared lengths of the Gram-Schmidt orthogonalized basis vectors, and we give fitted cost-estimation formulae for the sub-algorithms, in CPU cycles, obtained through intensive experiments. The success probability model is also modified based on the rectified assumption. We verify the cost-estimation simulator on medium-size SVP challenge instances, and the simulation results are very close to the actual performance of DP enumeration.
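As a baseline for what such a simulator tracks, the sketch below generates a squared Gram-Schmidt length profile under the Geometric Series Assumption, normalized to the lattice volume (a simplified stand-in; the paper's simulator additionally models how k-tours-BKZ reprocessing perturbs this profile):

```python
import numpy as np

def gsa_sq_profile(dim, log_vol, r):
    """Simulated ||b_i*||^2 under the GSA: consecutive Gram-Schmidt
    norms decay by a fixed ratio r < 1, with the profile shifted so
    that sum(log ||b_i*||) equals the log-volume of the lattice."""
    log_norms = np.arange(dim) * np.log(r)          # geometric decay
    log_norms += log_vol / dim - log_norms.mean()   # fix the volume
    return np.exp(2.0 * log_norms)
```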
Third, we give a method to calculate the optimal parameter setting that minimizes the running time of DP enumeration. We compare the efficiency of our optimized DP enumeration with extreme pruning enumeration in solving SVP challenge instances. The experimental results in medium dimensions and simulation results in high dimensions both show that the discrete pruning method can outperform extreme pruning. An open-source implementation of DP enumeration with its simulator is also provided.
The White-Box Adversarial Data Stream Model
We study streaming algorithms in the white-box adversarial model, where the
stream is chosen adaptively by an adversary who observes the entire internal
state of the algorithm at each time step. We show that nontrivial algorithms
are still possible. We first give a randomized algorithm for the $L_1$-heavy hitters problem that outperforms the optimal deterministic Misra-Gries algorithm on long streams. If the white-box adversary is computationally bounded, we use cryptographic techniques to reduce the memory of our $L_1$-heavy hitters algorithm even further and to design a number of additional
algorithms for graph, string, and linear algebra problems. The existence of
such algorithms is surprising, as the streaming algorithm does not even have a
secret key in this model, i.e., its state is entirely known to the adversary.
One algorithm we design is for estimating the number of distinct elements in a
stream with insertions and deletions achieving a multiplicative approximation
and sublinear space; such an algorithm is impossible for deterministic
algorithms.
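For reference, the deterministic Misra-Gries baseline mentioned above keeps at most k-1 counters and guarantees that every element with frequency above n/k survives in the summary (a standard sketch, independent of the white-box model):

```python
def misra_gries(stream, k):
    """Deterministic heavy-hitters summary with at most k-1 counters;
    any element occurring more than len(stream)/k times is retained."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:                        # decrement-all step
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters
```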
We also give a general technique that translates any two-player deterministic
communication lower bound to a lower bound for randomized algorithms
robust to a white-box adversary. In particular, our results show that for all
, there exists a constant such that any -approximation
algorithm for moment estimation in insertion-only streams with a
white-box adversary requires space for a universe of size .
Similarly, there is a constant such that any -approximation algorithm
in an insertion-only stream for matrix rank requires space with a
white-box adversary. Our algorithmic results based on cryptography thus show a
separation between computationally bounded and unbounded adversaries.
(Abstract shortened to meet arXiv limits.)
Comment: PODS 2022
CRYSTALS - Kyber: A CCA-secure Module-Lattice-Based KEM
Rapid advances in quantum computing, together with the announcement by the National Institute of Standards and Technology (NIST) to define new standards for digital-signature, encryption, and key-establishment protocols, have created significant interest in post-quantum cryptographic schemes. This paper introduces Kyber (part of CRYSTALS - Cryptographic Suite for Algebraic Lattices - a package submitted to the NIST post-quantum standardization effort in November 2017), a portfolio of post-quantum cryptographic primitives built around a key-encapsulation mechanism (KEM), based on hardness assumptions over module lattices. Our KEM is most naturally seen as a successor to the NEWHOPE KEM (Usenix 2016). In particular, the key and ciphertext sizes of our new construction are about half the size, the KEM offers CCA security instead of only passive security, the security is based on a more general (and flexible) lattice problem, and our optimized implementation results in essentially the same running time as the aforementioned scheme. We first introduce a CPA-secure public-key encryption scheme, apply a variant of the Fujisaki-Okamoto transform to create a CCA-secure KEM, and eventually construct, in a black-box manner, CCA-secure encryption, key-exchange, and authenticated-key-exchange schemes. The security of our primitives is based on the hardness of Module-LWE in the classical and quantum random oracle models, and our concrete parameters conservatively target more than 128 bits of post-quantum security.
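To illustrate the CPA-to-CCA step, here is a toy version of a Fujisaki-Okamoto-style transform with implicit rejection (a hedged sketch: the `enc`/`dec` pair is an insecure stand-in for the CPA-secure scheme with pk == sk, and the hash domains are simplified relative to Kyber's actual G/H/KDF structure):

```python
import hashlib, os

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

# Toy stand-in for the CPA-secure PKE (NOT secure, NOT Kyber):
# encryption is deterministic once the coins r are fixed, so that
# the re-encryption check in Decaps is possible.
def enc(pk, m, r):
    return r + bytes(a ^ b for a, b in zip(m, H(pk, r)))

def dec(sk, c):
    r, body = c[:32], c[32:]
    return bytes(a ^ b for a, b in zip(body, H(sk, r)))

def encaps(pk):
    m = os.urandom(32)
    r = H(b"coins", pk, m)            # coins derived from the message
    c = enc(pk, m, r)
    return c, H(b"key", m, c)         # ciphertext and shared key

def decaps(pk, sk, z, c):
    m = dec(sk, c)
    if enc(pk, m, H(b"coins", pk, m)) == c:   # re-encryption check
        return H(b"key", m, c)
    return H(b"key", z, c)            # implicit rejection, secret z

pk = sk = os.urandom(32)              # toy only: pk == sk here
z = os.urandom(32)
c, K = encaps(pk)
assert decaps(pk, sk, z, c) == K
```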
Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures
We show direct and conceptually simple reductions between the classical
learning with errors (LWE) problem and its continuous analog, CLWE (Bruna,
Regev, Song and Tang, STOC 2021). This allows us to bring to bear the powerful
machinery of LWE-based cryptography to the applications of CLWE. For example,
we obtain the hardness of CLWE under the classical worst-case hardness of the
gap shortest vector problem. Previously, this was known only under quantum
worst-case hardness of lattice problems. More broadly, with our reductions
between the two problems, any future developments to LWE will also apply to
CLWE and its downstream applications.
As a concrete application, we show an improved hardness result for density
estimation for mixtures of Gaussians. In this computational problem, given
sample access to a mixture of Gaussians, the goal is to output a function that
estimates the density function of the mixture. Under the (plausible and widely
believed) exponential hardness of the classical LWE problem, we show that
Gaussian mixture density estimation in $\mathbb{R}^n$ with roughly $\log n$ Gaussian components given $\mathrm{poly}(n)$ samples requires time quasi-polynomial in $n$. Under the (conservative) polynomial hardness of LWE, we show hardness of density estimation for $n^{\epsilon}$ Gaussians for any constant $\epsilon > 0$, which improves on Bruna, Regev, Song and Tang (STOC 2021), who show hardness for at least $\sqrt{n}$ Gaussians under polynomial (quantum) hardness assumptions.
Our key technical tool is a reduction from classical LWE to LWE with $k$-sparse secrets where the multiplicative increase in the noise is only $O(\sqrt{k})$, independent of the ambient dimension $n$.
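To fix notation, a CLWE sample in the Bruna-Regev-Song-Tang sense pairs a Gaussian vector with a noisy scaled inner product against a hidden unit direction, reduced mod 1; the sketch below generates such samples (the parameter names `gamma` and `beta` follow the usual convention and are assumptions here):

```python
import numpy as np

def clwe_samples(m, n, gamma, beta, rng=np.random.default_rng(0)):
    """m CLWE samples: y ~ N(0, I_n) and
    z = gamma * <w, y> + e  (mod 1),  e ~ N(0, beta^2),
    for a secret unit direction w."""
    w = rng.normal(size=n)
    w /= np.linalg.norm(w)                 # hidden direction
    y = rng.normal(size=(m, n))
    z = np.mod(gamma * (y @ w) + rng.normal(scale=beta, size=m), 1.0)
    return y, z, w   # hardness: recovering w, or even distinguishing
                     # z from uniform on [0,1), reduces from LWE
```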
A Study on Identity-Based Homomorphic Encryption with Noisy Keys
Thesis (Ph.D.)--Seoul National University Graduate School: College of Natural Sciences, Department of Mathematical Sciences, 2020. 2. Jung Hee Cheon.
Secure delegation of data analysis to the cloud is one of the most effective applications of homomorphic encryption (HE). However, in realistic models with multiple data providers and analysis clients, challenges remain beyond the basic operations on encrypted data. This thesis identifies several requirements arising in such models and discusses solutions to them.
First, noting that previously known homomorphic data analysis solutions cannot take hierarchies or levels among data into account, we introduce a model that combines identity-based encryption with HE to set access rights between data and to allow computation on data with matching rights. For this model to work efficiently, we study HE-friendly identity-based encryption: we extend the known NTRU-based schemes by proposing the module-NTRU problem and construct an identity-based encryption scheme upon it.
Second, we observe that the decryption of HE still involves the secret key, so the key management problem remains. We develop a decryption procedure that can use biometric information and apply it to HE decryption, obtaining a cryptosystem in which encryption, decryption, and homomorphic computation can all be performed without the key being stored anywhere.
Finally, we consider concrete security estimation for HE. To this end, we closely analyze the practical hardness of the Learning With Errors (LWE) problem on which HE is based, and we develop attack algorithms that are on average more than 1000 times faster than previous ones. We thereby show that currently used HE parameters are not secure, and we discuss how to choose parameters in light of the new attack algorithms.
Secure data analysis delegation on cloud is one of the most powerful applications that homomorphic encryption (HE) can bring. As the technical level of HE arrives at a practical regime, this model is also being considered as a more serious and realistic paradigm. In this regard, this increasing attention requires a more versatile and secure model to deal with much more complicated real-world problems.
First, as real-world modeling involves a number of data owners and clients, authorized control of data access is still required even in the HE scenario. Second, we note that although homomorphic operations require no secret key, decryption does require it. That is, the secret-key management concern still remains even for HE. Last, from a rather fundamental view, we thoroughly analyze the concrete hardness of the base problem of HE, the so-called Learning With Errors (LWE) problem. In fact, for the sake of efficiency, HE exploits a weaker variant of LWE whose security is believed to be not fully understood.
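As a reference point for the LWE analyses summarized here (and detailed in Chapter 5 of the contents below), a toy LWE instance generator, with illustrative parameters and rounded-Gaussian errors rather than the specific small-secret variants HE schemes use:

```python
import numpy as np

def lwe_samples(m, n, q, sigma, rng=np.random.default_rng(0)):
    """m samples (A, b = A s + e mod q); recovering s (search LWE) or
    distinguishing b from uniform (decision LWE) should be hard."""
    s = rng.integers(0, q, size=n)                  # secret
    A = rng.integers(0, q, size=(m, n))             # public matrix
    e = np.rint(rng.normal(scale=sigma, size=m)).astype(np.int64)
    b = (A @ s + e) % q
    return A, b, s
```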
For efficiency of the data encryption phase, we improve the previously suggested NTRU-lattice ID-based encryption by generalizing the NTRU concept to module-NTRU lattices. Moreover, we design a novel method that decrypts the resulting ciphertext with a noisy key. This enables the decryptor to use its own noisy source, in particular biometrics, and hence fundamentally solves the key management problem. Finally, by considering further improvements on existing LWE-solving algorithms, we propose new algorithms that show much faster performance. Consequently, we argue that the HE parameter choice should be updated in light of our attacks in order to maintain the currently claimed security level.
1 Introduction
1.1 Access Control based on Identity
1.2 Biometric Key Management
1.3 Concrete Security of HE
1.4 List of Papers
2 Background
2.1 Notation
2.2 Lattices
2.2.1 Lattice Reduction Algorithm
2.2.2 BKZ cost model
2.2.3 Geometric Series Assumption (GSA)
2.2.4 The Nearest Plane Algorithm
2.3 Gaussian Measures
2.3.1 Kullback-Leibler Divergence
2.4 Lattice-based Hard Problems
2.4.1 The Learning With Errors Problem
2.4.2 NTRU Problem
2.5 One-way and Pseudo-random Functions
3 ID-based Data Access Control
3.1 Module-NTRU Lattices
3.1.1 Construction of MNTRU lattice and trapdoor
3.1.2 Minimize the Gram-Schmidt norm
3.2 IBE-Scheme from Module-NTRU
3.2.1 Scheme Construction
3.2.2 Security Analysis by Attack Algorithms
3.2.3 Parameter Selections
3.3 Application to Signature
4 Noisy Key Cryptosystem
4.1 Reusable Fuzzy Extractors
4.2 Local Functions
4.2.1 Hardness over Non-uniform Sources
4.2.2 Flipping local functions
4.2.3 Noise stability of predicate functions: Xor-Maj
4.3 From Pseudorandom Local Functions
4.3.1 Basic Construction: One-bit Fuzzy Extractor
4.3.2 Expansion to multi-bit Fuzzy Extractor
4.3.3 Indistinguishable Reusability
4.3.4 One-way Reusability
4.4 From Local One-way Functions
5 Concrete Security of Homomorphic Encryption
5.1 Albrecht's Improved Dual Attack
5.1.1 Simple Dual Lattice Attack
5.1.2 Improved Dual Attack
5.2 Meet-in-the-Middle Attack on LWE
5.2.1 Noisy Collision Search
5.2.2 Noisy Meet-in-the-middle Attack on LWE
5.3 The Hybrid-Dual Attack
5.3.1 Dimension-error Trade-off of LWE
5.3.2 Our Hybrid Attack
5.4 The Hybrid-Primal Attack
5.4.1 The Primal Attack on LWE
5.4.2 The Hybrid Attack for SVP
5.4.3 The Hybrid-Primal attack for LWE
5.4.4 Complexity Analysis
5.5 Bit-security estimation
5.5.1 Estimations
5.5.2 Application to PKE
6 Conclusion
Abstract (in Korean)