202 research outputs found

    Revisiting the Hardness of Binary Error LWE

    Binary error LWE is the particular case of the learning with errors (LWE) problem in which errors are chosen in {0,1}. It has various cryptographic applications; in particular, it has been used to construct efficient encryption schemes for constrained devices. Arora and Ge showed that the problem can be solved in polynomial time given a number of samples quadratic in the dimension n. On the other hand, the problem is known to be as hard as standard LWE given only slightly more than n samples. In this paper, we first examine more generally how the hardness of the problem varies with the number of available samples. Under standard heuristics on the Arora–Ge polynomial system, we show that, for any ε > 0, binary error LWE can be solved in polynomial time n^(O(1/ε)) given ε·n² samples. Similarly, it can be solved in subexponential time 2^(Õ(n^(1−α))) given n^(1+α) samples, for 0 < α < 1. As a second contribution, we also generalize the binary error LWE problem to the case of a non-uniform error probability, and analyze the hardness of non-uniform binary error LWE with respect to the error rate and the number of available samples. We show that, for any error rate 0 < p < 1, non-uniform binary error LWE is also as hard as worst-case lattice problems provided that the number of samples is suitably restricted. This is a generalization of Micciancio and Peikert's hardness proof for uniform binary error LWE. Furthermore, we also discuss attacks on the problem when the number of available samples is linear but significantly larger than n, and show that for sufficiently low error rates, subexponential or even polynomial time attacks are possible.
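    As a rough illustration of the Arora–Ge observation the abstract relies on: when e ∈ {0,1}, every sample (a, b = ⟨a,s⟩ + e mod q) satisfies the quadratic equation (b − ⟨a,s⟩)(b − ⟨a,s⟩ − 1) = 0 mod q, and with roughly n² samples these equations can be linearized and solved. A minimal sketch of the constraint itself, using toy hypothetical parameters (not from the paper):

```python
import random

# Toy binary-error LWE sampler; parameters are illustrative only.
q, n = 97, 8
secret = [random.randrange(q) for _ in range(n)]

def sample():
    """Return one binary-error LWE sample (a, b) with b = <a,s> + e mod q, e in {0,1}."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(2)
    b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
    return a, b

def arora_ge_constraint(a, b, s):
    """Arora-Ge: since e = b - <a,s> lies in {0,1}, e * (e - 1) = 0 mod q."""
    e = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return (e * (e - 1)) % q

samples = [sample() for _ in range(20)]
assert all(arora_ge_constraint(a, b, secret) == 0 for a, b in samples)
```

Expanding e·(e−1) in the coordinates of s yields one quadratic equation per sample in the unknowns s_i; treating each monomial s_i·s_j as a fresh variable turns roughly n² such equations into a linear system, which is the source of the quadratic sample bound discussed above.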

    A Study on Identity-Based Homomorphic Encryption with Noisy Keys

    Doctoral dissertation, Department of Mathematical Sciences, College of Natural Sciences, Seoul National University, February 2020. Advisor: Jung Hee Cheon.
    Delegating data analysis to the cloud is one of the most effective applications of homomorphic encryption. However, in realistic models with many data providers and clients requesting analysis results, several challenges remain beyond basic encryption, decryption, and homomorphic evaluation. This dissertation identifies the requirements arising in such models and discusses solutions. First, noting that known homomorphic data-analysis solutions do not account for hierarchies or levels among data, we combine identity-based encryption with homomorphic encryption to obtain a model that sets access rights between data and permits computation only among data within those rights. For an efficient instantiation of this model, we study homomorphic-encryption-friendly identity-based encryption: we extend known NTRU-based schemes by defining the module-NTRU problem and propose an identity-based encryption scheme based on it. Second, since the secret key is still involved in homomorphic decryption, the key-management problem remains. We therefore develop a decryption procedure that can use biometric information and apply it to homomorphic decryption, yielding a cryptosystem in which encryption, decryption, and homomorphic evaluation can all be carried out without the key being stored anywhere. Finally, we consider concrete security evaluation of homomorphic encryption. We closely analyze the practical hardness of the Learning With Errors (LWE) problem on which it is based, and develop attack algorithms that are on average more than 1000 times faster than existing ones. We thereby show that currently used homomorphic encryption parameters are insecure, and discuss how to select parameters in light of the new attacks.
    Secure data analysis delegation on the cloud is one of the most powerful applications that homomorphic encryption (HE) can bring. As the technical level of HE reaches the practical regime, this model is also being considered a more serious and realistic paradigm. In this regard, the increasing attention requires a more versatile and secure model to deal with much more complicated real-world problems. First, as real-world modeling involves a number of data owners and clients, authorized control of data access is still required even in the HE scenario. Second, we note that although homomorphic operations require no secret key, decryption does; the secret-key management concern thus remains even for HE. Last, from a rather fundamental viewpoint, we thoroughly analyze the concrete hardness of the base problem of HE, the so-called Learning With Errors (LWE) problem. In fact, for the sake of efficiency, HE exploits a weaker variant of LWE whose security is not yet fully understood.
    For efficiency in the data encryption phase, we improve the previously suggested NTRU-lattice ID-based encryption by generalizing the NTRU concept to module-NTRU lattices. Moreover, we design a novel method that decrypts the resulting ciphertext with a noisy key. This enables the decryptor to use its own noisy source, in particular biometrics, and hence fundamentally solves the key-management problem. Finally, by considering further improvements on existing LWE-solving algorithms, we propose new algorithms that show much faster performance. Consequently, we argue that the HE parameter choice should be updated in light of our attacks in order to maintain the currently claimed security level.
    Table of contents (page numbers omitted):
    1 Introduction
      1.1 Access Control based on Identity
      1.2 Biometric Key Management
      1.3 Concrete Security of HE
      1.4 List of Papers
    2 Background
      2.1 Notation
      2.2 Lattices
        2.2.1 Lattice Reduction Algorithm
        2.2.2 BKZ cost model
        2.2.3 Geometric Series Assumption (GSA)
        2.2.4 The Nearest Plane Algorithm
      2.3 Gaussian Measures
        2.3.1 Kullback-Leibler Divergence
      2.4 Lattice-based Hard Problems
        2.4.1 The Learning With Errors Problem
        2.4.2 NTRU Problem
      2.5 One-way and Pseudo-random Functions
    3 ID-based Data Access Control
      3.1 Module-NTRU Lattices
        3.1.1 Construction of MNTRU lattice and trapdoor
        3.1.2 Minimize the Gram-Schmidt norm
      3.2 IBE-Scheme from Module-NTRU
        3.2.1 Scheme Construction
        3.2.2 Security Analysis by Attack Algorithms
        3.2.3 Parameter Selections
      3.3 Application to Signature
    4 Noisy Key Cryptosystem
      4.1 Reusable Fuzzy Extractors
      4.2 Local Functions
        4.2.1 Hardness over Non-uniform Sources
        4.2.2 Flipping local functions
        4.2.3 Noise stability of predicate functions: Xor-Maj
      4.3 From Pseudorandom Local Functions
        4.3.1 Basic Construction: One-bit Fuzzy Extractor
        4.3.2 Expansion to multi-bit Fuzzy Extractor
        4.3.3 Indistinguishable Reusability
        4.3.4 One-way Reusability
      4.4 From Local One-way Functions
    5 Concrete Security of Homomorphic Encryption
      5.1 Albrecht's Improved Dual Attack
        5.1.1 Simple Dual Lattice Attack
        5.1.2 Improved Dual Attack
      5.2 Meet-in-the-Middle Attack on LWE
        5.2.1 Noisy Collision Search
        5.2.2 Noisy Meet-in-the-middle Attack on LWE
      5.3 The Hybrid-Dual Attack
        5.3.1 Dimension-error Trade-off of LWE
        5.3.2 Our Hybrid Attack
      5.4 The Hybrid-Primal Attack
        5.4.1 The Primal Attack on LWE
        5.4.2 The Hybrid Attack for SVP
        5.4.3 The Hybrid-Primal attack for LWE
        5.4.4 Complexity Analysis
      5.5 Bit-security estimation
        5.5.1 Estimations
        5.5.2 Application to PKE
    6 Conclusion
    Abstract (in Korean)

    A Survey on Homomorphic Encryption Schemes: Theory and Implementation

    Legacy encryption systems depend on sharing a key (public or private) among the peers involved in exchanging an encrypted message. However, this approach poses privacy concerns. Especially with popular cloud services, the control over the privacy of the sensitive data is lost. Even when the keys are not shared, the encrypted material is shared with a third party that does not necessarily need to access the content. Moreover, untrusted servers, providers, and cloud operators can keep identifying elements of users long after users end the relationship with the services. Indeed, Homomorphic Encryption (HE), a special kind of encryption scheme, can address these concerns as it allows any third party to operate on the encrypted data without decrypting it in advance. Although this extremely useful feature of HE schemes has been known for over 30 years, the first plausible and achievable Fully Homomorphic Encryption (FHE) scheme, which allows any computable function to be evaluated on the encrypted data, was introduced by Craig Gentry in 2009. Even though this was a major achievement, different implementations so far demonstrated that FHE still needs to be improved significantly to be practical on every platform. First, we present the basics of HE and the details of the well-known Partially Homomorphic Encryption (PHE) and Somewhat Homomorphic Encryption (SWHE), which are important pillars of achieving FHE. Then, the main FHE families, which have become the basis for follow-up FHE schemes, are presented. Furthermore, the implementations and recent improvements in Gentry-type FHE schemes are also surveyed. Finally, further research directions are discussed. This survey is intended to give clear knowledge and a foundation to researchers and practitioners interested in knowing, applying, as well as extending the state of the art HE, PHE, SWHE, and FHE systems. Comment: Updated (October 6, 2017). This paper is an early draft of the survey that is being submitted to ACM CSUR and has been uploaded to arXiv for feedback from stakeholders.
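    To make the "operate on encrypted data" idea concrete, here is a minimal additively homomorphic toy scheme in the LWE style. It is not any scheme from the survey, and the parameters are illustrative and far too small to be secure; the point is only that adding two ciphertexts componentwise yields an encryption of the XOR of the underlying bits.

```python
import random

# Toy symmetric LWE-style scheme (hypothetical parameters, illustration only).
q, n, err = 1 << 15, 16, 4
s = [random.randrange(q) for _ in range(n)]

def enc(m):
    """Encrypt a bit m as (a, <a,s> + e + m*floor(q/2) mod q)."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-err, err)
    b = (sum(x * y for x, y in zip(a, s)) + e + m * (q // 2)) % q
    return a, b

def dec(ct):
    """Recover the bit by rounding the phase b - <a,s> to 0 or q/2."""
    a, b = ct
    phase = (b - sum(x * y for x, y in zip(a, s))) % q
    return 1 if q // 4 <= phase < 3 * q // 4 else 0

def add(c1, c2):
    """Componentwise sum of ciphertexts encrypts the XOR of the bits."""
    a1, b1 = c1
    a2, b2 = c2
    return [(x + y) % q for x, y in zip(a1, a2)], (b1 + b2) % q

for m1 in (0, 1):
    for m2 in (0, 1):
        assert dec(add(enc(m1), enc(m2))) == m1 ^ m2
```

The correctness argument mirrors the survey's discussion of noise: the phases add, so the error grows with each homomorphic operation, and decryption succeeds only while the accumulated error stays below q/4. Managing that growth is exactly what SWHE restricts and what Gentry-type bootstrapping overcomes.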

    SALSA: Attacking Lattice Cryptography with Transformers

    Currently deployed public-key cryptosystems will be vulnerable to attacks by full-scale quantum computers. Consequently, "quantum resistant" cryptosystems are in high demand, and lattice-based cryptosystems, based on a hard problem known as Learning With Errors (LWE), have emerged as strong contenders for standardization. In this work, we train transformers to perform modular arithmetic and combine half-trained models with statistical cryptanalysis techniques to propose SALSA: a machine learning attack on LWE-based cryptographic schemes. SALSA can fully recover secrets for small-to-mid size LWE instances with sparse binary secrets, and may scale to attack real-world LWE-based cryptosystems. Comment: Extended version of work published at NeurIPS 202

    Securely Instantiating Cryptographic Schemes Based on the Learning with Errors Assumption

    Since its proposal by Regev in 2005, the Learning With Errors (LWE) problem has been used as the underlying problem for a great variety of schemes. Its applications are many-fold, reaching from basic and highly practical primitives like key exchange, public-key encryption, and signature schemes to very advanced solutions like fully homomorphic encryption, group signatures, and identity-based encryption. One of the underlying reasons for this fertility is the flexibility with which LWE can be instantiated. Unfortunately, this comes at a cost: it makes selecting parameters for cryptographic applications complicated. When selecting parameters for a new LWE-based primitive, a researcher has to take into consideration the influence of several parameters on the efficiency of the scheme and on the runtime of a variety of attacks. In fact, the missing trust in the concrete hardness of LWE is one of the main problems to overcome to bring LWE-based schemes into practice. This thesis aims at closing the gap between the theoretical knowledge of the hardness of LWE and the concrete problem of selecting parameters for an LWE-based scheme. To this end, we analyze the existing methods to estimate the hardness of LWE, and introduce new estimation techniques where necessary. Afterwards, we show how to transfer this knowledge into instantiations that are at the same time secure and efficient. We show this process on three examples: - A highly optimized public-key encryption scheme for embedded devices that is based on a variant of Ring-LWE. - A practical signature scheme that served as the foundation of one of the best lattice-based signature schemes based on standard lattices. - An advanced public-key encryption scheme that enjoys the unique property of natural double hardness based on LWE instances similar to those used for fully homomorphic encryption.

    On Dual Lattice Attacks Against Small-Secret LWE and Parameter Choices in HElib and SEAL

    We present novel variants of the dual-lattice attack against LWE in the presence of an unusually short secret. These variants are informed by recent progress in BKW-style algorithms for solving LWE. Applying them to parameter sets suggested by the homomorphic encryption libraries HElib and SEAL v2.0 yields revised security estimates. Our techniques scale the exponent of the dual-lattice attack by a factor of (2L)/(2L+1) when log q = Θ(L log n), where the secret has constant Hamming weight h and L is the maximum depth of supported circuits. They also allow halving the dimension of the lattice under consideration at a multiplicative cost of 2^h operations. Moreover, our techniques yield revised concrete security estimates. For example, both libraries promise 80 bits of security for LWE instances with n = 1024 and log₂ q ≈ 47, while the techniques described in this work lead to estimated costs of 68 bits (SEAL v2.0) and 62 bits (HElib).
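    The exponent scaling claimed above is easy to evaluate numerically. The sketch below uses a made-up 80-bit baseline cost, not the paper's estimates, and simply shows how the factor (2L)/(2L+1) approaches 1 as the supported circuit depth L grows, i.e., the savings are largest for shallow circuits.

```python
def scaled_cost_bits(baseline_bits, L):
    """Scale the exponent (in bits) of the dual-lattice attack by (2L)/(2L+1),
    following the abstract's asymptotic claim; inputs are illustrative."""
    return baseline_bits * (2 * L) / (2 * L + 1)

# Hypothetical 80-bit baseline at a few circuit depths L.
for L in (1, 5, 20):
    print(L, round(scaled_cost_bits(80, L), 1))
```

For depth L = 1 the hypothetical cost drops to about 53 bits, while at L = 20 the factor 40/41 leaves it near 78 bits, consistent with the intuition that larger moduli (deeper circuits) leave less room for this speed-up.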

    On the Security of Lattice-Based Cryptography Against Lattice Reduction and Hybrid Attacks

    Over the past decade, lattice-based cryptography has emerged as one of the most promising candidates for post-quantum public-key cryptography. For most current lattice-based schemes, one can recover the secret key by solving a corresponding instance of the unique Shortest Vector Problem (uSVP), the problem of finding the shortest non-zero vector in a lattice in which this vector is unusually short. This work is concerned with the concrete hardness of the uSVP. In particular, we study the uSVP in general as well as instances of the problem with particularly small or sparse short vectors, which are used in cryptographic constructions to increase their efficiency. We study solving the uSVP in general via lattice reduction, more precisely via the Block-wise Korkine-Zolotarev (BKZ) algorithm. In order to solve an instance of the uSVP via BKZ, the applied block size, which parametrizes the BKZ algorithm, needs to be sufficiently large. However, a larger block size results in higher runtimes of the algorithm. It is therefore of utmost interest to determine the minimal block size that guarantees the success of solving the uSVP via BKZ. In this thesis, we provide a theoretical and experimental validation of a success condition for BKZ when solving the uSVP, which can be used to determine the minimal required block size. We further study the practical implications of using so-called sparsification techniques in combination with the above approach. With respect to uSVP instances with particularly small or sparse short vectors, we investigate so-called hybrid attacks. We first adapt the "hybrid lattice reduction and meet-in-the-middle attack" (or short: the hybrid attack) by Howgrave-Graham on the NTRU encryption scheme to the uSVP. Due to this adaption, the attack can be applied to a larger class of lattice-based cryptosystems. In addition, we enhance the runtime analysis of the attack, e.g., by an explicit calculation of the involved success probabilities.
As a next step, we improve the hybrid attack in two directions as described in the following. To reflect the potential of a modern attacker on classical computers, we show how to parallelize the attack. We show that our parallel version of the hybrid attack scales well within realistic parameter ranges. Our theoretical analysis is supported by practical experiments, using our implementation of the parallel hybrid attack which employs Open Multi-Processing and the Message Passing Interface. To reflect the power of a potential future attacker who has access to a large-scale quantum computer, we develop a quantum version of the hybrid attack which replaces the classical meet-in-the-middle search by a quantum search. Not only is the quantum hybrid attack faster than its classical counterpart, but it is also applicable to a wider range of uSVP instances (and hence to a larger number of lattice-based schemes) as it uses a quantum search which is sensitive to the distribution on the search space. Finally, we demonstrate the practical relevance of our results by using the techniques developed in this thesis to evaluate the concrete security levels of the lattice-based schemes submitted to the US National Institute of Standards and Technology's process of standardizing post-quantum public-key cryptography.
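    A commonly used success condition of the kind discussed above is the so-called 2016 estimate: BKZ with block size β is expected to recover an unusually short vector v in a d-dimensional lattice of volume V once √(β/d)·‖v‖ ≤ δ_β^(2β−d−1)·V^(1/d), with δ_β the root-Hermite factor. The sketch below builds a minimal block-size estimator on that condition; the exact condition validated in the thesis may differ in details, and the example instance is made up.

```python
import math

def delta(beta):
    """Heuristic root-Hermite factor for BKZ with block size beta (beta > 50)."""
    return ((beta / (2 * math.pi * math.e)) * (math.pi * beta) ** (1 / beta)) \
        ** (1 / (2 * (beta - 1)))

def min_block_size(d, norm_v, log_vol):
    """Smallest block size predicted by the 2016 estimate to recover a vector
    of the given norm in a d-dimensional lattice with log-volume log_vol."""
    for beta in range(50, d + 1):
        lhs = math.log(math.sqrt(beta / d) * norm_v)
        rhs = (2 * beta - d - 1) * math.log(delta(beta)) + log_vol / d
        if lhs <= rhs:
            return beta
    return None

# Illustrative (made-up) instance: dimension 400, target norm 60,
# lattice volume q^200 with q = 2^13.
beta = min_block_size(400, 60.0, 200 * 13 * math.log(2))
```

As the condition suggests, shortening the target vector only ever helps: a smaller norm lowers the left-hand side at every β, so the minimal successful block size can only decrease, which is the effect small or sparse secrets exploit.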

    Parallel Implementation of BDD enumeration for LWE

    One of the most attractive problems for post-quantum secure cryptographic schemes is the LWE problem. Besides combinatorial and algebraic attacks, LWE can be solved by a lattice-based Bounded Distance Decoding (BDD) approach. We provide the first parallel implementation of an enumeration-based BDD algorithm that employs the Lindner-Peikert and Linear Length pruning strategies. We ran our algorithm on a large variety of LWE parameters, from which we derive the following interesting results. First, our parallel enumeration achieves almost perfect speed-up, which allows us to provide for the first time practical cryptanalytic results on standard LWE parameters of meaningful size. Second, we conclude that lattice-based attacks perform better than recent advanced BKW-type algorithms even for small noise, while requiring far fewer samples. Third, we experimentally show weaknesses for a binary matrix LWE proposal of Galbraith.
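    The enumeration described above generalizes Babai's nearest-plane procedure: instead of keeping only the nearest plane at each level, the enumeration (with Lindner-Peikert or length pruning) explores several candidate planes per level. A minimal sketch of the base procedure, assuming a row-basis convention; it is not the paper's parallel implementation.

```python
import numpy as np

def nearest_plane(B, t):
    """Babai's nearest-plane algorithm: given a lattice basis B (rows) and a
    target t, return a lattice vector close to t. Output quality depends on
    how well-reduced B is; BDD enumeration branches where this rounds."""
    B = np.array(B, dtype=float)
    t = np.array(t, dtype=float)
    # Gram-Schmidt orthogonalization of the basis rows.
    Bstar = np.array(B)
    for i in range(len(B)):
        for j in range(i):
            Bstar[i] -= (B[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j]) * Bstar[j]
    v = np.zeros_like(t)
    r = t.copy()
    # Walk from the last basis vector down, choosing the nearest plane each time.
    for i in reversed(range(len(B))):
        c = np.rint((r @ Bstar[i]) / (Bstar[i] @ Bstar[i]))
        r -= c * B[i]
        v += c * B[i]
    return v
```

On the identity basis this reduces to coordinate-wise rounding; the pruning strategies named in the abstract bound how many alternative coefficients c are tried at each level, trading success probability against enumeration time.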