
    Adaptive learning and cryptography

    Significant links exist between cryptography and computational learning theory. Cryptographic functions are the usual means of establishing intractability results in computational learning theory, since they can show that certain problems are hard in a representation-independent sense. Conversely, hard learning problems have been used to build efficient cryptographic protocols such as authentication schemes, pseudo-random permutations and functions, and even public-key encryption schemes.

    Learning theory and coding theory also influence cryptography by enabling cryptographic primitives to cope with noise or bias in their inputs. Several constructions of fuzzy primitives exist, a fuzzy primitive being one that functions correctly even in the presence of noisy or non-uniform inputs. Examples include error-correcting block ciphers, fuzzy identity-based cryptosystems, fuzzy extractors, and fuzzy sketches. Error-correcting block ciphers combine encryption and error correction in a single function, which increases efficiency. Fuzzy identity-based encryption allows the decryption of any ciphertext that was encrypted under a close-enough identity. Fuzzy extractors and sketches are methods of reliably (re)producing a uniformly random secret key from an imperfectly reproducible string drawn from a biased source, with the aid of a public string called the sketch.

    While hard learning problems have many qualities that make them useful for constructing cryptographic protocols, such as their inherent error tolerance and simple algebraic structure, it is often difficult to use them to construct highly secure protocols because of the assumptions they place on the learning algorithm. Due to these assumptions, the resulting protocols often lack security against various types of adaptive adversaries. To help address this issue, we further examine the interrelationships between cryptography and learning theory by introducing the concept of adaptive learning. Adaptive learning is a rather weak form of learning in which the learner is not expected to approximate the concept function in its entirety; rather, it is only expected to answer a query of the learner's choice about the target. Adaptive learning allows for a much weaker learner than the standard model while maintaining the positive properties of many standard-model learning problems, which we feel makes problems that are hard to learn adaptively more useful than standard-model learning problems in the design of cryptographic protocols. We argue that learning parity with noise is hard to learn adaptively, and we use that assumption to construct a related-key secure, efficient MAC as well as an efficient authentication scheme. In addition, we examine the security properties of fuzzy sketches and extractors and demonstrate how these properties can be combined using our related-key secure MAC. We go on to demonstrate that our extractor allows a form of related-key hardening for protocols: by changing how the key for a primitive is stored, it renders that protocol immune to related-key attacks.
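    To make the learning-parity-with-noise (LPN) assumption concrete, the following toy Python sketch shows how LPN samples are generated; the parameter choices and helper names are illustrative assumptions of ours, not the dissertation's construction.

        import secrets

        def lpn_sample(s, noise_rate=0.125):
            """One LPN sample (a, b) with b = <a, s> XOR e over GF(2).

            Given many such pairs, recovering s (or even predicting b
            for a fresh a) is the LPN problem. Without the noise bit e,
            Gaussian elimination over GF(2) would recover s from len(s)
            independent samples; the noise is what makes the problem
            conjecturally hard.
            """
            a = [secrets.randbelow(2) for _ in range(len(s))]
            parity = sum(ai & si for ai, si in zip(a, s)) & 1
            e = 1 if secrets.randbelow(1000) < int(noise_rate * 1000) else 0
            return a, parity ^ e

        s = [secrets.randbelow(2) for _ in range(16)]   # secret parity vector
        samples = [lpn_sample(s) for _ in range(32)]    # the adversary's view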

    Forward Secure Fuzzy Extractors


    Strong key derivation from noisy sources

    A shared cryptographic key enables strong authentication. Candidate sources for creating such a shared key include biometrics and physically unclonable functions. However, these sources come with a substantial problem: noise in repeated readings. A fuzzy extractor produces a stable key from a noisy source. It consists of two stages. At enrollment time, the generate algorithm produces a key from an initial reading of the source. At authentication time, the reproduce algorithm takes a repeated but noisy reading of the source, yielding the same key when the two readings are close. For many sources of practical importance, traditional fuzzy extractors provide no meaningful security guarantee.

    This dissertation improves key derivation from noisy sources. The improvements stem from three observations about traditional fuzzy extractors. First, the only property of a source that standard fuzzy extractors use is the entropy in the original reading; we observe that additional structural information about the source can facilitate key derivation. Second, most fuzzy extractors work by first recovering the initial reading from the noisy reading (via a so-called secure sketch), an approach that imposes harsh limitations on the length of the derived key; we observe that it is possible to produce a consistent key without recovering the original reading of the source. Third, traditional fuzzy extractors provide information-theoretic security, yet security against computationally bounded adversaries is sufficient; we observe that fuzzy extractors providing computational security can overcome limitations of traditional approaches.

    The above observations are supported by negative results and constructions. As an example, we combine all three observations to construct a fuzzy extractor achieving properties that have eluded prior approaches. The construction remains secure even when the initial enrollment phase is repeated multiple times with noisy readings. Furthermore, for many practical sources, reliability demands that the tolerated noise be larger than the entropy of the original reading. The construction provides security for sources of this type by utilizing additional source structure, producing a consistent key without recovering the original reading, and providing computational security.
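    For a concrete picture of the two-stage interface described above, here is a minimal Python sketch of the traditional code-offset approach: a secure sketch plus a hash standing in for a proper randomness extractor. It is exactly the "recover the initial reading first" pattern whose key-length limitations the dissertation identifies; all names and parameters are illustrative assumptions, not the dissertation's construction.

        import hashlib
        import secrets

        BLOCK = 5   # repetition-code length: corrects up to 2 bit flips per block

        def _encode(bits):
            # Repetition code: repeat each key bit BLOCK times.
            return [b for bit in bits for b in [bit] * BLOCK]

        def _decode(bits):
            # Majority vote within each BLOCK-bit group.
            return [1 if 2 * sum(bits[i:i + BLOCK]) > BLOCK else 0
                    for i in range(0, len(bits), BLOCK)]

        def generate(w):
            # Enrollment: pad the reading w with a random codeword (the
            # code offset), publish the offset, hash the message into a key.
            k = [secrets.randbelow(2) for _ in range(len(w) // BLOCK)]
            sketch = [wi ^ ci for wi, ci in zip(w, _encode(k))]
            return hashlib.sha256(bytes(k)).digest(), sketch

        def reproduce(w_noisy, sketch):
            # Authentication: XOR the noisy reading against the public
            # sketch, decode away the noise, re-derive the same key.
            k = _decode([wi ^ si for wi, si in zip(w_noisy, sketch)])
            return hashlib.sha256(bytes(k)).digest()

    Here reproduce returns the enrollment key whenever the new reading flips at most two bits per five-bit block; note that decoding implicitly recovers the original reading, which is precisely what the dissertation's constructions avoid.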

    Code Offset in the Exponent

    Fuzzy extractors derive stable keys from noisy sources. They are a fundamental tool for key derivation from biometric sources. This work introduces a new construction, code offset in the exponent. This construction is the first reusable fuzzy extractor that simultaneously supports structured, low-entropy distributions with correlated symbols and confidence information. These properties are specifically motivated by the most pertinent applications, key derivation from biometrics and physical unclonable functions, which typically demonstrate low entropy with additional statistical correlations and benefit from extractors that can leverage confidence information for efficiency. Code offset in the exponent is a group encoding of the code offset construction (Juels and Wattenberg, CCS 1999). A random codeword of a linear error-correcting code is used as a one-time pad for a sampled value from the noisy source. Rather than encoding this directly, code offset in the exponent encodes by exponentiation of a generator in a cryptographically strong group. We introduce and characterize a condition on noisy sources that directly translates to security of our construction in the generic group model. Our condition requires the inner product between the source distribution and all vectors in the null space of the code to be unpredictable.
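    The following toy Python sketch, with deliberately tiny group parameters of our choosing, illustrates both the encoding step and why the null-space condition arises: for any vector v in the dual code, <c, v> = 0, so combining sketch components according to v cancels the codeword and reveals g^(<w, v>), precisely the quantity the condition requires to be unpredictable.

        import secrets

        # Toy group: the order-11 subgroup of Z_23^* generated by 4
        # (illustrative parameters only; a real instantiation uses a
        # cryptographically strong group of large prime order).
        p, q, g = 23, 11, 4

        def sketch_in_exponent(w, c):
            # Publish g^(w_i + c_i) per symbol, not the pad w_i + c_i itself.
            return [pow(g, (wi + ci) % q, p) for wi, ci in zip(w, c)]

        def dual_leakage(sketch, v):
            # For v in the null space of the code, <c, v> = 0 mod q, so
            # prod_i sketch[i]^v[i] = g^(<w, v>): the security condition
            # says this inner product must be unpredictable for every v.
            out = 1
            for si, vi in zip(sketch, v):
                out = (out * pow(si, vi, p)) % p
            return out

        # Example: the length-3 repetition code over Z_11 has dual vectors
        # such as v = (1, 10, 0), since c = (m, m, m) gives <c, v> = 0 mod 11.
        w = [secrets.randbelow(11) for _ in range(3)]
        m = secrets.randbelow(11)
        pub = sketch_in_exponent(w, [m, m, m])
        assert dual_leakage(pub, [1, 10, 0]) == pow(g, (w[0] + 10 * w[1]) % q, p)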

    μž‘μŒν‚€λ₯Ό κ°€μ§€λŠ” μ‹ μ›κΈ°λ°˜ λ™ν˜•μ•”ν˜Έμ— κ΄€ν•œ 연ꡬ

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Natural Sciences, Department of Mathematical Sciences, February 2020. Advisor: Jung Hee Cheon. Secure delegation of data analysis to the cloud is one of the most powerful applications that homomorphic encryption (HE) can bring. As the technical level of HE reaches the practical regime, this model is being taken seriously as a realistic paradigm, and the increasing attention demands a more versatile and secure model that can handle complicated real-world problems. First, since real-world modeling involves a number of data owners and clients, authorized control of data access is still required even in the HE scenario; to address this, we combine identity-based encryption with HE, obtaining a model in which access rights are attached to data and computation is permitted only among ciphertexts with compatible rights. Second, we note that although homomorphic operations require no secret key, decryption does, so the secret-key management concern remains even for HE. Last, from a more fundamental viewpoint, we thoroughly analyze the concrete hardness of the base problem of HE, the so-called Learning With Errors (LWE) problem; indeed, for the sake of efficiency, HE exploits a weaker variant of LWE whose security is believed to be not fully understood. For efficiency in the data-encryption phase, we improve the previously suggested NTRU-lattice identity-based encryption by generalizing the NTRU concept to module-NTRU lattices. Moreover, we design a novel method that decrypts the resulting ciphertext with a noisy key. This enables the decryptor to use its own noisy source, in particular a biometric, yielding a cryptosystem in which encryption, homomorphic evaluation, and decryption can all be performed without the secret key being stored anywhere, and hence fundamentally solves the key-management problem.
    Finally, by further improving existing LWE-solving algorithms, we propose new attack algorithms that are on average more than 1000 times faster than prior ones. Consequently, we argue that HE parameter choices should be updated in light of our attacks in order to maintain the currently claimed security level. The LWE problem these attacks target is sketched in the toy example following the table of contents below.

    Contents:
    1 Introduction
      1.1 Access Control based on Identity
      1.2 Biometric Key Management
      1.3 Concrete Security of HE
      1.4 List of Papers
    2 Background
      2.1 Notation
      2.2 Lattices
        2.2.1 Lattice Reduction Algorithm
        2.2.2 BKZ cost model
        2.2.3 Geometric Series Assumption (GSA)
        2.2.4 The Nearest Plane Algorithm
      2.3 Gaussian Measures
        2.3.1 Kullback-Leibler Divergence
      2.4 Lattice-based Hard Problems
        2.4.1 The Learning With Errors Problem
        2.4.2 NTRU Problem
      2.5 One-way and Pseudo-random Functions
    3 ID-based Data Access Control
      3.1 Module-NTRU Lattices
        3.1.1 Construction of MNTRU lattice and trapdoor
        3.1.2 Minimize the Gram-Schmidt norm
      3.2 IBE-Scheme from Module-NTRU
        3.2.1 Scheme Construction
        3.2.2 Security Analysis by Attack Algorithms
        3.2.3 Parameter Selections
      3.3 Application to Signature
    4 Noisy Key Cryptosystem
      4.1 Reusable Fuzzy Extractors
      4.2 Local Functions
        4.2.1 Hardness over Non-uniform Sources
        4.2.2 Flipping local functions
        4.2.3 Noise stability of predicate functions: Xor-Maj
      4.3 From Pseudorandom Local Functions
        4.3.1 Basic Construction: One-bit Fuzzy Extractor
        4.3.2 Expansion to multi-bit Fuzzy Extractor
        4.3.3 Indistinguishable Reusability
        4.3.4 One-way Reusability
      4.4 From Local One-way Functions
    5 Concrete Security of Homomorphic Encryption
      5.1 Albrecht's Improved Dual Attack
        5.1.1 Simple Dual Lattice Attack
        5.1.2 Improved Dual Attack
      5.2 Meet-in-the-Middle Attack on LWE
        5.2.1 Noisy Collision Search
        5.2.2 Noisy Meet-in-the-middle Attack on LWE
      5.3 The Hybrid-Dual Attack
        5.3.1 Dimension-error Trade-off of LWE
        5.3.2 Our Hybrid Attack
      5.4 The Hybrid-Primal Attack
        5.4.1 The Primal Attack on LWE
        5.4.2 The Hybrid Attack for SVP
        5.4.3 The Hybrid-Primal attack for LWE
        5.4.4 Complexity Analysis
      5.5 Bit-security estimation
        5.5.1 Estimations
        5.5.2 Application to PKE
    6 Conclusion
    Abstract (in Korean)
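    As a minimal illustration of the Learning With Errors problem targeted by the attacks above (toy Python with assumed parameters; the dissertation's estimates concern the far larger parameters of deployed HE schemes):

        import secrets

        # Toy LWE parameters (illustrative sizes of our choosing).
        n, Q, B = 8, 3329, 2    # dimension, modulus, error bound

        def lwe_sample(s):
            # One LWE sample (a, b) with b = <a, s> + e mod Q, where e is
            # a small centered error. Dual, primal, and hybrid attacks
            # estimate how hard recovering s from many such samples is.
            a = [secrets.randbelow(Q) for _ in range(n)]
            e = secrets.randbelow(2 * B + 1) - B
            b = (sum(ai * si for ai, si in zip(a, s)) + e) % Q
            return a, b

        s = [secrets.randbelow(Q) for _ in range(n)]
        samples = [lwe_sample(s) for _ in range(16)]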