45 research outputs found

    μž‘μŒν‚€λ₯Ό κ°€μ§€λŠ” μ‹ μ›κΈ°λ°˜ λ™ν˜•μ•”ν˜Έμ— κ΄€ν•œ 연ꡬ

    ν•™μœ„λ…Όλ¬Έ(박사)--μ„œμšΈλŒ€ν•™κ΅ λŒ€ν•™μ› :μžμ—°κ³Όν•™λŒ€ν•™ μˆ˜λ¦¬κ³Όν•™λΆ€,2020. 2. μ²œμ •ν¬.ν΄λΌμš°λ“œ μƒμ˜ 데이터 뢄석 μœ„μž„ μ‹œλ‚˜λ¦¬μ˜€λŠ” λ™ν˜•μ•”ν˜Έμ˜ κ°€μž₯ 효과적인 μ‘μš© μ‹œλ‚˜λ¦¬μ˜€ 쀑 ν•˜λ‚˜μ΄λ‹€. κ·ΈλŸ¬λ‚˜, λ‹€μ–‘ν•œ 데이터 μ œκ³΅μžμ™€ 뢄석결과 μš”κ΅¬μžκ°€ μ‘΄μž¬ν•˜λŠ” μ‹€μ œ ν˜„μ‹€μ˜ λͺ¨λΈμ—μ„œλŠ” 기본적인 μ•”λ³΅ν˜Έν™”μ™€ λ™ν˜• μ—°μ‚° 외에도 μ—¬μ „νžˆ ν•΄κ²°ν•΄μ•Ό ν•  κ³Όμ œλ“€μ΄ λ‚¨μ•„μžˆλŠ” 싀정이닀. λ³Έ ν•™μœ„λ…Όλ¬Έμ—μ„œλŠ” μ΄λŸ¬ν•œ λͺ¨λΈμ—μ„œ ν•„μš”ν•œ μ—¬λŸ¬ μš”κ΅¬μ‚¬ν•­λ“€μ„ ν¬μ°©ν•˜κ³ , 이에 λŒ€ν•œ ν•΄κ²°λ°©μ•ˆμ„ λ…Όν•˜μ˜€λ‹€. λ¨Όμ €, 기쑴의 μ•Œλ €μ§„ λ™ν˜• 데이터 뢄석 μ†”λ£¨μ…˜λ“€μ€ 데이터 κ°„μ˜ μΈ΅μœ„λ‚˜ μˆ˜μ€€μ„ κ³ λ €ν•˜μ§€ λͺ»ν•œλ‹€λŠ” 점에 μ°©μ•ˆν•˜μ—¬, μ‹ μ›κΈ°λ°˜ μ•”ν˜Έμ™€ λ™ν˜•μ•”ν˜Έλ₯Ό κ²°ν•©ν•˜μ—¬ 데이터 사이에 μ ‘κ·Ό κΆŒν•œμ„ μ„€μ •ν•˜μ—¬ ν•΄λ‹Ή 데이터 μ‚¬μ΄μ˜ 연산을 ν—ˆμš©ν•˜λŠ” λͺ¨λΈμ„ μƒκ°ν•˜μ˜€λ‹€. λ˜ν•œ 이 λͺ¨λΈμ˜ 효율적인 λ™μž‘μ„ μœ„ν•΄μ„œ λ™ν˜•μ•”ν˜Έ μΉœν™”μ μΈ μ‹ μ›κΈ°λ°˜ μ•”ν˜Έμ— λŒ€ν•˜μ—¬ μ—°κ΅¬ν•˜μ˜€κ³ , 기쑴에 μ•Œλ €μ§„ NTRU 기반의 μ•”ν˜Έλ₯Ό ν™•μž₯ν•˜μ—¬ module-NTRU 문제λ₯Ό μ •μ˜ν•˜κ³  이λ₯Ό 기반으둜 ν•œ μ‹ μ›κΈ°λ°˜ μ•”ν˜Έλ₯Ό μ œμ•ˆν•˜μ˜€λ‹€. λ‘˜μ§Έλ‘œ, λ™ν˜•μ•”ν˜Έμ˜ λ³΅ν˜Έν™” κ³Όμ •μ—λŠ” μ—¬μ „νžˆ λΉ„λ°€ν‚€κ°€ κ΄€μ—¬ν•˜κ³  있고, λ”°λΌμ„œ λΉ„λ°€ν‚€ 관리 λ¬Έμ œκ°€ λ‚¨μ•„μžˆλ‹€λŠ” 점을 ν¬μ°©ν•˜μ˜€λ‹€. μ΄λŸ¬ν•œ μ μ—μ„œ 생체정보λ₯Ό ν™œμš©ν•  수 μžˆλŠ” λ³΅ν˜Έν™” 과정을 κ°œλ°œν•˜μ—¬ ν•΄λ‹Ή 과정을 λ™ν˜•μ•”ν˜Έ λ³΅ν˜Έν™”μ— μ μš©ν•˜μ˜€κ³ , 이λ₯Ό 톡해 μ•”λ³΅ν˜Έν™”μ™€ λ™ν˜• μ—°μ‚°μ˜ μ „ 과정을 μ–΄λŠ 곳에도 ν‚€κ°€ μ €μž₯λ˜μ§€ μ•Šμ€ μƒνƒœλ‘œ μˆ˜ν–‰ν•  수 μžˆλŠ” μ•”ν˜Έμ‹œμŠ€ν…œμ„ μ œμ•ˆν•˜μ˜€λ‹€. λ§ˆμ§€λ§‰μœΌλ‘œ, λ™ν˜•μ•”ν˜Έμ˜ ꡬ체적인 μ•ˆμ „μ„± 평가 방법을 κ³ λ €ν•˜μ˜€λ‹€. 이λ₯Ό μœ„ν•΄ λ™ν˜•μ•”ν˜Έκ°€ κΈ°λ°˜ν•˜κ³  μžˆλŠ” 이λ₯Έλ°” Learning With Errors (LWE) 문제의 μ‹€μ œμ μΈ λ‚œν•΄μ„±μ„ λ©΄λ°€νžˆ λΆ„μ„ν•˜μ˜€κ³ , κ·Έ κ²°κ³Ό 기쑴의 곡격 μ•Œκ³ λ¦¬μ¦˜λ³΄λ‹€ ν‰κ· μ μœΌλ‘œ 1000λ°° 이상 λΉ λ₯Έ 곡격 μ•Œκ³ λ¦¬μ¦˜λ“€μ„ κ°œλ°œν•˜μ˜€λ‹€. 이λ₯Ό 톡해 ν˜„μž¬ μ‚¬μš©ν•˜κ³  μžˆλŠ” λ™ν˜•μ•”ν˜Έ νŒŒλΌλ―Έν„°κ°€ μ•ˆμ „ν•˜μ§€ μ•ŠμŒμ„ λ³΄μ˜€κ³ , μƒˆλ‘œμš΄ 곡격 μ•Œκ³ λ¦¬μ¦˜μ„ ν†΅ν•œ νŒŒλΌλ―Έν„° μ„€μ • 방법에 λŒ€ν•΄μ„œ λ…Όν•˜μ˜€λ‹€.Secure data analysis delegation on cloud is one of the most powerful application that homomorphic encryption (HE) can bring. As the technical level of HE arrive at practical regime, this model is also being considered to be a more serious and realistic paradigm. In this regard, this increasing attention requires more versatile and secure model to deal with much complicated real world problems. First, as real world modeling involves a number of data owners and clients, an authorized control to data access is still required even for HE scenario. Second, we note that although homomorphic operation requires no secret key, the decryption requires the secret key. That is, the secret key management concern still remains even for HE. Last, in a rather fundamental view, we thoroughly analyze the concrete hardness of the base problem of HE, so-called Learning With Errors (LWE). In fact, for the sake of efficiency, HE exploits a weaker variant of LWE whose security is believed not fully understood. For the data encryption phase efficiency, we improve the previously suggested NTRU-lattice ID-based encryption by generalizing the NTRU concept into module-NTRU lattice. Moreover, we design a novel method that decrypts the resulting ciphertext with a noisy key. This enables the decryptor to use its own noisy source, in particular biometric, and hence fundamentally solves the key management problem. 
    Finally, by considering further improvements to existing LWE-solving algorithms, we propose new algorithms with much faster performance. Consequently, we argue that HE parameter choices should be updated with respect to our attacks in order to maintain the currently claimed security level.

    Table of contents:
    1 Introduction
        1.1 Access Control based on Identity
        1.2 Biometric Key Management
        1.3 Concrete Security of HE
        1.4 List of Papers
    2 Background
        2.1 Notation
        2.2 Lattices
            2.2.1 Lattice Reduction Algorithm
            2.2.2 BKZ cost model
            2.2.3 Geometric Series Assumption (GSA)
            2.2.4 The Nearest Plane Algorithm
        2.3 Gaussian Measures
            2.3.1 Kullback-Leibler Divergence
        2.4 Lattice-based Hard Problems
            2.4.1 The Learning With Errors Problem
            2.4.2 NTRU Problem
        2.5 One-way and Pseudo-random Functions
    3 ID-based Data Access Control
        3.1 Module-NTRU Lattices
            3.1.1 Construction of MNTRU lattice and trapdoor
            3.1.2 Minimize the Gram-Schmidt norm
        3.2 IBE-Scheme from Module-NTRU
            3.2.1 Scheme Construction
            3.2.2 Security Analysis by Attack Algorithms
            3.2.3 Parameter Selections
        3.3 Application to Signature
    4 Noisy Key Cryptosystem
        4.1 Reusable Fuzzy Extractors
        4.2 Local Functions
            4.2.1 Hardness over Non-uniform Sources
            4.2.2 Flipping local functions
            4.2.3 Noise stability of predicate functions: Xor-Maj
        4.3 From Pseudorandom Local Functions
            4.3.1 Basic Construction: One-bit Fuzzy Extractor
            4.3.2 Expansion to multi-bit Fuzzy Extractor
            4.3.3 Indistinguishable Reusability
            4.3.4 One-way Reusability
        4.4 From Local One-way Functions
    5 Concrete Security of Homomorphic Encryption
        5.1 Albrecht's Improved Dual Attack
            5.1.1 Simple Dual Lattice Attack
            5.1.2 Improved Dual Attack
        5.2 Meet-in-the-Middle Attack on LWE
            5.2.1 Noisy Collision Search
            5.2.2 Noisy Meet-in-the-middle Attack on LWE
        5.3 The Hybrid-Dual Attack
            5.3.1 Dimension-error Trade-off of LWE
            5.3.2 Our Hybrid Attack
        5.4 The Hybrid-Primal Attack
            5.4.1 The Primal Attack on LWE
            5.4.2 The Hybrid Attack for SVP
            5.4.3 The Hybrid-Primal attack for LWE
            5.4.4 Complexity Analysis
        5.5 Bit-security estimation
            5.5.1 Estimations
            5.5.2 Application to PKE
    6 Conclusion
    Abstract (in Korean)
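    The attacks in Chapter 5 target LWE as instantiated by HE schemes, where for efficiency the secret is sparse and ternary rather than uniform (the "weaker variant" mentioned above). The following minimal Python sketch, with toy parameters of my own choosing rather than those analyzed in the thesis, shows what such an instance looks like; the dual, meet-in-the-middle, and hybrid attacks all aim to recover s from samples of exactly this shape.

```python
import random

q, n, m, h = 2**13, 32, 64, 8   # toy modulus, dimension, sample count, secret weight
SIGMA = 3.2                     # Gaussian error width, a common choice in HE schemes

def sparse_ternary_secret(rng):
    """The 'weaker variant': a secret with only h nonzero entries, each in {-1, +1}."""
    s = [0] * n
    for i in rng.sample(range(n), h):
        s[i] = rng.choice([-1, 1])
    return s

def lwe_samples(s, rng):
    """m pairs (a, <a, s> + e mod q); recovering s from these is the attack target."""
    samples = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]
        e = round(rng.gauss(0, SIGMA))
        samples.append((a, (sum(x * y for x, y in zip(a, s)) + e) % q))
    return samples

rng = random.Random(0)
samples = lwe_samples(sparse_ternary_secret(rng), rng)
```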

    Strong key derivation from noisy sources

    A shared cryptographic key enables strong authentication. Candidate sources for creating such a shared key include biometrics and physically unclonable functions. However, these sources come with a substantial problem: noise in repeated readings. A fuzzy extractor produces a stable key from a noisy source. It consists of two stages. At enrollment time, the generate algorithm produces a key from an initial reading of the source. At authentication time, the reproduce algorithm takes a repeated but noisy reading of the source, yielding the same key when the two readings are close. For many sources of practical importance, traditional fuzzy extractors provide no meaningful security guarantee. This dissertation improves key derivation from noisy sources. These improvements stem from three observations about traditional fuzzy extractors. First, the only property of a source that standard fuzzy extractors use is the entropy in the original reading. We observe that additional structural information about the source can facilitate key derivation. Second, most fuzzy extractors work by first recovering the initial reading from the noisy reading (known as a secure sketch). This approach imposes harsh limitations on the length of the derived key. We observe that it is possible to produce a consistent key without recovering the original reading of the source. Third, traditional fuzzy extractors provide information-theoretic security. However, security against computationally bounded adversaries is sufficient. We observe that fuzzy extractors providing computational security can overcome limitations of traditional approaches. The above observations are supported by negative results and constructions. As an example, we combine all three observations to construct a fuzzy extractor achieving properties that have eluded prior approaches. The construction remains secure even when the initial enrollment phase is repeated multiple times with noisy readings. Furthermore, for many practical sources, reliability demands that the tolerated noise is larger than the entropy of the original reading. The construction provides security for sources of this type by utilizing additional source structure, producing a consistent key without recovering the original reading, and providing computational security.
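    The "recover the initial reading first" approach mentioned in the second observation is exemplified by the classical code-offset secure sketch (Juels and Wattenberg, CCS 1999). The Python sketch below is a minimal toy illustration, not a construction from this dissertation: it uses a repetition code (correcting up to 2 flipped bits per 5-bit block) and SHA-256 as the key-derivation step, both illustrative choices of mine.

```python
import hashlib, secrets

R = 5            # repetition factor: each random bit is repeated R times
K = 16           # random message bits, so readings are K * R bits long

def encode(msg_bits):
    """The repetition code C: repeat each message bit R times."""
    return [b for b in msg_bits for _ in range(R)]

def decode(code_bits):
    """Majority vote within each R-bit block."""
    return [int(sum(code_bits[i*R:(i+1)*R]) > R // 2) for i in range(K)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def generate(w):
    """Enrollment: sketch = w XOR C(r) for random r; key derived from w."""
    r = [secrets.randbelow(2) for _ in range(K)]
    sketch = xor(w, encode(r))
    key = hashlib.sha256(bytes(w)).hexdigest()   # a real FE uses a strong extractor here
    return key, sketch

def reproduce(w2, sketch):
    """Authentication: decode w2 XOR sketch to recover r, then the exact w, then the key."""
    r = decode(xor(w2, sketch))
    w = xor(sketch, encode(r))
    return hashlib.sha256(bytes(w)).hexdigest()

w = [secrets.randbelow(2) for _ in range(K * R)]
key, sketch = generate(w)
w2 = w[:]; w2[0] ^= 1; w2[37] ^= 1               # a noisy re-reading with two bit flips
assert reproduce(w2, sketch) == key
```

    Because reproduce recovers w exactly, the derived key can be no longer than the entropy left in w after the sketch is published, which is the harsh length limitation the dissertation refers to.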

    Code Offset in the Exponent

    Fuzzy extractors derive stable keys from noisy sources. They are a fundamental tool for key derivation from biometric sources. This work introduces a new construction, code offset in the exponent. This construction is the first reusable fuzzy extractor that simultaneously supports structured, low-entropy distributions with correlated symbols and confidence information. These properties are specifically motivated by the most pertinent applications - key derivation from biometrics and physical unclonable functions - which typically demonstrate low entropy with additional statistical correlations and benefit from extractors that can leverage confidence information for efficiency. Code offset in the exponent is a group encoding of the code offset construction (Juels and Wattenberg, CCS 1999): a random codeword of a linear error-correcting code is used as a one-time pad for a value sampled from the noisy source. Rather than encoding this directly, code offset in the exponent encodes by exponentiation of a generator in a cryptographically strong group. We introduce and characterize a condition on noisy sources that directly translates to security of our construction in the generic group model. Our condition requires the inner product between the source distribution and all vectors in the null space of the code to be unpredictable.
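    To make the mechanics concrete, here is a toy Python sketch of the encoding and of recovery by searching low-weight error patterns "in the exponent". It is my own illustration under simplifying assumptions: the linear code is a bare repetition code (which would not satisfy the paper's null-space unpredictability condition), arithmetic mod a Mersenne prime with base g = 3 stands in for a cryptographically strong group, and the plain enumeration below is where the paper's confidence information would be used to order the search.

```python
import hashlib, secrets
from itertools import combinations

p = 2**61 - 1                    # toy Mersenne-prime modulus; illustrative only
g = 3
n, t = 8, 2                      # binary symbols per reading, tolerated bit flips

def gen(w):
    """Publish h_i = g^(c_i + w_i) for a random codeword c = (r, r, ..., r)."""
    r = secrets.randbelow(p - 1)
    h = [pow(g, (r + wi) % (p - 1), p) for wi in w]
    key = hashlib.sha256(str(pow(g, r, p)).encode()).hexdigest()
    return key, h

def rep(w2, h):
    """Peel off the fresh reading, then guess error patterns of weight <= t until
    all residual exponents agree, i.e. until they form a repetition codeword."""
    g_inv = pow(g, -1, p)
    u = [(hi * pow(g_inv, wi, p)) % p for hi, wi in zip(h, w2)]   # u_i = g^(r + e_i)
    for k in range(t + 1):
        for idx in combinations(range(n), k):
            for signs in range(2 ** k):
                v = u[:]
                for j, i in enumerate(idx):
                    v[i] = (v[i] * (g_inv if (signs >> j) & 1 else g)) % p
                if all(vi == v[0] for vi in v):
                    return hashlib.sha256(str(v[0]).encode()).hexdigest()
    return None

w = [secrets.randbelow(2) for _ in range(n)]
key, h = gen(w)
w2 = w[:]; w2[1] ^= 1                             # one flipped bit
assert rep(w2, h) == key
```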

    Efficient, Reusable Fuzzy Extractors from LWE

    A fuzzy extractor (FE), proposed for deriving cryptographic keys from biometric data, enables reproducible generation of high-quality randomness from noisy inputs having sufficient min-entropy. FEs rely in their operation on a public helper string that is guaranteed not to leak too much information about the original input. Unfortunately, this guarantee may not hold when multiple independent helper strings are generated from correlated inputs, as would occur if a user registers their biometric data with multiple servers; reusable FEs are needed in that case. Although the notion of reusable FEs was introduced in 2004, it has received relatively little attention since then. We first analyze an FE proposed by Fuller et al. (Asiacrypt 2013) based on the learning-with-errors (LWE) assumption, and show that it is not reusable. We then show how to adapt their construction to obtain a weakly reusable FE. We also show a generic technique for turning any weakly reusable FE into a strongly reusable one, in the random-oracle model. Finally, we give a direct construction of a strongly reusable FE based on the LWE assumption that does not rely on random oracles.
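    For context, here is a minimal Python sketch of the shape of the LWE-based construction under analysis: the helper string is an LWE instance whose error vector is the enrollment reading, and reproduction decodes by guessing coordinate subsets that the noise missed. All parameters are tiny toy values of mine and the subset search is exhaustive; the actual Fuller et al. decoder and the reusability analysis in this paper are considerably more involved.

```python
import random, hashlib
from itertools import combinations

q, n, m, t = 97, 3, 10, 2       # toy prime modulus, secret dim, samples, noise bound

def solve_mod_q(A, b):
    """Gauss-Jordan elimination mod q; returns x with A x = b, or None if singular."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, q)
        M[col] = [v * inv % q for v in M[col]]
        for r in range(n):
            f = M[r][col]
            if r != col and f:
                M[r] = [(vr - f * vc) % q for vr, vc in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

def gen(w):
    """Gen: helper = (A, A s + w mod q); key = H(s). w is the binary enrollment reading."""
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    s = [random.randrange(q) for _ in range(n)]
    b = [(sum(a * x for a, x in zip(row, s)) + wi) % q for row, wi in zip(A, w)]
    return hashlib.sha256(bytes(s)).hexdigest(), (A, b)

def rep(w2, helper):
    """Rep: b - w2 = A s + e with wt(e) <= t; decode s by trying coordinate subsets."""
    A, b = helper
    c = [(bi - wi) % q for bi, wi in zip(b, w2)]
    for idx in combinations(range(m), n):
        s = solve_mod_q([A[i] for i in idx], [c[i] for i in idx])
        if s is None:
            continue
        residuals = sum((sum(a * x for a, x in zip(row, s)) - ci) % q != 0
                        for row, ci in zip(A, c))
        if residuals <= t:                       # toy consistency check
            return hashlib.sha256(bytes(s)).hexdigest()
    return None

random.seed(1)
w = [random.randrange(2) for _ in range(m)]
key, helper = gen(w)
w2 = w[:]; w2[4] ^= 1                            # one noisy coordinate
print(rep(w2, helper) == key)                    # True (with overwhelming probability)
```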

    Generic Constructions of Robustly Reusable Fuzzy Extractor

    A robustly reusable fuzzy extractor (rrFE) considers reusability and robustness simultaneously. We present two approaches to the generic construction of rrFE. Both approaches make use of a secure sketch and universal hash functions. The first approach also employs a special pseudo-random function (PRF), namely a unique-input key-shift (ui-ks) secure PRF, and the second uses a key-shift secure auxiliary-input authenticated encryption (AIAE) scheme. The ui-ks security of the PRF (resp., the key-shift security of the AIAE), together with the homomorphic properties of the secure sketch and the universal hash function, guarantees the reusability and robustness of the rrFE. We also give an instantiation of each approach: the first yields the first rrFE from the LWE assumption, while the second yields the first rrFE from the DDH assumption over non-pairing groups.
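    The composition is easier to see in code. The Python sketch below is only a structural illustration of the rrFE shape (sketch, hash, tag): SHA-256 and HMAC stand in for the universal hash and the key-shift-secure PRF that the security proofs actually require, and a repetition-code code-offset sketch plays the role of the homomorphic secure sketch.

```python
import hashlib, hmac, secrets

R, K = 5, 16                                     # toy repetition-code parameters

def ss_sketch(w, r):
    """Code-offset secure sketch: w XOR C(r), with C the repetition code."""
    return [wi ^ b for wi, b in zip(w, [x for x in r for _ in range(R)])]

def ss_rec(w2, ss):
    """Recover the enrolled w from a close reading w2 by majority decoding."""
    r = [int(sum(ss[i*R + j] ^ w2[i*R + j] for j in range(R)) > R // 2)
         for i in range(K)]
    return [si ^ b for si, b in zip(ss, [x for x in r for _ in range(R)])]

def gen(w):
    ss = ss_sketch(w, [secrets.randbelow(2) for _ in range(K)])
    seed = secrets.token_bytes(16)
    d = hashlib.sha256(seed + bytes(w)).digest()          # stand-in universal hash
    key, k_tag = d[:16], d[16:]
    tag = hmac.new(k_tag, bytes(ss) + seed, "sha256").digest()  # robustness tag
    return key, (ss, seed, tag)

def rep(w2, helper):
    ss, seed, tag = helper
    w = ss_rec(w2, ss)
    d = hashlib.sha256(seed + bytes(w)).digest()
    key, k_tag = d[:16], d[16:]
    ok = hmac.compare_digest(tag, hmac.new(k_tag, bytes(ss) + seed, "sha256").digest())
    return key if ok else None                   # robustness: reject modified helpers

w = [secrets.randbelow(2) for _ in range(K * R)]
key, helper = gen(w)
w2 = w[:]; w2[9] ^= 1
assert rep(w2, helper) == key
```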

    A Reusable Fuzzy Extractor with Practical Storage Size

    After the concept of a fuzzy extractor (FE) was first introduced by Dodis et al., it has been regarded as one of the candidate solutions for key management utilizing biometric data. With a noisy input such as biometrics, an FE generates a public helper value and a random secret key which is reproducible given another input similar to the original input. However, helper values may cause some leakage of information when generated repeatedly from correlated inputs, so reusability should be considered an important property. Recently, Canetti et al. (Eurocrypt 2016) proposed an FE satisfying both reusability and robustness with inputs from low-entropy distributions. Their strategy, the so-called Sample-then-Lock method, is to sample many partial strings from a noisy input string and to lock one secret key with each partial string independently. In this paper, modifying this reusable FE, we propose a new FE with size-reduced helper data by employing a threshold scheme. Our new FE also satisfies both reusability and robustness, and requires much less storage than the original. To show the advantages of this scheme, we analyze and compare our scheme with the original under concrete parameters for the IrisCode biometric. As a result, on 1024-bit inputs, with false rejection rate 0.5 and error tolerance 0.25, while the original requires about 1 TB for each helper value, our scheme requires only 300 MB, plus an additional 1.35 GB of common data that can be used for all helper values.
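    A minimal Python sketch of the underlying Sample-then-Lock idea, using a hash-based digital locker in the random-oracle style: each locker hides the same key under a random substring of the reading, and reproduction succeeds as soon as one sampled substring avoided every noisy bit. Sizes and the zero-padding unlock check are illustrative choices of mine; the paper's contribution, compressing the many lockers' helper data with a threshold scheme, is not reproduced here.

```python
import hashlib, random, secrets

N, LOCKERS, SAMPLE = 1024, 64, 18    # reading length, lockers, positions per sample
PAD = b"\x00" * 16                   # all-zero padding marks a successful unlock

def lock(pos, bits, key):
    """Digital locker (hash-based, random-oracle style): hide key || PAD under bits."""
    nonce = secrets.token_bytes(16)
    mask = hashlib.sha256(nonce + bytes(bits)).digest()
    return pos, nonce, bytes(a ^ b for a, b in zip(key + PAD, mask))

def unlock(w, locker):
    pos, nonce, ct = locker
    mask = hashlib.sha256(nonce + bytes(w[i] for i in pos)).digest()
    pt = bytes(a ^ b for a, b in zip(ct, mask))
    return pt[:16] if pt[16:] == PAD else None

def gen(w):
    key = secrets.token_bytes(16)
    lockers = []
    for _ in range(LOCKERS):
        pos = random.sample(range(N), SAMPLE)
        lockers.append(lock(pos, [w[i] for i in pos], key))
    return key, lockers

def rep(w2, lockers):
    for locker in lockers:
        key = unlock(w2, locker)
        if key is not None:
            return key
    return None

w = [random.randrange(2) for _ in range(N)]
key, lockers = gen(w)
w2 = list(w)
for i in random.sample(range(N), 16):            # a few noisy bits
    w2[i] ^= 1
print(rep(w2, lockers) == key)                   # True unless every sample hit noise
```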

    Securing Systems with Scarce Entropy: LWE-Based Lossless Computational Fuzzy Extractor for the IoT

    With the advent of the Internet of Things, lightweight devices necessitate secure and cost-efficient key storage. Since traditional secure key storage is expensive, novel solutions have been developed based on the idea of deriving the key from noisy entropy sources. Such sources, when combined with fuzzy extractors, allow cryptographically strong key derivation. Information-theoretic fuzzy extractors require large amounts of input entropy to account for entropy loss in the key-extraction process. It has been shown by Fuller et al. (ASIACRYPT '13) that the entropy loss can be reduced if the requirement is relaxed to computational security based on the hardness of the Learning with Errors problem. Using this computational fuzzy extractor, we show how to construct a device-server authentication system providing outsider chosen perturbation security and pre-application robustness. We present the first implementation of a lossless computational fuzzy extractor, where the entropy of the source equals the entropy of the key, on a constrained device. The implementation needs only 1.45 KB of SRAM and 9.8 KB of Flash memory on an 8-bit microcontroller. We also show how the device-server authentication system can be implemented efficiently in this setting, and we compare our implementation with existing work in terms of security, while achieving no entropy loss.

    Pseudoentropic Isometries: A New Framework for Fuzzy Extractor Reusability

    Fuzzy extractors (Dodis et al., Eurocrypt 2004) turn a noisy secret into a stable, uniformly distributed key. Reusable fuzzy extractors remain secure when multiple keys are produced from a single noisy secret (Boyen, CCS 2004). Boyen proved that any information-theoretically secure reusable fuzzy extractor is subject to strong limitations. Simoens et al. (IEEE S&P 2009) then showed that deployed constructions suffer severe security breaks when reused. Canetti et al. (Eurocrypt 2016) proposed using computational security to sidestep this problem. They constructed a computationally secure reusable fuzzy extractor for the Hamming metric that corrects a sublinear fraction of errors. We introduce a generic approach to constructing reusable fuzzy extractors. We define a new primitive called a reusable pseudoentropic isometry that projects an input metric space to an output metric space. This projection preserves distance and entropy even if the same input is mapped to multiple output metric spaces. A reusable pseudoentropic isometry yields a reusable fuzzy extractor by 1) randomizing the noisy secret using the isometry and 2) applying a traditional fuzzy extractor to derive a secret key. We propose reusable pseudoentropic isometries for the set-difference and Hamming metrics. The set-difference construction is built from composable digital lockers (Canetti and Dakdouk, Eurocrypt 2008), yielding the first reusable fuzzy extractor that corrects a linear fraction of errors. For the Hamming metric, we show that the second construction of Canetti et al. (Eurocrypt 2016) can be seen as an instantiation of our framework. In both cases, the pseudoentropic isometry's reusability requires the noisy secret distributions to have entropy in each symbol of the alphabet. Lastly, we implement our set-difference solution and describe two use cases.
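    The central object is easiest to see for the Hamming metric, where mapping x to pi(x XOR r), a random XOR pad followed by a coordinate permutation, is an isometry. The toy Python check below (my own illustration, not a construction from the paper) verifies only the distance-preservation property; a reusable pseudoentropic isometry must additionally keep the image pseudoentropic across many independent applications, which the paper achieves with composable digital lockers.

```python
import random

def sample_isometry(n, rng):
    """An isometry of the n-bit Hamming cube: x -> pi(x XOR r)."""
    pi = list(range(n))
    rng.shuffle(pi)
    r = [rng.randrange(2) for _ in range(n)]
    return pi, r

def apply_isometry(x, iso):
    pi, r = iso
    masked = [xi ^ ri for xi, ri in zip(x, r)]
    return [masked[pi[i]] for i in range(len(x))]

def hamming(a, b):
    return sum(ai != bi for ai, bi in zip(a, b))

rng = random.Random(7)
n = 32
iso = sample_isometry(n, rng)
x = [rng.randrange(2) for _ in range(n)]
y = x[:]; y[3] ^= 1; y[20] ^= 1                  # a re-reading at Hamming distance 2
assert hamming(apply_isometry(x, iso), apply_isometry(y, iso)) == hamming(x, y)
```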

    Facial Template Protection via Lattice-based Fuzzy Extractors

    With the growing adoption of facial recognition worldwide as a popular authentication method, there is increasing concern about the invasion of personal privacy due to the lifetime irrevocability of facial features. In principle, fuzzy extractors enable biometric-based authentication while preserving the privacy of biometric templates. Nevertheless, to the best of our knowledge, most existing fuzzy extractors handle binary vectors with Hamming distance, and no explicit construction is known for facial recognition applications where the ℓ2-distance of real vectors is considered. In this paper, we utilize the dense-packing property of certain lattices (e.g., E8 and Leech) to design a family of lattice-based fuzzy extractors that fits well with existing neural-network-based biometric identification schemes. We instantiate and implement the generic construction and conduct experiments on publicly available datasets. Our result confirms the feasibility of facial template protection via fuzzy extractors.
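    A minimal Python sketch of the lattice quantization idea for real-valued templates: the helper value is the template reduced modulo the lattice, and a close re-reading rounds back to the same lattice point, from which the key is derived. As a simplification of mine, the cubic lattice ALPHA * Z^n stands in for E8 or Leech; it only guarantees recovery when each coordinate moves by less than ALPHA/2, whereas the densely packed lattices used in the paper tolerate larger ℓ2 noise for the same leakage.

```python
import hashlib

ALPHA = 1.0   # quantization cell width of the stand-in lattice ALPHA * Z^n

def quantize(x):
    """Nearest point of ALPHA * Z^n, in integer coordinates."""
    return [round(xi / ALPHA) for xi in x]

def gen(w):
    """Helper s = w - Q(w), i.e. w modulo the lattice; key = H(Q(w))."""
    z = quantize(w)
    s = [wi - ALPHA * zi for wi, zi in zip(w, z)]
    key = hashlib.sha256(str(z).encode()).hexdigest()
    return key, s

def rep(w2, s):
    """Subtracting the helper re-centers the noise around Q(w), so rounding recovers it."""
    z = quantize([wi - si for wi, si in zip(w2, s)])
    return hashlib.sha256(str(z).encode()).hexdigest()

w  = [0.71, -1.94, 3.08, 0.02]                   # a toy real-valued feature vector
w2 = [wi + d for wi, d in zip(w, [0.11, -0.32, 0.24, -0.08])]   # a nearby re-reading
key, s = gen(w)
assert rep(w2, s) == key
```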