185 research outputs found

    μž‘μŒν‚€λ₯Ό κ°€μ§€λŠ” μ‹ μ›κΈ°λ°˜ λ™ν˜•μ•”ν˜Έμ— κ΄€ν•œ 연ꡬ

    Get PDF
    Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School: Department of Mathematical Sciences, College of Natural Sciences, February 2020. Advisor: Jung Hee Cheon. Secure delegation of data analysis to the cloud is one of the most effective applications of homomorphic encryption (HE). In a realistic model with many data providers and analysis clients, however, problems remain to be solved beyond basic encryption, decryption, and homomorphic evaluation. This dissertation identifies the requirements arising in such a model and discusses solutions. First, noting that known homomorphic data-analysis solutions cannot take hierarchies or levels among data into account, we combine identity-based encryption with HE to obtain a model that assigns access rights to data and permits computation only among data covered by those rights. To make this model run efficiently, we study HE-friendly identity-based encryption: we extend the known NTRU-based scheme by defining the module-NTRU problem and propose an identity-based encryption scheme built on it. Second, we observe that the secret key is still involved in HE decryption, so the key-management problem remains. We therefore develop a decryption procedure that can use biometric information and apply it to HE decryption, obtaining a cryptosystem in which the entire pipeline of encryption, decryption, and homomorphic evaluation runs without the key being stored anywhere. Finally, we consider the concrete security of HE. We closely analyze the practical hardness of the Learning With Errors (LWE) problem on which HE is based, and develop attack algorithms that are on average more than 1,000 times faster than previous ones.
These results show that currently deployed HE parameters are not secure, and we discuss how parameters should be set in light of the new attack algorithms.

Secure data analysis delegation on the cloud is one of the most powerful applications that homomorphic encryption (HE) enables. As HE reaches the practical regime, this model is increasingly considered a serious and realistic paradigm, and the growing attention calls for more versatile and secure models that can handle complicated real-world problems. First, since real-world scenarios involve many data owners and clients, authorized control of data access is still required even with HE. Second, we note that although homomorphic evaluation requires no secret key, decryption does; the secret-key management concern therefore remains even for HE. Last, from a more fundamental viewpoint, we thoroughly analyze the concrete hardness of the problem underlying HE, the so-called Learning With Errors (LWE) problem; for the sake of efficiency, HE in fact relies on a weaker variant of LWE whose security is not fully understood. For efficiency in the data-encryption phase, we improve the previously suggested NTRU-lattice identity-based encryption by generalizing the NTRU concept to module-NTRU lattices. Moreover, we design a novel method that decrypts the resulting ciphertext with a noisy key. This enables the decryptor to use its own noisy source, in particular a biometric one, and hence fundamentally solves the key-management problem. Finally, by improving on existing LWE-solving algorithms, we propose new algorithms with much faster performance. Consequently, we argue that HE parameter choices should be updated in light of our attacks in order to maintain the currently claimed security level.

Contents:
1 Introduction
  1.1 Access Control based on Identity
  1.2 Biometric Key Management
  1.3 Concrete Security of HE
  1.4 List of Papers
2 Background
  2.1 Notation
  2.2 Lattices
    2.2.1 Lattice Reduction Algorithm
    2.2.2 BKZ cost model
    2.2.3 Geometric Series Assumption (GSA)
    2.2.4 The Nearest Plane Algorithm
  2.3 Gaussian Measures
    2.3.1 Kullback-Leibler Divergence
  2.4 Lattice-based Hard Problems
    2.4.1 The Learning With Errors Problem
    2.4.2 NTRU Problem
  2.5 One-way and Pseudo-random Functions
3 ID-based Data Access Control
  3.1 Module-NTRU Lattices
    3.1.1 Construction of MNTRU lattice and trapdoor
    3.1.2 Minimize the Gram-Schmidt norm
  3.2 IBE-Scheme from Module-NTRU
    3.2.1 Scheme Construction
    3.2.2 Security Analysis by Attack Algorithms
    3.2.3 Parameter Selections
  3.3 Application to Signature
4 Noisy Key Cryptosystem
  4.1 Reusable Fuzzy Extractors
  4.2 Local Functions
    4.2.1 Hardness over Non-uniform Sources
    4.2.2 Flipping local functions
    4.2.3 Noise stability of predicate functions: Xor-Maj
  4.3 From Pseudorandom Local Functions
    4.3.1 Basic Construction: One-bit Fuzzy Extractor
    4.3.2 Expansion to multi-bit Fuzzy Extractor
    4.3.3 Indistinguishable Reusability
    4.3.4 One-way Reusability
  4.4 From Local One-way Functions
5 Concrete Security of Homomorphic Encryption
  5.1 Albrecht's Improved Dual Attack
    5.1.1 Simple Dual Lattice Attack
    5.1.2 Improved Dual Attack
  5.2 Meet-in-the-Middle Attack on LWE
    5.2.1 Noisy Collision Search
    5.2.2 Noisy Meet-in-the-middle Attack on LWE
  5.3 The Hybrid-Dual Attack
    5.3.1 Dimension-error Trade-off of LWE
    5.3.2 Our Hybrid Attack
  5.4 The Hybrid-Primal Attack
    5.4.1 The Primal Attack on LWE
    5.4.2 The Hybrid Attack for SVP
    5.4.3 The Hybrid-Primal attack for LWE
    5.4.4 Complexity Analysis
  5.5 Bit-security estimation
    5.5.1 Estimations
    5.5.2 Application to PKE
6 Conclusion
Abstract (in Korean)
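The Learning With Errors problem whose concrete hardness this thesis analyzes can be stated very concretely. A minimal Python sketch of LWE sample generation (toy parameters; function and variable names are illustrative, not taken from the thesis):

```python
import random

def lwe_samples(n, q, m, secret, err_bound=2, rng=random):
    """Generate m toy LWE samples (a, b) with b = <a, secret> + e (mod q).

    An attacker sees only the samples; recovering `secret` (search-LWE)
    is believed hard for suitable dimension n, modulus q, and error
    distribution. The attacks surveyed above estimate how hard, concretely.
    """
    samples = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]
        e = rng.randint(-err_bound, err_bound)  # small centered error
        b = (sum(x * s for x, s in zip(a, secret)) + e) % q
        samples.append((a, b))
    return samples
```

HE schemes use a small-error variant of this distribution for efficiency, which is precisely why a dedicated concrete-security analysis is needed.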

    Commitment and Oblivious Transfer in the Bounded Storage Model with Errors

    Get PDF
    The bounded storage model restricts the memory of an adversary in a cryptographic protocol rather than its computational power, making information-theoretically secure protocols feasible. We present the first protocols for commitment and oblivious transfer in the bounded storage model with errors, i.e., the model where the public random sources available to the two parties are not exactly the same but are only required to have a small Hamming distance between them. Commitment and oblivious transfer protocols were previously known only for the error-free variant of the bounded storage model, which is harder to realize.
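The "with errors" relaxation can be made concrete with a short sketch (illustrative only, not the paper's protocol): each party holds its own copy of the public random source, and the copies need only agree up to a small Hamming distance.

```python
import random

def hamming(x, y):
    """Hamming distance between two equal-length bit strings."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def noisy_copy(source, flip_prob, rng=random):
    """A second party's copy of the public source with independent bit flips."""
    return [b ^ (rng.random() < flip_prob) for b in source]
```

In the error-free model `hamming` of the two copies is zero by assumption; here the protocols must tolerate a small positive distance.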

    Trapdoor Computational Fuzzy Extractors and Stateless Cryptographically-Secure Physical Unclonable Functions

    Get PDF
    We present a fuzzy extractor whose security can be reduced to the hardness of Learning Parity with Noise (LPN) and which can efficiently correct a constant fraction of errors in a biometric source with a "noise-avoiding trapdoor." Using this computational fuzzy extractor, we present a stateless construction of a cryptographically secure Physical Unclonable Function (PUF). Our construction requires no non-volatile (permanent) storage, secure or otherwise, and its computational security can be reduced to the hardness of an LPN variant in the random oracle model. The construction is "stateless" because no information is stored between subsequent queries, which mitigates attacks against the PUF via tampering. Moreover, our stateless construction corresponds to a PUF whose outputs are free of noise because of its internal error-correcting capability, which enables a host of applications beyond authentication. We describe the construction, provide a proof of computational security and an analysis of the security parameter for system parameter choices, and present experimental evidence that the construction is practical and reliable under a wide environmental range.
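LPN, the assumption this construction reduces to, is the binary analogue of LWE: samples live over bits and the error is Bernoulli noise rather than a small integer. A toy sample generator (parameter names are illustrative, not the paper's):

```python
import random

def lpn_samples(n, m, secret, tau=0.1, rng=random):
    """Generate m toy LPN samples (a, b) with b = <a, secret> + e (mod 2),
    where each error bit e is 1 with probability tau (Bernoulli noise)."""
    samples = []
    for _ in range(m):
        a = [rng.randrange(2) for _ in range(n)]
        e = int(rng.random() < tau)  # Bernoulli(tau) error bit
        b = (sum(x & s for x, s in zip(a, secret)) + e) % 2
        samples.append((a, b))
    return samples
```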

    Securing Systems with Scarce Entropy: LWE-Based Lossless Computational Fuzzy Extractor for the IoT

    Get PDF
    With the advent of the Internet of Things, lightweight devices necessitate secure and cost-efficient key storage. Since traditional secure key storage is expensive, novel solutions have been developed based on the idea of deriving the key from noisy entropy sources. Such sources, when combined with fuzzy extractors, allow cryptographically strong key derivation. Information-theoretic fuzzy extractors require large amounts of input entropy to account for entropy loss in the key-extraction process. Fuller et al. (ASIACRYPT '13) showed that the entropy loss can be reduced if the requirement is relaxed to computational security based on the hardness of the Learning with Errors problem. Using this computational fuzzy extractor, we show how to construct a device-server authentication system providing outsider chosen-perturbation security and pre-application robustness. We present the first implementation of a lossless computational fuzzy extractor, where the entropy of the source equals the entropy of the key, on a constrained device. The implementation needs only 1.45 KB of SRAM and 9.8 KB of Flash memory on an 8-bit microcontroller. Furthermore, we show how a device-server authentication system can be constructed and efficiently implemented in our system. We compare our implementation to existing work in terms of security, while achieving no entropy loss.

    Strong key derivation from noisy sources

    Get PDF
    A shared cryptographic key enables strong authentication. Candidate sources for creating such a shared key include biometrics and physically unclonable functions. However, these sources come with a substantial problem: noise in repeated readings. A fuzzy extractor produces a stable key from a noisy source. It consists of two stages. At enrollment time, the generate algorithm produces a key from an initial reading of the source. At authentication time, the reproduce algorithm takes a repeated but noisy reading of the source, yielding the same key when the two readings are close. For many sources of practical importance, traditional fuzzy extractors provide no meaningful security guarantee. This dissertation improves key derivation from noisy sources. These improvements stem from three observations about traditional fuzzy extractors. First, the only property of a source that standard fuzzy extractors use is the entropy in the original reading. We observe that additional structural information about the source can facilitate key derivation. Second, most fuzzy extractors work by first recovering the initial reading from the noisy reading (known as a secure sketch). This approach imposes harsh limitations on the length of the derived key. We observe that it is possible to produce a consistent key without recovering the original reading of the source. Third, traditional fuzzy extractors provide information-theoretic security. However, security against computationally bounded adversaries is sufficient. We observe fuzzy extractors providing computational security can overcome limitations of traditional approaches. The above observations are supported by negative results and constructions. As an example, we combine all three observations to construct a fuzzy extractor achieving properties that have eluded prior approaches. The construction remains secure even when the initial enrollment phase is repeated multiple times with noisy readings. 
Furthermore, for many practical sources, reliability demands that the tolerated noise be larger than the entropy of the original reading. The construction provides security for sources of this type by utilizing additional source structure, producing a consistent key without recovering the original reading, and providing computational security.
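The two-stage generate/reproduce interface described above can be sketched with the classical code-offset construction. This is a minimal illustration, not the dissertation's construction: the repetition code and parameters are toy choices, and a real scheme would additionally hash the recovered string into a uniform key.

```python
import secrets

REP = 7  # toy repetition-code length per key bit (illustrative parameter)

def _encode(key_bits):
    """Repetition-encode each key bit REP times."""
    return [b for b in key_bits for _ in range(REP)]

def _decode(codeword):
    """Majority-decode each block of REP bits back to one key bit."""
    return [int(sum(codeword[i * REP:(i + 1) * REP]) * 2 > REP)
            for i in range(len(codeword) // REP)]

def generate(reading):
    """Enrollment: derive a random key and a public helper string
    (helper = codeword XOR reading, a secure sketch of the reading)."""
    assert len(reading) % REP == 0
    key = [secrets.randbelow(2) for _ in range(len(reading) // REP)]
    helper = [c ^ w for c, w in zip(_encode(key), reading)]
    return key, helper

def reproduce(noisy_reading, helper):
    """Authentication: recover the key from a close reading and the helper.

    helper XOR noisy_reading = codeword XOR (reading XOR noisy_reading),
    so majority decoding succeeds while fewer than REP/2 bits per block flip.
    """
    return _decode([h ^ w for h, w in zip(helper, noisy_reading)])
```

This toy version also shows the limitation the dissertation attacks: the helper leaks information about the reading, which is what caps the key length of sketch-based extractors.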

    Secure Boot and Remote Attestation in the Sanctum Processor

    Get PDF
    During the secure boot process for a trusted execution environment, the processor must provide a chain of certificates to the remote client demonstrating that their secure container was established as specified. This certificate chain is rooted at the hardware manufacturer, who is responsible for constructing chips according to the correct specification and provisioning them with key material. We consider a semi-honest manufacturer who is assumed to construct chips correctly but may attempt to obtain knowledge of client private keys during the process. Using the RISC-V Rocket chip architecture as a base, we design, document, and implement an attested execution processor that requires neither secure non-volatile memory nor a private key explicitly assigned by the manufacturer. Instead, the processor derives its cryptographic identity from manufacturing variation measured by a Physical Unclonable Function (PUF). Software executed by a bootloader built into the processor transforms the PUF output into an elliptic curve key pair. The (re)generated private key is used to sign trusted portions of the boot image and is immediately destroyed. The platform can therefore provide attestations about its state to remote clients. Reliability and security of PUF keys are ensured through the use of a trapdoor computational fuzzy extractor. We present detailed evaluation results for secure boot and attestation by a client of a Rocket chip implementation on a Xilinx Zynq 7000 FPGA.