
    Strong key derivation from noisy sources

    A shared cryptographic key enables strong authentication. Candidate sources for creating such a shared key include biometrics and physically unclonable functions. However, these sources come with a substantial problem: noise in repeated readings. A fuzzy extractor produces a stable key from a noisy source. It consists of two stages. At enrollment time, the generate algorithm produces a key from an initial reading of the source. At authentication time, the reproduce algorithm takes a repeated but noisy reading of the source, yielding the same key when the two readings are close. For many sources of practical importance, traditional fuzzy extractors provide no meaningful security guarantee.

    This dissertation improves key derivation from noisy sources. These improvements stem from three observations about traditional fuzzy extractors. First, the only property of a source that standard fuzzy extractors use is the entropy in the original reading. We observe that additional structural information about the source can facilitate key derivation. Second, most fuzzy extractors work by first recovering the initial reading from the noisy reading (known as a secure sketch). This approach imposes harsh limitations on the length of the derived key. We observe that it is possible to produce a consistent key without recovering the original reading of the source. Third, traditional fuzzy extractors provide information-theoretic security. However, security against computationally bounded adversaries is sufficient. We observe that fuzzy extractors providing computational security can overcome limitations of traditional approaches.

    The above observations are supported by negative results and constructions. As an example, we combine all three observations to construct a fuzzy extractor achieving properties that have eluded prior approaches. The construction remains secure even when the initial enrollment phase is repeated multiple times with noisy readings. Furthermore, for many practical sources, reliability demands that the tolerated noise is larger than the entropy of the original reading. The construction provides security for sources of this type by utilizing additional source structure, producing a consistent key without recovering the original reading, and providing computational security.
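    To fix notation, here is a minimal runnable sketch (my illustration, not the dissertation's construction) of the two-stage interface: a syndrome-based secure sketch over the [7,4] Hamming code, with the key hashed from the recovered reading. Note that reproduce first recovers the original reading, which is exactly the traditional pattern whose limitations the second observation targets.

```python
# Toy two-stage fuzzy extractor: generate at enrollment, reproduce at
# authentication. Corrects one flipped bit in a 7-bit reading.
import hashlib

# Parity-check matrix of the [7,4] Hamming code; column j is the binary
# expansion of j+1, so a nonzero syndrome names the flipped position.
H = [[(j + 1) >> i & 1 for j in range(7)] for i in range(3)]

def _syndrome(bits):
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def generate(w):
    """Enrollment: w is a 7-bit reading; returns (key, public sketch)."""
    sketch = _syndrome(w)
    key = hashlib.sha256(bytes(w)).digest()
    return key, sketch

def reproduce(w_noisy, sketch):
    """Authentication: corrects one bit error, then re-derives the key."""
    diff = [a ^ b for a, b in zip(_syndrome(w_noisy), sketch)]
    pos = diff[0] + 2 * diff[1] + 4 * diff[2]   # 0 means no error
    w = list(w_noisy)
    if pos:
        w[pos - 1] ^= 1                          # undo the flipped bit
    return hashlib.sha256(bytes(w)).digest()

key, sketch = generate([1, 0, 1, 1, 0, 0, 1])
assert reproduce([1, 0, 1, 0, 0, 0, 1], sketch) == key
```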

    Code Offset in the Exponent

    Fuzzy extractors derive stable keys from noisy sources. They are a fundamental tool for key derivation from biometric sources. This work introduces a new construction, code offset in the exponent. This construction is the first reusable fuzzy extractor that simultaneously supports structured, low-entropy distributions with correlated symbols and confidence information. These properties are specifically motivated by the most pertinent applications, key derivation from biometrics and physical unclonable functions, which typically demonstrate low entropy with additional statistical correlations and benefit from extractors that can leverage confidence information for efficiency. Code offset in the exponent is a group encoding of the code offset construction (Juels and Wattenberg, CCS 1999): a random codeword of a linear error-correcting code is used as a one-time pad for a sampled value from the noisy source. Rather than encoding this directly, code offset in the exponent encodes by exponentiation of a generator in a cryptographically strong group. We introduce and characterize a condition on noisy sources that directly translates to security of our construction in the generic group model. Our condition requires the inner product between the source distribution and all vectors in the null space of the code to be unpredictable.
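    Below is a toy of the construction's shape (assumptions mine: a tiny order-7 subgroup of Z_29* stands in for a cryptographically strong group, and brute-force search over the message space stands in for the paper's decoding in the exponent; nothing here is secure).

```python
# Toy "code offset in the exponent": the code-offset pad w + c is never
# published directly; only g^(w_i + c_i) per position is.
import hashlib
import secrets
from itertools import product

P, G, Q = 29, 16, 7          # toy group: G = 16 has prime order Q = 7 mod 29
ALPHAS = [1, 2, 3, 4, 5]     # evaluation points: n = 5, k = 2 RS-style code

def encode(msg):             # codeword = evaluations of a degree-<2 poly
    return [(msg[0] + msg[1] * a) % Q for a in ALPHAS]

def gen(w):                  # w: reading, a length-5 vector over F_7
    msg = (secrets.randbelow(Q), secrets.randbelow(Q))
    helper = [pow(G, (wi + ci) % Q, P) for wi, ci in zip(w, encode(msg))]
    return hashlib.sha256(bytes(msg)).digest(), helper

def rep(w2, helper):
    # unmask: wherever w2 agrees with w, this reveals g^{c_i}
    unmasked = [h * pow(G, (Q - wi) % Q, P) % P for h, wi in zip(helper, w2)]
    # decode in the exponent: brute-force the tiny message space and keep
    # the codeword whose exponent encoding agrees on the most positions
    best = max(product(range(Q), repeat=2),
               key=lambda m: sum(pow(G, ci, P) == u
                                 for ci, u in zip(encode(m), unmasked)))
    return hashlib.sha256(bytes(best)).digest()

w = [1, 4, 2, 0, 6]          # enrollment reading over F_7
key, helper = gen(w)
assert rep([1, 4, 2, 0, 5], helper) == key   # tolerates one noisy symbol
```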

    Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data

    We provide formal definitions and efficient secure techniques for turning noisy information into keys usable for any cryptographic application and, in particular, for reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a "fuzzy extractor" reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A "secure sketch" produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of "closeness" of input data, such as Hamming distance, edit distance, and set difference. (Preliminary version in Eurocrypt 2004, Springer LNCS 3027, pp. 523-540.)
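    As a concrete illustration of how the two primitives compose ("sketch-then-extract"), here is a toy with assumed parameters: the code-offset secure sketch for Hamming distance over a 3x repetition code, followed by a seeded universal-hash extractor. It is a sketch of the pattern, not the paper's near-optimal constructions.

```python
# Toy composition: fuzzy extractor = secure sketch + randomness extractor.
import secrets

R, P = 3, (1 << 61) - 1                       # repetition factor, prime

def ss_sketch(w):                             # w: bit list, len % R == 0
    c = [b for b in (secrets.randbelow(2) for _ in range(len(w) // R))
         for _ in range(R)]                   # random repetition codeword
    return [wi ^ ci for wi, ci in zip(w, c)]  # code offset: w XOR c

def ss_recover(w2, s):
    shifted = [a ^ b for a, b in zip(w2, s)]  # = codeword XOR error
    c = [int(sum(shifted[i:i + R]) > R // 2)  # majority-vote decode,
         for i in range(0, len(shifted), R) for _ in range(R)]
    return [ci ^ si for ci, si in zip(c, s)]  # w = c XOR s

def ext(w, seed):                             # Carter-Wegman universal hash
    x = int("".join(map(str, w)), 2)
    a, b = seed
    return ((a * x + b) % P) & 0xFFFFFFFF     # 32-bit near-uniform key

def fe_gen(w):                                # (key, public helper)
    seed = (secrets.randbelow(P - 1) + 1, secrets.randbelow(P))
    return ext(w, seed), (ss_sketch(w), seed)

def fe_rep(w2, helper):
    s, seed = helper
    return ext(ss_recover(w2, s), seed)

key, helper = fe_gen([1, 0, 1] * 4)
assert fe_rep([1, 0, 0] + [1, 0, 1] * 3, helper) == key  # 1 error per block
```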

    Forward Secure Fuzzy Extractors


    Pseudoentropic Isometries: A New Framework for Fuzzy Extractor Reusability

    Fuzzy extractors (Dodis et al., Eurocrypt 2004) turn a noisy secret into a stable, uniformly distributed key. Reusable fuzzy extractors remain secure when multiple keys are produced from a single noisy secret (Boyen, CCS 2004). Boyen proved that any information-theoretically secure reusable fuzzy extractor is subject to strong limitations. Simoens et al. (IEEE S&P 2009) then showed that deployed constructions suffer severe security breaks when reused. Canetti et al. (Eurocrypt 2016) proposed using computational security to sidestep this problem. They constructed a computationally secure reusable fuzzy extractor for the Hamming metric that corrects a sublinear fraction of errors. We introduce a generic approach to constructing reusable fuzzy extractors. We define a new primitive called a reusable pseudoentropic isometry that projects an input metric space to an output metric space. This projection preserves distance and entropy even if the same input is mapped to multiple output metric spaces. A reusable pseudoentropic isometry yields a reusable fuzzy extractor by (1) randomizing the noisy secret using the isometry and (2) applying a traditional fuzzy extractor to derive a secret key. We propose reusable pseudoentropic isometries for the set difference and Hamming metrics. The set difference construction is built from composable digital lockers (Canetti and Dakdouk, Eurocrypt 2008), yielding the first reusable fuzzy extractor that corrects a linear fraction of errors. For the Hamming metric, we show that the second construction of Canetti et al. (Eurocrypt 2016) can be seen as an instantiation of our framework. In both cases, the pseudoentropic isometry's reusability requires the noisy secret's distribution to have entropy in each symbol of the alphabet. Lastly, we implement our set difference solution and describe two use cases.
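    The two-step recipe can be illustrated with a deliberately naive stand-in (my toy, not the paper's construction): for the Hamming metric, x -> permute(x XOR pad) is an isometry, and a fresh (permutation, pad) is sampled per enrollment. The real constructions derive their reusability from composable digital lockers; this toy only shows the interface and the distance preservation.

```python
# Toy pseudoentropic-isometry interface: randomize, then extract.
import hashlib
import secrets

def iso_sample(n):                    # fresh isometry per enrollment
    perm = list(range(n))
    for i in range(n - 1, 0, -1):     # Fisher-Yates shuffle
        j = secrets.randbelow(i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    pad = [secrets.randbelow(2) for _ in range(n)]
    return perm, pad

def iso_apply(x, iso):                # Hamming-distance preserving map
    perm, pad = iso
    return [x[p] ^ pad[p] for p in perm]

def fe_gen(y):                        # stand-in "traditional" FE on the
    return hashlib.sha256(bytes(y)).digest()   # image (no error correction
                                               # shown, to keep the toy short)

w = [1, 0, 1, 1, 0, 0, 1, 0]
iso1, iso2 = iso_sample(8), iso_sample(8)      # two independent enrollments
key1, key2 = fe_gen(iso_apply(w, iso1)), fe_gen(iso_apply(w, iso2))
# distance preservation: flipping one bit of w moves each image by one
w2 = list(w)
w2[3] ^= 1
assert sum(a != b for a, b in
           zip(iso_apply(w, iso1), iso_apply(w2, iso1))) == 1
```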

    Robust and Reusable Fuzzy Extractors and their Application to Authentication from Iris Data

    Fuzzy extractors (FE) are cryptographic primitives that establish a shared secret between two parties who have similar samples of a random source and can communicate over a public channel. An example of this is that Alice has a stored biometric at a server and wants to have authenticated communication using a new reading of her biometric on her device. Reusability and robustness of an FE, respectively, guarantee that security holds when the FE is used with multiple samples and when the communication channel is tamperable. Fuzzy extractors have been studied in the information-theoretic and computational settings. The contributions of this paper are twofold. First, we define a strongly robust and reusable FE that combines the strongest security requirements of FEs, and give three constructions. Construction 1 has computational security, and Constructions 2 and 3 provide information-theoretic (IT) security in our proposed model. Construction 1 provides a solution to the open question of Canetti et al. (Eurocrypt 2016) by achieving robustness and reusability with (post-quantum) security in the standard model for their construction. Constructions 2 and 3 offer a new approach to the construction of IT-secure FEs. Construction 3 is the first robust and reusable FE with IT security that does not assume a random oracle. Our robust FEs use a new IT-secure MAC with security against key-shift attacks, which is of independent interest. Our constructions are for structured sources, which, for Construction 1, match Canetti et al.'s source. We then use Construction 1 for biometric authentication using iris data. We use a widely used iris dataset to find the system parameters of the construction for that dataset, and implement it. We compare our implementation with an implementation of Canetti et al.'s reusable FE on the same dataset, showing the cost of post-quantum security without a random oracle, and of robustness in the standard model.
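    Robustness is commonly obtained by authenticating the public helper with key material tied to the reading, so a tampered helper is rejected. A schematic sketch of that pattern follows (HMAC stands in here for the paper's information-theoretically secure, key-shift resistant MAC; names and layout are mine).

```python
# Schematic "robust wrapper" around a fuzzy extractor's public helper.
import hashlib
import hmac

def robust_wrap(helper_bytes, w_bytes):
    # MAC key is derived from the enrollment reading itself
    mac_key = hashlib.sha256(b"mac" + w_bytes).digest()
    tag = hmac.new(mac_key, helper_bytes, hashlib.sha256).digest()
    return helper_bytes, tag

def robust_check(helper_bytes, tag, recovered_w_bytes):
    mac_key = hashlib.sha256(b"mac" + recovered_w_bytes).digest()
    expect = hmac.new(mac_key, helper_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expect)   # reject tampered helpers

helper, tag = robust_wrap(b"public-sketch-bytes", b"enrollment-reading")
assert robust_check(helper, tag, b"enrollment-reading")
assert not robust_check(helper + b"!", tag, b"enrollment-reading")
```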

    Facial Template Protection via Lattice-based Fuzzy Extractors

    With the growing adoption of facial recognition worldwide as a popular authentication method, there is increasing concern about the invasion of personal privacy due to the lifetime irrevocability of facial features. In principle, fuzzy extractors enable biometric-based authentication while preserving the privacy of biometric templates. Nevertheless, to the best of our knowledge, most existing fuzzy extractors handle binary vectors with Hamming distance, and no explicit construction is known for facial recognition applications, where the ℓ2-distance of real vectors is considered. In this paper, we utilize the dense packing feature of certain lattices (e.g., E8 and Leech) to design a family of lattice-based fuzzy extractors that integrates well with existing neural-network-based biometric identification schemes. We instantiate and implement the generic construction and conduct experiments on publicly available datasets. Our results confirm the feasibility of facial template protection via fuzzy extractors.
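    The lattice idea in miniature (my sketch; the scaled integer lattice s·Z^n stands in for E8 or Leech, whose decoders are more involved): publish the offset between the real-valued template and a random lattice point, and re-derive the key by decoding a fresh reading back to that point. A new reading whose per-coordinate error stays below s/2 decodes correctly; the denser lattices buy a better error-tolerance/security trade-off.

```python
# Toy lattice code-offset for real-valued (e.g., embedding) templates.
import hashlib
import secrets

S, N = 4.0, 8                                  # lattice scale, dimension

def _decode(v):                                # nearest point of S*Z^N
    return [S * round(x / S) for x in v]

def gen(w):                                    # w: list of N floats
    c = [S * secrets.randbelow(100) for _ in range(N)]   # random point
    helper = [wi - ci for wi, ci in zip(w, c)]
    key = hashlib.sha256(str(c).encode()).digest()
    return key, helper

def rep(w2, helper):
    c = _decode([wi - hi for wi, hi in zip(w2, helper)])  # c + small noise
    return hashlib.sha256(str(c).encode()).digest()

w = [0.7, 3.1, -2.2, 5.0, 1.4, -0.3, 2.6, 4.8]  # enrollment template
key, helper = gen(w)
noisy = [x + 0.9 for x in w]                    # within decoding radius
assert rep(noisy, helper) == key
```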

    Efficient, Reusable Fuzzy Extractors from LWE

    A fuzzy extractor (FE), proposed for deriving cryptographic keys from biometric data, enables reproducible generation of high-quality randomness from noisy inputs having sufficient min-entropy. In their operation, FEs rely on a public helper string that is guaranteed not to leak too much information about the original input. Unfortunately, this guarantee may not hold when multiple independent helper strings are generated from correlated inputs, as would occur if a user registers their biometric data with multiple servers; reusable FEs are needed in that case. Although the notion of reusable FEs was introduced in 2004, it has received relatively little attention since then. We first analyze an FE proposed by Fuller et al. (Asiacrypt 2013) based on the learning-with-errors (LWE) assumption and show that it is not reusable. We then show how to adapt their construction to obtain a weakly reusable FE. We also show a generic technique for turning any weakly reusable FE into a strongly reusable one in the random-oracle model. Finally, we give a direct construction of a strongly reusable FE based on the LWE assumption that does not rely on random oracles.
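    For intuition about the FE being analyzed, here is a miniature of the LWE-style design (my sketch of Fuller et al.'s idea; the parameters are toy-sized and nowhere near secure): the reading plays the role of the LWE error term, the helper is (A, As + w mod q), and reproduction recovers s by solving on row subsets untouched by the new noise.

```python
# Toy LWE-style fuzzy extractor with brute-force subset decoding.
import hashlib
import secrets
from itertools import combinations

Q, N, M = 97, 3, 8            # modulus, secret length, number of samples

def _solve(rows, vals):       # Gauss-Jordan elimination mod Q, N unknowns
    a = [list(r) + [v] for r, v in zip(rows, vals)]
    for col in range(N):
        piv = next((r for r in range(col, N) if a[r][col]), None)
        if piv is None:
            return None       # singular subset; caller tries another
        a[col], a[piv] = a[piv], a[col]
        inv = pow(a[col][col], -1, Q)
        a[col] = [x * inv % Q for x in a[col]]
        for r in range(N):
            if r != col and a[r][col]:
                f = a[r][col]
                a[r] = [(x - f * y) % Q for x, y in zip(a[r], a[col])]
    return [a[i][N] for i in range(N)]

def gen(w):                   # w: list of M small integers (the reading)
    A = [[secrets.randbelow(Q) for _ in range(N)] for _ in range(M)]
    s = [secrets.randbelow(Q) for _ in range(N)]
    b = [(sum(ai * si for ai, si in zip(row, s)) + wi) % Q
         for row, wi in zip(A, w)]
    return hashlib.sha256(bytes(s)).digest(), (A, b)

def rep(w2, helper, t=1):     # w2 differs from w in at most t positions
    A, b = helper
    c = [(bi - wi) % Q for bi, wi in zip(b, w2)]   # = A*s + (w - w2)
    for idx in combinations(range(M), N):          # guess clean rows
        s = _solve([A[i] for i in idx], [c[i] for i in idx])
        if s is None:
            continue
        errs = sum((sum(ai * si for ai, si in zip(A[i], s)) - c[i]) % Q != 0
                   for i in range(M))
        if errs <= t:
            return hashlib.sha256(bytes(s)).digest()

w = [secrets.randbelow(2) for _ in range(M)]       # enrollment reading
key, helper = gen(w)
w2 = list(w)
w2[5] ^= 1                                         # one noisy position
assert rep(w2, helper) == key
```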

    Adaptive learning and cryptography

    Significant links exist between cryptography and computational learning theory. Cryptographic functions are the usual method of demonstrating significant intractability results in computational learning theory, as they can demonstrate that certain problems are hard in a representation-independent sense. On the other hand, hard learning problems have been used to create efficient cryptographic protocols such as authentication schemes, pseudo-random permutations and functions, and even public-key encryption schemes.

    Learning theory and coding theory also impact cryptography in that they enable cryptographic primitives to deal with the issues of noise or bias in their inputs. Several different constructions of fuzzy primitives exist, a fuzzy primitive being a primitive which functions correctly even in the presence of noisy or non-uniform inputs. Some examples of these primitives include error-correcting blockciphers, fuzzy identity-based cryptosystems, fuzzy extractors, and fuzzy sketches. Error-correcting blockciphers combine both encryption and error correction in a single function, which results in increased efficiency. Fuzzy identity-based encryption allows the decryption of any ciphertext that was encrypted under a close enough identity. Fuzzy extractors and sketches are methods of reliably (re)producing a uniformly random secret key given an imperfectly reproducible string from a biased source, through a public string that is called the "sketch".

    While hard learning problems have many qualities which make them useful in constructing cryptographic protocols, such as their inherent error tolerance and simple algebraic structure, it is often difficult to utilize them to construct very secure protocols due to assumptions they make on the learning algorithm. Due to these assumptions, the resulting protocols often do not have security against various types of adaptive adversaries. To help deal with this issue, we further examine the inter-relationships between cryptography and learning theory by introducing the concept of adaptive learning. Adaptive learning is a rather weak form of learning in which the learner is not expected to closely approximate the concept function in its entirety; rather, it is only expected to answer a query of the learner's choice about the target. Adaptive learning allows for a much weaker learner than in the standard model while maintaining the positive properties of many learning problems in the standard model, a fact which we feel makes problems that are hard to adaptively learn more useful than standard-model learning problems in the design of cryptographic protocols. We argue that learning parity with noise is hard to learn adaptively and use that assumption to construct an efficient, related-key secure MAC as well as an efficient authentication scheme. In addition, we examine the security properties of fuzzy sketches and extractors and demonstrate how these properties can be combined by using our related-key secure MAC. We go on to demonstrate that our extractor can allow a form of related-key hardening for protocols: by affecting how the key for a primitive is stored, it renders the protocol immune to related-key attacks.
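    To make the learning-parity-with-noise connection concrete, the classic HB-style authentication round is sketched below. This illustrates building authentication from LPN in general; it is not the dissertation's adaptively secure MAC, which is a stronger construction.

```python
# Toy HB-style authentication from learning parity with noise (LPN).
import secrets

N, ROUNDS, ETA = 16, 64, 0.125      # secret length, rounds, noise rate

def respond(s, challenge):          # prover sends a noisy inner product
    noise = int(secrets.randbelow(1000) < ETA * 1000)
    return (sum(a & b for a, b in zip(challenge, s)) % 2) ^ noise

def verify(s, transcript, threshold=0.3):
    # accept if the fraction of mismatched rounds stays near the noise rate
    wrong = sum(r != sum(a & b for a, b in zip(ch, s)) % 2
                for ch, r in transcript)
    return wrong / len(transcript) <= threshold

s = [secrets.randbelow(2) for _ in range(N)]            # shared secret
transcript = [(ch, respond(s, ch)) for ch in
              ([secrets.randbelow(2) for _ in range(N)]
               for _ in range(ROUNDS))]
assert verify(s, transcript)        # honest prover passes w.h.p.
```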