
    Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data

    We provide formal definitions and efficient secure techniques for turning noisy information into keys usable for any cryptographic application and, in particular, for reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives. A "fuzzy extractor" reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A "secure sketch" produces public information about its input w that does not reveal w, yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of "closeness" of input data, such as Hamming distance, edit distance, and set difference.
    Comment: 47 pp., 3 figures. Prelim. version in Eurocrypt 2004, Springer LNCS 3027, pp. 523-540. Differences from version 3: minor edits for grammar, clarity, and typos.
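    As a rough illustration of the Hamming-distance setting, the sketch below implements a minimal code-offset style secure sketch built on a repetition code, with a salted hash standing in for a provable strong extractor; the code choice, parameters, and function names are our own simplifications, not the paper's constructions.

```python
import hashlib
import secrets

REP = 5  # repetition factor: each data bit is stored as REP copies

def encode(bits):
    """Repetition-code encoder: repeat each bit REP times."""
    return [b for bit in bits for b in [bit] * REP]

def decode(codeword):
    """Nearest-codeword decoding by majority vote within each block."""
    return [int(sum(codeword[i:i + REP]) > REP // 2)
            for i in range(0, len(codeword), REP)]

def sketch(w):
    """SS(w): publish s = w XOR c for a random codeword c.
    s lets us correct a few bit flips in a noisy re-reading of w,
    while the randomness of c hides w itself."""
    k = len(w) // REP
    c = encode([secrets.randbelow(2) for _ in range(k)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, s):
    """Rec(w', s): decode w' XOR s to the nearest codeword c',
    then return c' XOR s, which equals the original w whenever
    w' is close enough to w in Hamming distance."""
    c_noisy = [wi ^ si for wi, si in zip(w_noisy, s)]
    c = encode(decode(c_noisy))
    return [ci ^ si for ci, si in zip(c, s)]

def extract(w, seed):
    """Ext(w; seed): derive a nearly uniform key R from w.
    A salted hash stands in for a strong extractor here."""
    return hashlib.sha256(bytes(w) + seed).hexdigest()

# Enrollment: store (s, seed) publicly, derive key R from input w.
w = [secrets.randbelow(2) for _ in range(20 * REP)]
seed = secrets.token_bytes(16)
s = sketch(w)
R = extract(w, seed)

# Authentication: a noisy re-reading w' (a few flipped bits) still yields R.
w_noisy = list(w)
w_noisy[3] ^= 1
w_noisy[42] ^= 1
assert extract(recover(w_noisy, s), seed) == R
```

    The public pair (s, seed) can be stored in the clear: for a suitable code and input distribution it reveals neither w nor R, which is the point of the two primitives.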

    Information compression in the context model

    The Context Model provides a formal framework for the representation, interpretation, and analysis of vague and uncertain data. The clear semantics of its underlying concepts make it feasible to compare well-known approaches to the modeling of imperfect knowledge, such as Bayesian theory, Shafer's Evidence Theory, the Transferable Belief Model, and Possibility Theory. In this paper we present the basic ideas of the Context Model and show its applicability as an alternative foundation of Possibility Theory and the epistemic view of fuzzy sets.
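    As a small aside on the possibility-theoretic setting the abstract refers to, the snippet below computes the standard possibility and necessity measures of an event from a possibility distribution; the distribution and event are invented for illustration and are not part of the Context Model's own machinery.

```python
# Standard possibility/necessity measures from a possibility distribution.
# The values below are made up for the example.
pi = {"cold": 1.0, "mild": 0.7, "hot": 0.2}   # possibility distribution

def possibility(event):
    """Pos(A) = max of pi over the outcomes in A."""
    return max(pi[x] for x in event)

def necessity(event):
    """Nec(A) = 1 - Pos(complement of A)."""
    complement = set(pi) - set(event)
    return 1.0 - (possibility(complement) if complement else 0.0)

A = {"cold", "mild"}
print(possibility(A))  # 1.0: A contains a fully possible outcome
print(necessity(A))    # 0.8 = 1 - pi["hot"]
```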

    (Quantum) Space-Time as a Statistical Geometry of Fuzzy Lumps and the Connection with Random Metric Spaces

    We develop a kind of pregeometry consisting of a web of overlapping fuzzy lumps which interact with each other. The individual lumps are understood as certain closely entangled subgraphs (cliques) in a dynamically evolving network which, in a certain approximation, can be visualized as a time-dependent random graph. This strand of ideas is merged with another one, deriving from ideas developed some time ago by Menger et al.: the concept of probabilistic or random metric spaces, which represents a natural extension of the metrical continuum into a more microscopic regime. Our general goal is to find a better adapted geometric environment for the description of microphysics. In this sense one may also view it as a dynamical randomisation of the causal-set framework developed by, e.g., Sorkin et al. In doing this we incorporate, as a perhaps new aspect, various concepts from fuzzy set theory.
    Comment: 25 pages, LaTeX, no figures, some references added, some minor changes relating to previous work.
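    To make the "cliques in a time-dependent random graph" picture concrete, here is a toy sketch that evolves an Erdős–Rényi graph by random edge flips and tracks its maximal cliques as stand-ins for the lumps; all parameters are arbitrary and the dynamics are far simpler than the network laws the paper has in mind.

```python
import random
import networkx as nx

random.seed(0)
T, N, P = 3, 30, 0.25   # time steps, vertices, initial edge probability

# A time-dependent random graph: start from G(N, P) and flip a small
# fraction of the potential edges at each step.
G = nx.erdos_renyi_graph(N, P, seed=0)
for t in range(T):
    for u in range(N):
        for v in range(u + 1, N):
            if random.random() < 0.05:   # crude stand-in for network dynamics
                if G.has_edge(u, v):
                    G.remove_edge(u, v)
                else:
                    G.add_edge(u, v)
    # Maximal cliques play the role of the closely entangled subgraphs.
    lumps = [c for c in nx.find_cliques(G) if len(c) >= 4]
    print(f"t={t}: {len(lumps)} lumps (maximal cliques of size >= 4)")
```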

    A fuzzified BRAIN algorithm for learning DNF from incomplete data

    The aim of this paper is to address the problem of learning Boolean functions from training data with missing values. We present an extension of the BRAIN algorithm, called U-BRAIN (Uncertainty-managing Batch Relevance-based Artificial INtelligence), conceived for learning DNF Boolean formulas from partial truth tables, possibly with uncertain values or missing bits. The algorithm is obtained from BRAIN by introducing fuzzy sets in order to manage uncertainty. In the case where no missing bits are present, the algorithm reduces to the original BRAIN.
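    The fuzzification can be illustrated, very loosely, by evaluating a DNF formula on examples with missing bits: treating an unknown bit as membership degree 0.5 and combining literals with min/max gives a graded notion of satisfaction. This toy evaluator is our own illustration, not the U-BRAIN inference procedure itself.

```python
from typing import Optional, Sequence

# A DNF formula as a list of terms; each term maps variable index -> required
# value (1 for x_i, 0 for its negation). Example: (x0 AND NOT x2) OR x1.
DNF = [{0: 1, 2: 0}, {1: 1}]

def literal_degree(example, var, want):
    """Fuzzy truth of one literal; a missing bit (None) counts as 0.5."""
    bit = example[var]
    mu = 0.5 if bit is None else float(bit)
    return mu if want == 1 else 1.0 - mu

def satisfaction(dnf, example: Sequence[Optional[int]]) -> float:
    """Goedel-style fuzzy semantics: min over a term's literals, max over terms."""
    return max(min(literal_degree(example, v, w) for v, w in term.items())
               for term in dnf)

print(satisfaction(DNF, [1, 0, 0]))      # 1.0: first term fully satisfied
print(satisfaction(DNF, [1, None, 1]))   # 0.5: only the uncertain x1 helps
```

    With no missing bits every degree is 0 or 1 and the evaluation collapses to ordinary Boolean DNF semantics, mirroring how U-BRAIN reduces to BRAIN on complete data.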

    Relations among Security Metrics for Template Protection Algorithms

    Many biometric template protection algorithms have been proposed, mainly in two approaches: biometric feature transformation and biometric cryptosystems. Security evaluations of the proposed algorithms are often conducted in inconsistent manners, so there is a strong demand for common evaluation metrics that enable easier comparison among the many algorithms. Simoens et al. and Nagar et al. proposed good metrics covering nearly all aspects of the requirements expected of biometric template protection algorithms. One drawback of the two papers is that they are biased toward experimental evaluation of the security of biometric template protection algorithms. Therefore, it remained difficult, especially for algorithms in the biometric cryptosystem approach, to prove their security according to the proposed metrics. This paper gives formal definitions for the security metrics proposed by Simoens et al. and Nagar et al. so that they can be used for the evaluation of both approaches. Further, this paper discusses the relations among several notions of security metrics.
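    As generic background for the experimental side of such evaluations (not the formal metrics of Simoens et al. or Nagar et al.), the snippet below computes the standard false accept and false reject rates of a biometric matcher at a given decision threshold, using made-up comparison scores.

```python
# Generic biometric error rates at a decision threshold; illustrative only.
genuine_scores  = [0.91, 0.85, 0.78, 0.88, 0.60]   # same-subject comparisons
impostor_scores = [0.30, 0.55, 0.72, 0.20, 0.41]   # cross-subject comparisons

def far_frr(threshold):
    """FAR: fraction of impostor scores accepted; FRR: genuine scores rejected."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s <  threshold for s in genuine_scores)  / len(genuine_scores)
    return far, frr

for t in (0.5, 0.7):
    far, frr = far_frr(t)
    print(f"threshold={t}: FAR={far:.2f}, FRR={frr:.2f}")
```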