
    Security/privacy analysis of biometric hashing and template protection for fingerprint minutiae

    This thesis has two main parts. The first part deals with the security and privacy analysis of biometric hashing. The second part introduces a method for fixed-length feature vector extraction and hash generation from fingerprint minutiae. The upsurge of interest in biometric systems has led to the development of biometric template protection methods to overcome security and privacy problems. Biometric hashing produces a secure binary template by combining a personal secret key and a person's biometric, which yields a two-factor authentication method. This dissertation analyzes biometric hashing both from a theoretical point of view and with regard to its practical application. For the theoretical evaluation of biohashes, a systematic approach is outlined that estimates entropy from the degrees of freedom of a binomial distribution. In addition, novel practical security and privacy attacks against face image hashing are presented to quantify the additional protection provided by biometrics in cases where the secret key is compromised (i.e., the attacker is assumed to know the user's secret key). Two of these attacks are based on sparse signal recovery techniques using one-bit compressed sensing, in addition to two other minimum-norm-solution-based attacks. A rainbow attack based on a large database of faces is also introduced. The results show that biometric templates are in serious danger of being exposed when the secret key is known to an attacker, and that the system as a whole is then under serious threat.

    Due to its distinctiveness and performance, the fingerprint is preferred among various biometric modalities in many settings. Most fingerprint recognition systems use minutiae information, which is an unordered collection of minutiae locations and orientations. Some advanced template protection algorithms (such as fuzzy commitment and other modern cryptographic alternatives) require a fixed-length binary template. However, such a template protection method is not directly applicable to the fingerprint minutiae representation, which is by nature of variable size. This dissertation introduces a novel and empirically validated framework that represents a minutiae set with a rotation-invariant fixed-length vector and hence enables the use of biometric template protection methods for fingerprint recognition without significant loss in verification performance. The framework is based on using local representations around each minutia as observations modeled by a Gaussian mixture model called a universal background model (UBM). For each fingerprint, we extract a fixed-length super-vector of first-order statistics through alignment with the UBM. These super-vectors are then used to learn per-person linear support vector machine (SVM) models for verification. In addition, the fixed-length vector and the linear SVM model are both converted into binary hashes, and the matching process is reduced to calculating the Hamming distance between them, so that modern cryptographic alternatives based on homomorphic encryption can be applied for minutiae template protection.
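    As a concrete illustration of the biohashing construction analyzed in the first part (and of the Hamming-distance matching that the second part reduces verification to), the sketch below projects a real-valued biometric feature vector onto a random basis seeded by the user's secret key and thresholds the result into a binary template. This is a minimal, generic Python sketch: the function names, the orthonormalized projection, and the median threshold are illustrative assumptions, not the thesis's exact construction.

```python
import numpy as np

def biohash(features: np.ndarray, secret_key: int, n_bits: int = 256) -> np.ndarray:
    """Toy biohash: key-seeded random projection followed by binarization."""
    rng = np.random.default_rng(secret_key)            # the secret key seeds the projection
    basis = rng.standard_normal((features.size, n_bits))
    q, _ = np.linalg.qr(basis)                          # orthonormal columns (features.size >= n_bits)
    y = q.T @ features                                  # real-valued projections
    return (y > np.median(y)).astype(np.uint8)          # threshold into a binary template

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

# Matching reduces to a Hamming-distance comparison of two binary templates.
feat = np.random.rand(512)
enrolled = biohash(feat, secret_key=1234)
probe = biohash(feat + 0.01 * np.random.randn(512), secret_key=1234)
print(hamming_distance(enrolled, probe))                # small for the same biometric and key
```

    If an attacker learns the secret key, the projection is no longer secret, which is exactly the threat model under which the attacks in the first part operate.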

    Naval Reserve support to information Operations Warfighting

    Since the mid-1990s, the Fleet Information Warfare Center (FIWC) has led the Navy's Information Operations (IO) support to the Fleet. Within the FIWC manning structure there are a total of 36 officer and 84 enlisted Naval Reserve billets, manned at approximately 75 percent and located at the Norfolk and San Diego Naval Reserve Centers. These Naval Reserve Force personnel could provide support to FIWC far beyond what they now contribute, specifically in the areas of Computer Network Operations, Psychological Operations, Military Deception, and Civil Affairs. Historically, personnel conducting IO were primarily reservists and civilians in uniform, with regular military officers being by far the minority. The Naval Reserve Force has the personnel to provide skilled IO operators, but the lack of an effective manning document and training plans is hindering their opportunity to enhance FIWC's capabilities in full-spectrum IO. This research investigates the skill requirements of personnel in IO to verify that the Naval Reserve Force has the talent base for IO support and to assess the feasibility of their expanded use in IO.
    http://archive.org/details/navalreservesupp109451098

    There Is Always a Way Out! Destruction-Resistant Key Management: Formal Definition and Practical Instantiation

    A central advantage of deploying cryptosystems is that the security of large, highly sensitive data sets can be reduced to the security of a very small key. The most popular way to manage keys is to use a (t, n)-threshold secret sharing scheme: a user splits her/his key into n shares, distributes them among n key servers, and can recover the key with the aid of any t of them. However, this approach is vulnerable to device destruction: if all key servers and the user's devices break down, the key is permanently lost. We propose a Destruction-Resistant Key Management scheme, dubbed DRKM, which ensures key availability even if destruction occurs. In DRKM, a user utilizes her/his n* personal identification factors (PIFs) to derive a cryptographic key but can retrieve the key using any t* of the n* PIFs. As most PIFs can be retrieved by the user per se without requiring stateful devices, destruction resistance is achieved. With the integration of a (t, n)-threshold secret sharing scheme, DRKM also provides portable key access for the user (with the aid of any t of the n key servers) before destruction occurs. DRKM can be utilized to construct a destruction-resistant cryptosystem (DRC) in tandem with any backup system. We formally prove the security of DRKM, implement a DRKM prototype, and conduct a comprehensive performance evaluation to demonstrate its high efficiency. We further utilize Cramer's Rule to reduce the buffer required to retrieve a key from 25 MB to 40 KB (for 256-bit security).
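    For context, the (t, n)-threshold primitive the abstract builds on is Shamir secret sharing: any t shares reconstruct the key, while fewer reveal nothing. The following is a minimal toy Python implementation over a prime field; it is not the paper's DRKM construction, and the field size and parameters are illustrative only.

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime; large enough for a toy example

def split(secret: int, n: int, t: int):
    """Shamir (t, n) sharing: evaluate a random degree-(t-1) polynomial
    with constant term `secret` at x = 1..n."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = split(secret=123456789, n=5, t=3)
assert recover(shares[:3]) == 123456789   # any t of the n shares suffice
```

    DRKM's destruction resistance comes from applying the same any-t*-of-n* idea to personal identification factors rather than to shares held on stateful devices.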

    Improving k-nn search and subspace clustering based on local intrinsic dimensionality

    In several novel applications such as multimedia and recommender systems, data is often represented as object feature vectors in high-dimensional spaces. High-dimensional data is a persistent challenge for state-of-the-art algorithms because of the so-called "curse of dimensionality": as the dimensionality increases, the discriminative ability of similarity measures diminishes to the point where many data analysis algorithms that depend on them, such as similarity search and clustering, lose their effectiveness. One way to handle this challenge is to select the most important features, which is essential for providing compact object representations as well as improving overall search and clustering performance. Compact feature vectors can further reduce the storage space and the computational complexity of search and learning tasks. Support-Weighted Intrinsic Dimensionality (support-weighted ID) is a promising new feature selection criterion that estimates the contribution of each feature to the overall intrinsic dimensionality. Support-weighted ID identifies relevant features locally for each object and penalizes features that have locally lower discriminative power as well as higher density; in effect, it measures the ability of each feature to locally discriminate between objects in the dataset. Based on support-weighted ID, this dissertation introduces three main research contributions. First, it proposes NNWID-Descent, a similarity graph construction method that utilizes the support-weighted ID criterion to identify and retain relevant features locally for each object and enhance overall graph quality. Second, with the aim of improving the accuracy and performance of cluster analysis, it introduces k-LIDoids, a subspace clustering algorithm that extends the utility of support-weighted ID within a clustering framework in order to gradually select the subset of informative and important features per cluster; k-LIDoids constructs clusters while finding a low-dimensional subspace for each cluster. Finally, using the compact object and cluster representations from NNWID-Descent and k-LIDoids, it defines LID-Fingerprint, a new binary fingerprinting and multi-level indexing framework for high-dimensional data. LID-Fingerprint can be used to hide information as a way of thwarting passive adversaries, as well as to provide efficient and secure similarity search and retrieval for data stored in the cloud. Compared to other state-of-the-art algorithms, the good practical performance provides evidence of the effectiveness of the proposed algorithms for data in high-dimensional spaces.
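    The support-weighted ID criterion builds on local intrinsic dimensionality (LID) estimates computed from nearest-neighbor distances. The sketch below shows the standard maximum-likelihood (Hill-type) LID estimator that such criteria refine; it is a minimal Python illustration with assumed names and parameters, not the dissertation's weighted formulation.

```python
import numpy as np

def lid_mle(query: np.ndarray, data: np.ndarray, k: int = 20) -> float:
    """Maximum-likelihood (Hill-type) estimate of local intrinsic dimensionality
    at `query`, computed from its k nearest-neighbor distances within `data`."""
    dists = np.linalg.norm(data - query, axis=1)
    dists = np.sort(dists[dists > 0])[:k]                # k smallest positive distances
    r_k = dists[-1]                                      # distance to the k-th neighbor
    return -1.0 / np.mean(np.log(dists / r_k + 1e-12))   # LID = -(mean log ratio)^-1

# Points drawn from a 3-dimensional subspace embedded in 50 dimensions
# should yield local estimates close to 3.
rng = np.random.default_rng(0)
points = rng.standard_normal((2000, 3)) @ rng.standard_normal((3, 50))
print(lid_mle(points[0], points[1:], k=20))
```

    A low local estimate for an object suggests that only a few directions carry most of the discriminative information around it, which is the intuition the feature selection criterion exploits.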

    Automated dental identification: A micro-macro decision-making approach

    Identification of deceased individuals based on dental characteristics is receiving increased attention, especially with the large volume of victims encountered in mass disasters. In this work we consider three important problems in automated dental identification beyond the basic approach of tooth-to-tooth matching.

    The first problem is the automatic classification of teeth into incisors, canines, premolars, and molars as part of creating a data structure that guides tooth-to-tooth matching, thus avoiding illogical comparisons that inefficiently consume the limited computational resources and may also mislead decision-making. We tackle this problem using principal component analysis and string matching techniques. We reconstruct the segmented teeth using the eigenvectors of the image subspaces of the four teeth classes, and then assign each tooth the class that achieves the least energy discrepancy between the tooth and its approximations. We exploit teeth-neighborhood rules in validating teeth classes and hence assign each tooth a number corresponding to its location in a dental chart. Our approach achieves 82% teeth-labeling accuracy on a large test dataset of bitewing films.

    Because dental radiographic films capture projections of distinct teeth, and often multiple views of each distinct tooth, in the second problem we look for a scheme that exploits teeth multiplicity to achieve more reliable match decisions when we compare the dental records of a subject and a candidate match. Hence, we propose a hierarchical fusion scheme that utilizes both aspects of teeth multiplicity for improving teeth-level (micro) and case-level (macro) decision-making. We achieve a genuine accept rate in excess of 85%.

    In the third problem we study the performance limits of dental identification due to feature capabilities. We consider two types of features used in dental identification, namely teeth contours and appearance features. We propose a methodology for determining the number of degrees of freedom possessed by a feature set, as a figure of merit, based on modeling joint distributions using copulas under less stringent assumptions on the dependence between feature dimensions. We also offer workable approximations of this approach.
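    The first problem's classifier can be summarized as follows: build a PCA subspace per tooth class, reconstruct an unknown tooth in each subspace, and label it with the class yielding the least reconstruction error (energy discrepancy). The Python sketch below illustrates that decision rule on flattened tooth images; the function names and the number of components are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fit_class_subspaces(class_images: dict, n_components: int = 20) -> dict:
    """For each class (incisor, canine, premolar, molar): the mean image plus the
    top principal eigenvectors of that class's training images (rows = flattened images)."""
    models = {}
    for label, imgs in class_images.items():
        mean = imgs.mean(axis=0)
        _, _, vt = np.linalg.svd(imgs - mean, full_matrices=False)
        models[label] = (mean, vt[:n_components])       # eigenvectors stored as rows
    return models

def classify_tooth(tooth: np.ndarray, models: dict) -> str:
    """Reconstruct the tooth in each class subspace and return the class with the
    least energy discrepancy (squared reconstruction error)."""
    errors = {}
    for label, (mean, basis) in models.items():
        coeffs = basis @ (tooth - mean)                  # project onto the class subspace
        recon = mean + basis.T @ coeffs                  # reconstruct from the projection
        errors[label] = float(np.sum((tooth - recon) ** 2))
    return min(errors, key=errors.get)

# Hypothetical usage: X_incisor, X_canine, ... are (num_images, num_pixels) arrays.
# models = fit_class_subspaces({"incisor": X_incisor, "canine": X_canine,
#                               "premolar": X_premolar, "molar": X_molar})
# label = classify_tooth(segmented_tooth_vector, models)
```

    The class labels produced this way are then validated against teeth-neighborhood rules before each tooth is assigned its dental-chart number, as described above.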