
    On the Dissimilarity Representation and Prototype Selection for Signature-Based Bio-Cryptographic Systems

    Abstract. Robust bio-cryptographic schemes employ encoding methods in which a short message is extracted from biometric samples to encode cryptographic keys. This approach implies two design limitations: 1) the encoding message should be concise and discriminative, and 2) a dissimilarity threshold must provide a good compromise between false rejection and false acceptance rates. In this paper, the dissimilarity representation approach is employed to tackle these limitations, with offline signature images employed as biometrics. The signature images are represented as vectors in a high-dimensional feature space and projected onto an intermediate space, where pairwise feature distances are computed. Boosting feature selection is employed to provide a compact space in which intra-personal distances are minimized and inter-personal distances are maximized. Finally, the resulting representation is projected onto the dissimilarity space to select the most discriminative prototypes for encoding and to optimize the dissimilarity threshold. Simulation results on the Brazilian signature database show the viability of the proposed approach. Employing the dissimilarity representation approach increases the discriminative power of the encoding message (the area under the ROC curve grows by about 47%), and prototype selection with threshold optimization increases the decoding accuracy (the average error rate, AER, is reduced by about 34%).
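    A minimal sketch of the dissimilarity-representation idea in this abstract follows: pairwise feature distances to a selected prototype are aggregated and compared against a single dissimilarity threshold. The feature dimensionality, weighting, and threshold value below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dissimilarity_vector(query_features, prototype_features):
    """Element-wise absolute distances between two feature vectors:
    the constituents of the intermediate pairwise-distance space."""
    return np.abs(np.asarray(query_features) - np.asarray(prototype_features))

def verify(query_features, prototype_features, selected_idx, weights, threshold):
    """Simple threshold classifier in the dissimilarity space.

    selected_idx : indices kept by (e.g.) boosting feature selection
    weights      : per-feature weights for the selected distances
    threshold    : dissimilarity threshold trading off FRR vs. FAR
    """
    d = dissimilarity_vector(query_features, prototype_features)[selected_idx]
    score = float(np.dot(weights, d))     # aggregated dissimilarity to the prototype
    return score <= threshold             # accept only if close enough

# Hypothetical usage: 2048-D feature vectors, 40 selected distance features
rng = np.random.default_rng(0)
proto = rng.normal(size=2048)                        # enrolled prototype signature features
query = proto + rng.normal(scale=0.1, size=2048)     # genuine-like query sample
idx = rng.choice(2048, size=40, replace=False)
accepted = verify(query, proto, idx, np.ones(40) / 40, threshold=0.15)
print("accepted:", accepted)
```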

    A dissimilarity representation approach to designing systems for signature verification and bio-cryptography

    Automation of legal and financial processes requires enforcing the authenticity, confidentiality, and integrity of the involved transactions. This thesis focuses on developing offline signature verification (OLSV) systems for enforcing the authenticity of transactions. In addition, bio-cryptography systems are developed based on offline handwritten signature images for enforcing the confidentiality and integrity of transactions. Design of OLSV systems is challenging, as signatures are behavioral biometric traits with intrinsic intra-personal variations and inter-personal similarities. Standard OLSV systems are designed in the feature representation (FR) space, where high-dimensional feature representations are needed to capture the invariance of the signature images. With the large number of users found in real-world applications, e.g., banking systems, decision boundaries in high-dimensional FR spaces become complex. Accordingly, a large number of training samples is required to design such complex classifiers, which is not practical in typical applications. Design of bio-cryptography systems based on offline signature images is even more challenging. In these systems, signature images lock the cryptographic keys, and a user retrieves his key by applying a query signature sample. For practical bio-cryptographic schemes, the locking feature vector should be concise. In addition, such schemes employ simple error-correction decoders, and therefore no complex classification rules can be employed.

    In this thesis, the challenging problems of designing OLSV and bio-cryptography systems are addressed by employing the dissimilarity representation (DR) approach. Instead of designing classifiers in the feature space, the DR approach provides a classification space that is defined by some proximity measure. This way, a multi-class classification problem with few samples per class is transformed into a more tractable two-class problem with a large number of training samples. Since many feature extraction techniques have already been proposed for OLSV applications, a DR approach based on FR is employed. In this case, proximity between two signatures is measured by applying a dissimilarity measure to their feature vectors. The main hypothesis of this thesis is as follows: the FRs and dissimilarity measures should be properly designed so that signatures belonging to the same writer are close, while signatures of different writers are well separated, in the resulting DR spaces. In that case, more cost-effective classifiers, and therefore simpler OLSV and bio-cryptography systems, can be designed.

    To this end, in Chapter 2, an approach for optimizing FR-based DR spaces is proposed such that concise representations are discriminant and simple classification thresholds are sufficient. High-dimensional feature representations are translated to an intermediate DR space, where pairwise feature distances are the space constituents. Then, a two-step boosting feature selection (BFS) algorithm is applied. The first step uses samples from a development database and aims to produce a universal space of reduced dimensionality. The resulting universal space is further reduced and tuned for specific users through a second BFS step using a user-specific training set. In the resulting space, feature variations are modeled and an adaptive dissimilarity measure is designed. This measure generates the final DR space, where discriminant prototypes are selected for enhanced representation.
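    The BFS steps above select, from the pairwise-distance constituents of the intermediate DR space, the few distances that best separate same-writer from different-writer pairs. A minimal sketch of one such boosting-style selection step, using decision stumps and AdaBoost-like sample re-weighting, is given below; the crude stump thresholding and the synthetic data in the usage example are assumptions, not the exact algorithm used in the thesis.

```python
import numpy as np

def boosting_feature_selection(D, y, n_select):
    """Greedy stump-based feature selection on dissimilarity vectors.

    D : (n_pairs, n_features) matrix of pairwise feature distances
    y : labels, +1 for "same writer" pairs, -1 for "different writer" pairs
    Selects features whose single-threshold stump best separates the two
    classes under boosting-style sample re-weighting.
    """
    n, m = D.shape
    w = np.full(n, 1.0 / n)                      # AdaBoost-style sample weights
    selected = []
    for _ in range(n_select):
        best = None
        for j in range(m):
            if j in selected:
                continue
            thr = np.median(D[:, j])             # crude stump threshold (illustrative)
            pred = np.where(D[:, j] <= thr, 1, -1)   # small distance -> same writer
            err = np.sum(w[pred != y])
            if best is None or err < best[0]:
                best = (err, j, pred)
        err, j, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)           # re-weight: focus on misclassified pairs
        w /= w.sum()
        selected.append(j)
    return selected

# Hypothetical usage: 500 pairwise-distance vectors over 300 candidate features
rng = np.random.default_rng(0)
D = rng.exponential(size=(500, 300))
y = np.where(rng.random(500) < 0.5, 1, -1)
D[y == 1] *= 0.5            # "same writer" pairs have smaller distances
print(boosting_feature_selection(D, y, n_select=10))
```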
    The OLSV and bio-cryptographic systems are formulated as simple threshold classifiers that operate in the designed DR space. Proof-of-concept simulations on the Brazilian signature database indicate the viability of the proposed approach. Concise DRs with few features and a single prototype are produced. Employing a simple threshold classifier, the DRs have shown state-of-the-art accuracy of about 7% AER (average error rate), comparable to complex systems in the literature.

    In Chapter 3, the OLSV problem is studied further. Although the aforementioned OLSV implementation has shown acceptable recognition accuracy, the resulting systems are not secure, as signature templates must be stored for verification. For enhanced security, we modified the previous implementation as follows. The first BFS step is implemented as described above, producing a writer-independent (WI) system. This enables system operation to start even if users provide only a single signature sample in the enrollment phase. However, the second BFS step is modified to run in a FR space instead of a DR space, so that no signature templates are used for verification. To this end, the universal space is translated back to a FR space of reduced dimensionality, so that designing a writer-dependent (WD) system with the few user-specific samples is tractable in the reduced space. Simulation results on two real-world offline signature databases confirm the feasibility of the proposed approach. The initial universal (WI) verification mode showed performance comparable to that of state-of-the-art OLSV systems. The final secure WD verification mode showed enhanced accuracy with decreased computational complexity: a single compact classifier produced a level of accuracy (AER of about 5.38% and 13.96% for the Brazilian and GPDS signature databases, respectively) similar to that of complex WI and WD systems in the literature.

    Finally, in Chapter 4, a key-binding bio-cryptographic scheme known as the fuzzy vault (FV) is implemented based on the offline signature images. The proposed DR-based two-step BFS technique is employed for selecting a compact and discriminant user-specific FR from a large number of extracted features. This representation is used to generate the FV locking/unlocking points. Representation variability modeled in the DR space is considered for matching the unlocking and locking points during FV decoding. Proof-of-concept simulations on the Brazilian signature database have shown FV recognition accuracy of 3% AER and system entropy of about 45 bits. For enhanced security, an adaptive chaff generation method is proposed, where the modeled variability controls the chaff generation process. Similar recognition accuracy is reported, while enhanced entropy of about 69 bits is achieved.
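    As an illustration of the fuzzy vault principle described in Chapter 4, the following is a minimal, generic sketch over a prime field: the key is encoded as polynomial coefficients, genuine feature points are evaluated on the polynomial, and random chaff points hide them. It assumes exact matching of point abscissas and omits feature quantization and the adaptive chaff generation; all parameters are illustrative, and this is not the thesis' implementation.

```python
import hashlib
import secrets

P = 2**31 - 1   # prime modulus for polynomial arithmetic (illustrative size)

def _poly_eval(coeffs, x):
    """Evaluate polynomial (coefficients lowest degree first) at x mod P."""
    y, xp = 0, 1
    for c in coeffs:
        y = (y + c * xp) % P
        xp = (xp * x) % P
    return y

def _poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def _lagrange_coeffs(pts):
    """Lagrange interpolation over GF(P); returns coefficients lowest degree first."""
    n = len(pts)
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(pts):
        num, denom = [1], 1
        for j, (xj, _) in enumerate(pts):
            if j == i:
                continue
            num = _poly_mul(num, [(-xj) % P, 1])     # multiply by (x - xj)
            denom = (denom * (xi - xj)) % P
        scale = (yi * pow(denom, P - 2, P)) % P      # modular inverse of denom
        for k in range(len(num)):
            coeffs[k] = (coeffs[k] + num[k] * scale) % P
    return coeffs

def lock(key_chunks, genuine_points, n_chaff):
    """Bind key chunks (polynomial coefficients) to genuine feature points,
    hidden among random chaff points that do not lie on the polynomial."""
    vault = [(x, _poly_eval(key_chunks, x)) for x in genuine_points]
    genuine_x = set(genuine_points)
    while len(vault) < len(genuine_points) + n_chaff:
        x, y = secrets.randbelow(P), secrets.randbelow(P)
        if x in genuine_x or y == _poly_eval(key_chunks, x):
            continue
        vault.append((x, y))
    secrets.SystemRandom().shuffle(vault)
    key_hash = hashlib.sha256(str(key_chunks).encode()).hexdigest()
    return vault, key_hash

def unlock(vault, query_points, degree, key_hash):
    """Recover the key from vault points whose abscissa matches the query."""
    qs = set(query_points)
    matched = [(x, y) for (x, y) in vault if x in qs]
    if len(matched) < degree + 1:
        return None
    coeffs = _lagrange_coeffs(matched[: degree + 1])
    if hashlib.sha256(str(coeffs).encode()).hexdigest() == key_hash:
        return coeffs
    return None

# Hypothetical usage: 16 genuine points, degree-7 polynomial (8 key chunks), 200 chaff
key = [secrets.randbelow(P) for _ in range(8)]
genuine = [secrets.randbelow(P) for _ in range(16)]
vault, h = lock(key, genuine, n_chaff=200)
assert unlock(vault, genuine, degree=7, key_hash=h) == key
```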

    Mixing Biometric Data For Generating Joint Identities and Preserving Privacy

    Biometrics is the science of automatically recognizing individuals by utilizing biological traits such as fingerprints, face, iris and voice. A classical biometric system digitizes the human body and uses this digitized identity for human recognition. In this work, we introduce the concept of mixing biometrics. Mixing biometrics refers to the process of generating a new biometric image by fusing images of different fingers, different faces, or different irises. The resultant mixed image can be used directly in the feature extraction and matching stages of an existing biometric system. In this regard, we design and systematically evaluate novel methods for generating mixed images for the fingerprint, iris and face modalities. Further, we extend the concept of mixing to accommodate two distinct modalities of an individual, viz., fingerprint and iris. The utility of mixing biometrics is demonstrated in two different applications. The first application deals with the issue of generating a joint digital identity. A joint identity inherits its uniqueness from two or more individuals and can be used in scenarios such as joint bank accounts or two-man rule systems. The second application deals with the issue of biometric privacy, where the concept of mixing is used for de-identifying or obscuring biometric images and for generating cancelable biometrics. Extensive experimental analysis suggests that the concept of biometric mixing has several benefits and can be easily incorporated into existing biometric systems.

    Recent Application in Biometrics

    In recent years, a number of recognition and authentication systems based on biometric measurements have been proposed. Algorithms and sensors have been developed to acquire and process many different biometric traits. Moreover, biometric technology is being used in novel ways, with potential commercial and practical implications for our daily activities. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in biometrics. The topics covered in this book reflect both aspects of this development. They include biometric sample quality, privacy-preserving and cancellable biometrics, contactless biometrics, novel and unconventional biometrics, and the technical challenges in implementing the technology in portable devices. The book consists of 15 chapters. It is divided into four sections, namely, biometric applications on mobile platforms, cancelable biometrics, biometric encryption, and other applications. The book was reviewed by editors Dr. Jucheng Yang and Dr. Norman Poh. We deeply appreciate the efforts of our guest editors: Dr. Girija Chetty, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, as well as a number of anonymous reviewers.

    Intelligent Computing for Big Data

    Recent advances in artificial intelligence have the potential to further develop current big data research. The Special Issue on ‘Intelligent Computing for Big Data’ highlighted a number of recent studies related to the use of intelligent computing techniques in the processing of big data for text mining, autism diagnosis, behaviour recognition, and blockchain-based storage.

    Automatic Signature Verification: The State of the Art


    Selected Computing Research Papers Volume 1 June 2012

    An Evaluation of Anti-phishing Solutions (Arinze Bona Umeaku), p. 1
    A Detailed Analysis of Current Biometric Research Aimed at Improving Online Authentication Systems (Daniel Brown), p. 7
    An Evaluation of Current Intrusion Detection Systems Research (Gavin Alexander Burns), p. 13
    An Analysis of Current Research on Quantum Key Distribution (Mark Lorraine), p. 19
    A Critical Review of Current Distributed Denial of Service Prevention Methodologies (Paul Mains), p. 29
    An Evaluation of Current Computing Methodologies Aimed at Improving the Prevention of SQL Injection Attacks in Web Based Applications (Niall Marsh), p. 39
    An Evaluation of Proposals to Detect Cheating in Multiplayer Online Games (Bradley Peacock), p. 45
    An Empirical Study of Security Techniques Used In Online Banking (Rajinder D G Singh), p. 51
    A Critical Study on Proposed Firewall Implementation Methods in Modern Networks (Loghin Tivig), p. 5

    Algorithmic Techniques in Gene Expression Processing. From Imputation to Visualization

    The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is growing rapidly, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially that obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is a method commonly used to make the original incomplete data complete, thus making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on eight publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as Bayesian Principal Component Analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are examples of the outcome of multiple biological experiments such as gene microarray techniques. Such networks are typically very large and highly connected; thus, there is a need for fast algorithms that produce visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within a regular force-directed graph layout algorithm.
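    As a concrete illustration of the k-NN imputation discussed above, here is a minimal sketch that fills missing entries of a genes-by-samples matrix from the k most similar genes; it omits the curated external biological guidance proposed in the thesis, and the matrix layout and choice of k are assumptions.

```python
import numpy as np

def knn_impute(X, k=10):
    """Impute missing entries (NaN) in a genes x samples expression matrix.

    For each gene (row) with missing values, rank the other genes by
    root-mean-square distance over co-observed columns, then fill each
    missing column with the average of the k nearest genes that have a
    value in that column.
    """
    X = np.array(X, dtype=float)
    filled = X.copy()
    for i in np.where(np.isnan(X).any(axis=1))[0]:
        obs = ~np.isnan(X[i])                      # columns observed for gene i
        dists = []
        for j in range(X.shape[0]):
            if j == i:
                continue
            common = obs & ~np.isnan(X[j])
            if common.sum() == 0:
                continue
            d = np.sqrt(np.mean((X[i, common] - X[j, common]) ** 2))
            dists.append((d, j))
        dists.sort()
        for col in np.where(~obs)[0]:
            donors = [X[j, col] for _, j in dists if not np.isnan(X[j, col])][:k]
            if donors:
                filled[i, col] = np.mean(donors)   # average of the k nearest donors
    return filled

# Hypothetical usage on a tiny matrix with one missing entry
X = np.array([[1.0, 2.0, np.nan],
              [1.1, 2.1, 3.1],
              [0.9, 1.9, 2.9]])
print(knn_impute(X, k=2))
```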

    On the performance of helper data template protection schemes

    The use of biometrics looks promising as it is already being applied in electronic passports (ePassports) on a global scale. Because the biometric data has to be stored as a reference template on either a central or personal storage device, its wide-spread use introduces new security and privacy risks such as (i) identity fraud, (ii) cross-matching, (iii) irrevocability and (iv) leaking sensitive medical information. Mitigating these risks is essential to obtain acceptance from the subjects of the biometric systems and thereby to facilitate successful implementation on a large scale. A solution to mitigate these risks is to use template protection techniques. The required protection properties of the stored reference template according to ISO guidelines are (i) irreversibility, (ii) renewability and (iii) unlinkability. A known template protection scheme is the helper data system (HDS). The fundamental principle of the HDS is to bind a key with the biometric sample with use of helper data and cryptography, such that the key can be reproduced or released given another biometric sample of the same subject. The identity check is then performed in a secure way by comparing the hash of the key. Hence, the size of the key determines the amount of protection. This thesis extensively investigates the HDS, namely (i) the theoretical classification performance, (ii) the maximum key size, (iii) the irreversibility and unlinkability properties, and (iv) the optimal multi-sample and multi-algorithm fusion method. The theoretical classification performance of the biometric system is determined by assuming that the features extracted from the biometric sample are Gaussian distributed. With this assumption we investigate the influence of the bit extraction scheme on the classification performance. With use of the theoretical framework, the maximum size of the key is determined by assuming the error-correcting code to operate on Shannon's bound. We also show three vulnerabilities of the HDS that affect the irreversibility and unlinkability properties, and propose solutions. Finally, we study the optimal level of applying multi-sample and multi-algorithm fusion with the HDS at either feature, score, or decision level.
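    The key-binding principle of the HDS can be sketched as a fuzzy-commitment-style construction with a simple repetition code, as below; the sign-based bit extraction, the repetition code, and the key size are illustrative assumptions rather than the scheme analyzed in the thesis.

```python
import hashlib
import secrets
import numpy as np

REP = 15   # repetition factor: each key bit is spread over REP biometric bits

def binarize(features, means):
    """Bit extraction: 1 if a Gaussian-distributed feature exceeds its population mean."""
    return (np.asarray(features) > np.asarray(means)).astype(np.uint8)

def enroll(features, means, n_key_bits=8):
    """Bind a random key to the enrollment sample; store only helper data and a hash."""
    bits = binarize(features, means)
    key = np.array([secrets.randbelow(2) for _ in range(n_key_bits)], dtype=np.uint8)
    codeword = np.repeat(key, REP)               # repetition-code encoding of the key
    helper = codeword ^ bits[: codeword.size]    # helper data = codeword XOR biometric bits
    key_hash = hashlib.sha256(key.tobytes()).hexdigest()
    return helper, key_hash

def verify(features, means, helper, key_hash):
    """Release the key from a fresh sample and compare it against the stored hash."""
    bits = binarize(features, means)
    noisy = helper ^ bits[: helper.size]         # codeword corrupted by biometric noise
    key = (noisy.reshape(-1, REP).sum(axis=1) > REP // 2).astype(np.uint8)  # majority decode
    return hashlib.sha256(key.tobytes()).hexdigest() == key_hash

# Hypothetical usage: 120 Gaussian features, genuine query = enrollment sample + small noise
rng = np.random.default_rng(1)
means = np.zeros(120)
enrol_sample = rng.normal(size=120)
helper, h = enroll(enrol_sample, means, n_key_bits=8)
genuine_query = enrol_sample + rng.normal(scale=0.2, size=120)
impostor_query = rng.normal(size=120)
print("genuine accepted:", verify(genuine_query, means, helper, h))
print("impostor accepted:", verify(impostor_query, means, helper, h))
```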