10 research outputs found

    Does EigenPalm work? A system and evaluation perspective

    Authors: Adams Kong, David Zhang. Refereed conference paper (2006-2007). Version of Record, published.

    Building a Strong Undergraduate Research Culture in African Universities

    Africa had a late start in the race to establish universities with strong research fundamentals. According to Mamdani [5], the first colonial universities were few and far between: Makerere in East Africa, Ibadan and Legon in West Africa. This last place in the race, compared to other continents, has had tremendous implications for the continent's development plans. For Africa, the race has been difficult, from a late start to a litany of problems that includes difficulty in acquiring equipment, lack of capacity, limited research and development resources, and lack of investment in local universities. In fact, most of these universities are very recent, with all but a few less than 50 years old. To reduce the labor costs of shipping Europeans to Africa to do mere clerical jobs, the colonial powers started training "workshops," calling them technical or business colleges. According to Mamdani, meeting colonial needs was to be achieved while avoiding the "Indian disease" in Africa -- that is, the development of an educated middle class, a group most likely to carry the virus of nationalism. Upon independence, most of these "workshops" were turned into national "universities," but with no clear role in national development; they catered to the children of the new African political elites. Through the seventies and eighties, most African universities were still without development agendas and carried on business as usual. Meanwhile, governments strapped for money saw no need to put more scarce resources into big white elephants. By the mid-eighties, even the UN and the IMF were calling for a limit on funding African universities. In today's African university, the traditional curiosity-driven research model has been replaced by a market-driven model dominated by a consultancy culture, according to Mamdani (Mamdani, Mail and Guardian Online).
The prevailing research culture has reduced intellectual life in universities to bare-bones classroom activity; seminars and workshops have migrated to hotels, with attendance driven by transport allowances and per diems (Mamdani, Mail and Guardian Online). There is a need to remedy this situation, and that is the focus of this paper.

    Palmprint Identification Based on Generalization of IrisCode

    The development of accurate and reliable security systems is a matter of wide interest, and in this context biometrics is seen as a highly effective automatic mechanism for personal identification. Among biometric technologies, IrisCode, developed by Daugman in 1993, is regarded as a highly accurate approach, able to support real-time personal identification over large databases. Since 1993, on top of IrisCode, different coding methods have been proposed for iris and fingerprint identification. In this research, I extend and generalize IrisCode for real-time, secure palmprint identification. PalmCode, the first coding method for palmprint identification, which I developed in 2002, directly applied IrisCode to extract phase information of palmprints as features. However, I observe that the PalmCodes from different palms are similar, having many 45° streaks. Such structural similarities in the PalmCodes of different palms reduce the individuality of PalmCodes and the performance of palmprint identification systems. To reduce the correlation between PalmCodes, in this thesis I employ multiple elliptical Gabor filters with different orientations to compute different PalmCodes and merge them to produce a single feature, called Fusion Code. Experimental results demonstrate that Fusion Code performs better than PalmCode. Based on the results of Fusion Code, I further identify that the orientation fields of palmprints are powerful features. Consequently, Competitive Code, which uses the real parts of six Gabor filters to estimate the orientation fields, is developed. To embed the properties of IrisCode, such as high-speed matching, in Competitive Code, a novel coding scheme and a bitwise angular distance are proposed. Experimental results demonstrate that Competitive Code is much more effective than other palmprint algorithms.
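The winner-take-all idea behind this kind of orientation coding can be sketched as follows. This is a minimal illustration, not the thesis's exact design: the filter shapes, the number of orientations, and the response rule (strongest negative response wins, a common convention for dark line features) are assumptions.

```python
import numpy as np

def competitive_code(patch, filters):
    """Winner-take-all orientation coding: assign each local patch the
    index of the filter (orientation) with the strongest negative
    response, as in orientation-based palmprint coding."""
    responses = [np.sum(patch * f) for f in filters]
    return int(np.argmin(responses))  # dominant orientation index

def angular_distance(a, b, n_orient=6):
    """Angular distance between two orientation indices, wrapping around
    half a cycle (orientations are modulo 180 degrees); the thesis's
    bitwise angular distance computes this with bit operations for speed."""
    d = abs(a - b) % n_orient
    return min(d, n_orient - d)
```

Matching two palmprints then reduces to summing such angular distances over all patch positions, which is what makes high-speed matching possible.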
Although many coding methods have been developed based on IrisCode for iris and palmprint identification, we lack a detailed analysis of IrisCode. One of the aims of this research is to provide such an analysis as a way of better understanding IrisCode, extending the coarse phase representation to a precise phase representation, and uncovering the relationship between IrisCode and other coding methods. This analysis demonstrates that IrisCode is a clustering process with four prototypes; the locus of a Gabor function is a two-dimensional ellipse with respect to a phase parameter, and the bitwise Hamming distance can be regarded as a bitwise angular distance. In this analysis, I also point out that the theoretical evidence for the impostor binomial distribution of IrisCode is incomplete. I use this analysis to develop a precise phase representation, which can enhance iris recognition accuracy, and to relate IrisCode to other coding methods. By making use of this analysis, together with principal component analysis and simulated annealing, near-optimal filters for palmprint identification are sought. The near-optimal filters perform better than Competitive Code in terms of the d′ index. Identical twins, having the closest genetic relationship, are expected to have maximum similarity in their biometrics, and classifying identical twins is a challenging problem for some automatic biometric systems. Palmprints have been studied for personal identification for many years; however, genetically identical palmprints have not. I systematically examine Competitive Code on genetically identical palmprints for automatic personal identification and to uncover genetically related palmprint features. The experimental results show that the three principal lines and some portions of weak lines are genetically related features, but our palms still contain rich genetically unrelated features for classifying identical twins.
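The coarse phase representation that this analysis starts from -- quantizing a complex Gabor response into one of four phase quadrants, which play the role of the four cluster prototypes -- can be sketched as follows. This is a simplified illustration of standard quadrant coding, not the thesis's precise phase representation.

```python
def quadrant_bits(z):
    """Coarse IrisCode-style phase coding: quantize the phase of a
    complex filter response into one of four quadrants (two bits).
    The four quadrants act as the four cluster prototypes."""
    return (1 if z.real >= 0 else 0, 1 if z.imag >= 0 else 0)

def hamming(bits_a, bits_b):
    """Bitwise Hamming distance between two bit sequences; for quadrant
    codes it behaves like a coarse angular distance between phases."""
    return sum(x != y for x, y in zip(bits_a, bits_b))
```

Note how responses with opposite phase land in opposite quadrants and so differ in both bits, which is the sense in which Hamming distance acts as an angular distance.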
As biometric systems are vulnerable to replay, database, and brute-force attacks, such potential attacks must be analyzed before the systems are massively deployed in security applications. I propose a projected multinomial distribution for studying the probability of successfully using brute-force attacks to break into a palmprint system based on Competitive Code. The proposed model indicates that it is computationally infeasible to break into the palmprint system using brute-force attacks. In addition to brute-force attacks, I address three other security issues: template re-issuance, also called cancellable biometrics; replay attacks; and database attacks. A random orientation filter bank (ROFB) is used to generate cancellable Competitive Codes for template re-issuance. Secret messages are hidden in templates to prevent replay and database attacks; this technique can be regarded as template watermarking. A series of analyses is provided to evaluate the security levels of these measures.
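As a rough illustration of why brute-force attacks are infeasible, the probability that a uniformly random code matches a template within a mismatch threshold can be estimated with a simple binomial model. This is a deliberately simplified stand-in for the thesis's projected multinomial distribution, which accounts for correlations between code bits that this sketch ignores.

```python
from math import comb

def random_match_prob(n_bits, max_mismatch):
    """Probability that a uniformly random n-bit code falls within
    max_mismatch bits of a fixed template, assuming independent
    equiprobable bits (a simplified binomial model; real code bits
    are correlated, which the projected multinomial model addresses)."""
    total = sum(comb(n_bits, k) for k in range(max_mismatch + 1))
    return total / 2 ** n_bits
```

Even for modest template sizes and generous thresholds, this probability is astronomically small, which is the qualitative conclusion the thesis's more careful model supports.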

    Recent Application in Biometrics

    In recent years, a number of recognition and authentication systems based on biometric measurements have been proposed. Algorithms and sensors have been developed to acquire and process many different biometric traits. Moreover, biometric technology is being used in novel ways, with potential commercial and practical implications for our daily activities. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in biometrics. The topics covered in this book reflect both aspects of development well. They include biometric sample quality, privacy-preserving and cancellable biometrics, contactless biometrics, novel and unconventional biometrics, and the technical challenges of implementing the technology in portable devices. The book consists of 15 chapters, divided into four sections: biometric applications on mobile platforms, cancellable biometrics, biometric encryption, and other applications. The book was reviewed by editors Dr. Jucheng Yang and Dr. Norman Poh. We deeply appreciate the efforts of our guest editors: Dr. Girija Chetty, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, as well as a number of anonymous reviewers.

    Biometric Systems

    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    Automatic mesh generation and adaptive remeshing for geological modelling


    A novel approach for multimodal graph dimensionality reduction

    This thesis deals with the problem of multimodal dimensionality reduction (DR), which arises when the input objects, to be mapped onto a low-dimensional space, consist of multiple vectorial representations instead of a single one. The problem is addressed here in two alternative ways. One is based on the traditional notion of modality fusion, but uses a novel approach to determine the fusion weights: in order to fuse the modalities optimally, the known graph-embedding DR framework is extended to multiple modalities by considering a weighted sum of the involved affinity matrices, with the weights calculated automatically by minimizing an introduced notion of inconsistency of the resulting multimodal affinity matrix. The other way of dealing with the problem is to consider all modalities simultaneously, without fusing them, which has the advantage of minimal information loss due to fusion. To avoid fusion, the problem is viewed as a multi-objective optimization problem: the multiple objective functions are defined on graph representations of the data, so that their individual minimization leads to dimensionality reduction for each modality separately. The aim is to combine the multiple modalities without the need to assign importance weights to them, or at least to postpone such an assignment until a last step. The proposed approaches were experimentally tested by mapping multimedia data onto low-dimensional spaces for visualization, classification, and clustering. The no-fusion approach, namely Multi-objective DR, was able to discover mappings revealing the structure of all modalities simultaneously, which cannot be discovered by weight-based fusion methods. However, it results in a set of optimal trade-offs, from which one must be selected, which is not trivial.
The optimal-fusion approach, namely Multimodal Graph Embedding DR, can easily extend unimodal DR methods to multiple modalities, but inherits the limitations of the unimodal DR method used. Both the no-fusion and the optimal-fusion approaches were compared to state-of-the-art multimodal dimensionality reduction methods, and the comparison showed performance improvements in visualization, classification, and clustering tasks. The proposed approaches were also evaluated on different types of problems and data in two diverse application fields: a visual-accessibility-enhanced search engine and a visualization tool for mobile network security data. The results verified their applicability in different domains and suggested promising directions for future advancements.
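The weighted-fusion idea -- a weighted sum of per-modality affinity matrices fed into a graph-embedding DR step -- can be sketched as follows. The function names and the simple Laplacian-eigenmap embedding are illustrative assumptions; in particular, the thesis chooses the weights by minimizing an inconsistency measure, which is omitted here in favor of user-supplied weights.

```python
import numpy as np

def fuse_affinities(affinities, weights):
    """Multimodal fusion as a weighted sum of per-modality affinity
    matrices, W = sum_i w_i * A_i, with weights normalized to sum to 1."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * A for w, A in zip(weights, affinities))

def laplacian_embedding(W, dim):
    """Classic graph-embedding DR step: embed nodes using eigenvectors
    of the graph Laplacian L = D - W for the smallest nonzero
    eigenvalues (a Laplacian-eigenmap-style mapping)."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]        # skip the constant eigenvector
```

The design choice here is that fusion happens before embedding, so a single low-dimensional map is produced; the thesis's no-fusion alternative instead keeps one objective per modality and optimizes them jointly.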

    CONTACTLESS FINGERPRINT BIOMETRICS: ACQUISITION, PROCESSING, AND PRIVACY PROTECTION

    Biometrics is defined by the International Organization for Standardization (ISO) as "the automated recognition of individuals based on their behavioral and biological characteristics". Examples of distinctive features evaluated by biometrics, called biometric traits, are behavioral characteristics such as the signature, gait, voice, and keystroke dynamics, and biological characteristics such as the fingerprint, face, iris, retina, hand geometry, palmprint, ear, and DNA. Biometric recognition is the process of establishing a person's identity, and it can be performed in two modalities: verification and identification. The verification modality evaluates whether the identity declared by an individual corresponds to the acquired biometric data. In the identification modality, by contrast, the recognition application must determine a person's identity by comparing the acquired biometric data with the information for a set of individuals. Compared with traditional techniques used to establish a person's identity, biometrics offers greater confidence that the authenticated individual is not being impersonated. Traditional techniques are based on surrogate representations of identity, such as tokens, smart cards, and passwords, which, unlike biometric traits, can easily be stolen or copied. This has permitted a wide diffusion of biometrics in different scenarios, such as physical access control, government applications, forensic applications, and logical access control to data, networks, and services. Most biometric applications, also called biometric systems, require that the acquisition process be performed in a highly controlled and cooperative manner. To obtain good-quality biometric samples, the acquisition procedures of these systems require users to perform deliberate actions, assume specific poses, and remain still for a period of time.
Limitations on the applicative scenarios can also be present, for example the need for specific lighting and environmental conditions. Examples of biometric technologies that traditionally require constrained acquisitions are those based on the face, iris, fingerprint, and hand characteristics. Traditional face recognition systems require users to take a neutral pose and remain still for a period of time; the acquisitions use a frontal camera and are performed under controlled lighting conditions. Iris acquisitions are usually performed at a distance of less than 30 cm from the camera and require the user to assume a defined pose and stay still watching the camera; moreover, they use near-infrared illumination, which can be perceived as dangerous to the health. Fingerprint recognition systems and systems based on hand characteristics require users to touch the sensor surface, applying proper and uniform pressure. The contact with the sensor is often perceived as unhygienic and/or associated with a police procedure. Such constrained acquisition techniques can drastically reduce the usability and social acceptance of biometric technologies, thereby decreasing the number of possible applicative contexts in which biometric systems could be used. In traditional fingerprint recognition systems, usability and user acceptance are not the only drawbacks of the acquisition procedures: the contact of the finger with the sensor platen introduces a security weakness, since a latent fingerprint is left on the touched surface; dirt on the surface of the finger can reduce the accuracy of the recognition process; and different pressures applied to the sensor platen can introduce non-linear distortions and low-contrast regions in the captured samples.
Other crucial aspects that influence the social acceptance of biometric systems are associated with privacy and the risks of misuse of the biometric information acquired, stored, and transmitted by the systems. One of the most important perceived risks is that people consider the acquisition of biometric traits an exact, permanent record of their activities and behaviors, and the idea that biometric systems can guarantee 100% recognition accuracy is very common. Other perceived risks include the use of collected biometric data for malicious purposes, for tracing all the activities of individuals, or for operating proscription lists. To increase the usability and social acceptance of biometric systems, researchers are studying less-constrained biometric recognition techniques based on different biometric traits, for example, face recognition in surveillance applications, iris recognition from images captured at a distance and on the move, and contactless technologies based on fingerprint and hand characteristics. Other recent studies aim to reduce the real and perceived privacy risks, and consequently increase the social acceptance of biometric technologies. In this context, many studies concern methods that perform the identity comparison in the encrypted domain in order to prevent possible theft and misuse of biometric data. The objective of this thesis is to research approaches able to increase the usability and social acceptance of biometric systems by performing less-constrained and highly accurate biometric recognition in a privacy-compliant manner. In particular, approaches designed for high-security contexts are studied in order to improve the existing technologies adopted in border control, investigative, and governmental applications.
Approaches based on low-cost hardware configurations are also researched, with the aim of increasing the number of possible applicative scenarios for biometric systems. Privacy compliance is considered a crucial aspect in all the studied applications. The fingerprint is specifically considered in this thesis, since this biometric trait is characterized by high distinctiveness and durability, is the most widely studied trait in the literature, and is adopted in a wide range of applicative contexts. The studied contactless biometric systems are based on one or more CCD cameras, can use two-dimensional or three-dimensional samples, and include privacy protection methods. The main goal of these systems is to perform accurate and privacy-compliant recognition in less-constrained applicative contexts than traditional fingerprint biometric systems allow. Other important goals are the use of a wider fingerprint area than traditional techniques, compatibility with existing databases, usability, social acceptance, and scalability. The main contribution of this thesis is the realization of novel biometric systems based on contactless fingerprint acquisition. In particular, different techniques for every step of the recognition process, based on two-dimensional and three-dimensional samples, have been researched. Novel techniques for the privacy protection of fingerprint data have also been designed. The studied approaches are multidisciplinary, since their design and realization involved optical acquisition systems, multiple-view geometry, image processing, pattern recognition, computational intelligence, statistics, and cryptography. The implemented biometric systems and algorithms have been applied to different biometric datasets describing a heterogeneous set of applicative scenarios. The results proved the feasibility of the studied approaches.
In particular, the realized contactless biometric systems have been compared with traditional fingerprint recognition systems, obtaining positive results in terms of accuracy, usability, user acceptability, scalability, and security. Moreover, the developed techniques for the privacy protection of fingerprint biometric systems showed satisfactory performance in terms of security, accuracy, speed, and memory usage.
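The verification and identification modalities described above can be sketched as follows. This is a minimal illustration using Euclidean distance on generic feature vectors; real systems use trait-specific matchers and thresholds.

```python
import numpy as np

def verify(probe, template, threshold):
    """1:1 verification: accept if the probe is close enough to the
    template of the claimed identity."""
    return bool(np.linalg.norm(probe - template) <= threshold)

def identify(probe, gallery):
    """1:N identification: return the enrolled identity whose template
    is nearest to the probe."""
    dists = {name: np.linalg.norm(probe - t) for name, t in gallery.items()}
    return min(dists, key=dists.get)
```

The two modalities differ in cost and error behavior: verification is one comparison against one template, while identification compares against the whole gallery, so its false-match risk grows with gallery size.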

    Discriminative preprocessing of speech: towards improving biometric authentication

    In the context of the SecurePhone project, a multimodal user authentication system was developed for implementation on a PDA. Extending this system, we investigate biometric feature enhancement and multi-feature fusion with the aim of improving user authentication accuracy. In this dissertation, a general framework for feature enhancement is proposed which uses a multilayer perceptron (MLP) to achieve optimal speaker discrimination. First, to train this MLP, a subset of speakers (the speaker basis) is used to represent the underlying characteristics of the given acoustic feature space. Second, the size of the speaker basis is found to be among the crucial factors affecting the performance of a speaker recognition system. Third, it is found that the selection of the speaker basis can also influence system performance. Based on this observation, an automatic speaker selection approach is proposed on the basis of the maximal average between-class variance. Tests in a variety of conditions, including clean and noisy as well as telephone speech, show that this approach can improve the performance of speaker recognition systems.
This approach, which is applied here to feature enhancement for speaker recognition, can be expected to also be effective with other biometric modalities besides speech. Further, an alternative feature representation is proposed in this dissertation, derived from what we call speaker voice signatures (SVS). These are trajectories in a Kohonen self-organising map (SOM) which has been trained to represent the acoustic space. This feature representation is found to be somewhat complementary to the baseline feature set, suggesting that the two can be fused to achieve improved performance in speaker recognition. Finally, the dissertation closes with a number of potential extensions of the proposed approaches. Keywords: feature enhancement, MLP, SOM, speaker basis selection, speaker recognition, biometric, authentication, verification
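The basis-selection idea -- choosing the speakers that maximize the average between-class variance -- can be sketched roughly as follows. The score used here (mean squared distance between each speaker's mean feature vector and all others') is an illustrative proxy, not the dissertation's exact criterion, and the function name is hypothetical.

```python
import numpy as np

def select_speaker_basis(features_by_speaker, basis_size):
    """Pick the basis_size speakers whose mean feature vectors are, on
    average, farthest from the other speakers' means -- a simple proxy
    for maximizing the average between-class variance."""
    names = list(features_by_speaker)
    means = np.stack([features_by_speaker[n].mean(axis=0) for n in names])
    # pairwise squared distances between speaker means
    d2 = ((means[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    # average squared distance of each speaker's mean to all other means
    scores = d2.sum(axis=1) / (len(names) - 1)
    order = np.argsort(scores)[::-1]
    return [names[i] for i in order[:basis_size]]
```

The selected subset would then serve as the training targets for the discriminative MLP, with each basis speaker acting as one output class.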