
    Contactless Fingerprint Biometrics: Acquisition, Processing, and Privacy Protection

    Biometrics is defined by the International Organization for Standardization (ISO) as "the automated recognition of individuals based on their behavioral and biological characteristics". Examples of distinctive features evaluated by biometrics, called biometric traits, are behavioral characteristics such as the signature, gait, voice, and keystroke dynamics, and biological characteristics such as the fingerprint, face, iris, retina, hand geometry, palmprint, ear, and DNA. Biometric recognition is the process of establishing the identity of a person, and can be performed in two modalities: verification and identification. The verification modality evaluates whether the identity declared by an individual corresponds to the acquired biometric data. In the identification modality, by contrast, the recognition application has to determine a person's identity by comparing the acquired biometric data with the information related to a set of individuals. Compared with traditional techniques used to establish the identity of a person, biometrics offers greater confidence that the authenticated individual is not being impersonated by someone else. Traditional techniques, in fact, are based on surrogate representations of identity, such as tokens, smart cards, and passwords, which can be stolen or copied far more easily than biometric traits. This characteristic has permitted a wide diffusion of biometrics in different scenarios, such as physical access control, government applications, forensic applications, and logical access control to data, networks, and services. Most biometric applications, also called biometric systems, require the acquisition process to be performed in a highly controlled and cooperative manner. In order to obtain good-quality biometric samples, the acquisition procedures of these systems require that users perform deliberate actions, assume determinate poses, and stay still for a period of time. Limitations on the applicative scenarios can also be present, for example the need for specific lighting and environmental conditions. Examples of biometric technologies that traditionally require constrained acquisitions are those based on face, iris, fingerprint, and hand characteristics. Traditional face recognition systems require users to take a neutral pose and stay still for a period of time; moreover, the acquisitions are based on a frontal camera and performed under controlled lighting conditions. Iris acquisitions are usually performed at a distance of less than 30 cm from the camera, and require that the user assume a defined pose and stay still while watching the camera; moreover, they use near-infrared illumination, which can be perceived as dangerous to health. Fingerprint recognition systems and systems based on hand characteristics require that users touch the sensor surface applying a proper and uniform pressure. The contact with the sensor is often perceived as unhygienic and/or associated with a police procedure. These constrained acquisition techniques can drastically reduce the usability and social acceptance of biometric technologies, thereby decreasing the number of possible applicative contexts in which biometric systems could be used.
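The two modalities can be made concrete with a minimal sketch. The following Python fragment is illustrative only (hypothetical feature vectors compared by Euclidean distance; real fingerprint systems match richer representations such as minutiae): it contrasts 1:1 verification against a claimed identity with 1:N identification over a gallery.

```python
import numpy as np

def verify(probe, enrolled_template, threshold=0.6):
    """1:1 verification: does the probe match the claimed identity?

    Illustrative only: real systems compare minutiae or other
    fingerprint features, not raw vectors.
    """
    distance = np.linalg.norm(probe - enrolled_template)
    return distance <= threshold

def identify(probe, gallery, threshold=0.6):
    """1:N identification: search the whole gallery for the best match."""
    best_id, best_dist = None, np.inf
    for subject_id, template in gallery.items():
        d = np.linalg.norm(probe - template)
        if d < best_dist:
            best_id, best_dist = subject_id, d
    return best_id if best_dist <= threshold else None
```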
In traditional fingerprint recognition systems, usability and user acceptance are not the only negative aspects of the acquisition procedures: the contact of the finger with the sensor platen introduces a security weakness due to the release of a latent fingerprint on the touched surface; the presence of dirt on the surface of the finger can reduce the accuracy of the recognition process; and different pressures applied to the sensor platen can introduce non-linear distortions and low-contrast regions in the captured samples. Other crucial aspects that influence the social acceptance of biometric systems are associated with privacy and the risks related to misuse of the biometric information acquired, stored, and transmitted by the systems. One of the most important perceived risks is that people consider the acquisition of biometric traits an exact, permanent record of their activities and behaviors, and the idea that biometric systems can guarantee 100% recognition accuracy is very common. Other perceived risks consist in the use of the collected biometric data for malicious purposes, for tracing all the activities of individuals, or for operating proscription lists. In order to increase the usability and social acceptance of biometric systems, researchers are studying less-constrained biometric recognition techniques based on different biometric traits, for example face recognition systems in surveillance applications, iris recognition techniques based on images captured at a great distance and on the move, and contactless technologies based on fingerprint and hand characteristics. Other recent studies aim to reduce the real and perceived privacy risks, and consequently to increase the social acceptance of biometric technologies. In this context, many studies concern methods that perform the identity comparison in the encrypted domain in order to prevent possible thefts and misuses of biometric data. The objective of this thesis is to research approaches able to increase the usability and social acceptance of biometric systems by performing less-constrained and highly accurate biometric recognition in a privacy-compliant manner. In particular, approaches designed for high-security contexts are studied in order to improve the existing technologies adopted in border control, investigative, and governmental applications. Approaches based on low-cost hardware configurations are also researched, with the aim of increasing the number of possible applicative scenarios for biometric systems. Privacy compliance is considered a crucial aspect in all the studied applications. The fingerprint is specifically considered in this thesis, since this biometric trait is characterized by high distinctiveness and durability, is the most studied trait in the literature, and is adopted in a wide range of applicative contexts. The studied contactless biometric systems are based on one or more CCD cameras, can use two-dimensional or three-dimensional samples, and include privacy protection methods. The main goal of these systems is to perform accurate and privacy-compliant recognition in less-constrained applicative contexts with respect to traditional fingerprint biometric systems. Other important goals are the use of a wider fingerprint area with respect to traditional techniques, compatibility with existing databases, usability, social acceptance, and scalability.
The main contribution of this thesis consists in the realization of novel biometric systems based on contactless fingerprint acquisitions. In particular, different techniques for every step of the recognition process, based on two-dimensional and three-dimensional samples, have been researched. Novel techniques for the privacy protection of fingerprint data have also been designed. The studied approaches are multidisciplinary, since their design and realization involved optical acquisition systems, multiple-view geometry, image processing, pattern recognition, computational intelligence, statistics, and cryptography. The implemented biometric systems and algorithms have been applied to different biometric datasets describing a heterogeneous set of applicative scenarios. The results proved the feasibility of the studied approaches. In particular, the realized contactless biometric systems have been compared with traditional fingerprint recognition systems, obtaining positive results in terms of accuracy, usability, user acceptability, scalability, and security. Moreover, the developed techniques for the privacy protection of fingerprint biometric systems showed satisfactory performance in terms of security, accuracy, speed, and memory usage.

    Signal processing and machine learning techniques for human verification based on finger textures

    PhD thesis. In recent years, Finger Textures (FTs) have attracted considerable attention as potential biometric characteristics. They can provide robust recognition performance, as they have various human-specific features, such as wrinkles and apparent lines distributed along the inner surface of all fingers. The main topic of this thesis is verifying people according to their unique FT patterns by exploiting signal processing and machine learning techniques. A Robust Finger Segmentation (RFS) method is first proposed to isolate finger images from a hand area. It is able to detect the fingers as objects in a hand image. An efficient adaptive finger segmentation method, called the Adaptive and Robust Finger Segmentation (ARFS) method, is also suggested to address the problem of alignment variations in the hand image. A new Multi-scale Sobel Angles Local Binary Pattern (MSALBP) feature extraction method is proposed, which combines the Sobel direction angles with the Multi-Scale Local Binary Pattern (MSLBP). Moreover, an enhanced method called the Enhanced Local Line Binary Pattern (ELLBP) is designed to efficiently analyse the FT patterns. As a result, a powerful human verification scheme based on finger Feature Level Fusion with a Probabilistic Neural Network (FLFPNN) is proposed. A multi-object fusion method, termed the Finger Contribution Fusion Neural Network (FCFNN), combines the contribution scores of the finger objects. The verification performance is examined in the case of missing FT areas. Consequently, to overcome finger regions that are poorly imaged, a method is suggested to salvage missing FT elements by exploiting the information embedded within the trained Probabilistic Neural Network (PNN). Finally, a novel method to produce a Receiver Operating Characteristic (ROC) curve from a PNN is suggested. Furthermore, a further development of this method is applied to generate the ROC graph from the FCFNN. Three databases are employed for evaluation: the Hong Kong Polytechnic University Contact-free 3D/2D (PolyU3D2D) database, the Indian Institute of Technology (IIT) Delhi database, and the Spectral 460nm (S460) dataset from the CASIA Multi-Spectral (CASIAMS) database. Comparative simulation studies confirm the efficiency of the proposed methods for human verification. The main advantage of both segmentation approaches, the RFS and the ARFS, is that they can collect all the FT features. The best results have been benchmarked for the ELLBP feature extraction with the FCFNN, where the best Equal Error Rate (EER) values achieved for the three databases PolyU3D2D, IIT Delhi and CASIAMS (S460) are 0.11%, 1.35% and 0%, respectively. The proposed salvage approach for the missing feature elements has the capability to enhance the verification performance of the FLFPNN. Moreover, ROC graphs have been successfully established from the PNN and the FCFNN. This work was supported by the Ministry of Higher Education and Scientific Research in Iraq (MOHESR); the Technical College of Mosul; the Iraqi Cultural Attaché; and the active people in the MOHESR, who strongly supported Iraqi students.
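The descriptors above build on the standard Local Binary Pattern operator. As a point of reference, here is a minimal sketch of the basic 8-neighbour LBP; it illustrates the general operator only, not the MSALBP or ELLBP variants proposed in the thesis.

```python
import numpy as np

def lbp_8neighbour(img):
    """Basic 8-neighbour Local Binary Pattern.

    Each interior pixel is assigned an 8-bit code: one bit per
    neighbour, set when the neighbour is >= the centre pixel.
    General LBP illustration only, not the thesis's variants.
    """
    img = np.asarray(img, dtype=np.int32)
    # Clockwise offsets starting at the top-left neighbour.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    codes = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

# A texture is then typically described by the histogram of codes:
# hist = np.bincount(lbp_8neighbour(image).ravel(), minlength=256)
```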

    Privacy-Preserving Biometric Authentication

    Biometric-based authentication provides a highly accurate means of authentication without requiring the user to memorize or possess anything. However, there are three disadvantages to the use of biometrics in authentication: any compromise is permanent, as it is impossible to revoke biometrics; there are significant privacy concerns with the loss of biometric data; and humans possess only a limited number of biometrics, which limits how many services can use or reuse the same form of authentication. As such, enhancing biometric template security is of significant research interest. One methodology is the cancellable biometric template, which applies an irreversible transformation to the features of the biometric sample and performs the matching in the transformed domain. Yet this is itself susceptible to specific classes of attacks, including hill-climbing, pre-image, and attack via record multiplicity. This work has several outcomes and contributions to the knowledge of privacy-preserving biometric authentication. The first of these is a taxonomy structuring the current state of the art and provisions for future research. The next is a multi-filter framework for developing a robust and secure cancellable biometric template, designed specifically for fingerprint biometrics. This framework comprises two modules, each of which is a separate cancellable fingerprint template with its own matching and measures; the matching is based on multiple thresholds. Importantly, these methods show strong resistance to the above-mentioned attacks. Another outcome is a method that achieves stable performance and can be embedded into a Zero-Knowledge-Proof protocol. In this novel method, a new strategy is proposed to improve the recognition error rates while preserving privacy in an untrusted environment. The results show promising performance when evaluated on current datasets.
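One common way to realize a cancellable template, shown here only as a rough illustration of the idea (not the multi-filter framework of this work), is a key-seeded random projection: matching happens in the transformed domain, and a compromised template is revoked by issuing the user a new key.

```python
import numpy as np

def cancellable_template(features, user_key, out_dim=64):
    """Transform a feature vector with a key-seeded random projection.

    Rough illustration of the cancellable-template idea (not the
    thesis's multi-filter framework): matching is done in the
    projected domain, and the template is revoked by changing the key.
    """
    rng = np.random.default_rng(user_key)
    projection = rng.standard_normal((out_dim, features.size))
    return np.sign(projection @ features)  # binarize for storage/matching

def match(template_a, template_b, threshold=0.9):
    """Match two transformed templates by their fraction of agreeing bits."""
    return np.mean(template_a == template_b) >= threshold
```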

    Investigation of Multimodal Template-Free Biometric Techniques and Associated Exception Handling

    Biometric systems are commonly used as a fundamental tool by both government and private-sector organizations to allow restricted access to sensitive areas, to identify criminals in police work, and to authenticate the identity of individuals requesting access to certain personal and confidential services. The applications of these identification tools have created issues of security and privacy relating to personal, commercial, and government identities. Over the last decade, reports of increasing threats to the personal data of users in public and commercial applications have prompted the development of more robust and sound measures to protect this data from theft and spoofing. The present study introduces a scheme for integrating direct and indirect biometric key generation with Shamir's secret sharing algorithm in order to address two disadvantages: the revocability of the biometric key and the exception handling of biometric modalities. This study used two different approaches for key generation with Shamir's secret sharing scheme: a template-based approach for indirect key generation, and a template-free approach. The findings of this study demonstrated that the encryption key generated by the proposed system did not need to be stored in the database, which prevents attacks on the privacy of individuals' data. Interestingly, the proposed system was also able to generate multiple encryption keys of varying lengths. Furthermore, the results of this study also offered the flexibility of providing multiple keys for different applications for each user. The results, consequently, show the considerable potential of the proposed scheme to generate encryption keys directly and indirectly from biometric samples, which could enhance its success in the biometric security field.
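Shamir's secret sharing, on which the proposed key-generation scheme builds, hides a secret as the constant term of a random polynomial over a prime field; any threshold-sized subset of shares recovers it by Lagrange interpolation. A minimal self-contained sketch follows; the biometric derivation of shares used in the study is not reproduced here.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for short keys

def split_secret(secret, n_shares, threshold):
    """Split `secret` into n shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split_secret(123456789, n_shares=5, threshold=3)
assert reconstruct(shares[:3]) == 123456789  # any 3 shares suffice
```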

    Chemometric tools for automated method-development and data interpretation in liquid chromatography

    The thesis explores the challenges and advancements in the field of liquid chromatography (LC), particularly focusing on complex sample analysis using high-resolution mass spectrometry (MS) and two-dimensional (2D) LC techniques. The research addresses the need for efficient optimization and data-handling strategies in modern LC practice. The thesis is divided into several chapters, each addressing specific aspects of LC and polymer analysis. Chapter 2 provides an overview of the need for chemometric tools in LC practice, discussing methods for processing and analyzing data from 1D and 2D-LC systems and how chemometrics can be utilized for method development and optimization. Chapter 3 introduces a novel approach for interpreting the molecular-weight distribution and intrinsic viscosity of polymers, allowing quantitative analysis of polymer properties without prior knowledge of their interactions. This method correlates the curvature parameter of the Mark-Houwink plot with the polymer's structural and chemical properties. Chapters 4 and 5 focus on the analysis of cellulose ethers (CEs), essential in various industrial applications. A new method is presented for mapping the substitution degree and composition of CE samples, providing detailed compositional distributions. Another method involves a comprehensive 2D LC-MS/MS approach for analyzing hydroxypropyl methyl cellulose (HPMC) monomers, revealing subtle differences in composition between industrial HPMC samples. Chapter 6 introduces AutoLC, an algorithm for automated and interpretive development of 1D-LC separations. It uses retention modeling and Bayesian optimization to achieve optimal separation within a few iterations, significantly improving the efficiency of gradient LC separations. Chapter 7 focuses on the development of an open-source algorithm for automated method development in 2D-LC-MS systems. This algorithm improves separation performance by refining gradient profiles and accurately predicting peak widths, enhancing the reliability of complex gradient LC separations. Chapter 8 addresses the challenge of gradient deformation in LC instruments. An algorithm based on the stable function corrects instrument-specific gradient deformations, enabling accurate determination of analyte retention parameters and improving data comparability between different sources. Chapter 9 introduces a novel approach using capacitively-coupled-contactless-conductivity detection (C4D) to measure gradient profiles without adding tracer components. This method enhances inter-system transferability of retention models for polymers, overcoming the limitations of UV-absorbance detectable tracer components. Chapter 10 discusses practical choices and challenges faced in the thesis chapters, highlighting the need for well-defined standard samples in industrial polymer analysis and emphasizing the importance of generalized problem-solving approaches. The thesis identifies future research directions, emphasizing the importance of computational-assisted methods for polymer analysis, the utilization of online reaction modulation techniques, and exploring continuous distributions obtained through size-exclusion chromatography (SEC) in conjunction with triple detection. Chemometric tools are recognized as essential for gaining deeper insights into polymer chemistry and improving data interpretation in the field of LC
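For reference, the Mark-Houwink plot mentioned for Chapter 3 is based on the standard Mark-Houwink relation between intrinsic viscosity and molar mass; the curvature-parameter extension introduced in the thesis is not reproduced here.

```latex
% Standard Mark-Houwink relation: intrinsic viscosity [eta] as a
% power law in molar mass M, with empirical constants K and a
% (a reflects polymer conformation and solvent quality).
\[
  [\eta] = K \, M^{a}
  \qquad\Longleftrightarrow\qquad
  \log [\eta] = \log K + a \log M
\]
```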

    Indoor location identification technologies for real-time IoT-based applications: an inclusive survey

    The advent of the Internet of Things has witnessed tremendous success in the application of wireless sensor networks and ubiquitous computing for diverse smart applications. The developed systems operate under different technologies, using different methods to achieve their targeted goals. In this treatise, we carried out an inclusive survey of key indoor technologies and techniques, with a view to exploring their various benefits, limitations, and areas for improvement. The mathematical formulation for simple localization problems is also presented. In addition, an empirical evaluation of the performance of these indoor technologies is carried out using common generic metrics of scalability, accuracy, complexity, robustness, energy efficiency, cost, and reliability. An empirical evaluation of the performance of different RF-based technologies establishes the viability of Wi-Fi, RFID, UWB, Bluetooth, ZigBee, and Light over other indoor technologies for reliable IoT-based applications. Furthermore, the survey advocates hybridization of technologies as an effective approach to achieving reliable IoT-based indoor systems. The findings of the survey could be useful in the selection of appropriate indoor technologies for the development of reliable real-time indoor applications. The study could also be used as a reliable source for literature referencing on the subject of indoor location identification. Supported in part by the Tertiary Education Trust Fund of the Federal Government of Nigeria, and in part by the European Union's Horizon 2020 Research and Innovation Programme under Grant agreement H2020-MSCA-ITN-2016 SECRET-72242.
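As an illustration of the kind of simple localization formulation the survey presents, the sketch below solves 2-D trilateration from range estimates by subtracting the last anchor's equation to remove the quadratic terms and solving the resulting least-squares system; this is the textbook formulation, not necessarily the paper's exact one.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2-D position from anchor coordinates and ranges.

    Subtracting the last anchor's range equation from the others
    cancels the quadratic unknowns, leaving a linear system A p = b.
    Textbook formulation, offered as an illustration only.
    """
    anchors = np.asarray(anchors, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2 * (anchors[:-1] - anchors[-1])
    b = (r[-1]**2 - r[:-1]**2
         + np.sum(anchors[:-1]**2, axis=1) - np.sum(anchors[-1]**2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Example: three anchors at known positions, equal ranges to a tag at (5, 5).
print(trilaterate([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]))
```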

    Terahertz imaging and spectroscopy: application to defense and security

    The aim of this work is to demonstrate the potential and capabilities of terahertz technology for parcel screening and inspection to detect threats such as weapons and explosives, without the need to open the parcel. In this study, we first present terahertz time-domain spectroscopy and spectral imaging for explosives detection. Two types of explosives, as well as their binary mixture, are analyzed. Due to the complexity of extracting information from such sample mixtures, three chemometric tools are used: principal component analysis (PCA), partial least squares analysis (PLS), and partial least squares discriminant analysis (PLS-DA). The analyses are applied to terahertz spectral data and to spectral images in order to: (i) describe a set of unknown data and identify similarities between samples, by PCA; (ii) create a classification model and predict the class membership of unknown samples, by PLS-DA; (iii) create a model able to quantify and predict the explosive concentrations in a pure state or in mixtures, by PLS. The second part of this work focuses on millimeter-wave imaging for weapon detection in parcels. Three different imaging techniques are studied: passive imaging, continuous-wave (CW) active imaging, and frequency-modulated continuous-wave (FMCW) active imaging. The performances, advantages, and limitations of each of the three techniques for parcel inspection are exhibited. Moreover, computed tomography is applied to each of the three techniques to visualize data in 3D and inspect parcels in volume. For this purpose, a special tomography algorithm is developed that takes into consideration the Gaussian propagation of the wave.
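A minimal sketch of the three-step chemometric pipeline described above, using scikit-learn on hypothetical spectra (rows of absorbance values per frequency bin); the preprocessing and validation used in the thesis are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

# Hypothetical data: rows are terahertz spectra, columns frequency bins.
rng = np.random.default_rng(0)
spectra = rng.random((40, 200))
concentration = rng.random(40)                 # explosive concentration
labels = (concentration > 0.5).astype(float)   # binary class membership

# (i) PCA: project spectra onto a few components to explore similarity.
scores = PCA(n_components=2).fit_transform(spectra)

# (ii) PLS-DA: PLS regression against class labels, then threshold.
plsda = PLSRegression(n_components=3).fit(spectra, labels)
predicted_class = (plsda.predict(spectra).ravel() > 0.5).astype(int)

# (iii) PLS: regress concentrations directly from the spectra.
pls = PLSRegression(n_components=3).fit(spectra, concentration)
predicted_conc = pls.predict(spectra).ravel()
```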

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals, in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model allows the quality of the interaction to be evaluated quantitatively, using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source low-cost robotic head platform, where gaze is the social signal considered.
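As a loose illustration of how a recognition phase can be scored statistically in such a framework, the sketch below correlates the social signal actually emitted with the one the robot recognizes; the paper's exact statistical tools are not given here, so this is an assumption.

```python
import numpy as np

def recognition_effectiveness(emitted, recognized):
    """Pearson correlation between emitted and recognized signal values.

    Hypothetical illustration: the quality of the recognition phase is
    scored by how well the recognized social signal tracks the one
    actually emitted. Not the paper's specific statistical tools.
    """
    emitted = np.asarray(emitted, dtype=float)
    recognized = np.asarray(recognized, dtype=float)
    return np.corrcoef(emitted, recognized)[0, 1]

# Example: ground-truth gaze targets vs. the robot's estimates.
print(recognition_effectiveness([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))
```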