324 research outputs found

    Risk analysis in biometric-based Border Inspection System

    Get PDF
    The main goal of a Border Inspection System is to prevent the entry of individuals who pose a threat to a country. The entry of just one of these persons could have severe consequences. Nevertheless, performing a lengthy border inspection is not possible, given that 240,737 international passengers enter the country on an average day [5]. For this reason, the primary inspection is performed using biometric traits and information-flow processes that have a low false acceptance rate and high throughput. This thesis uses the analytic modeling tool LQNS (Layered Queueing Network Solver) to solve open models of a biometric-based border inspection system, and uses cost curves to evaluate the risk. The contributions of the thesis include a performance model of a biometric-based border inspection using open workloads and a risk model of a biometric-based border inspection using cost curves. Further, we propose an original methodology for analyzing a combination of performance risk and security risk in the border inspection system.
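The throughput concern above can be illustrated with a simple open queueing calculation. This is a minimal M/M/c sketch, not the thesis's LQNS layered model; the lane count and per-lane service rate below are hypothetical assumptions chosen only to show the style of analysis:

```python
def erlang_b(a, c):
    """Erlang-B blocking probability via the numerically stable recursion."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

def mmc_metrics(arrival_rate, service_rate, servers):
    """Utilization, probability of waiting (Erlang C), and mean queue wait for M/M/c."""
    a = arrival_rate / service_rate              # offered load in Erlangs
    rho = a / servers                            # per-server utilization
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization >= 1")
    b = erlang_b(a, servers)
    p_wait = b / (1.0 - rho * (1.0 - b))         # Erlang C from Erlang B
    wq = p_wait / (servers * service_rate - arrival_rate)  # mean wait in queue
    return rho, p_wait, wq

# 240,737 passengers/day is roughly 167.2 arrivals per minute; assume 200
# primary inspection lanes (hypothetical), each clearing 1 passenger/minute.
rho, p_wait, wq = mmc_metrics(arrival_rate=167.2, service_rate=1.0, servers=200)
```

Even this crude model shows the tension the thesis studies: tightening inspection (lowering the service rate) pushes utilization toward 1 and waiting times up, which is why security risk and performance risk must be analyzed together.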

    On the performance of helper data template protection schemes

    Get PDF
    The use of biometrics looks promising as it is already being applied in electronic passports, ePassports, on a global scale. Because the biometric data has to be stored as a reference template on either a central or personal storage device, its wide-spread use introduces new security and privacy risks such as (i) identity fraud, (ii) cross-matching, (iii) irrevocability and (iv) leaking sensitive medical information. Mitigating these risks is essential to obtain acceptance from the subjects of the biometric systems and thereby facilitate successful implementation on a large-scale basis. A solution to mitigate these risks is to use template protection techniques. The required protection properties of the stored reference template according to ISO guidelines are (i) irreversibility, (ii) renewability and (iii) unlinkability. A known template protection scheme is the helper data system (HDS). The fundamental principle of the HDS is to bind a key with the biometric sample by means of helper data and cryptography, such that the key can be reproduced or released given another biometric sample of the same subject. The identity check is then performed in a secure way by comparing the hash of the key. Hence, the size of the key determines the amount of protection. This thesis extensively investigates the HDS, namely (i) the theoretical classification performance, (ii) the maximum key size, (iii) the irreversibility and unlinkability properties, and (iv) the optimal multi-sample and multi-algorithm fusion method. The theoretical classification performance of the biometric system is determined by assuming that the features extracted from the biometric sample are Gaussian distributed. With this assumption we investigate the influence of the bit extraction scheme on the classification performance. Using the theoretical framework, the maximum size of the key is determined by assuming the error-correcting code to operate on Shannon's bound. We also show three vulnerabilities of the HDS that affect the irreversibility and unlinkability properties, and propose solutions. Finally, we study the optimal level of applying multi-sample and multi-algorithm fusion with the HDS at either feature-, score-, or decision-level.
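The key-binding principle described above can be shown with a toy fuzzy-commitment sketch. A 5-fold repetition code stands in for the real error-correcting code, and the bit lengths and majority-vote decoder are illustrative assumptions, not the scheme analyzed in the thesis:

```python
import hashlib
import secrets

REP = 5  # toy repetition code: each key bit is repeated REP times

def encode(key_bits):
    return [b for b in key_bits for _ in range(REP)]

def decode(code_bits):
    # majority vote per block corrects up to REP // 2 bit errors per key bit
    return [int(sum(code_bits[i * REP:(i + 1) * REP]) > REP // 2)
            for i in range(len(code_bits) // REP)]

def enroll(bio_bits, key_bits):
    # helper data = codeword XOR biometric; only helper + key hash are stored,
    # never the raw template itself
    helper = [c ^ b for c, b in zip(encode(key_bits), bio_bits)]
    pseudo_id = hashlib.sha256(bytes(key_bits)).hexdigest()
    return helper, pseudo_id

def verify(bio_bits, helper, pseudo_id):
    # a fresh, slightly different sample still releases the same key
    released = decode([h ^ b for h, b in zip(helper, bio_bits)])
    return hashlib.sha256(bytes(released)).hexdigest() == pseudo_id

key = [secrets.randbelow(2) for _ in range(8)]
enrolled = [secrets.randbelow(2) for _ in range(8 * REP)]
helper, pseudo_id = enroll(enrolled, key)
probe = enrolled[:]
probe[0] ^= 1    # two bit errors, each landing in a different code block,
probe[17] ^= 1   # stay within the toy code's correction capability
assert verify(probe, helper, pseudo_id)
```

The identity check compares only hashes of the key, matching the scheme's principle that the key size bounds the protection; the real analysis in the thesis replaces the repetition code with a code operating near Shannon's bound.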

    Deep Learning Architectures for Heterogeneous Face Recognition

    Get PDF
    Face recognition has been one of the most challenging areas of research in biometrics and computer vision. Many face recognition algorithms are designed to address illumination and pose problems for visible face images. In recent years, there has been a significant amount of research in Heterogeneous Face Recognition (HFR). The large modality gap between faces captured in different spectra, as well as the lack of training data, makes HFR quite a challenging problem. In this work, we present different deep learning frameworks to address the problem of matching non-visible face photos against a gallery of visible faces. Algorithms for thermal-to-visible face recognition can be categorized as cross-spectrum feature-based methods or cross-spectrum image synthesis methods. In cross-spectrum feature-based face recognition, a thermal probe is matched against a gallery of visible faces, corresponding to the real-world scenario, in a feature subspace. The second category synthesizes a visible-like image from a thermal image, which can then be used by any commercial visible-spectrum face recognition system. These methods are also beneficial in the sense that the synthesized visible face image can be directly utilized by existing face recognition systems that operate only on visible face imagery. Therefore, using this approach one can leverage existing commercial-off-the-shelf (COTS) and government-off-the-shelf (GOTS) solutions. In addition, the synthesized images can be used by human examiners for different purposes. There are some informative traits, such as age, gender, ethnicity, race, and hair color, which are not distinctive enough for recognition on their own, but can still act as complementary information to primary traits such as face and fingerprint. These traits, known as soft biometrics, can improve recognition algorithms while being much cheaper and faster to acquire.
    They can be directly used in a unimodal system for some applications. Usually, soft biometric traits have been utilized jointly with hard biometrics (face photos) for different tasks, in the sense that they are assumed to be available during both the training and testing phases. In our approaches we look at this problem in a different way. We consider the case where soft biometric information does not exist during the testing phase, and our method predicts it directly in a multi-tasking paradigm. There are situations in which training data comes equipped with additional information that can be modeled as an auxiliary view of the data but that, unfortunately, is not available during testing. This is the learning using privileged information (LUPI) scenario. We introduce a novel framework based on deep learning techniques that leverages the auxiliary view to improve the performance of the recognition system. We do so by introducing a formulation that is general, in the sense that it can be used with any visual classifier. Every use of auxiliary information has been validated extensively using publicly available benchmark datasets, and several new state-of-the-art accuracy values have been set. Examples of application domains include visual object recognition from RGB images and from depth data, handwritten digit recognition, and gesture recognition from video. We also design a novel aggregation framework which optimizes the landmark locations directly, using only one image and without requiring any extra prior, which leads to robust alignment under arbitrary face deformations. Three different approaches are employed to generate the manipulated faces, two of which perform the manipulation via adversarial attacks to fool a face recognizer. This step can be decoupled from our framework and potentially used to enhance other landmark detectors. Aggregation of the manipulated faces in the different branches of the proposed method leads to robust landmark detection.
    Finally, we focus on generative adversarial networks, a very powerful tool for synthesizing visible-like images from non-visible images. The main goal of a generative model is to approximate the true data distribution, which is not known. In general, the choice of how to model the density function is challenging. Explicit models have the advantage of explicitly calculating the probability densities. There are two well-known implicit approaches, namely the Generative Adversarial Network (GAN) and the Variational AutoEncoder (VAE), which try to model the data distribution implicitly. VAEs try to maximize a lower bound on the data likelihood, while a GAN performs a minimax game between two players during its optimization. GANs overlook the explicit characteristics of the data density, which leads to undesirable quantitative evaluations and to mode collapse. This causes the generator to create similar-looking images with poor sample diversity. In the last chapter of the thesis, we focus on addressing this issue in the GAN framework.
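The minimax game and the likelihood lower bound mentioned above can be written out explicitly; the following is the standard textbook formulation of the two objectives, not notation taken from this thesis:

```latex
% GAN minimax objective: the discriminator D maximizes, the generator G minimizes
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)]
  + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

% VAE instead maximizes a lower bound on the data log-likelihood (the ELBO):
\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}[\log p_\theta(x \mid z)]
  - \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)
```

The contrast is the one the abstract draws: the VAE objective is tied to an explicit likelihood bound, while the GAN objective never evaluates a density, which is what allows mode collapse to go unpenalized.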

    Diffusion Models for Medical Image Analysis: A Comprehensive Survey

    Full text link
    Denoising diffusion models, a class of generative models, have garnered immense interest lately in various deep-learning problems. A diffusion probabilistic model defines a forward diffusion stage in which the input data is gradually perturbed over several steps by adding Gaussian noise, and then learns to reverse the diffusion process to retrieve the desired noise-free data from noisy data samples. Diffusion models are widely appreciated for their strong mode coverage and the quality of their generated samples despite their known computational burdens. Capitalizing on the advances in computer vision, the field of medical imaging has also observed a growing interest in diffusion models. To help researchers navigate this profusion, this survey intends to provide a comprehensive overview of diffusion models in the discipline of medical image analysis. Specifically, we introduce the solid theoretical foundation and fundamental concepts behind diffusion models and the three generic diffusion modeling frameworks: diffusion probabilistic models, noise-conditioned score networks, and stochastic differential equations. Then, we provide a systematic taxonomy of diffusion models in the medical domain and propose a multi-perspective categorization based on their application, imaging modality, organ of interest, and algorithms. To this end, we cover extensive applications of diffusion models in the medical domain. Furthermore, we emphasize the practical use cases of some selected approaches, and then we discuss the limitations of diffusion models in the medical domain and propose several directions to fulfill the demands of this field. Finally, we gather the overviewed studies with their available open-source implementations at https://github.com/amirhossein-kz/Awesome-Diffusion-Models-in-Medical-Imaging.
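The forward-perturbation stage described above has a well-known closed form: with a variance schedule β_t, x_t can be sampled directly from x_0 without simulating every step. A minimal pure-Python sketch follows; the linear schedule and its length are the commonly used defaults, assumed here for illustration rather than taken from the survey:

```python
import math
import random

def forward_diffusion(x0, t, betas, rnd):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I)."""
    alpha_bar = 1.0
    for beta in betas[:t + 1]:
        alpha_bar *= 1.0 - beta   # alpha_bar_t = prod_{s <= t} (1 - beta_s)
    return [math.sqrt(alpha_bar) * x
            + math.sqrt(1.0 - alpha_bar) * rnd.gauss(0.0, 1.0)
            for x in x0]

rnd = random.Random(0)
betas = [1e-4 + (0.02 - 1e-4) * i / 999 for i in range(1000)]  # linear schedule
x0 = [rnd.gauss(0.0, 1.0) for _ in range(64)]   # stand-in for an image patch
x_end = forward_diffusion(x0, 999, betas, rnd)  # nearly pure Gaussian noise
```

By the final step almost none of the original signal survives, which is exactly the state the learned reverse process must start denoising from.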

    BIOMETRIC TECHNOLOGIES FOR AMBIENT INTELLIGENCE

    Get PDF
    Ambient Intelligence (AmI) refers to an environment capable of recognizing and responding to the presence of different individuals in a seamless, unobtrusive and often invisible way. In this environment, people are surrounded by intelligent, intuitive interfaces that are embedded in all kinds of objects. The goals of AmI are to provide greater user-friendliness, more efficient service support, user empowerment, and support for human interactions. Examples of AmI scenarios are smart cities, smart homes, smart offices, and smart hospitals. In AmI, biometric systems represent enabling technologies for designing personalized services for individuals or groups of people. Biometrics is the science of establishing the identity of an individual or a class of people based on the physical or behavioral attributes of the person. Common applications include security checks, border controls, access control to physical places, and authentication to electronic devices. In AmI scenarios, biometric technologies should work in uncontrolled and less-constrained conditions with respect to traditional biometric technologies. Furthermore, in many application scenarios, it could be necessary to adopt covert and non-cooperative techniques. In these non-ideal conditions, the biometric samples frequently present poor quality, and state-of-the-art biometric technologies can obtain unsatisfactory performance.
    There are two possible ways to improve the applicability and diffusion of biometric technologies in AmI. The first consists in designing novel biometric technologies robust to samples acquired in noisy and non-ideal conditions. The second consists in designing novel multimodal biometric approaches able to take advantage of all the sensors placed in a generic environment, in order to achieve high recognition accuracy and to perform continuous or periodic authentication in an unobtrusive manner. The first goal of this thesis is to design innovative less-constrained biometric systems able to improve, with respect to the current state of the art, the quality of human-machine interaction in different AmI scenarios. The second goal is to design novel approaches to improve the applicability and integration of heterogeneous biometric technologies in AmI. In particular, the thesis considers technologies based on fingerprint, face, voice, and multimodal biometrics. This thesis presents the following innovative research studies:
    • a method for text-independent speaker identification in AmI applications;
    • a method for age estimation from non-ideal samples acquired in AmI scenarios;
    • a privacy-compliant cohort normalization technique, based on the analysis of groups of similar samples, to increase the accuracy of already deployed biometric systems;
    • a technology-independent multimodal fusion approach able to combine heterogeneous biometric traits in AmI scenarios;
    • a multimodal continuous authentication approach for AmI applications.
    The designed biometric technologies have been validated on different biometric datasets (both public and collected in our laboratory) simulating the acquisition conditions of AmI applications. Results proved the feasibility of the studied approaches and showed that the designed methods effectively increase the accuracy, applicability, and usability of biometric technologies in AmI with respect to the state of the art.
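The multimodal fusion idea above is often realized at score level: each matcher's scores are normalized to a common range and then combined. A minimal sketch follows; the matcher names, score ranges, and weights are hypothetical, and the thesis's actual technology-independent fusion and cohort normalization methods are more elaborate:

```python
def minmax_normalize(scores, lo, hi):
    """Map one matcher's raw scores to [0, 1] given that matcher's score range."""
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(score_sets, weights):
    """Weighted-sum fusion of already-normalized per-matcher score lists."""
    return [sum(w * s for w, s in zip(weights, column))
            for column in zip(*score_sets)]

# hypothetical similarity scores for three probes from two heterogeneous matchers
face = minmax_normalize([510.0, 620.0, 480.0], lo=400.0, hi=700.0)
voice = minmax_normalize([0.31, 0.74, 0.22], lo=0.0, hi=1.0)
fused = fuse([face, voice], weights=[0.6, 0.4])  # one fused score per probe
```

Because the weights sum to one and each normalized score lies in [0, 1], the fused scores stay comparable across probes, which is what lets heterogeneous sensors in an AmI environment contribute to a single authentication decision.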

    Biometrics

    Get PDF
    Biometrics uses methods for the unique recognition of humans based upon one or more intrinsic physical or behavioral traits. In computer science, in particular, biometrics is used as a form of identity access management and access control. It is also used to identify individuals in groups that are under surveillance. The book consists of 13 chapters, each focusing on a certain aspect of the problem. The chapters are divided into three sections: physical biometrics, behavioral biometrics and medical biometrics. The key objective of the book is to provide a comprehensive reference and text on human authentication and identity verification from physiological, behavioral and other points of view. It aims to present new insights into current innovations in computer systems and technology for biometrics development and its applications. The book was reviewed by the editor, Dr. Jucheng Yang, and many of the guest editors, such as Dr. Girija Chetty, Dr. Norman Poh, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, who also made significant contributions to the book.

    Resilient Infrastructure and Building Security

    Get PDF

    Handbook of Vascular Biometrics

    Get PDF