
    Binary Biometric Representation through Pairwise Adaptive Phase Quantization

    Extracting binary strings from real-valued biometric templates is a fundamental step in template compression and protection systems, such as fuzzy commitment, fuzzy extractors, secure sketches, and helper data systems. Quantization followed by coding is the straightforward way to extract binary representations from arbitrary real-valued biometric modalities. In this paper, we propose a pairwise adaptive phase quantization (APQ) method, together with a long-short (LS) pairing strategy, which aims to maximize the overall detection rate. Experimental results on the FVC2000 fingerprint and the FRGC face databases show reasonably good verification performance.
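To make the quantization-and-coding step concrete, the sketch below binarizes a real-valued template by pairing features, quantizing each pair's phase into equal angular sectors, and Gray-coding the sector index. It is only an illustration of pairwise phase quantization with naive consecutive pairing; the paper's adaptive per-pair rotation and long-short (LS) pairing strategy are not reproduced, and the function names are invented.

```python
import numpy as np

def gray_code(n: int) -> int:
    """Return the Gray code of integer n (adjacent sectors differ in one bit)."""
    return n ^ (n >> 1)

def phase_quantize(template: np.ndarray, bits_per_pair: int = 2) -> np.ndarray:
    """Illustrative pairwise phase quantization of a real-valued template.

    Features are paired, each pair is treated as a 2-D vector, and the
    vector's phase is quantized into 2**bits_per_pair equal sectors.
    The pairing here is naive (consecutive features); the adaptive
    rotation and LS pairing of the paper are omitted.
    """
    assert template.size % 2 == 0, "need an even number of features"
    pairs = template.reshape(-1, 2)
    phases = np.arctan2(pairs[:, 1], pairs[:, 0])            # phase in (-pi, pi]
    sectors = np.floor((phases + np.pi) / (2 * np.pi) * 2**bits_per_pair)
    sectors = sectors.astype(int) % 2**bits_per_pair
    bits = []
    for s in sectors:
        g = gray_code(s)
        bits.extend((g >> i) & 1 for i in reversed(range(bits_per_pair)))
    return np.array(bits, dtype=np.uint8)

# Example: a 6-dimensional real template -> 6-bit binary string (2 bits per pair)
binary = phase_quantize(np.array([0.3, -1.2, 0.8, 0.1, -0.5, -0.4]))
print(binary)
```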

    QUIS-CAMPI: Biometric Recognition in Surveillance Scenarios

    Concerns about the security of individuals have justified the increasing number of surveillance cameras deployed in both private and public spaces. However, contrary to popular belief, these devices are in most cases used solely for recording, rather than feeding intelligent analysis processes capable of extracting information about the observed individuals. Thus, even though video surveillance has already proved essential for solving multiple crimes, obtaining relevant details about the subjects involved in a crime still depends on the manual inspection of recordings. As such, the current goal of the research community is the development of automated surveillance systems capable of monitoring and identifying subjects in surveillance scenarios. Accordingly, the main goal of this thesis is to improve the performance of biometric recognition algorithms on data acquired in surveillance scenarios. In particular, we aim to design a visual surveillance system capable of acquiring biometric data at a distance (e.g., face, iris or gait) without requiring human intervention in the process, as well as to devise biometric recognition methods robust to the degradation factors resulting from the unconstrained acquisition process. Regarding the first goal, the analysis of data acquired by typical surveillance systems shows that large acquisition distances significantly decrease the resolution of biometric samples, so that their discriminability is not sufficient for recognition purposes. In the literature, several works point to Pan-Tilt-Zoom (PTZ) cameras as the most practical way of acquiring high-resolution imagery at a distance, particularly when used in a master-slave configuration. In this configuration, the video acquired by a typical surveillance camera is analyzed to obtain regions of interest (e.g., car, person), which are subsequently imaged at high resolution by the PTZ camera. Several methods have already shown that this configuration can be used to acquire biometric data at a distance. Nevertheless, these methods failed to provide effective solutions to the typical challenges of this strategy, restricting its use in surveillance scenarios. Accordingly, this thesis proposes two methods to support the development of a biometric data acquisition system based on the cooperation of a PTZ camera with a typical surveillance camera. The first proposal is a camera calibration method capable of accurately mapping the coordinates of the master camera to the pan/tilt angles of the PTZ camera, without requiring auxiliary optical devices. The second proposal is a camera scheduling method for determining, in real time, the sequence of acquisitions that maximizes the number of different targets imaged while minimizing the cumulative transition time. To achieve the first goal of this thesis, both methods were combined with state-of-the-art approaches from the human monitoring field to develop a fully automated surveillance system capable of acquiring biometric data at a distance without human cooperation, designated the QUIS-CAMPI system. The QUIS-CAMPI system is the basis for pursuing the second goal of this thesis. The analysis of the performance of state-of-the-art biometric recognition approaches shows that they attain almost ideal recognition rates on existing unconstrained data (e.g., above 99% on the LFW dataset). However, this performance is not corroborated by the recognition rates observed in surveillance scenarios, which suggests that current datasets do not truly encompass the degradation factors of these environments.
Taking into account the drawbacks of current biometric datasets, this thesis introduces a novel dataset comprising biometric samples (face images and gait videos) acquired by the QUIS-CAMPI system at distances ranging from 5 to 40 meters and without human intervention in the acquisition process. This set allows an objective assessment of the performance of state-of-the-art biometric recognition methods on data that truly encompass the covariates of surveillance scenarios. As such, it was used to promote the first international challenge on biometric recognition in the wild. This thesis describes the evaluation protocols adopted, along with the results obtained by the nine methods specially designed for this competition. In addition, the data acquired by the QUIS-CAMPI system were crucial for accomplishing the second goal of this thesis, i.e., the development of methods robust to the covariates of surveillance scenarios. The first proposal is a method for detecting corrupted features in biometric signatures through a redundancy analysis algorithm. The second proposal is a caricature-based face recognition approach capable of enhancing recognition performance by automatically generating a caricature from a single 2D photo. The experimental evaluation of these methods shows that both approaches help improve recognition performance on unconstrained data.
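To illustrate the trade-off the camera scheduling problem involves, the sketch below implements a simple greedy heuristic: the PTZ camera always visits the not-yet-imaged target with the smallest pan/tilt transition time. This is an assumed baseline, not the scheduling method actually proposed in the thesis; the motor-speed constants and function names are hypothetical.

```python
def transition_time(a, b, pan_speed=60.0, tilt_speed=45.0):
    """Rough PTZ transition time (seconds) between two (pan, tilt) poses,
    assuming independent pan/tilt motors with constant angular speeds."""
    return max(abs(a[0] - b[0]) / pan_speed, abs(a[1] - b[1]) / tilt_speed)

def greedy_schedule(current_pose, targets):
    """Greedy acquisition order: always image the remaining target with the
    smallest transition time from the current pose.

    `targets` maps target id -> (pan, tilt).  Returns (order, total_time).
    Only an illustrative heuristic; the thesis's scheduler additionally
    reasons about targets leaving the scene, which is ignored here.
    """
    remaining = dict(targets)
    order, total, pose = [], 0.0, current_pose
    while remaining:
        tid, tpose = min(remaining.items(),
                         key=lambda kv: transition_time(pose, kv[1]))
        total += transition_time(pose, tpose)
        order.append(tid)
        pose = tpose
        del remaining[tid]
    return order, total

# Example: three pedestrians at different pan/tilt angles
order, t = greedy_schedule((0.0, 0.0), {"p1": (30, 5), "p2": (-20, 2), "p3": (45, 10)})
print(order, round(t, 2))
```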

    A dissimilarity representation approach to designing systems for signature verification and bio-cryptography

    Automation of legal and financial processes requires enforcing the authenticity, confidentiality, and integrity of the involved transactions. This Thesis focuses on developing offline signature verification (OLSV) systems for enforcing the authenticity of transactions. In addition, bio-cryptography systems are developed based on offline handwritten signature images to enforce the confidentiality and integrity of transactions. Designing OLSV systems is challenging, as signatures are behavioral biometric traits that have intrinsic intra-personal variations and inter-personal similarities. Standard OLSV systems are designed in the feature representation (FR) space, where high-dimensional feature representations are needed to capture the invariance of the signature images. With the numerous users found in real-world applications, e.g., banking systems, decision boundaries in high-dimensional FR spaces become complex. Accordingly, a large number of training samples is required to design such complex classifiers, which is not practical in typical applications. Designing bio-cryptography systems based on offline signature images is even more challenging. In these systems, signature images lock the cryptographic keys, and a user retrieves his key by applying a query signature sample. For practical bio-cryptographic schemes, the locking feature vector should be concise. In addition, such schemes employ simple error correction decoders, and therefore no complex classification rules can be employed. In this Thesis, the challenging problems of designing OLSV and bio-cryptography systems are addressed by employing the dissimilarity representation (DR) approach. Instead of designing classifiers in the feature space, the DR approach provides a classification space that is defined by some proximity measure. This way, a multi-class classification problem with few samples per class is transformed into a more tractable two-class problem with a large number of training samples. Since many feature extraction techniques have already been proposed for OLSV applications, a DR approach based on FR is employed. In this case, proximity between two signatures is measured by applying a dissimilarity measure to their feature vectors. The main hypothesis of this Thesis is as follows: the FRs and dissimilarity measures should be properly designed, so that signatures belonging to the same writer are close, while signatures of different writers are well separated in the resulting DR spaces. In that case, more cost-effective classifiers, and therefore simpler OLSV and bio-cryptography systems, can be designed. To this end, in Chapter 2, an approach for optimizing FR-based DR spaces is proposed such that concise representations are discriminant and simple classification thresholds are sufficient. High-dimensional feature representations are translated to an intermediate DR space, where pairwise feature distances are the space constituents. Then, a two-step boosting feature selection (BFS) algorithm is applied. The first step uses samples from a development database and aims to produce a universal space of reduced dimensionality. The resulting universal space is further reduced and tuned for specific users through a second BFS step using a user-specific training set. In the resulting space, feature variations are modeled and an adaptive dissimilarity measure is designed. This measure generates the final DR space, where discriminant prototypes are selected for enhanced representation. The OLSV and bio-cryptographic systems are formulated as simple threshold classifiers that operate in the designed DR space. Proof-of-concept simulations on the Brazilian signature database indicate the viability of the proposed approach. Concise DRs with few features and a single prototype are produced. Employing a simple threshold classifier, the DRs have shown state-of-the-art accuracy of about 7% AER, comparable to complex systems in the literature. In Chapter 3, the OLSV problem is further studied. Although the aforementioned OLSV implementation has shown acceptable recognition accuracy, the resulting systems are not secure, as signature templates must be stored for verification. For enhanced security, we modified the previous implementation as follows. The first BFS step is implemented as described above, producing a writer-independent (WI) system. This enables system operation to start even if users provide a single signature sample in the enrollment phase. However, the second BFS step is modified to run in an FR space instead of a DR space, so that no signature templates are used for verification. To this end, the universal space is translated back to an FR space of reduced dimensionality, so that designing a writer-dependent (WD) system from the few user-specific samples is tractable in the reduced space. Simulation results on two real-world offline signature databases confirm the feasibility of the proposed approach. The initial universal (WI) verification mode showed performance comparable to that of state-of-the-art OLSV systems. The final secure WD verification mode showed enhanced accuracy with decreased computational complexity: a single compact classifier produced a similar level of accuracy (AER of about 5.38% and 13.96% for the Brazilian and GPDS signature databases, respectively) to that of complex WI and WD systems in the literature. Finally, in Chapter 4, a key-binding bio-cryptographic scheme known as the fuzzy vault (FV) is implemented based on offline signature images. The proposed DR-based two-step BFS technique is employed to select a compact and discriminant user-specific FR from a large number of feature extractions. This representation is used to generate the FV locking/unlocking points. Representation variability modeled in the DR space is considered for matching the unlocking and locking points during FV decoding. Proof-of-concept simulations on the Brazilian signature database have shown FV recognition accuracy of about 3% AER and system entropy of about 45 bits. For enhanced security, an adaptive chaff generation method is proposed, where the modeled variability controls the chaff generation process. Similar recognition accuracy is reported, while an improved entropy of about 69 bits is achieved.
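The core of the DR approach, turning a pair of feature vectors into a dissimilarity vector and thresholding a weighted distance, can be sketched as follows. The weights stand in for whatever feature subset the two-step BFS would select; they, the threshold and the toy vectors are assumptions for illustration, not the thesis's learned models.

```python
import numpy as np

def dissimilarity_vector(query_feats, reference_feats):
    """Element-wise absolute feature distances: the constituents of the
    intermediate dissimilarity-representation (DR) space."""
    return np.abs(np.asarray(query_feats) - np.asarray(reference_feats))

def verify(query_feats, prototype_feats, weights, threshold):
    """Toy threshold verifier in a DR space.

    `prototype_feats` is a user-specific reference signature; `weights`
    stands in for the feature selection/weighting that the two-step BFS
    would produce (assumed, not the thesis's actual model).
    A small weighted dissimilarity is accepted as genuine.
    """
    d = dissimilarity_vector(query_feats, prototype_feats)
    score = float(np.dot(weights, d))
    return score <= threshold, score

# Example with made-up 4-dimensional feature vectors
genuine, score = verify([0.9, 0.2, 0.5, 0.7],    # query signature features
                        [1.0, 0.25, 0.45, 0.6],  # enrolled prototype
                        weights=[1.0, 1.0, 1.0, 1.0],
                        threshold=0.5)
print(genuine, round(score, 3))
```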

    Proceedings of the 35th WIC Symposium on Information Theory in the Benelux and the 4th joint WIC/IEEE Symposium on Information Theory and Signal Processing in the Benelux, Eindhoven, the Netherlands, May 12-13, 2014

    Compressive sensing (CS) as an approach for data acquisition has recently received much attention. In CS, the signal recovery problem from the observed data requires solving for a sparse vector from an underdetermined system of equations. The underlying sparse signal recovery problem is quite general, with many applications, and is the focus of this talk. The main emphasis will be on Bayesian approaches for sparse signal recovery. We will examine sparse priors such as the super-Gaussian and Student-t priors and appropriate MAP estimation methods. In particular, re-weighted l2 and re-weighted l1 methods developed to solve the optimization problem will be discussed. The talk will also examine a hierarchical Bayesian framework and then study in detail an empirical Bayesian method, the Sparse Bayesian Learning (SBL) method. If time permits, we will also discuss Bayesian methods for sparse recovery problems with structure: intra-vector correlation in the context of the block-sparse model and inter-vector correlation in the context of the multiple measurement vector problem.
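As a concrete instance of the re-weighted l2 idea mentioned above, the following FOCUSS-style iteration repeatedly solves a weighted ridge problem, which progressively concentrates energy on a few coefficients. It is a minimal sketch assuming NumPy, not the SBL algorithm discussed in the talk; the parameter values are arbitrary.

```python
import numpy as np

def reweighted_l2(A, y, iters=30, lam=1e-3, eps=1e-8):
    """FOCUSS-style re-weighted l2 iteration for sparse recovery.

    Each step solves a weighted ridge problem
        x <- W A^T (A W A^T + lam*I)^{-1} y,  with W = diag(|x|),
    which corresponds to an l1-like penalty and drives most
    coefficients toward zero.
    """
    m, n = A.shape
    x = np.ones(n)
    for _ in range(iters):
        W = np.diag(np.abs(x) + eps)
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T + lam * np.eye(m), y)
    return x

# Example: recover a 3-sparse vector from 20 random measurements in dimension 50
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [1.5, -2.0, 0.8]
x_hat = reweighted_l2(A, A @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # ideally the support {3, 17, 40}
```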

    Distortion Robust Biometric Recognition

    Information forensics and security have come a long way in just a few years thanks to the recent advances in biometric recognition. The main challenge remains the design of a biometric modality that is resilient to unconstrained conditions, such as quality distortions. This work presents a solution to face and ear recognition under unconstrained visual variations, with a main focus on recognition in the presence of blur, occlusion and additive noise distortions. First, the dissertation addresses the problem of scene variations in the presence of blur, occlusion and additive noise distortions resulting from capture, processing and transmission. Despite their excellent performance, deep methods are susceptible to visual distortions, which significantly reduce their accuracy. Sparse representations, on the other hand, have shown great potential in handling problems such as occlusion and corruption. In this work, an augmented SRC (ASRC) framework is presented to improve the performance of the Sparse Representation Classifier (SRC) in the presence of blur, additive noise and block occlusion, while preserving its robustness to scene-dependent variations. Different feature types are considered in the performance evaluation, including raw image pixels, HOG and deep VGG-Face features. The proposed ASRC framework is shown to outperform the conventional SRC in terms of recognition accuracy, as well as other existing sparse-based methods and blur-invariant methods, at medium to high levels of distortion, particularly when used with discriminative features. In order to assess the quality of features in improving both the sparsity of the representation and the classification accuracy, a feature sparse coding and classification index (FSCCI) is proposed and used for feature ranking and selection within both the SRC and ASRC frameworks. The second part of the dissertation presents a method for unconstrained ear recognition using deep learning features. Unconstrained ear recognition is performed using transfer learning, with deep neural networks (DNNs) as a feature extractor followed by a shallow classifier. Data augmentation is used to improve the recognition performance by augmenting the training dataset with image transformations. The recognition performance of the feature extraction models is compared with an ensemble of fine-tuned networks. The results show that, in cases where long training time is not desirable or a large amount of data is not available, the features from pre-trained DNNs can be used with a shallow classifier to give recognition accuracy comparable to that of the fine-tuned networks.
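A minimal version of the baseline SRC decision rule, sparse-coding a query over a dictionary of training samples and assigning the class with the smallest reconstruction residual, is sketched below. It assumes scikit-learn's Lasso as the sparse coder and uses invented toy data; the augmented ASRC handling of blur, noise and occlusion is not shown.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(y, dictionary, labels, alpha=0.01):
    """Minimal sparse-representation classification (SRC) sketch.

    `dictionary` holds training samples as columns and `labels[i]` is the
    class of column i.  The query `y` is coded sparsely over the
    (column-normalized) dictionary and assigned to the class whose
    coefficients reconstruct it with the smallest residual.
    """
    D = dictionary / (np.linalg.norm(dictionary, axis=0, keepdims=True) + 1e-12)
    coder = Lasso(alpha=alpha, max_iter=5000, fit_intercept=False)
    coder.fit(D, y)
    x = coder.coef_
    residuals = {}
    for c in set(labels):
        mask = np.array([lab == c for lab in labels])
        residuals[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
    return min(residuals, key=residuals.get), residuals

# Example: 2 classes, 3 training samples each, 16-dimensional features
rng = np.random.default_rng(1)
D = rng.standard_normal((16, 6))
labels = ["faceA"] * 3 + ["faceB"] * 3
query = D[:, 1] + 0.05 * rng.standard_normal(16)   # noisy copy of a class-A sample
print(src_classify(query, D, labels)[0])            # expected: "faceA"
```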

    From clothing to identity: manual and automatic soft biometrics

    Soft biometrics have increasingly attracted research interest and are often considered major cues for identity, especially in the absence of valid traditional biometrics, as in surveillance. In everyday life, numerous incidents and forensic scenarios highlight the usefulness of identity information that can be deduced from clothing. Semantic clothing attributes have recently been introduced as a new form of soft biometrics. Although clothing traits can be naturally described and compared by humans for operable and successful use, it is desirable to exploit computer vision to enrich clothing descriptions with more objective and discriminative information. This allows automatic extraction, semantic description and comparison of visually detectable clothing traits, in a manner similar to recognition from eyewitness statements. This study proposes a novel set of soft clothing attributes, described using small groups of high-level semantic labels and automatically extracted using computer vision techniques. In this way, we can compare the capability of human-annotated attributes with those inferred automatically by computer vision. Categorical and comparative soft clothing traits are derived and used for identification/re-identification, either to supplement soft body traits or on their own. The automatically and manually derived soft clothing biometrics are employed in challenging invariant person retrieval. The experimental results highlight promising potential for use in various applications.
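As a toy illustration of retrieval from categorical soft clothing attributes, the sketch below ranks gallery subjects by how many attributes of an eyewitness-style description they match. The attribute names, labels and scoring rule are invented for illustration and are not the descriptors or matching scheme proposed in the study.

```python
def attribute_match_score(description, candidate):
    """Fraction of categorical soft-clothing attributes on which an
    eyewitness-style description and a candidate's annotation agree.
    Attribute names and labels are hypothetical."""
    shared = [a for a in description if a in candidate]
    if not shared:
        return 0.0
    return sum(description[a] == candidate[a] for a in shared) / len(shared)

def retrieve(description, gallery):
    """Rank gallery subjects by attribute agreement, best match first."""
    return sorted(gallery,
                  key=lambda s: attribute_match_score(description, gallery[s]),
                  reverse=True)

witness = {"upper_colour": "red", "upper_type": "jacket", "lower_type": "jeans"}
gallery = {
    "subject_1": {"upper_colour": "red", "upper_type": "jacket", "lower_type": "shorts"},
    "subject_2": {"upper_colour": "blue", "upper_type": "t-shirt", "lower_type": "jeans"},
}
print(retrieve(witness, gallery))   # subject_1 should rank first (2/3 attributes agree)
```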