131 research outputs found

    Feature exploration for biometric recognition using millimetre wave body images

    The electronic version of this article is the complete one and can be found online at: http://dx.doi.org/10.1186/s13640-015-0084-3

    The use of millimetre wave images has been proposed recently in the biometric field to overcome certain limitations of images acquired at visible frequencies. Furthermore, the security community has started using millimetre wave screening scanners to detect concealed objects. We believe these devices can be exploited further by incorporating biometric functionalities. This paper proposes a biometric recognition system based on the silhouette of the human body, which may be seen as a type of soft biometric trait. To this aim, we report experimental results on the BIOGIGA database with four feature extraction approaches (contour coordinates, shape contexts, Fourier descriptors and landmarks) and three classification methods (Euclidean distance, dynamic time warping and support vector machines). The best configuration, 1.33% EER, is achieved when using contour coordinates with dynamic time warping.

    This work has been partially supported by the projects TeraSense (CSD2008-00068), Bio-Shield (TEC2012-34881) and BEAT (FP7-SEC-284989) from the EU. E. Gonzalez-Sosa is supported by a PhD scholarship from Universidad Autonoma de Madrid.
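    The contour-plus-DTW matching described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes contours are ordered lists of (x, y) points and uses Euclidean distance as the local cost; all names are illustrative.

```python
# Sketch of dynamic time warping (DTW) between two body-contour
# coordinate sequences, as used for silhouette matching.
import math

def dtw_distance(contour_a, contour_b):
    n, m = len(contour_a), len(contour_b)
    # cost[i][j] = minimal accumulated cost aligning a[:i] with b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(contour_a[i - 1], contour_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

    A probe silhouette would then be assigned the identity of the gallery contour with the smallest DTW distance.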

    Body shape-based biometric person recognition from mmW images

    A growing interest has arisen in the security community in the use of millimetre waves to detect weapons and concealed objects. The use of millimetre wave images has also been proposed recently for biometric person recognition, to overcome certain limitations of images acquired at visible frequencies. This paper proposes a biometric person recognition system based on shape information extracted from millimetre wave images. To this aim, we report experimental results with different body shape-based feature approaches: contour coordinates, shape contexts, Fourier descriptors, and row and column profiles, using dynamic time warping for matching. Results suggest the potential of performing person recognition through millimetre waves using only shape information, a functionality that could easily be integrated into the security scanners deployed at airports.

    This work has been partially supported by the project CogniMetrics TEC2015-70627-R (MINECO/FEDER) and the SPATEK network (TEC2015-68766-REDC).

    Millimetre wave person recognition: hand-crafted vs learned features

    Imaging using millimetre waves (mmWs) has many advantages, including the ability to penetrate obscurants such as clothes and polymers. Although concealed weapon detection has been the predominant mmW imaging application, in this paper we aim to gain some insight into the potential of using mmW images for person recognition. We report experimental results on the mmW TNO database, consisting of 50 individuals, based on both hand-crafted features and features learned by AlexNet and VGG-face pretrained CNN models. Results suggest that: i) the mmW torso region is more discriminative than the mmW face and the entire body, ii) CNN features produce better results than hand-crafted features on mmW faces and the entire body, and iii) hand-crafted features slightly outperform CNN features on the mmW torso.

    This work has been partially supported by the project CogniMetrics TEC2015-70627-R (MINECO/FEDER) and the SPATEK network (TEC2015-68766-REDC). E. Gonzalez-Sosa is supported by a PhD scholarship from Universidad Autonoma de Madrid. Vishal M. Patel was partially supported by US Office of Naval Research (ONR) Grant YIP N00014-16-1-3134. The authors also wish to thank TNO for providing access to the database.
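    Once features are extracted (whether hand-crafted or taken from a pretrained CNN), closed-set recognition reduces to nearest-neighbour matching in feature space. The sketch below illustrates this with cosine similarity; the feature extractors themselves (AlexNet, VGG-face) are not reproduced, and all names are illustrative.

```python
# Minimal closed-set identification by cosine-similarity matching
# of feature vectors against an enrolled gallery.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def identify(probe, gallery):
    """gallery: dict mapping subject id -> enrolled feature vector."""
    return max(gallery, key=lambda sid: cosine_similarity(probe, gallery[sid]))
```

    The same matcher can be reused for any of the compared representations, which is what makes the hand-crafted vs. learned comparison clean.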

    Image-based Gender Estimation from Body and Face across Distances

    Gender estimation has received increased attention due to its use in a number of pertinent security and commercial applications. Automated gender estimation algorithms are mainly based on extracting representative features from face images. In this work we study gender estimation based on information deduced jointly from face and body, extracted from single-shot images. The approach addresses challenging settings such as low-resolution images, as well as settings where faces are occluded. Specifically, the face-based features include local binary patterns (LBP) and scale-invariant feature transform (SIFT) features, projected into a PCA space. The features of the novel body-based algorithm proposed in this work include continuous shape information extracted from body silhouettes and texture information retained by HOG descriptors. Support vector machines (SVMs) are used for classification of both body and face features. We conduct experiments on images extracted from video sequences of the Multi-Biometric Tunnel database, focusing on three distance settings: close, medium and far, ranging from full body exposure (far setting) to head-and-shoulders exposure (close setting). The experiments suggest that while face-based gender estimation performs best in the close-distance setting, body-based gender estimation performs best when a large part of the body is visible. Finally, we present two score-level fusion schemes of face- and body-based features, outperforming the two individual modalities in most cases.
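    A weighted-sum score-level fusion of the kind mentioned above can be sketched in a few lines. This is a generic illustration, not the paper's scheme: the weight alpha, the [0, 1] score normalisation, and the 0.5 decision threshold are all assumptions.

```python
# Hedged sketch of weighted-sum score-level fusion of face- and
# body-based gender scores.
def fuse_scores(face_score, body_score, alpha=0.5):
    """Scores assumed normalised to [0, 1]; returns fused score and label."""
    fused = alpha * face_score + (1.0 - alpha) * body_score
    label = "female" if fused >= 0.5 else "male"
    return fused, label
```

    In practice alpha would be tuned per distance setting, since the abstract reports that the more reliable modality changes with distance.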

    Deep Learning Architectures for Heterogeneous Face Recognition

    Face recognition has been one of the most challenging areas of research in biometrics and computer vision. Many face recognition algorithms are designed to address illumination and pose problems for visible face images. In recent years, there has been a significant amount of research in Heterogeneous Face Recognition (HFR). The large modality gap between faces captured in different spectra, as well as the lack of training data, makes HFR quite a challenging problem. In this work, we present different deep learning frameworks to address the problem of matching non-visible face photos against a gallery of visible faces. Algorithms for thermal-to-visible face recognition can be categorised as cross-spectrum feature-based methods or cross-spectrum image synthesis methods. In cross-spectrum feature-based face recognition, a thermal probe is matched against a gallery of visible faces in a feature subspace, corresponding to the real-world scenario. The second category synthesizes a visible-like image from a thermal image, which can then be directly utilised by existing face recognition systems that operate only on visible imagery. Using this approach one can therefore leverage existing commercial-off-the-shelf (COTS) and government-off-the-shelf (GOTS) solutions. In addition, the synthesized images can be used by human examiners for different purposes. There are some informative traits, such as age, gender, ethnicity, race, and hair color, which are not distinctive enough for recognition on their own, but can still act as complementary information to primary traits such as face and fingerprint. These traits, known as soft biometrics, can improve recognition algorithms while being much cheaper and faster to acquire.
    They can be used directly in a unimodal system for some applications. Usually, soft biometric traits have been utilised jointly with hard biometrics (the face photo) for different tasks, in the sense that they are assumed to be available during both the training and testing phases. In our approaches we look at this problem differently: we consider the case when soft biometric information does not exist during the testing phase, and our method predicts it directly in a multi-tasking paradigm. There are also situations in which training data comes equipped with additional information that can be modeled as an auxiliary view of the data, but that is unfortunately not available during testing. This is the learning using privileged information (LUPI) scenario. We introduce a novel framework based on deep learning techniques that leverages the auxiliary view to improve the performance of the recognition system. We do so by introducing a formulation that is general, in the sense that it can be used with any visual classifier. Every use of auxiliary information has been validated extensively using publicly available benchmark datasets, and several new state-of-the-art accuracy values have been set. Application domains include visual object recognition from RGB images and from depth data, handwritten digit recognition, and gesture recognition from video. We also design a novel aggregation framework which optimises the landmark locations directly using only one image, without requiring any extra prior, leading to robust alignment under arbitrary face deformations. Three different approaches are employed to generate the manipulated faces, two of which perform the manipulation via adversarial attacks to fool a face recognizer. This step can be decoupled from our framework and potentially used to enhance other landmark detectors. Aggregation of the manipulated faces in different branches of the proposed method leads to robust landmark detection.
    Finally, we focus on generative adversarial networks, a very powerful tool for synthesizing visible-like images from non-visible images. The main goal of a generative model is to approximate the true data distribution, which is not known. In general, the choice of how to model the density function is challenging; explicit models have the advantage of explicitly calculating probability densities. Two well-known deep generative approaches are the Generative Adversarial Network (GAN) and the Variational AutoEncoder (VAE). VAEs try to maximise a lower bound on the data likelihood, while a GAN performs a minimax game between two players during its optimisation. GANs overlook the explicit characteristics of the data density, which leads to undesirable quantitative evaluations and mode collapse. This causes the generator to create similar-looking images with poor sample diversity. In the last chapter of the thesis, we focus on addressing this issue in the GAN framework.
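    The GAN minimax game mentioned above can be written down concretely. The sketch below is a generic illustration, not the thesis's model: it takes discriminator outputs as plain probabilities and uses the standard non-saturating generator loss; a real system would obtain these values from neural networks.

```python
# The discriminator maximises log D(x) + log(1 - D(G(z))); the generator
# minimises log(1 - D(G(z))), usually replaced in practice by maximising
# log D(G(z)) (the non-saturating form). Losses are the negated objectives
# so both players minimise.
import math

def discriminator_loss(d_real, d_fake):
    """d_real = D(x), d_fake = D(G(z)), both probabilities in (0, 1)."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: -log D(G(z))."""
    return -math.log(d_fake)
```

    At the equilibrium where the discriminator outputs 0.5 everywhere, the discriminator loss equals 2 log 2, the classical value from the original GAN analysis.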

    Skin Texture as a Source of Biometric Information

    Traditional face recognition systems have achieved remarkable performance when the whole face image is available. However, recognising people from a partial view of their face is a challenging task, and recognition performance may also be degraded by low image resolution. These limitations can restrict the practicality of such systems in real-world scenarios such as surveillance and forensic applications. There is therefore a need to identify people from whatever information is available, and one possible approach is to use the texture of the available facial skin regions for biometric identification. This thesis presents the design, implementation and experimental evaluation of an automated skin-based biometric framework. The proposed system exploits skin information from facial regions for person recognition, and is applicable where only a partial view of a face is captured by imaging devices. The system automatically detects the regions of interest using a set of facial landmarks. Four regions were investigated in this study: forehead, right cheek, left cheek, and chin. A skin purity assessment scheme determines whether a region of interest contains enough skin pixels for biometric analysis. Texture features were extracted from non-overlapping sub-regions and categorised using a number of classification schemes. To further improve the reliability of the system, the study also investigated techniques for handling face images acquired at a different resolution from that available at enrolment, or sub-regions that are themselves partially occluded. The study also presents an adaptive scheme for exploiting the available information in corrupted regions of interest.
    Extensive experiments were conducted using publicly available databases to evaluate both the performance of the prototype system and the adaptive framework under different operational conditions, such as the level of occlusion and mixtures of skin images at different resolutions. Results suggest that skin information can provide useful discriminative characteristics for individual identification. Comparative analyses with state-of-the-art methods show that the proposed system achieves promising performance.
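    The abstract does not name its texture descriptors, but a common choice for skin-texture analysis is the basic 3x3 local binary pattern (LBP) operator, sketched below purely as an illustration of the kind of feature such a system might extract.

```python
# Basic 3x3 LBP: threshold each neighbour against the centre pixel and
# pack the resulting bits into an 8-bit code. Histograms of these codes
# over sub-regions form a texture descriptor.
def lbp_code(patch):
    """patch: 3x3 list of grey values; returns the 8-bit LBP code."""
    centre = patch[1][1]
    # Clockwise neighbour order starting at the top-left pixel.
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code
```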

    Advances in Sensors, Big Data and Machine Learning in Intelligent Animal Farming

    Animal production (e.g., milk, meat, and eggs) provides a valuable source of protein for humans and animals. However, animal production faces several challenges worldwide, such as environmental impacts and animal welfare and health concerns. In animal farming operations, accurate and efficient monitoring of animal information and behavior can help analyze the health and welfare status of animals and identify sick or abnormal individuals at an early stage, reducing economic losses and protecting animal welfare. In recent years, there has been growing interest in animal welfare. At present, sensors, big data, machine learning, and artificial intelligence are used to improve management efficiency, reduce production costs, and enhance animal welfare. Although these technologies still have challenges and limitations, their application and exploration on animal farms will greatly promote intelligent farm management. This Special Issue therefore collects original papers with novel contributions based on technologies such as sensors, big data, machine learning, and artificial intelligence to study animal behavior monitoring and recognition, environmental monitoring, health evaluation, etc., to promote intelligent and accurate animal farm management.

    Irish Machine Vision and Image Processing Conference Proceedings 2017


    Designing for Socially Acceptable Security Technologies

    Security technologies (STs) are increasingly being positioned, developed, and implemented as technological fixes for addressing crime, never more so than in the wake of the numerous terrorist attacks beginning with September 11th 2001. However, despite the purported security benefits these technologies afford citizens, their smooth assimilation into society is never assured. STs that evoke social controversy and resistance fail to survive unscathed over the mid- to long-term; they are subjected instead to enforced modification, restrictions on acquisition, restrictions on use, or, in the worst case, outright banning. Such controversies can negatively affect the companies designing these STs, the end-users who employ them, the governments who authorise them, and the citizens whose security may genuinely remain compromised. The aim of this thesis is to assist the developers and designers of STs in anticipating and mitigating negative societal responses to their technologies upstream in the design process. The logic is that, by targeting STs before they are completed, the elements of design most likely to evoke controversy can be modified, which in turn will produce STs the public are more likely to afford legitimacy through acceptance. To achieve this aim, three objectives were set. The first was to identify the causes of social controversies arising from the design and operation of STs: through repeated focussed case studies of previously controversial STs, a taxonomy of forty-three commonalities of controversy was produced. The second was to generate guidelines for the development of future methodological design tools to assist those developing STs in identifying these controversies; this was achieved by conducting interviews with scientists and engineers actively involved in the design and production of STs.
    Finally, the taxonomy and guidelines were applied to produce two prototype design tools, one of which was subsequently applied to an ongoing ST design project.