    Pose variability compensation using projective transformation for forensic face recognition

    E. Gonzalez-Sosa, R. Vera-Rodriguez, J. Fierrez, P. Tome and J. Ortega-Garcia, "Pose Variability Compensation Using Projective Transformation for Forensic Face Recognition," 2015 International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, 2015, pp. 1-5. doi: 10.1109/BIOSIG.2015.7314615. The forensic scenario is a very challenging problem within the face recognition community. Verification in this setting typically involves comparing a high-quality controlled image against a low-quality image extracted from closed-circuit television (CCTV). A frequent difficulty in this scenario is pose deviation, since CCTV devices are usually mounted on ceilings while the subject normally walks facing forward. This paper demonstrates the value of the projective transformation as a simple tool to compensate for the pose distortion present in surveillance images in forensic scenarios. We evaluate the influence of this projective transformation on a baseline system based on principal component analysis and support vector machines (PCA-SVM) using the SCface database. Applying this technique greatly improves performance, and the improvement is more pronounced for closer images. The results suggest that this transformation is a convenient addition to the preprocessing stage of all CCTV images. The average relative improvement reached with this method is around 30% of EER. This work has been supported in part by Bio-Shield (TEC2012-34881) from the Spanish MINECO, in part by BEAT (FP7-SEC-284989) from the EU, and in part by Cátedra UAM-Telefónica. E. Gonzalez-Sosa is supported by a PhD scholarship from Universidad Autonoma de Madrid.
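
    As a rough illustration of the kind of pipeline the abstract describes, the sketch below applies a projective (homography) warp to a CCTV face crop before feeding it to a PCA-SVM classifier. The corner coordinates, image sizes, and helper names are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of projective pose compensation ahead of a PCA-SVM verifier.
# All coordinates, sizes, and variable names are illustrative placeholders.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def compensate_pose(img, src_quad, size=(64, 64)):
    """Warp the quadrilateral enclosing the face to a frontal rectangle."""
    dst_quad = np.float32([[0, 0], [size[0] - 1, 0],
                           [size[0] - 1, size[1] - 1], [0, size[1] - 1]])
    H = cv2.getPerspectiveTransform(np.float32(src_quad), dst_quad)
    return cv2.warpPerspective(img, H, size)

def train_pca_svm(imgs, quads, labels, n_components=50):
    """Fit PCA on pose-compensated crops, then train a linear SVM on the projections."""
    X = np.stack([compensate_pose(im, q).ravel() for im, q in zip(imgs, quads)])
    X = X.astype(np.float32)
    pca = PCA(n_components=n_components).fit(X)
    svm = SVC(kernel="linear").fit(pca.transform(X), labels)
    return pca, svm
```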

    Biometric fusion methods for adaptive face recognition in computer vision

    PhD Thesis. Face recognition is a biometric method that uses different techniques to identify individuals based on facial information extracted from digital image data. Face recognition systems are widely used for security purposes and involve several challenging problems; this study proposes solutions to some of the most important of them. The aim of this thesis is to investigate the face recognition across pose problem based on the image parameters of camera calibration. Three novel methods have been derived to address the challenges of face recognition and to infer the camera parameters from images using a geometric approach based on perspective projection. The following techniques were used: a camera calibration measurement technique (CMT) and Face Quadtree Decomposition (FQD), combined to develop the face camera measurement technique (FCMT) for human facial recognition. A feature extraction and identity-matching algorithm operating on facial information has been created, and the success and efficacy of the proposed algorithm are analysed in terms of robustness to noise, accuracy of distance measurement, and face recognition. To recover the intrinsic and extrinsic camera calibration parameters, a novel technique has been developed based on perspective projection, which uses different geometrical shapes to calibrate the camera. The parameters used in the novel measurement technique CMT enable the system to infer the real distance of regular and irregular objects from 2-D images. The proposed CMT system feeds into FQD to measure the distances between facial points. Quadtree decomposition enhances the representation of edges and other singularities along curves of the face, and thus improves directional features for face detection across pose. The proposed FCMT system is a new combination of CMT and FQD to recognise faces in various poses. The theoretical foundation of the proposed solutions has been thoroughly developed and discussed in detail. The results show that the proposed algorithms outperform existing algorithms in face recognition, with a 2.5% improvement in recognition error rate compared with recent studies.
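
    As a minimal illustration of the perspective-projection idea behind inferring real distances from 2-D images, the sketch below uses the standard pinhole camera relation; the focal length and object size are placeholder values, not figures from the thesis.

```python
# Pinhole (perspective projection) model: an object of real width W at distance Z
# projects to w = f * W / Z pixels, so Z = f * W / w.
# All numbers below are illustrative placeholders.

def distance_from_projection(real_width_mm, pixel_width, focal_length_px):
    """Estimate object distance (mm) from its projected width in pixels."""
    return focal_length_px * real_width_mm / pixel_width

# Example: a face roughly 150 mm wide imaged at 75 px with an 800 px focal length
# is estimated to be about 1600 mm (1.6 m) from the camera.
print(distance_from_projection(150.0, 75.0, 800.0))  # -> 1600.0
```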

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, the advances in deep learning and computational imaging contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities related to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students with a holistic view of the field.

    What Does 2D Geometric Information Really Tell Us About 3D Face Shape?

    A face image contains geometric cues in the form of configurational information (semantically meaningful landmark points and contours). In this thesis, we explore to what degree such 2D geometric information allows us to estimate 3D face shape. First, we focus on the problem of fitting a 3D morphable model to single face images using only sparse geometric features. We propose a novel approach that explicitly computes hard correspondences which allow us to treat the model edge vertices as known 2D positions, for which optimal pose or shape estimates can be linearly computed. Moreover, we show how to formulate this shape-from-landmarks problem as a separable nonlinear least squares optimisation. Second, we show how a statistical model can be used to spatially transform input data as a module within a convolutional neural network. This is an extension of the original spatial transformer network in that we are able to interpret and normalise 3D pose changes and self-occlusions. We show that the localiser can be trained using only simple geometric loss functions on a relatively small dataset yet is able to perform robust normalisation on highly uncontrolled images. We consider another extension in which the model itself is also learnt. The final contribution of this thesis lies in exploring the limits of 2D geometric features and characterising the resulting ambiguities. 2D geometric information only provides a partial constraint on 3D face shape. In other words, face landmarks or occluding contours are an ambiguous shape cue. Two faces with different 3D shape can give rise to the same 2D geometry, particularly as a result of perspective transformation when camera distance varies. We derive methods to compute these ambiguity subspaces, demonstrate that they contain significant shape variability and show that these ambiguities occur in real-world datasets
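
    To make the shape-from-landmarks idea concrete, one common way to write the fitting objective (notation and regularisation assumed here for illustration, not taken from the thesis) is as a separable nonlinear least squares problem over shape parameters and pose:

```latex
\min_{\boldsymbol{\alpha},\,\mathbf{R},\,\mathbf{t}}\;
\sum_{i=1}^{L}
\bigl\|\,
\mathbf{x}_i \;-\; \Pi\!\bigl(\mathbf{R}\,(\bar{\mathbf{s}}_i + \mathbf{B}_i\boldsymbol{\alpha}) + \mathbf{t}\bigr)
\,\bigr\|^2
\;+\;
\lambda\,\|\boldsymbol{\alpha}\|^2
```

    Here x_i are the observed 2D landmarks, s̄_i and B_i the morphable-model mean and basis at the corresponding vertices, Π the camera projection, and α the shape parameters. With the pose (R, t) fixed and a linear projection, the inner problem in α is linear, which is what makes the formulation separable.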

    Realtime Face Tracking and Animation

    Capturing and processing human geometry, appearance, and motion is at the core of computer graphics, computer vision, and human-computer interaction. The high complexity of human geometry and motion dynamics, and the high sensitivity of the human visual system to variations and subtleties in faces and bodies, make the 3D acquisition and reconstruction of humans in motion a challenging task. Digital humans are often created through a combination of 3D scanning, appearance acquisition, and motion capture, leading to stunning results in recent feature films. However, these methods typically require complex acquisition systems and substantial manual post-processing. As a result, creating and animating high-quality digital avatars entails long turn-around times and substantial production costs. Recent technological advances in RGB-D devices, such as Microsoft Kinect, brought new hope for realtime, portable, and affordable systems capable of capturing facial expressions as well as hand and body motions. RGB-D devices typically capture an image and a depth map, which makes it possible to formulate the motion tracking problem as a 2D/3D non-rigid registration of a deformable model to the input data. We introduce a novel face tracking algorithm that combines geometry and texture registration with pre-recorded animation priors in a single optimization, leading to unprecedented face tracking quality on a low-cost consumer-level device. The main drawback of this approach in the context of consumer applications is the need for offline user-specific training: robust and efficient tracking is achieved by building an accurate 3D expression model of the user's face, who is scanned in a predefined set of facial expressions. We extended this approach to remove the need for user-specific training or calibration, or any other form of manual assistance, by modeling a user-specific dynamic 3D face model online. To complement the realtime face tracking and modeling algorithm, we developed a novel system for animation retargeting that learns a high-quality mapping between motion capture data and arbitrary target characters. We addressed one of the main challenges of existing example-based retargeting methods, the need for a large number of accurate training examples to define the correspondence between source and target expression spaces, and showed that this number can be significantly reduced by leveraging the information contained in unlabeled data, i.e. facial expressions in the source or target space without corresponding poses. Finally, we present a novel realtime physics-based animation technique that can simulate a large range of deformable materials such as fat, flesh, hair, or muscles. This approach could be used to produce more lifelike animations by enhancing animated avatars with secondary effects. We believe that the realtime face tracking and animation pipeline presented in this thesis has the potential to inspire future research in the area of computer-generated animation. Several ideas presented in this thesis have already been successfully used in industry, and this work gave birth to the startup company faceshift AG.
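
    As a rough sketch of how geometry, texture, and animation priors can be combined in a single optimization (the weights and term names are assumed for illustration, not the thesis's exact formulation), the per-frame tracking energy minimized over the face parameters p might take the form:

```latex
E(\mathbf{p}) \;=\;
w_{\mathrm{geo}}\, E_{\mathrm{geo}}(\mathbf{p})
\;+\;
w_{\mathrm{tex}}\, E_{\mathrm{tex}}(\mathbf{p})
\;+\;
w_{\mathrm{prior}}\, E_{\mathrm{prior}}(\mathbf{p})
```

    where E_geo measures the fit of the deformed model to the captured depth map, E_tex the photometric fit to the RGB image, and E_prior the plausibility of the current expression under the pre-recorded animation priors.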

    Recent Application in Biometrics

    In recent years, a number of recognition and authentication systems based on biometric measurements have been proposed. Algorithms and sensors have been developed to acquire and process many different biometric traits. Moreover, biometric technology is being used in novel ways, with potential commercial and practical implications for our daily activities. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in biometrics. The topics covered in this book reflect both aspects of this development. They include biometric sample quality, privacy-preserving and cancellable biometrics, contactless biometrics, novel and unconventional biometrics, and the technical challenges of implementing the technology in portable devices. The book consists of 15 chapters. It is divided into four sections, namely biometric applications on mobile platforms, cancelable biometrics, biometric encryption, and other applications. The book was reviewed by editors Dr. Jucheng Yang and Dr. Norman Poh. We deeply appreciate the efforts of our guest editors: Dr. Girija Chetty, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, as well as a number of anonymous reviewers.

    The words of the body: psychophysiological patterns in dissociative narratives

    Trauma has severe consequences on both psychological and somatic levels, even affecting genetic expression and the cell's DNA repair ability. A key mechanism in the understanding of clinical disorders deriving from trauma is identified in dissociation, a primitive defense against the fragmentation of the self originated by overwhelming experiences. The dysregulation of interpersonal patterns due to the traumatic experience, and its detrimental effects on the body, are supported by influential neuroscientific models such as Damasio's somatic markers and Porges' polyvagal theory. On the basis of these premises, and supported by our previous empirical observations on 40 simulated clinical sessions, we discuss the longitudinal process of a brief psychodynamic psychotherapy (16 sessions, weekly frequency) with a patient who suffered a relational trauma. The research design consists of the collection of self-report and projective tests, pre- and post-therapy and after each clinical session, in order to assess personality, empathy, clinical alliance and clinical progress, along with the verbatim analysis of the transcripts through the Psychotherapy Process Q-Set and the Collaborative Interactions Scale. Furthermore, we collected simultaneous psychophysiological measures of the therapeutic dyad: skin conductance and heart rate. Lastly, we employed a computerized analysis of non-verbal behaviors to assess synchrony in posture and gestures. These automated measures are able to highlight moments of affective concordance and discordance, allowing for a deep understanding of the mutual regulation between the patient and the therapist. Preliminary results showed that psychophysiological changes in dyadic synchrony, observed in body movements, skin conductance and heart rate, occurred within sessions during the discussion of traumatic experiences, with levels of attunement that changed in both the therapist and the patient depending on the quality of the emotional representation of the experience. These results point toward an understanding of the relational process in trauma therapy, using an integrative language in which clinical and neurophysiological knowledge can benefit from each other.

    An Investigation of Iris Recognition in Unconstrained Environments

    Iris biometrics is widely regarded as a reliable and accurate method for personal identification, and the continuing advancements in the field have resulted in the technology being widely adopted in recent years and implemented in many different scenarios. Current typical iris biometric deployments, while generally expected to perform well, require a considerable level of co-operation from the system user. Specifically, the physical positioning of the human eye in relation to the iris capture device is a critical factor that can substantially affect the performance of the overall iris biometric system. The work reported in this study explores some of the important issues relating to the capture and identification of iris images at varying positions with respect to the capture device, and in particular presents an investigation into the analysis of iris images captured when the gaze angle of a subject is not aligned with the axis of the camera lens. A reliable method of acquiring off-angle iris images is implemented, together with a study of a database of such images compiled methodically. A detailed analysis of these so-called "off-angle" characteristics is presented, making possible the implementation of new methods that significantly enhance system performance. The research carried out in this study suggests that carefully implementing new training methodologies to improve classification performance can effectively compensate for the problem of off-angle iris images. The research also suggests that acquiring off-angle iris samples during the enrolment process of an iris biometric system, together with the implementation of the developed training configurations, provides an increase in classification performance.
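
    As a minimal sketch of the enrolment idea described above (the feature representation, classifier, and variable names are assumptions for illustration, not the thesis's actual pipeline), pooling off-angle iris samples with frontal ones when training a generic classifier might look like this:

```python
# Illustrative only: enrolment that pools frontal and off-angle iris feature vectors
# before training a classifier, so off-angle probes are represented at training time.
import numpy as np
from sklearn.svm import SVC

def enrol_with_off_angle(frontal_feats, off_angle_feats, frontal_labels, off_angle_labels):
    """Train a classifier on the union of frontal and off-angle iris samples."""
    X = np.vstack([frontal_feats, off_angle_feats])
    y = np.concatenate([frontal_labels, off_angle_labels])
    return SVC(kernel="rbf", gamma="scale").fit(X, y)
```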