
    Predictive biometrics: A review and analysis of predicting personal characteristics from biometric data

    Interest in the exploitation of soft-biometrics information has continued to develop over the last decade or so. In comparison with traditional biometrics, which focuses principally on person identification, the idea of soft-biometrics processing is to study the utilisation of more general information about a system user, information which is not necessarily unique. There are increasing indications that this type of data will have great value in providing complementary information for user authentication. However, the authors have also seen a growing interest in broadening the predictive capabilities of biometric data, encompassing both easily definable characteristics such as subject age and, most recently, `higher-level' characteristics such as emotional or mental states. This study presents a selective review of the predictive capabilities, in the widest sense, of biometric data processing, providing an analysis of the key issues still to be adequately addressed if the concept of predictive biometrics is to be fully exploited in the future.

    Predicting sex as a soft-biometrics from device interaction swipe gestures

    Touch and multi-touch gestures are becoming the most common way to interact with technology such as smartphones, tablets and other mobile devices. The latest touch-screen input capabilities have tremendously increased the quantity and quality of available gesture data, which has led to the exploration of its use in multiple disciplines, from psychology to biometrics. Following research undertaken in similar modalities such as keystroke and mouse-usage biometrics, the present work proposes the use of swipe-gesture data for the prediction of soft biometrics, specifically the user's sex. This paper details the software and protocol used for the data collection, the feature set extracted, and the subsequent machine learning analysis. Within this analysis, the BestFirst feature selection technique and several classification algorithms (naïve Bayes, logistic regression, support vector machine and decision tree) have been tested. The results of this exploratory analysis confirm the possibility of sex prediction from swipe-gesture data, obtaining an encouraging 78% accuracy rate using swipe gestures from two different directions. These results will hopefully encourage further research in this area, where the prediction of soft-biometric traits from swipe-gesture data can play an important role in enhancing authentication processes on touch-screen devices.
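As a rough illustration of the evaluation protocol this abstract describes (not the paper's code: the swipe features and labels below are synthetic placeholders), the four classifier families could be compared under cross-validation like this:

```python
# Hypothetical sketch of sex prediction from swipe-gesture features.
# Feature names and data are illustrative stand-ins, not the study's set.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
# Illustrative per-swipe features: duration, path length, mean speed,
# mean pressure, start-to-end x-offset.
X = rng.normal(size=(n, 5))
y = rng.integers(0, 2, size=n)  # synthetic binary sex label

classifiers = {
    "naive_bayes": GaussianNB(),
    "logistic": make_pipeline(StandardScaler(), LogisticRegression()),
    "svm": make_pipeline(StandardScaler(), SVC()),
    "tree": DecisionTreeClassifier(random_state=0),
}
for name, clf in classifiers.items():
    # Mean 5-fold cross-validated accuracy for each classifier.
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.2f}")
```

On real swipe data a feature-selection step (such as the BestFirst search the paper names) would precede this comparison.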

    Identification of Persons and Several Demographic Features based on Motion Analysis of Various Daily Activities using Wearable Sensors

    In recent years, there has been increasing interest in using the capabilities of wearable sensors, including accelerometers, gyroscopes and magnetometers, to recognize individuals while they undertake a set of normal daily activities. The past few years have seen considerable research exploring person recognition using wearable sensing devices, due to its significance in different applications, including security and human-computer interaction. This thesis explores the identification of subjects, and of multiple related biometric demographic attributes, based on the motion data of normal daily activities gathered using wearable sensor devices. First, it studies the recognition of 18 subjects based on motion data of 20 daily living activities using six wearable sensors affixed to different body locations. Next, it investigates the task of classifying various biometric demographic features: age, gender, height, and weight, based on motion data of various activities gathered using two accelerometer sensors and one gyroscope sensor. Initially, different significant parameters that impact the subjects' recognition success rates are investigated. These include studying the performance of the three sensor sources: accelerometer, gyroscope, and magnetometer, and the impact of their combinations. Furthermore, the impact of the number of sensors mounted at different body positions, and the best body position at which to mount sensors, are also studied. Next, the analysis explores which activities are more suitable for subject recognition and, lastly, the recognition success rates and mutual confusion among individuals. In addition, the impact of several fundamental factors on the classification performance of different demographic features using motion data collected from three sensors is studied.
Those factors include the performance of feature sets extracted from both the time and frequency domains, feature selection, and individual versus multiple sensor sources. The key findings are: (1) Features extracted from all three sensor sources provide the highest subject recognition accuracy. (2) Recognition accuracy is affected by body position and by the number of sensors: the ankle, chest, and thigh positions outperform other positions, and accuracy shows diminishing returns as the number of sensors increases. (3) Sedentary activities such as watching TV, texting on the phone, writing with a pen, and using a PC produce higher classification results and distinguish persons efficiently due to the absence of motion noise in the signal. (4) Identifiability is not uniformly distributed across subjects. (5) According to the classification results for the biometric features considered, both full and selected feature sets derived from all three sources (two accelerometers and a gyroscope) provide higher classification accuracy for all biometric features than features derived from individual sensors or pairs of sensors. (6) Under all configurations and for all biometric features classified, the time-domain features examined always outperformed the frequency-domain features; combining the two sets led to no increase in classification accuracy over the time domain alone.
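The time-domain feature sets the findings refer to are typically computed over fixed-length windows of the raw signal. A minimal sketch (the window size, sampling rate, and feature list here are illustrative, not the thesis's actual choices):

```python
# Illustrative time-domain feature extraction from one window of
# tri-axial accelerometer data (not the thesis's implementation).
import numpy as np

def time_domain_features(window):
    """window: (n_samples, 3) array of x/y/z acceleration."""
    feats = []
    for axis in range(window.shape[1]):
        s = window[:, axis]
        feats += [
            s.mean(),                   # mean
            s.std(),                    # standard deviation
            s.min(),                    # minimum
            s.max(),                    # maximum
            np.sqrt(np.mean(s ** 2)),   # root mean square
        ]
    # Signal magnitude area across the three axes.
    feats.append(np.mean(np.sum(np.abs(window), axis=1)))
    return np.array(feats)

rng = np.random.default_rng(1)
window = rng.normal(size=(128, 3))  # e.g. ~2.5 s at 50 Hz (assumed rate)
fv = time_domain_features(window)
print(fv.shape)  # (16,) = 5 statistics x 3 axes + 1 magnitude feature
```

One such vector per window, concatenated across sensors and body positions, would then feed the subject or demographic classifier.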

    Machine learning techniques for identification using mobile and social media data

    Networked access and mobile devices provide near-constant data generation and collection. Users, environments, and applications each generate different types of data; from data voluntarily posted on social networks to data collected by sensors on mobile devices, it is becoming trivial to access big data caches. Processing sufficiently large amounts of data yields inferences that can be characterized as privacy-invasive. In order to address privacy risks, we must understand the limits of the data, exploring relationships between variables and how the user is reflected in them. In this dissertation we look at data collected from social networks and sensors to identify some aspect of the user or their surroundings. In particular, we find that from social media metadata we can identify individual user accounts, and that from magnetic field readings we can identify both the (unique) cellphone device owned by the user and their coarse-grained location. In each project we collect real-world datasets and apply supervised learning techniques, particularly multi-class classification algorithms, to test our hypotheses. We use both leave-one-out and k-fold cross-validation to reduce bias in the results. Throughout the dissertation we find that unprotected data reveals sensitive information about users. Each chapter also contains a discussion of possible obfuscation techniques or countermeasures and their effectiveness with regard to the conclusions we present. Overall, our results show that deriving information about users is attainable and that, for each of these results, users would have little if any indication that any analysis was taking place.
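The two evaluation schemes the dissertation names, k-fold and leave-one-out cross-validation, can be sketched side by side (using a standard toy dataset and classifier in place of the dissertation's real-world data and models):

```python
# Hedged sketch of the evaluation protocol: multi-class classification
# scored under both 10-fold and leave-one-out cross-validation.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (LeaveOneOut, StratifiedKFold,
                                     cross_val_score)

X, y = load_iris(return_X_y=True)  # toy stand-in for the collected data
clf = RandomForestClassifier(n_estimators=50, random_state=0)

# k-fold: each fold holds out ~1/k of the data, stratified by class.
kfold_acc = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=10)).mean()

# Leave-one-out: one sample held out per fit; n fits in total.
loo_acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

print(f"10-fold: {kfold_acc:.2f}  LOO: {loo_acc:.2f}")
```

Leave-one-out gives a nearly unbiased accuracy estimate at the cost of fitting the model once per sample, which is why both schemes are often reported together.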

    Handbook of Digital Face Manipulation and Detection

    This open access book provides the first comprehensive collection of studies on the hot topic of digital face manipulation, such as DeepFakes, Face Morphing, or Reenactment. It combines the research fields of biometrics and media forensics, including contributions from academia and industry. Appealing to a broad readership, introductory chapters provide an overview of the topic for readers wishing to gain a brief grasp of the state of the art. Subsequent chapters, which delve deeper into various research challenges, are oriented towards advanced readers. Moreover, the book provides a good starting point for young researchers as well as a reference guide pointing to further literature. Hence, the primary readership is academic institutions and industry currently involved in digital face manipulation and detection. The book could easily be used as a recommended text for courses in image processing, machine learning, media forensics, biometrics, and the general security area.

    Recent Application in Biometrics

    In recent years, a number of recognition and authentication systems based on biometric measurements have been proposed. Algorithms and sensors have been developed to acquire and process many different biometric traits. Moreover, biometric technology is being used in novel ways, with potential commercial and practical implications for our daily activities. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in biometrics. The topics covered in this book reflect both aspects of development well. They include biometric sample quality, privacy-preserving and cancellable biometrics, contactless biometrics, novel and unconventional biometrics, and the technical challenges of implementing the technology in portable devices. The book consists of 15 chapters and is divided into four sections, namely biometric applications on mobile platforms, cancellable biometrics, biometric encryption, and other applications. The book was reviewed by editors Dr. Jucheng Yang and Dr. Norman Poh. We deeply appreciate the efforts of our guest editors: Dr. Girija Chetty, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, as well as a number of anonymous reviewers.

    Deep Learning Architectures for Heterogeneous Face Recognition

    Face recognition has been one of the most challenging areas of research in biometrics and computer vision. Many face recognition algorithms are designed to address illumination and pose problems for visible face images. In recent years, there has been a significant amount of research in Heterogeneous Face Recognition (HFR). The large modality gap between faces captured in different spectra, as well as the lack of training data, makes HFR quite a challenging problem. In this work, we present different deep learning frameworks to address the problem of matching non-visible face photos against a gallery of visible faces. Algorithms for thermal-to-visible face recognition can be categorized as cross-spectrum feature-based methods or cross-spectrum image synthesis methods. In cross-spectrum feature-based face recognition, a thermal probe is matched against a gallery of visible faces, corresponding to the real-world scenario, in a feature subspace. The second category synthesizes a visible-like image from a thermal image, which can then be used by any commercial visible-spectrum face recognition system. These methods are also beneficial in the sense that the synthesized visible face image can be directly utilized by existing face recognition systems that operate only on visible face imagery. Therefore, using this approach one can leverage existing commercial-off-the-shelf (COTS) and government-off-the-shelf (GOTS) solutions. In addition, the synthesized images can be used by human examiners for different purposes. There are some informative traits, such as age, gender, ethnicity, race, and hair color, which are not distinctive enough for recognition on their own but can still act as complementary information to primary traits such as the face and fingerprint. These traits, known as soft biometrics, can improve recognition algorithms while being much cheaper and faster to acquire.
They can be directly used in a unimodal system for some applications. Usually, soft-biometric traits have been utilized jointly with hard biometrics (the face photo) for different tasks, in the sense that they are assumed to be available during both the training and testing phases. In our approaches we look at this problem in a different way. We consider the case where soft-biometric information does not exist during the testing phase, and our method predicts it directly in a multi-tasking paradigm. There are situations in which training data comes equipped with additional information that can be modeled as an auxiliary view of the data but that, unfortunately, is not available during testing. This is the Learning Using Privileged Information (LUPI) scenario. We introduce a novel framework based on deep learning techniques that leverages the auxiliary view to improve the performance of the recognition system. We do so by introducing a formulation that is general, in the sense that it can be used with any visual classifier. Every use of auxiliary information has been validated extensively using publicly available benchmark datasets, and several new state-of-the-art accuracy values have been set. Example application domains include visual object recognition from RGB images and from depth data, handwritten digit recognition, and gesture recognition from video. We also design a novel aggregation framework which optimizes landmark locations directly, using only one image and without requiring any extra prior, which leads to robust alignment under arbitrary face deformations. Three different approaches are employed to generate the manipulated faces, two of which perform the manipulation via adversarial attacks to fool a face recognizer. This step can be decoupled from our framework and potentially used to enhance other landmark detectors. Aggregation of the manipulated faces in the different branches of the proposed method leads to robust landmark detection.
Finally, we focus on generative adversarial networks, a very powerful tool for synthesizing visible-like images from non-visible images. The main goal of a generative model is to approximate the true data distribution, which is not known. In general, the choice of model for the density function is challenging. Explicit models have the advantage of explicitly calculating probability densities. Two well-known approaches, the Generative Adversarial Network (GAN) and the Variational AutoEncoder (VAE), instead try to model the data distribution implicitly. The VAE tries to maximize a lower bound on the data likelihood, while a GAN performs a minimax game between two players during its optimization. GANs overlook the explicit data density characteristics, which leads to undesirable quantitative evaluations and to mode collapse, causing the generator to create similar-looking images with poor sample diversity. In the last chapter of the thesis, we address this issue within the GAN framework.
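The two objectives contrasted above can be written in their standard textbook forms (these are the generic formulations, not notation specific to this thesis). The GAN minimax game between generator G and discriminator D is

```latex
\min_G \max_D \; V(D,G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

while the VAE maximizes the evidence lower bound (ELBO) on the data log-likelihood:

```latex
\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p_\theta(x \mid z)\bigr]
  - \mathrm{KL}\bigl(q_\phi(z \mid x) \,\|\, p(z)\bigr)
```

The GAN objective never evaluates a density of the generated samples, which is the sense in which it "overlooks" the data density and can collapse to a few modes.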