6 research outputs found

    Environmental Effects on Face Recognition in Smartphones

    Get PDF
    Face recognition is convenient for user authentication on smartphones, offering several advantages suited to mobile environments: there is no need to remember a numeric code or password, or to carry tokens. Face verification allows users to unlock the smartphone, pay bills, or check emails simply by looking at the device. However, device mobility also introduces many factors that may influence biometric performance, mainly regarding interaction and environment. Scenarios can vary significantly, as there is no control over the surroundings. Noise can be caused by other people appearing in the background, by varying illumination conditions, by different user poses, and by many other factors. User interaction with biometric systems is fundamental: bad experiences may lead to unwillingness to use the technology. But how does the environment influence the quality of facial images? And does it influence the user experience with face recognition? To answer these questions, our research investigates user-biometric system interaction from a non-traditional point of view: we recreate real-life scenarios to test which factors influence image quality in face recognition and, quantifiably, to what extent. Results indicate the variability in face recognition performance under varying environmental conditions on smartphones

    Face Image Analysis in Mobile Biometric Accessibility Evaluations

    Get PDF
    Smartphone cameras are widely used for biometric authentication. As a result, more and more users experience face recognition in common scenarios (e.g., unlocking phones, banking, access control). One of its advantages is that face recognition requires little interaction with the system (simply looking at the smartphone's screen); it may therefore be useful for people affected by mobility concerns. For this reason, researchers have recently started to conduct mobile biometric evaluations recruiting accessibility populations, with the aim of analysing the factors that, depending on users' capabilities, influence the biometric recognition process. In this paper we focus on sample quality, analysing the face images collected during a mobile biometric accessibility study. The results enable us to understand how users' accessibility concerns influence biometric sample quality, and we discuss possible solutions for mitigating these issues. This assessment was conducted following the recommendations of ISO/IEC TR 29794-5
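The abstract above analyses face-image sample quality, following ISO/IEC TR 29794-5. As a minimal illustration of the kind of per-sample check such an analysis involves, the sketch below computes mean brightness and RMS contrast of a greyscale image and flags samples that are too dark, too bright, or too flat. The thresholds and the choice of metrics are illustrative assumptions, not the measures defined in the standard.

```python
def brightness_contrast(pixels):
    """Mean brightness and RMS contrast of a greyscale image (0-255 values)."""
    n = len(pixels)
    mean = sum(pixels) / n
    rms = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
    return mean, rms

def quality_ok(pixels, bright_range=(60, 200), min_contrast=20.0):
    """Flag a facial sample whose exposure or contrast looks unusable.

    The threshold values here are hypothetical, chosen for illustration only.
    """
    mean, rms = brightness_contrast(pixels)
    return bright_range[0] <= mean <= bright_range[1] and rms >= min_contrast

# A well-exposed sample passes; an underexposed, flat one does not.
well_lit = [100, 150] * 50
dark = [10] * 100
```

In a real evaluation these simple statistics would be replaced by the standard's defined quality components, but the accept/reject gating logic has the same shape.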

    Sensing Movement on Smartphone Devices to Assess User Interaction for Face Verification

    Get PDF
    Unlocking and protecting smartphone devices has become easier with the introduction of biometric face verification, which promises a secure and quick authentication solution to prevent unauthorised access. However, many challenges remain for this biometric modality in a mobile context, where the user's posture and the capture device are not constrained. This research proposes a method to assess user interaction by analysing sensor data collected in the background of smartphone devices during verification sample capture. From accelerometer data, we extract magnitude variations and angular acceleration for pitch, roll, and yaw (rotation around the smartphone's x-, y-, and z-axes, respectively) as features describing the amplitude and number of movements during facial image capture. Results from this experiment demonstrate that good sample quality and high biometric performance can be ensured by applying an appropriate threshold to regulate the amplitude of smartphone movement during facial image capture. Moreover, the results suggest that better-quality images are obtained when users spend more time positioning the smartphone before taking an image
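The method above derives movement features from accelerometer data and gates captures with an amplitude threshold. A minimal sketch of that idea, assuming raw (ax, ay, az) readings and an arbitrary illustrative threshold (the paper's actual feature set and threshold values are not reproduced here):

```python
import math

def movement_features(samples):
    """Summarise device movement during a capture window.

    samples: list of (ax, ay, az) accelerometer readings in m/s^2.
    Returns the peak-to-peak variation of the acceleration magnitude
    and a rough movement count (sign changes in the magnitude trend).
    """
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    variation = max(mags) - min(mags)
    diffs = [b - a for a, b in zip(mags, mags[1:])]
    movements = sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)
    return variation, movements

def accept_capture(samples, max_variation=0.5):
    """Gate a facial capture: accept only if the device was held steady.

    max_variation is a hypothetical threshold, not the study's value.
    """
    variation, _ = movement_features(samples)
    return variation <= max_variation

# A steady hold (magnitude near gravity, ~9.81) passes; a shaky one does not.
steady = [(0.0, 0.0, 9.81 + 0.01 * (i % 3)) for i in range(50)]
shaky = [(0.0, 0.0, 9.81 + (1.0 if i % 2 else -1.0)) for i in range(50)]
```

The same gating pattern extends naturally to the angular-acceleration features (pitch, roll, yaw) described in the abstract.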

    Exploring Audio Sensing in Detecting Social Interactions Using Smartphone Devices

    Get PDF
    In recent years, the fast proliferation of smartphone devices has provided powerful and portable platforms for sensing systems that can run continuously and provide feedback in real time. Mobile crowd-sensing of human behaviour is an emerging computing paradigm that poses the challenge of sensing everyday social interactions performed by people who carry smartphone devices. In this paradigm, typical smartphone sensors such as the microphone are used to infer social relationships between people in diverse social settings, where environmental factors can be dynamic and building infrastructure can vary. Typical approaches to detecting social interactions between people use co-location as a proxy for real-world interaction. Such approaches can under-perform in challenging situations where multiple social interactions occur in close proximity to each other, for example when people are in a queue at the supermarket but not part of the same social interaction. Other approaches suffer from the limitation that all participants of a social interaction must carry a smartphone device at all times and each smartphone must have the sensing app installed. The problem here is the feasibility of the sensing system, which relies heavily on each participant's smartphone acting as a node within a social graph, connected by edges weighted by proximity between devices; when users uninstall the app or disable background sensing, the system is unable to accurately determine the correct number of participants. In this thesis, we present two novel approaches to detecting co-located social interactions using smartphones. The first relies on WiFi and audio signals to distinguish social groups interacting within a few metres of each other with 88% precision.
We orchestrated preliminary experiments using WiFi as a proxy for co-location between people who are socially interacting. Initial results showed that in more challenging scenarios, WiFi alone is not accurate enough to determine whether people are interacting within the same social group. We then used audio as a second modality, capturing the sound patterns of conversations to identify and segment social groups in close proximity to each other. Through a range of real-world experiments (social interactions in meeting, coffee shop, and conference scenarios), we demonstrate a technique that combines WiFi fingerprinting with sound fingerprinting to identify these social groups. We built a system that performs well, then optimised power consumption and improved performance to 88% precision in the most challenging scenarios using duty cycling and data-averaging techniques. The second approach explores the feasibility of detecting social interactions without requiring all social contacts to carry a sensing device. This work explores supervised and unsupervised deep learning techniques before settling on an Autoencoder model for a speaker identification task. We demonstrate how machine learning can be applied to audio data collected from a single device as a speaker identification framework. Speech is used as input to our Autoencoder model and classified against a list of "social contacts" to determine whether the user has spoken to a person before. By doing this, the system can count the number of social contacts belonging to the user and develop a database of common social contacts.
Using 100 randomly generated social conversations and state-of-the-art deep learning techniques, we demonstrate how this system can accurately distinguish new and existing speakers in a dataset of voices and count the number of daily social interactions a user encounters, with a precision of 75%. We then apply hyperparameter optimisation to ensure the model is well suited to the task. Unlike most systems in the literature, this approach works without modifying the existing infrastructure of a building, and without requiring all participants to install the same app
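The thesis above classifies speech against a list of "social contacts" to decide whether a speaker has been heard before. A minimal sketch of that matching step, assuming the Autoencoder has already reduced speech to fixed-length embedding vectors (the embeddings, contact names, and similarity threshold below are all hypothetical):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def identify(embedding, contacts, threshold=0.8):
    """Match a speech embedding against enrolled social contacts.

    contacts: dict mapping contact name -> reference embedding.
    Returns the best-matching name, or None for a previously unheard speaker.
    The threshold is an illustrative value, not one from the thesis.
    """
    best_name, best_sim = None, -1.0
    for name, ref in contacts.items():
        sim = cosine(embedding, ref)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None

# Hypothetical enrolled contacts with toy 3-dimensional embeddings.
contacts = {"alice": [1.0, 0.0, 0.0], "bob": [0.0, 1.0, 0.0]}
```

An unmatched result (None) would trigger enrolment of a new contact, which is how the described system grows its database of common social contacts over time.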

    A Performance Assessment Framework for Mobile Biometrics

    Get PDF
    This project aims to develop and explore a robust framework for assessing biometric systems on mobile platforms, where data is often collected in unconstrained, potentially challenging environments. The framework enables performance assessment for a particular platform, biometric modality, usage environment, user base, and required security level. The ubiquity of mobile devices such as smartphones and tablets has increased access to Internet-based services across a variety of scenarios and environments. Citizens use mobile platforms for an ever-expanding set of services and interactions, often transferring personal information and conducting financial transactions. Accurate identity authentication for physical access to the device and service is therefore critical to ensure the security of the individual, the information, and the transaction. Biometrics provides an established alternative to conventional authentication methods. Mobile devices offer considerable opportunities to utilise biometric data from an enhanced range of sensors alongside temporal information on the use of the device itself. For example, cameras and dedicated fingerprint sensors can capture front-line physiological biometric samples (already used for device log-on and payment authorisation schemes such as Apple Pay), alongside voice capture using conventional microphones. Understanding the performance of these biometric modalities is critical to assessing their suitability for deployment. Providing a robust performance and security assessment given a set of deployment variables is essential to ensure appropriate security and accuracy. Conventional biometric testing is typically performed in controlled, constrained environments that fail to encapsulate the daily (and developing) use of mobile systems. This thesis aims to develop an understanding of biometric performance on mobile devices.
The impact of different mobile platforms, and the range of environmental conditions in use, on the accuracy, usability, security, and utility of biometrics is poorly understood. This project will also examine the application and performance of mobile biometrics while the user is in motion
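A performance assessment framework of the kind described above ultimately reduces comparison scores to standard error rates. As a small illustration (with made-up score lists, and assuming the common convention that a higher score means a stronger match), the sketch below computes the false non-match rate and false match rate at a chosen decision threshold:

```python
def error_rates(genuine, impostor, threshold):
    """False non-match rate (FNMR) and false match rate (FMR) at a threshold.

    genuine: comparison scores from same-person attempts.
    impostor: comparison scores from different-person attempts.
    """
    fnmr = sum(1 for s in genuine if s < threshold) / len(genuine)
    fmr = sum(1 for s in impostor if s >= threshold) / len(impostor)
    return fnmr, fmr

# Hypothetical scores from a small trial.
genuine = [0.9, 0.8, 0.75, 0.6]
impostor = [0.4, 0.55, 0.2, 0.1]
```

Sweeping the threshold over such score sets trades FNMR against FMR, which is how a framework can report the operating point matching a required security level for a given platform, modality, and environment.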

    Environmental effects on face recognition in smartphones

    No full text
    Face recognition is convenient for user authentication on smartphones, offering several advantages suited to mobile environments: there is no need to remember a numeric code or password, or to carry tokens. Face verification allows users to unlock the smartphone, pay bills, or check emails simply by looking at the device. However, device mobility also introduces many factors that may influence biometric performance, mainly regarding interaction and environment. Scenarios can vary significantly, as there is no control over the surroundings. Noise can be caused by other people appearing in the background, by varying illumination conditions, by different user poses, and by many other factors. User interaction with biometric systems is fundamental: bad experiences may lead to unwillingness to use the technology. But how does the environment influence the quality of facial images? And does it influence the user experience with face recognition? To answer these questions, our research investigates user-biometric system interaction from a non-traditional point of view: we recreate real-life scenarios to test which factors influence image quality in face recognition and, quantifiably, to what extent. Results indicate the variability in face recognition performance under varying environmental conditions on smartphones