11 research outputs found

    Deep Feature-based Face Detection on Mobile Devices

    Full text link
    We propose a deep feature-based face detector for mobile devices that detects the user's face as acquired by the front-facing camera. The proposed method is able to detect faces in images containing extreme pose and illumination variations as well as partial faces. The main challenge in developing deep feature-based algorithms for mobile devices is the constrained nature of the mobile platform and the unavailability of CUDA-enabled GPUs on such devices. Our implementation takes into account the special nature of the images captured by the front-facing camera and exploits the GPUs present in mobile devices without CUDA-based frameworks to meet these challenges. Comment: ISBA 201
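
The abstract does not give implementation details, but scoring candidate windows and then pruning overlaps with non-maximum suppression is the standard final stage of a feature-based detector. A minimal sketch; the box format, thresholds, and function names are illustrative assumptions, not the paper's code:

```python
# Hypothetical sketch of the post-processing stage of a window-based face
# detector: greedy non-maximum suppression over scored candidate boxes.
# Boxes are (x, y, w, h); all thresholds are illustrative.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep highest-scoring boxes, dropping any box whose overlap with an
    already-kept box exceeds thresh. Returns kept indices, best first."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```

On a mobile device the window scores themselves would come from the GPU-evaluated deep features the paper describes; only this cheap pruning step needs to run on the CPU.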

    Active User Authentication for Smartphones: A Challenge Data Set and Benchmark Results

    Full text link
    In this paper, automated user verification techniques for smartphones are investigated. A unique non-commercial dataset, the University of Maryland Active Authentication Dataset 02 (UMDAA-02), is introduced for multi-modal user authentication research. This paper focuses on three sensors - the front camera, the touch sensor, and the location service - while providing a general description of other modalities. Benchmark results for face detection, face verification, touch-based user identification, and location-based next-place prediction are presented; they indicate that more robust methods, fine-tuned to the mobile platform, are needed to achieve satisfactory verification accuracy. The dataset will be made available to the research community to promote further research. Comment: 8 pages, 12 figures, 6 tables. Best poster award at BTAS 201

    A survey of face recognition techniques under occlusion

    Get PDF
    The limited capacity to recognize faces under occlusion is a long-standing problem that presents a unique challenge for face recognition systems and even for humans. Occlusion has received less research attention than other challenges such as pose variation and expression changes. Nevertheless, occluded face recognition is imperative to exploit the full potential of face recognition in real-world applications. In this paper, we restrict the scope to occluded face recognition. First, we explore what the occlusion problem is and what inherent difficulties can arise. As part of this review, we introduce face detection under occlusion, a preliminary step in face recognition. Second, we present how existing face recognition methods cope with the occlusion problem and classify them into three categories: 1) occlusion-robust feature extraction approaches, 2) occlusion-aware face recognition approaches, and 3) occlusion-recovery-based face recognition approaches. Furthermore, we analyze the motivations, innovations, pros and cons, and performance of representative approaches for comparison. Finally, future challenges and method trends in occluded face recognition are thoroughly discussed.
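
As a rough illustration of the first category (occlusion-robust feature extraction), matching can be restricted to face regions not flagged as occluded. A minimal sketch; the block-wise features, occlusion mask, and averaging fusion rule are illustrative assumptions, not taken from any method in the survey:

```python
# Hypothetical sketch: compare two faces block by block, skipping blocks
# that an (assumed, external) occlusion detector has flagged. Features
# here are plain lists of floats; real systems would use deep features.
import math

def cosine(u, v):
    """Cosine similarity of two feature vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def occlusion_aware_score(probe_blocks, gallery_blocks, occluded):
    """Average per-block similarity over blocks not marked occluded.
    Returns 0.0 if every block is occluded (no evidence either way)."""
    visible = [i for i in range(len(probe_blocks)) if not occluded[i]]
    if not visible:
        return 0.0
    return sum(cosine(probe_blocks[i], gallery_blocks[i])
               for i in visible) / len(visible)
```

The recovery-based category in the survey takes the opposite route: it reconstructs the occluded blocks first and then matches the whole face.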

    Leveraging user-related internet of things for continuous authentication: a survey

    Get PDF
    Among all Internet of Things (IoT) devices, a subset are related to users. Leveraging these user-related IoT elements, it is possible to ensure the identity of the user over a period of time, thus avoiding impersonation. This need is known as Continuous Authentication (CA). Since 2009, a plethora of IoT-based CA academic research and industrial contributions have been proposed. We offer a comprehensive overview of 58 research papers covering the main components of such a CA system. The status of the industry is studied as well, covering 32 market contributions, research projects, and related standards. Lessons learned, challenges, and open issues to foster further research in this area are finally presented. This work was supported by the MINECO grant TIN2016-79095-C2-2-R (SMOG-DEV) and by the CAM grants S2013/ICE-3095 (CIBERDINE) and P2018/TCS4566 (CYNAMON-CM), both co-funded with European FEDER funds
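
Continuous authentication systems of the kind this survey covers typically fuse a stream of per-event match scores into a running confidence and lock the device when it drops below a threshold. A hedged sketch; the EWMA fusion rule, parameter values, and lock threshold are illustrative choices, not taken from any surveyed system:

```python
# Hypothetical sketch of score fusion for continuous authentication:
# an exponentially weighted moving average of per-event scores in [0, 1]
# (e.g. one score per touch gesture or camera frame). All parameters
# are illustrative.

def continuous_auth(scores, alpha=0.3, lock_below=0.4, start=1.0):
    """Fold a stream of authentication scores into a running confidence.
    Returns the index of the event at which the device would lock
    (confidence fell below lock_below), or None if it never locks."""
    conf = start  # session begins fully trusted, e.g. after a PIN unlock
    for i, s in enumerate(scores):
        conf = alpha * s + (1 - alpha) * conf  # recent evidence weighs more
        if conf < lock_below:
            return i
    return None
```

The smoothing keeps a single noisy low score from locking out the genuine user, while a sustained run of low scores (an impostor taking over the session) drives the confidence down within a few events.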

    Clasificador de atributos faciales a partir de imágenes en entornos no controlados

    Full text link
    This final degree project studies, develops, and evaluates classifiers of facial attributes extracted from images captured under uncontrolled illumination and pose conditions. The work builds on the labeled images of the Facetracer database, which contains a multitude of face images with manual annotations. As a starting point, the state of the art in traditional face recognition systems is reviewed, along with prior work on attribute classification. Next, the extraction of a wide set of facial features is designed and analyzed, including the application of dimensionality-reduction techniques to those features. Several ways of building attribute classifiers from the extracted features are then discussed, leading to a system capable of classifying any input image over a wide range of facial attributes. The experimental work therefore consists of three phases. In the first, each feature choice is evaluated by building and testing local classifiers constructed from it, in order to assess the discriminative value of each proposed feature. In the second, several proposed methods for building a classifier for each attribute under study are evaluated using the features from the previous phase, and the construction of each attribute classifier is defined from these results. In the third, a system is developed that integrates all the attribute classifiers obtained and allows automatic classification of any input image over a wide range of facial attributes. Finally, the conclusions drawn throughout the project are presented and future lines of work are proposed
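
As an illustration of the second experimental phase (building one binary classifier per attribute from extracted features), a nearest-centroid rule is about the simplest possible construction. The project itself evaluates several classifier designs; everything below is an illustrative stand-in, not the project's method:

```python
# Hypothetical sketch: a per-attribute binary classifier that labels a
# face feature vector by its nearest class centroid. Features are plain
# lists of floats; a real system would use the features extracted in
# phase one of the project.

def train_centroids(features, labels):
    """Compute the mean feature vector of each class (1 = attribute
    present, 0 = absent). Returns (positive_centroid, negative_centroid)."""
    pos = [f for f, y in zip(features, labels) if y]
    neg = [f for f, y in zip(features, labels) if not y]
    mean = lambda vs: [sum(dim) / len(vs) for dim in zip(*vs)]
    return mean(pos), mean(neg)

def predict(centroids, f):
    """True if f is closer (squared Euclidean) to the positive centroid."""
    pos, neg = centroids
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return dist(pos) < dist(neg)
```

Phase three of the project then amounts to holding one such trained classifier per attribute and running an input image's features through all of them.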

    Machine Learning of Facial Attributes Using Explainable, Secure and Generative Adversarial Networks

    Get PDF
    "Attributes" are abstractions that humans use to group entities and phenomena that share a common characteristic. In machine learning (ML), attributes are fundamental because they bridge the semantic gap between humans and ML systems. Thus, researchers have been using this concept to transform complicated ML systems into interactive ones. However, training the attribute detectors that are central to attribute-based ML systems can still be challenging. It might be infeasible to gather attribute labels for rare combinations to cover all the corner cases, which can result in weak detectors. Also, it is not clear how to fill in the semantic gap with attribute detectors themselves. Finally, it is not obvious how to interpret the detectors' outputs in the presence of adversarial noise. First, we investigate the effectiveness of attributes for bridging the semantic gap in complicated ML systems. We turn a system that performs continuous authentication of human faces on mobile phones into an interactive attribute-based one. We employ deep multi-task learning in conjunction with multi-view classification using facial parts to tackle this problem. We show how the proposed system decomposition enables efficient deployment of deep networks for authentication on mobile phones with limited resources. Next, we seek to improve the attribute detectors by using conditional image synthesis. We take a generative modeling approach to manipulating the semantics of a given image to provide novel examples. Previous works condition the generation process on binary attribute existence values. We take such approaches one step further by modeling each attribute as a distributed representation in a vector space. These representations allow us not only to toggle the presence of attributes but also to transfer an attribute style from one image to another. 
    Furthermore, we show diverse image generation from the same set of conditions, which was not possible using existing methods with a single dimension per attribute. We then investigate filling in the semantic gap between humans and attribute classifiers by proposing a new way to explain pre-trained attribute detectors. We use adversarial training in conjunction with an encoder-decoder model to learn the behavior of binary attribute classifiers. We show that after our proposed model is trained, one can see which areas of the image contribute to the presence or absence of the target attribute, and also how to change image pixels in those areas so that the attribute classifier's decision changes consistently with human perception. Finally, we focus on protecting the attribute models from uninterpretable behaviors provoked by adversarial perturbations. These behaviors create an unexplainable semantic gap, since they are visually imperceptible. We propose a method based on generative adversarial networks to alleviate this issue. We learn the distribution of the training data used to train the core classifier and use it to detect and denoise test samples. We show that the method is effective for defending facial attribute detectors.
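
The defense described in the last paragraph - learn the clean-data distribution, then flag and denoise test samples that fall off it - can be caricatured with a reconstruction-error test. Here the "learned distribution" is just the training mean, a deliberately crude stand-in for the GAN generator in the thesis; the quantile threshold is likewise an illustrative choice:

```python
# Hypothetical sketch of distribution-based adversarial detection:
# flag inputs whose distance from the clean training data exceeds a
# threshold calibrated on the training set itself. The training mean
# stands in for the learned generative model in the thesis.

def fit_defense(train, quantile=0.95):
    """Estimate the clean-data 'manifold' (here: the mean vector) and a
    reconstruction-error threshold at the given training quantile."""
    mean = [sum(dim) / len(train) for dim in zip(*train)]
    errs = sorted(sum((a - b) ** 2 for a, b in zip(x, mean)) for x in train)
    thresh = errs[min(len(errs) - 1, int(quantile * len(errs)))]
    return mean, thresh

def is_adversarial(defense, x):
    """True if x reconstructs poorly, i.e. lies off the clean data."""
    mean, thresh = defense
    return sum((a - b) ** 2 for a, b in zip(x, mean)) > thresh
```

A real GAN-based defense reconstructs each test sample through the generator, so it can also return the denoised sample rather than merely reject it; this sketch only captures the detection half.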