149 research outputs found

    Active Authentication using an Autoencoder regularized CNN-based One-Class Classifier

    Active authentication refers to the process in which users are unobtrusively monitored and authenticated continuously throughout their interactions with mobile devices. Generally, an active authentication problem is modelled as a one-class classification problem due to the unavailability of data from impostor users: the enrolled user is treated as the target (genuine) class and unauthorized users as unknown (impostor) classes. We propose a convolutional neural network (CNN) based approach to one-class classification in which zero-centered Gaussian noise is used to model the pseudo-negative class and an autoencoder is used to regularize the network to learn meaningful feature representations for one-class data. The overall network is trained using a combination of the cross-entropy and reconstruction error losses. A key feature of the proposed approach is that any pre-trained CNN can be used as the base network for one-class classification. The effectiveness of the proposed framework is demonstrated on three publicly available face-based active authentication datasets, where the proposed method achieves superior performance compared to traditional one-class classification methods. The source code is available at: github.com/otkupjnoz/oc-acnn.
    Comment: Accepted and to appear at AFGR 201
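    As a rough illustration of the training objective described above, the following PyTorch-style sketch combines a cross-entropy term, computed over real features and zero-centered Gaussian pseudo-negatives, with an autoencoder reconstruction term. The module sizes, noise scale, and equal loss weighting are assumptions made for illustration; the authors' implementation is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneClassHead(nn.Module):
    """Minimal sketch of an autoencoder-regularized one-class classifier head.

    A pre-trained CNN backbone is assumed to produce `feat_dim`-dimensional
    features; layer sizes, the noise scale `sigma`, and the equal loss weights
    are illustrative assumptions, not the authors' exact configuration.
    """
    def __init__(self, feat_dim=512, sigma=0.1):
        super().__init__()
        self.sigma = sigma
        # autoencoder branch that regularizes the learned feature space
        self.decoder = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, feat_dim))
        # binary classifier: enrolled user (1) vs. pseudo-negative (0)
        self.classifier = nn.Linear(feat_dim, 2)

    def loss(self, feats):
        # feats: (B, feat_dim) backbone features of the enrolled user's images
        noise = self.sigma * torch.randn_like(feats)      # zero-centered Gaussian pseudo-negatives
        logits = self.classifier(torch.cat([feats, noise], dim=0))
        labels = torch.cat([torch.ones(feats.size(0), dtype=torch.long, device=feats.device),
                            torch.zeros(feats.size(0), dtype=torch.long, device=feats.device)])
        ce = F.cross_entropy(logits, labels)              # cross-entropy term
        recon = F.mse_loss(self.decoder(feats), feats)    # reconstruction (regularization) term
        return ce + recon                                  # combined objective
```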

    Semi-Supervised Specific Emitter Identification Method Using Metric-Adversarial Training

    Specific emitter identification (SEI) plays an increasingly crucial role in both military and civilian scenarios. It refers to the process of discriminating individual emitters from each other by analyzing characteristics extracted from given radio signals. Deep learning (DL) and deep neural networks (DNNs) can learn the hidden features of data and build the classifier automatically for decision making, and have been widely used in SEI research. Given the limited number of labeled training samples and the large number of unlabeled training samples, semi-supervised learning-based SEI (SS-SEI) methods have been proposed. However, few SS-SEI methods focus on extracting discriminative and generalized semantic features of radio signals. In this paper, we propose an SS-SEI method using metric-adversarial training (MAT). Specifically, pseudo labels are introduced into metric learning to enable semi-supervised metric learning (SSML), and an objective function alternately regularized by SSML and virtual adversarial training (VAT) is designed to extract discriminative and generalized semantic features of radio signals. The proposed MAT-based SS-SEI method is evaluated on an open-source, large-scale, real-world automatic dependent surveillance-broadcast (ADS-B) dataset and a WiFi dataset, and is compared with state-of-the-art methods. The simulation results show that the proposed method achieves better identification performance than existing state-of-the-art methods. Specifically, when the ratio of labeled training samples to all training samples is 10%, the identification accuracy is 84.80% on the ADS-B dataset and 80.70% on the WiFi dataset. Our code can be downloaded from https://github.com/lovelymimola/MAT-based-SS-SEI.
    Comment: 12 pages, 5 figures, Journa
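    The sketch below shows, under stated assumptions, how such a semi-supervised objective can be composed: a supervised cross-entropy term on labeled signals, a pseudo-label metric term (standing in for SSML), and a VAT-style consistency term on unlabeled signals. The model interface, the pseudo-label rule, the centroid-based metric loss, the single-step perturbation (in place of VAT's power iteration), and the loss weights are all simplifications, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mat_objective(model, x_lab, y_lab, x_unlab, eps=1e-2, lam_ssml=1.0, lam_vat=1.0):
    # model is assumed to return (logits, embeddings) for a batch of signals
    # supervised cross-entropy on the labeled samples
    logits_lab, emb_lab = model(x_lab)
    sup = F.cross_entropy(logits_lab, y_lab)

    # pseudo-label metric term: unlabeled embeddings are pulled toward the
    # centroid of their (pseudo-)class, labeled ones toward their true class
    with torch.no_grad():
        logits_unlab, _ = model(x_unlab)
        pseudo = logits_unlab.argmax(dim=1)
    _, emb_unlab = model(x_unlab)
    emb = torch.cat([emb_lab, emb_unlab], dim=0)
    lbl = torch.cat([y_lab, pseudo], dim=0)
    classes = lbl.unique()
    centroids = torch.stack([emb[lbl == c].mean(dim=0) for c in classes])
    idx = (lbl.unsqueeze(1) == classes.unsqueeze(0)).float().argmax(dim=1)
    ssml = F.mse_loss(emb, centroids[idx])

    # VAT-style consistency term: predictions should be stable under a small
    # perturbation (a single random step stands in for the adversarial one)
    logits_adv, _ = model(x_unlab + eps * torch.randn_like(x_unlab))
    vat = F.kl_div(F.log_softmax(logits_adv, dim=1),
                   F.softmax(logits_unlab, dim=1), reduction="batchmean")

    return sup + lam_ssml * ssml + lam_vat * vat
```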

    A survey of face recognition techniques under occlusion

    The limited capacity to recognize faces under occlusion is a long-standing problem that presents a unique challenge for face recognition systems and even for humans. The occlusion problem receives less research attention than other challenges such as pose variation and differing expressions. Nevertheless, occluded face recognition is imperative to exploit the full potential of face recognition in real-world applications. In this paper, we restrict the scope to occluded face recognition. First, we explore what the occlusion problem is and what inherent difficulties can arise. As part of this review, we introduce face detection under occlusion, a preliminary step in face recognition. Second, we present how existing face recognition methods cope with the occlusion problem and classify them into three categories: 1) occlusion-robust feature extraction approaches, 2) occlusion-aware face recognition approaches, and 3) occlusion-recovery-based face recognition approaches. Furthermore, we analyze the motivations, innovations, pros and cons, and performance of representative approaches for comparison. Finally, future challenges and method trends of occluded face recognition are thoroughly discussed.

    Face Anti-Spoofing and Deep Learning Based Unsupervised Image Recognition Systems

    One of the main problems of a supervised deep learning approach is that it requires large amounts of labeled training data, which are not always easily available. This PhD dissertation addresses this problem with a novel unsupervised deep learning face verification system called UFace, which does not require labeled training data: it automatically, in an unsupervised way, generates training data from even a relatively small amount of data. The method starts by selecting, in an unsupervised way, the k most similar and k most dissimilar images for a given face image. The dissertation also proposes a new loss function to work with this method: the loss is computed k times for both the similar and the dissimilar images of each input image in order to increase the discriminative power of the feature vectors and to learn inter-class and intra-class face variability. Training is carried out based on the similar and dissimilar input face image vectors, rather than the same training input face image vector, in order to extract face embeddings. UFace is evaluated on four benchmark face verification datasets: Labeled Faces in the Wild (LFW), YouTube Faces (YTF), Cross-Age LFW (CALFW) and Celebrities in Frontal-Profile in the Wild (CFP-FP), achieving accuracies of 99.40%, 96.04%, 95.12% and 97.89%, respectively. Despite being unsupervised, these results are on par with similar but fully supervised methods.
    A second area of research, related to face verification, is face anti-spoofing. State-of-the-art face anti-spoofing systems use either deep learning or manually extracted image quality features. However, many of the existing image quality features used in face anti-spoofing systems do not discriminate well between spoofed and genuine faces, and state-of-the-art systems based on deep learning do not generalize well. To address this, the dissertation proposes a hybrid face anti-spoofing system that combines the strengths of the image quality feature and deep learning approaches. It selects and proposes a set of seven novel no-reference image quality measures that discriminate well between spoofed and genuine faces to complement the deep learning approach, and then proposes two approaches. In the first, the scores from the image quality features are fused with the deep learning classifier scores in a weighted fashion, and the combined score determines whether a given input face image is genuine or spoofed. In the second, the image quality features are concatenated with the deep learning features, and the concatenated feature vector is fed to the classifier to improve the performance and generalization of the anti-spoofing system. Extensive evaluations are conducted on five benchmark face anti-spoofing datasets: Replay-Attack, CASIA-MFSD, MSU-MFSD, OULU-NPU and SiW. Experiments on these datasets show that the proposed system gives better results than several state-of-the-art anti-spoofing systems in many scenarios.
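    To make the first (score-level fusion) approach concrete, the following sketch fuses a deep-learning genuineness score with a weighted combination of image-quality feature scores. The equal default weights, the fusion coefficient alpha, and the decision threshold are illustrative assumptions rather than the dissertation's exact rule.

```python
import numpy as np

def fuse_scores(dl_score, iq_scores, iq_weights=None, alpha=0.5, threshold=0.5):
    """Sketch of weighted score-level fusion for a hybrid anti-spoofing system.

    dl_score  : probability of 'genuine' from the deep learning classifier, in [0, 1]
    iq_scores : per-feature scores from the no-reference image-quality measures,
                each assumed normalized to [0, 1]
    """
    iq_scores = np.asarray(iq_scores, dtype=float)
    if iq_weights is None:
        iq_weights = np.full(iq_scores.shape, 1.0 / len(iq_scores))  # equal weights by default
    iq_combined = float(np.dot(iq_weights, iq_scores))       # weighted image-quality score
    fused = alpha * dl_score + (1.0 - alpha) * iq_combined   # weighted fusion of the two branches
    return ("genuine" if fused >= threshold else "spoofed"), fused

# Example: the deep model rates a face 0.9 genuine, but its quality cues look suspicious
label, score = fuse_scores(0.9, [0.2, 0.3, 0.1, 0.4, 0.2, 0.3, 0.1], alpha=0.5)
```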