
    Active Authentication using an Autoencoder regularized CNN-based One-Class Classifier

    Active authentication refers to the process in which users are unobtrusively monitored and authenticated continuously throughout their interactions with mobile devices. Generally, an active authentication problem is modelled as a one-class classification problem due to the unavailability of data from impostor users. Normally, the enrolled user is considered the target class (genuine) and unauthorized users are considered unknown classes (impostor). We propose a convolutional neural network (CNN) based approach for one-class classification in which zero-centered Gaussian noise and an autoencoder are used to model the pseudo-negative class and to regularize the network to learn meaningful feature representations for one-class data, respectively. The overall network is trained using a combination of the cross-entropy and reconstruction error losses. A key feature of the proposed approach is that any pre-trained CNN can be used as the base network for one-class classification. The effectiveness of the proposed framework is demonstrated using three publicly available face-based active authentication datasets, and it is shown that the proposed method achieves superior performance compared to traditional one-class classification methods. The source code is available at: github.com/otkupjnoz/oc-acnn.
    Comment: Accepted and to appear at AFGR 201
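    The training objective described above (cross-entropy against a Gaussian pseudo-negative class plus an autoencoder reconstruction term) can be sketched roughly as follows. This is an illustrative numpy sketch, not the paper's code; function names, the sigmoid classifier, and the weighting parameter `lam` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_negatives(shape, sigma=1.0):
    """Zero-centered Gaussian noise standing in for the unseen impostor class."""
    return rng.normal(0.0, sigma, size=shape)

def combined_loss(target_feats, recon_feats, logits_target, logits_noise, lam=0.5):
    """Weighted sum of a cross-entropy term (genuine vs. pseudo-negative)
    and an autoencoder reconstruction term on the one-class features."""
    # binary cross-entropy: genuine features labeled 1, Gaussian noise labeled 0
    p_t = 1.0 / (1.0 + np.exp(-logits_target))
    p_n = 1.0 / (1.0 + np.exp(-logits_noise))
    ce = -(np.log(p_t + 1e-9).mean() + np.log(1.0 - p_n + 1e-9).mean()) / 2.0
    # reconstruction error regularizes the learned representation
    rec = np.mean((target_feats - recon_feats) ** 2)
    return lam * ce + (1.0 - lam) * rec
```

    In a real pipeline the logits would come from a classifier head on a pre-trained CNN's features, and both terms would be backpropagated jointly.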

    Mobile security and smart systems


    Perceiving is Believing. Authentication with Behavioural and Cognitive Factors

    Most computer users have experienced login problems such as forgetting passwords, losing token cards and authentication dongles, failing that complicated screen pattern once again, as well as interaction difficulties in usability. Facing the difficulties of non-flexible strong authentication solutions, users tend to react with poor acceptance or to relax the assumed correct use of authentication procedures and devices, rendering the intended security useless. Biometrics can partially solve some of those problems. However, despite the vast research, there is no perfect approach to designing a secure strong authentication procedure, which instead falls into a trade-off between intrusiveness, effectiveness, contextual adequacy and security guarantees. Taking advantage of new technology, recent research on multi-modal, behavioural and cognitive oriented authentication proposals has sought to optimize this trade-off towards precision and convenience, reducing intrusiveness for the same amount of security. But these solutions also fall short with respect to different scenarios. Users currently perform multiple authentications every day, through multiple devices, in a panoply of different situations, involving different resources and diverse usage contexts, with no "better authentication solution" for all possible purposes. The proposed framework enhances recent research in user authentication services with a broader view of the problems involving each solution, towards a usable secure authentication methodology combining and exploring the strengths of each method. It will then be used to prototype instances of new dynamic multifactor models (including novel models of behavioural and cognitive biometrics), materializing the PiB (perceiving is believing) authentication. Ultimately we show how the proposed framework can be smoothly integrated into applications and other authentication services and protocols, namely in the context of SSO Authentication Services and OAuth

    Deep multimodal biometric recognition using contourlet derivative weighted rank fusion with human face, fingerprint and iris images

    The goal of a multimodal biometric recognition system is to make a decision by identifying a person's physiological and behavioural traits. Nevertheless, the decision-making process of a biometric recognition system can be extremely complex due to high-dimensional unimodal features in the temporal domain. This paper describes a deep multimodal biometric system for human recognition using three traits: face, fingerprint and iris. With the objective of reducing the feature vector dimension in the temporal domain, pre-processing is first performed using a Contourlet Transform Model. Next, a Local Derivative Ternary Pattern model is applied to the pre-processed features, where the feature discrimination power is improved by obtaining the coefficients that have maximum variation across pre-processed multimodality features, thereby improving recognition accuracy. Weighted Rank Level Fusion is applied to the extracted multimodal features, which efficiently combines the biometric matching scores from several modalities (i.e. face, fingerprint and iris). Finally, a deep learning framework is presented for improving the recognition rate of the multimodal biometric system in the temporal domain. The results of the proposed multimodal biometric recognition framework were compared with other multimodal methods. These comparisons show that the fusion of face, fingerprint and iris offers significant improvements in the recognition rate of the suggested multimodal biometric system
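    Rank-level fusion as described above can be sketched in a few lines: each modality's match scores are converted to ranks over the enrolled identities, the ranks are combined with per-modality weights, and the identity with the best fused rank wins. This is a generic illustration of weighted rank fusion under assumed conventions (higher score = better match), not the paper's implementation.

```python
import numpy as np

def weighted_rank_fusion(score_matrices, weights):
    """Fuse per-modality match scores by weighting the rank each enrolled
    identity receives in every modality, then picking the best fused rank."""
    fused = np.zeros(len(score_matrices[0]))
    for scores, w in zip(score_matrices, weights):
        # higher score = better match; rank 0 is the top candidate
        order = np.argsort(-scores)
        ranks = np.empty_like(order)
        ranks[order] = np.arange(len(scores))
        fused += w * ranks
    return int(np.argmin(fused))
```

    The weights would typically reflect each modality's standalone accuracy, e.g. giving iris more weight than face if it is the stronger matcher.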

    Data Behind Mobile Behavioural Biometrics – a Survey

    Behavioural biometrics are becoming more and more popular. It is hard to find a sensor embedded in a mobile/wearable device that can’t be exploited to extract behavioural biometric data. In this paper, we investigate the data behind behavioural biometrics and how this data is used in experiments, especially examining papers that introduce new datasets. We do not examine the performance achieved by the algorithms, since a system’s performance is enormously affected by the data used, its amount and quality. Altogether, 32 papers are examined, assessing how often they are cited, whether their databases are published, what modality of data is collected, and how the data is used. We offer a roadmap that should be taken into account when designing behavioural data collection and using collected data. We further look at the General Data Protection Regulation and its significance to scientific research in the field of biometrics. It is possible to conclude that there is a need for publicly available datasets with comprehensive experimental protocols, similar to those established in facial recognition

    Features extraction scheme for behavioral biometric authentication in touchscreen mobile devices

    Common authentication mechanisms in mobile devices, such as passwords and Personal Identification Numbers, have failed to keep up with the rapid pace of challenges associated with the use of ubiquitous devices over the Internet, since they can easily be lost or stolen. Thus, it is important to develop authentication mechanisms that can be adapted to such an environment. Biometric-based person recognition is a good alternative to overcome the difficulties of password and token approaches, since biometrics cannot be easily stolen or forgotten. An important characteristic of biometric authentication is that there is an explicit connection with the user's identity, since biometrics rely entirely on behavioural and physiological characteristics of human beings. A variety of biometric authentication options have emerged so far, all of which can be used on a mobile phone. These options include, but are not limited to, face recognition via camera, fingerprint, voice recognition, and keystroke and gesture recognition via touch screen. Touch gesture behavioural biometrics are commonly used as an alternative to existing traditional biometric mechanisms. However, current touch gesture authentication schemes are fraught with authentication accuracy problems. In fact, the extracted features used in some research on touch gesture schemes are limited to speed, time, position, finger size and finger pressure, and extracting a few touch features from individual touches is not enough to accurately distinguish various users. In this research, behavioural features are extracted from recorded touch screen data and a discriminative classifier is trained on these extracted features for authentication. While the user performs the gesture, the touch screen sensor is leveraged and twelve of the user's finger touch features are extracted. Eighty-four different users participated in this research work; each user drew six gestures, for a total of 504 instances.
The extracted touch gesture features are normalised by scaling the values so that they fall within a small specified range. Thereafter, five different feature selection algorithms were used to choose the most significant feature subset. Six different machine learning classifiers were used to classify each instance in the data set into one of the predefined set of classes. Results from experiments conducted on the proposed touch gesture behavioural biometrics scheme achieved an average False Reject Rate (FRR) of 7.84%, average False Accept Rate (FAR) of 1%, average Equal Error Rate (EER) of 4.02% and authentication accuracy of 91.67%. The comparative results showed that the proposed scheme outperforms other existing touch gesture authentication schemes in terms of FAR, EER and authentication accuracy by 1.67%, 6.74% and 4.65% respectively. The results of this research affirm that user authentication through gestures is promising, highly viable and can be used for mobile devices
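    Two steps from the pipeline above lend themselves to a short sketch: min-max normalisation of the extracted features into a small range, and the Equal Error Rate (the threshold at which FRR and FAR coincide). This is a generic numpy illustration of both, assuming higher scores mean a more genuine-looking sample; it is not the study's code.

```python
import numpy as np

def min_max_scale(features, lo=0.0, hi=1.0):
    """Scale each feature column into [lo, hi] (assumes every column varies)."""
    mn, mx = features.min(axis=0), features.max(axis=0)
    return lo + (features - mn) * (hi - lo) / (mx - mn)

def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds and report the point where the
    False Reject Rate and False Accept Rate are closest."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    gaps = [abs(np.mean(genuine < t) - np.mean(impostor >= t)) for t in thresholds]
    t = thresholds[int(np.argmin(gaps))]
    return (np.mean(genuine < t) + np.mean(impostor >= t)) / 2.0
```

    With perfectly separated score distributions the EER is 0; overlapping genuine and impostor scores push it up towards values like the 4.02% reported above.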

    The 2013 face recognition evaluation in mobile environment

    Automatic face recognition in unconstrained environments is a challenging task. To test current trends in face recognition algorithms, we organized an evaluation of face recognition in a mobile environment. This paper presents the results of 8 different participants using two verification metrics. Most submitted algorithms rely on one or more of three types of features: local binary patterns, Gabor wavelet responses including Gabor phases, and color information. The best results are obtained by UNILJ-ALP, which fused several image representations and feature types, and UC-HU, which learns optimal features with a convolutional neural network. Additionally, we assess the usability of the algorithms on mobile devices with limited resources. © 2013 IEEE
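    Of the three feature types named above, local binary patterns are the simplest to illustrate: each pixel is encoded by thresholding its 8 neighbours against the centre value. The sketch below is the basic 3x3 LBP operator in plain numpy, shown only to make the feature concrete; submitted systems would use tuned variants and histograms of these codes.

```python
import numpy as np

def lbp_pixel(patch):
    """8-neighbour local binary pattern code for one 3x3 patch."""
    center = patch[1, 1]
    # neighbours read clockwise starting at the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    # set bit i when neighbour i is at least as bright as the centre
    return sum(1 << i for i, n in enumerate(neighbours) if n >= center)

def lbp_image(img):
    """LBP code map over the interior pixels of a grayscale image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y - 1, x - 1] = lbp_pixel(img[y - 1:y + 2, x - 1:x + 2])
    return out
```

    Face verification systems typically histogram these codes per image block and compare histograms, rather than using the raw code map.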

    Facial Landmark Detection Evaluation on MOBIO Database

    MOBIO is a bi-modal database that was captured almost exclusively on mobile phones. It aims to improve research into deploying biometric techniques on mobile devices. Research has shown that face and speaker recognition can be performed in a mobile environment. Facial landmark localization aims at finding the coordinates of a set of pre-defined key points in 2D face images. A facial landmark usually has a specific semantic meaning, e.g. nose tip or eye centre, which provides rich geometric information for other face analysis tasks such as face recognition, emotion estimation and 3D face reconstruction. Most facial landmark detection methods are evaluated on still-face databases, such as 300W, AFW, AFLW, or COFW, but seldom use mobile data. Our work is the first to perform facial landmark detection evaluation on mobile still data, i.e., face images from the MOBIO database. About 20,600 face images have been extracted from this audio-visual database and manually labeled with 22 landmarks as ground truth. Several state-of-the-art facial landmark detection methods are adopted to evaluate their performance on these data. The results show that the data from the MOBIO database is quite challenging. This database can be a new challenging one for facial landmark detection evaluation.
    Comment: 13 pages, 10 figure
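    Landmark detection evaluations like the one above are commonly scored with a normalized mean error: the average point-to-point distance between predicted and ground-truth landmarks, divided by a normalising distance such as the inter-ocular distance. The abstract does not name its metric, so this is a sketch of that common choice, not necessarily the paper's.

```python
import numpy as np

def normalized_mean_error(pred, gt, norm_dist):
    """Mean Euclidean landmark error divided by a normalising distance
    (commonly the distance between the two eye centres)."""
    errors = np.linalg.norm(pred - gt, axis=1)  # per-landmark distances
    return errors.mean() / norm_dist
```

    A dataset-level score is then the mean NME over all images, often accompanied by the fraction of images whose NME exceeds a failure threshold.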