PATH: Person Authentication using Trace Histories
In this paper, a solution to the problem of Active Authentication using trace
histories is addressed. Specifically, the task is to perform user verification
on mobile devices using historical location traces of the user as a function of
time. Considering the movement of a human as a Markovian motion, a modified
Hidden Markov Model (HMM)-based solution is proposed. The proposed method,
namely the Marginally Smoothed HMM (MSHMM), utilizes the marginal probabilities
of location and timing information of the observations to smooth-out the
emission probabilities while training. Hence, it can efficiently handle
unforeseen observations during the test phase. The verification performance of
this method is compared to a sequence matching (SM) method, a Markov
Chain-based method (MC), and an HMM with basic Laplace smoothing (HMM-lap).
Experimental results using the location information of the UMD Active
Authentication Dataset-02 (UMDAA02) and the GeoLife dataset are presented. The
proposed MSHMM method outperforms the compared methods in terms of equal error
rate (EER). Additionally, the effects of different parameters on the proposed
method are discussed.
Comment: 8 pages, 9 figures. Best Paper award at IEEE UEMCON 201
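The MSHMM's exact smoothing scheme is given in the paper; as a hedged, simplified illustration of the general idea behind Markov-style trace verification with smoothing (scoring a test trace against a smoothed transition model built from a user's location history), consider the sketch below. The state encoding, plain Laplace smoothing, and toy sequences are illustrative assumptions, not the paper's method:

```python
import numpy as np

def transition_matrix(seq, n_states, alpha=1.0):
    """Laplace-smoothed transition matrix estimated from a discrete state sequence."""
    counts = np.full((n_states, n_states), alpha)  # additive smoothing avoids zero probabilities
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, T):
    """Average per-transition log-likelihood of a test sequence under model T."""
    return np.mean([np.log(T[a, b]) for a, b in zip(seq[:-1], seq[1:])])

train = [0, 1, 2, 1, 0, 1, 2, 2, 1, 0]   # enrolled user's historical location states
T = transition_matrix(train, n_states=3)
genuine = [0, 1, 2, 1, 0]                 # trace following the user's habits
impostor = [2, 0, 0, 2, 0]                # trace with unusual transitions
print(log_likelihood(genuine, T) > log_likelihood(impostor, T))  # True
```

Thresholding such a likelihood score is what yields the accept/reject decision whose trade-off the EER summarizes.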
Active Authentication using an Autoencoder regularized CNN-based One-Class Classifier
Active authentication refers to the process in which users are unobtrusively
monitored and authenticated continuously throughout their interactions with
mobile devices. Generally, an active authentication problem is modelled as a
one class classification problem due to the unavailability of data from the
impostor users. Normally, the enrolled user is considered as the target class
(genuine) and the unauthorized users are considered as unknown classes
(impostor). We propose a convolutional neural network (CNN) based approach for
one class classification in which a zero centered Gaussian noise and an
autoencoder are used to model the pseudo-negative class and to regularize the
network to learn meaningful feature representations for one class data,
respectively. The overall network is trained using a combination of the
cross-entropy and the reconstruction error losses. A key feature of the
proposed approach is that any pre-trained CNN can be used as the base network
for one class classification. Effectiveness of the proposed framework is
demonstrated using three publicly available face-based active authentication
datasets and it is shown that the proposed method achieves superior performance
compared to the traditional one class classification methods. The source code
is available at: github.com/otkupjnoz/oc-acnn.
Comment: Accepted and to appear at AFGR 201
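The paper trains a full CNN end to end; as a hedged numpy sketch of just the loss combination described above (cross-entropy against a zero-centered Gaussian pseudo-negative class plus an autoencoder reconstruction term), with the sigmoid head, the lambda weighting, and the toy data all being illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def combined_loss(features, recon, logits, labels, lam=0.5):
    """Weighted sum of binary cross-entropy and reconstruction (MSE) losses."""
    p = 1.0 / (1.0 + np.exp(-logits))             # sigmoid over classifier logits
    ce = -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    rec = np.mean((features - recon) ** 2)        # autoencoder reconstruction error
    return lam * ce + (1 - lam) * rec

# Target-class features plus a zero-centered Gaussian pseudo-negative batch.
target = rng.normal(loc=2.0, size=(8, 16))
pseudo_neg = rng.normal(loc=0.0, scale=1.0, size=(8, 16))
features = np.vstack([target, pseudo_neg])
labels = np.concatenate([np.ones(8), np.zeros(8)])
logits = features.mean(axis=1)                                 # stand-in for a classifier head
recon = features + rng.normal(scale=0.1, size=features.shape)  # stand-in decoder output
print(round(combined_loss(features, recon, logits, labels), 4))
```

In the actual framework both terms are backpropagated through a shared pre-trained CNN; here they are only evaluated to show how the scalar objective is assembled.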
Active User Authentication for Smartphones: A Challenge Data Set and Benchmark Results
In this paper, automated user verification techniques for smartphones are
investigated. A unique non-commercial dataset, the University of Maryland
Active Authentication Dataset 02 (UMDAA-02) for multi-modal user authentication
research is introduced. This paper focuses on three sensors - front camera,
touch sensor and location service while providing a general description for
other modalities. Benchmark results for face detection, face verification,
touch-based user identification and location-based next-place prediction are
presented, which indicate that more robust methods fine-tuned to the mobile
platform are needed to achieve satisfactory verification accuracy. The dataset
will be made available to the research community for promoting additional
research.
Comment: 8 pages, 12 figures, 6 tables. Best poster award at BTAS 201
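Verification benchmarks of this kind are commonly summarized by the equal error rate (EER), the operating point where the false accept rate equals the false reject rate. A minimal illustrative sketch (the score values are made up; real evaluations sweep thresholds over large score sets):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: error rate at the threshold where false accepts ~= false rejects."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = (1.0, 0.0)                       # (FAR, FRR) pair with the smallest gap so far
    for t in thresholds:
        far = np.mean(impostor >= t)        # impostors wrongly accepted
        frr = np.mean(genuine < t)          # genuine users wrongly rejected
        if abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr)
    return (best[0] + best[1]) / 2

genuine = np.array([0.9, 0.8, 0.85, 0.7, 0.95])   # similarity scores, genuine trials
impostor = np.array([0.2, 0.4, 0.35, 0.6, 0.1])   # similarity scores, impostor trials
print(equal_error_rate(genuine, impostor))        # 0.0: scores perfectly separated
```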
BehavePassDB: Public Database for Mobile Behavioral Biometrics and Benchmark Evaluation
Mobile behavioral biometrics have become a popular topic of research,
reaching promising results in terms of authentication, exploiting a multimodal
combination of touchscreen and background sensor data. However, there is no way
of knowing whether state-of-the-art classifiers in the literature can
distinguish between the notion of user and device. In this article, we present
a new database, BehavePassDB, structured into separate acquisition sessions and
tasks to mimic the most common aspects of mobile Human-Computer Interaction
(HCI). BehavePassDB is acquired through a dedicated mobile app installed on the
subjects' devices, also including the case of different users on the same
device for evaluation. We propose a standard experimental protocol and
benchmark for the research community to perform a fair comparison of novel
approaches with the state of the art. We propose and evaluate a system based on
Long Short-Term Memory (LSTM) architecture with triplet loss and modality
fusion at score level.
Comment: 11 pages, 3 figure
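As a hedged sketch of the two ingredients named above, a hinge-form triplet loss and score-level fusion of per-modality similarity scores (the margin, equal weighting, and toy vectors are illustrative assumptions, not BehavePassDB's exact pipeline):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge triplet loss: pull genuine pairs together, push impostors apart."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def fuse_scores(scores, weights=None):
    """Score-level fusion: weighted mean of per-modality similarity scores."""
    scores = np.asarray(scores, dtype=float)
    w = np.ones_like(scores) if weights is None else np.asarray(weights, dtype=float)
    return float(np.dot(w, scores) / w.sum())

anchor = np.array([0.0, 0.0])
pos = np.array([0.1, 0.0])     # same-user embedding, close to the anchor
neg = np.array([2.0, 0.0])     # different-user embedding, far away
print(triplet_loss(anchor, pos, neg))   # 0.0: the negative is already beyond the margin
print(fuse_scores([0.9, 0.7, 0.8]))     # fused score across, e.g., touch/accelerometer/gyroscope
```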
Multimodal Learning and Its Application to Mobile Active Authentication
Mobile devices are becoming increasingly popular due to their flexibility and convenience in managing personal information such as bank accounts, profiles and passwords. With the increasing use of mobile devices comes the issue of security as the loss of a smartphone would compromise the personal information of the user.
Traditional methods for authenticating users on mobile devices are based on passwords or fingerprints. As long as mobile devices remain active, they do not incorporate any mechanisms for verifying whether the user originally authenticated is still the user in control of the mobile device. Thus, unauthorized individuals may improperly obtain access to personal information of the user if a password is compromised or if a user does not exercise adequate vigilance after initial authentication on a device. To deal with this problem, active authentication systems have been proposed in which users are continuously monitored after the initial access to the mobile device. Active authentication systems can capture users' data (facial image data, screen touch data, motion data, etc.) through sensors (camera, touch screen, accelerometer, etc.), extract features from different sensors' data, build classification models and authenticate users by comparing additional sensor data against the models.
Mobile active authentication can be viewed as one application of the more general problem, namely, multimodal classification. The idea of multimodal classification is to utilize multiple sources (modalities) measuring the same instance to improve the overall performance compared to using a single source (modality). Multimodal classification also arises in many computer vision tasks such as image classification, RGBD object classification and scene recognition.
In this dissertation, we not only present methods and algorithms related to active authentication problems, but also propose multimodal recognition algorithms based on low-rank and joint sparse representations, as well as a multimodal metric learning algorithm, to improve multimodal classification performance. The multimodal learning algorithms proposed in this dissertation make no assumption about the feature type or application, so they can be applied to various recognition tasks such as mobile active authentication, image classification and RGBD recognition.
First, we study the mobile active authentication problem by exploiting a dataset consisting of 50 users' faces, captured by the phone's front camera, and screen touch data sensed by the screen, for evaluating the active authentication algorithms developed in this research. The dataset is named the UMD Active Authentication (UMDAA) dataset. Details of data preprocessing and feature extraction for the touch and face data are described.
Second, we present an approach for active user authentication using screen touch gestures by building linear and kernelized dictionaries based on sparse representations and associated classifiers. Experiments using the screen touch data components of UMDAA dataset as well as two other publicly available screen touch datasets show that the dictionary-based classification method compares favorably to those discussed in the literature. Experiments done using screen touch data collected in three different sessions show a drop in performance when the training and test data come from different sessions. This suggests a need for applying domain adaptation methods to further improve the performance of the classifiers.
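As a hedged illustration of residual-based classification with per-user dictionaries of the kind described above (using plain least squares in place of the dissertation's sparse coding, on synthetic data):

```python
import numpy as np

def class_residuals(x, dictionaries):
    """Residual-based classification: reconstruct x from each user's dictionary
    (least squares here; the dissertation uses sparse coding) and compare residuals."""
    res = []
    for D in dictionaries:
        a, *_ = np.linalg.lstsq(D, x, rcond=None)    # coefficients over this user's atoms
        res.append(np.linalg.norm(x - D @ a))        # reconstruction residual
    return np.array(res)

rng = np.random.default_rng(1)
D_user = rng.normal(size=(20, 5))     # columns: training gesture features for user A
D_other = rng.normal(size=(20, 5))    # columns: training gesture features for user B
x = D_user @ rng.normal(size=5)       # a test gesture built from user A's atoms
r = class_residuals(x, [D_user, D_other])
print(r.argmin())                     # 0: the lowest residual identifies user A
```

The smallest residual identifies the claimed user; in a verification setting the residual itself is thresholded to accept or reject.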
Third, we propose a domain adaptive sparse representation-based classification method that learns projections of data in a space where the sparsity of data is maintained. We provide an efficient iterative procedure for solving the proposed optimization problem. One of the key features of the proposed method is that it is computationally efficient as learning is done in the lower-dimensional space. Various experiments on UMDAA dataset show that our method is able to capture the meaningful structure of data and can perform significantly better than many competitive domain adaptation algorithms.
Fourth, we propose low-rank and joint sparse representations-based multimodal recognition. Our formulations can be viewed as generalized versions of multivariate low-rank and sparse regression, where sparse and low-rank representations across all the modalities are imposed. One of our methods takes into account coupling information within different modalities simultaneously by enforcing the common low-rank and joint sparse representation among each modality's observations. We also modify our formulations by including an occlusion term that is assumed to be sparse. The alternating direction method of multipliers is proposed to efficiently solve the proposed optimization problems. Extensive experiments on the UMDAA dataset, the WVU multimodal biometrics dataset and the Pascal-Sentence image classification dataset show that our methods provide better recognition performance than other feature-level fusion methods.
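ADMM solvers for such low-rank plus sparse formulations alternate proximal steps. The two standard building blocks, elementwise soft-thresholding for the l1 term and singular value thresholding for the nuclear-norm term, can be sketched as follows (the matrices and threshold values are illustrative, not the dissertation's exact updates):

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of the l1 norm: elementwise shrinkage toward zero."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Proximal operator of the nuclear norm: shrink the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

A = np.array([[3.0, -0.5],
              [0.2, -2.0]])
print(soft_threshold(A, 1.0))   # small entries zeroed, large ones shrunk by 1
B = svd_threshold(A, 2.0)       # only one singular value survives the threshold
print(np.linalg.matrix_rank(B)) # 1: the shrinkage lowered the rank
```

In a full ADMM loop these operators are applied inside each iteration, alternating with least-squares updates and dual-variable steps.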
Finally, we propose a hierarchical multimodal metric learning algorithm for multimodal data in order to improve multimodal classification performance. We design the metric for each modality as a product of two matrices: one matrix is modality specific, the other is enforced to be shared by all the modalities. The modality-specific projection matrices capture the varying characteristics exhibited by the individual modalities, and the common projection matrix establishes the relationship among the distance metrics corresponding to the multiple modalities. The learned metrics significantly improve classification accuracy; experimental results on a tagged-image classification problem as well as various RGBD recognition problems show that the proposed algorithm outperforms existing learning algorithms based on multiple metrics as well as other state-of-the-art approaches tested on these datasets. Furthermore, we make the proposed multimodal metric learning algorithm non-linear by using kernel methods.
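As a hedged sketch of the factored metric described above, with the learned linear map written as a modality-specific matrix times a shared matrix (the dimensions and random factors are illustrative; the dissertation learns these matrices from data):

```python
import numpy as np

def factored_metric_distance(x, y, W_shared, W_mod):
    """Squared distance under a metric whose linear map factors as W_mod @ W_shared:
    the shared factor ties the modalities together, the specific one adapts each."""
    L = W_mod @ W_shared
    d = L @ (x - y)
    return float(d @ d)

rng = np.random.default_rng(2)
W_shared = rng.normal(size=(4, 8))    # common projection shared across modalities
W_touch = rng.normal(size=(3, 4))     # modality-specific factor (e.g., touch)
W_face = rng.normal(size=(3, 4))      # modality-specific factor (e.g., face)
x, y = rng.normal(size=8), rng.normal(size=8)
print(factored_metric_distance(x, y, W_shared, W_touch) >= 0.0)  # True: squared distance
print(factored_metric_distance(x, x, W_shared, W_face))          # 0.0: identical points
```

Because W_shared appears in every modality's metric, updating it couples the modalities during learning, which is the source of the cross-modal transfer the abstract describes.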