3,050 research outputs found

    Implicit Smartphone User Authentication with Sensors and Contextual Machine Learning

    Authentication of smartphone users is important because a great deal of sensitive data is stored on the smartphone, and the smartphone is also used to access various cloud data and services. However, smartphones are easily stolen or co-opted by an attacker. Beyond the initial login, it is highly desirable to re-authenticate end-users who continue to access security-critical services and data. Hence, this paper proposes a novel authentication system for implicit, continuous authentication of the smartphone user based on behavioral characteristics, by leveraging the sensors already ubiquitously built into smartphones. We propose novel context-based authentication models to differentiate the legitimate smartphone owner from other users. We systematically show how to achieve high authentication accuracy with different design alternatives in sensor and feature selection, machine learning techniques, context detection, and multiple devices. Our system achieves excellent authentication performance, with 98.1% accuracy, negligible system overhead, and less than 2.4% battery consumption.
    Comment: Published at the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN) 2017. arXiv admin note: substantial text overlap with arXiv:1703.0352
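    Below is a minimal, hypothetical sketch of the kind of context-based authentication described above: one owner-vs-others classifier per detected context, trained on windowed smartphone sensor features. The feature set, window statistics, context labels, and the scikit-learn RandomForestClassifier are illustrative assumptions, not the paper's actual pipeline.

    ```python
    # Hedged sketch (not the paper's code): per-context binary classifiers for
    # implicit authentication from smartphone sensor windows, using scikit-learn.
    # Feature choices, window length, and context labels are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(acc, gyr):
        """Simple statistical features over one sensor window (assumed 1-2 s)."""
        feats = []
        for sig in (acc, gyr):                    # each: (n_samples, 3) array
            feats += [sig.mean(axis=0), sig.std(axis=0),
                      np.abs(np.diff(sig, axis=0)).mean(axis=0)]
        return np.concatenate(feats)

    class ContextualAuthenticator:
        """One owner-vs-others model per context (e.g. 'walking', 'stationary')."""
        def __init__(self):
            self.models = {}

        def fit(self, X_by_context, y_by_context):
            for ctx, X in X_by_context.items():
                clf = RandomForestClassifier(n_estimators=100, random_state=0)
                clf.fit(X, y_by_context[ctx])     # y: 1 = owner, 0 = other user
                self.models[ctx] = clf

        def score(self, ctx, x):
            """Probability that the current window belongs to the legitimate owner."""
            return self.models[ctx].predict_proba(x.reshape(1, -1))[0, 1]
    ```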

    Learning Visual Classifiers From Limited Labeled Images

    Recognizing humans and their activities from images and video is one of the key goals of computer vision. While supervised learning algorithms like Support Vector Machines and Boosting have offered robust solutions, they require large amounts of labeled data for good performance. It is often difficult to acquire large labeled datasets due to the significant human effort involved in data annotation. However, it is considerably easier to collect unlabeled data due to the availability of inexpensive cameras and large public databases like Flickr and YouTube. In this dissertation, we develop efficient machine learning techniques for visual classification from small amounts of labeled training data by utilizing the structure in the testing data, labeled data in a different domain, and unlabeled data. This dissertation has three main parts.

    In the first part of the dissertation, we consider how multiple noisy samples available during testing can be utilized to perform accurate visual classification. Such multiple samples are easily available in video-based recognition problems, which are commonly encountered in visual surveillance. Specifically, we study the problem of unconstrained human recognition from iris images. We develop a Sparse Representation-based selection and recognition scheme, which learns the underlying structure of clean images. This learned structure is utilized to develop a quality measure, and a quality-based fusion scheme is proposed to combine the varying evidence. Furthermore, we extend the method to incorporate privacy, an important requirement in practical biometric applications, without significantly affecting the recognition performance.

    In the second part, we analyze the problem of utilizing labeled data in a different domain to aid visual classification. We consider the problem of shifts in acquisition conditions between training and testing, which is very common in iris biometrics. In particular, we study the sensor mismatch problem, where the training samples are acquired using a sensor much older than the one used for testing. We provide one of the first solutions to this problem: a kernel learning framework to adapt iris data collected from one sensor to another. Extensive evaluations on iris data from multiple sensors demonstrate that the proposed method leads to considerable improvement in cross-sensor recognition accuracy. Furthermore, since the proposed technique requires minimal changes to the iris recognition pipeline, it can easily be incorporated into existing iris recognition systems.

    In the last part of the dissertation, we analyze how unlabeled data available during training can assist visual classification applications. Here, we consider still-image-based vision applications involving humans, where explicit motion cues are not available. A human pose often conveys not only the configuration of the body parts, but also implicit predictive information about the ensuing motion. We propose a probabilistic framework to infer this dynamic information associated with a human pose, using unlabeled and unsegmented videos available during training. The inference problem is posed as a non-parametric density estimation problem on non-Euclidean manifolds. Since direct modeling is intractable, we develop a data-driven approach, estimating the density for the test sample under consideration. Statistical inference on the estimated density provides us with quantities of interest, like the most probable future motion of the human and the amount of motion information.
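    As a concrete illustration of the sparse-representation idea used in the first part, the sketch below shows generic sparse-representation-based classification (SRC): a test sample is coded as a sparse combination of training samples and assigned to the class with the smallest reconstruction residual. The Lasso solver, penalty value, and variable names are assumptions; the dissertation's selection and quality-based fusion scheme is more elaborate than this.

    ```python
    # Hedged sketch of generic sparse-representation-based classification (SRC);
    # not the dissertation's implementation. Solver choice and penalty are assumptions.
    import numpy as np
    from sklearn.linear_model import Lasso

    def src_classify(D, labels, y, alpha=0.01):
        """D: (d, n) dictionary of training samples (columns L2-normalized),
        labels: (n,) class label per column, y: (d,) test sample.
        Returns the class whose atoms best reconstruct y."""
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(D, y)                               # sparse code x with y ~= D @ x
        x = lasso.coef_
        best_cls, best_res = None, np.inf
        for cls in np.unique(labels):
            x_cls = np.where(labels == cls, x, 0.0)   # keep only this class's coefficients
            res = np.linalg.norm(y - D @ x_cls)       # class-wise reconstruction residual
            if res < best_res:
                best_cls, best_res = cls, res
        return best_cls
    ```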

    Implicit Sensor-based Authentication of Smartphone Users with Smartwatch

    Smartphones are now frequently used by end-users as the portals to cloud-based services, and smartphones are easily stolen or co-opted by an attacker. Beyond the initial log-in mechanism, it is highly desirable to re-authenticate end-users who continue to access security-critical services and data, whether in the cloud or on the smartphone. But attackers who have gained access to a logged-in smartphone have no incentive to re-authenticate, so this must be done in an automatic, non-bypassable way. Hence, this paper proposes a novel authentication system, iAuth, for implicit, continuous authentication of the end-user based on his or her behavioral characteristics, by leveraging the sensors already ubiquitously built into smartphones. We design a system that gives accurate authentication using machine learning and sensor data from multiple mobile devices. Our system achieves 92.1% authentication accuracy with negligible system overhead and less than 2% battery consumption.
    Comment: Published in Hardware and Architectural Support for Security and Privacy (HASP), 201
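    The sketch below illustrates, under stated assumptions, the general shape of such a multi-device approach: feature-level fusion of smartphone and smartwatch windows, a probabilistic classifier, and smoothing of recent scores before forcing re-authentication. The RBF-kernel SVM, fusion scheme, and thresholds are illustrative choices, not the published iAuth design.

    ```python
    # Hedged sketch (not the published iAuth implementation): fuse smartphone and
    # smartwatch sensor features and smooth per-window scores before deciding
    # whether to force re-authentication. Classifier and thresholds are assumptions.
    import numpy as np
    from sklearn.svm import SVC

    class MultiDeviceAuthenticator:
        def __init__(self, threshold=0.5, history=5):
            self.clf = SVC(kernel="rbf", probability=True)
            self.threshold = threshold       # owner-probability needed to stay logged in
            self.history = history
            self.recent = []                 # last few window scores

        def fit(self, phone_feats, watch_feats, y):
            X = np.hstack([phone_feats, watch_feats])   # simple feature-level fusion
            self.clf.fit(X, y)                          # y: 1 = owner, 0 = other user

        def check(self, phone_vec, watch_vec):
            x = np.hstack([phone_vec, watch_vec]).reshape(1, -1)
            score = self.clf.predict_proba(x)[0, 1]
            self.recent = (self.recent + [score])[-self.history:]
            return np.mean(self.recent) >= self.threshold   # False -> force re-login
    ```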