
    mPSAuth: Privacy-Preserving and Scalable Authentication for Mobile Web Applications

    As most web application requests nowadays originate from mobile devices, authenticating mobile users is essential from a security standpoint. To this end, recent approaches rely on machine learning techniques that analyze various aspects of user behavior as a basis for authentication decisions. These approaches face two challenges: first, examining behavioral data raises significant privacy concerns, and second, the approaches must scale to a large number of users. Existing approaches do not address these challenges sufficiently. We propose mPSAuth, an approach that continuously tracks various data sources reflecting user behavior (e.g., touchscreen interactions, sensor data) and estimates the likelihood that the current user is legitimate using machine learning techniques. With mPSAuth, both the authentication protocol and the machine learning models operate on homomorphically encrypted data to protect the users' privacy. Furthermore, the number of machine learning models used by mPSAuth is independent of the number of users, providing adequate scalability. In an extensive evaluation based on real-world data from a mobile application, we show that mPSAuth achieves high accuracy with low encryption and communication overhead, while the inference effort increases only to a tolerable extent.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
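    To make the core idea concrete, below is a minimal sketch of scoring an encrypted behavioral feature vector with a single, user-independent linear model, using the TenSEAL CKKS scheme. This is not mPSAuth's actual protocol or model; the feature values, weights, and decision threshold are hypothetical.

    # Sketch only: encrypted inference on behavioral features (not mPSAuth's real protocol).
    import tenseal as ts

    # Client side: create a CKKS context and encrypt a behavioral feature vector
    # (e.g., touchscreen and sensor statistics for the current session).
    context = ts.context(
        ts.SCHEME_TYPE.CKKS,
        poly_modulus_degree=8192,
        coeff_mod_bit_sizes=[60, 40, 40, 60],
    )
    context.global_scale = 2 ** 40
    context.generate_galois_keys()  # needed for rotations inside dot products

    features = [0.42, 0.13, 0.87, 0.05]            # hypothetical behavioral features
    enc_features = ts.ckks_vector(context, features)

    # Server side: one model shared by all users (here a plain linear scorer),
    # evaluated directly on the ciphertext; the server never sees raw features.
    weights = [1.2, -0.7, 0.9, 0.3]                # hypothetical model weights
    enc_score = enc_features.dot(weights)

    # Client side: decrypt the score and decide whether the session looks legitimate.
    score = enc_score.decrypt()[0]
    print("authenticated" if score > 0.5 else "re-authentication required")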

    A False Sense of Privacy: Towards a Reliable Evaluation Methodology for the Anonymization of Biometric Data

    Biometric data captures distinctive human traits such as facial features or gait patterns. Because it identifies individuals so precisely, biometric data is used effectively in identification and authentication systems, which makes privacy protection indispensable. Privacy protection is commonly provided through anonymization: anonymization techniques obfuscate or remove the information in biometric data that allows records to be linked to the individuals who generated them. However, our ability to develop effective anonymization depends, in equal measure, on the reliability of the methods used to evaluate anonymization performance. In this paper, we assess the state-of-the-art methods used to evaluate the performance of anonymization techniques for facial images and gait patterns. We demonstrate that these evaluation methods have serious and frequent shortcomings. In particular, their underlying assumptions are unwarranted: state-of-the-art methods generally assume a difficult recognition scenario and thus a weak adversary, which causes the evaluations to grossly overestimate the performance of the anonymization. We therefore propose a strong adversary that is aware of the anonymization in place, yielding an appropriate measure of anonymization performance. We also improve the selection process for the evaluation dataset, reducing the number of identities it contains while ensuring that these identities remain easily distinguishable from one another. Our evaluation methodology surpasses the state of the art because it measures worst-case performance and thus delivers a highly reliable evaluation of biometric anonymization techniques.
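    The following sketch illustrates the evaluation idea: a naive adversary trained on clear data is compared with an anonymization-aware adversary trained on anonymized data. The anonymize() function, feature arrays, and classifier are placeholders, not the paper's actual recognizers or datasets.

    # Sketch only: weak vs. anonymization-aware adversary in an anonymization evaluation.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 32))                 # placeholder biometric feature vectors
    y = rng.integers(0, 20, size=600)              # placeholder identity labels (20 identities)

    def anonymize(features: np.ndarray) -> np.ndarray:
        # Placeholder anonymization: additive noise stands in for a real technique.
        return features + rng.normal(scale=1.5, size=features.shape)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Naive evaluation: the adversary is unaware of the anonymization (trained on clear data).
    weak = KNeighborsClassifier().fit(X_train, y_train)
    weak_acc = weak.score(anonymize(X_test), y_test)

    # Proposed evaluation: the adversary also trains on anonymized data.
    strong = KNeighborsClassifier().fit(anonymize(X_train), y_train)
    strong_acc = strong.score(anonymize(X_test), y_test)

    print(f"re-identification accuracy, weak adversary:   {weak_acc:.2f}")
    print(f"re-identification accuracy, strong adversary: {strong_acc:.2f}")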

    Privacy-Protecting Techniques for Behavioral Data: A Survey

    Our behavior (the way we talk, walk, or think) is unique and can be used as a biometric trait. It also correlates with sensitive attributes such as emotions. Hence, techniques to protect individuals' privacy against unwanted inferences are required. To consolidate knowledge in this area, we systematically reviewed applicable anonymization techniques. We taxonomize and compare existing solutions with regard to privacy goals, conceptual operation, advantages, and limitations. Our analysis shows that some behavioral traits (e.g., voice) have received much attention, while others (e.g., eye gaze, brainwaves) are mostly neglected. We also find that the evaluation methodology of behavioral anonymization techniques can be further improved.