32 research outputs found

    Towards privacy-aware mobile-based continuous authentication systems

    User authentication is used to verify the identity of individuals attempting to gain access to a system. It traditionally refers to initial authentication using knowledge factors (e.g., passwords) or ownership factors (e.g., smart cards). However, initial authentication cannot protect a computer (or smartphone) that is left unattended after the initial login. Continuous authentication was therefore proposed to complement initial authentication by transparently and continuously testing the user's behavior against a stored profile (a machine learning model). Since continuous authentication utilizes users' behavioral data to build machine learning models, certain privacy and security concerns have to be addressed before these systems can be widely deployed. Almost all continuous authentication research has used non-privacy-preserving classification methods (such as SVM or KNN). The motivation of this work is twofold: (1) studying the implications of this choice for continuous authentication security and users' privacy, and (2) proposing privacy-aware solutions to the threats it introduces.

    First, we study and propose reconstruction attacks and model inversion attacks on continuous authentication systems, and we implement solutions that are effective against the proposed attacks. We assume that a cloud service that relies on continuous authentication has been compromised, and that the adversary is trying to use this compromised service to access the user's account on another cloud service. We identify two types of adversaries based on how their knowledge is obtained: (1) a full-profile adversary, who has access to the victim's profile, and (2) a decision-value adversary, an active adversary who only has access to the cloud service's mobile app (which is used to obtain a feature vector). Both adversaries use the user's compromised feature vectors to generate raw data with our proposed reconstruction methods: a numerical method that is tied to a single attacked system (set of features), and a randomized algorithm that is not restricted to a single set of features. In experiments on a public data set we evaluated the attacks mounted by both adversary types with both reconstruction algorithms and showed that the attacks are feasible. Finally, we analyzed the results and provided recommendations for resisting the attacks; our remedies directly limit the effectiveness of model inversion attacks and thus deal with decision-value adversaries.

    Second, we study privacy-enhancing technologies for machine learning that can prevent full-profile adversaries from using the stored profiles to recover the original feature vectors. We also study the problem of restricting undesired inference on users' private data in the context of continuous authentication. We propose a gesture-based continuous authentication framework that uses supervised dimensionality reduction (S-DR) to protect against undesired inference attacks while meeting the non-invertibility (security) requirement of cancelable biometrics. The S-DR methods are Discriminant Component Analysis (DCA) and Multiclass Discriminant Ratio (MDR). In experiments on a public data set, DCA and MDR provide a better privacy/utility trade-off than random projection, which has been used extensively in cancelable biometrics.
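
    Since DCA and MDR have no off-the-shelf implementations in common ML libraries, the sketch below uses scikit-learn's LDA as an illustrative stand-in for the supervised dimensionality reduction step and contrasts it with random projection on synthetic "gesture" features; all names, dimensions, and data here are assumptions for illustration, not the authors' setup.

```python
# Minimal sketch: supervised dimensionality reduction vs. random projection
# as a privacy-oriented feature transform. LDA stands in for DCA/MDR (which
# have no scikit-learn implementation); the data is synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.random_projection import GaussianRandomProjection
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_samples, n_features = 5, 200, 30
# Synthetic gesture features: one Gaussian cluster per user.
X = np.vstack([rng.normal(loc=rng.normal(0, 2, n_features),
                          size=(n_samples, n_features)) for _ in range(n_users)])
y = np.repeat(np.arange(n_users), n_samples)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# The supervised projection keeps class-discriminant directions; the random
# projection is user-agnostic. Both map to the same reduced dimensionality.
lda = LinearDiscriminantAnalysis(n_components=n_users - 1).fit(X_tr, y_tr)
rp = GaussianRandomProjection(n_components=n_users - 1, random_state=0).fit(X_tr)

for name, proj in [("supervised (LDA)", lda), ("random projection", rp)]:
    knn = KNeighborsClassifier().fit(proj.transform(X_tr), y_tr)
    print(f"{name:>18}: authentication accuracy = "
          f"{knn.score(proj.transform(X_te), y_te):.3f}")
```

    On well-separated synthetic clusters both transforms authenticate well; the point of DCA and MDR in the original work is that a supervised projection can retain this utility while discarding the directions that support undesired inference.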
    Third, since using DCA (or MDR) requires computing the projection matrix from data distributed across multiple data owners, we propose privacy-preserving PCA/DCA protocols that enable a data user (a cloud server) to compute the projection matrices without compromising the privacy of the individual data owners. To achieve this, we propose new protocols that compute the scatter matrices using additive homomorphic encryption and perform the eigendecomposition using garbled circuits. We implemented our protocols in Java and Obliv-C and conducted experiments on public data sets; the results show that our protocols are efficient and preserve privacy while maintaining accuracy.
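
    The abstract names the two cryptographic building blocks but not their composition. The toy sketch below (Python with the python-paillier `phe` package rather than the authors' Java/Obliv-C implementation) shows only the first step, additively aggregating per-owner scatter matrices under homomorphic encryption, with a single key pair assumed for simplicity; the eigendecomposition, done with garbled circuits in the original protocol, is replaced here by a plain decrypt-then-eigendecompose on the aggregate.

```python
# Toy sketch of the scatter-matrix aggregation step with additive homomorphic
# encryption (python-paillier: `pip install phe`). Each data owner encrypts its
# local scatter contribution; the server sums ciphertexts without seeing any
# individual matrix. In a real protocol the aggregating server would not hold
# the private key, and the eigendecomposition would run inside garbled circuits.
import numpy as np
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def local_scatter(X):
    """Per-owner contribution to the (uncentered) scatter matrix X^T X."""
    return X.T @ X

def encrypt_matrix(M):
    return [[public_key.encrypt(float(v)) for v in row] for row in M]

def add_encrypted(A, B):
    # Paillier ciphertexts support homomorphic addition via `+`.
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def decrypt_matrix(C):
    return np.array([[private_key.decrypt(c) for c in row] for row in C])

rng = np.random.default_rng(1)
owners = [rng.normal(size=(20, 3)) for _ in range(3)]  # three data owners

# Each owner encrypts its local scatter matrix; the server sums ciphertexts.
enc_total = encrypt_matrix(np.zeros((3, 3)))
for X in owners:
    enc_total = add_encrypted(enc_total, encrypt_matrix(local_scatter(X)))

# Only the aggregate is ever decrypted, then the projection is derived from it.
S = decrypt_matrix(enc_total)
assert np.allclose(S, sum(local_scatter(X) for X in owners))
eigvals, eigvecs = np.linalg.eigh(S)
```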

    Compressive Privacy for a Linear Dynamical System

    We consider a linear dynamical system in which the state vector consists of both public and private states. One or more sensors measure the state vector and send information to a fusion center, which performs the final state estimation. To achieve an optimal trade-off between the utility of estimating the public states and the protection of the private states, the measurements at each time step are linearly compressed into a lower-dimensional space. In the centralized setting, where all measurements are collected by a single sensor, we propose an optimization problem and an algorithm to find the best compression matrix. In the decentralized setting, where measurements are made separately at multiple sensors, each sensor optimizes its own local compression matrix; we propose methods to separate the overall optimization problem into sub-problems that can be solved locally at each sensor. We consider both the case where there is no message exchange between the sensors and the case where the sensors take turns transmitting messages to one another. Simulations and empirical experiments demonstrate that our approach allows the fusion center to estimate the public states with good accuracy while preventing it from estimating the private states accurately.
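
    As a rough illustration of the idea (not the paper's optimization), the one-step sketch below compresses a static measurement with a matrix chosen orthogonal to the direction in which the private state enters the measurements; the dimensions, noise level, and heuristic choice of compression matrix are all assumptions.

```python
# Minimal single-step sketch of compressive privacy: the sensor compresses
# measurement y = H x + v with a matrix C picked so the fusion center can
# estimate the public states but learns little about the private state.
import numpy as np

rng = np.random.default_rng(2)
n_pub, n_priv = 2, 1
H = rng.normal(size=(4, n_pub + n_priv))  # 4 raw measurements of 3 states
h_priv = H[:, n_pub:]                     # columns observing the private state

# Compress into 2 dimensions using directions orthogonal to the private
# state's measurement subspace (a heuristic stand-in for the optimized
# compression matrix of the paper).
Q, _ = np.linalg.qr(np.hstack([h_priv, rng.normal(size=(4, 3))]))
C = Q[:, n_priv:n_priv + 2].T             # 2 x 4, annihilates h_priv

x = np.array([1.0, -2.0, 5.0])            # [public, public, private]
y = H @ x + 0.01 * rng.normal(size=4)
z = C @ y                                 # compressed message to fusion center

# Fusion center: least-squares estimate from the compressed measurement.
x_hat, *_ = np.linalg.lstsq(C @ H, z, rcond=None)
print("public estimate :", x_hat[:n_pub])  # close to [1, -2]
print("private estimate:", x_hat[n_pub:])  # essentially uninformative
```

    In the paper the compression matrix is instead obtained by solving an optimization over the utility/privacy trade-off, and the estimation runs over a full dynamical system rather than a single snapshot.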

    Uncertainty in Artificial Intelligence: Proceedings of the Thirty-Fourth Conference
