    Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity

    A machine learning model aims to make correct predictions for a specific task by learning important properties and patterns from data. In doing so, the model may also pick up properties that are unrelated to its primary task. Property inference attacks exploit this: they aim to infer, from a given model (the target model), properties of the training dataset that are seemingly unrelated to the model's primary goal. If the training data is sensitive, such an attack can lead to privacy leakage. This paper investigates the influence of the target model's complexity on the accuracy of this type of attack, focusing on convolutional neural network classifiers. We perform attacks on models trained on facial images to predict whether someone's mouth is open; the attacks' goal is to infer whether the training dataset is balanced gender-wise. Our findings reveal that the risk of a privacy breach is present independently of the target model's complexity: for all studied architectures, the attack's accuracy is clearly above the baseline. We discuss the implications of property inference for personal data in light of data protection regulations and guidelines.
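
    To make the attack surface concrete, the following minimal sketch illustrates the general shadow-model approach to property inference; it is an illustration of the attack family, not necessarily this paper's exact pipeline. Many shadow models are trained on datasets with and without the target property, their flattened parameters serve as features, and a meta-classifier learns to separate the two. The get_weights() accessor assumes Keras-style models; all names are placeholders.

        # Hypothetical sketch of a shadow-model property inference attack;
        # the paper's actual attack pipeline may differ.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def model_fingerprint(model):
            # Flatten a trained model's parameters into one feature vector
            # (assumes Keras-style models exposing get_weights()).
            return np.concatenate([w.ravel() for w in model.get_weights()])

        def build_meta_classifier(shadow_models, labels):
            # labels[i] = 1 if shadow model i was trained on a
            # gender-balanced dataset, 0 otherwise.
            X = np.stack([model_fingerprint(m) for m in shadow_models])
            meta = LogisticRegression(max_iter=1000)
            meta.fit(X, labels)
            return meta

        def infer_property(meta, target_model):
            x = model_fingerprint(target_model).reshape(1, -1)
            return meta.predict_proba(x)[0, 1]  # P(training set was balanced)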

    To Cheat or Not to Cheat - A Game-Theoretic Analysis of Outsourced Computation Verification

    In the cloud computing era, in order to avoid computational burdens, many organizations tend to outsource their computations to third-party cloud servers. To protect service quality, the integrity of computation results needs to be guaranteed. In this paper, we develop a game-theoretic framework that helps the outsourcer maximize its payoff while ensuring the desired level of integrity for the outsourced computation. We define two Stackelberg games and analyze the optimal setting's sensitivity to the parameters of the model.
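
    As an illustration of the kind of model studied here, consider a toy Stackelberg inspection game with hypothetical parameters, not the paper's exact formulation: the outsourcer (leader) commits to a verification probability, and the server (follower) cheats only when its expected gain from cheating is positive.

        # Toy Stackelberg inspection game (illustrative only; the payoff
        # structure is a common textbook simplification, not the paper's model).
        def follower_best_response(p, gain_from_cheating, fine):
            # The server cheats iff its expected gain, gain - p * fine, is positive.
            return gain_from_cheating - p * fine > 0

        def leader_optimal_p(gain_from_cheating, fine, eps=1e-6):
            # Smallest verification rate that deters cheating: any p > gain / fine.
            return min(1.0, gain_from_cheating / fine + eps)

        p_star = leader_optimal_p(gain_from_cheating=2.0, fine=10.0)
        print(p_star, follower_best_response(p_star, 2.0, 10.0))  # ~0.2, False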

    The Price of Privacy in Collaborative Learning

    Machine learning algorithms have reached mainstream status and are widely deployed in many applications. The accuracy of such algorithms depends significantly on the size of the underlying training dataset; in reality, a small or medium-sized organization often does not have enough data to train a reasonably accurate model. For such organizations, a realistic solution is to train machine learning models on a joint dataset (the union of the individual ones). Unfortunately, privacy concerns prevent them from straightforwardly doing so. While a number of privacy-preserving solutions exist for collaborating organizations to securely aggregate the parameters in the process of training the models, we are not aware of any work that provides a rational framework for the participants to precisely balance the privacy loss and accuracy gain of their collaboration. In this paper, we model the collaborative training process as a two-player game in which each player aims to achieve higher accuracy while preserving the privacy of its own dataset. We introduce the notion of the Price of Privacy, a novel approach for measuring the impact of privacy protection on accuracy in the proposed framework. Furthermore, we develop a game-theoretical model for different player types, and then either find or prove the existence of a Nash Equilibrium with regard to the strength of privacy protection for each player.
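
    One plausible formalization of such a measure, stated here only as an illustration (the paper's exact definition may differ), is the relative accuracy degradation caused by privacy protection:

        % Illustrative formalization; not necessarily the paper's definition.
        \[
          \mathrm{PoP} \;=\; \frac{\alpha_{0} - \alpha_{\mathrm{priv}}}{\alpha_{0}},
        \]
        % where $\alpha_{0}$ is the accuracy of the jointly trained model without
        % privacy protection and $\alpha_{\mathrm{priv}}$ its accuracy under the
        % chosen protection level: $\mathrm{PoP}=0$ means protection costs no
        % accuracy, while $\mathrm{PoP}=1$ means the protected model retains none.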

    DeVoS: Deniable Yet Verifiable Vote Updating

    Internet voting systems are supposed to meet the same high standards as traditional paper-based systems when used in real political elections: freedom of choice, universal and equal suffrage, secrecy of the ballot, and independent verifiability of the election result. Although numerous Internet voting systems have been proposed to achieve these challenging goals simultaneously, few come close in reality. We propose a novel publicly verifiable and practically efficient Internet voting system, DeVoS, that advances the state of the art. The main feature of DeVoS is its ability to protect voters' freedom of choice in several dimensions. First, voters in DeVoS can intuitively update their votes in a way that is deniable to observers but verifiable by the voters; in this way, voters can secretly overwrite potentially coerced votes. Second, in addition to (basic) vote privacy, DeVoS also guarantees strong participation privacy by hiding, end to end, which voters have submitted ballots and which have not. Finally, DeVoS is fully compatible with Perfectly Private Audit Trail, a state-of-the-art Internet voting protocol with practical everlasting privacy. In combination, DeVoS offers a new way to secure free Internet elections with strong and long-term privacy properties.
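
    A standard cryptographic building block behind this style of deniable updating is ciphertext re-randomization: a re-randomized old ballot and a freshly updated one are indistinguishable to an observer. The sketch below shows plain ElGamal re-encryption over a toy group purely as background intuition; DeVoS's actual protocol is substantially more involved.

        # Toy ElGamal re-randomization (background intuition only; never use
        # such small parameters in practice).
        import secrets

        p, q, g = 1019, 509, 4   # safe prime p = 2q + 1; g generates the order-q subgroup

        def keygen():
            x = secrets.randbelow(q - 1) + 1
            return x, pow(g, x, p)              # secret key, public key

        def encrypt(h, m):
            r = secrets.randbelow(q - 1) + 1
            return pow(g, r, p), (m * pow(h, r, p)) % p

        def rerandomize(h, c):
            # Multiplying in a fresh encryption of 1 yields a new, unlinkable
            # ciphertext of the same plaintext.
            s = secrets.randbelow(q - 1) + 1
            return (c[0] * pow(g, s, p)) % p, (c[1] * pow(h, s, p)) % p

        def decrypt(x, c):
            return (c[1] * pow(c[0], q - x, p)) % p   # c[0]^(q-x) = c[0]^(-x)

        x, h = keygen()
        ballot = encrypt(h, 5)
        updated = rerandomize(h, ballot)
        assert updated != ballot and decrypt(x, updated) == 5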

    Protect both Integrity and Confidentiality in Outsourcing Collaborative Filtering Computations

    In the cloud computing era, in order to avoid computational burdens, many recommendation service providers tend to outsource their collaborative filtering computations to third-party cloud servers. To protect service quality, the integrity of computation results needs to be guaranteed. In this paper, we analyze two integrity verification approaches by Vaidya et al. and evaluate their performance. In particular, we analyze verification via the auxiliary data approach, which is only briefly mentioned in the original paper, and present experimental results. We then propose a new solution that outsources all computations of the weighted Slope One algorithm in a two-server setting, and provide experimental results.
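
    For reference, the weighted Slope One predictor itself is small. The sketch below is a plain single-machine version, shown only to fix notation; the paper's contribution, outsourcing these computations across two servers, is not reproduced here.

        # Plain weighted Slope One (single machine, no outsourcing).
        from collections import defaultdict

        def train(ratings):
            # ratings: dict user -> dict item -> rating
            dev, freq = defaultdict(float), defaultdict(int)
            for user_ratings in ratings.values():
                for i, ri in user_ratings.items():
                    for j, rj in user_ratings.items():
                        if i != j:
                            dev[(i, j)] += ri - rj   # deviation of item i from item j
                            freq[(i, j)] += 1
            for key in dev:
                dev[key] /= freq[key]                # average pairwise deviation
            return dev, freq

        def predict(dev, freq, user_ratings, item):
            # Weighted average of (deviation + known rating), weighted by support.
            num = den = 0.0
            for j, rj in user_ratings.items():
                if (item, j) in freq:
                    num += (dev[(item, j)] + rj) * freq[(item, j)]
                    den += freq[(item, j)]
            return num / den if den else None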

    Together or Alone: The Price of Privacy in Collaborative Learning

    Machine learning algorithms have reached mainstream status and are widely deployed in many applications. The accuracy of such algorithms depends significantly on the size of the underlying training dataset; in reality, a small or medium-sized organization often does not have the necessary data to train a reasonably accurate model. For such organizations, a realistic solution is to train machine learning models on their joint dataset (the union of the individual ones). Unfortunately, privacy concerns prevent them from straightforwardly doing so. While a number of privacy-preserving solutions exist for collaborating organizations to securely aggregate the parameters in the process of training the models, we are not aware of any work that provides a rational framework for the participants to precisely balance the privacy loss and accuracy gain of their collaboration. In this paper, focusing on a two-player setting, we model the collaborative training process as a two-player game in which each player aims to achieve higher accuracy while preserving the privacy of its own dataset. We introduce the notion of the Price of Privacy, a novel approach for measuring the impact of privacy protection on accuracy in the proposed framework. Furthermore, we develop a game-theoretical model for different player types, and then either find or prove the existence of a Nash Equilibrium with regard to the strength of privacy protection for each player. Using recommendation systems as our main use case, we demonstrate how two players can make practical use of the proposed theoretical framework, including setting up the parameters and approximating the non-trivial Nash Equilibrium.
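
    As background on the last step, a Nash Equilibrium of such a game can be approximated numerically by iterating best responses over a discretized strategy grid. The sketch below uses deliberately hypothetical utility functions; it does not reproduce the paper's model or its parameter-setting procedure.

        # Best-response iteration on a toy two-player privacy game
        # (hypothetical utilities; illustrative only).
        import numpy as np

        levels = np.linspace(0.0, 1.0, 101)   # privacy level: 0 = none, 1 = maximal

        def utility(own, other):
            # Hypothetical: accuracy grows with the unprotected share of both
            # datasets; privacy loss grows quadratically as protection weakens.
            accuracy_gain = (1 - own) + 0.5 * (1 - other)
            privacy_loss = 1.5 * (1 - own) ** 2
            return accuracy_gain - privacy_loss

        def best_response(other):
            return levels[np.argmax([utility(x, other) for x in levels])]

        a = b = 0.0
        for _ in range(100):
            a, b = best_response(b), best_response(a)
        print(a, b)   # the fixed point approximates a Nash Equilibrium of the toy game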

    Integrity and Confidentiality Problems of Outsourcing

    Cloud services enable companies to outsource data storage and computation. Resource-limited entities can use this pay-per-use model to outsource large-scale computational tasks to a cloud service provider. Nonetheless, this on-demand network access raises issues of security and privacy, which have become primary concerns in recent decades. In this dissertation, we tackle these problems from two perspectives: data confidentiality and result integrity. Concerning data confidentiality, we systematically classify the relaxations of the most widely used privacy-preserving technique, Differential Privacy. We also establish a partial ordering of strength between these relaxations and indicate whether they satisfy additional desirable properties, such as composition and the privacy axioms. Tackling the problem of confidentiality further, we design a Collaborative Learning game, which helps data holders determine how to set the privacy parameter based on economic considerations. We also define the Price of Privacy to measure the overall degradation of accuracy resulting from the applied privacy protection. Moreover, we develop a procedure called Self-Division, which bridges the gap between the game and real-world scenarios. Concerning result integrity, we formulate a Stackelberg game between the outsourcer and the outsourcee in which no absolute correctness is required. We provide the optimal strategies for the players and perform a sensitivity analysis. Furthermore, we extend the game by allowing the outsourcer not to verify, and we show its Nash Equilibria. Regarding integrity verification, we analyze and compare two verification methods for Collaborative Filtering algorithms: the splitting approach and the auxiliary data approach. We observe that neither method provides a full solution to the problem raised. Hence, we propose a solution which, besides outperforming both, is also applicable to both stages of the algorithms.
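
    For background on the first contribution, recall the textbook guarantee that these relaxations weaken (the standard definition, not the dissertation's own notation):

        % A mechanism M is (\varepsilon, \delta)-differentially private if, for
        % all neighboring datasets D, D' and every measurable set S of outputs,
        \[
          \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta .
        \]
        % Pure \varepsilon-DP is the special case \delta = 0; the relaxations the
        % dissertation classifies and orders weaken this bound along different axes.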

    Game-Theoretic Framework for Integrity Verification in Computation Outsourcing

    In the cloud computing era, in order to avoid computational burdens, many organizations tend to outsource their computations to third-party cloud servers. To protect service quality, the integrity of computation results needs to be guaranteed. In this paper, we develop a game-theoretic framework that helps the outsourcer maximize its payoff while ensuring the desired level of integrity for the outsourced computation. We define two Stackelberg games and analyze the optimal setting's sensitivity to the parameters of the model.