3 research outputs found

    A Mean-Field Stackelberg Game Approach for Obfuscation Adoption in Empirical Risk Minimization

    Full text link
    Data ecosystems are becoming larger and more complex due to online tracking, wearable computing, and the Internet of Things. But privacy concerns are threatening to erode the potential benefits of these systems. Recently, users have developed obfuscation techniques that issue fake search engine queries, undermine location tracking algorithms, or evade government surveillance. Interestingly, these techniques raise two conflicts: one between each user and the machine learning algorithms which track the users, and one between the users themselves. In this paper, we use game theory to capture the first conflict with a Stackelberg game and the second conflict with a mean-field game. We combine both into a dynamic and strategic bi-level framework which quantifies accuracy using empirical risk minimization and privacy using differential privacy. In equilibrium, we identify necessary and sufficient conditions under which 1) each user is incentivized to obfuscate if other users are obfuscating, 2) the tracking algorithm can avoid this by promising a level of privacy protection, and 3) this promise is incentive-compatible for the tracking algorithm.
    Comment: IEEE Global SIP Symposium on Control & Information Theoretic Approaches to Privacy and Security
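
    The bi-level structure described in this abstract can be illustrated with a toy computation. The sketch below is an assumption-laden illustration, not the paper's model: a representative user's incentive to obfuscate is assumed to grow with the tracker's promised differential-privacy parameter epsilon (a weaker guarantee) and with the fraction of peers already obfuscating (the mean-field coupling), and the population's obfuscation fraction is iterated to a fixed point. All functional forms and parameters (privacy_weight, cost, the damping factor) are hypothetical.

```python
# Toy mean-field best-response iteration (illustrative assumptions only,
# not the paper's actual model or parameters).

def user_best_response(frac_obfuscating: float, epsilon: float,
                       privacy_weight: float = 1.0, cost: float = 0.6) -> bool:
    """Return True if a representative user prefers to obfuscate.

    Hypothetical form: the perceived benefit of obfuscating grows with epsilon
    (a weaker promised differential-privacy guarantee) and with the fraction
    of peers already obfuscating (the strategic complementarity in claim 1).
    """
    benefit = privacy_weight * epsilon * (0.5 + 0.5 * frac_obfuscating)
    return benefit > cost


def mean_field_fixed_point(epsilon: float, n_iter: int = 500) -> float:
    """Damped iteration of the population's obfuscation fraction to a fixed point."""
    frac = 0.5  # initial guess for the fraction of obfuscating users
    for _ in range(n_iter):
        target = 1.0 if user_best_response(frac, epsilon) else 0.0
        frac = 0.9 * frac + 0.1 * target
    return frac


if __name__ == "__main__":
    for eps in (0.2, 0.6, 1.0, 2.0):
        print(f"promised epsilon = {eps:.1f} -> obfuscating fraction ~ "
              f"{mean_field_fixed_point(eps):.2f}")
```

    In this toy setting, promising a small enough epsilon drives the fixed point to zero obfuscation, mirroring condition 2) of the abstract, while for larger epsilon both all-obfuscate and none-obfuscate outcomes can be self-sustaining, mirroring condition 1).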

    Game-Theoretic Analysis of Cyber Deception: Evidence-Based Strategies and Dynamic Risk Mitigation

    Full text link
    Deception is a technique to mislead human or computer systems by manipulating beliefs and information. For applications of cyber deception, non-cooperative games become a natural choice of models to capture the adversarial interactions between the players and to quantitatively characterize the conflicting incentives and strategic responses. In this chapter, we provide an overview of deception games in three different environments and extend the baseline signaling game models to include evidence obtained through side-channel knowledge acquisition, capturing the information asymmetry, dynamics, and strategic behaviors of deception. We first analyze deception in a binary information space based on a signaling game framework with a detector that gives off probabilistic evidence of deception when the sender acts deceptively. We then focus on a class of continuous one-dimensional information spaces and take into account the cost of deception in the signaling game. We finally explore a multi-stage incomplete-information Bayesian game model of defensive deception against advanced persistent threats (APTs). We use the perfect Bayesian Nash equilibrium (PBNE) as the solution concept for the deception games and analyze the strategic equilibrium behaviors of both the deceivers and the deceivees.
    Comment: arXiv admin note: text overlap with arXiv:1810.0075
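
    The detector-augmented signaling game can be made concrete with a small Bayes'-rule computation. The sketch below is a minimal illustration under assumed numbers, not the chapter's exact model: a detector raises an alarm with an assumed true-positive rate when the sender is deceptive and an assumed false-positive rate otherwise, and the receiver updates a prior belief accordingly; such posteriors are the beliefs that enter a PBNE analysis.

```python
# Receiver's Bayesian belief update with probabilistic detector evidence
# (illustrative prior and detector rates; not the chapter's exact values).

def posterior_deceptive(prior: float, alarm: bool,
                        tpr: float = 0.8, fpr: float = 0.1) -> float:
    """P(sender is deceptive | detector evidence) via Bayes' rule."""
    p_evidence_given_deceptive = tpr if alarm else 1.0 - tpr
    p_evidence_given_honest = fpr if alarm else 1.0 - fpr
    numerator = p_evidence_given_deceptive * prior
    denominator = numerator + p_evidence_given_honest * (1.0 - prior)
    return numerator / denominator


if __name__ == "__main__":
    prior = 0.3  # receiver's prior that the sender is deceptive
    print("belief after alarm:   ", round(posterior_deceptive(prior, alarm=True), 3))
    print("belief after no alarm:", round(posterior_deceptive(prior, alarm=False), 3))
```

    With these illustrative numbers, an alarm pushes the receiver's belief from 0.3 to roughly 0.77, while silence lowers it to about 0.09, showing how side-channel evidence reshapes the information asymmetry between deceiver and deceivee.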

    Prospect Theoretic Analysis of Privacy-Preserving Mechanism

    Full text link
    We study a problem of privacy-preserving mechanism design. A data collector wants to obtain data from individuals to perform some computations. To relieve the privacy threat to the contributors, the data collector adopts a privacy-preserving mechanism that adds random noise to the computation result, at the cost of reduced accuracy. Individuals decide whether to contribute data when faced with the privacy issue. Due to the intrinsic uncertainty in privacy protection, we model individuals' privacy-related decisions using Prospect Theory, which captures behavior under uncertainty more accurately than traditional expected utility theory, whose predictions often deviate from practical human behavior. We show that the data collector's utility maximization problem involves a polynomial of high and fractional order, whose root is difficult to compute analytically. We get around this issue by considering a large-population approximation and obtain a closed-form solution that well approximates the precise solution. We find that a data collector who adopts the more realistic Prospect Theory based model of individual decisions would choose a more conservative privacy-preserving mechanism than one based on expected utility theory. We also study the impact of the Prospect Theory parameters and conclude that more loss-averse or risk-seeking individuals trigger a more conservative mechanism. When individuals have heterogeneous Prospect Theory parameters, simulations demonstrate that privacy protection first becomes stronger and then weaker as the heterogeneity increases from a low value to a high one.
    Comment: Accepted by IEEE/ACM Transactions on Networking
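
    The role of the Prospect Theory value and probability-weighting functions can be sketched with the standard Tversky-Kahneman forms. The code below is an illustration under assumptions, not the paper's derivation: the breach probability is taken, purely for illustration, to decay exponentially in the noise scale, and all parameter values (alpha, beta, lambda, gamma, reward, loss) are hypothetical.

```python
# Prospect-theory scoring of a hypothetical "contribute data" decision
# (standard Tversky-Kahneman functional forms; all numbers are assumptions).
import math

def pt_value(x: float, alpha: float = 0.88, beta: float = 0.88,
             lam: float = 2.25) -> float:
    """Value function: concave for gains, convex and loss-averse for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def pt_weight(p: float, gamma: float = 0.61) -> float:
    """Inverse-S probability weighting (overweights small probabilities)."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def perceived_utility_of_contributing(reward: float, loss: float,
                                      noise_scale: float) -> float:
    """Weighted PT utility of contributing; the breach probability is assumed
    (for illustration only) to decay as exp(-noise_scale)."""
    p_breach = math.exp(-noise_scale)
    return (pt_weight(1 - p_breach) * pt_value(reward)
            + pt_weight(p_breach) * pt_value(-loss))

if __name__ == "__main__":
    for b in (0.5, 1.0, 2.0, 4.0):
        u = perceived_utility_of_contributing(reward=1.0, loss=5.0, noise_scale=b)
        print(f"noise scale b = {b:.1f} -> perceived utility {u:+.3f}")
```

    In this toy setting, a larger noise scale flips the perceived utility of contributing from negative to positive, which is the participation effect the privacy-preserving mechanism trades off against accuracy.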