21,436 research outputs found

    Binary Hypothesis Testing Game with Training Data

    We introduce a game-theoretic framework to study the hypothesis testing problem in the presence of an adversary aiming at preventing a correct decision. Specifically, the paper considers a scenario in which an analyst has to decide whether a test sequence has been drawn according to a probability mass function (pmf) P_X or not. In turn, the goal of the adversary is to take a sequence generated according to a different pmf and modify it in such a way as to induce a decision error. P_X is known only through one or more training sequences. We derive the asymptotic equilibrium of the game under the assumption that the analyst relies only on first-order statistics of the test sequence, and compute the asymptotic payoff of the game when the length of the test sequence tends to infinity. We introduce the concept of the indistinguishability region, namely the set of pmf's that cannot be distinguished reliably from P_X in the presence of attacks. Two different scenarios are considered: in the first, the analyst and the adversary share the same training sequence; in the second, they rely on independent sequences. The obtained results are compared to a version of the game in which the pmf P_X is perfectly known to the analyst and the adversary.
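    As a concrete illustration of a first-order-statistics decision rule (our own hedged sketch, not the test derived in the paper), the snippet below accepts the null hypothesis when the KL divergence between the empirical pmf of the test sequence and that of the training sequence falls below a threshold; the alphabet size, smoothing constant, and threshold are illustrative assumptions.

```python
# Minimal sketch: first-order-statistics detector based on the KL divergence
# between empirical pmfs. Threshold and smoothing are arbitrary illustrative choices.
import numpy as np

def empirical_pmf(seq, alphabet_size, smooth=1e-9):
    counts = np.bincount(seq, minlength=alphabet_size).astype(float)
    counts += smooth                      # avoid zero probabilities
    return counts / counts.sum()

def kl_divergence(p, q):
    return float(np.sum(p * np.log(p / q)))

def accept_h0(test_seq, train_seq, alphabet_size, threshold=0.05):
    """Decide whether test_seq is plausibly drawn from the same pmf as train_seq."""
    p_test = empirical_pmf(test_seq, alphabet_size)
    p_train = empirical_pmf(train_seq, alphabet_size)
    return kl_divergence(p_test, p_train) < threshold

# Example with synthetic data drawn from the same pmf.
rng = np.random.default_rng(0)
train = rng.integers(0, 4, size=10_000)
test = rng.integers(0, 4, size=1_000)
print(accept_h0(test, train, alphabet_size=4))
```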

    Cores of Cooperative Games in Information Theory

    Cores of cooperative games are ubiquitous in information theory, and arise most frequently in the characterization of fundamental limits in various scenarios involving multiple users. Examples include classical settings in network information theory such as Slepian-Wolf source coding and multiple access channels, classical settings in statistics such as robust hypothesis testing, and new settings at the intersection of networking and statistics such as distributed estimation problems for sensor networks. Cooperative game theory allows one to understand aspects of all of these problems from a fresh and unifying perspective that treats users as players in a game, sometimes leading to new insights. At the heart of these analyses are fundamental dualities that have long been studied in the context of cooperative games; for information-theoretic purposes, these are dualities between information inequalities on the one hand and properties of rate, capacity or other resource allocation regions on the other. Comment: 12 pages, published at http://www.hindawi.com/GetArticle.aspx?doi=10.1155/2008/318704 in EURASIP Journal on Wireless Communications and Networking, Special Issue on "Theory and Applications in Multiuser/Multiterminal Communications", April 2008
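    To make the duality between information inequalities and rate regions concrete, here is a small sketch (our own illustration, not taken from the paper) that checks whether a rate pair lies in the two-source Slepian-Wolf region; the core-style constraints are the conditional- and joint-entropy inequalities, and the joint pmf is an arbitrary illustrative choice.

```python
# Minimal sketch: core-style membership test for the two-source Slepian-Wolf region:
#   R1 >= H(X1|X2), R2 >= H(X2|X1), R1 + R2 >= H(X1, X2).
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Joint pmf of (X1, X2), chosen arbitrarily for illustration.
p_joint = np.array([[0.4, 0.1],
                    [0.1, 0.4]])

H12 = entropy(p_joint)              # H(X1, X2)
H1 = entropy(p_joint.sum(axis=1))   # H(X1)
H2 = entropy(p_joint.sum(axis=0))   # H(X2)
H1_given_2 = H12 - H2               # H(X1 | X2)
H2_given_1 = H12 - H1               # H(X2 | X1)

def in_slepian_wolf_region(r1, r2):
    return r1 >= H1_given_2 and r2 >= H2_given_1 and r1 + r2 >= H12

print(in_slepian_wolf_region(0.9, 0.9))   # True for this joint pmf
```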

    The Flow Fingerprinting Game

    Linking two network flows that have the same source is essential in intrusion detection or in tracing anonymous connections. To improve the performance of this process, the flow can be modified (fingerprinted) to make it more distinguishable. However, an adversary located in the middle can modify the flow to impair the correlation by delaying the packets or introducing dummy traffic. We introduce a game-theoretic framework for this problem and use it to derive the Nash equilibrium. As obtaining the optimal distribution of adversary delays is intractable, we make some approximations, and study the concrete example in which these delays follow a truncated Gaussian distribution. We also compare the optimal strategies with other fingerprinting schemes. The results are useful for understanding the limits of flow correlation based on packet timings under an active attacker. Comment: Workshop on Information Forensics and Security
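    The toy model below (our own hedged sketch, not the paper's scheme) illustrates the basic mechanics: a flow is fingerprinted by perturbing inter-packet delays, the adversary adds truncated-Gaussian jitter, and the detector correlates the observed delays with the known fingerprint; all delay parameters are illustrative assumptions.

```python
# Minimal sketch of fingerprint embedding, truncated-Gaussian jitter, and
# correlation-based detection. All parameter values are illustrative.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)
n = 500
base_delays = rng.exponential(scale=10e-3, size=n)      # nominal inter-packet delays (s)
fingerprint = rng.choice([-1.0, 1.0], size=n) * 1e-3    # +/- 1 ms watermark
marked = base_delays + fingerprint

# Adversary: truncated Gaussian jitter on [0, 5 ms], mean 2 ms, std 1 ms (illustrative).
lo, hi, mu, sigma = 0.0, 5e-3, 2e-3, 1e-3
jitter = truncnorm((lo - mu) / sigma, (hi - mu) / sigma,
                   loc=mu, scale=sigma).rvs(n, random_state=2)
observed = marked + jitter

# Detector: normalized correlation between observed delays and the known fingerprint.
score = np.corrcoef(observed, fingerprint)[0, 1]
print(f"correlation score: {score:.3f}")   # compared against a decision threshold
```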

    Estimation of discrete games with weak assumptions on information


    Bridging the gap between general probabilistic theories and the device-independent framework for nonlocality and contextuality

    Characterizing quantum correlations in terms of information-theoretic principles is a popular chapter of quantum foundations. Traditionally, the principles adopted for this purpose have been expressed in terms of conditional probability distributions, specifying the probability that a black box produces a certain output upon receiving a certain input. This framework is known as "device-independent". Another major chapter of quantum foundations is the information-theoretic characterization of quantum theory, with its sets of states and measurements, and with its allowed dynamics. The different frameworks adopted for this purpose are known under the umbrella term "general probabilistic theories". With only a few exceptions, the two programmes on characterizing quantum correlations and characterizing quantum theory have so far proceeded on separate tracks, each one developing its own methods and its own agenda. This paper aims at bridging the gap by comparing the two frameworks and illustrating how the two programmes can benefit each other. Comment: 61 pages, no figures, published version
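    As a minimal example of the device-independent viewpoint described above (our own illustration, not from the paper), the snippet below treats a bipartite black box purely as a table of conditional probabilities p(a, b | x, y) and evaluates the CHSH expression on the Popescu-Rohrlich box, comparing it with the quantum Tsirelson bound.

```python
# Minimal sketch: a device-independent "behavior" is just p(a, b | x, y).
# The PR box saturates the no-signalling CHSH bound of 4 (classical bound: 2).
import numpy as np

def pr_box(a, b, x, y):
    """p(a, b | x, y) for the PR box: a XOR b = x AND y, uniform otherwise."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def correlator(p, x, y):
    """E(x, y) = sum_{a,b} (-1)^(a+b) p(a, b | x, y)."""
    return sum((-1) ** (a + b) * p(a, b, x, y) for a in (0, 1) for b in (0, 1))

def chsh(p):
    return (correlator(p, 0, 0) + correlator(p, 0, 1)
            + correlator(p, 1, 0) - correlator(p, 1, 1))

print(chsh(pr_box))        # 4.0 for the PR box
print(2 * np.sqrt(2))      # quantum (Tsirelson) bound, for comparison
```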

    Context-Aware Generative Adversarial Privacy

    Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals' private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP's performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model, and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics. Comment: Improved version of a paper accepted by Entropy Journal, Special Issue on Information Theory in Machine Learning and Data Science
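    The sketch below is a hedged toy version of the minimax setup for a binary data model (our own illustration; it does not reproduce the paper's GAN-based learning or its optimality results): the privatizer flips the private bit with a probability limited by a distortion budget, and the adversary performs MAP inference; the prior and budget values are illustrative assumptions.

```python
# Minimal sketch: privatizer chooses a flip probability within a distortion budget
# to minimize the MAP adversary's accuracy at inferring the private bit X from the
# released bit Y. Prior p and budget are arbitrary illustrative values.
import numpy as np

def adversary_accuracy(p, delta):
    """MAP adversary's probability of correctly guessing X from Y."""
    # Joint distribution P(X, Y) under the flipping mechanism (rows: X, cols: Y).
    joint = np.array([[(1 - p) * (1 - delta), (1 - p) * delta],
                      [p * delta,             p * (1 - delta)]])
    # For each observed Y, the adversary picks the X maximizing P(X, Y).
    return float(joint.max(axis=0).sum())

p = 0.5                      # prior on the private bit (illustrative)
budget = 0.2                 # maximum allowed flip probability (distortion budget)
deltas = np.linspace(0.0, budget, 101)
accuracies = [adversary_accuracy(p, d) for d in deltas]
best_delta = deltas[int(np.argmin(accuracies))]
print(best_delta, adversary_accuracy(p, best_delta))   # privatizer's best flip rate within budget
```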