
    Beyond the Hype: On Using Blockchains in Trust Management for Authentication

    Trust Management (TM) systems for authentication are vital to the security of online interactions, which are ubiquitous in our everyday lives. Various systems, like the Web PKI (X.509) and PGP's Web of Trust, are used to manage trust in this setting. In recent years, blockchain technology has been introduced as a panacea for our security problems, including that of authentication, without sufficient reasoning as to its merits. In this work, we investigate the merits of using open distributed ledgers (ODLs), such as the one implemented by blockchain technology, for securing TM systems for authentication. We formally model such systems, and explore how blockchain can help mitigate attacks against them. After formal argumentation, we conclude that in the context of Trust Management for authentication, blockchain technology, and ODLs in general, can offer considerable advantages compared to previous approaches. Our analysis is, to the best of our knowledge, the first to formally model and argue about the security of TM systems for authentication based on blockchain technology. To achieve this result, we first provide an abstract model for TM systems for authentication. Then, we show how this model can be conceptually encoded in a blockchain, by expressing it as a series of state transitions. As a next step, we examine five prevalent attacks on TM systems, and provide evidence that blockchain-based solutions can be beneficial to the security of such systems, by mitigating or completely negating such attacks.
    Comment: A version of this paper was published in IEEE Trustcom. http://ieeexplore.ieee.org/document/8029486
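    To make the "series of state transitions" encoding concrete, here is a minimal Python sketch of trust-management operations applied to an append-only, hash-chained log; the class and method names (TrustState, bind, revoke) and the log layout are illustrative assumptions of mine, not the paper's construction.

        # Minimal sketch: identity->key bindings as state transitions on an
        # append-only, hash-chained log (all names here are illustrative).
        import hashlib
        import json

        class TrustState:
            def __init__(self):
                self.bindings = {}   # identity -> currently bound public key
                self.log = []        # append-only transition log

            def _append(self, op):
                prev = self.log[-1]["hash"] if self.log else "0" * 64
                entry = {"op": op, "prev": prev}
                entry["hash"] = hashlib.sha256(
                    json.dumps(entry, sort_keys=True).encode()).hexdigest()
                self.log.append(entry)

            def bind(self, identity, pubkey):
                # Transition: assert a new identity->key binding.
                self.bindings[identity] = pubkey
                self._append({"type": "bind", "id": identity, "key": pubkey})

            def revoke(self, identity):
                # Transition: revoke an existing binding.
                self.bindings[identity] = None
                self._append({"type": "revoke", "id": identity})

        state = TrustState()
        state.bind("alice@example.org", "pk_alice")
        state.revoke("alice@example.org")
        print(len(state.log), state.bindings)

    On an actual ledger, ordering and tamper-evidence would be supplied by the blockchain itself; the point of the sketch is only that each trust operation is a publicly checkable transition over the previous state.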

    Context-Aware Generative Adversarial Privacy

    Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals' private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP's performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model, and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.
    Comment: Improved version of a paper accepted by Entropy Journal, Special Issue on Information Theory in Machine Learning and Data Science
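    The binary data model admits a compact illustration. The toy Python sketch below (my construction, not the paper's code) plays the minimax game directly rather than training GANs: the privatizer flips the released bit with probability p under a distortion budget D, and the adversary applies the Bayes-optimal guessing rule.

        # Toy sketch of the GAP minimax game on a binary data model
        # (illustrative, not the paper's code).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        s = rng.integers(0, 2, n)                    # private variable S
        x = np.where(rng.random(n) < 0.9, s, 1 - s)  # X agrees with S w.p. 0.9

        def adversary_accuracy(p):
            x_hat = np.where(rng.random(n) < p, 1 - x, x)  # privatized release
            corr = 0.9 * (1 - p) + 0.1 * p                 # P(X_hat = S)
            guess = x_hat if corr >= 0.5 else 1 - x_hat    # Bayes-optimal guess
            return np.mean(guess == s)

        D = 0.3                  # distortion budget: E[X != X_hat] <= D
        p_star = min(D, 0.5)     # flipping past 1/2 adds no further privacy
        print(f"p* = {p_star:.2f}, adversary accuracy ~ {adversary_accuracy(p_star):.3f}")

    With D = 0.3 the adversary's accuracy drops from roughly 0.9 to roughly 0.66; the paper's contribution is showing that mechanisms learned adversarially from data match such game-theoretic optima.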

    Perfect Implementation

    Privacy and trust affect our strategic thinking, yet they have not been precisely modeled in mechanism design. In settings of incomplete information, traditional implementations of a normal-form mechanism - by disregarding the players' privacy, or assuming trust in a mediator - may fail to reach the mechanism's objectives. We thus investigate implementations of a new type. We put forward the notion of a perfect implementation of a normal-form mechanism M: in essence, a concrete extensive-form mechanism exactly preserving all strategic properties of M, without relying on a trusted mediator or violating the privacy of the players. We prove that any normal-form mechanism can be perfectly implemented by a verifiable mediator using envelopes and an envelope-randomizing device (i.e., the same tools used for running fair lotteries or tallying secret votes). Differently from a trusted mediator, a verifiable one only performs prescribed public actions, so that everyone can verify that he is acting properly, and that he never learns any information that should remain private.
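    The paper's envelopes are physical, but their role resembles a commitment: sealed contents cannot be read, and an opening is publicly checkable by everyone. As a rough computational analogy only (my own, not the paper's construction), a salted hash commitment behaves like a sealed envelope:

        # Rough analogy (not the paper's physical protocol): a salted
        # SHA-256 commitment as a sealed, publicly verifiable "envelope".
        import hashlib
        import secrets

        def seal(value: bytes):
            """Seal a value; returns (public commitment, private opening)."""
            salt = secrets.token_bytes(16)
            return hashlib.sha256(salt + value).hexdigest(), (salt, value)

        def open_envelope(commitment, opening):
            """Anyone can recheck the hash, so the opening is verifiable."""
            salt, value = opening
            assert hashlib.sha256(salt + value).hexdigest() == commitment
            return value

        c, opening = seal(b"cooperate")      # a player seals a move
        print(open_envelope(c, opening))     # the mediator opens it publicly

    The analogy is loose: the paper's guarantees rest on physical envelopes and an envelope-randomizing device, whereas hash commitments are only computationally hiding and binding.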

    Trust in social machines: the challenges

    The World Wide Web has ushered in a new generation of applications constructively linking people and computers to create what have been called ‘social machines.’ The ‘components’ of these machines are people and technologies. It has long been recognised that for people to participate in social machines, they have to trust the processes. However, the notions of trust often used tend to be imported from agent-based computing, and may be too formal, objective and selective to describe human trust accurately. This paper applies a theory of human trust to social machines research, and sets out some of the challenges to system designers

    A system-theoretic framework for privacy preservation in continuous-time multiagent dynamics

    In multiagent dynamical systems, privacy protection corresponds to avoiding disclosure of the initial states of the agents while accomplishing a distributed task. The system-theoretic framework described in this paper for this purpose, denoted dynamical privacy, relies on introducing output maps which act as masks, rendering the internal states of an agent indiscernible by the other agents as well as by external agents monitoring all communications. Our output masks are local (i.e., decided independently by each agent), time-varying functions asymptotically converging to the true states. The resulting masked system is also time-varying, and has the original unmasked system as its limit system. When the unmasked system has a globally exponentially stable equilibrium point, it is shown in the paper that the masked system has the same point as a global attractor. It is also shown that the existence of equilibrium points in the masked system is not compatible with dynamical privacy. Application of dynamical privacy to popular examples of multiagent dynamics, such as models of social opinions, average consensus and synchronization, is investigated in detail.
    Comment: 38 pages, 4 figures, extended version of arXiv preprint arXiv:1808.0808
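    As an illustration of the masking idea on average consensus, the following Python sketch (my construction following the abstract; the exponential mask form and all parameters are assumptions) has each agent publish y_i(t) = x_i(t) + c_i exp(-a_i t) instead of its state, hiding initial conditions while the mask decays to zero:

        # Minimal sketch: average consensus run on masked outputs
        # (mask form and parameters are illustrative assumptions).
        import numpy as np

        rng = np.random.default_rng(1)
        n, dt, steps = 5, 0.01, 2000
        L = np.full((n, n), -1.0) + n * np.eye(n)  # Laplacian of complete graph
        x = rng.uniform(0, 10, n)                  # private initial states
        c = rng.uniform(-5, 5, n)                  # private mask offsets
        a = rng.uniform(0.5, 2.0, n)               # private decay rates

        true_avg = x.mean()
        for k in range(steps):
            t = k * dt
            y = x + c * np.exp(-a * t)             # masked outputs (all that is shared)
            x = x + dt * (-L @ y)                  # consensus runs on masked signals

        print(f"true average {true_avg:.4f}, final states {np.round(x, 4)}")

    Because the rows and columns of the Laplacian sum to zero, the state average is invariant even though updates use only masked outputs; since the masks decay, the agents still converge to the true average while never having disclosed their initial states.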