    How to Incentivize Data-Driven Collaboration Among Competing Parties

    The availability of vast amounts of data is changing how we make medical discoveries, predict global market trends, save energy, and develop educational strategies. In some settings, such as Genome-Wide Association Studies or deep learning, the sheer size of the data seems critical. When data is held in a distributed fashion by many parties, they must share it to reap its full benefits. One obstacle to this revolution is the unwillingness of different parties to share data, for reasons such as loss of privacy or of competitive edge. Cryptographic work addresses the privacy aspects, but sheds no light on individual parties' losses and gains when access to data carries tangible rewards. Even if it is clear that better overall conclusions can be drawn from collaboration, are individual collaborators better off by collaborating? Addressing this question is the topic of this paper.
    * We formalize a model of n-party collaboration for computing functions over private inputs, in which participants receive their outputs in sequence and the order depends on their private inputs. Each output "improves" on the preceding outputs according to a score function.
    * We say a mechanism for collaboration achieves collaborative equilibrium if it ensures a higher reward for every participant when collaborating than when working alone. We show that computing a collaborative equilibrium is NP-complete in general, yet we design efficient algorithms to compute it in a range of natural model settings.
    Our collaboration mechanisms are in the standard model, and thus require a central trusted party; however, we show this assumption is unnecessary under standard cryptographic assumptions. We show how to implement the mechanisms in a decentralized way with new extensions of secure multiparty computation that impose order/timing constraints on output delivery to different players, as well as privacy and correctness guarantees.
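    To make the notion of collaborative equilibrium concrete, here is a minimal Python sketch on a hypothetical toy instance (the party names, solo rewards, and position rewards are illustrative assumptions, not from the paper). It brute-forces delivery orders and checks whether every party earns strictly more by collaborating than by working alone; the paper shows this decision problem is NP-complete in general but efficiently solvable in natural settings.

```python
from itertools import permutations

# Hypothetical toy instance, purely illustrative.
solo_reward = {"A": 3, "B": 5, "C": 4}  # reward each party gets working alone
# Reward for receiving the collaborative output in position k (0 = first);
# later outputs "improve" on earlier ones, so later positions score higher.
position_reward = [4, 6, 8]

def is_collaborative_equilibrium(order):
    """Every party must be strictly better off than when working alone."""
    return all(position_reward[k] > solo_reward[party]
               for k, party in enumerate(order))

# Brute force over output-delivery orders (exponential in general).
for order in permutations(solo_reward):
    if is_collaborative_equilibrium(order):
        print("collaborative equilibrium with delivery order:", order)
        break
```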

    Online privacy: towards informational self-determination on the internet: report from Dagstuhl Perspectives Workshop 11061

    The Dagstuhl Perspectives Workshop "Online Privacy: Towards Informational Self-Determination on the Internet" (11061) was held on February 6-11, 2011 at Schloss Dagstuhl. Thirty participants from academia, the public sector, and industry identified the current state of the art of, and the challenges for, online privacy, and derived recommendations for improving it. Whereas the Dagstuhl Manifesto of this workshop summarizes the results of the working groups and panel discussions, this article presents the workshop's talks through their abstracts.

    Information Leakage Games

    We consider a game-theoretic setting to model the interplay between attacker and defender in the context of information flow, and to reason about their optimal strategies. In contrast with standard game theory, in our games the utility of a mixed strategy is a convex function of the distribution on the defender's pure actions, rather than the expected value of their utilities. Nevertheless, the important properties of game theory, notably the existence of a Nash equilibrium, still hold for our (zero-sum) leakage games, and we provide algorithms to compute the corresponding optimal strategies. As is typical in (simultaneous) game theory, the optimal strategy is usually mixed, i.e., probabilistic, for both the attacker and the defender. From the point of view of information flow, this was to be expected for the defender, since it is well known that randomization at the level of system design may help to reduce information leaks. Regarding the attacker, however, this appears to be the first work (with respect to the information-flow literature) to prove formally that in certain cases the optimal attack strategy is necessarily probabilistic.
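    To illustrate why the defender's utility is convex in her mixed strategy, and why mixing can strictly reduce leakage, here is a small Python sketch. The two channels, the uniform prior, and the use of posterior Bayes vulnerability as the leakage measure are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Two hypothetical channels (rows: secrets, columns: observables).
C1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])  # reveals the secret exactly
C2 = np.array([[0.0, 1.0],
               [1.0, 0.0]])  # reveals the secret "flipped"

prior = np.array([0.5, 0.5])  # uniform prior on the secret

def posterior_vulnerability(channel, prior):
    """Bayes vulnerability: the adversary guesses the most likely
    secret after seeing each observation."""
    joint = prior[:, None] * channel   # P(secret, observation)
    return joint.max(axis=0).sum()     # sum over observations of the best guess

# The defender runs C1 with probability q and C2 otherwise; the effective
# channel is the convex combination, and the vulnerability is convex in q.
best_q, best_v = None, np.inf
for q in np.linspace(0.0, 1.0, 101):
    v = posterior_vulnerability(q * C1 + (1 - q) * C2, prior)
    if v < best_v:
        best_q, best_v = q, v

print(f"optimal defender mixture q = {best_q:.2f}, vulnerability = {best_v:.2f}")
# Either pure channel has vulnerability 1.0; the 50/50 mixture yields the
# all-uniform channel with vulnerability 0.5 (the prior), i.e. zero leakage.
```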

    Private Matchings and Allocations

    We consider a private variant of the classical allocation problem: given k goods and n agents with individual, private valuation functions over bundles of goods, how can we partition the goods amongst the agents to maximize social welfare? An important special case is when each agent desires at most one good and specifies her (private) value for each good: in this case, the problem is exactly the maximum-weight matching problem in a bipartite graph. Private matching and allocation problems have not been considered in the differential privacy literature, and for good reason: they are plainly impossible to solve under differential privacy. Informally, the allocation must match agents to their preferred goods in order to maximize social welfare, but this preference is exactly what agents wish to hide. Therefore, we consider the problem under the relaxed constraint of joint differential privacy: for any agent i, no coalition of agents excluding i should be able to learn about the valuation function of agent i. In this setting, the full allocation is no longer published; instead, each agent is told which good to get. We first show that with a small number of identical copies of each good, it is possible to efficiently and accurately solve the maximum-weight matching problem while guaranteeing joint differential privacy. We then consider the more general allocation problem, where bidder valuations satisfy the gross substitutes condition. Finally, we prove that the allocation problem cannot be solved to non-trivial accuracy under joint differential privacy without requiring multiple copies of each type of good.
    Comment: Journal version published in SIAM Journal on Computing; an extended abstract appeared in STOC 2014.
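    To make the matching special case concrete, here is a minimal Python sketch of the non-private baseline: maximum-weight bipartite matching computed with SciPy on a hypothetical valuation matrix (illustrative only, not from the paper). A joint-DP mechanism would not publish the whole matching; in the spirit of the abstract, the sketch reports each agent's assignment separately.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical private valuations: values[i][j] = agent i's value for good j.
values = np.array([[9, 2, 5],
                   [4, 8, 1],
                   [6, 3, 7]])

# Non-private baseline: maximum-weight bipartite matching.
agents, goods = linear_sum_assignment(values, maximize=True)
welfare = values[agents, goods].sum()

# Under joint differential privacy the full matching would not be published;
# each agent would only learn her own assignment, e.g.:
for agent, good in zip(agents, goods):
    print(f"agent {agent} -> good {good}")
print(f"social welfare = {welfare}")
```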