    FLIP: A Utility Preserving Privacy Mechanism for Time Series

    Guaranteeing privacy in released data is an important goal for data-producing agencies. There has been extensive research on developing suitable privacy mechanisms in recent years. Particularly notable is the idea of noise addition with the guarantee of differential privacy. There are, however, concerns about compromising data utility when very stringent privacy mechanisms are applied. Such compromises can be quite stark in correlated data, such as time series data. Adding white noise to a stochastic process may significantly change the correlation structure, a facet of the process that is essential to optimal prediction. We propose the use of all-pass filtering as a privacy mechanism for regularly sampled time series data, showing that this procedure preserves utility while also providing sufficient privacy guarantees to entity-level time series. Comment: 19 pages, 5 figures
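
    As a rough illustration of the idea (not the paper's specific construction), the sketch below passes a simulated AR(1) series through a first-order all-pass filter in Python. An all-pass filter has unit magnitude response at every frequency, so the spectral density of the series is unchanged even though the released sample path differs from the original; the filter coefficient and the AR(1) example are assumptions made here purely for demonstration.

        import numpy as np
        from scipy.signal import lfilter

        def allpass_release(x, theta=0.5):
            # First-order all-pass filter H(z) = (theta + z^-1) / (1 + theta * z^-1).
            # |H(e^{jw})| = 1 for all w, so the power spectrum (and hence the
            # autocovariance structure relevant for prediction) is preserved.
            b = [theta, 1.0]   # numerator coefficients
            a = [1.0, theta]   # denominator coefficients
            return lfilter(b, a, x)

        # Illustrative use: release a filtered version of a simulated AR(1) series.
        rng = np.random.default_rng(0)
        x = np.zeros(200)
        for t in range(1, 200):
            x[t] = 0.7 * x[t - 1] + rng.standard_normal()
        x_released = allpass_release(x, theta=0.5)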

    Generalized differential privacy: regions of priors that admit robust optimal mechanisms

    Differential privacy is a notion of privacy that was initially designed for statistical databases and has recently been extended to a more general class of domains. Both differential privacy and its generalized version can be achieved by adding random noise to the reported data. Thus, privacy is obtained at the cost of reducing the data's accuracy, and therefore their utility. In this paper we consider the problem of identifying optimal mechanisms for generalized differential privacy, i.e. mechanisms that maximize the utility for a given level of privacy. The utility usually depends on a prior distribution of the data, and naturally it would be desirable to design mechanisms that are universally optimal, i.e., optimal for all priors. However, it is already known that such mechanisms do not exist in general. We then characterize maximal classes of priors for which a mechanism that is optimal for all the priors of the class does exist. We show that such classes can be defined as convex polytopes in the priors space. As an application, we consider the problem of privacy that arises when using, for instance, location-based services, and we show how to define mechanisms that maximize the quality of service while preserving the desired level of geo-indistinguishability.
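
    To make the noise-addition setting concrete, here is a minimal sketch of the standard planar Laplace mechanism commonly used for geo-indistinguishability (Andrés et al.), written in Python. It is included only to illustrate adding location noise at a privacy level epsilon; the prior-dependent optimal mechanisms characterized in the paper are not implemented here, and the epsilon value and coordinates below are assumptions for the example.

        import numpy as np
        from scipy.special import lambertw

        def planar_laplace_noise(epsilon, rng=None):
            # Sample a planar noise vector with density proportional to exp(-epsilon * r).
            rng = rng or np.random.default_rng()
            theta = rng.uniform(0.0, 2.0 * np.pi)      # uniform direction
            p = rng.uniform(0.0, 1.0)
            # The inverse CDF of the radius uses the -1 branch of the Lambert W function.
            r = -(1.0 / epsilon) * (lambertw((p - 1.0) / np.e, k=-1).real + 1.0)
            return r * np.cos(theta), r * np.sin(theta)

        # Illustrative use: perturb a (made-up) location, treating coordinates as planar.
        dx, dy = planar_laplace_noise(epsilon=0.1)
        reported_location = (1200.0 + dx, 340.0 + dy)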

    Batching of Tasks by Users of Pseudonymous Forums: Anonymity Compromise and Protection

    There are a number of forums where people participate under pseudonyms. One example is peer review, where the identity of reviewers for any paper is confidential. When participating in these forums, people frequently engage in "batching": executing multiple related tasks (e.g., commenting on multiple papers) at nearly the same time. Our empirical analysis shows that batching is common in two applications we consider: peer review and Wikipedia edits. In this paper, we identify and address the risk of deanonymization arising from linking batched tasks. To protect against linkage attacks, we take the approach of adding delay to the posting time of batched tasks. We first show that under some natural assumptions, no delay mechanism can provide a meaningful differential privacy guarantee. We therefore propose a "one-sided" formulation of differential privacy for protecting against linkage attacks. We design a mechanism that adds zero-inflated uniform delay to events and show it can preserve privacy. We prove that this noise distribution is in fact optimal in minimizing expected delay among mechanisms adding independent noise to each event, thereby establishing the Pareto frontier of the trade-off between the expected delay for batched and unbatched events. Finally, we conduct a series of experiments on Wikipedia and Bitcoin data that corroborate the practical utility of our algorithm in obfuscating batching without introducing onerous delay to a system.
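
    A minimal sketch of the kind of delay mechanism described above, under assumed parameter values (the zero-delay probability and the maximum delay are illustrative, not the calibrated choices from the paper):

        import numpy as np

        def zero_inflated_uniform_delay(n_events, p_zero=0.3, max_delay=3600.0, rng=None):
            # With probability p_zero an event is released immediately (zero delay);
            # otherwise its delay is drawn uniformly from [0, max_delay] seconds.
            # Delays are drawn independently across events.
            rng = rng or np.random.default_rng()
            delays = rng.uniform(0.0, max_delay, size=n_events)
            delays[rng.random(n_events) < p_zero] = 0.0
            return delays

        # Illustrative use: spread out the publication times within one batch.
        submit_times = np.array([0.0, 5.0, 12.0])      # seconds, hypothetical batch
        publish_times = submit_times + zero_inflated_uniform_delay(len(submit_times))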

    The Optimal Mechanism in Differential Privacy

    We derive the optimal $\epsilon$-differentially private mechanism for a single real-valued query function under a very general utility-maximization (or cost-minimization) framework. The class of noise probability distributions in the optimal mechanism has {\em staircase-shaped} probability density functions which are symmetric (around the origin), monotonically decreasing, and geometrically decaying. The staircase mechanism can be viewed as a {\em geometric mixture of uniform probability distributions}, providing a simple algorithmic description for the mechanism. Furthermore, the staircase mechanism naturally generalizes to discrete query output settings as well as more abstract settings. We explicitly derive the optimal noise probability distributions with minimum expectation of noise amplitude and power. Comparing the optimal performances with those of the Laplacian mechanism, we show that in the high privacy regime ($\epsilon$ is small), the Laplacian mechanism is asymptotically optimal as $\epsilon \to 0$; in the low privacy regime ($\epsilon$ is large), the minimum expectation of noise amplitude and the minimum noise power are $\Theta(\Delta e^{-\frac{\epsilon}{2}})$ and $\Theta(\Delta^2 e^{-\frac{2\epsilon}{3}})$ as $\epsilon \to +\infty$, while the expectation of noise amplitude and power using the Laplacian mechanism are $\frac{\Delta}{\epsilon}$ and $\frac{2\Delta^2}{\epsilon^2}$, where $\Delta$ is the sensitivity of the query function. We conclude that the gains are more pronounced in the low privacy regime. Comment: 40 pages, 5 figures. Part of this work was presented in the DIMACS Workshop on Recent Work on Differential Privacy across Computer Science, October 24 - 26, 201
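
    For concreteness, the sketch below samples staircase noise using the geometric-mixture-of-uniforms description in the abstract: a random sign, a geometrically distributed step index, and a uniform offset within the step. The split parameter gamma is left at an arbitrary default here; the utility-optimal gamma depends on the chosen cost function, so treat this as an assumed illustration rather than the paper's tuned mechanism.

        import numpy as np

        def staircase_noise(epsilon, sensitivity, gamma=0.5, rng=None):
            # Draw one sample from a staircase-shaped density with steps of
            # width `sensitivity`, decaying geometrically by b = exp(-epsilon).
            rng = rng or np.random.default_rng()
            b = np.exp(-epsilon)
            sign = rng.choice([-1.0, 1.0])
            step = rng.geometric(1.0 - b) - 1          # P(step = i) = (1 - b) * b**i
            # Decide whether the sample lands in the first (higher-density) or
            # second (lower-density) part of the step, then place it uniformly there.
            in_first_part = rng.uniform() < gamma / (gamma + (1.0 - gamma) * b)
            u = rng.uniform()
            offset = gamma * u if in_first_part else gamma + (1.0 - gamma) * u
            return sign * (step + offset) * sensitivity

        # Illustrative use: privatize a count query (sensitivity 1) at epsilon = 0.5.
        noisy_count = 42 + staircase_noise(epsilon=0.5, sensitivity=1.0)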