Privacy Games: Optimal User-Centric Data Obfuscation
In this paper, we design user-centric obfuscation mechanisms that impose the
minimum utility loss for guaranteeing a user's privacy. We optimize utility
subject to a joint guarantee of differential privacy (indistinguishability) and
distortion privacy (inference error). This double shield of protection limits
the information leakage through the obfuscation mechanism as well as the posterior
inference. We show that the privacy achieved through joint
differential-distortion mechanisms against optimal attacks is as large as the
maximum privacy that can be achieved by either of these mechanisms separately.
Their utility cost is also not larger than what either of the differential or
distortion mechanisms imposes. We model the optimization problem as a
leader-follower game between the designer of the obfuscation mechanism and the
potential adversary, and design adaptive mechanisms that anticipate and protect
against optimal inference algorithms. Thus, the obfuscation mechanism is
optimal against any inference algorithm.
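As a concrete illustration, the indistinguishability side of this optimization can be written as a small linear program when secrets and observables are discrete. The sketch below is a minimal, illustrative formulation only (toy prior, toy loss, scipy's linprog); the paper's joint distortion-privacy constraint and the adversary's best-response analysis are omitted, and every constant is an assumption.

    # Minimal sketch: pick an obfuscation matrix p(o|s) minimizing expected loss
    # subject to epsilon-indistinguishability constraints. Illustrative only.
    import numpy as np
    from scipy.optimize import linprog

    S, O = 3, 3                               # secrets and observables (toy sizes)
    eps = np.log(2.0)                         # differential-privacy parameter
    psi = np.array([0.5, 0.3, 0.2])           # prior over secrets (assumed)
    d = np.abs(np.subtract.outer(np.arange(S), np.arange(O))).astype(float)  # loss d(s,o)

    # Decision variables x[s*O + o] = p(o|s); objective sum_s psi[s] sum_o p(o|s) d(s,o).
    c = (psi[:, None] * d).ravel()

    A_ub, b_ub = [], []
    for o in range(O):                        # p(o|s) <= e^eps * p(o|s') for all s != s'
        for s in range(S):
            for s2 in range(S):
                if s == s2:
                    continue
                row = np.zeros(S * O)
                row[s * O + o] = 1.0
                row[s2 * O + o] = -np.exp(eps)
                A_ub.append(row)
                b_ub.append(0.0)

    A_eq = np.zeros((S, S * O))               # each row p(.|s) sums to one
    for s in range(S):
        A_eq[s, s * O:(s + 1) * O] = 1.0

    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=A_eq, b_eq=np.ones(S), bounds=[(0.0, 1.0)] * (S * O))
    print(res.x.reshape(S, O))                # optimal obfuscation matrix p(o|s)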
Quantifying Differential Privacy under Temporal Correlations
Differential Privacy (DP) has received increased attention as a rigorous
privacy framework. Existing studies employ traditional DP mechanisms (e.g., the
Laplace mechanism) as primitives, which assume that the data are independent,
or that adversaries do not have knowledge of the data correlations. However,
continuously generated data in the real world tend to be temporally correlated,
and such correlations can be acquired by adversaries. In this paper, we
investigate the potential privacy loss of a traditional DP mechanism under
temporal correlations in the context of continuous data release. First, we
model the temporal correlations using a Markov model and analyze the privacy
leakage of a DP mechanism when adversaries have knowledge of such temporal
correlations. Our analysis reveals that the privacy leakage of a DP mechanism
may accumulate and increase over time. We call it temporal privacy leakage.
Second, to measure such privacy leakage, we design an efficient algorithm for
calculating it in polynomial time. Although the temporal privacy leakage may
increase over time, we also show that its supremum may exist in some cases.
Third, to bound the privacy loss, we propose mechanisms that convert any
existing DP mechanism into one that protects against temporal privacy leakage. Experiments
with synthetic data confirm that our approach is efficient and effective.
Comment: appears at ICDE 2017.
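For intuition about how the leakage accumulates, it can be brute-forced on a tiny instance. The sketch below is an illustration, not the paper's polynomial-time algorithm: a two-state Markov chain is observed through per-step randomized response, and the worst-case log-ratio available to an adversary who knows the transition matrix is computed for each time point. All constants are assumptions.

    # Brute-force temporal privacy leakage on a toy Markov chain. Illustrative only.
    import itertools
    import numpy as np

    P  = np.array([[0.9, 0.1], [0.2, 0.8]])   # Markov transitions (correlated data)
    pi = np.array([0.5, 0.5])                 # initial distribution
    q  = 0.25                                 # randomized-response flip probability
    eps_step = np.log((1 - q) / q)            # per-release DP guarantee
    T = 4

    def seq_prob(x):
        # Prior probability of the hidden sequence x under the Markov chain.
        p = pi[x[0]]
        for a, b in zip(x, x[1:]):
            p *= P[a, b]
        return p

    def out_prob_given(t, v, o):
        # Pr[output sequence o | x_t = v], marginalizing over the Markov prior.
        num = den = 0.0
        for x in itertools.product([0, 1], repeat=T):
            if x[t] != v:
                continue
            px = seq_prob(x)
            den += px
            num += px * np.prod([(1 - q) if xs == os else q for xs, os in zip(x, o)])
        return num / den

    for t in range(T):
        worst = max(abs(np.log(out_prob_given(t, 0, o) / out_prob_given(t, 1, o)))
                    for o in itertools.product([0, 1], repeat=T))
        print(f"t={t}: leakage {worst:.3f} vs per-step eps {eps_step:.3f}")

On a correlated chain the printed leakage exceeds the per-step epsilon, which is exactly the accumulation effect the abstract describes.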
MVG Mechanism: Differential Privacy under Matrix-Valued Query
Differential privacy mechanism design has traditionally been tailored for a
scalar-valued query function. Although many mechanisms such as the Laplace and
Gaussian mechanisms can be extended to a matrix-valued query function by adding
i.i.d. noise to each element of the matrix, this method is often suboptimal as
it forfeits an opportunity to exploit the structural characteristics typically
associated with matrix analysis. To address this challenge, we propose a novel
differential privacy mechanism called the Matrix-Variate Gaussian (MVG)
mechanism, which adds a matrix-valued noise drawn from a matrix-variate
Gaussian distribution, and we rigorously prove that the MVG mechanism preserves
$(\epsilon,\delta)$-differential privacy. Furthermore, we introduce the concept
of directional noise made possible by the design of the MVG mechanism.
Directional noise allows the impact of the noise on the utility of the
matrix-valued query function to be moderated. Finally, we experimentally
demonstrate the performance of our mechanism using three matrix-valued queries
on three privacy-sensitive datasets. We find that the MVG mechanism notably
outperforms four previous state-of-the-art approaches, and provides comparable
utility to the non-private baseline.
Comment: Appeared in CCS'18.
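A sketch of the sampling step only, assuming numpy: a matrix-variate Gaussian draw reduces to an i.i.d. standard-normal matrix transformed by Cholesky factors of the row and column covariances. Calibrating those covariances so that the release actually satisfies $(\epsilon,\delta)$-differential privacy is the substance of the paper and is not attempted here; the diagonal covariances below merely mimic directional noise.

    # Illustrative matrix-variate Gaussian noise; covariances are NOT calibrated
    # to any privacy guarantee.
    import numpy as np

    def sample_mvg(M, Sigma_row, Sigma_col, rng):
        # Z = M + A X B with A A^T = Sigma_row, B^T B = Sigma_col,
        # where X is a matrix of i.i.d. standard normals.
        A = np.linalg.cholesky(Sigma_row)
        B = np.linalg.cholesky(Sigma_col).T
        X = rng.standard_normal(M.shape)
        return M + A @ X @ B

    rng = np.random.default_rng(0)
    Y = rng.random((5, 3))                           # stand-in matrix-valued query result
    Sigma_row = np.diag([0.1, 1.0, 1.0, 1.0, 1.0])   # directional: less noise along row 0
    Sigma_col = np.eye(3)
    Y_noisy = sample_mvg(Y, Sigma_row, Sigma_col, rng)
    print(np.round(Y_noisy - Y, 2))                  # the added matrix-valued noise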
Quantifying Differential Privacy in Continuous Data Release under Temporal Correlations
Differential Privacy (DP) has received increasing attention as a rigorous
privacy framework. Many existing studies employ traditional DP mechanisms
(e.g., the Laplace mechanism) as primitives to continuously release private
data for protecting privacy at each time point (i.e., event-level privacy),
assuming that the data at different time points are independent, or that
adversaries have no knowledge of the correlations between them. However,
continuously generated data tend to be temporally correlated, and such
correlations can be acquired by adversaries. In this paper, we investigate the
potential privacy loss of a traditional DP mechanism under temporal
correlations. First, we analyze the privacy leakage of a DP mechanism under
temporal correlations that can be modeled using a Markov chain. Our analysis
reveals that the event-level privacy loss of a DP mechanism may
increase over time. We call this unexpected privacy loss
temporal privacy leakage (TPL). Although TPL may increase over time,
we find that its supremum may exist in some cases. Second, we design efficient
algorithms for calculating TPL. Third, we propose data-releasing mechanisms
that convert any existing DP mechanism into one that protects against TPL. Experiments
confirm that our approach is efficient and effective.
Comment: accepted in TKDE special issue "Best of ICDE 2017". arXiv admin note: substantial text overlap with arXiv:1610.0754
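The conversion in the third contribution can be caricatured as budget calibration: choose a per-release epsilon whose supremum of accumulated leakage stays under a target. In the sketch below, sup_leakage is an explicitly invented placeholder, not the paper's formula (which is derived from the Markov transition matrix); only the calibrate-then-release pattern is the point.

    # Caricature of converting a DP mechanism into a TPL-bounded one.
    import numpy as np

    def sup_leakage(eps_step, corr=0.8):
        # PLACEHOLDER: a monotone stand-in for worst-case accumulated leakage.
        return eps_step / (1 - corr * np.tanh(eps_step))

    def calibrate(alpha, lo=1e-6, hi=10.0, iters=60):
        # Binary-search the largest per-release eps with sup_leakage(eps) <= alpha.
        for _ in range(iters):
            mid = (lo + hi) / 2
            if sup_leakage(mid) <= alpha:
                lo = mid
            else:
                hi = mid
        return lo

    alpha = 1.0                                # target bound on temporal leakage
    eps_step = calibrate(alpha)
    scale = 1.0 / eps_step                     # Laplace scale for a sensitivity-1 count
    rng = np.random.default_rng(0)

    def release(true_count):
        # Underlying DP primitive (Laplace mechanism) with the calibrated budget.
        return true_count + rng.laplace(0.0, scale)

    print(eps_step, release(42))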
Buying Private Data without Verification
We consider the problem of designing a survey to aggregate non-verifiable
information from a privacy-sensitive population: an analyst wants to compute
some aggregate statistic from the private bits held by each member of a
population, but cannot verify the correctness of the bits reported by
participants in his survey. Individuals in the population are strategic agents
with a cost for privacy, i.e., they not only account for the payments they
expect to receive from the mechanism, but also their privacy costs from any
information revealed about them by the mechanism's outcome (the computed
statistic as well as the payments) to determine their utilities. How can the
analyst design payments to obtain an accurate estimate of the population
statistic when individuals strategically decide both whether to participate and
whether to truthfully report their sensitive information?
We design a differentially private peer-prediction mechanism that supports
accurate estimation of the population statistic as a Bayes-Nash equilibrium in
settings where agents have explicit preferences for privacy. The mechanism
requires knowledge of the marginal prior distribution on the bits $b_i$, but does
not need full knowledge of the marginal distribution on the costs $c_i$,
instead requiring only an approximate upper bound. Our mechanism guarantees
$\epsilon$-differential privacy to each agent $i$ against any adversary who can
observe the statistical estimate output by the mechanism, as well as the
payments made to the other agents $j \neq i$. Finally, we show that with
slightly more structured assumptions on the privacy cost functions of each
agent, the cost of running the survey goes to $0$ as the number of agents
diverges.
Comment: Appears in EC 2014.
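A toy simulation of the pipeline, under heavy assumptions: agents report truthfully (the equilibrium behavior the abstract establishes, not something this sketch verifies), the analyst releases a Laplace-noised mean (epsilon-differential privacy for a sensitivity-1/n statistic), and payments use a quadratic (Brier-style) peer-prediction score against a random peer. The posterior table and all constants are illustrative, not the paper's payment rule.

    # Toy survey: DP release of the statistic plus peer-prediction payments.
    import numpy as np

    rng = np.random.default_rng(1)
    n, eps, prior = 1000, 0.5, 0.3             # agents, DP budget, known marginal Pr[b=1]
    bits = (rng.random(n) < prior).astype(int) # reports, assumed truthful

    # eps-DP estimate of the population mean (sensitivity 1/n).
    estimate = bits.mean() + rng.laplace(0.0, 1.0 / (n * eps))

    # Peer prediction: each report implies a prediction of a random peer's bit,
    # scored with a quadratic (Brier) rule. Posterior values are assumptions.
    post = {0: 0.25, 1: 0.45}                  # illustrative Pr[peer = 1 | own bit]
    peers = rng.integers(0, n, size=n)         # random peers (self-matches ignored)
    p = np.array([post[b] for b in bits])
    payments = 1.0 - (p - bits[peers]) ** 2    # higher when prediction fits the peer
    print(f"estimate={estimate:.3f}, mean payment={payments.mean():.3f}")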