Stochastic Privacy
Online services such as web search and e-commerce applications typically rely
on the collection of data about users, including details of their activities on
the web. Such personal data is used to enhance the quality of service via
personalization of content and to maximize revenues via better targeting of
advertisements and deeper engagement of users on sites. To date, service
providers have largely followed the approach of either requiring or requesting
consent for opting in to share their data. Users may be willing to share
private information in return for better quality of service, for incentives,
or for assurances about the nature and extent of the logging of data.
We introduce \emph{stochastic privacy}, a new approach to privacy centering on
a simple concept: A guarantee is provided to users about the upper-bound on the
probability that their personal data will be used. Such a probability, which we
refer to as \emph{privacy risk}, can be assessed by users as a preference or
communicated as a policy by a service provider. Service providers can work to
personalize and to optimize revenues in accordance with preferences about
privacy risk. We present procedures, proofs, and an overall system for
maximizing the quality of services, while respecting bounds on allowable or
communicated privacy risk. We demonstrate the methodology with a case study and
evaluation of the procedures applied to web search personalization. We show how
we can achieve near-optimal utility of information access with provable
guarantees on the probability of sharing data.
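The core guarantee above can be sketched in a few lines: a coin flip, bounded by the user's declared privacy risk, decides whether a user's data ever enters the service pipeline. The function name and interface here are illustrative assumptions, not the paper's API.

```python
import random

def maybe_collect(user_data, privacy_risk):
    """Stochastic-privacy gate (illustrative sketch).

    `privacy_risk` is the user's upper bound on the probability that
    their personal data will be used. Data is admitted with at most
    that probability, so the bound holds by construction.
    """
    if random.random() < privacy_risk:
        return user_data   # data may be used for personalization
    return None            # data is discarded; the guarantee is kept
```

Over many users with `privacy_risk = 0.1`, roughly 10% of records are admitted; the provider then optimizes personalization over only the admitted sample.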
Development and Analysis of Deterministic Privacy-Preserving Policies Using Non-Stochastic Information Theory
A deterministic privacy metric using non-stochastic information theory is
developed. Particularly, minimax information is used to construct a measure of
information leakage, which is inversely proportional to the measure of privacy.
Anyone can submit a query to a trusted agent with access to a non-stochastic
uncertain private dataset. Optimal deterministic privacy-preserving policies
for responding to the submitted query are computed by maximizing the measure of
privacy subject to a constraint on the worst-case quality of the response
(i.e., the worst-case difference between the response by the agent and the
output of the query computed on the private dataset). The optimal
privacy-preserving policy is proved to be a piecewise constant function in the
form of a quantization operator applied on the output of the submitted query.
The measure of privacy is also used to analyze the performance of the
$k$-anonymity methodology (a popular deterministic mechanism for
privacy-preserving release of datasets using suppression and generalization
techniques), proving that it is in fact not privacy-preserving.
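A piecewise-constant response of the kind the paper proves optimal can be sketched as a simple quantizer: the agent reports only which bin the true query output falls in, via the bin's midpoint. The bin width bounds the worst-case response error; the names and the midpoint reconstruction here are illustrative assumptions, not the paper's notation.

```python
import math

def quantize(query_output, bin_width):
    """Quantized response to a query (illustrative sketch).

    Returns the midpoint of the bin containing `query_output`, so the
    worst-case difference from the true output is `bin_width / 2`,
    while many distinct private datasets map to the same response.
    """
    return bin_width * math.floor(query_output / bin_width) + bin_width / 2
```

Widening the bins leaks less information about the private dataset but loosens the worst-case quality constraint on the response.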
Corrupt Bandits for Preserving Local Privacy
We study a variant of the stochastic multi-armed bandit (MAB) problem in
which the rewards are corrupted. In this framework, motivated by privacy
preservation in online recommender systems, the goal is to maximize the sum of
the (unobserved) rewards, based on observations of transformations of these
rewards through a stochastic corruption process with known parameters. We
provide a lower bound on the expected regret of any bandit algorithm in this
corrupted setting. We devise a frequentist algorithm, KLUCB-CF, and a Bayesian
algorithm, TS-CF, and give upper bounds on their regret. We also provide the
appropriate corruption parameters to guarantee a desired level of local privacy
and analyze how this impacts the regret. Finally, we present some experimental
results that confirm our analysis.
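One standard corruption process with known parameters that yields local privacy is randomized response on Bernoulli rewards: the true reward is reported with probability p and flipped otherwise, and the learner de-biases the observed mean. This is a generic construction used to illustrate the setting; the parameterization is not claimed to be the paper's exact choice.

```python
import random

def corrupt(reward, p):
    """Randomized-response corruption of a Bernoulli reward in {0, 1}.

    With probability p the true reward is reported; otherwise it is
    flipped. For p > 1/2 this gives local differential privacy with
    epsilon = ln(p / (1 - p)).
    """
    return reward if random.random() < p else 1 - reward

def debias(observed_mean, p):
    """Recover the true mean reward from corrupted observations.

    Inverts E[obs] = p * mu + (1 - p) * (1 - mu).
    """
    return (observed_mean - (1 - p)) / (2 * p - 1)
```

Stronger privacy (p closer to 1/2) inflates the variance of the de-biased estimate, which is the mechanism behind the regret penalty the lower bound captures.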