Linear and Range Counting under Metric-based Local Differential Privacy
Local differential privacy (LDP) enables private data sharing and analytics
without the need for a trusted data collector. Error-optimal primitives
(e.g., for estimating means and item frequencies) under LDP have been well
studied. For analytical tasks such as range queries, however, the best known
error bound depends on the domain size of the private data, which is
potentially prohibitive. This deficiency is inherent, as LDP protects the
same level of indistinguishability between any pair of private data values
for each data owner.
In this paper, we utilize an extension of ε-LDP called Metric-LDP or
E-LDP, where a metric E defines heterogeneous privacy guarantees for
different pairs of private data values and thus provides a more flexible knob
than ε does to relax LDP and tune utility-privacy trade-offs. We show
that, under such privacy relaxations, for analytical workloads such as linear
counting, multi-dimensional range counting queries, and quantile queries, we
can achieve significant gains in utility. In particular, for range queries
under E-LDP where the metric E is the L1-distance function scaled by
ε, we design mechanisms whose errors are independent of the domain sizes;
instead, their errors depend on the metric E, which specifies at what
granularity the private data is protected. We believe that the primitives we
design for E-LDP will be useful in developing mechanisms for other analytical
tasks, and encourage the adoption of LDP in practice.
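The idea of heterogeneous, distance-dependent privacy can be illustrated with a small sketch. The randomizer below is the standard exponential-mechanism-style construction for metric differential privacy (report y with probability proportional to exp(-ε·|x−y|/2)), shown here only to make the "granularity" intuition concrete; it is not necessarily the mechanism designed in the paper, and all function names are illustrative.

```python
import math
import random

def metric_ldp_randomizer(x, domain, eps):
    """Exponential-mechanism-style local randomizer: report y with
    probability proportional to exp(-eps * |x - y| / 2).  Nearby values
    are more likely outputs, and the privacy loss between two inputs
    x, x' is bounded by eps * |x - x'| (the L1 metric scaled by eps)."""
    weights = [math.exp(-eps * abs(x - y) / 2) for y in domain]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for y, w in zip(domain, weights):
        acc += w
        if r <= acc:
            return y
    return domain[-1]

def report_prob(x, y, domain, eps):
    """P(randomizer outputs y | input x), computed exactly."""
    weights = {z: math.exp(-eps * abs(x - z) / 2) for z in domain}
    return weights[y] / sum(weights.values())

# Check the metric-LDP guarantee numerically: for any inputs x, x' and
# any output y, P(y | x) / P(y | x') <= exp(eps * |x - x'|).
domain = list(range(8))
eps = 0.5
worst = max(
    report_prob(x, y, domain, eps) / report_prob(xp, y, domain, eps)
    for x in domain for xp in domain for y in domain
)
assert worst <= math.exp(eps * (len(domain) - 1)) + 1e-9
```

Close pairs of inputs are nearly indistinguishable, while far-apart pairs are only weakly protected, which is exactly the flexibility that lets range-query error depend on the metric rather than on the domain size.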
Quantification of De-anonymization Risks in Social Networks
The risks of publishing privacy-sensitive data have received considerable
attention recently. Several de-anonymization attacks have been proposed to
re-identify individuals even if data anonymization techniques were applied.
However, there is no theoretical quantification relating the data utility
preserved by anonymization techniques to the data's vulnerability against
de-anonymization attacks.
In this paper, we theoretically analyze the de-anonymization attacks and
provide conditions on the utility of the anonymized data (denoted by anonymized
utility) to achieve successful de-anonymization. To the best of our knowledge,
this is the first work on quantifying the relationships between anonymized
utility and de-anonymization capability. Unlike previous work, our
quantification analysis requires no assumptions about the graph model, thus
providing a general theoretical guide for developing practical
de-anonymization/anonymization techniques.
Furthermore, we evaluate state-of-the-art de-anonymization attacks on a
real-world Facebook dataset to show the limitations of previous work. By
comparing these experimental results and the theoretically achievable
de-anonymization capability derived in our analysis, we further demonstrate the
ineffectiveness of previous de-anonymization attacks and the potential of more
powerful de-anonymization attacks in the future.

Comment: Published in International Conference on Information Systems
Security and Privacy, 201
Preserving Link Privacy in Social Network Based Systems
A growing body of research leverages social-network-based trust relationships
to improve system functionality. However, these systems expose users' trust
relationships, which are considered sensitive information in today's society,
to an adversary.
In this work, we make the following contributions. First, we propose an
algorithm that perturbs the structure of a social graph in order to provide
link privacy, at the cost of slight reduction in the utility of the social
graph. Second, we define general metrics for characterizing the utility and
privacy of perturbed graphs. Third, we evaluate the utility and privacy of our
proposed algorithm using real world social graphs. Finally, we demonstrate the
applicability of our perturbation algorithm on a broad range of secure systems,
including Sybil defenses and secure routing.

Comment: 16 pages, 15 figures
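One simple way to perturb a social graph for link privacy is to replace each real edge with an edge to the endpoint of a short random walk, so that true links are hidden among structurally plausible ones. The sketch below illustrates that general idea under stated assumptions; the function name, parameters, and dictionary-based graph representation are all illustrative and not the paper's API or exact algorithm.

```python
import random

def perturb_graph(adj, walk_len=3, seed=0):
    """Perturb an undirected graph for link privacy: replace each edge
    (u, v) with (u, w), where w is the endpoint of a `walk_len`-step
    random walk started at v.  Real links are obscured while coarse
    structure (e.g., node set, rough degrees) is approximately kept.
    `adj` maps each node to a list of its neighbours."""
    rng = random.Random(seed)
    new_edges = set()
    for u in adj:
        for v in adj[u]:
            if u < v:  # visit each undirected edge once
                w = v
                for _ in range(walk_len):
                    w = rng.choice(adj[w])
                if w != u:  # drop would-be self-loops
                    new_edges.add((min(u, w), max(u, w)))
    # rebuild symmetric adjacency lists over the same node set
    out = {u: [] for u in adj}
    for a, b in new_edges:
        out[a].append(b)
        out[b].append(a)
    return out

# toy graph: a 5-node cycle
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
perturbed = perturb_graph(cycle)
```

The walk length acts as the utility-privacy knob: longer walks give stronger link privacy but drift further from the original structure, which is the trade-off the abstract's utility and privacy metrics are meant to characterize.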
Privacy-Preserving Distributed Optimization via Subspace Perturbation: A General Framework
As the modern world becomes increasingly digitized and interconnected,
distributed signal processing has proven to be effective in processing its
large volume of data. However, a main challenge limiting the broad use of
distributed signal processing techniques is the issue of privacy in handling
sensitive data. To address this privacy issue, we propose a novel yet general
subspace perturbation method for privacy-preserving distributed optimization,
which allows each node to obtain the desired solution while protecting its
private data. In particular, we show that the dual variables introduced in each
distributed optimizer will not converge in a certain subspace determined by the
graph topology. Additionally, the optimization variable is ensured to converge
to the desired solution, because it is orthogonal to this non-convergent
subspace. We therefore propose to insert noise in the non-convergent subspace
through the dual variable such that the private data are protected, and the
accuracy of the desired solution is completely unaffected. Moreover, the
proposed method is shown to be secure under two widely-used adversary models:
passive and eavesdropping. Furthermore, we consider several distributed
optimizers such as ADMM and PDMM to demonstrate the general applicability of
the proposed method. Finally, we test the performance through a set of
applications. Numerical tests indicate that the proposed method outperforms
existing methods in terms of estimation accuracy, privacy level,
communication cost, and convergence rate.
Privacy Games: Optimal User-Centric Data Obfuscation
In this paper, we design user-centric obfuscation mechanisms that impose the
minimum utility loss for guaranteeing user's privacy. We optimize utility
subject to a joint guarantee of differential privacy (indistinguishability) and
distortion privacy (inference error). This double shield of protection limits
the information leakage through the obfuscation mechanism as well as through
the posterior inference. We show that the privacy achieved through joint
differential-distortion mechanisms against optimal attacks is as large as the
maximum privacy that can be achieved by either of these mechanisms separately.
Their utility cost is also not larger than what either of the differential or
distortion mechanisms imposes. We model the optimization problem as a
leader-follower game between the designer of the obfuscation mechanism and
the potential adversary, and design adaptive mechanisms that anticipate and
protect against optimal inference algorithms. Thus, the obfuscation mechanism
is optimal against any inference algorithm.
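The follower side of this leader-follower game can be made concrete with a small example. Given a prior over secret values and a fixed obfuscation mechanism p(o|x), the Bayes-optimal adversary observes the output, forms the posterior, and guesses the value minimizing posterior expected loss; the resulting expected error is the distortion-privacy level the mechanism provides. The toy mechanism and all names below are illustrative assumptions, not the paper's formulation of the optimization.

```python
def inference_error(prior, mechanism, loss):
    """Expected error of the Bayes-optimal adversary against a fixed
    obfuscation mechanism.  prior[x] = P(x); mechanism[x][o] = P(o|x);
    loss(x, guess) = adversary's loss for guessing `guess` when the
    secret is x."""
    secrets = list(prior)
    outputs = {o for x in secrets for o in mechanism[x]}
    total = 0.0
    for o in outputs:
        # joint weights P(x, o); their sum over x is P(o)
        joint = {x: prior[x] * mechanism[x].get(o, 0.0) for x in secrets}
        if sum(joint.values()) == 0.0:
            continue
        # Bayes-optimal guess: minimize posterior expected loss.  The
        # joint weights already carry the P(o) factor, so summing the
        # per-observation minima gives the overall expected error.
        total += min(
            sum(joint[x] * loss(x, g) for x in secrets) for g in secrets
        )
    return total

# Toy example: two equally likely locations; the mechanism flips the
# reported location with probability 0.3; wrong guesses cost 1.
prior = {0: 0.5, 1: 0.5}
mech = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.3, 1: 0.7}}
err = inference_error(prior, mech, lambda x, g: float(x != g))
# The Bayes adversary guesses the reported value, so its error is 0.3.
assert abs(err - 0.3) < 1e-9
```

A designer playing the leader role would search over mechanisms to minimize utility loss subject to both a lower bound on this inference error and a differential-privacy constraint, which is the joint guarantee the abstract describes.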