Optimal error of query sets under the differentially-private matrix mechanism
A common goal of privacy research is to release synthetic data that satisfies
a formal privacy guarantee and can be used by an analyst in place of the
original data. To achieve reasonable accuracy, a synthetic data set must be
tuned to support a specified set of queries accurately, sacrificing fidelity
for other queries.
This work considers methods for producing synthetic data under differential
privacy and investigates what makes a set of queries "easy" or "hard" to
answer. We consider answering sets of linear counting queries using the matrix
mechanism, a recent differentially-private mechanism that can reduce error by
adding complex correlated noise adapted to a specified workload.
Our main result is a novel lower bound on the minimum total error required to
simultaneously release answers to a set of workload queries. The bound reveals
that the hardness of a query workload is related to the spectral properties of
the workload when it is represented in matrix form. The bound is most
informative for -differential privacy but also applies to
-differential privacy.Comment: 35 pages; Short version to appear in the 16th International
Conference on Database Theory (ICDT), 201
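To make the abstract's setting concrete, here is a minimal NumPy sketch of answering a workload of linear counting queries through a strategy matrix; the histogram, workload, and `matrix_mechanism` helper are hypothetical examples, and the pseudo-inverse reconstruction follows the standard matrix-mechanism recipe with Laplace noise calibrated to the strategy's L1 sensitivity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a length-4 private histogram and three counting queries.
x = np.array([10.0, 20.0, 30.0, 40.0])        # private histogram
W = np.array([[1, 1, 0, 0],                   # range query over cells 1-2
              [0, 0, 1, 1],                   # range query over cells 3-4
              [1, 1, 1, 1]], dtype=float)     # total count

def matrix_mechanism(W, A, x, eps):
    """Answer workload W through strategy A under epsilon-DP."""
    # L1 sensitivity of A: max column L1 norm (one record changes one cell).
    delta_A = np.abs(A).sum(axis=0).max()
    noisy = A @ x + rng.laplace(scale=delta_A / eps, size=A.shape[0])
    # Reconstruct workload answers from the noisy strategy answers.
    return W @ np.linalg.pinv(A) @ noisy

naive = matrix_mechanism(W, W, x, eps=1.0)             # strategy = workload
per_cell = matrix_mechanism(W, np.eye(4), x, eps=1.0)  # strategy = cell counts
```

Different strategies `A` trade noise magnitude against how the noise correlates across the reconstructed answers, which is exactly the degree of freedom the matrix mechanism optimizes over.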
Impossibility of dimension reduction in the nuclear norm
Let $S_1$ (the Schatten--von Neumann trace class) denote the Banach
space of all compact linear operators $T:\ell_2\to\ell_2$ whose nuclear norm
$\|T\|_{S_1}=\sum_{j=1}^{\infty}\sigma_j(T)$ is finite, where
$\sigma_1(T)\ge\sigma_2(T)\ge\dots$ are the singular values of $T$. We prove that
for arbitrarily large $n\in\mathbb{N}$ there exists a subset $\mathcal{C}\subseteq S_1$
with $|\mathcal{C}|=n$ that cannot be
embedded with bi-Lipschitz distortion $O(1)$ into any $n^{o(1)}$-dimensional
linear subspace of $S_1$. $\mathcal{C}$ is not even a $O(1)$-Lipschitz
quotient of any subset of any $n^{o(1)}$-dimensional linear subspace of
$S_1$. Thus, $S_1$ does not admit a dimension reduction
result \'a la Johnson and Lindenstrauss (1984), which complements the work of
Harrow, Montanaro and Short (2011) on the limitations of quantum dimension
reduction under the assumption that the embedding into low dimensions is a
quantum channel. Such a statement was previously known with $S_1$
replaced by the Banach space $\ell_1$ of absolutely summable sequences via the
work of Brinkman and Charikar (2003). In fact, the above set $\mathcal{C}$ can
be taken to be the same set as the one that Brinkman and Charikar considered,
viewed as a collection of diagonal matrices in $S_1$. The challenge is
to demonstrate that $\mathcal{C}$ cannot be faithfully realized in an arbitrary
low-dimensional subspace of $S_1$, while Brinkman and Charikar
obtained such an assertion only for subspaces of $S_1$ that consist of
diagonal operators (i.e., subspaces of $\ell_1$). We establish this by proving
that the Markov 2-convexity constant of any finite dimensional linear subspace
$X$ of $S_1$ is at most a universal constant multiple of $\sqrt{\log\dim(X)}$.
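A short numerical sketch of the nuclear norm (the example values are hypothetical): for a diagonal operator the singular values are the absolute values of the diagonal entries, so on diagonal matrices the nuclear norm coincides with the $\ell_1$ norm, which is exactly the identification used to view the Brinkman--Charikar set inside $S_1$.

```python
import numpy as np

def nuclear_norm(T):
    """Nuclear (Schatten-1) norm: the sum of the singular values of T."""
    return np.linalg.svd(T, compute_uv=False).sum()

# For a diagonal operator the singular values are |d_i|, so the nuclear
# norm of diag(d) equals the l1 norm of d -- the embedding of l1 into S_1.
d = np.array([3.0, -4.0, 1.0])
diag_nuclear = nuclear_norm(np.diag(d))   # equals |3| + |-4| + |1| = 8
```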
Blowfish Privacy: Tuning Privacy-Utility Trade-offs using Policies
Privacy definitions provide ways for trading-off the privacy of individuals
in a statistical database for the utility of downstream analysis of the data.
In this paper, we present Blowfish, a class of privacy definitions inspired by
the Pufferfish framework, which provides a rich interface for this trade-off. In
particular, we allow data publishers to extend differential privacy using a
policy, which specifies (a) secrets, or information that must be kept secret,
and (b) constraints that may be known about the data. While the secret
specification allows increased utility by lessening protection for certain
individual properties, the constraint specification provides added protection
against an adversary who knows correlations in the data (arising from
constraints). We formalize policies and present novel algorithms that can
handle general specifications of sensitive information and certain count
constraints. We show that there are reasonable policies under which our privacy
mechanisms for k-means clustering, histograms and range queries introduce
significantly less noise than their differentially private counterparts. We
quantify the privacy-utility trade-offs for various policies analytically and
empirically on real datasets.
Comment: Full version of the paper at SIGMOD'14 Snowbird, Utah US
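A minimal sketch of the secret-specification idea, under entirely hypothetical names: if the policy only requires pairs of domain values connected in a "secret graph" to remain indistinguishable (e.g., only adjacent values on a line), a linear query's sensitivity, and hence its noise scale, can shrink relative to classic differential privacy, where every pair of values is a secret.

```python
import numpy as np

rng = np.random.default_rng(1)

def policy_sensitivity(q, secret_pairs):
    """Sensitivity of linear query q (per-value coefficients) when only the
    listed pairs of domain values must stay indistinguishable (a hypothetical
    Blowfish-style secret specification)."""
    return max(abs(q[u] - q[v]) for u, v in secret_pairs)

def laplace_answer(x, q, sens, eps, rng=rng):
    """Laplace mechanism with noise calibrated to the policy sensitivity."""
    return float(q @ x) + rng.laplace(scale=sens / eps)

# Hypothetical domain {0, 1, 2, 3}; q sums each individual's value.
q = np.arange(4.0)
full_pairs = [(u, v) for u in range(4) for v in range(u + 1, 4)]  # classic DP
line_pairs = [(0, 1), (1, 2), (2, 3)]  # only adjacent values are secrets

s_dp = policy_sensitivity(q, full_pairs)      # 3.0
s_policy = policy_sensitivity(q, line_pairs)  # 1.0: 3x less noise needed
```

The paper's actual algorithms handle far more general secrets and count constraints; this only illustrates why weakening the secret specification can buy utility.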
Linear and Range Counting under Metric-based Local Differential Privacy
Local differential privacy (LDP) enables private data sharing and analytics
without the need for a trusted data collector. Error-optimal primitives (for,
e.g., estimating means and item frequencies) under LDP have been well studied.
For analytical tasks such as range queries, however, the best known error bound
is dependent on the domain size of private data, which is potentially
prohibitive. This deficiency is inherent as LDP protects the same level of
indistinguishability between any pair of private data values for each data
owner.
In this paper, we utilize an extension of $\epsilon$-LDP called Metric-LDP or
$E$-LDP, where a metric $E$ defines heterogeneous privacy guarantees for
different pairs of private data values and thus provides a more flexible knob
than $\epsilon$ does to relax LDP and tune utility-privacy trade-offs. We show
that, under such privacy relaxations, for analytical workloads such as linear
counting, multi-dimensional range counting queries, and quantile queries, we
can achieve significant gains in utility. In particular, for range queries
under $E$-LDP where the metric $E$ is the $\ell^1$-distance function scaled by
$\epsilon$, we design mechanisms with errors independent of the domain sizes;
instead, their errors depend on the metric $E$, which specifies at what
granularity the private data is protected. We believe that the primitives we
design for $E$-LDP will be useful in developing mechanisms for other analytical
tasks, and encourage the adoption of LDP in practice
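This is not the paper's mechanism, but a generic metric-LDP local randomizer makes the relaxation tangible: report $y$ with probability proportional to $\exp(-\epsilon|x-y|/2)$, so nearby values stay hard to distinguish while distant ones leak more. All names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def transition_matrix(n, eps):
    """P[x, y]: probability of reporting y for true value x, with
    P[x, y] proportional to exp(-eps * |x - y| / 2). By the triangle
    inequality, P[x] <= exp(eps * |x - x'|) * P[x'] entrywise, i.e.
    E-LDP for the metric E(x, x') = eps * |x - x'|."""
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    P = np.exp(-eps * d / 2.0)
    return P / P.sum(axis=1, keepdims=True)

def metric_ldp_report(x, n, eps, rng=rng):
    """One user's locally randomized report of a value in {0, ..., n-1}."""
    return int(rng.choice(n, p=transition_matrix(n, eps)[x]))
```

An aggregator can then estimate counts or ranges from many such reports; the point is that the privacy guarantee degrades gracefully with the distance between values rather than uniformly over the whole domain.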
Convex Optimization for Linear Query Processing under Approximate Differential Privacy
Differential privacy enables organizations to collect accurate aggregates
over sensitive data with strong, rigorous guarantees on individuals' privacy.
Previous work has found that under differential privacy, computing multiple
correlated aggregates as a batch, using an appropriate \emph{strategy}, may
yield higher accuracy than computing each of them independently. However,
finding the best strategy that maximizes result accuracy is non-trivial, as it
involves solving a complex constrained optimization program that appears to be
non-linear and non-convex. Hence, in the past much effort has been devoted to
solving this non-convex optimization program. Existing approaches include
various sophisticated heuristics and expensive numerical solutions. None of
them, however, guarantees to find the optimal solution of this optimization
problem.
This paper points out that under $(\epsilon, \delta)$-differential privacy,
the optimal solution of the above constrained optimization problem in search of
a suitable strategy can be found, rather surprisingly, by solving a simple and
elegant convex optimization program. Then, we propose an efficient algorithm
based on Newton's method, which we prove to always converge to the optimal
solution with linear global convergence rate and quadratic local convergence
rate. Empirical evaluations demonstrate the accuracy and efficiency of the
proposed solution.
Comment: to appear in ACM SIGKDD 201
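A sketch of the quantity such strategy optimization targets, under the usual Gaussian-noise model for approximate differential privacy (the helper and workload are hypothetical): for a fixed strategy $A$, the expected total squared error is the noise variance, calibrated to $A$'s maximum column $\ell_2$ norm, times $\|WA^{+}\|_F^2$. Finding the $A$ minimizing this is the optimization the paper shows to be convex.

```python
import numpy as np

def expected_error(W, A, sigma=1.0):
    """Expected total squared error of answering workload W through
    strategy A with Gaussian noise calibrated to A's L2 sensitivity
    (max column L2 norm): answers are W @ pinv(A) @ (A x + noise)."""
    sens = np.linalg.norm(A, axis=0).max()
    return (sigma * sens) ** 2 * np.linalg.norm(W @ np.linalg.pinv(A), "fro") ** 2

# Hypothetical workload: all prefix-range counts over a length-8 domain.
W = np.tril(np.ones((8, 8)))
err_workload = expected_error(W, W)          # use the workload itself: 64.0
err_identity = expected_error(W, np.eye(8))  # answer cells, then sum: 36.0
```

Even between these two naive strategies the error differs markedly, which is why searching over all strategies, once the search is convex, pays off.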
Optimizing Batch Linear Queries under Exact and Approximate Differential Privacy
Differential privacy is a promising privacy-preserving paradigm for
statistical query processing over sensitive data. It works by injecting random
noise into each query result, such that it is provably hard for the adversary
to infer the presence or absence of any individual record from the published
noisy results. The main objective in differentially private query processing is
to maximize the accuracy of the query results, while satisfying the privacy
guarantees. Previous work, notably \cite{LHR+10}, has suggested that with an
appropriate strategy, processing a batch of correlated queries as a whole
achieves considerably higher accuracy than answering them individually.
However, to our knowledge there is currently no practical solution to find such
a strategy for an arbitrary query batch; existing methods either return
strategies of poor quality (often worse than naive methods) or require
prohibitively expensive computations for even moderately large domains.
Motivated by this, we propose low-rank mechanism (LRM), the first practical
differentially private technique for answering batch linear queries with high
accuracy. LRM works for both exact (i.e., $\epsilon$-) and approximate (i.e.,
$(\epsilon, \delta)$-) differential privacy definitions. We derive the
utility guarantees of LRM, and provide guidance on how to set the privacy
parameters given the user's utility expectation. Extensive experiments using
real data demonstrate that our proposed method consistently outperforms
state-of-the-art query processing solutions under differential privacy, by
large margins.
Comment: ACM Transactions on Database Systems (ACM TODS). arXiv admin note:
text overlap with arXiv:1212.230
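The low-rank idea can be sketched in a few lines of NumPy; this is an illustrative simplification (SVD truncation plus Laplace noise), not the paper's optimized factorization, and all names are hypothetical: factor $W \approx BL$ with only $r$ strategy queries in $L$, perturb $Lx$, then recombine with $B$.

```python
import numpy as np

rng = np.random.default_rng(3)

def low_rank_answers(W, x, eps, r, rng=rng):
    """Sketch of a low-rank mechanism: factor W ~= B @ L with r rows in L,
    add Laplace noise to the r strategy answers L @ x, then recombine."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    B = U[:, :r] * s[:r]                  # m x r reconstruction matrix
    L = Vt[:r]                            # r x n strategy matrix
    sens = np.abs(L).sum(axis=0).max()    # L1 sensitivity of L
    return B @ (L @ x + rng.laplace(scale=sens / eps, size=r))

# Hypothetical rank-2 workload of 6 correlated linear queries over 4 cells.
W = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))
x = np.array([5.0, 0.0, 3.0, 2.0])
answers = low_rank_answers(W, x, eps=1.0, r=2)
```

When the workload is (approximately) low-rank, only $r \ll m$ noisy measurements are needed, which is the source of LRM's accuracy gains over answering each query independently.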