An Improved Private Mechanism for Small Databases
We study the problem of answering a workload of linear queries on a database of bounded size drawn from a given universe, under the constraint of (approximate) differential privacy.
Nikolov, Talwar, and Zhang~\cite{NTZ} proposed an efficient mechanism that, for any given workload and database size bound, answers the queries with average error at most a factor polynomial in the logarithms of the number of queries and the universe size worse than the best possible. Here we improve on this guarantee and give a mechanism whose competitiveness ratio is at most polynomial in the logarithms of the database size and the universe size, and has no dependence on the number of queries. Our mechanism
is based on the projection mechanism of Nikolov, Talwar, and Zhang, but in
place of an ad-hoc noise distribution, we use a distribution which is in a
sense optimal for the projection mechanism, and analyze it using convex duality
and the restricted invertibility principle.
Comment: To appear in ICALP 2015, Track
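The overall shape of such a projection mechanism can be sketched as follows. This is a simplified illustration, not the paper's construction: it adds plain spherical Gaussian noise rather than the optimal noise distribution discussed above, and it recovers a consistent histogram by projected gradient descent onto the scaled probability simplex. All function and variable names here are our own.

```python
import numpy as np

def project_to_scaled_simplex(v, n):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = n},
    via the standard sort-based simplex-projection algorithm."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - n
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def projection_mechanism(A, x, sigma, rng):
    """Answer the query workload A on histogram x with Gaussian noise,
    then post-process: find a nonnegative histogram of the same total
    size whose answers are closest to the noisy ones (projected
    gradient descent), and return its exact answers."""
    n = x.sum()
    y = A @ x + rng.normal(0.0, sigma, size=A.shape[0])  # noisy answers
    xh = np.full(A.shape[1], n / A.shape[1])             # start at uniform
    step = 1.0 / np.linalg.norm(A, 2) ** 2               # safe step size
    for _ in range(500):
        grad = A.T @ (A @ xh - y)
        xh = project_to_scaled_simplex(xh - step * grad, n)
    return A @ xh, xh
```

The projection step is what drives the error guarantee: the returned answers are always consistent with some database of the right size, which can only decrease the distance to the true answers.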
The Geometry of Differential Privacy: the Sparse and Approximate Cases
In this work, we study trade-offs between accuracy and privacy in the context
of linear queries over histograms. This is a rich class of queries that
includes contingency tables and range queries, and has been a focus of a long
line of work. For a given set of linear queries over a database, we seek the differentially private mechanism with minimum mean squared error. For pure differential privacy, an approximation to the optimal mechanism is known. Our first contribution is to give an analogous approximation guarantee for the case of $(\epsilon, \delta)$-differential privacy. Our mechanism is simple and efficient, and adds correlated Gaussian noise
to the answers. We prove its approximation guarantee relative to the hereditary
discrepancy lower bound of Muthukrishnan and Nikolov, using tools from convex
geometry.
We next consider this question in the case when the number of queries exceeds the number of individuals in the database. It is known that better mechanisms exist in this setting. Our second
main contribution is to give an $(\epsilon, \delta)$-differentially private mechanism which is optimal up to a $\mathrm{polylog}(d, N)$ factor for any given query set and any given upper bound on the database size. This approximation is
achieved by coupling the Gaussian noise addition approach with a linear
regression step. We give an analogous result for the $\epsilon$-differential privacy setting. We also improve on the mean squared error upper bound of Blum, Ligett, and Roth for answering counting queries, and match the lower bound implied by the work of Dinur and Nissim up to logarithmic factors.
The connection between hereditary discrepancy and the privacy mechanism
enables us to derive the first polylogarithmic approximation to the hereditary
discrepancy of a matrix.
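The "Gaussian noise plus linear regression" idea from the second contribution can be illustrated with a minimal sketch. This is not the paper's optimal correlated-noise mechanism: it adds independent Gaussian noise and then projects the noisy answer vector back onto the column space of the query matrix by least squares, which is the sense in which regression denoises when there are many more queries than individuals. The names below are our own.

```python
import numpy as np

def gaussian_then_regress(A, x, sigma, rng):
    """Answer the d queries A @ x with independent Gaussian noise,
    then fit a histogram to the noisy answers by least squares and
    return the fitted answers. The regression step projects the noisy
    vector onto the (low-dimensional) column space of A, averaging
    out most of the noise when d greatly exceeds the database size."""
    y = A @ x + rng.normal(0.0, sigma, size=A.shape[0])
    xh, *_ = np.linalg.lstsq(A, y, rcond=None)  # regression step
    return A @ xh
```

Because the true answer vector lies in the column space of `A`, the projection never increases the distance to it, so the regressed answers are at least as accurate as the raw noisy ones.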
On The Hereditary Discrepancy of Homogeneous Arithmetic Progressions
We prove a lower bound on the hereditary discrepancy of homogeneous arithmetic progressions. This bound is tight up to the constant in the exponent. Our lower bound goes via proving an exponential lower bound on the discrepancy of set systems of subcubes of the boolean cube.
Comment: To appear in the Proceedings of the American Mathematical Society
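The set system in question is concrete: the homogeneous arithmetic progressions in {1, ..., n} are the sets {d, 2d, 3d, ...} for each d. For tiny n their discrepancy can be found by exhaustive search, which illustrates the definition (only the definition; the result above concerns asymptotic growth). This brute force is exponential in n and is an illustration only.

```python
from itertools import product

def hap_discrepancy(n):
    """Brute-force discrepancy of the homogeneous arithmetic
    progressions {d, 2d, 3d, ...} inside {1, ..., n}: minimize, over
    all +1/-1 colorings, the worst absolute imbalance of any HAP."""
    haps = [list(range(d, n + 1, d)) for d in range(1, n + 1)]
    best = n
    for signs in product((-1, 1), repeat=n):
        chi = dict(zip(range(1, n + 1), signs))
        worst = max(abs(sum(chi[v] for v in hap)) for hap in haps)
        best = min(best, worst)
    return best
```

Singleton progressions such as {d} for d > n/2 already force discrepancy at least 1, and for very small n a coloring achieving exactly 1 exists.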
Nearly Optimal Private Convolution
We study computing the convolution of a private input with a public input, while satisfying the guarantees of $(\epsilon, \delta)$-differential
privacy. Convolution is a fundamental operation, intimately related to Fourier
Transforms. In our setting, the private input may represent a time series of
sensitive events or a histogram of a database of confidential personal
information. Convolution then captures important primitives including linear
filtering, which is an essential tool in time series analysis, and aggregation
queries on projections of the data.
We give a nearly optimal algorithm for computing convolutions while
satisfying $(\epsilon, \delta)$-differential privacy. Surprisingly, we follow
the simple strategy of adding independent Laplacian noise to each Fourier
coefficient and bounding the privacy loss using the composition theorem of
Dwork, Rothblum, and Vadhan. We derive a closed form expression for the optimal
noise to add to each Fourier coefficient using convex programming duality. Our
algorithm is very efficient -- it is essentially no more computationally
expensive than a Fast Fourier Transform.
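A minimal sketch of this strategy, assuming circular convolution and a single common noise scale for all Fourier coefficients (the paper derives a separate, optimal scale per coefficient via convex programming duality; the function name is ours):

```python
import numpy as np

def private_circular_convolution(x, h, scale, rng):
    """Perturb each Fourier coefficient of the private input x with
    independent Laplace noise (real and imaginary parts), then
    convolve with the public input h in the frequency domain.
    With scale=0 this reduces to exact circular convolution."""
    X = np.fft.fft(x)
    if scale > 0:
        X = X + rng.laplace(0.0, scale, size=X.shape) \
              + 1j * rng.laplace(0.0, scale, size=X.shape)
    return np.real(np.fft.ifft(X * np.fft.fft(h)))
```

The cost is dominated by three FFTs, matching the claim that the mechanism is essentially no more expensive than a Fast Fourier Transform.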
To prove near optimality, we use the recent discrepancy lower bounds of
Muthukrishnan and Nikolov and derive a spectral lower bound using a
characterization of discrepancy in terms of determinants.
Approximating Hereditary Discrepancy via Small Width Ellipsoids
The Discrepancy of a hypergraph is the minimum attainable value, over
two-colorings of its vertices, of the maximum absolute imbalance of any
hyperedge. The Hereditary Discrepancy of a hypergraph, defined as the maximum
discrepancy of a restriction of the hypergraph to a subset of its vertices, is
a measure of its complexity. Lovasz, Spencer and Vesztergombi (1986) related
the natural extension of this quantity to matrices to rounding algorithms for
linear programs, and gave a determinant based lower bound on the hereditary
discrepancy. Matousek (2011) showed that this bound is tight up to a
polylogarithmic factor, leaving open the question of actually computing this
bound. Recent work by Nikolov, Talwar and Zhang (2013) gave a polynomial-time polylogarithmic approximation to hereditary discrepancy, as a by-product of their work in differential privacy. In this paper, we give a direct and simple polylogarithmic approximation algorithm for this problem. We show that, up to
this approximation factor, the hereditary discrepancy of a matrix is
characterized by the optimal value of a simple geometric convex program that
seeks to minimize the largest norm of any point in an ellipsoid
containing the columns of the matrix. This characterization promises to be a useful
tool in discrepancy theory.
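For very small instances, the quantities in this abstract (discrepancy, hereditary discrepancy, and the determinant lower bound of Lovász, Spencer and Vesztergombi) can all be computed by brute force, which makes the definitions concrete. This sketch is exponential time and purely illustrative; all names are ours.

```python
from itertools import combinations, product
from math import inf

def disc(rows):
    """Discrepancy of a set system given as 0/1 incidence rows:
    minimize over +1/-1 colorings the worst absolute row imbalance."""
    n = len(rows[0])
    best = inf
    for signs in product((-1, 1), repeat=n):
        worst = max(abs(sum(s * a for s, a in zip(signs, row))) for row in rows)
        best = min(best, worst)
    return best

def herdisc(rows):
    """Hereditary discrepancy: max discrepancy over column restrictions."""
    n = len(rows[0])
    best = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            best = max(best, disc([[row[c] for c in cols] for row in rows]))
    return best

def det(M):
    """Determinant by Laplace expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def det_lb(rows):
    """Determinant bound: max over k x k submatrices of |det|^(1/k).
    Lovász, Spencer and Vesztergombi show herdisc is at least half this."""
    m, n = len(rows), len(rows[0])
    best = 0.0
    for k in range(1, min(m, n) + 1):
        for ri in combinations(range(m), k):
            for ci in combinations(range(n), k):
                sub = [[rows[r][c] for c in ci] for r in ri]
                best = max(best, abs(det(sub)) ** (1.0 / k))
    return best
```

For the incidence matrix of a 3-cycle, for instance, every 2-coloring leaves a monochromatic edge, so the discrepancy (and hereditary discrepancy) is 2, while the determinant bound evaluates to the cube root of 2.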
Structure in Communication Complexity and Constant-Cost Complexity Classes
Several theorems and conjectures in communication complexity state or
speculate that the complexity of a matrix in a given communication model is
controlled by a related analytic or algebraic matrix parameter, e.g., rank,
sign-rank, discrepancy, etc. The forward direction is typically easy as the
structural implications of small complexity often imply a bound on some matrix
parameter. The challenge lies in establishing the reverse direction, which
requires understanding the structure of Boolean matrices for which a given
matrix parameter is small or large. We will discuss several research directions
that align with this overarching theme.
Comment: This is a column to be published in the complexity theory column of SIGACT News