Sampling-Based Query Re-Optimization
Despite decades of work, query optimizers still make mistakes on
"difficult" queries because of bad cardinality estimates, often due to the
interaction of multiple predicates and correlations in the data. In this paper,
we propose a low-cost post-processing step that can take a plan produced by the
optimizer, detect when it is likely to have made such a mistake, and take steps
to fix it. Specifically, our solution is a sampling-based iterative procedure
that requires almost no changes to the original query optimizer or query
evaluation mechanism of the system. We show that this indeed imposes low
overhead and catches cases where three widely used optimizers (PostgreSQL and
two commercial systems) make large errors.
Comment: This is the extended version of a paper with the same title and
authors that appears in the Proceedings of the ACM SIGMOD International
Conference on Management of Data (SIGMOD 2016).
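To make the procedure concrete, here is a minimal Python sketch of the iterative loop the abstract describes. The names `optimize` and `sample_cardinalities` are hypothetical stand-ins for the system's optimizer (extended to prefer supplied cardinality overrides) and a sampling-based estimator; they are not part of any real DBMS API.

```python
def reoptimize(query, optimize, sample_cardinalities, max_rounds=10):
    """Iteratively validate and correct cardinality estimates by sampling.

    optimize(query, overrides) -> plan: the unmodified optimizer, except
        that it prefers cardinalities in `overrides` over its own estimates.
    sample_cardinalities(plan) -> dict: sampling-based cardinality
        estimates for the intermediate results (sub-plans) of `plan`.
    """
    overrides = {}                      # sub-plan -> sampled cardinality
    plan = optimize(query, overrides)
    for _ in range(max_rounds):
        sampled = sample_cardinalities(plan)
        overrides.update(sampled)       # remember corrected estimates
        new_plan = optimize(query, overrides)
        if new_plan == plan:            # plan stabilized: accept it
            return plan
        plan = new_plan
    return plan
```

The loop terminates either when re-optimizing with the sampled cardinalities no longer changes the plan, or after a bounded number of rounds, which keeps the post-processing overhead low.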
Accurate and Efficient Private Release of Datacubes and Contingency Tables
A central problem in releasing aggregate information about sensitive data is
to do so accurately while providing a privacy guarantee on the output. Recent
work focuses on the class of linear queries, which include basic counting
queries, data cubes, and contingency tables. The goal is to maximize the
utility of their output, while giving a rigorous privacy guarantee. Most
results follow a common template: pick a "strategy" set of linear queries to
apply to the data, then use the noisy answers to these queries to reconstruct
the queries of interest. This entails either picking a strategy set that is
hoped to be good for the queries, or performing a costly search over the space
of all possible strategies.
In this paper, we propose a new approach that balances accuracy and
efficiency: we show how to improve the accuracy of a given query set by
answering some strategy queries more accurately than others. This leads to an
efficient optimal noise allocation for many popular strategies, including
wavelets, hierarchies, Fourier coefficients and more. For the important case of
marginal queries we show that this strictly improves on previous methods, both
analytically and empirically. Our results also extend to ensuring that the
returned query answers are consistent with an (unknown) data set at minimal
extra cost in terms of time and noise.
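As a concrete illustration of the common template, the Python sketch below answers a workload W over a data vector x through a strategy A, adding Laplace noise whose scale varies per strategy query. The per-query weight vector here is an illustrative assumption, not the paper's optimal allocation; the paper's contribution is choosing that allocation well for popular strategies.

```python
import numpy as np

def strategy_release(x, A, W, eps, weights):
    """Release W @ x under eps-differential privacy via strategy A.

    Each strategy query A[i] is answered with Laplace noise scaled by
    1/weights[i], so queries with larger weight are answered more
    accurately. This is equivalent to running the Laplace mechanism on
    the row-scaled strategy diag(weights) @ A, whose L1 sensitivity is
    its maximum column L1 norm. Workload answers are then reconstructed
    by least squares. `weights` is an illustrative allocation.
    """
    sensitivity = np.abs(np.diag(weights) @ A).sum(axis=0).max()
    scale = sensitivity / eps
    noisy = A @ x + np.random.laplace(0.0, scale / weights, size=A.shape[0])
    # Reconstruct the data (in the strategy's span), answer the workload
    x_hat, *_ = np.linalg.lstsq(A, noisy, rcond=None)
    return W @ x_hat
```

Reconstructing through least squares also makes the returned answers mutually consistent, since they are all derived from the single estimate `x_hat`.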
Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization
We suggest a general oracle-based framework that captures different parallel
stochastic optimization settings described by a dependency graph, and derive
generic lower bounds in terms of this graph. We then use the framework and
derive lower bounds for several specific parallel optimization settings,
including delayed updates and parallel processing with intermittent
communication. We highlight gaps between lower and upper bounds on the oracle
complexity, and cases where the "natural" algorithms are not known to be
optimal.
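One of the settings named above, delayed updates, is easy to simulate. The sketch below runs SGD where each step uses a gradient computed at a stale iterate; it is purely illustrative of the delayed-update setting, not the paper's oracle construction or lower-bound argument.

```python
import numpy as np

def delayed_sgd(grad, x0, steps, lr, delay):
    """SGD with stale gradients: the update at step t uses the gradient
    evaluated at the iterate from `delay` steps earlier, modeling a
    worker whose result arrives late. `grad(x)` is a stochastic
    gradient oracle."""
    history = [np.array(x0, dtype=float)]
    for t in range(steps):
        stale = history[max(0, t - delay)]   # iterate the worker saw
        history.append(history[-1] - lr * grad(stale))
    return history[-1]

# Example: noisy quadratic f(x) = ||x||^2 / 2, oracle returns x + noise
rng = np.random.default_rng(0)
x = delayed_sgd(lambda x: x + 0.1 * rng.standard_normal(x.shape),
                x0=np.ones(5), steps=500, lr=0.05, delay=10)
```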
Convex Optimization for Linear Query Processing under Approximate Differential Privacy
Differential privacy enables organizations to collect accurate aggregates
over sensitive data with strong, rigorous guarantees on individuals' privacy.
Previous work has found that under differential privacy, computing multiple
correlated aggregates as a batch, using an appropriate \emph{strategy}, may
yield higher accuracy than computing each of them independently. However,
finding the best strategy that maximizes result accuracy is non-trivial, as it
involves solving a complex constrained optimization program that appears to be
non-linear and non-convex. Hence, much past effort has been devoted to
solving this non-convex optimization program. Existing approaches include
various sophisticated heuristics and expensive numerical solutions. None of
them, however, guarantees to find the optimal solution of this optimization
problem.
This paper points out that under $(\epsilon, \delta)$-differential privacy,
the optimal solution of the above constrained optimization problem in search of
a suitable strategy can be found, rather surprisingly, by solving a simple and
elegant convex optimization program. Then, we propose an efficient algorithm
based on Newton's method, which we prove to always converge to the optimal
solution with linear global convergence rate and quadratic local convergence
rate. Empirical evaluations demonstrate the accuracy and efficiency of the
proposed solution.
Comment: to appear in ACM SIGKDD 2016.
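For context, the sketch below shows the batch/strategy approach under the Gaussian mechanism for $(\epsilon, \delta)$-differential privacy, using the standard calibration of the noise scale to L2 sensitivity. The strategy matrix A is taken as given; the paper's convex program and Newton-based solver for choosing A optimally are not reproduced here.

```python
import numpy as np

def gaussian_strategy_release(x, A, W, eps, delta):
    """Answer workload W over data vector x via strategy A under
    (eps, delta)-differential privacy with the Gaussian mechanism.

    The L2 sensitivity of A is its maximum column L2 norm, and sigma is
    the classical Gaussian-mechanism calibration (valid for eps < 1).
    Workload answers are reconstructed from the noisy strategy answers
    by least squares.
    """
    sens = np.linalg.norm(A, axis=0).max()
    sigma = sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    noisy = A @ x + np.random.normal(0.0, sigma, size=A.shape[0])
    x_hat, *_ = np.linalg.lstsq(A, noisy, rcond=None)
    return W @ x_hat
```

The error of this release depends on A through both its sensitivity and the reconstruction step, which is why searching for a good strategy matters and why casting that search as a convex program is useful.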