Classical lower bounds from quantum upper bounds
We prove lower bounds on complexity measures, such as the approximate degree
of a Boolean function and the approximate rank of a Boolean matrix, using
quantum arguments; specifically, the lower bounds follow from a quantum query
algorithm for the combinatorial group testing problem.
We show that for any function f, the approximate degree of computing the OR
of n copies of f is Omega(sqrt{n}) times the approximate degree of f, which is
optimal. No such general result was known prior to our work, and even the lower
bound for the OR of ANDs function was only resolved in 2013.
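In standard notation (our rendering; we write $\widetilde{\deg}$ for approximate degree, which the abstract leaves implicit), the statement reads
\[
  \widetilde{\deg}\bigl(\mathrm{OR}_n \circ f\bigr) \;=\; \Omega\bigl(\sqrt{n}\cdot \widetilde{\deg}(f)\bigr),
\]
where "optimal" refers to a matching $O(\sqrt{n}\cdot \widetilde{\deg}(f))$ upper bound.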
We then prove an analogous result in communication complexity, showing that
the logarithm of the approximate rank (or more precisely, the approximate
gamma_2 norm) of F: X x Y -> {0,1} grows by a factor of Omega~(sqrt{n}) when we
take the OR of n copies of F, which is also essentially optimal. As a
corollary, we give a new proof of Razborov's celebrated Omega(sqrt{n}) lower
bound on the quantum communication complexity of the disjointness problem.
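In the same spirit (again our rendering, with $\widetilde{\gamma}_2$ denoting the approximate gamma_2 norm and tildes hiding polylogarithmic factors), the communication statement and its corollary can be sketched as
\[
  \log \widetilde{\gamma}_2\bigl(\mathrm{OR}_n \circ F\bigr) \;=\; \widetilde{\Omega}(\sqrt{n})\cdot \log \widetilde{\gamma}_2(F),
  \qquad\text{hence}\qquad
  \mathrm{QCC}\bigl(\mathrm{DISJ}_n\bigr) \;=\; \Omega(\sqrt{n}),
\]
where QCC is our shorthand for bounded-error quantum communication complexity.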
Finally, we generalize both these results from composition with the OR
function to composition with arbitrary symmetric functions, yielding nearly
optimal lower bounds in this setting as well.
Comment: 46 pages; to appear at FOCS 201
A Statistical Taylor Theorem and Extrapolation of Truncated Densities
We show a statistical version of Taylor's theorem and apply this result to
non-parametric density estimation from truncated samples, which is a classical
challenge in Statistics \cite{woodroofe1985estimating, stute1993almost}. The
single-dimensional version of our theorem has the following implication: "For
any distribution on an interval with a smooth log-density function, given
samples from its conditional distribution restricted to a small subinterval, we
can efficiently identify an approximation to the distribution over the
\emph{whole} interval, with quality of approximation that improves with the
smoothness of the log-density."
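One way to read the implication (an illustrative sketch in our own notation, not taken from the paper): writing the Taylor expansion of the log-density around a point $a$ in the observed window,
\[
  \log p(x) \;=\; \sum_{k=0}^{d} c_k\,(x-a)^k \;+\; R_d(x),
\]
estimating the low-order coefficients $c_0,\dots,c_d$ from samples conditioned on that window determines $p$ approximately over the whole interval, with error governed by the remainder $R_d$, which shrinks as the smoothness of the log-density grows.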
To the best of our knowledge, our result is the first in the area of
non-parametric density estimation from truncated samples that works under the
hard truncation model, where the samples outside some survival set are never
observed, and that applies to multiple dimensions. In contrast, previous works
assume single-dimensional data where each sample has a different survival set,
so that samples from the whole support will ultimately be collected.
Comment: Appeared at COLT202
A New Minimax Theorem for Randomized Algorithms
The celebrated minimax principle of Yao (1977) says that for any
Boolean-valued function f with finite domain, there is a distribution mu over
the domain of f such that computing f to error epsilon against inputs drawn
from mu is just as hard as computing f to error epsilon on worst-case inputs.
Notably, however, the distribution mu depends on the target error level
epsilon: the hard distribution which is tight for bounded error might be
trivial to solve to small bias, and the hard distribution which is tight for a
small bias level might be far from tight for bounded error levels.
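For concreteness, one standard way to write Yao's principle for, say, randomized query complexity (the notation here is ours) is
\[
  R_{\epsilon}(f) \;=\; \max_{\mu}\; D^{\mu}_{\epsilon}(f),
\]
where $R_{\epsilon}(f)$ is the worst-case cost of computing $f$ to error $\epsilon$ by a randomized algorithm and $D^{\mu}_{\epsilon}(f)$ is the cost of computing $f$ to error $\epsilon$ by a deterministic algorithm on inputs drawn from $\mu$; the maximizing $\mu$ may change as $\epsilon$ changes.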
In this work, we introduce a new type of minimax theorem which can provide a
hard distribution that works for all bias levels at once. We show that
this works for randomized query complexity, randomized communication
complexity, some randomized circuit models, quantum query and communication
complexities, approximate polynomial degree, and approximate logrank. We also
prove an improved version of Impagliazzo's hardcore lemma.
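For reference, the hardcore lemma being improved has, qualitatively, the following shape (a standard informal rendering; the exact parameters vary between versions and we do not attempt to match the paper's):
\[
  \Pr_{x}\bigl[C(x) \ne f(x)\bigr] \ge \delta \ \text{ for all circuits } C \text{ of size } s
  \;\Longrightarrow\;
  \exists\, H,\ |H| \ge \delta 2^{n}: \
  \Pr_{x \in H}\bigl[C'(x) = f(x)\bigr] \le \tfrac{1}{2} + \epsilon \ \text{ for all circuits } C' \text{ of size } s',
\]
where $s'$ is smaller than $s$ by a factor depending polynomially on $1/\epsilon$ (and, in some formulations, on $\log(1/\delta)$).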
Our proofs rely on two innovations over the classical approach of using von
Neumann's minimax theorem or linear programming duality. First, we use Sion's
minimax theorem to prove a minimax theorem for ratios of bilinear functions
representing the cost and score of algorithms.
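Schematically (our sketch of the setting, not the paper's exact formulation): with rows indexed by deterministic algorithms and columns by inputs, and with $C$ and $S$ matrices collecting the cost and the score of each algorithm-input pair, the quantity of interest is a ratio of bilinear forms over probability simplices,
\[
  \min_{x \in \Delta_{\mathrm{alg}}}\ \max_{y \in \Delta_{\mathrm{inp}}}\ \frac{x^{\top} C\, y}{x^{\top} S\, y}
  \;=\;
  \max_{y \in \Delta_{\mathrm{inp}}}\ \min_{x \in \Delta_{\mathrm{alg}}}\ \frac{x^{\top} C\, y}{x^{\top} S\, y},
\]
assuming the denominator stays positive on the relevant domain. For fixed $y$ the ratio is quasiconvex in $x$, and for fixed $x$ it is quasiconcave in $y$ (linear-fractional functions are quasilinear), which is exactly the regime covered by Sion's minimax theorem even though the objective is no longer bilinear.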
Second, we introduce a new way to analyze low-bias randomized algorithms by
viewing them as "forecasting algorithms" evaluated by a proper scoring rule.
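As a concrete example of a proper scoring rule (a standard one, not necessarily the rule used in the paper): the quadratic (Brier) score assigns to a forecast probability $p$ of an outcome $y \in \{0,1\}$ the score
\[
  S(p, y) \;=\; 1 - (p - y)^2 ,
\]
which is proper because if $y \sim \mathrm{Bernoulli}(q)$ then the expected score $\mathbb{E}[S(p,y)]$ is maximized by reporting $p = q$.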
The expected score of the forecasting version of a randomized algorithm appears
to be a more fine-grained way of analyzing the bias of the algorithm. We show
that such expected scores have many elegant mathematical properties: for
example, they can be amplified linearly instead of quadratically. We anticipate
forecasting algorithms will find use in future work in which a fine-grained
analysis of small-bias algorithms is required.
Comment: 57 page