23 research outputs found
Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization
We suggest a general oracle-based framework that captures different parallel
stochastic optimization settings described by a dependency graph, and derive
generic lower bounds in terms of this graph. We then use the framework and
derive lower bounds for several specific parallel optimization settings,
including delayed updates and parallel processing with intermittent
communication. We highlight gaps between lower and upper bounds on the oracle
complexity, and cases where the "natural" algorithms are not known to be
optimal.
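One of the settings captured by the dependency-graph framework, delayed updates, can be illustrated with a toy sketch (my own, not the paper's construction): stochastic gradient descent where each worker computes its gradient at a stale iterate from several steps earlier, here on a simple quadratic objective.

```python
import random

def delayed_sgd(grad, x0, steps, lr, delay):
    """SGD where each update uses the gradient of the iterate
    from `delay` steps ago (stale gradients)."""
    history = [x0]
    x = x0
    for t in range(steps):
        stale = history[max(0, t - delay)]  # iterate the worker actually saw
        x = x - lr * grad(stale)
        history.append(x)
    return x

# Toy stochastic objective: f(x) = 0.5 * x^2 with a noisy gradient oracle.
random.seed(0)
noisy_grad = lambda x: x + random.gauss(0.0, 0.1)

x_fresh = delayed_sgd(noisy_grad, x0=5.0, steps=2000, lr=0.05, delay=0)
x_stale = delayed_sgd(noisy_grad, x0=5.0, steps=2000, lr=0.05, delay=10)
print(abs(x_fresh), abs(x_stale))  # both approach the minimum at 0
```

With a moderate delay and a small step size both runs still converge; the dependency graph of the framework encodes exactly which past iterates each gradient may depend on.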
Parallel Submodular Function Minimization
We consider the parallel complexity of submodular function minimization
(SFM). We provide a pair of methods which obtain two new query versus depth
trade-offs for a submodular function defined on subsets of n elements that has
integer values between -M and M. The first method has depth 2 and query
complexity n^{O(M)}, and the second method has depth roughly n^{1/3} M^{2/3} (up to logarithmic factors) and query complexity polynomial in n and M. Despite a line of work
on improved parallel lower bounds for SFM, prior to our work the only known
algorithms for parallel SFM either followed from more general methods for
sequential SFM or highly-parallel minimization of convex ℓ∞-Lipschitz
functions. Interestingly, to obtain our second result we provide the first
highly-parallel algorithm for minimizing ℓ∞-Lipschitz functions over
the hypercube which obtains near-optimal depth for achieving constant accuracy.
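A toy illustration of the query-versus-depth trade-off (my own example, not one of the paper's methods): a depth-1 "algorithm" that issues all 2^n evaluation queries in a single parallel round, here minimizing a small graph-cut function, which is submodular.

```python
from itertools import chain, combinations

def cut_value(edges, subset):
    """Cut function of an undirected graph: the number of edges with
    exactly one endpoint in `subset`. Cut functions are submodular."""
    s = set(subset)
    return sum(1 for u, v in edges if (u in s) != (v in s))

def parallel_sfm_depth1(n, f):
    """Depth-1 SFM: all 2^n evaluation queries are independent, so they
    can be issued in one parallel round and the minimizer read off --
    minimal depth, but exponential query complexity."""
    ground = range(n)
    subsets = chain.from_iterable(combinations(ground, k) for k in range(n + 1))
    queries = {s: f(s) for s in subsets}  # one batch of parallel queries
    return min(queries, key=queries.get)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
best = parallel_sfm_depth1(4, lambda s: cut_value(edges, s))
print(best, cut_value(edges, best))  # the empty set attains cut value 0
```

Real parallel SFM methods sit between this extreme and fully sequential algorithms, trading depth against query count.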
No Quantum Speedup over Gradient Descent for Non-Smooth Convex Optimization
We study the first-order convex optimization problem, where we have black-box
access to a (not necessarily smooth) function f: R^n → R
and its (sub)gradient. Our goal is to find an ε-approximate minimum of f
starting from a point that is distance at most R from the true minimum.
If f is G-Lipschitz, then the classic gradient descent algorithm solves
this problem with O((GR/ε)^2) queries. Importantly, the number of
queries is independent of the dimension n and gradient descent is optimal in
this regard: no deterministic or randomized algorithm can achieve better
query complexity that is still independent of the dimension n.
In this paper we reprove the randomized lower bound of Ω((GR/ε)^2)
using a simpler argument than previous lower
bounds. We then show that although the function family used in the lower bound
is hard for randomized algorithms, it can be solved using far fewer
quantum queries. We then show an improved lower bound against quantum
algorithms using a different set of instances and establish our main result
that in general even quantum algorithms need Ω̃((GR/ε)^2) queries
to solve the problem. Hence there is no quantum speedup over gradient descent
for black-box first-order convex optimization without further assumptions on
the function family.
Memory-Query Tradeoffs for Randomized Convex Optimization
We show that any randomized first-order algorithm which minimizes a
d-dimensional, 1-Lipschitz convex function over the unit ball must either
use d^{2-δ} bits of memory or make
d^{1+δ/3} queries, for any constant δ ∈ (0, 1) and when the precision
ε is quasipolynomially small in d. Our result implies that cutting plane
methods, which use O(d^2) bits of memory and O(d log(1/ε)) queries,
are Pareto-optimal among randomized first-order algorithms, and quadratic
memory is required to achieve optimal query complexity for convex optimization.
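For intuition about why cutting-plane methods achieve O(d log(1/ε)) queries, here is the d = 1 case, where the cutting-plane method reduces to bisection on the sign of the subgradient (toy example of my own):

```python
def bisection_cutting_plane(subgrad, lo, hi, eps):
    """1-D cutting-plane method: each subgradient sign halves the
    interval containing the minimizer, so about log2((hi - lo)/eps)
    queries suffice -- versus (GR/eps)^2 for subgradient descent."""
    queries = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        queries += 1
        if subgrad(mid) > 0:
            hi = mid  # minimizer lies to the left of mid
        else:
            lo = mid  # minimizer lies to the right of (or at) mid
    return (lo + hi) / 2, queries

sg = lambda x: (x > 0.3) - (x < 0.3)  # subgradient of |x - 0.3|
x_star, q = bisection_cutting_plane(sg, -1.0, 1.0, eps=1e-6)
print(x_star, q)  # ~0.3 after about 21 queries
```

The price of this query efficiency is memory: in d dimensions a cutting-plane method maintains a description of the remaining feasible region, which is what drives the quadratic memory in the trade-off above.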
Submodular Maximization with Matroid and Packing Constraints in Parallel
We consider the problem of maximizing the multilinear extension of a
submodular function subject to a single matroid constraint or multiple packing
constraints with a small number of adaptive rounds of evaluation queries.
We obtain the first algorithms with low adaptivity for submodular
maximization with a matroid constraint. Our algorithms achieve a
1 - 1/e - ε approximation for monotone functions and a
1/e - ε approximation for non-monotone functions, which nearly matches the best
guarantees known in the fully adaptive setting. The number of rounds of
adaptivity is poly-logarithmic in the size of the ground set, which is an exponential speedup over
the existing algorithms.
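For contrast with these low-adaptivity algorithms: classical greedy under a cardinality constraint (a uniform matroid) needs k adaptive rounds, even though the marginal-gain queries inside each round are independent and could run in parallel. A minimal sketch with a coverage function (example mine):

```python
def coverage(sets, chosen):
    """Monotone submodular coverage function: the number of elements
    covered by the chosen sets."""
    return len(set().union(*(sets[i] for i in chosen)))

def greedy(sets, k):
    """Classical greedy: k adaptive rounds. Within a round, all
    marginal-gain queries are independent (one parallel batch),
    but the rounds themselves must be sequential."""
    chosen = []
    for _ in range(k):
        gains = {i: coverage(sets, chosen + [i]) - coverage(sets, chosen)
                 for i in range(len(sets)) if i not in chosen}
        chosen.append(max(gains, key=gains.get))
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
picked = greedy(sets, k=2)
print(picked, coverage(sets, picked))  # covers all 6 elements
```

Greedy's adaptivity is k, which can be polynomial in the ground-set size; the poly-logarithmic round complexity above is what the abstract calls an exponential speedup.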
We obtain the first parallel algorithm for non-monotone submodular
maximization subject to packing constraints. Our algorithm achieves a
1/e - ε approximation using poly-logarithmically many parallel rounds, which is again an exponential speedup
in parallel time over the existing algorithms. For monotone functions, we
obtain a 1 - 1/e - ε approximation in
poly-logarithmically many parallel rounds. The number of parallel
rounds of our algorithm matches that of the state of the art algorithm for
solving packing LPs with a linear objective.
Our results apply more generally to the problem of maximizing a diminishing
returns submodular (DR-submodular) function.
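The multilinear extension F(x) = E[f(R_x)], where R_x includes each element i independently with probability x_i, is what these continuous algorithms optimize; it is typically accessed by Monte Carlo sampling. A minimal estimator, checked against exact enumeration on a small ground set (example function mine):

```python
import random
from itertools import chain, combinations

def multilinear_exact(f, x):
    """Exact multilinear extension by enumerating all subsets:
    F(x) = sum_S f(S) * prod_{i in S} x_i * prod_{i not in S} (1 - x_i)."""
    n = len(x)
    total = 0.0
    for s in chain.from_iterable(combinations(range(n), k) for k in range(n + 1)):
        p = 1.0
        for i in range(n):
            p *= x[i] if i in s else 1 - x[i]
        total += p * f(set(s))
    return total

def multilinear_sampled(f, x, samples, seed=0):
    """Monte Carlo estimate: include element i independently w.p. x[i]."""
    rng = random.Random(seed)
    draws = (f({i for i, xi in enumerate(x) if rng.random() < xi})
             for _ in range(samples))
    return sum(draws) / samples

sets = [{1, 2, 3}, {3, 4}, {4, 5}, {1, 5}]
f = lambda s: len(set().union(*(sets[i] for i in s)))  # coverage function
x = [0.5, 0.25, 0.75, 0.5]
print(multilinear_exact(f, x), multilinear_sampled(f, x, 20000))
```

Exact enumeration costs 2^n evaluations, so in practice algorithms rely on the sampled estimate; the adaptivity of an algorithm counts rounds of such (batched) evaluation queries.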