Functional lower bounds for arithmetic circuits and connections to boolean circuit complexity
We say that a circuit $C$ over a field $\mathbb{F}$ functionally computes an
$n$-variate polynomial $P \in \mathbb{F}[x_1, \ldots, x_n]$ if for every
$x \in \{0,1\}^n$ we have that $C(x) = P(x)$. This is in contrast to
syntactically computing $P$, when $C \equiv P$ as formal polynomials. In this
paper, we study the question of proving lower bounds for homogeneous depth-$3$
and depth-$4$ arithmetic circuits for functional computation. We prove the
following results:
1. Exponential lower bounds for homogeneous depth-$3$ arithmetic circuits for a
polynomial in $\mathsf{VNP}$.
2. Exponential lower bounds for homogeneous depth-$4$ arithmetic circuits
with bounded individual degree for a polynomial in $\mathsf{VP}$.
Our main motivation for this line of research comes from our observation that
strong enough functional lower bounds for even very special depth-$4$
arithmetic circuits for the Permanent imply a separation between $\#\mathsf{P}$
and $\mathsf{ACC}^0$. Thus, improving the second result to get rid of the bounded individual
degree condition could lead to substantial progress in boolean circuit
complexity. Besides, it is known from a recent result of Kumar and Saptharishi
[KS15] that over constant sized finite fields, strong enough average case
functional lower bounds for homogeneous depth-$4$ circuits imply
superpolynomial lower bounds for homogeneous depth-$5$ circuits.
Our proofs are based on a family of new complexity measures called shifted
evaluation dimension, which might be of independent interest.
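The contrast between functional and syntactic computation admits a one-line worked example (ours, for illustration; it is not taken from the paper):

    % Over {0,1}, squaring is the identity, so C(x) = x^2 functionally
    % computes P(x) = x even though C and P differ as formal polynomials.
    \[
      C(x) = x^2, \qquad P(x) = x, \qquad
      C(b) = b^2 = b = P(b)\ \text{for all } b \in \{0,1\},
      \qquad \text{yet } C \not\equiv P.
    \]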
Query Complexity of Derivative-Free Optimization
This paper provides lower bounds on the convergence rate of Derivative Free
Optimization (DFO) with noisy function evaluations, exposing a fundamental and
unavoidable gap between the performance of algorithms with access to gradients
and those with access to only function evaluations. However, there are
situations in which DFO is unavoidable, and for such situations we propose a
new DFO algorithm that is proved to be near optimal for the class of strongly
convex objective functions. A distinctive feature of the algorithm is that it
uses only Boolean-valued function comparisons, rather than function
evaluations. This makes the algorithm useful in an even wider range of
applications, such as optimization based on paired comparisons from human
subjects. We also show that regardless of whether DFO is based on
noisy function evaluations or Boolean-valued function comparisons, the
convergence rate is the same.
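To make the comparison-only access model concrete, here is a minimal sketch (our illustration, not the paper's near-optimal algorithm; the majority-vote size, step schedule, and test function are all assumptions): the search never reads a function value, it only asks which of two points looks better under noisy comparisons.

    import random

    def noisy_compare(f, x, y, noise=0.1, votes=15):
        """Majority vote over noisy Boolean comparisons: is f(x) < f(y)?"""
        wins = sum(f(x) - f(y) + random.gauss(0, noise) < 0 for _ in range(votes))
        return wins > votes // 2

    def comparison_search(f, x0, step=1.0, rounds=300):
        """Randomized direct search driven only by pairwise comparisons."""
        x = list(x0)
        for _ in range(rounds):
            d = [random.gauss(0, 1) for _ in x]            # random direction
            norm = sum(v * v for v in d) ** 0.5
            cand = [xi + step * di / norm for xi, di in zip(x, d)]
            if noisy_compare(f, cand, x):                  # accept only on a win
                x = cand
            else:
                step *= 0.98                               # shrink step otherwise
        return x

    # Usage on a strongly convex test function (illustrative):
    f = lambda x: sum((xi - 1.0) ** 2 for xi in x)
    print(comparison_search(f, [5.0, -3.0]))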
Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
A regularization algorithm using inexact function values and inexact
derivatives is proposed and its evaluation complexity analyzed. This algorithm
is applicable to unconstrained problems and to problems with inexpensive
constraints (that is constraints whose evaluation and enforcement has
negligible cost) under the assumption that the derivative of highest degree is
$\beta$-Hölder continuous. It features a very flexible adaptive mechanism
for determining the inexactness which is allowed, at each iteration, when
computing objective function values and derivatives. The complexity analysis
covers arbitrary optimality order and arbitrary degree of available approximate
derivatives. It extends results of Cartis, Gould and Toint (2018) on the
evaluation complexity to the inexact case: if a $q$-th order minimizer is sought
using approximations to the first $p$ derivatives, it is proved that a suitable
approximate minimizer within $\epsilon$ is computed by the proposed algorithm
in at most $O\!\left(\epsilon^{-(p+\beta)/(p-q+\beta)}\right)$ iterations and at most
$O\!\left(|\log\epsilon|\,\epsilon^{-(p+\beta)/(p-q+\beta)}\right)$ approximate
evaluations. An algorithmic variant, although more rigid in practice, can be
proved to find such an approximate minimizer in
$O\!\left(|\log\epsilon| + \epsilon^{-(p+\beta)/(p-q+\beta)}\right)$ evaluations. While
the proposed framework remains so far conceptual for high degrees and orders,
it is shown to yield simple and computationally realistic inexact methods when
specialized to the unconstrained and bound-constrained first- and second-order
cases. The deterministic complexity results are finally extended to the
stochastic context, yielding adaptive sample-size rules for subsampling methods
typical of machine learning.
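The adaptive mechanism is easiest to see in its simplest instance. The sketch below is our own reading of the first-order unconstrained case ($p = q = 1$), not the paper's algorithm: the gradient is kept exact for simplicity, the oracle, constants, and test function are all assumed, and the accuracy requested from the inexact function oracle is tied to the decrease the model predicts, while the regularization weight sigma is raised or lowered from the achieved-to-predicted decrease ratio.

    import random

    def inexact_oracle(f_exact, x, tol):
        """Stand-in inexact evaluation: true value plus noise bounded by tol."""
        return f_exact(x) + random.uniform(-tol, tol)

    def ar1_inexact(f, grad, x0, sigma=1.0, eps=1e-4, max_iter=500):
        """First-order adaptive regularization with inexact function values."""
        x = list(x0)
        for _ in range(max_iter):
            g = grad(x)
            gnorm = sum(v * v for v in g) ** 0.5
            if gnorm <= eps:                          # approximate 1st-order point
                break
            s = [-gi / sigma for gi in g]             # regularized step -g/sigma
            pred = gnorm ** 2 / (2.0 * sigma)         # decrease predicted by model
            tol = 0.1 * pred                          # accuracy tied to decrease
            fx = inexact_oracle(f, x, tol)
            trial = [xi + si for xi, si in zip(x, s)]
            rho = (fx - inexact_oracle(f, trial, tol)) / pred
            if rho >= 0.1:                            # successful: accept, relax
                x, sigma = trial, max(0.5 * sigma, 1e-8)
            else:                                     # unsuccessful: tighten
                sigma *= 2.0
        return x

    # Usage on a smooth test function (illustrative):
    f = lambda x: (x[0] - 2.0) ** 2 + 2.0 * (x[1] + 1.0) ** 2
    grad = lambda x: [2.0 * (x[0] - 2.0), 4.0 * (x[1] + 1.0)]
    print(ar1_inexact(f, grad, [0.0, 0.0]))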
Learning from Scarce Experience
Searching the space of policies directly for the optimal policy has been one
popular method for solving partially observable reinforcement learning
problems. Typically, with each change of the target policy, its value is
estimated from the results of following that very policy. This requires a large
number of interactions with the environment as different policies are
considered. We present a family of algorithms based on likelihood ratio
estimation that use data gathered when executing one policy (or collection of
policies) to estimate the value of a different policy. The algorithms combine
estimation and optimization stages. The former utilizes experience to build a
non-parametric representation of an optimized function. The latter performs
optimization on this estimate. We show positive empirical results and provide
a sample complexity bound.
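Likelihood-ratio reuse of experience is compact enough to sketch. The following is a generic importance-sampling estimator on a toy two-action problem (the environment, policies, and horizon are our assumptions, not the authors' setup): rewards collected under a behavior policy are reweighted by the per-step ratio of target to behavior action probabilities to estimate the target policy's value.

    import random

    def sample_trajectory(policy, horizon=5):
        """Roll out a toy stateless 2-action task; reward favors action 1."""
        actions, reward = [], 0.0
        for _ in range(horizon):
            a = 1 if random.random() < policy else 0
            actions.append(a)
            reward += a + random.gauss(0, 0.1)
        return actions, reward

    def likelihood_ratio_value(trajs, behavior, target):
        """Estimate the target policy's value from behavior-policy data."""
        total = 0.0
        for actions, reward in trajs:
            w = 1.0
            for a in actions:                     # per-step probability ratio
                w *= (target if a == 1 else 1 - target) / \
                     (behavior if a == 1 else 1 - behavior)
            total += w * reward
        return total / len(trajs)

    # Collect experience under one policy, evaluate another (illustrative):
    behavior, target = 0.5, 0.8
    trajs = [sample_trajectory(behavior) for _ in range(5000)]
    print(likelihood_ratio_value(trajs, behavior, target))  # ~ 5 * 0.8 = 4.0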
Frugal Optimization for Cost-related Hyperparameters
The increasing demand for democratizing machine learning algorithms calls for
hyperparameter optimization (HPO) solutions at low cost. Many machine learning
algorithms have hyperparameters which can cause a large variation in the
training cost. But this effect is largely ignored in existing HPO methods,
which cannot properly control cost during the optimization process.
To address this problem, we develop a new cost-frugal HPO solution. The core of
our solution is a simple but new randomized direct-search method, for which we
prove a convergence rate of $O(\sqrt{d}/\sqrt{K})$ and an
$O(d\epsilon^{-2})$-approximation guarantee on the total cost. We provide
strong empirical results in comparison with state-of-the-art HPO methods on
large AutoML benchmarks.
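A randomized direct-search update of the kind described is simple to sketch. Below is a generic version, our illustration under assumed step sizes and a quadratic stand-in for the real training-cost surface, not the paper's implementation: draw a random unit direction u, try x + delta*u and x - delta*u, move to an improving point, and shrink delta when neither side improves.

    import random

    def randomized_direct_search(f, x0, delta=1.0, iters=300, shrink=0.9):
        """Move along +/- a random unit direction; shrink the step on failure."""
        x, fx = list(x0), f(x0)
        for _ in range(iters):
            u = [random.gauss(0, 1) for _ in x]
            norm = sum(v * v for v in u) ** 0.5
            u = [v / norm for v in u]
            for sign in (1.0, -1.0):               # try +delta*u, then -delta*u
                cand = [xi + sign * delta * ui for xi, ui in zip(x, u)]
                fc = f(cand)
                if fc < fx:
                    x, fx = cand, fc
                    break
            else:                                   # neither direction improved
                delta *= shrink
        return x

    # Usage (illustrative): two "hyperparameters", quadratic proxy cost surface
    f = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 1.0) ** 2
    print(randomized_direct_search(f, [0.0, 0.0]))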