ZOOpt: Toolbox for Derivative-Free Optimization
Recent advances in derivative-free optimization allow efficient approximation of the globally optimal solutions of sophisticated functions, such as functions with many local optima and non-differentiable or discontinuous functions. This article describes the ZOOpt (https://github.com/eyounx/ZOOpt) toolbox, which provides efficient derivative-free solvers and is designed to be easy to use. ZOOpt provides a Python package for single-threaded optimization, and a lightweight distributed version, built with the help of the Julia language, for optimizing Python-described functions. The ZOOpt toolbox particularly focuses on optimization problems in machine learning, addressing high-dimensional, noisy, and large-scale problems. The toolbox is being maintained toward becoming a ready-to-use tool for real-world machine learning tasks.
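The kind of solver such a toolbox packages can be illustrated with a minimal derivative-free sketch (plain Python, not ZOOpt's actual API): the optimizer only ever queries function values, mixing uniform global sampling with local perturbation of the best point so far, so it can handle non-differentiable, multimodal objectives. The function `rugged` and all parameter choices below are illustrative assumptions.

```python
import math
import random

def derivative_free_minimize(f, dim, budget, lo=-5.0, hi=5.0, seed=0):
    """Minimal derivative-free optimizer: uniform global sampling mixed
    with local perturbation of the incumbent. No gradients are used, so
    f may be non-differentiable or discontinuous."""
    rng = random.Random(seed)
    best_x = [rng.uniform(lo, hi) for _ in range(dim)]
    best_y = f(best_x)
    for _ in range(budget):
        if rng.random() < 0.5:  # global exploration
            x = [rng.uniform(lo, hi) for _ in range(dim)]
        else:                   # local refinement around the incumbent
            x = [min(hi, max(lo, xi + rng.gauss(0, 0.1))) for xi in best_x]
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

def rugged(x):
    """A non-differentiable, highly multimodal test function (illustrative)."""
    return sum(abs(xi) + math.sin(5 * xi) ** 2 for xi in x)

x_star, y_star = derivative_free_minimize(rugged, dim=2, budget=2000)
```

Because only improvements are accepted, the best value is monotone in the budget; real solvers such as those in ZOOpt replace the naive sampling with model-guided strategies.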
Learning to Race through Coordinate Descent Bayesian Optimisation
In the automation of many kinds of processes, the observable outcome can
often be described as the combined effect of an entire sequence of actions, or
controls, applied throughout its execution. In these cases, strategies to
optimise control policies for individual stages of the process might not be
applicable, and instead the whole policy might have to be optimised at once. On
the other hand, the cost of evaluating the policy's performance might also be
high, making it desirable to find a solution with as few interactions with
the real system as possible. We consider the problem of optimising control
policies to allow a robot to complete a given race track within a minimum
amount of time. We assume that the robot has no prior information about the
track or its own dynamical model, just an initial valid driving example.
Localisation is only applied to monitor the robot and to provide an indication
of its position along the track's centre axis. We propose a method for finding
a policy that minimises the time per lap while keeping the vehicle on the track
using a Bayesian optimisation (BO) approach over a reproducing kernel Hilbert
space. We apply an algorithm to search more efficiently over high-dimensional
policy-parameter spaces with BO, by iterating over each dimension individually,
in a sequential coordinate descent-like scheme. Experiments demonstrate the
performance of the algorithm against other methods in a simulated car racing
environment.

Comment: Accepted as a conference paper at the 2018 IEEE International Conference on Robotics and Automation (ICRA).
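The coordinate-descent scheme the abstract describes can be sketched as follows. The paper performs each one-dimensional search with Bayesian optimisation; here a simple grid search stands in for that inner loop, and the coupled quadratic `lap_time` surrogate is a hypothetical objective, not the paper's simulator.

```python
def coordinate_descent_optimize(f, x0, lo, hi, sweeps=5, grid=25):
    """Sequential coordinate descent: optimise one policy parameter at a
    time while holding the others fixed. A 1-D grid search stands in for
    the per-coordinate Bayesian optimisation used in the paper."""
    x = list(x0)
    for _ in range(sweeps):
        for d in range(len(x)):
            best_v, best_y = x[d], f(x)
            for i in range(grid):
                v = lo + (hi - lo) * i / (grid - 1)
                x[d] = v
                y = f(x)
                if y < best_y:
                    best_v, best_y = v, y
            x[d] = best_v
    return x, f(x)

def lap_time(x):
    """Hypothetical smooth 'lap time' surrogate with coupled parameters."""
    return sum((xi - 1.0) ** 2 for xi in x) + 0.1 * x[0] * x[1]

x_opt, y_opt = coordinate_descent_optimize(lap_time, [0.0, 0.0, 0.0], -2.0, 2.0)
```

Iterating over one dimension at a time keeps every inner search one-dimensional, which is what makes BO tractable in high-dimensional policy-parameter spaces.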
A dimensionality reduction technique for unconstrained global optimization of functions with low effective dimensionality
We investigate the unconstrained global optimization of functions with low
effective dimensionality, that are constant along certain (unknown) linear
subspaces. Extending the technique of random subspace embeddings in [Wang et
al., Bayesian optimization in a billion dimensions via random embeddings. JAIR,
55(1): 361--387, 2016], we study a generic Random Embeddings for Global
Optimization (REGO) framework that is compatible with any global minimization
algorithm. Instead of the original, potentially large-scale optimization
problem, within REGO, a Gaussian random, low-dimensional problem with bound
constraints is formulated and solved in a reduced space. We provide novel
probabilistic bounds for the success of REGO in solving the original, low
effective-dimensionality problem, which show its independence of the
(potentially large) ambient dimension and its precise dependence on the
dimensions of the effective and random embedding subspaces. These results
significantly improve existing theoretical analyses by providing the exact
distribution of a reduced minimizer and its Euclidean norm and by the general
assumptions required on the problem. We validate our theoretical findings by
extensive numerical testing of REGO with three types of global optimization
solvers, illustrating the improved scalability of REGO compared to the
full-dimensional application of the respective solvers.

Comment: 32 pages, 10 figures, submitted to Information and Inference: a Journal of the IMA, also submitted to the optimization-online repository.
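The core REGO reduction can be sketched in a few lines: draw a Gaussian embedding matrix A, and minimise the reduced objective g(y) = f(Ay) over a box in the low-dimensional space with any global solver. Plain random search stands in for that solver below, and `f_low_eff` is a hypothetical objective with effective dimensionality 2.

```python
import random

def rego_minimize(f, ambient_dim, embed_dim, budget, bound=5.0, seed=0):
    """REGO-style reduction: draw a Gaussian embedding A (D x d) and
    minimise the reduced problem g(y) = f(A y) over a box in R^d.
    Random search stands in for the pluggable global solver."""
    rng = random.Random(seed)
    A = [[rng.gauss(0, 1) for _ in range(embed_dim)]
         for _ in range(ambient_dim)]

    def lift(y):
        # x = A y: map the reduced point back to the ambient space R^D.
        return [sum(A[i][j] * y[j] for j in range(embed_dim))
                for i in range(ambient_dim)]

    best_y, best_val = None, float("inf")
    for _ in range(budget):
        y = [rng.uniform(-bound, bound) for _ in range(embed_dim)]
        val = f(lift(y))
        if val < best_val:
            best_y, best_val = y, val
    return best_y, best_val

def f_low_eff(x):
    """100-dimensional function that depends only on two coordinates,
    i.e. it has low effective dimensionality (illustrative)."""
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

y_star, v_star = rego_minimize(f_low_eff, ambient_dim=100, embed_dim=2, budget=3000)
```

The solver only ever searches a 2-dimensional box, regardless of the 100-dimensional ambient space, which is the scalability gain the paper's bounds quantify.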
Practical recommendations for gradient-based training of deep architectures
Learning algorithms related to artificial neural networks, and in particular
to deep learning, may seem to involve many bells and whistles, called
hyper-parameters. This chapter is meant as a practical guide with
recommendations for some of the most commonly used hyper-parameters, in
particular in the context of learning algorithms based on back-propagated
gradient and gradient-based optimization. It also discusses how to deal with
the fact that more interesting results can be obtained when allowing one to
adjust many hyper-parameters. Overall, it describes elements of the practice
used to successfully and efficiently train and debug large-scale and often deep
multi-layer neural networks. It closes with open questions about the training
difficulties observed with deeper architectures.
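One of the practices this kind of guide emphasises is searching the learning rate on a logarithmic scale rather than a linear one. A toy sketch, assuming a hypothetical 1-D least-squares task (this is an illustration of the practice, not code from the chapter):

```python
import random

def sgd(lr, steps=200, seed=0):
    """Plain SGD on a 1-D least-squares toy problem (fit y = 3x);
    returns the final squared error of the weight."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        x = rng.uniform(-1, 1)
        y = 3.0 * x                   # target function y = 3x
        grad = 2 * (w * x - y) * x    # d/dw of (w*x - y)**2
        w -= lr * grad
    return (w - 3.0) ** 2

# Common advice: try learning rates spaced geometrically, e.g. powers of 10.
candidates = [10.0 ** k for k in range(-4, 1)]   # 1e-4 ... 1.0
best_lr = min(candidates, key=sgd)
```

A geometric grid covers several orders of magnitude with few trials, which matters because the loss is typically far more sensitive to the order of magnitude of the learning rate than to its exact value.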
A Region-Shrinking-Based Acceleration for Classification-Based Derivative-Free Optimization
Derivative-free optimization algorithms play an important role in scientific
and engineering design optimization problems, especially when derivative
information is not accessible. In this paper, we study the framework of
classification-based derivative-free optimization algorithms. By introducing a
concept called hypothesis-target shattering rate, we revisit the computational
complexity upper bound of this type of algorithms. Inspired by the revisited
upper bound, we propose an algorithm named "RACE-CARS", which adds a random
region-shrinking step to "SRACOS" (Hu et al., 2017). We further
establish a theorem showing the acceleration of region-shrinking. Experiments
on synthetic functions as well as on black-box tuning for
language-model-as-a-service demonstrate empirically the efficiency of
"RACE-CARS". An ablation experiment on the introduced hyperparameters is also
conducted, revealing the mechanism of "RACE-CARS" and offering empirical
hyperparameter-tuning guidance.
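The region-shrinking idea can be sketched as follows. This is a loose illustration in the spirit of classification-based DFO with a shrinking sampling region, not the actual RACE-CARS algorithm: the learned classification region is replaced by a box around the incumbent, and the paper's randomized shrinking is replaced by a fixed geometric rate.

```python
import random

def region_shrinking_dfo(f, dim, budget, lo=-5.0, hi=5.0, shrink=0.97, seed=0):
    """Sketch of classification-based DFO with region shrinking: sample
    mostly from a box around the best-so-far point, shrink the box's
    half-width geometrically, and keep a small global-exploration rate."""
    rng = random.Random(seed)
    best_x = [rng.uniform(lo, hi) for _ in range(dim)]
    best_y = f(best_x)
    radius = (hi - lo) / 2
    for _ in range(budget):
        if rng.random() < 0.1:  # exploration outside the learned region
            x = [rng.uniform(lo, hi) for _ in range(dim)]
        else:                   # sample inside the shrinking region
            x = [min(hi, max(lo, xi + rng.uniform(-radius, radius)))
                 for xi in best_x]
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
        radius = max(1e-3, radius * shrink)  # geometric shrinking step
    return best_x, best_y

def sphere(x):
    return sum(xi * xi for xi in x)

x_best, y_best = region_shrinking_dfo(sphere, dim=5, budget=1500)
```

Shrinking concentrates the sampling distribution where good solutions have been observed, which is the mechanism behind the acceleration the paper's theorem formalises.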