Maximum Likelihood-based Online Adaptation of Hyper-parameters in CMA-ES
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is widely
accepted as a robust derivative-free continuous optimization algorithm for
non-linear and non-convex optimization problems. CMA-ES is well known to be
almost parameterless, meaning that only one hyper-parameter, the population
size, is proposed to be tuned by the user. In this paper, we propose a
principled approach called self-CMA-ES to achieve the online adaptation of
CMA-ES hyper-parameters in order to improve its overall performance.
Experimental results show that for larger-than-default population size, the
default settings of hyper-parameters of CMA-ES are far from being optimal, and
that self-CMA-ES allows for dynamically approaching optimal settings.
Comment: 13th International Conference on Parallel Problem Solving from Nature (PPSN 2014), 2014
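The population size the abstract mentions is the one hyper-parameter CMA-ES exposes to the user. A minimal sketch of where that hyper-parameter enters an evolution strategy is below; this is a toy (mu/mu, lambda)-ES on a sphere function, not the paper's self-CMA-ES, and it omits covariance matrix adaptation entirely (the objective, decay factor, and all defaults are illustrative assumptions).

```python
import random

def sphere(x):
    # Simple test objective: sum of squares, minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def simple_es(objective, dim=5, popsize=20, sigma=0.5, generations=100, seed=0):
    """Toy (mu/mu, lambda) evolution strategy. `popsize` plays the role of
    the single user-facing hyper-parameter discussed in the abstract."""
    rng = random.Random(seed)
    mean = [rng.uniform(-1, 1) for _ in range(dim)]
    mu = popsize // 2
    for _ in range(generations):
        # Sample a population of candidates around the current mean.
        pop = [[m + sigma * rng.gauss(0, 1) for m in mean] for _ in range(popsize)]
        pop.sort(key=objective)
        # Recombine the best mu candidates into the new mean.
        mean = [sum(ind[i] for ind in pop[:mu]) / mu for i in range(dim)]
        sigma *= 0.98  # crude step-size decay in place of real step-size adaptation
    return mean, objective(mean)

best, value = simple_es(sphere)
```

Raising `popsize` here trades per-generation cost for robustness on rugged objectives, which is why it is the hyper-parameter users are advised to tune.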
Developmental Bayesian Optimization of Black-Box with Visual Similarity-Based Transfer Learning
We present a developmental framework based on a long-term memory and
reasoning mechanisms (Vision Similarity and Bayesian Optimisation). This
architecture allows a robot to optimize autonomously hyper-parameters that need
to be tuned from any action and/or vision module, treated as a black-box. The
learning can take advantage of past experiences (stored in the episodic and
procedural memories) in order to warm-start the exploration using a set of
hyper-parameters previously optimized from objects similar to the new unknown
one (stored in a semantic memory). As an example, the system has been used to
optimize 9 continuous hyper-parameters of professional software (Kamido),
both in simulation and with a real robot (an industrial Fanuc robotic arm), on a
total of 13 different objects. The robot is able to find a good object-specific
optimization in 68 (simulation) or 40 (real) trials. In simulation, we
demonstrate the benefit of transfer learning based on visual similarity, as
opposed to amnesic learning (i.e. learning from scratch every time).
Moreover, with the real robot, we show that the method consistently outperforms
manual optimization by an expert, achieving a success rate above 88% with less
than 2 hours of training time.
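The warm-start idea above can be sketched in a few lines: seed a local search from the hyper-parameters stored for the most visually similar past object, and compare against learning from scratch. Everything here is a made-up stand-in (the black-box score, the similarity values, the object names); the real system uses Bayesian optimization and actual robot trials.

```python
import random

def evaluate(params, target):
    # Hypothetical black-box score: closer to the (unknown) per-object
    # optimum is better; stands in for a real grasping trial.
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def local_search(start, target, trials, rng, step=0.3):
    # Crude hill-climbing in place of Bayesian optimization.
    best, best_score = list(start), evaluate(start, target)
    for _ in range(trials):
        cand = [p + rng.gauss(0, step) for p in best]
        score = evaluate(cand, target)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

rng = random.Random(1)
dim = 9  # the paper tunes 9 continuous hyper-parameters
# Long-term memory: hyper-parameters already optimized for known objects,
# keyed by an (invented) visual-similarity score to the new object.
memory = {
    "mug":   {"similarity": 0.9, "params": [0.8] * dim},
    "plate": {"similarity": 0.2, "params": [-0.5] * dim},
}
new_object_optimum = [1.0] * dim  # unknown to the optimizer

# Warm start: begin from the params of the most visually similar past object.
warm = max(memory.values(), key=lambda e: e["similarity"])["params"]
_, warm_score = local_search(warm, new_object_optimum, trials=40, rng=rng)

# Amnesic baseline: learn from scratch every time.
_, cold_score = local_search([0.0] * dim, new_object_optimum, trials=40, rng=rng)
```

With a trial budget this small, starting near a similar object's optimum dominates starting from scratch, which mirrors the 68-vs-40-trial budgets in the abstract.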
An ADMM Based Framework for AutoML Pipeline Configuration
We study the AutoML problem of automatically configuring machine learning
pipelines by jointly selecting algorithms and their appropriate
hyper-parameters for all steps in supervised learning pipelines. This black-box
(gradient-free) optimization with mixed integer & continuous variables is a
challenging problem. We propose a novel AutoML scheme by leveraging the
alternating direction method of multipliers (ADMM). The proposed framework is
able to (i) decompose the optimization problem into easier sub-problems that
have a reduced number of variables and circumvent the challenge of mixed
variable categories, and (ii) incorporate black-box constraints alongside the
black-box optimization objective. We empirically evaluate the flexibility (in
utilizing existing AutoML techniques), effectiveness (against open-source
AutoML toolkits), and unique capability (of executing AutoML with practically
motivated black-box constraints) of our proposed scheme on a collection of
binary classification data sets from the UCI ML & OpenML repositories. We observe
that on average our framework provides significant gains in comparison to
other AutoML frameworks (Auto-sklearn & TPOT), highlighting the practical
advantages of this framework.
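The decomposition idea — split the mixed discrete/continuous problem into sub-problems with fewer variables — can be illustrated with a block-coordinate toy in the spirit of the paper's splitting (the actual ADMM dual updates and constraint handling are omitted). The loss functions below are invented stand-ins for cross-validated pipeline error.

```python
import random

# Toy "pipeline": jointly pick an algorithm (discrete choice) and its
# continuous hyper-parameter. Each loss is a made-up stand-in for
# cross-validated error of a real pipeline.
LOSSES = {
    "svm":    lambda c: (c - 0.70) ** 2 + 0.10,
    "forest": lambda c: (c - 0.65) ** 2 + 0.05,
    "knn":    lambda c: (c - 0.90) ** 2 + 0.30,
}

def optimize_continuous(algo, rng, samples=200):
    # Sub-problem 1: with the discrete choice fixed, random-search the
    # continuous hyper-parameter (a real system would use a smarter solver).
    best_c, best_loss = None, float("inf")
    for _ in range(samples):
        c = rng.random()
        loss = LOSSES[algo](c)
        if loss < best_loss:
            best_c, best_loss = c, loss
    return best_c, best_loss

def automl(iterations=3, seed=0):
    rng = random.Random(seed)
    algo = "svm"  # arbitrary initial discrete choice
    for _ in range(iterations):
        c, _ = optimize_continuous(algo, rng)
        # Sub-problem 2: with the continuous value fixed, pick the best algorithm.
        algo = min(LOSSES, key=lambda a: LOSSES[a](c))
    c, loss = optimize_continuous(algo, rng)
    return algo, c, loss

algo, c, loss = automl()
```

Alternating between the two small sub-problems avoids ever searching the joint mixed-variable space directly, which is the practical point of the decomposition.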
Discretizing Continuous Action Space for On-Policy Optimization
In this work, we show that discretizing action space for continuous control
is a simple yet powerful technique for on-policy optimization. The explosion in
the number of discrete actions can be efficiently addressed by a policy with
factorized distribution across action dimensions. We show that the discrete
policy achieves significant performance gains with state-of-the-art on-policy
optimization algorithms (PPO, TRPO, ACKTR) especially on high-dimensional tasks
with complex dynamics. Additionally, we show that an ordinal parameterization
of the discrete distribution can introduce the inductive bias that encodes the
natural ordering between discrete actions. This ordinal architecture further
significantly improves the performance of PPO/TRPO.
Comment: Accepted at AAAI Conference on Artificial Intelligence (2020) in New
York, NY, USA. An open-source implementation can be found at
https://github.com/robintyh1/onpolicybaseline
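The factorized-distribution trick that keeps the discretization tractable can be sketched directly: with D action dimensions and K bins each, one categorical per dimension needs D*K parameters instead of K**D for a joint categorical. This is a stdlib sketch of sampling only (the dimensions, bin count, and uniform logits are assumptions; the paper's ordinal parameterization of the logits is omitted).

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

D, K = 3, 11  # 3 action dimensions, 11 bins per dimension
bins = [-1.0 + 2.0 * k / (K - 1) for k in range(K)]  # uniform atoms in [-1, 1]

# Factorized policy: one independent categorical per action dimension,
# so D*K logits (33) instead of K**D (1331) for a joint categorical.
logits = [[0.0] * K for _ in range(D)]

def sample_action(rng):
    # Sample each dimension independently, then map bin indices back to
    # continuous values for the environment.
    action = []
    for d in range(D):
        probs = softmax(logits[d])
        idx = rng.choices(range(K), weights=probs)[0]
        action.append(bins[idx])
    return action

rng = random.Random(0)
a = sample_action(rng)
```

In a real policy the logits would come from a network conditioned on the state; the factorization is what makes the "explosion in the number of discrete actions" a non-issue.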