An ADMM Based Framework for AutoML Pipeline Configuration
We study the AutoML problem of automatically configuring machine learning
pipelines by jointly selecting algorithms and their appropriate
hyper-parameters for all steps in supervised learning pipelines. This black-box
(gradient-free) optimization with mixed integer & continuous variables is a
challenging problem. We propose a novel AutoML scheme by leveraging the
alternating direction method of multipliers (ADMM). The proposed framework is
able to (i) decompose the optimization problem into easier sub-problems that
have a reduced number of variables and circumvent the challenge of mixed
variable categories, and (ii) incorporate black-box constraints along-side the
black-box optimization objective. We empirically evaluate the flexibility (in
utilizing existing AutoML techniques), effectiveness (against open source
AutoML toolkits), and unique capability (of executing AutoML with practically
motivated black-box constraints) of our proposed scheme on a collection of
binary classification data sets from UCI ML & OpenML repositories. We observe
that, on average, our framework provides significant gains in comparison to
other AutoML frameworks (Auto-sklearn & TPOT), highlighting the practical
advantages of this framework.
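As a rough illustration of the decomposition idea in (i), the sketch below alternates between a continuous sub-problem (hyper-parameters with the algorithm choice fixed) and a categorical sub-problem (algorithm selection with hyper-parameters fixed). This is a simplified alternating scheme, not the paper's ADMM formulation, and the loss function, search spaces, and algorithm names are invented for the example.

# Toy illustration (not the paper's algorithm): split a mixed black-box
# problem into an integer sub-problem (algorithm choice) and a continuous
# sub-problem (hyper-parameters), solved alternately.
import random

random.seed(0)

ALGOS = ["logreg", "svm", "tree"]   # categorical block: pipeline algorithm
BOUNDS = (1e-3, 1e+1)               # continuous block: e.g. a regularization strength

def blackbox_loss(algo, c):
    # Stand-in for a cross-validated pipeline error; purely synthetic.
    base = {"logreg": 0.30, "svm": 0.25, "tree": 0.35}[algo]
    best_c = {"logreg": 1.0, "svm": 0.3, "tree": 3.0}[algo]
    return base + 0.05 * abs(c - best_c) ** 0.5

def solve_continuous(algo, n_samples=50):
    # Sub-problem 1: hyper-parameter search with the algorithm fixed
    # (random search here; any black-box optimizer could be plugged in).
    lo, hi = BOUNDS
    cands = [lo + random.random() * (hi - lo) for _ in range(n_samples)]
    return min(cands, key=lambda c: blackbox_loss(algo, c))

def solve_categorical(c):
    # Sub-problem 2: algorithm selection with hyper-parameters fixed.
    return min(ALGOS, key=lambda a: blackbox_loss(a, c))

algo, c = "logreg", 1.0
for it in range(5):                 # alternate between the two variable blocks
    c = solve_continuous(algo)
    algo = solve_categorical(c)
    print(it, algo, round(c, 3), round(blackbox_loss(algo, c), 4))

ADMM additionally couples such sub-problems through augmented-Lagrangian and dual-variable updates, which the plain alternation above omits; each sub-problem can then be handed to an existing black-box or Bayesian optimization solver.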
f-Divergence constrained policy improvement
To ensure stability of learning, state-of-the-art generalized policy
iteration algorithms augment the policy improvement step with a trust region
constraint bounding the information loss. The size of the trust region is
commonly determined by the Kullback-Leibler (KL) divergence, which not only
captures the notion of distance well but also yields closed-form solutions. In
this paper, we consider a more general class of f-divergences and derive the
corresponding policy update rules. The generic solution is expressed through
the derivative of the convex conjugate function to f and includes the KL
solution as a special case. Within the class of f-divergences, we further focus
on a one-parameter family of α-divergences to study the effect of the
choice of divergence on policy improvement. Previously known as well as new
policy updates emerge for different values of α. We show that every type
of policy update comes with a compatible policy evaluation resulting from the
chosen f-divergence. Interestingly, the mean-squared Bellman error minimization
is closely related to policy evaluation with the Pearson χ²-divergence
penalty, while the KL divergence results in the soft-max policy update and a
log-sum-exp critic. We carry out asymptotic analysis of the solutions for
different values of α and demonstrate the effects of using different
divergence functions on a multi-armed bandit problem and on common standard
reinforcement learning problems.
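As a sketch of the general update described above (the exact constraint direction and the symbols η, λ(s), and A(s,a) are assumed here, not taken from the abstract), the trust-region Lagrangian yields a policy of the form

\[
\pi_{\text{new}}(a \mid s) \;\propto\; \pi_{\text{old}}(a \mid s)\,
\bigl(f^{*}\bigr)'\!\left(\frac{A(s,a) - \lambda(s)}{\eta}\right),
\]

where f^{*} is the convex conjugate of f, η > 0 and λ(s) are dual variables of the divergence and normalization constraints, and A(s,a) is the advantage. For the KL divergence, f(x) = x log x gives (f^{*})'(y) = e^{y-1}, recovering the exponential-advantage (soft-max) policy update and, through the normalization, a log-sum-exp value function, consistent with the special case stated in the abstract.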