Practical Bayesian Optimization of Machine Learning Algorithms
Machine learning algorithms frequently require careful tuning of model
hyperparameters, regularization terms, and optimization parameters.
Unfortunately, this tuning is often a "black art" that requires expert
experience, unwritten rules of thumb, or sometimes brute-force search. Much
more appealing is the idea of developing automatic approaches which can
optimize the performance of a given learning algorithm to the task at hand. In
this work, we consider the automatic tuning problem within the framework of
Bayesian optimization, in which a learning algorithm's generalization
performance is modeled as a sample from a Gaussian process (GP). The tractable
posterior distribution induced by the GP leads to efficient use of the
information gathered by previous experiments, enabling optimal choices about
what parameters to try next. Here we show how the effects of the Gaussian
process prior and the associated inference procedure can have a large impact on
the success or failure of Bayesian optimization. We show that thoughtful
choices can lead to results that exceed expert-level performance in tuning
machine learning algorithms. We also describe new algorithms that take into
account the variable cost (duration) of learning experiments and that can
leverage the presence of multiple cores for parallel experimentation. We show
that these proposed algorithms improve on previous automatic procedures and can
reach or surpass human expert-level optimization on a diverse set of
contemporary algorithms including latent Dirichlet allocation, structured SVMs
and convolutional neural networks
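The Gaussian-process loop the abstract describes can be sketched compactly. The toy below is not the paper's method, just a minimal illustration of GP-based Bayesian optimization with the expected-improvement acquisition on a 1-D surrogate "validation loss"; the kernel, length scale, objective, and grid search over candidates are all assumptions for the demo.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, length=0.3):
    """Squared-exponential covariance between two point sets (assumed kernel)."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-6):
    """GP posterior mean and std at x_query given past experiments."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_obs, x_query)
    Kss = rbf_kernel(x_query, x_query)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y_obs
    var = np.diag(Kss - Ks.T @ sol)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """EI acquisition for minimization: expected drop below the incumbent."""
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def objective(x):
    # Stand-in for an expensive validation-loss curve (illustrative only).
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
x_obs = rng.uniform(0, 2, 3)          # a few initial experiments
y_obs = objective(x_obs)
grid = np.linspace(0, 2, 200)         # candidate hyperparameter values

for _ in range(10):
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y_obs.min()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

print(x_obs[np.argmin(y_obs)])        # best hyperparameter value found
```

The posterior mean and variance are exactly the "tractable posterior" the abstract refers to: each new experiment updates them in closed form, and the acquisition function decides which parameter setting is most worth trying next.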
Optimisation of the hydrotesting sequence in tank farm construction using an adaptive genetic algorithm with stochastic preferential logic
In the construction of tank farms, the tanks must be hydro-tested to verify that they are leak-proof and that their foundations show no differential settlement. Each tank must be filled to a predetermined level, held in this loaded state for a set period of time, and then drained. In regions such as the Middle East, water for hydro-testing is not freely available: sea water is often unsuitable, so fresh water must be produced or transported to the construction site. It is therefore of major benefit to the project to schedule the hydro-testing of the tanks so as to minimize the consumption of hydro-test water.
This problem is a special case of the Resource Constrained Project Scheduling Problem (RCPSP), and in this research we have adapted our previously developed fitness-differential adaptive genetic algorithm [4, 6, 7] to the solution of this real-world problem.
The algorithm has been ported from the original MATLAB code into Microsoft Project using VBA in order to provide a more user-friendly, practical interface.
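The scheduling idea can be illustrated with a deliberately simplified model. This is not the authors' fitness-differential GA: it is a plain permutation-encoded genetic algorithm (order crossover, swap mutation, elitism) on a toy water-reuse model in which tanks are tested one after another and the test water is drained into the next tank, so fresh top-up is needed only when the next tank is larger. The capacities and all GA parameters are invented for the demo.

```python
import random

# Hypothetical tank capacities in cubic metres (illustrative, not from the paper).
CAPACITIES = [5000, 12000, 8000, 12000, 3000, 9000]

def water_used(order):
    """Fresh water consumed under the toy reuse model: fill the first tank,
    then top up only when the next tank is larger than the previous one."""
    total = CAPACITIES[order[0]]
    for prev, nxt in zip(order, order[1:]):
        total += max(0, CAPACITIES[nxt] - CAPACITIES[prev])
    return total

def order_crossover(p1, p2):
    """Classic OX crossover: copy a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    rest = [g for g in p2 if g not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

def mutate(perm, rate=0.2):
    """Swap two positions with probability `rate`."""
    perm = perm[:]
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def ga(pop_size=40, generations=80):
    random.seed(1)
    pop = [random.sample(range(len(CAPACITIES)), len(CAPACITIES))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=water_used)
        elite = pop[:pop_size // 4]        # keep the best quarter unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            children.append(mutate(order_crossover(p1, p2)))
        pop = elite + children
    return min(pop, key=water_used)

best = ga()
print(water_used(best))  # a monotone-capacity order needs only max(CAPACITIES)
```

In this toy model any non-decreasing (or non-increasing) capacity order is optimal and uses exactly the largest tank's volume of fresh water; the GA's job is to discover such an ordering, which stands in for the far richer RCPSP constraints of the real problem.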
Robust Block Coordinate Descent
In this paper we present a novel randomized block coordinate descent method
for the minimization of a convex composite objective function. The method uses
(approximate) partial second-order (curvature) information, so that the
algorithm performance is more robust when applied to highly nonseparable or
ill-conditioned problems. We call the method Robust Coordinate Descent (RCD). At
each iteration of RCD, a block of coordinates is sampled randomly, a quadratic
model is formed about that block and the model is minimized
approximately (inexactly) to determine the search direction. An inexpensive line
search is then employed to ensure a monotonic decrease in the objective
function and acceptance of large step sizes. We prove global convergence of the
RCD algorithm, and we also present several results on the local convergence of
RCD for strongly convex functions. Finally, we present numerical results on
large-scale problems to demonstrate the practical performance of the method.
Comment: 23 pages, 6 figures
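The per-iteration structure the abstract describes — sample a block, build a quadratic model with block curvature, solve it, then line-search — can be sketched for the simplest smooth case. This is not the paper's RCD (no composite term, exact rather than inexact block solves, a small dense quadratic objective); every problem dimension and constant below is an assumption for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in convex quadratic objective f(x) = 0.5 x^T A x - b^T x.
n, block_size = 12, 4
M = rng.standard_normal((n, n))
A = M.T @ M + np.eye(n)            # symmetric positive definite
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)     # exact minimizer, kept for reference

def f(x):
    return 0.5 * x @ A @ x - b @ x

def grad(x):
    return A @ x - b

x = np.zeros(n)
for _ in range(2000):
    blk = rng.choice(n, size=block_size, replace=False)  # random block
    g = grad(x)[blk]
    H = A[np.ix_(blk, blk)]         # (partial) second-order information
    d = np.linalg.solve(H, -g)      # minimize the block quadratic model
    # Backtracking (Armijo) line search ensures a monotone decrease in f.
    t, fx = 1.0, f(x)
    while True:
        trial = x.copy()
        trial[blk] += t * d
        if f(trial) <= fx + 1e-4 * t * (g @ d):
            break
        t *= 0.5
    x = trial

print(np.linalg.norm(x - x_star))   # distance to the exact minimizer
```

Because curvature enters through the block Hessian `H` rather than a scalar step size, the search direction adapts to badly scaled coordinates, which is the intuition behind the robustness claim; the line search then makes the unit step acceptable in most iterations.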