Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms
Many different machine learning algorithms exist; taking into account each
algorithm's hyperparameters, there is a staggeringly large number of possible
alternatives overall. We consider the problem of simultaneously selecting a
learning algorithm and setting its hyperparameters, going beyond previous work
that addresses these issues in isolation. We show that this problem can be
addressed by a fully automated approach, leveraging recent innovations in
Bayesian optimization. Specifically, we consider a wide range of feature
selection techniques (combining 3 search and 8 evaluator methods) and all
classification approaches implemented in WEKA, spanning 2 ensemble methods, 10
meta-methods, 27 base classifiers, and hyperparameter settings for each
classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup
09, variants of the MNIST dataset and CIFAR-10, we show classification
performance often much better than using standard selection/hyperparameter
optimization methods. We hope that our approach will help non-expert users to
more effectively identify machine learning algorithms and hyperparameter
settings appropriate to their applications, and hence to achieve improved
performance.
Comment: 9 pages, 3 figures
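The combined algorithm selection and hyperparameter optimization problem described above can be illustrated with a minimal sketch. The toy search space, loss functions, and random-search strategy below are all illustrative assumptions: Auto-WEKA itself searches WEKA's actual learners with SMAC, a Bayesian optimization procedure, not random search.

```python
import random

# Hypothetical joint search space: each "algorithm" carries its own
# hyperparameter ranges, so one sample fixes both the algorithm choice
# and its hyperparameter settings.
SEARCH_SPACE = {
    "knn":  {"k": (1, 50)},
    "tree": {"max_depth": (1, 30)},
}

def toy_cv_loss(algo, params):
    """Synthetic stand-in for a cross-validation error estimate."""
    if algo == "knn":
        return abs(params["k"] - 7) / 50.0        # best around k = 7
    return abs(params["max_depth"] - 12) / 30.0   # best around depth = 12

def cash_random_search(n_trials=500, seed=0):
    """Jointly sample (algorithm, hyperparameters) and keep the best pair."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        algo = rng.choice(list(SEARCH_SPACE))
        params = {name: rng.randint(lo, hi)
                  for name, (lo, hi) in SEARCH_SPACE[algo].items()}
        loss = toy_cv_loss(algo, params)
        if best is None or loss < best[2]:
            best = (algo, params, loss)
    return best
```

The key point is that the algorithm choice is itself a (categorical) hyperparameter, so selection and tuning happen in one search rather than in isolation.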
Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates
The optimization of algorithm (hyper-)parameters is crucial for achieving
peak performance across a wide range of domains, ranging from deep neural
networks to solvers for hard combinatorial problems. The resulting algorithm
configuration (AC) problem has attracted much attention from the machine
learning community. However, the proper evaluation of new AC procedures is
hindered by two key hurdles. First, AC benchmarks are hard to set up. Second
and even more significantly, they are computationally expensive: a single run
of an AC procedure involves many costly runs of the target algorithm whose
performance is to be optimized in a given AC benchmark scenario. One common
workaround is to optimize cheap-to-evaluate artificial benchmark functions
(e.g., Branin) instead of actual algorithms; however, these have different
properties than realistic AC problems. Here, we propose an alternative
benchmarking approach that is similarly cheap to evaluate but much closer to
the original AC problem: replacing expensive benchmarks by surrogate benchmarks
constructed from AC benchmarks. These surrogate benchmarks approximate the
response surface corresponding to true target algorithm performance using a
regression model, and the original and surrogate benchmark share the same
(hyper-)parameter space. In our experiments, we construct and evaluate
surrogate benchmarks for hyperparameter optimization as well as for AC problems
that involve performance optimization of solvers for hard combinatorial
problems, drawing training data from the runs of existing AC procedures. We
show that our surrogate benchmarks capture important overall characteristics of
the AC scenarios from which they were derived, such as high- and low-performing
regions, while being much easier to use and orders of magnitude cheaper to
evaluate.
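The surrogate-benchmark idea can be sketched in a few lines. The quadratic "target algorithm", the one-dimensional parameter space, and the nearest-neighbour regressor below are all illustrative assumptions; the paper fits stronger regression models (e.g. random forests) to logged runs of real solvers over their full parameter spaces.

```python
import random

def expensive_target(x):
    """Pretend this is a costly target-algorithm run (e.g. a solver):
    runtime as a function of one configuration parameter x in [0, 10]."""
    return (x - 3.0) ** 2 + 1.0

# 1. Collect training data from "real" runs; in practice this comes from
#    the logs of existing AC procedures on the original benchmark.
rng = random.Random(42)
train = [(x, expensive_target(x))
         for x in (rng.uniform(0.0, 10.0) for _ in range(200))]

# 2. The surrogate shares the same parameter space but is cheap to query.
def surrogate(x):
    """Cheap surrogate: 1-nearest-neighbour regression over logged runs."""
    return min(train, key=lambda p: abs(p[0] - x))[1]
```

An AC procedure evaluated against `surrogate` sees roughly the same high- and low-performing regions as on the real benchmark, without paying for actual target-algorithm runs.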
Stability of longitudinal coupling for Josephson charge qubits
For inductively coupled superconducting quantum bits, we determine the
conditions when the coupling commutes with the single-qubit terms. We show that
in certain parameter regimes such longitudinal coupling can be stabilized with
respect to variations of the circuit parameters. In addition, we analyze its
stability against fluctuations of the control fields.
Comment: 5 pages, 2 figures; additional discussion and reference
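As a generic two-qubit illustration of the commutation condition (the notation here is an assumption, not taken from the paper): a coupling is "longitudinal" when it involves the same Pauli operator as the single-qubit terms, so that

```latex
H = -\tfrac{1}{2}\,\epsilon_1 \sigma_z^{(1)} - \tfrac{1}{2}\,\epsilon_2 \sigma_z^{(2)}
    + J\,\sigma_z^{(1)}\sigma_z^{(2)},
\qquad
\bigl[\, J\,\sigma_z^{(1)}\sigma_z^{(2)},\; \sigma_z^{(i)} \,\bigr] = 0 .
```

Because the coupling term commutes with each single-qubit term, it shifts energies without inducing transitions, which is what makes such coupling attractive to stabilize against parameter variations.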
Experimental pool boiling investigations of vertical coalescence for FC-72 on silicon from an isolated artificial cavity
In this study bubble growth from an isolated artificial cavity micro-fabricated on a horizontal 380 µm thick silicon wafer was investigated. The horizontally oriented boiling surface was heated by a thin resistance heater integrated on the rear of the silicon test section. The temperature was measured using an integrated micro-sensor situated on the boiling surface, with the artificial cavity located at its geometrical centre. A resistive track was used as the sensor, which, when calibrated, exhibited near-linear behaviour with increasing temperature. To conduct pool boiling experiments the test section was immersed in degassed fluorinert FC-72. Bubble nucleation, growth and detachment at different pressures were observed using high-speed imaging. Coalescence was observed at the boundary between the isolated bubble and interference regimes. The occurrence of vertical coalescence was found to become more frequent with increasing wall superheat and decreasing pressure.
The equivalent sphere volumes of two bubbles before and after coalescence were evaluated from area measurements. It was observed that the second nucleated bubble is always smaller than its predecessor. Vapour generation appears not to stop during coalescence, as the volume of the merged bubble was typically 5-18% larger than the sum of the bubble volumes just before coalescence.
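The equivalent-sphere conversion described above can be sketched as follows. This is a minimal illustration assuming the projected (cross-sectional) bubble area is converted via A = πr² to a sphere of volume (4/3)πr³; the helper names and any numbers are hypothetical, not from the study.

```python
import math

def equivalent_sphere_volume(projected_area):
    """Volume of the sphere whose cross-sectional area equals the measured
    projected bubble area: A = pi r^2  =>  V = (4/3) pi r^3."""
    radius = math.sqrt(projected_area / math.pi)
    return (4.0 / 3.0) * math.pi * radius ** 3

def coalescence_volume_gain(area_first, area_second, area_merged):
    """Fractional volume excess of the merged bubble over the sum of the two
    pre-coalescence bubbles; the study reports values of roughly 5-18%."""
    v_sum = (equivalent_sphere_volume(area_first)
             + equivalent_sphere_volume(area_second))
    v_merged = equivalent_sphere_volume(area_merged)
    return v_merged / v_sum - 1.0
```

A positive gain from this calculation indicates that vapour generation continued during the merging event, as reported above.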