ZOOpt: Toolbox for Derivative-Free Optimization
Recent advances in derivative-free optimization allow efficiently approximating
the globally optimal solutions of sophisticated functions, such as functions with
many local optima and non-differentiable or discontinuous functions. This
article describes the ZOOpt (https://github.com/eyounx/ZOOpt) toolbox, which
provides efficient derivative-free solvers and is designed to be easy to use. ZOOpt
provides a Python package for single-threaded optimization, and a lightweight
distributed version, built with the help of the Julia language, for functions
described in Python. The ZOOpt toolbox particularly focuses on optimization problems in
machine learning, addressing high-dimensional, noisy, and large-scale problems.
The toolbox is being maintained as a ready-to-use tool for real-world machine
learning tasks.
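To illustrate the class of problems such toolboxes target, here is a minimal sketch of derivative-free optimization in plain Python. It deliberately does not reproduce ZOOpt's own API; it uses a simple randomized local search, which relies only on function evaluations (no gradients) and so also handles non-differentiable objectives like the L1 norm used below.

```python
import random

def randomized_local_search(f, dim, budget, step=0.1, seed=0):
    """Minimize f over [-1, 1]^dim using only function evaluations:
    perturb the incumbent with Gaussian noise and keep improvements."""
    rng = random.Random(seed)
    x = [rng.uniform(-1, 1) for _ in range(dim)]
    best = f(x)
    for _ in range(budget):
        # Propose a clamped Gaussian perturbation of the incumbent.
        cand = [min(1.0, max(-1.0, xi + rng.gauss(0, step))) for xi in x]
        val = f(cand)
        if val < best:  # greedy acceptance: no gradient information needed
            x, best = cand, val
    return x, best

# A non-differentiable test objective: sum of absolute values (minimum 0 at the origin).
def abs_sum(x):
    return sum(abs(v) for v in x)

x, best = randomized_local_search(abs_sum, dim=5, budget=2000)
```

ZOOpt's actual solvers are considerably more sophisticated (handling noise and high dimensionality), but the interface shape is the same: the user supplies a black-box objective and an evaluation budget.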
Active Classification: Theory and Application to Underwater Inspection
We discuss the problem in which an autonomous vehicle must classify an object
based on multiple views. We focus on the active classification setting, where
the vehicle controls which views to select to best perform the classification.
The problem is formulated as an extension to Bayesian active learning, and we
show connections to recent theoretical guarantees in this area. We formally
analyze the benefit of acting adaptively as new information becomes available.
The analysis leads to a probabilistic algorithm for determining the best views
to observe based on information theoretic costs. We validate our approach in
two ways, both related to underwater inspection: 3D polyhedra recognition in
synthetic depth maps and ship hull inspection with imaging sonar. These tasks
encompass both the planning and recognition aspects of the active
classification problem. The results demonstrate that actively planning for
informative views can reduce the number of necessary views by up to 80% when
compared to passive methods.
Comment: 16 pages
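The core of information-theoretic view selection can be sketched as follows. This is not the paper's algorithm, only a generic illustration: given a prior over object classes and a per-view observation likelihood (both hypothetical here), the expected information gain of a view is the expected reduction in entropy over classes after observing it, and the active policy greedily selects the view that maximizes it.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def expected_info_gain(prior, likelihood):
    """likelihood[c][o] = P(observation o | class c) for one candidate view.
    Returns the expected entropy reduction over classes from taking that view."""
    h0 = entropy(prior)
    n_classes, n_obs = len(prior), len(likelihood[0])
    gain = 0.0
    for o in range(n_obs):
        # Marginal probability of seeing observation o from this view.
        p_o = sum(prior[c] * likelihood[c][o] for c in range(n_classes))
        if p_o == 0:
            continue
        # Posterior over classes after observing o (Bayes' rule).
        post = [prior[c] * likelihood[c][o] / p_o for c in range(n_classes)]
        gain += p_o * (h0 - entropy(post))
    return gain

# Two hypothetical candidate views of two object classes, uniform prior.
prior = [0.5, 0.5]
view_a = [[0.9, 0.1], [0.1, 0.9]]   # discriminative view
view_b = [[0.5, 0.5], [0.5, 0.5]]   # uninformative view
best_view = max([view_a, view_b], key=lambda v: expected_info_gain(prior, v))
```

The adaptive setting analyzed in the abstract goes further: the next view is chosen only after each observation arrives, so the posterior above becomes the new prior at every step.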
Progressive Neural Architecture Search
We propose a new method for learning the structure of convolutional neural
networks (CNNs) that is more efficient than recent state-of-the-art methods
based on reinforcement learning and evolutionary algorithms. Our approach uses
a sequential model-based optimization (SMBO) strategy, in which we search for
structures in order of increasing complexity, while simultaneously learning a
surrogate model to guide the search through structure space. Direct comparison
under the same search space shows that our method is up to 5 times more
efficient than the RL method of Zoph et al. (2018) in terms of number of models
evaluated, and 8 times faster in terms of total compute. The structures we
discover in this way achieve state-of-the-art classification accuracies on
CIFAR-10 and ImageNet.
Comment: To appear in ECCV 2018 as an oral. The code and checkpoint for PNASNet-5
trained on ImageNet (both Mobile and Large) can now be downloaded from
https://github.com/tensorflow/models/tree/master/research/slim#Pretrained.
Also see https://github.com/chenxi116/PNASNet.TF for refactored and
simplified TensorFlow code; see https://github.com/chenxi116/PNASNet.pytorch
for an exact conversion to PyTorch.
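The progressive SMBO loop described above can be sketched in miniature. Everything here is a toy stand-in, not the paper's search space or surrogate: structures are tuples of operation names, the "expensive" evaluation is a hidden per-operation score (a stand-in for training a CNN), and the surrogate simply averages the observed scores of each operation. The essential pattern survives: enumerate structures in order of increasing depth, rank the expanded candidates with the cheap surrogate, and pay the expensive evaluation cost only for the top few.

```python
# Hypothetical hidden quality of each operation (unknown to the search).
CHOICE_SCORE = {"a": 0.3, "b": 0.6, "c": 0.1}

def evaluate(structure):
    """Expensive oracle; in the real setting this trains and scores a CNN."""
    return sum(CHOICE_SCORE[op] for op in structure) / len(structure)

def surrogate_factory(history):
    """Fit a cheap predictor from evaluated structures: per-op average score."""
    totals, counts = {}, {}
    for s, score in history:
        for op in s:
            totals[op] = totals.get(op, 0.0) + score
            counts[op] = counts.get(op, 0) + 1
    def predict(structure):
        return sum(totals.get(op, 0.0) / max(counts.get(op, 1), 1)
                   for op in structure) / len(structure)
    return predict

def progressive_search(ops=("a", "b", "c"), max_depth=3, beam=2):
    # Depth 1: the space is small, so evaluate every single-op structure.
    history = [((op,), evaluate((op,))) for op in ops]
    frontier = [s for s, _ in history]
    for depth in range(2, max_depth + 1):
        predict = surrogate_factory(history)
        # Expand each frontier structure by one op and rank with the surrogate...
        cands = [s + (op,) for s in frontier for op in ops]
        cands.sort(key=predict, reverse=True)
        # ...but only pay the expensive evaluation for the top `beam` candidates.
        frontier = cands[:beam]
        history += [(s, evaluate(s)) for s in frontier]
    return max(history, key=lambda t: t[1])[0]

best = progressive_search()
```

The efficiency gain reported in the abstract comes from exactly this asymmetry: the surrogate scores thousands of candidate structures for the cost of evaluating a handful.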