Progressive Neural Architecture Search
We propose a new method for learning the structure of convolutional neural
networks (CNNs) that is more efficient than recent state-of-the-art methods
based on reinforcement learning and evolutionary algorithms. Our approach uses
a sequential model-based optimization (SMBO) strategy, in which we search for
structures in order of increasing complexity, while simultaneously learning a
surrogate model to guide the search through structure space. Direct comparison
under the same search space shows that our method is up to 5 times more
efficient than the RL method of Zoph et al. (2018) in terms of number of models
evaluated, and 8 times faster in terms of total compute. The structures we
discover in this way achieve state-of-the-art classification accuracies on
CIFAR-10 and ImageNet.

Comment: To appear in ECCV 2018 as an oral. The code and checkpoint for PNASNet-5 trained on ImageNet (both Mobile and Large) can now be downloaded from https://github.com/tensorflow/models/tree/master/research/slim#Pretrained. Also see https://github.com/chenxi116/PNASNet.TF for refactored and simplified TensorFlow code, and https://github.com/chenxi116/PNASNet.pytorch for an exact conversion to PyTorch.
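The SMBO strategy described above amounts to a loop that expands candidate structures by one step of complexity, ranks them with a learned surrogate, and trains only the most promising ones. A minimal sketch of that loop follows; the operator set and the toy surrogate are illustrative assumptions standing in for the paper's real search space and learned predictor, not its actual code:

```python
import random

# Hypothetical operator set; the real PNAS search space (cells built from
# blocks of paired operations) is richer than this flat list.
OPERATIONS = ["conv3x3", "conv5x5", "sep3x3", "maxpool3x3", "identity"]


def expand(arch):
    """Create all candidates one step more complex than `arch`."""
    return [arch + [op] for op in OPERATIONS]


def train_and_evaluate(arch):
    """Placeholder: train the candidate briefly and return validation accuracy."""
    return random.random()  # stand-in for real training


class Surrogate:
    """Toy surrogate predicting accuracy from observed (arch, accuracy) pairs."""

    def __init__(self):
        self.history = []

    def fit(self, archs, scores):
        self.history.extend(zip(archs, scores))

    def predict(self, arch):
        # Naive nearest-prefix heuristic; PNAS uses a learned predictor instead.
        best = 0.0
        for seen, score in self.history:
            if seen == arch[: len(seen)]:
                best = max(best, score)
        return best + random.uniform(0.0, 0.1)


def progressive_search(max_depth=3, beam_size=4):
    surrogate = Surrogate()
    beam = [[]]  # start from the empty architecture
    for _ in range(max_depth):
        # 1. Expand every architecture in the beam by one operation.
        candidates = [c for arch in beam for c in expand(arch)]
        # 2. Rank candidates with the surrogate and keep the most promising.
        candidates.sort(key=surrogate.predict, reverse=True)
        selected = candidates[:beam_size]
        # 3. Train/evaluate only the selected candidates, then update the surrogate.
        scores = [train_and_evaluate(a) for a in selected]
        surrogate.fit(selected, scores)
        beam = selected
    return max(zip(scores, beam))


if __name__ == "__main__":
    best_score, best_arch = progressive_search()
    print(best_arch, round(best_score, 3))
```

The efficiency gain comes from step 3: the cheap surrogate ranks the full expanded set, so only the beam-selected candidates pay the cost of training.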
Search Tree Pruning for Progressive Neural Architecture Search
Our neural architecture search algorithm progressively searches a tree of neural network architectures. Child nodes are created by inserting new layers, determined by a transition graph, into a parent network up to a maximum depth, and are pruned when their performance is worse than the parent's. This pruning increases efficiency but makes the algorithm greedy. Simpler networks are found before more complex ones that achieve benchmark performance comparable to other top-performing networks. A rough sketch of this pruning idea appears below.
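As a rough sketch of the pruning idea described above (children generated from a transition graph, discarded when they underperform their parent), the following is a toy depth-first version; the transition graph, evaluation function, and depth limit are invented placeholders rather than the paper's actual configuration:

```python
import random

# Illustrative transition graph: which layer types may follow each layer type.
TRANSITIONS = {
    "input": ["conv", "pool"],
    "conv": ["conv", "pool", "dense"],
    "pool": ["conv", "dense"],
    "dense": ["dense"],
}


def evaluate(network):
    """Placeholder for training the network and measuring validation accuracy."""
    return random.random()


def search(max_depth=4):
    root = ["input"]
    root_score = evaluate(root)
    best_net, best_score = root, root_score
    stack = [(root, root_score)]
    while stack:
        parent, parent_score = stack.pop()
        if len(parent) >= max_depth:
            continue
        # Create child networks by appending layers allowed by the transition graph.
        for layer in TRANSITIONS[parent[-1]]:
            child = parent + [layer]
            child_score = evaluate(child)
            # Greedy pruning: discard children that do not improve on the parent.
            if child_score < parent_score:
                continue
            if child_score > best_score:
                best_net, best_score = child, child_score
            stack.append((child, child_score))
    return best_net, best_score


if __name__ == "__main__":
    net, score = search()
    print(" -> ".join(net), round(score, 3))
```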
POPNASv2: An Efficient Multi-Objective Neural Architecture Search Technique
Automating the search for the best neural network model has gained increasing relevance in recent years. In this context, Neural Architecture Search (NAS) is the most effective technique, with results that rival state-of-the-art hand-crafted architectures.
However, this approach requires considerable computational resources and search time, which makes it prohibitive in many real-world scenarios.
With its sequential model-based optimization strategy, Progressive Neural Architecture Search (PNAS) represents a possible step forward in addressing this resource issue. Despite the quality of the network architectures it finds, the technique is still limited by its search time.
A significant step in this direction was taken by Pareto-Optimal Progressive Neural Architecture Search (POPNAS), which extends PNAS with a time predictor to enable a trade-off between search time and accuracy, framing the search as a multi-objective optimization problem.
This paper proposes a new version of the Pareto-Optimal Progressive Neural Architecture Search, called POPNASv2.
Our approach builds on the first version and improves its performance.
We expanded the search space by adding new operators and improved the quality of both predictors to build more accurate Pareto fronts.
Moreover, we introduced cell equivalence checks and enriched the search strategy with an adaptive greedy exploration step.
These improvements allow POPNASv2 to achieve PNAS-like performance with an average 4x search-time speed-up.
The official version of this tool is available at the following link: AndreaFalanti/popnas-v2 (github.com
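To make the multi-objective step concrete, below is a toy illustration of keeping only Pareto-optimal candidates under predicted accuracy and predicted training time; the candidate names and numbers are fabricated for the example, and the real POPNASv2 predictors are learned models rather than fixed values:

```python
# Each candidate cell gets a predicted accuracy and a predicted training time;
# only candidates not dominated in both objectives are kept for evaluation.
candidates = [
    {"cell": "A", "pred_accuracy": 0.91, "pred_time": 120.0},
    {"cell": "B", "pred_accuracy": 0.93, "pred_time": 300.0},
    {"cell": "C", "pred_accuracy": 0.90, "pred_time": 200.0},  # dominated by A
    {"cell": "D", "pred_accuracy": 0.95, "pred_time": 280.0},  # dominates B
]


def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly better in one."""
    no_worse = a["pred_accuracy"] >= b["pred_accuracy"] and a["pred_time"] <= b["pred_time"]
    strictly_better = a["pred_accuracy"] > b["pred_accuracy"] or a["pred_time"] < b["pred_time"]
    return no_worse and strictly_better


def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]


if __name__ == "__main__":
    for p in pareto_front(candidates):
        print(p["cell"], p["pred_accuracy"], p["pred_time"])
```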