BayesNAS: A Bayesian Approach for Neural Architecture Search
One-Shot Neural Architecture Search (NAS) is a promising method to
significantly reduce search time without any separate training. It can be
treated as a Network Compression problem on the architecture parameters from an
over-parameterized network. However, there are two issues associated with most
one-shot NAS methods. First, dependencies between a node and its predecessors
and successors are often disregarded, which results in improper treatment of
zero operations. Second, pruning architecture parameters based on their
magnitude is questionable. In this paper, we employ the classic Bayesian
learning approach to alleviate these two issues by modeling architecture
parameters using hierarchical automatic relevance determination (HARD) priors.
Unlike other NAS methods, we train the over-parameterized network for only one
epoch and then update the architecture. Impressively, this enabled us to find the
architecture on CIFAR-10 within only 0.2 GPU days using a single GPU.
Competitive performance can also be achieved by transferring to ImageNet. As a
byproduct, our approach can be applied directly to compress convolutional
neural networks by enforcing structural sparsity, which achieves extremely
sparse networks without accuracy deterioration.
Comment: International Conference on Machine Learning 2019
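
To make the hierarchical automatic relevance determination (ARD) idea concrete, the sketch below runs the classic ARD / sparse Bayesian learning updates on a simple linear-Gaussian surrogate rather than on architecture parameters: each weight gets its own prior precision, the precisions are re-estimated from the posterior, and weights whose precision diverges are pruned. The function name, toy data, and pruning threshold are illustrative assumptions, not the BayesNAS implementation.

```python
import numpy as np

def ard_prune(X, y, n_iter=100, beta=100.0, prune_thresh=1e6):
    """Toy ARD / sparse Bayesian learning on a linear model y ~= X @ w.

    Each weight w_i has prior N(0, 1/alpha_i); alpha_i is re-estimated
    from the posterior, and weights whose alpha_i diverges are pruned.
    This only illustrates the ARD pruning mechanism, not BayesNAS itself.
    """
    n, d = X.shape
    alpha = np.ones(d)  # per-weight prior precisions
    for _ in range(n_iter):
        # Posterior over w given current alpha (Gaussian likelihood with precision beta)
        Sigma = np.linalg.inv(beta * X.T @ X + np.diag(alpha))
        mu = beta * Sigma @ X.T @ y
        # Fixed-point ARD re-estimation (MacKay/Tipping updates)
        gamma = 1.0 - alpha * np.diag(Sigma)   # how well-determined each weight is
        alpha = gamma / (mu ** 2 + 1e-12)
        beta = (n - gamma.sum()) / (np.sum((y - X @ mu) ** 2) + 1e-12)
    keep = alpha < prune_thresh  # weights with finite precision survive pruning
    return mu * keep, keep

# Toy usage: only 3 of 20 features actually matter; ARD should prune the rest.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
w_true = np.zeros(20)
w_true[[2, 7, 13]] = [1.5, -2.0, 0.7]
y = X @ w_true + 0.05 * rng.normal(size=200)
w_hat, keep = ard_prune(X, y)
print("kept indices:", np.flatnonzero(keep))
```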
L0-ARM: Network Sparsification via Stochastic Binary Optimization
We consider network sparsification as an L0-norm regularized binary
optimization problem, where each unit of a neural network (e.g., a weight,
neuron, or channel) is equipped with a stochastic binary gate, whose
parameters are jointly optimized with original network parameters. The
Augment-Reinforce-Merge (ARM), a recently proposed unbiased gradient estimator,
is investigated for this binary optimization problem. Compared to the hard
concrete gradient estimator from Louizos et al., ARM demonstrates superior
performance in pruning network architectures while retaining almost the same
accuracy as the baseline methods. Similar to the hard concrete estimator, ARM
also enables conditional computation during model training but with improved
effectiveness due to the exact binary stochasticity. Thanks to the flexibility
of ARM, many smooth or non-smooth parametric functions, such as scaled sigmoid
or hard sigmoid, can be used to parameterize this binary optimization problem
and the unbiasedness of the ARM estimator is retained, while the hard concrete
estimator has to rely on the hard sigmoid function to achieve conditional
computation and thus accelerated training. Extensive experiments on multiple
public datasets demonstrate state-of-the-art pruning rates with almost the same
accuracy as the baseline methods. The resulting algorithm, L0-ARM, sparsifies
the Wide-ResNet models on CIFAR-10 and CIFAR-100, while the hard concrete
estimator cannot. The code is publicly available at
https://github.com/leo-yangli/l0-arm.
Comment: Published as a conference paper at ECML 2019
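
As a rough illustration of how the ARM estimator drives the binary gates, the sketch below computes unbiased gradient estimates for the logits of Bernoulli gates on a toy objective with an L0-style penalty. The toy loss, the `importance` vector, and the learning rate are assumptions made purely for illustration; they are not taken from the paper or the linked repository.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def arm_gradient(f, phi, rng):
    """One-sample ARM estimate of d/dphi E_{z ~ Bernoulli(sigmoid(phi))}[f(z)].

    ARM evaluates f at two antithetic binary configurations derived from a
    single uniform draw and weights their difference by (u - 1/2).
    """
    u = rng.uniform(size=phi.shape)
    z_pseudo = (u > sigmoid(-phi)).astype(float)   # equivalently u > 1 - sigmoid(phi)
    z_true = (u < sigmoid(phi)).astype(float)
    return (f(z_pseudo) - f(z_true)) * (u - 0.5)

# Toy objective: keep units whose "importance" exceeds the L0 penalty strength.
importance = np.array([2.0, -1.0, 0.5, -3.0, 1.5])
lam = 0.4  # L0 regularization strength

def loss(z):
    return -np.sum(importance * z) + lam * np.sum(z)

rng = np.random.default_rng(0)
phi = np.zeros_like(importance)                 # gate logits, start at p = 0.5
for step in range(2000):
    phi -= 0.05 * arm_gradient(loss, phi, rng)  # SGD on the gate logits
print("gate open probabilities:", np.round(sigmoid(phi), 2))
```

The key point of the sketch is that both loss evaluations share the same uniform draw, which is what keeps the gradient estimate unbiased without any continuous relaxation of the binary gates.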
…