HyperAdam: A Learnable Task-Adaptive Adam for Network Training
Deep neural networks are traditionally trained using human-designed
stochastic optimization algorithms, such as SGD and Adam. Recently, the
approach of learning to optimize network parameters has emerged as a promising
research topic. However, these learned black-box optimizers sometimes do not
fully exploit the experience embodied in human-designed optimizers and
therefore have limited generalization ability. In this paper, a new optimizer,
dubbed HyperAdam, is proposed that combines the idea of "learning to
optimize" with the traditional Adam optimizer. Given a network for training, its
parameter update in each iteration generated by HyperAdam is an adaptive
combination of multiple updates generated by Adam with varying decay rates. The
combination weights and decay rates in HyperAdam are adaptively learned
depending on the task. HyperAdam is modeled as a recurrent neural network with
AdamCell, WeightCell and StateCell. It is shown to achieve state-of-the-art
performance for training various networks, such as multilayer perceptrons,
CNNs and LSTMs.
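
As a concrete illustration of the update rule above, the following minimal
NumPy sketch computes one HyperAdam-style step: several candidate Adam updates
with different decay-rate pairs are combined with weights. The decay rates and
weights are fixed placeholder values here; in the paper they are produced
adaptively by the learned AdamCell, WeightCell and StateCell, so this shows
only the combination mechanism, not the authors' implementation.

import numpy as np

def hyperadam_step(grad, ms, vs, betas, weights, t, lr=0.02, eps=1e-8):
    """One HyperAdam-style step: a weighted combination of Adam updates
    computed with several different decay-rate pairs (betas)."""
    updates = []
    for k, (b1, b2) in enumerate(betas):
        ms[k] = b1 * ms[k] + (1 - b1) * grad        # first-moment estimate
        vs[k] = b2 * vs[k] + (1 - b2) * grad ** 2   # second-moment estimate
        m_hat = ms[k] / (1 - b1 ** t)               # bias-corrected moments
        v_hat = vs[k] / (1 - b2 ** t)
        updates.append(m_hat / (np.sqrt(v_hat) + eps))
    # Adaptive combination of the candidate updates (the weights are
    # learned per task in the paper; fixed here for illustration).
    return -lr * sum(w * u for w, u in zip(weights, updates))

# Usage: minimize f(x) = ||x||^2, whose gradient is 2x.
betas = [(0.9, 0.999), (0.8, 0.99), (0.5, 0.9)]   # placeholder decay rates
weights = [0.5, 0.3, 0.2]                          # placeholder weights
x = np.array([3.0, -2.0])
ms = [np.zeros_like(x) for _ in betas]
vs = [np.zeros_like(x) for _ in betas]
for t in range(1, 501):
    x = x + hyperadam_step(2 * x, ms, vs, betas, weights, t)
print(x)  # x is driven toward the origin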
Evolving parametrized Loss for Image Classification Learning on Small Datasets
This paper proposes a meta-learning approach that evolves a parametrized loss
function, called the Meta-Loss Network (MLN), for image classification
learning on small datasets. In our approach, the MLN is embedded
in the framework of classification learning as a differentiable objective
function. The MLN is evolved with an Evolution Strategy (ES) algorithm into an
optimized loss function, such that a classifier optimized to minimize this loss
achieves good generalization. A classifier learns on a small training dataset
to minimize the MLN loss with Stochastic Gradient Descent (SGD), and the MLN is
then evolved according to the accuracy of the small-dataset-updated classifier
on a large validation dataset. In order to evaluate our approach,
the MLN is trained with a large number of small sample learning tasks sampled
from FashionMNIST and tested on validation tasks sampled from FashionMNIST and
CIFAR10. Experimental results demonstrate that the MLN effectively improves
generalization compared to the classical cross-entropy and mean squared error
losses.
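
The bi-level scheme above (inner SGD on the learned loss, outer ES on the loss
parameters, scored by validation accuracy) can be sketched as follows. This is
a minimal stand-in, not the authors' code: the MLN is replaced by a toy
two-parameter polynomial loss, the classifier is logistic regression, and the
data are synthetic blobs rather than FashionMNIST/CIFAR10 tasks.

import numpy as np

rng = np.random.default_rng(0)

def learned_loss_grad(theta, p, y):
    """Gradient w.r.t. predictions p of a toy parametrized loss
    theta[0]*e^2 + theta[1]*e^4 with e = p - y (a stand-in for the MLN)."""
    e = p - y
    return 2 * theta[0] * e + 4 * theta[1] * e ** 3

def train_classifier(theta, X, y, steps=200, lr=0.5):
    """Inner loop: gradient descent on the learned loss for a logistic model."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))                    # sigmoid predictions
        dp = learned_loss_grad(theta, p, y) * p * (1 - p)
        w -= lr * X.T @ dp / len(y)
    return w

def val_accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == (y > 0.5))

def make_data(n):
    """Two noisy blobs; the label follows the sign of the coordinate sum."""
    shift = np.where(rng.random(n) < 0.5, 1.5, -1.5)[:, None]
    X = rng.normal(size=(n, 2)) + shift
    return X, (X.sum(axis=1) > 0).astype(float)

X_tr, y_tr = make_data(20)       # small training set
X_val, y_val = make_data(2000)   # large validation set

# Outer loop: simple ES on the loss parameters, scored by validation accuracy.
theta, sigma, pop = np.array([1.0, 0.0]), 0.1, 16
for gen in range(30):
    eps = rng.normal(size=(pop, theta.size))
    fit = np.array([val_accuracy(train_classifier(theta + sigma * e, X_tr, y_tr),
                                 X_val, y_val) for e in eps])
    theta += 0.5 * (fit - fit.mean()) @ eps / (pop * sigma)  # ES gradient step
print("evolved loss parameters:", theta)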
Learning Transferable Architectures for Scalable Image Recognition
Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with its own parameters, to design a
convolutional architecture named the "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves a 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the features learned by NASNet, used with the Faster-RCNN
framework, surpass the state of the art by 4.0%, achieving 43.1% mAP on the
COCO dataset.
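
Of the ideas above, ScheduledDropPath is simple enough to sketch: each path in
a cell is dropped with a probability that is linearly increased over the course
of training. The NumPy sketch below assumes per-example dropping with rescaling
to preserve the expected activation; the function name and signature are
illustrative, not the authors' API.

import numpy as np

def scheduled_drop_path(path_out, final_keep_prob, progress, rng, training=True):
    """Drop an entire cell path per example; the drop probability is
    linearly increased over training (progress in [0, 1])."""
    if not training:
        return path_out
    keep_prob = 1.0 - progress * (1.0 - final_keep_prob)  # anneal 1.0 -> final
    batch = path_out.shape[0]
    mask = (rng.random(batch) < keep_prob).astype(path_out.dtype)
    mask = mask.reshape((batch,) + (1,) * (path_out.ndim - 1))
    return path_out * mask / keep_prob  # rescale to keep the expectation

# Usage on a dummy NHWC feature map halfway through training.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4, 4, 16))
out = scheduled_drop_path(feats, final_keep_prob=0.6, progress=0.5, rng=rng)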
- …