Learning Transferable Architectures for Scalable Image Recognition
Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of the cell, each with its own parameters, to design a
convolutional architecture that we name the "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves a 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the image features learned by NASNet, used with the
Faster-RCNN framework, surpass the state-of-the-art by 4.0%, achieving
43.1% mAP on the COCO dataset.
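The ScheduledDropPath idea (dropping whole paths within a cell with a probability that ramps up linearly over training) can be made concrete with a short sketch. The PyTorch snippet below is purely illustrative and not the paper's implementation; the function name, the per-example masking, and the rescaling by the keep probability are assumptions.

```python
import torch

def scheduled_drop_path(x: torch.Tensor, final_drop_prob: float,
                        step: int, total_steps: int,
                        training: bool = True) -> torch.Tensor:
    """Illustrative sketch: drop an entire path output per example with a
    probability that increases linearly over training (assumed schedule),
    rescaling survivors so the expected activation is unchanged."""
    if not training or final_drop_prob == 0.0:
        return x
    # Linear schedule: 0 at the start of training, final_drop_prob at the end.
    drop_prob = final_drop_prob * min(step / total_steps, 1.0)
    keep_prob = 1.0 - drop_prob
    # One Bernoulli draw per example, broadcast over the remaining dimensions.
    mask_shape = (x.shape[0],) + (1,) * (x.dim() - 1)
    mask = torch.bernoulli(torch.full(mask_shape, keep_prob,
                                      device=x.device, dtype=x.dtype))
    return x / keep_prob * mask
```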
Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
In deep learning, models typically reuse the same parameters for all inputs.
Mixture of Experts (MoE) defies this and instead selects different parameters
for each incoming example. The result is a sparsely-activated model -- with
outrageous numbers of parameters -- but a constant computational cost. However,
despite several notable successes of MoE, widespread adoption has been hindered
by complexity, communication costs and training instability -- we address these
with the Switch Transformer. We simplify the MoE routing algorithm and design
intuitive improved models with reduced communication and computational costs.
Our proposed training techniques help wrangle the instabilities and we show
large sparse models may be trained, for the first time, with lower precision
(bfloat16) formats. We design models based on T5-Base and T5-Large to obtain
up to 7x increases in pre-training speed with the same computational resources.
These improvements extend into multilingual settings where we measure gains
over the mT5-Base version across all 101 languages. Finally, we advance the
current scale of language models by pre-training models of up to a trillion
parameters on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over
the T5-XXL model.
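A minimal sketch of the simplified top-1 ("switch") routing described above, in PyTorch: each token is sent to the single expert with the highest router probability, and that expert's output is scaled by the router probability, so compute stays roughly constant as experts are added. This is not the authors' implementation; it omits expert capacity limits, the load-balancing loss, and bfloat16 handling, and the class name, layer shapes, and looped dispatch are assumptions for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchFFN(nn.Module):
    """Illustrative top-1 (switch) routing over a set of expert feed-forward
    networks: every token is processed by exactly one expert."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)   # router probabilities
        gate, expert_idx = probs.max(dim=-1)        # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            sel = expert_idx == i
            if sel.any():
                # Scale each expert's output by its router probability (gate).
                out[sel] = gate[sel].unsqueeze(-1) * expert(x[sel])
        return out
```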