Learning Transferable Architectures for Scalable Image Recognition
Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with its own parameters, to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10, NASNet achieves a
state-of-the-art error rate of 2.4%. On ImageNet, NASNet achieves, among the
published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5.
Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the features learned by NASNet, used with the Faster-RCNN
framework, surpass the state-of-the-art by 4.0%, achieving 43.1% mAP on the
COCO dataset.
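The abstract introduces ScheduledDropPath, a regularization that drops cell paths with a probability that is ramped up over training. A minimal NumPy sketch of the idea, assuming a linear ramp (the function name, the schedule shape, and the rescaling convention are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def scheduled_drop_path(path_out, step, total_steps, final_drop_prob, rng):
    """Zero out one path of a cell with a drop probability that ramps up
    linearly over training (hypothetical linear schedule for illustration)."""
    drop_prob = final_drop_prob * step / total_steps
    if rng.random() < drop_prob:
        return np.zeros_like(path_out)       # path dropped for this batch
    return path_out / (1.0 - drop_prob)      # rescale surviving path

rng = np.random.default_rng(0)
x = np.ones((2, 3))
# At step 0 the drop probability is 0, so the path always passes through unchanged.
early = scheduled_drop_path(x, step=0, total_steps=100, final_drop_prob=0.5, rng=rng)
```

The scheduled ramp means paths are never dropped at the start of training, when the stacked cells are still learning basic features, and are dropped most aggressively near the end.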
Meta-learning Based Beamforming Design for MISO Downlink
Downlink beamforming is an essential technology for wireless cellular
networks; however, the design of beamforming vectors that maximize the weighted
sum rate (WSR) is an NP-hard problem and iterative algorithms are typically
applied to solve it. The weighted minimum mean square error (WMMSE) algorithm
is the most widely used one; it iteratively minimizes an equivalent
weighted-MSE objective and converges to a local optimum of the WSR problem.
Motivated by recent developments in meta-learning
techniques to solve non-convex optimization problems, we propose a
meta-learning based iterative algorithm for WSR maximization in a MISO downlink
channel. A long short-term memory (LSTM) network-based meta-learning model is
built to learn a dynamic optimization strategy to update the variables
iteratively. The learned strategy aims to optimize each variable in a less
greedy manner compared to WMMSE, which updates variables by computing their
first-order stationary points at each iteration step. The proposed algorithm
outperforms WMMSE significantly in the high signal-to-noise ratio (SNR) regime
and shows comparable performance when the SNR is low.
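The core idea is an iterative update loop in which a learned network, rather than a hand-derived first-order rule, maps gradients to variable updates. A toy NumPy sketch of that loop, in which a fixed momentum rule stands in for the LSTM meta-learner and a quadratic stands in for the non-convex WSR objective (all names and the objective are illustrative assumptions):

```python
import numpy as np

def meta_update(grad, state, lr=0.1, beta=0.9):
    # Stand-in for the LSTM meta-learner: in the paper a learned network maps
    # gradients (plus its hidden state) to updates; here a momentum rule with
    # `state` playing the role of the hidden state illustrates the interface.
    state = beta * state + grad
    return -lr * state, state

def run_meta_optimizer(grad_f, x0, steps=200):
    """Iteratively update the variables, as WMMSE does, but with updates
    produced by the (stand-in) meta-learner rather than closed-form steps."""
    x, state = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        dx, state = meta_update(grad_f(x), state)
        x = x + dx
    return x

# Toy smooth objective standing in for the WSR maximization problem.
f = lambda x: float(np.sum(x ** 2))
grad_f = lambda x: 2.0 * x
x0 = np.array([3.0, -2.0])
x_final = run_meta_optimizer(grad_f, x0)
```

Because the meta-learner carries state across iterations, its updates need not be the greedy first-order stationary-point steps WMMSE takes, which is the "less greedy" behavior the abstract describes.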