Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks
Neural architecture search has attracted wide attention in both academia and
industry. To accelerate it, researchers have proposed weight-sharing methods,
which first train a super-network to reuse computation among different
operators; exponentially many sub-networks can then be sampled from it and
efficiently evaluated. These methods enjoy great advantages in terms of
computational cost, but the performance of the sampled sub-networks is not
guaranteed to be estimated precisely unless each one is trained individually.
This paper attributes such inaccuracy to the inevitable mismatch between
assembled network layers, which adds a random error term to each estimate. We
alleviate this issue by training a graph convolutional network to fit the
performance of sampled sub-networks, minimizing the impact of these random
errors. With this strategy, we achieve a higher rank correlation coefficient
in the selected set of candidates, which consequently leads to better
performance of the final architecture. In addition, our approach enjoys the
flexibility of being used under different hardware constraints, since the
graph convolutional network provides an efficient lookup table of the
performance of architectures in the entire search space.

Comment: Accepted to AAAI 202
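The predictor the abstract describes maps an architecture's graph to a scalar performance estimate. A minimal sketch of that idea follows, assuming a toy encoding (adjacency matrix plus one-hot operator labels) and a two-layer graph convolution with mean-pool readout; the layer sizes, normalization, and readout are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def graph_conv(adj, feats, weight):
    # Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ feats @ weight, 0.0)  # ReLU

def predict_accuracy(adj, op_onehot, w1, w2, w_out):
    h = graph_conv(adj, op_onehot, w1)
    h = graph_conv(adj, h, w2)
    g = h.mean(axis=0)                                # mean-pool node embeddings
    return float(1.0 / (1.0 + np.exp(-(g @ w_out))))  # sigmoid -> (0, 1)

# Toy sub-network: 4 nodes, 3 candidate operator types
adj = np.array([[0, 1, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)
adj = adj + adj.T                    # treat edges as undirected for propagation
ops = np.eye(3)[[0, 1, 2, 1]]        # one-hot operator label per node
w1 = rng.normal(size=(3, 8))
w2 = rng.normal(size=(8, 8))
w_out = rng.normal(size=8)

acc = predict_accuracy(adj, ops, w1, w2, w_out)  # predicted accuracy in (0, 1)
```

In practice such a predictor would be trained by regression against the super-network's sampled sub-network evaluations, after which ranking candidates is a cheap forward pass per architecture.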
Optimizing Neural Architecture Search using Limited GPU Time in a Dynamic Search Space: A Gene Expression Programming Approach
Efficient identification of people and objects, segmentation of regions of
interest, and extraction of relevant data from images, text, audio, and video
have advanced considerably in recent years, with deep learning methods,
combined with recent improvements in computational resources, contributing
greatly to this progress. Despite this outstanding potential, developing
efficient architectures and modules requires expert knowledge and considerable
resource time. In this paper, we propose an evolutionary neural architecture
search approach for the efficient discovery of convolutional models in a
dynamic search space, within only 24 GPU hours. With its efficient search
environment and phenotype representation, Gene Expression Programming is
adapted for network cell generation. Despite the limited GPU time and broad
search space, our proposal achieved results comparable to state-of-the-art
manually designed convolutional networks and NAS-generated ones, even beating
similarly constrained evolutionary NAS works. The best cells across different
runs achieved stable results, with a mean error of 2.82% on the CIFAR-10
dataset (where the best model achieved an error of 2.67%) and 18.83% on
CIFAR-100 (best model: 18.16%). For ImageNet in the mobile setting, our best
model achieved top-1 and top-5 errors of 29.51% and 10.37%, respectively.
Although evolutionary NAS works have been reported to require a considerable
amount of GPU time for architecture search, our approach obtained promising
results in little time, encouraging further experiments in evolutionary NAS to
improve search and network representation.

Comment: Accepted for presentation at the IEEE Congress on Evolutionary
Computation (IEEE CEC) 202
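Gene Expression Programming's core mechanism, referenced above, is decoding a fixed-length linear genome breadth-first into an expression tree (here, a cell of layer operators). The sketch below is a generic GEP decoder over a made-up function and terminal set, not the paper's actual genotype; the operator names and the '+' join function are illustrative assumptions.

```python
from collections import deque

# Function set (arity 2) joins two branches; terminal set = candidate layer ops.
FUNCTIONS = {"+": 2}
TERMINALS = ["conv3x3", "sep5x5", "maxpool", "identity"]

def decode(genome):
    """Breadth-first decode of a linear GEP genome into a nested tuple tree."""
    root = [genome[0], []]
    queue = deque([root])
    i = 1
    while queue:
        node = queue.popleft()
        arity = FUNCTIONS.get(node[0], 0)  # terminals have arity 0
        for _ in range(arity):
            child = [genome[i], []]
            i += 1
            node[1].append(child)
            queue.append(child)

    def to_tuple(n):
        return n[0] if not n[1] else (n[0], tuple(to_tuple(c) for c in n[1]))

    return to_tuple(root)

# Head of function symbols followed by a tail of terminals
# (tail length = head_length * (max_arity - 1) + 1 guarantees a valid tree).
genome = ["+", "+", "conv3x3", "sep5x5", "maxpool"]
cell = decode(genome)
# cell == ("+", (("+", ("sep5x5", "maxpool")), "conv3x3"))
```

Because any genome with a valid head/tail split decodes to a well-formed tree, mutation and crossover can operate on the flat symbol list while the phenotype stays a legal cell, which is what makes the representation attractive for a constrained GPU-time budget.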
Neural Architecture Search for Compressed Sensing Magnetic Resonance Image Reconstruction
Recent works have demonstrated that deep learning (DL) based compressed
sensing (CS) implementations can accelerate Magnetic Resonance (MR) imaging by
reconstructing MR images from sub-sampled k-space data. However, the network
architectures adopted in previous methods were all designed by hand. Neural
Architecture Search (NAS) algorithms can automatically build neural network
architectures that have outperformed human-designed ones in several vision
tasks. Inspired by this, we propose a novel and efficient network for the MR
image reconstruction problem via NAS instead of manual design. In particular,
a specific cell structure, integrated into a model-driven MR reconstruction
pipeline, is automatically searched from a flexible pre-defined operation
search space in a differentiable manner. Experimental results show that the
searched network produces better reconstruction results than previous
state-of-the-art methods in terms of PSNR and SSIM, with 4-6 times fewer
computational resources. Extensive experiments were conducted to analyze how
hyper-parameters affect reconstruction performance and the searched
structures. The generalizability of the searched architecture was also
evaluated on MR datasets of different organs. Our proposed method reaches a
better trade-off between computational cost and reconstruction performance for
the MR reconstruction problem, with good generalizability, and offers insights
for designing neural networks for other medical image applications. The
evaluation code will be available at
https://github.com/yjump/NAS-for-CSMRI.

Comment: To appear in Computerized Medical Imaging and Graphics
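Searching a cell "in a differentiable manner", as the abstract puts it, typically means a DARTS-style continuous relaxation: each edge computes a softmax-weighted mixture of candidate operations, and the mixing weights are learned by gradient descent. The sketch below illustrates only that relaxation on a toy 1-D signal; the candidate ops and architecture parameters are stand-ins, not the paper's search space.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Candidate ops as simple callables on a 1-D signal (stand-ins for conv variants)
def identity(x):
    return x

def smooth(x):
    return np.convolve(x, np.ones(3) / 3.0, mode="same")

def scale(x):
    return 0.5 * x

OPS = [identity, smooth, scale]

def mixed_op(x, alpha):
    """Continuous relaxation: softmax-weighted sum over candidate operations."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, OPS))

x = np.ones(8)
alpha = np.array([2.0, 0.0, 0.0])  # architecture parameters (learned in practice)
y = mixed_op(x, alpha)
# After search, only the op with the largest alpha (here: identity) is retained
# when the continuous cell is discretized into the final architecture.
```

Because the output is differentiable in alpha, the architecture parameters can be optimized jointly with (or alternately to) the network weights, which is what keeps the search cost low relative to evaluating each candidate cell from scratch.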