An Adaptive Sampling Scheme to Efficiently Train Fully Convolutional Networks for Semantic Segmentation
Deep convolutional neural networks (CNNs) have shown excellent performance in
object recognition tasks and dense classification problems such as semantic
segmentation. However, training deep neural networks on large and sparse
datasets is still challenging and can require large amounts of computation and
memory. In this work, we address the task of performing semantic segmentation
on large data sets, such as three-dimensional medical images. We propose an
adaptive sampling scheme that uses a-posteriori error maps, generated throughout
training, to focus sampling on difficult regions, resulting in improved
learning. Our contribution is threefold: 1) we give a detailed description of
the proposed sampling algorithm to speed up and improve learning performance on
large images; 2) we propose a deep dual-path CNN that captures information at fine
and coarse scales, resulting in a network with a large field of view and
high-resolution outputs; 3) we show that our method is able to attain new
state-of-the-art results on the VISCERAL Anatomy benchmark.
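The error-map-driven sampling described above can be sketched as follows. This is an illustrative reading of the abstract, not the authors' exact algorithm: the function name, the dense `error_map` input, and the rule of sampling patch centres with probability proportional to error are all assumptions.

```python
import numpy as np

def adaptive_patch_sampling(error_map, num_patches, rng=None):
    """Sample patch-centre indices with probability proportional to a
    per-voxel error map accumulated during training, so that sampling
    concentrates on difficult regions (sketch, assumed interface)."""
    rng = rng or np.random.default_rng()
    p = error_map.ravel().astype(np.float64)
    p = p / p.sum()  # normalise errors into a probability distribution
    flat = rng.choice(p.size, size=num_patches, replace=True, p=p)
    # Convert flat indices back to (possibly 3-D) image coordinates.
    return np.unravel_index(flat, error_map.shape)
```

In this reading, regions where the network currently makes large errors are visited more often, while easy background voxels are rarely sampled.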
CAS-CNN: A Deep Convolutional Neural Network for Image Compression Artifact Suppression
Lossy image compression algorithms are pervasively used to reduce the size of
images transmitted over the web and recorded on data storage media. However, we
pay for their high compression rate with visual artifacts degrading the user
experience. Deep convolutional neural networks have become a widespread tool to
address high-level computer vision tasks very successfully. Recently, they have
found their way into the areas of low-level computer vision and image
processing to solve regression problems mostly with relatively shallow
networks.
We present a novel 12-layer deep convolutional network for image compression
artifact suppression with hierarchical skip connections and a multi-scale loss
function. We achieve a boost of up to 1.79 dB in PSNR over ordinary JPEG and an
improvement of up to 0.36 dB over the best previous ConvNet result. We show
that a network trained for a specific quality factor (QF) is resilient to the
QF used to compress the input image - a single network trained for QF 60
provides a PSNR gain of more than 1.5 dB over the wide QF range from 40 to 76.
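The gains above are reported in PSNR. As a reference for interpreting those dB figures, here is the standard PSNR definition (the `peak=255.0` default assumes 8-bit images; this is the conventional metric, not code from the paper):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    (compressed or restored) test image. Higher is better."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A 1.79 dB improvement thus corresponds to roughly a 34% reduction in mean squared error relative to plain JPEG decoding.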
Learning Transferable Architectures for Scalable Image Recognition
Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves a 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves state-of-the-art accuracy among published works: 82.7% top-1 and
96.2% top-5. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the features learned by NASNet, used with the Faster-RCNN
framework, surpass the state-of-the-art by 4.0%, achieving 43.1% mAP on the COCO
dataset.
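ScheduledDropPath is described only by name in the abstract; a minimal sketch of the underlying idea, assuming drop-path regularization whose drop probability grows linearly over training, might look like this (the `max_drop_prob` value and function interface are assumptions, not the paper's hyperparameters):

```python
import numpy as np

def scheduled_drop_path(paths, step, total_steps,
                        max_drop_prob=0.3, rng=None, training=True):
    """Sketch of scheduled drop-path: each candidate path feeding into a
    cell is zeroed out with a probability that increases linearly over
    the course of training; kept paths are rescaled (inverted dropout)."""
    rng = rng or np.random.default_rng()
    if not training:
        return list(paths)  # no dropping at inference time
    drop_prob = max_drop_prob * step / total_steps  # linear schedule
    keep_prob = 1.0 - drop_prob
    out = []
    for p in paths:
        if rng.random() < keep_prob:
            out.append(p / keep_prob)  # rescale so expectation is unchanged
        else:
            out.append(np.zeros_like(p))
    return out
```

Under this reading, regularization is gentle early in training (when paths are still learning) and strongest at the end, which is consistent with the generalization improvement the abstract reports.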
- …