Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution
Convolutional neural networks have recently demonstrated high-quality
reconstruction for single-image super-resolution. In this paper, we propose the
Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively
reconstruct the sub-band residuals of high-resolution images. At each pyramid
level, our model takes coarse-resolution feature maps as input, predicts the
high-frequency residuals, and uses transposed convolutions for upsampling to
the finer level. Our method does not require bicubic interpolation as a
pre-processing step and thus dramatically reduces computational complexity.
We train the proposed LapSRN with deep supervision using a robust Charbonnier
loss function and achieve high-quality reconstruction. Furthermore, our network
generates multi-scale predictions in one feed-forward pass through the
progressive reconstruction, thereby facilitating resource-aware applications.
Extensive quantitative and qualitative evaluations on benchmark datasets show
that the proposed algorithm performs favorably against the state-of-the-art
methods in terms of speed and accuracy.
Comment: This work is accepted in CVPR 2017. The code and datasets are available at http://vllab.ucmerced.edu/wlai24/LapSRN
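The Charbonnier loss mentioned in the abstract is simple to express; below is a minimal sketch, assuming a scalar epsilon hyperparameter (the value used here is an assumption, and real implementations operate on batched image tensors rather than plain arrays):

```python
import numpy as np

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss: a differentiable, outlier-robust variant of L1.

    Behaves quadratically for residuals much smaller than eps and like
    |pred - target| for large residuals.
    """
    diff = pred - target
    return np.mean(np.sqrt(diff * diff + eps * eps))
```

Compared with plain L2, this penalty down-weights large residuals, which is why it is often described as robust for image-restoration training.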
CT-SRCNN: Cascade Trained and Trimmed Deep Convolutional Neural Networks for Image Super Resolution
We propose methodologies to train highly accurate and efficient deep
convolutional neural networks (CNNs) for image super resolution (SR). A cascade
training approach to deep learning is proposed to improve the accuracy of the
neural networks while gradually increasing the number of network layers. Next,
we explore how to improve SR efficiency by making the network slimmer. Two
methodologies, one-shot trimming and cascade trimming, are proposed. With
cascade trimming, the network's size is gradually reduced layer by layer
without significant loss of its discriminative ability. Experiments on
benchmark image datasets show that our proposed SR network achieves the
state-of-the-art super-resolution accuracy while being more than four times
faster than existing deep super-resolution networks.
Comment: Accepted to IEEE Winter Conf. on Applications of Computer Vision (WACV) 2018, Lake Tahoe, US
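The layer-by-layer trimming idea can be sketched with plain weight matrices. This is a toy illustration only: selecting filters by L2 norm, the fixed keep ratio, and the omission of per-stage fine-tuning and of the dimension bookkeeping between consecutive layers are all simplifications, not the paper's exact procedure:

```python
import numpy as np

def trim_layer(weights, keep_ratio=0.5):
    """Keep the filters (rows) with the largest L2 norm.

    A simplified stand-in for one trimming step; the selection
    criterion here is an assumption, not the paper's.
    """
    norms = np.linalg.norm(weights, axis=1)
    k = max(1, int(len(norms) * keep_ratio))
    keep = np.argsort(norms)[-k:]          # indices of the k largest norms
    return weights[np.sort(keep)]          # preserve original filter order

def cascade_trim(layers, keep_ratio=0.5):
    """Trim the network one layer at a time, stage by stage.

    In the paper each stage is followed by fine-tuning, and the next
    layer's input dimension is adjusted to match; both are omitted here.
    """
    return [trim_layer(w, keep_ratio) for w in layers]
```

The contrast with one-shot trimming is that the network is re-stabilized after each layer's reduction rather than shrunk everywhere at once.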
Cascade Subspace Clustering for Outlier Detection
Many methods based on sparse and low-rank representation have been developed
along with guarantees of correct outlier detection. Self-representation states that a
point in a subspace can always be expressed as a linear combination of other
points in the subspace. A suitable Markov Chain can be defined on the
self-representation and it allows us to recognize the difference between
inliers and outliers. However, the reconstruction error of the
self-representation, which is still informative for outlier detection, is
neglected. Inspired by gradient boosting, in this paper we propose a new outlier detection
framework that combines a series of weak "outlier detectors" into a single
strong one in an iterative fashion by constructing multi-pass
self-representation. At each stage, we construct a self-representation based on
elastic-net and define a suitable Markov Chain on it to detect outliers. The
residual of the self-representation is used for the next stage to learn the
next weak outlier detector. This stage is repeated many times, and the
final decision on outliers is made by aggregating the results of all stages.
Experimental results on image and speaker datasets demonstrate its superiority
with respect to state-of-the-art sparse and low-rank outlier detection methods.
Comment: arXiv admin note: text overlap with arXiv:1704.03925 by other authors
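The multi-pass self-representation idea can be sketched as follows. This toy version substitutes ridge regression for the paper's elastic-net, zeroes the diagonal afterwards as a shortcut instead of enforcing the no-self-representation constraint during optimization, and omits the Markov-chain scoring step, accumulating residual norms as the outlier evidence instead:

```python
import numpy as np

def self_representation(X, lam=0.1):
    """Express each column of X as a combination of the columns of X.

    Ridge regression replaces the paper's elastic-net for brevity:
    C = (X^T X + lam I)^{-1} X^T X, with the diagonal zeroed so that
    no point trivially represents itself (a post-hoc shortcut).
    """
    n = X.shape[1]
    G = X.T @ X + lam * np.eye(n)
    C = np.linalg.solve(G, X.T @ X)
    np.fill_diagonal(C, 0.0)
    return C

def boosted_outlier_scores(X, stages=3, lam=0.1):
    """Accumulate outlier evidence over several stages, each fitted on
    the residual of the previous self-representation (a rough sketch of
    the multi-pass, boosting-style idea)."""
    R = X.astype(float)
    scores = np.zeros(X.shape[1])
    for _ in range(stages):
        C = self_representation(R, lam)
        residual = R - R @ C
        scores += np.linalg.norm(residual, axis=0)  # larger => more outlying
        R = residual                                # next stage fits the residual
    return scores
```

Points lying in a common subspace are reconstructed well by their neighbors and accumulate small residuals, while a point outside every subspace keeps a large residual at every stage.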
Single Image Super-Resolution Using Multi-Scale Convolutional Neural Network
Methods based on convolutional neural network (CNN) have demonstrated
tremendous improvements on single-image super-resolution. However, previous
methods mainly restore images from a single area of the low-resolution (LR)
input, which limits the models' flexibility to infer details at various
scales for the high-resolution (HR) output. Moreover, most of them train a
specific model for each up-scale factor. In this paper, we propose a
multi-scale super resolution (MSSR) network. Our network consists of
multi-scale paths to make the HR inference, which can learn to synthesize
features from different scales. This property helps reconstruct various kinds
of regions in HR images. In addition, only one single model is needed for
multiple up-scale factors, which is more efficient without loss of restoration
quality. Experiments on four public datasets demonstrate that the proposed
method achieves state-of-the-art performance at high speed.
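The multi-scale-path idea can be illustrated in one dimension. In this toy sketch, fixed box filters of several sizes stand in for MSSR's learned convolutional paths; stacking their responses mimics how parallel paths with different receptive fields contribute features at different scales:

```python
import numpy as np

def conv1d_same(x, kernel):
    """'Same'-padded 1-D convolution (output length equals input length)."""
    return np.convolve(x, kernel, mode="same")

def multi_scale_features(x, kernel_sizes=(3, 5, 7)):
    """Run the input through parallel paths with different kernel sizes
    and stack the responses.

    A 1-D analogue of multi-scale paths; the actual network uses learned
    2-D convolutions and fuses the paths before HR inference.
    """
    paths = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k  # box filter stands in for a learned filter
        paths.append(conv1d_same(x, kernel))
    return np.stack(paths)       # shape: (num_paths, len(x))
```

Each path smooths over a different neighborhood size, so the stacked features carry information about both fine and coarse structure around every position.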