Deep Adaptive Inference Networks for Single Image Super-Resolution
Recent years have witnessed tremendous progress in single image
super-resolution (SISR) owing to the deployment of deep convolutional neural
networks (CNNs). For most existing methods, the computational cost of each SISR
model is independent of local image content, hardware platform, and application
scenario. However, a content- and resource-adaptive model is preferable: it is
desirable to apply simpler, more efficient networks to easier regions with
fewer details and to scenarios with tight efficiency
constraints. In this paper, we take a step forward to address this issue by
leveraging the adaptive inference networks for deep SISR (AdaDSR). In
particular, our AdaDSR comprises an SISR backbone and a lightweight adapter
module that takes image features and the resource constraint as input and
predicts a map of local network depth. Adaptive inference can then be performed
with the support of efficient sparse convolution, where only a fraction of the
backbone layers are executed at a given position, according to its
predicted depth. The network learning can be formulated as the joint
optimization of reconstruction and network depth losses. In the inference
stage, the average depth can be flexibly tuned to meet a range of efficiency
constraints. Experiments demonstrate the effectiveness and adaptability of our
AdaDSR in contrast to its counterparts (e.g., EDSR and RCAN).
Comment: Code can be found at https://github.com/csmliu/AdaDS
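The per-position early-exit scheme described in this abstract can be illustrated with a small NumPy sketch. This is a hypothetical toy, not the authors' implementation: the detail-based adapter heuristic, the identical residual blocks, and the masked dense update standing in for true sparse convolution are all assumptions.

```python
import numpy as np

def adapter(features, target_avg_depth, max_depth):
    """Hypothetical adapter: maps local feature magnitude (a proxy for
    image detail) to a per-pixel depth in [0, max_depth], rescaled so the
    mean depth roughly matches the efficiency budget."""
    detail = np.abs(features).mean(axis=0)           # (H, W) difficulty map
    detail = detail / (detail.max() + 1e-8)
    depth = detail * max_depth
    depth *= target_avg_depth / (depth.mean() + 1e-8)  # meet average-depth budget
    return np.clip(np.round(depth), 0, max_depth).astype(int)

def adaptive_forward(features, blocks, depth_map):
    """Run block i only at positions whose predicted depth exceeds i;
    elsewhere the features pass through unchanged (early exit)."""
    out = features
    for i, block in enumerate(blocks):
        mask = depth_map > i                 # positions still active at layer i
        if not mask.any():
            break                            # all positions have exited
        # np.where emulates sparse execution: inactive positions keep old values
        out = np.where(mask, block(out), out)
    return out
```

In a real sparse-convolution backend, `block(out)` would only be evaluated at the masked positions; the dense `np.where` form shown here computes everywhere and then discards, which preserves the semantics but not the speedup.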
Exploring Sparsity in Image Super-Resolution for Efficient Inference
Current CNN-based super-resolution (SR) methods process all locations equally
with computational resources being uniformly assigned in space. However, since
missing details in low-resolution (LR) images mainly exist in regions of edges
and textures, fewer computational resources are required for flat regions.
Therefore, existing CNN-based methods involve redundant computation in flat
regions, which increases their computational cost and limits their applications
on mobile devices. In this paper, we explore the sparsity in image SR to
improve inference efficiency of SR networks. Specifically, we develop a Sparse
Mask SR (SMSR) network to learn sparse masks to prune redundant computation.
Within our SMSR, spatial masks learn to identify "important" regions while
channel masks learn to mark redundant channels in those "unimportant" regions.
Consequently, redundant computation can be accurately localized and skipped
while maintaining comparable performance. It is demonstrated that our SMSR
achieves state-of-the-art performance with 41%/33%/27% FLOPs being reduced for
x2/3/4 SR. Code is available at: https://github.com/LongguangWang/SMSR.
Comment: Accepted by CVPR 202
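The spatial/channel masking idea above can be sketched in NumPy. This is a hypothetical illustration, not the SMSR implementation: a 1x1 dense convolution and fixed binary masks stand in for SMSR's learned masks and sparse 3x3 convolutions, and skipped work is emulated by zeroing rather than actually avoided.

```python
import numpy as np

def sparse_masked_conv(x, weight, spatial_mask, channel_mask):
    """At "important" positions (spatial_mask == 1) all output channels are
    computed; at "unimportant" positions only channels kept by channel_mask
    are computed. Dense emulation: compute everything, then zero skipped work."""
    dense = np.einsum('oc,chw->ohw', weight, x)   # 1x1 conv for illustration
    keep = spatial_mask[None] + (1 - spatial_mask[None]) * channel_mask[:, None, None]
    return dense * keep

def flops_saved(spatial_mask, channel_mask):
    """Fraction of per-position channel computations that are skipped."""
    total = spatial_mask.size * channel_mask.size
    computed = (spatial_mask.sum() * channel_mask.size
                + (spatial_mask.size - spatial_mask.sum()) * channel_mask.sum())
    return 1 - computed / total
```

With a real sparse-convolution kernel, the skipped position/channel pairs would never be computed at all, which is where the FLOPs reductions reported in the abstract come from.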