Clustering-Oriented Multiple Convolutional Neural Networks for Single Image Super-Resolution
In contrast to the human visual system (HVS), which applies different processing schemes to visual information of different textural categories, most existing deep learning models for image super-resolution apply an indiscriminate scheme to the whole image. Inspired by this cognitive mechanism, we propose a multiple convolutional neural network framework trained on different textural clusters of local image patches. To this end, we first group patches into K clusters via K-means, so that each cluster center encodes the image priors of a certain texture category. We then train K convolutional neural networks for super-resolution, one per cluster of patches, so that the multiple networks jointly capture the textural variability of patches. Each convolutional neural network thus characterizes one specific texture category and is used to restore the patches belonging to its cluster. In this way, the texture variation within a whole image is characterized by assigning local patches to their closest cluster centers, and each local patch is super-resolved by the convolutional neural network trained on its cluster. Our proposed framework not only exploits the deep learning capability of convolutional neural networks but also adapts them to the texture diversity of images. Super-resolution evaluations on benchmark image datasets validate that our framework achieves state-of-the-art performance in terms of peak signal-to-noise ratio and structural similarity. Our multiple convolutional neural network framework thus provides an enhanced image super-resolution strategy over existing single-mode deep learning models.
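The cluster-then-route pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-cluster super-resolution CNNs are replaced by identity stubs, the k-means is a plain numpy Lloyd's loop, and the data is a random toy image standing in for training patches.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(img, size=8, stride=8):
    """Split an image into flattened square patches (non-overlapping here)."""
    H, W = img.shape
    patches = [
        img[i:i + size, j:j + size].ravel()
        for i in range(0, H - size + 1, stride)
        for j in range(0, W - size + 1, stride)
    ]
    return np.stack(patches)

def kmeans(X, K, iters=20):
    """Plain Lloyd's k-means; returns cluster centers and patch labels."""
    centers = X[rng.choice(len(X), K, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(K):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(0)
    return centers, labels

# Toy image standing in for low-resolution training data.
img = rng.random((64, 64))
X = extract_patches(img)          # 64 patches, each 8x8 flattened to 64-dim
K = 4
centers, labels = kmeans(X, K)

# One model per texture cluster; identity stubs stand in for the
# K trained super-resolution CNNs described in the abstract.
models = {k: (lambda p: p) for k in range(K)}

def super_resolve_patch(patch, centers, models):
    """Route a patch to the model of its nearest cluster center."""
    k = ((centers - patch) ** 2).sum(1).argmin()
    return models[int(k)](patch)

out = super_resolve_patch(X[0], centers, models)
```

At inference time each patch is assigned to its closest center and restored by that cluster's network, which is exactly the routing step `super_resolve_patch` mimics.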
Multi-image Super Resolution of Remotely Sensed Images using Residual Feature Attention Deep Neural Networks
Convolutional Neural Networks (CNNs) have consistently achieved
state-of-the-art results in image Super-Resolution (SR), representing an
exceptional opportunity for the remote sensing field to extract further
information and knowledge from captured data. However, most of the works
published in the literature have so far focused on the Single-Image
Super-Resolution problem. At present, satellite-based remote sensing
platforms offer huge data availability with high temporal resolution and low
spatial resolution. In this context, the presented research proposes a novel
residual attention model (RAMS) that efficiently tackles the multi-image
super-resolution task, simultaneously exploiting spatial and temporal
correlations to combine multiple images. We introduce a visual feature
attention mechanism with 3D convolutions to obtain attention-aware data fusion
and information extraction from the multiple low-resolution images,
transcending the limited local receptive field of convolutional operations.
Moreover, given multiple inputs depicting the same scene, our representation
learning network makes extensive use of nested residual connections to let
redundant low-frequency signals flow through and to focus computation on the
more important high-frequency components. Extensive experiments and
evaluations against other available solutions, for either single- or
multi-image super-resolution, demonstrate that the proposed deep
learning-based solution can be considered state-of-the-art for Multi-Image
Super-Resolution in remote sensing applications.
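The attention-based fusion idea above can be illustrated with a drastically simplified, non-learned sketch. RAMS learns per-pixel attention with 3D convolutions; here, as a stand-in assumption, each co-registered frame gets a single softmax weight derived from its high-frequency energy (a crude sharpness proxy), and the frames are combined by weighted averaging.

```python
import numpy as np

rng = np.random.default_rng(1)

def high_freq_energy(img):
    """Crude sharpness proxy: mean squared Laplacian response
    (wrap-around borders via np.roll)."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return (lap ** 2).mean()

def attention_fuse(frames):
    """Softmax-weighted average of co-registered LR frames.

    Stands in for the learned 3D-convolutional feature attention in
    RAMS, which produces per-pixel rather than per-frame weights."""
    scores = np.array([high_freq_energy(f) for f in frames])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    fused = np.tensordot(w, np.stack(frames), axes=1)
    return fused, w

# Five toy low-resolution acquisitions of the "same scene".
frames = [rng.random((32, 32)) for _ in range(5)]
fused, weights = attention_fuse(frames)
```

The design point the sketch preserves is that fusion weights are data-dependent: sharper (more informative) acquisitions contribute more to the fused output.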
Cross-Modality High-Frequency Transformer for MR Image Super-Resolution
Improving the resolution of magnetic resonance (MR) image data is critical to
computer-aided diagnosis and brain function analysis. Higher resolution helps
to capture more detailed content, but it typically leads to a lower
signal-to-noise ratio and a longer scanning time. Consequently, MR image
super-resolution has become a topic of broad interest in recent years. Existing
works build extensive deep models on conventional architectures based
on convolutional neural networks (CNNs). In this work, to further advance this
research field, we make an early effort to build a Transformer-based MR image
super-resolution framework, with careful designs for exploiting valuable domain
prior knowledge. Specifically, we consider two kinds of domain priors: the
high-frequency structure prior and the inter-modality context prior, and
establish a novel Transformer architecture, called Cross-modality
high-frequency Transformer (Cohf-T), to incorporate these priors when
super-resolving low-resolution (LR) MR images. Comprehensive experiments on
two datasets indicate that Cohf-T achieves new state-of-the-art performance.
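The abstract does not specify how Cohf-T computes its high-frequency structure prior, but the generic notion can be sketched: a high-frequency map is the image minus a low-pass version of itself. As assumptions, the blur here is a 3x3 box filter with wrap-around borders (a stand-in for a proper Gaussian low-pass), and the input is a random toy array standing in for an LR MR slice.

```python
import numpy as np

rng = np.random.default_rng(2)

def smooth(img):
    """3x3 box blur with wrap-around borders (np.roll); a stand-in
    for a proper Gaussian low-pass filter."""
    acc = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, 0), dx, 1)
    return acc / 9.0

def high_frequency_prior(lr_img):
    """High-frequency map: the image minus its low-pass version."""
    return lr_img - smooth(lr_img)

lr = rng.random((16, 16))   # toy stand-in for a low-resolution MR slice
hf = high_frequency_prior(lr)
```

Such a map emphasizes edges and fine structures, which is the kind of signal a high-frequency structure prior is meant to supply to the super-resolution network.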
- …