
    Clustering-Oriented Multiple Convolutional Neural Networks for Single Image Super-Resolution

    In contrast to the human visual system (HVS), which applies different processing schemes to visual information of different textural categories, most existing deep learning models for image super-resolution apply an indiscriminate scheme to the whole image. Inspired by this cognitive mechanism, we propose a multiple convolutional neural network framework trained on different textural clusters of local image patches. To this end, we first group patches into K clusters via K-means, so that each cluster center encodes the image prior of a certain texture category. We then train K convolutional neural networks for super-resolution, one per cluster of patches, so that the multiple networks together capture the full variability of patch textures. Each convolutional neural network thus characterizes one specific texture category and is used to restore the patches belonging to its cluster. In this way, the texture variation within an image is captured by assigning local patches to their closest cluster centers, and each patch is super-resolved by the network trained on its cluster. Our proposed framework not only exploits the learning capability of convolutional neural networks but also adapts them to the texture diversity present in super-resolution. Experimental evaluations on benchmark image datasets validate that our framework achieves state-of-the-art performance in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Our multiple convolutional neural network framework thus provides an enhanced super-resolution strategy over existing single-model deep learning approaches.
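The cluster-then-route idea in this abstract can be sketched in a few lines of numpy. This is a toy illustration, not the paper's implementation: the patch size, cluster count, plain Lloyd's K-means, and random test image are all assumptions made here for brevity. In the paper, a separate CNN trained per cluster would then restore the patches routed to it.

```python
import numpy as np

def extract_patches(img, size=8, stride=8):
    """Slice a grayscale image into flattened size x size patches."""
    patches = []
    for i in range(0, img.shape[0] - size + 1, stride):
        for j in range(0, img.shape[1] - size + 1, stride):
            patches.append(img[i:i+size, j:j+size].ravel())
    return np.stack(patches)

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's K-means; returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Squared Euclidean distance of every patch to every center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
    return centers, labels

# Toy image: each patch is assigned to its nearest texture-cluster center;
# one specialized super-resolver per cluster would then process each group.
img = np.random.default_rng(1).random((64, 64))
X = extract_patches(img)
centers, labels = kmeans(X, k=4)
print(X.shape, centers.shape)  # (64, 64) (4, 64)
```

At test time, a low-resolution patch is routed by the same nearest-center rule before being passed to its cluster's network.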

    Deep Bi-Dense Networks for Image Super-Resolution

    © 2018 IEEE. This paper proposes Deep Bi-Dense Networks (DBDN) for single image super-resolution (SISR). Our approach extends previous intra-block dense connection approaches by adding novel inter-block dense connections, so that feature information propagates from each dense block to all subsequent blocks instead of only to its immediate successor. To build a DBDN, we first construct intra-dense blocks, which extract and compress abundant local features via densely connected convolutional layers and compression layers for further feature learning. We then use an inter-block dense net to connect the intra-dense blocks, which allows each intra-dense block to propagate its local features to all successors. Additionally, our bi-dense construction connects each block to the output, alleviating the vanishing-gradient problem during training. Evaluation on five benchmark datasets shows that our DBDN outperforms the state of the art in SISR with a moderate number of network parameters.
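The bi-dense connectivity pattern can be illustrated with a small numpy forward pass. The random linear "blocks" below are stand-ins for the paper's intra-dense convolutional blocks, and the dimensions are illustrative assumptions; the point is the wiring: every block consumes the concatenation of all earlier features, and the output head sees every block's features.

```python
import numpy as np

def block(x, out_dim, seed):
    """Stand-in for an intra-dense block: a random linear map + ReLU."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((x.shape[-1], out_dim)) * 0.1
    return np.maximum(x @ W, 0.0)

def bi_dense_forward(x, n_blocks=3, growth=16):
    feats = [x]  # inter-block dense: every block sees all predecessors
    for b in range(n_blocks):
        inp = np.concatenate(feats, axis=-1)
        feats.append(block(inp, growth, seed=b))
    # Bi-dense output connection: the head also sees every block's features,
    # giving gradients a short path to every block during training.
    return np.concatenate(feats, axis=-1)

x = np.random.default_rng(0).standard_normal((1, 32))
y = bi_dense_forward(x)
print(y.shape)  # (1, 80): 32 input dims + 3 blocks x 16 growth
```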

    Learning to super-resolve images using self-similarities

    The single image super-resolution problem entails estimating a high-resolution version of a low-resolution image. Recent studies have shown that high-resolution versions of the patches of a given low-resolution image are likely to be found within the image itself. This recurrence of patches across scales forms the basis of self-similarity-driven algorithms for image super-resolution. Such approaches have the appeal that they require no external training set; the mapping from low resolution to high resolution is obtained from the cross-scale patch recurrence. In this dissertation, we address three important problems in super-resolution and present novel self-similarity-based solutions to them. First, we push the state of the art in super-resolving fine textural details in the scene. We propose two algorithms that use self-similarity in conjunction with the fact that textures are better characterized by their responses to a set of spatially localized bandpass filters than by intensity values directly. Our proposed algorithms seek self-similarities in the sub-bands of the image, better synthesizing fine textural details. Second, we address the problem of super-resolving an image in the presence of noise. To this end, we propose the first self-similarity-based super-resolution algorithm that effectively exploits the high-frequency content present in noise (which is ordinarily discarded by denoising algorithms) to synthesize useful textures in high resolution. Third, we present an algorithm that better super-resolves images containing geometric regularities, such as urban scenes and cityscapes. We do so by extracting planar surfaces and their parameters (mid-level cues) from the scene and exploiting the detected scene geometry to guide the self-similarity search process.
Apart from the above self-similarity algorithms, this dissertation also presents a novel edge-based super-resolution algorithm that super-resolves an image by learning from training data how edge profiles transform across resolutions. We obtain edge profiles via a detailed and explicit examination of local image structure, which we show to be more robust and accurate than conventional gradient profiles.
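The cross-scale patch recurrence that these self-similarity methods rely on can be sketched as a brute-force search: downscale the image and find the internal patch that best matches a query. The minimal numpy sketch below (box-filter downscaling, sum-of-squared-differences matching, toy sizes, all assumed here) illustrates only the search step, not a full super-resolution pipeline:

```python
import numpy as np

def downscale(img, f=2):
    """Box-filter downscale by an integer factor f."""
    h, w = img.shape[0] // f * f, img.shape[1] // f * f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def best_cross_scale_match(img, patch, size=5):
    """Find the patch in the downscaled image closest (in SSD) to `patch`."""
    small = downscale(img)
    best, best_pos = np.inf, (0, 0)
    for i in range(small.shape[0] - size + 1):
        for j in range(small.shape[1] - size + 1):
            ssd = ((small[i:i+size, j:j+size] - patch) ** 2).sum()
            if ssd < best:
                best, best_pos = ssd, (i, j)
    return best_pos, best

rng = np.random.default_rng(0)
img = rng.random((40, 40))
patch = downscale(img)[3:8, 7:12]  # a patch we know recurs at scale 1/2
pos, ssd = best_cross_scale_match(img, patch)
print(pos, round(float(ssd), 6))  # (3, 7) 0.0 — exact recurrence found
```

In an actual self-similarity super-resolver, the high-resolution region corresponding to the matched low-resolution location supplies the missing detail for the query patch.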

    Learning a Mixture of Deep Networks for Single Image Super-Resolution

    Single image super-resolution (SR) is an ill-posed problem that aims to recover high-resolution (HR) images from their low-resolution (LR) observations. The crux of the problem lies in learning the complex mapping between low-resolution patches and the corresponding high-resolution patches. Prior methods have used either a mixture of simple regression models or a single non-linear neural network for this purpose. This paper proposes learning a mixture of SR inference modules in a unified framework to tackle the problem. Specifically, a number of SR inference modules specialized in different local image patterns are first independently applied to the LR image to obtain various HR estimates, and the resulting estimates are adaptively aggregated to form the final HR image. By selecting neural networks as the SR inference modules, the whole procedure can be incorporated into a unified network and optimized jointly. Extensive experiments investigate the relation between restoration performance and different network architectures. Compared with other current image SR approaches, our proposed method consistently achieves state-of-the-art restoration results on a wide range of images while allowing more flexible design choices. The source code is available at http://www.ifp.illinois.edu/~dingliu2/accv2016.
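The adaptive aggregation step described above can be sketched with per-pixel softmax weights over the modules' HR estimates. The shapes and the way confidence scores are produced here are assumptions for illustration; in the paper the aggregation weights are predicted by the unified network itself and trained jointly with the inference modules.

```python
import numpy as np

def softmax(z, axis=0):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def aggregate(hr_estimates, scores):
    """Adaptively blend per-module HR estimates with per-pixel weights.

    hr_estimates: (k, H, W) stack of HR images from k SR inference modules
    scores:       (k, H, W) unnormalized per-pixel confidence maps
    """
    w = softmax(scores, axis=0)   # weights sum to 1 at every pixel
    return (w * hr_estimates).sum(axis=0)

k, H, W = 3, 8, 8
rng = np.random.default_rng(0)
est = rng.random((k, H, W))            # stand-in module outputs
scores = rng.standard_normal((k, H, W))  # stand-in confidence maps
out = aggregate(est, scores)
print(out.shape)  # (8, 8)
```

Because the weights form a convex combination at each pixel, the aggregate stays within the range of the individual module estimates while letting different modules dominate in different image regions.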