Super-resolution of 3D Magnetic Resonance Images by Random Shifting and Convolutional Neural Networks
Enhancing resolution is a permanent goal in magnetic resonance (MR) imaging, in order to keep improving diagnostic capability and registration methods. Super-resolution (SR) techniques are applied at the post-processing stage, and their use and development have increased steadily in recent years; example-based methods in particular dominate recent state-of-the-art work. In this paper, a combination of a deep-learning SR system and a random shifting technique to improve the quality of MR images is proposed, implemented and tested. The model was compared to four competitors: cubic spline interpolation, non-local means upsampling, low-rank total variation, and a three-dimensional convolutional neural network trained with patches of HR brain images (SRCNN3D). The proposed method scored better on Peak Signal-to-Noise Ratio, Structural Similarity index, and the Bhattacharyya coefficient, with computation times at the same level as those of these up-to-date methods. When applied to downsampled structural T1 MR images, it also yielded better qualitative results, both in the restored images and in the residual images.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
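The quantitative comparison above relies on standard image-fidelity metrics. As an illustration (not the paper's code), PSNR and the Bhattacharyya coefficient between intensity histograms can be computed with NumPy roughly as follows:

```python
import numpy as np

def psnr(ref, est, data_range=255.0):
    # Peak Signal-to-Noise Ratio between a reference and a restored image.
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def bhattacharyya(ref, est, bins=256, data_range=(0, 255)):
    # Bhattacharyya coefficient between the intensity histograms of two
    # images; 1.0 means identical intensity distributions.
    p, _ = np.histogram(ref, bins=bins, range=data_range)
    q, _ = np.histogram(est, bins=bins, range=data_range)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))
```

A higher value is better for both metrics; SSIM, also reported in the paper, needs a windowed computation and is easiest to take from an existing library.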
Single image example-based super-resolution using cross-scale patch matching and Markov random field modelling
Example-based super-resolution has become increasingly popular over the last few years for its ability to overcome the limitations of the classical multi-frame approach. In this paper we present a new example-based method that uses the input low-resolution image itself as a search space for high-resolution patches, exploiting self-similarity across different resolution scales. Found examples are combined into a high-resolution image by means of Markov Random Field modelling that enforces their global agreement. Additionally, we apply back-projection and steering kernel regression as post-processing techniques. In this way, we are able to produce sharp and artefact-free results that are comparable to or better than standard interpolation and state-of-the-art super-resolution techniques.
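The core self-similarity idea, searching a coarser scale of the input image itself for matching patch evidence, can be sketched as a brute-force patch search. This is a minimal illustration only; the MRF global-agreement step, back-projection and steering kernel regression described in the abstract are omitted:

```python
import numpy as np

def extract_patches(img, size, stride=1):
    # Collect all size x size patches from a 2-D image.
    H, W = img.shape
    patches = []
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            patches.append(img[i:i + size, j:j + size])
    return np.stack(patches)

def best_cross_scale_match(patch, coarse_img, size):
    # Brute-force nearest-neighbour search for `patch` among all patches of
    # a coarser scale of the same image (e.g. a downsampled copy); this is
    # the cross-scale self-similarity search in its simplest form.
    cands = extract_patches(coarse_img, size).reshape(-1, size * size)
    dists = np.sum((cands - patch.ravel()) ** 2, axis=1)
    best = int(np.argmin(dists))
    return cands[best].reshape(size, size)
```

In a full method, each matched example would vote for the high-resolution content of its location, with the MRF enforcing consistency between overlapping candidates.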
Geometry-Aware Neighborhood Search for Learning Local Models for Image Reconstruction
Local learning of sparse image models has proven to be very effective for solving inverse problems in many computer vision applications. To learn such models, the data samples are often clustered using the K-means algorithm with the Euclidean distance as a dissimilarity metric. However, the Euclidean distance may not always be a good dissimilarity measure for comparing data samples lying on a manifold. In this paper, we propose two algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data. The first algorithm, called Adaptive Geometry-driven Nearest Neighbor search (AGNN), is an adaptive scheme which can be seen as an out-of-sample extension of the replicator graph clustering method for local model learning. The second method, called Geometry-driven Overlapping Clusters (GOC), is a less complex nonadaptive alternative for training subset selection. The proposed AGNN and GOC methods are evaluated in image super-resolution, deblurring and denoising applications and shown to outperform spectral clustering, soft clustering, and geodesic distance based subset selection in most settings.
Comment: 15 pages, 10 figures and 5 tables
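To make concrete the contrast the abstract draws between Euclidean and manifold-aware neighbour selection, here is a minimal sketch of geodesic-distance-based subset selection (one of the baselines the paper compares against, not AGNN or GOC themselves); function and parameter names are illustrative:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def geodesic_neighbors(X, query_idx, k_graph=5, k_select=10):
    # Approximate manifold (geodesic) distances by shortest paths on a
    # k-nearest-neighbour graph whose edges carry Euclidean lengths,
    # then select the k_select samples closest to the query along the graph.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    W = np.full_like(D, np.inf)          # inf marks "no edge"
    for i in range(len(X)):
        nn = np.argsort(D[i])[1:k_graph + 1]
        W[i, nn] = D[i, nn]
        W[nn, i] = D[nn, i]
    np.fill_diagonal(W, 0.0)
    G = shortest_path(W, method="D", directed=False)  # Dijkstra
    return np.argsort(G[query_idx])[1:k_select + 1]
```

On data lying along a curved manifold, the shortest-path ranking can differ sharply from the plain Euclidean ranking, which is the failure mode of K-means clustering that the paper's AGNN and GOC schemes address.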
Face image super-resolution using 2D CCA
In this paper a face super-resolution method using two-dimensional canonical correlation analysis (2D CCA) is presented. A detail compensation step then adds high-frequency components to the reconstructed high-resolution face. Unlike most previous research on face super-resolution, which first transforms the images into vectors, our approach maintains the relationship between the high-resolution and the low-resolution face images in their original 2D representation. In addition, rather than approximating the entire face, different parts of a face image are super-resolved separately to better preserve local structure. The proposed method is compared with various state-of-the-art super-resolution algorithms using multiple evaluation criteria, including face recognition performance. Results on publicly available datasets show that the proposed method super-resolves high-quality face images that are very close to the ground truth, and the performance gain is not dataset dependent. The method is also very efficient in both the training and testing phases compared to the other approaches. © 2013 Elsevier B.V.
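The part-based strategy of processing different face regions separately and recombining them can be sketched generically as follows. The block layout and averaging-based blending here are illustrative assumptions, and `fn` stands in for any learned per-part mapping such as a 2D-CCA-based one:

```python
import numpy as np

def process_by_parts(img, fn, part=16, step=12):
    # Apply a per-part mapping `fn` to overlapping blocks of a face image
    # and blend the results by averaging where blocks overlap, so each
    # region is handled by its own local model.
    H, W = img.shape
    out = np.zeros((H, W), dtype=np.float64)
    wgt = np.zeros((H, W), dtype=np.float64)
    for i in range(0, H - part + 1, step):
        for j in range(0, W - part + 1, step):
            out[i:i + part, j:j + part] += fn(img[i:i + part, j:j + part])
            wgt[i:i + part, j:j + part] += 1.0
    return out / np.maximum(wgt, 1.0)  # average overlapping contributions
```

Working block-by-block lets each region (eyes, mouth, etc.) be reconstructed by a model tuned to its local statistics, which is the motivation the abstract gives for not approximating the entire face at once.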