MRI Super-Resolution using Multi-Channel Total Variation
This paper presents a generative model for super-resolution in routine
clinical magnetic resonance images (MRI), of arbitrary orientation and
contrast. The model recasts the recovery of high resolution images as an
inverse problem, in which a forward model simulates the slice-select profile of
the MR scanner. The paper introduces a prior based on multi-channel total
variation for MRI super-resolution. The bias-variance trade-off is handled by
estimating hyper-parameters from the low-resolution input scans. The model was
validated on a large database of brain images. The validation showed that the
model can improve brain segmentation, that it can recover anatomical
information between images of different MR contrasts, and that it generalises
well to the large variability present in MR images of different subjects. The
implementation is freely available at https://github.com/brudfors/spm_superre
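The two ingredients of this model, a forward model that mimics the slice-select profile and a multi-channel total-variation (MCTV) prior, can be sketched in 1D with NumPy. This is only an illustration of the idea; the box profile, sizes, and signals below are our assumptions, not the paper's implementation:

```python
import numpy as np

def slice_profile_downsample(x, factor):
    """Toy forward model: average `factor` thin slices into one thick
    slice, i.e. a box-shaped slice-select profile along one axis."""
    n = (len(x) // factor) * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

def multichannel_tv(channels, eps=1e-8):
    """Multi-channel total variation: the l2 norm of finite-difference
    gradients stacked across contrasts, summed over voxels."""
    grads = np.stack([np.diff(c) for c in channels])  # shape (C, N-1)
    return np.sum(np.sqrt(np.sum(grads ** 2, axis=0) + eps))

# An edge shared by two contrasts is penalised once by the joint norm,
# so MCTV favours solutions whose edges align across contrasts:
t1 = np.concatenate([np.zeros(8), np.ones(8)])
t2 = np.concatenate([np.ones(8), np.zeros(8)])  # same edge, inverted contrast
joint = multichannel_tv([t1, t2])                         # ~ sqrt(2)
separate = multichannel_tv([t1]) + multichannel_tv([t2])  # ~ 2
```

The joint penalty (~1.41) is smaller than the sum of per-channel penalties (~2), which is why the multi-channel prior encourages contrasts to share edge locations.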
Unsupervised MRI Super-Resolution Using Deep External Learning and Guided Residual Dense Network with Multimodal Image Priors
Deep learning techniques have led to state-of-the-art single image
super-resolution (SISR) with natural images. Pairs of high-resolution (HR) and
low-resolution (LR) images are used to train the deep learning model (mapping
function). These techniques have also been applied to medical image
super-resolution (SR). Compared with natural images, medical images have
several unique characteristics. First, there are no HR images for training in
real clinical applications because of the limitations of imaging systems and
clinical requirements. Second, HR images of other modalities may be available
(e.g., HR T1-weighted images are available for enhancing LR T2-weighted
images). In this
paper, we propose an unsupervised SISR technique based on simple prior
knowledge of the human anatomy; this technique does not require HR images for
training. Furthermore, we present a guided residual dense network, which
incorporates a residual dense network with a guided deep convolutional neural
network for enhancing the resolution of LR images by referring to different HR
images of the same subject. Experiments on a publicly available brain MRI
database showed that our proposed method achieves better performance than the
state-of-the-art methods.
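The flavour of "guiding the enhancement of an LR image with an HR image of another contrast" can be given with the classic guided filter (He et al.), which locally expresses the output as an affine function of the guide fitted to the source. This is an illustrative stand-in only, not the paper's guided residual dense network:

```python
import numpy as np

def box(x, r):
    """Mean filter with window 2r+1 via cumulative sums (1D, edge-padded)."""
    pad = np.pad(x, r, mode='edge')
    c = np.cumsum(np.concatenate([[0.0], pad]))
    return (c[2 * r + 1:] - c[:-(2 * r + 1)]) / (2 * r + 1)

def guided_filter_1d(guide, src, r=4, eps=1e-4):
    """Guided filter: in each window, fit out = a * guide + b to `src`,
    so structure present in the HR guide is transferred to the output."""
    mg, ms = box(guide, r), box(src, r)
    cov = box(guide * src, r) - mg * ms
    var = box(guide * guide, r) - mg * mg
    a = cov / (var + eps)
    b = ms - a * mg
    return box(a, r) * guide + box(b, r)
```

When the source already equals the guide, the filter approximately reproduces it; when the source is a blurry LR signal, edges of the guide reappear in the output, which is the intuition behind guidance by a same-subject HR image of a different contrast.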
Multi-contrast MRI Super-resolution via Implicit Neural Representations
Clinical routine and retrospective cohorts commonly include multi-parametric
Magnetic Resonance Imaging; however, they are mostly acquired in different
anisotropic 2D views due to signal-to-noise-ratio and scan-time constraints.
The views thus acquired suffer from poor out-of-plane resolution, which affects
downstream volumetric image analysis that typically requires isotropic 3D
scans. Combining different views of multi-contrast scans into high-resolution
isotropic 3D scans is challenging due to the lack of a large training cohort,
which calls for a subject-specific framework. This work proposes a novel
solution to this problem leveraging Implicit Neural Representations (INR). Our
proposed INR jointly learns two different contrasts of complementary views in a
continuous spatial function and benefits from exchanging anatomical information
between them. Trained within minutes on a single commodity GPU, our model
provides realistic super-resolution across different pairs of contrasts in our
experiments with three datasets. Using Mutual Information (MI) as a metric, we
find that our model converges to an optimum MI amongst sequences, achieving
anatomically faithful reconstruction. Code is available at:
https://github.com/jqmcginnis/multi_contrast_inr
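The core idea, representing the scan as a continuous function of spatial coordinates that can be queried at any resolution, can be sketched with Fourier features and a linear least-squares fit standing in for the paper's MLP. All grid sizes and frequencies below are arbitrary choices for illustration:

```python
import numpy as np

def fourier_features(coords, n_freq=4):
    """Encode coordinates in [0, 1] with sin/cos features so that even a
    linear model can represent fine detail; a stand-in for the INR's MLP."""
    k = 2.0 ** np.arange(n_freq) * np.pi            # pi, 2*pi, 4*pi, 8*pi
    ang = coords[:, None] * k[None, :]
    return np.concatenate(
        [np.sin(ang), np.cos(ang), np.ones((len(coords), 1))], axis=1)

# "Acquire" a signal on a coarse grid (thick slices), fit the continuous
# representation, then evaluate it on a finer grid: super-resolution
# becomes the evaluation of a continuous spatial function.
coarse = np.linspace(0.0, 1.0, 16)
observed = np.sin(2 * np.pi * coarse)
w, *_ = np.linalg.lstsq(fourier_features(coarse), observed, rcond=None)
fine = np.linspace(0.0, 1.0, 64)
recon = fourier_features(fine) @ w
```

In the paper, one such continuous function is learned jointly for two contrasts acquired in complementary views, so each contrast borrows anatomical detail from the other's high-resolution direction.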
Multi-contrast brain magnetic resonance image super-resolution using the local weight similarity
Abstract
Background
Low-resolution images may be acquired in magnetic resonance imaging (MRI) due to limited data acquisition time or other physical constraints, and their resolutions can be improved with super-resolution methods. Since MRI can offer images of an object with different contrasts, e.g., T1-weighted or T2-weighted, the shared information between inter-contrast images can be used to benefit super-resolution.
Methods
In this study, an MRI super-resolution approach to enhance in-plane resolution is proposed by exploiting statistical information estimated from an MRI image of another contrast that shares similar anatomical structures. We assume that some edge structures appear in both T1-weighted and T2-weighted MRI brain images acquired of the same subject, and the proposed approach aims to recover such structures to generate a high-resolution image from its low-resolution counterpart.
Results
The statistical information yields local image weights that are found to be nearly invariant to image contrast, so these weights can be used to transfer the shared information from one contrast to another. We analyze this property both mathematically and through numerical experiments.
Conclusion
Experimental results demonstrate that the quality of low-resolution images can be remarkably improved with the proposed method if this weight is borrowed from a high-resolution image of another contrast.
Graphical abstract: Multi-contrast MRI image super-resolution with a contrast-invariant regression weight.
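The notion of a contrast-invariant local weight can be illustrated with a toy 1D version: normalising intensity differences by the local standard deviation makes Gaussian similarity weights invariant to any affine change of contrast. This is a simplification we made up to show the invariance, not the paper's actual weight formula:

```python
import numpy as np

def local_weights(img, center, r=3, eps=1e-8):
    """Similarity weights of the (2r+1)-neighbourhood around `center`,
    with differences normalised by the local standard deviation so that
    an affine contrast change (I -> a*I + b) leaves them unchanged."""
    patch = img[center - r: center + r + 1]
    d = (patch - img[center]) / (patch.std() + eps)
    w = np.exp(-d ** 2)
    return w / w.sum()

# Same anatomy (an edge), two different contrasts related affinely:
sig = np.concatenate([np.zeros(10), np.ones(10)]) + 0.01 * np.arange(20)
t1 = sig
t2 = 2.0 - 1.5 * sig          # inverted, rescaled contrast
w1 = local_weights(t1, 9)
w2 = local_weights(t2, 9)     # numerically identical to w1
```

Because the weights computed on either contrast coincide, weights estimated from an HR image of one contrast can be reused to reconstruct an LR image of the other.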
Medical Image Imputation from Image Collections
We present an algorithm for creating high resolution anatomically plausible
images consistent with acquired clinical brain MRI scans with large inter-slice
spacing. Although large data sets of clinical images contain a wealth of
information, time constraints during acquisition result in sparse scans that
fail to capture much of the anatomy. These characteristics often render
computational analysis impractical as many image analysis algorithms tend to
fail when applied to such images. Highly specialized algorithms that explicitly
handle sparse slice spacing do not generalize well across problem domains. In
contrast, we aim to enable application of existing algorithms that were
originally developed for high resolution research scans to significantly
undersampled scans. We introduce a generative model that captures fine-scale
anatomical structure across subjects in clinical image collections and derive
an algorithm for filling in the missing data in scans with large inter-slice
spacing. Our experimental results demonstrate that the resulting method
outperforms state-of-the-art upsampling super-resolution techniques, and
promises to facilitate subsequent analysis not previously possible with scans
of this quality. Our implementation is freely available at
https://github.com/adalca/papago. (Accepted at IEEE Transactions on Medical
Imaging, © 2018 IEEE.)
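For concreteness, the "large inter-slice spacing" problem can be set up as follows; linear interpolation between acquired slices is the naive baseline that a learned imputation model is designed to beat. This sketches the problem setup only, not the paper's generative algorithm:

```python
import numpy as np

def fill_missing_slices(vol, step):
    """Baseline imputation: linearly interpolate the missing slices
    between acquired ones (every `step`-th slice) along axis 0."""
    out = vol.astype(float).copy()
    acquired = np.arange(0, vol.shape[0], step)
    for z in range(vol.shape[0]):
        if z % step == 0:            # this slice was acquired
            continue
        lo = (z // step) * step      # nearest acquired slice below
        hi = min(lo + step, acquired[-1])
        t = (z - lo) / (hi - lo) if hi > lo else 0.0
        out[z] = (1 - t) * vol[lo] + t * vol[hi]
    return out
```

On anatomy that varies nonlinearly between slices, this baseline blurs fine structure, which is the gap the learned fine-scale anatomical model aims to close.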
Joint interpolation of multi-sensor sea surface geophysical fields using non-local and statistical priors
This work addresses the joint analysis of multi-source and multi-resolution remote sensing data for the interpolation of high-resolution geophysical fields. As a case-study application, we consider the interpolation of sea surface temperature (SST) fields. We propose a novel statistical model, which combines two key features: an exemplar-based prior and second-order statistical priors. The exemplar-based prior, referred to as a non-local prior, exploits similarities between local patches (small field regions) to interpolate missing data areas from previously observed exemplars. This non-local prior also sets an explicit conditioning between the multi-sensor data. Two complementary statistical priors, namely a prior on the spatial covariance and a prior on the marginal distribution of the high-resolution details, are considered, as sea surface geophysical fields are expected to exhibit specific spectral and marginal features in relation to the underlying turbulent ocean dynamics. We report experiments on both synthetic data and real SST data. These experiments demonstrate the contribution of the proposed combination of non-local and statistical priors to interpolating visually consistent and geophysically sound SST fields from multi-source satellite data. We further discuss the key features and parameterizations of this model, as well as its relevance with respect to classical interpolation techniques.
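A minimal 1D version of the exemplar-based (non-local) prior is to fill each gap from the centre of the best-matching fully observed patch elsewhere in the field. The real model combines this with the spectral and marginal priors and with multi-sensor conditioning; the code below is our simplification:

```python
import numpy as np

def nonlocal_fill(field, mask, r=2):
    """Fill samples flagged in `mask` by copying the centre of the most
    similar fully observed patch elsewhere in the field (exemplar-based,
    'non-local' interpolation). Assumes masked points lie at least r
    samples from the borders."""
    out = field.copy()
    # candidate exemplars: patch centres whose window has no missing data
    clean = [j for j in range(r, len(field) - r)
             if not mask[j - r:j + r + 1].any()]
    for i in np.where(mask)[0]:
        known = ~mask[i - r:i + r + 1]   # observed part of the target window
        target = field[i - r:i + r + 1]
        costs = [np.sum((field[j - r:j + r + 1][known] - target[known]) ** 2)
                 for j in clean]
        out[i] = field[clean[int(np.argmin(costs))]]
    return out

# A periodic field with one missing sample: the gap is filled from the
# matching phase of an earlier period.
field = np.tile(np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0]), 4)
mask = np.zeros(field.size, dtype=bool)
field[15], mask[15] = -99.0, True       # corrupt and flag one sample
filled = nonlocal_fill(field, mask)     # filled[15] == 3.0
```

Matching on the observed context around the gap is what makes the prior "non-local": the exemplar can come from anywhere in the field, or, as in the paper, from another sensor's observations.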
Multiple sparse representations classification
Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images, and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study, we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse-coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class-specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. So instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic which is used to improve classification. We demonstrate the efficacy of mSRC in three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods. In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and sparsity level.
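The conventional SRC decision rule can be sketched with a tiny orthogonal-matching-pursuit coder and two hand-made class dictionaries. This shows the single-representation variant only; mSRC would draw several independent sparse representations per dictionary and pool their residual energies:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k atoms (unit-norm
    columns of D) and least-squares fit y on the selected atoms;
    returns the final residual."""
    idx, r = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    return r

def src_classify(dictionaries, y, k=2):
    """SRC decision: sparse-code y with each class dictionary and return
    the class whose representation leaves the least residual energy."""
    residuals = [np.sum(omp(D, y, k) ** 2) for D in dictionaries]
    return int(np.argmin(residuals))

# Two toy class dictionaries spanning disjoint subspaces of R^8:
D0, D1 = np.eye(8)[:, :4], np.eye(8)[:, 4:]
patch = 1.0 * D0[:, 1] + 0.5 * D0[:, 2]   # generated by class-0 atoms
label = src_classify([D0, D1], patch)     # -> 0
```

Class 0's dictionary represents the patch with zero residual while class 1's cannot, so the minimum-residual rule picks class 0.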
Second-Order Regression-Based MR Image Upsampling
The spatial resolution of magnetic resonance imaging (MRI) is often limited for several reasons, including short data acquisition times. Several advanced interpolation-based image upsampling algorithms have been developed to increase the resolution of MR images. These methods estimate the voxel intensity in a high-resolution (HR) image by a weighted combination of voxels in the original low-resolution (LR) MR image. As these methods fall into the zero-order point estimation framework, they only include a local constant approximation of the image voxel and hence cannot fully represent the underlying image structure(s). To this end, we extend the existing zero-order point estimation to higher orders of regression, allowing us to approximate a mapping function between local LR-HR image patches by a polynomial function. Extensive experiments on open-access MR image datasets and actual clinical MR images demonstrate that our algorithm can maintain sharp edges and preserve fine details, while the current state-of-the-art algorithms remain prone to visual artifacts such as blurring and staircasing.
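The step from zero-order (weighted-average) to second-order estimation can be sketched as fitting a polynomial map from LR patches to the HR centre voxel. The patch size, feature set, and synthetic ground-truth relation below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def poly_features(patches):
    """Constant, first-order, and second-order terms of each LR patch.
    The `patches` block alone corresponds to the zero-order,
    weighted-average interpolation; the quadratic terms add curvature."""
    n = len(patches)
    quad = np.einsum('ni,nj->nij', patches, patches).reshape(n, -1)
    return np.concatenate([np.ones((n, 1)), patches, quad], axis=1)

def true_map(patches):
    """Synthetic LR -> HR relation used to generate training targets."""
    return patches @ np.array([0.2, 0.6, 0.2]) + 0.3 * patches[:, 1] ** 2

rng = np.random.default_rng(0)
train_lr = rng.normal(size=(200, 3))     # toy 3-sample LR patches
w, *_ = np.linalg.lstsq(poly_features(train_lr), true_map(train_lr),
                        rcond=None)

test_lr = rng.normal(size=(50, 3))
pred = poly_features(test_lr) @ w        # second-order upsampling estimate
```

A purely linear (zero-order) fit cannot capture the quadratic component of the toy relation, whereas the second-order features recover it exactly, mirroring the paper's argument for higher-order regression.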