71 research outputs found

    Self Super-Resolution for Magnetic Resonance Images using Deep Networks

    High resolution magnetic resonance (MR) imaging (MRI) is desirable in many clinical applications; however, there is a trade-off between resolution, speed of acquisition, and noise. It is common for MR images to have worse through-plane resolution (slice thickness) than in-plane resolution. In these images, high-frequency information in the through-plane direction is not acquired and cannot be recovered through interpolation. To address this issue, super-resolution methods have been developed to enhance spatial resolution. Because super-resolution is an ill-posed problem, state-of-the-art methods rely on external/training atlases to learn the transform from low-resolution (LR) images to high-resolution (HR) images. For several reasons, such HR atlas images are often not available for MRI sequences. This paper presents a self super-resolution (SSR) algorithm that does not use any external atlas images, yet can still produce an HR image relying only on the acquired LR image. We use a blurred version of the input image to create training data for a state-of-the-art super-resolution deep network. The trained network is applied to the original input image to estimate the HR image. Our SSR result shows a significant improvement in through-plane resolution compared to competing SSR methods. Comment: Accepted by IEEE International Symposium on Biomedical Imaging (ISBI) 201
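
    The core trick is that the acquired image is already sharp in-plane, so self-training pairs can be manufactured by blurring it further. Below is a minimal sketch of that idea, assuming a hypothetical srnet object for the deep SR network and a crude Gaussian model of the through-plane blur; the actual architecture and point-spread-function model from the paper are not reproduced here.

        # Sketch: blur a sharp in-plane axis to mimic the through-plane blur,
        # train on (blurred, original) pairs, then apply the trained network
        # along the low-resolution through-plane axis.
        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def make_self_training_pair(vol, slice_thickness, inplane_res, axis=0):
            """Create an (input, target) pair by degrading a sharp in-plane axis."""
            sigma = 0.5 * slice_thickness / inplane_res   # crude PSF width; an assumption
            blurred = gaussian_filter1d(vol, sigma=sigma, axis=axis)
            return blurred, vol

        # usage sketch (srnet and the through-plane axis index are hypothetical):
        # x, y = make_self_training_pair(lr_vol, slice_thickness=4.0, inplane_res=1.0)
        # srnet.fit(x, y)
        # hr_vol = srnet.apply(lr_vol, axis=2)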

    On Finite Difference Jacobian Computation in Deformable Image Registration

    Producing spatial transformations that are diffeomorphic has been a central problem in deformable image registration. As a diffeomorphic transformation should have a positive Jacobian determinant |J| everywhere, the number of voxels with |J| < 0 has been used to test for diffeomorphism and also to measure the irregularity of the transformation. For digital transformations, |J| is commonly approximated using the central difference, but this strategy can yield positive |J|'s for transformations that are clearly not diffeomorphic, even at the voxel resolution level. To show this, we first investigate the geometric meaning of different finite difference approximations of |J|. We show that to determine diffeomorphism for digital images, use of any individual finite difference approximation of |J| is insufficient. We show that for a 2D transformation, four unique finite difference approximations of |J| must be positive to ensure the entire domain is invertible and free of folding at the pixel level. We also show that in 3D, ten unique finite difference approximations of |J| are required to be positive. Our proposed digital diffeomorphism criteria solve several errors inherent in the central difference approximation of |J| and accurately detect non-diffeomorphic digital transformations.
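
    For reference, a small numpy sketch of central and one-sided finite difference Jacobian determinants for a 2D digital transformation (an H x W x 2 array of mapped coordinates). The four forward/backward combinations illustrate the kind of per-pixel checks the paper argues are jointly necessary; boundary handling (wrap-around here) and the exact correspondence to the paper's criteria are simplifications.

        import numpy as np

        def _det(phi, dx, dy):
            """2x2 Jacobian determinant from given x- and y-difference operators."""
            ux, uy = dx(phi[..., 0]), dy(phi[..., 0])
            vx, vy = dx(phi[..., 1]), dy(phi[..., 1])
            return ux * vy - uy * vx

        central  = lambda a, ax: (np.roll(a, -1, ax) - np.roll(a, 1, ax)) / 2.0
        forward  = lambda a, ax: np.roll(a, -1, ax) - a
        backward = lambda a, ax: a - np.roll(a, 1, ax)

        def central_jacobian(phi):
            return _det(phi, lambda a: central(a, 0), lambda a: central(a, 1))

        def one_sided_jacobians(phi):
            """All four forward/backward combinations; requiring every one of them
            to be positive at every pixel is stricter than the central difference."""
            ops = [forward, backward]
            return [_det(phi, lambda a, fx=fx: fx(a, 0), lambda a, fy=fy: fy(a, 1))
                    for fx in ops for fy in ops]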

    Coordinate Translator for Learning Deformable Medical Image Registration

    The majority of deep learning (DL) based deformable image registration methods use convolutional neural networks (CNNs) to estimate displacement fields from pairs of moving and fixed images. This, however, requires the convolutional kernels in the CNN to not only extract intensity features from the inputs but also understand image coordinate systems. We argue that the latter task is challenging for traditional CNNs, limiting their performance in registration tasks. To tackle this problem, we first introduce Coordinate Translator, a differentiable module that identifies matched features between the fixed and moving images and outputs their coordinate correspondences without the need for training. It unloads the burden of understanding image coordinate systems from the CNN, allowing it to focus on feature extraction. We then propose a novel deformable registration network, im2grid, that uses multiple Coordinate Translators with the hierarchical features extracted from a CNN encoder and outputs a deformation field in a coarse-to-fine fashion. We compared im2grid with state-of-the-art DL and non-DL methods for unsupervised 3D magnetic resonance image registration. Our experiments show that im2grid outperforms these methods both qualitatively and quantitatively.
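
    One plausible reading of the matching step, sketched below: compare fixed and moving feature maps and, for every fixed-image location, output a softmax-weighted (soft) coordinate in the moving image. The real im2grid module may differ in detail; this is only an illustration of a training-free, differentiable coordinate matcher.

        import torch
        import torch.nn.functional as F

        def coordinate_translator(feat_fixed, feat_moving, coords_moving, temperature=0.1):
            """
            feat_fixed:    (N, C) features at N fixed-image locations
            feat_moving:   (M, C) features at M moving-image locations
            coords_moving: (M, D) spatial coordinates of the moving-image locations
            returns:       (N, D) soft corresponding coordinates for each fixed location
            """
            sim = feat_fixed @ feat_moving.t() / temperature   # feature similarity
            weights = F.softmax(sim, dim=1)                    # match distribution per fixed location
            return weights @ coords_moving                     # expected matched coordinate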

    AniRes2D: Anisotropic Residual-enhanced Diffusion for 2D MR Super-Resolution

    Anisotropic low-resolution (LR) magnetic resonance (MR) images are fast to obtain but hinder automated processing. We propose to use denoising diffusion probabilistic models (DDPMs) to super-resolve these 2D-acquired LR MR slices. This paper introduces AniRes2D, a novel approach combining DDPM with residual prediction for 2D super-resolution (SR). Results demonstrate that AniRes2D outperforms several other DDPM-based models in quantitative metrics, visual quality, and out-of-domain evaluation. We use a trained AniRes2D to super-resolve 3D volumes slice by slice, achieving competitive quantitative results and reduced skull aliasing compared to a recent state-of-the-art self-supervised 3D super-resolution method. Furthermore, we explored the use of noise conditioning augmentation (NCA) as an alternative augmentation technique for DDPM-based SR models, but found that it reduces performance. Our findings contribute valuable insights to the application of DDPMs for SR of anisotropic MR images. Comment: Accepted for presentation at SPIE Medical Imaging 2024, Clinical and Biomedical Imaging
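
    A minimal sketch of the slice-by-slice application to a 3D anisotropic volume, assuming a hypothetical ddpm_sample callable that returns the learned high-frequency residual for an interpolated 2D slice; the residual is added back to the interpolation, which is the residual-prediction idea in simplified form.

        import numpy as np
        from scipy.ndimage import zoom

        def super_resolve_volume(lr_vol, scale, ddpm_sample):
            """lr_vol: (x, y, z) with coarse through-plane (z) resolution.
            Each x-slice is a 2D (y, z) image whose z axis is upsampled by `scale`."""
            out = []
            for sl in lr_vol:                              # iterate over 2D slices
                interp = zoom(sl, (1, scale), order=3)     # cubic interpolation along z
                out.append(interp + ddpm_sample(interp))   # add predicted residual (hypothetical call)
            return np.stack(out)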

    Applying an Open-Source Segmentation Algorithm to Different OCT Devices in Multiple Sclerosis Patients and Healthy Controls: Implications for Clinical Trials

    Background. The lack of segmentation algorithms operative across optical coherence tomography (OCT) platforms hinders the utility of retinal layer measures in MS trials. Objective. To determine cross-sectional and longitudinal agreement of retinal layer thicknesses derived from an open-source, fully-automated segmentation algorithm applied to two spectral-domain OCT devices. Methods. Cirrus HD-OCT and Spectralis OCT macular scans from 68 MS patients and 22 healthy controls were segmented. A longitudinal cohort comprising 51 subjects (mean follow-up: 1.4 ± 0.9 years) was also examined. Bland-Altman analyses and interscanner agreement indices were utilized to assess agreement between scanners. Results. Low mean differences (−2.16 to 0.26 μm) and narrow limits of agreement (LOA) were noted for ganglion cell and inner and outer nuclear layer thicknesses cross-sectionally. Longitudinally, we found low mean differences (−0.195 to 0.21 μm) for changes in all layers, with wider LOA. Comparisons of rates of change in layer thicknesses over time revealed consistent results between the platforms. Conclusions. Retinal thickness measures for the majority of the retinal layers agree well cross-sectionally and longitudinally between the two scanners at the cohort level, with greater variability at the individual level. This open-source segmentation algorithm enables combining data from different OCT platforms, broadening the utilization of OCT as an outcome measure in MS trials.
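
    For reference, the Bland-Altman quantities reported above are the mean difference (bias) between paired measurements from the two scanners and the 95% limits of agreement, bias ± 1.96 × SD of the differences; a minimal sketch:

        import numpy as np

        def bland_altman(cirrus_um, spectralis_um):
            """Return the bias and 95% limits of agreement for paired thicknesses (in μm)."""
            d = np.asarray(cirrus_um) - np.asarray(spectralis_um)
            bias = d.mean()
            sd = d.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)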

    Intensity Inhomogeneity Correction of SD-OCT Data Using Macular Flatspace

    Images of the retina acquired using optical coherence tomography (OCT) often suffer from intensity inhomogeneity problems that degrade both the quality of the images and the performance of automated algorithms utilized to measure structural changes. This intensity variation has many causes, including off-axis acquisition, signal attenuation, multi-frame averaging, and vignetting, making it difficult to correct the data in a fundamental way. This paper presents a method for inhomogeneity correction that reduces the variability of intensities within each layer. In particular, the N3 algorithm, which is popular in neuroimage analysis, is adapted to work for OCT data. N3 works by sharpening the intensity histogram, which reduces the variation of intensities within different classes. To apply it here, the data are first converted to a standardized space called macular flat space (MFS). MFS allows the intensities within each layer to be more easily normalized by removing the natural curvature of the retina. N3 is then run on the MFS data using a modified smoothing model, which improves the efficiency of the original algorithm. We show that our method more accurately corrects gain fields on synthetic OCT data when compared to running N3 on non-flattened data. It also reduces the overall variability of the intensities within each layer, without sacrificing contrast between layers, and improves the performance of registration between OCT images.
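
    A rough sketch of the flatten-correct-unflatten pipeline described above. Flattening shifts each A-scan (image column) so a reference retinal boundary lands on a common row, approximating macular flat space; n3_correct is a hypothetical stand-in for the adapted N3 histogram-sharpening step, which is not reproduced here.

        import numpy as np

        def flatten_to_mfs(bscan, boundary_rows):
            """Shift each column so the reference boundary sits on a single row."""
            target = int(np.median(boundary_rows))
            flat = np.empty_like(bscan)
            for col, row in enumerate(boundary_rows):
                flat[:, col] = np.roll(bscan[:, col], target - int(row))
            return flat, target

        def unflatten(flat, boundary_rows, target):
            out = np.empty_like(flat)
            for col, row in enumerate(boundary_rows):
                out[:, col] = np.roll(flat[:, col], int(row) - target)
            return out

        # usage sketch:
        # flat, t   = flatten_to_mfs(bscan, boundary_rows)  # boundary from a layer segmentation
        # corrected = n3_correct(flat)                      # hypothetical N3-style gain correction
        # restored  = unflatten(corrected, boundary_rows, t)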

    Optimal operating MR contrast for brain ventricle parcellation

    Advances in MR harmonization have enabled MR images of different contrasts to be synthesized while preserving the underlying anatomy. In this paper, we use image harmonization to explore the impact of different T1-w MR contrasts on a state-of-the-art ventricle parcellation algorithm, VParNet. We identify an optimal operating contrast (OOC) for ventricle parcellation and show that the performance of a pretrained VParNet can be boosted by adjusting the input contrast to the OOC.
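
    The operating-contrast idea reduces to a two-step pipeline at inference time, sketched below with hypothetical harmonize and vparnet callables standing in for the harmonization model and the pretrained VParNet.

        def parcellate_at_ooc(t1_image, harmonize, vparnet, ooc_params):
            """Harmonize an arbitrary T1-w image to the OOC, then parcellate."""
            harmonized = harmonize(t1_image, target_contrast=ooc_params)   # hypothetical API
            return vparnet(harmonized)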