
    Self Super-Resolution for Magnetic Resonance Images using Deep Networks

    High resolution magnetic resonance (MR) imaging (MRI) is desirable in many clinical applications; however, there is a trade-off between resolution, speed of acquisition, and noise. It is common for MR images to have worse through-plane resolution (slice thickness) than in-plane resolution. In these MR images, high frequency information in the through-plane direction is not acquired and cannot be recovered through interpolation. To address this issue, super-resolution methods have been developed to enhance spatial resolution. As super-resolution is an ill-posed problem, state-of-the-art methods rely on external/training atlases to learn the transform from low resolution (LR) images to high resolution (HR) images. For several reasons, such HR atlas images are often not available for MRI sequences. This paper presents a self super-resolution (SSR) algorithm, which does not use any external atlas images, yet can still resolve an HR image relying only on the acquired LR image. We use a blurred version of the input image to create training data for a state-of-the-art super-resolution deep network. The trained network is applied to the original input image to estimate the HR image. Our SSR result shows a significant improvement in through-plane resolution compared to competing SSR methods. Comment: Accepted by IEEE International Symposium on Biomedical Imaging (ISBI) 201
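    A minimal sketch, in Python, of the training-data construction described above: the acquired low-resolution volume is blurred further along the through-plane axis to form (input, target) patch pairs for an off-the-shelf super-resolution network. The function names, the Gaussian blur model, and all parameters are illustrative assumptions, not the paper's implementation.

    # Sketch only: build self-supervised SR training pairs from a single LR volume.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def make_ssr_training_pairs(lr_image, axis=2, blur_sigma=1.5,
                                patch_size=32, n_patches=1000, rng=None):
        """Create (input, target) patch pairs from one low-resolution volume.

        Target patches come from the acquired image itself; input patches come
        from a copy blurred along `axis` to simulate an even lower
        through-plane resolution (assumed blur model).
        """
        rng = np.random.default_rng(rng)
        blurred = gaussian_filter1d(lr_image.astype(np.float32),
                                    sigma=blur_sigma, axis=axis)

        inputs, targets = [], []
        for _ in range(n_patches):
            # Sample a random patch location that fits inside the volume.
            start = [rng.integers(0, s - patch_size + 1) for s in lr_image.shape]
            sl = tuple(slice(st, st + patch_size) for st in start)
            inputs.append(blurred[sl])
            targets.append(lr_image[sl].astype(np.float32))
        return np.stack(inputs), np.stack(targets)

    # The resulting pairs would train a standard SR network, which is then
    # applied to the original acquired volume to estimate the HR image.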

    New Brighton community: improving science communication to better community wellbeing and engagement.

    Due to the dynamic nature of the coastal environment, it is important to understand the community values associated with it in order to ensure their persistence and protection over time. As a natural barrier to flooding and sea level rise, dune systems can play a significant role in climate change adaptation for coastal communities. The combination of all four community wellbeings (cultural, economic, environmental, and social) is rare in the published literature compared to studies examining economic and/or environmental wellbeing. This results in a misconception that some values are less important to communities than others. This research focuses on the New Brighton community and highlights their values associated with, and perspectives of, the dune system. Furthermore, it aids in understanding the different methods of science communication that might work best for the community, from a community perspective. Results highlight the strong sense of place held by New Brighton community residents and visitors alike, and their valuing of all four community wellbeings: cultural, economic, environmental, and social. Furthermore, the results showcase the wide variety of science communication methods available and reveal a need for more social media/online presence and education as an effective form of science communication. This research has found that it is important to voice the perspectives and values held by the community, and to illustrate the science communication methods they want. It aids in understanding the New Brighton beach-user community and suggests ways to enhance or better tailor science communication for meaningful community engagement between local government, scientists, and the New Brighton community.

    On Finite Difference Jacobian Computation in Deformable Image Registration

    Producing spatial transformations that are diffeomorphic has been a central problem in deformable image registration. As a diffeomorphic transformation should have a positive Jacobian determinant |J| everywhere, the number of voxels with |J| < 0 has been used to test for diffeomorphism and also to measure the irregularity of the transformation. For digital transformations, |J| is commonly approximated using central differences, but this strategy can yield positive |J| values for transformations that are clearly not diffeomorphic -- even at the voxel resolution level. To show this, we first investigate the geometric meaning of different finite difference approximations of |J|. We show that to determine diffeomorphism for digital images, the use of any individual finite difference approximation of |J| is insufficient. We show that for a 2D transformation, four unique finite difference approximations of |J| must be positive to ensure the entire domain is invertible and free of folding at the pixel level. We also show that in 3D, ten unique finite difference approximations of |J| are required to be positive. Our proposed digital diffeomorphism criteria resolve several errors inherent in the central difference approximation of |J| and accurately detect non-diffeomorphic digital transformations.
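    The distinction between finite difference stencils can be made concrete with a small sketch. Below, |J| is approximated for a 2D transformation both with central differences and with one of the four forward/backward corner stencils; the array layout (axis 0 = y, axis 1 = x, channel 0/1 = x/y components) and function names are assumptions for illustration, not the paper's code.

    # Sketch only: finite-difference Jacobian determinants of a 2D transformation.
    import numpy as np

    def jacobian_det_central(phi):
        """Central-difference |J|. phi has shape (H, W, 2) with phi(x) = x + u(x)."""
        # Derivatives along rows (y) and columns (x) via central differences.
        dphi_dy, dphi_dx = np.gradient(phi, axis=(0, 1))
        return (dphi_dx[..., 0] * dphi_dy[..., 1]
                - dphi_dx[..., 1] * dphi_dy[..., 0])

    def jacobian_det_onesided(phi, sx=1, sy=1):
        """One of the four forward/backward corner stencils; sx, sy in {+1, -1}
        pick the direction (boundaries wrap via np.roll; trim edges in practice)."""
        dphi_dx = (np.roll(phi, -sx, axis=1) - phi) * sx
        dphi_dy = (np.roll(phi, -sy, axis=0) - phi) * sy
        return (dphi_dx[..., 0] * dphi_dy[..., 1]
                - dphi_dx[..., 1] * dphi_dy[..., 0])

    # In the spirit of the pixel-level folding check described above, one would
    # require all four one-sided determinants (sx, sy in {+1, -1}) to be
    # positive, rather than the central-difference value alone.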

    Coordinate Translator for Learning Deformable Medical Image Registration

    The majority of deep learning (DL) based deformable image registration methods use convolutional neural networks (CNNs) to estimate displacement fields from pairs of moving and fixed images. This, however, requires the convolutional kernels in the CNN to not only extract intensity features from the inputs but also understand image coordinate systems. We argue that the latter task is challenging for traditional CNNs, limiting their performance in registration tasks. To tackle this problem, we first introduce Coordinate Translator, a differentiable module that identifies matched features between the fixed and moving images and outputs their coordinate correspondences without the need for training. It unloads the burden of understanding image coordinate systems from the CNN, allowing it to focus on feature extraction. We then propose a novel deformable registration network, im2grid, that uses multiple Coordinate Translators with the hierarchical features extracted from a CNN encoder and outputs a deformation field in a coarse-to-fine fashion. We compared im2grid with state-of-the-art DL and non-DL methods for unsupervised 3D magnetic resonance image registration. Our experiments show that im2grid outperforms these methods both qualitatively and quantitatively.
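    One way to picture the matching idea in the module described above is cross-attention over a coordinate grid: fixed-image features attend to moving-image features, and the attention weights average the moving image's coordinates to give a correspondence per location. The sketch below is an interpretation of the abstract, not the authors' im2grid code, and all names and shapes are illustrative assumptions.

    # Sketch only: a Coordinate Translator-style correspondence module in 2D.
    import torch
    import torch.nn.functional as F

    def coordinate_translator_2d(fixed_feat, moving_feat):
        """fixed_feat, moving_feat: tensors of shape (B, C, H, W).
        Returns a correspondence map of shape (B, 2, H, W) with coordinates in
        [-1, 1], suitable for torch.nn.functional.grid_sample."""
        b, c, h, w = fixed_feat.shape

        # Queries from the fixed image, keys from the moving image.
        q = fixed_feat.flatten(2).transpose(1, 2)        # (B, H*W, C)
        k = moving_feat.flatten(2)                       # (B, C, H*W)
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)   # (B, H*W, H*W)

        # Normalized coordinate grid over the moving image.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack([xs, ys], dim=-1).reshape(1, h * w, 2).to(fixed_feat)

        # Soft correspondence: attention-weighted average of moving coordinates.
        coords = attn @ grid                             # (B, H*W, 2)
        return coords.transpose(1, 2).reshape(b, 2, h, w)

    # The resulting coordinates could be refined coarse-to-fine across encoder
    # levels and used with grid_sample to warp the moving image.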

    Shallow vs deep learning architectures for white matter lesion segmentation in the early stages of multiple sclerosis

    In this work, we present a comparison of a shallow and a deep learning architecture for the automated segmentation of white matter lesions in MR images of multiple sclerosis patients. In particular, we train and test both methods on early stage disease patients, to verify their performance in challenging conditions, more similar to a clinical setting than what is typically provided in multiple sclerosis segmentation challenges. Furthermore, we evaluate a prototype naive combination of the two methods, which refines the final segmentation. All methods were trained on 32 patients, and the evaluation was performed on a pure test set of 73 cases. Results show low lesion-wise false positives (30%) for the deep learning architecture, whereas the shallow architecture yields the best Dice coefficient (63%) and volume difference (19%). Combining both shallow and deep architectures further improves the lesion-wise metrics (69% and 26% lesion-wise true and false positive rates, respectively). Comment: Accepted to the MICCAI 2018 Brain Lesion (BrainLes) workshop
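    For reference, the sketch below computes the kinds of metrics reported above from binary masks. Lesion-wise definitions vary between studies; here a lesion is taken to be a connected component and any overlap counts as a detection, which is an assumption for illustration rather than the paper's exact protocol.

    # Sketch only: Dice and lesion-wise detection rates from binary masks.
    import numpy as np
    from scipy.ndimage import label

    def dice(pred, ref):
        """Voxel-wise Dice coefficient between two binary masks."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        denom = pred.sum() + ref.sum()
        return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

    def lesionwise_rates(pred, ref):
        """Lesion-wise true positive and false positive rates (as fractions)."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        ref_labels, n_ref = label(ref)
        pred_labels, n_pred = label(pred)

        # Reference lesions touched by any prediction.
        detected = sum(1 for i in range(1, n_ref + 1)
                       if pred[ref_labels == i].any())
        # Predicted lesions that touch no reference lesion.
        false_pos = sum(1 for j in range(1, n_pred + 1)
                        if not ref[pred_labels == j].any())

        return detected / max(n_ref, 1), false_pos / max(n_pred, 1)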

    Joint Segmentation and Uncertainty Visualization of Retinal Layers in Optical Coherence Tomography Images using Bayesian Deep Learning

    Optical coherence tomography (OCT) is commonly used to analyze retinal layers for the assessment of ocular diseases. In this paper, we propose a method for retinal layer segmentation and quantification of uncertainty based on Bayesian deep learning. Our method not only performs end-to-end segmentation of retinal layers but also gives a pixel-wise uncertainty measure of the segmentation output. The generated uncertainty map can be used to identify erroneously segmented image regions, which is useful in downstream analysis. We have validated our method on a dataset of 1487 images obtained from 15 subjects (OCT volumes) and compared it against state-of-the-art segmentation algorithms that do not take uncertainty into account. The proposed uncertainty-based segmentation method achieves comparable or improved performance, and most importantly, is more robust against noise.
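    The abstract does not state which Bayesian approximation is used; a common way to obtain such pixel-wise uncertainty is Monte Carlo dropout, sketched below for a generic segmentation network. The entropy-based uncertainty measure and all names are illustrative assumptions, not the paper's method.

    # Sketch only: pixel-wise segmentation uncertainty via Monte Carlo dropout.
    import torch

    @torch.no_grad()
    def mc_dropout_segmentation(model, image, n_samples=20):
        """Return a segmentation map and a per-pixel uncertainty map.

        image: tensor of shape (1, C, H, W); the model is assumed to output
        per-class logits of shape (1, K, H, W) and to contain dropout layers.
        """
        # Keep dropout active at inference time (in practice, switch only the
        # dropout layers to train mode to avoid touching batch-norm statistics).
        model.train()
        probs = torch.stack([
            torch.softmax(model(image), dim=1) for _ in range(n_samples)
        ])                                     # (n_samples, 1, K, H, W)

        mean_probs = probs.mean(dim=0)         # (1, K, H, W)
        # Predictive entropy as the per-pixel uncertainty measure.
        entropy = -(mean_probs * (mean_probs + 1e-8).log()).sum(dim=1)  # (1, H, W)
        segmentation = mean_probs.argmax(dim=1)                         # (1, H, W)
        return segmentation, entropy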

    Intensity Inhomogeneity Correction of SD-OCT Data Using Macular Flatspace

    Images of the retina acquired using optical coherence tomography (OCT) often suffer from intensity inhomogeneity problems that degrade both the quality of the images and the performance of automated algorithms used to measure structural changes. This intensity variation has many causes, including off-axis acquisition, signal attenuation, multi-frame averaging, and vignetting, making it difficult to correct the data in a fundamental way. This paper presents a method for inhomogeneity correction that reduces the variability of intensities within each layer. In particular, the N3 algorithm, which is popular in neuroimage analysis, is adapted to work for OCT data. N3 works by sharpening the intensity histogram, which reduces the variation of intensities within different classes. To apply it here, the data are first converted to a standardized space called macular flat space (MFS). MFS allows the intensities within each layer to be more easily normalized by removing the natural curvature of the retina. N3 is then run on the MFS data using a modified smoothing model, which improves the efficiency of the original algorithm. We show that our method more accurately corrects gain fields on synthetic OCT data when compared to running N3 on non-flattened data. It also reduces the overall variability of the intensities within each layer, without sacrificing contrast between layers, and improves the performance of registration between OCT images.
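    The flattening step can be pictured with a small sketch: each A-scan (image column) is shifted so that a chosen retinal boundary lies on one flat row, removing the retina's curvature before intensity correction. The choice of boundary and the interpolation details below are assumptions for illustration, not the paper's implementation.

    # Sketch only: flatten a B-scan so a reference boundary lies on one row.
    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def flatten_bscan(bscan, boundary_rows, target_row=None):
        """bscan: 2D array (depth, width); boundary_rows: for each column, the
        row index of a reference boundary (e.g., an estimated outer boundary)."""
        if target_row is None:
            target_row = int(np.median(boundary_rows))
        flat = np.empty_like(bscan, dtype=np.float32)
        for col in range(bscan.shape[1]):
            offset = target_row - boundary_rows[col]
            # Shift this A-scan up/down so the boundary lands on target_row.
            flat[:, col] = nd_shift(bscan[:, col].astype(np.float32),
                                    offset, order=1, mode="nearest")
        return flat

    # After flattening, each layer occupies a roughly horizontal band, which
    # makes a histogram-sharpening correction such as N3 easier to apply; the
    # inverse column shifts map the corrected data back to the original space.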