    FRNET: Flattened Residual Network for Infant MRI Skull Stripping

    Skull stripping for brain MR images is a basic segmentation task. Although many methods have been proposed, most focus mainly on adult MR images. Skull stripping for infant MR images is more challenging due to the small size and dynamic intensity changes of brain tissues during early development. In this paper, we propose a novel CNN-based framework to robustly extract the brain region from infant MR images without any human assistance. Specifically, we propose a simplified but more robust flattened residual network architecture (FRnet). We also introduce a new boundary loss function to highlight ambiguous, low-contrast areas between brain and non-brain regions. To make the whole framework more robust to MR images of different imaging quality, we further introduce an artifact simulator for data augmentation. We trained and tested the proposed framework on a large dataset (N=343), covering newborns to 48-month-olds, and obtained performance better than state-of-the-art methods in all age groups.
    Comment: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI)
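The boundary-loss idea can be illustrated with a small sketch: up-weight voxels near the brain/non-brain boundary so the loss concentrates on the ambiguous, low-contrast band. The weighting scheme and function names below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def boundary_band(mask):
    """Voxels whose value differs from an axis-neighbour, i.e. the mask boundary."""
    band = np.zeros(mask.shape, dtype=bool)
    for axis in range(mask.ndim):
        diff = np.diff(mask.astype(int), axis=axis) != 0
        lo = [slice(None)] * mask.ndim
        hi = [slice(None)] * mask.ndim
        lo[axis] = slice(0, -1)
        hi[axis] = slice(1, None)
        band[tuple(lo)] |= diff  # mark both sides of each transition
        band[tuple(hi)] |= diff
    return band

def boundary_weight_map(mask, boundary_weight=5.0):
    """Up-weight boundary voxels so the loss focuses on ambiguous regions."""
    w = np.ones(mask.shape)
    w[boundary_band(mask)] = boundary_weight
    return w

def weighted_bce(pred, target, weights, eps=1e-7):
    """Binary cross-entropy averaged with per-voxel weights."""
    pred = np.clip(pred, eps, 1.0 - eps)
    loss = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float((weights * loss).sum() / weights.sum())
```

In practice such a weight map would multiply the per-voxel loss inside the training loop of the segmentation network.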

    Deep Neural Networks for Anatomical Brain Segmentation

    We present a novel approach to automatically segmenting magnetic resonance (MR) images of the human brain into anatomical regions. Our methodology is based on a deep artificial neural network that assigns each voxel in a brain MR image to its corresponding anatomical region. The inputs of the network capture information at different scales around the voxel of interest: 3D and orthogonal 2D intensity patches capture the local spatial context, while large, compressed 2D orthogonal patches and distances to the regional centroids enforce global spatial consistency. Contrary to commonly used segmentation methods, our technique does not require any non-linear registration of the MR images. To benchmark our model, we used the dataset provided for the MICCAI 2012 challenge on multi-atlas labelling, which consists of 35 manually segmented MR images of the brain. We obtained competitive results (mean Dice coefficient 0.725, error rate 0.163), showing the potential of our approach. To our knowledge, our technique is the first to tackle anatomical segmentation of the whole brain using deep neural networks.
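The multi-scale input scheme can be sketched in a few lines: three orthogonal 2-D patches give local context, and distances to regional centroids encode global position. The helper names below (`orthogonal_patches`, `centroid_distances`) are hypothetical, not the authors' code:

```python
import numpy as np

def orthogonal_patches(vol, center, size=5):
    """Three orthogonal 2-D intensity patches around a voxel of interest."""
    z, y, x = center
    h = size // 2
    axial    = vol[z, y - h:y + h + 1, x - h:x + h + 1]
    coronal  = vol[z - h:z + h + 1, y, x - h:x + h + 1]
    sagittal = vol[z - h:z + h + 1, y - h:y + h + 1, x]
    return axial, coronal, sagittal

def centroid_distances(center, centroids):
    """Distances from the voxel to each regional centroid (global context)."""
    c = np.asarray(center, dtype=float)
    return np.linalg.norm(np.asarray(centroids, dtype=float) - c, axis=1)
```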

    Incorporating Relaxivities to More Accurately Reconstruct MR Images

    Purpose: To develop a mathematical model that incorporates the magnetic resonance relaxivities into the image reconstruction process in a single step. Materials and methods: In magnetic resonance imaging, the complex-valued measurements of the acquired signal at each point in frequency space are expressed as a Fourier transformation of the proton spin density weighted by Fourier encoding anomalies: T2*, T1, and a phase determined by magnetic field inhomogeneity (ΔB), according to the MR signal equation. Such anomalies alter the expected symmetry and signal strength of the k-space observations, resulting in images distorted by warping, blurring, and loss of intensity. Although the T1 tissue relaxation time provides valuable quantitative information on tissue characteristics, the T1 recovery term is typically neglected by assuming a long repetition time. In this study, the linear framework presented in the work of Rowe et al., 2007, and of Nencka et al., 2009 is extended to develop a Fourier reconstruction operation in terms of a real-valued isomorphism that incorporates the effects of T2*, ΔB, and T1. This framework precisely quantifies the statistical properties of the corrected image-space data by providing a linear relationship between the observed frequency-space measurements and the reconstructed, corrected image-space measurements. The model is illustrated both on theoretical data generated by considering T2*, T1, and/or ΔB effects, and on experimentally acquired fMRI data, focusing on the incorporation of T1. A comparison is also made between the activation statistics computed from the reconstructed data with and without the incorporation of T1 effects. Results: Accounting for T1 effects in image reconstruction is shown to recover image contrast that exists prior to T1 equilibrium. The incorporation of T1 is also shown to induce negligible correlation in the reconstructed images and to preserve functional activations.
    Conclusion: With the proposed method, the effects of T2* and ΔB can be corrected, and T1 can be incorporated into the time-series image-space data during image reconstruction in a single step. Incorporating T1 improves tissue segmentation over the course of the time series and can therefore improve the precision of motion correction and image registration.
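The real-valued isomorphism underlying this framework replaces each complex k-space weight a + bi with the 2x2 real matrix [[a, -b], [b, a]] acting on the real 2-vector [real, imag], keeping the whole reconstruction linear over the reals. The sketch below uses illustrative function names and toy parameters, not the paper's implementation:

```python
import numpy as np

def as_real_matrix(z):
    """Real-valued 2x2 representation of the complex scalar z = a + bi."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

def encode_sample(rho, t, t2_star, delta_b_phase, t1, tr):
    """One k-space sample of spin density rho, weighted by T2* decay, a
    Delta-B off-resonance phase, and a T1 recovery factor, computed
    entirely as a real linear operation on [real, imag]."""
    w = np.exp(-t / t2_star) * np.exp(1j * delta_b_phase * t)  # T2* decay + phase
    w *= 1.0 - np.exp(-tr / t1)                                # T1 recovery (real scalar)
    v = np.array([rho.real, rho.imag])
    return as_real_matrix(w) @ v
```

The isomorphism is exact: multiplying the matrices reproduces complex multiplication, which is why the statistical properties of the corrected data can be tracked through a single linear operator.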

    Wireless Accelerometer for Neonatal MRI Motion Artifact Correction

    A wireless accelerometer has been used in conjunction with a dedicated 3T neonatal MRI system installed on a Neonatal Intensive Care Unit to measure in-plane rotation, a common problem in neonatal MRI. Rotational data were acquired from phantoms in real time, simultaneously with MR images, showing that the wireless accelerometer can be used in close proximity to the MR system. No artifacts from the accelerometer were observed on the MR images, nor from the MR system on the accelerometer output. Initial attempts to correct the raw data using the measured rotation angles have been made, but further work is required to produce a robust correction algorithm.
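By the Fourier rotation theorem, an in-plane rotation of the object rotates k-space by the same angle, so a measured rotation angle can in principle be undone by counter-rotating the sampling grid. The nearest-neighbour regridding below is a simplified illustration of that principle, not the authors' correction algorithm:

```python
import numpy as np

def counter_rotate_kspace(k, angle_rad):
    """Nearest-neighbour regrid of a 2-D k-space onto axes counter-rotated
    by the measured in-plane rotation angle (a sketch, not a full NUFFT)."""
    n = k.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    cos_a, sin_a = np.cos(angle_rad), np.sin(angle_rad)
    # source coordinates: rotate the target grid to sample the original k-space
    sy = cos_a * (ys - c) + sin_a * (xs - c) + c
    sx = -sin_a * (ys - c) + cos_a * (xs - c) + c
    sy = np.clip(np.round(sy).astype(int), 0, n - 1)
    sx = np.clip(np.round(sx).astype(int), 0, n - 1)
    return k[sy, sx]
```

A robust method would use proper gridding or a non-uniform FFT and apply the per-readout angle to each acquired k-space line, which is where the remaining work described above lies.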

    Attenuation Correction for Brain PET Imaging Using Deep Neural Network Based on Dixon and ZTE MR Images

    Positron Emission Tomography (PET) is a functional imaging modality widely used in neuroscience studies. To obtain meaningful quantitative results from PET images, attenuation correction is necessary during image reconstruction. For PET/MR hybrid systems, PET attenuation correction is challenging because Magnetic Resonance (MR) images do not reflect attenuation coefficients directly. To address this issue, we present deep neural network methods to derive continuous attenuation coefficients for brain PET imaging from MR images. With only Dixon MR images as the network input, the existing U-net structure was adopted, and analysis of forty patient data sets shows it is superior to other Dixon-based methods. When both Dixon and zero echo time (ZTE) images are available, we propose a modified U-net structure, named GroupU-net, to efficiently use both Dixon and ZTE information through group convolution modules as the network goes deeper. Quantitative analysis based on fourteen real patient data sets demonstrates that both network approaches perform better than the standard methods, and that the proposed network structure further reduces the PET quantification error compared to the U-net structure.
    Comment: 15 pages, 12 figures
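The group convolution idea behind GroupU-net can be shown on a 1x1 convolution: channels are partitioned into groups, each output group mixes only its own input group, and the parameter count drops by the group count. A minimal numpy sketch (illustrative only, not the paper's GroupU-net code):

```python
import numpy as np

def group_conv1x1(x, weights, groups=2):
    """1x1 group convolution. x: (c_in, H, W); weights: (c_out, c_in // groups).
    Each output group sees only its own input group, so a full 1x1 conv's
    c_in * c_out weights shrink to c_in * c_out / groups."""
    c_in = x.shape[0]
    c_out = weights.shape[0]
    gi, go = c_in // groups, c_out // groups
    out = np.empty((c_out,) + x.shape[1:])
    for g in range(groups):
        w_g = weights[g * go:(g + 1) * go]   # (go, gi) weights for this group
        x_g = x[g * gi:(g + 1) * gi]         # (gi, H, W) input channels for this group
        out[g * go:(g + 1) * go] = np.tensordot(w_g, x_g, axes=([1], [0]))
    return out
```

In a two-branch setting like Dixon + ZTE, grouping keeps the two modalities' feature channels separate in the deeper layers while still sharing one network.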