
    Spatial fuzzy c-means thresholding for semiautomated calculation of percentage lung ventilated volume from hyperpolarized gas and 1H MRI

    Purpose: To develop an image-processing pipeline for semiautomated (SA) and reproducible analysis of hyperpolarized gas lung ventilation and proton anatomical magnetic resonance imaging (MRI) scan pairs, and to compare the software's results for total lung volume (TLV), ventilated volume (VV), and percentage lung ventilated volume (%VV) against the current manual "basic" method and a K-means segmentation method.
    Materials and Methods: Six patients were imaged with hyperpolarized 3He and same-breath lung 1H MRI at 1.5T, and six other patients were scanned with hyperpolarized 129Xe and separate-breath 1H MRI. One expert observer and two users with experience in lung image segmentation carried out the image analysis. Spearman correlations (R), intraclass correlations (ICC), Bland–Altman limits of agreement (LOA), and Dice similarity coefficients (DSC) between output lung volumes were calculated.
    Results: When comparing values of %VV, agreement between observers improved nonsignificantly using the SA method (mean: R = 0.984, ICC = 0.980, LOA = 7.5%) compared with the basic method (mean: R = 0.863, ICC = 0.873, LOA = 14.2%) (pR = 0.25, pICC = 0.25, and pLOA = 0.50, respectively). DSC of VV and TLV masks improved significantly (P < 0.01) using the SA method (mean: DSCVV = 0.973, DSCTLV = 0.980) compared with the basic method (mean: DSCVV = 0.947, DSCTLV = 0.957). K-means systematically overestimated %VV relative to both the basic (mean overestimation = 5.0%) and SA methods (mean overestimation = 9.7%), and had poor agreement with the other methods (mean ICC: K-means vs. basic = 0.685, K-means vs. SA = 0.740).
    Conclusion: Semiautomated image-processing software was developed that improves interobserver agreement and correlation of percentage lung ventilated volume compared with the currently used basic method, and provides more consistent segmentations than the K-means method.
    Level of Evidence: 3. Technical Efficacy: Stage
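    The study's headline biomarker reduces to a ratio of two mask volumes, %VV = VV / TLV × 100. A minimal NumPy sketch (function and variable names are illustrative, not the paper's pipeline):

```python
import numpy as np

def percent_ventilated_volume(ventilation_mask, lung_mask):
    """%VV = VV / TLV * 100, from a binary gas-MRI ventilation mask and a
    binary 1H anatomical lung mask defined on the same voxel grid."""
    vv = np.count_nonzero(ventilation_mask)   # ventilated volume (voxels)
    tlv = np.count_nonzero(lung_mask)         # total lung volume (voxels)
    return 100.0 * vv / tlv

# toy masks: 10-voxel lung, 8 of them ventilated -> %VV = 80.0
lung = np.ones((2, 5), dtype=bool)
vent = lung.copy()
vent[0, :2] = False                           # two ventilation defects
print(percent_ventilated_volume(vent, lung))  # 80.0
```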

    HeMIS: Hetero-Modal Image Segmentation

    We introduce a deep learning image segmentation framework that is extremely robust to missing imaging modalities. Instead of attempting to impute or synthesize missing data, the proposed approach learns, for each modality, an embedding of the input image into a single latent vector space for which arithmetic operations (such as taking the mean) are well defined. Points in that space, which are averaged over modalities available at inference time, can then be further processed to yield the desired segmentation. As such, any combinatorial subset of available modalities can be provided as input, without having to learn a combinatorial number of imputation models. Evaluated on two neurological MRI datasets (brain tumors and MS lesions), the approach yields state-of-the-art segmentation results when provided with all modalities; moreover, its performance degrades remarkably gracefully when modalities are removed, significantly more so than alternative mean-filling or other synthesis approaches. Comment: Accepted as an oral presentation at MICCAI 2016.
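    The fusion step described above can be sketched in a few lines: embed each available modality, then average the embeddings, so any subset of inputs yields a single fused feature map. A toy NumPy illustration (names and shapes are assumptions, not the HeMIS implementation, which also uses the variance across modalities):

```python
import numpy as np

def fuse_modalities(embeddings):
    """Hetero-modal fusion: average the per-modality embeddings that are
    present at inference time. `embeddings` maps modality name -> feature
    array, or None when that scan is missing. Because the mean is defined
    for any non-empty subset, no per-subset imputation model is needed."""
    available = [e for e in embeddings.values() if e is not None]
    if not available:
        raise ValueError("at least one modality is required")
    return np.stack(available).mean(axis=0)   # first moment over modalities

feats = {
    "T1":    np.array([1.0, 2.0, 3.0]),
    "T2":    np.array([3.0, 2.0, 1.0]),
    "FLAIR": None,                            # missing at test time
}
print(fuse_modalities(feats))                 # [2. 2. 2.]
```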

    Shallow vs deep learning architectures for white matter lesion segmentation in the early stages of multiple sclerosis

    In this work, we present a comparison of a shallow and a deep learning architecture for the automated segmentation of white matter lesions in MR images of multiple sclerosis patients. In particular, we train and test both methods on early stage disease patients, to verify their performance in challenging conditions, more similar to a clinical setting than what is typically provided in multiple sclerosis segmentation challenges. Furthermore, we evaluate a prototype naive combination of the two methods, which refines the final segmentation. All methods were trained on 32 patients, and the evaluation was performed on a pure test set of 73 cases. Results show low lesion-wise false positives (30%) for the deep learning architecture, whereas the shallow architecture yields the best Dice coefficient (63%) and volume difference (19%). Combining both shallow and deep architectures further improves the lesion-wise metrics (69% and 26% lesion-wise true and false positive rate, respectively). Comment: Accepted to the MICCAI 2018 Brain Lesion (BrainLes) workshop.
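    The Dice coefficient and volume difference reported above are straightforward to compute from binary masks. A hedged NumPy sketch (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volume_difference(pred, ref):
    """Absolute volume difference as a fraction of the reference volume."""
    return abs(int(pred.sum()) - int(ref.sum())) / ref.sum()

ref  = np.array([1, 1, 1, 1, 0, 0], dtype=bool)
pred = np.array([1, 1, 1, 0, 1, 0], dtype=bool)
print(dice(pred, ref))               # 0.75 (3 overlapping of 4+4 voxels)
print(volume_difference(pred, ref))  # 0.0  (both masks have 4 voxels)
```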

    3D deep convolutional neural network-based ventilated lung segmentation using multi-nuclear hyperpolarized gas MRI

    Hyperpolarized gas MRI enables visualization of regional lung ventilation with high spatial resolution. Segmentation of the ventilated lung is required to calculate clinically relevant biomarkers. Recent research in deep learning (DL) has shown promising results for numerous segmentation problems. In this work, we evaluate a 3D V-Net to segment ventilated lung regions on hyperpolarized gas MRI scans. The dataset consists of 743 helium-3 (3He) or xenon-129 (129Xe) volumetric scans and corresponding expert segmentations from 326 healthy subjects and patients with a wide range of pathologies. We evaluated segmentation performance for several DL experimental methods via overlap, distance, and error metrics and compared them to conventional segmentation methods, namely, spatial fuzzy c-means (SFCM) and K-means clustering. We observed that training on combined 3He and 129Xe MRI scans outperformed other DL methods, achieving a mean ± SD Dice of 0.958 ± 0.022, average boundary Hausdorff distance of 2.22 ± 2.16 mm, Hausdorff 95th percentile of 8.53 ± 12.98 mm and relative error of 0.087 ± 0.049. Moreover, no difference in performance was observed between 129Xe and 3He scans in the testing set. Combined training on 129Xe and 3He yielded statistically significant improvements over the conventional methods (p < 0.0001). The DL approach evaluated provides accurate, robust and rapid segmentations of ventilated lung regions, successfully excludes non-lung regions such as the airways and noise artifacts, and is expected to eliminate the need for, or significantly reduce, subsequent time-consuming manual editing.
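    The Hausdorff 95th percentile quoted above can be illustrated with a brute-force NumPy sketch over two boundary point sets; real pipelines use distance transforms or KD-trees for speed, and all names here are illustrative:

```python
import numpy as np

def hausdorff_percentile(points_a, points_b, q=95):
    """Symmetric q-th percentile Hausdorff distance between two point sets
    (e.g. surface voxels of a predicted and an expert mask). O(|A||B|),
    so only suitable for small toy sets."""
    # pairwise Euclidean distances, shape (|A|, |B|)
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)   # each point in A to its nearest point in B
    b_to_a = d.min(axis=0)   # each point in B to its nearest point in A
    return np.percentile(np.concatenate([a_to_b, b_to_a]), q)

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [1.0, 1.0]])
print(hausdorff_percentile(a, b, q=100))  # 1.0 (q=100 is the classic Hausdorff)
```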

    Retinal blood vessels extraction using probabilistic modelling

    © 2014 Kaba et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This article has been made available through the Brunel Open Access Publishing Fund.
    The analysis of retinal blood vessels plays an important role in detecting and treating retinal diseases. In this paper, we present an automated method to segment the blood vessels of fundus retinal images. The proposed method could be used to support a non-intrusive diagnosis in modern ophthalmology for early detection of retinal diseases, treatment evaluation or clinical study. This study combines bias correction and adaptive histogram equalisation to enhance the appearance of the blood vessels. The blood vessels are then extracted using probabilistic modelling that is optimised by the expectation maximisation algorithm. The method is evaluated on fundus retinal images of the STARE and DRIVE datasets. The experimental results are compared with some recently published methods of retinal blood vessel segmentation. The experimental results show that our method achieved the best overall performance and is comparable to the performance of human experts.
    The Department of Information Systems, Computing and Mathematics, Brunel University
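    The expectation-maximisation step can be illustrated with a minimal two-component 1D Gaussian mixture fitted to pixel intensities; this is a simplified stand-in for the paper's probabilistic model, with all names and parameters assumed:

```python
import numpy as np

def em_two_gaussians(x, n_iter=50):
    """Fit a two-component 1D Gaussian mixture to intensities with EM.
    Returns (means, stds, weights, responsibilities of component 1),
    where responsibilities can be thresholded to label e.g. vessel pixels."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])          # spread the initial means
    sigma = np.array([x.std() + 1e-6] * 2)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each pixel
        pdf = (pi / (sigma * np.sqrt(2 * np.pi))) * \
              np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        pi = nk / len(x)
    return mu, sigma, pi, r[:, 1]

# synthetic intensities: background near 0.1, bright vessels near 0.9
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.1, 0.02, 200), rng.normal(0.9, 0.02, 50)])
mu, sigma, pi, resp = em_two_gaussians(x)
print(np.sort(mu))   # approximately [0.1, 0.9]
```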

    Automatic Segmentation of Muscle Tissue and Inter-muscular Fat in Thigh and Calf MRI Images

    Magnetic resonance imaging (MRI) of thigh and calf muscles is one of the most effective techniques for estimating fat infiltration in muscular dystrophies. The infiltration of adipose tissue into the diseased muscle region varies in its severity across, and within, patients. In order to efficiently quantify the infiltration of fat, accurate segmentation of muscle and fat is needed. An estimation of the amount of infiltrated fat is typically done visually by experts. Several algorithmic solutions have been proposed for automatic segmentation. While these methods may work well in mild cases, they struggle in moderate and severe cases due to the high variability in the intensity of infiltration and the tissue's heterogeneous nature. To address these challenges, we propose a deep-learning approach, producing robust results with high Dice Similarity Coefficients (DSC) of 0.964, 0.917 and 0.933 for muscle-region, healthy muscle and inter-muscular adipose tissue (IMAT) segmentation, respectively. Comment: 9 pages, 4 figures, 2 tables, MICCAI 2019, the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention.
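    Reporting a separate DSC per tissue, as above, amounts to computing Dice once per label of a multi-class label map. A small illustrative NumPy sketch (label values and names are assumed, not taken from the paper):

```python
import numpy as np

def per_class_dice(pred, ref, labels):
    """Per-tissue Dice for integer label maps, e.g. 0 = background,
    1 = healthy muscle, 2 = IMAT. Returns {label: DSC}."""
    scores = {}
    for lab in labels:
        p, r = (pred == lab), (ref == lab)
        denom = p.sum() + r.sum()
        scores[lab] = 2.0 * np.logical_and(p, r).sum() / denom if denom else 1.0
    return scores

ref  = np.array([0, 1, 1, 2, 2, 2])
pred = np.array([0, 1, 2, 2, 2, 2])
print(per_class_dice(pred, ref, labels=[1, 2]))  # {1: 0.666..., 2: 0.857...}
```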