    Automated brain lesion segmentation in magnetic resonance images

    In this thesis, we investigate the potential of automation in brain lesion segmentation in magnetic resonance images. We first develop a novel supervised method, which segments regions in magnetic resonance images using gated recurrent units, provided training data with pixel-wise annotations of the structures to segment is available. We improve on this method using the latest technical advances in the field of machine learning and insights into possible weaknesses of our method, and adapt it specifically for the task of lesion segmentation in the brain. We show the feasibility of our approach on multiple public benchmarks, consistently reaching positions at the top of the list of competing methods. By successfully adapting our method to the problem of landmark localization, we show the generalizability of the approach. Moving away from large training cohorts with manual segmentations to data where it is only known that a certain pathology is present, we propose a weakly-supervised segmentation approach. Given a set of images with a known pathology of a certain kind and a healthy reference set, our formulation can segment the difference between the two data distributions. Lastly, we show how information from already existing lesion maps can be extracted in a meaningful way by connecting lesions across time in longitudinal studies. We hence present a full tool set for the automated processing of lesions in magnetic resonance images.
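
    The supervised core of the thesis builds on multi-dimensional gated recurrent units that sweep over the image volume and emit a class score for every pixel. The following minimal sketch illustrates that idea only and is not the thesis code: an assumed PyTorch convolutional GRU cell (all layer names and sizes are illustrative) is swept along a single spatial axis of an MR volume, whereas the full method combines sweeps over multiple axes and directions.

    import torch
    import torch.nn as nn

    class ConvGRUCell(nn.Module):
        # A gated recurrent unit whose gates are convolutions, so the hidden
        # state keeps a spatial layout instead of being a flat vector.
        def __init__(self, in_ch, hid_ch, k=3):
            super().__init__()
            self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=k // 2)  # update + reset gates
            self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)       # candidate hidden state

        def forward(self, x, h):
            z, r = torch.chunk(torch.sigmoid(self.gates(torch.cat([x, h], dim=1))), 2, dim=1)
            h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
            return (1 - z) * h + z * h_tilde

    class SliceSweepSegmenter(nn.Module):
        # Sweeps the ConvGRU along the slice axis and classifies every pixel.
        def __init__(self, in_ch=1, hid_ch=16, n_classes=2):
            super().__init__()
            self.cell = ConvGRUCell(in_ch, hid_ch)
            self.classify = nn.Conv2d(hid_ch, n_classes, kernel_size=1)

        def forward(self, volume):                         # volume: (B, C, D, H, W)
            b, c, d, h, w = volume.shape
            hidden = volume.new_zeros(b, self.cell.cand.out_channels, h, w)
            logits = []
            for s in range(d):                             # one slice at a time
                hidden = self.cell(volume[:, :, s], hidden)
                logits.append(self.classify(hidden))
            return torch.stack(logits, dim=2)              # (B, n_classes, D, H, W)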

    Pathology Segmentation using Distributional Differences to Images of Healthy Origin

    Fully supervised segmentation methods require a large training cohort of already segmented images, providing information at the pixel level of each image. We present a method to automatically segment and model pathologies in medical images, trained solely on data labelled at the image level as either healthy or containing a visual defect. We base our method on CycleGAN, an image-to-image translation technique, to translate images between the domains of healthy and pathological images. We extend the core idea with two key contributions. Implementing the generators as residual generators allows us to explicitly model the segmentation of the pathology. Realizing the translation from the healthy to the pathological domain using a variational autoencoder allows us to specify one representation of the pathology, as this transformation is otherwise not unique. Our model hence not only allows us to create pixelwise semantic segmentations, but is also able to create inpaintings for the segmentations to render the pathological image healthy. Furthermore, we can draw new, unseen pathology samples from this model based on the distribution in the data. We show quantitatively that our method is able to segment pathologies with surprising accuracy, being only slightly inferior to a state-of-the-art fully supervised method, even though the latter is trained with per-pixel rather than per-image information. Moreover, we show qualitative results of both the segmentations and the inpaintings. Our findings motivate further research into weakly-supervised segmentation using image-level annotations, allowing for faster and cheaper acquisition of training data without a large sacrifice in segmentation accuracy.
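
    To make the residual-generator idea concrete, here is a minimal sketch under assumed details (PyTorch; layer sizes and names are illustrative, and the variational autoencoder used for the reverse, healthy-to-pathological direction is omitted): the generator predicts an additive residual, so the estimated healthy image and a pixelwise pathology map fall out of the same forward pass.

    import torch
    import torch.nn as nn

    class ResidualGenerator(nn.Module):
        # Pathological -> healthy translation as an additive residual, so the
        # pathology segmentation is modelled explicitly by the residual itself.
        def __init__(self, channels=1, width=32):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, channels, 3, padding=1),
            )

        def forward(self, pathological):
            residual = self.body(pathological)           # additive change that removes the pathology
            healthy_estimate = pathological + residual   # inpainted "healthy" version of the input
            segmentation = residual.abs()                # pixelwise evidence of pathology
            return healthy_estimate, segmentation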

    MRI lung lobe segmentation in pediatric cystic fibrosis patients using a recurrent neural network trained with publicly accessible CT datasets

    PURPOSE To introduce a widely applicable workflow for pulmonary lobe segmentation of MR images using a recurrent neural network (RNN) trained with chest CT datasets. The feasibility is demonstrated for 2D coronal ultrafast balanced SSFP (ufSSFP) MRI. METHODS Lung lobes of 250 publicly accessible CT datasets of adults were segmented with an open-source CT-specific algorithm. To match 2D ufSSFP MRI data of pediatric patients, both CT data and segmentations were translated into pseudo-MR images that were masked to suppress anatomy outside the lung. Network-1 was trained with pseudo-MR images and lobe segmentations and then applied to 1000 masked ufSSFP images to predict lobe segmentations. These outputs were directly used as targets to train Network-2 and Network-3 with non-masked ufSSFP data as inputs, with an additional whole-lung mask as input for Network-2. Network predictions were compared to reference manual lobe segmentations of ufSSFP data in 20 pediatric cystic fibrosis patients. Manual lobe segmentations were performed by splitting available whole-lung segmentations into lobes. RESULTS Network-1 was able to segment the lobes of ufSSFP images, and Network-2 and Network-3 further increased segmentation accuracy and robustness. The average all-lobe Dice similarity coefficients were 95.0 ± 2.8, 96.4 ± 2.5, and 93.0 ± 2.0 (mean ± pooled SD [%]), and the average median Hausdorff distances were 6.1 ± 0.9, 5.3 ± 1.1, and 7.1 ± 1.3 (mean ± SD [mm]) for Network-1, Network-2, and Network-3, respectively. CONCLUSION Recurrent neural network lung lobe segmentation of 2D ufSSFP imaging is feasible and in good agreement with manual segmentations. The proposed workflow might provide access to automated lobe segmentations for various lung MRI examinations and quantitative analyses.
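
    The reported per-lobe agreement is summarized by Dice similarity coefficients between predicted and manual lobe maps. As an illustration only (NumPy, assuming the five lobes are stored as integer labels 1-5 with 0 as background; this is not the study's evaluation code), per-lobe Dice scores could be computed as follows.

    import numpy as np

    def lobe_dice(pred, ref, labels=(1, 2, 3, 4, 5)):
        # Dice similarity coefficient per lobe label between two label maps.
        scores = {}
        for lab in labels:
            p, r = pred == lab, ref == lab
            denom = p.sum() + r.sum()
            scores[lab] = 2.0 * np.logical_and(p, r).sum() / denom if denom else np.nan
        return scores

    # Usage with hypothetical arrays: lobe_dice(network2_prediction, manual_segmentation)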

    Standardized Assessment of Automatic Segmentation of White Matter Hyperintensities and Results of the WMH Segmentation Challenge

    Quantification of cerebral white matter hyperintensities (WMH) of presumed vascular origin is of key importance in many neurological research studies. Currently, measurements are often still obtained from manual segmentations on brain MR images, which is a laborious procedure. Automatic WMH segmentation methods exist, but a standardized comparison of their performance is lacking. We organized a scientific challenge, in which developers could evaluate their methods on a standardized multi-center/-scanner image dataset, giving an objective comparison: the WMH Segmentation Challenge. Sixty T1 + FLAIR images from three MR scanners were released with manual WMH segmentations for training. A test set of 110 images from five MR scanners was used for evaluation. The segmentation methods had to be containerized and submitted to the challenge organizers. Five evaluation metrics were used to rank the methods: 1) Dice similarity coefficient; 2) modified Hausdorff distance (95th percentile); 3) absolute log-transformed volume difference; 4) sensitivity for detecting individual lesions; and 5) F1-score for individual lesions. In addition, the methods were ranked on their inter-scanner robustness. Twenty participants submitted their methods for evaluation. This paper provides a detailed analysis of the results. In brief, there is a cluster of four methods that rank significantly better than the other methods, with one clear winner. The inter-scanner robustness ranking shows that not all methods generalize to unseen scanners. The challenge remains open for future submissions and provides a public platform for method evaluation.
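
    As an illustration of how such lesion-level evaluation can be set up (this is not the official challenge code; the function names, the connected-component definition of "individual lesions", and the +1 guard against empty segmentations are assumptions), three of the five metrics could be sketched as follows.

    import numpy as np
    from scipy import ndimage

    def dice(pred, ref):
        # Voxel-wise Dice similarity coefficient between two binary masks.
        pred, ref = pred.astype(bool), ref.astype(bool)
        denom = pred.sum() + ref.sum()
        return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

    def abs_log_volume_difference(pred, ref):
        # Absolute difference of log-transformed lesion volumes (voxel counts);
        # the +1 is an assumed guard against empty segmentations.
        return abs(np.log(pred.sum() + 1.0) - np.log(ref.sum() + 1.0))

    def lesionwise_recall_f1(pred, ref):
        # Treat connected components as individual lesions: recall counts
        # reference lesions touched by the prediction, precision counts
        # predicted lesions that touch the reference.
        ref_cc, n_ref = ndimage.label(ref)
        pred_cc, n_pred = ndimage.label(pred)
        detected = sum(pred[ref_cc == lab].any() for lab in range(1, n_ref + 1))
        claimed = sum(ref[pred_cc == lab].any() for lab in range(1, n_pred + 1))
        recall = detected / n_ref if n_ref else 1.0
        precision = claimed / n_pred if n_pred else 1.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        return recall, f1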

    Multi-Dimensional Gated Recurrent Units for the Segmentation of Biomedical 3D-Data
