
    Simultaneous lesion and neuroanatomy segmentation in Multiple Sclerosis using deep neural networks

    Segmentation of both white matter lesions and deep grey matter structures is an important task in the quantification of magnetic resonance imaging in multiple sclerosis. Typically these tasks are performed separately; in this paper we present a single segmentation solution based on convolutional neural networks (CNNs) that provides fast, reliable segmentations of multimodal magnetic resonance images into lesion classes and normal-appearing grey- and white-matter structures. We show substantial, statistically significant improvements in Dice coefficient and in lesion-wise sensitivity and specificity compared to previous approaches, and agreement with individual human raters within the range of human inter-rater variability. The method is trained on data gathered from a single centre; nonetheless, it performs well on data from centres, scanners and field strengths not represented in the training dataset. A retrospective study found that the classifier successfully identified lesions missed by the human raters. Lesion labels were provided by human raters, while weak labels for other brain structures (including CSF, cortical grey matter, cortical white matter, cerebellum, amygdala, hippocampus, subcortical GM structures and choroid plexus) were provided by FreeSurfer 5.3. The segmentations of these structures compared well not only with FreeSurfer 5.3 but also with FSL-First and FreeSurfer 6.0.
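
    As a hedged illustration of the single-network design described above (not the authors' released code), the sketch below shows a deliberately small multi-class CNN that consumes stacked multimodal MRI channels and emits per-voxel logits over lesion plus neuroanatomy classes, trained with a single loss over all classes; the channel and class counts are assumptions.

    ```python
    # Minimal sketch, assuming a PyTorch setup (NOT the authors' code):
    # a small multi-class CNN taking stacked multimodal MRI channels and
    # emitting per-voxel logits over lesion + neuroanatomy classes.
    import torch
    import torch.nn as nn

    N_MODALITIES = 2   # assumption: e.g. T1w + FLAIR stacked as input channels
    N_CLASSES = 10     # assumption: lesion class + weakly labelled structures

    class TinySegNet(nn.Module):
        """Deliberately small stand-in for the paper's CNN."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(N_MODALITIES, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            )
            self.classifier = nn.Conv2d(32, N_CLASSES, 1)  # 1x1 conv -> logits

        def forward(self, x):
            return self.classifier(self.features(x))

    model = TinySegNet()
    x = torch.randn(1, N_MODALITIES, 128, 128)      # one multimodal slice
    y = torch.randint(0, N_CLASSES, (1, 128, 128))  # labels (weak for structures)
    loss = nn.CrossEntropyLoss()(model(x), y)       # one loss over all classes
    loss.backward()
    ```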

    Uncertainty-driven refinement of tumor-core segmentation using 3D-to-2D networks with label uncertainty

    The BraTS dataset contains a mixture of high-grade and low-grade gliomas, which have rather different appearances: previous studies have shown that performance can be improved by training separately on low-grade gliomas (LGGs) and high-grade gliomas (HGGs), but in practice this information is not available at test time to decide which model to use. By contrast with HGGs, LGGs often present no sharp boundary between the tumor core and the surrounding edema, but rather a gradual reduction of tumor-cell density. Using our 3D-to-2D fully convolutional architecture, DeepSCAN, which ranked highly in the 2019 BraTS challenge and was trained with an uncertainty-aware loss, we separate cases into those with a confidently segmented core and those with a vaguely segmented or missing core. Since by assumption every tumor has a core, we reduce the threshold for classification of core tissue in those cases where the core, as segmented by the classifier, is vaguely defined or missing. We then predict survival of high-grade glioma patients using a fusion of linear regression and random forest classification, based on age, number of distinct tumor components, and number of distinct tumor cores. We present results on the validation dataset of the Multimodal Brain Tumor Segmentation Challenge 2020 (segmentation and uncertainty challenge), and on the testing set, where the method achieved 4th place in segmentation, 1st place in uncertainty estimation, and 1st place in survival prediction. Comment: Presented (virtually) at the MICCAI BrainLes workshop 2020; accepted for publication in the BrainLes proceedings.
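
    A minimal sketch of the core-refinement rule described above (not the authors' code): if too few voxels pass the default core threshold, the core is treated as vaguely defined or missing and the threshold is lowered. All threshold and size constants are illustrative assumptions.

    ```python
    # Hedged sketch of uncertainty-driven tumor-core refinement (NOT the
    # authors' code); constants are illustrative assumptions.
    import numpy as np

    def refine_core(core_prob, default_thr=0.5, fallback_thr=0.25, min_voxels=50):
        """core_prob: per-voxel tumor-core probability from the classifier."""
        core = core_prob > default_thr
        if core.sum() >= min_voxels:       # confidently segmented core: keep it
            return core
        # vaguely segmented or missing core: relax the decision threshold,
        # since by assumption every tumor has a core
        return core_prob > fallback_thr

    core_prob = np.random.rand(64, 64, 64)  # stand-in for a softmax output
    core_mask = refine_core(core_prob)
    ```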

    CortexMorph: fast cortical thickness estimation via diffeomorphic registration using VoxelMorph

    The thickness of the cortical band is linked to various neurological and psychiatric conditions and is often estimated through surface-based methods such as FreeSurfer in MRI studies. The DiReCT method, which calculates cortical thickness using a diffeomorphic deformation of the gray-white matter interface towards the pial surface, offers an alternative to surface-based methods. Recent studies using a synthetic cortical thickness phantom have demonstrated that the combination of DiReCT and deep-learning-based segmentation is more sensitive to subvoxel cortical thinning than FreeSurfer. While anatomical segmentation of a T1-weighted image now takes seconds, existing implementations of DiReCT rely on iterative image registration methods which can take up to an hour per volume. On the other hand, learning-based deformable image registration methods like VoxelMorph have been shown to be faster than classical methods while improving registration accuracy. This paper proposes CortexMorph, a new method that employs unsupervised deep learning to directly regress the deformation field needed for DiReCT. By combining CortexMorph with a deep-learning-based segmentation model, it is possible to estimate region-wise thickness in seconds from a T1-weighted image, while maintaining the ability to detect cortical atrophy. We validate this claim on the OASIS-3 dataset and the synthetic cortical thickness phantom of Rusak et al. Comment: Accepted (early acceptance) at MICCAI 2023.
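
    As a rough, hedged sketch of the idea (not the CortexMorph implementation): once a network has regressed the displacement field that carries the grey-white interface towards the pial surface, a DiReCT-style thickness at an interface voxel can be approximated by the norm of its displacement vector. The field, mask and voxel size below are synthetic assumptions.

    ```python
    # Hedged sketch of reading thickness off a regressed deformation field
    # (NOT the CortexMorph code); all data here are synthetic stand-ins.
    import numpy as np

    D, H, W = 32, 32, 32
    disp = np.random.randn(3, D, H, W) * 0.5     # stand-in regressed field (voxels)
    interface = np.zeros((D, H, W), dtype=bool)  # stand-in grey-white interface mask
    interface[16, 10:20, 10:20] = True

    voxel_size_mm = 1.0                          # assumption: isotropic 1 mm voxels
    thickness_mm = np.linalg.norm(disp, axis=0) * voxel_size_mm
    print(f"mean cortical thickness ~ {thickness_mm[interface].mean():.2f} mm")
    ```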

    Generalizable deep learning based medical image segmentation

    Deep learning is revolutionizing medical image analysis and interpretation. However, its real-world deployment is often hindered by poor generalization to unseen domains (new imaging modalities and protocols). This lack of generalization ability is further exacerbated by the scarcity of labeled datasets for training: data collection and annotation can be prohibitively expensive in terms of labor and cost, because label quality depends heavily on the expertise of radiologists. Additionally, unreliable predictions caused by poor model generalization pose safety risks to downstream clinical applications. To mitigate labeling requirements, we investigate and develop a series of techniques to strengthen the generalization ability and data efficiency of deep medical image computing models. We further improve model accountability and identify unreliable predictions made on out-of-domain data by designing probability calibration techniques.

    In the first and second parts of the thesis, we discuss two types of problems for handling unexpected domains: unsupervised domain adaptation and single-source domain generalization. For domain adaptation we present a data-efficient technique that adapts a segmentation model trained on a labeled source domain (e.g., MRI) to an unlabeled target domain (e.g., CT), using a small number of unlabeled training images from the target domain. For domain generalization, we focus on both image reconstruction and segmentation. For image reconstruction, we design a simple and effective domain generalization technique for cross-domain MRI reconstruction by reusing image representations learned from natural image datasets. For image segmentation, we perform a causal analysis of the challenging cross-domain image segmentation problem. Guided by this analysis, we propose an effective data-augmentation-based generalization technique for single-source domains. The proposed method outperforms existing approaches on a large variety of cross-domain image segmentation scenarios.

    In the third part of the thesis, we present a novel self-supervised method for learning generic image representations that can be used to analyze unexpected objects of interest. The proposed method is designed together with a novel few-shot image segmentation framework that can segment unseen objects of interest by taking only a few labeled examples as references. Our few-shot framework demonstrates superior flexibility over conventional fully-supervised models: it does not require any fine-tuning on novel objects of interest. We further build a publicly available comprehensive evaluation environment for few-shot medical image segmentation.

    In the fourth part of the thesis, we present a novel probability calibration model. To ensure safety in clinical settings, a deep model is expected to be able to alert human radiologists when it has low confidence, especially when confronted with out-of-domain data. To this end we present a plug-and-play model that calibrates prediction probabilities on out-of-domain data, aligning the predicted probability with the actual accuracy on the test data. We evaluate our method on both artifact-corrupted images and images from an unforeseen MRI scanning protocol. Our method demonstrates improved calibration accuracy compared with the state-of-the-art method.

    Finally, we summarize the major contributions and limitations of our work, and suggest future research directions that will benefit from the work in this thesis.
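
    The thesis' own calibration model is not reproduced here; as a generic stand-in for post-hoc probability calibration, the sketch below uses temperature scaling (Guo et al., 2017): fit one scalar T on held-out logits so that softmax(logits / T) better matches the observed accuracy. The data are synthetic placeholders.

    ```python
    # Generic post-hoc calibration sketch via temperature scaling -- a
    # stand-in technique, NOT the thesis' calibration model.
    import torch

    def fit_temperature(logits, labels):
        """Fit a single scalar T minimizing NLL of softmax(logits / T)."""
        T = torch.ones(1, requires_grad=True)
        opt = torch.optim.LBFGS([T], lr=0.01, max_iter=100)
        nll = torch.nn.CrossEntropyLoss()

        def closure():
            opt.zero_grad()
            loss = nll(logits / T.clamp(min=1e-3), labels)
            loss.backward()
            return loss

        opt.step(closure)
        return T.detach()

    logits = torch.randn(256, 4)          # held-out (possibly out-of-domain) logits
    labels = torch.randint(0, 4, (256,))
    T = fit_temperature(logits, labels)
    calibrated = torch.softmax(logits / T, dim=-1)
    ```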

    Triplanar 3D-to-2D networks with dense connections and dilated convolutions: application to the KITS 2019 challenge

    We describe a method for the segmentation of kidneys and kidney tumors in computed tomography imaging, based on the KITS 2019 challenge dataset.
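
    A hedged sketch of the "3D-to-2D" building block named in the title (not the authors' code): a 3D convolution with no depth padding collapses a thin slab to a single slice of 2D features, followed by dilated 2D convolutions; triplanar inference would repeat this along axial, coronal and sagittal views. Kernel sizes, dilations and channel counts are assumptions.

    ```python
    # Hedged sketch of a 3D-to-2D block with dilated convolutions (NOT the
    # authors' code); sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ThreeDToTwoD(nn.Module):
        def __init__(self, slab=5, ch=16):
            super().__init__()
            # depth kernel == slab depth, zero depth padding -> output depth 1
            self.reduce = nn.Conv3d(1, ch, kernel_size=(slab, 3, 3),
                                    padding=(0, 1, 1))
            self.dilated = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=4, dilation=4), nn.ReLU(),
            )

        def forward(self, x):               # x: (B, 1, slab, H, W)
            f = self.reduce(x).squeeze(2)   # -> (B, ch, H, W)
            return self.dilated(f)

    slab = torch.randn(1, 1, 5, 96, 96)     # thin CT slab around one slice
    features = ThreeDToTwoD()(slab)         # 2D feature map for that slice
    ```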

    Reliable brain morphometry from contrast-enhanced T1w-MRI in patients with multiple sclerosis.

    Brain morphometry is usually based on non-enhanced (pre-contrast) T1-weighted MRI. However, such dedicated protocols are sometimes missing in clinical examinations. Instead, an image with a contrast agent is often available. Existing tools such as FreeSurfer yield unreliable results when applied to contrast-enhanced (CE) images. Consequently, these acquisitions are excluded from retrospective morphometry studies, which reduces the sample size. We hypothesize that deep learning (DL)-based morphometry methods can extract morphometric measures also from contrast-enhanced MRI. We have extended DL+DiReCT to cope with contrast-enhanced MRI. Training data for our DL-based model were enriched with non-enhanced and CE image pairs from the same session. The segmentations were derived with FreeSurfer from the non-enhanced image and used as ground truth for the coregistered CE image. A longitudinal dataset of patients with multiple sclerosis (MS), comprising relapsing-remitting (RRMS) and primary progressive (PPMS) subgroups, was used for the evaluation. Global and regional cortical thickness derived from non-enhanced and CE images were compared with results from FreeSurfer. Correlation coefficients of global mean cortical thickness between non-enhanced and CE images were significantly larger with DL+DiReCT (r = 0.92) than with FreeSurfer (r = 0.75). When comparing longitudinal atrophy rates between the two MS subgroups, the effect sizes between PPMS and RRMS were larger with DL+DiReCT for both non-enhanced (d = -0.304) and CE images (d = -0.169) than with FreeSurfer (non-enhanced d = -0.111, CE d = 0.085). In conclusion, brain morphometry can be derived reliably from contrast-enhanced MRI using DL-based morphometry tools, making additional cases available for analysis and potentially enabling future diagnostic morphometry tools.
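
    A minimal sketch of the training-data enrichment described above (not the DL+DiReCT code): FreeSurfer labels derived from the non-enhanced image of a session serve as ground truth for both that image and the coregistered CE image. File names below are hypothetical.

    ```python
    # Hedged sketch of pairing non-enhanced and CE images with shared
    # FreeSurfer-derived labels (NOT the DL+DiReCT code).
    import random

    def build_training_pairs(sessions):
        """sessions: dicts with 't1' (non-enhanced), 't1ce' (CE, coregistered
        to 't1') and 'labels' (FreeSurfer segmentation of 't1')."""
        pairs = []
        for s in sessions:
            pairs.append((s["t1"], s["labels"]))    # standard training sample
            pairs.append((s["t1ce"], s["labels"]))  # CE image reuses the labels
        random.shuffle(pairs)
        return pairs

    sessions = [{"t1": "sub-01_T1w.nii.gz",          # hypothetical file names
                 "t1ce": "sub-01_ce-gad_T1w.nii.gz",
                 "labels": "sub-01_aparc+aseg.nii.gz"}]
    print(build_training_pairs(sessions))
    ```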