28 research outputs found

    Triplanar 3D-to-2D networks with dense connections and dilated convolutions: application to the KITS 2019 challenge

    We describe a method for the segmentation of kidneys and kidney tumors in computed tomography images, using the KITS 2019 challenge dataset.
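
    The abstract names two architectural ingredients, triplanar 3D-to-2D processing and densely connected dilated convolutions, without giving implementation details. The sketch below (PyTorch) is illustrative only, not the authors' code: it shows what a densely connected block of 2D convolutions with increasing dilation rates can look like when applied to a single CT plane. The class name and hyperparameters are assumptions made for the example.

    import torch
    import torch.nn as nn

    class DilatedDenseBlock(nn.Module):
        """Densely connected 2D convolutions with increasing dilation rates.

        Each layer receives the concatenation of all previous feature maps
        (dense connectivity) and enlarges its receptive field via dilation."""

        def __init__(self, in_channels, growth=16, dilations=(1, 2, 4)):
            super().__init__()
            self.layers = nn.ModuleList()
            channels = in_channels
            for d in dilations:
                self.layers.append(nn.Sequential(
                    nn.Conv2d(channels, growth, kernel_size=3,
                              padding=d, dilation=d),
                    nn.BatchNorm2d(growth),
                    nn.ReLU(inplace=True),
                ))
                channels += growth  # dense connection widens the next input

        def forward(self, x):
            features = [x]
            for layer in self.layers:
                features.append(layer(torch.cat(features, dim=1)))
            return torch.cat(features, dim=1)

    # Applied to one axial CT slice (batch of 1, single channel); a triplanar
    # network would run such blocks on axial, coronal, and sagittal views and
    # fuse the resulting per-plane features.
    block = DilatedDenseBlock(in_channels=1)
    axial_slice = torch.randn(1, 1, 256, 256)
    print(block(axial_slice).shape)  # torch.Size([1, 49, 256, 256])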

    T1-Weighted MRI Image Segmentation

    Interest in the development of automated image analysis techniques for medical imaging has grown in recent years, particularly in magnetic resonance imaging. T1-weighted MRI scans are commonly used for the diagnosis and monitoring of neurological disorders, making accurate segmentation of these images crucial for effective treatment planning. In this work, we propose a new method for T1-weighted MRI image segmentation based on Patch DenseNet, a deep learning architecture designed specifically for image segmentation. Our method aims to improve the accuracy and efficiency of segmentation while addressing some of the challenges of traditional approaches, which typically rely on handcrafted features and may fail to capture the intricate details present in MRI images. By using Patch DenseNet, our method automatically learns and extracts relevant features from T1-weighted MRI images, further enhancing the accuracy and specificity of the segmentation results. Ultimately, we believe that our proposed approach can substantially improve the diagnosis and treatment planning process for neurological disorders.
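
    The abstract describes a patch-based network without implementation details, so the following is only a generic sketch of the patch-wise inference pattern it implies: slide a cubic window over a T1-weighted volume, run the trained network on each patch, and average overlapping predictions back into a full probability map. The callable predict_patch is a placeholder standing in for the trained model; names and patch sizes are assumptions.

    import numpy as np

    def segment_by_patches(volume, predict_patch, patch=32, stride=16):
        """Patch-wise inference: run the model on overlapping cubic windows
        and average the per-voxel probabilities where windows overlap."""
        probs = np.zeros_like(volume, dtype=np.float32)
        counts = np.zeros_like(volume, dtype=np.float32)
        D, H, W = volume.shape
        for z in range(0, D - patch + 1, stride):
            for y in range(0, H - patch + 1, stride):
                for x in range(0, W - patch + 1, stride):
                    window = volume[z:z+patch, y:y+patch, x:x+patch]
                    probs[z:z+patch, y:y+patch, x:x+patch] += predict_patch(window)
                    counts[z:z+patch, y:y+patch, x:x+patch] += 1.0
        return probs / np.maximum(counts, 1.0)

    # Placeholder model: any callable mapping a patch to per-voxel foreground
    # probabilities of the same shape.
    dummy_model = lambda w: np.full_like(w, 0.5, dtype=np.float32)
    t1_volume = np.random.randn(64, 64, 64).astype(np.float32)
    segmentation = segment_by_patches(t1_volume, dummy_model) > 0.5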

    Vox2Vox: 3D-GAN for Brain Tumour Segmentation

    Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histological sub-regions, i.e., peritumoral edema, necrotic core, enhancing and non-enhancing tumour core. Although brain tumours can easily be detected using multi-modal MRI, accurate tumour segmentation is a challenging task. Hence, using the data provided by the BraTS Challenge 2020, we propose a 3D volume-to-volume Generative Adversarial Network for segmentation of brain tumours. The model, called Vox2Vox, generates realistic segmentation outputs from multi-channel 3D MR images, segmenting the whole tumour, tumour core, and enhancing tumour with mean Dice scores of 87.20%, 81.14%, and 78.67% and 95th-percentile Hausdorff distances of 6.44 mm, 24.36 mm, and 18.95 mm on the BraTS testing set, after ensembling 10 Vox2Vox models obtained with 10-fold cross-validation.
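
    As a minimal sketch of the evaluation and ensembling described above (not the Vox2Vox code), the snippet below shows a per-region Dice overlap and a simple average-then-threshold ensemble over cross-validation models. The helper names are hypothetical.

    import numpy as np

    def dice_score(pred, target, eps=1e-7):
        """Dice overlap between two binary masks, computed per region
        (e.g. whole tumour, tumour core, enhancing tumour)."""
        pred, target = pred.astype(bool), target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    def ensemble_predict(models, volume, threshold=0.5):
        """Average the soft outputs of the cross-validation models, then
        threshold once to obtain the final binary mask."""
        soft = np.mean([m(volume) for m in models], axis=0)
        return soft > threshold

    # Smoke test with two dummy "models" that return constant probability maps.
    dummy_models = [lambda v: np.full(v.shape, 0.7), lambda v: np.full(v.shape, 0.4)]
    mask = ensemble_predict(dummy_models, np.zeros((8, 8, 8)))
    print(mask.mean(), dice_score(mask, mask))  # 1.0 1.0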

    Are we using appropriate segmentation metrics? Identifying correlates of human expert perception for CNN training beyond rolling the DICE coefficient

    In this study, we explore quantitative correlates of qualitative human expert perception. We discover that current quality metrics and loss functions, considered for biomedical image segmentation tasks, correlate only moderately with segmentation quality assessment by experts, especially for small yet clinically relevant structures such as the enhancing tumor in brain glioma. We propose a method employing classical statistics and experimental psychology to create complementary compound loss functions for modern deep learning methods, towards achieving a better fit with human quality assessment. When training a CNN for delineating adult brain tumors in MR images, all four proposed loss candidates outperform the established baselines on the clinically important and hardest-to-segment enhancing tumor label, while maintaining performance for the other label channels.
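
    The abstract does not spell out which terms enter the proposed compound losses, so the sketch below is only a generic illustration of the idea: blending a region-overlap term (soft Dice) with a voxel-wise term (binary cross-entropy) under a tunable weight. The function names and the 0.5 weight are assumptions, not the paper's choices.

    import torch
    import torch.nn.functional as F

    def soft_dice_loss(logits, target, eps=1e-6):
        """1 minus the soft Dice overlap of the predicted foreground map."""
        probs = torch.sigmoid(logits)
        num = 2.0 * (probs * target).sum() + eps
        den = probs.sum() + target.sum() + eps
        return 1.0 - num / den

    def compound_loss(logits, target, weight=0.5):
        """Weighted blend of a region-overlap term and a voxel-wise term."""
        bce = F.binary_cross_entropy_with_logits(logits, target)
        return weight * soft_dice_loss(logits, target) + (1.0 - weight) * bce

    # One backward pass on random tensors shaped like a 3D training patch.
    logits = torch.randn(1, 1, 32, 32, 32, requires_grad=True)
    target = (torch.rand(1, 1, 32, 32, 32) > 0.8).float()
    compound_loss(logits, target).backward()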