10 research outputs found

    Automatically Segmenting the Left Atrium from Cardiac Images Using Successive 3D U-Nets and a Contour Loss

    Radiological imaging offers effective measurement of anatomy, which is useful in disease diagnosis and assessment. Previous studies have shown that left atrial wall remodeling can provide information to predict treatment outcome in atrial fibrillation. Nevertheless, segmentation of the left atrial structures from medical images is still very time-consuming. Recent advances in neural networks may help create automatic segmentation models that reduce the workload for clinicians. In this preliminary study, we propose an automated, two-stage, three-dimensional convolutional neural network based on successive 3D U-Nets for the challenging task of left atrial segmentation. Unlike previous two-dimensional image segmentation methods, we use 3D U-Nets to obtain the heart cavity directly in 3D. The dual 3D U-Net structure consists of a first U-Net that coarsely segments and locates the left atrium, and a second U-Net that accurately segments the left atrium at higher resolution. In addition, we introduce a Contour loss based on additional distance information to adjust the final segmentation. We randomly split the data into training (80 subjects) and validation (20 subjects) sets to train multiple models with different augmentation settings. Experiments show that the average Dice coefficient on the validation sets is around 0.91-0.92, the sensitivity around 0.90-0.94 and the specificity around 0.99. Compared with the traditional Dice loss, models trained with the Contour loss generally yield a smaller Hausdorff distance at a similar Dice coefficient and produce fewer connected components in their predictions. Finally, we combine several trained models in an ensemble prediction to segment the testing datasets.
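    The Contour loss is described only at a high level above; as a rough illustration, the sketch below combines a Dice term with a distance-based penalty computed from a signed distance map of the reference mask (PyTorch/SciPy). The weighting factor, function names and exact formulation are assumptions, not the authors' code.

```python
# Hypothetical sketch of a Dice + distance-based contour loss; the weighting
# and the exact formulation are assumptions, not the paper's implementation.
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Signed Euclidean distance to the mask boundary (negative inside)."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(1 - mask)
    return outside - inside

def dice_loss(probs: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

def contour_loss(probs: torch.Tensor, target_sdm: torch.Tensor) -> torch.Tensor:
    """Penalise predicted foreground lying far outside the reference surface."""
    return (probs * target_sdm).mean()

def total_loss(probs, target, target_sdm, weight: float = 0.01) -> torch.Tensor:
    # 'weight' is an assumed hyper-parameter balancing the two terms.
    return dice_loss(probs, target) + weight * contour_loss(probs, target_sdm)
```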

    A Deep Learning based Fast Signed Distance Map Generation

    The signed distance map (SDM) is a common representation of surfaces in medical image analysis and machine learning. The computational cost of generating SDMs for 3D parametric shapes is often a bottleneck, limiting their use in many applications. In this paper, we propose a learning-based SDM generation neural network, demonstrated on a three-dimensional cochlea shape model parameterized by four shape parameters. The proposed SDM neural network generates a cochlea signed distance map from these four input parameters, and we show that the deep learning approach yields a 60-fold reduction in computation time compared with more classical SDM generation methods. The proposed approach therefore achieves a good trade-off between accuracy and efficiency.
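    As an illustration only, the sketch below shows one plausible PyTorch decoder that maps four shape parameters to a 3D signed distance map; the layer sizes and the 64^3 output grid are assumptions, not the architecture reported in the paper.

```python
# Hypothetical decoder from 4 shape parameters to a 64^3 signed distance map.
import torch
import torch.nn as nn

class SDMGenerator(nn.Module):
    def __init__(self, n_params: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_params, 256), nn.ReLU(),
            nn.Linear(256, 256 * 4 * 4 * 4), nn.ReLU(),
        )
        # Four stride-2 transposed convolutions upsample 4^3 -> 64^3.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),  # linear output: signed distances
        )

    def forward(self, params: torch.Tensor) -> torch.Tensor:
        x = self.fc(params).view(-1, 256, 4, 4, 4)
        return self.decoder(x)

# Usage: SDMGenerator()(torch.randn(1, 4)).shape == (1, 1, 64, 64, 64)
```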

    Estimation of imaging biomarker's progression in post-infarct patients using cross-sectional data

    Many uncertainties remain about the relation between post-infarct scars and ventricular arrhythmia. Most post-infarct patients suffer scar-related arrhythmia several years after the infarct event, suggesting that scar remodeling is a process that may require years before the affected tissue becomes arrhythmogenic. In clinical practice, a simple time-based rule is often used to assess risk and stratify patients. In other cases, left ventricular ejection fraction (LVEF) impairment is also taken into account, but it is known to be suboptimal. More information is needed to better stratify patients and prescribe appropriate individualized treatments. In this paper we propose to use probabilistic disease progression modeling to obtain an image-based, data-driven description of the infarct maturation process. Our approach includes monotonic constraints in order to impose a regular behaviour on the biomarkers' trajectories. Forty-nine post-MI patients underwent Computed Tomography (CT) and Late Gadolinium Enhanced Cardiac Magnetic Resonance (LGE-CMR) scans. Image-derived biomarkers were computed, such as LVEF, LGE-CMR scar volume, fat volume, and the size of areas with different degrees of left ventricular wall narrowing, from moderate to severe. We show that the model is able to estimate a plausible progression of post-infarct scar maturation. According to our results, there is a progressive thinning process observable only with CT imaging; intramural fat appears at a late stage; LGE-CMR scar volume remains almost unchanged; and LVEF changes only slightly during the scar maturation process.
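    To make the monotonic-constraint idea concrete, the toy below fits a sigmoid trajectory whose slope is forced positive (via an exponential reparameterisation) to synthetic cross-sectional biomarker values. It is only an illustration of a monotone trajectory fit, not the authors' probabilistic disease progression model.

```python
# Toy monotone (increasing) sigmoid trajectory fitted to synthetic
# cross-sectional data; not the probabilistic model used in the paper.
import numpy as np
from scipy.optimize import minimize

def trajectory(t, theta):
    offset, log_scale, log_slope, midpoint = theta
    scale, slope = np.exp(log_scale), np.exp(log_slope)  # both positive -> monotone increasing
    return offset + scale / (1.0 + np.exp(-slope * (t - midpoint)))

def fit_trajectory(stages, values):
    """Least-squares fit of the monotone sigmoid to (stage, value) samples."""
    def loss(theta):
        return np.mean((trajectory(stages, theta) - values) ** 2)
    init = np.array([values.min(), np.log(np.ptp(values) + 1e-6), 0.0, np.median(stages)])
    return minimize(loss, init, method="Nelder-Mead").x

# Synthetic example: 49 "patients" observed once each at a random disease stage.
rng = np.random.default_rng(0)
stages = rng.uniform(0.0, 10.0, 49)
values = 2.0 / (1.0 + np.exp(-(stages - 5.0))) + rng.normal(0.0, 0.1, 49)
theta = fit_trajectory(stages, values)
```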

    Style Data Augmentation for Robust Segmentation of Multi-Modality Cardiac MRI

    We propose a data augmentation method to improve the segmentation accuracy of convolutional neural networks on multi-modality cardiac magnetic resonance (CMR) datasets. The strategy aims to reduce over-fitting of the network to any specific intensity or contrast of the training images by introducing diversity in these two aspects. The style data augmentation (SDA) strategy increases the size of the training dataset by applying multiple image processing functions, including adaptive histogram equalisation, Laplacian transformation, Sobel edge detection, intensity inversion and histogram matching. For the segmentation task, we developed the thresholded connection layer network (TCL-Net), a minimalist rendition of the U-Net architecture designed to reduce convergence and computation times. We integrate the dual U-Net strategy to increase the resolution of the 3D segmentation target. Utilising these approaches on a multi-modality dataset, with SSFP and T2-weighted images for training and LGE images for validation, we achieve 90% and 96% validation Dice coefficients for endocardium and epicardium segmentation, respectively. This result can be interpreted as a proof of concept for a generalised segmentation network that is robust to the quality or modality of the input images. When testing on our mono-centric LGE image dataset, the SDA method also improves the performance of epicardium segmentation, with an increase from 87% to 90% for the single-network segmentation.
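    The style transformations named in the abstract correspond to standard scikit-image operations; the sketch below shows one plausible way to generate such intensity/contrast variants of a slice. The parameters and the overall pipeline are assumptions, not the authors' exact SDA implementation.

```python
# Hedged sketch of the style transformations listed above, via scikit-image.
import numpy as np
from skimage import exposure, filters, util

def style_variants(image: np.ndarray, reference: np.ndarray = None) -> dict:
    """Return several 'styles' of the same slice to diversify intensity and contrast."""
    img = exposure.rescale_intensity(image.astype(np.float32), out_range=(0.0, 1.0))
    variants = {
        "original": img,
        "clahe": exposure.equalize_adapthist(img),  # adaptive histogram equalisation
        "laplacian": filters.laplace(img),          # Laplacian transformation
        "sobel": filters.sobel(img),                # Sobel edge detection
        "inverted": util.invert(img),               # intensity inversion
    }
    if reference is not None:                       # histogram matching to another image/modality
        variants["hist_matched"] = exposure.match_histograms(img, reference)
    return variants
```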

    A Two-stage Method with a Shared 3D U-Net for Left Atrial Segmentation of Late Gadolinium-Enhanced MRI Images

    Objective: This study aimed to validate the accuracy of a proposed algorithm for fully automatic 3D left atrial segmentation and to compare its performance with that of existing deep learning algorithms. Methods: A two-stage method with a shared 3D U-Net was proposed to segment the 3D left atrium. In this architecture, the 3D U-Net is used to extract 3D features, the two-stage strategy reduces the segmentation error caused by the class imbalance problem, and the shared network decreases model complexity. Model performance was evaluated with the Dice score, Jaccard index and Hausdorff distance. Results: Algorithm development and evaluation were performed on a set of 100 late gadolinium-enhanced cardiovascular magnetic resonance images. Our method achieved a Dice score of 0.918, a Jaccard index of 0.848 and a Hausdorff distance of 1.211, thus outperforming existing deep learning algorithms. The proposed model also achieved the best performance (Dice: 0.851; Jaccard: 0.750; Hausdorff distance: 4.382) on a publicly available 2013 image dataset. Conclusion: The proposed two-stage method with a shared 3D U-Net is an efficient algorithm for fully automatic 3D left atrial segmentation. This study provides a solution for processing large datasets in resource-constrained applications. Significance Statement: Studying atrial structure directly is crucial for understanding and managing atrial fibrillation (AF). Accurate reconstruction and measurement of atrial geometry for clinical purposes remains challenging, despite the potential improvements in the visibility of AF-associated structures offered by late gadolinium-enhanced magnetic resonance imaging. This difficulty arises from the varying intensities caused by increased tissue enhancement and artifacts, as well as variability in image quality. Therefore, an efficient algorithm for fully automatic 3D left atrial segmentation is proposed in the present study.
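    The two-stage, shared-network idea can be sketched schematically: a first pass of the network coarsely localises the atrium, and the same network is then reapplied to a cropped region of interest. The snippet below is a simplified illustration under assumed details (placeholder network, crop margin, threshold), not the authors' implementation.

```python
# Schematic two-stage inference reusing one shared 3D segmentation network.
import numpy as np
import torch

def two_stage_segmentation(volume: np.ndarray, net: torch.nn.Module,
                           margin: int = 8, threshold: float = 0.5) -> np.ndarray:
    """Stage 1 localises the left atrium coarsely; stage 2 re-segments the cropped ROI."""
    x = torch.from_numpy(volume).float()[None, None]          # (1, 1, D, H, W)
    with torch.no_grad():
        coarse = torch.sigmoid(net(x))[0, 0].numpy() > threshold
    if not coarse.any():
        return coarse
    # Bounding box of the coarse prediction, padded by a safety margin.
    idx = np.argwhere(coarse)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    roi = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    with torch.no_grad():
        fine = torch.sigmoid(net(torch.from_numpy(roi).float()[None, None]))[0, 0].numpy() > threshold
    out = np.zeros(volume.shape, dtype=bool)
    out[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine
    return out
```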

    SoftSeg: Advantages of soft versus binary training for image segmentation

    Most image segmentation algorithms are trained on binary masks formulated as a per-pixel classification task. However, in applications such as medical imaging, this "black-and-white" approach is too constraining because the contrast between two tissues is often ill-defined, i.e., the voxels located on objects' edges contain a mixture of tissues. Consequently, assigning a single "hard" label can result in a detrimental approximation. Instead, a soft prediction containing non-binary values would overcome that limitation. We introduce SoftSeg, a deep learning training approach that takes advantage of soft ground truth labels and is not bound to binary predictions. SoftSeg solves a regression problem instead of a classification problem. This is achieved by using (i) no binarization after preprocessing and data augmentation, (ii) a normalized ReLU final activation layer (instead of sigmoid), and (iii) a regression loss function (instead of the traditional Dice loss). We assess the impact of these three features on three open-source MRI segmentation datasets from the spinal cord gray matter, multiple sclerosis brain lesion, and multimodal brain tumor segmentation challenges. Across multiple cross-validation iterations, SoftSeg outperformed the conventional approach, leading to an increase in Dice score of 2.0% on the gray matter dataset (p=0.001), 3.3% for the MS lesions, and 6.5% for the brain tumors. SoftSeg produces consistent soft predictions at tissue interfaces and shows increased sensitivity for small objects. The richness of soft labels could capture inter-expert variability and the partial volume effect, and complement model uncertainty estimation. The developed training pipeline can easily be incorporated into most existing deep learning architectures. It is already implemented in the freely available deep learning toolbox ivadomed (https://ivadomed.org).
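    Two of the three SoftSeg ingredients, the normalised ReLU final activation and a regression loss against soft labels, are simple to sketch; the snippet below is an illustration using plain mean squared error to stand in for the regression loss, and is not the ivadomed implementation.

```python
# Illustration of a normalised ReLU output layer and a soft-label regression
# loss; plain MSE stands in for the regression loss used by the authors.
import torch

def normalized_relu(logits: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """ReLU rescaled per sample/channel to [0, 1], without binarising the output."""
    x = torch.relu(logits)
    maxima = x.amax(dim=(-3, -2, -1), keepdim=True)  # max over the spatial dimensions
    return x / (maxima + eps)

def regression_loss(pred: torch.Tensor, soft_target: torch.Tensor) -> torch.Tensor:
    """Mean squared error against soft (non-binarised) ground-truth labels."""
    return torch.mean((pred - soft_target) ** 2)
```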

    U-Net and its variants for medical image segmentation: theory and applications

    U-Net is an image segmentation technique developed primarily for medical image analysis that can precisely segment images using only a scarce amount of training data. These traits give U-Net very high utility within the medical imaging community and have led to its extensive adoption as the primary tool for segmentation tasks in medical imaging. The success of U-Net is evident in its widespread use across all major imaging modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-Net is largely a segmentation tool, it has also been used in other applications. As the potential of U-Net continues to grow, in this review we look at the various developments that have been made to the U-Net architecture and provide observations on recent trends. We examine the various innovations that have been made in deep learning and discuss how these tools facilitate U-Net. Finally, we look at the image modalities and application areas where U-Net has been applied. (Comment: 42 pages, in IEEE Access.)

    Medical Image Analysis on Left Atrial LGE MRI for Atrial Fibrillation Studies: A Review

    Late gadolinium enhancement magnetic resonance imaging (LGE MRI) is commonly used to visualize and quantify left atrial (LA) scars. The position and extent of scars provide important information on the pathophysiology and progression of atrial fibrillation (AF). Hence, LA scar segmentation and quantification from LGE MRI can be useful for computer-assisted diagnosis and treatment stratification of AF patients. Since manual delineation can be time-consuming and subject to intra- and inter-expert variability, automating this computation is highly desirable, yet it remains challenging and under-researched. This paper aims to provide a systematic review of computing methods for LA cavity, wall, scar and ablation gap segmentation and quantification from LGE MRI, together with the related literature for AF studies. Specifically, we first summarize AF-related imaging techniques, particularly LGE MRI. Then, we review the methodologies of the four computing tasks in detail and summarize the validation strategies applied in each task. Finally, possible future developments are outlined, with a brief survey of the potential clinical applications of the aforementioned methods. The review shows that research on this topic is still in its early stages. Although several methods have been proposed, especially for LA segmentation, there is still large scope for further algorithmic development, owing to performance issues related to the high variability of enhancement appearance and to differences in image acquisition. (Comment: 23 pages.)