
    Segmentation of heart chambers in 2-D heart ultrasounds with deep learning

    Echocardiography is a non-invasive diagnostic imaging technique in which ultrasound waves are used to obtain an image or sequence showing the structure and function of the heart. The segmentation of the heart chambers on ultrasound images is usually performed by experienced cardiologists, who delineate and extract the shape of both atria and ventricles to obtain important indexes of a patient’s heart condition. However, this task is hard to perform accurately due to the poor image quality caused by the equipment and techniques used, and due to the variability across patients and pathologies. Medical image processing is therefore needed in this case to avoid inaccuracy and obtain proper results. Over the last decade, several studies have shown that deep learning techniques are a possible solution to this problem, obtaining good results in automatic segmentation. The major problem with deep learning techniques in medical image processing is the lack of available data to train and test these architectures. In this work we have trained, validated, and tested a convolutional neural network based on the U-Net architecture for 2D echocardiogram chamber segmentation. The network was trained on the B-mode 4-chamber apical view Echogan dataset with data augmentation techniques applied. The novelty of this work is the hyperparameter and architecture optimization to reduce computation time while obtaining significant training and testing accuracies.

    Sustainable Development Goals: 3 - Health and Well-being
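    Segmentation accuracy in work like this is typically reported with the Dice overlap coefficient between the predicted and expert-drawn chamber masks. A minimal sketch of that metric (the function name and toy masks are illustrative, not taken from the paper):

    ```python
    import numpy as np

    def dice_coefficient(pred, target, eps=1e-7):
        """Dice overlap between two binary masks (1 = chamber, 0 = background)."""
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    # Toy 4x4 "chamber" masks: the prediction recovers 2 of the 3 target pixels.
    target = np.zeros((4, 4), dtype=np.uint8)
    target[1:3, 1] = 1
    target[1, 2] = 1            # 3 target pixels
    pred = np.zeros_like(target)
    pred[1:3, 1] = 1            # 2 predicted pixels, both correct
    print(round(dice_coefficient(pred, target), 3))  # → 0.8
    ```

    A Dice score of 1.0 means perfect overlap; the small `eps` term keeps the metric defined when both masks are empty.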

    Role of Four-Chamber Heart Ultrasound Images in Automatic Assessment of Fetal Heart: A Systematic Understanding

    The fetal echocardiogram is useful for monitoring and diagnosing cardiovascular diseases in the fetus in utero. Importantly, it can be used for assessing prenatal congenital heart disease, for which timely intervention can improve the unborn child's outcomes. In this regard, artificial intelligence (AI) can be used for the automatic analysis of fetal heart ultrasound images. This study reviews non-deep and deep learning approaches for assessing the fetal heart using standard four-chamber ultrasound images. The state-of-the-art techniques in the field are described and discussed. The compendium demonstrates the capability of automatic assessment of the fetal heart using AI technology. This work can serve as a resource for research in the field.

    GL-Fusion: Global-Local Fusion Network for Multi-view Echocardiogram Video Segmentation

    Cardiac structure segmentation from echocardiogram videos plays a crucial role in diagnosing heart disease. The combination of multi-view echocardiogram data is essential to enhance the accuracy and robustness of automated methods. However, due to the visual disparity of the data, deriving cross-view context information remains a challenging task, and unsophisticated fusion strategies can even lower performance. In this study, we propose a novel Global-Local fusion (GL-Fusion) network that jointly utilizes multi-view information globally and locally to improve the accuracy of echocardiogram analysis. Specifically, a Multi-view Global-based Fusion Module (MGFM) is proposed to extract global context information and to explore the cyclic relationship of different heartbeat cycles in an echocardiogram video. Additionally, a Multi-view Local-based Fusion Module (MLFM) is designed to extract correlations of cardiac structures from different views. Furthermore, we collect a multi-view echocardiogram video dataset (MvEVD) to evaluate our method. Our method achieves an 82.29% average Dice score, a 7.83% improvement over the baseline method, and outperforms other existing state-of-the-art methods. To our knowledge, this is the first exploration of a multi-view method for echocardiogram video segmentation. Code available at: https://github.com/xmed-lab/GL-Fusion

    Comment: Accepted by MICCAI 202
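    The MGFM and MLFM modules described above are learned attention blocks; their exact form is in the paper and released code. Purely as a toy illustration of the underlying idea, sharing one global descriptor across views while reweighting each view's local features by its agreement with that descriptor, one might sketch (all names here are hypothetical and unrelated to the GL-Fusion codebase):

    ```python
    import numpy as np

    def fuse_views(view_feats):
        """Toy global-local fusion over per-view feature maps.

        view_feats: list of (C, H, W) arrays, one per echocardiogram view.
        Global branch: an average-pooled descriptor shared across all views.
        Local branch: each view's map is reweighted by its cosine similarity
        to the shared descriptor (a crude stand-in for learned attention).
        """
        stacked = np.stack(view_feats)              # (V, C, H, W)
        global_desc = stacked.mean(axis=(0, 2, 3))  # (C,) shared context
        fused = []
        for f in view_feats:
            local_desc = f.mean(axis=(1, 2))        # (C,) per-view summary
            sim = np.dot(local_desc, global_desc) / (
                np.linalg.norm(local_desc) * np.linalg.norm(global_desc) + 1e-7)
            # Scale local features by cross-view agreement, add global context.
            fused.append(f * sim + global_desc[:, None, None])
        return fused

    views = [np.random.rand(8, 4, 4) for _ in range(2)]  # two toy views
    out = fuse_views(views)
    print(out[0].shape)  # → (8, 4, 4)
    ```

    The point of the sketch is only the data flow: every view contributes to, and is modulated by, a single global representation, which is what naive per-view processing lacks.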

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.

    Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    Surface loss for medical image segmentation

    Recent decades have witnessed an unprecedented expansion of medical data in various large-scale and complex systems. While deep learning has achieved considerable success on many complex medical problems, some challenges remain. Class imbalance is one of the common problems of medical image segmentation. It occurs mostly when there is a severely unequal class distribution, for instance, when the size of the target foreground region is several orders of magnitude less than the background region size. In such problems, typical loss functions used for convolutional neural network (CNN) segmentation fail to deliver good performance. Widely used losses, e.g., Dice or cross-entropy, are based on regional terms. They assume that all classes are equally distributed. Thus, they tend to favor the majority class and misclassify the target class. To address this issue, the main objective of this work is to build a boundary loss, a distance-based measure on the space of contours rather than regions. We argue that a boundary loss can mitigate the problems of regional losses by introducing complementary distance-based information. Our loss is inspired by discrete (graph-based) optimization techniques for computing gradient flows of curve evolution. Following an integral approach for computing boundary variations, we express a non-symmetric L2 distance on the space of shapes as a regional integral, which completely avoids local differential computations. Our boundary loss is the sum of linear functions of the regional softmax probability outputs of the network. Therefore, it can easily be combined with standard regional losses and implemented with any existing deep network architecture for N-dimensional (N-D) segmentation. Experiments were carried out on three benchmark datasets corresponding to increasingly unbalanced segmentation problems: multimodal brain tumor segmentation (BRATS17), ischemic stroke lesion segmentation (ISLES), and white matter hyperintensities (WMH). Used in conjunction with the region-based generalized Dice loss (GDL), our boundary loss improves performance significantly compared to GDL alone, reaching up to 8% improvement in Dice score and 10% improvement in Hausdorff score. It also yielded a more stable learning process.
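    Because the boundary loss is a sum of linear functions of the softmax outputs, it reduces to weighting each pixel's foreground probability by a precomputed signed distance map of the ground truth. A minimal sketch of that idea, assuming a single foreground class (the function name and toy masks are illustrative, not the authors' released code):

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def boundary_loss(softmax_fg, target_fg):
        """Mean foreground softmax probability weighted by a signed distance
        map of the ground truth (negative inside the target, positive outside),
        so probability mass placed far outside the boundary costs the most."""
        target_fg = target_fg.astype(bool)
        # Signed distance to the target boundary: positive outside, negative inside.
        dist = distance_transform_edt(~target_fg) - distance_transform_edt(target_fg)
        return float((softmax_fg * dist).mean())

    target = np.zeros((8, 8), dtype=np.uint8)
    target[2:6, 2:6] = 1
    perfect = target.astype(float)   # probabilities equal to the mask
    uniform = np.full((8, 8), 0.5)   # uninformative prediction
    print(boundary_loss(perfect, target) < boundary_loss(uniform, target))  # → True
    ```

    A perfect prediction concentrates all probability inside the target, where the signed distance is negative, so its loss is lower than that of an uninformative prediction; in training, this term would be added to a regional loss such as GDL.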

    AIDAN: An Attention-Guided Dual-Path Network for Pediatric Echocardiography Segmentation

    Accurate segmentation of pediatric echocardiography images is essential for a wide range of diagnostic and pre-interventional planning, but remains challenging (e.g., low signal-to-noise ratio and internal variability in heart appearance). To address these problems, in this paper we propose a novel cardiac Attention-guided Dual-path Network (AIDAN). AIDAN comprises a convolutional block attention module (CBAM) attached to a spatial path (SPA) and a context path (CPA), which can guide the network to learn the most discriminative features. The spatial path captures low-level spatial features, and the context path is designed to exploit high-level context. Finally, features learned from the two paths are fused efficiently using a specially designed feature fusion module (FFM) and used to predict the final segmentation map. We experiment on a self-collected dataset of 127 pediatric echocardiography cases, each a video containing at least one complete cardiac cycle, and obtain Dice coefficients of 0.951 and 0.914 for the left ventricle and atrium, respectively. AIDAN outperforms other state-of-the-art methods and has great potential for pediatric echocardiography image analysis.
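    CBAM, as introduced by Woo et al. (2018), refines a feature map with channel attention followed by spatial attention. A stripped-down sketch of that data flow, with the learned MLP and convolution replaced by plain pooled sums purely for illustration (this is not the AIDAN implementation):

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cbam_attention(feat):
        """Minimal CBAM-style refinement of a (C, H, W) feature map:
        channel attention from avg-/max-pooled descriptors, then spatial
        attention from channel-pooled maps. Learned weights are omitted;
        only the two-stage attention data flow is shown."""
        # Channel attention: pool over H, W, squash to a (C, 1, 1) gate.
        chan = sigmoid(feat.mean(axis=(1, 2)) + feat.max(axis=(1, 2)))
        feat = feat * chan[:, None, None]
        # Spatial attention: pool over channels, squash to a (1, H, W) gate.
        spat = sigmoid(feat.mean(axis=0) + feat.max(axis=0))
        return feat * spat[None, :, :]

    x = np.random.rand(16, 8, 8)  # toy feature map
    y = cbam_attention(x)
    print(y.shape)  # → (16, 8, 8)
    ```

    Both gates lie in (0, 1), so the module rescales rather than replaces features; in AIDAN such a block is attached to each of the two paths before fusion.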