
    MCAL: an anatomical knowledge learning model for myocardial segmentation in 2D echocardiography

    Segmentation of the left ventricular (LV) myocardium in 2D echocardiography is essential for clinical decision making, especially for geometry measurement and index computation. However, segmenting the myocardium is time-consuming and challenging because of the fuzzy boundaries caused by low image quality. Previous methods based on deep convolutional neural networks (CNNs) either employ the ground-truth label only as pixel-level class assignments or use label information to regularise the shape of predicted outputs; such approaches offer limited feature enhancement for 2D echocardiography. We propose a training strategy named multi-constrained aggregate learning (referred to as MCAL), which leverages anatomical knowledge learned through ground-truth labels to infer segmented parts and discriminate boundary pixels. The new framework encourages the model to focus on features consistent with the learned anatomical representations, and the training objective incorporates a Boundary Distance Transform Weight (BDTW) that enforces a higher weight on the boundary region, which helps to improve segmentation accuracy. The proposed method is built as an end-to-end framework with a top-down, bottom-up architecture and skip convolution fusion blocks, and is evaluated on two datasets (our dataset and the public CAMUS dataset). The comparison study shows that the proposed network outperforms the other segmentation baseline models, indicating that our method is beneficial for boundary-pixel discrimination in segmentation.
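    A boundary-emphasising weight map of the kind the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's BDTW: the Gaussian decay, the `w_max`/`sigma` parameters, and the function name are all assumptions for the sake of the example.

```python
import numpy as np
from scipy import ndimage

def boundary_distance_weight(mask, w_max=5.0, sigma=3.0):
    """Per-pixel loss weight that peaks on the boundary of a binary mask
    and decays with distance from it (a BDTW-style sketch, not the paper's
    exact formulation)."""
    inside = ndimage.distance_transform_edt(mask)        # distance to background
    outside = ndimage.distance_transform_edt(1 - mask)   # distance to foreground
    dist = np.where(mask > 0, inside, outside)           # distance to the boundary
    # Weight is w_max near the boundary and falls off to 1 far from it.
    return 1.0 + (w_max - 1.0) * np.exp(-(dist ** 2) / (2.0 * sigma ** 2))
```

    Multiplying such a map into a per-pixel cross-entropy loss makes boundary mistakes cost more than interior ones, which is the stated intent of the weighting.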

    Automated Echocardiographic Image Interpretation Using Artificial Intelligence

    Cardiovascular disease remains one of the leading causes of global mortality and has a significant impact on overall health, well-being, and life expectancy. Early detection of anomalies in cardiac function has therefore become essential for early treatment and, in turn, reduced mortality. Echocardiography is the most commonly used modality for evaluating the structure and function of the heart. Analysis of echocardiographic images plays an important role in clinical practice in assessing cardiac morphology and function and thereby reaching a diagnosis. Interpreting echocardiographic images is considered challenging for several reasons. Manual annotation is still daily work in the clinical routine owing to the lack of reliable automatic interpretation methods, which leads to time-consuming tasks that are prone to intra- and inter-observer variability. Echocardiographic images also inherently suffer from a high level of noise and poor quality. Therefore, although several studies have attempted to automate the process, it remains a challenging task, and improving the accuracy of automatic echocardiography interpretation is an ongoing field. Advances in Artificial Intelligence and Deep Learning can help to construct an automated, scalable pipeline for echocardiographic image interpretation steps, including view classification, phase detection, image segmentation with a focus on border detection, quantification of structure, and measurement of clinical markers. This thesis aims to develop optimised automated methods for three individual steps of an echocardiographic exam: view classification, left ventricle segmentation, and quantification and measurement of left ventricle structure. Various Neural Architecture Search methods were employed to design efficient neural network architectures for the above tasks.
Finally, an optimisation-based speckle tracking echocardiography algorithm was proposed to estimate myocardial tissue velocities and cardiac deformation. The algorithm was used to measure cardiac strain, which is used for detecting myocardial ischaemia. All proposed techniques were compared with existing state-of-the-art methods. To this end, publicly available patient datasets, as well as two private datasets provided by the clinical partners to this project, were used for development and comprehensive performance evaluation of the proposed techniques. Results demonstrated the feasibility of using automated tools for reliable echocardiographic image interpretation, which can serve as assistive tools for clinicians in obtaining clinical measurements.
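    The cardiac strain mentioned above is conventionally the Lagrangian strain of a tracked myocardial segment: its relative change in length over the cycle. A minimal sketch (the tracked lengths in the usage example are made-up values, and the speckle-tracking step that produces them is omitted):

```python
def lagrangian_strain(lengths, l0=None):
    """Lagrangian strain of a tracked segment length over the cardiac cycle:
    strain(t) = (L(t) - L0) / L0, where L0 defaults to the first
    (end-diastolic) length. Negative values indicate shortening."""
    if not lengths:
        raise ValueError("need at least one length sample")
    l0 = lengths[0] if l0 is None else l0
    return [(l - l0) / l0 for l in lengths]
```

    For example, a segment shortening from 10 mm to 8 mm over systole gives a peak strain of -0.2, i.e. -20%, the kind of value clinicians read off a strain curve.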

    MediViSTA-SAM: Zero-shot Medical Video Analysis with Spatio-temporal SAM Adaptation

    In recent years, the Segment Anything Model (SAM) has attracted considerable attention as a foundation model well known for its robust generalization across various downstream tasks. However, SAM does not exhibit satisfactory performance in medical image analysis. In this study, we introduce MediViSTA-SAM, the first study on adapting SAM to medical video segmentation. Given video data, the MediViSTA spatio-temporal adapter captures long- and short-range temporal attention through a cross-frame attention mechanism that constrains each frame to attend to the immediately preceding video frame as a reference, while also modelling spatial information effectively. Additionally, it incorporates multi-scale fusion by employing a U-shaped encoder and a modified mask decoder to handle objects of varying sizes. To evaluate our approach, extensive experiments were conducted against state-of-the-art (SOTA) methods, assessing its generalization ability on multi-vendor in-house echocardiography datasets. The results highlight the accuracy and effectiveness of our network in medical video segmentation.
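    The cross-frame idea described above, current-frame tokens attending to the previous frame, reduces to scaled dot-product attention with queries from frame t and keys/values from frame t-1. A NumPy sketch under stated assumptions (the learned query/key/value projections and multi-head split of the real adapter are omitted; this is not the MediViSTA-SAM implementation):

```python
import numpy as np

def cross_frame_attention(q_frame, kv_frame):
    """Scaled dot-product attention where tokens of the current frame
    (queries, shape (Nq, d)) attend to tokens of the immediately preceding
    frame (keys and values, shape (Nk, d)). Returns (Nq, d)."""
    d = q_frame.shape[-1]
    scores = q_frame @ kv_frame.T / np.sqrt(d)       # (Nq, Nk) similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over prev-frame tokens
    return weights @ kv_frame                        # weighted sum of prev-frame values
```

    Restricting keys and values to frame t-1 is what makes the preceding frame act as the temporal reference, instead of attending over the whole clip.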

    MDA-Unet: A Multi-Scale Dilated Attention U-Net For Medical Image Segmentation

    The advanced development of deep learning methods has recently brought significant improvements in medical image segmentation. Encoder–decoder networks such as U-Net have addressed some of the challenges in medical image segmentation with outstanding performance, which has made them the dominant deep learning architecture in this domain. Despite their outstanding performance, we argue that they still lack some aspects. First, there is incompatibility in U-Net's skip connection between the encoder and decoder features due to the semantic gap between low-processed encoder features and highly processed decoder features, which adversely affects the final prediction. Second, it fails to capture multi-scale context information and ignores the contribution of all semantic information through the segmentation process. Therefore, we propose MDA-Unet, a novel multi-scale deep learning segmentation model. MDA-Unet improves upon U-Net and enhances its performance in segmenting medical images with variability in the shape and size of the region of interest. The model is integrated with a multi-scale spatial attention module, where spatial attention maps are derived from a hybrid hierarchical dilated convolution module that captures multi-scale context information. To ease the training process and reduce the gradient vanishing problem, residual blocks are deployed instead of the basic U-Net blocks. Through a channel attention mechanism, the high-level decoder features are used to guide the low-level encoder features to promote the selection of meaningful context information, thus ensuring effective fusion. We evaluated our model on 2 different datasets: a lung dataset of 2628 axial CT images and an echocardiographic dataset of 2000 images, each with its own challenges.
Our model achieved a significant gain in performance with a slight increase in the number of trainable parameters compared with the basic U-Net model, providing a Dice score of 98.3% on the lung dataset and 96.7% on the echocardiographic dataset, where the basic U-Net achieved 94.2% on the lung dataset and 93.9% on the echocardiographic dataset.
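    The Dice score used to report the results above is the standard overlap metric between a predicted and a ground-truth binary mask; a minimal sketch (the epsilon smoothing term is a common convention, assumed here rather than taken from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks:
    2 * |P ∩ T| / (|P| + |T|), with eps to avoid division by zero
    when both masks are empty."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

    A score of 1.0 means perfect overlap and 0.0 means none, so the reported 98.3% vs 94.2% gap corresponds directly to better mask agreement.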

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.

    Deep learning tools for outcome prediction in atrial fibrillation from cardiac MRI

    Integrated master's thesis in Biomedical Engineering and Biophysics (Clinical Engineering and Medical Instrumentation), Universidade de Lisboa, Faculdade de Ciências, 2021. Atrial fibrillation (AF) is the most frequent sustained cardiac arrhythmia, characterised by an irregular and rapid contraction of the two upper chambers of the heart (the atria). AF development is promoted and predisposed by atrial dilation, which is a consequence of atrial adaptation to AF. However, it is not clear whether atrial dilation appears similarly over the cardiac cycle and how it affects ventricular volumes. Catheter ablation is arguably the gold-standard AF treatment. In their current form, ablations can directly terminate AF in selected patients but are first-time effective in only approximately 50% of cases. In the first part of this work, volumetric functional markers of the left atrium (LA) and left ventricle (LV) of AF patients were studied. More precisely, a customised convolutional neural network (CNN) was proposed to segment, across the cardiac cycle, the LA from short-axis CINE MRI images acquired with full cardiac coverage in AF patients. Using the proposed automatic LA segmentation, volumetric time curves were plotted and ejection fractions (EF) were automatically calculated for both chambers. The second part of the project was dedicated to developing classification models based on cardiac MR images. The EMIDEC STACOM 2020 challenge was used as an initial project and basis to create binary classifiers based on fully automatic classification neural networks (NNs), since it presented a relatively simple binary classification task (presence/absence of disease) and a large dataset. For the challenge, a deep learning NN was proposed to automatically classify myocardial disease from delayed-enhancement cardiac MR (DE-CMR) and patient clinical information.
The highest classification accuracy (100%) was achieved with Clinic-NET+, a NN that used information from images, segmentations, and clinical annotations. For the final goal of this project, the previously described NNs were re-trained to predict AF recurrence after catheter ablation (CA) in AF patients using pre-ablation LA short-axis CINE MRI images. In this task, the best overall performance was achieved by Clinic-NET+, with a test accuracy of 88%. This work has shown the potential of NNs to interpret and extract clinical information from cardiac MRI. With more data, these methods could in future help guide clinical AF prognosis and diagnosis.
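    The ejection fractions computed from the segmentation-derived volume-time curves above follow the standard definition EF = (EDV - ESV) / EDV, with end-diastolic and end-systolic volumes taken as the maximum and minimum of the curve. A minimal sketch (the volumes in the usage example are made-up values, not from the thesis data):

```python
def ejection_fraction(volumes):
    """Ejection fraction from a chamber volume-time curve over one cycle:
    EF = (EDV - ESV) / EDV, with EDV/ESV the maximum/minimum volume."""
    if not volumes:
        raise ValueError("need at least one volume sample")
    edv, esv = max(volumes), min(volumes)
    if edv <= 0:
        raise ValueError("end-diastolic volume must be positive")
    return (edv - esv) / edv
```

    For instance, a curve with volumes of 120, 100, 60, and 80 mL yields an EF of 0.5, i.e. 50%.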