153 research outputs found

    Deep Learning in Cardiology

    Full text link
    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply in medicine in general, and propose certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables
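The layered non-linear transformation the abstract describes can be illustrated with a minimal sketch; the weights are random and the shapes hypothetical, and no training is shown:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights):
    """Pass the input through a stack of non-linear layers."""
    h = x
    for W in weights:
        h = relu(h @ W)  # linear map followed by a non-linearity
    return h

rng = np.random.default_rng(0)
# two layers mapping 4 input features -> 8 hidden -> 3 output features
layers = [rng.standard_normal((4, 8)), rng.standard_normal((8, 3))]
features = forward(rng.standard_normal((2, 4)), layers)
print(features.shape)  # (2, 3)
```

Each layer re-represents its input; stacking several such layers is what lets the network capture hierarchical structure.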

    Two-layer ensemble of deep learning models for medical image segmentation. [Article]

    Get PDF
    One of the most important areas in medical image analysis is segmentation, in which raw image data is partitioned into structured and meaningful regions to gain further insights. By using Deep Neural Networks (DNN), AI-based automated segmentation algorithms can potentially assist physicians with more effective imaging-based diagnoses. However, since it is difficult to acquire high-quality ground truths for medical images and DNN hyperparameters require significant manual tuning, the results of DNN-based medical models may be limited. A potential solution is to combine multiple DNN models using ensemble learning. We propose a two-layer ensemble of deep learning models in which the prediction of each training image pixel made by each model in the first layer is used as augmented data of the training image for the second layer of the ensemble. The predictions of the second layer are then combined by using a weight-based scheme, which is found by solving linear regression problems. To the best of our knowledge, our paper is the first work to propose a two-layer ensemble of deep learning models with an augmented data technique in medical image segmentation. Experiments conducted on five different medical image datasets for diverse segmentation tasks show that the proposed method achieves better results in terms of several performance metrics compared to some well-known benchmark algorithms. Our proposed two-layer ensemble of deep learning models for segmentation of medical images is effective compared to several benchmark algorithms. The research can be expanded in several directions, such as image classification.
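The weight-based combining described in the abstract can be sketched with ordinary least squares; everything below (model outputs, feature counts, ground truth) is toy stand-in data, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# toy per-pixel probabilities from three hypothetical first-layer models
n_pixels = 200
preds = rng.random((3, n_pixels))                  # stand-ins for model outputs
truth = (preds.mean(axis=0) > 0.5).astype(float)   # toy ground-truth labels

# augmented input for a (hypothetical) second layer: the original pixel
# features concatenated with the first-layer predictions
features = rng.random((n_pixels, 5))
augmented = np.hstack([features, preds.T])

# weight-based combining: least-squares weights over the model outputs
w, *_ = np.linalg.lstsq(preds.T, truth, rcond=None)
combined = preds.T @ w
print(augmented.shape)  # (200, 8)
```

The key idea is that the combining weights are fitted, not hand-set, so models that agree better with the ground truth contribute more to the final mask.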

    Two-Layer Ensemble of Deep Learning Models for Medical Image Segmentation

    Get PDF
    In recent years, deep learning has rapidly become a method of choice for the segmentation of medical images. Deep Neural Network (DNN) architectures such as UNet have achieved state-of-the-art results on many medical datasets. To further improve performance in the segmentation task, we develop an ensemble system which combines various deep learning architectures. We propose a two-layer ensemble of deep learning models for the segmentation of medical images. The prediction for each training image pixel made by each model in the first layer is used as augmented data of the training image for the second layer of the ensemble. The predictions of the second layer are then combined by using a weight-based scheme in which each model contributes differently to the combined result. The weights are found by solving linear regression problems. Experiments conducted on two popular medical datasets, namely CAMUS and Kvasir-SEG, show that the proposed method achieves better results on two performance metrics (Dice coefficient and Hausdorff distance) compared to some well-known benchmark algorithms. Comment: 8 pages, 4 figures
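Of the two metrics mentioned, the Dice coefficient is straightforward to compute directly; a minimal version on toy binary masks:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice = 2|A ∩ B| / (|A| + |B|) on binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, target).sum() / denom

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(a, b))  # 2*2/(3+3) ≈ 0.667
```

Dice rewards region overlap; the Hausdorff distance, by contrast, measures the worst-case boundary disagreement, which is why the two are often reported together.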

    Automatic Brain Tumor Segmentation using Cascaded Anisotropic Convolutional Neural Networks

    Full text link
    A cascade of fully convolutional neural networks is proposed to segment multi-modal Magnetic Resonance (MR) images of brain tumors into background and three hierarchical regions: whole tumor, tumor core and enhancing tumor core. The cascade is designed to decompose the multi-class segmentation problem into a sequence of three binary segmentation problems according to the subregion hierarchy. The whole tumor is segmented in the first step, and the bounding box of the result is used for the tumor core segmentation in the second step. The enhancing tumor core is then segmented based on the bounding box of the tumor core segmentation result. Our networks consist of multiple layers of anisotropic and dilated convolution filters, and they are combined with multi-view fusion to reduce false positives. Residual connections and multi-scale predictions are employed in these networks to boost the segmentation performance. Experiments with the BraTS 2017 validation set show that the proposed method achieved average Dice scores of 0.7859, 0.9050 and 0.8378 for enhancing tumor core, whole tumor and tumor core, respectively. The corresponding values for the BraTS 2017 testing set were 0.7831, 0.8739 and 0.7748, respectively. Comment: 12 pages, 5 figures. MICCAI BraTS Challenge 2017
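The cascade structure (segment, crop to the bounding box, segment again) can be sketched as follows; the thresholding "segmenters" below are toy stand-ins for the anisotropic CNNs:

```python
import numpy as np

def bounding_box(mask):
    """Axis-aligned bounding box of the non-zero region of a binary mask."""
    coords = np.argwhere(mask)
    lo = coords.min(axis=0)
    hi = coords.max(axis=0) + 1
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def cascade_segment(image, seg_whole, seg_core, seg_enhancing):
    """Run three binary segmenters, each restricted to the previous box."""
    whole = seg_whole(image)
    box_w = bounding_box(whole)
    core = np.zeros_like(whole)
    core[box_w] = seg_core(image[box_w])
    box_c = bounding_box(core)
    enhancing = np.zeros_like(whole)
    enhancing[box_c] = seg_enhancing(image[box_c])
    return whole, core, enhancing

# toy image: a bright square (tumor) with an even brighter centre (core)
img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0; img[3:5, 3:5] = 2.0
whole, core, enh = cascade_segment(
    img,
    lambda x: x > 0.5,
    lambda x: x > 1.5,
    lambda x: x > 1.5,
)
print(whole.sum(), core.sum(), enh.sum())  # 16 4 4
```

Restricting each later stage to the previous stage's bounding box is what turns the multi-class problem into three easier binary problems on progressively smaller regions.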

    Machine learning to simplify the use of cardiac image databases

    Get PDF
    The recent growth of data in cardiac databases has been phenomenal. Clever use of these databases could help find supporting evidence for better diagnosis and treatment planning. In addition to the challenges inherent in the large quantity of data, the databases are difficult to use in their current state. Data coming from multiple sources are often unstructured, the image content is variable, and the metadata are not standardised. The objective of this thesis is therefore to simplify the use of large databases for cardiology specialists with automated image processing, analysis and interpretation tools. The proposed tools are largely based on supervised machine learning techniques, i.e. algorithms which can learn from large quantities of cardiac images with ground-truth annotations and which automatically find the best representations. First, the inconsistent metadata are cleaned, and the interpretation and visualisation of images are improved, by automatically recognising commonly used cardiac magnetic resonance imaging views from image content. The method is based on decision forests and convolutional neural networks trained on a large image dataset. Second, the thesis explores ways to use machine learning for the extraction of relevant clinical measures (e.g. volumes and masses) from 3D and 3D+t cardiac images. New spatio-temporal image features are designed, and classification forests are trained to automatically segment the main cardiac structures (left ventricle and left atrium) via voxel-wise label maps. Third, a web interface is designed to collect pairwise image comparisons and to learn how to describe hearts with semantic attributes (e.g. dilation, kineticity). In the last part of the thesis, a forest-based machine learning technique is used to map cardiac images, establishing distances and neighbourhoods between images. One application is the retrieval of the images most similar to those of a new patient.
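The decision-forest classification used for view recognition can be illustrated with a toy majority-vote ensemble of decision stumps; the features and classes below are synthetic stand-ins, not the thesis's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy intensity-statistic features for two hypothetical view classes
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(3.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

def fit_stump(X, y, rng):
    """One randomised decision stump: a median threshold on a random feature."""
    f = rng.integers(X.shape[1])
    t = np.median(X[:, f])
    left = y[X[:, f] <= t]
    right = y[X[:, f] > t]
    # predict the majority class on each side of the split
    p_left = int(round(left.mean())) if left.size else 0
    p_right = int(round(right.mean())) if right.size else 1
    return f, t, p_left, p_right

stumps = [fit_stump(X, y, rng) for _ in range(25)]

def predict(x):
    votes = [(pl if x[f] <= t else pr) for f, t, pl, pr in stumps]
    return int(round(np.mean(votes)))  # majority vote over the forest

acc = np.mean([predict(x) == c for x, c in zip(X, y)])
print(acc)
```

Real forests split recursively on optimised thresholds, but the essential mechanism, averaging many weak randomised classifiers into a strong one, is the same.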

    Deep learning tools for outcome prediction in atrial fibrillation from cardiac MRI

    Get PDF
    Integrated master's thesis in Biomedical Engineering and Biophysics (Clinical Engineering and Medical Instrumentation), Universidade de Lisboa, Faculdade de Ciências, 2021. Atrial fibrillation (AF) is the most frequent sustained cardiac arrhythmia, characterised by an irregular and rapid contraction of the two upper chambers of the heart (the atria). AF development is promoted and predisposed by atrial dilation, which is a consequence of atrial adaptation to AF. However, it is not clear whether atrial dilation appears similarly over the cardiac cycle and how it affects ventricular volumes. Catheter ablation is arguably the gold-standard AF treatment. In their current form, ablations are capable of directly terminating AF in selected patients but are first-time effective in only approximately 50% of cases. In the first part of this work, volumetric functional markers of the left atrium (LA) and left ventricle (LV) of AF patients were studied. More precisely, a customised convolutional neural network (CNN) was proposed to segment, across the cardiac cycle, the LA from short-axis CINE MRI images acquired with full cardiac coverage in AF patients. Using the proposed automatic LA segmentation, volumetric time curves were plotted and ejection fractions (EF) were automatically calculated for both chambers. The second part of the project was dedicated to developing classification models based on cardiac MR images. The EMIDEC STACOM 2020 challenge was used as an initial project and basis to create binary classifiers based on fully automatic classification neural networks (NNs), since it presented a relatively simple binary classification task (presence/absence of disease) and a large dataset. For the challenge, a deep learning NN was proposed to automatically classify myocardial disease from delayed-enhancement cardiac MR (DE-CMR) and patient clinical information. The highest classification accuracy (100%) was achieved with Clinic-NET+, a NN that used information from images, segmentations and clinical annotations. For the final goal of this project, the previously referred NNs were re-trained to predict AF recurrence after catheter ablation (CA) in AF patients using pre-ablation short-axis CINE MRI images of the LA. In this task, the best overall performance was achieved by Clinic-NET+, with a test accuracy of 88%. This work showed the potential of NNs to interpret and extract clinical information from cardiac MRI. If more data become available, these methods can potentially be used in the future to help guide clinical AF prognosis and diagnosis.
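The ejection fractions derived from the volumetric time curves follow the standard formula EF = (EDV − ESV) / EDV × 100; a minimal sketch, with a hypothetical LV volume curve:

```python
import numpy as np

def ejection_fraction(volumes):
    """EF (%) = (EDV - ESV) / EDV * 100, from a volume-time curve."""
    edv = float(np.max(volumes))  # end-diastolic (largest) volume
    esv = float(np.min(volumes))  # end-systolic (smallest) volume
    return (edv - esv) / edv * 100.0

# hypothetical LV volumes over one cardiac cycle (in mL)
lv_volumes = [120, 110, 80, 60, 70, 95, 118]
print(round(ejection_fraction(lv_volumes), 1))  # 50.0
```

Once the chamber is segmented at every cardiac phase, the curve's extremes give EDV and ESV, so the EF falls out of the segmentation automatically.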

    Contour-Driven Atlas-Based Segmentation

    Get PDF
    We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images
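For a Gaussian likelihood, the MAP estimate of the label distribution conditioned on the atlas-based segmentation reduces to the Gaussian-process posterior mean. A 1-D toy sketch, with a stationary RBF kernel standing in for the paper's contour-driven non-stationary kernels (all values here are synthetic):

```python
import numpy as np

def rbf_kernel(a, b, length=1.0):
    """Stationary RBF stand-in for a contour-driven kernel."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2 * length ** 2))

# noisy atlas-based label values observed at a few 1-D locations
x_obs = np.array([0.0, 1.0, 2.0, 3.0])
y_obs = np.array([0.0, 0.0, 1.0, 1.0])
x_new = np.linspace(0.0, 3.0, 7)
noise = 0.1

K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
K_star = rbf_kernel(x_new, x_obs)
# GP posterior mean = MAP estimate under a Gaussian likelihood
refined = K_star @ np.linalg.solve(K, y_obs)
print(refined.shape)  # (7,)
```

In the paper's setting the kernel encodes image contours and parcellations, so the refined labels are smoothed along image structures rather than uniformly in space.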

    Diagnosis and Prognosis of Head and Neck Cancer Patients using Artificial Intelligence

    Full text link
    Cancer is one of the most life-threatening diseases worldwide, and head and neck (H&N) cancer is a prevalent type with hundreds of thousands of new cases recorded each year. Clinicians use medical imaging modalities such as computed tomography and positron emission tomography to detect the presence of a tumor, and they combine that information with clinical data for patient prognosis. This process is challenging and time-consuming. Machine learning and deep learning can automate these tasks to help clinicians, with highly promising results. This work studies two approaches for H&N tumor segmentation: (i) exploration and comparison of vision transformer (ViT)-based and convolutional neural network-based models; and (ii) proposal of a novel 2D perspective on working with 3D data. Furthermore, this work proposes two new architectures for the prognosis task. An ensemble of several models predicts patient outcomes (and won the HECKTOR 2021 challenge prognosis task), and a ViT-based framework concurrently performs patient outcome prediction and tumor segmentation, outperforming the ensemble model. Comment: This is Master's thesis work submitted to MBZUA
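One plausible reading of a "2D perspective" on 3D data is processing the volume slice by slice; a minimal illustration with a synthetic volume (the specific slicing scheme is an assumption, not the thesis's method):

```python
import numpy as np

# hypothetical scan volume: depth x height x width
volume = np.arange(4 * 5 * 5, dtype=np.float32).reshape(4, 5, 5)

# one simple 2D view of 3D data: treat each axial slice as an image
axial_slices = [volume[i] for i in range(volume.shape[0])]
print(len(axial_slices), axial_slices[0].shape)  # 4 (5, 5)
```

Slice-wise processing lets 2D architectures (and 2D pretraining) be reused on volumetric scans, at the cost of discarding some through-plane context.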