
    Deep Learning in Cardiology

    The medical field is generating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method consisting of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, and propose certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables
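
    The review above characterizes deep learning as stacked non-linear layers that learn hierarchical representations. As a purely illustrative sketch (not code from the reviewed papers), the following PyTorch snippet shows such a model applied to hypothetical structured (tabular) cardiology features with a binary diagnosis label; the feature count and layer sizes are assumptions.

        # Minimal sketch, assuming 32 tabular features per patient and a binary diagnosis label.
        import torch
        import torch.nn as nn

        class CardioNet(nn.Module):
            """Stacked non-linear layers that learn hierarchical representations of structured input."""
            def __init__(self, n_features=32, n_hidden=64):
                super().__init__()
                self.layers = nn.Sequential(
                    nn.Linear(n_features, n_hidden), nn.ReLU(),   # first non-linear transformation
                    nn.Linear(n_hidden, n_hidden), nn.ReLU(),     # deeper, more abstract representation
                    nn.Linear(n_hidden, 1),                       # logit for the diagnosis
                )

            def forward(self, x):
                return self.layers(x)

        model = CardioNet()
        x = torch.randn(8, 32)            # batch of 8 hypothetical patient records
        logits = model(x)                 # shape (8, 1); apply sigmoid for a probability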

    Computational Methods for Segmentation of Multi-Modal Multi-Dimensional Cardiac Images

    Segmentation of the heart structures helps compute the cardiac contractile function, quantified via the systolic and diastolic volumes, ejection fraction, and myocardial mass, all of which carry reliable diagnostic value. Similarly, quantification of the myocardial mechanics throughout the cardiac cycle and analysis of the activation patterns in the heart via electrocardiography (ECG) signals serve as good cardiac diagnostic indicators. Furthermore, high-quality anatomical models of the heart can be used for planning and guiding minimally invasive, image-guided interventions. The most crucial step for the above-mentioned applications is to segment the ventricles and myocardium from the acquired cardiac image data. Although manual delineation of the heart structures is deemed the gold-standard approach, it requires significant time and effort and is highly susceptible to inter- and intra-observer variability. These limitations suggest a need for fast, robust, and accurate semi- or fully-automatic segmentation algorithms. However, the complex motion and anatomy of the heart, indistinct borders due to blood flow, the presence of trabeculations, intensity inhomogeneity, and various other imaging artifacts make the segmentation task challenging.
    In this work, we present and evaluate segmentation algorithms for multi-modal, multi-dimensional cardiac image datasets. Firstly, we segment the left ventricle (LV) blood-pool from a tri-plane 2D+time trans-esophageal (TEE) ultrasound acquisition using local phase-based filtering and a graph-cut technique, propagate the segmentation throughout the cardiac cycle using non-rigid registration-based motion extraction, and reconstruct the 3D LV geometry. Secondly, we segment the LV blood-pool and myocardium from an open-source 4D cardiac cine magnetic resonance imaging (MRI) dataset by incorporating an average-atlas-based shape constraint into the graph-cut framework, followed by iterative segmentation refinement. The developed fast and robust framework is further extended to perform right ventricle (RV) blood-pool segmentation from a different open-source 4D cardiac cine MRI dataset. Next, we employ a convolutional neural network-based multi-task learning framework to simultaneously segment the myocardium and regress its area, and show that computing the myocardial area from the segmentation is significantly better than regressing it directly from the network, while also being more interpretable. Finally, we impose a weak shape constraint via a multi-task learning framework in a fully convolutional network and show improved segmentation performance for the LV, RV, and myocardium across healthy and pathological cases, as well as in the challenging apical and basal slices, in two open-source 4D cardiac cine MRI datasets. We demonstrate the accuracy and robustness of the proposed segmentation methods by comparing the obtained results against the provided gold-standard manual segmentations, as well as with other competing segmentation methods.
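
    To make the multi-task idea concrete, here is a hedged sketch of a network with a shared encoder, a segmentation head, and a direct area-regression head, followed by the segmentation-derived area the abstract argues is preferable. It is an illustration under assumed image sizes and pixel spacing, not the thesis implementation.

        # Illustrative sketch only (assumptions: 2D cine MRI slices, binary myocardium mask,
        # known in-plane pixel spacing); not the thesis code.
        import torch
        import torch.nn as nn

        class MultiTaskSeg(nn.Module):
            """Shared encoder with two heads: a per-pixel segmentation map and a scalar area regressor."""
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                )
                self.seg_head = nn.Conv2d(32, 1, 1)            # per-pixel myocardium logit
                self.area_head = nn.Sequential(                # direct area regression branch
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
                )

            def forward(self, x):
                f = self.encoder(x)
                return self.seg_head(f), self.area_head(f)

        model = MultiTaskSeg()
        img = torch.randn(4, 1, 128, 128)                      # 4 hypothetical short-axis slices
        seg_logits, area_pred = model(img)

        # Area derived from the segmentation mask: the interpretable alternative to direct regression.
        pixel_area_mm2 = 1.5 * 1.5                             # assumed in-plane spacing
        mask = (torch.sigmoid(seg_logits) > 0.5).float()
        area_from_mask = mask.sum(dim=(1, 2, 3)) * pixel_area_mm2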

    From Fully-Supervised Single-Task to Semi-Supervised Multi-Task Deep Learning Architectures for Segmentation in Medical Imaging Applications

    Medical imaging is routinely performed in clinics worldwide for the diagnosis and treatment of numerous medical conditions in children and adults. With the advent of these medical imaging modalities, radiologists can visualize both the structure of the body and the tissues within it. However, analyzing these high-dimensional (2D/3D/4D) images demands a significant amount of time and effort from radiologists. Hence, there is an ever-growing need for medical image computing tools that extract relevant information from the image data to help radiologists work efficiently. Machine-learning-based image analysis has pivotal potential to improve the entire medical imaging pipeline, providing support for clinical decision-making and computer-aided diagnosis. Deep learning approaches have shown significant performance improvements in challenging image analysis tasks such as classification, detection, registration, and segmentation, specifically for medical imaging applications. While deep learning has shown its potential in a variety of medical image analysis problems including segmentation, motion estimation, etc., generalizability is still an unsolved problem, and many of these successes are achieved at the cost of a large pool of datasets. For most practical applications, getting access to a copious dataset can be very difficult, often impossible. Annotation is tedious and time-consuming, and this cost is further amplified when annotation must be done by a clinical expert in medical imaging applications. Additionally, applications of deep learning in real-world clinical settings are still limited due to the lack of reliability caused by the limited prediction capabilities of some deep learning models. Moreover, when using a CNN in an automated image analysis pipeline, it is critical to understand which segmentation results are problematic and require further manual examination. To this end, the estimation of uncertainty calibration in a semi-supervised setting for medical image segmentation is still rarely reported.
    This thesis focuses on developing and evaluating optimized machine learning models for a variety of medical imaging applications, ranging from fully-supervised, single-task learning to semi-supervised, multi-task learning that makes efficient use of annotated training data. The contributions of this dissertation are as follows: (1) developing fully-supervised, single-task transfer learning for surgical instrument segmentation from laparoscopic images; (2) utilizing supervised, single-task transfer learning for segmenting and digitally removing the surgical instruments from endoscopic/laparoscopic videos to allow visualization of the anatomy obscured by the tool, where the tool removal algorithms use a tool segmentation mask and either instrument-free reference frames or previous instrument-containing frames to fill in (inpaint) the instrument segmentation mask; (3) developing fully-supervised, single-task learning via efficient weight pruning and learned group convolution for accurate left ventricle (LV), right ventricle (RV) blood-pool, and myocardium localization and segmentation from 4D cine cardiac MR images; (4) demonstrating the use of our fully-supervised, memory-efficient model to generate dynamic patient-specific right ventricle (RV) models from a cine cardiac MRI dataset via an unsupervised learning-based deformable registration field; (5) integrating Monte Carlo dropout into our fully-supervised, memory-efficient model with inherent uncertainty estimation, with the overall goal of estimating the uncertainty and error associated with the obtained segmentation, as a means to flag regions that feature less than optimal segmentation results; (6) developing semi-supervised, single-task learning via self-training (through meta pseudo-labeling) in concert with a Teacher network that instructs the Student network by generating pseudo-labels given unlabeled input data; (7) proposing largely-unsupervised, multi-task learning to demonstrate the power of a simple combination of a disentanglement block, variational autoencoder (VAE), generative adversarial network (GAN), and a conditioning layer-based reconstructor for performing two of the most critical tasks in medical imaging: segmentation of cardiac structures and reconstruction of the cine cardiac MR images; and (8) demonstrating the use of 3D semi-supervised, multi-task learning for jointly learning multiple tasks in a single backbone module (uncertainty estimation, geometric shape generation, and cardiac anatomical structure segmentation of the left atrial cavity) from 3D gadolinium-enhanced magnetic resonance (GE-MR) images. This dissertation summarizes the impact of these contributions by demonstrating the adaptation and use of deep learning architectures featuring different levels of supervision to build a variety of image segmentation tools and techniques that can be used across a wide spectrum of medical image computing applications, centered on facilitating and promoting widespread computer-integrated diagnosis and therapy data science.
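
    Contribution (5) relies on Monte Carlo dropout to flag unreliable segmentation regions. The sketch below shows the general technique in PyTorch under assumed settings (toy model, dropout rate, sample count, and flagging threshold are all illustrative, not the dissertation's values): dropout is kept stochastic at inference, several forward passes are averaged for the prediction, and their per-pixel variance serves as the uncertainty map.

        # Minimal sketch of Monte Carlo dropout uncertainty estimation (toy model, assumed parameters).
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.5),                       # dropout layer kept active at inference
            nn.Conv2d(16, 1, 1),
        )

        def mc_dropout_predict(model, x, n_samples=20):
            """Run several stochastic forward passes; return mean probability and per-pixel variance."""
            model.train()                              # train() keeps the dropout layers stochastic
            with torch.no_grad():
                probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
            return probs.mean(0), probs.var(0)         # mean = segmentation, variance = uncertainty

        x = torch.randn(1, 1, 128, 128)                # one hypothetical cine MR slice
        mean_prob, uncertainty = mc_dropout_predict(model, x)
        flagged = uncertainty > 0.05                   # assumed threshold to flag regions for manual review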

    Medical Image Segmentation Combining Level Set Method and Deep Belief Networks

    Medical image segmentation is an important step in medical image analysis, where the main goal is the precise delineation of organs and tumours from medical images. For instance, there is evidence in the field of a positive correlation between the precision of these segmentations and the accuracy observed in classification systems that use these segmentations as their inputs. Over the last decades, a vast number of medical image segmentation models have been introduced; these models can be divided into five main groups: 1) image-based approaches, 2) active contour methods, 3) machine learning techniques, 4) atlas-guided segmentation and registration, and 5) hybrid models. Image-based approaches use only intensity values or texture for segmentation (e.g., thresholding techniques) and usually do not produce precise segmentations. Active contour methods can use an explicit representation (i.e., snakes) with the goal of minimizing an energy function that forces the contour to move towards strong edges while maintaining contour smoothness. The use of an implicit representation in active contour methods (i.e., the level set method) embeds the contour as the zero level set of a higher-dimensional surface (i.e., the curve representing the contour does not need to be parameterized as in the snakes model). Although successful, the main issue with active contour methods is that the energy function must contain terms describing all possible shape and appearance variations, which is a complicated task given that it is hard to design all these terms by hand. Also, this type of active contour method may get stuck at image regions that do not belong to the object of interest. Machine learning techniques address this issue by automatically learning shape and appearance models from annotated training images. Nevertheless, in order to meet the high accuracy requirements of medical image analysis applications, machine learning methods usually need large and rich training sets and also face the complexity of the inference process. Atlas-guided segmentation and registration use an atlas image, constructed from manually segmented images; a new image is segmented by registering it with the atlas image. These techniques have been applied successfully in many applications, but they still face some issues, such as their ability to represent the variability of anatomical structure and scale in medical images, and the complexity of the registration algorithms. In this work, we propose a new hybrid segmentation approach that combines a level set method with a machine learning approach (a deep belief network). Our main objective with this approach is to achieve segmentation accuracy results that are comparable to or better than the ones produced with machine learning methods, but using relatively smaller training sets. These weaker requirements on the size of training sets are compensated by the hand-designed segmentation terms present in typical level set methods, which are used as prior information on the anatomy to be segmented (e.g., smooth contours, strong edges, etc.). In addition, we choose a machine learning methodology that typically requires smaller annotated training sets compared to other methods proposed in this field. Specifically, we use deep belief networks, with training sets consisting largely of un-annotated training images. In general, our hybrid segmentation approach uses the result produced by the deep belief network as a prior in the level set evolution.
    We validate this method on the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2009 left ventricle segmentation challenge database and on the Japanese Society of Radiological Technology (JSRT) lung segmentation dataset. The experiments show that our approach produces competitive segmentation accuracy results. More specifically, we show that the use of our proposed methodology in a semi-automated segmentation system (i.e., using a manual initialization) produces the best result in the field on both databases above, and in the case of a fully automated system, our method shows results competitive with the current state of the art. Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 201
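
    The core mechanism described above is a level set evolution steered by a learned prior. The following NumPy sketch is a conceptual illustration under simplifying assumptions (a 2D image, a precomputed prior probability map from some classifier, a simple explicit update combining a curvature term and a prior-driven region term); it is not the authors' exact energy formulation.

        # Conceptual sketch: level set evolution with a learned prior map (assumed inputs and weights).
        import numpy as np

        def curvature(phi):
            """Mean curvature of the level set function phi via central differences."""
            gy, gx = np.gradient(phi)
            norm = np.sqrt(gx**2 + gy**2) + 1e-8
            nxx = np.gradient(gx / norm, axis=1)
            nyy = np.gradient(gy / norm, axis=0)
            return nxx + nyy

        def evolve(phi, prior, alpha=0.2, beta=1.0, dt=0.5, n_iter=100):
            """Evolve phi with a smoothness term plus a region term that expands the contour
            where the learned prior indicates 'object' (prior > 0.5) and shrinks it elsewhere."""
            for _ in range(n_iter):
                gy, gx = np.gradient(phi)
                grad_norm = np.sqrt(gx**2 + gy**2)
                phi = phi + dt * (alpha * curvature(phi) + beta * (2 * prior - 1)) * grad_norm
            return phi

        # Toy usage: a circular initial contour and a blob-shaped prior probability map.
        yy, xx = np.mgrid[0:64, 0:64]
        phi0 = 12.0 - np.sqrt((xx - 32) ** 2 + (yy - 32) ** 2)    # initial signed-distance-like surface
        prior = (np.sqrt((xx - 30) ** 2 + (yy - 34) ** 2) < 15).astype(float)
        segmentation = evolve(phi0, prior) > 0                     # interior of the final zero level set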

    An Overview of Techniques for Cardiac Left Ventricle Segmentation on Short-Axis MRI

    Nowadays, heart diseases are the leading cause of death. Left ventricle segmentation of a human heart in magnetic resonance images (MRI) is a crucial step in both cardiac disease diagnostics and heart internal structure reconstruction. It allows estimating such important parameters as the ejection fraction, left ventricle myocardium mass, stroke volume, etc. In addition, left ventricle segmentation helps to construct personalized computational heart models for numerical simulations. At present, fully automated cardiac segmentation methods still do not meet the accuracy requirements. We present an overview of left ventricle segmentation algorithms on short-axis MRI. A wide variety of completely different approaches are used for cardiac segmentation, including machine learning, graph-based methods, deformable models, and low-level heuristics. The current state-of-the-art technique is a combination of deformable models with advanced machine learning methods, such as deep learning or Markov random fields. We expect that approaches based on deep belief networks are the most promising, because the main training process of networks with this architecture can be performed on unlabelled data. In order to improve the quality of left ventricle segmentation algorithms, we need more openly accessible datasets with labelled cardiac MRI data.
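
    For reference, the clinical parameters listed above follow directly from the segmentation masks. The sketch below shows the standard computations under assumed inputs (toy binary masks, an assumed voxel spacing, and the commonly used myocardial density of 1.05 g/mL).

        # Sketch of standard clinical indices computed from LV segmentation masks (assumed toy data).
        import numpy as np

        def volume_ml(mask, spacing_mm=(8.0, 1.25, 1.25)):
            """Volume of a binary (slices, rows, cols) mask in millilitres."""
            voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
            return mask.sum() * voxel_mm3 / 1000.0

        # Toy masks standing in for real end-diastolic (ED) and end-systolic (ES) segmentations.
        ed_bloodpool = np.zeros((10, 128, 128), dtype=bool); ed_bloodpool[:, 44:84, 44:84] = True
        es_bloodpool = np.zeros((10, 128, 128), dtype=bool); es_bloodpool[:, 54:74, 54:74] = True
        ed_epicardium = np.zeros((10, 128, 128), dtype=bool); ed_epicardium[:, 38:90, 38:90] = True
        ed_myocardium = ed_epicardium & ~ed_bloodpool

        edv = volume_ml(ed_bloodpool)                              # end-diastolic volume
        esv = volume_ml(es_bloodpool)                              # end-systolic volume
        stroke_volume = edv - esv
        ejection_fraction = 100.0 * stroke_volume / edv            # in percent
        myocardial_mass_g = volume_ml(ed_myocardium) * 1.05        # grams, assuming 1.05 g/mL density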

    Real-Time Magnetic Resonance Imaging

    Real-time magnetic resonance imaging (RT-MRI) allows imaging of dynamic processes as they occur, without relying on any repetition or synchronization. This is made possible by modern MRI technology such as fast-switching gradients and parallel imaging. It is compatible with many (but not all) MRI sequences, including spoiled gradient echo, balanced steady-state free precession, and single-shot rapid acquisition with relaxation enhancement. RT-MRI has earned an important role in both diagnostic imaging and image guidance of invasive procedures. Its unique diagnostic value is prominent in areas of the body that undergo substantial and often irregular motion, such as the heart, gastrointestinal system, upper airway and vocal tract, and joints. Its value in interventional procedure guidance is prominent for procedures that require multiple forms of soft-tissue contrast, as well as flow information. In this review, we discuss the history of RT-MRI, fundamental tradeoffs, enabling technology, established applications, and current trends.

    Cardiac magnetic resonance assessment of central and peripheral vascular function in patients undergoing renal sympathetic denervation as predictor for blood pressure response

    Background: Most trials regarding catheter-based renal sympathetic denervation (RDN) describe a proportion of patients without blood pressure response. Recently, we were able to show that arterial stiffness, measured by invasive pulse wave velocity (IPWV), seems to be an excellent predictor of blood pressure response. However, given its invasiveness, IPWV is less suitable as a selection criterion for patients undergoing RDN. Consequently, we aimed to investigate the value of cardiac magnetic resonance (CMR)-based measures of arterial stiffness in predicting the outcome of RDN, using IPWV as the reference. Methods: Patients underwent CMR prior to RDN to assess ascending aortic distensibility (AAD), total arterial compliance (TAC), and systemic vascular resistance (SVR). In a second step, central aortic blood pressure was estimated from ascending aortic area change and flow sequences and used to re-calculate total arterial compliance (cTAC). Additionally, IPWV was acquired. Results: Thirty-two patients (24 responders and 8 non-responders) were available for analysis. AAD, TAC, and cTAC were higher in responders, whereas IPWV was higher in non-responders. SVR did not differ between the groups. Patients with AAD, cTAC, or TAC above the median and IPWV below the median had a significantly better blood pressure response. Receiver operating characteristic (ROC) curves predicting blood pressure response for IPWV, AAD, cTAC, and TAC revealed areas under the curve of 0.849, 0.828, 0.776, and 0.753, respectively (p = 0.004, 0.006, 0.021, and 0.035). Conclusions: Beyond IPWV, AAD, cTAC, and TAC appear to be useful outcome predictors for RDN in patients with hypertension. CMR-derived markers of arterial stiffness might serve as non-invasive selection criteria for RDN.
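
    As a hedged sketch of the stiffness indices named above, the snippet below uses their commonly cited definitions (aortic distensibility as relative area change over pulse pressure; total arterial compliance as stroke volume over pulse pressure) with hypothetical values; the exact processing pipeline and units in the study may differ.

        # Sketch of common CMR-derived stiffness indices (assumed definitions and toy values).
        def aortic_distensibility(a_max_mm2, a_min_mm2, pulse_pressure_mmhg):
            """Ascending aortic distensibility (AAD), in 10^-3 mmHg^-1."""
            return 1000.0 * (a_max_mm2 - a_min_mm2) / (a_min_mm2 * pulse_pressure_mmhg)

        def total_arterial_compliance(stroke_volume_ml, pulse_pressure_mmhg):
            """Total arterial compliance (TAC) as stroke volume over pulse pressure, in mL/mmHg."""
            return stroke_volume_ml / pulse_pressure_mmhg

        # Hypothetical patient values, for illustration only.
        aad = aortic_distensibility(a_max_mm2=820.0, a_min_mm2=760.0, pulse_pressure_mmhg=60.0)
        tac = total_arterial_compliance(stroke_volume_ml=75.0, pulse_pressure_mmhg=60.0)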