
    Unsupervised Myocardial Segmentation for Cardiac BOLD

    A fully automated 2-D+time myocardial segmentation framework is proposed for cardiac magnetic resonance (CMR) blood-oxygen-level-dependent (BOLD) data sets. Ischemia detection with CINE BOLD CMR relies on spatio-temporal patterns in myocardial intensity, but these patterns also confound supervised segmentation methods, the de facto standard for myocardial segmentation in cine MRI, and segmentation errors severely undermine the accurate extraction of these patterns. In this paper, we build a joint motion and appearance method that relies on dictionary learning to find a suitable subspace. Our method is based on variational pre-processing and spatial regularization using Markov random fields to further improve performance. The superiority of the proposed segmentation technique is demonstrated on a data set containing cardiac phase resolved BOLD MR and standard CINE MR image sequences acquired in baseline and ischemic conditions across ten canine subjects. Our unsupervised approach outperforms even supervised state-of-the-art segmentation techniques by at least 10% in Dice overlap on BOLD data and performs on par for standard CINE MR. Furthermore, a novel segmental analysis method attuned to BOLD time series is used to demonstrate the effectiveness of the proposed method in preserving key BOLD patterns.
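The subspace idea above can be illustrated with a minimal numpy sketch. This is not the paper's dictionary learning method: it substitutes a plain SVD/PCA basis for the learned dictionary, and the patch data, dimensions, and function names are all illustrative.

```python
import numpy as np

def learn_subspace(patches, k):
    """Learn a k-dimensional appearance subspace from patch row-vectors via SVD."""
    mean = patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    return mean, Vt[:k]  # rows of Vt[:k] span the subspace

def reconstruction_error(patches, mean, basis):
    """Mean squared residual after projecting patches onto the subspace."""
    centered = patches - mean
    recon = centered @ basis.T @ basis
    return float(np.mean((centered - recon) ** 2))

rng = np.random.default_rng(0)
# synthetic "myocardial" patches: rank-3 structure plus small noise
latent = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 64))
patches = latent + 0.01 * rng.normal(size=(200, 64))

mean3, basis3 = learn_subspace(patches, 3)
err3 = reconstruction_error(patches, mean3, basis3)
mean1, basis1 = learn_subspace(patches, 1)
err1 = reconstruction_error(patches, mean1, basis1)
```

A subspace matching the data's true rank reconstructs patches almost perfectly, while an undersized one leaves a large residual; the paper's learned dictionary plays the role of `basis3` here.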

    Unsupervised Myocardial Segmentation for Cardiac MRI: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015

    Though unsupervised segmentation was the de facto standard for cardiac MRI segmentation early on, the recent cardiac MRI segmentation literature has favored fully supervised techniques such as dictionary learning and atlas-based methods. However, the benefits of unsupervised techniques (e.g., no need for large amounts of training data and better potential for handling variability in anatomy and image contrast) are more evident with emerging cardiac MR modalities. For example, CP-BOLD is a new MRI technique that has been shown to detect ischemia without any contrast agent, not only at stress but also at rest. Although CP-BOLD looks similar to standard CINE, changes in myocardial intensity patterns and shape across cardiac phases (due to the heart’s motion, the BOLD effect, and artifacts) undermine the assumptions of fully supervised segmentation techniques, resulting in a significant drop in segmentation accuracy. In this paper, we present a fully unsupervised technique for segmenting the myocardium from the background in both standard CINE MR and CP-BOLD MR. We combine appearance with motion information (obtained via optical flow) in a dictionary learning framework to sparsely represent important features in a low-dimensional space and separate the myocardium from the background accordingly. Our fully automated method learns background-only models, and a one-class classifier provides the myocardial segmentation. The advantages of the proposed technique are demonstrated on a dataset containing CP-BOLD MR and standard CINE MR image sequences acquired in baseline and ischemic conditions across 10 canine subjects, where our method outperforms state-of-the-art supervised segmentation techniques on CP-BOLD MR and performs on par for standard CINE MR.
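The background-only, one-class idea can be sketched as follows. This is a stand-in, not the paper's classifier: it fits a Gaussian to combined appearance+motion features of background pixels and flags outliers by Mahalanobis distance, whereas the paper works in a learned sparse dictionary space; all feature values and thresholds are synthetic.

```python
import numpy as np

def fit_background_model(features):
    """Fit a Gaussian to background-only appearance+motion feature vectors."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, np.linalg.inv(cov)

def flag_myocardium(features, mu, inv_cov, thresh=16.0):
    """One-class decision: pixels far from the background model
    (in squared Mahalanobis distance) are labelled myocardium."""
    d = features - mu
    m2 = np.einsum('ij,jk,ik->i', d, inv_cov, d)
    return m2 > thresh

rng = np.random.default_rng(1)
# synthetic 4-D features per pixel: [appearance, flow_x, flow_y, flow_mag]
background = rng.normal(0.0, 1.0, size=(500, 4))
myocardium = rng.normal(6.0, 1.0, size=(50, 4))
mu, inv_cov = fit_background_model(background)
pred = flag_myocardium(np.vstack([background, myocardium]), mu, inv_cov)
```

Because only background statistics are modelled, no myocardial training labels are needed, which mirrors the unsupervised premise of the abstract.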

    Multi-Estimator Full Left Ventricle Quantification through Ensemble Learning

    Cardiovascular disease accounts for 1 in every 4 deaths in the United States. Accurate estimation of structural and functional cardiac parameters is crucial for both diagnosis and disease management. In this work, we develop an ensemble learning framework for more accurate and robust left ventricle (LV) quantification. The framework combines two 1st-level modules: a direct estimation module and a segmentation module. The direct estimation module uses a Convolutional Neural Network (CNN) to achieve end-to-end quantification; the CNN is trained with 2D cardiac images as input and cardiac parameters as output. The segmentation module uses a U-Net architecture to obtain pixel-wise predictions of the LV epicardium and endocardium against the background; the binary U-Net output is then analyzed by a separate CNN to estimate the cardiac parameters. We then apply linear regression between the 1st-level predictions and the ground truth to learn a 2nd-level predictor that ensembles the results of the 1st-level modules into the final estimate. Preliminary results from testing the proposed framework on the LVQuan18 dataset show superior performance of the ensemble learning model over the two base modules. Comment: Jiasha Liu, Xiang Li and Hui Ren contribute equally to this work.
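The 2nd-level linear-regression ensemble can be sketched in a few lines of numpy. The two "1st-level" predictors below are toy stand-ins for the CNN and U-Net paths (a fixed offset bias and a scale bias on a synthetic LV-area target); only the fusion step reflects the abstract.

```python
import numpy as np

def fit_second_level(pred_a, pred_b, truth):
    """Learn 2nd-level weights by least-squares regression on 1st-level outputs."""
    X = np.column_stack([pred_a, pred_b, np.ones_like(pred_a)])
    w, *_ = np.linalg.lstsq(X, truth, rcond=None)
    return w

def ensemble(pred_a, pred_b, w):
    """Fuse the two 1st-level predictions with the learned weights."""
    return np.column_stack([pred_a, pred_b, np.ones_like(pred_a)]) @ w

rng = np.random.default_rng(2)
truth = np.linspace(20.0, 60.0, 100)           # toy ground-truth LV areas
direct = truth + 3.0 + rng.normal(0, 1.0, 100)  # "CNN" path: offset bias
seg = 0.9 * truth + rng.normal(0, 1.0, 100)     # "U-Net" path: scale bias

w = fit_second_level(direct, seg, truth)
fused = ensemble(direct, seg, w)
mse = lambda p: float(np.mean((p - truth) ** 2))
```

The linear 2nd-level model can correct complementary biases of the base modules, which is why the fused estimate beats either module alone on this toy data.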

    A Multi-scale Learning of Data-driven and Anatomically Constrained Image Registration for Adult and Fetal Echo Images

    Temporal echo image registration is a basis for clinical quantifications such as cardiac motion estimation, myocardial strain assessment, and stroke volume quantification. Deep learning image registration (DLIR) is consistently accurate, requires less computing effort, and has shown encouraging results in earlier applications. However, we propose that a greater focus on the warped moving image's anatomic plausibility and image quality can support robust DLIR performance. Further, past implementations have focused on adult echo, and there is an absence of DLIR implementations for fetal echo. We propose a framework combining three strategies for DLIR in both fetal and adult echo: (1) an anatomic shape-encoded loss to preserve physiological myocardial and left ventricular anatomical topologies in warped images; (2) a data-driven loss that is trained adversarially to preserve good image texture features in warped images; and (3) a multi-scale training scheme of a data-driven and anatomically constrained algorithm to improve accuracy. Our experiments show that the shape-encoded loss and the data-driven adversarial loss are strongly correlated with good anatomical topology and image textures, respectively. They improve different aspects of registration performance in a non-overlapping way, justifying their combination. We show that these strategies can provide excellent registration results in both adult and fetal echo using the publicly available CAMUS adult echo dataset and our private multi-demographic fetal echo dataset, despite fundamental distinctions between adult and fetal echo images. Our approach also outperforms traditional non-DL gold standard registration approaches, including optical flow and Elastix. Registration improvements could also be translated to more accurate and precise clinical quantification of cardiac ejection fraction, demonstrating a potential for clinical translation.
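A shape-encoded loss of the kind described is commonly realised as a soft Dice term on the warped anatomical mask; the sketch below shows that form combined with similarity and adversarial terms. The weights and the soft-Dice choice are assumptions for illustration, not the paper's actual loss definitions, and the adversarial term is passed in as a precomputed scalar.

```python
import numpy as np

def soft_dice_loss(warped_mask, fixed_mask, eps=1e-6):
    """Shape-encoded term: penalises anatomical overlap mismatch between
    the warped moving mask and the fixed mask (soft Dice)."""
    inter = float(np.sum(warped_mask * fixed_mask))
    return 1.0 - (2.0 * inter + eps) / (warped_mask.sum() + fixed_mask.sum() + eps)

def registration_loss(similarity, shape_term, adversarial_term,
                      w_shape=0.5, w_adv=0.1):
    """Weighted combination of image similarity, shape-encoded, and
    adversarial (texture) losses; the weights here are illustrative."""
    return similarity + w_shape * shape_term + w_adv * adversarial_term

# toy masks: perfect overlap vs. disjoint regions
a = np.zeros((8, 8)); a[2:6, 2:6] = 1.0
b = np.zeros((8, 8)); b[2:6, 2:6] = 1.0
c = np.zeros((8, 8)); c[0:2, 0:2] = 1.0
```

A perfectly warped mask drives the shape term to zero, while an anatomically implausible warp pushes it toward one, steering the registration network toward topology-preserving solutions.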

    Unsupervised image registration towards enhancing performance and explainability in cardiac and brain image analysis

    Magnetic Resonance Imaging (MRI) typically recruits multiple sequences (defined here as “modalities”). As each modality is designed to offer different anatomical and functional clinical information, there are evident disparities in the imaging content across modalities. Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging, for example when imaging biomarkers must be derived and clinically evaluated across different MRI modalities, time phases and slices. Although commonly needed in real clinical scenarios, affine and non-rigid image registration has not been extensively investigated using a single unsupervised model architecture. In our work, we present an unsupervised deep learning registration methodology that can accurately model affine and non-rigid transformations simultaneously. Moreover, inverse-consistency is a fundamental inter-modality registration property that is typically not considered in deep learning registration algorithms. To address inverse consistency, our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations, and involves two factorised transformation networks (one per encoder-decoder channel) and an inverse-consistency loss to learn topology-preserving anatomical transformations. Overall, our model (named “FIRE”) shows improved performance against the reference standard baseline method (Symmetric Normalization, implemented in the ANTs toolbox) on multi-modality brain 2D and 3D MRI and intra-modality cardiac 4D MRI data experiments. We focus on explaining model-data components to enhance model explainability in medical image registration. In computational time experiments, we show that the FIRE model runs in a memory-saving mode, as it can inherently learn topology-preserving image registration directly in the training phase. We therefore demonstrate an efficient and versatile registration technique that can have merit in multi-modal image registration in the clinical setting.
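An inverse-consistency loss is typically built by composing the forward and backward displacement fields and penalising any deviation from the identity. The numpy sketch below is a simplified stand-in for FIRE's loss: it uses nearest-neighbour sampling instead of differentiable bilinear interpolation, and the field values are synthetic.

```python
import numpy as np

def inverse_consistency_loss(u_fwd, u_bwd):
    """u_fwd, u_bwd: (H, W, 2) displacement fields on a pixel grid.
    Samples u_bwd at x + u_fwd(x) with nearest-neighbour lookup and
    penalises deviation of the composed displacement from zero."""
    H, W, _ = u_fwd.shape
    ys, xs = np.mgrid[0:H, 0:W]
    yq = np.clip(np.rint(ys + u_fwd[..., 0]).astype(int), 0, H - 1)
    xq = np.clip(np.rint(xs + u_fwd[..., 1]).astype(int), 0, W - 1)
    residual = u_fwd + u_bwd[yq, xq]  # zero iff fields invert each other
    return float(np.mean(np.sum(residual ** 2, axis=-1)))

H, W = 16, 16
t = np.array([2.0, -1.0])
u_fwd = np.broadcast_to(t, (H, W, 2)).copy()   # constant translation
u_bwd = -u_fwd                                  # its exact inverse
consistent = inverse_consistency_loss(u_fwd, u_bwd)
inconsistent = inverse_consistency_loss(u_fwd, u_fwd)
```

A constant translation and its negation compose to the identity, so the loss vanishes; two identical (non-inverse) fields leave a large residual, which is what the loss penalises during training.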

    MCAL: an anatomical knowledge learning model for myocardial segmentation in 2D echocardiography

    Segmentation of the left ventricular (LV) myocardium in 2D echocardiography is essential for clinical decision making, especially for geometry measurement and index computation. However, segmenting the myocardium is time-consuming as well as challenging because of the fuzzy boundaries caused by low image quality. Previous methods based on deep Convolutional Neural Networks (CNN) either employ the ground-truth label as pixel-level class associations or use label information to regulate the shape of the predicted outputs, approaches that provide limited feature enhancement for 2D echocardiography. We propose a training strategy named multi-constrained aggregate learning (referred to as MCAL), which leverages anatomical knowledge learned from ground-truth labels to infer segmented parts and discriminate boundary pixels. The new framework encourages the model to focus on features in accordance with the learned anatomical representations, and the training objective incorporates a Boundary Distance Transform Weight (BDTW) that assigns higher weight to the boundary region, which helps to improve segmentation accuracy. The proposed method is built as an end-to-end framework with a top-down, bottom-up architecture and skip convolution fusion blocks, and is evaluated on two datasets (our dataset and the public CAMUS dataset). The comparison study shows that the proposed network outperforms the other segmentation baseline models, indicating that our method is beneficial for boundary pixel discrimination in segmentation.
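A boundary-distance weight map of the kind BDTW describes can be sketched as below: an exponential decay of per-pixel weight with distance to the mask boundary. The decay shape, `sigma`, and `w_max` are assumptions for illustration; the paper's exact weighting function is not specified in the abstract. The brute-force distance computation is only suitable for small masks.

```python
import numpy as np

def boundary_weights(mask, sigma=2.0, w_max=4.0):
    """Illustrative boundary-distance weight map: pixels on the mask
    boundary get weight 1 + w_max, decaying toward 1 with distance."""
    H, W = mask.shape
    # boundary pixels: foreground with at least one background 4-neighbour
    pad = np.pad(mask, 1, constant_values=False)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    boundary = mask & ~interior
    by, bx = np.nonzero(boundary)
    ys, xs = np.mgrid[0:H, 0:W]
    # brute-force Euclidean distance from every pixel to the nearest boundary pixel
    d = np.min(np.hypot(ys[..., None] - by, xs[..., None] - bx), axis=-1)
    return 1.0 + w_max * np.exp(-d / sigma)

mask = np.zeros((12, 12), dtype=bool)
mask[3:9, 3:9] = True       # toy myocardium-like region
w = boundary_weights(mask)  # highest on the ring, lowest far away
```

Multiplying a per-pixel segmentation loss by such a map concentrates the training signal on the fuzzy boundary region, which is the stated purpose of the BDTW term.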

    SearchMorph: Multi-scale Correlation Iterative Network for Deformable Registration

    Deformable image registration can recover dynamic information from images, which is of great significance in medical image analysis. Unsupervised deep learning registration methods can quickly achieve high registration accuracy without labels. However, these methods generally suffer from uncorrelated features, a poor ability to register large deformations and fine details, and unnatural deformation fields. To address these issues, we propose an unsupervised multi-scale correlation iterative registration network (SearchMorph). In the proposed network, we introduce a correlation layer to strengthen the relevance between features and construct a correlation pyramid to provide multi-scale relevance information to the network. We also design a deformation field iterator that improves the model's ability to register details and large deformations through a search module and a GRU, while ensuring that the deformation field remains realistic. We use single-temporal brain MR images and multi-temporal echocardiographic sequences to evaluate the model's ability to register large deformations and details. The experimental results demonstrate that our method achieves the highest registration accuracy and the lowest folding-point ratio, with a shorter elapsed time than state-of-the-art methods.
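The correlation layer mentioned above can be sketched as a local correlation volume: for every pixel, dot products between one feature map and shifted copies of the other over a small search window. This is a generic, numpy-only illustration of the concept (as popularised by correlation-volume registration and flow networks), not SearchMorph's actual layer; the window radius and feature sizes are arbitrary.

```python
import numpy as np

def correlation_layer(feat1, feat2, radius=1):
    """Local correlation volume between feature maps of shape (H, W, C):
    one output channel per shift (dy, dx) in a (2r+1)^2 search window,
    normalised by sqrt(C)."""
    H, W, C = feat1.shape
    channels = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(feat2, (dy, dx), axis=(0, 1))
            channels.append(np.sum(feat1 * shifted, axis=-1))
    return np.stack(channels, axis=-1) / np.sqrt(C)

rng = np.random.default_rng(3)
feat = rng.normal(size=(16, 16, 8))
corr = correlation_layer(feat, feat)  # self-correlation: 9 channels,
                                      # channel 4 is the zero-shift match
```

Stacking such volumes across resolutions yields the correlation pyramid, and a recurrent update (the GRU in the abstract) can then iteratively refine the deformation field by searching this volume for the best local match.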