
    A Multi-scale Learning of Data-driven and Anatomically Constrained Image Registration for Adult and Fetal Echo Images

    Temporal echo image registration is a basis for clinical quantifications such as cardiac motion estimation, myocardial strain assessment, and stroke volume quantification. Deep learning image registration (DLIR) is consistently accurate, requires less computing effort, and has shown encouraging results in earlier applications. However, we propose that a greater focus on the anatomic plausibility and image quality of the warped moving image can support robust DLIR performance. Further, past implementations have focused on adult echo, and there is an absence of DLIR implementations for fetal echo. We propose a framework combining three strategies for DLIR in both fetal and adult echo: (1) an anatomic shape-encoded loss to preserve physiological myocardial and left ventricular anatomical topologies in warped images; (2) a data-driven loss, trained adversarially, to preserve good image texture features in warped images; and (3) a multi-scale training scheme of a data-driven and anatomically constrained algorithm to improve accuracy. Our experiments show that the shape-encoded loss and the data-driven adversarial loss are strongly correlated with good anatomical topology and image texture, respectively. They improve different aspects of registration performance in a non-overlapping way, justifying their combination. We show that these strategies can provide excellent registration results in both adult and fetal echo using the publicly available CAMUS adult echo dataset and our private multi-demographic fetal echo dataset, despite fundamental distinctions between adult and fetal echo images. Our approach also outperforms traditional non-DL gold-standard registration approaches, including Optical Flow and Elastix. The registration improvements also translate to more accurate and precise clinical quantification of cardiac ejection fraction, demonstrating a potential for clinical translation.
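    As a rough illustration of how such a combined objective can be assembled, the sketch below defines a composite registration loss with an image-similarity term, a shape term on warped anatomical masks (soft Dice), an adversarial texture term, and a smoothness regulariser on the displacement field. This is a minimal PyTorch sketch under assumed tensor layouts and weight names (w_sim, w_shape, w_adv, w_smooth); it is not the authors' implementation.

    # Minimal sketch (not the paper's code) of a composite DLIR training loss.
    # Assumed shapes: images/masks (B, 1, H, W); flow (B, 2, H, W);
    # disc_logits_on_warped are discriminator logits for the warped image.
    import torch
    import torch.nn.functional as F

    def soft_dice_loss(pred_mask, target_mask, eps=1e-6):
        """Soft Dice loss between a warped moving mask and the fixed-frame mask."""
        inter = (pred_mask * target_mask).sum()
        union = pred_mask.sum() + target_mask.sum()
        return 1.0 - (2.0 * inter + eps) / (union + eps)

    def registration_loss(warped_img, fixed_img, warped_mask, fixed_mask,
                          disc_logits_on_warped, flow,
                          w_sim=1.0, w_shape=0.5, w_adv=0.1, w_smooth=0.01):
        # Image similarity between the warped moving frame and the fixed frame.
        sim = F.mse_loss(warped_img, fixed_img)
        # Shape-encoded term: warped myocardial/LV mask should match the fixed mask.
        shape = soft_dice_loss(warped_mask, fixed_mask)
        # Adversarial term: push the discriminator to rate the warped texture as real.
        adv = F.binary_cross_entropy_with_logits(
            disc_logits_on_warped, torch.ones_like(disc_logits_on_warped))
        # Smoothness regulariser on the displacement field (finite differences).
        smooth = ((flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
                  + (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean())
        return w_sim * sim + w_shape * shape + w_adv * adv + w_smooth * smooth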

    Multi-modality cardiac image computing: a survey

    Multi-modality cardiac imaging plays a key role in the management of patients with cardiovascular diseases. It allows a combination of complementary anatomical, morphological and functional information, increases diagnosis accuracy, and improves the efficacy of cardiovascular interventions and clinical outcomes. Fully-automated processing and quantitative analysis of multi-modality cardiac images could have a direct impact on clinical research and evidence-based patient management. However, these require overcoming significant challenges, including inter-modality misalignment and finding optimal methods to integrate information from different modalities. This paper aims to provide a comprehensive review of multi-modality imaging in cardiology, the computing methods, the validation strategies, the related clinical workflows and future perspectives. For the computing methodologies, we place particular focus on three tasks, i.e., registration, fusion and segmentation, which generally involve multi-modality imaging data, either combining information from different modalities or transferring information across modalities. The review highlights that multi-modality cardiac imaging data have the potential for wide applicability in the clinic, such as trans-aortic valve implantation guidance, myocardial viability assessment, and catheter ablation therapy and its patient selection. Nevertheless, many challenges remain unsolved, such as missing modalities, modality selection, combination of imaging and non-imaging data, and uniform analysis and representation of different modalities. Work also remains in defining how well-developed techniques fit into clinical workflows and how much additional and relevant information they introduce. These problems are likely to remain an active field of research, with questions still to be answered in the future.

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply to medicine in general, while proposing certain directions as the most viable for clinical use.
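    To make the layered, non-linear transformation idea concrete, the toy PyTorch model below stacks Linear + ReLU layers so that each layer re-represents the output of the previous one. It is an illustrative sketch only, not a model from the review, and the layer sizes and input dimension are arbitrary assumptions.

    # Toy example of hierarchical representation learning: each Linear + ReLU
    # pair is one non-linear transformation of the data, and stacking them
    # lets later layers build on features produced by earlier ones.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(12, 64), nn.ReLU(),   # low-level features from 12 raw inputs
        nn.Linear(64, 32), nn.ReLU(),   # more abstract intermediate features
        nn.Linear(32, 2),               # logits for a binary clinical label
    )

    x = torch.randn(8, 12)              # dummy batch of 8 records
    print(model(x).shape)               # torch.Size([8, 2])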

    Augmenting CT cardiac roadmaps with segmented streaming ultrasound

    Static X-ray computed tomography (CT) volumes are often used as anatomic roadmaps during catheter-based cardiac interventions performed under X-ray fluoroscopy guidance. These CT volumes provide a high-resolution depiction of soft-tissue structures, but at only a single point within the cardiac and respiratory cycles. Augmenting these static CT roadmaps with segmented myocardial borders extracted from live ultrasound (US) provides intra-operative access to real-time dynamic information about the cardiac anatomy. In this work, using a customized segmentation method based on a 3D active mesh, endocardial borders of the left ventricle were extracted from US image streams (4D data sets) at a frame rate of approximately 5 frames per second. The coordinate systems of the CT and US modalities were registered by rigid-body registration based on manually selected landmarks, and the segmented endocardial surfaces were overlaid onto the CT volume. The root-mean-square fiducial registration error was 3.80 mm. The accuracy of the segmentation was quantitatively evaluated in phantom and human volunteer studies via comparison with manual tracings on 9 randomly selected frames using a finite-element model (the US image resolutions of the phantom and volunteer data were 1.3 x 1.1 x 1.3 mm and 0.70 x 0.82 x 0.77 mm, respectively). This comparison yielded a root-mean-square error (RMSE) of 3.70±2.5 mm (approximately 3 pixels) in the phantom study and 2.58±1.58 mm (approximately 3 pixels) in the clinical study. The combination of static anatomical roadmap volumes and dynamic intra-operative anatomic information will enable better guidance and feedback for image-guided minimally invasive cardiac interventions.
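    The landmark-based rigid registration and fiducial error reported above can be sketched as follows: a least-squares rotation and translation are estimated from paired landmarks with the Kabsch algorithm, and the root-mean-square fiducial registration error is the RMS distance between the mapped US landmarks and their CT counterparts. This is a minimal NumPy sketch, not the authors' code, and the landmark coordinates below are made-up placeholders.

    # Minimal sketch of landmark-based rigid-body registration (Kabsch algorithm)
    # between the US and CT coordinate systems, plus the RMS fiducial error.
    import numpy as np

    def rigid_register(us_pts, ct_pts):
        """Least-squares rigid transform (R, t) mapping us_pts (Nx3) onto ct_pts (Nx3)."""
        us_c = us_pts - us_pts.mean(axis=0)
        ct_c = ct_pts - ct_pts.mean(axis=0)
        H = us_c.T @ ct_c                               # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                              # proper rotation, det(R) = +1
        t = ct_pts.mean(axis=0) - R @ us_pts.mean(axis=0)
        return R, t

    def fiducial_rmse(us_pts, ct_pts, R, t):
        """RMS distance between transformed US landmarks and the CT landmarks."""
        mapped = us_pts @ R.T + t
        return float(np.sqrt(((mapped - ct_pts) ** 2).sum(axis=1).mean()))

    # Hypothetical manually selected landmark pairs (millimetres).
    us = np.array([[10.0, 22.0, 5.0], [40.0, 18.0, 9.0],
                   [25.0, 60.0, 14.0], [33.0, 41.0, 2.0]])
    ct = np.array([[12.5, 20.1, 6.2], [42.3, 17.0, 10.1],
                   [27.1, 58.8, 15.0], [35.2, 39.5, 3.3]])
    R, t = rigid_register(us, ct)
    print("RMS fiducial registration error (mm):", fiducial_rmse(us, ct, R, t))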

    Artificial intelligence and automation in valvular heart diseases

    Artificial intelligence (AI) is gradually changing every aspect of social life, and healthcare is no exception. Clinical procedures that could previously be handled only by human experts can now be carried out by machines in a more accurate and efficient way. The coming era of big data and the advent of supercomputers provide great opportunities for the development of AI technology to enhance diagnosis and clinical decision-making. This review provides an introduction to AI and highlights its applications in the clinical flow of diagnosing and treating valvular heart diseases (VHDs). More specifically, this review first introduces some key concepts and subareas in AI. Secondly, it discusses the application of AI in heart sound auscultation and medical image analysis for assistance in diagnosing VHDs. Thirdly, it describes the use of AI algorithms to identify risk factors and predict mortality in cardiac surgery. This review also describes state-of-the-art autonomous surgical robots and their roles in cardiac surgery and intervention.

    Post-processing approaches for the improvement of cardiac ultrasound B-mode images: a review
