
    Image-Based Cardiac Diagnosis With Machine Learning: A Review

    Cardiac imaging plays an important role in the diagnosis of cardiovascular disease (CVD). Until now, its role has been limited to the visual and quantitative assessment of cardiac structure and function. However, with the advent of big data and machine learning, new opportunities are emerging to build artificial intelligence tools that directly assist the clinician in the diagnosis of CVDs. This paper presents a thorough review of recent work in this field and provides the reader with a detailed presentation of the machine learning methods that can be further exploited to enable more automated, precise and early diagnosis of most CVDs.

    An improved classification approach for echocardiograms embedding temporal information

    Cardiovascular disease is an umbrella term for all diseases of the heart. At present, computer-aided echocardiogram diagnosis is becoming increasingly beneficial. In echocardiography, different cardiac views can be acquired depending on the location and angulation of the ultrasound transducer. Hence, automatic view classification is the first step in echocardiogram diagnosis, especially for computer-aided systems and, in the future, fully automatic diagnosis. In addition, view classification makes it possible to label images, especially in large-scale echo video collections, and facilitates database management and curation. This thesis presents a framework for automatic cardiac viewpoint classification of echocardiogram video data. In this research, we aim to overcome the challenges of analyzing, recognizing and classifying echocardiogram videos in 3D (2D spatial and 1D temporal) space. Specifically, we extend the 2D KAZE approach into 3D space for feature detection and propose a histogram of acceleration as the feature descriptor. Feature encoding follows, before an SVM is applied to classify the echo videos. We also compare against state-of-the-art methodologies, including 2D SIFT, 3D SIFT, and an optical flow technique that extracts the temporal information contained in the video images. As a result, 2D KAZE, 2D KAZE with optical flow, 3D KAZE, optical flow, 2D SIFT and 3D SIFT deliver accuracy rates of 89.4%, 84.3%, 87.9%, 79.4%, 83.8% and 73.8%, respectively, on the eight view classes of echo videos.
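The encoding step mentioned above (local spatio-temporal descriptors pooled into a fixed-length vector before the SVM) can be sketched as a minimal bag-of-visual-words histogram. The codebook size, descriptor dimensionality, and random data below are illustrative assumptions, not the thesis's actual 3D KAZE pipeline.

```python
import numpy as np

def encode_bovw(descriptors, codebook):
    """Encode a set of local descriptors as a normalized
    bag-of-visual-words histogram over a fixed codebook."""
    # Assign each descriptor to its nearest codeword (Euclidean distance).
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))            # 16 codewords, 8-D descriptors (assumed)
video_descriptors = rng.normal(size=(200, 8))  # stand-in for keypoint descriptors of one video
h = encode_bovw(video_descriptors, codebook)   # fixed-length feature vector for the classifier
```

The resulting histogram `h` is what a per-video classifier such as an SVM would consume, regardless of which detector (2D KAZE, 3D KAZE, SIFT) produced the descriptors.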

    Developing Ultrasound-Based Computer-Aided Diagnostic Systems Through Statistical Pattern Recognition

    Computer-aided diagnosis (CAD) is the use of computer software to help physicians better interpret medical images. CAD systems can be viewed as pattern recognition algorithms that identify suspicious signs on a medical image and complement physicians' judgments by reducing inter-/intra-observer variability and subjectivity. The CAD systems proposed in this thesis are designed on the statistical approach to pattern recognition, the most successfully used technique in practice. The main focus of this thesis is on designing (new) feature extraction and classification algorithms for ultrasound-based CAD. Ultrasound imaging has a broad range of medical applications because it is safe (it does not use harmful ionizing radiation), provides clinicians with real-time images, and is portable and relatively cheap. The thesis develops new ultrasound-based systems for the diagnosis of prostate cancer (PCa) and myocardial infarction (MI), addressed in two separate parts. In the first part, 1) a new CAD system was designed for prostate cancer biopsy, focusing on handling uncertainties in the labels of the ground-truth data; 2) the appropriateness of the independent component analysis (ICA) method for learning features from radio-frequency (RF) signals backscattered from prostate tissue was examined; and 3) a new ensemble scheme for learning ICA dictionaries from RF signals backscattered from a tissue-mimicking phantom was proposed. In the second part, 1) principal component analysis (PCA) was used for the statistical modeling of the temporal deformation patterns of the left ventricle (LV) to detect abnormalities in its regional function; 2) a spatio-temporal representation of LV function based on PCA parameters was proposed to detect MI; and 3) a local-to-global statistical shape model based on PCA was presented to detect MI.
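The PCA-based modeling of LV deformation in the second part can be illustrated with a minimal reconstruction-error sketch: a curve far from the subspace spanned by the "normal" deformation modes scores a higher residual and can be flagged as abnormal. The synthetic curves, component count, and residual measure below are assumptions for illustration, not the thesis's actual model.

```python
import numpy as np

def fit_pca(X, k):
    """Fit a k-component PCA model via SVD on training curves X (n_samples, n_points)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                      # mean curve and principal deformation modes

def reconstruction_error(x, mean, modes):
    """Project a curve onto the PCA subspace and return the residual norm."""
    coeffs = (x - mean) @ modes.T
    recon = mean + coeffs @ modes
    return np.linalg.norm(x - recon)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
# Synthetic "normal" deformation curves: a common shape plus small variation.
normal = np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=(100, 50))
mean, modes = fit_pca(normal, k=3)
healthy = np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=50)
abnormal = 0.3 * np.sin(2 * np.pi * t)       # blunted deformation, e.g. a hypokinetic segment
e_healthy = reconstruction_error(healthy, mean, modes)
e_abnormal = reconstruction_error(abnormal, mean, modes)
```

Thresholding the residual (or the PCA coefficients themselves) then separates normal from abnormal regional function.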

    Diagnosis and prognosis of cardiovascular diseases by means of texture analysis in magnetic resonance imaging

    Cardiovascular diseases constitute the leading global cause of morbidity and mortality. Magnetic resonance imaging (MRI) has become the gold-standard technique for the assessment of patients with myocardial infarction. However, limitations still exist, so new alternatives are open to investigation. Texture analysis is a technique that aims to quantify textural patterns in images that are not always perceptible to the human eye. It has been successfully applied in medical imaging, but applications to cardiac MRI (CMR) are still scarce. Therefore, the purpose of this thesis was to apply texture analysis to conventional CMR images for the assessment of patients with myocardial infarction, as an alternative to current methods. Three applications of texture analysis and machine learning techniques were studied: i) Detection of infarcted myocardium in late gadolinium enhancement (LGE) CMR. Segmentation of the infarcted myocardium is routinely performed using image intensity thresholds. The inclusion of texture features to aid the segmentation was analyzed, obtaining overall good results. The method was developed using 10 LGE CMR datasets and tested on a separate dataset comprising 5 cases acquired with a completely different scanner from that used for training. This preliminary study therefore showed the transferability of texture analysis, which is important for clinical applicability. ii) Differentiation of acute and chronic myocardial infarction using LGE CMR and standard pre-contrast cine CMR. In this study, two feature selection techniques and six machine learning classifiers were compared. The best classification was achieved with a polynomial SVM, obtaining an overall AUC of 0.87 ± 0.06 on LGE CMR. Interestingly, results on cine CMR, in which infarctions are visually imperceptible in most cases, were also good (AUC = 0.83 ± 0.08). iii) Detection of infarcted non-viable segments in cine CMR. This study was motivated by the findings of the previous one. It demonstrated that texture analysis can distinguish non-viable, viable and remote segments using standard pre-contrast cine CMR alone. This was the most relevant contribution of this thesis, as it provides a hypothesis for future work aiming to accurately delineate the infarcted myocardium as a gadolinium-free alternative with potential advantages. The three proposed applications were successfully performed, obtaining promising results. In conclusion, texture analysis can be successfully applied to conventional CMR images and provides a potential quantitative alternative to existing methods.
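As a minimal sketch of the kind of texture feature involved, the snippet below computes a gray-level co-occurrence matrix (GLCM) contrast, one of the classical Haralick features: homogeneous patches score near zero, rapidly varying patches score high. The pixel offset, number of gray levels, and test patches are illustrative assumptions, not the thesis's exact feature set.

```python
import numpy as np

def glcm_contrast(img, levels=8, dx=1, dy=0):
    """GLCM contrast for one pixel offset (dx, dy).
    img: 2D array of integer gray levels in [0, levels)."""
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    # Count co-occurrences of gray levels at the given offset.
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                       # normalize to a joint probability
    i, j = np.indices((levels, levels))
    return ((i - j) ** 2 * glcm).sum()       # weight by squared gray-level difference

flat = np.zeros((16, 16), dtype=int)          # homogeneous patch
varied = np.arange(256).reshape(16, 16) % 8   # rapidly varying patch
c_flat = glcm_contrast(flat)
c_varied = glcm_contrast(varied)
```

Feeding such per-segment texture features into a classifier (e.g. an SVM, as in the thesis) is what allows tissue classes that look identical to the eye to be separated.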

    Statistical and Graph-Based Signal Processing: Fundamental Results and Application to Cardiac Electrophysiology

    The goal of cardiac electrophysiology is to obtain information about the mechanism, function, and performance of the electrical activity of the heart, to identify deviations from the normal pattern, and to design treatments. By offering better insight into the comprehension and management of cardiac arrhythmias, signal processing can help the physician enhance treatment strategies, in particular in the case of atrial fibrillation (AF), a very common atrial arrhythmia associated with significant morbidities, such as increased risk of mortality, heart failure, and thromboembolic events. Catheter ablation of AF is a therapeutic technique that uses radiofrequency energy to destroy the atrial tissue involved in sustaining the arrhythmia, typically aiming at the electrical disconnection of the pulmonary vein triggers. However, the recurrence rate is still very high, showing that the very complex and heterogeneous nature of AF still represents a challenging problem. Leveraging the tools of non-stationary and statistical signal processing, the first part of our work has a twofold focus. Firstly, we compare the performance of two ablation technologies, based on contact-force sensing or remote magnetic control, using signal-based criteria as surrogates for lesion assessment; we also investigate the role of ablation parameters in lesion formation using late-gadolinium-enhanced magnetic resonance imaging. Secondly, we hypothesize that in human atria the frequency content of the bipolar signal is directly related to the local conduction velocity (CV), a key parameter characterizing substrate abnormality and influencing atrial arrhythmias. Comparing the degree of spectral compression among signals recorded at different points of the endocardial surface in response to a decreasing pacing rate, our experimental data demonstrate a significant correlation between CV and the corresponding spectral centroids. However, the complex spatio-temporal propagation patterns characterizing AF spurred the need for new signal acquisition and processing methods. Multi-electrode catheters allow whole-chamber panoramic mapping of electrical activity but produce an amount of data that needs to be preprocessed and analyzed to provide clinically relevant support to the physician. Graph signal processing (GSP) has shown its potential in a variety of applications involving high-dimensional data on irregular domains and complex networks. Nevertheless, though state-of-the-art graph-based methods have been successful for many tasks, so far they predominantly ignore the time dimension of the data. To address this shortcoming, in the second part of this dissertation we put forth a Time-Vertex Signal Processing framework, as a particular case of multi-dimensional graph signal processing. Linking time-domain signal processing techniques with the tools of GSP, time-vertex signal processing facilitates the analysis of graph-structured data that also evolve in time. We motivate our framework through the notion of partial differential equations on graphs. We introduce joint operators, such as time-vertex localization, and present a novel approach that significantly improves the accuracy of fast joint filtering. We also illustrate how to build time-vertex dictionaries, providing conditions for efficient invertibility and examples of constructions. Experimental results on a variety of datasets suggest that the proposed tools can bring significant benefits to various signal processing and learning tasks involving time series on graphs. We close the gap between the two parts by illustrating the application of graph and time-vertex signal processing to the challenging case of multi-channel intracardiac signals.
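The joint time-vertex transform idea can be sketched in a few lines: a graph Fourier transform (Laplacian eigenbasis) along the vertex dimension combined with a DFT along time. The toy path graph (e.g. electrodes along a catheter) and constant test signal below are illustrative assumptions, not the dissertation's actual operators.

```python
import numpy as np

def joint_fourier(X, L):
    """Joint time-vertex Fourier transform of X (n_vertices, n_times):
    graph Fourier transform along vertices, DFT along time."""
    lam, U = np.linalg.eigh(L)           # Laplacian eigenpairs = graph frequencies/modes
    return np.fft.fft(U.T @ X, axis=1), lam, U

# Path graph on 4 vertices, 8 time samples.
A = np.zeros((4, 4))
for i in range(3):
    A[i, i + 1] = A[i + 1, i] = 1
L = np.diag(A.sum(axis=1)) - A           # combinatorial graph Laplacian
X = np.ones((4, 8))                      # signal constant in both space and time
Xhat, lam, U = joint_fourier(X, L)
energy = np.abs(Xhat) ** 2
# A constant signal concentrates all energy at zero graph frequency, zero temporal frequency.
```

Joint filtering then amounts to reweighting `Xhat` as a function of both the graph frequency `lam` and the temporal frequency before transforming back.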

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound (US) acquisition is widespread in the biomedical field, due to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allow the physician to monitor the evolution of the patient's disease and support diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that change quickly over time. Denoising and super-resolution of US signals are relevant to improving the visual evaluation of the physician and the performance and accuracy of processing methods, such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of US 2D images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition. While previous denoising work compromises between computational cost and effectiveness, the proposed framework matches the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. We then introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolation method; a deep learning model improves the results of the interpolation to match the target (i.e., high-resolution) image. We improve the accuracy of the prediction of the reconstructed lines through the design of the network architecture and the loss function. In the context of signal approximation, we introduce a kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to US 2D and 3D images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, with a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirements of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
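The SVD-thresholding building block can be sketched as follows. Here the threshold is a fixed assumed value, whereas the thesis's method learns and predicts the optimal thresholds; the synthetic rank-1 "image" and noise level are also illustrative assumptions.

```python
import numpy as np

def svd_denoise(img, threshold):
    """Low-rank denoising: zero out singular values below a threshold.
    Small singular values mostly carry noise; large ones carry structure."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    s = np.where(s >= threshold, s, 0.0)
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(2)
# Synthetic rank-1 "image" plus additive noise (stand-in for speckle).
clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
noisy = clean + 0.1 * rng.normal(size=(64, 64))
denoised = svd_denoise(noisy, threshold=2.0)
err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)
```

Choosing the threshold is exactly the hard part: too low keeps noise, too high destroys anatomical detail, which motivates predicting it with a learned model instead of fixing it.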