36 research outputs found

    Automated Method for the Volumetric Evaluation of Myocardial Scar from Cardiac Magnetic Resonance Images

    In most Western countries, cardiovascular diseases are the leading cause of death, and for survivors of an ischemic attack an accurate quantification of the extent of the damage is required to assess its impact, stratify risk, and select the best treatment for the patient. Moreover, a fast and reliable tool for assessing cardiac function and measuring clinical indexes is highly desirable. The aim of this thesis is to provide computational approaches to better detect and assess the presence of myocardial fibrosis in the heart, particularly but not exclusively in the left ventricle, by fusing information from different magnetic resonance imaging sequences. We also developed a semi-automatic tool for the fast evaluation and quantification of clinical indexes derived from heart chamber volumes. The thesis is composed of five chapters. The first chapter introduces the most common cardiac diseases, such as ischemic cardiomyopathy, and describes in detail the cellular and structural remodelling phenomena stemming from heart failure. The second chapter regards the detection of the left ventricle through the development of a semi-automated approach for both endocardial and epicardial surfaces, and myocardial mask extraction. In the third chapter the workflow for scar assessment is presented, in which the previously described approach is used to obtain the 3D left ventricle patient-specific geometry; a registration algorithm is then used to superimpose the fibrosis information derived from late gadolinium enhancement magnetic resonance imaging, obtaining a patient-specific 3D map of fibrosis extent and location on the left ventricle myocardium. The fourth chapter focuses on the left atrium and on fibrotic tissue detection, gaining insight into atrial fibrillation. In the fifth chapter some concluding remarks are presented, together with possible future developments of the presented work.
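    As a rough illustration of the registration step described in the third chapter, the minimal sketch below rigidly registers an LGE volume to a cine volume and resamples the fibrosis mask onto the cine-derived geometry using SimpleITK; the library choice, registration settings and file names are assumptions, not the thesis's actual implementation.

    # Minimal sketch (assumptions: SimpleITK, placeholder file names) of mapping an
    # LGE-derived scar mask onto a cine-MRI volume via rigid registration.
    import SimpleITK as sitk

    fixed = sitk.ReadImage("cine_volume.nii.gz", sitk.sitkFloat32)
    moving = sitk.ReadImage("lge_volume.nii.gz", sitk.sitkFloat32)
    scar_mask = sitk.ReadImage("lge_scar_mask.nii.gz", sitk.sitkUInt8)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                 minStep=1e-4,
                                                 numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)

    # Resample the scar mask into the cine space with nearest-neighbour interpolation
    # so that labels are preserved on the patient-specific LV geometry.
    scar_on_geometry = sitk.Resample(scar_mask, fixed, transform,
                                     sitk.sitkNearestNeighbor, 0, sitk.sitkUInt8)
    sitk.WriteImage(scar_on_geometry, "scar_on_cine_geometry.nii.gz")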

    Artificial neural network for atrial fibrillation identification in portable devices

    Atrial fibrillation (AF) is a common cardiac disorder that can cause severe complications. AF diagnosis is typically based on electrocardiogram (ECG) evaluation in hospitals or clinical facilities. The aim of the present work is to propose a new artificial neural network for reliable AF identification in ECGs acquired through portable devices. A supervised fully connected artificial neural network (RSL_ANN), receiving 19 ECG features (11 morphological, 4 on F waves and 4 on heart-rate variability (HRV)) as input and discriminating between AF and non-AF classes in output, was created using the repeated structuring and learning (RSL) procedure. RSL_ANN was created and tested on 8028 annotated ECGs (training: 4493; validation: 1125; testing: 2410) belonging to the “AF Classification from a Short Single Lead ECG Recording” database and acquired with the portable KARDIA device by AliveCor. RSL_ANN performance was evaluated in terms of area under the curve (AUC) and confidence intervals (CIs) of the receiver operating characteristic. RSL_ANN performance was very good and very similar across the training, validation and testing datasets. AUC was 91.1% (CI: 89.1%–93.0%), 90.2% (CI: 86.2%–94.3%) and 90.8% (CI: 88.1%–93.5%) for the training, validation and testing datasets, respectively. Thus, RSL_ANN is a promising tool for reliable identification of AF in ECGs acquired by portable devices.
    Marinucci D.; Sbrollini A.; Marcantoni I.; Morettini M.; Swenne C.A.; Burattini L.
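    The RSL structuring procedure itself is not reproduced here; as a hedged sketch of the general setup, the snippet below trains a plain fully connected classifier on a 19-feature input and scores it with the ROC AUC. The layer sizes, synthetic labels and placeholder feature matrix are assumptions.

    # Hedged sketch: generic fully connected classifier on 19 ECG-derived features
    # (11 morphological, 4 F-wave, 4 HRV), evaluated with ROC AUC. Placeholder data.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(8028, 19))              # placeholder feature matrix
    y = (X[:, 0] + X[:, 1] > 0).astype(int)      # synthetic AF / non-AF labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)

    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"testing AUC: {auc:.3f}")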

    Electro-mechanical whole-heart digital twins: A fully coupled multi-physics approach

    Mathematical models of the human heart are evolving to become a cornerstone of precision medicine and to support clinical decision making by providing a powerful tool to understand the mechanisms underlying pathophysiological conditions. In this study, we present a detailed mathematical description of a fully coupled multi-scale model of the human heart, including electrophysiology, mechanics, and a closed-loop model of circulation. State-of-the-art models based on human physiology are used to describe membrane kinetics, excitation-contraction coupling and active tension generation in the atria and the ventricles. Furthermore, we highlight ways to adapt this framework to patient-specific measurements to build digital twins. The validity of the model is demonstrated through simulations on a personalized whole-heart geometry based on magnetic resonance imaging data of a healthy volunteer. Additionally, the fully coupled model was employed to evaluate the effects of a typical atrial ablation scar on the cardiovascular system. With this work, we provide an adaptable multi-scale model that allows comprehensive personalization from the ion channel to the organ level, enabling digital twin modeling.
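    The full electro-mechanical model is far beyond a short example, but closed-loop circulation models are built from lumped-parameter elements such as the one sketched below: a two-element Windkessel afterload integrated with explicit Euler. The parameter values and the inflow waveform are illustrative assumptions only, not the model of the paper.

    # Minimal sketch of a two-element Windkessel afterload, a typical building block
    # of lumped-parameter closed-loop circulation models. All numbers are illustrative.
    import numpy as np

    R = 1.0     # peripheral resistance [mmHg*s/mL]
    C = 1.5     # arterial compliance   [mL/mmHg]
    dt = 1e-3   # time step [s]
    t = np.arange(0.0, 5.0, dt)

    # Placeholder aortic inflow: half-sine ejection during the first 0.3 s of each 0.8 s beat.
    q_in = np.where((t % 0.8) < 0.3, 400.0 * np.sin(np.pi * (t % 0.8) / 0.3), 0.0)

    p = np.zeros_like(t)
    p[0] = 80.0  # initial arterial pressure [mmHg]
    for k in range(len(t) - 1):
        # C * dp/dt = q_in - p/R  (flow balance at the arterial node)
        dpdt = (q_in[k] - p[k] / R) / C
        p[k + 1] = p[k] + dt * dpdt

    print(f"simulated arterial pressure range: {p.min():.1f}-{p.max():.1f} mmHg")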

    Extended segmented beat modulation method for cardiac beat classification and electrocardiogram denoising

    Beat classification and denoising are two challenging and fundamental operations when processing digital electrocardiograms (ECG). This paper proposes the extended segmented beat modulation method (ESBMM) as a tool for automatic beat classification and ECG denoising. ESBMM includes four main steps: (1) beat identification and segmentation into PQRS and TU segments; (2) wavelet-based time-frequency feature extraction; (3) convolutional neural network-based classification to discriminate among normal (N), supraventricular (S), and ventricular (V) beats; and (4) a template-based denoising procedure. ESBMM was tested using the MIT–BIH arrhythmia database available at Physionet. Overall, the classification accuracy was 91.5%, while the positive predictive values were 92.8%, 95.6%, and 83.6% for the N, S, and V classes, respectively. The signal-to-noise ratio improvement after filtering was between 0.15 dB and 2.66 dB, with a median value of 0.99 dB, which is significantly higher than 0 (p < 0.05). Thus, ESBMM proved to be a reliable tool to classify cardiac beats into the N, S, and V classes and to denoise ECG tracings.
    Nasim A.; Sbrollini A.; Morettini M.; Burattini L.
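    Step (4) relies on a beat template; the sketch below illustrates only the underlying idea, using a median template computed from R-peak-aligned beats. The actual ESBMM additionally stretches the PQRS and TU segments to each beat's own RR interval, which is not reproduced here, and the function and its arguments are assumptions.

    # Hedged sketch of template-based ECG denoising: align beats on R peaks,
    # take the median as a noise-robust template, and substitute it per beat.
    import numpy as np

    def template_denoise(ecg, r_peaks, half_window):
        """ecg: 1-D signal; r_peaks: R-peak sample indices; half_window: samples."""
        beats = np.array([ecg[r - half_window:r + half_window]
                          for r in r_peaks
                          if r - half_window >= 0 and r + half_window <= len(ecg)])
        template = np.median(beats, axis=0)      # noise-robust beat template
        denoised = ecg.copy()
        for r in r_peaks:
            if r - half_window >= 0 and r + half_window <= len(ecg):
                denoised[r - half_window:r + half_window] = template
        return denoised, template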

    Effects of Atrial Fibrillation on the Coronary Flow at Different Heart Rates: A Computational Approach

    Atrial fibrillation (AF) has several effects on the responses of the cardiovascular system. This study focuses on the consequences of AF for coronary blood flow, exploiting a computational approach. A total of 2000 heartbeat periods (RR intervals) were simulated for 5 different mean heart rates (HR), ranging from 50 to 130 bpm. The resulting flow-rate signals at the coronary level were analysed through a specific set of hemodynamic parameters. Three main results emerge during AF: (i) maximal coronary flow rates vary with HR, (ii) coronary perfusion begins to be impaired above 90-110 bpm, and (iii) coronary perfusion pressure is not a good estimate of coronary blood flow at HRs higher than 90-110 bpm.
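    The lumped-parameter cardiovascular model of the study is not reproduced here; assuming a coronary flow-rate signal and beat onsets are available, per-beat hemodynamic indices of the kind analysed above can be computed as in the following hedged sketch, where the signal, sampling rate and onsets are placeholders.

    # Hedged sketch: per-beat mean and peak coronary flow from a flow-rate signal.
    import numpy as np

    fs = 1000.0                                        # sampling frequency [Hz]
    t = np.arange(0.0, 10.0, 1.0 / fs)
    q = 1.0 + 0.5 * np.sin(2 * np.pi * 1.2 * t)        # placeholder coronary flow [mL/s]
    beat_onsets = np.arange(0, len(t), int(0.8 * fs))  # placeholder RR of 0.8 s

    per_beat_mean = [q[a:b].mean() for a, b in zip(beat_onsets[:-1], beat_onsets[1:])]
    per_beat_peak = [q[a:b].max() for a, b in zip(beat_onsets[:-1], beat_onsets[1:])]
    print(f"mean flow per beat: {np.mean(per_beat_mean):.2f} mL/s, "
          f"peak flow per beat: {np.mean(per_beat_peak):.2f} mL/s")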

    Non-invasive estimation of QLV from the standard 12-lead ECG in patients with left bundle branch block

    Background: Cardiac resynchronization therapy (CRT) is a treatment for patients with heart failure and electrical dyssynchrony, i.e., a left bundle branch block (LBBB) ECG pattern. CRT resynchronizes ventricular contraction with a right ventricle (RV) and a left ventricle (LV) pacemaker lead. Positioning the LV lead in the latest electrically activated region (measured from Q-wave onset in the ECG to LV sensing by the left pacemaker electrode [QLV]) is associated with a favorable outcome. However, optimal LV lead placement is limited by coronary venous anatomy and the inability to measure QLV non-invasively before implantation. We propose a novel non-invasive method for estimating QLV in sinus rhythm from the standard 12-lead ECG. Methods: We obtained the 12-lead ECG, LV electrograms and the LV lead position in a standard 17-segment LV model from procedural recordings of 135 standard CRT recipients. QLV duration was measured post-operatively. Using a generic heart geometry and a corresponding forward model for ECG computation, the electrical activation pattern of the heart was fitted to best match the 12-lead ECG in an iterative optimization procedure. This procedure initialized six activation sites associated with the His-Purkinje system. The initial timing of each site was based on the directions of the vectorcardiogram (VCG). Timing and position of the sites were then changed iteratively to improve the match between simulated and measured ECG. Non-invasive estimation of QLV was performed by calculating the time difference between Q onset on the computed ECG and the centroidal epicardial activation time of the segment where the LV electrode is positioned. The estimated QLV was compared to the measured QLV. Further, the distance between the actual LV lead position and the estimated LV lead position was computed on the generic ventricular model. Results: On average, there was no difference between QLV measured from procedural recordings and the non-invasive estimate of QLV ([Formula: see text]). The median distance between the actual LV pacing site and the estimated pacing site was 18.6 mm (IQR 17.3 mm). Conclusion: Using the standard 12-lead ECG and a generic heart model, it is possible to accurately estimate QLV. This method may potentially be used to support patient selection, optimize implant procedures, and simulate optimal stimulation parameters prior to pacemaker implantation.
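    As a hedged sketch of the fitting step only, the snippet below adjusts the timings of six activation sites so that a simulated ECG best matches a measured one; the crude Gaussian forward model, the optimizer choice and all numbers are placeholders standing in for the generic-geometry forward model of the paper.

    # Hedged sketch: fit six activation-site timings by minimizing the least-squares
    # mismatch between a simulated and a "measured" 12-lead ECG (both synthetic here).
    import numpy as np
    from scipy.optimize import minimize

    def forward_ecg(site_timings, n_samples=300):
        # Crude placeholder forward model: each activation site adds a Gaussian
        # deflection, at its own timing (in samples), to all 12 leads.
        t = np.arange(n_samples)
        ecg = np.zeros((12, n_samples))
        for s in site_timings:
            ecg += np.exp(-0.5 * ((t - s) / 10.0) ** 2)
        return ecg

    true_timings = np.array([15.0, 25.0, 30.0, 40.0, 55.0, 70.0])
    measured_ecg = forward_ecg(true_timings)      # synthetic target for the demo

    def mismatch(site_timings):
        return np.sum((forward_ecg(site_timings) - measured_ecg) ** 2)

    result = minimize(mismatch, x0=np.full(6, 20.0), method="Nelder-Mead")
    fitted_timings = result.x
    # The QLV estimate would then be the fitted activation time at the segment
    # holding the LV electrode minus the Q onset of the computed ECG.
    print("fitted activation timings (samples):", np.round(np.sort(fitted_timings), 1))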

    Assessment of Cardiorespiratory Interactions during Apneic Events in Sleep via Fuzzy Kernel Measures of Information Dynamics

    Apnea and other breathing-related disorders have been linked to the development of hypertension and to impairments of the cardiovascular, cognitive and metabolic systems. The combined assessment of multiple physiological signals acquired during sleep is of fundamental importance for providing additional insight into breathing-disorder events and the associated impairments. In this work, we apply information-theoretic measures to describe the joint dynamics of cardiorespiratory physiological processes in a large group of patients reporting repeated episodes of hypopneas, apneas (central, obstructive, mixed) and respiratory effort related arousals (RERAs). We analyze the heart period as the target process and the airflow amplitude as the driver, computing the predictive information, the information storage, the information transfer, the internal information and the cross information, using a fuzzy kernel entropy estimator. The analyses were performed by comparing the information measures among segments during, immediately before and after each respiratory event, and against control segments. Results highlight a general tendency towards a decrease of the predictive information and information storage of the heart period, as well as of the cross information and the information transfer from respiration to heart period, during the breathing-disordered events. The information-theoretic measures also vary according to the breathing disorder, and significant changes of information transfer can be detected during RERAs, suggesting that the latter could represent a risk factor for developing cardiovascular diseases. These findings reflect the impact of different sleep breathing disorders on respiratory sinus arrhythmia, suggesting overall higher complexity of the cardiac dynamics and weaker cardiorespiratory interactions, which may have physiological and clinical relevance.
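    The paper's fuzzy kernel entropy estimator is not reproduced here; as a simplified, hedged illustration, the sketch below computes linear-Gaussian (regression-based) approximations of the information storage of heart period and of the information transfer from respiration to heart period, using a single past lag and placeholder series.

    # Hedged sketch: regression-based (linear-Gaussian) information storage and transfer.
    import numpy as np

    def residual_variance(y, regressors):
        """Variance of the residuals of an ordinary least-squares fit of y on the regressors."""
        X = np.column_stack([np.ones(len(y))] + list(regressors))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.var(y - X @ beta)

    rng = np.random.default_rng(1)
    flow = rng.normal(size=1000)                      # placeholder respiratory driver
    rr = (0.8 + 0.05 * np.convolve(flow, np.ones(3) / 3, mode="same")
              + 0.01 * rng.normal(size=1000))         # placeholder heart period series

    y, y1, x1 = rr[1:], rr[:-1], flow[:-1]            # target, its past, driver's past
    storage = 0.5 * np.log(np.var(y) / residual_variance(y, [y1]))
    transfer = 0.5 * np.log(residual_variance(y, [y1]) /
                            residual_variance(y, [y1, x1]))
    print(f"information storage: {storage:.3f} nats, transfer flow->RR: {transfer:.3f} nats")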

    A unified methodology for heartbeats detection in seismocardiogram and ballistocardiogram signals

    This work presents a methodology to analyze and segment both seismocardiogram (SCG) and ballistocardiogram (BCG) signals in a unified fashion. An unsupervised approach is followed to extract a template of SCG/BCG heartbeats, which is then used to fine-tune the temporal waveform annotation. Rigorous performance assessment is conducted in terms of sensitivity, precision, root mean square error (RMSE) and mean absolute error (MAE) of the annotation. The methodology is tested on four independent datasets, covering different measurement setups and time resolutions. A wide application range is therefore explored, which characterizes the robustness and generality of the method better than a single dataset would. Overall, sensitivity and precision scores are uniform across all datasets (p > 0.05 from the Kruskal–Wallis test): the average sensitivity among datasets is 98.7%, with 98.2% precision. On the other hand, a slight yet significant difference in RMSE and MAE scores was found (p < 0.01) in favor of datasets with higher sampling frequency. The best RMSE scores for SCG and BCG are 4.5 and 4.8 ms, respectively; similarly, the best MAE scores are 3.3 and 3.6 ms. The results were compared with relevant recent literature and were found to improve both detection performance and temporal annotation errors.
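    A hedged sketch of the template-matching idea follows: the SCG/BCG signal is correlated with a heartbeat template, correlation peaks are taken as beat locations, and the annotation error against reference beat times is scored. The unsupervised template extraction of the paper is not reproduced, and the function names, thresholds and arguments are assumptions.

    # Hedged sketch: matched-filter beat detection and annotation-error scoring.
    import numpy as np
    from scipy.signal import find_peaks

    def detect_beats(signal, template, fs, min_rr_s=0.4):
        """Correlate the signal with a beat template and pick peaks as beat locations."""
        corr = np.correlate(signal, template, mode="same")   # matched-filter output
        corr = (corr - corr.mean()) / corr.std()
        peaks, _ = find_peaks(corr, distance=int(min_rr_s * fs), height=1.0)
        return peaks

    def mean_absolute_error_ms(detected, reference, fs):
        """For each reference beat, error to the nearest detected beat, in ms."""
        errors = [np.min(np.abs(detected - r)) for r in reference]
        return 1000.0 * np.mean(errors) / fs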

    Transfer Learning Improving Predictive Mortality Models for Patients in End-Stage Renal Disease

    Deep learning is becoming a fundamental piece in the paradigm shift from evidence-based to data-based medicine. However, its learning capacity is rarely exploited when working with small data sets. Through transfer learning (TL), information from a source domain is transferred to a target domain to enhance a learning task in that domain. The proposed TL mechanisms are based on sample and feature space augmentation. Deep autoencoders extract complex representations of the data in the TL approach; their latent representations, the so-called codes, are handled to transfer information among domains. The transfer of samples is carried out by computing a latent-space mapping matrix that links codes from both domains for later reconstruction. The feature space augmentation is based on the computation of the average of the most similar codes from one domain; such an average augments the features in the target domain. The proposed framework is evaluated on the prediction of mortality in patients with end-stage renal disease, transferring information related to the mortality of patients with acute kidney injury from the massive MIMIC-III database. Compared to other TL mechanisms, the proposed approach improves previous mortality prediction models by 6-11%. The integration of TL approaches into learning tasks in pathologies with data-volume issues could encourage the use of data-based medicine in a clinical setting.
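    The two TL ingredients described above can be sketched on precomputed autoencoder codes as follows: (i) a latent-space mapping matrix linking source and target codes, obtained by least squares over matched code pairs, and (ii) feature augmentation of each target sample with the average of its k most similar source codes. The autoencoders themselves, the pairing of codes, the matrix sizes and k are assumptions.

    # Hedged sketch on placeholder autoencoder codes: latent mapping and feature augmentation.
    import numpy as np

    rng = np.random.default_rng(0)
    Zs = rng.normal(size=(500, 16))   # source-domain codes (e.g. AKI patients, MIMIC-III)
    Zt = rng.normal(size=(120, 16))   # target-domain codes (ESRD patients)
    Zs_pair, Zt_pair = Zs[:120], Zt   # placeholder correspondence between the domains

    # (i) latent mapping: W projects source codes into the target code space.
    W, *_ = np.linalg.lstsq(Zs_pair, Zt_pair, rcond=None)
    Zs_mapped = Zs @ W                # source samples expressed in the target latent space

    # (ii) feature augmentation: average of the k most similar source codes per target sample.
    k = 5
    dists = np.linalg.norm(Zt[:, None, :] - Zs[None, :, :], axis=2)   # (120, 500)
    nearest = np.argsort(dists, axis=1)[:, :k]
    Zt_augmented = np.hstack([Zt, Zs[nearest].mean(axis=1)])          # (120, 32)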