
    Heart Rate Variability: A possible machine learning biomarker for mechanical circulatory device complications and heart recovery

    Cardiovascular disease continues to be the number one cause of death in the United States, with the number of heart failure patients expected to exceed 8 million by 2030. Mechanical circulatory support (MCS) devices are now better able to manage acute and chronic heart failure refractory to medical therapy, both as a bridge to transplant and as destination therapy. Despite significant advances in MCS device design and surgical implantation technique, it remains difficult to predict response to device therapy. Heart rate variability (HRV), the variation in time interval between adjacent heartbeats, is an objective device diagnostic regularly recorded by various MCS devices and has been shown to have significant prognostic value for both sudden cardiac death and all-cause mortality in congestive heart failure (CHF) patients. A limited number of studies have examined HRV indices as promising risk factors and predictors of complication and recovery in end-stage CHF patients receiving left ventricular assist device therapy. Paired with new advances in the use of machine learning in medicine, HRV represents a potential dynamic biomarker for monitoring and predicting patient status as more patients enter the mechanotrope era of MCS devices for destination therapy.
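
    HRV is computed from the series of RR intervals (times between successive beats). As a concrete illustration, a minimal Python sketch of two standard time-domain indices, SDNN and RMSSD, follows; the RR values and the idea of reading them from an MCS device log are hypothetical, not taken from the study.

        import numpy as np

        def sdnn(rr_ms):
            """Standard deviation of normal-to-normal RR intervals (ms)."""
            return float(np.std(rr_ms, ddof=1))

        def rmssd(rr_ms):
            """Root mean square of successive RR-interval differences (ms)."""
            diffs = np.diff(rr_ms)
            return float(np.sqrt(np.mean(diffs ** 2)))

        rr = np.array([812.0, 845.0, 790.0, 830.0, 861.0, 805.0])  # illustrative values only
        print(sdnn(rr), rmssd(rr))

    Indices like these, recomputed over a sliding window of device telemetry, are the kind of dynamic features a machine learning model could track over time.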

    Quantification of left ventricular longitudinal strain, strain rate, velocity and displacement in healthy horses by 2-dimensional speckle tracking

    Background: The quantification of equine left ventricular (LV) function is generally limited to short-axis M-mode measurements. However, LV deformation is 3-dimensional (3D) and consists of longitudinal shortening, circumferential shortening, and radial thickening. In human medicine, longitudinal motion is the best marker of subtle myocardial dysfunction. Objectives: To evaluate the feasibility and reliability of 2-dimensional speckle tracking (2DST) for quantifying equine LV longitudinal function. Animals: Ten healthy untrained trotter horses; 9.6 ± 4.4 years; 509 ± 58 kg. Methods: Prospective study. Repeated echocardiographic examinations were performed by 2 observers from a modified 4-chamber view. Global, segmental, and averaged peak values and timing of longitudinal strain (SL), strain rate (SrL), velocity (VL), and displacement (DL) were measured in 4 LV wall segments. The inter- and intraobserver within- and between-day variability was assessed by calculating the coefficients of variation for repeated measurements. Results: 2DST analysis was feasible in each exam. The variability of peak systolic values and peak timing was low to moderate, whereas peak diastolic values showed a higher variability. Significant segmental differences were demonstrated. DL and VL presented a prominent base-to-midwall gradient. SL and SrL values were similar in all segments except the basal septal segment, which showed a significantly lower peak SL occurring about 60 ms later compared with the other segments. Conclusions and Clinical Importance: 2DST is a reliable technique for measuring systolic LV longitudinal motion in healthy horses. This study provides preliminary reference values, which can be used when evaluating the technique in a clinical setting.
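
    For readers unfamiliar with the deformation indices, a minimal sketch follows of Lagrangian longitudinal strain, its rate, and the coefficient of variation used in the repeatability analysis; the segment lengths and sampling interval are hypothetical, not data from the study.

        import numpy as np

        def strain_pct(lengths, l0):
            """Lagrangian strain in percent: (L(t) - L0) / L0 * 100."""
            return (np.asarray(lengths, dtype=float) - l0) / l0 * 100.0

        def strain_rate(strain, dt_s):
            """Strain rate (%/s) via finite differences over the sample interval."""
            return np.gradient(strain, dt_s)

        def coefficient_of_variation(repeats):
            """CV (%) = SD / |mean| * 100 for repeated measurements of one index."""
            m = np.asarray(repeats, dtype=float)
            return float(np.std(m, ddof=1) / abs(np.mean(m)) * 100.0)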

    Development of models for predicting Torsade de Pointes cardiac arrhythmias using perceptron neural networks

    Blockage of some ion channels, and in particular the hERG cardiac potassium channel, delays cardiac repolarization and can induce arrhythmia. In some cases it leads to a potentially life-threatening arrhythmia known as Torsade de Pointes (TdP). Recognizing drugs with TdP risk is therefore essential. Candidate drugs determined not to cause cardiac ion channel blockage are more likely to pass successfully through preclinical work and phase II and III clinical trials, and less likely to be withdrawn later from the marketplace due to cardiotoxic effects. The objective of the present study is to develop an SAR (structure-activity relationship) model that can be used as an early screen for torsadogenic (TdP-causing) potential in drug candidates. The method uses descriptors composed of atomic NMR chemical shifts and corresponding interatomic distances, which are combined into a 3D abstract space matrix. The method is called 3D-SDAR (3-dimensional spectral data-activity relationship) and can be interrogated to identify molecular features responsible for the activity, which can in turn yield simplified hERG toxicophores. A dataset of 55 hERG potassium channel inhibitors collected from Kramer et al., consisting of 32 drugs with TdP risk and 23 with no TdP risk, was used for training the 3D-SDAR model. An ANN model with a multilayer perceptron was used to define collinearities among the independent 3D-SDAR features. A composite model built from 200 random iterations, each holding out 25% of the molecules, yielded the following figures of merit: training, 99.2%; internal test sets, 66.7%; external (blind validation) test set, 68.4%. In the external test set, 70.3% of TdP-positive drugs were correctly predicted. Moreover, toxicophores were generated from TdP drugs. 3D-SDAR was thus successfully used to build a predictive model distinguishing torsadogenic from non-torsadogenic drugs.
    Comment: Accepted for publication in BMC Bioinformatics (Springer), July 201
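
    A minimal sketch of the aggregation scheme described above follows, assuming the 3D-SDAR fingerprints are already flattened into a feature matrix X with binary 0/1 TdP labels y (both hypothetical names); it repeats the 75/25 split with a multilayer perceptron and averages the held-out votes into a composite out-of-sample score.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        def composite_tdp_scores(X, y, n_iter=200, seed=0):
            """X: (n_molecules, n_features) array; y: (n_molecules,) 0/1 labels."""
            rng = np.random.RandomState(seed)
            votes = np.zeros(len(y))   # accumulated predicted probability of TdP risk
            counts = np.zeros(len(y))  # how often each molecule was held out
            idx = np.arange(len(y))
            for _ in range(n_iter):
                tr, te = train_test_split(idx, test_size=0.25, stratify=y, random_state=rng)
                clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=rng)
                clf.fit(X[tr], y[tr])
                votes[te] += clf.predict_proba(X[te])[:, 1]
                counts[te] += 1
            return votes / np.maximum(counts, 1)  # composite held-out score per molecule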

    Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs

    Interpretability methods aim to help users build trust in and understand the capabilities of machine learning models. However, existing approaches often rely on abstract, complex visualizations that map poorly to the task at hand or require non-trivial ML expertise to interpret. Here, we present two visual analytics modules that facilitate an intuitive assessment of model reliability. To help users better characterize and reason about a model's uncertainty, we visualize raw and aggregate information about a given input's nearest neighbors. Using an interactive editor, users can manipulate this input in semantically meaningful ways, determine the effect on the output, and compare against their prior expectations. We evaluate our interface in an electrocardiogram beat classification case study with 14 physicians. Compared to a baseline feature-importance interface, we find that the physicians are better able to align the model's uncertainty with domain-relevant factors and to build intuition about its capabilities and limitations.
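
    A minimal sketch of the nearest-neighbor lookup behind the first module follows, assuming hypothetical beat embeddings taken from the classifier's penultimate layer; the class mix among a query's neighbors gives an intuitive read on the model's local uncertainty.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def neighbor_summary(train_emb, train_labels, query_emb, k=10):
            """Return per-class neighbor fractions and distances for one query beat."""
            nn = NearestNeighbors(n_neighbors=k).fit(train_emb)
            dists, idx = nn.kneighbors(query_emb.reshape(1, -1))
            labels = np.asarray(train_labels)[idx[0]]
            # A mixed neighborhood flags inputs the model is likely unsure about.
            classes, counts = np.unique(labels, return_counts=True)
            return dict(zip(classes.tolist(), (counts / k).tolist())), dists[0]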

    Uncovering convolutional neural network decisions for diagnosing multiple sclerosis on conventional MRI using layer-wise relevance propagation

    Machine learning-based imaging diagnostics has recently reached or even exceeded the level of clinical experts in several clinical domains. However, the classification decisions of a trained machine learning system are typically non-transparent, a major hindrance to clinical integration, error tracking, and knowledge discovery. In this study, we present a transparent deep learning framework relying on convolutional neural networks (CNNs) and layer-wise relevance propagation (LRP) for diagnosing multiple sclerosis (MS). MS is commonly diagnosed using a combination of clinical presentation and conventional magnetic resonance imaging (MRI), specifically the occurrence and presentation of white matter lesions in T2-weighted images. We hypothesized that using LRP on a naive predictive model would enable us to uncover the relevant image features that a trained CNN uses for decision-making. Since imaging markers in MS are well established, this would allow us to validate the respective CNN model. First, we pre-trained a CNN on MRI data from the Alzheimer's Disease Neuroimaging Initiative (n = 921), afterwards specializing the CNN to discriminate between MS patients and healthy controls (n = 147). Using LRP, we then produced a heatmap for each subject in the holdout set depicting the voxel-wise relevance for a particular classification decision. The CNN model achieved a balanced accuracy of 87.04% and an area under the receiver operating characteristic curve of 96.08%. The subsequent LRP visualization revealed that the CNN model does indeed focus on individual lesions, but also incorporates additional information such as lesion location, non-lesional white matter, and gray matter areas such as the thalamus, which are established conventional and advanced MRI markers in MS. We conclude that LRP and the proposed framework have the capability to make diagnostic decisions of…
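
    As background, a minimal sketch of the LRP epsilon rule for a single dense layer follows; a full implementation propagates relevance backwards through every CNN layer down to the input voxels, and all shapes and values here are hypothetical.

        import numpy as np

        def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
            """Redistribute output relevance R_out to the inputs a of one dense layer.

            a: input activations, shape (n_in,)
            W: weights, shape (n_in, n_out); b: bias, shape (n_out,)
            R_out: relevance of the layer outputs, shape (n_out,)
            """
            z = a @ W + b                              # forward pre-activations
            z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon stabiliser avoids division by 0
            s = R_out / z                              # relevance per unit of activation
            return a * (W @ s)                         # input relevance, roughly conserving the total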

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method consisting of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, while proposing certain directions as the most viable for clinical use.
    Comment: 27 pages, 2 figures, 10 tables
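
    As a toy illustration of the layered, non-linear transformations described above, a minimal sketch follows; the shapes and weights are hypothetical and untrained.

        import numpy as np

        rng = np.random.default_rng(0)
        layers = [(rng.standard_normal((64, 32)), np.zeros(32)),  # layer 1: 64 -> 32
                  (rng.standard_normal((32, 8)), np.zeros(8))]    # layer 2: 32 -> 8

        def forward(x, layers):
            for W, b in layers:
                x = np.maximum(x @ W + b, 0.0)  # linear map + ReLU non-linearity
            return x                            # deeper outputs encode more abstract structure

        features = forward(rng.standard_normal(64), layers)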

    Doctor of Philosophy

    Atrial fibrillation (AF) is the leading cause of ischemic stroke and is the most commonly observed arrhythmia in clinical cardiology. Catheter ablation of AF, in which specific regions of cardiac anatomy associated with AF are intentionally injured to create scar tissue, has been honed over the last 15 years to become a relatively common and safe treatment option. However, the success of these anatomically driven ablation strategies, particularly in hearts that have been exposed to AF for extended periods, remains poor. AF induces changes in the electrical and structural properties of the cardiac tissue that further promote the permanence of AF. In a process known as electroanatomical mapping (EAM), clinicians record time signals known as electrograms (EGMs) from the heart, along with the locations of the recording sites, to create geometric representations, or maps, of the electrophysiological properties of the heart. Analysis of the maps and the individual EGM morphologies can indicate regions of abnormal tissue, or substrates, that facilitate arrhythmogenesis and AF perpetuation. Despite this progress, limitations in the control of devices currently used for EAM acquisition, and reliance on suboptimal metrics of tissue viability, appear to be hindering the potential of treatment guided by substrate mapping. In this research, we used computational models of cardiac excitation to evaluate parameters of EAM that affect the performance of substrate mapping. These models, which have been validated with experimental and clinical studies, have yielded new insights into the limitations of current mapping systems, but more importantly, they guided us to develop new systems and metrics for robust substrate mapping. We report here on the progress of these simulation studies and on novel measurement approaches that have the potential to improve the robustness and precision of EAM in patients with arrhythmias. Appropriate detection of proarrhythmic substrates promises to advance ablation of AF beyond rudimentary destruction of anatomical targets toward directed targeting of complicit tissues. Targeted treatment of AF-sustaining tissues, based on the substrate mapping approaches described in this dissertation, has the potential to improve upon the efficacy of current AF treatment options.
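
    As one concrete example of a substrate metric of the kind discussed, a minimal sketch of bipolar voltage mapping follows: peak-to-peak EGM amplitude per mapped site with the commonly cited 0.5 mV low-voltage cutoff. The traces and the cutoff choice are illustrative, not the dissertation's specific metrics.

        import numpy as np

        def peak_to_peak_mv(egm):
            """Peak-to-peak amplitude of one electrogram trace (mV)."""
            egm = np.asarray(egm, dtype=float)
            return float(egm.max() - egm.min())

        def tag_low_voltage(egms, cutoff_mv=0.5):
            """True where a site's EGM amplitude suggests scarred substrate."""
            return np.array([peak_to_peak_mv(e) < cutoff_mv for e in egms])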