
    Probabilistic classification of acute myocardial infarction from multiple cardiac markers

    Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients based upon the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional cardiac troponin I (cTnI), creatine kinase MB (CKMB) and myoglobin. The effect of using principal component analysis (PCA) and Fisher discriminant analysis (FDA) to preprocess the marker concentrations was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued, and three categories of performance measure are described, namely discriminatory ability, sharpness, and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely upon the samples taken on admission, was the logistic regression classifier using FDA preprocessing. This gave an accuracy of 0.85 (95% confidence interval: 0.78–0.91) and a normalised Brier score of 0.89. When samples at both admission and a further time, 1–6 h later, were included, the performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to accurately and reliably estimate the probability of AMI.
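
    A minimal sketch of the kind of pipeline described above, written in Python with scikit-learn: Fisher discriminant analysis (via LinearDiscriminantAnalysis) as a preprocessing projection, logistic regression to output a probability of AMI, and the Brier score as a reliability-oriented performance measure. The marker data are synthetic and the pipeline is an illustration of the approach, not the authors' exact implementation.

```python
# Minimal sketch (not the authors' exact pipeline): logistic regression on an
# FDA projection of cardiac-marker concentrations, scored with the Brier score.
# The synthetic columns merely stand in for cTnI, CK-MB, myoglobin, FABP and
# GPBB concentrations.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 500
X = rng.lognormal(mean=0.0, sigma=1.0, size=(n, 5))                  # 5 marker concentrations
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=n) > 2.5).astype(int)  # synthetic AMI label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fisher discriminant analysis as a one-dimensional projection, then logistic
# regression to turn the discriminant score into a probability of AMI.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), LogisticRegression())
clf.fit(X_tr, y_tr)

p = clf.predict_proba(X_te)[:, 1]
print("accuracy:", clf.score(X_te, y_te))
print("Brier score:", brier_score_loss(y_te, p))        # lower is better; 0 is perfect
print("AUC (discriminatory ability):", roc_auc_score(y_te, p))
```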

    Automatic covariate selection in logistic models for chest pain diagnosis: A new approach

    A newly established method for optimizing logistic models via a minorization-majorization procedure is applied to the problem of diagnosing acute coronary syndromes (ACS). The method provides a principled approach to the selection of covariates, which would otherwise require the use of a suboptimal method owing to the size of the covariate set. A strategy for building models is proposed, and two models, optimized for performance and for simplicity respectively, are derived via ten-fold cross-validation. These models confirm that a relatively small set of covariates including clinical and electrocardiographic features can be used successfully in this task. The performance of the models is comparable with that of previously published models based on less principled selection methods. The models prove to be portable when tested on data gathered from three other sites. Whilst diagnostic accuracy and calibration diminish slightly in these new settings, they remain satisfactory overall. The prospect of building predictive models that are as simple as possible for a required level of performance is valuable if data-driven decision aids are to gain wide acceptance in the clinical setting, owing to the need to minimize the time taken to gather and enter data at the bedside.
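
    The abstract does not give the minorization-majorization algorithm itself, so the sketch below substitutes a standard stand-in for principled covariate selection: L1-penalized logistic regression tuned by ten-fold cross-validation, which likewise drives uninformative coefficients to zero. Data and feature counts are synthetic and illustrative.

```python
# Illustrative stand-in only: instead of the paper's minorization-majorization
# procedure, an L1-penalized logistic regression with ten-fold cross-validation
# performs covariate selection by shrinking uninformative coefficients to zero.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

# 40 candidate covariates (e.g. clinical and electrocardiographic features),
# only a handful of which carry real signal.
X, y = make_classification(n_samples=1000, n_features=40, n_informative=5,
                           n_redundant=5, random_state=0)
X = StandardScaler().fit_transform(X)

model = LogisticRegressionCV(Cs=20, cv=10, penalty="l1", solver="saga",
                             scoring="neg_log_loss", max_iter=5000, random_state=0)
model.fit(X, y)

selected = np.flatnonzero(model.coef_[0])
print(f"{selected.size} of {X.shape[1]} covariates retained:", selected)
```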

    Advances in computational modelling for personalised medicine after myocardial infarction

    Myocardial infarction (MI) is a leading cause of premature morbidity and mortality worldwide. Determining which patients will experience heart failure and sudden cardiac death after an acute MI is notoriously difficult for clinicians. The extent of heart damage after an acute MI is informed by cardiac imaging, typically using echocardiography or sometimes cardiac magnetic resonance (CMR). These scans provide complex data sets that are only partially exploited by clinicians in daily practice, implying potential for improved risk assessment. Computational modelling of left ventricular (LV) function can bridge the gap towards personalised medicine using cardiac imaging in post-MI patients. Several novel biomechanical parameters have theoretical prognostic value and may be useful to reflect the biomechanical effects of novel preventive therapy for adverse remodelling post-MI. These parameters include myocardial contractility (regional and global), stiffness and stress. Further, the parameters can be delineated spatially to correspond with infarct pathology and the remote zone. While these parameters hold promise, there are challenges for translating MI modelling into clinical practice, including model uncertainty, validation and verification, as well as time-efficient processing. More research is needed to (1) simplify imaging with CMR in post-MI patients, while preserving diagnostic accuracy and patient tolerance, and (2) assess and validate novel biomechanical parameters against established prognostic biomarkers, such as LV ejection fraction and infarct size. Accessible software packages with minimal user interaction are also needed. Translating benefits to patients will be achieved through a multidisciplinary approach including clinicians, mathematicians, statisticians and industry partners.
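
    Of the established prognostic biomarkers mentioned above, left ventricular ejection fraction has the simplest definition; the sketch below computes it from end-diastolic and end-systolic volumes, using illustrative values rather than data from any study in this listing.

```python
# Left ventricular ejection fraction from end-diastolic and end-systolic
# volumes; the values used below are illustrative only.
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """EF (%) = (EDV - ESV) / EDV * 100, i.e. stroke volume as a fraction of EDV."""
    if not 0 < esv_ml <= edv_ml:
        raise ValueError("volumes must satisfy 0 < ESV <= EDV")
    return (edv_ml - esv_ml) / edv_ml * 100.0

print(f"EF = {ejection_fraction(edv_ml=160.0, esv_ml=90.0):.1f}%")  # a reduced EF after a large MI
```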

    High-sensitivity troponin assays for the early rule-out or diagnosis of acute myocardial infarction in people with acute chest pain: a systematic review and cost-effectiveness analysis.

    BACKGROUND: Early diagnosis of acute myocardial infarction (AMI) can ensure quick and effective treatment, but only 20% of adults with emergency admissions for chest pain have an AMI. High-sensitivity cardiac troponin (hs-cTn) assays may allow rapid rule-out of AMI and avoidance of unnecessary hospital admissions and anxiety. OBJECTIVE: To assess the clinical effectiveness and cost-effectiveness of hs-cTn assays for the early (within 4 hours of presentation) rule-out of AMI in adults with acute chest pain. METHODS: Sixteen databases, including MEDLINE and EMBASE, research registers and conference proceedings, were searched to October 2013. Study quality was assessed using QUADAS-2. The bivariate model was used to estimate summary sensitivity and specificity for meta-analyses involving four or more studies; otherwise random-effects logistic regression was used. The health-economic analysis considered the long-term costs and quality-adjusted life-years (QALYs) associated with different troponin (Tn) testing methods. The de novo model consisted of a decision tree and Markov model. A lifetime time horizon (60 years) was used. RESULTS: Eighteen studies were included in the clinical effectiveness review. The optimum strategy, based on the Roche assay, used a limit of blank (LoB) threshold in a presentation sample to rule out AMI [negative likelihood ratio (LR-) 0.10, 95% confidence interval (CI) 0.05 to 0.18]. Patients testing positive could then have a further test at 2 hours; a result above the 99th centile on either sample and a delta (Δ) of ≥ 20% has some potential for ruling in an AMI [positive likelihood ratio (LR+) 8.42, 95% CI 6.11 to 11.60], whereas a result below the 99th centile on both samples and a Δ of < 20% can be used to rule out an AMI (LR- 0.04, 95% CI 0.02 to 0.10). The optimum strategy, based on the Abbott assay, used a limit of detection (LoD) threshold in a presentation sample to rule out AMI (LR- 0.01, 95% CI 0.00 to 0.08). Patients testing positive could then have a further test at 3 hours; a result above the 99th centile on this sample has some potential for ruling in an AMI (LR+ 10.16, 95% CI 8.38 to 12.31), whereas a result below the 99th centile can be used to rule out an AMI (LR- 0.02, 95% CI 0.01 to 0.05). In the base-case analysis, standard Tn testing was both most effective and most costly. Strategies considered cost-effective depending upon incremental cost-effectiveness ratio thresholds were Abbott 99th centile (thresholds of < £6597), Beckman 99th centile (thresholds between £6597 and £30,042), Abbott optimal strategy (LoD threshold at presentation, followed by 99th centile threshold at 3 hours) (thresholds between £30,042 and £103,194) and the standard Tn test (thresholds over £103,194). The Roche 99th centile and the Roche optimal strategy [LoB threshold at presentation followed by 99th centile threshold and/or Δ20% (compared with presentation test) at 1-3 hours] were extendedly dominated in this analysis. CONCLUSIONS: There is some evidence to suggest that hs-cTn testing may provide an effective and cost-effective approach to early rule-out of AMI. Further research is needed to clarify optimal diagnostic thresholds and testing strategies. STUDY REGISTRATION: This study is registered as PROSPERO CRD42013005939. FUNDING: The National Institute for Health Research Health Technology Assessment programme.
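
    The two-sample Roche strategy described above is, in effect, a small decision rule; the sketch below encodes that logic directly. The limit-of-blank and 99th-centile concentrations are assay-specific and not reported in this abstract, so the numeric thresholds are placeholders.

```python
# Sketch of the two-sample (presentation + 2 h) rule-out / rule-in logic
# described for the Roche assay. The numeric thresholds below are placeholders:
# the actual limit-of-blank and 99th-centile concentrations are assay-specific
# and are not reported in this abstract.
from dataclasses import dataclass

LOB_NG_L = 3.0          # placeholder limit of blank
P99_NG_L = 14.0         # placeholder 99th-centile upper reference limit
DELTA_RULE_IN = 0.20    # >= 20% change between samples

@dataclass
class TroponinPair:
    presentation: float  # hs-cTn at presentation, ng/L
    repeat_2h: float     # hs-cTn ~2 h later, ng/L

def classify(sample: TroponinPair) -> str:
    if sample.presentation < LOB_NG_L:
        return "rule out at presentation"                  # LR- ~0.10 in the review
    delta = abs(sample.repeat_2h - sample.presentation) / max(sample.presentation, 1e-9)
    above_p99 = sample.presentation > P99_NG_L or sample.repeat_2h > P99_NG_L
    if above_p99 and delta >= DELTA_RULE_IN:
        return "rule in (refer for further assessment)"    # LR+ ~8.4
    if not above_p99 and delta < DELTA_RULE_IN:
        return "rule out after serial sampling"            # LR- ~0.04
    return "observation zone: neither ruled in nor ruled out"

print(classify(TroponinPair(presentation=2.0, repeat_2h=2.5)))
print(classify(TroponinPair(presentation=20.0, repeat_2h=35.0)))
```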

    Jefferson Digital Commons quarterly report: January-March 2020

    This quarterly report includes: New Look for the Jefferson Digital Commons; Articles; COVID-19 Working Papers; Educational Materials; From the Archives; Grand Rounds and Lectures; JeffMD Scholarly Inquiry Abstracts; Journals and Newsletters; Master of Public Health Capstones; Oral Histories; Posters and Conference Presentations; and What People are Saying About the Jefferson Digital Commons.

    Multi-scale molecular descriptions of human heart failure using single cell, spatial, and bulk transcriptomics

    Molecular descriptions of human disease have relied on transcriptomics, the genome-wide measurement of gene expression. In recent years, the emergence of capture-based technologies has enabled the transcriptomic profiling of single cells from both dissociated and intact tissues, providing a spatial and cell-type-specific context that complements the catalog of gene expression changes reported from bulk technologies. In the context of cardiovascular disease, these technologies open the opportunity to study the inter- and intra-cellular mechanisms that regulate myocardial remodeling. In this thesis I present comprehensive descriptions of the transcriptional changes in acute and chronic human heart failure using bulk, single cell, and spatial technologies. First, I describe the creation of the Reference of the Heart Failure Transcriptome, a resource built from the meta-analysis of 16 independent studies of human heart failure transcriptomics. Then, I report the first spatial and single cell atlas of human myocardial infarction, and propose a computational strategy to identify compositional, organizational, and molecular tissue differences across distinct time points and physiological zones of damaged myocardium. Finally, I outline a methodology for the multicellular analysis of single cell data that allows for a better understanding of tissue responses and cell type coordination events in cardiovascular disease, and that links the knowledge of independent studies at multiple scales. Overall, my work demonstrates the importance of generating reliable molecular references of disease across scales.

    Role of deep learning techniques in non-invasive diagnosis of human diseases.

    Machine learning, a sub-discipline of artificial intelligence, concentrates on algorithms able to learn and/or adapt their structure (e.g., parameters) based on a set of observed data, with the adaptation performed by optimizing over a cost function. Machine learning has attracted great attention in the biomedical community because it offers promise for improving the sensitivity and/or specificity of disease detection and diagnosis. It can also increase the objectivity of decision making and decrease the time and effort required of health care professionals during disease detection and diagnosis. The potential impact of machine learning is greater than ever owing to the increase in medical data being acquired, the novel modalities being developed, and the complexity of medical data. In all of these scenarios, machine learning can provide new tools for interpreting the complex datasets that confront clinicians. Much of the excitement around applying machine learning to biomedical research comes from the development of deep learning, which is modeled after computation in the brain. Deep learning can help in attaining insights that would be impossible to obtain through manual analysis. Deep learning algorithms, and in particular convolutional neural networks, differ from traditional machine learning approaches: they are known for their ability to learn complex representations that enhance pattern recognition from raw data, whereas traditional machine learning requires human engineering and domain expertise to design feature extractors and structure the data. With increasing demands upon radiologists, there is a growing need to automate diagnosis, a need that deep learning is able to address. In this dissertation, we present four different successful applications of deep learning to disease diagnosis; all of the work utilizes medical images. In the first application, we introduce a deep-learning based computer-aided diagnostic system for the early detection of acute renal transplant rejection. The system is based on the fusion of imaging markers (apparent diffusion coefficients derived from diffusion-weighted magnetic resonance imaging) and clinical biomarkers (creatinine clearance and serum plasma creatinine). The fused data are then used as input to train and test a convolutional neural network based classifier. The proposed system is tested on scans collected from 56 subjects from geographically diverse populations and different scanner types/image collection protocols. The overall accuracy of the proposed system is 92.9%, with 93.3% sensitivity and 92.3% specificity in distinguishing non-rejected kidney transplants from rejected ones. In the second application, we propose a novel deep learning approach for the automated segmentation and quantification of the left ventricle (LV) from cardiac cine MR images, aiming at lower errors for the estimated heart parameters than previous studies. Using fully convolutional neural networks, we propose novel methods for the extraction of a region of interest containing the left ventricle and for the segmentation of the left ventricle itself; following myocardial segmentation, functional and mass parameters of the left ventricle are estimated. The Automated Cardiac Diagnosis Challenge dataset was used to validate our framework, which gave better segmentation, more accurate estimation of cardiac parameters, and lower error than other methods applied to the same dataset. Furthermore, we show that our segmentation approach generalizes well across datasets by testing its performance on a locally acquired dataset. In the third application, we propose a novel deep learning approach for the automated quantification of strain from cardiac cine MR images of mice. For strain analysis, we developed a Laplace-based approach to track the LV wall points by solving the Laplace equation between the LV contours of each two successive image frames over the cardiac cycle. Following tracking, the strain estimation is performed using a Lagrangian-based approach. This new automated system for strain analysis was validated by comparing its outcome with tagged MR images from the same mice; there were no significant differences between the strain data obtained from our algorithm using cine imaging and those from tagged MR imaging. In the fourth application, we demonstrate how a deep learning approach can be utilized for the automated classification of kidney histopathological images. Our approach classifies four classes: fat, parenchyma, clear cell renal cell carcinoma, and the recently described clear cell papillary renal cell carcinoma. Our framework consists of three convolutional neural networks, and the whole-slide kidney images are divided into patches of three different sizes as input to the networks. The approach provides both patch-wise and pixel-wise classification, classifies the four classes accurately, and surpasses other state-of-the-art methods such as ResNet (pixel accuracy: 0.89 for ResNet18 vs. 0.93 for the proposed method). In conclusion, the results of our proposed systems demonstrate the potential of deep learning for efficient, reproducible, fast, and affordable disease diagnosis.
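
    As an illustration of the strain step in the third application, the sketch below computes per-segment Lagrangian strain from LV wall points tracked between a reference (end-diastolic) contour and a later frame, assuming the common definition strain = (L − L0) / L0; the contour coordinates are illustrative, not from the mouse dataset.

```python
# Minimal sketch of Lagrangian strain from tracked LV wall points. Assumes the
# common definition strain = (L - L0) / L0 with L0 the reference (end-diastolic)
# segment length; coordinates below are illustrative only.
import numpy as np

def segment_lengths(points: np.ndarray) -> np.ndarray:
    """Lengths of consecutive segments along an ordered contour of (x, y) points."""
    return np.linalg.norm(np.diff(points, axis=0), axis=1)

def lagrangian_strain(ref_points: np.ndarray, def_points: np.ndarray) -> np.ndarray:
    """Per-segment strain of the deformed contour relative to the reference contour."""
    L0 = segment_lengths(ref_points)
    L = segment_lengths(def_points)
    return (L - L0) / L0

ref = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1], [3.0, 0.0]])
deformed = np.array([[0.0, 0.0], [0.9, 0.0], [1.8, 0.1], [2.7, 0.0]])
print(lagrangian_strain(ref, deformed))  # negative values indicate segmental shortening
```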

    Learning Better Clinical Risk Models.

    Risk models are used throughout clinical practice to estimate a patient’s risk of suffering particular outcomes. These models are important for matching patients to the appropriate level of treatment, for effective allocation of resources, and for fairly evaluating the performance of healthcare providers. The application and development of methods from the field of machine learning has the potential to improve patient outcomes and reduce healthcare spending with more accurate estimates of patient risk. This dissertation addresses several limitations of currently used clinical risk models, through the identification of novel risk factors and through the training of more effective models. As wearable monitors become more effective and less costly, the previously untapped predictive information in a patient’s physiology over time has the potential to greatly improve clinical practice. However, translating these technological advances into real-world clinical impact will require computational methods to identify high-risk structure in the data. This dissertation presents several approaches to learning risk factors from physiological recordings, through the discovery of latent states using topic models, and through the identification of predictive features using convolutional neural networks. We evaluate these approaches on patients from a large clinical trial and find that these methods not only outperform prior approaches to leveraging heart rate for cardiac risk stratification, but that they improve overall prediction of cardiac death when considered alongside standard clinical risk factors. We also demonstrate the utility of this work for learning a richer description of sleep recordings. Additionally, we consider the development of risk models in the presence of missing data, which is ubiquitous in real-world medical settings. We present a novel method for jointly learning risk and imputation models in the presence of missing data, and find significant improvements relative to standard approaches when evaluated on a large national registry of trauma patients. PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113326/1/alexve_1.pd
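
    The dissertation's contribution is to learn the risk and imputation models jointly; the sketch below shows only the standard non-joint baseline such a method would be compared against, chaining an off-the-shelf iterative imputer with a logistic risk model on synthetic data with values deleted at random.

```python
# Illustrative baseline only (not the dissertation's joint method): impute
# missing values with an iterative imputer, then fit a logistic risk model,
# evaluated by cross-validated AUC on synthetic data with ~20% missingness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=15, n_informative=6, random_state=0)
X[rng.random(X.shape) < 0.2] = np.nan   # delete ~20% of entries at random

risk_model = make_pipeline(IterativeImputer(random_state=0), LogisticRegression(max_iter=1000))
print("cross-validated AUC:", cross_val_score(risk_model, X, y, cv=5, scoring="roc_auc").mean())
```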

    Machine learning techniques for arrhythmic risk stratification: a review of the literature

    Ventricular arrhythmias (VAs) and sudden cardiac death (SCD) are significant adverse events that affect the morbidity and mortality of both the general population and patients with predisposing cardiovascular risk factors. Currently, conventional disease-specific scores are used for risk stratification. However, these risk scores have several limitations, including variation among validation cohorts, the inclusion of a limited number of predictors while omitting important variables, and hidden relationships between predictors. Machine learning (ML) techniques are based on algorithms that describe intervariable relationships. Recent studies have implemented ML techniques to construct models for the prediction of fatal VAs. However, the application of ML study findings is limited by the absence of established frameworks for their implementation, in addition to clinicians’ unfamiliarity with ML techniques. This review therefore aims to provide an accessible and easy-to-understand summary of the existing evidence on the use of ML techniques in the prediction of VAs. Our findings suggest that ML algorithms improve arrhythmic prediction performance in different clinical settings. However, it should be emphasized that prospective studies comparing ML algorithms to conventional risk models are needed, and a regulatory framework is required, before their implementation in clinical practice.

    Towards better clinical prediction models

    Clinical prediction models provide risk estimates for the presence of disease (diagnosis) or an event in the future course of disease (prognosis) for individual patients. Although publications that present and evaluate such models are becoming more frequent, the methodology is often suboptimal. We propose that seven steps should be considered in developing prediction models: (i) consideration of the research question and initial data inspection; (ii) coding of predictors; (iii) model specification; (iv) model estimation; (v) evaluation of model performance; (vi) internal validation; and (vii) model presentation. The validity of a prediction model is ideally assessed in fully independent data, where we propose four key measures to evaluate model performance: calibration-in-the-large, or the model intercept (A); calibration slope (B); discrimination, with a concordance statistic (C); and clinical usefulness, with decision-curve analysis (D). As an application, we develop and validate prediction models for 30-day mortality in patients with an acute myocardial infarction. This illustrates the usefulness of the proposed framework to strengthen the methodological rigour and quality of prediction models in cardiovascular research.
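
    The four validation measures proposed above can all be computed from a model's predicted probabilities and the observed outcomes in independent data. The sketch below does so on synthetic data for a deliberately miscalibrated model: (A) calibration-in-the-large as the intercept with the predicted logit as an offset, (B) the calibration slope, (C) the c-statistic, and (D) net benefit at a single decision threshold, the core quantity of decision-curve analysis.

```python
# Sketch of the ABCD validation measures for predicted probabilities p_hat
# against observed binary outcomes y, on synthetic data.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
true_lp = rng.normal(0, 1.5, n)                    # "true" linear predictor
y = rng.binomial(1, 1 / (1 + np.exp(-true_lp)))    # observed outcome (e.g. 30-day mortality)
pred_lp = 0.3 + 0.8 * true_lp                      # a miscalibrated model's logit
p_hat = 1 / (1 + np.exp(-pred_lp))

lp = np.log(p_hat / (1 - p_hat))                   # predicted log-odds
# (A) calibration-in-the-large: intercept with the predicted logit as an offset
fit_a = sm.GLM(y, np.ones((n, 1)), family=sm.families.Binomial(), offset=lp).fit()
# (B) calibration slope: coefficient of the predicted logit
fit_b = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit()
# (C) discrimination: concordance (c) statistic
c_stat = roc_auc_score(y, p_hat)
# (D) clinical usefulness: net benefit at decision threshold p_t
p_t = 0.10
treat = p_hat >= p_t
tp = np.sum(treat & (y == 1)) / n
fp = np.sum(treat & (y == 0)) / n
net_benefit = tp - fp * p_t / (1 - p_t)

print(f"A intercept={fit_a.params[0]:.3f}  B slope={fit_b.params[1]:.3f}  "
      f"C c-statistic={c_stat:.3f}  D net benefit@{p_t}={net_benefit:.3f}")
```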