8 research outputs found

    Applying Bayesian Growth Modeling In Machine Learning For Longitudinal Data

    There has been increasing interest in the use of Bayesian growth modeling in a machine learning environment to answer questions about patterns of change in social and human behavior in longitudinal data. Machine learning is well understood to work best with “big data,” because large sample sizes give algorithms a better opportunity to “learn” the structure of the data from a training set and predict performance in an unseen testing set. Unfortunately, not all researchers have access to large samples, and there is a lack of methodological research addressing the utility of machine learning with longitudinal data based on small sample sizes. Additionally, there is limited methodological research on the moderating effect that priors have on other data conditions. Therefore, the purpose of the current study was to understand: (a) the interactive relationship between priors and sample sizes in longitudinal predictive modeling, (b) the interactive relationship between priors and the number of waves of data, and (c) the interactive relationship between priors and the proportion of cases in the two levels of a dichotomous time-invariant predictor for Bayesian growth modeling in a machine learning environment. Monte Carlo simulation was adopted to assess these questions, and data were generated based on alumni donation data from a university in the mid-Atlantic region, with model parameters set to mimic “real life” data as closely as possible. Results show that although all main and interaction effects were statistically significant, only the main effects of sample size and waves of data, and the interaction between waves of data and sample size, showed meaningful effect sizes. Additionally, under the prior conditions examined, informative priors did not show higher prediction accuracy than non-informative priors. The indifference between informative and non-informative priors appears to be associated with model complexity and the competition between strongly informative and weakly informative priors. This study is one of the first known studies to examine Bayesian estimation in the context of machine learning. Results of the current study suggest that capitalizing on the advantages offered jointly by these two modeling approaches shows promise. Although much is still unknown and in need of investigation regarding the conditions under which a combination of Bayesian modeling and machine learning affects prediction accuracy, the current dissertation provides a first step in that direction.
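    For readers unfamiliar with the simulation design described above, the following is a minimal illustrative sketch of the kind of data-generating process the abstract implies: a linear growth model with random intercepts and slopes, a dichotomous time-invariant predictor, and manipulable sample size, number of waves, and group proportion. All parameter values and names (e.g. simulate_growth_data, p_group) are assumptions for illustration, not the dissertation's actual specification.

    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_growth_data(n=200, waves=4, p_group=0.5):
        """Simulate longitudinal data from a linear growth model with a
        dichotomous time-invariant predictor (illustrative values only)."""
        group = rng.binomial(1, p_group, size=n)                # time-invariant predictor
        intercept = 10.0 + 2.0 * group + rng.normal(0, 1.5, n)  # person-specific intercepts
        slope = 0.8 + 0.5 * group + rng.normal(0, 0.4, n)       # person-specific growth rates
        time = np.arange(waves)
        # outcome[i, t] = intercept_i + slope_i * t + residual noise
        y = intercept[:, None] + slope[:, None] * time + rng.normal(0, 1.0, (n, waves))
        return group, y

    group, y = simulate_growth_data(n=200, waves=4, p_group=0.5)
    print(y.shape)  # (200, 4): one row per case, one column per wave

    Varying n, waves, and p_group across replications would mirror the study's crossed simulation factors; in the dissertation itself, each generated data set would then be fit with informative versus non-informative priors before prediction accuracy is compared.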

    Towards a tricorder: clinical, health economic, and ethical investigation of point-of-care artificial intelligence electrocardiogram for heart failure

    Heart failure (HF) is an international public health priority and a focus of the NHS Long Term Plan. There is a particular need in primary care for screening and early detection of heart failure with reduced ejection fraction (HFrEF) – the most common and serious HF subtype, and the only one with an abundant evidence base for effective therapies. Digital health technologies (DHTs) integrating artificial intelligence (AI) could improve diagnosis of HFrEF. Specifically, through a convergence of DHTs and AI, a single-lead electrocardiogram (ECG) can be recorded by a smart stethoscope and interrogated by AI (AI-ECG) to potentially serve as a point-of-care HFrEF test. However, there are concerning evidence gaps for such DHTs applying AI across intersecting clinical, health economic, and ethical considerations. My thesis therefore investigates hypotheses that AI-ECG is 1.) Reliable, accurate, unbiased, and can be patient self-administered, 2.) Of justifiable health economic impact for primary care deployment, and 3.) Appropriate across ethical domains for deployment as a tool for patient self-administered screening. The theoretical basis for this work is presented in the Introduction (Chapter 1). Chapter 2 describes the first large-scale, multi-centre independent external validation study of AI-ECG, prospectively recruiting 1,050 patients and highlighting impressive performance: area under the curve, sensitivity, and specificity up to 0·91 (95% confidence interval: 0·88–0·95), 91·9% (78·1–98·3), and 80·2% (75·5–84·3) respectively; and absence of bias by age, sex, and ethnicity. Performance was independent of operator, and usability of the tool extended to patients being able to self-examine. Chapter 3 presents a clinical and health economic outcomes analysis using a contemporary digital repository of 2.5 million NHS patient records. A propensity-matched cohort was derived using all patients diagnosed with HF from 2015-2020 (n = 34,208). Novel findings included the unacceptable reality that 70% of index HF diagnoses are made through hospitalisation, whereas index diagnosis through primary care conferred a medium-term survival advantage and long-term cost saving (£2,500 per patient). This underpins a health economic model for the deployment of AI-ECG across primary care. Chapter 4 presents a normative ethical analysis focusing on equity, agency, data rights, and responsibility for safe, effective, and trustworthy implementation of an unprecedented at-home patient self-administered AI-ECG screening programme. I propose approaches to mitigating any potential harms, towards preserving and promoting trust, patient engagement, and public health. Collectively, this thesis marks novel work highlighting AI-ECG as a tool with the potential to address major cardiovascular public health priorities. Scrutiny through complementary clinical, health economic, and ethical considerations can directly serve patients and health systems by blueprinting best-practice for the evaluation and implementation of DHTs integrating AI – building the conviction needed to realise the full potential of such technologies.
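    The validation metrics quoted above (area under the curve, sensitivity, specificity) are standard diagnostic-accuracy measures. The sketch below shows how such metrics are conventionally computed from model scores and reference labels; the data here are synthetic placeholders, and the threshold of 0.5 is an assumption, so the numbers produced bear no relation to the thesis's actual results or the AI-ECG pipeline.

    import numpy as np
    from sklearn.metrics import roc_auc_score, confusion_matrix

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for HFrEF reference labels and AI-ECG risk scores.
    y_true = rng.binomial(1, 0.15, size=1050)
    y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, size=1050), 0, 1)
    y_pred = (y_score >= 0.5).astype(int)   # assumed operating threshold

    auc = roc_auc_score(y_true, y_score)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"AUC={auc:.2f}  sensitivity={sensitivity:.1%}  specificity={specificity:.1%}")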

    Preface
