
    Development and validation of risk prediction equations to estimate future risk of heart failure in patients with diabetes: a prospective cohort study

    Objective: To develop and externally validate risk prediction equations to estimate the 10-year risk of heart failure in patients with diabetes, aged 25–84 years. Design: Cohort study using routinely collected data from general practices in England between 1998 and 2014 contributing to the QResearch and Clinical Practice Research Datalink (CPRD) databases. Setting: We used 763 QResearch practices to develop the equations. We validated them in 254 different QResearch practices and 357 CPRD practices. Participants: 437 806 patients in the derivation cohort; 137 028 in the QResearch validation cohort, and 197 905 in the CPRD validation cohort. Measurement: Incident diagnosis of heart failure recorded on the patients’ linked electronic General Practitioner (GP), mortality, or hospital record. Risk factors included age, body mass index (BMI), systolic blood pressure, cholesterol/high-density lipoprotein (HDL) ratio, glycosylated haemoglobin (HbA1c), material deprivation, ethnicity, smoking, diabetes duration, type of diabetes, atrial fibrillation, cardiovascular disease, chronic renal disease, and family history of premature coronary heart disease. Methods: We used Cox proportional hazards models to derive separate risk equations for men and women, evaluated at 10 years. Measures of calibration, discrimination, and sensitivity were determined in two external validation cohorts. Results: We identified 25 480 cases of heart failure in the derivation cohort, 8189 in the QResearch validation cohort, and 11 311 in the CPRD cohort. The equations included: age, BMI, systolic blood pressure, cholesterol/HDL ratio, HbA1c, material deprivation, ethnicity, smoking, duration and type of diabetes, atrial fibrillation, cardiovascular disease, and chronic renal disease. The equations had good performance in CPRD for women (R2 of 41.2%; D statistic 1.71; receiver operating characteristic (ROC) curve statistic 0.78) and men (38.7%; 1.63; and 0.77, respectively). Conclusions: We have developed and externally validated risk prediction equations to quantify the absolute risk of heart failure in men and women with diabetes. These can be used to identify patients at high risk of heart failure for prevention or assessment of the disease.
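    Equations of this kind are typically applied by combining a patient's centred risk-factor values into a linear predictor and mapping it through a baseline survival term. A minimal sketch, assuming invented coefficients, cohort means, and baseline survival (the published sex-specific equations supply the real values):

    import math

    # Illustrative values only; the published sex-specific equations
    # supply the actual coefficients, means, and baseline survival.
    BASELINE_SURVIVAL_10Y = 0.95          # assumed S0(10)
    COEFFS = {"age": 0.05, "bmi": 0.02,   # assumed log hazard ratios
              "sbp": 0.008, "hba1c": 0.03}
    MEANS = {"age": 60.0, "bmi": 29.0, "sbp": 135.0, "hba1c": 55.0}

    def ten_year_hf_risk(patient):
        """Absolute 10-year risk = 1 - S0(10) ** exp(linear predictor),
        with the linear predictor centred on cohort means."""
        lp = sum(COEFFS[k] * (patient[k] - MEANS[k]) for k in COEFFS)
        return 1.0 - BASELINE_SURVIVAL_10Y ** math.exp(lp)

    print(f"{ten_year_hf_risk({'age': 68, 'bmi': 33, 'sbp': 150, 'hba1c': 64}):.1%}")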

    A temporal prognostic model based on dynamic Bayesian networks: mining medical insurance data

    A prognostic model is a formal combination of multiple predictors from which the risk probability of a specific diagnosis can be modelled for patients. Prognostic models have become essential instruments in medicine. The models are used for prediction: guiding doctors towards a smart diagnosis and patient-specific decisions, or helping to plan the utilization of resources for patient groups with similar prognostic paths. Dynamic Bayesian networks (DBNs) theoretically provide a very expressive and flexible model for solving temporal problems in medicine. However, using them involves various challenges arising both from the nature of the clinical domain and from the nature of the DBN modelling and inference process itself. The challenges from the clinical domain include insufficient knowledge of the temporal interactions of processes in the medical literature, the sparse nature and variability of medical data collection, and the difficulty of preparing and abstracting clinical data into a suitable format without losing valuable information in the process. Challenges with the DBN methodology and implementation include the lack of tools that allow easy modelling of temporal processes; overcoming this would help solve various clinical temporal reasoning problems. In this thesis, we addressed these challenges while building a temporal network, with explanations of the effects of predisposing factors such as age and gender and the progression information of all diagnoses, using claims data from an insurance company in Kenya. We showed that our network could differentiate the probability of exposure to a diagnosis given age and gender, and the possible paths given a patient's history. We also presented evidence that the more patient history is provided, the better the prediction of future diagnoses.
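    As an illustration of the kind of temporal reasoning described, here is a minimal sketch of forward filtering in a toy two-slice DBN, where a hidden condition is observed through noisy claim codes; the states, transition, and emission probabilities are all invented, not taken from the thesis:

    import numpy as np

    # Toy DBN over a hidden condition, observed through noisy claim codes.
    STATES = ["healthy", "hypertension", "diabetes"]

    T = np.array([[0.85, 0.10, 0.05],    # transition P(state_t | state_t-1);
                  [0.05, 0.80, 0.15],    # the real model would condition this
                  [0.02, 0.18, 0.80]])   # on age band and gender

    E = np.array([[0.90, 0.07, 0.03],    # emission P(claim code | state):
                  [0.10, 0.85, 0.05],    # claims record the true condition
                  [0.05, 0.10, 0.85]])   # imperfectly

    def filter_and_predict(observed_codes, prior=None):
        """Forward filtering: update the belief over the hidden condition
        after each observed claim, then predict one visit ahead."""
        belief = prior if prior is not None else np.full(3, 1 / 3)
        for code in observed_codes:
            belief = (belief @ T) * E[:, STATES.index(code)]
            belief /= belief.sum()        # normalise after conditioning
        return belief @ T                 # one-step-ahead prediction

    # More history sharpens the forecast of the next diagnosis:
    print(filter_and_predict(["hypertension"]))
    print(filter_and_predict(["hypertension", "hypertension", "diabetes"]))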

    A design science framework for research in health analytics

    Data analytics provide the ability to systematically identify patterns and insights from a variety of data as organizations pursue improvements in their processes, products, and services. Analytics can be classified by their ability to explore, explain, predict, and prescribe. When applied to the field of healthcare, analytics presents a new frontier for business intelligence. In 2013 alone, the Centers for Medicare and Medicaid Services (CMS) reported that the national health expenditure was $2.9 trillion, representing 17.4% of the total United States GDP. The Patient Protection and Affordable Care Act of 2010 (ACA) requires all hospitals to implement electronic medical record (EMR) technologies by year 2014 (Patient Protection and Affordable Care Act, 2010). Moreover, the ACA makes healthcare processes and outcomes more transparent by making related data readily available for research. Enterprising organizations are employing analytics and analytical techniques to find patterns in healthcare data (I. R. Bardhan & Thouin, 2013; Hansen, Miron-Shatz, Lau, & Paton, 2014). The goal is to assess the cost and quality of care and identify opportunities for improvement, for individual organizations as well as the healthcare system as a whole. Yet there remains a need for research that systematically understands, explains, and predicts the sources and impacts of the widely observed variance in the cost and quality of care. This is a driving motivation for research in healthcare. This dissertation conducts a design-theoretic examination of the application of advanced data analytics in healthcare. Heart failure is the number one cause of death and the biggest contributor to healthcare costs in the United States. An exploratory examination of the application of predictive analytics is conducted in order to understand the cost and quality of care provided to heart failure patients. The specific research question addressed is: how can we improve and expand our understanding of the variance in the cost and quality of care for heart failure? Using state-level data from the State Health Plan of North Carolina, a standard readmission model was assessed as a baseline measure for prediction, and advanced analytics were compared to this baseline. This dissertation demonstrates that advanced analytics can improve readmission predictions as well as expand understanding of the profile of a patient readmitted for heart failure. Implications are assessed for academics and practitioners.
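    A minimal sketch of such a baseline-versus-advanced comparison, assuming a de-identified extract with hypothetical file and column names (hf_admissions.csv, readmitted_30d, etc.); the dissertation's actual feature set and models are not reproduced here:

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score

    df = pd.read_csv("hf_admissions.csv")           # hypothetical extract
    X = df[["age", "length_of_stay", "num_prior_admits", "charlson_index"]]
    y = df["readmitted_30d"]                        # 30-day readmission flag

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              stratify=y, random_state=0)

    baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    advanced = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

    for name, model in [("baseline LR", baseline), ("boosted trees", advanced)]:
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: AUC = {auc:.3f}")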

    Developing Prediction Models for Kidney Stone Disease

    Kidney stone disease has become more prevalent through the years, leading to high treatment costs and associated health risks. In this study, we explore a large medical database and machine learning methods to extract features and construct models for diagnosing kidney stone disease. Data of 46,250 patients and 58,976 hospital admissions were extracted and analyzed, including patients’ demographic information, diagnoses, vital signs, and laboratory measurements of the blood and urine. We compared the kidney stone (KDS) patients to patients with abdominal and back pain (ABP), patients diagnosed with nephritis, nephrosis, renal sclerosis, chronic kidney disease, or acute and unspecified renal failure (NCA), patients diagnosed with urinary tract infections and other diseases of the kidneys and the ureters (OKU), and patients with other conditions (OTH). We built logistic regression models and random forest models to determine the best prediction outcome. For the KDS vs. ABP group, a logistic regression model using five variables, namely age, mean respiratory rate, and blood chloride, blood creatinine, and blood CO2 levels, from the patients’ first lab results gave the best prediction accuracy of 0.699. This model maximized sensitivity with a value of 0.726. For KDS vs. NCA, we found that a logistic regression using the Elixhauser score and blood urea nitrogen (BUN) values from the first lab results for patients with first admittance produced the best outcome, with an accuracy of 0.883 and maximized specificity of 0.898. For KDS vs. OKU, a logistic regression using the estimated glomerular filtration rate (EGFR) calculated from the average lab values gave the best outcome, with an accuracy of 0.852 and maximized specificity of 0.922. Finally, a logistic regression using age, EGFR, BUN, blood creatinine, and blood CO2 gave the best outcome for KDS vs. OTH, with an accuracy of 0.894 and maximized specificity of 0.903. This research gives the medical field models that could potentially be used for kidney stone patients. It also provides a stepping stone for researchers to build on if they want to build kidney stone models for a different population of patients.
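    A sketch of how the KDS vs. ABP model could be reproduced, assuming the five reported predictors are available under hypothetical column names in a patient-level extract, with sensitivity computed from the confusion matrix:

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, confusion_matrix

    df = pd.read_csv("kds_vs_abp.csv")              # hypothetical extract
    features = ["age", "mean_resp_rate", "chloride_first",
                "creatinine_first", "co2_first"]
    X, y = df[features], df["is_kds"]               # 1 = kidney stone

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=42)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    pred = model.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print("accuracy:", round(accuracy_score(y_te, pred), 3))
    print("sensitivity:", round(tp / (tp + fn), 3))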

    "GOLD or lower limit of normal definition? a comparison with expert-based diagnosis of chronic obstructive pulmonary disease in a prospective cohort-study"

    Background: The Global initiative for chronic Obstructive Lung Disease (GOLD) defines COPD as a fixed post-bronchodilator ratio of forced expiratory volume in 1 second to forced vital capacity (FEV1/FVC) below 0.7. Age-dependent cut-off values below the lower fifth percentile (LLN) of this ratio, derived from the general population, have been proposed as an alternative. We wanted to assess the diagnostic accuracy and prognostic capability of the GOLD and LLN definitions when compared to an expert-based diagnosis. Methods: In a prospective cohort study, 405 patients aged ≥65 years with a general practitioner's diagnosis of COPD were recruited and followed up for 4.5 (median; quartiles 3.9, 5.1) years. Prevalence rates of COPD according to GOLD and three LLN definitions, along with diagnostic performance measures, were calculated. The reference standard was the diagnosis of COPD by an expert panel that used all available diagnostic information, including spirometry and bodyplethysmography. Results: Compared to the expert panel diagnosis, 'GOLD-COPD' misclassified 69 (28%) patients, and the three LLNs misclassified 114 (46%), 96 (39%), and 98 (40%) patients, respectively. The GOLD classification led to more false positive diagnoses, the LLNs to more false negative ones. The main predictors beyond the FEV1/FVC ratio for an expert diagnosis of COPD were the FEV1 % predicted and the residual volume/total lung capacity ratio (RV/TLC). Adding FEV1 and RV/TLC to GOLD or LLN improved the diagnostic accuracy, resulting in a significant reduction of up to 50% in the number of misdiagnoses. The expert diagnosis of COPD better predicted exacerbations, hospitalizations, and mortality than GOLD or LLN. Conclusions: GOLD criteria over-diagnose COPD, while LLN definitions under-diagnose COPD, in elderly patients as compared to an expert panel diagnosis. Incorporating FEV1 and RV/TLC into the GOLD-COPD or LLN-based definition brings both definitions closer to the expert panel diagnosis of COPD, and to daily clinical practice.
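    The two spirometric definitions reduce to simple threshold rules; a minimal sketch, where the LLN threshold is assumed to come precomputed from external age- and sex-specific reference equations:

    def copd_by_gold(fev1: float, fvc: float) -> bool:
        """GOLD: post-bronchodilator FEV1/FVC below the fixed 0.70 ratio."""
        return fev1 / fvc < 0.70

    def copd_by_lln(fev1: float, fvc: float, lln_ratio: float) -> bool:
        """LLN: FEV1/FVC below the lower fifth percentile of the reference
        population for this patient's age, sex, and height (supplied by
        external reference equations, not computed here)."""
        return fev1 / fvc < lln_ratio

    # A 75-year-old with FEV1 = 1.9 L, FVC = 2.9 L and an assumed LLN of
    # 0.62: GOLD labels this COPD (ratio 0.655 < 0.70) while LLN does not,
    # illustrating how the fixed cut-off yields more positives in the elderly.
    print(copd_by_gold(1.9, 2.9), copd_by_lln(1.9, 2.9, 0.62))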

    Survival after liver transplantation in the United Kingdom and Ireland compared with the United States

    Background and Aim: Surgical mortality in the US is widely perceived to be superior to that in the UK. However, previous comparisons of surgical outcome in the two countries have often failed to take sufficient account of case-mix or examine long-term outcome. The standardised nature of liver transplantation practice makes it uniquely placed for undertaking reliable international comparisons of surgical outcome. The objective of this study is to undertake a risk-adjusted, disease-specific comparison of both short- and long-term survival of liver transplant recipients in the UK and Ireland with that in the US. Design, setting and participants: Multi-centre cohort study using two high-quality national databases including all adults who underwent a first single-organ liver transplant in the UK and Ireland (n=5,925) and the US (n=41,866) between March 1994 and March 2005. Main outcome measures: Post-transplant mortality during the first 90 days, from 90 days to 1 year, and beyond the first year, adjusted for donor and recipient characteristics. Results: Risk-adjusted mortality in the UK and Ireland was generally higher than in the US during the first 90 days (hazard ratio 1.17, 95% CI 1.07-1.29), both for patients transplanted for acute liver failure (hazard ratio 1.27, 95% CI 1.01-1.60) as well as those transplanted for chronic liver disease (hazard ratio 1.18, 95% CI 1.07-1.31). Between 90 days and 1 year post-transplantation, no statistically significant differences in overall risk-adjusted mortality were noted between the two cohorts. Survivors of the first post-transplant year in the UK and Ireland had lower overall risk-adjusted mortality than those transplanted in the US (hazard ratio 0.88, 95% CI 0.81-0.96). This difference was observed among patients transplanted for chronic liver disease (hazard ratio 0.88, 95% CI 0.81-0.96) but not those transplanted for acute liver failure (hazard ratio 1.02, 95% CI 0.70-1.50). Conclusions: Whilst risk-adjusted mortality is higher in the UK and Ireland during the first 90 days following liver transplantation, it is higher in the US among those liver transplant recipients who survived the first post-transplant year. Our results are consistent with the notion that the US has superior acute peri-operative care whereas the UK appears to provide better-quality chronic care following liver transplantation surgery.
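    Epoch-specific hazard ratios of this kind can be estimated by fitting one Cox model per follow-up window, censoring at the window's end; a sketch using lifelines, with hypothetical column names and a deliberately reduced set of adjustment covariates:

    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("transplants.csv")    # hypothetical pooled registry

    def epoch_hazard_ratio(data, start, stop):
        """Fit a Cox model on follow-up restricted to (start, stop] days:
        only patients still alive at `start` enter, and deaths after
        `stop` are censored at the window's end."""
        d = data[data["days"] > start].copy()
        d["event"] = ((d["days"] <= stop) & (d["died"] == 1)).astype(int)
        d["days"] = d["days"].clip(upper=stop)
        cph = CoxPHFitter().fit(
            d[["days", "event", "uk_ireland", "age", "acute_failure"]],
            duration_col="days", event_col="event")
        return cph.hazard_ratios_["uk_ireland"]

    for start, stop in [(0, 90), (90, 365), (365, 4000)]:
        print(start, stop, round(epoch_hazard_ratio(df, start, stop), 2))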

    The Impact of Nearly Universal Insurance Coverage on Health Care Utilization and Health: Evidence from Medicare

    We use the increases in health insurance coverage at age 65 generated by the rules of the Medicare program to evaluate the effects of health insurance coverage on health-related behaviors and outcomes. The rise in overall coverage at age 65 is accompanied by a narrowing of disparities across race and education groups. Groups with bigger increases in coverage at 65 experience bigger reductions in the probability of delaying or not receiving medical care, and bigger increases in the probability of routine doctor visits. Hospital discharge records also show large increases in admission rates at age 65, especially for elective procedures like bypass surgery and joint replacement. The rises in hospitalization are bigger for whites than blacks, and for residents of areas with higher rates of insurance coverage prior to age 65, suggesting that the gains arise because of the relative generosity of Medicare, rather than the mere availability of insurance coverage. Finally, there are small impacts of reaching age 65 on self-reported health, with the largest gains among the groups that experience the largest gains in insurance coverage. In contrast, we find no evidence of a shift in the rate of growth of mortality rates at age 65.
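    The identification strategy amounts to a regression discontinuity in age: compare outcomes just below and just above 65, allowing separate age trends on each side of the eligibility threshold. A minimal sketch with hypothetical file and variable names:

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("utilization.csv")    # hypothetical survey extract
    df = df[(df["age"] >= 55) & (df["age"] <= 75)].copy()
    df["over65"] = (df["age"] >= 65).astype(int)
    df["age_c"] = df["age"] - 65           # centre the running variable

    # The coefficient on over65 estimates the jump in routine doctor
    # visits at the Medicare eligibility threshold, net of age trends.
    model = smf.ols("doctor_visit ~ over65 + age_c + over65:age_c",
                    data=df).fit()
    print(model.params["over65"], model.bse["over65"])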