Discovering disease-disease associations using electronic health records in The Guideline Advantage (TGA) dataset
Certain diseases have strong comorbidity and co-occurrence with others. Understanding disease-disease associations can increase awareness among healthcare providers of co-occurring conditions and facilitate earlier diagnosis, prevention, and treatment. In this study, we utilized The Guideline Advantage (TGA), a large longitudinal electronic health record dataset from 70 outpatient clinics across the United States, to investigate potential disease-disease associations. Specifically, the 50 most prevalent disease diagnoses were manually identified from 165,732 unique patients. To investigate co-occurrence and dependency associations among the 50 diseases, the categorical disease terms were first mapped into numerical vectors based on disease co-occurrence frequency in individual patients using the Word2Vec approach. Novel disease association clusters were then identified using correlation and clustering analyses in the numerical space. Moreover, the distributions of time delay (Δt) between pairs of strongly associated diseases (correlation coefficients ≥ 0.5) were calculated to show the dependency among the diseases. The results can indicate the risk of disease comorbidity and complications, and facilitate disease prevention and optimal treatment decision-making.
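A minimal sketch of the embedding-and-clustering workflow described above, assuming each patient's diagnosis history is available as a list of disease terms; the disease names, hyperparameters, and clustering threshold below are illustrative, not taken from the study:

```python
# Sketch: embed disease terms with Word2Vec, then cluster by correlation.
# `patient_diagnoses` stands in for the real per-patient diagnosis lists.
import numpy as np
from gensim.models import Word2Vec
from scipy.cluster.hierarchy import linkage, fcluster

patient_diagnoses = [
    ["hypertension", "type2_diabetes", "hyperlipidemia"],
    ["hypertension", "depression"],
    ["type2_diabetes", "obesity", "hypertension"],
]  # placeholder data

# Treat each patient's diagnosis list as a "sentence" of disease "words".
model = Word2Vec(
    sentences=patient_diagnoses,
    vector_size=50,   # illustrative embedding dimension
    window=5,
    min_count=1,
    sg=1,             # skip-gram
    seed=42,
)

diseases = sorted(model.wv.index_to_key)
vectors = np.array([model.wv[d] for d in diseases])

# Pairwise Pearson correlations between disease vectors.
corr = np.corrcoef(vectors)

# Hierarchical clustering on (1 - correlation) as a distance.
dist = 1.0 - corr
condensed = dist[np.triu_indices_from(dist, k=1)]
clusters = fcluster(linkage(condensed, method="average"), t=0.5, criterion="distance")
print(dict(zip(diseases, clusters.tolist())))

# Report strongly associated pairs, mirroring the >= 0.5 threshold in the abstract.
for i in range(len(diseases)):
    for j in range(i + 1, len(diseases)):
        if corr[i, j] >= 0.5:
            print(diseases[i], diseases[j], round(float(corr[i, j]), 2))
```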
Women and ethnoracial minorities with poor cardiovascular health measures associated with a higher risk of developing mood disorder
BACKGROUND: Mood disorders (MDS) are a type of mental illness that affects millions of people in the United States. Early prediction of MDS can give providers greater opportunity to treat these disorders. We hypothesized that longitudinal cardiovascular health (CVH) measurements would be informative for MDS prediction.
METHODS: To test this hypothesis, the American Heart Association's Guideline Advantage (TGA) dataset was used, which contained longitudinal EHR data from 70 outpatient clinics. Statistical analysis and machine learning models were employed to identify associations between MDS and the longitudinal CVH metrics and other confounding factors.
RESULTS: Patients diagnosed with MDS consistently had a higher proportion of poor CVH compared with patients without MDS, with the largest differences between groups for body mass index (BMI) and smoking. Race and gender were associated with the status of CVH metrics. Approximately 46% of female patients with MDS had poor hemoglobin A1c compared with 44% of those without MDS; 62% of those with MDS had poor BMI compared with 47% of those without MDS; 59% of those with MDS had poor blood pressure (BP) compared with 43% of those without MDS; and 43% of those with MDS were current smokers compared with 17% of those without MDS.
CONCLUSIONS: Women and ethnoracial minorities with poor cardiovascular health measures were associated with a higher risk of developing MDS, indicating the utility of routine medical record data collected in care for improving detection and treatment of MDS among patients with poor CVH.
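The abstract does not specify which models were used; as one illustrative possibility, a logistic regression relating MDS status to binary poor-CVH indicators and demographic confounders might look like the sketch below (all column names and data are hypothetical):

```python
# Illustrative only: logistic regression of mood-disorder status on CVH
# categories plus demographic confounders. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "mds": rng.integers(0, 2, n),              # 1 = mood disorder diagnosis
    "poor_bmi": rng.integers(0, 2, n),         # 1 = BMI in the "poor" CVH category
    "poor_bp": rng.integers(0, 2, n),
    "current_smoker": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["poor_bmi", "poor_bp", "current_smoker", "female"]])
fit = sm.Logit(df["mds"], X).fit(disp=False)

# Odds ratios with 95% confidence intervals.
odds_ratios = np.exp(fit.params).rename("OR")
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds_ratios, ci], axis=1))
```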
Time-series cardiovascular risk factors and receipt of screening for breast, cervical, and colon cancer: The Guideline Advantage
BACKGROUND: Cancer is the second leading cause of death in the United States. Cancer screenings can detect precancerous cells and allow for earlier diagnosis and treatment. Our purpose was to better understand risk factors for receipt of cancer screening and to assess changes in cardiovascular health (CVH) measures before and after cancer screening among patients.
METHODS: We used The Guideline Advantage (TGA), an American Heart Association ambulatory quality clinical data registry of electronic health record data (n = 362,533 patients), to investigate associations between time-series CVH measures and receipt of breast, cervical, and colon cancer screenings. Long short-term memory (LSTM) neural networks were employed to predict receipt of cancer screenings. We also compared the distributions of CVH factors between patients who received cancer screenings and those who did not. Finally, we examined and quantified changes in CVH measures among the screened and non-screened groups.
RESULTS: Model performance was evaluated by the area under the receiver operating characteristic curve (AUROC): the average AUROC of 10 curves was 0.63 for breast, 0.70 for cervical, and 0.61 for colon cancer screening. Distribution comparison found that screened patients had a higher prevalence of poor CVH categories. CVH submetrics improved for patients after cancer screenings.
CONCLUSION: A deep learning algorithm could be used to investigate the associations between time-series CVH measures and cancer screenings in an ambulatory population. Patients with more adverse CVH profiles tend to be screened for cancers, and cancer screening may also prompt favorable changes in CVH. Cancer screenings may thus improve patient CVH, potentially decreasing the burden of disease and costs for the health system (e.g., cardiovascular diseases and cancers).
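A minimal Keras sketch of an LSTM classifier of the kind described in the methods, assuming each patient is represented as a fixed-length sequence of visit-level CVH measurements; the shapes, architecture, and hyperparameters are assumptions, not the study's configuration:

```python
# Illustrative LSTM classifier for receipt of cancer screening from
# time-series CVH measurements. Shapes and hyperparameters are assumptions.
import numpy as np
import tensorflow as tf

n_patients, n_visits, n_features = 1000, 8, 5   # e.g. BMI, BP, A1c, smoking, cholesterol
rng = np.random.default_rng(0)
X = rng.normal(size=(n_patients, n_visits, n_features)).astype("float32")
y = rng.integers(0, 2, n_patients)               # 1 = screening received

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_visits, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC(name="auroc")],   # AUROC, as reported in the abstract
)
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))
```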
Developing a risk stratification model for surgical site infection after abdominal hysterectomy
OBJECTIVE: The incidence of surgical site infection (SSI) after hysterectomy ranges widely, from 2% to 21%. There is insufficient understanding of risk factors to build a specific risk stratification index. METHODS: Retrospective case-control study of 545 abdominal and 275 vaginal hysterectomies performed from 7/1/03 to 6/30/05 at four institutions. SSIs were defined using CDC/NNIS criteria. Independent risk factors for abdominal hysterectomy were identified by logistic regression. RESULTS: There were 13 deep incisional, 53 superficial incisional, and 18 organ-space SSIs after abdominal hysterectomy and 14 organ-space SSIs after vaginal hysterectomy. Because risk factors for organ-space SSI differed in univariate analysis, further analyses focused on incisional SSI after abdominal hysterectomy. The maximum serum glucose within 5 days after operation was highest in patients with deep incisional SSI, lower in patients with superficial incisional SSI, and lowest in uninfected patients (median 189, 156, and 141 mg/dL; p = .005). Independent risk factors for incisional SSI included blood transfusion (odds ratio (OR) 2.4) and morbid obesity (body mass index (BMI) > 35, OR 5.7). Duration of operation > 75th percentile (OR 1.7), obesity (BMI 30-35, OR 3.0), and lack of private health insurance (OR 1.7) were marginally associated with increased odds of SSI. CONCLUSIONS: Incisional SSI after abdominal hysterectomy was associated with increased BMI and blood transfusion. Longer operative time and lack of private health insurance were marginally associated with SSI. A specific risk stratification index could help to more accurately predict the risk of incisional SSI following abdominal hysterectomy.
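The abstract reports group medians for postoperative glucose with a p value but does not name the test; a nonparametric comparison such as the Kruskal-Wallis test, sketched below on synthetic data, is one plausible choice:

```python
# Illustrative comparison of postoperative maximum glucose across the three
# groups (deep incisional SSI, superficial incisional SSI, uninfected).
# Data are synthetic, centered on the medians reported in the abstract.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
deep = rng.normal(189, 40, 13)
superficial = rng.normal(156, 35, 53)
uninfected = rng.normal(141, 30, 400)

stat, p = kruskal(deep, superficial, uninfected)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")
```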
A multicenter study of Clostridium difficile infection-related colectomy, 2000-2006
BACKGROUND: The incidence of Clostridium difficile infection (CDI) has been increasing. Previous studies report that the number of colectomies for CDI is also rising. Outside of a few notable outbreaks, there are few published data documenting increasing severity of CDI. The specific aims of this multiyear, multicenter study were to assess CDI-related colectomy rates and to compare CDI-related colectomy rates by CDI surveillance definition. METHODS: Cases of CDI and patients who underwent colectomy were identified electronically from 5 US tertiary-care centers from July 2000 through June 2006. Chart review was performed to determine whether a colectomy was for CDI. Monthly CDI-related colectomy rates were calculated as the number of CDI-related colectomies per 1,000 CDI cases. Data between observational groups were compared using χ² and Mann-Whitney U tests. Logistic regression was performed to evaluate risk factors for CDI-related colectomy. RESULTS: 8,569 cases of CDI were identified, and 75 patients had CDI-related colectomy. The overall colectomy rate was 8.7/1,000 CDI cases. The CDI-related colectomy rate ranged from 0 to 23 per 1,000 CDI episodes across hospitals. The colectomy rate was 4.3/1,000 CDI cases for healthcare facility (HCF)-onset CDI and 16.5/1,000 CDI cases for community-onset CDI (p < .05). There were significantly more CDI-related colectomies at hospitals B and C (p < .05). CONCLUSIONS: The overall CDI-related colectomy rate was low, and there was no significant change in the CDI-related colectomy rate over time. Onset of disease outside of the study hospital was an independent risk factor for colectomy.
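A brief sketch of the comparisons named in the methods, on synthetic data: a χ² test of colectomy counts across hospitals and a Mann-Whitney U test of monthly colectomy rates between onset groups.

```python
# Sketch of the chi-square and Mann-Whitney U comparisons. All counts and
# rates below are synthetic placeholders, not study data.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Rows = hospitals A-E; columns = [colectomy, no colectomy] among CDI cases.
counts = np.array([
    [10, 1500],
    [25, 1800],
    [20, 1400],
    [12, 2000],
    [8, 1600],
])
chi2, p_chi2, dof, _ = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_chi2:.3f}")

# Monthly colectomy rates per 1,000 CDI cases for two onset groups (synthetic).
rng = np.random.default_rng(1)
hcf_onset = rng.poisson(4.3, 72)
community_onset = rng.poisson(16.5, 72)
u_stat, p_u = mannwhitneyu(hcf_onset, community_onset)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_u:.3g}")
```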
Deriving measures of intensive care unit antimicrobial use from computerized pharmacy data: Methods, validation, and overcoming barriers
OBJECTIVE: To outline methods for deriving and validating intensive care unit (ICU) antimicrobial utilization (AU) measures from computerized data and to describe programming problems that emerged. DESIGN: Retrospective evaluation of computerized pharmacy and administrative data. SETTING: ICUs from four academic medical centers over 36 months. INTERVENTIONS: Investigators separately developed and validated programming code to report AU measures in selected ICUs. Antibacterial and antifungal drugs for systemic administration were categorized and expressed as antimicrobial days (each day that each antimicrobial drug was given to each patient) and patient-days on antimicrobials (each day that any antimicrobial drug was given to each patient). Monthly rates were compiled and analyzed centrally with ICU patient-days as the denominator. Results were validated against data collected from manual medical record review. Frequent discussion among investigators aided identification and correction of programming problems. RESULTS: AU data were successfully programmed through an iterative process of computer code revision. After identifying and resolving major programming errors, comparison of computerized patient-level data with data collected by manual medical record review revealed discrepancies in antimicrobial days and patient-days on antimicrobials ranging from <1% to 17.7%. The hospital for which numerator data were derived from electronic medication administration records had the least discrepant results. CONCLUSIONS: Computerized AU measures can be derived feasibly, but threats to validity must be sought and corrected. The magnitude of discrepancies between computerized AU data and a gold standard based on manual chart review varies, with electronic medication administration records providing maximal accuracy.
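A compact sketch of how the two numerator measures could be derived from a table of antimicrobial administrations; the column names, sample rows, and patient-day denominator are placeholders, not the actual pharmacy schema:

```python
# Sketch: derive antimicrobial days and patient-days on antimicrobials from
# one row per antimicrobial administration. All data are placeholders.
import pandas as pd

pharmacy = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "icu": ["MICU", "MICU", "MICU", "SICU", "SICU"],
    "drug": ["vancomycin", "cefepime", "vancomycin", "fluconazole", "fluconazole"],
    "admin_date": pd.to_datetime(
        ["2006-01-01", "2006-01-01", "2006-01-02", "2006-01-01", "2006-01-02"]
    ),
})
pharmacy["month"] = pharmacy["admin_date"].dt.to_period("M")

# Antimicrobial days: each day that each drug was given to each patient.
antimicrobial_days = (
    pharmacy.drop_duplicates(["patient_id", "drug", "admin_date"])
    .groupby(["month", "icu"])
    .size()
    .rename("antimicrobial_days")
)

# Patient-days on antimicrobials: each day that any drug was given to each patient.
patient_days_on_abx = (
    pharmacy.drop_duplicates(["patient_id", "admin_date"])
    .groupby(["month", "icu"])
    .size()
    .rename("patient_days_on_antimicrobials")
)

# Monthly rate per 1,000 ICU patient-days (denominator would come from census data).
icu_patient_days = 900  # placeholder monthly denominator
rates = (antimicrobial_days / icu_patient_days * 1000).rename("AU_per_1000_patient_days")
print(pd.concat([antimicrobial_days, patient_days_on_abx, rates], axis=1))
```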
Multicenter evaluation of computer automated versus traditional surveillance of hospital-acquired bloodstream infections
OBJECTIVE: Central line–associated bloodstream infection (BSI) rates are a key quality metric for comparing hospital quality and safety. Traditional BSI surveillance may be limited by interrater variability. We assessed whether a computer-automated method of central line–associated BSI detection can improve the validity of surveillance. DESIGN: Retrospective cohort study. SETTING: Eight medical and surgical intensive care units (ICUs) in 4 academic medical centers. METHODS: Traditional surveillance (by hospital staff) and computer algorithm surveillance were each compared against a retrospective audit review using a random sample of blood culture episodes during the period 2004–2007 from which an organism was recovered. Episode-level agreement with audit review was measured with κ statistics, and differences were assessed using the test of equal κ coefficients. Linear regression was used to assess the relationship between surveillance performance (κ) and surveillance-reported BSI rates (BSIs per 1,000 central line–days). RESULTS: We evaluated 664 blood culture episodes. Agreement with audit review was significantly lower for traditional surveillance (κ [95% confidence interval (CI)] = 0.44 [0.37–0.51]) than for computer algorithm surveillance (κ [95% CI] [0.52–0.64]; P = .001). Agreement between traditional surveillance and audit review was heterogeneous across ICUs (P = .001); furthermore, traditional surveillance performed worse among ICUs reporting lower (better) BSI rates (P = .001). In contrast, computer algorithm performance was consistent across ICUs and across the range of computer-reported central line–associated BSI rates. CONCLUSIONS: Compared with traditional surveillance of bloodstream infections, computer-automated surveillance improves accuracy and reliability, making interfacility performance comparisons more valid. (Infect Control Hosp Epidemiol 2014;35(12):1483–1490)
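A short sketch of the episode-level agreement analysis, computing Cohen's κ for each surveillance method against audit review; the determinations below are synthetic, and the formal test of equal κ coefficients is omitted:

```python
# Sketch: Cohen's kappa of traditional and algorithm surveillance against
# audit review, one determination per blood culture episode (synthetic data).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n_episodes = 664
audit = rng.integers(0, 2, n_episodes)                          # audit-review BSI determination
traditional = np.where(rng.random(n_episodes) < 0.75, audit, 1 - audit)
algorithm = np.where(rng.random(n_episodes) < 0.85, audit, 1 - audit)

print(f"traditional vs audit: kappa = {cohen_kappa_score(audit, traditional):.2f}")
print(f"algorithm vs audit:   kappa = {cohen_kappa_score(audit, algorithm):.2f}")
```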
Multicenter study of the impact of community-onset Clostridium difficile infection on surveillance for C. difficile infection
OBJECTIVE: To evaluate the influence of community-onset/healthcare facility-associated cases on Clostridium difficile infection (CDI) incidence and outbreak detection. DESIGN: Retrospective cohort. SETTING: Five acute-care healthcare facilities in the United States. METHODS: Positive stool C. difficile toxin assays from July 2000 through June 2006 and healthcare facility exposure information were collected. CDI cases were classified as hospital-onset (HO) if they were diagnosed > 48 hours after admission, or as community-onset/healthcare facility-associated if they were diagnosed ≤ 48 hours from admission and had recently been discharged from the healthcare facility. Four surveillance definitions were compared: HO cases only, and HO plus community-onset/healthcare facility-associated cases diagnosed within 30 (HCFA-30), 60 (HCFA-60), and 90 (HCFA-90) days after discharge from the study hospital. Monthly CDI rates were compared. Control charts were used to identify potential CDI outbreaks. RESULTS: The HCFA-30 rate was significantly higher than the HO rate at two healthcare facilities (p < 0.01). The HCFA-30 rate was not significantly different from the HCFA-60 or HCFA-90 rate at any healthcare facility. The correlations between each healthcare facility's monthly rates of HO and HCFA-30 CDI were almost perfect (range, 0.94–0.99; p < 0.001). Overall, 12 time points had a CDI rate > 3 SD above the mean, including 11 by the HO definition and 9 by the HCFA-30 definition, with discordant results at 4 time points (κ = 0.794, p < 0.001). CONCLUSIONS: Tracking community-onset/healthcare facility-associated cases in addition to HO cases captures significantly more CDI cases, but surveillance of HO CDI alone is sufficient to detect an outbreak.
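A minimal sketch of the control-chart rule used for outbreak detection, flagging any month whose CDI rate exceeds the mean by more than 3 standard deviations; the monthly rates are synthetic:

```python
# Sketch: flag months with a CDI rate more than 3 SD above the mean.
# Rates are synthetic monthly CDI cases per 10,000 patient-days.
import numpy as np

rng = np.random.default_rng(0)
monthly_rates = rng.normal(8.0, 1.5, 72)
monthly_rates[40] = 15.0                      # injected spike to illustrate a signal

mean, sd = monthly_rates.mean(), monthly_rates.std(ddof=1)
threshold = mean + 3 * sd
flagged = np.flatnonzero(monthly_rates > threshold)
print("months exceeding mean + 3 SD:", flagged.tolist())
```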
Implementing automated surveillance for tracking Clostridium difficile infection at multiple healthcare facilities
Automated surveillance utilizing electronically available data has been found to be accurate and to save time. An automated CDI surveillance algorithm was validated at four CDC Prevention Epicenters hospitals. Electronic surveillance was highly sensitive and specific, and it showed good to excellent agreement for hospital-onset; community-onset, study facility-associated; indeterminate; and recurrent CDI.
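As an illustration of the kind of rule such an algorithm applies, the sketch below classifies a positive toxin assay by onset setting; the time windows are common surveillance conventions assumed here, not the validated algorithm's actual definitions, and recurrence detection (which requires prior-episode history) is omitted:

```python
# Sketch of a CDI case-classification rule. The cut-offs are assumptions
# following common surveillance conventions, not the study's algorithm.
from datetime import date, timedelta
from typing import Optional

def classify_cdi(specimen_date: date, admit_date: date,
                 last_discharge_date: Optional[date]) -> str:
    """Classify a positive C. difficile toxin assay by likely onset setting."""
    if specimen_date - admit_date > timedelta(days=2):        # > 48 hours after admission
        return "hospital-onset"
    if last_discharge_date and specimen_date - last_discharge_date <= timedelta(weeks=4):
        return "community-onset, study facility-associated"
    if last_discharge_date and specimen_date - last_discharge_date <= timedelta(weeks=12):
        return "indeterminate"
    return "community-associated"

print(classify_cdi(date(2006, 3, 10), date(2006, 3, 1), None))              # hospital-onset
print(classify_cdi(date(2006, 3, 2), date(2006, 3, 1), date(2006, 2, 20)))  # facility-associated
```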
Use of Medicare diagnosis and procedure codes to improve detection of surgical site infections following hip arthroplasty, knee arthroplasty, and vascular surgery
OBJECTIVE: To evaluate the use of routinely collected electronic health data in Medicare claims to identify surgical site infections (SSIs) following hip arthroplasty, knee arthroplasty, and vascular surgery. DESIGN: Retrospective cohort study. SETTING: Four academic hospitals that perform prospective SSI surveillance. METHODS: We developed lists of International Classification of Diseases, Ninth Revision, and Current Procedural Terminology diagnosis and procedure codes to identify potential SSIs. We then screened for these codes in Medicare claims submitted by each hospital on patients older than 65 years of age who had undergone 1 of the study procedures during 2007. Each site reviewed the medical records of patients identified by either claims codes or traditional infection control surveillance to confirm SSI using Centers for Disease Control and Prevention/National Healthcare Safety Network criteria. We assessed the performance of both methods against all chart-confirmed SSIs identified by either method. RESULTS: Claims-based surveillance detected 1.8- to 4.7-fold more SSIs than traditional surveillance, including detection of all previously identified cases. For hip and vascular surgery, there was a 5-fold and 1.6-fold increase in detection of deep and organ/space infections, respectively, with no increased detection of deep and organ/space infections following knee surgery. Use of claims to trigger chart review led to confirmation of SSI in 1 out of 3 charts for hip arthroplasty, 1 out of 5 charts for knee arthroplasty, and 1 out of 2 charts for vascular surgery. CONCLUSION: Claims-based SSI surveillance markedly increased the number of SSIs detected following hip arthroplasty, knee arthroplasty, and vascular surgery. It deserves consideration as a more effective approach to target chart reviews for identifying SSIs.
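A small sketch of the claims-screening step, flagging claims whose diagnosis or procedure codes appear on an SSI-indicator list in order to target charts for review; the code list and column names are illustrative placeholders, not the study's actual lists:

```python
# Sketch: screen Medicare claims for SSI-indicator ICD-9/CPT codes to
# trigger chart review. Codes and columns are placeholders.
import pandas as pd

ssi_indicator_codes = {"998.59", "996.66", "10180"}   # placeholder ICD-9/CPT codes

claims = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "procedure": ["hip_arthroplasty", "knee_arthroplasty", "vascular_surgery"],
    "codes": [["998.59", "715.35"], ["V43.65"], ["996.66", "38.48"]],
})

claims["flagged_for_chart_review"] = claims["codes"].apply(
    lambda codes: any(c in ssi_indicator_codes for c in codes)
)
print(claims[["patient_id", "procedure", "flagged_for_chart_review"]])
```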