
    Social Determinants of Stroke Hospitalization and Mortality in United States’ Counties

    (1) Background: Stroke incidence and outcomes are influenced by socioeconomic status, yet reported population-level studies of these determinants remain scarce. The goal of this ecological analysis was to determine the county-level associations of social determinants with stroke hospitalization and death rates in the United States. (2) Methods: Publicly available data as of 9 April 2021 on socioeconomic factors and outcomes were extracted from the Centers for Disease Control and Prevention. The outcomes of interest were “all stroke hospitalization rates per 1000 Medicare beneficiaries” (SHR) and “all stroke death rates per 100,000 population” (SDR). We used a multivariate binomial generalized linear mixed model after converting the outcomes to binary based on their median values. (3) Results: A total of 3226 counties/county-equivalents of the states and territories in the US were analyzed. Heart disease prevalence (odds ratio, OR = 2.03, p < 0.001), blood pressure medication nonadherence (OR = 2.02, p < 0.001), age-adjusted obesity (OR = 1.24, p = 0.006), presence of hospitals with neurological services (OR = 1.9, p < 0.001), and female head of household (OR = 1.32, p = 0.021) were associated with high SHR, while cost of care per capita for Medicare patients with heart disease (OR = 0.5, p < 0.01) and presence of hospitals (OR = 0.69, p < 0.025) were associated with low SHR. Median household income (OR = 0.6, p < 0.001) and park access (OR = 0.84, p = 0.016) were associated with low SDR, while no college degree (OR = 1.21, p = 0.049) was associated with high SDR. (4) Conclusions: Several socioeconomic factors (e.g., education, income, female head of household) were found to be associated with stroke outcomes. Additional research is needed to investigate intermediate and potentially modifiable factors that can serve as targets for intervention.
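The abstract above dichotomizes each outcome at its county-level median before modeling. A minimal sketch of that median split and of an unadjusted odds ratio from a 2×2 table; the paper's actual model is a multivariate binomial generalized linear mixed model adjusting for many covariates, and all numbers below are illustrative:

```python
import statistics

def dichotomize(values):
    """Split a continuous outcome at its median: 1 = above median, 0 = at/below."""
    med = statistics.median(values)
    return [1 if v > med else 0 for v in values]

def odds_ratio(exposure, outcome):
    """Unadjusted OR from a 2x2 table of binary exposure vs. binary outcome."""
    a = sum(1 for e, o in zip(exposure, outcome) if e and o)        # exposed, event
    b = sum(1 for e, o in zip(exposure, outcome) if e and not o)    # exposed, no event
    c = sum(1 for e, o in zip(exposure, outcome) if not e and o)    # unexposed, event
    d = sum(1 for e, o in zip(exposure, outcome) if not e and not o)
    return (a * d) / (b * c)

# Toy example: hypothetical county stroke-death rates split at the median.
sdr = [30, 45, 50, 80, 90, 20, 60, 70]
high_sdr = dichotomize(sdr)
```

In the adjusted mixed model, each reported OR is exp(coefficient) rather than a raw 2×2 ratio, but the dichotomization step is the same.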

    Nations within a nation: variations in epidemiological transition across the states of India, 1990–2016 in the Global Burden of Disease Study

    An estimated 18% of the world's population lives in India, and many states of India have populations similar to those of large countries. Effective action to improve population health in India requires reliable and comprehensive state-level estimates of disease burden and risk factors over time. Such comprehensive estimates have so far not been available for all major diseases and risk factors. We therefore aimed to estimate the disease burden and risk factors in every state of India as part of the Global Burden of Disease (GBD) Study 2016.

    Novel Methods and Algorithms for Fitting and Equivalent Circuit Synthesis of Multi-port Systems' Frequency Response for Time-Domain Simulation

    Interconnects in electrical/electronic systems are commonly modeled in the frequency domain; however, system-level transient simulation typically requires a circuit-level macromodel. This thesis proposes a novel iterative fitting method, Pole Residue Equivalent System Solver (PRESS), that approximates a multi-port frequency response by a set of poles and residues, which can then be synthesized as an equivalent circuit netlist. The fitting method iteratively picks a few consecutive points from the frequency response and identifies a local transfer function matching their response; however, it tends to generate a large number of poles/residues. To optimize the model order, improvements to the original PRESS algorithm are proposed. Experiments on a multitude of test cases show that the performance of the resulting equivalent circuit closely matches the given frequency-domain model, demonstrating the method's potential for wide application in signal and power integrity modeling and simulation of interconnect networks.
    masters, M.S., Electrical and Computer Engineering -- University of Idaho - College of Graduate Studies, 201
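The pole-residue form underlying such fitting methods is H(s) = Σ_k r_k / (s − p_k). A sketch of the linear step these methods share: once candidate poles are fixed, the residues follow from complex least squares. This is not the PRESS algorithm itself; the poles, single port, and frequency grid below are invented for illustration:

```python
import numpy as np

def eval_pole_residue(s, poles, residues):
    """Evaluate H(s) = sum_k r_k / (s - p_k) at complex frequencies s."""
    s = np.asarray(s, dtype=complex)
    return sum(r / (s - p) for p, r in zip(poles, residues))

def fit_residues(s, H, poles):
    """With poles held fixed, recover residues by linear least squares."""
    A = 1.0 / (s[:, None] - np.asarray(poles)[None, :])  # shape (n_freq, n_poles)
    residues, *_ = np.linalg.lstsq(A, H, rcond=None)
    return residues

# Toy single-port response built from a known stable complex-conjugate pole pair.
true_poles = np.array([-1.0 + 5j, -1.0 - 5j])
true_res = np.array([0.5 - 1j, 0.5 + 1j])
s = 1j * np.linspace(0.1, 20.0, 50)        # samples along the jw axis
H = eval_pole_residue(s, true_poles, true_res)
r_hat = fit_residues(s, H, true_poles)
```

Each recovered (p_k, r_k) pair then maps to an RC/RLC branch in an equivalent circuit netlist; the hard part, which PRESS addresses, is choosing the poles and keeping their count small.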

    What Do Employers Look for in “Business Analytics” Roles? – A Skill Mining Analysis

    Businesses constantly strive to build organizational capacity to use data strategically. As a result, there is a growing demand for business analytics professionals. While higher education systems worldwide have been adapting to build these competencies, they must also meet employers' expectations. Curriculum design for delivering business analytics competencies remains a challenge due to the rapidly evolving nature of business analytics as a discipline. This paper aims to decode industry expectations for the business analytics profile by investigating the skills employers value through an analysis of job descriptions. We use a text-mining approach to understand the relative weight of different skills and to mine skill clusters within business analytics roles. The core skill clusters are hard skills related to big data, business intelligence, and analytical techniques. Results also suggest that traditional machine learning (ML) skills, typically expected in a data science profile, are also sought after in business analytics roles. Surprisingly, soft skills such as communication and stakeholder management are also emerging as essential for business analytics roles. This study provides a better understanding of the interplay between the demand for skills in the job market and curriculum development. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
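A toy sketch of the skill-mining step: counting how often skills from a fixed lexicon appear across job descriptions. The lexicon, postings, and once-per-posting counting rule are illustrative assumptions; the study's actual pipeline mines skill clusters from real postings with a fuller text-mining workflow:

```python
from collections import Counter
import re

# Hypothetical skill lexicon; a real study would use a much larger vocabulary.
SKILLS = {"sql", "python", "tableau", "machine learning",
          "communication", "stakeholder management", "big data"}

def mine_skills(job_descriptions):
    """Count, for each lexicon skill, how many postings mention it
    (whole-phrase, case-insensitive match, counted once per posting)."""
    counts = Counter()
    for text in job_descriptions:
        low = text.lower()
        for skill in SKILLS:
            if re.search(r"\b" + re.escape(skill) + r"\b", low):
                counts[skill] += 1
    return counts

jobs = [
    "Business analyst: SQL, Tableau, strong communication required.",
    "BA role: Python, machine learning, stakeholder management.",
    "Analytics lead: SQL, Python, big data pipelines, communication.",
]
demand = mine_skills(jobs)
```

Frequencies like these can then be weighted (e.g., TF-IDF) and clustered to surface the skill groups the paper reports.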

    Defining the Age of Young Ischemic Stroke Using Data-Driven Approaches

    Introduction: The cut-point defining the age of young ischemic stroke (IS) is clinically and epidemiologically important, yet it is arbitrary and differs across studies. In this study, we leveraged electronic health records (EHRs) and data science techniques to estimate an optimal cut-point for defining the age of young IS. Methods: Patient-level EHRs were extracted from 13 hospitals in Pennsylvania and used in two parallel approaches. The first approach used ICD-9/10 codes from IS patients to group comorbidities and computed similarity scores between every patient pair; we determined the optimal age of young IS by analyzing the trend of patient similarity with respect to their clinical profile across different ages at index IS. The second approach used the IS cohort and a control cohort (without IS) and built three sets of machine-learning models—generalized linear regression (GLM), random forest (RF), and XGBoost (XGB)—to classify patients across seventeen age groups. After extracting feature importance from the models, we determined the optimal age of young IS by analyzing the pattern of comorbidity with respect to the age at index IS. Both approaches were completed separately for male and female patients. Results: The stroke cohort contained 7555 IS patients, and the control included 31,067 patients. In the first approach, the optimal age of young stroke was 53.7 years in female and 51.0 years in male patients. In the second approach, we created 102 models based on three algorithms, 17 age brackets, and two sexes. The optimal age was 53 (GLM), 52 (RF), and 54 (XGB) for female patients, and 52 (GLM and RF) and 53 (XGB) for male patients. Different age and sex groups exhibited different comorbidity patterns. Discussion: Using a data-driven approach, we determined the age of young stroke to be 54 years for women and 52 years for men in our mainly rural population in central Pennsylvania. Future validation studies should include more diverse populations.
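The patient-similarity idea in the first approach can be sketched with Jaccard similarity over comorbidity code sets; one could track the cohort-average similarity across candidate index ages and look for a trend break. The ICD-10 codes and tiny cohort below are hypothetical, and the paper's actual similarity score may differ:

```python
def jaccard(codes_a, codes_b):
    """Similarity between two patients' comorbidity code sets."""
    a, b = set(codes_a), set(codes_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def mean_pairwise_similarity(patients):
    """Average Jaccard similarity over every patient pair in a cohort."""
    sims = [jaccard(patients[i], patients[j])
            for i in range(len(patients))
            for j in range(i + 1, len(patients))]
    return sum(sims) / len(sims)

cohort = [{"I63.9", "E11.9", "I10"},   # hypothetical ICD-10 comorbidity profiles
          {"I63.9", "I10"},
          {"I63.9", "E78.5", "I10"}]
avg_sim = mean_pairwise_similarity(cohort)
```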

    Imputation of missing values for electronic health record laboratory data

    Laboratory data from Electronic Health Records (EHR) are often used in prediction models, where estimation bias and degraded model performance due to missingness can be mitigated with imputation methods. We demonstrate the utility of imputation in two real-world EHR-derived cohorts—ischemic stroke from Geisinger and heart failure from Sutter Health—to: (1) characterize the patterns of missingness in laboratory variables; (2) simulate two missingness mechanisms, arbitrary and monotone; (3) compare cross-sectional and multi-level multivariate imputation algorithms applied to laboratory data; and (4) assess whether incorporating latent information derived from comorbidity data can improve the performance of the algorithms. The latter was based on a case study of hemoglobin A1c under a univariate missing-data imputation framework. Overall, the pattern of missingness in EHR laboratory variables was not at random and was highly associated with patients’ comorbidity data, and the multi-level imputation algorithm showed a smaller imputation error than the cross-sectional method.
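The two simulated missingness mechanisms named above can be sketched as masking operations on a complete lab matrix: an arbitrary pattern blanks random entries, while a monotone pattern blanks a suffix of each affected row (once a variable is missing, all later variables are too). Fractions and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_arbitrary(X, frac):
    """Arbitrary pattern: set a random subset of entries to NaN."""
    X = X.astype(float).copy()
    X[rng.random(X.shape) < frac] = np.nan
    return X

def mask_monotone(X, frac):
    """Monotone pattern: for some rows, blank all variables from a
    random position onward (a dropout-style suffix of the row)."""
    X = X.astype(float).copy()
    n, p = X.shape
    for i in range(n):
        if rng.random() < frac:
            start = rng.integers(1, p)   # keep at least the first variable
            X[i, start:] = np.nan
    return X

X = rng.normal(size=(100, 5))   # 100 patients x 5 hypothetical lab variables
Xa = mask_arbitrary(X, 0.2)
Xm = mask_monotone(X, 0.5)
```

Imputation methods can then be benchmarked against the known masked values, which is how the cross-sectional and multi-level algorithms are compared.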

    Prediction of Long-Term Stroke Recurrence Using Machine Learning Models

    Background: The long-term risk of recurrent ischemic stroke, estimated at between 17% and 30%, cannot be reliably assessed at the individual level. Our goal was to study whether machine-learning models can be trained to predict stroke recurrence, to identify key clinical variables, and to assess whether performance metrics can be optimized. Methods: We used patient-level data from electronic health records, six interpretable algorithms (Logistic Regression, Extreme Gradient Boosting, Gradient Boosting Machine, Random Forest, Support Vector Machine, Decision Tree), four feature selection strategies, five prediction windows, and two sampling strategies to develop 288 models for up to 5-year stroke recurrence prediction. We further identified important clinical features and different optimization strategies. Results: We included 2091 ischemic stroke patients. The model area under the receiver operating characteristic curve (AUROC) was stable for prediction windows of 1, 2, 3, 4, and 5 years, with the highest score for the 1-year (0.79) and the lowest for the 5-year prediction window (0.69). A total of 21 (7%) models reached an AUROC above 0.73, while 110 (38%) models reached an AUROC greater than 0.7. Among the 53 features analyzed, age, body mass index, and laboratory-based features (such as high-density lipoprotein, hemoglobin A1c, and creatinine) had the highest overall importance scores. The balance between specificity and sensitivity improved through sampling strategies. Conclusion: All six selected algorithms could be trained to predict long-term stroke recurrence, and laboratory-based variables were highly associated with stroke recurrence; the latter could be targeted for personalized interventions. Model performance metrics could be optimized, and the models can be implemented in the same healthcare system as intelligent decision support for targeted intervention.
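The 288 models above are compared by AUROC. A minimal stdlib implementation via the pairwise-comparison (Mann-Whitney) formulation, with invented scores for illustration:

```python
def auroc(scores, labels):
    """AUROC as the probability that a random positive outscores a
    random negative, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predicted recurrence risks and observed recurrence labels.
scores = [0.9, 0.8, 0.7, 0.2]
labels = [1, 0, 1, 0]
auc = auroc(scores, labels)
```

An AUROC of 0.5 is chance level; the paper's best window (1-year, 0.79) sits well above it, while the 5-year window (0.69) shows the expected decay over longer horizons.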

    Increasing the density of laboratory measures for machine learning applications

    Background. The imputation of missing values is a key step in Electronic Health Record (EHR) mining, as it can significantly affect the conclusions derived from downstream analysis in translational medicine. The missingness of laboratory values in EHR is not at random, yet imputation techniques tend to disregard this key distinction. Consequently, the development of an adaptive imputation strategy designed specifically for EHR is an important step toward improving the data imbalance and enhancing the predictive power of modeling tools for healthcare applications. Method. We analyzed the laboratory measures derived from Geisinger’s EHR on patients in three distinct cohorts—patients tested for Clostridioides difficile (Cdiff) infection, patients with a diagnosis of inflammatory bowel disease (IBD), and patients with a diagnosis of hip or knee osteoarthritis (OA). We extracted Logical Observation Identifiers Names and Codes (LOINC) codes, excluding those with 75% or more missingness. Comorbidities, primary or secondary diagnoses, and active problem lists were also extracted. The adaptive imputation strategy was designed as a hybrid approach: the comorbidity patterns of patients were transformed into latent patterns and then clustered, and imputation was performed on each cluster of patients for each cohort independently to show the generalizability of the method. The results were compared with imputation applied to the complete dataset without incorporating the information from comorbidity patterns. Results. We analyzed a total of 67,445 patients (11,230 IBD patients, 10,000 OA patients, and 46,215 patients tested for C. difficile infection). We extracted 495 LOINC codes, plus 11,230 diagnosis codes for the IBD cohort, 8160 diagnosis codes for the Cdiff cohort, and 2042 diagnosis codes for the OA cohort based on the primary/secondary diagnoses and active problem lists in the EHR.
Overall, the greatest improvement from this strategy was observed when the laboratory measures had a higher level of missingness. The best root mean square error (RMSE) difference for each dataset was −35.5 for the Cdiff, −8.3 for the IBD, and −11.3 for the OA dataset. Conclusions. An adaptive imputation strategy designed specifically for EHR, using complementary information from the patient's clinical profile, can improve the imputation of missing laboratory values, especially when laboratory codes with high levels of missingness are included in the analysis.
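The core idea of the adaptive strategy, imputing within comorbidity-derived patient clusters rather than globally, can be sketched with a simple group-mean imputer. The two-cluster setup, lab values, and random mask below are illustrative; the paper derives clusters from latent comorbidity patterns and uses more sophisticated imputers:

```python
import numpy as np

def impute_global_mean(x, mask):
    """Fill masked entries with the overall mean of observed values."""
    out = x.copy()
    out[mask] = x[~mask].mean()
    return out

def impute_cluster_mean(x, mask, clusters):
    """Fill masked entries with the observed mean of the patient's
    (comorbidity-derived) cluster."""
    out = x.copy()
    for c in np.unique(clusters):
        in_c = clusters == c
        out[in_c & mask] = x[in_c & ~mask].mean()
    return out

def rmse(truth, imputed, mask):
    return float(np.sqrt(np.mean((truth[mask] - imputed[mask]) ** 2)))

rng = np.random.default_rng(1)
clusters = np.repeat([0, 1], 200)            # two latent comorbidity groups
truth = np.where(clusters == 0, 5.0, 9.0) + rng.normal(0, 0.5, 400)  # lab-like values
mask = rng.random(400) < 0.3                 # 30% of values hidden
err_global = rmse(truth, impute_global_mean(truth, mask), mask)
err_cluster = rmse(truth, impute_cluster_mean(truth, mask, clusters), mask)
```

When the lab value genuinely differs between comorbidity groups, the cluster-aware imputer lands near the right group mean while the global imputer splits the difference, which mirrors the RMSE gains reported above.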

    Artificial Intelligence: A Shifting Paradigm in Cardio-Cerebrovascular Medicine

    The future of healthcare is an organic blend of technology, innovation, and human connection. As artificial intelligence (AI) gradually becomes a go-to technology in healthcare for improving efficiency and outcomes, we must understand its limitations. The goal is not only to provide faster and more efficient care, but also to deliver an integrated solution that ensures care is fair and not biased toward any subgroup of the population. In this context, the field of cardio-cerebrovascular diseases, which encompasses a wide range of conditions—from heart failure to stroke—has made advances in providing assistive tools to care providers. This article provides a thematic review of recent developments in AI applications across cardio-cerebrovascular diseases to identify gaps and potential areas of improvement. If well designed, these technological engines have the potential to improve healthcare access and equity while reducing overall costs, diagnostic errors, and disparities in a system that affects patients and providers and strives for efficiency.