
    Weight of risk factors for mortality and short-term mortality displacement during the COVID-19 pandemic.

    Background: We conducted a population-based cohort study to estimate mortality before, during and after the COVID-19 peak and to compare mortality in 2020 with rates reported in previous years, with a view to helping decision makers apply containment measures for high-risk groups. Methods: All deaths between 2015 and 2020 were collected from the municipal registry database. In 2020, weeks 1-26 were stratified into three periods: before, during and after the COVID mortality peak. A Poisson generalized linear regression model was used to demonstrate the “harvesting effect”. Three logistic regressions on 8 dependent variables (age and comorbidities) and a t-test of differences described all-cause mortality risk factors in 2019 and 2020 and the differences between COVID and non-COVID patients. Results: A total of 47,876 deaths were collected. All-cause deaths increased by 38.5% during the COVID peak and decreased by 18% during the post-peak period in comparison with the average registered during the control period (2015-19), with significant mortality displacement in 2020. Except for chronic renal injuries in subjects aged 45-64 years, diabetes and chronic cardiovascular diseases in those aged 65-84 years, and neuropathies in those aged >84 years, the weight of comorbidities in deaths was similar or lower in COVID subjects than in non-COVID subjects. Discussion: Comparing the weight of comorbidities in COVID deaths with that in non-COVID deaths highlights some unexpected findings, notably for COPD, IBD and cancer. The excess mortality that we observed over the entire period was modest in comparison with initial estimates during the peak, owing to the mild influenza season and the harvesting effect starting from the second half of May.
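    As a rough illustration of the excess-mortality comparison described in this abstract, the sketch below fits a Poisson GLM to weekly death counts from a 2015-19 control period and compares 2020 observations against the predicted baseline. The file name, column layout, and seasonal term are illustrative assumptions, not the study's actual data or model specification.

```python
# Minimal sketch: weekly excess mortality via a Poisson GLM fitted to a
# 2015-19 control period. Column names and the data layout are assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# deaths.csv is assumed to hold one row per ISO week: year, week, deaths
df = pd.read_csv("deaths.csv")

baseline = df[df["year"].between(2015, 2019)]

# Expected weekly deaths from the control period, with week of year as a
# categorical seasonal term (kept simple for the sketch).
model = smf.glm("deaths ~ C(week)", data=baseline,
                family=sm.families.Poisson()).fit()

obs_2020 = df[df["year"] == 2020].copy()
obs_2020["expected"] = model.predict(obs_2020)
obs_2020["excess_pct"] = 100 * (obs_2020["deaths"] - obs_2020["expected"]) / obs_2020["expected"]

print(obs_2020[["week", "deaths", "expected", "excess_pct"]])
```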

    Facilitating and Enhancing Biomedical Knowledge Translation: An in Silico Approach to Patient-centered Pharmacogenomic Outcomes Research

    Current research paradigms such as traditional randomized controlled trials mostly rely on relatively narrow efficacy data, which results in high internal validity and low external validity. Given this limitation and the need to address many complex real-world healthcare questions in short periods of time, alternative research designs and approaches should be considered in translational research. In silico modeling studies, along with longitudinal observational studies, are considered appropriate and feasible means to address the slow pace of translational research. Accordingly, there is a need for an approach that evaluates newly discovered genetic tests via an in silico enhanced translational research model (iS-TR) in order to conduct patient-centered outcomes research and comparative effectiveness research (PCOR CER) studies. In this dissertation, it was hypothesized that retrospective EMR analysis and subsequent mathematical modeling and simulation prediction could facilitate and accelerate the process of generating and translating pharmacogenomic knowledge on the comparative effectiveness of anticoagulation treatment plans tailored to well-defined target populations, eventually decreasing overall adverse risk and improving individual and population outcomes. To test this hypothesis, a simulation modeling framework (iS-TR) was proposed that takes advantage of longitudinal electronic medical records (EMRs) to provide an effective approach for translating pharmacogenomic anticoagulation knowledge and conducting PCOR CER studies. The accuracy of the model was demonstrated by reproducing the outcomes of two major randomized clinical trials for individualizing warfarin dosing. A substantial hospital healthcare use case that demonstrates the value of iS-TR when addressing real-world anticoagulation PCOR CER challenges was also presented.

    Using the Literature to Identify Confounders

    Prior work in causal modeling has focused primarily on learning graph structures and parameters to model data generating processes from observational or experimental data, while the focus of the literature-based discovery paradigm was to identify novel therapeutic hypotheses in publicly available knowledge. The critical contribution of this dissertation is to refashion the literature-based discovery paradigm as a means to populate causal models with relevant covariates to abet causal inference. In particular, this dissertation describes a generalizable framework for mapping from causal propositions in the literature to subgraphs populated by instantiated variables that reflect observational data. The observational data are those derived from electronic health records. The purpose of causal inference is to detect adverse drug event signals. The Principle of the Common Cause is exploited as a heuristic for a defeasible practical logic. The fundamental intuition is that improbable co-occurrences can be “explained away” with reference to a common cause, or confounder. Semantic constraints in literature-based discovery can be leveraged to identify such covariates. Further, the asymmetric semantic constraints of causal propositions map directly to the topology of causal graphs as directed edges. The hypothesis is that causal models conditioned on sets of such covariates will improve upon the performance of purely statistical techniques for detecting adverse drug event signals. These results improve upon previous work in purely EHR-based pharmacovigilance and establish the utility of this scalable approach to automated causal inference.
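    A minimal sketch of the common-cause heuristic outlined above: literature-derived causal predications are searched for variables asserted to cause both a drug exposure and an adverse event, and those candidates are added to a causal graph as confounders via directed edges. The predication triples and variable names below are hypothetical placeholders; a real system would draw predications from a resource such as SemMedDB.

```python
# Sketch of using literature-derived CAUSES predications to propose
# confounders for a drug/adverse-event pair (Principle of the Common Cause).
# The predication list and variable names are hypothetical.
import networkx as nx

predications = [
    ("obesity", "CAUSES", "drug_X_exposure"),
    ("obesity", "CAUSES", "myocardial_infarction"),
    ("smoking", "CAUSES", "myocardial_infarction"),
]

def candidate_confounders(drug, event, predications):
    """Variables asserted in the literature to cause both the exposure and the event."""
    causes_drug = {s for s, p, o in predications if p == "CAUSES" and o == drug}
    causes_event = {s for s, p, o in predications if p == "CAUSES" and o == event}
    return causes_drug & causes_event

drug, event = "drug_X_exposure", "myocardial_infarction"
g = nx.DiGraph()
g.add_edge(drug, event)                 # the association under scrutiny
for c in candidate_confounders(drug, event, predications):
    g.add_edge(c, drug)                 # asymmetric predications map to directed edges
    g.add_edge(c, event)

print(sorted(g.edges()))
```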

    Decision Support Systems

    Decision support systems (DSS) have evolved over the past four decades from theoretical concepts into real-world computerized applications. DSS architecture contains three key components: a knowledge base, a computerized model, and a user interface. DSS simulate the cognitive decision-making functions of humans based on artificial intelligence methodologies (including expert systems, data mining, machine learning, connectionism, logical reasoning, etc.) in order to perform decision support functions. The applications of DSS cover many domains, ranging from aviation monitoring, transportation safety, clinical diagnosis, weather forecasting, and business management to internet search strategy. By combining knowledge bases with inference rules, DSS are able to provide suggestions to end users to improve decisions and outcomes. This book is written as a textbook so that it can be used in formal courses examining decision support systems. It may be used by both undergraduate and graduate students from diverse computer-related fields. It will also be of value to established professionals as a text for self-study or for reference.

    J Biomed Inform

    We followed a systematic approach based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses to identify existing clinical natural language processing (NLP) systems that generate structured information from unstructured free text. Seven literature databases were searched with a query combining the concepts of natural language processing and structured data capture. Two reviewers screened all records for relevance during two screening phases, and information about clinical NLP systems was collected from the final set of papers. A total of 7149 records (after removing duplicates) were retrieved and screened, and 86 were determined to fit the review criteria. These papers contained information about 71 different clinical NLP systems, which were then analyzed. The NLP systems address a wide variety of important clinical and research tasks. Certain tasks are well addressed by the existing systems, while others remain as open challenges that only a small number of systems attempt, such as extraction of temporal information or normalization of concepts to standard terminologies. This review has identified many NLP systems capable of processing clinical free text and generating structured output, and the information collected and evaluated here will be important for prioritizing development of new approaches for clinical NLP.

    Applying Data Warehousing to a Phase III Clinical Trial From the Fondazione Italiana Linfomi Ensures Superior Data Quality and Improved Assessment of Clinical Outcomes

    Data collection in clinical trials is becoming complex, with a huge number of variables that need to be recorded, verified, and analyzed to effectively measure clinical outcomes. In this study, we used data warehouse (DW) concepts to achieve this goal. A DW was developed to accommodate data from a large clinical trial, including all the characteristics collected. We present the results related to baseline variables with the following objectives: developing a data quality (DQ) control strategy and improving outcome analysis according to the clinical trial primary end points
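    As an illustration of what a baseline data-quality control strategy might automate on top of such a warehouse, the sketch below runs a few rule-based checks over a baseline-variables extract. The file name, column names, value ranges, and rules are assumptions for the example, not the trial's actual DQ rules.

```python
# Minimal sketch of rule-based data-quality checks over trial baseline
# variables. Column names and thresholds are illustrative assumptions.
import pandas as pd

baseline = pd.read_csv("baseline_variables.csv")

checks = {
    "missing_date_of_birth": baseline["date_of_birth"].isna(),
    "age_out_of_range": ~baseline["age"].between(18, 100),
    "stage_not_coded": ~baseline["ann_arbor_stage"].isin(["I", "II", "III", "IV"]),
}

# One row per check, counting how many records violate it.
report = pd.DataFrame({name: mask.sum() for name, mask in checks.items()},
                      index=["violations"]).T
print(report)
```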

    An Evaluation of the Use of a Clinical Research Data Warehouse and I2b2 Infrastructure to Facilitate Replication of Research

    Replication of clinical research is requisite for forming effective clinical decisions and guidelines. While rerunning a clinical trial may be unethical and prohibitively expensive, the adoption of EHRs and the infrastructure for distributed research networks provide access to clinical data for observational and retrospective studies. Herein I demonstrate a means of using these tools to validate existing results and extend the findings to novel populations. I describe the process of evaluating published risk models as well as local data and infrastructure to assess the replicability of the study. I use an example of a risk model that could not be replicated, as well as a study of in-hospital mortality risk that I replicated using UNMC’s clinical research data warehouse. In these examples and other studies we have participated in, some elements are commonly missing or under-developed. One such missing element is a consistent and computable phenotype for pregnancy status based on data recorded in the EHR. I survey local clinical data and identify a number of variables correlated with pregnancy, as well as demonstrate the data required to identify the temporal bounds of a pregnancy episode. Another common obstacle to replicating risk models is the necessity of linking to alternative data sources while maintaining data in a de-identified database. I demonstrate a pipeline for linking clinical data to socioeconomic variables and indices obtained from the American Community Survey (ACS). While these data are location-based, I provide a method for storing them in a HIPAA-compliant fashion so as not to identify a patient’s location. While full and efficient replication of all clinical studies is still a future goal, the replication demonstrated here, together with the initial development of a computable phenotype for pregnancy and the incorporation of location-based data into a de-identified data warehouse, shows how EHR data and a research infrastructure may be used to facilitate this effort.
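    A minimal sketch of the linkage pattern described above, assuming patient records carry a census-tract code only at linkage time: ACS-derived area-level indices are joined on the tract, and the tract itself is dropped before the result is stored, so the de-identified warehouse holds the index values but not the location. All file and column names are illustrative assumptions.

```python
# Sketch: join patient records to ACS-derived area indices via census tract,
# then drop the geographic identifier before storage. Names are assumptions.
import pandas as pd

patients = pd.read_csv("deidentified_patients.csv")   # patient_id, census_tract, ...
acs = pd.read_csv("acs_tract_indices.csv")            # census_tract, deprivation_index, median_income

linked = patients.merge(acs, on="census_tract", how="left")

# Remove the location identifier so only area-level indices are retained.
linked = linked.drop(columns=["census_tract"])
linked.to_csv("patients_with_acs_indices.csv", index=False)
```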

    Preface


    The integration of supply chain value stream mapping and discrete event simulation for lead time reduction of warehouse operations in a pharmaceutical organization

    The supply chain lead-time build-up that occurs due to inventory handling inside the warehouse, comprising waiting time, queuing time, and unwanted delays, creates difficulties in meeting demand shocks and third-party stakeholder requirements. These problems consistently prevail and tend to evolve no matter how sophisticated the production planning is. A pharmaceutical warehouse supply chain inventory is therefore a real challenge to study or map in search of further improvements. In this research, a Malaysian pharmaceutical company’s warehouse was considered for the study. After detailed fieldwork and inferences from gaps in the literature, a case study approach was considered the best methodology for this study. It was applied to find effective ways to introduce lean integrated simulation modelling using Supply Chain Value Stream Mapping (SCVSM) and Discrete Event Simulation (DES) to capture, record, analyze, and reduce inventory waiting time, delays, queues and other wastes for a selected product family. After several lean suggestions were incorporated in the future-state SCVSM, the results of this study show a considerable improvement in warehouse lead time. The production lead time and total process time decreased by 51.43% and 44.41%, respectively. The total value-added time increased by 29.21% and the non-value-added time decreased by 31.86%. In the second segment, there was a 20.22% increase in value-added time and a 23.17% decrease in non-value-added time. DES models were then developed to replicate the entire operation for present- and future-state simulation, along with suggestions for improvements. This study has strong managerial and practical implications that can support better decision making through a deep understanding of the supply chain activities that occur as discrete events inside a warehouse.
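    As a toy illustration of the DES side of this approach, the sketch below (using the simpy library) simulates pallets queuing for a single picking station over one shift and separates waiting (non-value-added) time from processing time. The arrival and service-time distributions are assumptions for the example, not the study's measured data.

```python
# Toy discrete event simulation of a warehouse picking station.
# Arrival and service times are illustrative assumptions.
import random
import simpy

WAIT, PROCESS = [], []

def pallet(env, station):
    arrived = env.now
    with station.request() as req:
        yield req                               # queue for the picking station
        WAIT.append(env.now - arrived)          # non-value-added waiting time
        service = random.expovariate(1 / 8.0)   # mean 8 min picking time (assumed)
        yield env.timeout(service)
        PROCESS.append(service)

def arrivals(env, station):
    while True:
        yield env.timeout(random.expovariate(1 / 10.0))  # mean 10 min between pallets (assumed)
        env.process(pallet(env, station))

env = simpy.Environment()
station = simpy.Resource(env, capacity=1)
env.process(arrivals(env, station))
env.run(until=8 * 60)  # one 8-hour shift, in minutes

print(f"pallets handled: {len(PROCESS)}")
print(f"mean wait: {sum(WAIT) / len(WAIT):.1f} min, "
      f"mean process: {sum(PROCESS) / len(PROCESS):.1f} min")
```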

    From Mouse Models to Patients: A Comparative Bioinformatic Analysis of HFpEF and HFrEF

    Heart failure (HF) represents an immense health burden with currently no curative therapeutic strategies. Study of HF patient heterogeneity has led to the recognition of HF with preserved (HFpEF) and reduced ejection fraction (HFrEF) as distinct syndromes regarding molecular characteristics and clinical presentation. Until the recent past, HFrEF represented the focus of research, reflected in the development of a number of therapeutic strategies. However, the pathophysiological concepts applicable to HFrEF may not necessarily apply to HFpEF. HF induces a series of ventricular remodeling processes that involve, among others, hallmarks of hypertrophy, fibrosis, and inflammation, all of which can be observed to some extent in HFpEF and HFrEF. Thus, by direct comparative analysis between HFpEF and HFrEF, distinctive features can be uncovered, possibly leading to improved pathophysiological understanding and opportunities for therapeutic intervention. Moreover, recent advances in biotechnologies, animal models, and digital infrastructure have enabled large-scale collection of molecular and clinical data, making it possible to conduct a bioinformatic comparative analysis of HFpEF and HFrEF. Here, I first evaluated the field of HF transcriptome research by revisiting published studies and data sets to provide a consensus gene expression reference. I discussed the patient population that was captured, revealing that HFpEF patients were not represented. Thus, I applied alternative approaches to study HFpEF. I utilized a mouse surrogate model of HFpEF and analyzed single-cell transcriptomics to gain insights into interstitial tissue remodeling. I contrasted this analysis with a comparison of fibroblast activation patterns found in mouse models resembling HFrEF. The human reference was used to further demonstrate similarities between models and patients, and a novel possible biomarker for HFpEF was introduced. Mouse models only capture selected aspects of HFpEF but largely fail to imitate the complex multi-factor and multi-organ syndrome present in humans. To account for this complexity, I performed a top-down analysis in HF patients by analyzing phenome-wide comorbidity patterns. I derived clinical insights by contrasting HFpEF and HFrEF patients and their comorbidity profiles. These profiles were then used to predict associated genetic profiles, which could also be recovered in the HFpEF mouse model, providing hypotheses about the molecular links of comorbidity profiles. My work provided novel insights into the HFpEF and HFrEF syndromes and exemplified an interdisciplinary bioinformatic approach for a comparative analysis of both syndromes using different data modalities.
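    A minimal sketch of the kind of comorbidity contrast described above, assuming a table with one row per HF patient and binary comorbidity flags: each comorbidity's prevalence in HFpEF versus HFrEF is compared with an odds ratio and Fisher's exact test. The input layout and column names are illustrative assumptions, not the actual patient data used in this work.

```python
# Sketch: contrast comorbidity prevalence between HFpEF and HFrEF cohorts.
# Input layout (one row per patient, binary comorbidity columns) is assumed.
import pandas as pd
from scipy.stats import fisher_exact

cohort = pd.read_csv("hf_cohort.csv")   # columns: hf_type ("HFpEF"/"HFrEF"), obesity, ckd, copd, ...
comorbidities = [c for c in cohort.columns if c != "hf_type"]

rows = []
for com in comorbidities:
    # 2x2 table: HFpEF vs HFrEF against presence/absence of the comorbidity.
    table = pd.crosstab(cohort["hf_type"] == "HFpEF", cohort[com] == 1)
    odds_ratio, p_value = fisher_exact(table)
    rows.append({"comorbidity": com, "odds_ratio": odds_ratio, "p": p_value})

print(pd.DataFrame(rows).sort_values("p"))
```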