    Hypoxia induces dilated cardiomyopathy in the chick embryo: mechanism, intervention, and long-term consequences

    Background: Intrauterine growth restriction is associated with an increased future risk of developing cardiovascular diseases. Hypoxia in utero is a common clinical cause of fetal growth restriction. We have previously shown that chronic hypoxia alters cardiovascular development in chick embryos. The aim of this study was to further characterize cardiac disease in hypoxic chick embryos. Methods: Chick embryos were exposed to hypoxia and cardiac structure was examined by histological methods one day prior to hatching (E20) and at adulthood. Cardiac function was assessed in vivo by echocardiography and ex vivo by contractility measurements in isolated heart muscle bundles and isolated cardiomyocytes. Chick embryos were exposed to vascular endothelial growth factor (VEGF) and its scavenger, soluble VEGF receptor-1 (sFlt-1), to investigate the potential role of this hypoxia-regulated cytokine. Principal Findings: Growth-restricted hypoxic chick embryos showed cardiomyopathy, as evidenced by left ventricular (LV) dilatation, reduced ventricular wall mass and increased apoptosis. Hypoxic hearts displayed pump dysfunction with decreased LV ejection fractions, accompanied by signs of diastolic dysfunction. The cardiomyopathy caused by hypoxia persisted into adulthood. Hypoxic embryonic hearts showed increased VEGF expression. Systemic administration of rhVEGF165 to normoxic chick embryos resulted in LV dilatation and a dose-dependent loss of LV wall mass. Lowering VEGF levels in hypoxic embryonic chick hearts by systemic administration of sFlt-1 yielded an almost complete normalization of the phenotype. Conclusions/Significance: Our data show that hypoxia causes decreased cardiac performance and cardiomyopathy in chick embryos, with a significant VEGF-mediated component. This cardiomyopathy persists into adulthood.

    Turnaround time prediction for clinical chemistry samples using machine learning

    Objectives: Turnaround time (TAT) is an essential performance indicator of a medical diagnostic laboratory. Accurate TAT prediction is crucial for taking timely action in case of prolonged TAT and is important for the efficient organization of healthcare. The objective was to develop a model to accurately predict TAT, focusing on the automated pre-analytical and analytical phases. Methods: A total of 90,543 clinical chemistry samples from Erasmus MC were included and 39 features were analyzed, including priority level and the workload in the different stages upon sample arrival. PyCaret was used to evaluate and compare multiple regression models, including the Extra Trees (ET) Regressor, Ridge Regression and the K Neighbors Regressor, to determine the best model for TAT prediction. Relative residuals and SHAP (SHapley Additive exPlanations) values were plotted for model evaluation. Results: The regression-tree-based ET Regressor performed best, with an R2 of 0.63, a mean absolute error of 2.42 min and a mean absolute percentage error of 7.35%, against an average TAT of 30.09 min. Of the test set samples, 77% had a relative residual error of at most 10%. SHAP value analysis indicated that TAT was mainly influenced by the workload in pre-analysis upon sample arrival and the number of modules visited. Conclusions: Accurate TAT predictions were attained with the ET Regressor and the features with the biggest impact on TAT were identified, enabling the laboratory to take timely action in case of prolonged TAT and helping healthcare providers improve the planning of scarce resources to increase healthcare efficiency.
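
    As an illustration of the model-comparison workflow described above, the sketch below sets up a PyCaret regression experiment and compares the three named regressors. It is a minimal sketch only: the input file, the target column name and the session seed are hypothetical stand-ins, not details taken from the study.

        # Minimal sketch of a PyCaret model comparison for TAT prediction.
        # 'samples.csv' and the 'tat_minutes' target column are hypothetical;
        # the study itself used 39 features from 90,543 Erasmus MC samples.
        import pandas as pd
        from pycaret.regression import setup, compare_models, predict_model, interpret_model

        df = pd.read_csv("samples.csv")  # one row per sample, target = TAT in minutes

        # Initialize the experiment; PyCaret handles the train/test split
        # and basic preprocessing internally.
        setup(data=df, target="tat_minutes", session_id=42)

        # Compare candidate regressors: 'et' = Extra Trees, 'knn' = K Neighbors.
        best = compare_models(include=["et", "ridge", "knn"], sort="MAE")

        # Hold-out predictions and SHAP-based feature importance (tree models only).
        predict_model(best)
        interpret_model(best, plot="summary")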

    Design of fork-join networks of First-In-First-Out and infinite-server queues applied to clinical chemistry laboratories

    This paper considers the optimal design of queueing networks in which each node consists of a single-server FIFO queue and an infinite-server queue, referred to as the incubation queue. Upon service completion at a FIFO queue, a job splits (forks) into two parts: the first part is routed to the next node on its route, and the second part is placed in the incubation queue. Routing of jobs of multiple types is governed by a central decision maker that decides on the route for each job type and aims to minimize the mean turnaround time of the jobs, i.e., the time spent in the system until service completion at the FIFO queue in the last node and at all incubation queues on the job's route, which may be viewed as a join operation. We provide explicit results for the turnaround time when all service and inter-arrival time distributions are exponential, and invoke the Queueing Network Analyzer when these distributions are general. We then develop a Simulated Annealing approach to find the optimal routing configuration, and apply it to a chemistry analyzer line.
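
    The paper's explicit exponential-case expressions are not reproduced in the abstract, but the turnaround-time definition itself is easy to illustrate. The sketch below is a small Monte Carlo simulation of one fixed tandem route under exponential assumptions; the arrival, service and incubation rates are hypothetical illustrations, not parameters from the paper.

        # Monte Carlo sketch of the turnaround time (TAT) on a tandem route of
        # nodes, each holding a single-server FIFO queue plus an infinite-server
        # incubation queue. All rates below are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        n_jobs, lam = 100_000, 1.0       # Poisson arrivals at rate lambda
        mu = np.array([1.5, 1.8, 1.6])   # FIFO service rates, one per node
        nu = np.array([0.5, 0.4, 0.6])   # incubation rates (infinite-server)

        arrivals = np.cumsum(rng.exponential(1 / lam, n_jobs))
        done = arrivals.copy()           # departure time from the previous node
        incub_done = np.zeros(n_jobs)    # latest incubation completion so far

        for j in range(len(mu)):
            service = rng.exponential(1 / mu[j], n_jobs)
            for n in range(n_jobs):
                # Lindley-style recursion: service starts once both the job has
                # arrived at this node and the predecessor job has departed.
                start = max(done[n], done[n - 1] if n > 0 else 0.0)
                done[n] = start + service[n]
            # The forked part enters the infinite-server incubation queue at once.
            incub_done = np.maximum(incub_done, done + rng.exponential(1 / nu[j], n_jobs))

        # Join: a job's TAT ends when its last FIFO service AND all of its
        # incubation parts are complete.
        tat = np.maximum(done, incub_done) - arrivals
        print(f"estimated mean turnaround time: {tat.mean():.2f}")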

    A critical review of laboratory performance indicators

    Healthcare budgets worldwide are under constant pressure to reduce costs while improving efficiency and quality. This phenomenon is also visible in clinical laboratories. Efficiency gains can be achieved by reducing the error rate and by improving the laboratory’s layout and logistics. Performance indicators (PIs) play a crucial role in this process, as they allow for performance assessment. This review aids the non-trivial process of selecting laboratory PIs by providing an overview of frequently used PIs in the literature that can also be used in clinical laboratories. We conducted a systematic review of the laboratory medicine literature on PIs. As the testing process in clinical laboratories can be viewed as a production process, we also reviewed the production processes literature on PIs. The reviewed literature relates to the design, optimization or performance assessment of such processes. The most frequently cited PIs relate to pre-analytical errors, timeliness, resource utilization, cost, and the amount of congestion; citation frequency in the literature is used as a proxy for importance. PIs are discussed in terms of their definition, measurability and impact. The use of suitable PIs is crucial in production processes, including clinical laboratories. By also reviewing the production processes literature, additional PIs relevant for clinical laboratories were found. The PIs in the laboratory medicine literature mostly relate to laboratory errors, while the PIs in the production processes literature relate to the amount of congestion in the process.

    Failure Mode and Effects Analysis (FMEA) at the preanalytical phase for POCT blood gas analysis: proposal for a shared proactive risk analysis model

    Objectives: To propose a risk analysis model that diminishes the negative impact of preanalytical errors in blood gas analysis (BGA) on patient care. Methods: We designed a Failure Mode and Effects Analysis (FMEA) risk assessment template for BGA, based on literature references and the expertise of an international team of laboratory and clinical health care professionals. Results: The FMEA identifies pre-analytical process steps, errors that may occur whilst performing BGA (potential failure modes), possible consequences (potential failure effects) and preventive/corrective actions (current controls). The probability of failure occurrence (OCC), the severity of failure (SEV) and the probability of failure detection (DET) are scored per potential failure mode. OCC and DET depend on the test setting and patient population; e.g., they differ in primary community health centres as compared to secondary community hospitals and third-line university or specialized hospitals. OCC and DET also differ between stand-alone and networked instruments, between manual and automated patient identification, and depending on whether results are automatically transmitted to the patient’s electronic health record. The risk priority number (RPN = SEV x OCC x DET) can be applied to determine the sequence in which risks are addressed. The RPN can be recalculated after implementing changes to decrease OCC and/or increase DET. Key performance indicators are also proposed to evaluate the changes. Conclusions: This FMEA model will help health care professionals manage and minimize the risk of preanalytical errors in BGA.
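
    To make the prioritization step concrete, the sketch below computes RPNs and ranks failure modes in decreasing order, as the abstract describes. The failure modes and scores are hypothetical illustrations, not values from the proposed template.

        # Sketch of RPN-based prioritization: RPN = SEV x OCC x DET.
        # The failure modes and scores below are hypothetical illustrations.
        from dataclasses import dataclass

        @dataclass
        class FailureMode:
            name: str
            sev: int  # severity of failure
            occ: int  # probability of failure occurrence
            det: int  # detection score

            @property
            def rpn(self) -> int:
                return self.sev * self.occ * self.det

        modes = [
            FailureMode("patient misidentification", sev=9, occ=3, det=5),
            FailureMode("air bubble in the syringe", sev=6, occ=5, det=4),
            FailureMode("delayed sample analysis", sev=5, occ=4, det=3),
        ]

        # Address risks in decreasing order of RPN; after an intervention,
        # rescore OCC/DET and recalculate to verify the risk was reduced.
        for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
            print(f"{fm.name}: RPN = {fm.rpn}")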

    Development and validation of an early warning model for hospitalized COVID-19 patients: a multi-center retrospective cohort study

    Background: Timely identification of deteriorating COVID-19 patients is needed to guide changes in clinical management and admission to intensive care units (ICUs). There is significant concern that widely used early warning scores (EWSs) underestimate illness severity in COVID-19 patients; we therefore developed an early warning model specifically for COVID-19 patients. Methods: We retrospectively collected electronic medical record data to extract predictors and used these to fit a random forest model. To simulate the situation in which the model would have been developed after the first and implemented during the second COVID-19 ‘wave’ in the Netherlands, we performed a temporal validation by splitting all included patients into groups admitted before and after August 1, 2020. Furthermore, we propose a method for dynamic model updating to retain model performance over time. We evaluated model discrimination and calibration, performed a decision curve analysis, and quantified the importance of predictors using SHapley Additive exPlanations (SHAP) values. Results: We included 3514 COVID-19 patient admissions from six Dutch hospitals between February 2020 and May 2021, and included a total of 18 predictors for model fitting. The model showed higher discriminative performance in terms of partial area under the receiver operating characteristic curve (0.82 [0.80–0.84]) than the National Early Warning Score (0.72 [0.69–0.74]) and the Modified Early Warning Score (0.67 [0.65–0.69]), a greater net benefit over a range of clinically relevant model thresholds, and relatively good calibration (intercept = 0.03 [−0.09 to 0.14], slope = 0.79 [0.73–0.86]). Conclusions: This study shows the potential benefit of moving from early warning models for the general inpatient population to models for specific patient groups. Further (independent) validation of the model is needed.
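
    A minimal sketch of the temporal validation described above, assuming a scikit-learn workflow: train a random forest on admissions before August 1, 2020, test on later admissions, and score the partial AUC. The data file, column names and the false-positive-rate cutoff are hypothetical stand-ins; the abstract does not state which FPR range was used.

        # Temporal validation sketch: first-wave admissions train the model,
        # second-wave admissions test it. Columns are hypothetical placeholders.
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score

        df = pd.read_csv("admissions.csv", parse_dates=["admission_date"])

        split = pd.Timestamp("2020-08-01")
        train, test = df[df.admission_date < split], df[df.admission_date >= split]

        features = [c for c in df.columns if c not in ("admission_date", "deteriorated")]
        model = RandomForestClassifier(n_estimators=500, random_state=0)
        model.fit(train[features], train["deteriorated"])

        proba = model.predict_proba(test[features])[:, 1]
        # Partial AUC restricted to a low false-positive region (cutoff illustrative).
        pauc = roc_auc_score(test["deteriorated"], proba, max_fpr=0.2)
        print(f"partial AUC (FPR <= 0.2): {pauc:.2f}")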

    Diagnostic Accuracy of Portable, Handheld Point-of-Care Tests vs Laboratory-Based Bilirubin Quantification in Neonates: A Systematic Review and Meta-analysis

    IMPORTANCE: Quantification of bilirubin in blood is essential for early diagnosis and timely treatment of neonatal hyperbilirubinemia. Handheld point-of-care (POC) devices may overcome the current issues with conventional laboratory-based bilirubin (LBB) quantification. OBJECTIVE: To systematically evaluate the reported diagnostic accuracy of POC devices compared with LBB quantification. DATA SOURCES: A systematic literature search was conducted in 6 electronic databases (Ovid MEDLINE, Embase, Web of Science Core Collection, Cochrane Central Register of Controlled Trials, CINAHL, and Google Scholar) up to December 5, 2022. STUDY SELECTION: Studies were included in this systematic review and meta-analysis if they had a prospective cohort, retrospective cohort, or cross-sectional design and reported on the comparison between POC device(s) and LBB quantification in neonates aged 0 to 28 days. Point-of-care devices needed the following characteristics: portable, handheld, and able to provide a result within 30 minutes. This study was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-analyses reporting guideline. DATA EXTRACTION AND SYNTHESIS: Data extraction was performed by 2 independent reviewers into a prespecified, customized form. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 tool. A meta-analysis of multiple Bland-Altman studies was performed using the Tipton and Shuster method for the main outcome. MAIN OUTCOMES AND MEASURES: The main outcome was the mean difference and limits of agreement in bilirubin levels between POC device and LBB quantification. Secondary outcomes were (1) turnaround time (TAT), (2) blood volumes, and (3) percentage of failed quantifications. RESULTS: Ten studies met the inclusion criteria (9 cross-sectional studies and 1 prospective cohort study), representing 3122 neonates. Three studies were considered to have a high risk of bias. The Bilistick was evaluated as the index test in 8 studies and the BiliSpec in 2. A total of 3122 paired measurements showed a pooled mean difference in total bilirubin levels of -14 μmol/L, with pooled 95% CBs of -106 to 78 μmol/L. For the Bilistick, the pooled mean difference was -17 μmol/L (95% CBs, -114 to 80 μmol/L). Point-of-care devices were faster in returning results than LBB quantification, while the blood volume needed was smaller. The Bilistick was more likely to have a failed quantification than LBB. CONCLUSIONS AND RELEVANCE: Despite the advantages that handheld POC devices offer, these findings suggest that the imprecision of neonatal bilirubin measurement needs improvement to tailor neonatal jaundice management.
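
    The pooled Tipton and Shuster estimates cannot be reproduced from the abstract alone, but the per-study Bland-Altman quantities being pooled are straightforward. The sketch below computes the mean difference and 95% limits of agreement for one hypothetical study with synthetic paired measurements.

        # Bland-Altman sketch for a single study: mean difference (bias) and
        # 95% limits of agreement between POC and laboratory bilirubin values.
        # The paired measurements are synthetic, for illustration only.
        import numpy as np

        rng = np.random.default_rng(1)
        lab = rng.normal(200.0, 60.0, 300)        # lab-based bilirubin, umol/L
        poc = lab + rng.normal(-14.0, 47.0, 300)  # POC values with bias and spread

        diff = poc - lab
        md = diff.mean()                          # mean difference (bias)
        sd = diff.std(ddof=1)                     # SD of the differences
        lower, upper = md - 1.96 * sd, md + 1.96 * sd

        print(f"mean difference: {md:.1f} umol/L")
        print(f"95% limits of agreement: {lower:.1f} to {upper:.1f} umol/L")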