
    On the mechanisms governing gas penetration into a tokamak plasma during a massive gas injection

    A new 1D radial fluid code, IMAGINE, is used to simulate the penetration of gas into a tokamak plasma during a massive gas injection (MGI). The main result is that the gas is in general strongly braked as it reaches the plasma, due to mechanisms related to charge exchange and (to a smaller extent) recombination. As a result, only a fraction of the gas penetrates into the plasma. Also, a shock wave is created in the gas, which propagates away from the plasma, braking and compressing the incoming gas. Simulation results are quantitatively consistent, at least in terms of orders of magnitude, with experimental data for a D2 MGI into a JET Ohmic plasma. Simulations of MGI into the background plasma surrounding a runaway electron beam show that if the background electron density is too high, the gas may not penetrate, suggesting a possible explanation for the recent results of Reux et al in JET (2015 Nucl. Fusion 55 093013).

    Overview of the JET results in support to ITER


    Big Data in Laboratory Medicine—FAIR Quality for AI?

    Laboratory medicine is a digital science. Every large hospital produces a wealth of data each day, from simple numerical results of, e.g., sodium measurements to the highly complex output of “-omics” analyses, as well as quality control results and metadata. Processing, connecting, storing, and ordering extensive parts of these individual data requires Big Data techniques. Whereas novel technologies such as artificial intelligence and machine learning have exciting applications for augmenting laboratory medicine, the Big Data concept remains fundamental for any sophisticated data analysis in large databases. To make laboratory medicine data optimally usable for clinical and research purposes, they need to be FAIR: findable, accessible, interoperable, and reusable. This can be achieved, for example, by automated recording, connection of devices, efficient ETL (Extract, Transform, Load) processes, careful data governance, and modern data security solutions. Enriched with clinical data, laboratory medicine data allow a gain in pathophysiological insights, can improve patient care, or can be used to develop reference intervals for diagnostic purposes. Nevertheless, Big Data in laboratory medicine do not come without challenges: taking care of the growing number of analyses and the data derived from them is a demanding task. Laboratory medicine experts are and will be needed to drive this development, take an active role in the ongoing digitalization, and provide guidance for their clinical colleagues engaging with laboratory data in research. © 2022 by the authors

    Machine Learning Prediction of Hypoglycemia and Hyperglycemia from Electronic Health Records: Algorithm Development and Validation

    Background: Acute blood glucose (BG) decompensations (hypoglycemia and hyperglycemia) represent a frequent and significant risk for inpatients and adversely affect patient outcomes and safety. The increasing need for BG management in inpatients additionally poses a high demand on clinical staff and health care systems. Objective: This study aimed to generate a broadly applicable multiclass classification model for predicting BG decompensation events from patients’ electronic health records to indicate where adjustments in patient monitoring and therapeutic interventions are required. This should allow proactive measures to be taken before BG levels derail. Methods: A retrospective cohort study was conducted on patients who were hospitalized at a tertiary hospital in Bern, Switzerland. Using patient details and routine data from electronic health records, a multiclass prediction model for BG decompensation events (>10, >13.9, or >16.7 mmol/L [representing different degrees of hyperglycemia]) was generated based on a second-level ensemble of gradient-boosted binary trees. Results: A total of 63,579 hospital admissions of 38,250 patients were included in this study. The multiclass prediction model reached specificities of 93.7%, 98.9%, and 93.9% and sensitivities of 67.1%, 59%, and 63.6% for the main categories of interest, which were nondecompensated cases, hypoglycemia, and hyperglycemia, respectively. The median prediction horizon was 7 hours for hypoglycemia and 4 hours for hyperglycemia. Conclusions: Electronic health records have the potential to reliably predict all types of BG decompensation. Readily available patient details and routine laboratory data can support the decisions for proactive interventions and thus help to reduce the detrimental health effects of hypoglycemia and hyperglycemia. ©Harald Witte, Christos Nakas, Lia Bally, Alexander Benedikt Leichtle
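The abstract describes a second-level ensemble of gradient-boosted binary trees for multiclass prediction. The following is only a minimal sketch of that idea, using a plain one-vs-rest combination of gradient-boosted classifiers in scikit-learn on synthetic stand-in features; the feature set, ensemble architecture, and data are assumptions, not the authors' pipeline.

```python
# Sketch: one-vs-rest gradient boosting as a stand-in for the paper's
# second-level ensemble of gradient-boosted binary trees (assumption).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for routine EHR features (labs, vitals, demographics).
X = rng.normal(size=(500, 8))
# Classes: 0 = non-decompensated, 1 = hypoglycemia, 2 = hyperglycemia.
y = rng.integers(0, 3, size=500)

model = OneVsRestClassifier(GradientBoostingClassifier(random_state=0))
model.fit(X, y)
probs = model.predict_proba(X)  # one probability column per class
```

In practice the class probabilities would be thresholded per class to trade the reported sensitivities against specificities.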

    Bootstrap-based testing approaches for the assessment of the diagnostic accuracy of biomarkers subject to a limit of detection

    Assessment of the diagnostic accuracy of biomarkers through receiver operating characteristic (ROC) curve analysis frequently involves a limit of detection imposed by the precision of the laboratory analytical system. As a consequence, measurements below a certain level are undetectable, and ignoring these is known to lead to negatively biased estimates of the area under the ROC curve. In this article, we introduce two ROC curve-based parametric approaches that tackle the issue of correct assessment of diagnostic markers in the presence of a limit of detection. The proposed approaches are simulation-based, utilising bootstrap methodology. Non-parametric alternatives that are naively used in the literature do not solve the inherent problem of limit-of-detection values, which are treated as censored observations; nevertheless, the latter seem to perform adequately well in our simulation study. Non-parametric bootstrap was consistently used throughout, while other bootstrap alternatives performed similarly in our pilot simulation study. The simulation study involves the comparison of the parametric and non-parametric options described here versus alternative strategies that are routinely used in the literature. We apply all methods to a study setting resembling a chemical quasi-standard situation, where compound tumour biomarkers were searched for within a multi-variable set of measurements to discriminate between two groups, namely colorectal cancer patients and controls. We focus on the assessment of glutamine and methionine. © The Author(s) 2018
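As a rough illustration of the bootstrap methodology the abstract refers to, the sketch below computes a percentile-bootstrap confidence interval for the AUC of a hypothetical biomarker whose values below a limit of detection (LOD) are substituted by the LOD itself, one naive strategy of the kind the article contrasts with its parametric approaches. All data and the substitution rule are illustrative assumptions, not the article's simulation design.

```python
# Illustrative sketch (hypothetical data): nonparametric bootstrap CI for
# the AUC of a biomarker censored at a limit of detection.
import numpy as np

rng = np.random.default_rng(1)
controls = rng.normal(0.0, 1.0, 200)
cases = rng.normal(1.0, 1.0, 200)
LOD = -0.5
controls = np.maximum(controls, LOD)  # naive substitution below the LOD
cases = np.maximum(cases, LOD)

def auc(neg, pos):
    # Mann-Whitney estimate of the area under the ROC curve; ties count 1/2.
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

boot = np.array([
    auc(rng.choice(controls, controls.size),
        rng.choice(cases, cases.size))
    for _ in range(500)
])
lo, hi = np.percentile(boot, [2.5, 97.5])  # percentile bootstrap CI
```

The tied mass piled up at the LOD is what biases the naive estimate downward relative to the uncensored marker.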

    Effects of Freezing and Thawing Procedures on Selected Clinical Chemistry Parameters in Plasma

    Introduction: Measurements from frozen sample collections are important key indicators in clinical studies. It is a prime concern of biobanks and laboratories to minimize preanalytical bias and variance through standardization. In this study, we aimed to assess the effects of different freezing and thawing conditions on the reproducibility of medical routine parameters from frozen samples. Materials and Methods: In total, 12 pooled samples were generated from leftover lithium-heparinized plasma samples from clinical routine testing. Aliquots of the pools were frozen using three freezing methods (in a carton box at -80°C, flash freezing in liquid nitrogen, and controlled-rate freezing [CRF]) and stored at -80°C. After 3 days, samples were thawed using two methods (30 minutes at room temperature or a water bath at 25°C for 3 minutes). Ten clinical chemistry laboratory parameters were measured before (baseline) and after freeze-thaw treatment: total calcium, potassium, sodium, alanine aminotransferase, lactate dehydrogenase (LDH), lipase, uric acid, albumin, C-reactive protein (CRP), and total protein. We evaluated the influence of the different preanalytical treatments on the test results and compared each condition with nonfrozen baseline measurements. Results: We found no significant differences between freezing methods for any tested parameter. Only LDH was significantly affected by thawing, with fast-rate thawing being closer to baseline than slow-rate thawing. Potassium, LDH, lipase, uric acid, albumin, and CRP values were significantly changed after freezing and thawing compared with unfrozen samples. The least prominent changes compared with unfrozen baseline measurements were obtained when the CRF protocol of the local biobank and fast thawing were applied. However, the observed changes between baseline and frozen samples were smaller than the measurement uncertainty for 9 of the 10 parameters.
Discussion: Changes introduced through freezing and thawing were small and not of clinical importance. A slight, statistically based preference for slow CRF and fast thawing of plasma, whose results were closest to those from unfrozen samples, could be supported. © 2020 Mary Ann Liebert, Inc., publishers

    Longitudinal Study of the Variation in Patient Turnover and Patient-to-Nurse Ratio: Descriptive Analysis of a Swiss University Hospital

    Background: Variations in patient demand increase the challenge of balancing high-quality nursing skill mixes against budgetary constraints. Developing staffing guidelines that allow high-quality care at minimal cost requires first exploring the dynamic changes in nursing workload over the course of a day. Objective: Accordingly, this longitudinal study analyzed nursing care supply and demand in 30-minute increments over a period of 3 years. We assessed 5 care factors: patient count (care demand), nurse count (care supply), the patient-to-nurse ratio for each nurse group, extreme supply-demand mismatches, and patient turnover (ie, number of admissions, discharges, and transfers). Methods: Our retrospective analysis of data from the Inselspital University Hospital Bern, Switzerland included all inpatients and the nurses working in their units from January 1, 2015 to December 31, 2017. Two data sources were used. The nurse staffing system (tacs) provided information about nurses and all the care they provided to patients, their working time, and admission, discharge, and transfer dates and times. The medical discharge data included patient demographics, further admission and discharge details, and diagnoses. These two data sources were linked via several identifiers. Results: Our final dataset included more than 58 million data points for 128,484 patients and 4633 nurses across 70 units. Compared with patient turnover, fluctuations in the number of nurses were less pronounced. The differences mainly coincided with shifts (night, morning, evening). While the percentage of shifts with extreme staffing fluctuations ranged from less than 3% (mornings) to 30% (evenings and nights), the percentage within "normal" ranges varied from less than 50% to more than 80%. Patient turnover occurred throughout the measurement period but was lowest at night.
Conclusions: Based on measurements of patient-to-nurse ratio and patient turnover at 30-minute intervals, our findings indicate that the patient count, which varies considerably throughout the day, is the key driver of changes in the patient-to-nurse ratio. This demand-side variability challenges the supply-side mandate to provide safe and reliable care. Detecting and describing such patterns in variability is key to appropriate staffing planning. This descriptive analysis was a first step towards identifying time-related variables to be considered for a predictive nurse staffing model. © 2020 Journal of Medical Internet Research. All rights reserved
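The 30-minute granularity described above amounts to dividing aligned patient and nurse counts per interval; a minimal sketch with invented toy counts (not study data):

```python
# Sketch (toy data): patient-to-nurse ratio in 30-minute bins, the
# granularity used in the study. Counts are invented for illustration.
import pandas as pd

idx = pd.date_range("2015-01-01", periods=8, freq="30min")
patients = pd.Series([20, 21, 22, 22, 25, 24, 23, 23], index=idx)  # demand
nurses = pd.Series([4, 4, 4, 4, 5, 5, 5, 5], index=idx)            # supply
ratio = patients / nurses  # patient-to-nurse ratio per 30-min interval
```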

    The association between nurse staffing and inpatient mortality: A shift-level retrospective longitudinal study

    Background: Worldwide, hospitals face pressure to reduce costs. Some respond by working with a reduced number of nurses or less qualified nursing staff. Objective: This study aims to examine the relationship between mortality and patient exposure to shifts with low or high nurse staffing. Methods: This longitudinal study used routine shift-, unit-, and patient-level data for three years (2015–2017) from one Swiss university hospital. Data from 55 units, 79,893 adult inpatients and 3646 nurses (2670 registered nurses, 438 licensed practical nurses, and 538 unlicensed and administrative personnel) were analyzed. After developing a staffing model to identify high- and low-staffed shifts, we fitted logistic regression models to explore associations between nurse staffing and mortality. Results: Exposure to shifts with high levels of registered nurses was associated with 8.7% lower odds of mortality [odds ratio 0.91, 95% CI 0.89–0.93]. Conversely, exposure to low-staffed shifts was associated with 10% higher odds of mortality [odds ratio 1.10, 95% CI 1.07–1.13]. The associations between mortality and staffing by other groups were less clear. For example, both high and low staffing of unlicensed and administrative personnel were associated with higher mortality, with odds ratios of 1.03 [95% CI 1.01–1.04] and 1.04 [95% CI 1.03–1.06], respectively. Discussion and implications: This patient-level longitudinal study suggests a relationship between registered nurse staffing levels and mortality. Higher levels of registered nurses affect patient outcomes positively (i.e. lower odds of mortality) and lower levels negatively (i.e. higher odds of mortality). The contributions of the three other groups to patient safety are unclear from these results. Therefore, substituting any of these groups for registered nurses is not recommended. © 2021 The Author
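The odds ratios above come from logistic regression models; as a minimal sketch of how such a ratio is obtained, the snippet below fits a logistic regression to synthetic data with an assumed effect size (not the study's data or model specification) and exponentiates the coefficient of the binary exposure:

```python
# Sketch (synthetic data): odds ratio from a logistic regression coefficient.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
high_rn = rng.integers(0, 2, n)      # exposure to high-RN shifts (0/1)
logit = -3.0 - 0.094 * high_rn       # assumed true OR = exp(-0.094) ~ 0.91
p = 1 / (1 + np.exp(-logit))
died = rng.binomial(1, p)            # simulated mortality outcome

m = LogisticRegression().fit(high_rn.reshape(-1, 1), died)
odds_ratio = np.exp(m.coef_[0, 0])   # exp(beta) is the exposure odds ratio
```

The study's actual models additionally adjust for unit and patient characteristics; this sketch shows only the coefficient-to-odds-ratio step.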

    High sensitive cardiac troponin T: Testing the test

    Background High-sensitivity cardiac troponin T (hs-TnT) has found its way into everyday clinical routine for diagnosing acute myocardial infarction (AMI). However, its levels vary considerably based on the underlying pathophysiology of the patients. Hence, we sought to test the applicability of the only currently available hs-TnT assay (Roche Diagnostics, Switzerland) for diagnosing acute myocardial infarction. Methods and patients We retrospectively analyzed the hs-TnT results of 1573 patients admitted to a level A university hospital emergency department. Overall, 323 patients had an acute cardiac event defined as non-ST-elevation myocardial infarction (NSTEMI) and 286 patients had an ST-elevation myocardial infarction (STEMI); 964 patients served as controls, consisting of patients with other cardiac and non-cardiac morbidity. Results The sensitivity of hs-TnT for detecting an acute cardiac event was more than 92% overall. The specificity varied around 35%, depending on the respective patient cohort. ROC curve analysis of the initial hs-TnT results showed that the AUC for total cardiac events (STEMI and NSTEMI) was 0.81. Detailed analysis resulted in an AUC of 0.79 in NSTEMI and 0.84 in STEMI patients detected via the initial hs-TnT. We further tested the ESC algorithm for detecting NSTEMI and obtained a sensitivity of about 83%, while 43% of all non-NSTEMIs were classified as NSTEMIs. Conclusion We show that the specificity of hs-TnT for AMI is very low and conclude that the current assay, including its delta values, represents myocardial damage of any origin. This damage alone does not substantiate an AMI diagnosis even when international algorithms are applied. © 201
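For illustration, the sketch below computes sensitivity, specificity, and ROC AUC for a troponin-like marker at a fixed cutoff. The distributions are invented and the 14 ng/L cutoff is an assumption (a commonly cited 99th-percentile value for this assay); none of the numbers reproduce the study's results.

```python
# Sketch (synthetic values): sensitivity/specificity at a cutoff plus AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
hs_tnt_controls = rng.lognormal(2.0, 1.0, 964)  # ng/L, hypothetical controls
hs_tnt_ami = rng.lognormal(4.0, 1.0, 609)       # hypothetical NSTEMI + STEMI

cutoff = 14.0  # assumed 99th-percentile cutoff, ng/L
sens = (hs_tnt_ami > cutoff).mean()             # true-positive rate
spec = (hs_tnt_controls <= cutoff).mean()       # true-negative rate

labels = np.r_[np.zeros(964), np.ones(609)]
scores = np.r_[hs_tnt_controls, hs_tnt_ami]
auc = roc_auc_score(labels, scores)             # cutoff-free summary
```

The AUC summarizes discrimination over all cutoffs, which is why the abstract reports it alongside the single-cutoff sensitivity and specificity.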