
    Bayesian networks with imprecise datasets: application to oscillating water column

    The Bayesian network approach is a probabilistic method increasingly used in the risk assessment of complex systems. It has proven to be a reliable and powerful tool with the flexibility to include different types of data, from experimental measurements to expert judgement. The incorporation of system reliability methods allows traditional Bayesian networks to work with random variables with discrete and continuous distributions. Probabilistic uncertainty stems from the complexity of the reality that scientists try to reproduce in a controlled experiment, whereas imprecision relates to the quality of the specific instrument making the measurements. This imprecision, or lack of data, can be taken into account by using intervals and probability boxes as random variables in the network. The system reliability problems arising from these kinds of uncertainty have typically been solved with Monte Carlo simulation. However, Monte Carlo simulation is computationally expensive and prevents real-time analysis of the system represented by the network. In this work, the line sampling algorithm is used as an effective method to improve the efficiency of the reduction process from enhanced to traditional Bayesian networks. This preserves all the advantages without excessively increasing the computational cost of the analysis. As an application example, a risk assessment of an oscillating water column is carried out using data obtained in the laboratory. The proposed method is run using the multipurpose software OpenCossan.
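The interval treatment of imprecision can be illustrated with a minimal sketch: propagating an interval-valued distribution parameter through a plain Monte Carlo reliability analysis yields bounds on the failure probability. The limit state, distributions, and interval below are hypothetical; the abstract's line sampling and OpenCossan machinery are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical limit state: the device fails when capacity < demand.
# The demand mean is known only as an interval (imprecision), so the
# failure probability is itself an interval -- a probability-box bound.
def failure_probability(mu_demand, n=100_000):
    capacity = rng.normal(10.0, 1.0, n)    # aleatory uncertainty
    demand = rng.normal(mu_demand, 1.5, n)
    return float(np.mean(capacity < demand))

p_lo = failure_probability(mu_demand=5.0)  # optimistic end of the interval
p_hi = failure_probability(mu_demand=7.0)  # pessimistic end
print(f"failure probability in [{p_lo:.4f}, {p_hi:.4f}]")
```

For a limit state monotone in the imprecise parameter, the interval endpoints bound the failure probability; in general an optimization over the interval is required, which is where efficient samplers such as line sampling pay off over crude Monte Carlo.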

    Machine learning algorithms performed no better than regression models for prognostication in traumatic brain injury

    Objective: We aimed to explore the added value of common machine learning (ML) algorithms for prediction of outcome for moderate and severe traumatic brain injury. Study Design and Setting: We performed logistic regression (LR), lasso regression, and ridge regression with key baseline predictors in the IMPACT-II database (15 studies, n = 11,022). ML algorithms included support vector machines, random forests, gradient boosting machines, and artificial neural networks and were trained using the same predictors. To assess generalizability of predictions, we performed internal, internal-external, and external validation on the recent CENTER-TBI study (patients with Glasgow Coma Scale <13, n = 1,554). Both calibration (calibration slope/intercept) and discrimination (area under the curve) were quantified. Results: In the IMPACT-II database, 3,332/11,022 (30%) died and 5,233 (48%) had unfavorable outcome (Glasgow Outcome Scale less than 4). In the CENTER-TBI study, 348/1,554 (29%) died and 651 (54%) had unfavorable outcome. Discrimination and calibration varied widely between the studies and less so between the studied algorithms. The mean area under the curve was 0.82 for mortality and 0.77 for unfavorable outcomes in the CENTER-TBI study. Conclusion: ML algorithms may not outperform traditional regression approaches in a low-dimensional setting for outcome prediction after moderate or severe traumatic brain injury. Similar to regression-based prediction models, ML algorithms should be rigorously validated to ensure applicability to new populations.
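The kind of comparison described can be sketched on synthetic data (not the IMPACT or CENTER-TBI cohorts; scikit-learn stands in for the authors' pipeline): both discrimination (area under the curve) and the calibration slope are computed for a regression model and an ML algorithm side by side.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic low-dimensional stand-in for a clinical prediction task
X, y = make_classification(n_samples=4000, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def calibration_slope(y_true, p):
    # slope of a logistic recalibration: regress the outcome on logit(p);
    # a slope near 1 indicates well-calibrated predictions
    logit = np.log(p / (1 - p)).reshape(-1, 1)
    return LogisticRegression().fit(logit, y_true).coef_[0, 0]

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    p = np.clip(p, 1e-6, 1 - 1e-6)   # avoid infinite logits
    print(f"{name}: AUC = {roc_auc_score(y_te, p):.3f}, "
          f"calibration slope = {calibration_slope(y_te, p):.2f}")
```

In a low-dimensional setting such as this, both model families typically reach similar discrimination, which is the pattern the study reports.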

    Clustering identifies endotypes of traumatic brain injury in an intensive care cohort: a CENTER-TBI study

    Background While the Glasgow Coma Scale (GCS) is one of the strongest outcome predictors, the current classification of traumatic brain injury (TBI) as ‘mild’, ‘moderate’ or ‘severe’ on this basis fails to capture the enormous heterogeneity in pathophysiology and treatment response. We hypothesized that data-driven characterization of TBI could identify distinct endotypes and give mechanistic insights. Methods We developed an unsupervised statistical clustering model based on a mixture of probabilistic graphs for presentation (< 24 h) demographic, clinical, physiological, laboratory and imaging data to identify subgroups of TBI patients admitted to the intensive care unit in the CENTER-TBI dataset (N = 1,728). A cluster similarity index was used for robust determination of the optimal cluster number. Mutual information was used to quantify feature importance and for cluster interpretation. Results Six stable endotypes were identified with distinct GCS and composite systemic metabolic stress profiles, distinguished by GCS, blood lactate, oxygen saturation, serum creatinine, glucose, base excess, pH, arterial partial pressure of carbon dioxide, and body temperature. Notably, a cluster with ‘moderate’ TBI (by traditional classification) and a deranged metabolic profile had a worse outcome than a cluster with ‘severe’ GCS and a normal metabolic profile. Addition of cluster labels significantly improved the prognostic precision of the IMPACT (International Mission for Prognosis and Analysis of Clinical trials in TBI) extended model, for prediction of both unfavourable outcome and mortality (both p < 0.001). Conclusions Six stable and clinically distinct TBI endotypes were identified by probabilistic unsupervised clustering. In addition to presenting neurology, a profile of biochemical derangement was found to be an important distinguishing feature that was both biologically plausible and associated with outcome.
    Our work motivates refining current TBI classifications with factors describing metabolic stress. Such data-driven clusters suggest TBI endotypes that merit investigation to identify bespoke treatment strategies to improve care.
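The general workflow — a stability-based choice of cluster number followed by mutual-information ranking of features — can be sketched with KMeans and bootstrap resampling on synthetic data. This is an illustrative stand-in only: the study's mixture-of-probabilistic-graphs model and its exact cluster similarity index are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import adjusted_rand_score

# Synthetic stand-in: 600 "patients", 5 features, 3 latent endotypes
X, _ = make_blobs(n_samples=600, centers=3, n_features=5, random_state=0)
rng = np.random.default_rng(0)

def stability(k, n_pairs=5):
    # fit k-means on two bootstrap resamples and compare the agreement of
    # the induced labellings of the full dataset -- a simple proxy for a
    # cluster similarity index; the most stable k is retained
    scores = []
    for _ in range(n_pairs):
        a, b = (KMeans(n_clusters=k, n_init=10,
                       random_state=int(rng.integers(1_000_000)))
                .fit(X[rng.choice(len(X), len(X))]) for _ in range(2))
        scores.append(adjusted_rand_score(a.predict(X), b.predict(X)))
    return float(np.mean(scores))

for k in range(2, 7):
    print(f"k={k}: stability = {stability(k):.2f}")

# interpret clusters: mutual information between each feature and the labels
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
mi = mutual_info_classif(X, labels, random_state=0)
print("feature importance (MI):", np.round(mi, 2))
```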

    Tracheal intubation in traumatic brain injury

    Background: We aimed to study the associations between pre- and in-hospital tracheal intubation and outcomes in traumatic brain injury (TBI), and whether the association varied according to injury severity. Methods: Data from the international prospective pan-European cohort study, Collaborative European NeuroTrauma Effectiveness Research for TBI (CENTER-TBI), were used (n=4509). For prehospital intubation, we excluded self-presenters. For in-hospital intubation, patients whose tracheas were intubated on-scene were excluded. The association between intubation and outcome was analysed with ordinal regression with adjustment for the International Mission for Prognosis and Analysis of Clinical Trials in TBI variables and extracranial injury. We assessed whether the effect of intubation varied by injury severity by testing the added value of an interaction term with likelihood ratio tests. Results: In the prehospital analysis, 890/3736 (24%) patients had their tracheas intubated at scene. In the in-hospital analysis, 460/2930 (16%) patients had their tracheas intubated in the emergency department. There was no adjusted overall effect on functional outcome of prehospital intubation (odds ratio=1.01; 95% confidence interval, 0.79–1.28; P=0.96), and the adjusted overall effect of in-hospital intubation was not significant (odds ratio=0.86; 95% confidence interval, 0.65–1.13; P=0.28). However, prehospital intubation was associated with better functional outcome in patients with higher thorax and abdominal Abbreviated Injury Scale scores (P=0.009 and P=0.02, respectively), whereas in-hospital intubation was associated with better outcome in patients with lower Glasgow Coma Scale scores (P=0.01): in-hospital intubation was associated with better functional outcome in patients with Glasgow Coma Scale scores of 10 or lower. Conclusion: The benefits and harms of tracheal intubation should be carefully evaluated in patients with TBI to optimise benefit. 
    This study suggests that extracranial injury should influence the decision to intubate in the prehospital setting, and level of consciousness in the in-hospital setting. Clinical trial registration: NCT02210221.

    Whole-genome sequencing reveals host factors underlying critical COVID-19

    Critical COVID-19 is caused by immune-mediated inflammatory lung injury. Host genetic variation influences the development of illness requiring critical care or hospitalization after infection with SARS-CoV-2. The GenOMICC (Genetics of Mortality in Critical Care) study enables the comparison of genomes from individuals who are critically ill with those of population controls to find underlying disease mechanisms. Here we use whole-genome sequencing in 7,491 critically ill individuals compared with 48,400 controls to discover and replicate 23 independent variants that significantly predispose to critical COVID-19. We identify 16 new independent associations, including variants within genes that are involved in interferon signalling (IL10RB and PLSCR1), leucocyte differentiation (BCL11A) and blood-type antigen secretor status (FUT2). Using transcriptome-wide association and colocalization to infer the effect of gene expression on disease severity, we find evidence that implicates multiple genes—including reduced expression of a membrane flippase (ATP11A), and increased expression of a mucin (MUC1)—in critical disease. Mendelian randomization provides evidence in support of causal roles for myeloid cell adhesion molecules (SELE, ICAM5 and CD209) and the coagulation factor F8, all of which are potentially druggable targets. Our results are broadly consistent with a multi-component model of COVID-19 pathophysiology, in which at least two distinct mechanisms can predispose to life-threatening disease: failure to control viral replication, or an enhanced tendency towards pulmonary inflammation and intravascular coagulation. We show that comparison between cases of critical illness and population controls is highly efficient for the detection of therapeutically relevant mechanisms of disease.
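The core design — comparing allele frequencies between critically ill cases and population controls — can be sketched for a single variant with a contingency-table test. The counts below are invented for illustration and are not GenOMICC results; real GWAS analyses additionally adjust for ancestry and other covariates.

```python
import numpy as np
from scipy import stats

# Toy 2x2 allele-count table for one variant (hypothetical numbers):
# rows = cases / controls, cols = effect allele / other allele
table = np.array([[2100, 12882],     # 7,491 cases    -> 14,982 alleles
                  [9700, 87100]])    # 48,400 controls -> 96,800 alleles

chi2, p = stats.chi2_contingency(table)[:2]
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"OR = {odds_ratio:.2f}, p = {p:.1e}")
```

The large control group is what makes this case-versus-population design statistically efficient: the precision of the frequency comparison is dominated by the smaller (case) arm.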

    Identification of aerosol type over the Arabian Sea in the premonsoon season during the Integrated Campaign for Aerosols, Gases and Radiation Budget (ICARB)

    A discrimination of the different aerosol types over the Arabian Sea (AS) during the Integrated Campaign for Aerosols, Gases and Radiation Budget (ICARB-06) is made using values of aerosol optical depth (AOD) at 500 nm (AOD500) and Ångström exponent (α) in the spectral band 340-1020 nm (α340-1020). For this purpose, appropriate thresholds for AOD500 and α340-1020 are applied. It is shown that a single aerosol type in a given location over the AS can exist only under specific conditions, while the presence of mixed aerosols is the usual situation. Analysis indicates that the dominant aerosol types change significantly in the different regions (coastal, middle, and far) of the AS. Thus, urban/industrial aerosols are mainly observed in the coastal AS, desert dust particles occur in the middle and northern AS, while clear maritime conditions mainly occur in the far AS. Spectral AOD and Ångström exponent data were analyzed to assess the adequacy of the simple use of the Ångström exponent and the spectral variation of α for characterizing the aerosols. Using the least squares method, α is calculated in the spectral interval 340-1020 nm along with the coefficients a1 and a2 of the second-order polynomial fit to the plotted logarithm of AOD versus the logarithm of wavelength. The results show that the spectral curvature can effectively be used as a tool for discriminating aerosol modes, since fine mode aerosols exhibit negative curvature, while coarse mode particles exhibit positive curvature. The correlation of the coefficients a1 and a2 with the Ångström exponent, and with the atmospheric turbidity, is further investigated. Copyright 2009 by the American Geophysical Union.
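The second-order fit described can be sketched directly: regressing ln AOD on ln λ, the linear fit gives the Ångström exponent, and the quadratic coefficient a2 gives the curvature used to separate fine and coarse modes. The spectrum below is synthetic, generated from a known quadratic, and is not ICARB data.

```python
import numpy as np

wl = np.array([340, 440, 500, 675, 870, 1020])   # wavelengths, nm
x = np.log(wl)

# Illustrative spectrum built from a known quadratic in ln(lambda),
# with a2 = -0.25 < 0: the fine-mode signature described in the abstract
aod = np.exp(-3.56 + 2.0 * x - 0.25 * x**2)
y = np.log(aod)

# Angstrom law: ln AOD = ln(beta) - alpha * ln(lambda)
alpha = -np.polyfit(x, y, 1)[0]

# Second-order fit: ln AOD = a2 * (ln lambda)^2 + a1 * ln(lambda) + a0
a2, a1, a0 = np.polyfit(x, y, 2)
mode = "fine mode (a2 < 0)" if a2 < 0 else "coarse mode (a2 > 0)"
print(f"alpha = {alpha:.2f}, a2 = {a2:.2f} -> {mode}")
```

Because the synthetic spectrum is exactly quadratic in ln λ, the least squares fit recovers a2 = -0.25, and the sign check reproduces the mode discrimination rule stated in the abstract.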