
    Intra-abdominal hypertension in patients with severe acute pancreatitis

    INTRODUCTION: Abdominal compartment syndrome has been described in patients with severe acute pancreatitis, but its clinical impact remains unclear. We therefore studied patient factors associated with the development of intra-abdominal hypertension (IAH), the incidence of organ failure associated with IAH, and the effect on outcome in patients with severe acute pancreatitis (SAP).
    METHODS: We studied all patients admitted to the intensive care unit (ICU) because of SAP over a 4-year period. The incidence of IAH (defined as an intra-abdominal pressure (IAP) ≥ 15 mmHg) was recorded, as were the occurrence of organ dysfunction during the ICU stay, the length of stay in the ICU, and outcome.
    RESULTS: The analysis included 44 patients, and IAP measurements were obtained from 27 patients. IAH was found in 21 patients (78%). The maximum IAP in these patients averaged 27 mmHg. APACHE II and Ranson scores on admission were higher in patients who developed IAH. The incidence of organ dysfunction was high in patients with IAH: respiratory failure 95%, cardiovascular failure 91%, and renal failure 86%. Mortality was not significantly higher in patients with IAH than in those without (38% versus 16%, p = 0.63), but patients with IAH stayed significantly longer in the ICU and in the hospital. Four patients underwent abdominal decompression because of abdominal compartment syndrome, three of whom died early in the postoperative course.
    CONCLUSION: IAH is a frequent finding in patients admitted to the ICU because of SAP and is associated with a high rate of organ dysfunction. Mortality is high in patients with IAH, and because a direct causal relationship between IAH and organ dysfunction has not been proven in patients with SAP, surgical decompression should not be performed routinely.

    Machine learning in infection management using routine electronic health records: tools, techniques, and reporting of future technologies

    Background: Machine learning (ML) is increasingly being used in many areas of health care. Its use in infection management is catching up, as identified in a recent review in this journal; we present here a complementary review to that work.
    Objectives: To support clinicians and researchers in navigating the methodological aspects of ML approaches in the field of infection management.
    Sources: A Medline search was performed with the keywords artificial intelligence, machine learning, infection∗, and infectious disease∗ for the years 2014–2019. Studies using routinely available electronic hospital record data from an inpatient setting, with a focus on bacterial and fungal infections, were included.
    Content: Fifty-two studies were included and divided into six groups based on their focus. These studies covered detection/prediction of sepsis (n = 19), hospital-acquired infections (n = 11), surgical site infections and other postoperative infections (n = 11), microbiological test results (n = 4), infections in general (n = 2), musculoskeletal infections (n = 2), and other topics (urinary tract infections, deep fungal infections, antimicrobial prescriptions; n = 1 each). In total, 35 different ML techniques were used. Logistic regression was applied in 18 studies, followed by random forest, support vector machines, and artificial neural networks in 18, 12, and 7 studies, respectively. Overall, the studies were very heterogeneous in their approach and reporting: detailed information on data handling and software code was often missing, validation on new datasets and/or in other institutions was rarely done, and clinical studies on the impact of ML in infection management were lacking.
    Implications: Promising approaches for ML use in infectious diseases were identified, but building trust in these new technologies will require improved reporting. Explainability and interpretability of the models used were rarely addressed and should be further explored. Independent model validation and clinical studies evaluating the added value of ML approaches are needed.
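
The review notes that logistic regression was the technique applied most often and that validation on data the model has never seen was rarely reported. As a purely illustrative sketch (the synthetic data, feature names, and thresholds below are hypothetical and not drawn from any of the reviewed studies), the following fits a logistic-regression classifier by batch gradient descent on EHR-style features and reports accuracy on a held-out split:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    """Numerically stable logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def synth_patient(infected):
    """Hypothetical EHR-style features, centred/scaled so that plain
    gradient descent is well conditioned: body temperature (deg C,
    minus 37.5) and white-cell count (1e9/L, minus 11, divided by 3)."""
    temp = random.gauss(38.6 if infected else 37.0, 0.5) - 37.5
    wbc = (random.gauss(14.0 if infected else 8.0, 2.5) - 11.0) / 3.0
    return [1.0, temp, wbc]  # leading 1.0 is the intercept term

data = [(synth_patient(label), label) for label in [0, 1] * 200]
random.shuffle(data)
train, held_out = data[:300], data[300:]  # evaluate on unseen patients

w = [0.0, 0.0, 0.0]
for _ in range(500):  # batch gradient descent on the log-loss
    grad = [0.0, 0.0, 0.0]
    for x, y in train:
        err = sigmoid(sum(wi * xi for wi, xi in zip(w, x))) - y
        for j in range(3):
            grad[j] += err * x[j]
    w = [wi - 0.5 * g / len(train) for wi, g in zip(w, grad)]

accuracy = sum(
    (sigmoid(sum(wi * xi for wi, xi in zip(w, x))) >= 0.5) == (y == 1)
    for x, y in held_out
) / len(held_out)
print(f"held-out accuracy: {accuracy:.2f}")
```

A real study would, as the review urges, additionally validate on a dataset from another institution and report the data handling and code in full.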

    Application of longitudinal data analysis allows detection of differences in pre‐breeding growth curves of 24‐month-calving Angus heifers under two pasture‐based systems with differential puberty onset

    Background. Longitudinal data analysis helps detect differences in growth curves by exploiting all the information contained in repeated measurements, allowing changes over time within individuals to be distinguished from differences in baseline levels among groups. In this study, longitudinal and cross-sectional analyses were compared to evaluate differences in the growth of Angus heifers under two grazing conditions: ad libitum (AG) and controlled (CG, managed to gain 0.5 kg/day).
    Results. Longitudinal mixed models showed differences in growth-curve parameters between grazing conditions that were not detected by cross-sectional analysis. Differences (P < 0.05) in the first derivative of the growth curves (daily gain) up to 289 days were observed between treatments, with AG higher than CG. Correspondingly, the proportion of pubertal heifers at the end of rearing was also higher in AG (AG 0.94; CG 0.67).
    Conclusion. In longitudinal studies, the power to detect differences between groups increases because the whole of the repeated-measures information is exploited and the relation between measurements on the same individual is modelled. Under a proper analysis, valid conclusions can be drawn with fewer animals per trial, improving animal welfare and reducing research costs.
    Author affiliations: Bonamy, Martin; de Iraola, Julieta Josefina; Giovambattista, Guillermo; and Rogberg Muñoz, Andres: Instituto de Genética Veterinaria "Ing. Fernando Noel Dulout", Centro Científico Tecnológico CONICET-La Plata, Consejo Nacional de Investigaciones Científicas y Técnicas / Universidad Nacional de La Plata, Facultad de Ciencias Veterinarias, Argentina. Prando, Alberto José, and Baldo, Andres: Universidad Nacional de La Plata, Facultad de Ciencias Veterinarias, Argentina. Rogberg Muñoz, Andres, also: Universidad de Buenos Aires, Facultad de Agronomía, Departamento de Producción Animal, Argentina.
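
The longitudinal-versus-cross-sectional contrast described above can be illustrated with a small simulation (a hypothetical stdlib sketch, not the authors' mixed-model analysis; all group sizes, gains, and noise levels are invented). Each simulated heifer gets its own baseline weight; estimating a growth slope per animal removes that baseline variation from the group comparison, whereas a single end-of-trial weight comparison retains it:

```python
import random
import statistics

random.seed(42)
DAYS = list(range(0, 300, 30))  # weigh every 30 days

def simulate_animal(daily_gain):
    """Weights for one animal: individual baseline + linear gain + noise."""
    baseline = random.gauss(180, 15)  # kg at start of rearing
    return [baseline + daily_gain * d + random.gauss(0, 5) for d in DAYS]

def slope(xs, ys):
    """Ordinary least-squares slope: the per-animal daily-gain estimate."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Two grazing groups with slightly different true daily gains (kg/day)
ad_lib = [simulate_animal(0.62) for _ in range(12)]
controlled = [simulate_animal(0.50) for _ in range(12)]

# Longitudinal view: each animal serves as its own baseline,
# so the slope estimates are unaffected by the 15 kg baseline spread
slopes_ag = [slope(DAYS, w) for w in ad_lib]
slopes_cg = [slope(DAYS, w) for w in controlled]
print(f"mean daily gain  AG={statistics.fmean(slopes_ag):.3f}  "
      f"CG={statistics.fmean(slopes_cg):.3f}")

# Cross-sectional view: only final weights, where the baseline spread
# stays in the comparison and inflates the within-group variance
final_ag = [w[-1] for w in ad_lib]
final_cg = [w[-1] for w in controlled]
print(f"final-weight sd  AG={statistics.stdev(final_ag):.1f} kg  "
      f"CG={statistics.stdev(final_cg):.1f} kg")
```

The per-animal slopes cluster tightly around the true gains, while the final weights carry the full between-animal baseline spread, which is the mechanism by which the longitudinal analysis gains power at a given number of animals.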

    Perioperative factors determine outcome after surgery for severe acute pancreatitis

    INTRODUCTION: There is evidence that postponing surgery in critically ill patients with severe acute pancreatitis (SAP) leads to improved survival, but previous reports included patients with both sterile and infected pancreatic necrosis, operated on for various indications and with different degrees of organ dysfunction at the moment of surgery, which may be an important source of bias. The objective of this study was to analyze the impact of the timing of surgery and of perioperative factors (severity of organ dysfunction and microbiological status of the necrosis) on mortality in intensive care unit (ICU) patients undergoing surgery for SAP.
    METHODS: We retrospectively (January 1994 to March 2003) analyzed patients admitted to the ICU with SAP. Of 124 patients, 56 were treated surgically; these are the subject of this analysis. We recorded demographic characteristics and predictors of mortality at admission, timing of and indications for surgery, and outcome. We also studied the microbiological status of the necrosis and organ dysfunction at the moment of surgery.
    RESULTS: Characteristics were comparable between patients undergoing early and late surgery, and there was a trend toward higher mortality in patients who underwent early surgery (55% versus 29%, P = 0.06). In univariate analysis, patients who died were older, had higher organ dysfunction scores on the day of surgery, and more often had sterile necrosis; there was a trend toward earlier surgery in these patients. Logistic regression analysis showed that only age, organ dysfunction at the moment of surgery, and the presence of sterile necrosis were independent predictors of mortality.
    CONCLUSIONS: In this cohort of critically ill patients operated on for SAP, there was a trend toward higher mortality in patients operated on early in the course of the disease, but in multivariate analysis only greater age, severity of organ dysfunction at the moment of surgery, and the presence of sterile necrosis, and not the timing of the surgical intervention, were independently associated with an increased risk of mortality.

    Explainability in medicine in an era of AI-based clinical decision support systems

    This is the final version, available on open access from Frontiers Media via the DOI in this record. Data availability statement: the original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
    The combination of "Big Data" and Artificial Intelligence (AI) is frequently promoted as having the potential to deliver valuable health benefits when applied to medical decision-making. However, the responsible adoption of AI-based clinical decision support systems faces several challenges at both the individual and societal level. One feature that has given rise to particular concern is explainability: if the way an algorithm arrived at a particular output is not known (or knowable) to a physician, this may lead to multiple challenges, including an inability to evaluate the merits of the output. This "opacity" problem has led to questions about whether physicians are justified in relying on the algorithmic output, with some scholars insisting on the centrality of explainability, while others see no reason to require of AI that which is not required of physicians. We consider that there is merit in both views but find that greater nuance is necessary to elucidate the underlying function of explainability in clinical practice and, therefore, its relevance in the context of AI for clinical use. In this paper, we explore explainability by examining what it requires in clinical medicine and draw a distinction between the function of explainability for the current patient versus the future patient. This distinction has implications for what explainability requires in the short and long term. We highlight the role of transparency in explainability, and identify semantic transparency as fundamental to the issue of explainability itself.
    We argue that, in day-to-day clinical practice, accuracy is sufficient as an "epistemic warrant" for clinical decision-making, and that the most compelling reason for requiring explainability in the sense of scientific or causal explanation is the potential for improving future care by building a more robust model of the world. We identify the goal of clinical decision-making as delivering the best possible outcome as often as possible, and find that accuracy is sufficient justification for intervention for today's patient, as long as efforts to uncover scientific explanations continue to improve healthcare for future patients.
    Funding: Research Foundation Flanders (FWO).