Predictive analytics for cardio-thoracic surgery duration as a stepstone towards data-driven capacity management
Effective capacity management of operating rooms is key to avoiding surgery cancellations and preventing long waiting lists, which negatively affect clinical and financial outcomes as well as patient and staff satisfaction. This requires optimal surgery scheduling, leveraging essential parameters such as surgery duration, post-operative bed type and hospital length-of-stay. Common clinical practice is to use the surgeon's average procedure time over the last N patients as the planned surgery duration for the next patient. A discrepancy between the actual and planned surgery duration may lead to a suboptimal surgery schedule. We used deidentified data from 2294 cardio-thoracic surgeries, first to quantify the discrepancy of the current model and second to develop new predictive models based on linear regression, random forest, and extreme gradient boosting. The new ensemble models reduced the RMSE for elective and acute surgeries by 19% (0.99 vs. 0.80, p = 0.002) and 52% (1.87 vs. 0.89, p < 0.001), respectively. The proportions of elective and acute surgeries "behind schedule" were also reduced, by 28 percentage points (60% vs. 32%, p < 0.001) and 9 percentage points (37% vs. 28%, p = 0.003), respectively. These improvements were driven by the patient and surgery features added to the models. Surgery planners can use these predictive models as a patient-flow AI decision support tool to optimize OR utilization.
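For readers who want to reproduce this kind of comparison, a minimal sketch follows: the "surgeon's average of the last N cases" baseline against a feature-based ensemble model, evaluated by RMSE. This is not the authors' code; the file name, column names, features, and N are illustrative assumptions.

```python
# Sketch: compare the rolling "last N cases" baseline against a
# feature-based ensemble model for surgery duration (hours).
# All column names and N = 10 are assumptions, not from the paper.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("surgeries.csv")  # hypothetical deidentified export
df = df.sort_values("date")        # assumes a 'date' column exists

# Baseline: mean of the surgeon's previous N procedure durations.
N = 10
df["planned_h"] = (
    df.groupby("surgeon")["duration_h"]
      .transform(lambda s: s.shift(1).rolling(N, min_periods=1).mean())
)
df = df.dropna(subset=["planned_h"])

features = ["age", "bmi", "procedure_code", "acute"]  # assumed features
X = pd.get_dummies(df[features], columns=["procedure_code"])
y = df["duration_h"]

# Time-ordered split so the model never sees future cases.
X_tr, X_te, y_tr, y_te, plan_tr, plan_te = train_test_split(
    X, y, df["planned_h"], test_size=0.2, shuffle=False
)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

rmse_baseline = np.sqrt(mean_squared_error(y_te, plan_te))
rmse_model = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"baseline RMSE {rmse_baseline:.2f} h, model RMSE {rmse_model:.2f} h")
```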
The correlation of urea and creatinine concentrations in sweat and saliva with plasma during hemodialysis: an observational cohort study
OBJECTIVES: Urea and creatinine concentrations in plasma are used to guide hemodialysis (HD) in patients with end-stage renal disease (ESRD). To support individualized HD treatment in a home situation, there is a clinical need for a non-invasive and continuous alternative to plasma for biomarker monitoring during and between cycles of HD. In this observational study, we therefore established the correlation of urea and creatinine concentrations between sweat, saliva and plasma in a cohort of ESRD patients on HD. METHODS: Forty HD patients were recruited at the Dialysis Department of the Catharina Hospital Eindhoven. Sweat and salivary urea and creatinine concentrations were analyzed at the start and at the end of one HD cycle and compared to the corresponding plasma concentrations. RESULTS: Urea concentrations decreased during HD in sweat (from 27.86 to 12.60 mmol/L) and in saliva (from 24.70 to 5.64 mmol/L). Urea concentrations in sweat and saliva correlated strongly with those in plasma (ρ = 0.92 [p < 0.001] and ρ = 0.94 [p < 0.001], respectively). Creatinine concentrations also decreased, in sweat from 43.39 to 19.69 μmol/L and in saliva from 59.00 to 13.70 μmol/L. For creatinine, however, the correlations with plasma were weaker than for urea (ρ = 0.58 [p < 0.001] for sweat and ρ = 0.77 [p < 0.001] for saliva). CONCLUSIONS: The results provide a proof of principle for urea measurements in sweat and saliva to monitor HD adequacy in a non-invasive and continuous manner. Biosensors enabling urea monitoring in sweat or saliva could fill a clinical need, enabling at-home HD for more patients and thereby decreasing patient burden.
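The reported ρ values are rank correlations; a minimal sketch of the paired sweat/saliva-versus-plasma analysis with SciPy, using synthetic values in place of the patient measurements:

```python
# Sketch: Spearman correlation of sweat/saliva urea with plasma urea,
# pooling pre- and post-HD samples. All values are synthetic placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
plasma = rng.uniform(5, 30, size=80)              # mmol/L, synthetic
sweat = plasma * rng.normal(1.0, 0.15, size=80)   # correlated surrogate
saliva = plasma * rng.normal(0.9, 0.10, size=80)

for name, x in [("sweat", sweat), ("saliva", saliva)]:
    rho, p = spearmanr(x, plasma)
    print(f"{name} vs plasma: rho = {rho:.2f}, p = {p:.1e}")
```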
RoA: visual analytics support for deconfounded causal inference in observational studies
The gold standard in medical research to estimate the causal effect of a treatment is the Randomized Controlled Trial (RCT), but in many cases RCTs are not feasible due to ethical, financial or practical issues. Observational studies are an alternative, but can easily lead to doubtful results because of selection bias and confounding. Moreover, RCTs often apply only to a specific subgroup and cannot readily be extrapolated. In response, we present Rod of Asclepius (RoA), a novel visual analytics method that integrates modern techniques for the identification of causal effects and effect size estimation with subgroup analysis. The result is an interactive display that combines exploratory analysis with a robust set of techniques, including causal do-calculus, propensity score weighting, and effect estimation. It enables analysts to conduct observational studies in an exploratory yet robust way. We demonstrate this by means of a use case involving patients undergoing surgery, developed in close collaboration with clinical researchers.
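Among the techniques RoA integrates, propensity score weighting is the most self-contained to illustrate. Below is a minimal sketch of inverse-probability-of-treatment weighting on synthetic data; it is not RoA's implementation:

```python
# Sketch: inverse-probability-of-treatment weighting (IPW) to deconfound
# a treatment effect estimate. Data are synthetic with a known effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))                 # confounders
p_treat = 1 / (1 + np.exp(-x[:, 0]))        # treatment depends on x[:, 0]
t = rng.binomial(1, p_treat)
y = 2.0 * t + x[:, 0] + rng.normal(size=n)  # true causal effect = 2.0

# Propensity scores from a logistic model of treatment on confounders.
ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
w = t / ps + (1 - t) / (1 - ps)             # IPW weights

# Weighted difference in means estimates the average treatment effect.
ate = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))
naive = y[t == 1].mean() - y[t == 0].mean()
print(f"naive (confounded): {naive:.2f}, IPW ATE: {ate:.2f}")
```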
Lessons Learned from Telemonitoring in an Outpatient Bariatric Surgery Pathway-Secondary Outcomes of a Patient Preference Clinical Trial
Background: Remote monitoring is increasingly used to support postoperative care. This study aimed to describe the lessons learned from the use of telemonitoring in an outpatient bariatric surgery pathway. Materials and Methods: Patients were assigned, based on their preference, to an intervention cohort of same-day discharge after bariatric surgery. In total, 102 patients were monitored continuously for 7 days using a wearable monitoring device with a Continuous and Remote Early Warning Score–based notification protocol (CREWS). Outcome measures included missing data, the course of postoperative heart and respiration rate, false positive notification and specificity analysis, and vital sign assessment during teleconsultation. Results: In 14.7% of the patients, heart rate data were missing for more than 8 h. A day-night rhythm of heart rate and respiration rate reappeared on average on postoperative day 2, with heart rate amplitude increasing after day 3. CREWS notification had a specificity of 98%. Of the 17 notifications, 70% were false positives. Half of these occurred between days 4 and 7 and were accompanied by reassuring surrounding values. Postoperative complaints were comparable between patients with normal and with deviating data. Conclusion: Telemonitoring after outpatient bariatric surgery is feasible. It supports clinical decisions but does not replace nurse or physician care. Although notifications were infrequent, the false positive rate among them was high. We suggest that additional contact may not be necessary when notifications occur after restoration of the circadian rhythm or when surrounding vital signs are reassuring. CREWS supports ruling out serious complications, which may reduce in-hospital re-evaluations. Following these lessons learned, increased patient comfort and decreased clinical workload can be expected. Trial Registration: ClinicalTrials.gov Identifier: NCT04754893.
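The specificity and false positive figures come down to a confusion-matrix computation over notifications; a minimal sketch, with synthetic labels in place of the trial data and patient-level aggregation as a simplifying assumption:

```python
# Sketch: specificity and false-positive share of a threshold-based
# notification protocol, aggregated per patient. Labels are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 102
complication = rng.binomial(1, 0.05, size=n)              # true deterioration
notified = complication | rng.binomial(1, 0.12, size=n)   # protocol fires

tn = np.sum((notified == 0) & (complication == 0))
fp = np.sum((notified == 1) & (complication == 0))
specificity = tn / (tn + fp)
fp_share = fp / max(notified.sum(), 1)  # share of notifications that are false

print(f"specificity {specificity:.1%}, "
      f"false positives {fp_share:.0%} of notifications")
```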
Prediction of postoperative patient deterioration and unanticipated intensive care unit admission using perioperative factors
BACKGROUND AND OBJECTIVES: Currently, no evidence-based criteria exist for decision making in the post anesthesia care unit (PACU). Such criteria could be valuable for allocating postoperative patients to the appropriate level of care and beneficial for patient outcomes such as unanticipated intensive care unit (ICU) admissions. The aim was to assess whether the inclusion of intra- and postoperative factors improves the prediction of postoperative patient deterioration and unanticipated ICU admissions. METHODS: A retrospective observational cohort study was performed between January 2013 and December 2017 in a tertiary Dutch hospital. All patients undergoing surgery in the study period were selected. Cardiothoracic surgeries, obstetric surgeries, catheterization lab procedures, electroconvulsive therapy, day care procedures, intravenous line interventions and patients under the age of 18 years were excluded. The primary outcome was unanticipated ICU admission. RESULTS: An unanticipated ICU admission complicated the recovery of 223 (0.9%) patients. These patients had higher hospital mortality rates (13.9% versus 0.2%, p < 0.001). Multivariable analysis identified the following predictors of unanticipated ICU admission: age, body mass index, general anesthesia combined with epidural anesthesia, preoperative score, diabetes, administration of vasopressors and erythrocytes, duration of surgery and of the post anesthesia care unit stay, and vital parameters such as heart rate and oxygen saturation. The receiver operating characteristic curve of this model yielded an area under the curve of 0.86 (95% CI 0.83-0.88). CONCLUSIONS: The prediction of unanticipated ICU admissions from electronic medical record data improved when intra- and early postoperative factors were combined with preoperative patient factors. This emphasizes the need for clinical decision support tools in post anesthesia care units with regard to postoperative patient allocation.
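A minimal sketch of the modelling step, multivariable logistic regression evaluated by ROC AUC, with hypothetical column names standing in for the perioperative factors described above:

```python
# Sketch: predict unanticipated ICU admission and evaluate by ROC AUC.
# The file and column names are illustrative assumptions, not the paper's.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("perioperative.csv")  # hypothetical EMR extract
features = ["age", "bmi", "epidural_with_general", "diabetes",
            "vasopressors", "erythrocytes", "surgery_duration_h",
            "pacu_stay_h", "heart_rate", "spo2"]

X_tr, X_te, y_tr, y_te = train_test_split(
    df[features], df["unanticipated_icu"], test_size=0.3,
    stratify=df["unanticipated_icu"], random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.2f}")  # the paper reports 0.86 with its full model
```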
Evaluation of the image quality and validity of handheld echocardiography for stroke volume and left ventricular ejection fraction quantification: a method comparison study
Bedside quantification of stroke volume (SV) and left ventricular ejection fraction (LVEF) is valuable in hemodynamically compromised patients. Miniaturized handheld ultrasound (HAND) devices are now available for clinical use. However, the performance of HAND devices for quantified cardiac assessment is as yet unknown. The aim of this study was to compare the validity of HAND measurements with standard echocardiography (SE) and three-dimensional echocardiography (3DE). Thirty-six patients were scanned with HAND, SE and 3DE. LVEF and SV quantification was performed with automated software for the HAND, SE and 3DE datasets. The image quality of HAND and SE was evaluated by scoring segmental endocardial border delineation (2 = good, 1 = poor, 0 = invisible). LVEF and SV from HAND were evaluated against SE and 3DE using correlation and Bland-Altman analysis. The correlation, bias, and limits of agreement (LOA) between HAND and SE were 0.68 [0.46:0.83], 1.60% [-2.18:5.38], and 8.84% [-9.79:12.99] for LVEF, and 0.91 [0.84:0.96], 1.32 ml [-0.36:4.01], and 15.54 ml [-18.70:21.35] for SV, respectively. The correlation, bias, and LOA between HAND and 3DE were 0.55 [0.6:0.74], -0.56% [-2.27:1.1], and 9.88% [-13.29:12.17] for LVEF, and 0.79 [0.62:0.89], 6.78 ml [2.34:11.21], and 12.14 ml [-26.32:39.87] for SV, respectively. The image quality scores were 9.42 ± 2.0 for the apical four-chamber views of the HAND dataset and 10.49 ± 1.7 for the SE dataset (P < 0.001). Clinically acceptable accuracy, precision, and image quality were demonstrated for HAND measurements compared to SE. In comparison to 3DE, HAND showed clinically acceptable accuracy and precision for LVEF quantification.
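The bias and limits of agreement reported here follow from a standard Bland-Altman computation; a minimal sketch for paired LVEF measurements, with synthetic values in place of the study data:

```python
# Sketch: Bland-Altman bias and 95% limits of agreement for paired
# LVEF measurements from two devices. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
se_lvef = rng.normal(55, 8, size=36)                  # reference method (%)
hand_lvef = se_lvef + rng.normal(1.6, 4.5, size=36)   # handheld device (%)

diff = hand_lvef - se_lvef
bias = diff.mean()
half_loa = 1.96 * diff.std(ddof=1)                    # half-width of the LOA
r = np.corrcoef(hand_lvef, se_lvef)[0, 1]

print(f"r = {r:.2f}, bias = {bias:.2f}%, "
      f"LOA = [{bias - half_loa:.2f}, {bias + half_loa:.2f}]%")
```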