
    Missing data imputation techniques for wireless continuous vital signs monitoring

    Wireless vital signs sensors are increasingly used for remote patient monitoring, but data analysis is often challenged by missing-data periods. This study explored the performance of various imputation techniques for continuous vital signs measurements. Wireless vital signs measurements (heart rate, respiratory rate, blood oxygen saturation, axillary temperature) from surgical ward patients were used for repeated random simulation of missing-data periods (gaps) of 5–60 min in two-hour windows. Gaps were imputed using linear interpolation, spline interpolation, the last-observation and mean carried-forward techniques, and cluster-based prognosis. Imputation performance was evaluated using the mean absolute error (MAE) between original and imputed gap samples. In addition, effects on signal features (the window’s slope and mean) and on early warning scores (EWS) were explored. Gaps were simulated in 1743 data windows obtained from 52 patients. Although MAE ranges overlapped, median MAE was consistently lowest for linear interpolation (heart rate: 0.9–2.6 beats/min; respiratory rate: 0.8–1.8 breaths/min; temperature: 0.04–0.17 °C; oxygen saturation: 0.3–0.7% for 5–60 min gaps) and up to twice as high for the other techniques. Three techniques resulted in larger ranges of signal-feature bias than performing no imputation at all. Imputation led to EWS misclassification in 1–8% of all simulations. Imputation error ranges vary between imputation techniques and increase with gap length. Imputation may introduce larger signal-feature bias than performing no imputation, and can affect patient risk assessment, as illustrated by the EWS. Accordingly, careful selection and implementation of imputation techniques is warranted. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s10877-023-00975-w
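    The study's evaluation scheme can be sketched minimally: impute a simulated gap with one of the compared techniques, then score the imputation by the MAE restricted to the gap samples. This is an illustrative sketch (function names and the simple test series are placeholders, not the study's code), assuming a regularly sampled 1-D series with observed values on both sides of the gap:

```python
import numpy as np

def impute_gap(series, gap_start, gap_end, method="linear"):
    """Fill a missing-data gap [gap_start, gap_end) in a 1-D vital signs series."""
    out = series.copy()
    idx = np.arange(gap_start, gap_end)
    if method == "linear":
        # Interpolate between the last sample before and the first sample after the gap.
        out[idx] = np.interp(idx, [gap_start - 1, gap_end],
                             [series[gap_start - 1], series[gap_end]])
    elif method == "locf":
        # Last observation carried forward.
        out[idx] = series[gap_start - 1]
    elif method == "mean":
        # Window mean (over the observed samples) carried forward into the gap.
        observed = np.concatenate([series[:gap_start], series[gap_end:]])
        out[idx] = observed.mean()
    else:
        raise ValueError(f"unknown method: {method}")
    return out

def gap_mae(original, imputed, gap_start, gap_end):
    """Mean absolute error between original and imputed values, gap samples only."""
    return np.mean(np.abs(original[gap_start:gap_end] - imputed[gap_start:gap_end]))

# Toy heart-rate-like series with a simulated 3-sample gap.
hr = np.array([72.0, 74.0, 75.0, 77.0, 78.0, 80.0, 79.0, 81.0])
lin = impute_gap(hr, 3, 6, "linear")
locf = impute_gap(hr, 3, 6, "locf")
print(gap_mae(hr, lin, 3, 6), gap_mae(hr, locf, 3, 6))
```

    On a roughly monotone segment like this, LOCF's error grows with gap length while linear interpolation stays close to the true trend, which matches the abstract's finding that linear interpolation had the lowest median MAE.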

    Effect of a machine learning-based severe sepsis prediction algorithm on patient survival and hospital length of stay: a randomised clinical trial.

    Introduction: Several methods have been developed to electronically monitor patients for severe sepsis, but few provide predictive capabilities to enable early intervention; furthermore, no severe sepsis prediction systems have been previously validated in a randomised study. We tested the use of a machine learning-based severe sepsis prediction system for reductions in average length of stay and in-hospital mortality rate. Methods: We conducted a randomised controlled clinical trial at two medical-surgical intensive care units at the University of California, San Francisco Medical Center, evaluating the primary outcome of average length of stay and the secondary outcome of in-hospital mortality rate from December 2016 to February 2017. Adult patients (18+) admitted to participating units were eligible for this factorial, open-label study. Enrolled patients were assigned to a trial arm by a random allocation sequence. In the control group, only the current severe sepsis detector was used; in the experimental group, the machine learning algorithm (MLA) was also used. On receiving an alert, the care team evaluated the patient and initiated the severe sepsis bundle, if appropriate. Although participants were randomly assigned to a trial arm, group assignments were automatically revealed for any patients who received MLA alerts. Results: Outcomes from 75 patients in the control and 67 patients in the experimental group were analysed. Average length of stay decreased from 13.0 days in the control to 10.3 days in the experimental group (p=0.042). In-hospital mortality decreased by 12.4 percentage points when using the MLA (p=0.018), a relative reduction of 58.0%. No adverse events were reported during this trial. Conclusion: The MLA was associated with improved patient outcomes. This is the first randomised controlled trial of a sepsis surveillance system to demonstrate statistically significant differences in length of stay and in-hospital mortality. Trial registration: NCT03015454
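    The abstract reports the absolute (12.4 percentage points) and relative (58.0%) mortality reductions but not the per-group rates; those rates are implied by the two figures and can be recovered as a quick consistency check (the derived group rates below are inferred, not reported in the trial):

```python
# Relative reduction = absolute reduction / control-group rate,
# so the control-group mortality rate follows from the two reported numbers.
absolute_reduction = 12.4   # percentage points
relative_reduction = 0.580  # 58.0%

control_mortality = absolute_reduction / relative_reduction        # implied control rate
experimental_mortality = control_mortality - absolute_reduction    # implied MLA-arm rate

print(f"control ~ {control_mortality:.1f}%, experimental ~ {experimental_mortality:.1f}%")
```

    This implies roughly 21% mortality in the control arm versus roughly 9% in the experimental arm, consistent with the reported reductions.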

    Automation of Patient Trajectory Management: A deep-learning system for critical care outreach

    The application of machine learning models to big data has become ubiquitous; however, their successful translation into clinical practice is currently mostly limited to the field of imaging. Despite much interest and promise, many complex and interrelated barriers exist in clinical settings, and these must be addressed systematically before widespread adoption of these technologies. There is limited evidence of comprehensive efforts to consider not only raw performance metrics but also effective deployment, particularly the ways in which these models are perceived, used and accepted by clinicians. The critical care outreach team at St Vincent’s Public Hospital wants to automatically prioritise its workload by predicting in-patient deterioration risk, presented as a watch-list application. This work proposes that the proactive management of in-patients at risk of serious deterioration provides a comprehensive case study in which to understand clinician readiness to adopt deep-learning technology, given the significant known limitations of existing manual processes. Herein is described the development of a proof-of-concept application that uses as its input the subset of real-time clinical data available in the EMR. This data set poses the noteworthy challenge of not including any electronically recorded vital signs data. Despite this, the system meets or exceeds similar benchmark models for predicting in-patient death and unplanned ICU admission, using a recurrent neural network architecture extended with a novel data-augmentation strategy. This augmentation method has been re-implemented on the public MIMIC-III data set to confirm its generalisability, and is notable for its applicability to discrete time-series data.
Furthermore, it is rooted in knowledge of how data entry is performed within the clinical record and is therefore not restricted to a single clinical domain, instead having the potential for wide-ranging impact. The system was presented to likely end-users to understand their readiness to adopt it into their workflow, using the Technology Adoption Model. In addition to confirming the feasibility of predicting risk from this limited data set, this study investigates clinician readiness to adopt artificial intelligence in the critical care setting, using a two-pronged strategy that addresses technical and clinically focused research questions in parallel.
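    The abstract does not specify the architecture or the augmentation strategy, but the core idea of a recurrent model summarising a sequence of discrete EMR events into a risk estimate can be sketched minimally. Everything below (dimensions, random weights, the event matrix) is an illustrative placeholder, not the system described:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(h, x, Wh, Wx, b):
    """One vanilla recurrent step: update the hidden state from one event vector."""
    return np.tanh(h @ Wh + x @ Wx + b)

# Hypothetical sizes: 8 features per charted EMR event, 16 hidden units.
n_feat, n_hid = 8, 16
Wh = rng.normal(scale=0.1, size=(n_hid, n_hid))
Wx = rng.normal(scale=0.1, size=(n_feat, n_hid))
b = np.zeros(n_hid)

# A sequence of 20 discrete charting events (e.g. pathology results, notes metadata);
# note no vital signs are assumed, matching the data constraint described above.
events = rng.normal(size=(20, n_feat))
h = np.zeros(n_hid)
for x in events:
    h = rnn_step(h, x, Wh, Wx, b)

# A linear read-out of the final state would give a deterioration-risk logit
# for ranking patients on a watch-list.
risk_logit = h @ rng.normal(scale=0.1, size=n_hid)
```

    In a trained system the weights would be learned and the read-out calibrated; the point here is only the shape of the computation: variable-length discrete event sequences compressed into a fixed-size state that can be ranked.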