
    A Systematic Approach to Manage Clinical Deterioration on Inpatient Units in the Health Care System

    The transformation of health care delivery in the United States is accelerating rapidly, driven by many variables, including health care reform and concurrent adjustments in regulations governing resident work hours. The evolving care delivery model has exposed a vulnerability of the health system, specifically in academic medical centers of the United States. Academic medical centers have established a care delivery model grounded in resident presence and performance. With changes in resident work expectations and reduced time spent in hospitals, an urgent need exists to evaluate and recreate a model of care that produces quality outcomes in an efficient, service-driven organization. One potential care model that would stabilize organizations is the infusion of advanced practice nurses (APNs) with the expanded skills and knowledge to provide practice continuity in the critical care environment. A Medicare demonstration project is proposed to fund an expanded APN role and an alteration in the care delivery model. Formative and summative evaluation of the impact of such an expanded practice role is included in the proposed project. An evolved partnership between the advanced practice nurse and the physician would fill some of the gaps in today's delivery system. As the complexity and acuity of hospitalized patients escalate, innovation is needed to ensure a care model that fosters achievement of the quality outcomes patients expect and deserve.

    Rapid Response Teams versus Critical Care Outreach Teams: Unplanned Escalations in Care and Associated Outcomes

    The incidence of unplanned escalations during hospitalization is undocumented, but estimates may be as high as 1.2 million occurrences per year in the United States. Rapid Response Teams (RRT) were developed for the early recognition and treatment of deteriorating patients to deliver time-sensitive interventions, but evidence related to optimal activation criteria and structure is limited. The purpose of this study is to determine whether an Early Warning Score-based Critical Care Outreach (CCO) model is related to the frequency of unplanned intra-hospital escalations in care compared with an RRT system based on staff nurse identification of vital sign derangements and physical assessments. The RRT model, in which staff nurses identified vital sign derangements to activate the system, was compared with the addition of a CCO model, in which rapid response nurses activated the system based on Early Warning Score line graphs of patient condition over time. Logistic regressions were used to examine retrospective data from administrative datasets at a 237-bed community non-teaching hospital during two periods: 1) a baseline RRT period (n=5,875; Phase 1: October 1, 2010 – March 31, 2011), and 2) an intervention RRT/CCO period (n=6,273; Phase 2: October 1, 2011 – March 31, 2012). The strongest predictor of unplanned escalations to the Intensive Care Unit was the type of rapid response system model: unplanned ICU transfers were 1.4 times more likely to occur during the Phase 1 RRT period. In contrast, the type of rapid response model was not a significant predictor when all unplanned escalations of any type were grouped together (medical-surgical-to-intermediate, medical-surgical-to-ICU, and intermediate-to-ICU). This is the first study to report a relationship between unplanned escalations and different rapid response models. Based on the finding of fewer unplanned ICU transfers in the setting of a CCO model, health services researchers and clinicians should consider using automated Early Warning Score graphs for hospital-wide surveillance of patient condition as a safety strategy.
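    To make the reported effect size concrete, the sketch below computes an odds ratio from a two-by-two table of study phase by unplanned ICU transfer, the quantity behind the study's logistic regression finding. The event counts are invented placeholders; only the phase denominators (n=5,875 and n=6,273) and the approximate 1.4-fold odds come from the abstract.

```python
# Hypothetical sketch: odds of unplanned ICU transfer by rapid response model.
# Event counts below are placeholders chosen to reproduce an odds ratio near
# the reported 1.4; they are not taken from the study.

def odds_ratio(events_a, n_a, events_b, n_b):
    """Odds of the event in group A relative to group B."""
    odds_a = events_a / (n_a - events_a)
    odds_b = events_b / (n_b - events_b)
    return odds_a / odds_b

# Phase 1: RRT only (n=5,875); Phase 2: RRT + CCO (n=6,273).
print(round(odds_ratio(events_a=120, n_a=5875, events_b=90, n_b=6273), 2))  # ~1.43
```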

    Accuracy and Efficiency of Recording Pediatric Early Warning Scores Using an Electronic Physiological Surveillance System Compared With Traditional Paper-Based Documentation

    Pediatric Early Warning Scores are advocated to assist health professionals in identifying early signs of serious illness or deterioration in hospitalized children. Scores are derived from the weighting applied to recorded vital signs and clinical observations reflecting deviation from a predetermined "norm." Higher aggregate scores trigger an escalation in care aimed at preventing critical deterioration. Process errors made while recording these data, including plotting or calculation errors, have the potential to impede the reliability of the score. To test this hypothesis, we conducted a controlled study of documentation using five clinical vignettes. We measured the accuracy of vital sign recording, score calculation, and the time taken to complete documentation using a handheld electronic physiological surveillance system, VitalPAC Pediatric, compared with traditional paper-based charts. We explored the user acceptability of both methods using a Web-based survey. Twenty-three staff participated in the controlled study. The electronic physiological surveillance system improved the accuracy of vital sign recording (98.5% versus 85.6%, P < .02) and of Pediatric Early Warning Score calculation (94.6% versus 55.7%, P < .02), and saved time (68 versus 98 seconds, P < .002) compared with paper-based documentation. Twenty-nine staff completed the Web-based survey. They perceived that the electronic physiological surveillance system offered safety benefits by reducing human error while providing instant visibility of recorded data to the entire clinical team.
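    As background on how such aggregate scores are computed, here is a minimal sketch of a band-based early warning calculation. The vital-sign bands and weights are illustrative assumptions only; they are not the VitalPAC Pediatric algorithm, and real pediatric charts use age-specific reference ranges.

```python
# Illustrative band-based early warning score (assumed bands, not a real chart).

def band_score(value, bands):
    """Return the weight of the first (low, high, weight) band containing value."""
    for low, high, weight in bands:
        if low <= value <= high:
            return weight
    return 3  # outside every band: maximum weight

# Hypothetical bands for a school-age child; real charts are age-specific.
HEART_RATE_BANDS = [(90, 120, 0), (70, 89, 1), (121, 150, 1), (151, 180, 2)]
RESP_RATE_BANDS = [(20, 30, 0), (15, 19, 1), (31, 40, 1), (41, 50, 2)]

def pews(heart_rate, resp_rate):
    """Aggregate score; higher values would trigger an escalation in care."""
    return band_score(heart_rate, HEART_RATE_BANDS) + band_score(resp_rate, RESP_RATE_BANDS)

print(pews(heart_rate=165, resp_rate=44))  # 2 + 2 = 4
```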

    Optimising paediatric afferent component early warning systems: a hermeneutic systematic literature review and model development

    Objective: To identify the core components of successful early warning systems for detecting and initiating action in response to clinical deterioration in paediatric inpatients. Methods: A hermeneutic systematic literature review, informed by translational mobilisation theory and normalisation process theory, was used to synthesise 82 studies of paediatric and adult early warning systems and interventions to support the detection of clinical deterioration and escalation of care. This method, which is designed to develop understanding, enabled the development of a propositional model of an optimal afferent component early warning system. Results: Detecting deterioration and initiating action in response to clinical deterioration in paediatric inpatients involves several challenges, and the potential failure points in early warning systems are well documented. Track and trigger tools (TTTs) are commonly used and have value in supporting key mechanisms of action but depend on certain preconditions for successful integration into practice. Several supplementary interventions have been proposed to improve the effectiveness of early warning systems, but there is limited evidence to recommend their wider use, owing to the weight and quality of the evidence, the extent to which systems are conditioned by the local clinical context, and the need to attend to relationships among system components, which do not work in isolation. While it was not possible to make empirical recommendations for practice, the review methodology generated theoretical inferences about the core components of an optimal early warning system. These are presented as a propositional model conceptualised as three subsystems: detection, planning and action. Conclusions: There is growing consensus on the need to think beyond TTTs to improve the detection of and response to clinical deterioration. Clinical teams wishing to improve early warning systems can use the model to consider systematically the constellation of factors necessary to support detection, planning and action, and to consider how these arrangements can be implemented in their local context.

    Managing the Prevention of In-Hospital Resuscitation by Early Detection and Treatment of High-Risk Patients

    In hospitalized patients, cardiorespiratory collapse is mostly preceded by a distinct period of deterioration. This deterioration can be discovered by systematic quantification of a set of clinical parameters. The combination of such a detection system, to identify patients at risk at an early stage, and a rapid response team, which can intervene immediately, can be implemented to prevent life-threatening situations and reduce the incidence of in-hospital cardiac arrests outside the intensive care setting. The effectiveness of both systems is influenced by the trigger criteria used, the number of rapid response team (RRT) activations, the inclusion or exclusion of patients with a DNR code >3, proactive rounding, the team composition, and its response time. Each of these elements should be optimized for maximal efficacy, and both systems need to work in tandem with little delay between patient deterioration, accurate detection, and swift intervention. Dependable diagnostics and scoring protocols must be implemented, along with a vigilant, functional, and experienced RRT available around the clock. This implies a significant financial investment to provide a fast intervention that is only sporadically required and to sustain the alertness of the people involved.

    Early indication of decompensated heart failure in patients on home-telemonitoring: a comparison of prediction algorithms based on daily weight and noninvasive transthoracic bio-impedance

    Background: Heart failure (HF) is a common reason for hospitalization. Admissions might be prevented by early detection of and intervention for decompensation. Conventionally, changes in weight, a possible measure of fluid accumulation, have been used to detect deterioration. Transthoracic impedance may be a more sensitive and accurate measure of fluid accumulation. Objective: In this study, we review previously proposed predictive algorithms using body weight and noninvasive transthoracic bio-impedance (NITTI) to predict HF decompensations. Methods: We monitored 91 patients with chronic HF for an average of 10 months using a weight scale and a wearable bio-impedance vest. Three algorithms were tested, using simple rule-of-thumb differences (RoT), moving averages (MACD), or cumulative sums (CUSUM). Results: Algorithms using NITTI in the 2 weeks preceding decompensation predicted events (P<.001); however, algorithms using weight alone did not. Cross-validation showed that NITTI improved the sensitivity of all algorithms tested and that trend algorithms provided the best performance for either measurement (Weight-MACD: 33%, NITTI-CUSUM: 60%), in contrast to the simpler rules-of-thumb (Weight-RoT: 20%, NITTI-RoT: 33%) proposed in HF guidelines. Conclusions: NITTI measurements decrease before decompensations and, combined with trend algorithms, improve the detection of HF decompensation over current guideline rules; however, many alerts are not associated with clinically overt decompensation.
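    For readers unfamiliar with the trend detectors compared above, the following is a sketch of a one-sided CUSUM applied to a simulated daily impedance series. The reference value k and threshold h are assumed parameters; the paper's exact settings, and its MACD and rule-of-thumb variants, are not reproduced here.

```python
import numpy as np

def cusum_alert(series, k=0.5, h=4.0):
    """Flag sustained downward drift (e.g., falling transthoracic impedance).
    Returns the index of the first alert, or None. k and h are illustrative."""
    z = (series - series.mean()) / series.std()  # crude standardization
    s = 0.0
    for i, value in enumerate(z):
        s = min(0.0, s + value + k)  # accumulate only drift more negative than -k
        if s < -h:
            return i
    return None

# Simulated daily impedance: 40 stable days, then a sustained pre-decompensation decline.
rng = np.random.default_rng(0)
signal = np.concatenate([50 + rng.normal(0, 1, 40),
                         50 - 0.4 * np.arange(20) + rng.normal(0, 1, 20)])
print(cusum_alert(signal))  # alerts partway into the declining segment
```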

    Predicting out of intensive care unit cardiopulmonary arrest or death using electronic medical record data

    BACKGROUND: Accurate, timely, and automated identification of patients at high risk for severe clinical deterioration, using readily available clinical information in the electronic medical record (EMR), could inform health systems to target scarce resources and save lives. METHODS: We identified 7,466 patients admitted to a large, public, urban academic hospital between May 2009 and March 2010. An automated clinical prediction model for out-of-intensive-care-unit (ICU) cardiopulmonary arrest and unexpected death was created in the derivation sample (50% randomly selected from the total cohort) using multivariable logistic regression. The automated model was then validated in the remaining 50% of the total cohort (validation sample). The primary outcome was a composite of resuscitation events and death (RED). RED included cardiopulmonary arrest, acute respiratory compromise, and unexpected death. Predictors were measured using data from the previous 24 hours. Candidate variables included vital signs, laboratory data, physician orders, medications, floor assignment, and the Modified Early Warning Score (MEWS), among other treatment variables. RESULTS: RED rates were 1.2% of patient-days for the total cohort. Fourteen variables were independent predictors of RED, including age, oxygenation, diastolic blood pressure, arterial blood gas and laboratory values, emergent orders, and assignment to a high-risk floor. The automated model had excellent discrimination (c-statistic=0.85) and calibration, and was more sensitive (51.6% vs 42.2%) and specific (94.3% vs 91.3%) than the MEWS alone. The automated model predicted RED events 15.9 hours before they occurred, earlier than Rapid Response Team (RRT) activation (5.7 hours prior to an event, p=0.003). CONCLUSION: An automated model harnessing EMR data offers great potential for identifying RED and was superior to both a prior risk model and the human judgment-driven RRT.
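    The modeling approach described, a multivariable logistic regression derived on half the cohort and validated on the other half using the c-statistic, can be sketched as follows. The features and outcome are simulated placeholders; only the cohort size, the 50/50 split, and the evaluation metric come from the abstract.

```python
# Hedged sketch of derivation/validation modeling with a c-statistic (AUROC).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 7466                                   # cohort size from the abstract
X = rng.normal(size=(n, 4))                # placeholder features (e.g., age, oxygenation)
true_logit = -4.4 + X @ np.array([0.8, -0.6, -0.5, 0.9])
y = rng.random(n) < 1 / (1 + np.exp(-true_logit))  # simulated low-base-rate RED outcome

# 50/50 derivation/validation split, mirroring the study design.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression().fit(X_dev, y_dev)
print("c-statistic:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
```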

    Rapid Response Team Utilization of Modified Early Warning Scores to Improve Patient Outcomes

    This retrospective, descriptive study was designed to (a) determine whether the Modified Early Warning Score (MEWS) risk assessment tool identified moderate- to high-risk patients prior to activation of the Rapid Response Team (RRT), and (b) determine how much time elapsed from the onset of clinical deterioration until activation of the RRT. A MEWS was applied to the vital signs documented in the medical records of a convenience sample of 108 adult patients, aged 19 to 99 years, who had experienced an RRT activation. A risk assessment score was assigned for the time of the RRT activation as well as for every previously documented set of vital signs prior to the RRT call, until the MEWS reached a low-risk score of 0 to 1. Of the 108 subjects, 36 had a low-risk MEWS (score 0 to 1) at the time of the RRT activation; 72 had a moderate-risk (score 2 to 3) or high-risk (score 4 or greater) MEWS at the time of the RRT activation. On average, deterioration could have been detected 10.14 hours earlier had a MEWS system been in place. The data from this study indicate a need for more frequent observation and documentation of vital signs by nursing staff, as the overall average interval between vital sign measurements (with MEWS applied) was 291.60 minutes (4.86 hours) when clinical deterioration was evident. These data show that there is a delay in activation of the RRT and that implementation of the MEWS system would increase RRT awareness of patients with critically abnormal vital signs, so that they can be assessed and clinical deterioration treated to prevent a catastrophic event.
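    The retrospective review logic, walking back through time-stamped MEWS values from the moment of RRT activation to find how much earlier deterioration was first evident, can be sketched as below. The timestamps and scores are placeholders, not study data; the low-risk threshold of 0 to 1 follows the abstract's banding.

```python
from datetime import datetime

def hours_of_missed_warning(scored_vitals, activation_time):
    """scored_vitals: list of (timestamp, mews) in time order, ending at activation.
    Walk backwards until a low-risk score (0-1) and report the lead time in hours."""
    earliest = None
    for ts, mews in reversed(scored_vitals):
        if mews <= 1:               # low risk: deterioration not yet evident
            break
        earliest = ts
    return 0.0 if earliest is None else (activation_time - earliest).total_seconds() / 3600

vitals = [
    (datetime(2024, 1, 1, 8, 0), 1),    # low risk
    (datetime(2024, 1, 1, 13, 0), 3),   # moderate risk: deterioration first evident
    (datetime(2024, 1, 1, 18, 0), 4),   # high risk at RRT activation
]
print(hours_of_missed_warning(vitals, datetime(2024, 1, 1, 18, 0)))  # 5.0
```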

    Missing data imputation techniques for wireless continuous vital signs monitoring

    Wireless vital signs sensors are increasingly used for remote patient monitoring, but data analysis is often challenged by missing data periods. This study explored the performance of various imputation techniques for continuous vital signs measurements. Wireless vital signs measurements (heart rate, respiratory rate, blood oxygen saturation, axillary temperature) from surgical ward patients were used for repeated random simulation of missing data periods (gaps) of 5–60 min in two-hour windows. Gaps were imputed using linear interpolation, spline interpolation, the last-observation-carried-forward and mean-carried-forward techniques, and cluster-based prognosis. Imputation performance was evaluated using the mean absolute error (MAE) between original and imputed gap samples. In addition, effects on signal features (window slope and mean) and early warning scores (EWS) were explored. Gaps were simulated in 1,743 data windows obtained from 52 patients. Although MAE ranges overlapped, the median MAE was structurally lowest for linear interpolation (heart rate: 0.9–2.6 beats/min, respiratory rate: 0.8–1.8 breaths/min, temperature: 0.04–0.17 °C, oxygen saturation: 0.3–0.7% for 5–60 min gaps) but up to twice as high for the other techniques. Three techniques resulted in larger ranges of signal feature bias compared with no imputation. Imputation led to EWS misclassification in 1–8% of all simulations. Imputation error ranges vary between imputation techniques and increase with gap length. Imputation may result in larger signal feature bias than performing no imputation and can affect patient risk assessment, as illustrated by the EWS. Accordingly, careful implementation and selection of imputation techniques is warranted. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s10877-023-00975-w.
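    The gap-simulation experiment can be illustrated with pandas built-ins, as in the sketch below. The heart-rate series, gap position, and gap length are invented; the paper's cluster-based prognosis technique is not reproduced, and the mean-carried-forward variant here simply carries the pre-gap mean forwards.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
hr = pd.Series(80 + np.cumsum(rng.normal(0, 0.5, 120)))  # two hours of 1/min samples
original = hr.copy()
hr.iloc[50:70] = np.nan                                   # simulate a 20-min gap

candidates = {
    "linear": hr.interpolate(method="linear"),
    "spline": hr.interpolate(method="spline", order=2),   # requires scipy
    "locf": hr.ffill(),                                   # last observation carried forward
    "mean_cf": hr.fillna(hr.iloc[:50].mean()),            # pre-gap mean carried forward
}
for name, imputed in candidates.items():
    mae = (imputed.iloc[50:70] - original.iloc[50:70]).abs().mean()
    print(f"{name}: MAE = {mae:.2f} beats/min")
```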