
    Physician satisfaction with a multi-platform digital scheduling system

    Objective: Physician shift schedules are often created manually, using paper or a shared online spreadsheet. Mistakes are not unusual, leading to last-minute scrambles to cover a shift. We developed a web-based shift scheduling system and a mobile application to facilitate both monthly scheduling and shift exchanges between physicians. The primary objective was to compare physician satisfaction before and after the mobile application was implemented. Methods: Over a 9-month period, three surveys using a 4-point Likert-type scale were performed to assess physician satisfaction. The first survey was conducted three months before the mobile application release, the second three months after implementation, and the last six months after implementation. Results: 51 physicians (77%) answered the baseline survey. Of those, 32 (63%) were male, with a mean age of 37.8 ± 5.5 years. Prior to the mobile application implementation, 36 (70%) of the responders were using more than one method to carry out shift exchanges, and only 20 (40%) were using the official department report sheet to document shift exchanges. The second and third surveys were each answered by 48 (73%) physicians. Forty-eight (98%) of them found the mobile application easy or very easy to install, and 47 (96%) did not want to go back to the previous method. Regarding physician satisfaction, at baseline 37% of the physicians were unsatisfied or very unsatisfied with shift scheduling. After the mobile application was implemented, only 4% reported being unsatisfied (OR = 0.11, p < 0.001). The satisfaction level improved from 63% to 96% between the first and the last survey. Satisfaction levels increased significantly across the three time points (OR = 13.33, p < 0.001). Conclusion: Our web- and mobile-phone-based scheduling system resulted in better physician satisfaction.
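
    As a rough illustration of the kind of before/after comparison reported above (odds ratio with a significance test on satisfaction proportions), the sketch below uses a hypothetical 2x2 table of unsatisfied vs. satisfied responders; the counts are placeholders and are not the study's raw data.

        # Hedged sketch: comparing the proportion of unsatisfied physicians
        # before vs. after a mobile application rollout. Counts below are
        # hypothetical placeholders, NOT the study's data.
        from scipy.stats import fisher_exact

        # 2x2 table: rows = [after, before], columns = [unsatisfied, satisfied]
        table = [
            [2, 46],   # after implementation (hypothetical counts)
            [18, 30],  # before implementation (hypothetical counts)
        ]

        odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
        print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")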

    Catheter Related Bloodstream Infection (CR-BSI) in ICU Patients: Making the Decision to Remove or Not to Remove the Central Venous Catheter

    Background: Approximately 150 million central venous catheters (CVCs) are used each year in the United States. Catheter-related bloodstream infections (CR-BSI) are one of the most important complications of CVCs. Our objective was to compare in-hospital mortality when the catheter was removed versus not removed in patients with CR-BSI. Methods: We reviewed all episodes of CR-BSI that occurred in our intensive care unit (ICU) from January 2000 to December 2008. The standard method was defined as a patient with a CVC and at least one positive blood culture obtained from a peripheral vein, plus a positive semiquantitative (>15 CFU) culture of a catheter segment from which the same organism was isolated. The conservative method was defined as a patient with a CVC and at least one positive blood culture obtained from a peripheral vein, plus one of the following: (1) a differential time to positivity of the CVC culture versus the peripheral culture of more than 2 hours, or (2) simultaneous quantitative blood cultures with a 5:1 ratio (CVC versus peripheral). Results: 53 episodes of CR-BSI (37 diagnosed by the standard method and 16 by the conservative method) were diagnosed during the study period. There was no statistically significant difference in in-hospital mortality between the standard and conservative methods (57% vs. 75%, p = 0.208) in ICU patients. Conclusion: In our study there was no statistically significant difference in in-hospital mortality between the standard and conservative methods.
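
    To make the two laboratory criteria of the conservative method concrete, here is a minimal sketch of the decision rule as the abstract describes it (differential time to positivity > 2 hours, or a quantitative CVC-to-peripheral colony-count ratio of at least 5:1). The function and parameter names are illustrative, not part of the study's actual workflow.

        # Hedged sketch of the "conservative method" criteria described above.
        # Function and parameter names are illustrative only.

        def meets_conservative_criteria(
            cvc_time_to_positivity_h: float,
            peripheral_time_to_positivity_h: float,
            cvc_cfu_per_ml: float,
            peripheral_cfu_per_ml: float,
        ) -> bool:
            """Return True if either conservative CR-BSI criterion is met.

            Assumes a positive peripheral blood culture was already confirmed.
            """
            # Criterion 1: CVC culture turns positive > 2 hours before the peripheral one.
            differential_time_h = peripheral_time_to_positivity_h - cvc_time_to_positivity_h
            if differential_time_h > 2.0:
                return True

            # Criterion 2: simultaneous quantitative cultures with a >= 5:1 CVC:peripheral ratio.
            if peripheral_cfu_per_ml > 0 and cvc_cfu_per_ml / peripheral_cfu_per_ml >= 5.0:
                return True

            return False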

    Data from: Inaccuracy of Venous Point-of-Care Glucose Measurements in Critically Ill Patients: A Cross-Sectional Study

    Data collected and presented here include arterial measurements obtained with the Precision PCx (Abbott®, USA); arterial, fingerstick and venous measurements obtained with the Accu-chek® Advantage II (Roche®, Switzerland); and arterial measurements obtained at the central lab (reference). Sequential Organ Failure Assessment score (SOFA score), Acute Physiology and Chronic Health Evaluation II (APACHE II) score, mean arterial blood pressure, peripheral body temperature, hematocrit level, arterial pH, arterial oxygen saturation, room temperature and humidity, total bilirubin levels, triglycerides, use of vasopressors (norepinephrine and dopamine), acetaminophen, ascorbic acid, mannitol and mechanical ventilation are also provided.
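
    A minimal sketch of how a dataset like this could be loaded and inspected with pandas is shown below; the file name glucose_poc_data.csv and the column names are assumptions for illustration, not the dataset's actual layout.

        # Hedged sketch: loading and summarising a dataset like the one described above.
        # The file name and column names are assumed for illustration only.
        import pandas as pd

        df = pd.read_csv("glucose_poc_data.csv")  # hypothetical file name

        # Inspect which measurement and covariate columns are available.
        print(df.columns.tolist())

        # Example: summary statistics for point-of-care vs. reference measurements
        # (hypothetical column names).
        print(df[["glucose_central_lab", "glucose_precision_pcx_arterial",
                  "glucose_accuchek_arterial"]].describe())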

    Inaccuracy of Venous Point-of-Care Glucose Measurements in Critically Ill Patients: A Cross-Sectional Study.

    Current guidelines and consensus recommend arterial and venous samples as equally acceptable for blood glucose assessment with point-of-care devices, but there is limited evidence to support this recommendation. We evaluated the accuracy of two devices for bedside point-of-care blood glucose measurements using arterial, fingerstick and catheter venous blood samples in ICU patients, and assessed which factors could impair their accuracy. 145 patients from a 41-bed adult mixed ICU in a tertiary care hospital were prospectively enrolled. Fingerstick, central venous (catheter) and arterial (indwelling catheter) blood samples were collected simultaneously, once per patient. Arterial measurements obtained with the Precision PCx, and arterial, fingerstick and venous measurements obtained with the Accu-chek Advantage II, were compared to arterial central lab measurements. Agreement between point-of-care and laboratory measurements was evaluated with Bland-Altman analysis, and multiple linear regression models were used to investigate interference from associated factors. The mean difference between Accu-chek arterial samples and the central lab was 10.7 mg/dL (95% limits of agreement [LA] -21.3 to 42.7 mg/dL), and between the Precision PCx and the central lab it was 18.6 mg/dL (95% LA -12.6 to 49.5 mg/dL). Accu-chek fingerstick samples versus central lab arterial samples presented a similar bias (10.0 mg/dL) but wider 95% LA (-31.8 to 51.8 mg/dL). Agreement between venous samples and the arterial central lab was the poorest (mean bias 15.1 mg/dL; 95% LA -51.7 to 81.9 mg/dL). Hyperglycemia, low hematocrit, and acidosis were associated with larger differences between arterial and venous blood measurements with the two glucometers and the central lab. Vasopressor administration was associated with increased error for fingerstick measurements. Sampling from central venous catheters should not be used for glycemic control in ICU patients. In addition, the reliability of the two evaluated glucometers was insufficient. Error with the Accu-chek Advantage II increases mostly with central venous samples. Hyperglycemia, lower hematocrit, acidosis, and vasopressor administration increase measurement error.
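
    The agreement analysis described above (Bland-Altman bias and 95% limits of agreement) can be sketched as below; the paired values are placeholders and the function is a generic illustration, not the study's analysis code.

        # Hedged sketch of a Bland-Altman agreement calculation (bias and 95% limits
        # of agreement) between a point-of-care device and the central lab reference.
        # The example values are placeholders, not study data.
        import numpy as np

        def bland_altman(device: np.ndarray, reference: np.ndarray):
            """Return mean bias and 95% limits of agreement (bias ± 1.96 * SD)."""
            diff = device - reference
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, bias - 1.96 * sd, bias + 1.96 * sd

        # Placeholder paired measurements (mg/dL), for illustration only.
        poc = np.array([110.0, 95.0, 180.0, 142.0, 76.0])
        lab = np.array([102.0, 90.0, 171.0, 135.0, 70.0])

        bias, lower, upper = bland_altman(poc, lab)
        print(f"bias = {bias:.1f} mg/dL, 95% LA = {lower:.1f} to {upper:.1f} mg/dL")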

    SEVERITAS: An externally validated mortality prediction for critically ill patients in low and middle-income countries

    Objective: Severity of illness scores used in critical care for benchmarking, quality assurance and risk stratification have mainly been created in high-income countries. In low and middle-income countries (LMICs), they cannot be widely utilized because they demand large amounts of data that may not be available (e.g. laboratory results). We attempted to create a new severity prognostication model using fewer variables that are easier to collect in an LMIC. Setting: Two intensive care units, one private and one public, in São Paulo, Brazil. Patients: Patients admitted to an ICU for the first time. Interventions: None. Measurements and Main Results: The dataset from the private ICU was used as a training set for model development to predict in-hospital mortality. Three different machine learning models were applied to five different blocks of candidate variables. The resulting 15 models were then validated on a separate dataset from the public ICU, and discrimination and calibration were compared to identify the best model. The best performing model used logistic regression on a small set of 10 variables: highest respiratory rate, lowest systolic blood pressure, highest body temperature and Glasgow Coma Scale during the first hour of ICU admission; age; prior functional capacity; type of ICU admission; source of ICU admission; and length of hospital stay prior to ICU admission. On the validation dataset, our new score, named SEVERITAS, had an area under the receiver operating characteristic curve of 0.84 (0.82 – 0.86) and a standardized mortality ratio of 1.00 (0.91 – 1.08). Moreover, SEVERITAS had discrimination similar to SAPS-3 and better than the simplified TropICS and R-MPM. Conclusions: Our study proposes a new ICU mortality prediction model, built with simple logistic regression on a small set of easily collected variables, that may be better suited than currently available models for use in low and middle-income countries.
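
    A minimal sketch of the modelling approach the abstract describes (logistic regression on a handful of admission variables, externally validated with AUROC and a standardized mortality ratio) is shown below. The file names, column names and encodings are assumptions for illustration, not the published SEVERITAS specification.

        # Hedged sketch: logistic regression on a small set of admission variables,
        # validated on an external dataset with AUROC and standardized mortality
        # ratio (SMR). File and column names are assumptions; categorical variables
        # are assumed to be numerically encoded in the CSVs already.
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        FEATURES = [
            "highest_respiratory_rate", "lowest_systolic_bp", "highest_temperature",
            "glasgow_coma_scale", "age", "prior_functional_capacity",
            "admission_type", "admission_source", "hospital_los_before_icu",
        ]  # illustrative encodings of the variables named in the abstract

        train = pd.read_csv("private_icu.csv")   # hypothetical development cohort
        valid = pd.read_csv("public_icu.csv")    # hypothetical validation cohort

        model = LogisticRegression(max_iter=1000)
        model.fit(train[FEATURES], train["in_hospital_death"])

        pred = model.predict_proba(valid[FEATURES])[:, 1]
        auroc = roc_auc_score(valid["in_hospital_death"], pred)
        smr = valid["in_hospital_death"].sum() / pred.sum()  # observed / expected deaths
        print(f"AUROC = {auroc:.2f}, SMR = {smr:.2f}")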

    Four-point Likert satisfaction scale.

    S1: survey done 3 months before the mobile application implementation; S2: survey done 3 months after the implementation; S3: survey done 6 months after the implementation. *p < 0.001, **p = 0.52, ***p < 0.001