
    The development and validation of a scoring tool to predict the operative duration of elective laparoscopic cholecystectomy

    Background: The ability to accurately predict operative duration has the potential to optimise theatre efficiency and utilisation, thus reducing costs and increasing staff and patient satisfaction. With laparoscopic cholecystectomy being one of the most commonly performed procedures worldwide, a tool to predict operative duration could be extremely beneficial to healthcare organisations. Methods: Data collected from the CholeS study on patients undergoing cholecystectomy in UK and Irish hospitals between 04/2014 and 05/2014 were used to study operative duration. A multivariable binary logistic regression model was produced to identify significant independent predictors of long (> 90 min) operations. The resulting model was converted to a risk score, which was subsequently validated on a second cohort of patients using ROC curves. Results: After exclusions, data were available for 7227 patients in the derivation (CholeS) cohort. The median operative duration was 60 min (interquartile range 45–85), with 17.7% of operations lasting longer than 90 min. Ten factors were found to be significant independent predictors of operative durations > 90 min, including ASA grade, age, previous surgical admissions, BMI, gallbladder wall thickness and CBD diameter. A risk score was then produced from these factors and applied to a cohort of 2405 patients from a tertiary centre for external validation. This returned an area under the ROC curve of 0.708 (SE = 0.013, p < 0.001), with the probability of operations lasting > 90 min increasing more than eightfold, from 5.1% to 41.8%, between the extremes of the score. Conclusion: The scoring tool produced in this study was found to be significantly predictive of long operative durations on validation in an external cohort. As such, the tool may have the potential to enable organisations to better organise theatre lists and deliver greater efficiencies in care.
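
    As a rough illustration of the modelling approach described in this abstract, the sketch below fits a multivariable logistic regression for long (> 90 min) operations, converts the coefficients into an integer points score, and checks the score's discrimination on an external cohort with the area under the ROC curve. The predictor names, the points-derivation rule, and the cohort data frames are hypothetical stand-ins for illustration, not the published CholeS scoring tool.

```python
# Minimal sketch: logistic regression -> integer risk score -> external ROC validation.
# Predictor names, the points scheme, and the cohorts are assumptions, not the CholeS tool.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def derive_points(derivation: pd.DataFrame, predictors: list[str]) -> pd.Series:
    """Fit a logistic model for long operations (> 90 min) and convert each
    coefficient to integer points by dividing by the smallest absolute coefficient."""
    X, y = derivation[predictors], derivation["duration_gt_90min"]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    coefs = pd.Series(model.coef_[0], index=predictors)
    return (coefs / coefs.abs().min()).round().astype(int)

def apply_score(cohort: pd.DataFrame, points: pd.Series) -> np.ndarray:
    """Sum each patient's points to obtain a single risk score."""
    return cohort[list(points.index)].to_numpy() @ points.to_numpy()

# Hypothetical usage on an external validation cohort:
# points = derive_points(derivation_cohort, ["asa_grade", "age_band", "prior_surgical_admissions",
#                                            "bmi_band", "gb_wall_thickness", "cbd_diameter"])
# scores = apply_score(validation_cohort, points)
# auc = roc_auc_score(validation_cohort["duration_gt_90min"], scores)  # ~0.71 reported above
```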

    Factors influencing healthcare provider respondent fatigue answering a globally administered in-app survey

    Background Respondent fatigue, also known as survey fatigue, is a common problem in the collection of survey data. Factors that are known to influence respondent fatigue include survey length, survey topic, question complexity, and open-ended question type. There is a great deal of interest in understanding the drivers of physician survey responsiveness due to the value of information received from these practitioners. With the recent explosion of mobile smartphone technology, it has become possible to obtain survey data from users of mobile applications (apps) on a question-by-question basis. The author obtained basic demographic survey data as well as survey data related to an anesthesiology-specific drug called sugammadex, and leveraged nonresponse rates to examine factors that influenced respondent fatigue. Methods Primary data were collected between December 2015 and February 2017. Surveys and in-app analytics were collected from global users of a mobile anesthesia calculator app. The key independent variables were user country, healthcare provider role, rating of the importance of the app to personal practice, length of time in practice, and frequency of app use. The key dependent variable was a metric of respondent fatigue. Results Provider role and World Bank country income level were predictive of the rate of respondent fatigue for this in-app survey. Importance of the app to the provider and length of time in practice were moderately associated with fatigue. Frequency of app use was not associated with fatigue. This study focused on a survey with a topic closely related to the subject area of the app; respondent fatigue rates will likely change dramatically if the topic does not align closely. Discussion Although apps may serve as powerful platforms for data collection, response rates to in-app surveys may differ on the basis of important respondent characteristics. Studies should be carefully designed to mitigate fatigue and powered with an understanding of which respondent characteristics are associated with higher rates of respondent fatigue.
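
    To make the analysis concrete, below is a minimal sketch of one way to operationalise respondent fatigue as a per-user item nonresponse rate and relate it to respondent characteristics with a simple regression. The column names (user_id, role, income_level, and so on) are assumptions for illustration, not the study's actual schema or analysis code.

```python
# Minimal sketch: fatigue as the per-user item nonresponse rate, related to
# respondent characteristics with a simple regression. Column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

def fatigue_metric(responses: pd.DataFrame) -> pd.Series:
    """Fraction of presented survey items each user left unanswered (0 = answered all)."""
    answered_share = responses.groupby("user_id")["answer"].apply(lambda s: s.notna().mean())
    return 1.0 - answered_share

# users: one row per app user with provider role, World Bank income level,
# app-importance rating, years in practice, and frequency of app use (all assumed columns).
# users = users.join(fatigue_metric(responses).rename("fatigue"), on="user_id")
# model = smf.ols(
#     "fatigue ~ C(role) + C(income_level) + importance + years_in_practice + use_frequency",
#     data=users,
# ).fit()
# print(model.summary())  # which characteristics are associated with higher nonresponse
```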

    Determination of ED50 and time to effectiveness for intrathecal hydromorphone in laboring patients using Dixon’s up-and-down sequential allocation method

    Background With the increasing occurrence of drug shortages, understanding the pharmacokinetics of alternative intrathecal opioid administration has gained importance. In particular, additional data are needed to comprehensively evaluate the analgesic properties of intrathecal hydromorphone in the laboring patient. In a phase 2 clinical trial, we set out to determine the median effective dose (ED50) and time to effectiveness for this drug in this population. Methods Twenty women presenting for labor analgesia were prospectively enrolled, with doses assigned using Dixon’s up-and-down sequential allocation method. A combined spinal-epidural technique was used to deliver the determined dose of intrathecal hydromorphone. Visual analog pain scores assessing peak pain during serial uterine contractions were obtained. Effective pain relief was defined as achieving a pain score of less than or equal to 3 out of 10; the dose was deemed ineffective if the patient failed to achieve this level of relief within 30 min. Results The ED50 of hydromorphone in our population was 10.9 μg (95% confidence interval 5.6–16.2 μg). Amongst patients for whom the dose was effective, the median time to pain relief was 24 min. One patient experienced both nausea and pruritus; no other complications were noted. Conclusion Given its prolonged time to onset, intrathecal hydromorphone cannot be recommended over substantively better alternatives such as sufentanil and fentanyl. Trial registration Clinicaltrials.gov registration number: NCT01598506.
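
    For readers unfamiliar with the design, the sketch below illustrates the core of Dixon's up-and-down allocation: each patient's dose moves one step down after an effective dose and one step up after an ineffective one, and a simple point estimate of the ED50 is taken from the administered doses. The starting dose, step size, and helper function are assumptions for illustration, not the trial's protocol values.

```python
# Minimal sketch of Dixon's up-and-down sequential allocation.
# Starting dose and step size are assumed values, not the trial's protocol.
from statistics import mean

START_UG = 10.0   # assumed starting dose of intrathecal hydromorphone (micrograms)
STEP_UG = 2.0     # assumed fixed dose increment (micrograms)

def next_dose(current_dose: float, effective: bool) -> float:
    """Step the dose down after an effective response, up after an ineffective one."""
    return current_dose - STEP_UG if effective else current_dose + STEP_UG

def ed50_point_estimate(doses: list[float]) -> float:
    """Simple point estimate: the mean of the supplied doses. Dixon-style analyses
    typically pass only the doses administered after the first crossover."""
    return mean(doses)

# Hypothetical trial loop (effectiveness = pain score <= 3/10 within 30 min):
# dose, doses = START_UG, []
# for patient in enrolled_patients:                   # 20 laboring patients in this study
#     effective = assess_pain_relief(patient, dose)   # hypothetical helper
#     doses.append(dose)
#     dose = next_dose(dose, effective)
# print(f"ED50 estimate: {ed50_point_estimate(doses):.1f} micrograms")
```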

    Prediction of American Society of Anesthesiologists Physical Status Classification from preoperative clinical text narratives using natural language processing

    Background Electronic health records (EHR) contain large volumes of unstructured free-form text notes that richly describe a patient’s health and medical comorbidities. It is unclear if perioperative risk stratification can be performed directly from these notes without manual data extraction. We conduct a feasibility study using natural language processing (NLP) to predict the American Society of Anesthesiologists Physical Status Classification (ASA-PS) as a surrogate measure for perioperative risk. We explore prediction performance using four different model types and compare the use of different note sections versus the whole note. We use Shapley values to explain model predictions and analyze disagreement between model and human anesthesiologist predictions. Methods This was a single-center retrospective cohort analysis of EHR notes from patients undergoing procedures with anesthesia care across all procedural specialties over a 5-year period; patients assigned ASA VI and those without a preoperative evaluation note filed within 90 days before the procedure were excluded. NLP models were trained for each combination of the 4 model types and 8 note-derived text snippets. Model performance was compared using area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC). Error analysis and model explanation using Shapley values were conducted for the best performing model. Results The final dataset included 38,566 patients undergoing 61,503 procedures with anesthesia care. The prevalence of ASA-PS classes was 8.81% for ASA I, 31.4% for ASA II, 43.25% for ASA III, and 16.54% for ASA IV-V. The best performing models were the BioClinicalBERT model on the truncated note task (macro-average AUROC 0.845) and the fastText model on the full note task (macro-average AUROC 0.865). Shapley values yielded human-interpretable explanations of model predictions. Error analysis revealed that some original ASA-PS assignments may be incorrect and that the model made reasonable predictions in these cases. Conclusions Text classification models can accurately predict a patient’s illness severity using only free-form text descriptions of patients, without any manual data extraction. They can serve as an additional patient safety tool in the perioperative setting and reduce manual chart review for medical billing. Shapley feature attributions produce explanations that logically support model predictions and are understandable to clinicians.
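
    As a rough, simplified stand-in for the fastText and BioClinicalBERT models evaluated in the study, the sketch below trains a TF-IDF plus logistic regression classifier on note text and reports the macro-averaged one-vs-rest AUROC across ASA-PS classes. The data frame and column names are hypothetical, and this pipeline is not the study's actual modelling code.

```python
# Minimal sketch: multi-class classification of preoperative note text into ASA-PS
# classes, evaluated with macro-averaged one-vs-rest AUROC. A TF-IDF + logistic
# regression pipeline is used as a simple stand-in for fastText / BioClinicalBERT.
# Column names (note_text, asa_ps) are assumed for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def train_and_evaluate(notes, labels):
    """Train on note text and report macro-average one-vs-rest AUROC on a held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        notes, labels, test_size=0.2, stratify=labels, random_state=0
    )
    clf = make_pipeline(
        TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
        LogisticRegression(max_iter=2000),
    ).fit(X_tr, y_tr)
    probs = clf.predict_proba(X_te)   # shape: (n_samples, n_classes)
    return roc_auc_score(y_te, probs, multi_class="ovr", average="macro")

# auroc = train_and_evaluate(df["note_text"], df["asa_ps"])   # classes I, II, III, IV-V
# print(f"macro AUROC = {auroc:.3f}")                         # the paper reports ~0.845-0.865
```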

    Supplement – Supplemental material for Crowdsourcing sugammadex adverse event rates using an in-app survey: feasibility assessment from an observational study

    Supplemental material for Crowdsourcing sugammadex adverse event rates using an in-app survey: feasibility assessment from an observational study, by Craig S. Jabaley, Francis A. Wolf, Grant C. Lynde and Vikas N. O’Reilly-Shah, in Therapeutic Advances in Drug Safety.