
    Antibiotic prescribing in an intensive care unit: Findings from a public Malaysian setting

    Introduction: Data on antibiotic prescribing, together with knowledge and perception of it, in the Malaysian ICU setting are lacking. Objectives: To explore knowledge, perception, and antibiotic prescribing among specialists and advanced trainees in Malaysian ICUs. Materials and Methods: A cross-sectional survey was employed, consisting of three sections: knowledge, perception, and practice. Three case vignettes covering hospital-acquired pneumonia (HAP), infected necrotising pancreatitis (INP), and catheter-related bloodstream infection (CRBSI) were presented in the practice section to gather information on prescribing practice. Results: A total of 868 respondents were approached but only 104 responded (12.0% response rate). A total of 390 antibiotics from seven different classes were empirically prescribed for the three cases combined. Antibiotic prescribing compliance, defined as the correct choice of antibiotic and dosing, was 66.3%, 56.7%, and 19.2% for HAP, INP, and CRBSI, respectively. On perception, 97.2% of respondents conceded that antibiotic concentrations are inadequate, and 85.6% that dosing should be based on the MIC. The majority (94.2%) perceived that antibiotic dosing follows the PK/PD profile, but only half (50.9%) agreed that therapeutic drug monitoring should be routinely performed. On knowledge, all respondents acknowledged the PK/PD profiles of antibiotics, but only 64.4% were able to correlate a given antibiotic with its respective PK/PD profile. Only 13.5% of respondents were able to identify the best PD approach for β-lactam antibiotics in sepsis patients. Conclusion: Antibiotic prescribing was somewhat appropriate in Malaysian ICUs. Prolonged therapy and inadequate coverage are the main issues that need to be addressed, especially in CRBSI. Clinicians are conversant with the available antibiotics, but their comprehension of PK/PD is scant

    Beta-Lactam Infusion in Severe Sepsis (BLISS): a prospective, two-centre, open-labelled randomised controlled trial of continuous versus intermittent beta-lactam infusion in critically ill patients with severe sepsis

    This study aims to determine whether continuous infusion (CI) is associated with better clinical and pharmacokinetic/pharmacodynamic (PK/PD) outcomes than intermittent bolus (IB) dosing in critically ill patients with severe sepsis. This was a two-centre randomised controlled trial of CI versus IB dosing of beta-lactam antibiotics, which enrolled critically ill participants with severe sepsis who were not on renal replacement therapy (RRT). The primary outcome was clinical cure at 14 days after antibiotic cessation. Secondary outcomes were PK/PD target attainment, ICU-free days and ventilator-free days at day 28 post-randomisation, 14- and 30-day survival, and time to white cell count normalisation. A total of 140 participants were enrolled, with 70 allocated to each of CI and IB dosing. CI participants had higher clinical cure rates (56 versus 34 %, p = 0.011) and higher median ventilator-free days (22 versus 14 days) than IB participants, and a greater proportion attained the PK/PD target (free drug concentration above the MIC) than the IB arm on day 1 (97 versus 70 %, p < 0.001) and day 3 (97 versus 68 %, p < 0.001) post-randomisation. There was no difference in 14-day or 30-day survival between the treatment arms. In critically ill patients with severe sepsis not receiving RRT, CI demonstrated higher clinical cure rates and better PK/PD target attainment compared with IB dosing of beta-lactam antibiotics. Continuous beta-lactam infusion may be most advantageous for critically ill patients with high levels of illness severity who are not receiving RRT
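
    The PK/PD target in this trial (the time that the free beta-lactam concentration stays above the MIC, fT>MIC) can be illustrated with a simple simulation. The Python sketch below is not the trial's pharmacokinetic model; it uses a one-compartment model with hypothetical clearance, volume, protein binding, dose, and MIC, purely to show why continuous infusion tends to achieve a higher %fT>MIC than intermittent bolus dosing of the same daily dose.

    import numpy as np

    V, CL = 25.0, 7.5          # hypothetical volume of distribution (L) and clearance (L/h)
    fu, mic = 0.8, 8.0         # hypothetical unbound fraction and MIC (mg/L)
    ke = CL / V                # first-order elimination rate constant (1/h)
    t = np.linspace(0, 24, 24 * 60 + 1)   # 24-h grid, 1-min resolution
    daily_dose = 4000.0        # hypothetical total daily dose (mg)

    def conc_infusion(t, dose, t_start, t_inf):
        """Concentration-time profile of one zero-order infusion (dose mg over t_inf h)."""
        k0 = dose / t_inf
        tau = np.clip(t - t_start, 0.0, None)
        rising = (k0 / CL) * (1 - np.exp(-ke * np.minimum(tau, t_inf)))   # during the infusion
        decay = np.exp(-ke * np.clip(tau - t_inf, 0.0, None))             # after the infusion stops
        return rising * decay

    # CI: one 24-h infusion; IB: a quarter of the dose infused over 0.5 h every 6 h
    c_ci = conc_infusion(t, daily_dose, 0.0, 24.0)
    c_ib = sum(conc_infusion(t, daily_dose / 4, t0, 0.5) for t0 in (0, 6, 12, 18))

    for label, c in (("CI", c_ci), ("IB", c_ib)):
        print(f"{label}: %fT>MIC over 24 h = {np.mean(fu * c > mic) * 100:.0f}%")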

    A single-center prospective observational study comparing resting energy expenditure in different phases of critical illness: indirect calorimetry versus predictive equations

    Objectives: Several predictive equations have been developed to estimate resting energy expenditure, but no study has compared predictive equations against indirect calorimetry among critically ill patients at different phases of critical illness. This study aimed to determine the degree of agreement and accuracy of predictive equations among ICU patients during the acute phase (≤ 5 d), late phase (6–10 d), and chronic phase (≥ 11 d). Design: This was a single-center prospective observational study that compared resting energy expenditure estimated by 15 commonly used predictive equations against resting energy expenditure measured by indirect calorimetry at the different phases. The degree of agreement between resting energy expenditure calculated by predictive equations and resting energy expenditure measured by indirect calorimetry was analyzed using intraclass correlation coefficients and Bland-Altman analyses. Accuracy was defined as the resting energy expenditure calculated from a predictive equation falling within ± 10% of the resting energy expenditure measured by indirect calorimetry. A score-ranking method was developed to determine the best predictive equations. Setting: General Intensive Care Unit, University of Malaya Medical Centre. Patients: Mechanically ventilated critically ill patients. Interventions: None. Measurements and Main Results: Indirect calorimetry was performed during the acute, late, and chronic phases in 305, 180, and 91 ICU patients, respectively. There were significant differences (F = 3.447; p = 0.034) in mean resting energy expenditure measured by indirect calorimetry among the three phases. Pairwise comparison showed that mean resting energy expenditure measured by indirect calorimetry in the late phase (1,878 ± 517 kcal) was significantly higher than during the acute phase (1,765 ± 456 kcal) (p = 0.037). The predictive equations with the best agreement and accuracy were Swinamer (1990) for the acute phase, Brandi (1999) and Swinamer (1990) for the late phase, and Swinamer (1990) for the chronic phase. None of the resting energy expenditure values calculated from predictive equations showed very good agreement or accuracy. Conclusions: Predictive equations tend to either over- or underestimate resting energy expenditure at different phases. Predictive equations with “dynamic” variables and respiratory data had better agreement with resting energy expenditure measured by indirect calorimetry than predictive equations developed for healthy adults or predictive equations based on “static” variables. Although none of the resting energy expenditure values calculated from predictive equations had very good agreement, Swinamer (1990) appears to provide relatively good agreement across the three phases and could be used to predict resting energy expenditure when indirect calorimetry is not available
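
    The two agreement measures used in this study (Bland-Altman analysis and the ± 10% accuracy criterion) can be computed with a few lines of code. The Python sketch below uses made-up paired values, not the study's data, only to show how bias, limits of agreement, and the proportion of predictions within ± 10% of the measured resting energy expenditure are obtained.

    import numpy as np

    ree_ic = np.array([1650., 1820., 1540., 2010., 1760., 1900.])  # measured by indirect calorimetry (kcal/day)
    ree_eq = np.array([1720., 1750., 1600., 1880., 1800., 2050.])  # calculated by a predictive equation (kcal/day)

    diff = ree_eq - ree_ic
    bias = diff.mean()                                   # mean difference (equation minus calorimetry)
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)           # Bland-Altman 95% limits of agreement
    accuracy = np.mean(np.abs(diff) / ree_ic <= 0.10) * 100   # % of predictions within +/-10% of measured REE

    print(f"Bias {bias:.0f} kcal/day, limits of agreement {loa[0]:.0f} to {loa[1]:.0f} kcal/day")
    print(f"Accuracy (within +/-10%): {accuracy:.0f}%")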

    Insulin sensitivity and blood glucose level of sepsis patients in the intensive care unit

    Sepsis and hyperglycemia are strongly associated with increased mortality, particularly in critically ill patients. Diagnosing sepsis is challenging because of the delay in obtaining blood culture results; thus, clinical experience often overrules the protocol to prevent worsening patient outcomes. In some cases, erroneous clinical judgement causes antibiotic resistance and even adverse clinical outcomes. This paper investigates the correlation between two parameters, insulin sensitivity and blood glucose level, among sepsis patients. The blood glucose level is measured at the bedside during the patient's stay, whereas insulin sensitivity is obtained using the validated glucose-insulin model. Insulin sensitivity is therefore a patient-specific parameter, independent of the treatment protocol given to the patient. The same parameters, blood glucose and insulin sensitivity, are also compared with those of non-sepsis patients to establish a relationship that can be used for sepsis diagnosis. Given that these two parameters can be captured rapidly and instantly, a significant relationship could help clinicians identify sepsis at an early stage without second-guessing
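
    The idea of a model-based, patient-specific insulin sensitivity can be sketched as follows. The code below is not the validated glucose-insulin model used in the study; it fits a deliberately simplified one-compartment glucose model with hypothetical parameters, insulin levels, and glucose appearance, only to illustrate how an insulin sensitivity value (SI) can be identified from hourly bedside glucose measurements.

    import numpy as np

    dt = 5 / 60                        # 5-minute integration step (h)
    pG, VG = 0.36, 13.3                # hypothetical non-insulin-mediated clearance (1/h) and glucose volume (L)
    I = np.full(12, 20.0)              # hypothetical plasma insulin over one hour (mU/L)
    P = np.full(12, 55.0)              # hypothetical total glucose appearance, feed + endogenous (mmol/h)
    g_start, g_end = 9.8, 8.9          # measured blood glucose at the start and end of the hour (mmol/L)

    def simulate(si):
        """Predict end-of-hour glucose for a candidate insulin sensitivity si."""
        g = g_start
        for k in range(12):
            g += dt * (-pG * g - si * g * I[k] + P[k] / VG)
        return g

    # Grid search for the SI whose one-hour prediction best matches the measurement
    si_grid = np.linspace(1e-4, 2e-2, 2000)
    si_hat = si_grid[np.argmin([(simulate(s) - g_end) ** 2 for s in si_grid])]
    print(f"Identified SI ~ {si_hat:.2e} L/(mU*h)")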

    The effect of higher versus lower protein delivery in critically ill patients: a systematic review and meta-analysis of randomized controlled trials

    Background: The optimal protein dose in critical illness is unknown. We aimed to conduct a systematic review of randomized controlled trials (RCTs) comparing the effect of higher versus lower protein delivery (with similar energy delivery between groups) on clinical and patient-centered outcomes in critically ill patients. Methods: We searched MEDLINE, EMBASE, CENTRAL and CINAHL from database inception through April 1, 2021. We included RCTs of (1) adult (age ≥ 18) critically ill patients that (2) compared higher vs lower protein with (3) similar energy intake between groups, and (4) reported clinical and/or patient-centered outcomes. We excluded studies on immunonutrition. Two authors screened and conducted quality assessment independently and in duplicate. Random-effects meta-analyses were conducted to estimate the pooled risk ratio (dichotomized outcomes) or mean difference (continuous outcomes). Results: Nineteen RCTs were included (n = 1731). Sixteen studies used primarily the enteral route to deliver protein. The intervention was started within 72 h of ICU admission in sixteen studies and lasted between 3 and 28 days. In the 11 studies that reported weight-based nutrition delivery, the pooled mean protein and energy received in the higher and lower protein groups were 1.31 ± 0.48 vs 0.90 ± 0.30 g/kg and 19.9 ± 6.9 vs 20.1 ± 7.1 kcal/kg, respectively. Higher vs lower protein did not significantly affect overall mortality [risk ratio 0.91, 95% confidence interval (CI) 0.75-1.10, p = 0.34] or other clinical or patient-centered outcomes. In 5 small studies, higher protein significantly attenuated muscle loss (MD -3.44% per week, 95% CI -4.99 to -1.90; p < 0.0001). Conclusion: In critically ill patients, higher daily protein delivery was not associated with any improvement in clinical or patient-centered outcomes. Larger and more definitive RCTs are needed to confirm the muscle loss attenuation associated with higher protein delivery. PROSPERO registration number: CRD42021237530
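
    The pooling method reported here (a random-effects meta-analysis of risk ratios) can be reproduced in outline. The sketch below uses invented event counts, not the trials in this review, and implements a standard DerSimonian-Laird random-effects pooled risk ratio as one common way such an analysis is done.

    import numpy as np

    # events and totals in the higher- and lower-protein arms of three hypothetical RCTs
    e1, n1 = np.array([12, 30, 8]), np.array([60, 150, 40])
    e0, n0 = np.array([15, 33, 9]), np.array([58, 148, 42])

    log_rr = np.log((e1 / n1) / (e0 / n0))
    var = 1 / e1 - 1 / n1 + 1 / e0 - 1 / n0          # variance of each log risk ratio
    w = 1 / var                                      # fixed-effect (inverse-variance) weights

    # DerSimonian-Laird estimate of the between-study variance
    q = np.sum(w * (log_rr - np.sum(w * log_rr) / np.sum(w)) ** 2)
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

    w_re = 1 / (var + tau2)                          # random-effects weights
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    print(f"Pooled RR {np.exp(pooled):.2f} "
          f"(95% CI {np.exp(pooled - 1.96 * se):.2f} to {np.exp(pooled + 1.96 * se):.2f})")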

    Changing predominant SARS-CoV-2 lineages drives successive COVID-19 waves in Malaysia, February 2020 to March 2021

    Malaysia had experienced three waves of coronavirus disease 2019 (COVID-19) as of March 31, 2021. We studied the associated molecular epidemiology and SARS-CoV-2 seroprevalence during the third wave. We obtained 60 whole-genome SARS-CoV-2 sequences between October 2020 and January 2021 in Kuala Lumpur/Selangor and analyzed 989 available Malaysian sequences. We tested 653 residual serum samples collected between December 2020 and April 2021 for anti-SARS-CoV-2 total antibodies as a proxy for population immunity. The first wave (January 2020) comprised sporadic imported cases from China of early Pango lineages A and B. The second wave (March–June 2020) was associated with lineage B.6. The ongoing third wave (from September 2020) was propagated by a state election in Sabah and was due to lineage B.1.524 viruses containing the spike mutations D614G and A701V. Lineages B.1.459, B.1.470, and B.1.466.2 were likely imported from the region and confined to Sarawak state. Direct age-standardized seroprevalence in Kuala Lumpur/Selangor was 3.0%. The second and third waves were driven by super-spreading events and different circulating lineages. Malaysia is highly susceptible to further waves, especially as the alpha (B.1.1.7) and beta (B.1.351) variants of concern were first detected in December 2020/January 2021. Increased genomic surveillance is critical
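
    Direct age standardization, as used for the seroprevalence estimate above, weights each age band's observed prevalence by a reference population's age structure. The sketch below uses made-up counts and an assumed reference distribution, not the study's serosurvey data, to show the calculation.

    import numpy as np

    positives = np.array([4, 9, 6, 2])              # hypothetical seropositive samples per age band
    tested    = np.array([150, 260, 180, 63])       # hypothetical samples tested per age band
    ref_pop   = np.array([0.30, 0.35, 0.25, 0.10])  # assumed reference age distribution (sums to 1)

    crude = positives.sum() / tested.sum() * 100
    standardized = np.sum((positives / tested) * ref_pop) * 100
    print(f"Crude seroprevalence {crude:.1f}%, direct age-standardized {standardized:.1f}%")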

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
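
    The reported posterior probabilities of harm can be roughly reproduced from the published odds ratios and credible intervals. The sketch below is not the trial's bayesian cumulative logistic model; it simply assumes an approximately normal posterior on the log odds ratio, which is enough to recover numbers close to those reported.

    from math import log
    from scipy.stats import norm

    def prob_harm(or_median, ci_low, ci_high):
        """Approximate P(OR < 1), i.e. fewer organ support-free days than control."""
        mu = log(or_median)
        se = (log(ci_high) - log(ci_low)) / (2 * 1.96)   # assumes a symmetric normal posterior on log-OR
        return norm.cdf(-mu / se)

    print(f"ACE inhibitor: ~{prob_harm(0.77, 0.58, 1.06):.1%} (trial reported 94.9%)")
    print(f"ARB:           ~{prob_harm(0.76, 0.56, 1.05):.1%} (trial reported 95.4%)")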

    Faster onset time of supraclavicular brachial plexus block using local anesthetic diluted with dextrose

    Background and objectives: A high sodium concentration is known to antagonize local anesthetics when infiltrated around neural tissue. We therefore hypothesized that the onset time of sensory and motor blockade in supraclavicular brachial plexus block would be shorter with ropivacaine diluted with dextrose than with saline. Methods: Patients scheduled for upper limb surgery were randomized to receive ultrasound-guided supraclavicular brachial plexus block with 0.5% ropivacaine. Sensory and motor blockade were evaluated every 5 min for 60 min. Patients were followed up on postoperative day 1, and between days 7 and 10, for the presence of any complications. Twenty-five patients in each group were analyzed. Results: The mean time to onset of analgesia was 37.6 ± 12.9 min in the dextrose group and 45.2 ± 13.9 min in the saline group, with a p-value of 0.05. The effect size was 0.567, which is moderate to large. No major complications were observed. Conclusion: We conclude that the onset time of analgesia was shorter when dextrose was used as the diluent instead of saline for ultrasound-guided supraclavicular block
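
    The reported effect size of 0.567 is consistent with Cohen's d computed from the group means and standard deviations given above, assuming a pooled standard deviation with 25 patients per group.

    from math import sqrt

    m_dex, sd_dex, m_sal, sd_sal, n = 37.6, 12.9, 45.2, 13.9, 25
    sd_pooled = sqrt(((n - 1) * sd_dex ** 2 + (n - 1) * sd_sal ** 2) / (2 * n - 2))
    d = (m_sal - m_dex) / sd_pooled
    print(f"Cohen's d = {d:.3f}")   # ~0.567, matching the reported effect size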