
    The initial Mayo Clinic experience using high-frequency oscillatory ventilation for adult patients: a retrospective study

    BACKGROUND: High-frequency oscillatory ventilation (HFOV) was introduced in our institution in June 2003. Since then, there has been no protocol to guide the use of HFOV, and all decisions regarding ventilation strategies and HFOV settings were made by the treating intensivist. The aim of this study is to report our first year of experience using HFOV. METHODS: In this retrospective study, we reviewed all 14 adult patients who were consecutively ventilated with HFOV in the intensive care units of a tertiary medical center from June 2003 to July 2004. RESULTS: The mean age of the patients was 56 years, 10 were males, and all were white. The first-day median APACHE II score and its predicted hospital mortality were 35 and 83%, respectively, and the median SOFA score was 11.5. Eleven patients had ARDS, two had unilateral pneumonia with septic shock, and one had pulmonary edema. Patients received conventional ventilation for a median of 1.8 days before HFOV. HFOV was used 16 times for a median of 3.2 days. Improvements in oxygenation parameters were observed after 24 hours of HFOV (mean PaO2/FiO2 increased from 82 to 107, P < 0.05; and the mean oxygenation index decreased from 42 to 29, P < 0.05). HFOV was discontinued in two patients, in one because of equipment failure and in the other because of severe hypotension that was unresponsive to fluids. No change in mean arterial pressure or vasopressor requirements was noted after the initiation of HFOV. Eight patients died (57%, 95% CI: 33–79%); life support was withdrawn in six and two suffered cardiac arrest. CONCLUSION: During our first year of experience, HFOV was used as a rescue therapy in very sick patients with refractory hypoxemia, and improvement in oxygenation was observed after 24 hours of this technique. HFOV is a reasonable alternative when a protective lung strategy cannot be achieved with conventional ventilation.
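The two oxygenation metrics reported in this abstract follow standard definitions; the sketch below (mine, not from the paper, which reports only the measured values) shows how they are conventionally computed:

```python
def pf_ratio(pao2_mmhg: float, fio2_fraction: float) -> float:
    """PaO2/FiO2 ratio: arterial oxygen tension (mmHg) divided by the
    fraction of inspired oxygen (0.21-1.0). Lower values = worse oxygenation."""
    return pao2_mmhg / fio2_fraction

def oxygenation_index(mean_airway_pressure_cmh2o: float,
                      fio2_fraction: float,
                      pao2_mmhg: float) -> float:
    """Oxygenation index (OI) = (mean airway pressure x FiO2 x 100) / PaO2.
    Higher values = worse oxygenation, the opposite direction of P/F."""
    return mean_airway_pressure_cmh2o * fio2_fraction * 100 / pao2_mmhg

# Illustrative values only: PaO2 of 82 mmHg on 100% oxygen
print(pf_ratio(82, 1.0))                     # 82.0
print(oxygenation_index(30, 1.0, 82))        # ~36.6
```

This makes the direction of the reported changes easy to check: a rising P/F ratio and a falling OI both indicate improving oxygenation.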

    Performance of point-of-care HbA1c test devices: implications for use in clinical practice – a systematic review and meta-analysis

    Regular monitoring of glycated hemoglobin subfraction A1c (HbA1c) in people with diabetes, together with treatment with glucose-lowering medications to improve glycaemic control, can reduce the risk of developing complications [1]. In 2011, a World Health Organization consultation concluded that HbA1c at a threshold of 6.5% (48 mmol/mol) can be used as a diagnostic test for diabetes [2]. HbA1c monitoring often requires the patient to attend the health center twice: once to have blood taken and again to receive test results and medication adjustments. Point-of-care (POC) analysers are bench-top instruments that use a finger-prick blood sample and are designed for use in a treatment room or at the bedside. They provide a test result within a few minutes, allowing clinical decisions and medication changes to take place immediately. The suitability of many of these devices for the accurate measurement of HbA1c has been questioned, with some POC HbA1c test devices reported not to meet accepted accuracy and precision criteria [3]. Ideal imprecision goals for HbA1c are a coefficient of variation (CV) of <2% for HbA1c reported in % units (or <3% in SI units, mmol/mol) [4], [5], [6]. Most evaluations of POC HbA1c devices have taken place in laboratory settings [7], [8]; fewer studies have assessed device performance in a POC setting or with clinicians performing the tests [9], [10]. The only published review that has attempted to combine data from accuracy studies identified five studies covering three devices and compared correlation coefficients [11]. Systematically reporting and pooling estimates of bias and precision between POC HbA1c devices and laboratory measurements would enable end users to assess which analysers best meet their analytical performance needs. This may be of particular importance for clinicians in primary care settings, where much of the management of diabetes patients takes place. The comparison of accuracy between devices over the entire therapeutic range would need to be carried out by combining data on measurement error (bias) between POC and laboratory tests [12]. The aim of this study was to compare the accuracy and precision of POC HbA1c devices with the local laboratory method based on data from published studies, and to discuss the clinical implications of the findings.
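The two reporting units mentioned above (NGSP %, as used for the 6.5% threshold, and IFCC mmol/mol) are linked by the published NGSP/IFCC master equation; a minimal sketch of the conversion (mine, not part of the review):

```python
def ngsp_to_ifcc(hba1c_percent: float) -> float:
    """Convert HbA1c from NGSP % units to IFCC SI units (mmol/mol)
    using the master equation: IFCC = (NGSP - 2.15) * 10.929."""
    return (hba1c_percent - 2.15) * 10.929

def ifcc_to_ngsp(hba1c_mmol_mol: float) -> float:
    """Inverse conversion: IFCC mmol/mol back to NGSP %."""
    return hba1c_mmol_mol / 10.929 + 2.15

# The WHO diagnostic threshold of 6.5% corresponds to ~48 mmol/mol
print(round(ngsp_to_ifcc(6.5)))  # 48
```

Because the SI scale has no intercept offset, relative imprecision (CV) computed on mmol/mol values is larger than on % values for the same absolute scatter, which is why the goals above differ (<2% vs. <3%).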

    Reducing Unnecessary Testing in the Intensive Care Unit by Choosing Wisely

    Overuse of laboratory and X-ray testing is common in the intensive care unit (ICU). This review highlights focused strategies for critical care clinicians as outlined by the Critical Care Societies Collaborative (CCSC) as part of the American Board of Internal Medicine Foundation’s Choosing Wisely® campaign. The campaign aims to promote judicious testing and decrease unnecessary treatment measures in the ICU. The CCSC outlines five specific recommendations for reducing unnecessary testing in the ICU. First, reduce the use of daily or regular-interval diagnostic testing. Second, do not transfuse red blood cells in hemodynamically stable, non-bleeding ICU patients with a hemoglobin concentration greater than 7 g/dl. Third, do not use parenteral nutrition in adequately nourished critically ill patients within the first 7 days of ICU stay. Fourth, do not deeply sedate mechanically ventilated patients without a specific indication and without daily attempts to lighten sedation. Finally, do not continue life support for patients at high risk of death without offering patients and their families the alternative of comfort-focused care. A number of strategies can be used to reduce unnecessary testing in the ICU, including educational campaigns, audit and feedback, and prompts in the electronic ordering system that allow only acceptable indications when ordering routine testing. Greater awareness of the lack of outcome benefit and the associated costs can prompt clinicians to be more mindful when ordering tests and procedures, reducing unnecessary testing in the ICU.

    Comparison of a nurse initiated insulin infusion protocol for intensive insulin therapy between adult surgical trauma, medical and coronary care intensive care patients

    Background: Sustained hyperglycemia is a known risk factor for adverse outcomes in critically ill patients. The specific aim was to determine whether a nurse-initiated insulin infusion protocol (IIP) was effective in maintaining blood glucose values (BG) within a target goal of 100–150 mg/dL across different intensive care units (ICUs), and to describe glycemic control during the 48 hours after protocol discontinuation. Methods: A descriptive, retrospective review of 366 patients with 28,192 blood glucose values was conducted in three intensive care units of a quaternary care hospital: the Surgical Trauma Intensive Care Unit (STICU), the Medical ICU (MICU), and the Coronary Care Unit (CCU). Patients were >15 years of age, were admitted to the STICU (n = 162), MICU (n = 110), or CCU (n = 94) over 8 months (October 2003 to June 2004), and had an initial blood glucose level >150 mg/dL. We summarized the effectiveness and safety of the nurse-initiated IIP and compared these endpoints among STICU, MICU, and CCU patients. Results: The median blood glucose values (mg/dL) at initiation of the insulin infusion protocol were lower in the STICU (188; IQR, 162–217) than in the MICU (201; IQR, 170–268) and CCU (227; IQR, 178–313); p < 0.0001. Mean time to achieving a target glucose level (100–150 mg/dL) was similar among the three units: 4.6 hours in the STICU, 4.7 hours in the MICU, and 4.9 hours in the CCU (p = 0.27). Hypoglycemia (BG < 60 mg/dL) occurred in 7% of STICU, 5% of MICU, and 5% of CCU patients (p = 0.85). Protocol violations were uncommon in all three ICUs. Mean blood glucose 48 hours following IIP discontinuation was significantly different for each population: 142 mg/dL in the STICU, 167 mg/dL in the MICU, and 160 mg/dL in the CCU (p < 0.0001). Conclusion: The safety and effectiveness of the nurse-initiated IIP were similar across different ICUs in our hospital. Marked variability in glucose control after protocol discontinuation suggests the need for further research regarding glucose control in patients transitioning out of the ICU.

    Raptor Interactions with Wind Energy: Case Studies from Around the World

    The global potential for wind power generation is vast, and the number of installations is increasing rapidly. We review case studies from around the world of the effects of wind-energy development on raptors. Collision mortality, displacement, and habitat loss have the potential to cause population-level effects, especially for species that are rare or endangered. The impact on raptors has much to do with their behavior, so careful siting of wind-energy developments to avoid areas suited to raptor breeding, foraging, or migration would reduce these effects. At established wind farms that already conflict with raptors, fatalities may be reduced by curtailment of turbines as raptors approach, and offset through mitigation of other human causes of mortality such as electrocution and poisoning, provided the relative effects can be quantified. Measurement of raptor mortality at wind farms is the subject of intense effort and study, especially where mitigation is required by law, with novel statistical approaches recently made available to improve the notoriously difficult-to-estimate mortality rates of rare and hard-to-detect species. Global standards for wind farm placement, monitoring, and effects mitigation would be a valuable contribution to raptor conservation worldwide.

    Hepatic glucose uptake and disposition during short-term high-fat vs. high-fructose feeding

    In dogs consuming a high-fat and -fructose diet (52 and 17% of total energy, respectively) for 4 wk, hepatic glucose uptake (HGU) in response to hyperinsulinemia, hyperglycemia, and portal glucose delivery is markedly blunted, with reduction in glucokinase (GK) protein and glycogen synthase (GS) activity. The present study compared the impact of selective increases in dietary fat and fructose on liver glucose metabolism. Dogs consumed weight-maintaining chow (CTR) or hypercaloric high-fat (HFA) or high-fructose (HFR) diets for 4 wk before undergoing clamp studies with infusion of somatostatin and intraportal insulin (3–4 times basal) and glucagon (basal). The hepatic glucose load (HGL) was doubled during the clamp using peripheral vein (Pe) glucose infusion in the first 90 min (P1) and portal vein (4 mg·kg−1·min−1) plus Pe glucose infusion during the final 90 min (P2). During P2, HGU was 2.8 ± 0.2, 1.0 ± 0.2, and 0.8 ± 0.2 mg·kg−1·min−1 in CTR, HFA, and HFR, respectively (P < 0.05 for HFA and HFR vs. CTR). Compared with CTR, hepatic GK protein and catalytic activity were reduced (P < 0.05) 35 and 56%, respectively, in HFA, and 53 and 74%, respectively, in HFR. Liver glycogen concentrations were 20 and 38% lower in HFA and HFR than in CTR (P < 0.05). Hepatic Akt phosphorylation was decreased (P < 0.05) in HFA (21%) but not HFR. Thus, HFR impaired hepatic GK and glycogen more than HFA, whereas HFA reduced insulin signaling more than HFR. HFA and HFR effects were not additive, suggesting that they act via the same mechanism or that their effects converge at a saturable step.

    What is the real impact of acute kidney injury?

    Background: Acute kidney injury (AKI) is a common clinical problem. Studies have documented the incidence of AKI in a variety of populations, but to date we do not believe the real incidence of AKI has been accurately documented in a district general hospital setting. The aim here was to describe the detected incidence of AKI in a typical general hospital setting in an unselected population, and to describe associated short- and long-term outcomes. Methods: A retrospective observational database study from secondary care in East Kent (adult catchment population of 582,300). All adult patients (18 years or over) admitted between 1st February 2009 and 31st July 2009 were included. Patients receiving chronic renal replacement therapy (RRT), maternity admissions, and day case admissions were excluded. AKI was defined by the Acute Kidney Injury Network (AKIN) criteria. A time-dependent risk analysis with logistic regression and Cox regression was used for the analysis of in-hospital mortality and survival. Results: The incidence of AKI in the 6-month period was 15,325 pmp/yr (adults) (69% AKIN1, 18% AKIN2, and 13% AKIN3). In-hospital mortality, length of stay, and ITU utilisation all increased with severity of AKI. Patients with AKI had an increase in care on discharge and an increase in hospital readmission within 30 days. Conclusions: These data come closer to the real incidence and outcomes of AKI managed in hospital than any study published in the literature to date. Fifteen percent of all admissions sustained an episode of AKI, with increased subsequent short- and long-term morbidity and mortality, even in those with AKIN1. This confers an increased burden and cost to the healthcare economy, which can now be quantified. These results will furnish a baseline for quality improvement projects aimed at early identification, improved management, and where possible prevention, of AKI.

    Biological Variation of Plasma and Urinary Markers of Acute Kidney Injury in Patients with Chronic Kidney Disease

    BACKGROUND: Identification of acute kidney injury (AKI) is predominantly based on changes in plasma creatinine concentration, an insensitive marker. Alternative biomarkers have been proposed. The reference change value (RCV), the point at which biomarker change can be inferred to have occurred with statistical certainty, provides an objective assessment of change in serial test results in an individual. METHODS: In 80 patients with chronic kidney disease, weekly measurements of blood and urinary biomarker concentrations were undertaken over 6 weeks. Variability was determined and compared before and after adjustment for urinary creatinine and across subgroups stratified by level of kidney function, proteinuria, and presence or absence of diabetes. RESULTS: RCVs were determined for whole blood, plasma, and urinary neutrophil gelatinase-associated lipocalin (111%, 59%, and 693%, respectively), plasma cystatin C (14%), creatinine (17%), and urinary kidney injury molecule 1 (497%), tissue inhibitor of metalloproteinases 2 (454%), N-acetyl-β-D-glucosaminidase (361%), interleukin-18 (819%), albumin (430%), and α1-microglobulin (216%). Blood biomarkers exhibited lower variability than urinary biomarkers. Generally, adjusting urinary biomarker concentrations for creatinine reduced (P < 0.05) within-individual biological variability (CVI). For some markers, variation differed (P < 0.05) between subgroups. CONCLUSIONS: These data can form a basis for application of these tests in clinical practice and research studies and are applicable across different levels of kidney function and proteinuria and in the presence or absence of diabetes. Most of the studied biomarkers have relatively high CVI (noise) but also have reported large concentration changes in response to renal insult (signal); thus progressive change should be detectable (high signal-to-noise ratio) when baseline data are available.
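The reference change value discussed above is conventionally derived from the analytical and within-individual components of variation; a minimal sketch of the standard formula (the CV values below are illustrative, not the study's):

```python
import math

def reference_change_value(cv_analytical: float,
                           cv_within_individual: float,
                           z: float = 1.96) -> float:
    """RCV (%) = sqrt(2) * Z * sqrt(CVa^2 + CVi^2).

    cv_analytical: analytical imprecision of the assay (%)
    cv_within_individual: within-individual biological variation, CVI (%)
    z: 1.96 gives 95% probability for a bidirectional change
    """
    return math.sqrt(2) * z * math.sqrt(cv_analytical**2 + cv_within_individual**2)

# Illustrative: CVa = 2%, CVI = 5% -> a serial change must exceed ~14.9%
# before it can be called real with 95% certainty
print(round(reference_change_value(2.0, 5.0), 1))  # 14.9
```

The formula makes the paper's conclusion concrete: a marker with a large CVI has a large RCV, so only correspondingly large concentration changes (a strong signal) can be distinguished from biological noise.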

    Do acute elevations of serum creatinine in primary care engender an increased mortality risk?

    Background: The significant impact acute kidney injury (AKI) has on patient morbidity and mortality emphasizes the need for early recognition and effective treatment. AKI presenting to or occurring during hospitalisation has been widely studied, but little is known about the incidence and outcomes of patients experiencing acute elevations in serum creatinine in the primary care setting who are not subsequently admitted to hospital. The aim of this study was to define this incidence and explore its impact on mortality. Methods: The study cohort was identified using hospital databases over a six-month period. Inclusion criteria: a serum creatinine request during the study period, age 18 or over, and not on renal replacement therapy. Patients were stratified by a rise in serum creatinine corresponding to the Acute Kidney Injury Network (AKIN) criteria for comparison purposes. Descriptive and survival data were then analysed. Ethical approval was granted by the National Research Ethics Service (NRES) Committee South East Coast and the National Information Governance Board. Results: The total study population was 61,432, of whom 57,300 subjects had ‘no AKI’ (mean age 64). The numbers (mean ages) of acute serum creatinine rises were ‘AKI 1’ 3,798 (72), ‘AKI 2’ 232 (73), and ‘AKI 3’ 102 (68), which equates to an overall incidence of 14,192 pmp/year (adult). Unadjusted 30-day survival was 99.9% in subjects with ‘no AKI’, compared with 98.6%, 90.1%, and 82.3% in those with ‘AKI 1’, ‘AKI 2’, and ‘AKI 3’ respectively. After multivariable analysis adjusting for age, gender, baseline kidney function, and co-morbidity, the odds ratios of 30-day mortality were 5.3 (95% CI 3.6, 7.7), 36.8 (95% CI 21.6, 62.7), and 123 (95% CI 64.8, 235) respectively, compared with those without acute serum creatinine rises as defined.
    Conclusions: People who develop acute elevations of serum creatinine in primary care without being admitted to hospital have significantly worse outcomes than those with stable kidney function.
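The AKIN creatinine criteria used for the stratification above can be sketched as a simple classifier. This is my paraphrase of the standard creatinine thresholds in µmol/L (the urine-output criteria, and the 48-hour timing requirement, are deliberately omitted):

```python
def akin_stage(baseline_umol_l: float, current_umol_l: float) -> int:
    """Return AKIN stage 0-3 from baseline and current serum creatinine (umol/L).

    Creatinine criteria only:
      stage 1: rise >= 26.4 umol/L (0.3 mg/dL) or 1.5-2x baseline
      stage 2: >2-3x baseline
      stage 3: >3x baseline, or >= 354 umol/L (4.0 mg/dL)
               with an acute rise >= 44 umol/L (0.5 mg/dL)
    """
    rise = current_umol_l - baseline_umol_l
    ratio = current_umol_l / baseline_umol_l
    if ratio > 3.0 or (current_umol_l >= 354 and rise >= 44):
        return 3
    if ratio > 2.0:
        return 2
    if ratio >= 1.5 or rise >= 26.4:
        return 1
    return 0

print(akin_stage(100, 160))  # 1 (1.6x baseline)
print(akin_stage(100, 320))  # 3 (>3x baseline)
```

A classifier of this shape, applied to paired baseline and index creatinine results, is how a laboratory database cohort such as the one above can be stratified retrospectively.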