
    Health state utilities associated with attributes of treatments for hepatitis C

    BACKGROUND: Cost-utility analyses are frequently conducted to compare treatments for hepatitis C, which are often associated with complex regimens and serious adverse events. Thus, the purpose of this study was to estimate the utility associated with treatment administration and adverse events of hepatitis C treatments. DESIGN: Health states were drafted based on literature review and clinician interviews. General population participants in the UK valued the health states in time trade-off (TTO) interviews with 10- and 1-year time horizons. The 14 health states described hepatitis C with variations in treatment regimen and adverse events. RESULTS: A total of 182 participants completed interviews (50% female; mean age = 39.3 years). Utilities for health states describing treatment regimens without injections ranged from 0.80 (1 tablet) to 0.79 (7 tablets). Utilities for health states describing oral plus injectable regimens were 0.77 (7 tablets), 0.75 (12 tablets), and 0.71 (18 tablets). Addition of a weekly injection had a disutility of −0.02. A requirement to take medication with fatty food had a disutility of −0.04. Adverse events were associated with substantial disutilities: mild anemia, −0.12; severe anemia, −0.32; flu-like symptoms, −0.21; mild rash, −0.13; severe rash, −0.48; depression, −0.47. One-year TTO scores were similar to these 10-year values. CONCLUSIONS: Adverse events and greater treatment regimen complexity were associated with lower utility scores, suggesting a perceived decrease in quality of life beyond the impact of hepatitis C. The resulting utilities may be used in models estimating and comparing the value of treatments for hepatitis C. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s10198-014-0649-6) contains supplementary material, which is available to authorized users.
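
    The utility and disutility estimates above lend themselves to simple arithmetic when plugged into a cost-utility model. The sketch below is not the authors' method (the study valued each health state separately by TTO); it only illustrates how a downstream model might apply the reported baseline utility and disutilities additively. The additive assumption and the variable names are hypothetical.
```python
# Minimal sketch, assuming an additive application of the reported estimates.
BASELINE_UTILITY = 0.80  # hepatitis C on a 1-tablet oral regimen (10-year TTO)

# Disutilities reported in the abstract (negative values lower the utility).
DISUTILITY = {
    "weekly_injection": -0.02,
    "fatty_food_requirement": -0.04,
    "mild_anemia": -0.12,
    "severe_anemia": -0.32,
    "flu_like_symptoms": -0.21,
    "mild_rash": -0.13,
    "severe_rash": -0.48,
    "depression": -0.47,
}

def regimen_utility(attributes):
    """Return an (assumed additive) utility for a regimen with the given attributes."""
    u = BASELINE_UTILITY + sum(DISUTILITY[a] for a in attributes)
    return max(u, 0.0)  # floor at 0 in this sketch

# Example: oral regimen plus a weekly injection, with mild anemia.
print(regimen_utility(["weekly_injection", "mild_anemia"]))  # 0.80 - 0.02 - 0.12 = 0.66
```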

    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) compared with that in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common in emergency laparotomy than in elective surgery in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.
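
    As a rough illustration of the multivariable logistic regression step described in the Methods, the hedged sketch below fits such a model on synthetic data and reports an adjusted odds ratio with a 95 per cent confidence interval. This is not the study's analysis (which also used bootstrapped simulation and a fuller covariate set); the data and covariate names ("checklist", "age", "emergency") are invented for illustration.
```python
# Hedged sketch on synthetic data: adjusted odds ratio for 30-day mortality by
# checklist use from a multivariable logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "checklist": rng.integers(0, 2, n),   # 1 = checklist used
    "age": rng.normal(60, 15, n),
    "emergency": rng.integers(0, 2, n),   # 1 = emergency laparotomy
})
# Simulate mortality with a protective association for checklist use.
lin = -4.0 + 0.03 * (df["age"] - 60) + 0.8 * df["emergency"] - 0.5 * df["checklist"]
df["died_30d"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

fit = smf.logit("died_30d ~ checklist + age + emergency", data=df).fit(disp=False)
print("adjusted OR:", np.exp(fit.params["checklist"]))
print("95% CI:", np.exp(fit.conf_int().loc["checklist"]).values)
```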

    Mechanisms of sodium channel clustering and its influence on axonal impulse conduction


    Global variation in anastomosis and end colostomy formation following left-sided colorectal resection

    Background End colostomy rates following colorectal resection vary across institutions in high-income settings, being influenced by patient, disease, surgeon and system factors. This study aimed to assess global variation in end colostomy rates after left-sided colorectal resection. Methods This study comprised an analysis of GlobalSurg-1 and -2 international, prospective, observational cohort studies (2014, 2016), including consecutive adult patients undergoing elective or emergency left-sided colorectal resection within discrete 2-week windows. Countries were grouped into high-, middle- and low-income tertiles according to the United Nations Human Development Index (HDI). Factors associated with colostomy formation versus primary anastomosis were explored using a multilevel, multivariable logistic regression model. Results In total, 1635 patients from 242 hospitals in 57 countries undergoing left-sided colorectal resection were included: 113 (6·9 per cent) from low-HDI, 254 (15·5 per cent) from middle-HDI and 1268 (77·6 per cent) from high-HDI countries. There was a higher proportion of patients with perforated disease (57·5, 40·9 and 35·4 per cent; P < 0·001) and subsequent use of end colostomy (52·2, 24·8 and 18·9 per cent; P < 0·001) in low- compared with middle- and high-HDI settings. The association with colostomy use in low-HDI settings persisted (odds ratio (OR) 3·20, 95 per cent c.i. 1·35 to 7·57; P = 0·008) after risk adjustment for malignant disease (OR 2·34, 1·65 to 3·32; P < 0·001), emergency surgery (OR 4·08, 2·73 to 6·10; P < 0·001), time to operation at least 48 h (OR 1·99, 1·28 to 3·09; P = 0·002) and disease perforation (OR 4·00, 2·81 to 5·69; P < 0·001). Conclusion Global differences existed in the proportion of patients receiving end stomas after left-sided colorectal resection based on income, which went beyond case mix alone.
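
    To illustrate the "multilevel, multivariable logistic regression" wording, here is a hedged sketch of a random-intercept logistic model with hospital as the grouping level, using statsmodels' variational Bayes mixed GLM. It is not the study's model: the data are synthetic, the covariate names are invented, and the study's model also accounted for country-level HDI grouping.
```python
# Hedged sketch on synthetic data: a random-intercept ("multilevel") logistic
# model for end colostomy vs. primary anastomosis, grouped by hospital.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
n_hosp, n_per = 60, 30
hospital = np.repeat(np.arange(n_hosp), n_per)
hosp_effect = rng.normal(0, 0.5, n_hosp)[hospital]   # between-hospital variation
emergency = rng.integers(0, 2, n_hosp * n_per)
perforation = rng.integers(0, 2, n_hosp * n_per)
lin = -1.5 + 1.4 * emergency + 1.4 * perforation + hosp_effect
df = pd.DataFrame({
    "colostomy": rng.binomial(1, 1.0 / (1.0 + np.exp(-lin))),
    "emergency": emergency,
    "perforation": perforation,
    "hospital": hospital,
})

model = BinomialBayesMixedGLM.from_formula(
    "colostomy ~ emergency + perforation",
    {"hospital": "0 + C(hospital)"},   # random intercept per hospital
    df,
)
fit = model.fit_vb()    # fast variational approximation
print(fit.summary())    # exponentiate fixed-effect posterior means for approximate ORs
```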

    Far-Field Modeling of a Deep-Sea Blowout: Sensitivity Studies of Initial Conditions, Biodegradation, Sedimentation, and Subsurface Dispersant Injection on Surface Slicks and Oil Plume Concentrations

    Modeling of large-scale oil transport and fate resulting from deep-sea oil spills is highly complex due to a number of bio-chemo-geophysical interactions, which are often empirically based. Predicting mass-conserved total petroleum hydrocarbon concentrations is thus still a challenge for most oil spill models. In addition, dynamic quantification and visualization of spilled oil concentrations are necessary both for first response and basin-wide impact studies. This chapter presents a new implementation of the Connectivity Modeling System (CMS) oil application that tracks individual multi-fraction oil droplets and estimates oil concentrations and oil mass in a 3D space grid. We used the Deepwater Horizon (DWH) blowout as a case study and performed a sensitivity analysis of several key modeling factors, such as biodegradation, sedimentation, and alternative initial conditions, including droplet size distribution (DSD) corresponding to an untreated and treated live oil from subsurface dispersant injection (SSDI) predicted experimentally under high pressure and by the VDROP-J jet-droplet formation model. This quantitative analysis enabled the reconstruction of a time-evolving three-dimensional (3D) oil plume in the ocean interior, the rising and spreading of oil on the ocean surface, and the effect of SSDI in shifting the oil to deeper waters while conserving the mass balance. Our modeling framework and analyses are thus important technical advances for understanding and mitigating deep-sea blowouts.
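
    The chapter's core bookkeeping step, turning tracked droplets into gridded concentrations, can be sketched in a few lines. The example below is not the CMS implementation: it simply bins hypothetical Lagrangian droplet masses onto a regular 3D grid with NumPy and divides by cell volume, with the droplet cloud, masses, and grid extents invented for illustration.
```python
# Minimal sketch: gridded oil concentration from droplet positions and masses.
import numpy as np

rng = np.random.default_rng(42)
n_droplets = 100_000
# Synthetic droplet positions (m) and per-droplet oil mass (kg).
xyz = np.column_stack([
    rng.normal(0, 2000, n_droplets),        # east-west
    rng.normal(0, 2000, n_droplets),        # north-south
    rng.uniform(-1500, -1000, n_droplets),  # depth of a subsurface plume
])
mass = np.full(n_droplets, 0.5)

# Bin droplet mass onto a regular grid and convert to concentration (kg m^-3).
edges = [np.linspace(-5000, 5000, 51),
         np.linspace(-5000, 5000, 51),
         np.linspace(-2000, 0, 41)]
mass_grid, _ = np.histogramdd(xyz, bins=edges, weights=mass)
cell_volume = np.prod([np.diff(e)[0] for e in edges])
concentration = mass_grid / cell_volume
print(concentration.max())  # peak gridded concentration in this synthetic cloud
```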

    A Synthesis of Top-Down and Bottom-Up Impacts of the Deepwater Horizon Oil Spill Using Ecosystem Modeling

    The Deepwater Horizon (DWH) oil spill in the Gulf of Mexico (GoM) triggered the largest response to a spill in US history (Levy and Gopalakrishnan, J Nat Resources Pol Res, 2(3):297–315, 2010; Barron, Toxicol Pathol 40(2):315–320, 2012). The cumulative research from this response has resulted in hundreds of publications describing the range of impacts from the DWH event on various components of the system. An ecosystem-based approach to assessing the consequences of the DWH oil spill can help to address non-linear and ecosystem-level interactions (reviewed by Curtin and Prellezo, Mar Policy 34(5):821–830, 2010) and would be a key step toward integrating the knowledge gained from research efforts. Whereas Ainsworth et al. (PLoS One 13(1):e0190840, 2018) tested top-down effects of the oil spill on fish abundance and mortality, this chapter represents a synthesis of bottom-up and top-down effects across a broader range of taxa. Bottom-up effects relate to the accumulation of detrital biomass and oil on the seafloor as a result of marine oil snow sedimentation and flocculent accumulation (MOSSFA).