Cost-effectiveness of HBV and HCV screening strategies: a systematic review of existing modelling techniques
Introduction:
Studies evaluating the cost-effectiveness of screening for Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) are generally heterogeneous in terms of risk groups, settings, screening interventions, outcomes and the economic modelling framework. It is therefore difficult to compare cost-effectiveness results between studies. This systematic review aims to summarise and critically assess existing economic models for HBV and HCV screening in order to identify the main methodological differences in modelling approaches.
Methods:
A structured search strategy was developed and a systematic review was carried out. The decision-analytic models were critically assessed according to the guidelines and framework developed for the assessment of decision-analytic models in Health Technology Assessment of health care interventions.
Results:
The overall approach to analysing the cost-effectiveness of screening strategies was found to be broadly consistent for HBV and HCV. However, modelling parameters and related structure differed between models, producing different results. More recent publications performed better against a performance matrix, evaluating model components and methodology.
Conclusion:
When assessing screening strategies for HBV and HCV infection, the focus should be on more recent studies, which applied the latest treatment regimens and test methods and had better, more complete data on which to base their models. In addition to parameter selection and associated assumptions, careful consideration of dynamic versus static modelling is recommended. Future research may want to focus on these methodological issues. In addition, the ability to evaluate screening strategies for multiple infectious diseases (e.g. HCV and HIV at the same time) might prove important for decision makers.
Nitrogen-neutrality: a step towards sustainability
We propose a novel indicator measuring one dimension of the sustainability of an entity in modern societies: nitrogen-neutrality (N-neutrality). N-neutrality strives to offset the pressure an entity exerts on the environment through the release of reactive nitrogen (Nr), both by reducing its own Nr releases and by offsetting Nr releases elsewhere. N-neutrality also aims to increase awareness of the consequences of unintentional releases of nitrogen to the environment. N-neutrality is composed of two quantified elements: the Nr released by an entity (e.g. on the basis of the N footprint) and the Nr reduction from management and offset projects (N offset). It includes management strategies to reduce nitrogen losses before they occur (e.g., through energy conservation). Each of these elements faces specific challenges with regard to data availability and conceptual development. The impacts of Nr releases to the environment are manifold, and the impact profile of one unit of Nr release depends strongly on the compound released and the local susceptibility to Nr. As such, N-neutrality is more difficult to conceptualize and calculate than C-neutrality. We developed a workable conceptual framework for N-neutrality, which was adapted for the 6th International Nitrogen Conference (N2013, Kampala, November 2013). The total N footprint of the surveyed meals at N2013 was 66 kg N. A total of US$3050 was collected from the participants and used to offset the conference's N footprint by supporting the UN Millennium Village cluster Ruhiira in South-Western Uganda. The concept needs further development, in particular to better incorporate the spatio-temporal variability of impacts and to standardize the methods to quantify the N offset required to neutralize the impact of Nr releases. Criteria for compensation projects need to be sharply defined to allow the development of a market for N offset certificates.
Online supplementary data available from stacks.iop.org/ERL/9/115001/mmedia
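As a rough illustration (not the authors' formal framework), the two quantified elements of N-neutrality can be combined into a simple balance: the entity's Nr releases minus its reductions and offsets. The function name and the reduction/offset split below are hypothetical; only the 66 kg N conference footprint comes from the abstract.

```python
def n_neutrality_balance(footprint_kg_n, reduction_kg_n, offset_kg_n):
    """Net reactive-nitrogen (Nr) balance of an entity, in kg N.

    Sketch of the concept described above: an entity's N footprint,
    less on-site reductions (e.g. energy conservation) and external
    offsets (N offset projects). A balance <= 0 means the entity is
    N-neutral under this simplified accounting.
    """
    return footprint_kg_n - reduction_kg_n - offset_kg_n


# Illustrative use with the N2013 example: the surveyed meals had a
# total N footprint of 66 kg N; assuming (hypothetically) that the
# collected funds offset the full amount, the balance is zero.
balance = n_neutrality_balance(footprint_kg_n=66.0,
                               reduction_kg_n=0.0,
                               offset_kg_n=66.0)
print(balance <= 0)  # True -> N-neutral under these assumptions
```

As the abstract notes, a real accounting would also have to weight each unit of Nr by compound and local susceptibility, which this flat kg-N balance deliberately ignores.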
Continuous glucose monitoring in pregnant women with type 1 diabetes (CONCEPTT): a multicentre international randomised controlled trial.
BACKGROUND: Pregnant women with type 1 diabetes are a high-risk population who are recommended to strive for optimal glucose control, but neonatal outcomes attributed to maternal hyperglycaemia remain suboptimal. Our aim was to examine the effectiveness of continuous glucose monitoring (CGM) on maternal glucose control and obstetric and neonatal health outcomes. METHODS: In this multicentre, open-label, randomised controlled trial, we recruited women aged 18-40 years with type 1 diabetes for a minimum of 12 months who were receiving intensive insulin therapy. Participants were pregnant (≤13 weeks and 6 days' gestation) or planning pregnancy from 31 hospitals in Canada, England, Scotland, Spain, Italy, Ireland, and the USA. We ran two trials in parallel for pregnant participants and for participants planning pregnancy. In both trials, participants were randomly assigned to either CGM in addition to capillary glucose monitoring or capillary glucose monitoring alone. Randomisation was stratified by insulin delivery (pump or injections) and baseline glycated haemoglobin (HbA1c). The primary outcome was change in HbA1c from randomisation to 34 weeks' gestation in pregnant women and to 24 weeks or conception in women planning pregnancy, and was assessed in all randomised participants with baseline assessments. Secondary outcomes included obstetric and neonatal health outcomes, assessed with all available data without imputation. This trial is registered with ClinicalTrials.gov, number NCT01788527. FINDINGS: Between March 25, 2013, and March 22, 2016, we randomly assigned 325 women (215 pregnant, 110 planning pregnancy) to capillary glucose monitoring with CGM (108 pregnant and 53 planning pregnancy) or without (107 pregnant and 57 planning pregnancy). We found a small difference in HbA1c in pregnant women using CGM (mean difference -0·19%; 95% CI -0·34 to -0·03; p=0·0207).
Pregnant CGM users spent more time in target (68% vs 61%; p=0·0034) and less time hyperglycaemic (27% vs 32%; p=0·0279) than did pregnant control participants, with comparable severe hypoglycaemia episodes (18 CGM and 21 control) and time spent hypoglycaemic (3% vs 4%; p=0·10). Neonatal health outcomes were significantly improved, with lower incidence of large for gestational age (odds ratio 0·51, 95% CI 0·28 to 0·90; p=0·0210), fewer neonatal intensive care admissions lasting more than 24 h (0·48; 0·26 to 0·86; p=0·0157), fewer incidences of neonatal hypoglycaemia (0·45; 0·22 to 0·89; p=0·0250), and 1-day shorter length of hospital stay (p=0·0091). We found no apparent benefit of CGM in women planning pregnancy. Adverse events occurred in 51 (48%) of CGM participants and 43 (40%) of control participants in the pregnancy trial, and in 12 (27%) of CGM participants and 21 (37%) of control participants in the planning pregnancy trial. Serious adverse events occurred in 13 (6%) participants in the pregnancy trial (eight [7%] CGM, five [5%] control) and in three (3%) participants in the planning pregnancy trial (two [4%] CGM and one [2%] control). The most common adverse events were skin reactions occurring in 49 (48%) of 103 CGM participants and eight (8%) of 104 control participants during pregnancy and in 23 (44%) of 52 CGM participants and five (9%) of 57 control participants in the planning pregnancy trial. The most common serious adverse events were gastrointestinal (nausea and vomiting in four participants during pregnancy and three participants planning pregnancy). INTERPRETATION: Use of CGM during pregnancy in patients with type 1 diabetes is associated with improved neonatal outcomes, which are likely to be attributed to reduced exposure to maternal hyperglycaemia. CGM should be offered to all pregnant women with type 1 diabetes using intensive insulin therapy. 
This study is the first to indicate potential for improvements in non-glycaemic health outcomes from CGM use. FUNDING: Juvenile Diabetes Research Foundation, Canadian Clinical Trials Network, and National Institute for Health Research.
Hyperoxemia and excess oxygen use in early acute respiratory distress syndrome: Insights from the LUNG SAFE study
Background: Concerns exist regarding the prevalence and impact of unnecessary oxygen use in patients with acute respiratory distress syndrome (ARDS). We examined this issue in patients with ARDS enrolled in the Large observational study to UNderstand the Global impact of Severe Acute respiratory FailurE (LUNG SAFE) study. Methods: In this secondary analysis of the LUNG SAFE study, we wished to determine the prevalence and the outcomes associated with hyperoxemia on day 1, sustained hyperoxemia, and excessive oxygen use in patients with early ARDS. Patients who fulfilled criteria of ARDS on day 1 and day 2 of acute hypoxemic respiratory failure were categorized based on the presence of hyperoxemia (PaO2 > 100 mmHg) on day 1, sustained (i.e., present on day 1 and day 2) hyperoxemia, or excessive oxygen use (FIO2 ≥ 0.60 during hyperoxemia). Results: Of 2005 patients who met the inclusion criteria, 131 (6.5%) were hypoxemic (PaO2 < 55 mmHg), 607 (30%) had hyperoxemia on day 1, and 250 (12%) had sustained hyperoxemia. Excess FIO2 use occurred in 400 (66%) of 607 patients with hyperoxemia. Excess FIO2 use decreased from day 1 to day 2 of ARDS, with most hyperoxemic patients on day 2 receiving relatively low FIO2. Multivariate analyses found no independent relationship between day 1 hyperoxemia, sustained hyperoxemia, or excess FIO2 use and adverse clinical outcomes. Mortality was 42% in patients with excess FIO2 use, compared to 39% in a propensity-matched sample of normoxemic (PaO2 55-100 mmHg) patients (P = 0.47). Conclusions: Hyperoxemia and excess oxygen use are both prevalent in early ARDS but are most often non-sustained. No relationship was found between hyperoxemia or excessive oxygen use and patient outcome in this cohort. Trial registration: LUNG SAFE is registered with ClinicalTrials.gov, NCT02010073.
Effect of Hydrocortisone on Mortality and Organ Support in Patients With Severe COVID-19: The REMAP-CAP COVID-19 Corticosteroid Domain Randomized Clinical Trial.
Importance: Evidence regarding corticosteroid use for severe coronavirus disease 2019 (COVID-19) is limited. Objective: To determine whether hydrocortisone improves outcome for patients with severe COVID-19. Design, Setting, and Participants: An ongoing adaptive platform trial testing multiple interventions within multiple therapeutic domains, for example, antiviral agents, corticosteroids, or immunoglobulin. Between March 9 and June 17, 2020, 614 adult patients with suspected or confirmed COVID-19 were enrolled and randomized within at least 1 domain following admission to an intensive care unit (ICU) for respiratory or cardiovascular organ support at 121 sites in 8 countries. Of these, 403 were randomized to open-label interventions within the corticosteroid domain. The domain was halted after results from another trial were released. Follow-up ended August 12, 2020. Interventions: The corticosteroid domain randomized participants to a fixed 7-day course of intravenous hydrocortisone (50 mg or 100 mg every 6 hours) (n = 143), a shock-dependent course (50 mg every 6 hours when shock was clinically evident) (n = 152), or no hydrocortisone (n = 108). Main Outcomes and Measures: The primary end point was organ support-free days (days alive and free of ICU-based respiratory or cardiovascular support) within 21 days, where patients who died were assigned -1 day. The primary analysis was a bayesian cumulative logistic model that included all patients enrolled with severe COVID-19, adjusting for age, sex, site, region, time, assignment to interventions within other domains, and domain and intervention eligibility. Superiority was defined as the posterior probability of an odds ratio greater than 1 (threshold for trial conclusion of superiority >99%).
Results: After excluding 19 participants who withdrew consent, there were 384 patients (mean age, 60 years; 29% female) randomized to the fixed-dose (n = 137), shock-dependent (n = 146), and no (n = 101) hydrocortisone groups; 379 (99%) completed the study and were included in the analysis. The mean age for the 3 groups ranged between 59.5 and 60.4 years; most patients were male (range, 70.6%-71.5%); mean body mass index ranged between 29.7 and 30.9; and patients receiving mechanical ventilation ranged between 50.0% and 63.5%. For the fixed-dose, shock-dependent, and no hydrocortisone groups, respectively, the median organ support-free days were 0 (IQR, -1 to 15), 0 (IQR, -1 to 13), and 0 (IQR, -1 to 11) days (composed of 30%, 26%, and 33% mortality rates and 11.5, 9.5, and 6 median organ support-free days among survivors). The median adjusted odds ratio and bayesian probability of superiority were 1.43 (95% credible interval, 0.91-2.27) and 93% for fixed-dose hydrocortisone, respectively, and were 1.22 (95% credible interval, 0.76-1.94) and 80% for shock-dependent hydrocortisone compared with no hydrocortisone. Serious adverse events were reported in 4 (3%), 5 (3%), and 1 (1%) patients in the fixed-dose, shock-dependent, and no hydrocortisone groups, respectively. Conclusions and Relevance: Among patients with severe COVID-19, treatment with a 7-day fixed-dose course of hydrocortisone or shock-dependent dosing of hydrocortisone, compared with no hydrocortisone, resulted in 93% and 80% probabilities of superiority with regard to the odds of improvement in organ support-free days within 21 days. However, the trial was stopped early and no treatment strategy met prespecified criteria for statistical superiority, precluding definitive conclusions. Trial Registration: ClinicalTrials.gov Identifier: NCT02735707
The James Webb Space Telescope Mission
Twenty-six years ago a small committee report, building on earlier studies, expounded a compelling and poetic vision for the future of astronomy, calling for an infrared-optimized space telescope with an aperture of at least . With the support of their governments in the US, Europe, and Canada, 20,000 people realized that vision as the James Webb Space Telescope. A generation of astronomers will celebrate their accomplishments for the life of the mission, potentially as long as 20 years, and beyond. This report and the scientific discoveries that follow are extended thank-you notes to the 20,000 team members. The telescope is working perfectly, with much better image quality than expected. In this and accompanying papers, we give a brief history, describe the observatory, outline its objectives and current observing program, and discuss the inventions and people who made it possible. We cite detailed reports on the design and the measured performance on orbit.
Comment: Accepted by PASP for the special issue on The James Webb Space Telescope Overview; 29 pages, 4 figures.
Prospective, multicentre study of screening, investigation and management of hyponatraemia after subarachnoid haemorrhage in the UK and Ireland
Background: Hyponatraemia often occurs after subarachnoid haemorrhage (SAH). However, its clinical significance and optimal management are uncertain. We audited the screening, investigation and management of hyponatraemia after SAH. Methods: We prospectively identified consecutive patients with spontaneous SAH admitted to neurosurgical units in the United Kingdom or Ireland. We reviewed medical records daily from admission to discharge, 21 days or death and extracted all measurements of serum sodium to identify hyponatraemia (<135 mmol/L). Main outcomes were death/dependency at discharge or 21 days and admission duration >10 days. Associations of hyponatraemia with outcome were assessed using logistic regression with adjustment for predictors of outcome after SAH and admission duration. We assessed hyponatraemia-free survival using multivariable Cox regression. Results: 175/407 (43%) patients admitted to 24 neurosurgical units developed hyponatraemia. 5976 serum sodium measurements were made. Serum osmolality, urine osmolality and urine sodium were measured in 30/166 (18%) hyponatraemic patients with complete data. The most frequent target daily fluid intake was >3 L, and this did not differ between hyponatraemic and non-hyponatraemic episodes. 42/164 (26%) patients with hyponatraemia received sodium supplementation. 133 (35%) patients were dead or dependent within the study period and 240 (68%) patients had hospital admission for over 10 days. In the multivariable analyses, hyponatraemia was associated with less dependency (adjusted OR (aOR)=0.35 (95% CI 0.17 to 0.69)) but longer admissions (aOR=3.2 (1.8 to 5.7)). World Federation of Neurosurgical Societies grade I-III, modified Fisher 2-4 and posterior circulation aneurysms were associated with greater hazards of hyponatraemia.
Conclusions: In this comprehensive multicentre prospective study with adjusted analyses of patients with SAH, hyponatraemia was investigated inconsistently and, for most patients, was not associated with changes in management or clinical outcome. This work establishes a basis for the development of evidence-based, SAH-specific guidance for the targeted screening, investigation and management of high-risk patients, to minimise the impact of hyponatraemia on admission duration and to improve the consistency of patient care.
Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19
IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19.
OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19.
DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non-critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022).
INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days.
MAIN OUTCOMES AND MEASURES The primary outcome was organ support-free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes.
RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support-free days among critically ill patients was 10 (-1 to 16) in the ACE inhibitor group (n = 231), 8 (-1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support-free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively).
CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes.
TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
Evaluation and development of an image-guided radiotherapy protocol for prostate and nodes
Toward a nitrogen footprint calculator for Tanzania
We present the first nitrogen footprint model for a developing country: Tanzania. Nitrogen (N) is a crucial element for agriculture and human nutrition, but in excess it can cause serious environmental damage. The Sub-Saharan African nation of Tanzania faces a two-sided nitrogen problem: while there is not enough soil nitrogen to produce adequate food, excess nitrogen that escapes into the environment causes a cascade of ecological and human health problems. To identify, quantify, and contribute to solving these problems, this paper presents a nitrogen footprint tool for Tanzania. The nitrogen footprint tool is a concept originally designed for the United States of America (USA) and other developed countries. It uses personal resource consumption data to calculate a per-capita nitrogen footprint. The Tanzania N footprint tool is a version adapted to reflect the low-input, integrated agricultural system of Tanzania. This is reflected by calculating two sets of virtual N factors to describe N losses during food production: one for fertilized farms and one for unfertilized farms. Soil mining factors are also calculated for the first time to address the amount of N removed from the soil to produce food. The average per-capita nitrogen footprint of Tanzania is 10 kg N yr-1. 88% of this footprint is due to food consumption and production, while only 12% of the footprint is due to energy use. Although 91% of farms in Tanzania are unfertilized, the large contribution of fertilized farms to N losses causes unfertilized farms to make up just 83% of the food production N footprint. In a developing country like Tanzania, the main audiences for the N footprint tool are community leaders, planners, and developers who can impact decision-making and use the calculator to plan positive changes for nitrogen sustainability in the developing world.
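To make the virtual-N-factor idea concrete, here is a minimal sketch of the kind of arithmetic such a calculator performs. The function name and all numeric factors are hypothetical illustrations, not values from the Tanzania tool; the abstract only tells us that separate factors are calculated for fertilized and unfertilized farms.

```python
def food_n_footprint(consumed_kg_n, virtual_n_factor):
    """N footprint of a food item, in kg N.

    The footprint combines the N actually consumed with the N lost to
    the environment during production, where the virtual N factor is
    the kg of N released per kg of N in the consumed food.
    """
    return consumed_kg_n * (1.0 + virtual_n_factor)


# Hypothetical virtual N factors for the same crop grown on fertilized
# vs unfertilized farms (two factor sets, as the abstract describes;
# these particular numbers are made up for illustration).
vnf = {"fertilized": 4.0, "unfertilized": 1.5}

total = (food_n_footprint(1.0, vnf["fertilized"]) +
         food_n_footprint(1.0, vnf["unfertilized"]))
print(total)  # 7.5 kg N for 2 kg N of consumed food
```

Under these made-up factors the fertilized farm contributes 5.0 of the 7.5 kg N, which mirrors the abstract's observation that fertilized farms contribute disproportionately to the food-production footprint even though most farms are unfertilized.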