115 research outputs found

    A comparison of liquid and solid culture for determining relapse and durable cure in phase III TB trials for new regimens

    Supported by the Global Alliance for TB Drug Development with support from the Bill & Melinda Gates Foundation, the Medical Research Council (MC_UU_12023/27), the European and Developing Countries Clinical Trials Partnership (grant IP.2007.32011.011), the US Agency for International Development, the UK Department for International Development, the Directorate General for International Cooperation of the Netherlands, Irish Aid, the Australia Department of Foreign Affairs and Trade and the National Institutes of Health AIDS Clinical Trials Group; by grants from the National Institute of Allergy and Infectious Diseases (NIAID) (UM1AI068634, UM1AI068636 and UM1AI106701); by NIAID grants to the University of KwaZulu Natal, South Africa, AIDS Clinical Trials Group (ACTG) site 31422 (1U01AI069469), to the Perinatal HIV Research Unit, Chris Hani Baragwanath Hospital, South Africa, ACTG site 12301 (1U01AI069453), and to the Durban International Clinical Trials Unit, South Africa, ACTG site 11201 (1U01AI069426); by Bayer Healthcare through the donation of moxifloxacin; and by Sanofi through the donation of rifampin. Additional grants were from the Chief Scientist Office, Scottish Government, and the British Society for Antimicrobial Chemotherapy.

Background: Tuberculosis kills more people than any other infectious disease, and new regimens are essential. The primary endpoint for confirmatory phase III trials of new regimens is a composite outcome that includes bacteriological treatment failure and relapse, so culture methodology is critical to the primary trial outcome. Patients in clinical trials can have positive cultures after treatment ends that do not necessarily indicate relapse; these were previously ascribed to laboratory cross-contamination or the breakdown of old lesions.
Löwenstein-Jensen (LJ) medium was the previous standard in clinical trials, but almost all current and future trials will use the Mycobacteria Growth Indicator Tube (MGIT) system because of its simplicity and consistency of use, which will affect phase III trial results. LJ was used for the definition of the primary endpoint in the REMoxTB trial, but every culture was also inoculated in parallel into the MGIT system. The data from this trial therefore provide a unique opportunity to investigate and compare the incidence of false ‘isolated positives’ in liquid and solid media and their potential impact on the primary efficacy results.

Methods: All post-treatment positive cultures in the REMoxTB clinical trial were reviewed. Logistic regression models were used to model the incidence of isolated positive cultures on MGIT and LJ.

Results: A total of 12,209 sputum samples were available from 1652 patients; cultures were more often positive on MGIT than on LJ. Of 1322 patients with a favourable trial outcome, 126 (9.5%) had cultures that were positive on MGIT compared to 34 (2.6%) patients with positive cultures on LJ. Among patients with a favourable outcome, the incidence of isolated positives on MGIT differed by study laboratory (p < 0.0001), with 21.9% of these coming from one laboratory that investigated only 4.9% of patients. No other baseline factor predicted isolated positives on MGIT after adjusting for laboratory. There was evidence of clustering of isolated positive cultures in some patients even after adjusting for laboratory (p < 0.0001). The incidence of isolated positives on MGIT did not differ by treatment arm (p = 0.845, unadjusted). Compared to negative MGIT cultures, positive MGIT cultures were more likely to be associated with higher-grade TB symptoms reported within 7 days either side of sputum collection in patients with an unfavourable primary outcome (p < 0.0001) but not in patients with a favourable outcome (p = 0.481).
Conclusions: Laboratory cross-contamination was a likely cause of isolated positive MGIT cultures, which were clustered in some laboratories. Certain patients had repeated positive MGIT cultures that did not meet the definition of a relapse. This pattern was too common to be explained by cross-contamination alone, suggesting that host factors were also responsible. We conclude that MGIT can replace LJ in phase III TB trials, but there are implications for the definition of the primary outcome and for patient management in such trials. Most importantly, the methodologies differ in the incidence of isolated positives and in their capacity for capturing non-tuberculous mycobacteria. This emphasises the importance of effective medical monitoring after treatment ends and of considering clinical signs and symptoms when determining treatment failure and relapse.
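The headline MGIT-versus-LJ contrast can be reproduced from the counts reported above (126 of 1322 favourable-outcome patients with positive cultures on MGIT versus 34 on LJ). A minimal sketch of the rate and crude odds-ratio arithmetic; this ignores the laboratory adjustment and clustering handled by the trial's logistic regression models:

```python
def odds_ratio(pos_a, n_a, pos_b, n_b):
    """Crude odds ratio of the event in group A relative to group B."""
    odds_a = pos_a / (n_a - pos_a)
    odds_b = pos_b / (n_b - pos_b)
    return odds_a / odds_b

n = 1322                    # favourable-outcome patients (from the abstract)
mgit_pos, lj_pos = 126, 34  # patients with post-treatment positive cultures

print(round(100 * mgit_pos / n, 1))  # MGIT rate, % -> 9.5
print(round(100 * lj_pos / n, 1))    # LJ rate, %   -> 2.6
print(round(odds_ratio(mgit_pos, n, lj_pos, n), 1))
```

The crude odds ratio of roughly 4 illustrates the size of the discrepancy between media, but the trial's adjusted models are needed for inference.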

    Baseline and acquired resistance to bedaquiline, linezolid and pretomanid, and impact on treatment outcomes in four tuberculosis clinical trials containing pretomanid

    Bedaquiline (B), pretomanid (Pa) and linezolid (L) are key components of new regimens for treating rifampicin-resistant tuberculosis (TB). However, there is limited information on the global prevalence of resistance to these drugs and the impact of resistance on treatment outcomes. Mycobacterium tuberculosis (MTB) phenotypic drug susceptibility and whole-genome sequence (WGS) data, as well as patient profiles from four pretomanid-containing trials – STAND, Nix-TB, ZeNix and SimpliciTB – were used to investigate the rates of baseline resistance (BR) and acquired resistance (AR) to BPaL drugs, as well as their genetic basis, risk factors and impact on treatment outcomes. Data from >1,000 TB patients enrolled from 2015 to 2020 in 12 countries were assessed. We identified 2 (0.3%) participants with linezolid BR. Pretomanid BR was also rare, with similar rates across TB drug resistance types (0–2.1%). In contrast, bedaquiline BR was more prevalent among participants with highly resistant TB or longer prior treatment histories than among those with newly diagnosed disease (5.2–6.3% vs. 0–0.3%). Bedaquiline BR was a risk factor for bacteriological failure or relapse in Nix-TB/ZeNix: 3/12 (25%, 95% CI 5–57%) participants with vs. 6/185 (3.2%, 1.2–6.9%) without bedaquiline BR. Across trials, we observed no linezolid AR and only 3 cases of bedaquiline AR, including 2 participants with poor adherence. Overall, pretomanid AR was also rare, except in ZeNix patients with bedaquiline BR. WGS analyses revealed novel mutations in canonical resistance genes, and in 7 MTB isolates the genetic determinants could not be identified. The overall low rates of BR to linezolid and pretomanid, and to a lesser extent to bedaquiline, observed in the pretomanid trials support the worldwide implementation of BPaL-based regimens.
Similarly, the overall low AR rates observed suggest that BPaL drugs are better protected in the regimens trialled here than in other regimens that combine bedaquiline with a larger number of less effective drugs.
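The exact binomial interval quoted above (3/12 = 25%, 95% CI 5–57%) can be reproduced with a Clopper-Pearson calculation. A pure-Python sketch that inverts the binomial tail probabilities by bisection (SciPy's `binomtest(...).proportion_ci(method='exact')` returns the same interval):

```python
import math

def binom_tail_ge(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def binom_tail_le(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(0, k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided CI for a binomial proportion, found by bisection."""
    def bisect(f, target):  # f must be increasing in p on [0, 1]
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if f(mid) < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else bisect(lambda p: binom_tail_ge(k, n, p), alpha / 2)
    upper = 1.0 if k == n else bisect(lambda p: -binom_tail_le(k, n, p), -alpha / 2)
    return lower, upper

# 3 of 12 bedaquiline-BR participants with bacteriological failure or relapse
lo_ci, hi_ci = clopper_pearson(3, 12)
print(round(100 * lo_ci), round(100 * hi_ci))  # -> 5 57
```

With only 12 participants in the exposed group, the interval is necessarily wide, which is why the exact method is preferred over a normal approximation here.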

    Retrospective harm benefit analysis of pre-clinical animal research for six treatment interventions

    The harm benefit analysis (HBA) is the cornerstone of animal research regulation and is considered to be a key ethical safeguard for animals. The HBA involves weighing the anticipated benefits of animal research against its predicted harms to animals, but there are doubts about how objective and accountable this process is. The aims of this study were: i. to explore the harms to animals involved in pre-clinical animal studies and to assess these against the benefits for humans accruing from these studies; and ii. to test the feasibility of conducting this type of retrospective HBA.

Data on harms were systematically extracted from a sample of pre-clinical animal studies whose clinical relevance had already been investigated by comparing systematic reviews of the animal studies with systematic reviews of human studies for the same interventions (antifibrinolytics for haemorrhage, bisphosphonates for osteoporosis, corticosteroids for brain injury, tirilazad for stroke, antenatal corticosteroids for neonatal respiratory distress and thrombolytics for stroke). Clinical relevance was also explored in terms of current clinical practice. Harms were categorised for severity using an expert panel. The quality of the research and its impact were considered. Bateson's Cube was used to conduct the HBA.

The most common assessment of animal harms by the expert panel was 'severe'. Reported use of analgesia was rare, and some animals (including most neonates) endured significant procedures with no, or only light, anaesthesia reported. Some animals suffered iatrogenic harms. Many were kept alive for long periods post-experimentally, but only 1% of studies reported post-operative care. A third of studies reported that some animals died prior to endpoints. All the studies were of poor quality.
Having weighed the actual harms to animals against the actual clinical benefits accruing from these studies, and taking into account the quality of the research and its impact, fewer than 7% of the studies were permissible according to Bateson's Cube: only the moderate bisphosphonate studies appeared to minimise harms to animals whilst being associated with benefit for humans.

This is the first time the accountability of the HBA has been systematically explored across a range of pre-clinical animal studies. The regulatory systems in place when these studies were conducted failed to safeguard animals from severe suffering or to ensure that only beneficial, scientifically rigorous research was conducted. Our findings indicate a pressing need to: i. review regulations, particularly those that permit animals to suffer severe harms; ii. reform the processes of prospectively assessing pre-clinical animal studies to make them fit for purpose; and iii. systematically evaluate the benefits of pre-clinical animal research to permit a more realistic assessment of its likely future benefits.

    Harm–benefit analysis – what is the added value? A review of alternative strategies for weighing harms and benefits as part of the assessment of animal research

    Animal experiments are widely required to comply with the 3Rs, to minimise harm to the animals and to serve certain purposes in order to be ethically acceptable. Recently, however, there has been a drift towards adding a so-called harm-benefit analysis as an additional requirement in assessing experiments. According to this, an experiment should only be allowed if there is a positive balance when the expected harm is weighed against the expected benefits. This paper aims to assess the added value of this requirement. Two models, the discourse model and the metric model, are presented. According to the former, the weighing of harms and benefits must be conducted by a committee in which different stakeholders engage in a dialogue. Research into how this works in practice, however, shows that in the absence of an explicit and clearly defined methodology, there are issues of transparency, consistency and fairness. According to the metric model, on the other hand, several dimensions of harms and benefits are defined beforehand and integrated in an explicit weighing scheme. This model, however, makes no real room for ethical deliberation of the sort committees undertake, and it has therefore been criticised for being too technocratic. Also, it is unclear who is to be held accountable for built-in ethical assumptions. Ultimately, we argue that the two models are not mutually exclusive and may be combined to make the most of their advantages while reducing the disadvantages of how harm-benefit analysis is typically undertaken.

    Spot sputum samples are at least as good as early morning samples for identifying Mycobacterium tuberculosis

    Supported by the Global Alliance for TB Drug Development with support from the Bill and Melinda Gates Foundation, the European and Developing Countries Clinical Trials Partnership (Grant IP.2007.32011.011), the US Agency for International Development, the UK Department for International Development, the Directorate General for International Cooperation of the Netherlands, Irish Aid, the Australia Department of Foreign Affairs and Trade, and the National Institutes of Health AIDS Clinical Trials Group. The study was also supported by grants from the National Institute of Allergy and Infectious Diseases (NIAID) (UM1AI068634, UM1AI068636 and UM1AI106701) and by NIAID grants to the University of KwaZulu Natal, South Africa, AIDS Clinical Trials Group (ACTG) site 31422 (1U01AI069469); to the Perinatal HIV Research Unit, Chris Hani Baragwanath Hospital, South Africa, ACTG site 12301 (1U01AI069453); and to the Durban International Clinical Trials Unit, South Africa, ACTG site 11201 (1U01AI069426). Bayer Healthcare donated moxifloxacin and Sanofi donated rifampin.

Background: The use of early morning sputum samples (EMS) to diagnose tuberculosis (TB) can result in treatment delay given the need for the patient to return to the clinic with the EMS, increasing the chance of patients being lost during their diagnostic workup. However, there is little evidence to support the superiority of EMS over spot sputum samples. In this new analysis of the REMoxTB study, we compare the diagnostic accuracy of EMS with spot samples for identifying Mycobacterium tuberculosis pre- and post-treatment.

Methods: Patients who were smear positive at screening were enrolled into the study. Paired sputum samples (one EMS and one spot) were collected at each trial visit pre- and post-treatment. Microscopy and culture on solid LJ and liquid MGIT media were performed on all samples; those missing corresponding paired results were excluded from the analyses.
Results: Data from 1115 pre- and 2995 post-treatment paired samples from 1931 patients enrolled in the REMoxTB study were analysed. Patients were recruited from South Africa (47%), East Africa (21%), India (20%), Asia (11%), and North America (1%); 70% were male, the median age was 31 years (IQR 24–41), and 139 (7%) were co-infected with HIV, with a median CD4 cell count of 399 cells/μL (IQR 318–535). Pre-treatment spot samples had a higher yield of positive Ziehl–Neelsen smears (98% vs. 97%, P = 0.02) and LJ cultures (87% vs. 82%, P = 0.006) than EMS, but there was no difference in positivity by MGIT (93% vs. 95%, P = 0.18). Contaminated and false-positive MGIT results were found more often with EMS than with spot samples. Surprisingly, pre-treatment EMS had a higher smear grading and a shorter time to positivity, by 1 day, than spot samples in MGIT culture (4.5 vs. 5.5 days, P < 0.001). There were no differences in time to positivity in pre-treatment LJ culture, or in post-treatment MGIT or LJ cultures. Comparing EMS and spot samples in those with unfavourable outcomes, there were no differences in smear or culture results, and positive results were not detected earlier in Kaplan–Meier analyses in either EMS or spot samples.

Conclusions: Our data do not support the hypothesis that EMS samples are superior to spot sputum samples in a clinical trial of patients with smear positive pulmonary TB. The observed small differences in mycobacterial burden are of uncertain significance, and EMS samples do not detect post-treatment positives any sooner than spot samples.
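Paired yields like the smear comparison above (98% vs. 97%, P = 0.02) are typically tested with McNemar's test, which uses only the discordant pairs (positive on one sample type but not the other). A small exact-test sketch; the discordant-pair counts below are hypothetical, since the abstract reports only the marginal percentages:

```python
import math

def mcnemar_exact(b, c):
    """Two-sided exact McNemar p-value for discordant pair counts b and c.

    Under the null, each discordant pair is equally likely to fall either
    way, so min(b, c) ~ Binomial(b + c, 0.5); double the lower tail.
    """
    n, k = b + c, min(b, c)
    tail = sum(math.comb(n, i) for i in range(0, k + 1)) / 2**n
    return min(1.0, 2 * tail)

# hypothetical counts: 20 pairs spot-positive/EMS-negative, 7 the reverse
print(round(mcnemar_exact(20, 7), 3))
```

The concordant pairs (positive or negative on both samples) carry no information about which sample type is better, which is why they drop out of the test.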

    Aircraft-based mass balance estimate of methane emissions from offshore gas facilities in the Southern North Sea

    Atmospheric methane (CH4) concentrations have more than doubled since the beginning of the industrial age, making CH4 the second most important anthropogenic greenhouse gas after carbon dioxide (CO2). The oil and gas sector represents one of the major anthropogenic CH4 emitters, as it is estimated to account for 22 % of global anthropogenic CH4 emissions. An airborne field campaign was conducted in April–May 2019 to study CH4 emissions from offshore gas facilities in the Southern North Sea, with the aim of deriving emission estimates using a top-down (measurement-led) approach. We present CH4 fluxes for six UK and five Dutch offshore platforms/platform complexes using the well-established mass balance flux method. We identify specific gas production emissions and emission processes (venting/fugitive or flaring/combustion) using observations of co-emitted ethane (C2H6) and CO2. We compare our top-down estimated fluxes with a ship-based top-down study in the Dutch sector, with bottom-up estimates from a globally gridded annual inventory and UK national annual point-source inventories, and with operator-based reporting for individual Dutch facilities. In this study, we find that all inventories, except for the operator-based facility-level reporting, underestimate measured emissions, with the largest discrepancy observed for the globally gridded inventory. Individual facility reporting, as available for Dutch sites for the specific survey date, shows better agreement with our measurement-based estimates. For all sampled Dutch installations together, we find that our estimated flux of (122.7 ± 9.7) kg h⁻¹ deviates by a factor of 0.7 (0.35–12) from reported values (183.1 kg h⁻¹).
    Comparisons with aircraft observations in two other offshore regions (the Norwegian Sea and the Gulf of Mexico) show that measured, absolute facility-level emission rates agree with the general distribution found in other offshore basins despite different production types (oil, gas) and gas production rates, which vary by two orders of magnitude. Therefore, mitigation is warranted equally across geographies.
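The mass balance flux method referred to above integrates the CH4 enhancement above background, multiplied by the wind component perpendicular to the flight track, over the downwind plume cross-section. A simplified sketch with synthetic numbers; real campaigns interpolate irregular transects onto such a grid and convert measured mole fractions (ppb) to mass densities first:

```python
def mass_balance_flux(enhancement_kg_m3, wind_perp_m_s, cell_area_m2):
    """Emission rate in kg h^-1 from gridded cross-section enhancements."""
    flux_kg_s = sum(e * wind_perp_m_s * cell_area_m2
                    for row in enhancement_kg_m3 for e in row)
    return flux_kg_s * 3600.0  # kg/s -> kg/h

# synthetic 3 x 4 cross-section of enhancement above background
# (kg CH4 per m^3); rows are altitude bins, columns crosswind bins
enh = [[0.0, 2e-6, 2e-6, 0.0],
       [0.0, 4e-6, 4e-6, 0.0],
       [0.0, 1e-6, 1e-6, 0.0]]
wind = 5.0           # m/s, component perpendicular to the flight track
area = 50.0 * 20.0   # m^2 per cell: 50 m crosswind x 20 m vertical

print(round(mass_balance_flux(enh, wind, area), 1))  # -> 252.0
```

The zero-enhancement edge cells illustrate the requirement that the aircraft transects fully bound the plume, both laterally and vertically, for the integral to capture the whole emission.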