
    Interpretation of active-control randomised trials: the case for a new analytical perspective involving averted events

    Active-control trials, where an experimental treatment is compared with an established treatment, are performed when the inclusion of a placebo control group is deemed to be unethical. For time-to-event outcomes, the primary estimand is usually the rate ratio, or the closely related hazard ratio, comparing the experimental group with the control group. In this article we describe major problems in the interpretation of this estimand, using examples from COVID-19 vaccine and HIV pre-exposure prophylaxis trials. In particular, when the control treatment is highly effective, the rate ratio may indicate that the experimental treatment is clearly statistically inferior even when it is worthwhile from a public health perspective. We argue that it is crucially important to consider averted events as well as observed events in the interpretation of active-control trials. An alternative metric that incorporates this information, the averted events ratio, is proposed and exemplified. Its interpretation is simple and conceptually appealing, namely the proportion of events that would be averted by using the experimental treatment rather than the control treatment. The averted events ratio cannot be directly estimated from the active-control trial, and requires an additional assumption about either: (a) the incidence that would have been observed in a hypothetical placebo arm (the counterfactual incidence) or (b) the efficacy of the control treatment (relative to no treatment) that pertained in the active-control trial. Although estimation of these parameters is not straightforward, it must be attempted in order to draw rational inferences. To date, this method has been applied only within HIV prevention research, but it has wider applicability to treatment trials and other disease areas.
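    The averted events ratio can be made concrete under assumption (b). The sketch below, with hypothetical parameter names and assuming constant incidence rates, derives the implied counterfactual placebo incidence from an assumed control efficacy and then computes the proportion of averted events; it is one plausible formalisation, not the authors' code.

```python
def averted_events_ratio(rate_exp, rate_ctrl, ctrl_efficacy):
    """Averted events ratio under assumption (b): an assumed efficacy of the
    control treatment relative to no treatment.

    rate_exp, rate_ctrl: observed incidence rates in the two arms
    ctrl_efficacy: assumed control efficacy vs no treatment, in (0, 1)
    (Hypothetical parameter names; a sketch, not the authors' method code.)
    """
    # Counterfactual placebo incidence implied by the assumed control efficacy
    rate_placebo = rate_ctrl / (1.0 - ctrl_efficacy)
    # Proportion of the events averted by the control that the experimental
    # treatment would also avert
    return (rate_placebo - rate_exp) / (rate_placebo - rate_ctrl)
```

    This illustrates the abstract's central point: with a control incidence of 1.0 and an experimental incidence of 2.0 events per 100 person-years, the rate ratio of 2.0 suggests clear inferiority, yet if the control's efficacy is 90%, the implied placebo incidence is 10.0 and the averted events ratio is (10 - 2)/(10 - 1) ≈ 0.89, i.e. the experimental treatment still averts about 89% of the events the control averts.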

    Licenced doses of approved COVID-19 vaccines may not be optimal: A review of the early-phase, dose-finding trials

    Although over 13 billion COVID-19 vaccine doses have been administered globally, the issue of whether the optimal doses are being used has received little attention. To address this question we reviewed the reports of early-phase dose-finding trials of the nine COVID-19 vaccines approved by the World Health Organization, extracting information on study design and findings on reactogenicity and early humoral immune response. The number of different doses evaluated for each vaccine varied widely (range 1-7), as did the number of subjects studied per dose (range 15-190). As expected, the frequency and severity of adverse reactions generally increased at higher doses, although most were clinically tolerable. Higher doses also tended to elicit better immune responses, but differences between the highest dose and the second-highest dose evaluated were small, typically less than 1.6-fold for both binding antibody concentration and neutralising antibody titre. All of the trials had at least one important design limitation - few doses evaluated, large gaps between adjacent doses, or an inadequate sample size - although this is not a criticism of the study investigators, who were working under intense time pressures at the start of the epidemic. It is therefore open to question whether the single dose taken forward into clinical efficacy trials, and subsequently authorised by regulatory agencies, was optimal. In particular, our analysis indicates that the recommended doses for some vaccines appear to be unnecessarily high. Although reduced dosing for booster injections is an active area of research, the priming dose also merits study. We conclude by suggesting improvements in the design of future vaccine trials, both for next-generation COVID-19 vaccines and for vaccines against other pathogens.

    Understanding Neural Coding on Latent Manifolds by Sharing Features and Dividing Ensembles

    Systems neuroscience relies on two complementary views of neural data, characterized by single neuron tuning curves and analysis of population activity. These two perspectives combine elegantly in neural latent variable models that constrain the relationship between latent variables and neural activity, modeled by simple tuning curve functions. This has recently been demonstrated using Gaussian processes, with applications to realistic and topologically relevant latent manifolds. Those and previous models, however, missed crucial shared coding properties of neural populations. We propose feature sharing across neural tuning curves, which significantly improves performance and leads to better-behaved optimization. We also propose a solution to the problem of ensemble detection, whereby different groups of neurons, i.e., ensembles, can be modulated by different latent manifolds. This is achieved through a soft clustering of neurons during training, thus allowing for the separation of mixed neural populations in an unsupervised manner. These innovations lead to more interpretable models of neural population activity that train well and perform better even on mixtures of complex latent manifolds. Finally, we apply our method to a recently published grid cell dataset, recovering distinct ensembles, inferring toroidal latents and predicting neural tuning curves, all in a single integrated modeling framework.
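    The ensemble-detection idea rests on soft cluster memberships for neurons. A minimal sketch of that one ingredient (not the paper's Gaussian-process model; function and variable names are illustrative) is a row-wise softmax over per-neuron logits, with one column per candidate latent manifold:

```python
import numpy as np

def soft_ensemble_assignments(logits):
    """Turn per-neuron logits over candidate manifolds into soft ensemble
    memberships via a row-wise softmax (illustrative sketch only; in the
    paper these memberships are learned jointly with the latents)."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)         # rows sum to 1
```

    During training, such logits would be optimized alongside the latent manifolds, so that each neuron's activity is explained as a membership-weighted mixture over manifolds; hard ensemble labels can then be read off as the argmax of each row.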

    Effects of the bioturbating marine yabby Trypaea australiensis on sediment properties in sandy sediments receiving mangrove leaf litter

    Laboratory mesocosm incubations were undertaken to investigate the influence of the burrowing shrimp Trypaea australiensis (marine yabby) on sediment reworking, physical and chemical sediment characteristics, and nutrients in sandy sediments receiving mangrove (Avicennia marina) leaf litter. Mesocosms of sieved, natural T. australiensis-inhabited sands were continuously flushed with fresh seawater and pre-incubated for 17 days before triplicates were assigned to one of four treatments: sandy sediment (S), sediment + yabbies (S+Y), sediment + leaf litter (organic matter; S+OM) and sediment + yabbies + leaf litter (S+Y+OM), and maintained for 55 days. Mangrove leaf litter was added daily to treatments S+OM and S+Y+OM. Luminophores were added to mesocosms to quantify sediment reworking. Sediment samples were collected after the pre-incubation period from a set of triplicate mesocosms to establish initial conditions prior to the imposition of the treatments, and from the treatment mesocosms at the conclusion of the 55-day incubation period. Yabbies demonstrated a clear effect on sediment topography and leaf litter burial through burrow creation and maintenance, creating mounds on the sediment surface ranging in diameter from 3.4 to 12 cm. Within S+Y+OM sediments, leaf litter was consistently removed from the surface to sub-surface layers, with only 7.5% ± 3.6% of the total mass of leaf detritus added to the mesocosms remaining at the surface at the end of the 55-day incubation period. Yabbies significantly decreased sediment wet-bulk density and increased porosity. Additionally, T. australiensis significantly reduced sediment bio-available ammonium (NH4+bio) concentrations and altered the shape of the concentration depth profile in comparison to the non-bioturbated mesocosms, indicating influences on nutrient cycling and sediment-water fluxes. No significant differences in mean apparent biodiffusion coefficients (Db) or mean biotransport coefficients (r) were found between the bioturbated S+Y and S+Y+OM mesocosms. The findings of this study provide further evidence that T. australiensis is a key species in shallow intertidal systems, playing an important role as an ‘ecosystem engineer’ in soft-bottom habitats by significantly altering physical and chemical structures and biogeochemical function.

    Measuring health-related quality of life outcomes in bladder cancer patients using the Bladder Cancer Index (BCI)

    BACKGROUND. Health-related quality of life (HRQOL) has not been adequately measured in bladder cancer. A recently developed reliable and disease-specific quality of life instrument (Bladder Cancer Index, BCI) was used to measure urinary, sexual, and bowel function and bother domains in patients with bladder cancer managed with several different interventions, including cystectomy and endoscopic-based procedures. METHODS. Patients with bladder cancer were identified from a prospective bladder cancer outcomes database and contacted as part of an Institutional Review Board-approved study to assess treatment impact on HRQOL. HRQOL was measured using the BCI across stratified treatment groups. Bivariate and multivariable analyses adjusted for age, gender, income, education, relationship status, and follow-up time were performed to compare urinary, bowel, and sexual domains between treatment groups. RESULTS. In all, 315 bladder cancer patients treated at the University of Michigan completed the BCI in 2004. Significant differences were seen in mean BCI function and bother scores between cystectomy and native bladder treatment groups. In addition, urinary function scores were significantly lower among cystectomy patients treated with continent neobladder compared with those treated with ileal conduit (all pairwise P < .05). CONCLUSIONS. The BCI is responsive to functional and bother differences in patients with bladder cancer treated with different surgical approaches. Significant differences between therapy groups in each of the urinary, bowel, and sexual domains exist. Among patients treated with orthotopic continent urinary diversion, functional impairments related to urinary incontinence and lack of urinary control account for the low observed urinary function scores. Cancer 2007. © 2007 American Cancer Society.

    High incidence of Hepatitis C infection observed in the PROUD study of HIV pre-exposure prophylaxis

    HIV-negative men who have sex with men (MSM) who access pre-exposure prophylaxis (PrEP) report sexual behaviours that could place them at high risk of hepatitis C virus (HCV) infection. We report HCV prevalence and incidence from the PROUD trial of PrEP. PROUD was an open-label, wait-list design randomised trial of HIV PrEP for MSM. Participants were recruited between November 2012 and April 2014, and follow-up continued to October 2016. Initial HCV testing followed national guidelines, with screening "on indication", but was replaced by routine quarterly screening in the latter part of the study. We estimated HCV seroprevalence at enrolment and incidence overall and according to calendar year. 544 participants were recruited to PROUD. 133 (24.4%) were screened for HCV at enrolment, and 490 (90.1%) were tested at least once during follow-up. Seroprevalence at enrolment was 2.1% (11/530; 95% CI: 1.0-3.7%). Median follow-up was 2.6 (IQR: 2.1-3.0) years, with total follow-up of 1188.8 person-years (PY). Twenty-five participants had a new HCV infection during the trial, of which three were re-infections, yielding an incidence rate of 2.1 per 100 PY (25/1188.8; 95% CI: 1.4-3.1). There was some evidence that HCV incidence increased over calendar time (P-value for trend = 0.09), reaching an estimated 4.0 per 100 PY (95% CI: 2.0-8.1) in 2016. In conclusion, participants in PROUD had a high, and possibly increasing, incidence of HCV infection. This high incidence of HCV supports the 2018 BHIVA/BASHH recommendation for quarterly HCV testing among HIV-negative MSM using PrEP in the UK.
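    The reported rate and interval follow from a standard person-years calculation. The sketch below uses a log-normal approximation for the confidence interval (the trial's exact method may differ, but the approximation reproduces the reported figures):

```python
import math

def incidence_rate_ci(events, person_years, z=1.96):
    """Incidence rate per 100 person-years with an approximate 95% CI.

    Uses the log-normal approximation for a Poisson count: the standard
    error of log(rate) is 1/sqrt(events). (A sketch; the published analysis
    may have used an exact Poisson interval instead.)
    """
    rate = events / person_years * 100
    se_log = 1.0 / math.sqrt(events)
    lower = rate * math.exp(-z * se_log)
    upper = rate * math.exp(z * se_log)
    return rate, lower, upper
```

    For the PROUD figures, `incidence_rate_ci(25, 1188.8)` gives approximately 2.1 per 100 PY with a 95% CI of about 1.4 to 3.1, matching the abstract.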

    The virological durability of first-line ART among HIV-positive adult patients in resource limited settings without virological monitoring: a retrospective analysis of DART trial data.

    BACKGROUND: Few low-income countries have virological monitoring widely available. We estimated the virological durability of first-line antiretroviral therapy (ART) after five years of follow-up among adult Ugandan and Zimbabwean patients in the DART study, in which virological assays were conducted retrospectively. METHODS: DART compared clinically driven monitoring with/without routine CD4 measurement. Annual plasma viral load was measured on 1,762 patients. Analytical weights were calculated based on the inverse probability of sampling. Time to virological failure, defined as the first viral load measurement ≥200 copies/mL after 48 weeks of ART, was analysed using Kaplan-Meier plots and Cox regression models. RESULTS: Overall, 65% of DART trial patients were female. Patients initiated first-line ART at a median (interquartile range; IQR) age of 37 (32-42) years and with a median CD4 cell count of 86 (32-140) cells/mm3. After 240 weeks of ART, patients initiating dual-class nucleoside reverse-transcriptase inhibitor (NRTI)-non-nucleoside reverse-transcriptase inhibitor (NNRTI) regimens containing nevirapine + zidovudine + lamivudine had a lower incidence of virological failure than patients on triple-NRTI regimens containing tenofovir + zidovudine + lamivudine (21% vs 40%; hazard ratio (HR) = 0.48, 95% CI: 0.38-0.62; p < 0.0001). In multivariate analyses, female patients (HR = 0.79, 95% CI: 0.65-0.95; p = 0.02), older patients (HR = 0.73 per 10 years, 95% CI: 0.64-0.84; p < 0.0001) and patients with a higher pre-ART CD4 cell count (HR = 0.64 per 100 cells/mm3, 95% CI: 0.54-0.75; p < 0.0001) had a lower incidence of virological failure after adjusting for adherence to ART. No difference in failure rate between the two randomised monitoring strategies was observed (p = 0.25). CONCLUSIONS: The long-term durability of virological suppression on dual-class NRTI-NNRTI first-line ART without virological monitoring is remarkable, and is enabled by high-quality clinical management and a consistent drug supply. To achieve higher rates of virological suppression, viral-load-informed differentiated care may be required. TRIAL REGISTRATION: Prospectively registered on 18/10/2000 as ISRCTN13968779.
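    The analytical weighting described in the methods can be illustrated with a small sketch: when viral load is assayed on only a subsample, each sampled patient in a stratum is weighted by the inverse of that stratum's sampling probability, so the sample represents the full trial population. (Hypothetical function and data; the actual DART weighting model was likely richer than this.)

```python
def inverse_sampling_weights(sampled, population):
    """Per-stratum inverse-probability-of-sampling weights.

    sampled: stratum -> number of patients with viral load measured
    population: stratum -> number of patients in the trial
    (An illustrative sketch, not the DART analysis code.)
    """
    # Weight = 1 / (sampling probability) = population size / sampled size
    return {s: population[s] / sampled[s] for s in population}
```

    For example, `inverse_sampling_weights({"site_A": 50}, {"site_A": 200})` returns a weight of 4.0 for that stratum: each sampled patient stands in for four trial patients in the weighted Kaplan-Meier and Cox analyses.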

    Delays in diagnosis and bladder cancer mortality

    BACKGROUND: Mortality from invasive bladder cancer is common, even with high-quality care. Thus, the best opportunities to improve outcomes may precede the diagnosis. Although screening currently is not recommended, better medical care of patients who are at risk (ie, those with hematuria) has the potential to improve outcomes. METHODS: The authors used the Surveillance, Epidemiology, and End Results-Medicare linked database for the years 1992 through 2002 to identify 29,740 patients who had hematuria in the year before a bladder cancer diagnosis and grouped them according to the interval between their first claim for hematuria and their bladder cancer diagnosis. Cox proportional hazards models were fitted to assess relations between these intervals and bladder cancer mortality, adjusting first for patient demographics and then for disease severity. Adjusted logistic models were used to estimate the patient's probability of receiving a major intervention. RESULTS: Patients (n = 2084) who had a delay of 9 months were more likely to die from bladder cancer compared with patients who were diagnosed within 3 months (adjusted hazard ratio [HR], 1.34; 95% confidence interval [CI], 1.20-1.50). This risk was not markedly attenuated after adjusting for disease stage and tumor grade (adjusted HR, 1.29; 95% CI, 1.14-1.45). In fact, the effect was strongest among patients who had low-grade tumors (adjusted HR, 2.11; 95% CI, 1.69-2.64) and low-stage disease (ie, a tumor [T] classification of Ta or tumor in situ; adjusted HR, 2.02; 95% CI, 1.54-2.64). CONCLUSIONS: A delay in the diagnosis of bladder cancer increased the risk of death from disease independent of tumor grade and/or disease stage. Understanding the mechanisms that underlie these delays may improve outcomes among patients with bladder cancer. Cancer 2010. © 2010 American Cancer Society.

    “Take Off 4-Health”: Nutrition Education Curriculum for a Healthy Lifestyle Camp for Overweight Youth

    There is evidence that residential summer weight loss camps can be effective to initiate or support the small-change approach to addressing childhood obesity. This report describes the development and evaluation of nutrition education for overweight adolescents attending a three-week healthy lifestyle camp. Campers were given a diet prescription based on MyPyramid and self-selected their meals and snacks, which were served family style. The curriculum included eating strategies known to contribute to healthy weight in youth. Campers demonstrated improved ability to estimate portion sizes. Thirty-four campers completed the three-week experience with a weight loss considered to be safe. Note: the deposited item is not the final published version, but rather is the last revised manuscript sent to the publisher.