113 research outputs found

    Use of Composite End Points in Early and Intermediate Age-Related Macular Degeneration Clinical Trials: State-of-the-Art and Future Directions

    The slow progression of early AMD stages to advanced AMD requires the use of surrogate endpoints in clinical trials. The use of combined endpoints may allow for shorter and smaller trials due to increased precision. We performed a literature search for the use of composite endpoints as primary outcome measures in clinical studies of early AMD stages. PubMed was searched for composite endpoints used in early/intermediate AMD studies published during the last 10 years. A total of 673 articles of interest were identified. After reviewing abstracts and applicable full-text articles, 33 articles were eligible and thus included in the qualitative synthesis. The main composite endpoint categories were: combined structural and functional endpoints, combined structural endpoints, combined functional endpoints, and combined multi-categorical endpoints. The majority of the studies included binary composite endpoints. The literature lacked sensitivity analyses of the different endpoints against accepted outcomes (i.e., progression). Various composite outcome measures have been used, but there is a lack of standardization. To date, no agreement exists on the optimal approach to implementing combined endpoints in clinical studies of early AMD stages, and no surrogate endpoints have been accepted for AMD progression.
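The binary composite endpoints mentioned above can be illustrated with a minimal sketch. The component measures and thresholds below are hypothetical, chosen only to show the "any-of" logic of a binary composite; they are not drawn from any of the reviewed studies.

```python
# Minimal sketch of a binary composite endpoint combining one structural and
# one functional component. Component names and thresholds are hypothetical.
def composite_progression(drusen_growth_um: float, bcva_loss_letters: int,
                          drusen_thresh: float = 25.0, bcva_thresh: int = 5) -> bool:
    """A participant meets the composite endpoint if EITHER component
    crosses its prespecified threshold (an 'any-of' binary composite)."""
    return drusen_growth_um >= drusen_thresh or bcva_loss_letters >= bcva_thresh

# Structural progression alone is enough to trigger the composite endpoint.
print(composite_progression(30.0, 0))   # True
print(composite_progression(10.0, 2))   # False
```

A composite of this kind gains statistical precision because more participants reach the endpoint within a fixed follow-up period, at the cost of mixing components with potentially different clinical meaning.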

    Partial Deletion of Chromosome 8 β-defensin Cluster Confers Sperm Dysfunction and Infertility in Male Mice

    β-defensin peptides are a family of antimicrobial peptides present at mucosal surfaces, with the main site of expression under normal conditions in the male reproductive tract. Although they kill microbes in vitro and interact with immune cells, the precise role of these genes in vivo remains uncertain. We show here that homozygous deletion of a cluster of nine β-defensin genes (DefbΔ9) in the mouse results in male sterility. Sperm derived from the mutants have reduced motility and increased fragility. Epididymal sperm isolated from the cauda normally require capacitation to induce the acrosome reaction, but sperm from the mutants demonstrate precocious capacitation and an increased spontaneous acrosome reaction compared with wild-types, yet have a reduced ability to bind the zona pellucida of oocytes. Ultrastructural examination reveals a defect in the microtubule structure of the axoneme, with increased disintegration in mutant-derived sperm in the cauda region of the epididymis, but not in the caput region or the testes. Consistent with premature acrosome reaction, sperm from mutant animals have significantly increased intracellular calcium content. Thus, we demonstrate in vivo that β-defensins are essential for successful sperm maturation, and that their disruption leads to altered intracellular calcium, inappropriate spontaneous acrosome reaction, and profound male infertility.

    Challenges, facilitators and barriers to screening study participants in early disease stages – experience from the MACUSTAR study

    BACKGROUND: Recruiting asymptomatic participants with early disease stages into studies is challenging, and little is known about facilitators of and barriers to screening and recruitment of study participants. We therefore assessed factors associated with screening rates in the MACUSTAR study, a multi-centre, low-interventional cohort study of early stages of age-related macular degeneration (AMD). METHODS: Screening rates per clinical site and per week were compiled, and applicable recruitment factors were assigned to the respective time periods. A generalized linear mixed-effects model including the most relevant recruitment factors, identified via in-depth interviews with study personnel, was fitted to the screening data. Only participants with intermediate AMD were considered. RESULTS: A total of 766 individual screenings within 87 weeks were available for analysis. The mean screening rate was 0.6 ± 0.9 screenings per week across all sites. Participation in investigator teleconferences (relative risk increase 1.466, 95% CI [1.018-2.112]), public holidays (relative risk decrease 0.466, 95% CI [0.367-0.591]) and reaching 80% of the site's recruitment target (relative risk decrease 0.699, 95% CI [0.367-0.591]) were associated with the number of screenings at the individual site level. CONCLUSIONS: Careful planning of screening activities is necessary when recruiting participants with early disease stages in multi-centre observational or low-interventional studies. Conducting teleconferences with local investigators can increase screening rates. When planning recruitment, seasonal and saturation effects at the clinical site level need to be taken into account. TRIAL REGISTRATION: ClinicalTrials.gov NCT03349801. Registered on 22 November 2017.
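The relative risks reported above act multiplicatively on the weekly screening rate. A small sketch, using the point estimates from the abstract; combining factors by simple multiplication is an illustrative assumption here, not the study's fitted mixed-effects model:

```python
# Expected weekly screenings under combinations of recruitment factors,
# treating the reported relative risks as multiplicative rate ratios.
# Base rate and point estimates are from the abstract; multiplying them
# together is an illustrative simplification, not the study's model.
BASE_RATE = 0.6  # mean screenings per site per week
RATE_RATIO = {
    "investigator_teleconference": 1.466,
    "public_holiday": 0.466,
    "recruitment_target_80pct": 0.699,
}

def expected_rate(base: float, active_factors: list) -> float:
    rate = base
    for factor in active_factors:
        rate *= RATE_RATIO[factor]
    return rate

# A teleconference week raises the expected rate from 0.6 to roughly 0.88.
print(round(expected_rate(BASE_RATE, ["investigator_teleconference"]), 2))
```

Under this reading, a site that has both reached 80% of its target and hit a public-holiday week would be expected to screen at roughly a third of its baseline rate.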

    Ten simple rules for implementing open and reproducible research practices after attending a training course

    Open, reproducible, and replicable research practices are a fundamental part of science. Training is often organized at a grassroots level, offered by early career researchers for early career researchers. Buffet-style courses that cover many topics can inspire participants to try new things; however, they can also be overwhelming. Participants who want to implement new practices may not know where to start once they return to their research team. We describe ten simple rules to guide participants of relevant training courses in implementing robust research practices in their own projects once they return to their research group. These include (1) prioritizing and planning which practices to implement, which involves obtaining support and convincing others involved in the research project of the added value of implementing new practices; (2) managing problems that arise during implementation; and (3) making reproducible research and open science practices an integral part of a future research career. We also outline strategies that course organizers can use to prepare participants for implementation and support them during this process.

    Eleven strategies for making reproducible research and open science training the norm at research institutions

    Reproducible research and open science practices have the potential to accelerate scientific progress by allowing others to reuse research outputs, and by promoting rigorous research that is more likely to yield trustworthy results. However, these practices are uncommon in many fields, so there is a clear need for training that helps and encourages researchers to integrate reproducible research and open science practices into their daily work. Here, we outline eleven strategies for making training in these practices the norm at research institutions. The strategies, which emerged from a virtual brainstorming event organized in collaboration with the German Reproducibility Network, are concentrated in three areas: (i) adapting research assessment criteria and program requirements; (ii) offering training; (iii) building communities. We provide a brief overview of each strategy, offer tips for implementation, and provide links to resources. We also highlight the importance of allocating resources and monitoring impact. Our goal is to encourage researchers - in their roles as scientists, supervisors, mentors, instructors, and members of curriculum, hiring or evaluation committees - to think creatively about the many ways they can promote reproducible research and open science practices in their institutions.

    Many Labs 5: Testing pre-data-collection peer review as an intervention to increase replicability

    Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
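Pooling correlations across several replication attempts, as in the cumulative analysis described above, is commonly done on the Fisher z scale. A generic meta-analytic sketch with hypothetical inputs; this is not the study's preregistered analysis:

```python
import math

# Fixed-effect pooling of correlation coefficients via Fisher's z transform,
# weighting each study by n - 3 (the inverse variance of z). Generic sketch
# with made-up inputs, not the preregistered Many Labs 5 analysis.
def fisher_z(r: float) -> float:
    return math.atanh(r)

def pooled_r(studies) -> float:
    """studies: list of (r, n) pairs; returns the pooled correlation."""
    weights = [n - 3 for _, n in studies]
    z_bar = (sum(w * fisher_z(r) for (r, _), w in zip(studies, weights))
             / sum(weights))
    return math.tanh(z_bar)

# Hypothetical example: three labs with small effects; the pooled estimate
# sits between the individual correlations, closest to the largest sample.
print(round(pooled_r([(0.04, 300), (0.05, 1200), (0.11, 250)]), 3))
```

Because the weight is roughly proportional to sample size, a large multi-laboratory replication dominates the pooled estimate, which is why cumulative estimates like those above can be very precise even when individual effects are small.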

    Crowdsourcing hypothesis tests: Making transparent how design choices shape research results

    To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses and a lack of support for three hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
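The d values discussed above are standardized mean differences computed separately for each team's materials. A self-contained sketch of that computation; the data below are fabricated purely for illustration:

```python
import math
import statistics

# Cohen's d (pooled-SD standardized mean difference) for one set of
# study materials. All data below are made up for illustration only.
def cohens_d(treatment, control) -> float:
    nt, nc = len(treatment), len(control)
    pooled_var = (((nt - 1) * statistics.variance(treatment)
                   + (nc - 1) * statistics.variance(control))
                  / (nt + nc - 2))
    return (statistics.mean(treatment) - statistics.mean(control)) / math.sqrt(pooled_var)

# Two materials sets testing the same hypothesis can yield effects of
# opposite sign, as the abstract reports for four of five hypotheses.
print(round(cohens_d([5.1, 5.4, 4.9, 5.6], [4.8, 5.0, 4.7, 5.2]), 2))  # positive
print(round(cohens_d([4.6, 4.9, 4.7, 5.0], [5.0, 5.2, 4.9, 5.3]), 2))  # negative
```

Computing d per materials set, then comparing across teams, is what makes the design-choice variability in the crowdsourced study directly visible.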