17 research outputs found

    Risk of bias reporting in the recent animal focal cerebral ischaemia literature

    BACKGROUND: Findings from in vivo research may be less reliable where studies do not report measures to reduce risks of bias. The experimental stroke community has been at the forefront of implementing changes to improve reporting, but it is not known whether these efforts are associated with continuous improvements. Our aims here were firstly to validate an automated tool to assess risks of bias in published works, and secondly to assess the reporting of measures taken to reduce the risk of bias within recent literature for two experimental models of stroke. METHODS: We developed and used text analytic approaches to automatically ascertain reporting of measures to reduce risk of bias from full-text articles describing animal experiments inducing middle cerebral artery occlusion (MCAO) or modelling lacunar stroke. RESULTS: Compared with previous assessments, there were improvements in the reporting of measures taken to reduce risks of bias in the MCAO literature but not in the lacunar stroke literature. Accuracy of automated annotation of risk of bias in the MCAO literature was 86% (randomization), 94% (blinding) and 100% (sample size calculation); and in the lacunar stroke literature accuracy was 67% (randomization), 91% (blinding) and 96% (sample size calculation). DISCUSSION: There remains substantial opportunity for improvement in the reporting of animal research modelling stroke, particularly in the lacunar stroke literature. Further, automated tools perform sufficiently well to identify whether studies report blinded assessment of outcome, but improvements are required in the tools to ascertain whether randomization and a sample size calculation were reported
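    As a rough illustration of the kind of text-analytic annotation described above (not the authors' published tool), the sketch below flags whether an article's full text appears to report randomization, blinding or a sample size calculation using simple keyword patterns; the patterns, function name and example text are assumptions made for the sketch.

```python
import re

# Illustrative keyword patterns for three risk-of-bias items; the published
# tool is more sophisticated, and these regexes are assumptions for the sketch.
RISK_OF_BIAS_PATTERNS = {
    "randomization": re.compile(r"\brandom(?:i[sz](?:ed|ation)|ly)\b", re.IGNORECASE),
    "blinding": re.compile(r"\bblind(?:ed|ing)\b|\bmasked\b", re.IGNORECASE),
    "sample_size_calculation": re.compile(
        r"\b(?:sample size|power) (?:calculation|analysis)\b", re.IGNORECASE
    ),
}

def annotate_risk_of_bias(full_text: str) -> dict[str, bool]:
    """Return, for each risk-of-bias item, whether the article appears to report it."""
    return {item: bool(pattern.search(full_text))
            for item, pattern in RISK_OF_BIAS_PATTERNS.items()}

if __name__ == "__main__":
    example = ("Animals were randomly allocated to MCAO or sham surgery; "
               "outcome assessment was performed by a blinded observer.")
    print(annotate_risk_of_bias(example))
    # {'randomization': True, 'blinding': True, 'sample_size_calculation': False}
```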

    Development and uptake of an online systematic review platform: the early years of the CAMARADES Systematic Review Facility (SyRF)

    Preclinical research is a vital step in the drug discovery pipeline and, more generally, in helping to better understand human disease aetiology and its management. Systematic reviews (SRs) can be powerful in summarising and appraising this evidence concerning a specific research question, to highlight areas for improvement, areas for further research and areas where evidence may be sufficient to take forward to other research domains, for instance clinical trials. Guidance and tools for preclinical research synthesis remain limited despite their clear utility. We aimed to create an online end-to-end platform, primarily for conducting SRs of preclinical studies, that was flexible enough to support a wide variety of experimental designs, was adaptable to different research questions, would allow users to adopt emerging automated tools, and would support them during their review process using best practice. In this article, we introduce the Systematic Review Facility (https://syrf.org.uk), which was launched in 2016 and designed primarily to support preclinical SRs, from small independent projects to large, crowdsourced projects. We discuss the architecture of the app and its features, including the opportunity to collaborate easily, to efficiently manage projects, to screen and annotate studies for important features (metadata), to extract outcome data into a secure database, and to tailor these steps to each project. We describe how we are working to leverage automation tools and allow the integration of these services to accelerate and automate steps in the systematic review workflow
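    The article describes SyRF's features rather than its implementation, but a minimal sketch of the kind of record-keeping such a platform needs, covering screening decisions from multiple reviewers and reconciliation of disagreements, might look as follows; all class names, fields and the agreement rule are assumptions, not SyRF's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ScreeningDecision:
    reviewer: str
    include: bool

@dataclass
class Study:
    title: str
    decisions: list[ScreeningDecision] = field(default_factory=list)

    def status(self, required_agreements: int = 2) -> str:
        """Dual screening: include or exclude only once enough reviewers agree."""
        includes = sum(d.include for d in self.decisions)
        excludes = sum(not d.include for d in self.decisions)
        if includes >= required_agreements:
            return "included"
        if excludes >= required_agreements:
            return "excluded"
        if includes and excludes:
            return "conflict"   # needs a third reviewer or discussion
        return "pending"

study = Study("MCAO in aged rats")
study.decisions += [ScreeningDecision("reviewer_a", True),
                    ScreeningDecision("reviewer_b", False)]
print(study.status())  # conflict
```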

    Improving our understanding of the in vivo modelling of psychotic disorders: a systematic review and meta-analysis

    Psychotic disorders represent a severe category of mental disorders affecting about one percent of the population. Individuals experience a loss or distortion of contact with reality alongside other symptoms, many of which are still not adequately managed using existing treatments. While animal models of these disorders could offer insights into these disorders and potential new treatments, translation of this knowledge has so far been poor in terms of informing clinical trials and practice. The aim of this project was to improve our understanding of these pre-clinical studies and identify potential weaknesses underlying translational failure. I carried out a systematic search of the literature to provide an unbiased summary of publications reporting animal models of schizophrenia and other psychotic disorders. From these publications, data were extracted to quantify aspects of the field including reported study quality, study characteristics and behavioural outcome data. The behavioural outcome data were then used to calculate estimates of efficacy using random-effects meta-analysis. I identified 3847 publications of relevance, reporting 852 different methods used to induce the model, over 359 different outcomes tested in these models and almost 946 different treatments administered. I show that a large proportion of studies use simple pharmacological interventions to induce their models of these disorders, despite the availability of models using other interventions that are arguably of higher translational relevance. I also show that the reported quality of these studies is low: only 22% of studies report taking measures, such as randomisation and blinding, to reduce the risk of bias; the absence of such measures has been shown to affect the reliability of the results drawn. Through this work it becomes apparent that the literature on animal models of psychotic disorders is vast and that some of the relevant work potentially overlaps with studies describing other conditions. This means that drawing reliable conclusions from these data is affected by what is made available in the literature, how it is reported and identified in a search, and the time that it takes to reach these conclusions. I introduce the idea of using computer-assisted tools to overcome one of these problems in the long term. Translation of results from studies in animals modelling uniquely human psychotic disorders to clinical successes might be improved by better reporting of studies, including publishing of all work carried out, labelling studies more uniformly so that they are identifiable, better reporting of study design, including measures taken to reduce the risk of bias, and focusing on models with greater validity to the human condition
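    For readers unfamiliar with the random-effects meta-analysis mentioned above, a minimal DerSimonian-Laird implementation is sketched below; the effect sizes and variances in the example are invented for illustration and are not data from this thesis.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate using the DerSimonian-Laird estimator of tau^2."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w_fixed = 1.0 / variances                             # fixed-effect weights
    pooled_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    q = np.sum(w_fixed * (effects - pooled_fixed) ** 2)   # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_random = 1.0 / (variances + tau2)
    pooled = np.sum(w_random * effects) / np.sum(w_random)
    se = np.sqrt(1.0 / np.sum(w_random))
    return pooled, se, tau2

# Invented example: standardised mean differences and their variances from 4 studies
effects = [0.8, 1.2, 0.4, 0.9]
variances = [0.10, 0.15, 0.08, 0.12]
print(dersimonian_laird(effects, variances))
```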

    Building a Systematic Online Living Evidence Summary of COVID-19 Research

    Throughout the global coronavirus pandemic, we have seen an unprecedented volume of COVID-19 research publications. This vast body of evidence continues to grow, making it difficult for research users to keep up with the pace of evolving research findings. To enable the synthesis of this evidence for timely use by researchers, policymakers, and other stakeholders, we developed an automated workflow to collect, categorise, and visualise the evidence from primary COVID-19 research studies. We trained a crowd of volunteer reviewers to annotate studies by relevance to COVID-19, study objectives, and methodological approaches. Using these human decisions, we are training machine learning classifiers and applying text-mining tools to continually categorise the findings and evaluate the quality of COVID-19 evidence
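    As a rough sketch of how crowd screening decisions can train a classifier to triage new records (not the project's production pipeline), the snippet below fits a TF-IDF and logistic-regression model on labelled titles; the training examples and labels are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented placeholder records: title text with human include/exclude labels
texts = [
    "SARS-CoV-2 viral load dynamics in hospitalised patients",
    "Effect of exercise on knee osteoarthritis pain",
    "Remdesivir in a Syrian hamster model of COVID-19",
    "Dietary fibre intake and gut microbiome diversity",
]
labels = [1, 0, 1, 0]  # 1 = relevant to COVID-19, 0 = not relevant

# Fit a simple text-classification pipeline on the human decisions
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

new_record = ["Convalescent plasma therapy for severe COVID-19 pneumonia"]
print(model.predict_proba(new_record))  # estimated probability the record is relevant
```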

    The Automated Systematic Search Deduplicator (ASySD): a rapid, open-source, interoperable tool to remove duplicate citations in biomedical systematic reviews

    Background: Researchers performing high-quality systematic reviews search across multiple databases to identify relevant evidence. However, the same publication is often retrieved from several databases. Identifying and removing such duplicates (“deduplication”) can be extremely time-consuming, but failure to remove these citations can lead to the wrongful inclusion of duplicate data. Many existing tools are not sensitive enough, lack interoperability with other tools, are not freely accessible, or are difficult to use without programming knowledge. Here, we report the performance of our Automated Systematic Search Deduplicator (ASySD), a novel tool to perform automated deduplication of systematic searches for biomedical reviews. Methods: We evaluated ASySD’s performance on 5 unseen biomedical systematic search datasets of various sizes (1845–79,880 citations). We compared the performance of ASySD with EndNote’s automated deduplication option and with the Systematic Review Assistant Deduplication Module (SRA-DM). Results: ASySD identified more duplicates than either SRA-DM or EndNote, with a sensitivity in different datasets of 0.95 to 0.99. The false-positive rate was comparable to human performance, with a specificity of > 0.99. The tool took less than 1 h to identify and remove duplicates within each dataset. Conclusions: For duplicate removal in biomedical systematic reviews, ASySD is a highly sensitive, reliable, and time-saving tool. It is open source and freely available online as both an R package and a user-friendly web application
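    ASySD itself is distributed as an R package and web application; purely as an illustration of the matching problem it addresses, the sketch below pairs citations whose titles are near-identical using a standard-library similarity ratio, with an invented threshold and invented records.

```python
from difflib import SequenceMatcher
from itertools import combinations

def likely_duplicates(citations, threshold=0.95):
    """Flag citation pairs whose titles are near-identical (illustrative threshold)."""
    pairs = []
    for a, b in combinations(citations, 2):
        ratio = SequenceMatcher(None, a["title"].lower(), b["title"].lower()).ratio()
        if ratio >= threshold and a.get("year") == b.get("year"):
            pairs.append((a["id"], b["id"], round(ratio, 3)))
    return pairs

records = [
    {"id": "pubmed:1", "title": "Automated deduplication of systematic searches", "year": 2021},
    {"id": "embase:7", "title": "Automated de-duplication of systematic searches", "year": 2021},
    {"id": "scopus:3", "title": "Machine learning for citation screening", "year": 2020},
]
print(likely_duplicates(records))
# [('pubmed:1', 'embase:7', ...)] -- the near-identical pair retrieved from two databases
```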

    EBA3A - Evidence-based approaches assessing the mechanisms of action of antidepressants on mood and behavior.

    Antidepressants are used worldwide for the treatment of different psychiatric conditions. Despite their popularity in therapeutics, the biological mechanisms underlying the effects of antidepressants remain elusive. In the literature, there are concurrent explanatory theories on the mechanisms of action of antidepressants, each focusing on different neurobiological targets. These theories rely mostly on preclinical studies that often display mixed results. Controversial findings may be a consequence of experimental heterogeneity among studies. To identify the sources of heterogeneity and quantify the contribution of different neurobiological targets, such as monoamines, atypical neurotransmitters, neural plasticity and hippocampal neurogenesis, to the effects of antidepressants, we will create protocols and perform systematic reviews and meta-analyses of primary preclinical studies

    A “LIVING” EVIDENCE SUMMARY OF PRIMARY RESEARCH RELATED TO COVID-19

    We aim to generate a “living” evidence summary of all preclinical and clinical primary studies related to COVID-19 for which we will exploit automation tools

    A protocol for systematic review and meta-analysis of data from preclinical studies employing the forced swimming test

    Depressive disorder is a highly prevalent psychiatric disorder that negatively affects emotional well-being and also has a significant impact on health care costs and workplace productivity. Despite the wide range of antidepressants available, they are only marginally effective in patients. This has led to criticism of the abundance of “positive results” in preclinical research, which contrasts with the partial success of antidepressant treatments in clinical trials and in therapeutics. Beyond the limited representativeness of animal models, many different reasons may contribute to the poor translation from preclinical to clinical studies in antidepressant research. Methodological factors such as poor experimental design and low statistical power, as well as confirmation bias and publication bias, may account for the low reproducibility and poor representativeness of animal models of depression. All these factors may contribute to the reduced translation of preclinical studies. In this regard, we aim to estimate the influence of methodological quality on the outcomes of preclinical studies employing animal models, and to determine effect sizes for primary and secondary outcomes according to species, strain, sex, age, route of administration, natural compound class, treatment schedule and the protocols of the animal models of depression
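    Since the protocol plans to compute effect sizes from group-level data reported in forced swimming test studies, a minimal sketch of one common choice, Hedges' g for immobility time, is shown below; the group statistics in the example are invented for illustration.

```python
import math

def hedges_g(mean_treat, sd_treat, n_treat, mean_ctrl, sd_ctrl, n_ctrl):
    """Standardised mean difference with the small-sample (Hedges) correction."""
    pooled_sd = math.sqrt(((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_treat + n_ctrl - 2))
    d = (mean_treat - mean_ctrl) / pooled_sd
    correction = 1 - 3 / (4 * (n_treat + n_ctrl) - 9)   # Hedges' small-sample correction
    return d * correction

# Invented example: immobility time (s) in the forced swimming test
g = hedges_g(mean_treat=120, sd_treat=25, n_treat=10,
             mean_ctrl=160, sd_ctrl=30, n_ctrl=10)
print(round(g, 2))  # negative g: treated animals were immobile for less time
```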