
    Two Years Later: Journals Are Not Yet Enforcing the ARRIVE Guidelines on Reporting Standards for Pre-Clinical Animal Studies

    There is growing concern that poor experimental design and a lack of transparent reporting contribute to the frequent failure of pre-clinical animal studies to translate into treatments for human disease. In 2010, the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines were introduced to help improve reporting standards. They were published in PLOS Biology and endorsed by funding agencies and publishers and their journals, including PLOS, Nature research journals, and other top-tier journals. Yet our analysis of papers published in PLOS and Nature journals indicates that there has been very little improvement in reporting standards since then. This suggests that authors, referees, and editors are generally ignoring the guidelines, and that the editorial endorsement has yet to be effectively implemented.

    High impact = high statistical standards? Not necessarily so.

    What are the statistical practices of articles published in journals with a high impact factor? Are there differences compared with articles published in journals with somewhat lower impact factors that have adopted editorial policies to reduce the impact of the limitations of Null Hypothesis Significance Testing? To investigate these questions, the current study analyzed all articles related to psychological, neuropsychological, and medical issues published in 2011 in four journals with high impact factors (Science, Nature, The New England Journal of Medicine, and The Lancet) and three journals with relatively lower impact factors (Neuropsychology, Journal of Experimental Psychology: Applied, and the American Journal of Public Health). Results show that Null Hypothesis Significance Testing without any use of confidence intervals, effect sizes, prospective power, or model estimation is the prevalent statistical practice in articles published in Nature (89%), followed by Science (42%). By contrast, in all the other journals, with both high and lower impact factors, most articles report confidence intervals and/or effect size measures. We interpret these differences as consequences of the editorial policies adopted by the journal editors, which are probably the most effective means of improving statistical practices in journals with high or low impact factors.
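
    The contrast the abstract draws can be made concrete. Below is a minimal Python sketch, using entirely hypothetical data rather than anything from the study, of the difference between NHST-only reporting and reporting an effect size with a confidence interval; the interval uses a common large-sample approximation for the standard error of Cohen's d.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    treatment = rng.normal(loc=0.5, scale=1.0, size=40)  # hypothetical samples
    control = rng.normal(loc=0.0, scale=1.0, size=40)

    # NHST-only reporting: a p-value with no sense of magnitude or precision.
    t, p = stats.ttest_ind(treatment, control)
    print(f"NHST only: t = {t:.2f}, p = {p:.3f}")

    # Richer reporting: Cohen's d (pooled SD) with an approximate 95% CI.
    n1, n2 = len(treatment), len(control)
    pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                         (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
    d = (treatment.mean() - control.mean()) / pooled_sd
    se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    print(f"Effect size: d = {d:.2f}, "
          f"95% CI [{d - 1.96 * se_d:.2f}, {d + 1.96 * se_d:.2f}]")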

    The influence of journal submission guidelines on authors' reporting of statistics and use of open research practices.

    From January 2014, Psychological Science introduced new submission guidelines that encouraged the use of effect sizes, estimation, and meta-analysis (the "new statistics"), required extra methodological detail, and offered badges for the use of open science practices. We investigated the use of these practices in empirical articles published by Psychological Science and, for comparison, by the Journal of Experimental Psychology: General, during the period January 2013 to December 2015. The use of null hypothesis significance testing (NHST) was extremely high at all times and in both journals. In Psychological Science, the use of confidence intervals increased markedly overall, from 28% of articles in 2013 to 70% in 2015, as did the availability of open data (3% to 39%) and open materials (7% to 31%). The other journal showed smaller or much smaller changes. Our findings suggest that journal-specific submission guidelines may encourage desirable changes in authors' practices.

    Improving our understanding of the in vivo modelling of psychotic disorders: a systematic review and meta-analysis

    Psychotic disorders represent a severe category of mental disorders affecting about one percent of the population. Individuals experience a loss or distortion of contact with reality alongside other symptoms, many of which are still not adequately managed by existing treatments. While animal models of these disorders could offer insights into the disorders and potential new treatments, translation of this knowledge has so far been poor in terms of informing clinical trials and practice. The aim of this project was to improve our understanding of these pre-clinical studies and to identify potential weaknesses underlying translational failure. I carried out a systematic search of the literature to provide an unbiased summary of publications reporting animal models of schizophrenia and other psychotic disorders. From these publications, data were extracted to quantify aspects of the field, including the reported quality of studies, study characteristics, and behavioural outcome data. The behavioural outcome data were then used to calculate estimates of efficacy using random-effects meta-analysis. I identified 3847 relevant publications, which reported 852 different methods of inducing the models, over 359 different outcomes tested in them, and almost 946 different treatments. I show that a large proportion of studies use simple pharmacological interventions to induce their models of these disorders, despite the availability of models using other interventions that are arguably of higher translational relevance. I also show that the reported quality of these studies is low: only 22% of studies report taking measures, such as randomisation and blinding, to reduce the risk of bias, which has been shown to affect the reliability of results. Through this work it becomes apparent that the literature on animal models of psychotic disorders is vast, and that some of the relevant work potentially overlaps with studies describing other conditions. Drawing reliable conclusions from these data is therefore affected by what is made available in the literature, how it is reported and identified in a search, and the time that it takes to reach those conclusions. I introduce the idea of using computer-assisted tools to overcome one of these problems in the long term. Translation of results from studies of animals modelling uniquely human psychotic disorders into clinical successes might be improved by better reporting, including publishing all work carried out, labelling studies more uniformly so that they are identifiable, better reporting of study design (particularly of measures taken to reduce the risk of bias), and focusing on models with greater validity to the human condition.
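
    As an illustration of the pooling step described above, the following minimal Python sketch implements a random-effects meta-analysis with the DerSimonian-Laird estimator, a standard choice for such models (whether the thesis used this particular estimator is an assumption); the effect sizes and variances are hypothetical placeholders, not data extracted in the review.

    import numpy as np

    # Hypothetical study-level effect sizes (e.g. standardised mean
    # differences) and their variances; placeholders, not the thesis data.
    effects = np.array([0.42, 0.61, 0.30, 0.75, 0.18])
    variances = np.array([0.04, 0.09, 0.05, 0.12, 0.06])

    # Fixed-effect weights and Cochran's Q for between-study heterogeneity.
    w = 1.0 / variances
    mu_fixed = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - mu_fixed) ** 2)

    # DerSimonian-Laird estimate of the between-study variance tau^2.
    k = len(effects)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)

    # Random-effects pooled estimate and 95% confidence interval.
    w_re = 1.0 / (variances + tau2)
    mu = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    print(f"Pooled effect = {mu:.2f}, "
          f"95% CI [{mu - 1.96 * se:.2f}, {mu + 1.96 * se:.2f}], "
          f"tau^2 = {tau2:.3f}")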