    Open Science and Research Reproducibility

    Many scientists, journals, and funders are concerned about the low reproducibility of scientific findings. One approach that may serve to improve the reliability and robustness of research is open science. Here I argue that the process of pre-registering study protocols, sharing study materials and data, and posting preprints of manuscripts may serve to improve quality control procedures at every stage of the research pipeline, and in turn improve the reproducibility of published work.

    Improving the Transparency and Robustness of Research

    One of my first editorials as Editor-in-Chief offered guidelines on statistical reporting.1 It made specific recommendations, including avoiding the dichotomization of results into “significant” and “nonsignificant” and reporting effect sizes and confidence intervals. It was in part motivated by a desire to avoid what Gigerenzer2 has described as the use of “statistical rituals”—conducting statistical tests and generating p values without a clear sense of why. Since then, there has been considerable interest in initiatives intended to improve not only the quality of statistical reporting but also the quality of how scientific studies are reported (and indeed conducted) more generally.

    Don’t let the perfect be the enemy of the good

    There is growing interest in the factors that influence research quality and in research culture more generally. Reform must be evidence based, but experimental studies in real-world settings can be challenging. Observational evidence, even if imperfect, can be a valuable and efficient starting point to help identify the most fruitful avenues for meta-research investment.

    A Policy on the Use of Artificial Intelligence and Large Language Models in Peer Review

    Our readers will now be well aware of the astonishing growth in the capability and availability of artificial intelligence (AI) and Large Language Models (LLMs) through platforms such as ChatGPT. This presents both challenges and exciting opportunities for researchers, including the potential use of AI in evaluating submissions we receive to the journal. Time will tell where and how we can use AI to our advantage—in evaluating code that forms part of a submission, for example. This is something we are actively exploring. However, our current policy is that—while we remain open to the potential opportunities offered by AI—we do not support the use of AI or LLMs as a substitute for human expertise in the review process.

    Opening up addiction science

    Understanding the Role of Additives in Tobacco Products

    Tobacco Marketing by Stealth
