    Probing academic consensus on COVID-19 mitigation: are lockdown policies favoured mainly in high-income countries?

    Lockdown policies are thought to reflect the scientific consensus. But how do we measure that consensus? Daniele Fanelli (LSE) set up a site that enables academics to give their views anonymously on the ‘focused protection’ model endorsed by the ‘Great Barrington Declaration’, and found striking differences both across countries and between genders

    Are public health policies keeping up with shifting scientific consensus? The case of vitamin D

    Arguing that vitamin D can help avoid bad COVID outcomes is widely dismissed as misinformation. Yet the latest results of the covidConsensus.org project tell a different story, says Daniele Fanelli (LSE)

    Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data

    The growing competition and “publish or perish” culture in academia might conflict with the objectivity and integrity of research, because it forces scientists to produce “publishable” results at all costs. Papers are less likely to be published and to be cited if they report “negative” results (results that fail to support the tested hypothesis). Therefore, if publication pressures increase scientific bias, the frequency of “positive” results in the literature should be higher in the more competitive and “productive” academic environments. This study verified this hypothesis by measuring the frequency of positive results in a large random sample of papers with a corresponding author based in the US. Across all disciplines, papers were more likely to support a tested hypothesis if their corresponding authors were working in states that, according to NSF data, produced more academic papers per capita. The size of this effect increased when controlling for each state's per capita R&D expenditure and for study characteristics that previous research showed to correlate with the frequency of positive results, including discipline and methodology. Although the confounding effect of institutions' prestige could not be excluded (researchers in the more productive universities could be the most clever and successful in their experiments), these results support the hypothesis that competitive academic environments increase not only scientists' productivity but also their bias. The same phenomenon might be observed in other countries where academic competition and pressures to publish are high

    Misconduct policies, academic culture and career stage, not gender or pressures to publish, affect scientific integrity

    The honesty and integrity of scientists is widely believed to be threatened by pressures to publish, unsupportive research environments, and other structural, sociological and psychological factors. Belief in the importance of these factors has inspired major policy initiatives, but evidence to support them is either non-existent or derived from self-reports and other sources that have known limitations. We used a retrospective study design to verify whether risk factors for scientific misconduct could predict the occurrence of retractions, which are usually the consequence of research misconduct, or corrections, which are honest rectifications of minor mistakes. Bibliographic and personal information were collected on all co-authors of papers that had been retracted or corrected in 2010-2011 (N=611 and N=2226 papers, respectively) and on authors of control papers matched by journal and issue (N=1181 and N=4285 papers, respectively), and were analysed with conditional logistic regression. Results, which avoided several limitations of past studies and are robust to different sampling strategies, support the notion that scientific misconduct is more likely in countries that lack research integrity policies, in countries where individual publication performance is rewarded with cash, in cultures and situations where mutual criticism is hampered, and in the earliest phases of a researcher’s career. The hypothesis that males might be prone to scientific misconduct was not supported, and the widespread belief that pressures to publish are a major driver of misconduct was largely contradicted: high-impact and productive researchers, and those working in countries in which pressures to publish are believed to be higher, are less likely to produce retracted papers, and more likely to correct them. Efforts to reduce and prevent misconduct, therefore, might be most effective if focused on promoting research integrity policies, improving mentoring and training, and encouraging transparent communication amongst researchers
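    The matched-control design described above can be made concrete with a small sketch. In a conditional logistic regression, each matched set (one retracted or corrected paper plus its journal-and-issue-matched controls) contributes the probability that the observed case, rather than one of its controls, is the case, given the covariates. The function and data below are illustrative assumptions, not the study's actual code or variables:

```python
import math

def matched_set_likelihood(beta, case_x, control_xs):
    """Conditional likelihood contribution of one matched set: the
    probability that the observed case paper, rather than one of its
    journal/issue-matched controls, is the case, given the covariates."""
    def score(x):
        return math.exp(sum(b * xi for b, xi in zip(beta, x)))
    denom = score(case_x) + sum(score(x) for x in control_xs)
    return score(case_x) / denom

# One hypothetical matched set with a single covariate: an early-career
# indicator (1 = early career, 0 = not). With a positive coefficient,
# early-career authorship raises the odds of being the retracted paper.
contribution = matched_set_likelihood([0.9], case_x=[1.0],
                                      control_xs=[[0.0], [0.0]])
```

    Maximizing the product of such contributions over all matched sets yields the coefficient estimates; the matching itself absorbs journal- and issue-level confounding, which is why no intercept per set is needed.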

    What difference might retractions make? An estimate of the potential epistemic cost of retractions on meta-analyses

    The extent to which a retraction might require revising previous scientific estimates and beliefs – which we define as the epistemic cost – is unknown. We collected a sample of 229 meta-analyses published between 2013 and 2016 that had cited a retracted study, assessed whether this study was included in the meta-analytic estimate and, if so, re-calculated the summary effect size without it. The majority (68% of N = 229) of retractions had occurred at least one year prior to the publication of the citing meta-analysis. In 53% of these avoidable citations, the retracted study was cited as a candidate for inclusion, and in only 34% of these meta-analyses (13% of the total) was the study explicitly excluded because it had been retracted. Meta-analyses that included retracted studies were published in journals with significantly lower impact factors. Summary estimates without the retracted study were lower than the original if the retraction was due to issues with data or results and higher otherwise, but the effect was small. We conclude that meta-analyses have a problematically high probability of citing retracted articles and of including them in their pooled summaries, but the overall epistemic cost is contained
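    The re-calculation step can be sketched as a leave-one-out recomputation of an inverse-variance-weighted summary effect. The effect sizes and variances below are invented for illustration (the study's actual data and models differ):

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) summary: weighted mean of
    study effect sizes, with weights 1/variance."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

# Hypothetical meta-analysis of five standardized mean differences,
# where the third study (index 2) is later retracted.
effects = [0.42, 0.35, 0.80, 0.28, 0.50]
variances = [0.04, 0.05, 0.02, 0.06, 0.03]

with_retracted, _ = pooled_effect(effects, variances)
without_retracted, _ = pooled_effect(effects[:2] + effects[3:],
                                     variances[:2] + variances[3:])
# The "epistemic cost" of keeping the retracted study is the shift
# in the summary estimate once it is removed.
shift = with_retracted - without_retracted
```

    Repeating this subtraction for every affected meta-analysis gives the distribution of shifts that the paper summarizes.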

    Is science really facing a reproducibility crisis, and do we need it to?

    Efforts to improve the reproducibility and integrity of science are typically justified by a narrative of crisis, according to which most published results are unreliable due to growing problems with research and publication practices. This article provides an overview of recent evidence suggesting that this narrative is mistaken, and argues that a narrative of epochal changes and empowerment of scientists would be more accurate, inspiring, and compelling

    In silico screening of mutational effects on enzyme-proteic inhibitor affinity: a docking-based approach

    Background: Molecular recognition between enzymes and proteic inhibitors is crucial for the normal functioning of many biological pathways. Mutations in either the enzyme or the inhibitor protein often lead to a modulation of the binding affinity with no major alterations in the 3D structure of the complex. Results: In this study, a rigid-body docking-based approach was successfully probed for its ability to predict the effects of single and multiple point mutations on the binding energetics in three enzyme-proteic inhibitor systems. The only requirement of the approach is an accurate structural model of the complex between the wild-type forms of the interacting proteins, under the assumption that the architecture of the mutated complexes is almost the same as that of the wild type and that no major conformational changes occur upon binding. The method was applied to 23 variants of the ribonuclease inhibitor-angiogenin complex, to 15 variants of the barnase-barstar complex, and to 8 variants of the bovine pancreatic trypsin inhibitor-β trypsin system, yielding thermodynamic and kinetic estimates consistent with in vitro data. Furthermore, simulations with and without explicit water molecules at the protein-protein interface suggested that they should be included only when their positions are well defined in both the wild type and the mutants and when they prove relevant to the modulation of mutational effects on the association process. Conclusion: The correlative models built in this study allow predictions of mutational effects on the thermodynamics and kinetics of association for three substantially different systems, and represent important extensions of our computational approach to cases in which absolute free energies cannot be estimated. Moreover, this study is the first extensive evaluation in the literature of the correlative weights of the single components of the ZDOCK score on the thermodynamics and kinetics of binding of protein mutants compared to the native state. Finally, the results of this study corroborate and extend a previously developed quantitative model for in silico prediction of absolute protein-protein binding affinities spanning a wide range of values, i.e. from -10 up to -21 kcal/mol. The computational approach is simple and fast, and can be used for structure-based design of protein-protein complexes and for in silico screening of mutational effects on protein-protein recognition
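    The "correlative model" idea — relating docking-score values to measured binding free energies — can be illustrated with a minimal ordinary-least-squares fit. The score values and ΔG values below are made up for illustration; the actual study weighted the individual ZDOCK score components against experimental affinities:

```python
def ols_fit(xs, ys):
    """Ordinary least squares for y = slope*x + intercept: the simplest
    form of correlative model linking a docking score to binding
    free energy."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical docking scores for mutant complexes and their measured
# binding free energies (kcal/mol), within the -10 to -21 range cited.
scores = [10.0, 14.0, 18.0, 22.0]
dG = [-11.0, -13.0, -15.0, -17.0]
slope, intercept = ols_fit(scores, dG)
predicted = slope * 16.0 + intercept  # predict dG for a new mutant's score
```

    Once calibrated on wild-type and known mutant complexes, such a fit can screen further in silico mutants without computing absolute free energies.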

    How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One.

    The frequency with which scientists fabricate and falsify data, or commit other forms of scientific misconduct, is a matter of controversy. Many surveys have asked scientists directly whether they have committed or know of a colleague who committed research misconduct, but their results appeared difficult to compare and synthesize. This is the first meta-analysis of these surveys. To standardize outcomes, the number of respondents who recalled at least one incident of misconduct was calculated for each question, and the analysis was limited to behaviours that distort scientific knowledge: fabrication, falsification, "cooking" of data, etc. Survey questions on plagiarism and other forms of professional misconduct were excluded. The final sample consisted of 21 surveys that were included in the systematic review, and 18 in the meta-analysis. A pooled weighted average of 1.97% (N = 7, 95% CI: 0.86-4.45) of scientists admitted to having fabricated, falsified or modified data or results at least once (a serious form of misconduct by any standard) and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91-19.72) for falsification, and up to 72% for other questionable research practices. Meta-regression showed that self-report surveys, surveys using the words "falsification" or "fabrication", and mailed surveys yielded lower percentages of misconduct. When these factors were controlled for, misconduct was reported more frequently by medical/pharmacological researchers than by others. Considering that these surveys ask sensitive questions and have other limitations, it appears likely that this is a conservative estimate of the true prevalence of scientific misconduct
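    The pooling step can be sketched as an inverse-variance-weighted average of logit-transformed admission proportions (a fixed-effect simplification; the published analysis used a random-effects model). The survey counts below are hypothetical:

```python
import math

def pooled_proportion(events, totals):
    """Inverse-variance weighted average of logit-transformed
    proportions, back-transformed to a percentage."""
    logits, weights = [], []
    for x, n in zip(events, totals):
        p = x / n
        logits.append(math.log(p / (1 - p)))
        # variance of a logit-transformed proportion: 1/(n*p) + 1/(n*(1-p))
        weights.append(1.0 / (1.0 / (n * p) + 1.0 / (n * (1 - p))))
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 100 * math.exp(pooled) / (1 + math.exp(pooled))

# Hypothetical admission counts from three self-report surveys:
# 4/200, 7/350 and 2/150 respondents admitting fabrication at least once.
rate = pooled_proportion(events=[4, 7, 2], totals=[200, 350, 150])
```

    The logit transform keeps the pooled value inside (0, 100) and stabilizes the variance for the small proportions typical of these surveys.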

    Testing hypotheses on risk factors for scientific misconduct via matched-control analysis of papers containing problematic image duplications

    It is commonly hypothesized that scientists are more likely to engage in data falsification and fabrication when they are subject to pressures to publish, when they are not restrained by forms of social control, when they work in countries lacking policies to tackle scientific misconduct, and when they are male. Evidence to test these hypotheses, however, is inconclusive due to the difficulties of obtaining unbiased data. Here we report a pre-registered test of these four hypotheses, conducted on papers that were identified in a previous study as containing problematic image duplications through a systematic screening of the journal PLoS ONE. Image duplications were classified into three categories based on their complexity, with category 1 being most likely to reflect unintentional error and category 3 being most likely to reflect intentional fabrication. We tested multiple parameters connected to the hypotheses above with a matched-control paradigm, by collecting two controls for each paper containing duplications. Category 1 duplications were mostly not associated with any of the parameters tested, as was predicted based on the assumption that these duplications were mostly not due to misconduct. Categories 2 and 3, however, exhibited numerous statistically significant associations. Results of univariable and multivariable analyses support the hypotheses that academic culture, peer control, cash-based publication incentives and national misconduct policies might affect scientific integrity. No clear support was found for the “pressures to publish” hypothesis. Female authors were found to be as likely as males to publish duplicated images. Country-level parameters generally exhibited stronger effects than individual-level parameters, largely because developing countries were significantly more likely to produce problematic image duplications. 
This suggests that promoting good research practices in all countries should be a priority for the international research integrity agenda