The extent and consequences of p-hacking in science
A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
Gender differences in conference presentations: a consequence of self-selection?
Women continue to be under-represented in the sciences, with their representation declining at each progressive academic level. These differences persist despite long-running policies to ameliorate gender inequity. We compared gender differences in exposure and visibility at an evolutionary biology conference for attendees at two different academic levels: student and post-PhD academic. Despite an almost exactly 1:1 ratio of women to men attending the conference, we found that, among those who presented talks, women spoke for far less time than men of an equivalent academic level: on average, student women presented for 23% less time than student men, and academic women presented for 17% less time than academic men. We conducted more detailed analyses to tease apart whether this gender difference was caused by decisions made by the attendees or by bias in the evaluation of the abstracts. At both academic levels, women and men were equally likely to request a presentation. However, women were more likely than men to prefer a short talk, regardless of academic level. We discuss potential underlying reasons for this gender bias and provide recommendations to avoid similar gender biases at future conferences.
Data from: The extent and consequences of p-hacking in science
This zip file consists of three parts: 1. data obtained from text-mining and associated analysis files; 2. data obtained from previously published meta-analyses and associated analysis files; 3. analysis files used to conduct meta-analyses of the data. Read-me files are included within the zip file.
The effect of publication bias on the distribution of <i>p</i>-values around the significance threshold of 0.05.
<p>A) Black line shows the distribution of <i>p</i>-values when there is no evidential value; the red line shows how publication bias influences this distribution. B) Black line shows the distribution of <i>p</i>-values when there is evidential value; the red line shows how publication bias influences this distribution. Tests for publication bias due to a file-drawer effect often compare the number of <i>p</i>-values in the bins on either side of 0.05.</p>
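The bin comparison described in this caption can be sketched as a one-sided binomial sign test: under a smooth null distribution, a p-value is about equally likely to land just below or just above 0.05, so an excess just below suggests a file-drawer effect. This is a minimal stdlib-only illustration; the function names and the 0.005 bin width are assumptions, not the exact procedure used in the paper.

```python
import math

def binomial_sign_test(k, n, p=0.5):
    """One-sided binomial test: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def file_drawer_test(pvals, width=0.005, threshold=0.05):
    """Count p-values in the bins just below and just above the
    significance threshold and test whether the lower bin holds an
    excess, which would be consistent with publication bias."""
    below = sum(1 for p in pvals if threshold - width <= p < threshold)
    above = sum(1 for p in pvals if threshold < p <= threshold + width)
    n = below + above
    return binomial_sign_test(below, n) if n else 1.0
```

For example, eight p-values at 0.046 against two at 0.052 gives a one-sided probability of 56/1024 (about 0.055), borderline evidence of an excess below the threshold.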
The distribution of <i>p</i>-values associated with the meta-analysis conducted by Jiang et al. (2013).
<p>The <i>p</i>-curve shows evidence of evidential value (strong right skew) and of p-hacking (a rise in <i>p</i>-values just below 0.05).</p>
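A p-curve of the kind shown in this figure is simply a histogram of statistically significant p-values. The sketch below bins significant p-values into equal-width bins below 0.05; the bin count of five is an illustrative choice, not taken from the paper. Counts that fall steeply toward 0.05 (right skew) indicate evidential value, while a rise in the last bin hints at p-hacking.

```python
def p_curve(pvals, threshold=0.05, bins=5):
    """Bin statistically significant p-values (0 < p < threshold) into
    equal-width bins; returns the per-bin counts from low to high p."""
    width = threshold / bins
    counts = [0] * bins
    for p in pvals:
        if 0 < p < threshold:
            # min() guards against float rounding at the upper edge
            counts[min(int(p // width), bins - 1)] += 1
    return counts
```

Plotting the returned counts against the bin midpoints reproduces the shape of the p-curve; values at or above the threshold are excluded, matching the convention that p-curves use only significant results.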
The effect of p-hacking on the distribution of p-values in the range of significance.
<p>A) Black line shows the distribution of <i>p</i>-values when there is no evidential value; the red line shows how p-hacking influences this distribution. B) Black line shows the distribution of <i>p</i>-values when there is evidential value; the red line shows how p-hacking influences this distribution. Tests for p-hacking often compare the number of <i>p</i>-values in two adjacent bins just below 0.05.</p>
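The adjacent-bin comparison in this caption can likewise be sketched as a one-sided binomial test: with evidential value the p-curve decreases toward 0.05, so a surplus in the bin nearest the threshold is the signature of p-hacking. The bin boundaries (0.04, 0.045, 0.05) and function name below are illustrative assumptions, not the exact test from the paper.

```python
import math

def phacking_bin_test(pvals, lo=0.04, mid=0.045, hi=0.05):
    """Count p-values in two adjacent bins just below the significance
    threshold and test (one-sided binomial, null proportion 0.5) whether
    the bin nearest 0.05 holds an excess -- a rise suggestive of
    p-hacking."""
    lower = sum(1 for p in pvals if lo <= p < mid)
    upper = sum(1 for p in pvals if mid <= p < hi)
    n = lower + upper
    return (sum(math.comb(n, i) * 0.5**n for i in range(upper, n + 1))
            if n else 1.0)
```

Note that the null proportion of 0.5 is conservative: a genuinely right-skewed p-curve puts more than half of these p-values in the lower bin, so an excess in the upper bin is evidence against both the null and evidential value.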