    Low statistical power in biomedical science: a review of three human research domains

    Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0–10% or 11–20% range, well below the conventional minimum of 80%. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation.
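    As a rough illustration of the power calculation described above, the sketch below approximates the power of a single study from a meta-analytic effect size, assuming a two-sided, two-sample comparison of a standardized mean difference and a normal approximation; the effect size and sample size in the example are hypothetical, not values from the review.

        # Illustrative sketch (not the review's actual procedure): approximate
        # the power of an individual study, taking the meta-analytic effect
        # size as the best estimate of the true effect.
        import numpy as np
        from scipy.stats import norm

        def approx_power_two_sample(d, n_per_group, alpha=0.05):
            """Power of a two-sided, two-sample z-test for a standardized
            mean difference d with n_per_group subjects per arm."""
            z_crit = norm.ppf(1 - alpha / 2)      # two-sided critical value
            ncp = d * np.sqrt(n_per_group / 2)    # noncentrality parameter
            return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

        # Example (hypothetical numbers): a meta-analytic effect of d = 0.2
        # studied with 50 subjects per arm gives roughly 17% power, i.e. the
        # 11-20% bin mentioned in the abstract.
        print(round(approx_power_two_sample(0.2, 50), 2))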

    Replication Validity of Initial Association Studies: A Comparison between Psychiatry, Neurology and Four Somatic Diseases

    CONTEXT: There are growing concerns about effect size inflation and replication validity of association studies, but few observational investigations have explored the extent of these problems. OBJECTIVE: To use meta-analyses to measure the reliability of initial studies and to explore whether this varies across biomedical domains and study types (cognitive/behavioral, brain imaging, genetic and "others"). METHODS: We analyzed 663 meta-analyses describing associations between markers or risk factors and 12 pathologies within three biomedical domains (psychiatry, neurology and four somatic diseases). We collected the effect size, sample size, publication year and Impact Factor of initial studies, largest studies (i.e., those with the largest sample size) and the corresponding meta-analyses. Initial studies were considered replicated if they were in nominal agreement with meta-analyses and if their effect size inflation was below 100%. RESULTS: Nominal agreement between initial studies and meta-analyses regarding the presence of a significant effect was not better than chance in psychiatry, whereas it was somewhat better in neurology and somatic diseases. Whereas effect sizes reported by the largest studies and meta-analyses were similar, most of those reported by initial studies were inflated. Among the 256 initial studies reporting a significant effect (p<0.05) and paired with significant meta-analyses, 97 effect sizes were inflated by more than 100%. Nominal agreement and effect size inflation varied with the biomedical domain and study type. Indeed, the replication rate of initial studies reporting a significant effect ranged from 6.3% for genetic studies in psychiatry to 86.4% for cognitive/behavioral studies. A comparison of eight subgroups shows that the replication rate decreases with sample size and "true" effect size. We observed no evidence of an association between replication rate and publication year or Impact Factor. CONCLUSION: The differences in reliability between biological psychiatry, neurology and somatic diseases suggest that there is room for improvement, at least in some subdomains.
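    As a hypothetical illustration of the two-part replication criterion described above (nominal agreement on statistical significance and effect size inflation below 100%), the sketch below encodes one plausible reading of it; the effect size metric, the handling of effect direction and the example values are assumptions, not details taken from the study.

        # Hypothetical reading of the replication criterion: an initial study
        # counts as replicated if (1) it agrees with the meta-analysis on
        # whether the effect is statistically significant and (2) its effect
        # size is inflated by less than 100% relative to the meta-analytic
        # estimate. Metric and direction handling are assumptions.
        def is_replicated(initial_es, meta_es, initial_p, meta_p, alpha=0.05):
            nominal_agreement = (initial_p < alpha) == (meta_p < alpha)
            inflation = (abs(initial_es) - abs(meta_es)) / abs(meta_es)
            return nominal_agreement and inflation < 1.0

        # Example (invented values): an initial odds ratio of 2.4 against a
        # meta-analytic 1.6, both significant, is inflated by 50% and would
        # count as replicated under this reading.
        print(is_replicated(2.4, 1.6, initial_p=0.01, meta_p=0.001))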

    The reproducibility crisis in biomedical research: an analysis of the validity of biomedical studies published in peer-reviewed journals and their media coverage

    Many academic publications are devoted to the "reproducibility crisis" in biomedical sciences. Their authors distinguish this lack of reproducibility from fraud or plagiarism. The "crisis" concerns a much broader phenomenon, common to many scientific disciplines: a large share of published results is disconfirmed by subsequent studies. This lack of reproducibility is to be expected: knowledge production is an incremental process in which early, promising yet tentative findings are validated through replication. Scientific results are therefore uncertain per se. The problem, however, is that this uncertainty does not seem to be taken into consideration when science "meets" the public, especially through the media. In this dissertation we studied how the media present this uncertainty when dealing with biomedical findings. We first created a large, original database of scientific studies investigating the association between risk factors (genetic, biochemical, environmental) and pathologies from three biomedical domains: psychiatry, neurology and a set of four somatic diseases. We evaluated the validity of each initial study by comparing its results with those of meta-analyses on the same subject. The replication validity is low: 65% of initial studies are disconfirmed by the corresponding meta-analyses, even when they were published in high-ranking journals. We then identified which studies were selected by the press: initial studies published in prestigious journals and relevant to the readers were preferentially covered. Their validity was nonetheless poor, with more than 50% being subsequently invalidated, and the press rarely mentioned these invalidations. Analysing the content of the newspaper articles, we found that journalists and their editors rarely deal with scientific uncertainty: the majority of articles referred to the study as an initial study, but only 21% indicated that the results needed to be replicated. Moreover, those statements were mostly made by scientists and have become scarce in the most recent articles. A survey of 21 science journalists confirmed that journalists still consider high-ranking scientific journals to be reliable sources of information. However, these journalists were not familiar with the incremental process of knowledge production: two-thirds did not know that early findings are uncertain, or confused uncertainty with fraud. The remaining third knew about the uncertainty of initial results but found it hard to convey it in their articles because of their editorial hierarchy. More generally, the dissertation discusses the influence of extra-scientific factors on the production of scientific knowledge. We conclude that research assessment based on the number of papers published in high impact factor journals, combined with the orientation of scientific institutions towards the media, might undermine the reliability of scientific results, in academic publications as well as in the media. Finally, journalists' working conditions are deteriorating and most do not seem to properly grasp how scientific facts are produced, which may damage public trust in biomedical research and the public debate about health-related issues.

    Le mésusage des citations et ses conséquences en médecine

    Biomedical observations become a source of knowledge only after debate among researchers. In this debate, citation of earlier studies plays a major role, yet academic work evaluating how citations are used remains rare. The existing studies have nevertheless revealed two kinds of problems: citation bias, and discrepancies between the meaning of the cited study and what the citing article says about it. In this review, we summarize this work and draw out its main findings: studies that support the citing authors' conclusion are cited more often than those that call it into question, and major discrepancies of meaning affect roughly 10% of citations. We illustrate the consequences of this misuse of citations with two examples.

    Scientific Uncertainty in the Press: How Newspapers Describe Initial Biomedical Findings

    Newspapers preferentially cover initial biomedical findings although they are often disconfirmed by subsequent studies. We analyzed 426 newspaper articles covering 40 initial biomedical studies associating a risk factor with 12 pathologies and published between 1988 and 2009. Most articles presented the study as initial but only 21% mentioned that it must be confirmed by replication. Headlines of articles with a replication statement were hyped less often than those without. Replication statements have tended to disappear after 2000, whereas hyped headlines have become more frequent. Thus, the public is increasingly poorly informed about the uncertainty inherent in initial biomedical findings.

    Poor replication validity of biomedical association studies reported by newspapers

    OBJECTIVE: To investigate the replication validity of biomedical association studies covered by newspapers. METHODS: We used a database of 4723 primary studies included in 306 meta-analysis articles. These studies associated a risk factor with a disease in three biomedical domains: psychiatry, neurology and four somatic diseases. They were classified into a lifestyle category (e.g. smoking) and a non-lifestyle category (e.g. genetic risk). Using the Dow Jones Factiva database, we investigated the newspaper coverage of each study. Their replication validity was assessed by comparison with their corresponding meta-analyses. RESULTS: Among the 5029 articles in our database, 156 primary studies (of which 63 were lifestyle studies) and 5 meta-analysis articles were reported in 1561 newspaper articles. The percentage of covered studies and the number of newspaper articles per study strongly increased with the impact factor of the journal that published each scientific study. Newspapers covered initial (5/39, 12.8%) and subsequent (58/600, 9.7%) lifestyle studies almost equally. In contrast, initial non-lifestyle studies were covered more often (48/366, 13.1%) than subsequent ones (45/3718, 1.2%). Newspapers never covered initial studies reporting null findings and rarely reported subsequent null observations. Only 48.7% of the 156 studies reported by newspapers were confirmed by the corresponding meta-analyses. Initial non-lifestyle studies were less often confirmed (16/48) than subsequent ones (29/45) or than lifestyle studies (31/63). Psychiatric studies covered by newspapers were less often confirmed (10/38) than neurological (26/41) or somatic (40/77) ones, which parallels the even stronger preference for initial studies in newspaper coverage of psychiatry. Whereas 234 newspaper articles covered the 35 initial studies that were later disconfirmed, only four press articles covered a subsequent null finding and mentioned the refutation of an initial claim. CONCLUSION: Journalists preferentially cover initial findings although these are often contradicted by meta-analyses, and they rarely inform the public when the findings are disconfirmed.

    Replication validity of primary studies reported by newspapers in three biomedical domains.

    The blue bars show the percentage of primary studies covered by newspapers whose main finding was consistent with the corresponding meta-analysis. The red bars show the percentage of initial findings among primary studies echoed by newspapers and related to four psychiatric disorders (PSY), four neurological diseases (NEURO) and four somatic diseases (SOMA). Raw data are given in Supporting Information (S2 Text, http://www.plosone.org/article/info:doi/10.1371/journal.pone.0172650#pone.0172650.s002).

    Preferential coverage of initial findings and influence of the impact factor (IF).

    The figure shows the percentage of primary studies covered by newspapers depending on the study type (lifestyle versus non-lifestyle). Studies in the lifestyle category described associations linking a pathology to a risk factor on which each subject can act. For non-lifestyle articles, the figure also contrasts initial articles with subsequent ones. Differences in media coverage between initial and subsequent studies were statistically significant (see text), except for studies published in prestigious journals (IF ≥ 30). Raw data are given in Supporting Information (S2 Text, http://www.plosone.org/article/info:doi/10.1371/journal.pone.0172650#pone.0172650.s002).