25 research outputs found

    Variability in the analysis of a single neuroimaging dataset by many teams

    Data analysis workflows in many scientific domains have become increasingly complex and flexible. To assess the impact of this flexibility on functional magnetic resonance imaging (fMRI) results, the same dataset was independently analyzed by 70 teams, testing nine ex-ante hypotheses. The flexibility of analytic approaches is exemplified by the fact that no two teams chose identical workflows to analyze the data. This flexibility resulted in sizeable variation in hypothesis test results, even for teams whose statistical maps were highly correlated at intermediate stages of their analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Importantly, a meta-analytic approach that aggregated information across teams yielded significant consensus in activated regions across teams. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. Our findings show that analytic flexibility can have substantial effects on scientific conclusions, and demonstrate factors possibly related to variability in fMRI. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for multiple analyses of the same data. Potential approaches to mitigate issues related to analytical variability are discussed.
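The meta-analytic aggregation described above can be illustrated with a minimal sketch. Assuming each team reports a whole-brain z-statistic map, one simple image-based way to combine them is Stouffer's method (the abstract does not specify which aggregation the study actually used); `stouffer_combine` and the toy data below are illustrative, not taken from the paper:

```python
import numpy as np

def stouffer_combine(z_maps):
    """Combine per-team z-statistic maps into a single consensus map
    using Stouffer's method: z_meta = sum(z_i) / sqrt(k)."""
    z = np.asarray(z_maps, dtype=float)  # shape: (n_teams, n_voxels)
    return z.sum(axis=0) / np.sqrt(z.shape[0])

# Toy example: 5 hypothetical teams, 4 voxels; voxel 0 carries a
# shared true effect, so its combined z grows with the number of teams.
rng = np.random.default_rng(0)
z_maps = rng.normal(0.0, 1.0, size=(5, 4))
z_maps[:, 0] += 2.0  # shared signal in voxel 0
consensus = stouffer_combine(z_maps)
```

The key property this sketch shows is why pooling recovers consensus even when individual teams disagree: a consistent effect sums coherently across teams while independent analytic noise grows only as the square root of the number of teams.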

    Consensus-based guidance for conducting and reporting multi-analyst studies

    Any large dataset can be analyzed in a number of ways, and it is possible that the use of different analysis strategies will lead to different results and conclusions. One way to assess whether the results obtained depend on the analysis strategy chosen is to employ multiple analysts and leave each of them free to follow their own approach. Here, we present consensus-based guidance for conducting and reporting such multi-analyst studies, and we discuss how broader adoption of the multi-analyst approach has the potential to strengthen the robustness of results and conclusions obtained from analyses of datasets in basic and applied research.
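A multi-analyst study ultimately produces one effect estimate per analyst, and the question is how much those estimates agree. A minimal sketch of how one might summarize that spread follows; the specific statistics (median, interquartile range, sign agreement) are illustrative choices, not prescriptions from the guidance paper:

```python
import numpy as np

def summarize_analyst_estimates(estimates):
    """Summarize effect estimates reported by independent analysts:
    the median estimate, its interquartile range (spread across
    analysts), and the share of analysts agreeing with the sign of
    the median. Illustrative summary only."""
    est = np.asarray(estimates, dtype=float)
    med = float(np.median(est))
    q1, q3 = np.percentile(est, [25, 75])
    agree = float(np.mean(np.sign(est) == np.sign(med)))
    return {"median": med, "iqr": float(q3 - q1), "sign_agreement": agree}

# Hypothetical estimates from five analysts of the same dataset.
summary = summarize_analyst_estimates([0.2, 0.3, 0.25, -0.1, 0.4])
```

A wide interquartile range or low sign agreement would signal that conclusions depend heavily on the analysis strategy chosen.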

    Centering inclusivity in the design of online conferences: An OHBM-Open Science perspective

    As the global health crisis unfolded, many academic conferences moved online in 2020. This move has been hailed as a positive step towards inclusivity in its attenuation of economic, physical, and legal barriers, and it effectively enabled many individuals from groups that have traditionally been underrepresented to join and participate. A number of studies have outlined how moving online made it possible to gather a more global community and has increased opportunities for individuals with various constraints, e.g., caregiving responsibilities. Yet, the mere existence of online conferences is no guarantee that everyone can attend and participate meaningfully. In fact, many elements of an online conference are still significant barriers to truly diverse participation: the tools used can be inaccessible for some individuals; the scheduling choices can favour some geographical locations; the set-up of the conference can provide more visibility to well-established researchers and reduce opportunities for early-career researchers. While acknowledging the benefits of an online setting, especially for individuals who have traditionally been underrepresented or excluded, we recognize that fostering social justice requires inclusivity to be actively centered in every aspect of online conference design. Here, we draw from the literature and from our own experiences to identify practices that purposefully encourage a diverse community to attend, participate in, and lead online conferences. Reflecting on how to design more inclusive online events is especially important as multiple scientific organizations have announced that they will continue offering an online version of their event when in-person conferences can resume.

    Raising the Bar: Improving Methodological Rigour in Cognitive Alcohol Research

    Background and Aims: A range of experimental paradigms claim to measure the cognitive processes underpinning alcohol use, suggesting that heightened attentional bias, greater approach tendencies and reduced cue-specific inhibitory control are important drivers of consumption. This paper identifies methodological shortcomings within this broad domain of research and exemplifies them in studies focused specifically on alcohol-related attentional bias. Argument and analysis: We highlight five main methodological issues: (i) the use of inappropriately matched control stimuli; (ii) opacity of stimulus selection and validation procedures; (iii) a credence in noisy measures; (iv) a reliance on unreliable tasks; and (v) variability in design and analysis. This is evidenced through a review of alcohol-related attentional bias (64 empirical articles, 68 tasks), which reveals the following: only 53% of tasks use appropriately matched control stimuli; as few as 38% report their stimulus selection and 19% their validation procedures; less than 28% used indices capable of disambiguating attentional processes; 22% assess reliability; and under 2% of studies were pre-registered. Conclusions: Well-matched and validated experimental stimuli, the development of reliable cognitive tasks and explicit assessment of their psychometric properties, and careful consideration of behavioural indices and their analysis will improve the methodological rigour of cognitive alcohol research. Open science principles can facilitate replication and reproducibility in alcohol research.
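One of the five issues raised above, assessing task reliability, has a standard concrete form: permutation-based split-half reliability of a per-trial bias index, with a Spearman-Brown correction for halving the trial count. The sketch below assumes a participants-by-trials matrix of bias scores (e.g., dot-probe reaction-time differences); the function name and toy data are illustrative, not from the paper:

```python
import numpy as np

def split_half_reliability(trial_scores, n_splits=1000, seed=0):
    """Estimate reliability of a per-trial bias index by repeatedly
    splitting trials into random halves, correlating participants'
    half-scores, and applying the Spearman-Brown correction.
    trial_scores: array of shape (n_participants, n_trials)."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(trial_scores, dtype=float)
    n_trials = scores.shape[1]
    corrected = []
    for _ in range(n_splits):
        idx = rng.permutation(n_trials)
        half_a = scores[:, idx[: n_trials // 2]].mean(axis=1)
        half_b = scores[:, idx[n_trials // 2 :]].mean(axis=1)
        r = np.corrcoef(half_a, half_b)[0, 1]
        corrected.append((2 * r) / (1 + r))  # Spearman-Brown step-up
    return float(np.mean(corrected))
```

Reporting an estimate like this alongside the bias index itself addresses the "reliance on unreliable tasks" criticism directly: a difference score with near-zero split-half reliability cannot meaningfully rank individuals, whatever the group-level effect.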

    I did it my way

    No full text