Variability in the analysis of a single neuroimaging dataset by many teams
Data analysis workflows in many scientific domains have become increasingly complex and flexible. Here we assess the effect of this flexibility on the results of functional magnetic resonance imaging (fMRI) by asking 70 independent teams to analyse the same dataset, testing the same nine ex-ante hypotheses. The flexibility of analytical approaches is exemplified by the fact that no two teams chose identical workflows to analyse the data. This flexibility resulted in sizeable variation in the results of hypothesis tests, even for teams whose statistical maps were highly correlated at intermediate stages of the analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Notably, a meta-analytical approach that aggregated information across teams yielded a significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. Our findings show that analytical flexibility can have substantial effects on scientific conclusions, and identify factors that may be related to variability in the analysis of fMRI data. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for performing and reporting multiple analyses of the same data. Potential approaches that could be used to mitigate issues related to analytical variability are discussed.
Brain volumetric changes in the general population following the COVID-19 outbreak and lockdown
The coronavirus disease 2019 (COVID-19) outbreak introduced unprecedented health risks, as well as pressure on the economy, society, and psychological well-being due to the response to the outbreak. In a preregistered study, we hypothesized that the intense experience of the outbreak potentially induced stress-related brain modifications in the healthy population, not infected with the virus. We examined volumetric changes in 50 participants who underwent MRI scans before and after the COVID-19 outbreak and lockdown in Israel. Their scans were compared with those of 50 control participants who were scanned twice prior to the pandemic. Following the COVID-19 outbreak and lockdown, the test group participants uniquely showed volumetric increases in the bilateral amygdalae, putamen, and anterior temporal cortices. Changes in the amygdalae diminished as time elapsed from lockdown relief, suggesting that the intense experience associated with the pandemic induced transient volumetric changes in brain regions commonly associated with stress and anxiety. The current work utilizes a rare opportunity for a real-life natural experiment, showing evidence for brain plasticity following the COVID-19 global pandemic. These findings have broad implications, relevant to both the scientific community and the general public.
Fake hands in action: embodiment and control of supernumerary limbs
Demonstrations that the brain can incorporate a fake limb into our bodily representations when it is stroked in synchrony with our unseen real hand (the rubber hand illusion, RHI) are now commonplace. Such demonstrations highlight the dynamic flexibility of the perceptual body image, but evidence for comparable RHI-sensitive changes in the body schema used for action is less common. Recent evidence from the RHI supports a distinction between bodily representations for perception (body image) and for action (body schema) (Kammers et al. in Neuropsychologia 44:2430–2436, 2006). The current study challenges and extends these findings by demonstrating that active synchronous stroking of a brush not only elicits perceptual embodiment of a fake limb (body image) but also affects subsequent reaching error (body schema). Participants were presented with two moving fake left hands. When only one was synchronous during active touch, ownership was claimed for the synchronous hand only, and the accuracy of reaching was consistent with control of the synchronous hand. When both fake hands were synchronous, ownership was claimed over both, but only one was controlled. Thus, it would appear that fake limbs can be incorporated into the body schema as well as the body image, but while multiple limbs can be incorporated into the body image, the body schema can accommodate only one.
Subjective evidence evaluation survey for many-analysts studies
Many-analysts studies explore how well an empirical claim withstands plausible alternative analyses of the same dataset by multiple, independent analysis teams. Conclusions from these studies typically rely on a single outcome metric (e.g. effect size) provided by each analysis team. Although informative about the range of plausible effects in a dataset, a single effect size from each team does not provide a complete, nuanced understanding of how analysis choices are related to the outcome. We used the Delphi consensus technique with input from 37 experts to develop an 18-item subjective evidence evaluation survey (SEES) to evaluate how each analysis team views the methodological appropriateness of the research design and the strength of evidence for the hypothesis. We illustrate the usefulness of the SEES in providing richer evidence assessment with pilot data from a previous many-analysts study.
Science Forum: Consensus-based guidance for conducting and reporting multi-analyst studies
Any large dataset can be analyzed in a number of ways, and it is possible that the use of different analysis strategies will lead to different results and conclusions. One way to assess whether the results obtained depend on the analysis strategy chosen is to employ multiple analysts and leave each of them free to follow their own approach. Here, we present consensus-based guidance for conducting and reporting such multi-analyst studies, and we discuss how broader adoption of the multi-analyst approach has the potential to strengthen the robustness of results and conclusions obtained from analyses of datasets in basic and applied research.
