The scientific process involves many steps, from developing a theory and creating hypotheses to collecting and analysing data.
Each of these steps can affect the final conclusions, but to what extent? Specifically, will different researchers reach different conclusions from the same data and hypotheses?
Now a major study published in the journal Nature has sought answers to these questions in the context of brain imaging research, though its lessons apply much more widely.
Lecturer in Psychology Nadège Bault was part of a team of almost 200 researchers who took part in the Neuroimaging Analysis, Replication and Prediction Study (NARPS), investigating how variable the findings of brain imaging research are as a result of researchers’ choices about how to analyse the data. The project was led by Dr Rotem Botvinik-Nezer from Dartmouth College and Dr Tom Schonberg from Tel Aviv University, along with Dr Russell Poldrack from Stanford University.
The results of NARPS show for the first time that there is considerable variance when the same complex neuroimaging dataset is analysed with different ‘analysis pipelines’ to test the same hypotheses.
No two teams chose identical workflows to analyse the data, leading to a wide range of different conclusions. The number of teams reporting a statistically significant outcome for each hypothesis varied substantially; for five of the hypotheses there was substantial disagreement, with between 20 and 40 per cent of the analysis teams reporting a statistically significant result. In addition, researchers in the field were over-optimistic about the likelihood of significant results, even when they had analysed the data themselves.
Despite this, a meta-analysis performed on the teams’ intermediate results showed convergence for most hypotheses. This suggests that the final steps of the analysis workflow, which are the most susceptible to human bias and interpretation, account for much of the variability in conclusions; the human factor may therefore be more to blame than the technique itself.
Dr Bault, who is Co-Lead for the MRI (Magnetic Resonance Imaging) Lab in the University’s new Brain Research & Imaging Centre (BRIC), said:
“This exciting international project highlighted the strong drive of many researchers to improve the quality of their work. This can only be achieved by cooperating in setting new standards. One aspect is the systematic sharing of data, but this on its own is not enough.
“What this study shows is that peer-validation of complex analysis workflows by making analysis code and intermediate results available is important as well. We are implementing these good practices in the BRIC MRI lab.”
The findings will be of interest to members of the neuroimaging research community, and to those in every other field with complex analysis procedures where researchers must make many choices about how to analyse their data.
NARPS found that analytic flexibility can have substantial effects on scientific conclusions. However, the finding that underlying statistical results were relatively consistent across teams, and that meta-analysis led to greater convergence, points to ways of improving research: validating and sharing complex analysis workflows, and conducting multiple analyses of the same data.
The fact that almost 200 individuals were each willing to put tens or hundreds of hours into such a critical self-assessment demonstrates the commitment of scientists in this field to assessing and improving the quality of data analyses.