Saturday, July 27, 2013

(De)Merit Badges for Non-Preregistered Research

Will Study Pre-Registration Be Good for Psychology?

There has been a lively debate recently about study pre-registration, a publishing model (or online repository) where detailed methodological and statistical plans for an experiment are registered in advance of data collection. The idea is to eliminate questionable research practices such as failing to report all of a study's dependent measures, deciding whether to collect more data after looking to see whether the results are significant, and selectively reporting studies that 'worked.'
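As an aside of my own (not part of the open letter), a quick simulation shows why "deciding whether to collect more data after looking" is a problem. Both groups below are drawn from the same distribution, so every significant result is a false positive by construction, yet peeking at the p-value and topping up the sample pushes the false positive rate well above the nominal 5%.

```python
# My own toy simulation, not from the pre-registration letter: both "groups"
# come from the same normal distribution, so any significant difference is a
# false positive by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_SIMS, N_START, N_MAX, BATCH, ALPHA = 2000, 20, 100, 10, 0.05

def one_study(peek):
    """Run one null study; if peek, keep adding subjects until p < .05 or n = N_MAX."""
    a = list(rng.normal(size=N_START))
    b = list(rng.normal(size=N_START))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < ALPHA:
            return True                      # "significant" -- a false positive
        if not peek or len(a) >= N_MAX:
            return False                     # stop (or never peeked at all)
        a.extend(rng.normal(size=BATCH))     # collect another batch and re-test
        b.extend(rng.normal(size=BATCH))

fp_fixed = np.mean([one_study(peek=False) for _ in range(N_SIMS)])
fp_peek = np.mean([one_study(peek=True) for _ in range(N_SIMS)])
print(f"False positive rate, fixed n of {N_START}: {fp_fixed:.3f}")  # near .05
print(f"False positive rate, optional stopping:   {fp_peek:.3f}")    # considerably higher
```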

Chris Chambers and Marcus Munafò wrote a widely discussed article that appeared in the Guardian:

Trust in science would be improved by study pre-registration

Open letter [with over 80 signatories]: We must encourage scientific journals to accept studies before the results are in

. . .

[The current] publishing culture is toxic to science. Recent studies have shown how intense career pressures encourage life scientists to engage in a range of questionable practices to generate publications – behaviours such as cherry-picking data or analyses that allow clear narratives to be presented, reinventing the aims of a study after it has finished to "predict" unexpected findings, and failing to ensure adequate statistical power. These are not the actions of a small minority; they are common, and result from the environment and incentive structures that most scientists work within.
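Since the letter mentions "failing to ensure adequate statistical power," here's a back-of-the-envelope sketch of my own (assuming a two-sample t-test and a "medium" effect of Cohen's d = 0.5) showing how far typical small samples fall short.

```python
# Back-of-the-envelope power check (my numbers, not the letter's), assuming a
# two-sample t-test and a "medium" standardized effect of Cohen's d = 0.5.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a typical small study with 20 subjects per group
power_n20 = analysis.power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"Power with 20 per group: {power_n20:.2f}")        # roughly 0.33

# Per-group sample size needed to reach the conventional 80% power
n_needed = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"n per group for 80% power: {n_needed:.0f}")       # roughly 64
```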

The Open Science Framework, a movement for greater transparency in science, has developed merit badges to designate Open Data, Open Materials, and Preregistration.



The Open Data badge is earned for making publicly available the digitally shareable data necessary to reproduce the reported results.



The Open Materials badge is earned by making publicly available the components of the research methodology needed to reproduce the reported procedure and analysis.



The Preregistered badge is earned for having a preregistered design and analysis plan for the reported research and reporting results according to that plan. An analysis plan includes specification of the variables and the analyses that will be conducted.


One could imagine the introduction of two new demerit badges for Questionable and Rejected work.1



Questionable badges are issued when the committee suspects that questionable research practices have been used, as outlined in the paper by John et al. (2012).



The Rejected badge is earned when there is a suspicion that outright fraud may have occurred. This will typically spur an inquiry.


While the goal is admirable, there may be aspects of this scheme that the proponents haven't fully considered.

Pre-registration would put science in chains

The pre-registration of study designs must be resisted, says Sophie Scott

. . .

...there are numerous problems with the idea. Limiting more speculative aspects of data interpretation risks making papers more one-dimensional in perspective. And the commitment to publish with the journal concerned would curtail researchers’ freedom to choose the most appropriate forum for their work after they have considered the results.

. . .

Moreover, in my fields (cognitive neuroscience and psychology), a significant proportion of studies would simply be impossible to run on a pre-registration model because many are not designed simply to test hypotheses. Some, for instance, are observational, while many of the participant populations introduce significant sources of complexity and noise; as introductions to psychology often point out, humans are very dirty test tubes.

One possible outcome is that certain types of research are privileged over others.2  The badge manifesto states that...
Badges do not define good practice, they certify that a particular practice was followed.

I find this assertion to be kind of hollow in the absence of badges for those other types of research that are considered unsuitable for Preregistration. Therefore, in the spirit of fair play, I hereby introduce three new badges!



The Exploratory badge is issued to meritorious research that is not hypothesis-driven. This could include characterization of disease states and vast swaths of the neuroimaging literature ("Human Brain Mapping"), particularly in the early days. Not to mention the entire Human Connectome Project...



The Fishing Expedition badge can be earned by imaging studies that use exciting new methods like multi-voxel pattern analysis in neural decoding ("mind reading") applications, machine learning approaches to classify patient vs. control groups, and the latest in data mining ("Big Data").
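For the curious, here's a toy version (mine, and deliberately run on pure noise) of the sort of patient-vs.-control classification that would earn this badge, with cross-validation and a permutation test as the sanity check. On noise, accuracy should hover near chance (0.5).

```python
# A toy sketch of "classify patients vs. controls from lots of features":
# cross-validated accuracy plus a permutation test. The data are pure noise,
# so accuracy should sit near chance and the permutation p should be large.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, permutation_test_score

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 500))       # 40 "subjects" x 500 noisy "voxels"
y = np.repeat([0, 1], 20)            # 20 controls, 20 patients (labels only)

clf = LinearSVC(dual=False)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
score, _, p_value = permutation_test_score(
    clf, X, y, cv=cv, n_permutations=200, scoring="accuracy", random_state=0
)
print(f"Cross-validated accuracy: {score:.2f} (permutation p = {p_value:.2f})")
```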



The BRAIN Initiative badge is awarded by President Obama to research supported by his new $100 million Brain Research through Advancing Innovative Neurotechnologies Initiative. This bold new research effort will include advances in nanotechnology and purely exploratory efforts to record from thousands of neurons simultaneously.3



Additional Commentary on Study Pre-Registration

Sophie Scott has compiled the thoughts of researchers with varying degrees of opposition to pre-registration. Some are not totally opposed, but have questions about how it will be implemented and how it might be problematic for certain types of research. I fall into this latter camp.

The only current publication format for Registered Reports, in the journal Cortex, "guarantees publication of their future results providing that they adhere precisely to their registered protocol."

I'm not sure this would work in studies with children, patients, or other difficult populations, where task performance, the nature of the brain response, and so on are not always predictable. In my blurb on Sophie's blog, I said:

Another of your examples, neuropsychological case studies, is particularly difficult. Are you not supposed to test the rare individual with hemi-prosopagnosia or a unique form of synesthesia? Many aging and developmental studies could be problematic too. What if your elderly group is no better than chance in a memory test that undergrads could do at 80% accuracy? Maybe your small pilot sample of elderly were very high performers and not representative? Obviously, being locked into publishing such a study would set you back the time it would take to make the task easier and re-run the experiment. You could even say in the new paper that you ran the experiment with 500 items in the study list and the elderly were no better than chance. Who's to say that a reviewer would have caught that error in advance?
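Sticking with that hypothetical 500-item list, here's a quick sketch of my own showing how one would check whether a group's accuracy exceeds the 50% chance level on a two-alternative test; the hit counts below are invented for illustration.

```python
# Illustrative numbers only: a two-alternative memory test where chance is 50%.
# Tests whether each group's hit count exceeds chance on a 500-item list.
from scipy.stats import binomtest

N_ITEMS = 500
groups = {"elderly (hypothetical 52%)": 260, "undergrads (hypothetical 80%)": 400}

for label, n_correct in groups.items():
    res = binomtest(n_correct, N_ITEMS, p=0.5, alternative="greater")
    print(f"{label}: {n_correct / N_ITEMS:.0%} correct, p = {res.pvalue:.3g} vs. chance")
```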

At any rate, I think it's important to have these kinds of discussions. And to freely distribute new kinds of badges.


Footnotes

1 Just to be clear, I made these up.

2 I'm not at all opposed to pre-registration, and I think it'll be an interesting experiment to see whether research practices improve and "scientific quality" (i.e., replicability) increases. But I can see the danger of pre-registered work being viewed as "saintly" research, with the rest of it tainted.

3 The Brain Activity Map as the Functional Connectome
To elucidate emergent levels of neural circuit function, we propose to record every action potential from every neuron within a circuit—a task we believe is feasible.
