The File Drawer. The Shame Basket. The Cavern of Unmentionables. The Island of Misfit Ps. I've got one, you've got one, we've all got one. It's the place where unpublished studies go to hang out with each other in silence and darkness. They can be problematic (1).
This post is about what to do with those pesky studies that just don't yield the results we hoped for/wanted/could easily publish/etc. Now, I'd argue that some (many?) studies are just hopeless misfits and were poorly conceived. Maybe they shouldn't see the light of day. On the other hand, presumably many of us have a neat idea that we test in a rigorous way, but reality - being a HEARTLESS AND CRUEL BITCH or BASTARD - does not cooperate, and we end up with nasty, ugly results. What now?

On the one hand, we have an interesting prevailing publication climate that can make it a hell of a lot of effort to disseminate our misshapen scientific progeny. And we're busy folk with lots of interesting ideas. Given the realities of finite time, energy, and resources, it's not always easy to justify putting a ton of work into forcing the square peg of ugly results through the round hole of scientific publication; it's really easy to justify pouring the bulk of our energies into fruitful, generative lines of research instead. On the other hand, scientific results (provided the methods were rigorous) are worth disseminating, if for no other reason than to produce a more transparent scientific record. Wouldn't it be cool if there were a low-effort way to make our "failures" public, so that others could incorporate the full literature into meta-analyses, avoid our mistakes in the future, and reap other neat outcomes?
Preprints as a low-effort solution?
I'm currently teaching a grad methods class. In one of our first meetings, we got to chatting about file drawers and incentives and whatnot (DID THEY NOT READ THE SYLLABUS? WE WERE MEANT TO DISCUSS THESE TOPICS IN WEEKS 5 & 9, respectively). Everyone seemed on board with the idea that it'd be great to have a low-effort way to get interesting failed one-off studies out of the file drawer (more on different types of file drawers below). We came up with one potential solution: preprints.
Preprint services like the excellent new PsyArXiv are emerging. The basic logic, as I understand it (note: I've only posted one preprint, like a week ago. I'm a total n00b), is that if you want the scientific community to see your work while it travels through the LABYRINTH OF PEER REVIEW on the way to vanquishing the MINOTAUR OF REVISE AND RESUBMIT, you can post a preprint: a publicly accessible PDF of your paper, linked to any methods/data/code you want associated with it. Seems like a cool way to get your ideas out there before the paper is (hopefully) accepted for formal publication.
But do you have to use preprints only for papers you're actually submitting for publication in journals? Why not post a short summary of your idea, methods, and results as a preprint, and link to the methods/materials and data...for projects that you might never actually fully write up and submit? In this way, you could get "failed" studies out of the file drawer and into the public domain. PsyArXiv allows you to "tag" your preprints with keywords, which would let other researchers stumble across your "failed" project by searching for specific topics.
The preprint itself could be super short, even just a page or two with the bare-bones basics of what you were studying, how you studied it, and what you found. Since PsyArXiv automatically creates an OSF project for each preprint, you could easily upload full methods, Qualtrics scripts, data, and analysis code for other researchers to enjoy. Other researchers could search for you as an author to find all of your results, regardless of outcome. Then they could say things like "Wow, Will Gervais has a lot of shitty ideas" and "No seriously, does this guy even know how to psychology" and "Okay, that approach didn't work, maybe let's tweak it and try again." With an hour or two of work, you could make even your ugly results public, for others to use.
I initially had some reservations. Is it dodgy to create a preprint for a paper you never intend to submit for publication? Being a SCIENTIST, I naturally thought, "Hey, ask Twitter." So I posted a poll. The verdict: pretty overwhelming support for the idea, it seems. And evidently it's common practice in other disciplines.
Nifty, eh? Minimal effort to get those pesky findings out of the file drawer and into the public domain. Let others learn from your unsuccessful attempts. And hey, say other folks are publishing "successful" studies that look really similar to unpublished "unsuccessful" ones. Isn't that odd? What's going on there? Well, if the negative results never see the light of day, there's no way to find out.
So, assuming there's any merit to the preprint-as-open-drawer model, when might it be useful? And where do I think it might actually appear?
The varieties of file drawer experience...
As hinted earlier, I think there are at least two kinds of file drawers out there. One might be more amenable to the preprint cleansing method (or, more realistically, more likely to actually see it used).
I. The shotgun file drawer
Confession time: I have conducted more than the 50 (or whatever the number happens to be...I honestly don't know, what with multi-study papers and whatnot) studies I've published in peer-reviewed journals. Shocking, right? Sometimes, I manage to not-publish things.
The typical study in my file drawer goes something like this: We cook up some sort of half-baked but really-cool-if-supported hypothesis. Most of them have low prior probabilities of being right. What can I say, I'm a fucking gambler (alternative framing: my ideas are stupid). So we try out lots of ideas that often have little to nothing to do with each other. And the majority of these ideas don't pan out. Even with rigorous methods, bad ideas yield ugly data. So we banish the less promising studies to the drawer and pursue the lines that seem more promising. Shoot at lots of targets, then winnow the projects accordingly.
Now, here's why all those file-drawered studies can be problematic. Presumably, other folks might have similar stupid ideas. Sometimes even identical stupid ideas! They might even run identical studies and get the same dismal results. Wouldn't it be cool if we could check whether others have already tested and rejected these hopeless ideas? It'd save everyone time to see what others didn't find. And maybe something shows up in one culture but not another. Nifty. But you'll never know unless you see the stuff I DIDN'T find.
II. The assembly line file drawer
I haven't really tried this one, but here's an alternative approach to the file drawer. You have an idea that you're pretty sure is right. Maybe it's supported by existing theory or whatever. This effect simply SHOULD BE REAL. All you've gotta do is run the right studies to demonstrate the phenomenon. So you run a study. Didn't work. File drawer. Then you run another one with a tweak. HIT! Keep it. Try again. No dice. File drawer. Try again. HIT! Rinse, repeat.
If the effect is really there, then there's probably something wrong with the studies that didn't find it (or so goes the logic). Here the file drawer is (perceived to be) a quality control process. Keep the studies that had the "right" methods to show the true effect. Mercilessly cull those that didn't.
Now, this is probably a caricature (right?). But it's a problem, because this kind of culling will systematically bias the literature in a severe way, leading to a state of affairs that people named Mickey or Sanjay would perhaps dub "fucked." And given that the unfucking abilities of our meta-analytic unfuckers are, optimistically, unknown, cartoon Simine might have a thing or two to say about that.
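For the quantitatively inclined, here's a quick toy simulation of what the assembly line does to the published record (my own sketch, with all the numbers made up purely for illustration). It keeps only the studies that "worked" in the predicted direction, then averages them, crude-meta-analysis style:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up parameters, purely for illustration: the TRUE effect is zero,
# and we run 1000 two-group studies with n = 30 per cell.
true_d, n, n_studies = 0.0, 30, 1000

published = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(true_d, 1.0, n)
    # Cohen's d for this study
    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    d = (treatment.mean() - control.mean()) / pooled_sd
    # The assembly-line filter: file-drawer everything that isn't a
    # significant HIT in the predicted direction (roughly z > 1.96).
    se = np.sqrt(2 / n)  # approximate standard error of d
    if d / se > 1.96:
        published.append(d)

print(f"'Hits' kept for publication: {len(published)} / {n_studies}")
print(f"Mean d of the published record: {np.mean(published):.2f}")
# Even though the true effect is exactly zero, the surviving studies
# average out around d = 0.6. That's the literature our poor
# meta-analytic unfuckers have to work with.
```

Swap in a small nonzero true effect and the published record still overshoots it badly; the filter, not the phenomenon, ends up setting the effect size.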
Which model will preprints help?
In principle, both. Making all rigorous studies public solves both the problem of "someone else already did this, and it sucked" and "wow, every single published study on phenomenon X worked."
In practice, I'm a lot more optimistic about seeing preprints used to address the shotgun problem. I don't have any evidence to support that hunch, but I'd bet a ton of people are more willing to make their one-offs public than to reveal that a 5-study paper actually took 12 studies to assemble. But...eternal optimist that I am...I strongly suspect that far more file drawers are riddled with shotgun-style one-offs than with discarded studies testing the same ideas as published papers. I could be wrong, but I hope I'm not.
Okay, it's a Friday afternoon. I've used up all my f-bombs and cartoons. So it's time to check out for the weekend. But I'm curious to hear more reactions to the idea, especially from the 17% of people who see the practice as potentially problematic. I'm always deeply wary of ideas that seem simultaneously helpful, practical, and easy. So I'm really eager to collectively brainstorm and (hopefully) fix the glitches. In the meantime, my lab is planning on trying this out for a while. So be prepared to see some preprints of all the shitty ideas emerging from the BAM!!Lab (grad students...oh yeah, I meant to tell you we're doing this now. We'll work out the details next week. Now get back to work. Everything is due Monday morning, so I hope you didn't plan on having fun this weekend or sleeping. Eating is acceptable, so long as you check your email whilst eating cat food out of the can as per lab policy). Maybe I'll keep a running blog post with THINGS WE DID NOT FIND.
(1) I threw a footnote up there somewhere about how file-drawered studies can be problematic. I'm not sure they're always problematic. Sometimes a study just isn't rigorous, or there's an obvious flaw in the design. I'm comfortable with file drawers for obviously flawed studies. Huge caveat: it's pretty easy to "spot" the flaw once the results are in, and that can be a convenient excuse. But sometimes the flaw is real, and I'm not entirely convinced that those studies need to be made public. But I could be quite wrong. Who knows.