Currently, most of my experiments draw heavily on the methods of social psychology. We use laboratory experimental manipulations to test predictions derived from the big-picture questions highlighted above. I also like adopting new experimental techniques and co-opting methods built for other tasks to fit my needs. So you’ll see me running experiments that use priming, heuristic processing, judgment and decision-making tasks, implicit and indirect measures of attitudes and beliefs, and—that venerable staple of social psychological research—the Likert scale.

Over the past few years, social psychology has taken a bit of a beating. From the discovery of several fraudsters (which I’m guessing is rare) to broader concerns about how questionable research practices undermine the replicability of our science (which are likely much more common), it’s a very interesting time to be embarking on a career as an assistant professor. With that in mind, here’s my very brief methods manifesto:

As researchers, we like statistically significant results. They lead to publications, jobs, and tenure (plus they just look cooler on graphs). To obtain statistically significant results, we need powerful designs. My impression is that questionable research practices aren’t primarily used by researchers who want to see p < .05 regardless of whether the effect is real. Instead, I think the use of questionable research practices is nothing more than an attempt to boost power that, as an unfortunate consequence, leads to the publication of lots and lots of (likely) false positive effects. The intention is good. The consequences could be very bad indeed.

Recently, I made a very deliberate decision about how I wanted to conduct my research. There’s one way to boost statistical power without undermining replicability: run bigger studies. Much, much bigger. More participants = more power, plain and simple. So now I run big, simple studies (e.g., we don’t stack up DVs just to see “what comes out”). I’m also not going to publish an effect I can’t replicate. The upshot is that I can be reasonably confident that any positive effects I publish are “real.” It also means (since time, resources, and participants are finite) that I’ll necessarily conduct and publish fewer studies. But I think it’s worth the tradeoff.
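To put some rough numbers on the “more participants = more power” point, here’s a quick toy calculation using Python’s statsmodels. It’s only an illustration: the two-group t-test design and the assumed effect size (Cohen’s d = 0.3) are stand-ins, not values from any of my actual studies.

```python
# Toy power calculation for a simple two-group (independent-samples t-test) design.
# Assumed values for illustration only: small-to-medium effect (d = 0.3), alpha = .05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_group in (20, 50, 100, 250, 500):
    power = analysis.power(effect_size=0.3, nobs1=n_per_group, alpha=0.05)
    print(f"n = {n_per_group:>3} per group -> power = {power:.2f}")
```

With those assumed numbers, 50 participants per group would detect the effect only about a third of the time, while a few hundred per group pushes power above .90. Hence “much, much bigger.”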

 
