I dabble in Bayes. Most of my recent papers use Bayesian stats. Gun to my head, I’d call myself a Bayesian. But I ain’t terribly good at it. I’m a dabbler. I spoke about my dabbling last year at SPSP, alongside people who actually understand stats (Julia Haaf, Alex Etz, Joe Hilgard). How did I get to be this way?

**How it happened…**

I was teaching undergraduates the logic of null hypothesis significance testing, and a student said “esteemed Dr. Gervais, do we not in actuality test our chosen hypothesis?” and I explained that no, we are not allowed to do that, for first we must reject the null. She responded thusly: “Well, if we did want to test our own hypothesis, how would we do that?” I turned to the board and scribbled some conditional probabilities. Once I figured out the conditional probability the student wanted, I took the conditional probability for a p-value and flipped it like a frat pledge with a Solo cup. I ginned up a quick formula describing what other information we needed. I took a picture of my chalkboard scribbling to ponder later. I got back to my office and googled my scribbles, only to learn that my scribbles were in fact known as Bayes’s[i] theorem, which I had independently derived. Huzzah!
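The chalkboard photo didn’t survive, so here’s the flip in code form: a minimal Python sketch of Bayes’s theorem. All the numbers are made up for illustration, not from the actual class example.

```python
# Bayes's theorem: flip P(data | hypothesis) into P(hypothesis | data).
# All numbers below are made up for illustration.

p_h = 0.5              # prior: P(hypothesis)
p_d_given_h = 0.8      # likelihood: P(data | hypothesis)
p_d_given_not_h = 0.3  # P(data | not-hypothesis)

# The "other information we needed": P(data), via the law of total probability.
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# The flip: P(hypothesis | data) = P(data | hypothesis) * P(hypothesis) / P(data)
p_h_given_d = p_d_given_h * p_h / p_d
print(round(p_h_given_d, 3))  # 0.727
```

Note that the extra pieces you need beyond the p-value-style conditional are exactly the prior and the marginal probability of the data.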

**How it actually happened…**

I had some fucked up data. Just a bad stupid distribution. Zero inflated (a term I hadn’t heard yet) with a big stupid long tail, then a bump at the maximum score. Our DV was how many pins someone wanted to stick in a voodoo doll they thought was an atheist or something like that. It seemed to make some sort of sense to try at the time, I dunno why. Crazy times. Results-wise it kind of looked like people wanted to stick it to the godless, but it’s really tough to figure out how to get a p-value from flammable garbage like this.
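For the curious, here’s roughly what that flammable garbage looks like: a Python sketch simulating a zero-inflated, long-tailed count DV with a bump at the scale maximum. Everything here is hypothetical (sample size, mixing probability, the geometric tail), not the actual voodoo-doll data.

```python
import numpy as np

rng = np.random.default_rng(42)
n, max_pins = 1000, 10  # hypothetical sample size and scale maximum

# Most people stick zero pins; a minority follow a long-tailed count process.
is_zero = rng.random(n) < 0.6          # zero inflation
counts = rng.geometric(p=0.3, size=n)  # long right tail
pins = np.where(is_zero, 0, counts)
pins = np.minimum(pins, max_pins)      # the scale ceiling piles a bump at the max

# Spike at 0, long tail, bump at 10 -- variance way above the mean,
# so a plain "mean == variance" count model is not going to fit.
print(np.bincount(pins, minlength=max_pins + 1))
```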

We presented this issue in a brownbag and someone said something like “those data are really skewed, you have to **fit a Poisson distribution**.” I hadn’t heard of that, so I wrote it down. A couple of google searches later, I realized three things: 1) *NO*, 2) *NO IT DOESN’T WORK LIKE THAT*, and 3) it was time to learn some goddamn statistics like I should’ve years ago. I needed to go back to basics, like in a training montage.

My basic toolkit wasn’t gonna cut it, so I decided to branch out a bit. I started reading about nonparametrics. I started reading on robust stats. I came across some stuff on Bayes. I started with some light Bayes play (tutorials on Bayes factors, Kruschke’s BEST paper). But I never really implemented any of it directly. It sounded conceptually cool, and like something to dig into. But I had a lot of other stuff going on, like trying to publish papers and raising 1-2 small humans, and walking the dog. It went into my voluminous “cool maybe I’ll do this someday” file that I occasionally delete.

I also taught grad methods a couple of times in this spell and included some Bayes readings. Like me, students were intrigued by this Bayes wizardry (especially Bayes factors, which seemed to them like a gateway transition from p-ing on everything). But one persistent objection was pithily summed up by a student: “How will we publish this stuff when everyone knows p-values instead[ii]?” Wouldn’t want to let testing the hypothesis you actually had get in the way of publishing a paper, I guess.

Around this time, I had read most of the accessible Bayes factor primers out there. I’d read Kruschke’s book (love the concept chapters). I’d done my Dienes pilgrimage. I read Alex Etz’s blog posts and assigned them to my grad classes while he was still an undergrad, to shame my students. And I really was digging into different stats backgrounds. I knew I’d never be a front line maths dude making new tools. I last took calc in high school, have never taken probability, skipped almost every stats lecture in college, and halfassed my way through the minimum required grad stats. Enough laziness was enough: I wanted a firmer conceptual understanding of what was at stake, even if I wasn’t going to become some math whiz in my early-mid 30s. But it was hard as hell to try to teach myself stats while also doing that whole tenure push. It was brutal, and I’m so glad I did it. Here’s a blog post with some of my favorites.

Along came McElreath’s Statistical Rethinking, and I was hooked. Clear conceptual outlines. Code examples that even a middling R learner like me could work out.

I could follow the logic. I could build some models. And for the first time in forever, I felt like the stats were answering the questions I liked to ask.

I won’t pretend that everything immediately clicked. Going from casually reading about Bayes to actually publishing with Bayes was NOT an especially clean path for me. In the early stages, it was strange and scary and uncomfortable, as learning ought to be.

**What scared me about Bayes…**

**Q:** Would I be able to publish papers with Bayes?

**A:** Yes, but I ain’t gonna lie and say it’s just as easy as publishing frequentist.

Every paper I’ve written with Bayes has to include a section saying how Bayes works (FWIW, I’d love it if *all* papers had to explain the core logic of what they were doing). Every editor at least suggested removing the Bayes or making it subordinate to frequentist approaches (termed either “more familiar” or “simpler”). Every editor so far has been cool with me keeping in Bayes and not including frequentist if I didn’t want to, but it’s always flagged somehow. I think one key to keeping the Bayes featured has been that we’ve never taken an ideological purist stance that Bayes was *righter* or *smarter* but just that it was a good fit for the questions we asked.

**Q:** Aren’t priors evil demonspawn from the pits of hell?

**A:** No, they are demonspawn from the pits of you.

I’ve seen/heard people teaching grad stats dismiss Bayes out of hand to their students because priors are *subjective* and therefore *scary.* And to be fair, I was pretty nervous when I first started working with priors. Why should my opinions affect my inferences (oh what a sweet summer child I was)?

I remember being at an APS stats workshop on Bayes factors, actually plotting the default alternative (Cauchy, rscale = sqrt(2)/2), and thinking “that ain’t an alternative hypothesis I’m ever likely to have.” It was weird, but then I realized it wouldn’t be a big deal to change the alternative to a hypothesis that actually makes sense in my world. I was kind of a dick online for a while about why I didn’t like the default, and that’s silly. It didn’t work for me, but if it’s a good fit/first start for lots of other people, that’s cool. My bad, y’all. Thanks to Morey, Rouder, Wagenmakers, et al., for dealing with my shit on Twitter during my “awkward Bayes teenager” phase.
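If you want to see what I was squinting at, here’s a quick Python sketch of that default Cauchy with scale sqrt(2)/2, using hand-rolled density and CDF functions rather than any particular package:

```python
import math

rscale = math.sqrt(2) / 2  # the default scale mentioned above, ~0.707

def cauchy_pdf(x, scale=rscale):
    """Density of a Cauchy(0, scale) distribution."""
    return 1 / (math.pi * scale * (1 + (x / scale) ** 2))

def cauchy_cdf(x, scale=rscale):
    """CDF of a Cauchy(0, scale) distribution."""
    return 0.5 + math.atan(x / scale) / math.pi

# Heavy tails: this default alternative puts real mass on huge effects.
for d in [0.0, 0.2, 0.5, 1.0, 2.0]:
    print(f"density at effect size d = {d}: {cauchy_pdf(d):.3f}")

# Fully half the prior mass sits beyond |d| = 0.707:
print(round(cauchy_cdf(rscale) - cauchy_cdf(-rscale), 2))  # 0.5
```

Plot that and you can see why it felt like a strange hypothesis for my little effects, and also why swapping in a different scale is no big deal.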

McElreath’s book does a great job of pointing out that estimation priors are essentially tools for telling your model to chill out. Do you know the scale of your DV? Do you think little relationships are more plausible than huge ones? Do you think overfitting is bad? Cool, you’ve got some priors. And if you have non-insane priors and a decent amount of data, it’s not really a huge deal. Boom. **In my experience, priors are only scary until you try them.** Then it just becomes a routine step of thinking about your analysis before you button mash it into existence.
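A tiny illustration of the “chill out” point, using conjugate normal-normal updating. Everything here is hypothetical; `posterior_mean` is just a toy helper, not anybody’s package API.

```python
import numpy as np

def posterior_mean(data, prior_mean=0.0, prior_sd=0.5, sigma=1.0):
    """Posterior mean for a normal likelihood (known sigma) with a normal prior.

    A precision-weighted average of the prior mean and the sample mean.
    """
    prior_prec = 1 / prior_sd**2
    data_prec = len(data) / sigma**2
    return (prior_prec * prior_mean + data_prec * np.mean(data)) / (prior_prec + data_prec)

rng = np.random.default_rng(1)
small = rng.normal(0.4, 1.0, size=5)    # tiny sample: prior does real work
big = rng.normal(0.4, 1.0, size=500)    # decent sample: prior barely matters

print(posterior_mean(small), np.mean(small))  # noticeable shrinkage toward 0
print(posterior_mean(big), np.mean(big))      # basically the sample mean
```

With five data points the skeptical prior pulls the estimate toward zero; with five hundred, the data swamp it. That’s the “decent amount of data” caveat in action.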

**Q:** Do I understand Bayes well enough to pull it off?

**A:** That doesn’t stop people from using frequentist stats, friendo.

Got any other concerns? Let me know and I’ll offer my unexpert opinion.

**These Days…**

Most of my recent work uses Bayesian stats. Examples are here, here, and here. My current favored approach is Bayesian estimation. A lot of my work actually focuses on estimation problems (how many of **A Thing** are there? Is that thing big and stable, or weak and fickle?). Beyond that, there’s a metric crap-bucket of really useful things you can do with a posterior distribution. You can ask and answer a whole host of questions with it. What’s the probability of A Thing being Like That, now that you’ve got a model and some data? As Jason Mendoza would say, that’s pretty dope.
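To make the posterior-as-answer-machine point concrete, here’s a sketch using fake posterior draws as a stand-in for real MCMC output (the 0.3 and 0.15 are invented, as is the “effect” itself):

```python
import numpy as np

# Once you've got a posterior, questions become summaries of samples.
# Hypothetical: 10,000 posterior draws for some effect of interest.
rng = np.random.default_rng(7)
posterior = rng.normal(0.3, 0.15, size=10_000)  # stand-in for real MCMC draws

# P(A Thing is Like That | model, data): just count the draws that qualify.
p_positive = np.mean(posterior > 0)
ci_low, ci_high = np.percentile(posterior, [2.5, 97.5])  # 95% credible interval

print(f"P(effect > 0) = {p_positive:.2f}")
print(f"95% credible interval: [{ci_low:.2f}, {ci_high:.2f}]")
```

Want a different question answered (effect bigger than 0.2? smaller than your smallest effect of interest?), you just count different draws. Same posterior, whole host of questions.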

That’s not to say I’m some statistical monogamist. If I find myself in a situation where I’ve got clear dueling theoretical predictions, Bayes factors seem like a cool fit. In this case, the priors can match the theories, and voilà, I could actually test them. If I find myself wanting to have known failure rates for a statistical tool in lots of repeated uses of A Thing, then I’ll go frequentist. If I’m collaborating with people who want to do things one way, I’ll probably acquiesce. I’m a dabbler with very diffuse priors about what techniques work best. If I tussle with someone online about stats, it’s probably because they’re dissing something they haven’t tried. Or because I don’t know when to shut up. And if I joke about stats, it’s cuz I realize that I barely know enough to be dangerous, and I LAUGH IN THE FACE OF DANGER.

Get some data. Try to analyze it different ways. Ask different questions with the data. Do you want to make choices with known failure rates? Do you want evidence? Do you want to quantify your uncertainty? You can’t pick a stats tool unless you know what kind of question you’re asking. And if you think priors are scary, that’s a prior rather than a posterior talking.

Wanna dabble with Bayes? Not sure how to start? Drop me a line. I probably can’t help, but I can reassure you that I’ve been there.

I’m currently developing a grad course on applying Bayes in a dabblish manner. If you’ve got any ideas, let me know. And I’ll post it for feedback at some point.

[i] Bayes’ theorem? You think multiple people named Baye own it? Oh, you actually think a dude named Bayes owns it? Then it’s Bayes’s. Fight me, motherfuckers.

[ii] debatable