After four episodes spent fawning over Scott Alexander's "Non-libertarian FAQ", we turn around and attack the good man instead. In this episode we respond to Scott's piece "In Continued Defense of Non-Frequentist Probabilities", addressing each of his five arguments in defense of Bayesian probability. Like moths to a flame, we apparently cannot let the probability subject slide. Sorry, people. But the good news is that before getting there, you get to hear about some therapists and pedophiles (therapeutic pedophilia?). What's the probability that Scott changes his mind based on this episode?

We discuss

Why we're not defending frequentism as a philosophy

The Bayesian interpretation of probability

The importance of being explicit about assumptions

Why it's insane to think that 50% should mean both "equally likely" and "I have no effing idea".

Why Scott's interpretation of probability is crippling our ability to communicate

How super are Superforecasters?

Marginal versus conditional guarantees (this is exactly as boring as it sounds)

How to pronounce Samotsvety and are they Italian or Eastern European or what?

References

In Continued Defense Of Non-Frequentist Probabilities

Article on superforecasting by Gavin Leech and Misha Yagudin

Essay by Michael Story on superforecasting

Existential risk tournament: Superforecasters vs AI doomers and Ben's blogpost about it

The Good Judgment Project

Quotes

During the pandemic, Dominic Cummings said some of the most useful stuff that he received and circulated in the British government was not forecasting. It was qualitative information explaining the general model of what’s going on, which enabled decision-makers to think more clearly about their options for action and the likely consequences. If you’re worried about a new disease outbreak, you don’t just want a percentage probability estimate about future case numbers, you want an explanation of how the virus is likely to spread, what you can do about it, how you can prevent it.

- Michael Story

Is it bad that one term can mean both perfect information (as in 1) and total lack of information (as in 3)? No. This is no different from how we discuss things when we’re not using probability.

Do vaccines cause autism? No. Does drinking monkey blood cause autism? Also no. My evidence on the vaccines question is dozens of excellent studies, conducted so effectively that we’re as sure about this as we are about anything in biology. My evidence on the monkey blood question is that nobody’s ever proposed this and it would be weird if it were true. Still, it’s perfectly fine to say the single-word answer “no” to both of them to describe where I currently stand. If someone wants to know how much evidence/certainty is behind my “no”, they can ask, and I’ll tell them.

- Scott Alexander, Section 2

Socials

Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani

Come join our discord server! DM us on twitter or send us an email to get a supersecret link

Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber here. Or give us one-time cash donations to help cover our lack of cash donations here.

Click dem like buttons on youtube

What's your credence in Bayesianism? Tell us over at incrementspodcast@gmail.com.

Support Increments
