By and large, humans are pretty bad at understanding probabilities. We live in a world of tangible things and real events, which makes it hard to wrap our heads around the abstract concept of randomness.
Among statisticians, there are two broad philosophies when it comes to thinking about uncertainty: frequentist statistics and Bayesian statistics.
Frequentist statistics assumes that there is some true probability distribution from which we randomly observe some data. For example, imagine we were flipping a coin and trying to determine if it was fair or not.
A frequentist would design an experiment and define a hypothesis test: “I am going to flip the coin 50 times, and I will say the coin is not fair if I get fewer than 20 or more than 30 heads.”
From this frequentist experiment, we can produce a p-value and a confidence interval. The p-value tells us how likely it would be, if the coin really were fair, to observe a result at least as extreme as ours out of the universe of all possible sets of 50 coin flips; the confidence interval gives a range of heads probabilities consistent with what we observed.
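As a rough illustration, here is a minimal sketch in Python using scipy. The rejection rule and the 50 flips come from the hypothetical experiment above; the observed count of 21 heads is borrowed from the outcome considered later in the post.

```python
from scipy.stats import binom, binomtest

n, p_fair = 50, 0.5  # 50 flips of a hypothetically fair coin

# Significance level of the rule "reject if fewer than 20 or more than 30 heads":
# P(X <= 19) + P(X >= 31) under the null hypothesis that the coin is fair.
alpha = binom.cdf(19, n, p_fair) + binom.sf(30, n, p_fair)
print(f"chance a fair coin lands in the rejection region: {alpha:.3f}")

# Two-sided p-value and confidence interval for an observed 21 heads.
result = binomtest(21, n, p=p_fair, alternative="two-sided")
print(f"p-value: {result.pvalue:.3f}")
ci = result.proportion_ci(confidence_level=0.95)
print(f"95% CI for the heads probability: ({ci.low:.3f}, {ci.high:.3f})")
```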
On the other hand, Bayesian statistics builds on our prior knowledge about random variables as its foundation. These prior assumptions can come from historical data or simply from the statistician's judgment.
Take the same coin-flipping experiment: we no longer begin by defining a null hypothesis and some criteria to reject it. Instead, we begin with a prior distribution, perhaps one that assumes the coin is fair, and our goal is to produce a “posterior distribution” that combines that prior with the flips we observe. From the posterior, we can read off the probability that the coin is fair.
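The machinery behind this update is Bayes' rule: the posterior is proportional to the likelihood of the data times the prior,

$$P(\theta \mid \text{data}) \propto P(\text{data} \mid \theta)\,P(\theta),$$

where θ is the coin's unknown probability of landing heads.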
Assume now that we ran our experiment and got 21 heads in 50 flips. In this case, the frequentist would say, “We fail to reject the null hypothesis and cannot conclude whether or not the coin is fair.” The Bayesian would instead compute the posterior distribution and make a direct statement about the probability that the coin is fair.
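Here is a minimal sketch of that Bayesian calculation in Python. The Beta(10, 10) prior and the definition of “fair” as a heads probability between 0.45 and 0.55 are assumptions for illustration, not anything canonical; the useful fact is that with a Beta prior and binomial data, the posterior is again a Beta distribution.

```python
from scipy.stats import beta

# Assumed prior: Beta(10, 10), a belief centered on a fair coin.
a_prior, b_prior = 10, 10

# Observed data: 21 heads in 50 flips.
heads, tails = 21, 50 - 21

# Beta-Binomial conjugacy: the posterior is Beta(a + heads, b + tails).
posterior = beta(a_prior + heads, b_prior + tails)

# Probability the coin is "fair", under an assumed tolerance of +/- 0.05.
p_fair = posterior.cdf(0.55) - posterior.cdf(0.45)
print(f"P(0.45 < theta < 0.55 | data): {p_fair:.3f}")
```

Unlike the hypothesis test, this yields a probability statement about the coin itself.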
In short, the frequentist will never calculate the probability of a hypothesis being true. They will either reject it or fail to reject it. A Bayesian statistician will always begin with a prior guess about whether a hypothesis is true, and update that prior guess as they observe new information.
I personally find Bayesian thinking to be a much more satisfying way to conceptualize uncertainty. We all carry around some prior knowledge about how random things should play out, and as we observe new information we update those priors.
However, one thing to realize about deliberately thinking like a Bayesian is that it can be slow to incorporate new and dramatically different pieces of information. That is good when the dramatically different information is an outlier that we don’t want to completely determine how we think, but it also means our priors can be slow to recognize when things really have changed. The sketch below illustrates this lag.
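To see that lag concretely, here is a small simulation. The scenario is invented for illustration: a coin is fair for 200 flips and then suddenly becomes heavily biased, and the posterior mean takes many flips to catch up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed scenario: fair for 200 flips, then suddenly biased toward heads.
flips = np.concatenate([
    rng.binomial(1, 0.5, 200),  # fair phase
    rng.binomial(1, 0.9, 100),  # biased phase
])

a, b = 1, 1  # flat Beta(1, 1) prior on the heads probability
for i, flip in enumerate(flips, start=1):
    a, b = a + flip, b + 1 - flip  # conjugate update after each flip
    if i in (200, 250, 300):
        print(f"after {i} flips: posterior mean = {a / (a + b):.3f}")
```

Even 100 flips into the biased phase, the posterior mean sits well below the coin's new heads probability, because the 200 fair flips still dominate the estimate.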
In practice, I still mostly use frequentist statistics in my analysis. Bayesian analysis is often sensitive to the construction of the prior distribution, and a result that still has probability baked in is not always useful to a policy maker.
Still, Bayesian statistics reminds us that we live in a world where things are uncertain. Especially in policy analysis, where we often try to predict future outcomes, it is important to remember that there is uncertainty and unlikely outcomes do not necessarily mean our predictions were wrong.