How to Change Your Mind
The simple mathematical idea you can use to make decisions in everyday life.
We all need to make hundreds of decisions every day. Decision research suggests that the average adult makes about 35,000 remotely conscious decisions each day. But we are not always presented with enough data to make those decisions well. In fact, when we are presented with large volumes of ambiguous or uncertain evidence, we often, understandably, find it hard to draw concrete conclusions.
Fortunately, as I explore in more detail in How to Expect the Unexpected, there are tools which can help us to reason in the face of uncertainty. One such mechanism has been around for more than 250 years. Bayes’ theorem (also known as Bayes’ rule, or sometimes just Bayes) is one of the most important tools across all of applied mathematics.
At its heart, Bayes’ theorem is a statement about conditional probability—the probability that a hypothesis is true given some piece of evidence. It might be the probability that a suspect is innocent (hypothesis) given a piece of forensic evidence, or it might be the probability (without looking at the team sheet) that Pelé was on the pitch (hypothesis) given that Brazil scored a goal (evidence). In real life it is often easier to assess what is known as the transposed statement—the probability of seeing the evidence given that we assume an underlying hypothesis is true: the chances of seeing a particular piece of forensic evidence if a suspect is innocent, or the chances of Brazil having scored if Pelé was playing. Bayes developed his theorem as a tool to bridge between these two sides of the conditional probability equation.
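In symbols (the notation is standard, though it does not appear in the passage above), writing H for the hypothesis and E for the evidence, the theorem bridges the two sides like this:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

The left-hand side is the quantity we actually want, the probability of the hypothesis given the evidence, while P(E | H) on the right is the easier-to-assess transposed statement.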
Today, Bayes’ theorem is at work behind the scenes, filtering out spam emails ranging from phishing attempts to pharmaceutical offers. It underlies the algorithms that recommend films, songs and products to us online and is behind the deep-learning algorithms which are helping to provide more accurate diagnostic tools for our health services.
But the implications of Bayes’ theorem go way beyond any one application. In a nutshell, it suggests that one can update one’s initial belief with new data in order to arrive at a new belief. In modern parlance, the prior probability (initial belief) is combined with the likelihood of observing the new data to give the posterior probability (new belief). As much as a mathematical statement, Bayes’ theorem is a philosophical viewpoint: that we can never access perfect absolute truth, but the more evidence that accrues, the more tightly our beliefs can be refined, eventually converging toward the truth.
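As a minimal sketch of what one such update looks like (the numbers here are purely illustrative, chosen by me rather than taken from any real case):

```python
# A minimal sketch of a single Bayesian update; all numbers are illustrative.

def bayes_update(prior: float, likelihood: float, likelihood_alt: float) -> float:
    """Return the posterior probability of a hypothesis H after evidence E.

    prior          -- P(H): the initial belief in H
    likelihood     -- P(E | H): chance of seeing the evidence if H is true
    likelihood_alt -- P(E | not H): chance of seeing it otherwise
    """
    evidence = likelihood * prior + likelihood_alt * (1 - prior)  # P(E), by total probability
    return likelihood * prior / evidence  # Bayes' theorem

# Start from a 30% belief; suppose the evidence is three times more likely
# if the hypothesis is true than if it is not.
posterior = bayes_update(prior=0.3, likelihood=0.6, likelihood_alt=0.2)
print(f"{posterior:.2f}")  # 0.56 -- the belief has strengthened, but is far from certain
```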
When my information changes…
Bayes absolutely typifies the essence of modern science: the ability to change one’s mind in the face of new evidence. As the economist John Maynard Keynes reputedly said, “When my information changes, I alter my conclusions.”
Many of the theorem’s more ardent disciples argue that Bayes’ theorem is a philosophy by which to live. Although this is not my personal view, I think there are practical lessons we can benefit from if we learn to think in a Bayesian way—tools which can help us to decide which of multiple competing stories to believe, how confident to be in our assertions and, perhaps most importantly, when and how to change our minds. Although it has a precise mathematical statement, I think it’s more helpful here to focus on two of the key lessons that Bayes’ rule gives us to take away into everyday life.
Consider a different point of view
Many of us will be aware of the ways in which confirmation bias can lead us astray. The cognitive underpinnings of the phenomenon, however, are perhaps most neatly explained by thinking in terms of Bayes’ theorem. Confirmation bias is essentially a failure to consider, or to assign sufficient weight to, our prior beliefs about alternative hypotheses; an underestimation of the likelihood—the strength of the evidence in favour—of those alternatives; or a combination of the two.
Imagine the situation in which you are trialling a new medicine to treat the chronic back pain you’ve been suffering from. After a week of taking the pills, you start to feel better. The obvious conclusion to draw is that the medicine has improved your back problems. But it’s important to remember that there is at least one alternative hypothesis to consider. Perhaps your back pain fluctuates significantly from week to week and, during the period over which you were taking the medicine, it might well have receded anyway. Perhaps less likely is the possibility that the improvement was caused by something else entirely—a different sleeping position or a new form of exercise, for example. We often fail to take this vital step back and ask: what if I were wrong? What are the alternative possibilities? What would I expect to see if they were correct? And how different is that from what I currently see? Unless we consider the other hypotheses and assign them realistic prior probabilities, the contribution of the new evidence will always be disproportionately assigned to the obvious hypothesis we have in mind.
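To see how this plays out numerically, here is a small sketch comparing the medicine hypothesis against the it-would-have-eased-anyway alternative (the probabilities are invented purely for illustration):

```python
# Invented numbers, for illustration only.
# H1: the medicine works.  H2: the pain fluctuates and would have eased anyway.
prior = {"medicine works": 0.5, "natural fluctuation": 0.5}

# P(a better week | hypothesis): improvement is likely either way,
# only somewhat more so if the medicine genuinely works.
likelihood = {"medicine works": 0.8, "natural fluctuation": 0.5}

evidence = sum(likelihood[h] * prior[h] for h in prior)  # P(a better week)
posterior = {h: likelihood[h] * prior[h] / evidence for h in prior}

for hypothesis, p in posterior.items():
    print(f"{hypothesis}: {p:.2f}")
# medicine works: 0.62, natural fluctuation: 0.38 -- a nudge towards the
# medicine, not the near-certainty the obvious conclusion suggests.
```

Give the alternative no prior weight at all and the same week of evidence would leave you certain the pills were responsible; weigh it realistically and the evidence moves you only gently.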
Alternatively, confirmation bias can arise when we are well aware of alternative hypotheses but fail to seek out, or assign appropriate weight to, evidence which contradicts our own preferred beliefs. This results in our overestimation of the likelihood of data supporting our favoured hypothesis and our underestimation of the likelihood of data supporting the alternatives. X (formerly Twitter) and other social media sites are classic examples of platforms on which many users exist inside an echo chamber. By being fed only those posts which reinforce their current views, their feeds shelter many of the platforms’ users from alternative points of view. Users with what may start out as only mildly differing views have their opinions reinforced continually to the point of near certainty. This can result in increased polarisation and tribalism, both on the social media platform and back in the real world.
Change your opinion incrementally
Bayes’ rule was never designed to be a tool that could only be applied once, to update a single prior belief with one new piece of evidence. The ability to reuse Bayes’ theorem continually to update our beliefs is one of its greatest strengths. But we must be wary of overweighting our prior beliefs. The feeling of confidence in our convictions might make it tempting to ignore small pieces of information that don’t change our view of the world significantly. The flip side of allowing ourselves prior beliefs as part of the Bayesian perspective is that we must commit to altering our opinion every time a new piece of relevant information appears, no matter how insignificant it seems. If lots of small pieces of evidence were to arrive, each slightly undermining a strongly held belief, then Bayes would allow us to—indeed, the theorem would dictate that we must—update our view incrementally.
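A sketch of that incremental process (again with invented numbers) shows how a chain of individually feeble updates can wear down even a confident prior, each posterior becoming the prior for the next step:

```python
# Invented numbers: ten weak pieces of contrary evidence, applied in turn.
# Each is only slightly more likely under the alternative (0.55) than
# under the favoured belief (0.45).
belief = 0.95  # a strongly held prior

for step in range(1, 11):
    p_evidence = 0.45 * belief + 0.55 * (1 - belief)  # P(E) before this update
    belief = 0.45 * belief / p_evidence               # posterior becomes the new prior
    print(f"after update {step:2d}: belief = {belief:.2f}")

# No single step changes much, but together they drag the belief
# from 0.95 down to roughly 0.72.
```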
It’s not always easy to change our opinions in the light of new evidence. It feels uncomfortable to admit we were wrong, and almost cowardly to renege on beliefs we previously held so strongly. In fact, it takes great courage to hold and to espouse a view that contradicts one you have previously embraced.
Attempting to reason in the face of uncertain and fluctuating evidence is no easy task. We must accept that we will not always make the right choices, generate the correct predictions, or hold the correct opinions. Ultimately, we will all be happier once we learn to accept, if not always expect, the unexpected.