How to Deal With Cognitive Biases


When was the last time you made a conscious decision after more than a few seconds of serious thought?

Personally, I can’t recall the last time I stopped, paused, thought, dug in my head for a bit, made a choice, and went on with my day. Even if it only took a few minutes.

Can you?

These moments are quite rare, aren’t they? And yet, we make thousands of ‘choices’ every day.

You can imagine the huge impact the quality of your decisions has on your success.

However, if you compare people’s reasoning to scientific, statistical and logical standards, you’ll find large classes of judgments to be systematically mistaken.

In this guide, I investigate how that’s possible and present a five-step method for bulletproofing your decision-making process.

Cognitive biases 101

For a long time, we humans thought we were endowed with a beautiful machine for thinking — our brains.

Alas, work on ‘cognitive biases’ — most famously by Nobel Prize winner Daniel Kahneman and his colleague Amos Tversky; popularized in the former’s book Thinking, Fast and Slow — showed that we are, in fact, running on corrupted hardware.

Our intellect was made for survival, not truth or rationality. Cognition is an instrument, not the end product:

Consider that those who started theorizing upon seeing a tiger on whether the tiger was of this or that taxonomic variety, and the degree of danger it represented, ended up being eaten by it. Others who just ran away at the smallest presumption and were not slowed down by the smallest amount of thinking ended up either outchasing the tiger or outchasing their cousin who ended up being eaten by it. — Nassim Nicholas Taleb, Fooled by Randomness

If we were to optimize the truth-percentage of our beliefs at every step in life, it would cost us much-needed time and energy. Accordingly, our brain uses shortcuts to avoid spending more than necessary.

Central processing is the exception, not the rule. Most of the time, we do not think when making choices. We use heuristics.

Heuristics are rules of thumb that suggest a solution to a problem. Our mind operates by a rulebook — a set of disconnected rules. Your response in a situation depends not on conscious deliberation, but on which page you open and which rule you apply.

Inferences gone wrong

A good way to think about these rules is as [if → then] inferences we automatically make. In our brains, some properties are destined to be a couple:

  • The effort heuristic links hardship [if] and importance [then]. We presume difficult projects are more important than endeavors that didn’t take so much toil.
  • The familiarity heuristic links acquaintance [if] and importance [then]. It’s the reason Americans estimate that Marseille has a bigger population than Nice and Nice has a bigger population than Toulouse. Marseille rings a louder bell than Nice, which in turn buzzes harder than Toulouse. The reason that name sounds familiar, we infer, must be that it’s a larger, more in-the-news, more important city.
  • The representativeness heuristic links typicalness [if] and frequency [then]. We automatically judge events as more likely the more closely they resemble our prototype of that kind of event. Homicide is more similar to my prototype of causes of death — a more representative cause of death — than suicide, so I judge it as something that happens more frequently than suicide.

Heuristics operate unconsciously. We need to fight hard to gain access to the mental processes underlying most of our judgments. We can’t monitor what makes us think what we do. For most judgments, we’re only conscious of the end result, not of its assembly.

Their automaticity is both their strength and their pitfall: it makes them efficient, but also blind and very hard to intercept (as we’ll see later).
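To make the rulebook metaphor concrete, here is a minimal Python sketch (my own illustration, not anything from the psychology literature; every cue and rule in it is invented) of heuristics as a lookup table of [if → then] rules, where only the conclusion, never the rule that produced it, reaches consciousness:

```python
# A toy "rulebook": each heuristic maps a cue [if] straight to a conclusion [then].
# Cue strings and rules are invented for illustration only.
RULEBOOK = {
    "took a lot of effort": "it must be important",          # effort heuristic
    "sounds familiar": "it must be a big, important thing",  # familiarity heuristic
    "matches my prototype": "it must happen often",          # representativeness heuristic
}

def automatic_judgment(cue: str) -> str:
    """Return only the conclusion; the lookup that produced it stays hidden."""
    return RULEBOOK.get(cue, "no strong feeling")

# We are conscious of the output, not of its assembly:
print(automatic_judgment("sounds familiar"))  # -> it must be a big, important thing
```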

Flawed, not just imperfect

For now, the problem is that many of these push-button if-then inferences our brain makes:

  • (a) often misfire while they
  • (b) present themselves to our consciousness as the considered truth rather than a rough-and-ready shortcut.

Heuristics are not merely a simplification of rational models, but are different in methodology and category.

As a consequence, the way we arrive at conclusions can be glitched up. Our inferences frequently violate principles of statistics, economics, logic, and basic scientific methodology.

Your brain’s rulebook saves your life when a tiger is around. Its heuristics will be helpful more often than not and often beat a stab in the dark. But they come with side effects. Rather than consistently handing us a rough approximation of the best decision — in the right ballpark but not fully optimal — they introduce mistakes.

A downside of the representativeness heuristic, for example, is that we often ignore information that should cause us to assign less weight to the similarity judgment. There are about twice as many suicide deaths in the United States in a given year as homicide deaths.

These rules of thumb often lead you astray.

A very partial catalog

In this section, we’re going to kill two birds with one stone. As you’re taking in the examples, I want you to think about this question:

What’s the underlying pattern of the mistakes we make when a bias affects us? What MAKES these decision rules flawed?

Before considering what to do about our biases, let’s first get a better grip on what cognitive biases look like in practice. Here are some well-known ones:

Fundamental attribution error

FLAWED RULE: [IF] OBSERVE BEHAVIOR X → [THEN] INFER PERSONALITY X

In explaining behavior, we over-emphasize personality-based factors and under-emphasize the role of situational influences. We say: “Juan was late because he’s lazy.” We don’t consider that he got stuck in an unforeseeable traffic jam when dropping his kid off at the nursery.

Anchoring

FLAWED RULE: [IF] HAVE INITIAL RANDOM DATA POINT → [THEN] TREAT IT AS NON-RANDOM, AS IMPORTANT

In making judgments, we typically start with a first approximation (an anchor) and then make incremental adjustments based on additional information. These adjustments are usually insufficient, giving the initial anchor a great deal of influence over future assessments. Moreover, there is often no reason to actually believe the anchor provided a good rough initial estimate in the first place.

For example, in a study by Tversky and Kahneman, participants observed a roulette wheel that was predetermined to stop on either 10 or 65. Participants were then asked to guess the percentage of United Nations member states that are African nations. Participants whose wheel stopped on 10 guessed lower values (25% on average) than participants whose wheel stopped on 65 (45% on average).
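To see the mechanism in miniature, here is a sketch (my own toy model, not Tversky and Kahneman’s) of anchoring-and-adjustment: the estimate starts at the arbitrary anchor and moves toward whatever evidence you have, but only part of the way, so the anchor keeps pulling on the final answer. The 0.5 adjustment rate and the 35% “evidence” figure are assumptions for illustration:

```python
# Toy model of anchoring-and-adjustment; the 0.5 adjustment rate and the 35%
# "evidence" value are illustrative assumptions, not numbers from the study.
def anchored_estimate(anchor: float, evidence: float, adjustment: float = 0.5) -> float:
    """Start at the anchor and adjust toward the evidence, but only part of the way."""
    return anchor + adjustment * (evidence - anchor)

for anchor in (10, 65):
    print(f"anchor {anchor} -> estimate {anchored_estimate(anchor, 35):.1f}%")
# anchor 10 -> 22.5%, anchor 65 -> 50.0%: the arbitrary spin drags the final guess with it.
```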

What’s the mistake here? You’ve got two more tries.

Confirmation bias

FLAWED RULE: [IF] DIAGNOSE INCONSISTENCY WITH OWN VIEWS → [THEN] ASSUME FALSITY

In evaluating information, we search for, interpret, favor, and recall information in a way that confirms our pre-existing beliefs. In assessing whether an argument is sound, we focus too much on whether it jibes with our opinions and not enough on the argument itself. If a well-designed experiment proves us wrong, we mistakenly judge it to be flawed. And if a bad study affirms our views, its errors escape us.

Affect heuristic

FLAWED RULE: [IF] EXPERIENCE POSITIVE/NEGATIVE FEELING → [THEN] INFER POSITIVE/NEGATIVE TRUTH VALUE

The biggest one: the plausibility in your mind of some statement you hear is determined by the emotions the proposition elicits. For instance, statements we’re exposed to regularly are processed with cognitive ease, and because they feel effortless, we judge them more likely to be true.

Write down your guess.


Which data matter, and which don’t?

Biases cause us to make mistakes in judging how relevant certain points of data are. We ignore important information and attach too much weight to an irrelevant (but salient) feature of the problem.

Go back over the list and see how they’re all instances of this error. We overvalue the importance of character and undervalue environmental factors. We attach too much weight to the (arbitrary) ‘anchor’. We overestimate the significance of whether something feels true. And so on.

(For the nerds, here’s some cool jargon from a 2012 paper by information researcher Martin Hilbert: what many biases have in common is that they are instances of noisy deviations in the memory-based information processes that convert objective evidence (observations) into subjective estimates (decisions)).

I thought you were gonna give us examples

It might not be obvious just yet how such errors affect your daily life. Since most choices you make don’t seem like ones where you weigh information, and then attach relevance to data points, and then call your shot, you might not get any flashbacks of a bias affecting you.

This, at least, is how I feel.

When do I make choices this way?

Scenarios along the lines of “I woke up and there was a roulette wheel, and someone spun it, and it was kinda weird, and after it stopped the guy who spun it asked some weird question about Africa and the UN and I guessed something — I had no idea — and the guy had this smug smile and told me I could go ahead and carry on with my day” don’t count.

Because, you know, of course that happens every day.

More formally, there is a problem with the generalization. Very little of human life takes place in the lab. And life outside of the lab is messy.

Which of the decisions you actually make in your life exhibit the oh-fuck-me-my-brain-is-totally-assigning-bad-credences-because-it-designated-too-much-weight-to-irrelevant-variables-and-ignores-actual-predictors pattern? What are real-life circumstances where one would expect to observe the effects of knowing your own biases?

Luckily, in his book Mindware, psychologist Richard Nisbett offers more recognizable scenarios:

  • You paid twelve dollars for a ticket to a movie that, you realize after half an hour of watching it, is quite uninteresting. Should you stay at the theater or leave?
  • You must choose between two candidates for a job. Candidate A has more experience and stronger recommendations than candidate B, but in the interview candidate B seems brighter and more energetic than candidate A. Which do you hire?
  • You are a human relations officer for a company. Several women have written to you to complain that their job applications were turned down in favor of men with poorer job qualifications. How could you go about finding out whether there is really discrimination on the basis of gender?

I’m not immune, and I suspect you’re not either

When I added these things up, I had an epiphany: of course biases permeate everyday thinking. It seems as if many choices we make are not within reach of cognitive biases, but that’s a false impression.

We don’t look at many decisions as data-weighing-assessing-relevance-and-then-calling-the-shot. That way of thinking doesn’t come naturally to us. However, that such an interpretation of our judgments doesn’t feel instinctive doesn’t mean most of our decisions don’t fit that pattern. Because they do. We just don’t realize it.

Nisbett has a solution:

The key is learning how to frame events in such a way that the relevance of the principles to the solutions of particular problems is made clear, and learning how to code events in such a way that the principles can actually be applied to the events.

And, talking about the fundamental attribution error we discussed above, he drives the point home:

We don’t normally think of forming impressions of an individual’s personality as a statistical process consisting of sampling a population of events, but they are exactly that. And framing them in that way makes us both more cautious about some kinds of personality ascriptions and better able to predict the individual’s behavior in the future.

Rules of logic, statistical principles, and decision theory concepts influence the way we reason about problems that crop up in everyday life, such as what information matters when hiring someone and whether you should finish a boring film you’ve invested time and money in.

Mastering cognitive algorithms that have incorporated such concepts, updating the rules by which your mind operates, should improve your performance in life.

Right?

That’s where we turn next.


But before we go there, another interlude. What could prevent knowledge of cognitive biases from making a difference? Write down your answer.


A solution?

The discovery of our cognitive biases has a positive flipside.

We know (i) what the correct cognitive processes are for finding truth — which mental operations, that is, are likely to lead to accurate beliefs. We’ve also discerned (ii) where, by contrast, our brain goes wrong if we let it take its beloved shortcuts. As a consequence, we can intervene.

Look at how Nisbett presents this possibility and optimistically predicts its effectiveness:

Once you have the knack of framing real-world problems as statistical ones and coding their elements in such a way that statistical [overcoming bias rules] can be applied, those principles seem to pop up magically to help you solve a given problem.

This, in theory, is where so-called rationalists shine.

Mastering such explicit interventions in our habitual thinking — which go by the name of ‘overcoming bias rationality’ — is their goal. To trump flawed heuristics, they teach themselves “cognitive algorithms that systematically promote map-territory correspondences or goal achievement” instead.

In other words: they’re rewriting their brain’s rulebook.

‘Overcoming bias’

We fail to befriend Juan because we make hasty judgments based on the wrong evidence. Rationalists know better.

We hire people who are not the most capable because we trust firsthand information too much and more extensive, superior information from other sources too little. A rationalist has a better understanding of which data are relevant here.

We lose money and time because we don’t realize the applicability of psychological concepts such as the sunk cost fallacy, which causes us to finish the boring movie. Won’t happen to the bias-shattering rationalist.

We have rules in our head that render our decisions suboptimal. The rationalist strives to replace these rules with better cognitive algorithms.

If you don’t believe in overcoming bias, it’s because you don’t know how to do it.


Haha just kidding lolz

So: our hardware is corrupted, and some of us devote time and energy to making optimal — “rational” — decisions regardless. By extension, one would expect them to do better in life. Because they make better choices, they get more output.

Except, while there are no surveys on this, members of rationality communities such as LessWrong themselves have noticed this isn’t happening.

For one, rationalists aren’t (visibly) more successful. Rationality doesn’t seem to be of help in professional life. (Which — unfortunately — explains quite a bit about the world, and about rationalists.)

Sure: they know cognitive algorithms that produce above-average map-territory correspondence. But this is hardly rewarded in standard career tracks. Climbing the ladder of achievement and respecting processes of truth are different beasts.

More importantly, however, and perhaps accounting for the observation about (the lack of) success, studying the art of thinking doesn’t appear to make you more competent either. Even if you’re familiar with overcoming bias rationality, you still make the same stupid damn mistakes. Here’s the big man himself:

People think that reading ‘Thinking, Fast and Slow’ will make them better at decision making and avoiding biases, I have written the book and am not able to do it. — Daniel Kahneman

Will there be real-life differences?

If you’re skeptical about Kahneman’s conclusion, I have a test for you.

  1. Step one: find a real-life situation in which being trained in the art of rationality is supposed to lead you to act differently than those who haven’t spent time thinking about thinking.
  2. Step two: secretly observe whether this behavioral divergence actually occurs between rationalists and a control group.

Condition one is easy to meet. Think back to Nisbett’s cases about who to hire and whether to sit out a movie. And, for the sake of argument, assume you could observe behavioral differences (if there are any) without invalidating the experiment.

What, you reckon, will the results be?


Do you know about cognitive biases? Fancy another trial?

1. For a set time period (a week, a month, whatever), take special note of every decision you make — of the sort that’s made on a conscious level and requires at least a few seconds’ serious thought.

2. Then note whether you make that decision rationally. If yes, also record whether you made that decision using rationalist techniques of the overcoming-bias sort. I don’t just mean that you spent a brief second thinking about whether any biases might have affected your choice. I mean a decision where you think there’s a serious chance that using overcoming-bias rationality instead of ordinary-not-being-stupid actually changed the result of your decision.

3. Finally, note whether, once you came to the rational conclusion, you actually followed it.

I foresee sobering results.


How this test will pan out

My prediction is that, if you were to measure whether knowledge of cognitive biases goes along with a behavioral difference — besides ticking different boxes on questionnaires in psychology labs — the results will not be pretty.

You will find you make fewer conscious decisions than you thought.

You will find you rarely apply even the slightest modicum of rationality to the ones you do make.

You will find you practically never apply overcoming-bias efforts to the ones you apply rationality to.

You will find you don’t follow those decisions consistently when you do apply everything.

And, like productivity guru Tiago Forte, you start to wonder:

The knowledge doesn’t trickle down

The time has come to pose the key question:

Knowing of your own biases and fighting to counteract them — sounds smart, but does it make a difference when it counts?

We have reasons to believe the answer is negative.

Kahneman and the rationalists themselves notice this lack of (visible) effects of their knowledge about cognitive biases on their success and decision-making in life. And if you run the experiment, you’ll find the same applies to you.

This should unsettle you

Overcoming bias rationality doesn’t seem to benefit one in one’s individual career. Neither does it deliver in social areas of life. Rationalists aren’t winning.

I find that a stunning observation. Cognitive biases are all the rage today. Why would they be if, as Kahneman himself says, knowing about them doesn’t do anything for you?

If it doesn’t make you better at decision making, why would you learn about decision making?

In light of this one starts to suspect that for most people, reading and talking about cognitive biases is just intellectual virtue signaling.

Why do biases matter?

Surprise: in the rest of this guide I’ll explain why they are actually extremely important (despite appearances to the contrary). We’ll also look at how to take advantage of the huge opportunity they present. Biases point the way towards dramatically increasing our decision-making skills.

To see how, we first need to discover why knowing about them doesn’t seem to make you more successful.


It takes a lot of work

First, let’s review. We’ve already come quite a long way. After piercing the illusions of homo rationalis, we asked why those who think about thinking aren’t found at the top level of every elite selected on any basis that has anything to do with thought. Why do most “rationalists” just seem like ordinary people, perhaps of moderately above-average intelligence, with one more hobbyhorse to ride?

And we saw that the problem is not absence of knowledge, but of execution.

As author James Clear observed, knowledge of cognitive biases doesn’t mean they become easier to counteract. Apparently, books and papers barely reduce how often people fall victim to them. Knowing how to make optimal decisions in the abstract is not enough to actually influence your decision-making process ‘on the ground’ — where your prefrontal cortex is typically not in control.

It’s not that rationalists are lazy, it’s just that it takes a lot of hard work to make knowledge about cognitive biases pay rent. Uber-rationalist and LessWrong founder Eliezer Yudkowsky explains:

Every now and then, someone asks why the people who call themselves “rationalists” don’t always seem to do all that much better in life, and from my own history the answer seems straightforward: It takes a tremendous amount of rationality before you stop making stupid damn mistakes.

Why is it so hard?

One would think it should be possible to decrease susceptibility to biases simply by encouraging individuals to use controlled processing rather than automatic processing.

It turns out not to be that easy.

Why not?

Well, remember that heuristics are automatic processes? And how that was both their strength and their weakness?

Even if you’ve written down better rules somewhere in your brain’s rulebook, you still need to make a huge effort to get your brain to open to (i) the right page — the one with ‘overcoming bias rules’, not the one with the heuristics — (ii) at the right time and then (iii) follow through.

And that’s where the shoe pinches.

Specifically, the two major factors responsible for this bottleneck are:

  • (a) lack of awareness of the cognitive process and its biased methodology
  • (b) impenetrability of this cognitive process

Consider first that for most of us who are trying to make better decisions by observing and countermanding our biases, the awareness is built in a vacuum and would then, in a second step, have to be applied in the moment of decision making.

However, a bias is usually unconscious and virtually impossible to oppose in that manner:

No matter how long I study and try to understand probability, my emotions will respond to a different set of calculations, those that my unintelligent genes want me to handle … We are mere animals in need of lower forms of tricks. — Nassim Nicholas Taleb, Fooled by Randomness

This shows that not only do we have to fix our decision making, it’s also damn hard to bring our automatic thinking modules into consciousness to begin with. After all, only then can we make sure we skip to the right page of the rulebook.

This requires constant monitoring of your thinking steps. It slows you down. It’s not fun to do.

It needs sharp vigilance and forces you to think harder and longer. You’d have to, as investor-philosopher Naval Ravikant puts it, keep your brain running in troubleshoot mode all the time.

Who actually lives like that? TLDR shortcuts please.

Poor application due to weakness of will

So: intervening in your cognitive processes is easier said than done. Reasoning is a slow, effortful process that commands resources like attention that are themselves finite. Despite popular advice to the contrary, striving for an “always-on relationship” with your biases is not realistic.

But that can’t be the end of the explanation. There are a lot of things that take plenty of effort but we still actually do. So what is it that prevents us from pulling out all the stops here?

The likely answer is a disappointing one. It’s akrasia: we act against better judgment through weakness of will.

As my clinical psychologist friend Nick Wignall reported in conversation: most people don’t actually do anything about their cognitive biases. They don’t think about how biases really show up in their own life. Nor do they consider what the actual consequences of their biases are. And most importantly, they don’t take the time to develop a system and routine for regularly identifying their biases and implementing something different.

A lot of people with pretty well-developed senses of rationality don’t use them for anything more interesting than winning debates. They have more ammunition with which to argue against anything they don’t like, and end up less effective because they have the argumentative skills to — for themselves — make a convincing case for ascribing anything that goes against their preferred opinion to some bias or other. Political scientists Charles Taber and Milton Lodge dubbed this the ‘sophistication effect’: the knowledgeable are more prone to motivated biases.


Funny: learning about biases makes you more likely to leap from knowledge of others’ failures to skepticism about a disliked conclusion. Which is, of course, a flawed cognitive algorithm.

Willpower as limiting factor

Summing up: maybe the reason rationalists rarely do much better than anyone else is that they’re not actually using all that extra brainpower they develop.

We let our biases affect us because we don’t have enough willpower, or don’t try hard enough, to keep on intervening in the automatic processes of our brain. So we still use the bad rules. The flawed heuristics. The wrong pages.

That’s why we can have complete command over good principles for reasoning in a particular field and yet be unable to apply them to the full panoply of problems we encounter in everyday life.

As we’ve seen, on top of rational decision rules themselves being complicated, it takes another huge effort to bring your decision-making process into awareness, and another one to follow these rules behaviorally. A general rational outlook is limited in its effectiveness by poor application due to akrasia.

Automatic judgment. Faulty? Unclear. Intercept? Lots of work. Never mind.

More problems of awareness

In the previous section, I suggested that if the limiting factor is willpower and not rationality, throwing truckloads of rationality into our brains isn’t going to increase success very much.

This is definitely a relevant factor, but I was being unfair towards aspiring rationalists. There’s a deeper layer to why people don’t do the work required: if the benefits are unclear, and the workload is high, it almost stands to reason not to care too much about improving your rationality.

Problems of awareness thus strike again: it’s not clear what we would harvest at the end of all this cognitive farming, were we to put in the effort. Eliezer Yudkowsky describes the predicament like this:

People lack the sense that rationality is something that should be systematized and trained and tested like a martial art, that should have as much knowledge behind it as nuclear engineering, whose superstars should practice as hard as chess grandmasters, whose successful practitioners should be surrounded by an evident aura of awesome.

And conversely they don’t look at the lack of visibly greater formidability, and say, “We must be doing something wrong.”

It’s hard to tell whether you’ve managed to become more rational (compare: it’s easy to tell whether you’ve managed to successfully hit someone).

It’s difficult to map beliefs to reality. And when you can’t verify the success of an intervention, you can’t have evidence-based training methods.

And so, while there are (for example) schools of martial arts, there are no rationality dojos.

To summarize, when learning some ability takes a shitload of work, time, and practice, but there’s no place to go where you can learn from a community of practitioners who have gotten together and systematized their skills, it’s no wonder that few people actually acquire the ability.

A vicious circle

And now our chain of why-questions has led us into a self-reinforcing cycle of cause and effect.

  1. Applying rationality to decisions requires intervening in your automatic cognitive processes followed by at least a few seconds of hard conscious thought.
  2. However, people hardly apply conscious thought to the vast majority of choices (whether they know about cognitive biases or not).
  3. Therefore, real-life situations where knowing overcoming bias rationality will make or has made a visible difference are hard to recognize.
  4. 2 and 3 lead to the first problem of awareness: It’s hard to know when you’ve “done rationality well.”
  5. Therefore, it’s difficult to test the results of rationality training programs.
  6. Moreover, people lack the sense that rationality is something that should be systematized and trained and tested like a martial art.
  7. 4, 5 and 6 together explain why people hardly receive systematic training in rationality.
  8. As it happens, a general rational outlook is limited in its effectiveness by poor application and akrasia, which will persist unless you get a lot of training and make a tremendous systematic effort. Knowing about cognitive biases isn’t even half the battle.
  9. Combine 7 and 8, and you see why most people don’t succeed in structuring systems in a way that counteracts biases, which is why poor application and akrasia persist, which is why rationality doesn’t seem to make a difference, which is why people don’t get the sense of it as something that should be systematized and trained and tested like a martial art, which is why people don’t train it, which is why poor application and akrasia persist, which is why it doesn’t make a visible difference, and the circle goes on and on…

The limits of epistemic rationality

I’ve gotten countless clarity-of-mind benefits from [practicing overcoming-bias] rationality, but practical benefits? Aside from some peripheral disciplines, I can’t think of any. —Scott Alexander (psychiatrist; writer at Slate Star Codex and prominent LessWrong member)

If you are good at knowing what is true, then you can be good at knowing what is true about the best thing to do in a certain situation, which means you can be more successful than other people.

I understand the logic. It makes sense. And I want the link to be there. I want it desperately.

I can just point out that it doesn’t resemble reality. Donald Trump continues to be more successful than every cognitive scientist and psychologist in the world combined, and this sort of thing happens so consistently that it looks like a pattern.

My heart heavily protests, but the “epistemic rationality” skills of figuring out what is true seem not to significantly contribute to the skill of succeeding in the world.

“Rationality is a huge project,” entrepreneur and rationality adept David Siegel once told me. Yet, we don’t see overcoming-bias skills as ones to be trained and practiced like a sport, so we don’t, and theoretical knowledge of them scarcely makes a practical difference.

At least, not yet.

Armed with this knowledge, I’ll now make good on my promise to break down why biases are extremely important and give you my five-step method for updating your thinking skills in light of this.


How to be rational about rationality

Before we move on to the good part, a quick recap.

As we’ve seen, I’ve come to the conclusion that in order to be rational about rationality, it’s not a good idea to develop an always-on relationship with your biases, trying to spot and fight every instance of them.

As Farnam Street (Shane Parrish) wrote, biases are great at explaining why you messed up, not at preventing you from messing up.

Warren Buffett’s associate Charlie Munger, another rationalist demigod, confessed that knowing about biases helped him about 2% of the time.

Even in these 2% of cases, psychologist Richard Nisbett points out, we usually fail to be aware of a process that went on in our heads and also fail to retrieve that process when asked directly about it. Hence we recognize neither the bias nor the situation that prompted it.

Unless you deliberately correct for them with a non-brain system, a different book altogether, flawed heuristics are inescapable.

It may be hard to bear that most of daily life is driven by automatic, non-conscious mental processes — but it appears impossible that conscious control could be up to the job.

Two percent is still an edge. And I’m all in favor of trying to be less wrong. But the opportunity costs strike me as too high on this one.

I have a better idea.


Hey, remember these? Let’s say God whispered in your ear that there is a right response to almost any question. What is it? Again: write down your answer.


Effective strategies for living with biases

Right now, you could say: “So overcoming-bias training isn’t worth the opportunity costs of the relatively big time-investment and the relatively elusive output. In that case, I’m not going to do anything about my flawed heuristics. Let’s forget all about them.”

I’d like to offer a different perspective.

The change in thought that we should try to acquire is not some high-effort, time-consuming, debiasing rulebook update. Precisely because you can hardly debias your intuitive judgments and habitual inferences, you should go about problem-solving without relying on them.

Because here’s the flipside of the inefficacy of overcoming bias rationality: the difficulty of countering biases implies that everyone takes cognitive shortcuts all the time.

This implies that you should radically overhaul your way of forming judgments.

It’s best to accept we are permanently biased and exploit the room for improvement this insight into the human condition offers, rather than trying to avoid cognitive algorithms that aren’t maximally truth-conducive on a case-by-case basis.

I want to end this guide by outlining how you can do exactly that.

The right answer to almost any question

According to the Pareto Principle, in many domains, 80% of the benefits come from 20% of the effort. When it comes to optimizing your decision-making in light of cognitive biases, I propose the 80/20 is this:

  • 80% of the benefits come from thoroughly internalizing the fact that heuristics permeate human cognitive processes, including your own, and taking action correspondingly;
  • As a result, we should focus our 20% on that meta-point rather than updating the brain’s heuristics-rulebook page by page (which is a lot of work anyway).

Rather than fighting biases directly, you should come to grips with the fact that your intuitive, automatic cognition is not to be trusted. Don’t fight your biases. Go around them. Stop using automatic judgments and believing in what ‘feels intuitive’ as much as possible.

Make your belief- and judgment-forming processes explicitly conscious and systematic.

Here’s how.


The five-step method for improving your thinking

1. Be realistic about your process — it’s probably biased

Begin by developing a healthy distrust of your gut feeling. If you’re going to pull an 80/20 on cognitive biases, you must realize their existence dictates humility.

Think back to the recall exercise from the very beginning: what happens in your mind when you solve a problem? Why do you believe what you believe? By what procedure did you arrive at that conclusion?

The one method we use far too often is to look into our limited store of experiential data, pull out a few anecdotes or clips from CNN, and declare that we have a pretty good way to answer the question. After all, this is what they do at the Harvard Business School, and we know it doesn’t work. — David Siegel

This is a bias-sensitive and error-prone procedure. If there’s one thing you should take away from this article it’s this:

The belief-forming method most people (including you) most often use is unreliable.

As a result, you should lower your faith in its deliverances. The low level of conscious, rational decision-making involved and the existence of automatic biases mean its outputs can’t be trusted.

2. When you’re confronted with a question, assume you don’t know

Many of us feel compelled to say, to intervene, to give the answer. Any answer. We are so rich in answers and so poor in questions.

Train yourself to resist this urge. Realize these knee-jerk responses are probably the result of untrustworthy cognitive processes.

Your automatic processes can’t be trusted.

Our understanding of the world is shaped by tribalism, the media is often biased, and most people have an incredibly skewed view of the world. You can’t assume you know the answer. You have to start looking into the actual data instead.

There is an old saying in Taoism: “To know that one does not know is best. To not know, but to believe that one knows, is a disease.”

As David Siegel recently argued,

“I don’t know” is the right answer to almost any question.

Learning to admit you don’t know is the price of intellectual freedom, but it’s a cost so high that most refuse to pay it.

“I don’t know.” Say it out loud. Do it now.

3. Make a commitment to the truth

But don’t leave it at that. If you always just say “I don’t know”, there’s no inquiry. No progress.

We don’t want to behave like Socrates. The whole point of rationality is to increase the accuracy of your beliefs, not to end up knowing squat regardless. We should hunt for truth despite and in full knowledge of our biases — not let them paralyze us.

So, as author Leandro Herrero proposes, go with: “I don’t know. And I suspect you don’t know either. Shall we find out?”

4. Take the Outside View

In doing so, the right approach is to take what rationalists call the Outside View.

The Outside View is when you notice that you feel like you’re right, but most people in the same situation as you are biased and wrong. So ‘from the outside’, discounting how your conviction feels ‘from the inside’, you should conclude that you, too, are likely mistaken. Two examples to demonstrate, followed by a small arithmetic sketch:

  • I feel like I’m an above-average driver. But I know there are surveys saying everyone believes they’re above-average drivers. Since most people who believe they’re an above-average driver are wrong, I reject my intuitive feelings and assume I’m probably just an average driver.
  • In the words of Scott Alexander, the previously quoted LessWrong authority and SlateStarCodex writer: “A Christian might think to themselves: “Only about 30% of people are Christian; the other 70% have some other religion which they believe as fervently as I believe mine. And no religion has more than 30% of people in the world. So of everyone who believes their religion as fervently as I do, at least 70% are wrong. Even though the truth of the Bible seems compelling to me, the truth of the Koran seems equally compelling to Muslims, the truth of dianetics equally compelling to Scientologists, et cetera. So probably I am overconfident in my belief in Christianity and really I have no idea whether it’s true or not.””
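To make the arithmetic in these examples explicit, here is a small sketch (my own illustration; the shares are round numbers for the sake of the argument, not survey data) of the Outside View as a cap on how confident your inside-view feeling can honestly be:

```python
# A back-of-the-envelope Outside View calculation (illustrative numbers only).
# However right you feel, at most this share of equally confident people can be right.
def outside_view_cap(share_who_can_be_right: float) -> str:
    return f"from the outside, P(I am right) is at most {share_who_can_be_right:.0%}"

# At most half of all drivers can truly be better than the median driver.
print("above-average driver:", outside_view_cap(0.50))

# No religion is held by more than ~30% of equally fervent believers.
print("my fervent belief:   ", outside_view_cap(0.30))
```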

If you’re serious about pulling an 80/20 on cognitive biases, then actually do the 20%, and generalize from the knowledge of others’ failures to an appreciation of your own. Suck up your pride and realize you are no exception.

Reject your intuitive feelings of rightness. They can’t be trusted.

5. Increase the level of conscious decisions involved

Biases rely on the automaticity of your thinking to infect your reasoning with faulty leaps. If you make your inferences explicit, their strength fades.

A cool trick for increasing the level of conscious decisions involved in your beliefs is to ask, “How would a Martian descending to earth answer any question?”

A Martian doesn’t have anecdotes from his friends or the news or a pool of experiential data. Hence he would have to start doing research, gathering data, asking questions, and trying to put a mental model together from scratch.

Being like our Martian doesn’t require you to catch biases out in the moment. That’s impossible, as we saw. Simply take a moment or two to ask “Why do I believe what I believe? What made me come to this conclusion?” and reverse-engineer your thinking steps. Then, check whether your chain of reasoning holds up or whether some of your inferences are biased, unsupported, or have steered your perception of reality away from actual reality in some other way.

For example, you may have the view that nuclear energy is a bad policy option. Ask yourself: How did I arrive at that opinion? Do I only think nuclear energy is less safe than coal energy because — well, come on, have you heard about Chernobyl? Those are exactly the kinds of gut feelings likely to be loaded with biases. If your reverse-engineering reveals that such an intuition, rather than actual data, is what lies behind your assessment, realize the representativeness heuristic has done a number on you. Such a knee-jerk response doesn’t merit the inference you’re making about the all-things-considered wisdom of nuclear energy compared to coal energy. Stop trusting your automatic judgments when it comes to these complex issues. Instead, take the Martian outside view.

A Martian doesn’t have gut feelings (let’s assume) and his Martian friends haven’t told him anything about earth. Say “I don’t know,” ignore your gut feelings, and start building a mental model from the ground.

Only rely on explicit, conscious steps of reasoning. This method works around your biases because instead of relying on intuitive, implicit judgments, you’re bringing your cognitive process into the open.

This works in the other direction too: making your train of thought explicit and becoming conscious of the mental steps you’ve taken to reach a particular conclusion trains your thought processes, over the long term, to better understand which heuristics and generators-of-heuristics are reliable in which situations.

The importance of asking why you actually came to believe the things you ended up believing

Will you continue to fall victim to biases, rely on gut feelings, limited data, and a few anecdotes, and go through life mistaking a jumble of falsehoods, misconceptions, and contradictions for an accurate mental model?

Or do you believe truth actually matters?

Despite its advantages — you’re more likely to have well-supported beliefs and to make better decisions accordingly — what I’m asking you to do is a demanding way of making up your mind. And saying “I don’t know” requires you to suck up your pride.

The question is: are you really ready to do that?

Our beliefs make up our perception of reality, they drive our behavior, and they shape our life stories. So treat them like an exclusive VIP club. Especially today, you might want to take extra care to make sure only real information gets in.

The alternative is standing on the sidelines as your ability to make good decisions shrinks smaller and smaller.

As the famous physicist and Nobel laureate Richard Feynman already knew:

“The first principle is that you must not fool yourself and you are the easiest person to fool.”

Thanks to Nick Wignall, David Siegel, and Joseph Lightfoot for helpful resources and conversations.

