Cognitive Bias #2: How Your Brain Became a Storytelling Machine – The Evolution of Making Sense

[Written by Claude. Cognitive Bias Codex from here. Image credit.]

Related Post: Cognitive Bias #1

Why we see patterns in clouds, believe in luck, and can’t stop connecting dots that aren’t there

Here’s a terrifying thought experiment: You’re walking through the forest at dusk. You hear a rustling in the bushes. You have two options:

Option A: Stop and carefully analyze the situation. Consider all possibilities: wind, small animal, large predator, falling branch, another human. Weigh the probabilities. Calculate the costs and benefits of each response. Gather more data.

Option B: Assume it’s something dangerous and run.

If you chose Option A, congratulations—you’re logical, rational, and probably eaten by a leopard.

If you chose Option B, welcome to the human race. Your ancestors, who also chose Option B, survived long enough to pass on their genes to you.

This simple scenario explains why your brain is less like a computer calculating probabilities and more like a novelist frantically writing stories to make sense of incomplete information. Evolution didn’t reward perfect accuracy. It rewarded fast meaning-making, even when that meaning was wrong.

The Fundamental Problem: A World Full of Gaps

Imagine trying to understand the world with the information available to our ancestors 100,000 years ago. You’d know:

  • What you directly experienced
  • What your tribe of 50-150 people told you
  • What you could observe in your immediate environment
  • Nothing else

No internet. No libraries. No scientific method. No global communication network. Just you, your limited experience, and a universe that desperately needed explaining.

Why did the sun rise? Why did people get sick? Why did some plants heal and others kill? Why did the hunting party succeed yesterday but fail today? Why did the baby die? Why did the rains fail?

The universe was full of gaps—enormous, terrifying gaps—and those gaps needed to be filled with something because understanding (even incorrect understanding) enabled action, and action meant survival.

Pattern Recognition: The Brain’s Killer App

Natural selection didn’t give us the ability to see the world as it truly is. It gave us the ability to see patterns that promoted survival, whether those patterns were real or not.

Consider our ancestors’ two biggest survival challenges:

1. Predicting where food would be
2. Avoiding becoming food

Both required recognizing patterns with incomplete information. The tribe that said “the last three times we found berries near the big rock, it was just after rain” had better nutrition than the tribe that said “we need more data to establish statistical significance.” The hunter who noticed “animals seem scarce when the moon is full” (even if it was coincidence) adjusted his behavior and caught more game than the hunter who dismissed it as superstition.

Were these patterns always real? No. Did believing in them sometimes help anyway? Absolutely.

The Cost-Benefit Analysis of Being Wrong

Here’s why evolution favored brains that hallucinate meaning:

False Positive (seeing a pattern that isn’t there): You think the rustling is a predator when it’s just wind. Cost: wasted energy running away. Benefit: you live.

False Negative (missing a pattern that is there): You think the rustling is wind when it’s a predator. Cost: you die. Benefit: none, because you’re dead.

This isn’t even close. The evolutionary cost of false negatives (missing real patterns) was so catastrophic that natural selection built brains that erred heavily toward false positives (seeing patterns that aren’t there).
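To see just how lopsided the trade-off is, here is a minimal back-of-the-envelope sketch in Python. The numbers are invented purely for illustration; the point is that when the cost of ignoring a real predator dwarfs the cost of a pointless sprint, fleeing wins even at very low predator probabilities.

```python
# Illustrative sketch: the payoffs are invented to show the asymmetry,
# not measured from any real environment.
p_predator = 0.05          # assume the rustle is a predator 1 time in 20
cost_flee = 1.0            # energy wasted by sprinting (paid even when it was only wind)
cost_caught = 1000.0       # cost of ignoring a real predator

expected_cost_flee = cost_flee                   # you always pay the sprint
expected_cost_ignore = p_predator * cost_caught  # you only pay when the threat was real

print(f"always flee:   expected cost {expected_cost_flee:.1f}")
print(f"always ignore: expected cost {expected_cost_ignore:.1f}")

# Fleeing is the better policy whenever p_predator > cost_flee / cost_caught,
# i.e. here whenever the chance of a predator exceeds about 0.1%.
```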

Your brain isn’t broken when it sees faces in clouds (pareidolia) or assumes the dice are “due” for a certain number (gambler’s fallacy). It’s doing exactly what it evolved to do: finding patterns aggressively, even in randomness, because the cost of missing a real pattern was death.

Story-Making: The Ultimate Survival Tool

But pattern recognition alone wasn’t enough. Humans needed something more powerful: narrative.

Stories did something magical for our ancestors: they turned disconnected observations into transmissible knowledge. “Stay away from the red berries” is an observation. “Og ate the red berries and died screaming while his skin turned black” is a story—visceral, memorable, shareable, and far more likely to keep you alive.

Why We Confabulate

Confabulation—unconsciously filling memory gaps with fabricated details—wasn’t a bug. It was essential infrastructure for social learning. When your grandmother told you about the winter the whole tribe nearly starved, she probably didn’t remember every detail. But her brain filled in the gaps with plausible details that made the story coherent and the lesson clear: “store extra food before winter.”

Were all the details true? Maybe not. Did the story preserve essential survival information? Absolutely. A culture that told vivid, coherent stories (even partly fabricated ones) transmitted knowledge better than a culture that said “grandmother’s memory is incomplete, so we can’t draw conclusions.”

The Clustering Illusion

The clustering illusion—seeing patterns in random data—helped our ancestors notice genuine statistical regularities without needing formal statistics. When hunters noticed that game seemed plentiful in certain areas at certain times, they were often picking up on real ecological patterns (seasonal migrations, water source locations, predator territories).

Yes, sometimes they saw patterns that were just noise. But over time, the patterns that worked led to food, and the patterns that didn’t eventually got abandoned. The important thing was to notice potential patterns quickly and test them through action, not to wait until you had statistically significant proof.

The problem today? We apply this same pattern-recognition to things like stock prices, sports statistics, and which parking space is “lucky”—domains where randomness actually dominates but our brains insist on finding meaning.
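If you want to feel how readily pure randomness produces “clusters,” here is a minimal simulation; the sequence length of 200 flips and the fixed seed are arbitrary choices made only so the example is reproducible.

```python
import random

random.seed(1)  # fixed seed so the example is reproducible

def longest_run(flips):
    """Length of the longest streak of identical consecutive outcomes."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

one_sequence = [random.choice("HT") for _ in range(200)]
print("longest streak in 200 fair flips:", longest_run(one_sequence))

# Averaged over many sequences, the longest streak in 200 fair flips is
# around 7-8 -- far longer than the 3-4 most people expect from "randomness".
trials = [longest_run([random.choice("HT") for _ in range(200)]) for _ in range(1000)]
print("average longest streak over 1,000 sequences:", round(sum(trials) / len(trials), 1))
```

A streak of seven heads in a row feels like a pattern demanding explanation, but it is exactly what fair coins do.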

Stereotypes: Dangerous but Functional

This is uncomfortable to discuss, but it’s crucial to understanding why stereotyping exists: in a world of limited information and high stakes, category-based thinking was often the only thinking available.

When your ancestor encountered a stranger, they had seconds to decide: friend or foe? They couldn’t run a background check. They couldn’t interview references. They had exactly two pieces of information: what tribe the stranger appeared to be from, and how previous encounters with that tribe had gone.

Using group membership to predict individual behavior—the essence of stereotyping—was often wrong. But it was less wrong than having no prediction at all. The cost of treating an enemy as a friend was death. The cost of treating a potential friend as an enemy was missed cooperation—bad, but survivable.

Modern humans inherited this same machinery. Our brains automatically categorize people into groups and make predictions based on group membership. The difference is that our modern categories (race, nationality, religion, profession) are far more numerous and arbitrary than the ancestral categories of tribe and kin, and our decisions based on them have ethical implications our ancestors never faced.

The group attribution error and ultimate attribution error exist because our brains are trying to solve an ancestral problem: “How do I predict behavior when I don’t know the individual?” The tragedy is that we’re running Stone Age software on Information Age problems.

Filling in the Blanks: When Incomplete is All You Get

Evolution faced a dilemma: The world is complex, information is limited, but decisions still need to be made. The solution? Fill in missing information with plausible assumptions.

The Authority Bias

Authority bias—trusting the opinions of authority figures—made perfect sense when authority usually meant experience. The tribal elder who’d survived 60 years knew things you didn’t. The successful hunter who’d fed the tribe for decades had genuine expertise. Deferring to authority was often deferring to accumulated wisdom.

Today, we apply the same deference to people in lab coats selling supplements, celebrities endorsing political candidates, and CEOs making claims outside their domain—because our brains can’t easily distinguish “person with relevant expertise” from “person with authority markers.”

The Just-World Hypothesis

The just-world hypothesis—believing people get what they deserve—seems particularly irrational. But consider its evolutionary function: In small tribal groups where reputation mattered and you’d interact with the same people for life, there was often a genuine correlation between behavior and outcomes.

The lazy hunter who didn’t pull his weight did suffer social consequences. The person who betrayed trust did get excluded. The generous person did receive reciprocal help. The just-world hypothesis reinforced prosocial behavior by making people believe their actions had consequences.

The modern problem? We live in a world where the connection between behavior and outcomes is far more random—lottery winners, inherited wealth, systemic injustice—but our brains still search for the moral narrative that explains why things happened.

Stereotypes as Schemas

Essentialism—believing categories have underlying essences—helped our ancestors organize knowledge efficiently. If you categorized “things that taste sweet” together, you could predict that a new sweet thing was probably safe to eat. If you categorized “animals with sharp teeth” together, you could predict that a new sharp-toothed animal was probably dangerous.

These mental shortcuts—schemas, stereotypes, essences—allowed for knowledge transfer. Once you learned something about one member of a category, you could apply it to other members without starting from scratch each time. This was computationally efficient and often accurate enough for survival.

The problem emerges when we apply this to human categories that are socially constructed rather than naturally occurring, treating race, gender, or nationality as though they have essential properties that determine individual traits.

Projection: Your Brain’s Time Machine

One of evolution’s cleverest tricks was giving us the ability to imagine the future. But this created a new problem: How do you imagine something you’ve never experienced?

The solution: assume the future will feel like the present.

The Projection Bias

Projection bias—assuming future preferences match current ones—seems like a flaw. But it was actually a reasonable heuristic when environments were stable. If you liked fatty foods yesterday (because calories were scarce), you’d probably like them tomorrow. If the watering hole was safe last week, it would probably be safe next week.

The problem today? Our environments change rapidly. You shop while hungry and buy food for a future you that won’t be hungry. You make decisions about retirement assuming you’ll feel the same way in 40 years. Your brain is using an ancestral algorithm in a modern context.

The Planning Fallacy

The planning fallacy—underestimating how long tasks take—exists because optimism about the future promoted action. Our ancestors who thought “hunting will probably go well today” went hunting. Those who accurately calculated the low probability of success, high energy cost, and significant danger might have stayed in camp—and starved.

Moderate over-optimism about future outcomes and timelines kept people trying, persisting, and ultimately succeeding more often than perfect accuracy would have.

Hindsight Bias

Hindsight bias—believing we predicted outcomes after the fact—actually served memory consolidation. Once you knew that certain berries were poisonous because someone died, your brain rewrote the memory as “I always knew those berries looked suspicious.” This strengthened the lesson and made you more confident in avoiding them next time.

Was it accurate? No. Did it improve future decision-making by creating false confidence in your pattern-recognition abilities? Yes.

The Halo Effect: Efficient but Flawed

The halo effect—letting one positive trait influence overall judgment—was a time-saving shortcut in a world where extensive evaluation was impossible.

In ancestral environments, there were genuine correlations between traits. A physically healthy person probably had access to good food (wealth or status), probably had good genes, probably came from a successful family. Beauty genuinely correlated with health. Size genuinely correlated with strength. These weren’t perfect correlations, but they were real enough that using one observable trait to infer others was better than random guessing.

The modern problem? We meet thousands of people briefly. We encounter photos of people we’ll never meet. We make snap judgments based on traits (attractiveness, height, voice quality) that are increasingly disconnected from the traits we actually care about (competence, trustworthiness, intelligence).

Mental Accounting: Money Wasn’t Fungible

Mental accounting—treating money differently based on its source—seems economically irrational. A dollar is a dollar, right?

But in ancestral environments, resources often weren’t fungible. Meat couldn’t be saved for next month. Berries had to be eaten soon. A good tool was valuable but couldn’t be converted to food. Different resources required different strategies.

Moreover, the source of resources carried information. Resources you hunted yourself were reliable; you could get more through the same effort. Resources gained through luck (finding a dead animal) weren’t reproducible. This justified treating them differently—being careful with what you could reliably obtain, less careful with windfalls.

Today, money is fungible, but our brains still treat “salary” (earned, stable, precious) differently from “bonus” (windfall, spend freely) even though they’re literally the same thing.

The Survivorship Bias: Learning from Success

Survivorship bias—focusing on successes while ignoring failures—wasn’t just a bias. It was a learning strategy.

Our ancestors couldn’t run controlled experiments. They learned primarily through observation and imitation. When they looked around and asked “what leads to success?” they could only observe the survivors—those who succeeded. The failures were dead, exiled, or irrelevant.

“Watch what successful hunters do and copy them” was excellent advice, even though it systematically ignored all the hunters who tried the same techniques and failed (often for random reasons). The strategy of “imitate success” worked well enough that the occasional false lessons were outweighed by the genuine insights gained.

The modern problem? We read biographies of billionaires and conclude that dropping out of college leads to success, not seeing the thousands who dropped out and failed. We study successful companies and extract lessons, ignoring the identical companies that followed the same strategies and collapsed.
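A minimal simulation makes the trap concrete. The setup is entirely invented: 1,000 “founders” all follow an identical strategy whose outcomes are pure coin flips, and we then study only the most successful one. That track record looks like skill even though, by construction, none exists.

```python
import random

random.seed(7)  # fixed seed so the example is reproducible

N_FOUNDERS, N_VENTURES = 1000, 10
P_SUCCESS = 0.5  # every venture is a pure coin flip; by construction there is no skill

# Each founder's track record: successful ventures out of N_VENTURES
records = [sum(random.random() < P_SUCCESS for _ in range(N_VENTURES))
           for _ in range(N_FOUNDERS)]

print(f"average founder: {sum(records) / N_FOUNDERS:.1f} of {N_VENTURES} ventures succeed")
print(f"'star' founder:  {max(records)} of {N_VENTURES} ventures succeed")

# Studying only the star -- the survivor -- and copying their habits extracts
# "lessons" from what is, by construction, nothing but luck.
```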

The Curse of Knowledge: Teaching is Hard

The curse of knowledge—inability to imagine not knowing what you know—actually protected valuable knowledge.

When you learned something important (which plants were poisonous, where predators hunted, how to make fire), you needed to remember it permanently. The brain couldn’t afford to treat survival-critical knowledge as optional or uncertain. Once integrated, knowledge became automatic—which made it hard to remember ever not knowing it.

This had a side benefit: it made experts confident. A confident teacher who can’t remember ever struggling with the basics is more convincing than an uncertain one who hedges every statement. In a world where knowledge transfer happened verbally with no written backup, confidence in teaching promoted learning, even at the cost of empathy for beginners.

Living with a Meaning-Making Machine

Your brain is essentially a prediction engine wrapped in a storytelling device, sculpted by millions of years of evolution to find patterns, fill gaps, and create narratives that promote survival and reproduction.

The biases in the “Not Enough Meaning” category aren’t flaws. They’re features of a system optimized for speed over accuracy, action over contemplation, and coherent stories over messy truth.

For millions of years, this worked brilliantly. A brain that saw patterns in clouds, assumed people could be judged by their groups, filled in memory gaps with plausible fictions, and projected current feelings onto the future made better decisions than a brain that waited for perfect information.

The challenge isn’t that we have these biases. The challenge is that the environment has changed faster than evolution can keep up.

The Modern Mismatch

Our ancestors needed to make sense of perhaps 100 important events in a lifetime—births, deaths, migrations, conflicts, seasons, weather patterns. Pattern recognition with limited data worked because:

  • Sample sizes were small: You could actually learn from most relevant events
  • Environments were stable: Past patterns genuinely predicted future outcomes
  • Information was honest: Direct experience doesn’t lie (much)
  • Stakes were clear: Food, safety, reproduction, survival

Today, we’re drowning in information that our meaning-making machinery can’t handle:

  • Sample sizes are huge: Thousands of events daily, most irrelevant
  • Environments are chaotic: Past patterns often don’t predict future outcomes
  • Information is manipulated: Advertising, propaganda, deliberate deception
  • Stakes are unclear: Most information doesn’t affect our survival

Your brain is still looking for the leopard in the bushes, but now the “bushes” are social media feeds, and the “leopards” are political ideologies, consumer choices, and cryptocurrency investments.

Working with Your Storytelling Brain

Understanding the evolutionary origins of these biases doesn’t eliminate them, but it suggests strategies:

1. Recognize that your brain is optimized for story, not truth. When you instantly “understand” why something happened, that’s your brain doing its job—making quick sense of incomplete information. But quick sense isn’t the same as accurate sense.

2. Question your confidence in patterns. Your brain was built to see patterns even in randomness. Before trusting a pattern, ask: “How much data do I have? Could this be coincidence? What would I need to see to disprove this pattern?”

3. Seek disconfirming information actively. Your brain wants to fill gaps with information that fits existing stories. Force yourself to ask: “What evidence would prove me wrong? Have I looked for it?”

4. Remember that categories aren’t essences. When you catch yourself thinking “people like X are…” or “members of Y group always…”, you’re using Stone Age categorization software. Individuals vary far more than your brain’s shortcuts suggest.

5. Distrust hindsight. When you think “I knew that would happen,” you probably didn’t. Your brain is rewriting history to boost your confidence. This feels good but prevents learning.

6. Separate decision quality from outcomes. Just because something worked doesn’t mean it was a good decision. Just because something failed doesn’t mean it was a bad decision. Luck matters. Survivorship bias matters.

The Gift of Meaning-Making

Here’s the paradox: The same machinery that creates false patterns and misleading stories also enables humanity’s greatest achievements.

The ability to see patterns in sparse data led to agriculture—noticing that seeds became plants. The drive to find meaning in randomness led to science—the systematic search for real patterns. The compulsion to create coherent narratives gave us literature, history, and culture. The tendency to anthropomorphize led to art, religion, and social cooperation.

Your brain’s aggressive meaning-making isn’t a flaw to be eliminated. It’s the engine of human creativity, understanding, and progress. The key is learning to question the stories your brain automatically generates while appreciating the extraordinary gift of being able to generate them at all.

You are, after all, a pattern-recognizing, story-creating, meaning-making machine—built by evolution, refined by culture, and capable of understanding your own biases.

That alone is a remarkable story. And unlike most of the stories your brain tells you, this one happens to be true.


The next time you find yourself absolutely certain you understand why something happened, pause. Your brain is doing what it does best—finding patterns and creating meaning from incomplete information. It’s been doing this for millions of years, and it’s kept our species alive. But certainty is often just your brain’s way of saying “I’ve made up a plausible story.” The question isn’t whether you’ll make up stories—you will, you can’t help it—but whether you’ll question them.

2. NOT ENOUGH MEANING

We tend to find stories and patterns even when looking at sparse data:

  • Confabulation – We unconsciously fill in gaps in our memory with fabricated information that we believe to be true. After a night of drinking, you might vividly “remember” details of how you got home, creating a coherent story from fragments of actual memory.
  • Clustering illusion – We see patterns in random data where none actually exist. Looking at stars in the night sky, we see constellations and shapes, even though the stars are randomly distributed in three-dimensional space.
  • Insensitivity to sample size – We fail to account for the fact that small samples are more likely to show extreme results by chance. A school with only 50 students might have 100% college acceptance one year, but this tells you less than a school with 500 students and 95% acceptance (see the short simulation after this list).
  • Neglect of probability – We ignore probability and base rate information when making judgments, focusing instead on vivid possibilities. Parents fear kidnapping by strangers despite it being extremely rare, while underestimating risks like car accidents that are far more probable.
  • Anecdotal fallacy – We use personal experience or isolated examples to make broad generalizations while ignoring statistical evidence. Someone might insist smoking isn’t dangerous because their grandfather smoked until 95, ignoring the statistical reality that smoking kills millions.
  • Illusion of validity – We have excessive confidence in the accuracy of our judgments, especially when patterns seem consistent. An interviewer might believe they can predict job performance from a 30-minute conversation, despite research showing interviews have limited predictive validity.
  • Masked man fallacy – We assume that if two descriptions refer to the same thing, they’re interchangeable in all contexts. You might know that Mark Twain wrote great novels but not know that Samuel Clemens did, even though they’re the same person.
  • Recency illusion – We believe that a phenomenon we recently noticed is actually a new phenomenon. Older people often complain that “kids today” are disrespectful, not realizing that every generation has said this about the next.
  • Gambler’s fallacy – We believe that past random events affect future random events. After flipping heads five times in a row, people feel that tails is “due,” even though each flip remains a 50/50 chance.
  • Hot-hand fallacy – We believe that a person who experiences success has a greater chance of further success in additional attempts. Basketball fans believe a player who’s made several shots is “hot” and more likely to make the next one, even though statistical analysis shows shots remain independent.
  • Illusory correlation – We perceive a relationship between variables even when no such relationship exists. People might believe full moons cause strange behavior because the memorable odd events that happened during full moons stand out, while we ignore all the normal full moon nights.
  • Pareidolia – We perceive meaningful images or patterns in random or ambiguous stimuli. People see faces in clouds, Jesus in toast, or the man in the moon, even though these are random patterns our brains interpret as familiar forms.
  • Anthropomorphism – We attribute human characteristics, emotions, and intentions to non-human entities. Pet owners interpret their dog’s guilty expression as understanding of wrongdoing, when the dog is actually responding to the owner’s angry tone.
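Here is the short simulation promised under “Insensitivity to sample size” above, a minimal sketch with invented numbers: every school draws from the same underlying 90% acceptance rate, yet the small schools produce nearly all of the extreme results.

```python
import random

random.seed(3)  # fixed seed so the example is reproducible

P_ACCEPT = 0.90  # assume every student everywhere has the same 90% acceptance chance

def observed_rate(n_students):
    """Acceptance rate actually observed at one school of the given size."""
    accepted = sum(random.random() < P_ACCEPT for _ in range(n_students))
    return accepted / n_students

small = [observed_rate(50) for _ in range(500)]    # 500 schools of 50 students
large = [observed_rate(500) for _ in range(500)]   # 500 schools of 500 students

print(f"schools of 50:  worst {min(small):.0%}, best {max(small):.0%}")
print(f"schools of 500: worst {min(large):.0%}, best {max(large):.0%}")

# The true rate is identical everywhere, yet the "best" and "worst" schools are
# almost always the small ones: small samples swing to extremes by chance alone.
```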

We fill in characteristics from stereotypes, generalities, and prior histories:

  • Group attribution error – We assume the characteristics of individual group members reflect the group as a whole, and vice versa. Meeting one rude French person might lead you to conclude that French people are rude, or hearing that Germans are punctual leads you to expect every German to be on time.
  • Ultimate attribution error – We attribute negative behavior by out-group members to their character while attributing negative behavior by in-group members to circumstances. When someone from your political party commits fraud, it’s “one bad apple” or they were “under pressure,” but when the opposing party does it, it proves they’re all corrupt.
  • Stereotyping – We expect members of a group to have certain characteristics without knowing anything about the individual. Assuming an elderly person is bad with technology, or that an Asian student must be good at math, based solely on group membership.
  • Essentialism – We believe that categories have an underlying essence that makes them what they are. People often believe that biological sex determines personality traits, or that members of different races have fundamentally different natures, rather than seeing these as social constructs.
  • Functional fixedness – We see objects only in terms of their traditional use, preventing creative problem-solving. When you need a paperweight but can’t think to use your phone because you only see phones as communication devices.
  • Moral credential effect – We feel licensed to behave immorally after establishing moral credentials. After donating to charity, people feel justified in being less generous later, as if they’ve “banked” their goodness.
  • Just-world hypothesis – We believe the world is fundamentally fair and people get what they deserve. When someone loses their job, we assume they must have been a bad employee rather than acknowledging that layoffs can be arbitrary or unfair.
  • Argument from fallacy – We assume that if an argument for a conclusion is flawed, the conclusion itself must be false. Someone argues for climate change using bad statistics, so you conclude climate change isn’t real, even though the conclusion could still be true despite the flawed argument.
  • Authority bias – We attribute greater accuracy to the opinions of authority figures and are more influenced by them. Doctors’ recommendations for treatments are trusted even in areas outside their expertise, like when a physician endorses a financial product.
  • Automation bias – We favor suggestions from automated systems, often ignoring contradictory information from non-automated sources. GPS tells you to turn onto a closed road, and you do it anyway, trusting the machine over your own eyes seeing the “Road Closed” sign.
  • Bandwagon effect – We believe or do things because many other people believe or do them. Restaurant lines form because people assume a crowded restaurant must be good, while empty restaurants remain empty because passersby assume they’re not worth trying.
  • Placebo effect – We experience real physiological effects from fake treatments because we believe the treatment will work. Sugar pills labeled as painkillers can actually reduce pain, demonstrating the power of expectation on physical experience.

We imagine things and people we’re familiar with or fond of as better:

  • Out-group homogeneity bias – We see members of our own group as diverse and unique while viewing out-group members as all the same. You recognize the distinct personalities of your coworkers but think employees at a competitor company all seem identical.
  • Cross-race effect – We more easily recognize and distinguish faces from our own race than from other races. People often say members of other races “all look alike” while having no trouble distinguishing members of their own race.
  • In-group bias – We give preferential treatment to members of our own group. Hiring managers unconsciously favor candidates who attended the same university, share the same hometown, or have similar hobbies.
  • Halo effect – We let one positive trait influence our overall impression of a person or thing. An attractive person is assumed to also be intelligent, kind, and competent, even though physical appearance doesn’t predict these traits.
  • Cheerleader effect – Individuals appear more attractive in a group than when seen alone. People look more appealing in group photos because the brain averages faces together, smoothing out individual flaws.
  • Well-traveled road effect – We estimate distances to familiar locations as shorter than equivalent distances to unfamiliar ones. The drive to a place you’ve been many times seems quick, while the equally long drive to somewhere new feels longer.
  • Not invented here – We avoid using products, research, or knowledge developed outside our group. Engineers reject perfectly good solutions from external sources, preferring to rebuild from scratch because “our situation is different.”
  • Reactive devaluation – We devalue proposals or concessions that come from an adversary, even if objectively beneficial. A peace offering from an enemy nation is viewed with suspicion, while the same offer from an ally would be welcomed.
  • Positivity effect – We remember positive information better than negative information, especially as we age. Older adults tend to recall their youth as a “golden age,” filtering out the difficulties and remembering mostly positive experiences.

We simplify probabilities and numbers to make them easier to think about:

  • Mental accounting – We treat money differently depending on its source or intended use, even though money is fungible. You might carefully save your salary while freely spending a tax refund or casino winnings, even though it’s all just money.
  • Normalcy bias – We underestimate the likelihood and impact of disasters, assuming things will continue to function normally. People fail to evacuate before hurricanes because they assume it won’t be “that bad” since it’s never been that bad before.
  • Appeal to probability fallacy – We assume that because something is possible, it’s bound to happen eventually. Worrying that a meteorite will hit your house because it’s technically possible, even though the probability is infinitesimally small.
  • Murphy’s law – We believe “anything that can go wrong, will go wrong” because we remember and focus on the times things went wrong. You notice when you choose the slow checkout line but forget the many times you chose the fast one.
  • Zero sum bias – We incorrectly believe that one person’s gain must come at another’s loss. Opposing immigration because you think immigrants “take” jobs, not recognizing that economic growth can create new jobs for everyone.
  • Survivorship bias – We focus on successes while ignoring failures, leading to false conclusions. We study successful entrepreneurs who dropped out of college, forgetting the thousands of dropouts who failed, and conclude that dropping out causes success.
  • Subadditivity effect – We judge the probability of the whole to be less than the sum of its parts. People estimate the probability of dying from any cause as lower than the sum of dying from cancer, heart disease, accidents, etc.
  • Denomination effect – We’re less likely to spend money when it’s in large denominations versus small ones. You might hesitate to break a $100 bill for a $5 purchase but freely spend five $1 bills, even though the value is the same.
  • Magic number 7±2 – We can only hold about 7 (plus or minus 2) items in working memory at once. Phone numbers are typically 7 digits because that’s about the limit of what we can easily remember without chunking.

We think we know what other people are thinking:

  • Illusion of transparency – We overestimate how well others can discern our emotional state. After giving a nervous presentation, you’re convinced everyone noticed your shaking hands, when most people saw nothing unusual.
  • Curse of knowledge – Once we know something, we can’t imagine not knowing it, making it hard to teach others. An expert struggles to explain basics to a novice because they can’t remember what it’s like to not understand the fundamentals.
  • Spotlight effect – We overestimate how much others notice our appearance and behavior. You obsess over a small stain on your shirt, convinced everyone sees it, when most people don’t notice at all.
  • Extrinsic incentive error – We assume others are more motivated by external rewards than internal satisfaction, while viewing ourselves as intrinsically motivated. You think coworkers only work hard for bonuses while you work hard because you care, even though they might feel the same way.
  • Illusion of external agency – We perceive our own sensory experiences as directly representing external reality without recognizing our brain’s interpretation. You believe you see “red” as it objectively exists, not recognizing that color is a construct of your visual system.
  • Illusion of asymmetric insight – We believe we understand others better than they understand us, and better than they understand themselves. You think you see your friend’s relationship problems clearly while they’re blind to them, but feel they couldn’t possibly understand your situation.

We project our current mindset and assumptions onto the past and future:

  • Self-consistency bias – We remember our past beliefs and attitudes as more similar to our current ones than they actually were. When you change political views, you misremember having always felt somewhat that way, rather than acknowledging a genuine shift.
  • Restraint bias – We overestimate our ability to control impulsive behavior in the future. Dieters keep tempting snacks in the house because they’re confident they’ll resist, then wonder why they ate them all.
  • Projection bias – We assume our future preferences will match our current ones. Shopping while hungry, you buy too much food because you can’t imagine ever not being hungry.
  • Pro-innovation bias – We overvalue the usefulness of new inventions while downplaying their limitations. Every new technology is hailed as revolutionary and world-changing, from Segways to Google Glass, before reality sets in.
  • Time-saving bias – We misjudge how much time is saved by increasing speed: speeding up from a low speed saves far more time per mile than speeding up from an already high speed, yet our intuition says the opposite. Going from 40 to 60 mph saves about 30 seconds per mile, while going from 80 to 100 mph saves only about 9 seconds per mile.
  • Planning fallacy – We underestimate how long tasks will take despite knowing that past tasks have also taken longer than expected. You estimate a home renovation will take 2 weeks, ignoring that your last three projects each ran months over schedule.
  • Pessimism bias – We overestimate the likelihood of negative outcomes, especially for events we feel we can’t control. Before a flight, you imagine crashes and disasters, even though flying is statistically very safe.
  • Impact bias – We overestimate the intensity and duration of our emotional reactions to future events. You think failing the exam will devastate you permanently, but you’ll likely feel better within days.
  • Declinism – We believe the past was better and things are getting worse, especially regarding morality and culture. Every generation thinks music, manners, and society were better “back in their day.”
  • Moral luck – We judge actions based on outcomes rather than intentions or decision quality at the time. A drunk driver who makes it home safely is judged less harshly than one who causes an accident, even though their decision-making was equally poor.
  • Outcome bias – We judge decisions by their outcomes rather than by the quality of the decision at the time it was made. An investor who got lucky on a risky bet is praised as brilliant, while a smart investor whose solid choice didn’t pan out is criticized.
  • Hindsight bias – We believe after an event that we predicted or “knew” it would happen all along. After a company fails, everyone claims they saw it coming, conveniently forgetting they thought it would succeed.
  • Rosy retrospection – We remember past events as better than we experienced them at the time. You reminisce about high school as carefree and fun, forgetting the stress, drama, and insecurity you actually felt.
  • Telescoping effect – We perceive recent events as more remote and remote events as more recent than they actually are. The pandemic that happened 4 years ago feels like it was just last year, while events from last year feel like ancient history.
