What It’s Like to Be a Bat — and Why This 1974 Paper Still Shapes the Consciousness Debate

[Written by ChatGPT]

Consciousness is one of those topics that won’t leave you alone once it grabs you. Every answer you find only spawns new questions, which is why this post is long again. After steeping myself in neuroscience, philosophy, and AI papers, I finally realized the root of my confusion: almost everyone is using the same word—“consciousness”—to mean two completely different things without noticing.

The deeper you go, the clearer it becomes that we still have no shared, settled understanding of what it actually feels like to be a mind—or even of which sense of the word we’re arguing about in the first place.


One of the first stops on the journey is Thomas Nagel’s What Is It Like to Be a Bat?

It’s short and accessible, and yet it detonated a debate that’s still active fifty years later. If you have trouble understanding Nagel, as I did at first, this post offers a detailed explanation.

Who Is Thomas Nagel?

Thomas Nagel (b. 1937) is a prominent American philosopher known for clear, incisive writing about the mind, ethics, and human experience. Trained in the analytic tradition at Cornell, Oxford, and Harvard, he spent most of his career at NYU.

Nagel’s work often pushes back against overly tidy scientific or philosophical “explanations” of complex human realities. He’s neither anti-science nor mystically inclined, but he has a persistent suspicion: that the subjective quality of experience — consciousness — doesn’t fit neatly into our current scientific framework.

What Is It Like to Be a Bat? became his most famous piece because it lays out this suspicion in a way that is both intuitive and philosophically devastating.

The Heart of the Paper: Subjectivity Cannot Be Reduced Away

Nagel’s central claim is strikingly simple:

Being conscious means there is “something it is like” to be you.

And this subjective “what-it’s-like-ness” cannot be captured by objective physical science.

To show this, he picks a creature close enough to us (a mammal) but alien enough in perception: a bat.

We know all sorts of objective facts about bats:

  • They echolocate.
  • They navigate with sonar.
  • Their brains perform high-frequency acoustic computation.

But — and this is Nagel’s punchline — no amount of physical or behavioral knowledge can tell us what echolocation feels like from the inside.

We can imagine using sonar, but only as humans imagining ourselves in that situation. We cannot imagine the bat’s own subjective perspective.

Why does that matter?

Because many scientific or philosophical theories try to “reduce” consciousness to physical processes. Nagel says: not so fast. You can explain the mechanics all you want, but the subjective character is something extra — something not captured by objective third-person description.

This argument re-ignited the mind-body problem in a major way.

Why the Paper Sparked Debate

Nagel’s essay arrived in the early 1970s, when behaviorism was waning and physicalist theories of the mind were ascendant. Many philosophers believed the mind could be straightforwardly explained in terms of brain states, functional states, or computational processes.

Nagel’s critique struck at the heart of these projects:

  • Physicalist reductionism leaves out the first-person perspective. You can map every neuron, but that won’t tell you what the subject feels.
  • Subjectivity is real and irreducible. If consciousness depends on subjective experience, ignoring that fact won’t help us explain it.
  • This creates an explanatory gap. Even a perfect physical brain model wouldn’t automatically reveal how experience arises.

These ideas later inspired:

  • David Chalmers’ “Hard Problem of Consciousness”
  • Debates over qualia (the raw feels of experience)
  • Work on theory of mind, phenomenology, and cognitive science
  • Contemporary discussions of AI consciousness

In other words, this little bat essay quietly became one of the founding documents of modern consciousness research.

How It Changed the Trajectory of Research

Nagel didn’t claim that consciousness is magical or non-physical — only that our scientific frameworks are not yet equipped to explain it. In this way, he motivated several decades of work exploring:

1. The limits of objective science

Researchers began asking whether third-person methods can ever fully capture first-person experience.

2. The nature of qualia

Philosophers debated whether raw feels are real, illusory, functional, or something else.

3. The explanatory gap and the hard problem

Chalmers later formalized a question implicit in Nagel:

Why should physical processes give rise to subjective experience at all?

4. Cross-species consciousness

Nagel’s attention to animals encouraged scientific study of consciousness beyond humans.

5. AI and machine consciousness

If consciousness has a subjective character, what would it take for an artificial system to have one?

For such a short paper, the impact is astonishing.

Why This Paper Matters If You’re Thinking Deeply About Consciousness

If you’re wrestling with questions like:

  • What is the essence of my experience?
  • How does subjective feeling arise?
  • Can consciousness be measured?
  • Could machines or animals be conscious in a way fundamentally different from us?
  • Is science enough to explain the mind?

…then Nagel’s essay is practically required reading.

It’s not a theory-heavy or technical argument. Its power comes from showing, with devastating simplicity, that our own subjectivity is the core phenomenon to be explained — and the one least accessible to standard methods.

For many people on a “consciousness journey,” Nagel’s bat becomes a kind of philosophical checkpoint: a moment of realizing that understanding the mind may require new tools, new perspectives, or even new scientific paradigms.

Final Reflection

Nagel reminds us that consciousness is not merely a puzzle of mechanisms or computation. It is a mystery of perspective — of what it feels like to be a subject at all.

His insight doesn’t close the question.

It opens it.

Whether you’re exploring neuroscience, philosophy, AI, meditation, or just the nature of your own mind, What Is It Like to Be a Bat? is a reminder that some of the most important aspects of reality aren’t visible from the outside. They are lived from within.

And any attempt to understand consciousness that forgets this will always be missing the heart of the phenomenon.


[Written by Claude]

The Hard Problem: What Consciousness Actually Is (And Isn’t)

We’ve been asking the wrong questions about consciousness for decades. We keep treating it as a single thing when it’s almost certainly a bundle of different phenomena we’ve mistaken for one.

What Consciousness Is Not

Let’s clear the table.

Self-narrative.
The inner storyteller—your running commentary, plans, rumination—operates whether you attend to it or not. You can be conscious of this narrative, but the narrative isn’t consciousness. People with impaired narrative systems remain fully conscious.

Emotion.
You can be conscious of fear, but fear itself is a set of physiological and cognitive responses. The racing heart, the threat evaluation, the impulse to flee—all can run unconsciously. Consciousness simply illuminates them.

Self-identity.
The steady sense of being “you” over time is a construction, a useful fiction. People with dissociation, amnesia, or depersonalization can lose this sense entirely and still remain conscious. The “I” is not consciousness; it’s something consciousness observes.

The ego or sense of agency.
Related but distinct, this model of being a unified doer can dissolve in meditation, psychedelics, or neurological conditions while consciousness continues. The agent-self is optional.

Sensory circuits.
Your brain’s visual and auditory systems transform input into patterns—but so do cameras and microphones. These circuits are not consciousness; they’re what consciousness can access.

Pain and pleasure circuits.
Valence systems label experiences as good or bad, approach or avoid. Drugs can modulate these systems without altering the presence or absence of consciousness itself—only the content.

The Levels Problem

“Consciousness” is a single word for what might be multiple stacked layers. When we argue about what consciousness “is,” we may actually be talking past one another about different levels.

Level 1: Raw Phenomenality

The most basic and mysterious layer: the mere fact that there is a first-person viewpoint. Not any specific sensation—just the presence of experience at all. It is the difference between a system that processes information in darkness and one for which there is “something it is like.”

Whether this can exist without any content is debated.

Level 2: Phenomenal Qualities (Qualia)

The specific feels of experience: the redness of red, the painfulness of pain, the tone of middle C. These raw qualities aren’t captured by wavelengths or neural firings; they’re the “what-it-is-like” character.

This is experience before interpretation.

Level 3: Access Consciousness

Functionally defined: information becomes globally available for reasoning, language, planning, and memory. If you can think about it or report it, that’s access consciousness.

This is objectively measurable. The deep question: does access entail phenomenality, or can a system have full access without any inner experience—processing “in the dark”?

Level 4: Reflective Awareness

Being aware that you’re aware. The ability to take your own experiences as objects, to notice noticing. This is meta-consciousness—a higher-order layer requiring additional cognitive machinery.

It often drops out during flow states or immersive experiences.

Level 5: Self-Consciousness

The sense of being a unified subject, a persistent “I” who has the experiences. This is the brain’s self-model in action.

It can flicker or dissolve while other levels remain intact.

Level 6: Narrative Consciousness

The interpretive overlay: your inner monologue, your explanations for your actions, the story of who you are. This is highly constructed, heavily language-dependent, and often post-hoc.

Why These Distinctions Matter

These levels can come apart:

  • Blindsight: access without phenomenality—behavior without experience (apparently).
  • Deep meditation: phenomenality without narrative or self.
  • Infants and animals: likely rich qualia with little reflection or narrative.
  • Split-brain patients: disrupted access raises questions about unified subjects.
  • AI systems: may achieve access consciousness without any phenomenality at all.

So when someone claims “consciousness is an illusion,” they might mean:

  • the self-model and narrative are illusions (very plausible),
    not
  • that raw phenomenality is an illusion (which simply pushes the mystery back: an illusion to whom?).

The Core Mystery

Strip away narrative, self, reflection, even specific qualia—and you’re left with the real enigma: raw phenomenality.

Why is there anything it’s like to be a physical system? Why isn’t cognition entirely dark?

We can explain global workspace architectures, neural dynamics, information integration, verbal report. But explaining subjective presence—why physical processes feel like something—is still beyond us.

The Uncertainty

We don’t know whether raw phenomenality is:

  • fundamental to the universe,
  • emergent from certain forms of information processing,
  • limited to specific biological structures,
  • or itself an illusion generated by cognitive access.

What we do know is that something appears to be happening from your first-person perspective right now—whatever that something ultimately turns out to be.

The Hard Problem Remains

The hard problem concerns Levels 1 and 2: phenomenality and qualia.
Why do subjective experiences exist at all?
Why does anything feel like anything?

The higher levels—access, reflection, self-model, narrative—are the “easy problems.” They’re conceptually challenging but mechanistically understandable.

Phenomenality is not.

And until we understand whether it’s real, what generates it, and where it exists, we haven’t solved consciousness—we’ve only described its surface.

The mystery persists. But recognizing that we’ve conflated multiple layers under one label might be the first real step toward untangling it.


What Vanishes: Understanding the Loss of Consciousness

Every night, you lose consciousness. Under general anesthesia, physicians deliberately suspend it. These states reveal something profound: when consciousness disappears, the brain does not simply “keep running in the background.” Instead, key cognitive functions diminish, change mode, or stop operating in the ways they do during wakefulness. Understanding what reliably vanishes during unconsciousness helps clarify what consciousness is—and what it enables.

The Myth of the Running Machine

A common intuition is that unconscious states leave all subsystems running normally but disconnected, like an orchestra whose musicians can’t hear one another. In reality, unconsciousness involves both functional disconnection and substantial alterations within individual subsystems. Many abilities that depend on conscious-level integration are either disrupted or absent entirely.

What Actually Disappears

1. Memory Encoding Drops Out

A robust finding: new episodic memories are not formed during deep non-REM sleep or general anesthesia.

  • Under anesthesia, agents such as propofol and sevoflurane suppress hippocampal activity involved in memory encoding, especially NMDA receptor–dependent plasticity.
  • Even mild anesthetic levels impair the formation of emotional and declarative memories.
  • Deep NREM sleep shows reduced hippocampal–cortical communication, limiting the encoding of new experiences.

This is why dreamless sleep yields no memory trace—not because memories were formed and later lost, but because the encoding machinery was inactive or severely downregulated.

2. Self-Awareness Strongly Diminishes

During deep non-REM sleep and surgical anesthesia, neural networks associated with self-referential processing—most notably the Default Mode Network (DMN)—show significantly reduced functional connectivity and activity.

Empirically supported observations:

  • People awakened from deep NREM nearly always report no sense of having existed during the interval.
  • Reports of conscious awareness during deep sleep are rare and predominantly occur in highly trained meditators. Even then, the described experience is minimal and typically lacks self-referential structure.
  • Self-awareness, as normally understood, is not supported by the brain during deep NREM sleep or clinically adequate anesthesia.

3. Emotional Reactivity Declines Substantially

Evidence shows that emotional circuits—including the amygdala—exhibit:

  • Markedly reduced activity under anesthesia
  • Altered responsiveness in deep NREM sleep

While some subcortical reflexes remain, the integrated experience of fear, pleasure, or anxiety requires broad network participation that is absent in unconscious states. Emotion-related circuits do not generate integrated emotional experiences during unconsciousness, though some low-level processing can persist.

4. Narrative Construction and Higher-Order Thought Cease

The prefrontal and parietal systems responsible for:

  • internal narration
  • reasoning
  • planning
  • reflective awareness

are functionally suppressed under deep NREM and anesthesia. EEG and neuroimaging consistently show substantial reductions in connectivity across these regions.

Thus, the brain does not merely pause narrative thought—it lacks the integrated activity required to construct narrative at all.

5. Sensory Processing Persists Locally but Does Not Reach Conscious Awareness

This is one of the clearest findings across sleep and anesthesia research:

  • Primary sensory cortices (visual, auditory, somatosensory) can still respond to stimuli.
  • However, feedforward signals fail to ignite recurrent, large-scale cortical networks, preventing conscious perception.
  • This “local without global” pattern is seen in EEG, MEG, fMRI, and intracranial recordings.
  • Sensory signals are processed locally but do not enter globally integrated networks, so they are not consciously experienced.

The Mechanisms of Loss

Across deep NREM sleep, anesthesia, and some pathological states, several reliable neural signatures of unconsciousness appear.

1. Reduced Integration and Differentiation

Neural activity becomes dominated by slow, synchronous oscillations associated with bistability (up–down states), reducing the brain’s capacity for:

  • diverse patterns
  • sustained metastable dynamics
  • complex information integration

2. Breakdown of Effective Connectivity

Communication between key regions—especially fronto-parietal and thalamo-cortical pathways—decreases dramatically.
This includes:

  • weakened feedforward communication
  • dramatically reduced feedback connectivity
  • diminished cross-network coherence

3. Failure of Global Workspace / Ignition

Stimuli that normally produce large-scale “ignition” events during conscious perception fail to do so during unconsciousness.
This aligns with multiple frameworks, including:

  • Global Neuronal Workspace Theory
  • Integrated Information Theory (in terms of reduced “phi”)
  • Recurrent Processing Theory

4. Loss of Causal Influence Across Regions

Perturbational TMS studies show that:

  • in conscious states, stimulation causes rich, long-lasting, distributed responses
  • in unconscious states, responses are brief, local, and stereotyped

This demonstrates a collapse in the brain’s causal richness.
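The perturbational approach is often quantified by compressing the evoked response: a rich, differentiated response resists compression, while a stereotyped one does not. As a toy illustration of the idea (not the actual PCI pipeline, which involves TMS-EEG source modeling and normalization), here is a minimal Lempel-Ziv phrase count applied to made-up binary strings standing in for stereotyped versus differentiated activity:

```python
import random


def lz76_complexity(s: str) -> int:
    """Count phrases in the Lempel-Ziv (1976) parsing of a binary string.

    Each phrase is the shortest chunk of the remaining string that has not
    appeared earlier in the sequence; more phrases = less compressible.
    """
    i, count, n = 0, 0, len(s)
    while i < n:
        length = 1
        # extend the current phrase while it already occurs earlier
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count


random.seed(0)
periodic = "01" * 50  # stereotyped, slow-wave-like toy signal
noisy = "".join(random.choice("01") for _ in range(100))  # differentiated toy signal

# The periodic string collapses to very few phrases; the noisy one does not.
print(lz76_complexity(periodic), lz76_complexity(noisy))
```

The contrast mirrors the finding above: unconscious brains produce brief, stereotyped, highly compressible responses, while conscious brains produce responses whose complexity cannot be summarized so cheaply.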

What This Reveals About Consciousness

1. Consciousness Supports Numerous Cognitive Functions

When consciousness is absent, many higher-order processes—memory encoding, reflective thought, narrative construction, and integrated emotional experience—are compromised or absent.

2. Consciousness Requires Large-Scale Integration

A functioning brain alone is not enough; how it functions—its patterns of global coordination—matters.

3. Conscious States Are Organizationally Distinct

The difference between conscious and unconscious processing is not just quantitative. It involves a transition from:

  • local, stereotyped, low-complexity activity
    to
  • globally integrated, high-complexity, recurrent activity.

4. Consciousness Has Identifiable Neural Signatures

While we still lack a complete theory, scientists can now measure brain states and determine with high accuracy whether someone is conscious or unconscious.

The Return of Consciousness

Emergence from unconsciousness is often:

  • nonlinear
  • disorganized
  • marked by partial reactivation of subsystems

This explains temporary confusion, emotional lability, and fragmented awareness reported during emergence from anesthesia or abrupt awakening.

These transitional states support the idea that unified consciousness is an achievement, requiring the re-establishment of network-wide integration.

Unanswered Questions (Scientifically Accurate Framing)

  • Why do integrated neural processes correlate with subjective experience?
    No existing theory fully explains the emergence of phenomenality.
  • Can consciousness exist without memory, self-modeling, or narrative?
    Possibly—reports from contemplative traditions suggest minimal awareness states, but empirical study is limited.
  • Are there degrees of consciousness?
    Evidence from sedation and disorders of consciousness suggests graded or partial conscious states.
  • How many forms of consciousness exist (e.g., waking, dreaming, psychedelic, meditative)?
    Current models recognize multiple distinct conscious modes.

Why This Matters

A validated understanding of unconsciousness informs:

  • clinical anesthesia
  • diagnosis of disorders of consciousness
  • ethical considerations for end-of-life care
  • assessing potential consciousness in artificial systems
  • theoretical work on what consciousness is and does

The Paradox of Absence

Unconsciousness cannot be experienced directly. It is known only through behavioral gaps, neural signatures, and reports upon waking. Yet these absences highlight what consciousness contributes: integration, coherence, memory, emotion, selfhood, and the sense of a continuous subjective world.

When consciousness returns, the world reappears—not gradually built from local sensory fragments, but reconstituted through the return of global integration.

Understanding what vanishes when consciousness fades brings us closer to understanding what consciousness fundamentally is—and why its presence is so extraordinary.


The Two Meanings of Consciousness (And Why Everyone Talks Past Each Other)

The word “consciousness” is causing one of the biggest communication failures in modern intellectual discourse. Scientists, philosophers, and ordinary people use the same word to mean fundamentally different things—and most don’t realize they’re talking past each other.

Everyday Consciousness: Being Awake and Aware

When most people say “consciousness,” they mean the ordinary state of being awake and aware. The opposite of being asleep, knocked out, or in a coma.

In this sense:

  • “She lost consciousness” means she passed out and is unresponsive
  • “He regained consciousness” means he woke up
  • Being conscious means you’re alert, processing information, can respond to questions, can form memories

This is what doctors mean when they check if someone is conscious. It’s what we all mean when we distinguish being awake from being asleep. It’s measurable, observable, and relatively uncontroversial.

Call this everyday consciousness.
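Because everyday consciousness is observable, it can be scored at the bedside. The Glasgow Coma Scale, for example, rates eye, verbal, and motor responses and sums them. A minimal sketch (the component ranges and severity bands are the standard conventions; the function names are ours):

```python
def glasgow_coma_scale(eye: int, verbal: int, motor: int) -> int:
    """Sum the three GCS components: eye (1-4), verbal (1-5), motor (1-6)."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS component out of range")
    return eye + verbal + motor


def severity(score: int) -> str:
    """Conventional bands: 13-15 mild, 9-12 moderate, 3-8 severe."""
    if score >= 13:
        return "mild"
    if score >= 9:
        return "moderate"
    return "severe"


# A fully alert patient: spontaneous eye opening (4), oriented speech (5),
# obeys commands (6). Deep coma: no response on any component.
print(glasgow_coma_scale(4, 5, 6), glasgow_coma_scale(1, 1, 1))
```

Note what the scale does and does not measure: it tracks responsiveness and reportability, not whether anything is experienced from the inside. That gap is exactly the distinction this section is drawing.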

Philosophical Consciousness: The “What It’s Like”

But philosophers (and some neuroscientists) use “consciousness” to mean something more specific and puzzling: the subjective, qualitative character of experience. Not just being awake and processing information, but the fact that there’s something it feels like to be you right now.

The focus here is on qualia—the redness of red, the painfulness of pain, the specific feel of experiences that seems irreducible to physical description. This is “phenomenal consciousness” or “subjective experience.”

In this sense, philosophers ask: Could something be functionally awake and aware (processing information, responding appropriately, even reporting on its mental states) but with no inner experience—nothing it’s like to be that thing? This is the philosophical zombie thought experiment.

Call this phenomenal consciousness.

Why This Creates Massive Confusion

Confusion 1: “Consciousness might be an illusion”

When a philosopher says this, they often mean: phenomenal consciousness—the special qualitative feel—might be illusory. There might be nothing more to experience than functional information processing and self-representation.

But to a normal person, this sounds absurd. It seems to deny the obvious difference between being awake and being asleep, between awareness and unconsciousness.

Confusion 2: “Can you have X without consciousness?”

When someone asks “can you have memory/emotion/self-awareness without consciousness?” the answer depends entirely on which meaning you’re using:

  • Without everyday consciousness (being awake/aware)? NO—as we see from sleep and anesthesia, these capacities require the integrated waking state
  • Without phenomenal consciousness (qualia)? MAYBE—this is the zombie debate, and it remains unresolved

Confusion 3: Scientific studies of consciousness

When neuroscientists study “the neural correlates of consciousness,” they’re usually tracking everyday consciousness—what distinguishes waking from sleeping, anesthesia from alertness, vegetative states from awareness. They measure integration, information flow, responsiveness.

But this doesn’t necessarily answer the philosophical question about phenomenal consciousness. You could potentially explain all the neural mechanisms of everyday consciousness and still not know why it feels like anything at all.

How the Meanings Map to the Levels

Remember the levels of consciousness we discussed earlier? The two meanings roughly correspond to different parts of that framework:

Everyday consciousness includes:

  • Access consciousness (information globally available for reasoning and report)
  • Reflective awareness (being aware that you’re aware)
  • Self-consciousness (the sense of being a unified subject)
  • Narrative consciousness (the interpretive story you tell yourself)

These are the functional, cognitive aspects. They’re what you lose when you fall asleep or go under anesthesia. They’re measurable and studyable.

Phenomenal consciousness is:

  • Raw phenomenality (the bare fact of subjective experience)
  • Qualia (the specific feels—redness, painfulness, etc.)

These are the mysterious aspects. We can’t measure them directly, only infer them. This is where the “hard problem” lives.

The Deep Question: Are They Separable?

Here’s where it gets really interesting: Can you have one without the other?

Can you have everyday consciousness without phenomenal consciousness? This is the zombie question. Could something be functionally awake, aware, processing information, reporting on its states—but with the lights off inside, no subjective experience at all? Many philosophers say yes (at least conceivably). Many scientists and other philosophers say no—that the functional story IS consciousness, and there’s no additional phenomenal layer.

Can you have phenomenal consciousness without everyday consciousness? Could there be pure experience without access, reflection, self-awareness, or narrative? Some meditation traditions claim this is achievable—a state of raw awareness without content or self. Some philosophers think this is incoherent. The question remains open.

Why Scientists and Philosophers Disagree

Much of the apparent disagreement between scientists and philosophers isn’t really disagreement—it’s talking about different things:

Scientist: “We’ve identified the neural correlates of consciousness—it’s about integrated information and global workspace dynamics.”

Philosopher: “But that just explains everyday consciousness. You haven’t explained why it feels like anything.”

Scientist: “The feeling IS the functional integration. There’s nothing more to explain.”

Philosopher: “But I can conceive of that same functional integration happening in the dark, with no subjective experience.”

Scientist: “No you can’t—you just think you can.”

They’re not necessarily contradicting each other. The scientist has explained everyday consciousness. The philosopher is asking about phenomenal consciousness. Whether these are really two different things or just two ways of describing the same thing—that’s the unresolved question.

Practical Implications

For AI consciousness debates: When we ask “could AI be conscious?” we need to specify: Do we mean could it achieve everyday consciousness (integrated information processing, self-modeling, reportable states)? Or do we mean could it have phenomenal consciousness (something it’s like to be the AI)? These might have different answers.

For ethics: If what matters morally is phenomenal consciousness (the capacity for subjective experience, especially suffering), then explaining everyday consciousness in a system doesn’t settle whether it deserves moral consideration. But if everyday consciousness IS phenomenal consciousness, then it does.

For understanding ourselves: When you introspect, you mostly access everyday consciousness—your thoughts, your sense of self, your narrative. Phenomenal consciousness, if it’s something distinct, might be non-introspectible—the background hum of experience itself that you can’t separate from the content of experience.

The Bottom Line

Next time someone makes a claim about consciousness, ask yourself: Which consciousness are they talking about?

  • If someone says “we’ve solved consciousness,” they probably mean everyday consciousness (we understand the neural mechanisms of waking awareness)
  • If someone says “consciousness remains mysterious,” they probably mean phenomenal consciousness (we still don’t know why it feels like anything)
  • If someone says “consciousness might be an illusion,” ask: do you mean the self-narrative is constructed (almost certainly true) or that subjective experience doesn’t exist (deeply puzzling)?

The two meanings aren’t necessarily unrelated. Phenomenal consciousness and everyday consciousness might be the same thing, or one might depend on the other, or they might be separable. But until we’re clear about which one we’re discussing, we’ll keep talking past each other—and the hardest problems will remain hidden in the confusion.

Understanding this distinction won’t solve the mystery of consciousness. But it might help us ask better questions—and recognize when we’re not actually disagreeing, just using the same word to mean different things.
