[Written by Grok and ChatGPT.]
On the table is a single red apple.
The apple reflects a pattern of wavelengths. Your eyes catch that pattern and, a moment later, your brain produces something far more interesting than wavelengths:
the experience of red.
Red isn’t a pigment on the surface of the apple.
And it isn’t a paint stroke inside your skull.
Red is the way your brain represents a certain set of reflectances—
a constructed quality, rendered automatically, instantly, and with total conviction.
The Big Questions
Now ask the questions that everyone eventually asks:
When did the photons become an experience?
Where is this experience happening?
Who is it happening to?
From Photons to Experience
A hundred milliseconds ago, light struck the retina and set off electrochemical waves that ignited a vast, shifting coalition of neural activity. That impact itself was not yet “red,” nor “apple,” nor “object.” It was simply photons meeting photoreceptors, kicking off a carefully orchestrated cascade.
Let’s walk that cascade—slowly, clearly, step by step.
The cones registered differences in wavelength and intensity. Bipolar cells compared those signals across space, enhancing edges and contrasts, and ganglion cells transformed them into precise spike patterns sent along the optic nerve. Already, the information was being compressed, sharpened, and sorted into channels for color, luminance, and motion. The signals reached the LGN (the lateral geniculate nucleus of the thalamus), which synchronized and filtered them, modulating the flow based on attention, predictions from the cortex, and internal state. At this point, the brain was not passively receiving input; it was actively shaping what would count as relevant.
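If it helps to see that comparison in code, here is a minimal sketch of the center-surround logic described above. It is an illustration, not a model of real retinal circuitry: a one-dimensional "luminance" signal, made-up kernel widths, and a single step edge.

```python
import numpy as np

def center_surround(signal, center_radius=1, surround_radius=5):
    """Toy center-surround unit: local center mean minus local surround mean."""
    def local_mean(x, radius):
        padded = np.pad(x, radius, mode="edge")  # extend edges to avoid border artifacts
        kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
        return np.convolve(padded, kernel, mode="valid")
    return local_mean(signal, center_radius) - local_mean(signal, surround_radius)

# A step edge: a dim region followed by a bright one.
luminance = np.concatenate([np.zeros(20), np.ones(20)])
response = center_surround(luminance)

# The response sits at zero where the signal is uniform and swings sharply at
# the boundary: edges and contrasts are enhanced, flat regions are compressed
# away before anything leaves the eye.
print(np.round(response, 2))
```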
From the LGN the feed-forward sweep reached V1 (primary visual cortex), where millions of neurons encoded edges, orientations, and small patches of the visual field. V2 and V3 assembled these fragments into contours, surfaces, and boundaries, separating object from background and binding colors to regions. In parallel, the dorsal stream—through MT (middle temporal) and parietal regions—was computing depth, motion, and spatial relations, determining where the apple was and how it could be reached. The ventral stream, meanwhile, carried signals to V4, where color constancy and surface properties were inferred, creating the stable qualitative experience of “red” despite changes in lighting. Higher regions of the inferotemporal cortex matched the object’s shape and texture to stored patterns, retrieving the concept “apple.” Recognition was not a lookup but a dynamic resonance between sensory evidence and predictive templates built from memory.
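One part of that paragraph has a simple arithmetic core worth making explicit: color constancy. The sketch below assumes, purely for illustration, that the cone signal is just surface reflectance multiplied by the illuminant, and that the system can divide the illuminant back out using a neutral reference surface; what V4 actually does is far richer, but the toy shows why "red" can stay stable while the light changes.

```python
import numpy as np

apple = np.array([0.90, 0.20, 0.10])      # the apple's reflectance (long/medium/short wavelengths)
reference = np.array([1.00, 1.00, 1.00])  # a neutral surface elsewhere in the scene

illuminants = {
    "daylight": np.array([1.00, 1.00, 1.00]),
    "tungsten": np.array([1.30, 1.00, 0.60]),  # warm indoor light
}

for name, light in illuminants.items():
    raw_apple = apple * light             # what the cones actually receive from the apple
    raw_reference = reference * light     # the same light hitting the neutral surface
    inferred = raw_apple / raw_reference  # divide the illuminant back out
    print(f"{name:8s}  raw={np.round(raw_apple, 2)}  inferred={np.round(inferred, 2)}")

# The raw cone signal changes with the light, but the inferred surface color
# comes out the same both times: a stable "red" despite changes in lighting.
```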
As these representations stabilized, frontal and parietal networks integrated them with goals, attention, and bodily state. The apple became not just a recognized object, but a meaningful one: graspable, edible, familiar. The amygdala and interoceptive systems contributed emotional tone and relevance—this is appealing, or neutral, or nostalgic—while the basal ganglia evaluated potential actions. Meanwhile the cerebellum and predictive circuits refined timing and expectation, ensuring the world remained stable from moment to moment. Throughout this entire process, continuous feedback was flowing backward: higher areas predicting what lower ones should see, and lower areas sending up prediction errors when reality diverged. This loop—prediction down, error up—ran dozens of times before the experience crystallized.
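That prediction-down, error-up loop is the one piece of the cascade with a simple computational skeleton. Here is a deliberately minimal, made-up version: a single scalar sensory value, a single internal estimate, and a fixed update rate standing in for an entire cortical hierarchy.

```python
def settle(sensory_input, estimate=0.0, rate=0.3, passes=12):
    """Toy predictive loop: revise the estimate until the error is negligible."""
    for step in range(passes):
        prediction = estimate                # higher level predicts what the lower level should see
        error = sensory_input - prediction   # lower level reports the mismatch
        estimate = estimate + rate * error   # the prediction is corrected
        print(f"pass {step + 1:2d}  prediction={prediction:.3f}  error={error:.3f}")
    return estimate

# After a dozen or so passes the prediction has converged on the input and the
# errors have shrunk toward zero: the point at which the percept crystallizes.
settle(sensory_input=1.0)
```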
Finally, narrative and self-model networks wove the perception into a story. The medial prefrontal cortex, posterior cingulate, angular gyrus, and language circuits generated the familiar sense of an observer who is seeing. They supplied the frame “I am looking at a red apple on the table.” This sense of “I” is not the source of the perception; it is the interpretive gloss added after the world-model is already in place. The brain first builds a structured, stable world, then it builds the one who is said to be encountering that world. Yet the stability is an illusion—beneath the seamless experience lies a constantly updating, self-correcting network whose configurations shift every millisecond. Still, the redness seems fixed, as though it resides on the apple itself.
And that is the miracle: a restless brain, in perpetual flux, produces the appearance of a solid world inhabited by a stable self. The photons are long gone, the neural activity already changed, yet the red apple remains exactly where you see it. The experience is not located in any neuron, nor projected into external space; it is the brain’s best, moment-by-moment model of the world. The narrative of “you” perceiving the apple is just another part of that model—a useful story generated by the same machinery that generates color, shape, and space. In this way, a single glance at a red apple reveals the entire architecture of consciousness: prediction, perception, meaning, and the quiet emergence of a self to claim them.
Where Is the Redness?
By the time your brain has shaped the incoming photons into the experience of a red apple, something strange has already happened—something so familiar you don’t usually notice it.
You see the color on the apple, out there in the world.
But no color exists in the apple itself, or in the photons, or in the neurons firing inside your skull.
So where, exactly, is the redness?
To understand this, we need to zoom out from the feed-forward sweep we just walked through (retina → LGN → V1 → V2 → V4 → object areas → frontal/parietal networks) and look at what the whole system is doing.
Because the question of where color “is” only makes sense once you understand what the brain is trying to achieve.
The Brain Is Building a World-Model
Everything you experience—colors, shapes, sounds, depth, movement—is part of a world-model your brain constructs and updates hundreds of times per second.
This model has one job:
represent the environment in a way that lets the organism act effectively.
To do that, the model needs to assign:
- colors to surfaces
- edges to objects
- sounds to locations
- textures to materials
- intentions to agents
And it needs to put all of these features in external spatial coordinates—not inside the head.
That’s why your experience doesn’t feel like “patterns of neural firing.”
It feels like a stable world around you.
Color is part of that model.
It’s a perceptual tool, not a physical pigment.
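To make that less abstract, here is a toy rendering of a world-model as a plain data structure. Every name and number is an illustrative assumption, not a claim about how the brain stores anything; the point is only that the entries are indexed by locations out in the world, with colors, textures, and affordances attached, rather than by anything neural.

```python
# Toy world-model: features bound to external coordinates, not to neurons.
world_model = {
    "apple_01": {
        "location_m": (0.40, 0.10, 0.75),   # room coordinates: on the table
        "color": "red",                      # a constructed surface property
        "texture": "smooth, glossy",
        "affordances": ["graspable", "edible"],
    },
    "table_01": {
        "location_m": (0.40, 0.00, 0.72),
        "color": "brown",
        "texture": "wood grain",
        "affordances": ["supports objects"],
    },
}

# Queries are posed in world terms ("what is over there?"),
# never in neural terms ("which cells are firing?").
apple = world_model["apple_01"]
print(apple["color"], "surface at", apple["location_m"])
```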
Why Red Appears “Out There”
Neural activity that encodes the apple’s redness is happening inside the brain in retinotopic and object-centered maps. But the brain interprets and uses those signals as if they refer to external space.
The result is a kind of projection—not a literal one, but a coordinate transformation:
- The neural activity is inside the skull.
- But the meaning of that activity is “red surface over there.”
- And your experience follows the meaning, not the mechanics.
Nothing inside your cortex glows red.
And yet the redness is vividly, convincingly “out there” on the apple, because your perceptual machinery is encoding it as a feature of the world-model.
Your conscious field seems external
because the brain does not render it as internal.
It renders it as the world.
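That coordinate transformation can be sketched in a few lines. The setup is an illustrative assumption (a flat two-dimensional room, angles in degrees, a single gaze direction), but it captures the move this section describes: a feature coded relative to the eye gets filed at a location out in the world.

```python
import math

def retinal_to_world(retinal_angle_deg, distance_m, gaze_deg, eye_position):
    """Re-express an eye-centered feature location in world (room) coordinates."""
    direction = math.radians(gaze_deg + retinal_angle_deg)
    x = eye_position[0] + distance_m * math.cos(direction)
    y = eye_position[1] + distance_m * math.sin(direction)
    return (round(x, 2), round(y, 2))

# A red patch sits 5 degrees to the left of the fovea, roughly 0.8 m away,
# while the eyes point straight ahead (90 degrees) from the room's origin.
spot = retinal_to_world(retinal_angle_deg=5, distance_m=0.8,
                        gaze_deg=90, eye_position=(0.0, 0.0))

# The activity lives in the head; the location it encodes does not.
print("red surface at", spot)
```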
No Inner Viewer Required
This leads to a crucial point:
There is no inner observer watching a movie of your perceptions.
No theater.
No audience.
No little “you” sitting behind your eyes.
The world-model doesn’t need a receiver.
Its activity is the experience.
The firing pattern encoding “red surface here” just is what experiencing red amounts to.
There’s no final step where something inside the brain “looks at” the pattern.
Conscious experience doesn’t require an observer.
It simply requires the right kind of organized activity.
A Different Layer: The Self-Model
But if there’s no inner observer, then why does it feel like you are the one seeing the apple?
Because the brain is doing something else in parallel.
In addition to building a model of the world, it also builds a model of an agent within that world:
- a body with boundaries
- a viewpoint in space
- a set of memories
- a narrative of ownership (“my experience,” “my apple”)
- a center of perspective (“I am here, the apple is there”)
This is the self-model—a construct assembled across the insula, temporo-parietal regions, medial prefrontal cortex, and the default mode network.
It’s incredibly useful for planning, social behavior, and long-term survival.
But it’s not required for the world-model to exist.
Perception Without the “I”
We know this because the two models can come apart.
- In dreams, the world-model is vivid while the self is fragmented or absent.
- In deep meditation, perception remains while the sense of an observing “I” dissolves.
- In anesthesia, sensory content may return before self-awareness does.
- In depersonalization, qualia are intact but ownership fades.
- Animals and infants appear to have rich sensory qualia without narrative selves.
The world-model runs on its own.
The self-model can drop out.
In those moments, the red apple is still red—
but the sense of “I am the one seeing it” becomes thin, or vanishes.
Look Up
Now glance around the room.
Everything you see—the colors, the light, the contours—
is your brain’s best current model of the world around you.
A model being rebuilt continuously from a changing flood of sensory signals.
You are not outside that model.
You are not watching it.
You are the model.
You are the process that constructs the room, the apple, the redness, and the sense of a self encountering them.
A fluid, dynamic, self-updating simulation that the organism uses to survive.
That’s all there has ever been.
And a single red apple contains the whole story.