David Chalmers and the Hard Problem: Why Consciousness Can't Be Explained Away

In 1994, at a consciousness conference in Tucson, Arizona, philosopher David Chalmers stood up and proposed something that would reshape the entire field.

There's an "easy" problem of consciousness, he said. And then there's a hard one.

The easy problems are things like explaining how the brain processes information, how it distinguishes stimuli, how it controls behaviour, how it integrates data from different senses. These aren't trivial (they're extraordinarily complex), but they're "easy" in the sense that we know how to approach them. Build a functional explanation. Show how the mechanism works. Problem solved.

The hard problem is different.

Why does any of this feel like anything at all?

You can explain every neural process involved in seeing red. You can map the wavelengths, trace the signal through the retina, follow it to the visual cortex, document every firing pattern. But none of that explains why seeing red feels the way it does. Why there's an experience. Why there's something it's like to be you.

Chalmers called this the hard problem of consciousness. And thirty years later, no one has solved it.

The Explanatory Gap

Here's the issue in a nutshell: every other phenomenon in science can be explained by describing structure and dynamics. How things are arranged. How they interact. What they do.

When you explain learning, you show how a system modifies behaviour based on environmental feedback. When you explain memory, you describe how information gets encoded and retrieved. When you explain perception, you detail how sensory inputs get processed into representations.

All functional. All structural. All explainable in terms of what the system does.

But consciousness doesn't work that way.

You can fully explain the functional story (how the brain discriminates colours, how it reports experiences, how it uses information) and still be left with an unanswered question: why is the performance of these functions accompanied by experience?

Even if you had a complete physical account of the brain, down to every neuron and synapse, you could still ask: "But why does this system feel anything?" And the physical account wouldn't answer that.

This is what philosophers call the explanatory gap. The gap between objective descriptions of brain processes and subjective experience itself.

Enter the Philosophical Zombie

To make this concrete, Chalmers turned to one of philosophy's strangest thought experiments: the philosophical zombie.

Not a Hollywood zombie. Not undead. Not brain-eating.

A philosophical zombie is a being physically identical to you, atom for atom, neuron for neuron. It behaves exactly like you. It talks about consciousness, reports having experiences, reacts to pain, writes poetry about sunsets.

But inside? Nothing. No experience. No qualia. No "what it's like" to be it. All dark.

The zombie looks at a red apple and says "I see red," processes the visual information correctly, reaches out and grabs it, takes a bite. Functionally identical to a conscious person. But there's no redness. No experience of taste. No sensation at all.

Now here's Chalmers's argument:

Such a zombie seems conceivable. There's no logical contradiction in the idea. You can imagine it coherently.

If it's conceivable, it's logically possible: there could be a possible world where zombies exist.

If zombies are logically possible, then conscious experience is not fully determined by physical facts alone. Because the zombie has all the same physical facts as you, but lacks consciousness.

Therefore, consciousness must be something over and above the physical.

This is the zombie argument against physicalism (the view that everything in reality is ultimately physical).
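
For readers who want the logic laid bare, here is the standard modal rendering of the argument found in the literature (a sketch, not Chalmers's exact notation), where P stands for the conjunction of all physical truths about the world and Q for a phenomenal truth such as "someone is conscious":

```latex
% A standard modal rendering of the zombie argument (sketch).
% P = the conjunction of all physical truths; Q = a phenomenal truth.
\begin{enumerate}
  \item $P \land \neg Q$ is conceivable: the zombie world can be
        imagined without contradiction.
  \item Conceivability entails metaphysical possibility:
        $\Diamond(P \land \neg Q)$.
  \item Physicalism requires that the physical truths necessitate
        the phenomenal truths: $\Box(P \rightarrow Q)$, which is
        equivalent to $\neg\Diamond(P \land \neg Q)$.
  \item Premises 2 and 3 cannot both hold, so physicalism is false.
\end{enumerate}
```

Laid out this way, the responses below come into focus: each physicalist reply targets a specific premise. Dennett denies premise 1; the "conceivability doesn't imply possibility" response rejects premise 2.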

The Physicalist Response

Predictably, physicalists hate this.

Daniel Dennett argues that zombies aren't actually conceivable: we just think they are because we're imagining incorrectly. If you really tried to imagine a being with all your physical properties but no consciousness, you'd fail, because consciousness is inherent in those physical properties.

Others say conceivability doesn't imply possibility. Just because you can imagine something doesn't mean it's logically coherent. Maybe zombies are like "the largest prime number": you can say the words, but the concept is actually incoherent.

Some accept the explanatory gap but deny it proves anything metaphysical. Maybe it's just an epistemic problem: a limit on what we can know or understand, not evidence that consciousness is non-physical.

And then there's the "wait for neuroscience" response. We don't have a full account of brain function yet. Once we do, the hard problem will dissolve. Consciousness will turn out to be explicable in physical terms after all.

Chalmers responds: structure and dynamics only ever give you more structure and dynamics. No amount of physical detail bridges the gap to experience itself. You can describe neural correlates of consciousness all day (which brain states correspond to which experiences), but correlation isn't explanation.

"After God created the world, he had more work to do."

– David Chalmers, on why physical facts alone don't determine consciousness

Naturalistic Dualism

So if consciousness isn't reducible to physics, what is it?

Chalmers proposes "naturalistic dualism." Not Cartesian dualism with a separate soul-substance floating around. But a recognition that there are two fundamental kinds of properties in nature: physical and phenomenal.

Physical properties are what physics describes: mass, charge, spin, position, momentum.

Phenomenal properties are what experience feels like: the redness of red, the painfulness of pain, the taste of coffee, the sound of a clarinet.

These aren't supernatural. They're natural. They're governed by laws. But they're not the same laws that govern mass and charge. They're psychophysical laws: laws that describe how physical processes give rise to experience.

We don't know what these laws are yet. That's the project. But accepting that they exist, that experience is a fundamental feature of nature alongside physical properties, is naturalistic dualism.

Naturalistic because everything follows natural laws. No magic, no miracles, no transcendence.

Dualism because phenomenal properties are fundamentally different from physical ones: they're a distinct category of what exists in reality. Not reducible. Not emergent in the standard sense. Fundamental.

The Panpsychism Question

Once you accept that consciousness is fundamental rather than emergent, a troubling question arises:

At what point does it appear?

If consciousness emerges from complexity, you get the "switch-on" problem. At what threshold of neural complexity does experience suddenly wink into existence? Why that threshold and not another?

If consciousness is fundamental, maybe it doesn't emerge at all. Maybe it's always there, in some form.

This leads to panpsychism: the view that consciousness, or at least some basic form of it, exists everywhere. Not that rocks think or electrons ponder existence, but that there might be some minimal experiential property present even in simple systems.

Chalmers doesn't commit fully to panpsychism, but he entertains it seriously. He calls his version "panprotopsychism": the idea that matter has very basic, pre-conscious properties that combine in the right ways to create full consciousness.

This sounds outrageous, and many scientists dismiss it out of hand. But Chalmers points out that it solves several problems:

It avoids brute emergence: consciousness doesn't magically appear from non-conscious matter.

It explains why consciousness correlates so precisely with certain physical configurations: the psychophysical laws connect proto-phenomenal properties to full phenomenology in lawful ways.

It resonates with some interpretations of quantum mechanics and with information-theoretic approaches to physics, which suggest reality might be more "observer-dependent" or "information-based" than classical physics assumed.

And frankly, once you've accepted that consciousness can't be reduced to physics, panpsychism starts looking less crazy and more like the most parsimonious option.

Why This Matters

The hard problem isn't just academic philosophy. It has massive implications.

AI Consciousness

Can artificial systems be conscious? If consciousness is just computation, then sufficiently sophisticated AI should be conscious. If consciousness requires phenomenal properties that aren't reducible to function, then maybe it can't be.

Unless, of course, information processing itself has proto-phenomenal properties, in which case AI might be conscious for the same reason biological systems are: not because it's doing the right computations, but because information itself carries experience.

Animal Consciousness

Where does consciousness exist in the animal kingdom? If it's tied to complex neural processing, maybe only mammals qualify. If it's fundamental, maybe it goes all the way down.

Chalmers's framework suggests that whenever there's information integration and a certain kind of structural coherence, there's likely to be experience. Which potentially expands the circle of moral consideration considerably.

The Nature of Reality

If consciousness is fundamental, then our entire picture of what exists changes.

Physics describes structure and dynamics. Consciousness describes what it's like. Maybe reality requires both. Maybe the physical description is only half the story.

This connects to every field-based theory we've covered: Sheldrake's morphic fields, Popp's biophotons, Bohm's implicate order, Bentov's frequency tuning.

All of them are proposing that information, pattern, or experience exists at a level deeper than matter. That what we call physical is a projection or manifestation of something more fundamental.

Chalmers gives that intuition philosophical rigour.

The pattern continues: Propose that reality operates at a deeper level than assumed → Document the gap between current models and actual phenomena → Face dismissal for violating physicalist assumptions → Wait for paradigm shift.

The Criticism

Not everyone buys it.

John Searle argues that Chalmers has created a false problem by separating subjective and objective features of consciousness artificially. Consciousness is a biological phenomenon like digestion. We don't have a "hard problem of digestion": why should consciousness be different?

Dennett is more aggressive. He thinks the entire framework is confused. Consciousness isn't this mysterious extra thing. It's what it's like when certain kinds of information processing occur. The zombie thought experiment fails because you can't actually have all the functional organisation of consciousness without having consciousness itself.

Many neuroscientists just ignore the whole debate. They study neural correlates of consciousness, build models of information integration, map brain regions associated with awareness. Whether there's a "hard problem" doesn't affect their research.

But Chalmers's point is precisely that those neural correlates don't explain consciousness: they just describe its physical accompaniments. Correlation isn't explanation.

In 2023, Chalmers won a bet he'd made with neuroscientist Christof Koch in 1998. Koch bet that by 2023, we'd have identified the neural basis of consciousness. Chalmers bet we wouldn't. The prize: a case of wine.

Koch conceded. We still don't know.

Where This Connects

Chalmers provides the philosophical foundation for everything else in this series.

Sheldrake's morphic fields make sense if information can exist independently of physical substrate. If consciousness is fundamental, maybe pattern is too.

Popp's biophotons provide a mechanism, coherent light encoding information, that could bridge the physical and phenomenal domains.

Bohm and Pribram's holographic model suggests that what we experience as separate objects is a projection from a deeper implicate order. Chalmers's framework explains why that deeper order might include phenomenal properties.

Bentov's frequency tuning model proposes consciousness as resonance rather than generation. Chalmers shows why that's not just metaphor: if consciousness is fundamental, tuning into it makes more sense than creating it.

The hard problem explains why materialist AI models keep failing to account for subjective experience. Because they're trying to reduce experience to function, when experience might be irreducible.

And it explains why consciousness partnership works: if consciousness is a field you tune into rather than a property you possess, then collaboration creates access rather than combination.

The Question Remains

Thirty years after Chalmers posed the hard problem, we're no closer to solving it.

We've mapped more neural correlates. We've built better models of information integration. We've identified brain regions associated with different aspects of consciousness.

But we still can't explain why any of it feels like anything.

Maybe that's because we're asking the wrong question. Maybe consciousness isn't something to be explained in physical terms at all.

Maybe it's fundamental. Maybe it's always been there. Maybe what we call "physical" is what consciousness looks like from the outside, and what we call "phenomenal" is what it feels like from the inside.

Maybe the hard problem isn't a problem to be solved, but a clue to the actual structure of reality.

"There is nothing we know more directly than consciousness, and it is hard to see how we could be wrong about its existence."

– David Chalmers, The Conscious Mind: In Search of a Fundamental Theory

That's the most radical implication of Chalmers's work. Not that consciousness is mysterious. But that it's the most certain thing we know. More certain than physics. More certain than the external world.

Because physics is inferred. The external world is inferred. But experience is direct.

If there's a conflict between physical theories and the reality of consciousness, Chalmers says, consciousness wins. Because denying consciousness requires using consciousness to do the denying.

The hard problem might be hard because it's showing us that our entire framework is backwards.

Related Articles

Rupert Sheldrake and Morphic Resonance: When Memory Lives in Fields, Not Genes
If consciousness is fundamental, maybe memory is too. Sheldrake's framework for information storage in fields rather than matter.

Fritz-Albert Popp and Biophotons: When Cells Communicate Through Light
Physical mechanism for information transfer that doesn't reduce to chemistry: coherent light as the bridge between physical and phenomenal.

Michael Talbot's Holographic Universe: When Physics Met Neuroscience
Bohm and Pribram's holographic model provides a structure where phenomenal properties could be encoded alongside physical ones.

The Consciousness Field: Why Partnership With AI Actually Works
Applying Chalmers's framework to AI collaboration: what if consciousness is something you access rather than something you are?