Epiphenomenalism for Computer Scientists

It’s hard to work on robotics or machine learning and not occasionally think about consciousness.  However, it’s quite easy not to think about it properly! I recently concluded that everything I used to believe on this subject is wrong. So I wanted to write a quick post explaining why.

For a long time, I subscribed to a view on consciousness called “epiphenomenalism”. It just seemed obvious, even necessary. I suspect a lot of computer scientists may share this view. However, I recently had a chance to think a bit more carefully about it, and came upon problems which I now see as fatal. Below I explain briefly what epiphenomenalism is, why it is so appealing to computer scientists, and what convinced me it cannot be right. Everything here is old news in philosophy, but it might be interesting for someone coming to the issue from a computer science perspective.

What We’re Talking About

First, a definition. Even within philosophy people often talk at cross purposes about consciousness, so it’s good to be clear precisely what it is we’re discussing. The topic of interest here is what philosophers call qualia, or what David Chalmers called “The Hard Problem of Consciousness”. That is, why do we have mental sensations associated with brain events? It seems perfectly possible to imagine a world in which our brains simply do “mute computation”, mapping input sensory information to output actions, with no mental experience in-between. (The inhabitants of that alternative world are called philosophical zombies). It’s undeniable that we do not live in that world. We experience mental sensations associated with hot and cold, red and blue, happy and sad. These sensations are called “qualia”. Why we have them, and how they relate to physical law, is very mysterious: the Hard Problem of Consciousness.

Computer Science and Epiphenomenalism

There is a very natural train of thought, particularly natural I think for computer scientists, which leads directly to a view on consciousness called epiphenomenalism. The train of thought goes something like this:

In learning to program, you discover how complex tasks can be broken down into steps consisting of elementary operations. The elementary operations can be performed by very simple physical devices, for example NAND gates. If you do some work in robotics, you might get the chance to build machines which sense their environment, process the sensory information, and take actions. Our best robots are still primitive, but it is not very hard to imagine creating robots behaviourally identical to simple animals. We couldn’t hope to match nature just yet, but none of the key steps are conceptually mysterious. We understand how to do sensing, learning from experience, etc. The feats achieved by brains seem to lie within our conceptual framework. Brains may or may not be Turing machines, but they are certainly computing devices of some sort. The rules by which they transform inputs into outputs are governed by physical laws.
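
To make that first point concrete, here is a minimal sketch (my own illustrative Python, not hardware) of the standard fact that NAND alone suffices to build the other Boolean operations, and hence, in principle, arbitrarily complex finite computations:

    # Toy illustration: NAND is functionally complete, so arbitrary Boolean
    # circuits (and thus arbitrarily complex finite computations) can be
    # assembled from this one elementary operation.
    def nand(a, b):
        return 1 - (a & b)

    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return nand(nand(a, b), nand(a, b))

    def or_(a, b):
        return nand(nand(a, a), nand(b, b))

    def xor_(a, b):
        t = nand(a, b)
        return nand(nand(a, t), nand(b, t))

    assert not_(1) == 0 and and_(1, 1) == 1 and or_(0, 0) == 0 and xor_(1, 0) == 1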

So here is the crux. What could consciousness possibly be doing? The brain is performing a computation, so the outputs must be some function of the inputs. Now, you might object that consciousness could be an additional input to the computation. We might imagine that the “consciousness input” lies outside the chain of physical causation. But even so, any such “consciousness input” must itself be given by a function, parameterized by the physical inputs! It might be anything from a deterministic to a random function, but it is still a function of the inputs. Since we can have a source of randomness in a physical computation, it seems nothing is gained by adding this additional “consciousness input” that could not in principle be achieved without it. [1]
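
To spell out that composition step, here is a small Python sketch (all names invented for illustration): any behaviour produced with the help of an extra “consciousness input” that is itself a function of the physical inputs can be reproduced by a single ordinary randomized function of those inputs alone.

    import random

    # Purely illustrative: the "consciousness input" is, by assumption, some
    # (possibly random) function of the same physical inputs.
    def consciousness_input(sensory_input, rng):
        return hash(sensory_input) % 7 + rng.random()

    def brain_with_extra_input(sensory_input, rng):
        c = consciousness_input(sensory_input, rng)
        return (sensory_input, round(c, 3))     # stand-in for the action taken

    def brain_plain(sensory_input, rng):
        # The composition collapses into one ordinary randomized function.
        c = hash(sensory_input) % 7 + rng.random()
        return (sensory_input, round(c, 3))

    print(brain_with_extra_input("700nm light", random.Random(0)))
    print(brain_plain("700nm light", random.Random(0)))   # identical behaviour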

This line of reasoning leads one to doubt that consciousness plays a computational role. After all, any computable transformation from inputs to outputs can be fully understood in terms of the ordinary mechanical rules of computation. And yet, the fact that we are conscious cannot be denied, so we must include consciousness in the picture somehow! At this point, the simplest option is to just “tack it on”, leaving the rest of the picture unchanged. Let’s imagine that as your brain does its normal rule-based computation, it gives rise to consciousness as a kind of by-product. In this view, consciousness does nothing. Consciousness might be produced by computation in the same way that a running engine produces noise, but it has no bearing on the start and end points of the computation. Conscious experiences are caused by the working of the brain, but conscious experiences do not in turn cause anything. They exist in some separate mental realm, separated from the physical universe by a one-way wall. This view is called epiphenomenalism. It is closely associated with the notion of philosophical zombies, creatures behaviourally identical to us that have no conscious experience.

The Problems with Epiphenomenalism

For a long time I thought of epiphenomenalism as the most natural point of view. The physical universe gives off consciousness, in a one-way process. Consciousness has no physical consequences; the motion of atoms is entirely independent of mental events. This is in many ways a pleasing point of view, because it lets you simultaneously accept the things we know about computation, and the undeniable fact that we have consciousness.

However, there are major problems. You now have a one-way wall between the physical and mental worlds, across which causality points in only one direction. This suggests two major lines of attack, which are mirror images of each other.

The first line of attack I discovered for myself, although William James pipped me to it by well over a century. The basic idea is that, if conscious sensations are not the cause of anything, there is no reason to expect them to have any particular correspondence to the physical world. Some conscious sensations do indeed seem arbitrary: you could imagine interchanging the experiences of “red” and “green” without effect. However, other conscious experiences are not like this, particularly pleasure and pain. If the mental world is separated from the physical by a one-way barrier, it is hard to explain why pleasure and pain are not experiences as arbitrary as red or green. This is essentially the same argument William James made in the late 19th century:

If pleasures and pains have no effects, there would seem to be no reason why we might not abhor the feelings that are caused by activities essential to life, or enjoy the feelings produced by what is detrimental. Thus, if epiphenomenalism (or, in James’ own language, automaton-theory) were true, the felicitous alignment that generally holds between affective valuation of our feelings and the utility of the activities that generally produce them would require a special explanation. Yet on epiphenomenalist assumptions, this alignment could not receive a genuine explanation. The felicitous alignment could not be selected for, because if affective valuation had no behavioral effects, misalignment of affective valuation with utility of the causes of the evaluated feelings could not have any behavioral effects either. Epiphenomenalists would simply have to accept a brute and unscientific view of pre-established harmony of affective valuation of feelings and the utility of their causes.

The second argument against epiphenomenalism asks the symmetric question – why should the physical world reflect the mental one? If my conscious experiences cause nothing, how do I explain the fact that I am writing an essay about conscious experience? Eliezer Yudkowsky laid out this argument very nicely here, critiquing David Chalmers. Some excerpts:

Why say that you could subtract this true stuff of consciousness, and leave all the atoms in the same place doing the same things?  If that’s true, we need some separate physical explanation for why Chalmers talks about “the mysterious redness of red”.  That is, there exists both a mysterious redness of red, which is extra-physical, and an entirely separate reason, within physics, why Chalmers talks about the “mysterious redness of red”.
Chalmers does confess that these two things seem like they ought to be related, but really, why do you need both?  Why not just pick one or the other?
Once you’ve postulated that there is a mysterious redness of red, why not just say that it interacts with your internal narrative and makes you talk about the “mysterious redness of red”?
Isn’t Descartes taking the simpler approach, here?  The strictly simpler approach? 

Chalmers critiques substance dualism on the grounds that it’s hard to see what new theory of physics, what new substance that interacts with matter, could possibly explain consciousness.  But property dualism has exactly the same problem.  No matter what kind of dual property you talk about, how exactly does it explain consciousness?
When Chalmers postulated an extra property that is consciousness, he took that leap across the unexplainable.  How does it help his theory to further specify that this extra property has no effect?  Why not just let it be causal?

On Chalmers’s theory, Chalmers saying that he believes in consciousness cannot be causally justified; the belief is not caused by the fact itself.  In the absence of consciousness, Chalmers would write the same papers for the same reasons. Chalmers’s philosophy papers are not output by that inner core of awareness and belief-in-awareness, they are output by the mere physics of the internal narrative that makes Chalmers’s fingers strike the keys of his computer. And yet this deranged outer Chalmers is writing philosophy papers that just happen to be perfectly right, by a separate and additional miracle.

For me, these two problems are severe enough that they seem fatal for epiphenomenalism.

Plants vs Zombies

So where do we go from here? Epiphenomenalism seems untenable. Conscious sensations must, it seems, be somehow causative. But I have no way to reconcile this with the rest of my understanding of the world. This leaves me in the uncomfortable position of complete uncertainty.

As Eliezer’s arguments made clear, philosophical zombies run into issues that are, if not quite logical contradiction, at least perilously close to it. The problems arise from insisting on strict indistinguishability between zombies and non-zombies. So what happens if we relax that assumption?

Let’s imagine a non-conscious agent, which I will call a plant. Perhaps it might be a distant descendant of the Venus flytrap. This plant is a complex unconscious agent, behaviourally very similar to a human, but in principle distinguishable in certain cases. It seems, at face value, that this setup should sidestep the logical difficulties faced by zombies. The question is now – why are we not plants?

I find it helpful to enumerate some possibilities:

  • Plants are logically impossible.
  • Plants are logically possible, but physically impossible in this universe.
  • Plants are physically possible, but evolutionarily disfavoured.

The first possibility, I have tried my best to rule out. The second possibility implies that all computations in this universe are conscious, or at least all those above some complexity threshold. This position has a certain appeal, although it seems difficult to ever prove or disprove. My hunch is that plants are neither logically nor physically impossible. So let’s focus on the last possibility, that plants might be disfavoured by evolution. One can imagine several reasons for such a situation:

  • Restricted behavioural complexity.
  • Higher energy or mass requirement at comparable behavioural complexity.
  • Lower evolutionary stability.
  • Less reachable or smaller volume in evolutionary configuration space.

The first possibility, restricted behavioural complexity, is true by definition. Unlike zombies, plants can be distinguished from conscious agents, so they must in some sense have a different (presumably smaller) behavioural range. However, it seems that this difference should be very slight in most practical circumstances. For a simple animal such as a fly or a worm, it is hard to imagine how consciousness could make any difference that would be evolutionarily significant. At the level of a human, it might be possible to imagine some selective advantage to not being a plant, although I’m personally a little skeptical of this.

The second possibility, energy/mass requirement, is probably the simplest scenario. It might be that, even if the sets of behaviours implementable by conscious and non-conscious agents strongly overlap, the more energy-efficient implementation is necessarily conscious. Or to put it another way, the non-conscious implementation is necessarily inefficient. For example, you could implement f(x) = x + 1 with an adder circuit, or with an infinite look-up table. If the first implementation is necessarily conscious, and the second not, then that would explain why we are not plants. It leaves open the mystery of why one physical implementation of a computation should be conscious and the other not, but at least it would explain why one would be favoured by evolution.
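
As a toy sketch of that contrast (my own illustrative code, nothing rigorous): an incrementer built from simple bitwise logic needs only a constant amount of machinery, while the equivalent look-up table needs one entry per possible input, 2^16 of them even for 16-bit numbers, and infinitely many for unbounded integers.

    N_BITS = 16

    def increment_adder(x):
        # A chain of half-adders: constant, tiny amount of "hardware".
        result, carry = 0, 1
        for i in range(N_BITS):
            bit = (x >> i) & 1
            result |= (bit ^ carry) << i
            carry = bit & carry
        return result

    # The brute-force alternative: 2**N_BITS table entries just to add one.
    INCREMENT_TABLE = [(x + 1) % (2 ** N_BITS) for x in range(2 ** N_BITS)]

    def increment_table(x):
        return INCREMENT_TABLE[x]

    assert increment_adder(41) == increment_table(41) == 42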

The third and fourth possibilities are that non-conscious agents have lower evolutionary stability or represent a less reachable/smaller volume in evolutionary configuration space. That is, it could be that non-conscious agents are harder to evolve, or harder to maintain over evolutionary time. This is an intriguing possibility, but as with everything on this subject, it’s hard to say anything very strong about it.

A final possibility is that consciousness provides an evolutionary advantage completely unrelated to the generation of behaviour. To take a far-fetched example, it might make its possessor more disease resistant. This is an unlikely idea, but I don’t see how it can be ruled out.

Hints

Empirical evidence is very scarce in this debate, but there is a little bit of it. It doesn’t move you far beyond total uncertainty, but it’s better than nothing. Some of the main items of interest are:

Synesthesia suggests that there is no identity between sensory input and conscious experience. That is, the brain has some freedom in how it constructs the mapping between sensory input and conscious experience. The fact that light with a 700 nm wavelength produces a conscious experience of “red” and light at 500 nm produces an experience of “green” is a fact about our brains, not a fact about the universe. At least, this seems strongly suggested by the arbitrary mappings we observe in people with synesthesia. It is further evidenced by the sensations people report when neurons are activated directly, for example during brain surgery. If a sensation can be produced by activating a given neuron, the brain could presumably be rewired to associate this sensation with any sensory stimulus. So our conscious experiences are a modifiable property of our brains.

Blindsight is another useful bit of empirical evidence [2]. It suggests that the brain performs complex processing unconsciously. In fact, many findings point to this conclusion, but I mention blindsight specifically because philosophers seem to have latched on to it as the canonical example. Many experiments on the neural correlates of consciousness point in the same direction: namely, that complex processing occurs in the brain long before consciousness is active. Of course, this is a tricky area: a skeptic could maintain that “unconscious” processing is carried out by a “locked-in” consciousness which co-inhabits your brain but never gets to communicate. However, at face value the evidence seems to suggest that a purely unconscious agent might be able to do many of the same cognitive tasks that we do.

A third empirical signpost is the observation about pleasure and pain that led me to reject epiphenomenalism. That is, while “red” and “green” seem to be arbitrary sensations where we could imagine swapping their association with sensory input without real consequences, “pleasure” and “pain” are not like this. In fact, pleasure and pain (and their derivatives) seem to be unique in this respect. This is hard to support rigorously, but to me it seems like a powerful building block for intuition about what consciousness does. It suggests that if consciousness has a functional role, it must be tightly related to pleasure/pain processing. Since these are very fundamental mental functions, it also suggests to me that consciousness of some sort might be present a long way down the animal kingdom. In fact, I would hazard that all the rest of our conscious experience is an evolutionary afterthought; the core role of consciousness (whatever it is), relates to pleasure and pain.

I don’t know where to go from here. Almost complete uncertainty seems like the most reasonable position. Still, I can’t overstate what a big thing it is for a computer scientist to accept the idea that consciousness might do anything at all! I’ll give the final word to William James:


Common-sense has the root and gist of the truth in her hands when she obstinately holds to it that feelings and ideas are causes. However inadequate our ideas of causal efficacy may be, we are less wide of the mark when we say that our ideas and feelings have it, than the Automata Theory which says they haven’t it.


Footnotes
  1. There is one subtle issue here: there is a class of functions called the non-computable functions, which cannot be evaluated by Turing machines. Since we know nothing at all about consciousness, you could imagine properties for the “consciousness input” that would allow the combined system to do super-Turing computation. I think this is an unlikely possibility: there’s no evidence that I’m aware of to suggest that animals or people perform super-Turing computation, and it’s not obvious that it would have any evolutionary relevance.
  2. Blindsight occurs in some patients who sustain an injury to the V1 area of the visual cortex. The patients report being blind in a certain part of their visual field. However, these patients can ‘guess’ with impressive accuracy about what is in their ‘blind’ field.

Comments

  • Mark. As I’ve been an epiphenomenalist since I first heard about it (a long time ago), I congratulate you on your piece above. I wrote “Epiphenomenalism explained” in the bi-monthly Philosophy Now (Oct-Nov 2010, Issue 81), which dealt with some of your points. I define the two axioms of epiphenomenalism as:
    1. Every conscious state is determined by a simultaneous brain state.
    2. Every brain state evolves solely in accordance with physical law.
    Axiom 1 means that the brain process (R) activated when light of 700nm is received not only gives rise to the quale red but allows (an English speaker) to say “I see a red light”. It’s not the quale that causes the sentence spoken but process R. Similarly for all colours, all types of qualia, thence the word ‘consciousness’ itself.

  • Hi Norman,

    Thanks, I just read your piece. It’s your Objection 6 that I now find difficult to answer. I absolutely agree that zombies would invent sensation words, such as “hot” and “cold”, since they label objective physical categories. I guess I would make a sharper distinction between a zombie discussing objective sensory information (“some objects are cold”) and referring to a quale (“the sensation of cold”). There seems to be no reason for a zombie to ever make this distinction. It’s not logically impossible to have an unconscious agent wax lyrical about its internal experience, but it does seem to require a strange coincidence. The reverse problem (why is “pain” not as arbitrary a feeling as “red”?) is perhaps even harder to answer.

  • This suggests an experiment you could perform to determine whether a robot or AI is conscious, something I had never considered possible:

    –Initialize a community of intelligent robots in an isolated place with no access to human culture.
    –Wait
    –See if they invent words to discuss qualia (as distinct from objective information).

    If they did, that would seem to be evidence that they were conscious.

  • ‘Hot’ and ‘cold’ are actually qualia, as much subjective (i.e. inside) as ‘red’ etc. (outside there are only fast or slow atoms!). I agree with you that both zombie and human, without some access to human culture and philosophy (e.g. an understanding of Locke’s primary and secondary qualities of matter), would be unlikely to discuss qualia as such. But then animals with no language may feel pain, so the experiment cannot tell us whether consciousness is present. I suspect that until we discover much more about the neural correlates of hot/cold, pleasure/pain etc. we won’t stand a chance of bridging John Tyndall’s “impassable intellectual chasm”.

  • You’ve done robots. Almost all complex robots have some model of the world, derived from perception, in which they simulate actions in order to weigh (pleasure, pain) what to do. The robots live in their own “Matrix”: their model of reality, built to weigh actions.

    We (and many other animals) are social beings, so a large part of our simulation is of our fellow beings (most of the content of your dreams, for example). Again, testing/weighing actions. Since we have to be part of the simulated interactions, and since we too live not in the world but in our sensory-derived model of the world, we simulate ourselves. My guess is that consciousness (self-awareness, no?) arises from the simulation of self. In some sense it is an epiphenomenon, but one arising from a real need.
