The relationship between mind and body is one of those areas where persistent philosophical mistakes continue to impede progress in science. These mistakes go back many centuries and are not easy to shake off. Let me pose the classic mind-body (or mind-brain) debate in terms favored today by both philosophers and computer scientists and try to correct some of those famous mistakes.
Could a machine think?
If by a “machine” we mean any physical system capable of performing certain functions, and if by “thinking” we mean the sort of thought processes that I am engaging in right now, then it seems to me quite obvious that the brain is precisely a machine. So some machines—human and certain animal brains—clearly do think already.
Furthermore, I see no reason in principle why we couldn’t produce an artifact, an artificial machine, that could think. Just as the heart, though a biological organ, is also a machine that pumps blood, so an artificial heart, though not a biological organ, is a machine that duplicates the causal powers of the heart to pump blood. I see no reason in principle why we could not build an artificial brain as we have built an artificial heart.
Notice that to produce an artificial brain, we must produce a machine that does what the brain does. The machine would have to duplicate and not merely simulate the causal powers of the brain to cause conscious thought processes, for example. It might duplicate these causal powers in some other medium, but at least it would have to have the threshold causal powers that the brain has to get us over the hump from neurobiological processes, described at the level of neurons and synapses, to actual conscious thought processes.
Well, if you agree that machines can think, then why are we having all these debates about computers? Surely a computer is a machine. Why couldn’t a computer think?
There is no reason why something couldn’t both be a computer and think. Indeed, if by computation we mean the ability to carry out calculations using symbols, then it is obvious that I am a computer, because for example I can add 2 + 2 to get 4, and I can also think. So, there isn’t any question about whether or not computers can think. We are thinking computers.
The question at issue between me and many people in Artificial Intelligence is this: Could a machine think solely in virtue of carrying out computational operations, as computation is defined in the current literature? Computation so defined consists entirely in the manipulation of meaningless symbols. These are normally thought of as zeroes and ones, but any symbol will do. So, the question is not “Could a computer think?” in the sense of “Could something be both a computer and a thinking thing?” The answer to that question is clearly yes.
The proper question is, “Could computation defined entirely as manipulation of formal symbols be constitutive of thinking?” The answer to that question is no. Just shuffling meaningless symbols is not sufficient for thinking. I proved this almost 20 years ago with what has become known as “The Chinese Room Argument.” Just imagine that you carry out the computer program, shuffling symbols, for some language you don’t understand. In my case, I don’t understand Chinese, so I imagine myself shuffling what are, to me, meaningless symbols, even though to Chinese people they are meaningful. I manipulate these symbols according to a rule book written in English—that is, according to the computer program. Now, though I might give the right Chinese answers to the right Chinese questions, I don’t understand Chinese. And if I don’t understand Chinese on the basis of implementing the computer program for understanding Chinese, then neither does any computer solely on that basis. Therefore, simply carrying out computational operations will not by itself guarantee that either I or some other kind of machine will be thinking.
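The room’s procedure is purely formal, and that is easy to make vivid in code. Below is a minimal sketch of my own (the question-and-answer pairs are invented placeholders standing in for the rule book, not a real conversational corpus): the program pairs input symbols with output symbols by their shape alone, and nothing in it attaches any meaning to either side.

```python
# A caricature of the Chinese Room: the "rule book" is a lookup table
# pairing question symbols with answer symbols. The interpreter, like
# the person in the room, matches shapes; it attaches no meaning to them.
# The Chinese strings below are illustrative placeholders.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "天空是什么颜色？": "天空是蓝色的。",      # "What color is the sky?" -> "The sky is blue."
}

def chinese_room(question: str) -> str:
    """Return whatever answer the rule book pairs with this question.

    The function never consults what the symbols mean -- only whether
    the incoming string of marks matches an entry in the table.
    """
    return RULE_BOOK.get(question, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(chinese_room("你好吗？"))
```

The lookup succeeds or fails on the marks themselves; however many entries the table had, the procedure would be the same, which is the point of the thought experiment.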
So, to repeat, the question is not “Could a computer think?” The answer to that question is yes. The question is “Is computation, as defined in computer science, by itself sufficient for thinking?” and to that the answer is no.
Paradoxically, the problem with computers is not that they are machines, but that “computation” does not name a machine process in the sense in which, for example, internal combustion names a machine process. Computation is a logical or mathematical process defined entirely in terms of the manipulation of formal symbols. And, though it requires a physical system to manipulate these symbols, any physical system capable of manipulating the symbols will do. The computational process is indifferent to the physics of its realization.
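To make this indifference to physics concrete, here is a toy computation of my own devising: adding one to a binary numeral, defined entirely as rules for rewriting marks. The rules mention only the shapes “0” and “1”; any physical system that can hold and rewrite such marks, whether silicon, gears, or a clerk with pencil and paper, implements the very same computation.

```python
def increment(symbols: str) -> str:
    """Add one to a binary numeral, treated purely as a string of marks.

    The rule mentions only symbol shapes: flip trailing '1's to '0',
    then flip the first '0' reached to '1'; if none remains, prefix '1'.
    Nothing in the rule refers to the physics of whatever carries the marks.
    """
    digits = list(symbols)
    i = len(digits) - 1
    while i >= 0 and digits[i] == "1":
        digits[i] = "0"       # carry: 1 becomes 0
        i -= 1
    if i >= 0:
        digits[i] = "1"       # absorb the carry
    else:
        digits.insert(0, "1")  # overflow: a new leading mark
    return "".join(digits)

print(increment("1011"))  # prints 1100
```

That the marks here happen to denote numbers is something we bring to the symbols; the procedure itself is blind to it.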
Well, if computation isn’t sufficient for thinking, then what is? What is the relation between the mind and the brain, if it is not the same as the relation of the computer program to the hardware? At least the computational theory of the mind has a solution to the mind-body problem. The mind is to the brain as the computer program is to the computer hardware. If you are rejecting that solution, you owe us an alternative solution.
We must keep reminding ourselves that the brain is above all a biological organ, and like other biological organs it works on specific biochemical principles. There are certain special principles of the brain having to do with neurons and synapses. We don’t know the details of how the brain works, but we know the broad outline. It is this: All of our mental states, everything from feeling pains to reflecting on philosophical problems, are caused by lower level neuronal firings in the brain. Variable rates of neuron firing at synapses, as far as we know anything about it, provide the causal explanation for all of our mental life. And the mental processes that are caused by neurobiological processes are themselves realized in the structure of the brain. They are higher level features of the brain in the same sense that the solidity of this table or the liquidity of water is a higher level feature of the system of molecules of which the table or the water is composed.
To put this in one sentence, the solution to the traditional mind-body problem is this: Mental states are caused by neurobiological processes and are themselves realized in the system composed of the neurobiological elements.
But doesn’t that have the consequence that consciousness cannot make any difference to our behavior? That is, if it is all explainable in terms of neurons and synapses, then the neurons and synapses are doing all the work and consciousness is just going along for the ride. Consciousness doesn’t make any difference.
In any science, the level of explanation counts. That there are lower levels of explanation of a phenomenon does not mean that the higher levels are inefficacious or unreal. In the germ theory of disease, for example, it is the microorganism that causally explains the disease; in the DNA theory of heredity, the explanatory level is that of the molecule and not that of the subatomic particles. Now, all physical phenomena, including genes, DNA, consciousness, and everything else, are explainable in terms of more fundamental particles, until you reach the most fundamental subatomic particles of physics. But this does not show that higher levels, such as the levels of molecules and microorganisms, are inefficacious or unreal.
Exactly this point needs to be made about the brain. Of course, higher level processes, such as thought processes, are explainable in terms of more basic principles of atomic physics. But that doesn’t mean that the higher levels of neurons or thoughts are inefficacious or unreal. On the contrary, just as the solidity of the piston is explained by the behavior of the molecules of the metal alloys, but all the same solidity functions causally in the operation of the car engine, so my conscious intention to raise my arm can be explained causally in terms of neuron firings and synapses, but all the same it functions causally in making my arm go up. Just watch as I now raise my arm.