Thursday, October 01, 1998

Brains and Machines:

Correcting Some “Famous Mistakes”

By: John R. Searle, D.Phil.

Philosopher Searle looks at one of contemporary philosophy’s (and computer science’s) liveliest controversies: Could a machine think? He sees a short slide into the famous and still harmful mistakes of the mind-brain debate.

Could a machine think? In the era of computing machines, that has become our way of posing centuries-old philosophical questions about the mind and body (including the brain).

Philosopher John Searle, author of Minds, Brains, and Science and The Rediscovery of the Mind, says that the answer to that question is obviously “yes,” but the answer to the question that the modern reductionists actually are asking is just as obviously “no.” They keep repeating the “famous mistakes” of the mind-body debate, to the detriment of progress in science.

The relationship between mind and body is one of those areas where persistent philosophical mistakes continue to impede progress in science. These mistakes go back many centuries and are not easy to shake off. Let me pose the classic mind-body (or mind-brain) debate in terms favored today both by philosophers and computer scientists and try to correct some of those famous mistakes.

QUESTION 1:

Could a machine think?

If by a “machine” we mean any physical system capable of performing certain functions, and if by “thinking” we mean the sort of thought processes that I am engaging in right now, then it seems to me quite obvious that the brain is precisely a machine. So some machines—human and certain animal brains—clearly do think already.

Furthermore, I see no reason in principle why we couldn’t produce an artifact, an artificial machine, that could think. Just as the heart, though a biological organ, is also a machine that pumps blood, so an artificial heart, though not a biological organ, is a machine that duplicates the causal powers of the heart to pump blood. I see no reason in principle why we could not build an artificial brain as we have built an artificial heart.

Notice that to produce an artificial brain, we must produce a machine that does what the brain does. The machine would have to duplicate and not merely simulate the causal powers of the brain to cause conscious thought processes, for example. It might duplicate these causal powers in some other medium, but at least it would have to have the threshold causal powers that the brain has to get us over the hump from neurobiological processes, described at the level of neurons and synapses, to actual conscious thought processes.

QUESTION 2:

Well, if you agree that machines can think, then why are we having all these debates about computers? Surely a computer is a machine. Why couldn’t a computer think?

There is no reason why something couldn’t both be a computer and think. Indeed, if by computation we mean the ability to carry out calculations using symbols, then it is obvious that I am a computer, because for example I can add 2 + 2 to get 4, and I can also think. So, there isn’t any question about whether or not computers can think. We are thinking computers.

The question at issue between me and many people in Artificial Intelligence is this: Could a machine think solely in virtue of carrying out computational operations, as computation is defined in the current literature? Computation so defined consists entirely in the manipulation of meaningless symbols. These are normally thought of as zeroes and ones, but any symbol will do. So, the question is not “Could a computer think?” in the sense of “Could something be both a computer and a thinking thing?” The answer to that question is clearly yes.

The proper question is, “Could computation defined entirely as manipulation of formal symbols be constitutive of thinking?” The answer to that question is no. Just shuffling meaningless symbols is not sufficient for thinking. I proved this almost 20 years ago with what has become known as “The Chinese Room Argument.” Just imagine that you carry out the computer program, shuffling symbols, for some language you don’t understand. In my case, I don’t understand Chinese, so I imagine myself shuffling what are, to me, meaningless symbols, even though to Chinese people they are meaningful. I manipulate these symbols according to a rule book written in English—that is, according to the computer program. Now, though I might give the right Chinese answers to the right Chinese questions, I don’t understand Chinese. And if I don’t understand Chinese on the basis of implementing the computer program for understanding Chinese, then neither does any computer solely on that basis. Therefore, simply carrying out computational operations will not by itself guarantee that either I or some other kind of machine will be thinking.
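The rule-following procedure Searle describes can be caricatured in a few lines of code. The following sketch is purely illustrative (the phrases and the rule book are invented for this example, not taken from Searle's paper): a program that emits "correct" Chinese replies by matching symbol shapes against a table, while nothing in the program represents what any symbol means.

```python
# Toy caricature of the Chinese Room: replies are produced by pure
# shape-matching. The rule book below is invented for this sketch.
RULE_BOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你会说中文吗": "会",       # "Do you speak Chinese?" -> "Yes"
}

def room(symbols: str) -> str:
    """Look up the input shapes and copy out the paired shapes.

    Nothing here encodes what any symbol means; the manipulation is
    entirely formal, which is the point of the thought experiment.
    """
    return RULE_BOOK.get(symbols, "请再说一遍")  # default: "please say that again"

print(room("你好吗"))  # prints 我很好 with no understanding of Chinese
```

However fluent the output looks from outside, the program's operation is exhausted by matching and copying uninterpreted tokens, which is exactly the sense of "computation" at issue in the argument.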

So, to repeat, the question is not “Could a computer think?” The answer to that question is yes. The question is “Is computation, as defined in computer science, by itself sufficient for thinking?” and to that the answer is no.

Paradoxically, the problem with computers is not that they are machines, but that “computation” does not name a machine process in the sense in which, for example, internal combustion names a machine process. Computation is a logical or mathematical process defined entirely in terms of the manipulation of formal symbols. And, though it requires a physical system to manipulate these symbols, any physical system capable of manipulating the symbols will do. The computational process is indifferent to the physics of its realization.
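The indifference of computation to its physical realization can be made concrete with a small sketch (my own illustration, not Searle's): the same abstract rule, bitwise negation applied symbol by symbol, is realized in two different "media," characters and booleans, with identical results.

```python
# One abstract computation (negate each bit), two different realizations.
# Which physical tokens carry the symbols is irrelevant to the computation.

def negate_chars(bits: str) -> str:
    """Realize the rule over character symbols '0' and '1'."""
    return "".join("1" if b == "0" else "0" for b in bits)

def negate_bools(bits: list[bool]) -> list[bool]:
    """Realize the same rule over boolean values."""
    return [not b for b in bits]

assert negate_chars("0110") == "1001"
assert [int(b) for b in negate_bools([False, True, True, False])] == [1, 0, 0, 1]
```

Both functions implement the same formal rule even though nothing physical is shared between a string of characters and a list of booleans; in that sense "computation," unlike "internal combustion," names no particular machine process.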

QUESTION 3:

Well, if computation isn’t sufficient for thinking, then what is? What is the relation between the mind and the brain, if it is not the same as the relation of the computer program to the hardware? At least the computational theory of the mind has a solution to the mind-body problem. The mind is to the brain as the computer program is to the computer hardware. If you are rejecting that solution, you owe us an alternative solution.

We must keep reminding ourselves that the brain is above all a biological organ, and like other biological organs it works on specific biochemical principles. There are certain special principles of the brain having to do with neurons and synapses. We don’t know the details of how the brain works, but we know the broad outline. It is this: All of our mental states, everything from feeling pains to reflecting on philosophical problems, are caused by lower level neuronal firings in the brain. Variable rates of neuron firing at synapses, as far as we know anything about it, provide the causal explanation for all of our mental life. And the mental processes that are caused by neurobiological processes are themselves realized in the structure of the brain. They are higher level features of the brain in the same sense that the solidity of this table or the liquidity of water is a higher level feature of the system of molecules of which the table or the water is composed.

To put this in one sentence, the solution to the traditional mind-body problem is this: Mental states are caused by neurobiological processes and are themselves realized in the system composed of the neurobiological elements.

QUESTION 4:

But doesn’t that have the consequence that consciousness cannot make any difference to our behavior? That is, if it is all explainable in terms of neurons and synapses, then the neurons and synapses are doing all the work and consciousness is just going along for the ride. Consciousness doesn’t make any difference.

In any science, the level of explanation counts. That there are lower levels of explanation of any phenomenon does not mean that the higher levels are inefficacious or unreal. It matters, for example, that in the germ theory of disease it is the microorganism that causally explains the disease, and that in the DNA theory of heredity the explanatory level is that of the molecule, not that of subatomic particles. Now, all physical phenomena, including genes, DNA, consciousness, and everything else, are explainable in terms of more fundamental particles, until you reach the most fundamental subatomic particles of physics. But this does not show that higher levels, such as the levels of molecules and micro-organisms, are inefficacious or unreal.

Exactly this point needs to be made about the brain. Of course, higher level processes, such as thought processes, are explainable in terms of more basic principles of atomic physics. But that doesn’t mean that the higher levels of neurons or thoughts are inefficacious or unreal. On the contrary, just as the solidity of the piston is explained by the behavior of the molecules of the metal alloys, but all the same solidity functions causally in the operation of the car engine, so my conscious intention to raise my arm can be explained causally in terms of neuron firings and synapses, but all the same it functions causally in making my arm go up. Just watch as I now raise my arm.



About Cerebrum

Bill Glovin, editor
Carolyn Asbury, Ph.D., consultant

Scientific Advisory Board
Joseph T. Coyle, M.D., Harvard Medical School
Kay Redfield Jamison, Ph.D., The Johns Hopkins University School of Medicine
Pierre J. Magistretti, M.D., Ph.D., University of Lausanne Medical School and Hospital
Robert Malenka, M.D., Ph.D., Stanford University School of Medicine
Bruce S. McEwen, Ph.D., The Rockefeller University
Donald Price, M.D., The Johns Hopkins University School of Medicine

Do you have a comment or question about something you've read in Cerebrum? Contact Cerebrum Now.