Saturday, January 01, 2005

Still Deferring to Descartes?

Mind: A Brief Introduction

By: David Kelley, Ph.D.

Philosophy of mind is the effort to understand the nature and constitution of the mind, its relationship to the body, and its place in the natural world in general. As a branch of philosophy, it relies on observations available to all of us, rather than on the specialized methods of observation and experiment employed in neuroscience, psychology, and other sciences. What philosophers of mind can contribute, at least potentially, derives from their concern with careful analysis of the concepts employed in describing mental phenomena and with tracing the logical implications of rival theories about the mind. As with the other major fields in philosophy, the core questions have been investigated and debated since antiquity. If Plato and Aristotle returned to life and visited a seminar in contemporary philosophy of mind, they might not recognize some of the technical terminology, but they would certainly understand the underlying issues and theories.

John Searle’s new book, as the title suggests, is an introduction to the field. Its chief focus is the classic mind-body problem, which might better be described now as the mind-brain problem: the question of how mental phenomena like conscious experience and beliefs about the world relate to neurological states. This topic, which takes up the better part of the book, is the only one on which Searle systematically presents the full range of contemporary views. Other topics, covered more briefly, include free will, the self, the nature of consciousness, and the status of unconscious memories, beliefs, and desires. 

Searle has written previously on all these issues. A professor of philosophy at the University of California at Berkeley, Searle is one of the most prominent thinkers in the philosophy of mind. He is justly famous both for the creativity of his own work and for his unsparing criticism of the dominant views in the field. In this respect, Mind: A Brief Introduction can best be seen as an introduction to Searle’s own viewpoint. Much of the material is taken, sometimes paragraph by paragraph, from earlier works, especially Intentionality, The Rediscovery of the Mind, and Rationality in Action. Readers who are familiar with those books may well feel, as I do, that something is lost in the new condensation; the pressure of covering a lot of ground in a short space has a cost in terms of depth, clarity, and the sheer pleasure of watching Searle untangle the philosophical knots. Nevertheless, the new book is a good introduction to the elements of his philosophy of mind. 


Searle’s outlook is centered on a view of how consciousness relates to activity in the brain, one of the most active and exciting areas of neuroscience research today. He offers a brief summary of current theories, drawn from a recent article for the Annual Review of Neuroscience. Some theories take a “building block” approach in which conscious awareness arises separately, in each sense modality, as processing in the sensory pathways reaches a certain stage. Other theories take a “unified field” approach in which central mechanisms (for example, what is called the reticular-thalamic system) sustain consciousness, with sensory input providing content that channels the stream of consciousness in one direction or another. 

Suppose that in the course of future research, the evidence points decisively toward one such theory. Suppose we are able to identify the specific neural states that are both necessary and sufficient for being conscious in general (as opposed to being asleep or in a coma) and for being consciously aware of particular objects and events in the environment (the visual awareness of an object’s motion, for example, as opposed to its shape). Even with such an advance in scientific understanding, we would still have a philosophical question: Are the conscious state and neural state identical? We are imagining that we know exactly how, when, and why a given neural configuration gives rise to a given conscious state, but we still need to ask whether the conscious state involves anything over and above the neural activity. 

The core problem here is to explain how the features of consciousness that we know about from our experience relate to neural activity of which we are not directly aware. For Searle, that difference in perspective is the essential issue. The anatomy and physiology of the brain can be observed by anyone with the proper methods; even the person himself could observe an operation on his brain through mirrors, or watch the same PET scan that the neurologist examines. But no one else can directly observe that person’s conscious experience. No one else can observe the qualitative content of his perceptions, the feel of his emotional reactions, his sense of causal efficacy when he engages in action. Searle takes this to mean that “Consciousness has a first-person ontology. It only exists as experienced by a human or animal subject and in that sense only exists from a first-person point of view.” 

So what is the relationship between this first-person feature of consciousness and the publicly observable character of the brain? 


Searle argues that our heritage from the 17th-century French philosopher René Descartes has made this appear an insoluble problem. Descartes believed that minds and bodies are radically different kinds of things, with distinct and incompatible essential attributes: Bodies are inherently material, moving through space and divisible into particles; minds are inherently conscious, and consciousness is neither extended in space nor divisible into parts. As independent entities, mind and body can each exist without the other. 

Some contemporary thinkers accept this dualistic view, most often in an amended version called “property dualism,” which holds that while there is only one entity in the equation—the brain or, in some versions, the person—that entity has both mental and physical properties, and those properties are as radically distinct as Descartes alleged.

Thus the subjective, first-person aspect of consciousness must be something distinct from, over and above, the underlying neural properties. Most philosophers, however, as well as many if not most scientists, reject dualism in favor of materialism: There is nothing over and above the neural operations of the brain. “There is a sense in which materialism is the religion of our time,” Searle observes. Yet materialists, he argues, have not exorcized the ghost of Descartes. By embracing one side of the Cartesian dichotomy between matter and consciousness, with its reductive view of matter, they cannot offer any plausible account of the mind. The dichotomy leads them to try to explain away, or even to deny outright, the reality of consciousness. 

The leading example of such materialism is the computational theory that until recently was the dominant paradigm in cognitive science. That theory posits an elaborate system of information-processing that operates below the level of consciousness, like the software in a computer. At bottom, according to this theory, the only reality is the firing of neurons. But we interpret neural states as having information content based on the functions they perform, and we interpret the sequences of states as logical combinations and transformations of the information content. What happens in the visual pathways when we see an object, for example, is like what happens in Microsoft Excel when we add a column of numbers, except that the brain has been programmed by nature rather than by the wizards of Redmond. 

Searle is the most prominent critic of this theory. In his chapter on materialism, readers will find a summary of the debate over his famous “Chinese room” thought experiment, which has probably generated more discussion than any other single work in the philosophy of mind. Searle’s own theory, which he calls biological naturalism, is an effort to get past the Cartesian heritage. Conscious states are real phenomena, he says, with all the features we observe from the inside. Those features include the qualitative character of sensory experience and the unity of perceptions, thoughts, and feelings within a common field of awareness. Consciousness is also intentional, in the philosopher’s sense of the term: It has an object, it is of or about something in the world. 

In light of these features, conscious states are not reducible to neural ones. Yet they are caused by and realized in the brain; they are higher-order properties arising from the lower-level activity of neurons. “The consciousness in the brain is not a separate entity or property; it is just the state that the brain is in.” The relation between a conscious state and its neural substrate is like the relation between the shape of a table and the molecules it is made of. Your dining-room table has the shape it does because of the way the molecules are arranged; it is solid and can support the dinnerware because of the strength of the molecular bonds. But that does not mean the shape and solidity are not perfectly real features of the table. 

Searle’s view is an effort to recognize the difference between mental and physical without making that difference a dichotomy. “Once you revise the traditional categories to fit the facts, there is no problem in recognizing that the mental qua mental is physical qua physical.” As he explained the seeming paradox in a recent article, “Why I am Not a Property Dualist,” in the Journal of Consciousness Studies:

Once you abandon the assumptions behind the traditional vocabulary it is not hard to state the truth. The universe does consist entirely in physical particles in fields of force…, these are typically organized into systems, some of the systems are biological, and some of the biological systems are conscious. Consciousness is thus an ordinary feature of certain biological systems, in the same way that photosynthesis, digestion, and lactation are ordinary features of biological systems. 

In this respect, Searle has more in common with Aristotle than Descartes. Aristotle saw nature as a realm of things with complex, hierarchical structures, in which each level of structure has a distinctive form uniting its material components and serves in turn as matter for higher, more structured levels. A living organism, in particular, exhibits many levels of organization, with properties and causal powers emerging at each level. From that vantage point, Aristotle saw mental phenomena such as consciousness as powers of certain animals whose physical constitution and mode of life have reached a certain level of complexity. Sensation, perception, memory, and—in humans—rational thought are of a piece with digestion, reproduction, or locomotion as powers that enable animals to survive. 

Of course, Aristotle did not have the body of scientific knowledge that we have today, and some aspects of his philosophy had to be abandoned before modern science could be developed. Yet his basic outlook seems more in keeping with contemporary science than Descartes’s—not only with the increasing interest in complexity and self-organization but with the very division of science into branches, from physics to biology to psychology to economics, that study phenomena at different levels of organization. 


Searle’s view is an appealing one. Yet there is a puzzle at the heart of it, and that puzzle, I would argue, indicates that he has not freed himself entirely from the grip of Cartesianism. 

Searle holds that mental states are irreducible to neural states in an ontological sense (they represent different kinds of being); but they are causally reducible. That is, the subjective, first-person character (or being or ontology) of consciousness means that it cannot be equated with anything in the physical, third-person world that is open to observation by everyone. But this irreducibility does not give mental states any causal power over and above the causal powers of the brain. As the accompanying excerpt makes clear, Searle thinks that when we explain a person’s action in terms of his conscious intention—say, to raise his arm—we are simply giving a higher-level description of the same sequence that we could describe in more detail at the level of events in the motor cortex, muscles, and so on. In short, the unity and first-person aspect of conscious experience, including the experience of acting in the world, do not in themselves function as independent causes; they add nothing to the neural machinery. 

Then what is the point of consciousness? Is it merely a byproduct of the evolution of the brain, real but causally inert? That certainly does not seem to be Searle’s position. In several of the most interesting chapters of the book, he discusses the nature of deliberate rational action, the possibility of free will, and the nature of the self, and in each of these areas he seems to attribute a vital causal role to consciousness. 

To illustrate the nature of rational action, Searle distinguishes two quite different ways of explaining a person’s action. Suppose I am late for a meeting with you, and you ask me why. If I say that I got stuck in a traffic jam, I have explained my action in terms of a cause over which I had no control. If I say that I witnessed a traffic accident and stopped to help the victim, I have explained my action in terms of a reason that led me to act deliberately. Searle notes several factors that distinguish reasons from causes. Reasons link our actions to our goals—in this case the goal of helping a stranger, even at the cost of keeping you waiting. To the extent that reasons operate as causes of my action, moreover, the content of my thoughts and aims is part of the cause. In the other case, by contrast, it is the traffic jam itself that caused my delay, rather than any thoughts I may have had about it. But reasons do not operate as sufficient causes. My choice to help the accident victim resulted from a desire to help and a belief that I could be of use, but the desire and the belief did not operate on me like the force of gravity; I weighed them in my mind against the desire not to keep you waiting. 

As Searle puts it in discussing free will, there is a gap between our reasons and our actions, a gap that we experience as our freedom to choose. “Whenever we decide or act voluntarily, which we do throughout the day, we have to decide or act on the presupposition of our own freedom. Our deciding and acting are unintelligible to us otherwise.” In discussing the concept of the self, he goes further. Many philosophers, most notably the 18th-century Scot David Hume, have been skeptical of that concept. How can there be a self as some kind of enduring inner entity over and above the flow of perceptions, feelings, and thoughts that change from moment to moment? But, as with the conviction of our own freedom, Searle argues that a sense of an enduring “I” that weighs reasons, makes decisions, and initiates action is inescapable. At least, it is inescapable from our first-person awareness of ourselves as subjects of experience and as agents of action. 

So if Searle is right—and I certainly find his argument persuasive—then consciousness does indeed play a causal role in human action. The unity of a person’s field of awareness, which allows him to bring rival goals and diverse information to bear on a decision, is something the person himself experiences but is not observable from the outside. The same is true of the difference we experience between our sense of agency when we act for a reason and the sense of passivity when we are moved by outside factors or by inner compulsions. Yet Searle also holds, as we saw, that the causal role of consciousness is nothing over and above the causal role of the neural substrate. 

Searle himself states the problem this way: 

In earlier chapters I claimed that all of our psychological states without exception at any given instant are entirely determined by the state of the brain at that instant…. [T]here are not two separate sets of causes—the psychological and the neurobiological. The psychological is just the neurobiological described at a higher level.

But if the psychological freedom, the existence of the gap, is to make a difference in the world, then it must somehow or other be manifested in the neurobiology. How could it be? We have already seen that the neurobiology is at any instant sufficient to fix the total state of psychology at that instant, by bottom-up causation. 

Searle considers the possibility that quantum indeterminacy may create an opening in the physical chain of causality, an opening that the brain exploits to create the possibility of volition. He recognizes that this is a difficult hypothesis to swallow, but what is the alternative? 


It seems to me there is an alternative, and it is suggested by his reference to “bottom-up causation.” Searle is speaking here not of causal relationships between one event and a later event, but of the relationships among the structures and properties of a thing at a given point in time. In your dining-room table, for example, the microscopic bonds among molecules are the cause of the table’s macroscopic solidity, and that causal relation holds at any moment. This is a synchronic (occurring at one time), not a diachronic (occurring over time) mode of causality, and it is typical of the relations among different levels in a complex structure. Searle invokes it to explain the way neural activity at the level of individual cells gives rise to consciousness at the level of the whole system (with many intervening levels involving activity in specific locations in the brain). And he assumes that the arrow of causality can run in one direction only, from lower levels of the system to higher levels. 

Is that assumption necessary? It has certainly been a common assumption among philosophers and scientists, and they can point to many examples of successful bottom-up explanations. Physics has shown how the properties of atoms explain the chemistry of molecules. Biochemists have shown how properties of organic molecules govern the activity of cells. And—to jump many levels higher in the scale of nature—economists have shown how the dynamics of two-party trades explain macroeconomic phenomena like market-wide prices. Recently, however, there has been a lively debate about the possibility of “downward” causation. Can the macroscopic properties of organized structures like the brain affect the behavior of lower-level components? 

Consider an example. Many studies of depression have converged on the conclusion that a combination of therapy and antidepressants tends to have the best results. Why? Here is a plausible interpretation of the findings: The chemical effect of antidepressants on the neurotransmitters gives the patient an initial lift in mood—a case of bottom-up causation. That relief enables the patient to take advantage of the therapist’s help in changing his negative thought patterns and solving the problems in his life—a case of sideways, diachronic causation at the psychological level. Finally, the changes at this level alter the brain chemistry so that the patient no longer needs the antidepressant to maintain the right balance of neurotransmitters—a case of downward causation. 

Of course, anyone committed to the assumption that all synchronic causality is bottom-up will argue that the apparent top-down effect is just a shorthand description of what is really a complex sequence of events at the neural level. But why should we accept this assumption as a universal truth? The general issue is represented in the diagram below:


[Diagram: an event to be explained, placed in a field of potential causal factors arranged along two axes—a temporal dimension (earlier to later) and a structural dimension (lower to higher levels of organization).]

Any event, property, or structure to be explained can be placed in an environment of potential causal factors, organized in terms of two dimensions: the temporal dimension of earlier and later events, and the structural dimension of lower and higher levels. Most thinkers consider it self-evident that the diachronic arrow of causality can point in one direction only: from earlier to later, left to right in the diagram. But is it equally self-evident that the synchronic arrow is also unidirectional, always pointing from lower to higher levels, bottom to top? 

Like many philosophers of mind, Searle assumes that an action such as raising one’s arm could be fully explained at the level of individual neurons and muscle cells. But, of course, we could not confirm the assumption, even for an action as simple as this, by actually measuring and tracing the activities of each of the many millions of cells involved in the relevant circuits, much less the activities of the billions of cells in the rest of the nervous system that potentially affect the behavior of cells in those circuits. The belief in bottom-up causality functions as an a priori assumption going in. 

I see no reason to accept this belief as an axiom, and I suspect that it is a lingering effect of the Cartesian conception of nature. An element in that conception was a mechanistic and reductionist view of matter in which all the properties of material things result from combinations of their constituents. That view goes back to the ancient Greek Atomists, and its revival in the 17th century was associated with the birth of modern science. It survives today as an article of faith, an aspect of materialism as “the religion of our time.” If we abandon the assumption, however, Searle’s analysis of the causal role of consciousness would be easier to square with his naturalistic view of consciousness as a high-level feature of the brain as a biological system. That may seem a radical step, but if the only alternative is to invoke quantum indeterminacy, it is surely worth considering. 

Whatever questions can be raised, however, Searle’s work in the philosophy of mind is at once a major contribution to philosophy and a crucial framework for interpreting neurobiology. Across a wide range of issues, Searle is insightful, well-informed, thoughtful, and thought-provoking. Mind: A Brief Introduction can be read with profit by anyone studying mind and brain from the perspective of virtually any discipline. It is an introduction that will open doors. 


From Mind: A Brief Introduction by John R. Searle. © 2003 by Oxford University Press. Reprinted with permission from Oxford University Press.

If we say that the mental is irreducible to the physical, then it looks like we are accepting dualism. But if we say that the mental just is physical at a higher level of description, then it looks like we are accepting materialism. The way out, to repeat a point I have made over and over, is to abandon the traditional vocabulary of mental and physical and just try to state all the facts. The relation of consciousness to brain processes is like the relation of the solidity of the piston to the molecular behavior of the metal alloys, or the liquidity of a body of water to the molecular behavior of H2O molecules, or the explosion in the car cylinder to the oxidation of the individual hydrocarbon molecules. In every case the higher-level causes, at the level of the entire system, are not something in addition to the causes at the microlevel of the components of the system. Rather, the causes at the level of the entire system are entirely accounted for, entirely causally reducible to, the causation of the microelements. This is as true of brain processes as it is of car engines, or of water circulating in washing machines. When I say that my conscious decision to raise my arm caused my arm to go up, I am not saying that some cause occurred in addition to the behavior of the neurons when they fire and produce all sorts of other neurobiological consequences, rather I am simply describing the whole neurobiological system at the level of the entire system and not at the level of particular microelements. The situation is exactly analogous to the explosion in the cylinder of the car engine. I can say either the explosion in the cylinder caused the piston to move, or I can say the oxidization of hydrocarbon molecules released heat energy that exerted pressure on the molecular structure of the alloys. These are not two independent descriptions of two independent sets of causes, but rather they are descriptions of two different levels of one complete system. 
Of course, like all analogies, this one only works up to a certain point. The disanalogy between the brain and the car engine lies in the fact that consciousness is not ontologically reducible in the way that the explosion in the cylinder is ontologically reducible to the oxidization of the individual molecules. However, I have argued earlier and will repeat the point here: the ontological irreducibility of consciousness comes not from the fact that it has some separate causal role to play; rather, it comes from the fact that consciousness has a first-person ontology and is thus not reducible to something that has a third-person ontology, even though there is no causal efficacy to consciousness that is not reducible to the causal efficacy of its neuronal basis. 

We can summarize the discussion of this section as follows. There are supposed to be two problems about mental causation: First, how can the mental, which is weightless and ethereal, ever affect the physical world? And second, if the mental did function causally would it not produce causal overdetermination? The way to answer these questions is to abandon the assumptions that gave rise to them in the first place. The basic assumption was that the irreducibility of the mental implied that it was something over and above the physical and not a part of the physical world. Once we abandon this assumption, the answer to the two puzzles is first that the mental is simply a feature (as the level of the system) of the physical structure of the brain, and second, causally speaking, there are not two independent phenomena, the conscious effort and the unconscious neuron firings. There is just the brain system, which has one level of description where neuron firings are occurring and another level of description, the level of the system, where the system is conscious and indeed consciously trying to raise its arm. Once we abandon the traditional Cartesian categories of the mental and the physical, once we abandon the idea that there are two disconnected realms, then there really is no special problem about mental causation. There are, of course, very difficult problems about how it actually works in the neurobiology, and for the most part we do not yet know the solutions to those problems.

About Cerebrum

Bill Glovin, editor
Carolyn Asbury, Ph.D., consultant

Scientific Advisory Board
Joseph T. Coyle, M.D., Harvard Medical School
Kay Redfield Jamison, Ph.D., The Johns Hopkins University School of Medicine
Pierre J. Magistretti, M.D., Ph.D., University of Lausanne Medical School and Hospital
Robert Malenka, M.D., Ph.D., Stanford University School of Medicine
Bruce S. McEwen, Ph.D., The Rockefeller University
Donald Price, M.D., The Johns Hopkins University School of Medicine
