Every discipline needs an outspoken disciple, an apostle so convinced of the field’s promise of success that any nay-saying is dismissed as propaganda promulgated by the enemy and so erudite that she can convince all in her reach of the truth of her vision. Neuroscience has Patricia Churchland, professor of philosophy at the University of California, San Diego. Churchland has made a career of substituting brain science for the philosophy of mind, and has earned her reputation by showing how many of the classic questions in philosophy can and should be rewritten to reflect current scientific knowledge. Brain-Wise offers, in the form (more or less) of an undergraduate textbook, a tour of her corner of the world. In particular, it showcases recent work in neuroscience that bids to change the way we think about our selves and the world.
We can divide contemporary analytic philosophy into three categories: the study of what there is (metaphysics); the study of what we know (epistemology); and the study of what is good (ethics). This book tackles the first two areas head-on and genuflects to the third with a brief look at the philosophy of religion. As Churchland admits, what she covers in philosophical metaphysics is idiosyncratic; the usual topics concerning the fundamental nature of the universe or of time have been eschewed in favor of discussions of the nature of self. No surprise there, given the subject matter of neuroscience.
The first half of Brain-Wise is concerned with what makes me me. Descartes held that we are things that think, will, desire, and imagine; we are nonphysical, conscious beings only accidentally attached to mechanical bodies. Science has a different view. It begins with the assumption that there are only mechanical bodies in the world and reasons from there. In doing so, it appears that the “self” is jettisoned.
Churchland counters that our very notion of self is confused and confusing. We employ the term inconsistently, using “self” to refer to our bodies (“I hurt myself”), to a person (“I am proud of myself”), to a cluster of persons (“my career self doesn’t fit well with my mommy self”), or to a project (“I want to improve myself”). Which of these might we mean when we talk about the self of Descartes, the thing that allegedly exists continuously across time?
But Churchland puts aside worries about the continuity or identity of self by focusing on our capacities for self-representation. Viewed in this fashion, the self need no longer be a thing or even singular. It could be—and likely is—no more than a series of ways we remember, project, and think about our bodies and their mental lives. We carry in our heads models of how we fit into the environment and of our beliefs and goals, and we use those to rehearse internally what particular behaviors might attain those goals. These emulator systems, as she calls them, enable us to predict the world around us and to make effective decisions about how to act in it without costing us too much in terms of muss or fuss. As an added bonus, these emulator systems give us a sense of self.
If, through an injury or other problem with our brain, we lose an emulator system, or even part of one, we also lose the corresponding self-representation. For example, some patients with a lesion in their brain’s right parietal lobe will deny that one of their limbs is their own. Known as “anosognosics,” these patients can appear completely normal, except that they cannot understand how someone else’s arm got attached to their bodies. They have damaged the part of the system or systems that maps their bodies, and, as a result, their sense of physical self is diminished.
Notions of self are entwined with our understanding of awareness, for what we really care about is how things seem to us consciously. If we are going to view the self and other mental phenomena as the result of a neurobiological process, then that must go for consciousness, as well. Consequently, Churchland spends some time outlining what we do know about brains and consciousness.
Unfortunately, the short answer is that we do not know much. Here, I think, more philosophy could help. Churchland tends to paint philosophy with broad, cartoonish strokes, and to discuss only skeptics and stalking horses. What is missing in this, and in much of the book, is a consideration of how advances in philosophy of science, particularly in the philosophy of neuroscience, might actually help to answer some of neuroscience’s more difficult theoretical challenges. There is no better example of this lacuna than her chapter on the neurobiology of consciousness.
Here Churchland begins by illustrating how our folk understanding of consciousness is confused and fractionated and, hence, likely to change as science progresses. She suggests that there are two approaches to studying consciousness.
First, there is the direct approach, in which investigators try to locate the parts of the brain that are active when we are conscious, but not otherwise. If we can find all of those places, then many believe we will have localized consciousness in the brain—one giant step, according to Churchland, toward explaining it. Second, there is the indirect approach, in which we proceed by triangulation to find areas or activities of the brain that seem to be related to awareness, based on information obtained from other kinds of experiments or other fields.
Elegant studies of late do seem to tie the experience of consciousness to the activity of single cells in the brain, so the direct approach may appear to be paying off. But the results fit uneasily, at best, with what has been learned through other research techniques intended to localize consciousness in the brain. For example, research in artificial neural networks shows that many of the mental functions we normally associate with consciousness (memory, attention, assigning meaning to stimuli) require massive—and therefore complex—feedback loops. This process approach to studying consciousness belies the pinpoint approach of single-cell studies. If we toss into this mix the additional hypothesis that neuronal groups have to reach a certain level of activation before awareness occurs (the threshold effect), we end up, so far as I can tell, with a theoretical mess. It no longer is clear where one should look in the brain for consciousness. In single cells? Activation intensities across groups? Feedback loops?
Churchland does little to sort out this mess. Instead, she simply lists the possibilities without worrying about (or even noticing) that they do not fit together well. Recently, philosophers of science have struggled with how to put together effective explanations in neuroscience and use different sorts of data to determine which level of organization would best account for the phenomenon in question. It would have been nice had Churchland availed herself of some of this research—although, to be perfectly honest, I believe that consciousness research today is all over the theoretical map. We are far from developing even a line of attack on a framework that would support a comprehensive theory for it.
Using an indirect approach to look for the explanation of consciousness fares little better. Here, cognitive scientists seek to explain attention, memory, dreams, problem solving, self-representation, and so forth, expecting that, once these are successfully accounted for, a theory of consciousness will become evident. Unfortunately, however, the explanations they come up with often have little to do with one another; how to fit them together to tell us something about consciousness is a bit mystifying. The whole matter becomes more confusing still when Churchland gives us different theoretical frameworks based on the direct and indirect approaches without any comment on how they might connect—if they do.
KNOW THY BRAIN
Churchland next tackles epistemology—the theory of knowledge—or at least her version of it. I fear that most epistemologists in philosophy would not recognize her discussion as compatible with their research. Traditional epistemology is concerned with questions like what truth is, how propositions can be justified, and what counts as an adequate explanation. Churchland is interested, instead, in how brains represent the world around them and how they learn. These questions may be tangentially related to epistemological concerns, but they are by no means central.
Be that as it may, Churchland makes the case that we need to claim that brains actually contain models of the world (and are not mere stimulus-response machines) and sets out a theory of what those neural models, or representations, might be. It is important to keep in mind, she says, that our brains have evolved to help us feed, flee, fight, and reproduce. This contrasts with digital computers, say, which are designed to compute. The cognitive science of yore generally took minds to be analogous to computers and so tried to build theories of representation that spanned what both computers and humans do. Churchland effectively skewers that folly.
Having cleared the way conceptually, she goes on to explain how neurons work, implicitly operating under the assumption that individual neurons are going to be the representational engine that drives our brains. She acknowledges that this assumption is not universally accepted in neuroscience, but it does provide a good starting point. The difficulty with looking at the behavior of individual neurons is that it rapidly becomes extremely complicated. Neurons do not just activate and fire, as one might imagine; they jitter around constantly. Nor is it clear what of this jittering is simply biological noise and what is important for information processing.
For that reason, many investigators of brain representation carry out their research by looking at artificial neural nets, computer programs designed to mimic some aspects of neural interactions. These are much simpler and easier to control than what Mother Nature has given us, so it is much easier to design and carry out experiments on them. On the other hand, we cannot be sure that the behaviors we get out of them are relevantly similar to what we see in the brain. At any rate, representations in neural nets look like “hot spots” in a multidimensional imaginary space—something quite different from what traditional philosophers envision when they speak about representations. Whether this approach to explaining representations will replace traditional ones remains to be seen.
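The picture of a representation as a “hot spot” in a multidimensional space can be made concrete with a toy example. The sketch below is my own illustration, not from the book: the network, its weights, and its dimensions are all invented for the demonstration. The idea is simply that a representation is the network’s activation vector—a point in a high-dimensional space—and that similar stimuli land near one another.

```python
import numpy as np

# Illustrative sketch (not from the book): a "representation" in a small
# artificial neural net is just the hidden layer's activation vector --
# a point in a multidimensional space. Similar inputs land near each
# other (a "hot spot"); dissimilar inputs land farther away.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))          # fixed random input->hidden weights

def represent(stimulus):
    """Map a 3-dimensional stimulus to a point in 8-dimensional
    activation space (tanh keeps activations bounded)."""
    return np.tanh(stimulus @ W)

a = represent(np.array([1.0, 0.0, 0.0]))
b = represent(np.array([0.9, 0.1, 0.0]))   # slight perturbation of the first stimulus
c = represent(np.array([0.0, 0.0, 1.0]))   # a quite different stimulus

# Nearby stimuli yield nearby points in activation space.
print(np.linalg.norm(a - b) < np.linalg.norm(a - c))  # True
```

Nothing here depends on the particular weights; the point is only that “where the activation lands” plays the role that a sentence-like mental symbol plays in the traditional philosophical picture.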
Part and parcel of the problem of understanding representations is understanding learning—how we get representations in the first place. Churchland rightly points out that philosophy’s old standby, the dichotomy of innate versus learned knowledge, is wrong from the get-go. Put simply: the codes carried in our genes take us, at best, as far as proteins, but organisms develop in particular environments and often require those environments to develop at all. Hence, any remotely complicated structure that we had believed might be innate has to be the product of an environment-organism-gene interaction. No knowledge is going to be, strictly speaking, innate, but none is going to be purely learned either.
Furthermore, the brain landscape is remarkably constant across its various structures. Cortex is cortex is cortex. Areas of the cortex become specialized as a result of experience. The postulated mechanism for carving out such areas is nothing more than what is called “Hebbian learning,” a mechanism first articulated in the 1940s: repeated activation will make future activation easier; decreased activation will make future responses more difficult. But this process requires a learning mechanism driven by a reward system. To learn about an event, we need a reason to repeat it. Things that feel good we repeat. Things that do not, we do not. Much is now known about reward networks, especially those involving fear conditioning in rats. These give us some clues about how all sorts of reward-based learning might be going on in our brains, but a complete explanation will be unquestionably more complicated.
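The Hebbian rule is simple enough to state in a few lines of code. The sketch below is my own illustration, not Churchland’s, and the function and learning rate are invented for the demonstration: a connection is strengthened whenever the two units it joins are active together, so repeated pairings make the downstream response progressively easier to evoke.

```python
# A minimal, illustrative Hebbian update (a sketch, not from the book):
# "cells that fire together wire together." The weight between two
# units grows in proportion to the product of their activations.

def hebbian_update(weight, pre, post, rate=0.1):
    """Strengthen the connection when pre- and postsynaptic activity
    coincide; leave it unchanged when either unit is silent."""
    return weight + rate * pre * post

w = 0.0
for _ in range(20):                  # repeated paired activation...
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w)                             # ...steadily strengthens the synapse

w_silent = hebbian_update(0.0, pre=1.0, post=0.0)
print(w_silent)                      # no postsynaptic activity, no learning: 0.0
```

Note what the sketch deliberately leaves out, and what the review’s point turns on: nothing in the bare rule says *why* a pairing should be repeated. That is the job of the reward system the paragraph above describes.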
The final brief section of Brain-Wise discusses belief in God and the brain. Churchland surveys two common arguments advanced for believing in God, the argument from design and the argument from first cause. That is, we know that God exists because our universe is so orderly that only a supremely intelligent being could have created it and we know that God exists because there had to be an original cause of everything. Both of these arguments fall far short, and Churchland surveys their failures.
More interesting is her discussion of what clinical neurology can tell us about the feeling of revelation. We now know that epilepsy and other sorts of focal brain lesions can make one feel the presence of another and feel great religious fervor. Our feelings of faith, if we have them, probably have a similar brain-based explanation.
Churchland sees a big divide between science and faith, noting that science marshals evidence for its conclusions, while faith does not. She fails to see that science, too, has articles of faith: Scientists believe that their approach to understanding the world is the best way to proceed, and they believe (professionally, at least) that the universe is material and non-magical. Churchland tries to call upon a history of success in science to bolster these basic assumptions so that she can draw a clear distinction between science and religion. But unfortunately for her, we must recognize that the history of science is a history of failures as much as successes. Science is a peculiarly human activity, done by fallible and ambitious agents, working together in a particular social and economic milieu. While it certainly deserves pride of place in our culture, I am not as convinced as Churchland that it is going to answer all the questions we want answered.
So I return to the themes with which I started. Churchland is zealous in her belief in the power of science, so zealous, at times, as to be blinded to science’s shortcomings and to the contributions that contemporary philosophers might be able to make to the project of understanding ourselves. Many philosophers today not only believe that understanding the brain is relevant to understanding the mind, but actually know quite a lot about neuroscience. Their work should not be ignored in figuring out the connections between biology and the humanities. We who work in philosophy have much to offer those engaged in the mind/brain sciences, particularly where neuroscience is currently rudderless. It would have been nice to see more of the connections that others are making and fewer criticisms of the outmoded skeptics, for that would give those interested in the connections a more accurate picture of the current state of affairs.
from Brain-Wise: Studies in Neurophilosophy by Patricia Churchland © 2002 by Patricia Churchland. Reprinted with permission from MIT Press.
Even if I do have noninferential (and hence direct) knowledge of my body, the dualist may argue, I can be wrong about the state of my body, whereas I cannot be wrong about the state of my conscious mind. I have noninferential and infallible knowledge of “discriminable simples.” Such infallibility, the argument continues, entails something metaphysically special about the mind. Note that for the argument to make any headway, the infallibility claim has to be exceedingly strong. “Infallible” here has to mean not just that one is usually right, or even in fact one is always right. It has to mean that one cannot—in principle—ever be wrong. As we shall see, this messes up the dualist....
Pathological conditions give [a] ... dimension of error in the self-reporting of mental states. A patient with a sudden lesion to his primary visual cortex may fail to realize that he is blind, even when this is pointed out to him, and even when he repeatedly stumbles into the bed or the door. Described as blindness unawareness, Anton’s syndrome is a rare, but well-documented deficit. ...
The mystery of Anton’s syndrome is worth dwelling on because visual experience seems so self-evident. If anything seems dead obvious, it is that one can or cannot see, and it is hard to imagine being wrong about which is which. Nevertheless, the patients with Anton’s syndrome present us with a compelling case where the brain is simply in error about whether or not it has visual experiences. To insist that such subjects must be having visual experiences if they think they are, because one cannot be wrong about such things, is of course, to argue in a circle. The question precisely at issue is whether one can ever be wrong about such matters. Prima facie, at least, Anton’s patients present evidence that one can be wrong and that there is a neurobiological reason why they are wrong. More than a mere a priori conviction of infallibility is needed to reverse the hypothesis or reinterpret the data.
From the point of view of cognitive neuroscience, whether or not someone’s recognitional skills deploy explicit reasoning appears less important than certain other properties, such as the neural pathways involved, the contribution of affective components, the nature of cross-modal and top-down effects, how much learning has gone on, and how the brain automates cognitive skills. The predilection...for taking the differences between inferential and noninferential judgments to be a momentous metaphysical division looks about as misguided as believing, as pre-Galilean physicists did, that the difference between the sublunary realm and the superlunary realm marks a momentous metaphysical division concerning the structure of the cosmos.
Of course, there is a difference between superlunary space and sublunary space, and the difference means something to humans, because of the proximity of the Moon to Earth. But it does not mark a metaphysical difference, or even a difference in what principles of physics apply. Similarly, there is a difference between inferential and noninferential judgments, but we should hesitate to attach profound metaphysical significance to these two types of neural processing.
Dualism is implausible at this stage of our scientific understanding. In the business of developing an ongoing research program, dualism has fallen hopelessly behind cognitive neuroscience. Unlike cognitive neuroscience, dualist theories have not even begun to forge explanations of many features of our experiences, such as why we mistake the smell of something for its taste, why amputees may feel a phantom limb, why split-brain subjects show disconnection effects, why focal brain damage is associated with highly specific cognitive and affective deficits. In truth, dualism does not really even try.
To be a player, dualism has to be able to explain something. It needs to develop an explanatory framework that experimentally addresses the range of phenomena that cognitive neuroscience can experimentally address. While it is conceivable that this can be done, the bookies will give long odds against its success. Until at least some distinctly dualist hypotheses are on the table, dualism looks like a flimsy hunch still in search of an active research program.