Since the publication 30 years ago of his popular book on neuroscience, The Conscious Brain, Steven Rose, Ph.D., has maintained a dual career: researcher on the biochemical alterations in the brain associated with memory and keen observer and commentator on the interactions between science and society.
I mention this up front because, when reading The Future of the Brain, it is important to keep Rose’s two career tracks in mind. If you do not, you could conclude that The Future of the Brain was written by two authors: an articulate, well-informed neuroscientist and a freewheeling and sometimes prickly arbiter of what neuroscience should and should not be doing.
First, the neuroscience. The Future of the Brain could serve as an excellent introductory textbook on the brain’s history, its evolution and development from infancy through old age, and its relationship to mind. Rose discusses in the ﬁrst 50 pages of the book possible scenarios for the origins of life, the transition from protocells to multicellular organisms, the evolution of brains from ganglionic clusters of cells, and the evolution of the mammalian brain that culminated in the human brain. On these subjects, Rose demands heavy lifting from the average reader of popular science.
The going gets easier with the chapter on development, “From 1 to 100 Billion in Nine Months.” Here, the reader finds a concise and well-written overview of early human brain development and an introduction to the book’s main theme:
All of life is about being and becoming; being one thing, and simultaneously transforming oneself into something different. It is really like...rebuilding an aeroplane in mid-ﬂight.
This process of self-creation starts at the moment of conception and continues throughout our lives. As an example, Rose turns our attention to the seemingly simple act of walking:
While fully developed walking is a general-purpose skill, gait learned in the context of a carpeted living room ﬂoor will not be the same as gait learned in the savannah or desert; like all aspects of behavior, walking is walking in context.
Because we all learn to walk in contexts that vary from one person to another, the gaits that we ﬁnally end up with are highly individual. Think back to the last time you recognized a friend from several blocks away simply by observing his gait. It really did not matter if you observed him from front or back. The gait—the walk—said it all. But if that friend had experienced a mild stroke since you last saw him, his gait would have differed in ways signiﬁcant enough to render instant identiﬁcation difﬁcult. In short, human walking, like many other expressions of the brain’s structure and functioning, is not ﬁxed but changes according to life experiences.
In the chapter “Having a Brain, Being a Mind,” Rose focuses on the dynamism of the brain. “Not merely composition but structure and relationships are fundamental to any understanding of the brain,” he writes. Thus, terms such as “architecture” are inappropriate when applied to the brain:
If this be architecture, it is a living, dynamic architecture in which the present forms and patterns can only be understood as a transient moment between past and future. Truly the present state of any neuronal connection, any synapse, both depends on its history and shapes its future.
In “Aging Brains: Wiser Minds?” he argues that aging does not occur uniformly throughout the brain but is prevalent particularly in parts of the frontal lobes, hippocampus, cerebellum, and basal ganglia. More important, the normal age-associated thinning of neurons is not necessarily a bad thing, Rose suggests. “The space that the neuron occupied may become ﬁlled with glial cells or dendritic and axonal branches from adjacent cells, at least during early senescence, thus perhaps actually enhancing connectivity and complexity among the remaining neurons.” If true, this is a hopeful insight often missing from discussions of the brain changes associated with aging.
MISUNDERSTANDING AND NEVER UNDERSTANDING THE BRAIN
The sections of the book on evolution are especially good and highlight an important truth. “Neither nervous systems nor behavior leave much by way of fossil traces,” and one must rely on “inferential arguments,” such as estimating the brain capacities of earlier hominids from internal casts of their skulls. In this murky undertaking, data are thin and interpretation is difﬁcult:
Scarcely more than a couple of suitcases full of bones comprise the evidence over which the almost legendary ﬁghts amongst paleoanthropologists about the dating and reconstruction of hominoid fossils have raged ever since the fossilized remains of quasihumans were ﬁrst discovered in the late nineteenth century.
But this dearth of hard data has not slowed a burgeoning body of literature of contentious claims about the mentalities, behaviors, and social organizations of our ancestors, “based on little more than a few shards of bones, the fossilized remains of digested foods and strangely marked stones and skulls.”
As chief offenders in this enterprise, Rose points to the evolutionary psychologists who, in their enthusiasm to come up with an all-explanatory theory of human origins, have put their own spin on the fossil ﬁndings. To put it mildly, Rose has little patience with this group—led in this country by the much-quoted but little tested Steven Pinker, Ph.D.—and characterizes them as “highly articulate and over-zealous theorists” who have “hijacked” the term “evolutionary psychology” in the interest of offering a reductionist account of human evolution using presumed genetic and evolutionary explanations to “imperialize and attempt to replace all others.” (If I had to guess, I’d suspect that Rose prefers his coffee strong and his liquor neat.)
As an example of the allegedly nefarious activities of the evolutionary psychologists, Rose points to their interpretations of hunter-gatherer societies. “Descriptions that evolutionary psychology offers of what human hunter-gatherer societies were like read little better than Just-So accounts, rather like those museum- and cartoon-montages of hunter-dad bringing home the meat whilst gatherer-mum tends the fireplace and kids.” Rose terms this variant of presentism “Flintstone psychology,” claiming that the imagined past explains the present.
Nor has Rose much patience with facile claims that we will soon be able to deﬁnitively understand or explain the brain. He points to several brain paradoxes as limiting factors in the achievement of such quixotic goals. For instance, brain functions are both localized and distributed. Although the brain is a ﬁxed structure, it also is true that “Today’s brain is not yesterday’s and it will not be tomorrow’s.” This openness of the brain to the inﬂuence of its environment makes it unlikely that scientists will be able to formulate a Uniﬁed Theory of the brain:
Molecules, macromolecules, neuronal and synaptic architecture, ensembles, binding in space and time, population dynamics; all clearly contribute to the functioning of the perceiving, thinking brain. At what level does information processing become meaning, awareness, even consciousness?
Another paradox: We use our brains to understand our brains and in the process discern the relationship of brain to mind. We must ﬁrst aim at assembling “the four-dimensional, multi-level jigsaw of the brain...in space and time” before turning our attention to “the real business of decoding the relationships of mind and brain.” One way of making this easier, suggests Rose, is to stop thinking of the brain as a unitary organ:
It is perhaps a mistake to speak of “the brain” in the singular. In reality it is an assemblage of interacting but functionally specialized modules. It is a plural organ—but one that normally works in an integrated manner.
An additional modiﬁcation in our thinking suggested by Rose involves doing away with the “false simpliﬁcations” of “the environment” and “the gene.” Both are phony terms. How could one even imagine a gene existing completely free of some environment? In addition, environments exist at multiple levels. Even within the fertilized ovum no “genetic program” exists independent of the context in which it is expressed:
The developing fetus and the unique human that it is to become are always both 100 percent a product of their DNA and 100 percent a product of the environment of that DNA—and that environment includes not only the cellular and maternal environment but the social environment in which the pregnant mother exists.
Rose is strongest when he focuses on the explanatory incompleteness of the natural sciences. “There is more than one sort of knowledge, and we all need poetry, art, music, the novel, to help us understand ourselves and those around us.” Yet, as he ruefully observes, this is a concession not every biologist is prepared to make. He expresses particular disdain for recent efforts to explain the emergence of art, literature, and the other humanities as arising from our evolutionary past. For instance, he says this about the consilience theory of Edward O. Wilson, Ph.D.: “The triviality of such an all-embracing and ultimately irrelevant ‘explanation’ would scarcely be worth dwelling on had it not become so much part of cultural gossip.”
IS THERE SUCH A THING AS A MEMORY?
From here, Rose focuses on his research on the molecular basis of memory. After more than three decades of admirable work, he has some original and intriguing things to say on the subject. For one, he is convinced that “to talk about ‘memory’ is a reiﬁcation, a matter of turning a process into a thing.”
To understand what Rose is saying here, think back for a moment to your high school graduation. At the moment, your recall of that event is considerably less detailed than it was a few weeks after your graduation. Yet, if someone asks you if you “remember” it, you readily respond in the afﬁrmative. But if your recollection has changed so much over the years, does it make sense to postulate the existence of a unitary memory of your graduation experience? Indeed, is there even such a thing as “a memory” for that or any other event in your life? Of course, you might respond to this by acknowledging that you have lost many of the details, but you still remember the essentials of the event. But what are the essentials, and would they not differ from one person to another?
Considerations such as these have led Rose to conclude rightly that the computer model of memory (so many bits for each memory) does not even make sense. “How might one decompose an experience, a face, the taste of a meal, the concerns of St. Augustine, into such bits? For that matter how many bits are involved in learning to ride a bike? Real life...seems much too rich to delimit like this.”
In addition, varieties of memory are many, each conforming to different rules. Recollection, for instance, is far more fragile than recognition. To get a feeling for this difference, imagine yourself looking brieﬂy at 20 objects. Later, you are asked to recall them. How would you do? Most adults only recall about 12 to 15. But in a test of recognition memory, you will do vastly better, successfully recognizing up to 10,000 previously seen objects when they are reshown to you after some delay.
If that seems incredible to you, consider that neuroscientists are not entirely certain how memories are made, how they are stored in the brain, or what the processes are leading up to their retrieval. According to Rose:
We are unsure whether memory capacity is ﬁnite or bounded, whether we forget, or simply cannot access old memories, nor even how our most certain recollections become transformed over time.
Thus, one does not have to be a Freudian to claim that our past continues to inﬂuence us, whether or not we can remember it. In fact, it is possible that memories never completely disappear. William Faulkner captured that insight in Requiem for a Nun: “The past is never dead. It is not even past.”
To dramatize the vexations experienced by anyone interested in understanding memory, Rose compares two popular textbooks on the subject. One, written by a molecular neuroscientist, consists of descriptions of molecules and neural circuits. The second, written by a leading cognitive psychologist, contains diagrams referring to such terms as “central executive” and “visual sketchpad.” Except for the term “memory,” the two books seem to have little in common, because each relies on its own specialized vocabulary. A typical reader of the first book would likely find the “visual sketchpads” of the second overly vague and simplistic; the typical reader of the psychology text, in turn, would probably stop reading the book by the molecular neuroscientist because of incomprehensible details and perceived irrelevancies. Rose writes:
Within the neurosciences, we don’t all speak the same language, and we cannot yet bridge the gap between the multiple levels of analysis and explanation, of the different discourses of psychologists and neurophysiologists.
It should come as no surprise, therefore, to learn that the “mystery” of memory remains unsolved: “If we can’t relate what is going on down among the molecules to what might comprise the workings of a visual sketchpad, we are still in some trouble invoking either to account for what happens in my mind/brain when I try to remember what I had for dinner last night,” Rose writes.
Moving on to the topic of consciousness, an interest of Rose’s going back at least to The Conscious Brain, Rose reminds us that the term lends itself to many understandings and uses: social consciousness, class consciousness, ethnic consciousness, and feminist consciousness, to name just a few. Yet many neuroscientists ignore these varied forms and insist on reducing consciousness to simply being awake and aware. Rose disagrees:
Being conscious is more than this; it is being aware of one’s past history and place in the world, one’s future intents and goals, one’s sense of agency, and of the culture and social formations within which one lives.
In short, the brain can only be understood, according to Rose, if we keep in mind that “we are social beings and that our minds work with meaning not information.” Thus, the expansion of our mental capacities has coincided with the evolution “not just of the brain, but the brain in the body and both in society, culture and history.” And, with this comment, Rose slyly leads us into matters germane to his second career track.
USES AND ABUSES OF BRAIN SCIENCE
When looking at the nexus between neuroscience and society, Rose is obviously not happy. Many of the applications of neuroscience to our lives remind him of scenes from Aldous Huxley’s Brave New World (a favorite, much-quoted reference of Rose). For instance, while acknowledging the epidemic of depression that the World Health Organization has identiﬁed as the chief health hazard of this century, he believes that depressed patients are being overmedicated with antidepressant drugs in lieu of our asking the simple question: Why is this dramatic rise in the diagnosis of depression occurring? Rose believes that society does not ask this question “for fear it should reveal a malaise not in the individual but in the social and psychic order.”
At another point, he inveighs against drug treatments for attention-deﬁcit hyperactivity disorder (ADHD) as “a cheap ﬁx to avoid the necessity of questioning schools, parents, and the broader social context of education” and suggests caution lest we “again ﬁnd ourselves trying to adjust the mind rather than adjust society.” (Additional elaboration on this theme can be found in a revealingly titled paper he coauthored with his wife, Hilary: “Do Not Adjust Your Mind, There Is a Fault in Society.”)
But this approach seems to evade several important questions. How does one deﬁne “malaise,” and what exactly is meant by “a broader social context of education”? Who is going to deﬁne that context and by what criteria? The same may be asked about “adjusting” society. Certainly, such Herculean tasks are beyond the expertise and experience of neuroscientists qua neuroscientists.
As a practical point, what does Rose suggest should be done with the burgeoning numbers of depressed patients? From his own research findings, he suggests that psychotherapy may produce chemical effects similar to those brought about by antidepressants. But, even granting this highly debatable point, psychotherapy is costly, is less effective than the currently available antidepressants in reversing depression, and “it is notoriously difficult to devise ‘controls’ for the psychotherapeutic experience.”
Of course, Rose is correct in asserting that problems exist with antidepressants and other psychotropic drugs, perhaps greater problems than are yet realized. That is one of the reasons patients should be informed of everything known at present about the effects of these drugs. But does the existence of “problems” justify withholding potentially helpful drugs from all patients simply because some of them may experience unintended consequences? At one point, Rose seems to admit that it does not, writing that “There is always a danger of overstating the threats of new technologies.”
But even in those cases in which control seems desirable, such as proﬁling based on brain imaging, he believes the real enemy is neither science nor technology but society. “The real issue is probably not so much how to curb the technologies, but how to control the state.” For example, when discussing thought control, he refers to a “world whose media, TV, radio and newspapers are in the hands of a few giant and ruthless corporations.” Who could argue with that? But he then adds, “There isn’t much more that transcranial brain stimulation can add.”
When discussing the psychopharmacologic revolution, he inveighs against the failure of the main political parties to question such prerogatives of money and power as “buying a more privileged personalized education for one’s children via private schools” and asks, “Compared with this, what difference will a few smart drugs make?” Although such questions are worth pondering, what do they have to do, speciﬁcally, with neuroscience? Explorations of the social and ethical implications of neuroscience deserve more than ideological statements draped in the vocabulary of neuroscience.
On occasion, Rose buttresses his political arguments (and that is exactly what they are) by setting up a straw man and then indignantly tearing it down. To mention one of the more egregious examples, the Human Genome Project is not, as Rose claims, offering “to lay bare our innermost predispositions and predict our futures thereby.” Indeed, in the earlier, more restrained portions of the book, Rose eloquently sets out some of the reasons why such a project would not be possible. “The evolutionary path that leads to humans has produced organisms with highly plastic, adaptable, conscious brains/minds and ways of living,” he writes.
At times, Rose gets so carried away with what he believes will be the likely future applications of neuroscience that the reader may wonder whether the author is putting him on. For example, Rose writes:
The future offers the prospect of an entire population drifting through life in a drug-induced haze of contentment, no longer dissatisﬁed with their life prospects, or with the more global prospects of society in general.
When Rose discusses imaging the brains of children diagnosed with ADHD, he declares that “There is no way of telling whether the claimed differences from ‘normal’ are the consequences of the medication or the ‘cause’ of the behavior which has led the children to be medicated, or indeed, whether the ‘behavior’ itself has ‘caused’ the changes in the image.” He seems unaware of imaging studies performed on unmedicated children with ADHD.
Turning to abnormal positron emission tomography (PET) scans of the brains of psychopaths, he suggests that the experience of having committed murder and being jailed might itself be responsible for the changes in the scan. In one of the more bizarre sentences in the book, he wonders whether a similar abnormality to that observed in psychopathic killers might be found in a PET scan of British Prime Minister Tony Blair “who has sent troops into battle across three continents in ﬁve years.”
Nor is The Future of the Brain free of outright factual errors. The dopaminergic cells of the substantia nigra connect with the striatum (caudate and putamen) not with the thalamus; Ronald Reagan was not “diagnosed with incipient Alzheimer’s even during his presidency in the 1980s” but after having left ofﬁce; it is not true that “there is no evolutionary payoff to selecting out genetic variants such as Huntington’s disease with its relatively late onset” because of the existence of a juvenile form of Huntington’s.
Despite some reservations and a few problems like these, I think this is a highly worthwhile book—indeed, a brilliant one—especially in the early chapters, where Rose concentrates on neuroscience rather than ideology. I am also happy to report that The Future of the Brain is a more optimistic book than his earlier effort. Writing in 1973, he concluded The Conscious Brain with this prediction: “The chances of humanity’s survival into the 21st century are at best only middling.”
From The Future of the Brain: The Promise and Perils of Tomorrow’s Neuroscience by Steven Rose. © 2005 by Steven Rose. Reprinted with permission of Oxford University Press.
Ask neuroscientists what they see as the next big step in their science, and the answers you will get will vary enormously. Molecular neuroscientists are likely to offer proteomics—the identification of all of the hundred thousand or so different proteins expressed at one time or another in one or other set of neurons. Developmental neuroscientists are likely to focus on a better understanding of the life history of neurons, the forces that shape their destiny, control their migration, determine which transmitters they engage with and how experience may modify structure, connectivity and physiological function. Imagers might hope for better resolution of mass neural activity in time and space—a sort of merger of fMRI and MEG. But I suspect that underlying all of these specific advances there is something more fundamental required. We simply do not yet know how to move between the levels of analysis and understanding given by the different brain discourses and their associated techniques. Thus imaging provides a mapping—a cartography—of which masses of neurons are active under particular circumstances. It is, to use Hilary Rose’s memorable phrase, a sort of internal phrenology, more grounded, to be sure, than that of Gall and Spurzheim, but still at best providing a description, not an explanation. At quite another level, proteomics is likely to provide another, entirely different, cartography. But we don’t know how the two, the proteomics and the imaging, relate, or how either and both change over time. There are multiple spatial and temporal scales and dimensions engaged here.
One way now being advocated of moving between levels would be to combine brain imaging with single-cell recording by means of electrodes placed within or on the surface of individual neurons. These procedures cannot be ethically performed on humans except for well-defined medical purposes. However, they could in principle be allowed on primates. This was one of the scientific arguments being advanced for the controversial proposal to build a primate centre in Cambridge, but eventually shelved because it was judged that the costs of protecting it from angry animal-righters would be too great. The claimed justification for the Centre, that it would aid in research on Alzheimer’s and Parkinson’s Disease, was to my mind always pretty spurious. Whilst I agree that the combination of single cell recording with imaging would help move our understanding of brain mechanism beyond pure cartography, I share with many the sense of unease about whether this end justifies these particular means—the use of animals so closely kin to humans. I’ve no doubt that such studies will be pursued elsewhere in countries less sensitive than Britain about the use of animals. It would then perhaps be possible to test the idea that combining imaging and single-cell recording will increase our understanding of global neural processes such as those that enable the brain to solve the binding problem. But if critics like Walter Freeman are right, and what matters is not so much the responses of individual cells but the much wider field effects of current flow across their surfaces, then an essential element in interpretation will still be lacking.
A second approach with powerful advocates is to use the now common techniques of genetic manipulation in mice, systematically knocking out or inserting particular genes and observing the consequences on development, anatomy and function. I don’t have quite the same ethical problems about working with mice as I do about primates—I’m not going to spend time here explaining why—but I fear that the results of this approach may produce more noise than illumination. Brain plasticity during development means that if an animal survives at all, it will do so as a result of the redundancy and plasticity of the developmental system which ensures that as far as possible the gene deficit or surplus is compensated for by altering the balance of expression of all other relevant genes. The result is that often, and to their surprise, the geneticists knock out a gene whose protein product is known to be vital for some brain or body function, and they report ‘no phenotype’—that is, they can see no effect on the animal because compensation has occurred. But the converse is also true. Because many genes also code for proteins involved in many different cell processes, knocking the gene out results in a vast range of diffuse consequences (pleiotropy). Once again, dynamic complexity rules.
Many neuroscientists argue that there is a greater priority than acquiring new data. The world-wide effort being poured into the neurosciences is producing an indigestible mass of facts at all levels. Furthermore the brain is so complex that to store and interpret these data requires information-processing systems of a hitherto undreamed-of power and capacity. It is therefore necessary to build on the experience of the human genome project and invest in one or more major new neuro-informatic centres, which can receive new data in appropriate forms and attempt to integrate them. This seems an innocent, if expensive, new project, likely to find favour in Europe, where such a centre is under active consideration as I write. But once again a small whisper of scepticism sounds in my ear, and if I listen hard enough I can hear the fatal acronym GIGO—garbage in, garbage out. Unless one knows in advance the questions to be asked of this accumulating mass of data, it may well prove useless.
This is because behind all of these potentially laudable moves forward there lies a vacuum. Empiricism is not enough. Simply, we currently lack a theoretical framework within which such mountains of data can be accommodated. We are, it seems to me, still trapped within the mechanistic reductionist mind-set within which our science has been formed. Imprisoned as we are, we can’t find ways to think coherently in multiple levels and dimensions, to incorporate the time line and dynamics of living processes into our understanding of molecules, cells and systems.