Monday, April 01, 2002

The Perils of Prediction

The Next Fifty Years: Science in the First Half of the Twenty-First Century

By Gualtiero Piccinini, Ph.D.

Fifty years ago, the neuropsychologist Karl Lashley and the psychologist B. F. Skinner were at the top of their fields. Skinner theorized that all behavior could be explained by conditioning. Lashley viewed the brain as a general-purpose machine for associating stimuli and responses, and spent much of his career attempting to prove that the brain is largely homogeneous (“equipotential,” he would say). If asked about the next 50 years, Skinner might have answered: “We’ve shown how conditioning explains a wide range of animal behavior in exquisite detail; in the next 50 years, we’ll explain human behavior in the same way.” Lashley might have echoed: “We’ve shown that the brain is equipotential; the challenge for the next 50 years is to discover how the same piece of neural tissue can perform so many different functions.” But they both would have been wrong.

Predicting the science of the future is risky, to say the least. Given Skinner’s and Lashley’s views, those would have been the right predictions, but the assumptions they are based on have been rejected many times over. In wondering about the future of science, we must remember that many of our own current assumptions may be similarly rejected—perhaps sooner than we expect. But if we do not take ourselves too seriously, predicting the future is a thrilling way to test our most speculative and daring intuitions—the kind that sometimes lead to genuine breakthroughs.

Tapping intuitions about the future is precisely the task that editor John Brockman assigned to the scientists contributing to The Next Fifty Years. The result is 25 brief essays on the present and future of some of the most current and exciting scientific enterprises. Of these essays, a staggering 16—on which I will focus in this review—revolve around the brain and behavioral sciences. That is without counting Martin Rees’s essay on the search for extraterrestrial intelligence; Paul Ewald’s essay on a novel theory of many diseases (including some mental disorders); and the essay by Brian Goodwin, whose scientific future reserves for us panpsychism, the view that all matter is alive, conscious, and goal-directed.

The fanciest predictions are by Marc Hauser, who thinks we will learn what it is like to be a dog or a bat by implanting their neural tissue into our brains. He also imagines that we will download the neuronal signals from any animal, creating a kind of hard-drive library of their thoughts as they interact with the world. We would be able to read the mind of an animal as it eats, sleeps, has sex, communicates. At some level we would have a deep sense of what it is like to be them. We might even be able to match our own brain waves with theirs, thereby experiencing a kind of inter-species harmony never before achieved.

This is charming science fiction. As much as we’d like to experience the way things look to an eagle or smell to a dog, at present neither Hauser nor anyone else knows whether this will ever be possible—let alone how. Most authors in this collection, however, are more down to earth.

NATURE OR NURTURE?

Many of the authors in The Next Fifty Years take a stance on a fundamental question: What makes us what we are, nature or nurture?

About 2,500 years ago, Plato argued that some of our knowledge, such as that of mathematics and morality, is too universal and basic to be learned. In his view, our minds have an innate but unconscious knowledge of numbers, justice, and other subjects, and all that our experiences (including math classes) can do is trigger our cognitive mechanisms to recall what they were storing all along. For Plato, in our minds there is no genuine learning but only the activation of unlearned structures. But Plato’s disciple Aristotle argued that our cognitive mechanisms can and do extract knowledge from our experiences and interactions with our environment. The debate over the respective roles of unlearned structures in our minds and environmental influences in explaining knowledge and behavior—the dreaded nature versus nurture debate—thus began, and it has not been resolved since.

During the 19th century, evolutionary theory, the rise of the social sciences, and other scientific developments exacerbated the dispute. Some argued that, in light of Darwin, we had better believe that our cognitive and behavioral endowment is native and therefore unlearned. Others replied, in light of sociology, that we should admit that the environment makes the crucial difference in our development. Skinner and Lashley stood on the nurture side of the fence, but they were soon bashed by Noam Chomsky—the most influential nativist (“pro-nature”) of the 20th century.

All participants in the controversy know, at least in the back of their minds, that in order to square the equation, you have to have both nature and nurture in it. On the one hand, you are not born fully able to romance members of the opposite sex or discuss particle physics, so the environment must play some role in your development. On the other hand, you cannot learn without a learning mechanism. Perhaps you acquire this learning mechanism by learning? Well, then you must have some other mechanism to do the learning. Some learning mechanism, somewhere in your brain, must be unlearned, on pain of infinite regress. The problem, therefore, is not whether nature or nurture alone can explain behavior: They cannot. It is how the two fit together—what their respective roles are in explaining behavior. What we call nativists put more weight on unlearned structures (nature), whereas empiricists assign the main role to learning or other environmental influences (nurture).

The good news is that, according to many of Brockman’s authors, 50 years from now we will know the answer. The bad news is, they still disagree on what that answer will be.

EVOLUTIONARY ARGUMENTS

The modern nativist camp is represented by what has recently been its most vocal brand—evolutionary psychology, which is based on three assumptions:

1. The brain is modular, that is, divided into specialized modules that work largely independently;

2. Each salient behavior and cognitive function (for example, vision, hearing, motor planning) is governed by its own module;

3. Each module-cum-behavior is an adaptation: ancestral evolutionary pressures selected it because it provided some advantage in the struggle for survival and reproduction.

All five nativists in the collection—Paul Bloom, Mihaly Csikszentmihalyi, Geoffrey Miller, Judith Rich Harris, and Nancy Etcoff—make a case for evolutionary psychology, or something similar, as the future of psychology. For instance, Harris asserts with confidence that “any tendency for children to behave similarly in different contexts is due almost entirely to genetic influences on their behavior. The environmental influences won’t transfer from one situation to another.” Bloom announces that “in the next fifty years, ‘evolutionary psychology’ will be an anachronism, as the label suggests that there is some other field of psychology that does not attend to considerations of selective advantage, adaptive design, and so on (Creationist psychology?).” In a similar vein, Miller submits that criticisms of evolutionary psychology will soon melt away: “The charge that evolutionary psychology is a set of ‘just-so stories’ will vanish.”

To the extent that evolutionary psychologists draw attention to the role of evolution and genetics in affecting behavior, their contribution is hardly controversial. Behavioral scientists should welcome evolutionary considerations and pay careful attention to genetics. Harris makes the important point that in behavioral research, it is essential to control for genetic factors. For example, we are used to claims that watching violent TV programs correlates with violent behavior. But, as far as we know, there might be a common cause—for instance, a genetic cause—of both wanting to watch those programs and behaving violently. It would be pointless to study the effects of television on behavior without controlling for genetic influences, she says, because whatever correlations are discovered will be “uninterpretable.” Too much behavioral research has been—and still is—conducted without genetic controls.
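To see why such a correlation is uninterpretable without genetic controls, consider a toy simulation of Harris’s worry (my illustration; nothing like it appears in the book). A single common factor stands in for genetic influence and drives both how much violent TV a simulated person watches and how violently they behave; the two measures end up strongly correlated even though neither has any causal effect on the other.

```python
import random, statistics

# Toy simulation of a confounded correlation (illustrative only): a common
# factor ("genes") drives both TV watching and violent behavior. Neither
# variable causes the other, yet they correlate.

random.seed(1)
n = 10_000
genes = [random.gauss(0, 1) for _ in range(n)]
tv = [g + random.gauss(0, 1) for g in genes]        # TV watching: genes + noise
violence = [g + random.gauss(0, 1) for g in genes]  # violence: genes + noise

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

print(f"corr(tv, violence) = {corr(tv, violence):.2f}")  # about 0.5, pure confounding
```

In this toy world, a researcher who measured only viewing habits and behavior would find a solid correlation and might wrongly conclude that television causes violence.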

Harris forgets to mention that it is just as easy to search for genetic influences without controlling for environmental factors as the other way around, and the results are just as uninterpretable (witness the controversy surrounding The Bell Curve). Disentangling genetic influences from environmental ones is a tricky affair, and the better behavioral scientists get at it, the better.

But the assumptions of evolutionary psychology are a different matter. When it comes to defending those assumptions, instead of arguments, readers of The Next Fifty Years will find generic appeals to evolution. Bloom, for example, goes in one short step from evolutionary theory to the conclusion that brain modules are adaptations: “In the last several years, there has been a growing acceptance of the Darwinian notion that the brain, like any other biological organ, has evolved through the process of natural selection, and so the capacities of the brain can be profitably understood as adaptations and by-products of adaptations.” Etcoff draws a similar inference: “The brain, like every other bodily organ, has been shaped by natural selection and has evolved mental modules that enhance reproductive fitness and help ensure survival.”

In a book for the general public, we should not expect scholarly argumentation, but this is too loose. To pretend that evolutionary psychology is of a piece with the rejection of creationism does a disservice to the reader. Even if the brain is the product of evolution, it does not follow that evolution divided it into independent modules. Even if the brain is modular, it would not follow that each salient behavior is controlled by its own module; and even if that were the case, it still would not follow that each module-cum-behavior is an adaptation. Yet these are precisely the assumptions of evolutionary psychology.

AN UNRESOLVED DEBATE

Maybe part of the brain is modular and part is not. Maybe some modules control specific behaviors, while others cooperate in controlling many behaviors at once. Maybe some modules-cum-behaviors are adaptations and others are not. Or maybe we have a lot of unlearned knowledge of the world but our behavior is largely learned, or behavior is largely controlled by genes without the brain being modular. The possibilities are many, and some have their own supporters (not represented in this book).

It may even be that the empiricists are right, and the brain contains general-purpose mechanisms that learn most of what we know and do from the environment. Empiricism has many defects, but inconsistency with evolutionary theory is not one of them.

The developmental psychologist Alison Gopnik, an empiricist, has predicated her research program on the assumption that from the time we are babies, we gain most of our knowledge through a process of formulating theories and then testing them against our experiences. She argues that in 50 years, everyone will accept her ambitious view, which aims to explain even the process of scientific theorizing: “The greatest achievement of a unified theory of learning...may be to demonstrate that the most brilliant scientists and the most ordinary kids are engaged in the same enterprise.” Gopnik might be right, but at this stage, her case does not appear any stronger than that of her opponents. Curiously, neither camp acknowledges the existence of a genuine controversy between them. The nativists announce their credo as a scientific revolution in which their new paradigm is displacing the old one, while Gopnik ignores any alternatives to her view.

Although the debate is as unresolved now as it was in the time of Plato and Aristotle, progress in neuroscience will likely have a big impact on it in the next 50 years. After all, most scientists agree that the immediate source of our behavior is the neural machinery contained in our skulls. Both nature and nurture affect us largely through the way they affect our brains. If we ever come to understand neural mechanisms, we will tell a sketchy story that goes like this: Here is how genes coax cells into generating the brain, how neural development is affected by genes and environmental factors, and as a result, how neural mechanisms collect information from the environment and push people around in the world. If and when we have this story, participants in today’s arguments about nature versus nurture will come to look like proverbial blind people describing different parts of an elephant.

MINDS AND COMPUTERS

Over the past 50 years, the main place where psychologists have looked for insights into neural mechanisms has been not neuroscience but computer science. Both Gopnik and the evolutionary psychologists are explicitly working in this tradition. According to the computational theory of the brain, the brain is a computer and neural processes are computations. In its modern form, this idea is due largely to Warren McCulloch, a neurophysiologist and psychiatrist with a penchant for philosophy. In 1943, McCulloch and Walter Pitts published a theory of the brain that portrayed neurons as logic gates that perform simple inferences. According to McCulloch, the dynamic relations between neuronal spikes embody computations, which explains how humans can gain knowledge about the world.
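To make the idea concrete, here is a minimal sketch of such a unit (my reconstruction in modern code, not McCulloch and Pitts’s own notation): a binary neuron that fires when the weighted sum of its inputs reaches a threshold. With suitable fixed weights, single units behave like logic gates, which is what allowed McCulloch and Pitts to treat networks of neurons as circuits computing logical functions.

```python
# A McCulloch-Pitts-style unit: it outputs 1 ("fires") when the weighted sum
# of its binary inputs reaches a threshold, and 0 otherwise. The weights and
# thresholds below are illustrative choices, not values from the 1943 paper.

def mp_neuron(inputs, weights, threshold):
    """Binary threshold unit: fires iff the weighted input sum >= threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With fixed weights, single units implement the basic logic gates:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0
```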

Unfortunately, the computational theory of the brain did not help resolve the nature-nurture debate. McCulloch was in the nativist camp: He thought the wiring and connection strength of our neural mechanisms, which are unlearned, embody knowledge that constrains how we process sensory signals. Other scientists pushed McCulloch’s view further by developing the idea of an unlearned machine language of the brain, which would come already equipped with a battery of concepts, also unlearned. Empiricists responded largely by abandoning McCulloch’s view of unlearned wiring, opting instead for the view that loosely connected neurons wire themselves into small networks that adjust their connections to compute certain functions.
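The contrast between the two camps can even be put in code. The sketch above has its weights fixed in advance, which is the nativist picture; on the empiricist alternative, experience sets the connection strengths. Here is a minimal sketch of that idea, using an error-correction rule in the spirit of Frank Rosenblatt’s later perceptron (my choice of illustration; the book describes no such algorithm):

```python
# The empiricist variation (illustrative only): start with neutral
# connections and let errors on experience adjust them until the unit
# computes the OR function.

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table
w, bias, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):  # a few passes over the examples suffice for OR
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + bias >= 0.5 else 0
        err = target - out
        w[0] += lr * err * x1   # strengthen or weaken each connection on error
        w[1] += lr * err * x2
        bias += lr * err

assert all((1 if w[0] * x1 + w[1] * x2 + bias >= 0.5 else 0) == t
           for (x1, x2), t in data)
```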

The computational theory of the brain has attracted little enthusiasm from neuroscientists, because it has always been hard to relate any computational schema to what is known about neural mechanisms. But McCulloch’s theory piqued the interest of computer scientists, starting with pioneers of the field such as John von Neumann and Alan Turing. Their followers founded the field of Artificial Intelligence (AI), which attempts to put McCulloch’s idea into practice by building thinking computers. In 1950, Turing predicted that by the year 2000, computers would converse fluently with people, and prominent AI researchers have repeated similar predictions. Most interested observers (including many in AI) have been laughing ever since—not because building thinking machines is impossible (after all, our bodies do it admirably well), but because AI ideas on how to do so have seemed simplistic and ineffective vis-à-vis the complexity of the nervous system and our current lack of understanding of it.

BOLD PREDICTIONS

Modern-day AI researchers are undeterred by the poor record of AI predictions. Four of the five essays by computer scientists in this collection read like a race to emulate Turing. Their predictions are not limited to building intelligent computers or even conscious robots—which, according to John Holland, will probably take more than 50 years. Holland also expects “artificial immune systems that can counter both living viruses and computer viruses” and “technological automation of drug design and production.” Rodney Brooks adds that very soon we will witness the merging of brains and computers: “Over the next ten to twenty years...[p]eople who are not blind may choose to have a device sensitive to infrared or ultraviolet installed in one of their eyes. Or we may all be able to have a wireless Internet connection installed directly in our brains.” With similar verve, Roger Schank predicts the demise of traditional education: “Fifty years from now, school as we know it will have atrophied from lack of interest. Education in such a society will be a matter of what virtual (and later real) worlds you have entered and how much you have learned to do in those worlds.” He also says, “teachers and classrooms and textbooks will be almost laughable in fifty years.”

David Gelernter agrees with Schank and generalizes the point: “95 percent of the world’s universities will be dead in fifty years.” He adds: “The institutional office buildings that shape our landscape today will disappear. Stores and shops are already on the way out...commerce and education are moving inexorably into cyberspace... People will need houses and convenient, generic, local public gathering spaces. We won’t need cities anymore, except as gigantic museum/theme-park/shopping-malls.”

These sorts of unsupported predictions have always given AI a bad name, and they are likely to strike outsiders as no more plausible than their unfulfilled predecessors. It is a pity, because they obscure the genuine progress and useful applications generated by AI research.

In contrast to this optimism, the collection contains one black-sheep computer scientist, Jaron Lanier, who exposes his colleagues’ wishful thinking to ridicule: “Members of elite computer science departments...believe in an inevitable ‘singularity,’ which is expected sometime in the next half century. This singularity would occur when computers become so wise and powerful that they not only displace humans as the dominant form of life but also attain mastery over matter and energy so as to live in what might be described as a mythic or godlike way, completely beyond human conception. While it feels odd even to type the previous sentence, it is an accurate description of the beliefs of many of my colleagues.”

Lanier struggles with the limitations of current computing technologies and offers some vague but suggestive ideas on how progress might be made. He argues that in many areas, from economics to agriculture, the problems might be too complex to be solved simply by increasing our computing power.

LOOKING INTO THE BRAIN

One of those problems might be the complex workings of the brain. In his elegant essay, for example, Robert Sapolsky argues that we know so little about depression that we are likely still to be in the dark 50 years from now. The game of guessing which computations might explain behavior has produced insights about cognition, but it has shed little light on neural mechanisms. Instead of asking what computations could account for a certain behavior, many psychologists find it more productive to ask where in the brain a certain function is performed.

Imaging technologies developed over the last couple of decades, like positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), draw pictures of brains while organisms perform tasks; the pictures indicate which neural areas are active at which times. Several authors in The Next Fifty Years are excited about the potential of these new tools. Gopnik says that just by doing imaging studies we will answer the question of how neural mechanisms work, and the neuroscientist Joseph LeDoux adds that we will gain “a new level of understanding of the relation of the human brain to the human mind.” These are, I believe, exaggerations. Even if we can improve the resolution and reliability of imaging techniques, all imaging studies can tell us is where in the brain something happens. This is useful, but it is not what we really want to know, namely, what is happening and how it produces the observed behavior. To find that out, we need something more.

Fortunately, there are increasingly sophisticated and effective neurophysiological techniques, which often involve recording the activity of many neurons during a behavioral task. By using those techniques as well as imaging studies, LeDoux predicts that we will learn more about memory, emotion, and language, as well as how these processes interact in the brain. Most important, he says, we will start learning how “variations between individuals determine the unique qualities that account for the self or personality.” One crucial element in learning about individual differences, however, goes unmentioned by LeDoux: genetics.

CHANGING OUR SELF-IMAGE

Genetics and molecular biology have made huge gains in knowledge over the last few decades, which are expected to bear fruit soon. Many essays in The Next Fifty Years stress that by tracking down genes that contribute to neural development and function, we will gain insight into the normal and pathological conditions that affect how our brains work. The essay by Samuel Barondes focuses on this and on the benefits for psychiatric patients: “All this genetic information will guide the selection of treatments for individual patients. The DNA data will also help to redefine the boundaries between different mental illnesses, which often overlap. So, too, do the boundaries between the patterns of behavior we call normal and those we classify as disorders. Combining information about gene variants with studies of brain function, formal psychological tests, and a detailed life story will make it possible to replace crude diagnostic categories with a rich individual profile for each patient.”

Barondes points out that psychiatry has affected our self-image in the past, first by showing how powerful, unconscious passions affect our thoughts and behavior, and then by developing drugs that demonstrate our dependence on simple brain chemicals such as serotonin and dopamine. In the future, psychiatry will change our self-image again by linking our mental conditions to specific genes.

TOOLS FOR A COHESIVE THEORY

The combined efforts of genetic, molecular, neurophysiological, and imaging studies are very likely to increase our understanding of brain mechanisms. But the accumulation of empirical bits of information is not by itself going to tell us how the brain, at any given stage of its development, does what it does—how it perceives, thinks, and acts, or even how to build an artificial brain. The information coming from the lab has to be put together into a cohesive theory. Here is where neuroscientists are going to need the help of mathematicians like Steven Strogatz, whose piece on the mathematics of complex systems is one of the most compelling in the collection.

During the last 50 years, while psychologists and computer scientists were using computers as models of the brain, some mathematicians and physicists were developing new conceptual tools from the mathematical study of physical systems that evolve in time and interact with each other in complicated ways—a field known as nonlinear dynamics. Networks of neurons are an excellent example of this kind of system, and, as Strogatz hints, most progress in the theoretical understanding of neural mechanisms during the past fifty years has come from nonlinear dynamics.
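To give a flavor of these conceptual tools, here is a minimal sketch of a classic model from nonlinear dynamics: the Kuramoto model of coupled oscillators, which Strogatz himself has studied as a mathematical picture of spontaneous synchronization. The parameter values are arbitrary choices of mine, and the book presents no equations; the point is only that simple interaction rules can produce collective order of the kind neural populations exhibit.

```python
import math, random

# Kuramoto's coupled-oscillator model (parameter values are arbitrary):
# each oscillator's phase drifts at its own natural frequency while being
# pulled toward the phases of the others. Above a critical coupling K,
# the population spontaneously synchronizes.
#   dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)

random.seed(0)
N, K, dt, steps = 50, 2.0, 0.01, 2000
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # initial phases
omega = [random.gauss(0, 0.5) for _ in range(N)]            # natural frequencies

def order_parameter(phases):
    """r in [0, 1]: 0 means incoherence, 1 means full synchrony."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

for _ in range(steps):  # forward-Euler integration of the coupled equations
    pull = [(K / N) * sum(math.sin(tj - ti) for tj in theta) for ti in theta]
    theta = [ti + dt * (wi + c) for ti, wi, c in zip(theta, omega, pull)]

print(f"order parameter r = {order_parameter(theta):.2f}")  # near 1: synchronized
```

Whether such toy models can be scaled up to say something true about real neural tissue is exactly the open problem Strogatz describes.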

Strogatz points out that this field is still in its infancy; we do not yet have the mathematical language to really understand large networks of neurons. But mathematicians are working on it, and a new science of complex networks—including neural networks—is emerging from their work. Paradoxically, as Ian Stewart points out indirectly in his essay on the future of mathematics, the best advice to funding agencies that want to promote the progress of science, including neuroscience, is probably to give more money to mathematicians.

EXCERPT

From The Next Fifty Years: Science in the First Half of the Twenty-First Century, edited by John Brockman. ©2002 Vintage Books. Reprinted with permission.

From “Brain Scans, Wearables, and Brief Encounters” by Nancy Etcoff

Here's an easy prediction: the finger-pointing at either nature or nurture—the tyranny of the single cause—will be thrown into the dustbin with history’s other useless ideas. Psychiatric problems are as unlikely to be caused by a single gene or a single neurotransmitter (serotonin, dopamine, and so on) as they are to be caused by witnessing the primal scene or discovering that girls don’t have penises.

The origin and cause of most disorders is a complex interaction of genes and “the environment,” a term that covers all non-genetic causes, including chance. It is likely that multiple genes acting as probabilistic risk factors will influence most psychiatric disorders.

A less obvious but inevitable development is that psychotherapists will no longer get away with thinking that the brain is irrelevant to what they do. In fifty years the study of the mind and the brain will not be divided among separate academic departments or professions, as it is now. The vicious territorial squabbles of the nineteenth century between psychiatry and neurology were settled by ceding the brain and its “organic” and “nervous” disorders to psychiatry. But of course all mental processes derive from computations in the brain, and research into the mind and the brain is part of a continuous terrain of knowledge.

For those psychoanalytic or humanist therapists who cannot imagine getting into bed with neuroscience, I offer for contemplation the human brain—an admittedly unaesthetic object when viewed with the naked eye but truly sublime in its beauty on closer inspection. The three-pound organ—packed with billions of neurons rivaling in their numbers the stars in our galaxy and equipped with up to two hundred thousand synaptic connections to other neurons—is the most complex structure in the universe. To scientists who use brain-imaging devices to watch the brain as it remembers, imagines, and desires, it is awe-inspiring. But the large question remains: How can this ebb and flow of blood, this intricate web of connections, become the experience of our feelings, the content of our thoughts? It is precisely this question that will occupy us in the next fifty years. As the geneticist François Jacob has written, “The century that is ending has been preoccupied with nucleic acids and proteins. The next one will concentrate on memory and desire. Will it be able to answer the questions they pose?”

What does brain science have to do with the practice of psychotherapy? The argument has been made (most forcefully by the neurobiologist Eric Kandel) that psychotherapy not only changes your mind, it changes your brain—literally. Effective therapy works in the same way and by the same mechanisms as any other form of intensive learning: It produces changes in gene expression that in turn change the strength of synaptic connections and produce structural changes that alter the pattern of interconnections between nerve cells in the brain. One might make the analogy to the training of a professional musician. The neurologist Alvaro Pascual-Leone has shown that the brains of professional musicians undergo functional and structural changes as they train, changes that can be documented by neuroimaging techniques. Pascual-Leone suggests further that even intensive mental rehearsals can cause such changes.

An increasing number of studies are comparing the effects of psychotherapy with those of drugs such as Prozac by looking at brain images before and after treatment. Such studies have been done for OCD (obsessive-compulsive disorder) and major depression. What they find is that when both forms of treatment are effective, they produce similar brain changes. The research suggests a common final pathway for complex psychological changes. In the future, simply by scanning the patient’s brain, we may solve the seemingly unsolvable conundrum of how to judge whether a treatment is effective and when therapy should be terminated.


