The 21st century’s first Nobel Prize in Physiology or Medicine went to three brain scientists whose work began in the 1950s, when the still-recent discovery of things such as the newly named “action potentials,” “sodium ion channels,” and “neurotransmitters” was providing scientists with a new model of the brain.
Neuroscientist John Byrne writes that this new knowledge had done relatively little to illuminate learning and memory, and not a single psychiatric disorder was understood in terms of the brain’s actions. Over the next half a century, three scientists changed that picture. Byrne, a distinguished researcher on memory and learning, looks at where they started, what they accomplished, and what it means for neuroscience today and tomorrow.
It is December 12, 2000. In Stockholm, at the banquet honoring the winners of the first Nobel Prizes of the 21st century, the dishes have been cleared away. The scientist who now rises from his seat has been given two minutes in which to acknowledge the prize in physiology or medicine graciously, capture the essence of more than four decades of research, and look to the future. For Eric Kandel, M.D., apparently, two minutes is enough time. Perhaps that is why he has been tapped by his fellow laureates, Paul Greengard, Ph.D., and Arvid Carlsson, M.D., to speak for them all.
Properly acknowledging Sweden’s king and queen, other royal family, and members of the Nobel Assembly, Kandel begins his story at the start of Western civilization and modern science:
Engraved above the entrance of the Temple of Apollo at Delphi was the maxim “Know thyself.” Since Socrates and Plato first speculated on the nature of the human mind, serious thinkers through the ages...have thought it wise to understand oneself and one’s behavior...They have asked: Are mental processes different from physical processes? How do new experiences become incorporated into the mind as memory?
These questions have been transmitted over millennia like an amulet of gold: endlessly touched, polished and repolished, examined from every angle, but never changed—a kind of intellectually inert philosophical puzzle. How can physical matter, atoms and molecules, amassed in however intricate an arrangement in the three pounds of tissue that make up the human brain, retain for decades, and recreate at will, the color and taste and smell of a ripe apple, or the thought that the universe is curved?
For centuries, this query remained the single most profound question for science. Kandel, speaking on this occasion not only for his two colleagues but by extension for brain scientists everywhere, can state confidently that at the beginning of the 21st century, that question has finally begun to be answered:
The three of us whom you honor here tonight, and our generation of scientists, have attempted to translate abstract philosophical questions about the mind into the empirical language of biology... We three have taken the first steps in linking mind to molecules by determining how the biochemistry of signaling within and between nerve cells is related to mental processes and mental disorders. We have found that the neural networks of the brain are not fixed, but that communication between nerve cells can be regulated by neurotransmitter molecules.
Here, then, is the Great Question in utterly new language that grapples not with “soul,” “sense perception,” or “dualism,” but with intricate molecular machinery. This new language and what it describes reflect, as Kandel acknowledges, the work of a whole generation. Representing it this evening in Sweden are men in their seventies (Kandel at 70, Greengard at 74, Carlsson at 77) who cumulatively have spent the better part of a century at the laboratory bench, the microscope, and the computer. Their work is already part of the tower of knowledge that constitutes our understanding of the brain, the foundations of which were laid only in the 20th century.
Commenting on the announcement of the prize a few months earlier, National Institute of Mental Health Director Steven Hyman, M.D., had ventured that it was “in some sense...almost overdue because these scientists have been producing very, very important discoveries for a long time.” Hyman’s point was that this research, although conducted on the most fundamental level of molecules and cells and on the nervous systems of primitive creatures (in Kandel’s case, the sea snail Aplysia), already is being used to find better drugs for diseases such as Parkinson’s and to understand the workings of age-related memory loss, mental retardation, and Alzheimer’s disease, among other afflictions.
Whether overdue or just in the nick of time (as the Decade of the Brain closes), this Nobel Prize celebrates an achievement different in kind from previous observation, speculation, and investigation of the brain. For the first time, an unambiguously mental phenomenon—memory—has been explained in wholly material, mechanical terms. The hypothesis of a separate, nonmaterial, otherworldly realm has become superfluous. A banquet is not the place to spin out these disturbing implications, but Kandel does acknowledge them, for those who will hear, by returning to where his story began—“Know thyself”:
We already have gained initial biological insights toward a deeper understanding of the self. We know that even though the words of the maxim are no longer encoded in stone at Delphi, they are encoded in our brains. For centuries the maxim has been preserved in human memory by those very molecular processes in the brain that you graciously recognize today, and that we are just beginning to understand.
“Just beginning.” It is not false modesty. Kandel will later sum up more than four decades of work with the comment that it is “a very nice beginning.” Not the 20th century but this, the 21st, will be the century of the biological mind:
...[O]ur generation of scientists has come to believe that the biology of the mind will be as scientifically important to this [new] century as the biology of the gene has been to the 20th century...[and] will not only improve our understanding of psychiatric and neurological disorders, but will also lead to a deep understanding of ourselves.
Kandel has spoken for his colleagues, for brain science, for science itself, and even for the future. But scientists are human. Kandel is a man of the 20th century—at its best and its tragic worst. Born in Vienna, he ﬂed Nazi-occupied Austria with his family in 1939, grew up in Brooklyn, and attended Harvard. Standing on the world stage at the eve of the new millennium, he reminds us of who he is, how he arrived at this moment, and what his achievements represent, with characteristic charm:
Since this is the first time I have had the privilege of dining with a King, I cannot resist exercising my first opportunity to express the Hebrew blessing to be recited only in the presence of a King: “Blessed be the Lord, King of the Universe, who shares his glory with mortals.” Skoal!
THE SPARK AND THE SOUP: BRAIN SCIENCE AT MID-CENTURY
To look more closely at the contributions of these three architects of brain science, we must go back to the 1950s, when they began their work, and see what was then known about the brain.
Modern electronic instruments and the invention of the tiny microelectrode had just enabled scientists to probe individual nerve cells to fathom their chemical and, in particular, their electrical properties. The basic mechanism by which these electrical signals are generated and propagated was discovered by Alan Hodgkin and Andrew Huxley. An electrochemical stimulus in a neuron that reaches a critical threshold leads to initiation of an all-or-nothing nerve impulse, or “action potential.” Once initiated, the impulse travels along the nerve axon without losing amplitude. The crown jewel in the quest for understanding the electrical properties of nerve cells was the discovery that this electrical impulse was created by the rapid, sequential opening of sodium and potassium channels in the nerve-cell membranes. This explanation at the molecular level—the level of ions flowing through membrane channels to generate electrical potentials—for the first time began to translate brain processes into terms that chemists, physicists, and electrical engineers knew well. The laying of this cornerstone of knowledge was acknowledged, in 1963, by the awarding of the Nobel Prize to Hodgkin and Huxley.
Another big story at mid-century was what was being learned about how one neuron communicates with another. When the initial electrical impulse reaches the synapse (a specialized contact point between neurons, where one talks to another), electrical changes are produced in the postsynaptic neuron (the next neuron in the chain). In the 1930s and 1940s, there was much contention about whether the link happened through a direct electrical junction that allowed the nerve impulse to jump from one neuron to the next, or through an indirect chemical relay messenger that was released from the presynaptic neuron by the impulse. (These were labeled the “spark” and the “soup” hypotheses.) By the 1950s, the ﬁeld generally began to accept the existence of chemical relay messengers, or “neurotransmitters.” The discovery of these chemical messengers, another cornerstone in neuroscience, by Henry Dale, Otto Loewi, and Ulf von Euler, and the elucidation by John Eccles and Bernard Katz of the biophysical mechanisms through which they acted, were acknowledged by their receipt of Nobel Prizes in 1936, 1963, and 1970.
The work of these scientists established that chemicals called acetylcholine and the catecholamine norepinephrine were neurotransmitters. Several other agents, such as the indolamine serotonin and the amino acid GABA, were also suspected of being transmitters. But why did more than one transmitter exist? A specific type of neurotransmitter seemed to be used by some neurons to excite (depolarize) postsynaptic neurons, allowing a signal to pass on, whereas another type of transmitter was used by other neurons to inhibit (hyperpolarize) postsynaptic neurons, thus blocking the signal from being passed on. The common principle established by Eccles and Katz was that neurotransmitters attached themselves briefly to specific receptors, where they caused the opening of ion channels to produce fleeting changes, lasting mere milliseconds, in the membrane potential of the postsynaptic neuron—the brain cell on the receiving end of a signal.
This, then, was the foundation of the tower of knowledge of the nervous system taking shape by 1955. It might seem far from lifting the veil on the mystery of memory and what we call the self, but links were being made with other knowledge. The new biological understanding seemed to map nicely onto the understanding of the great technological achievement of the era, the digital computer. The nerve cell was viewed as the transistor or circuit and the brain as a computer. The nerve cells (each considered essentially identical at that time) added up the inputs arriving at their synapses and, when a critical threshold was reached, fired an all-or-nothing, binary (yes/no) electrical signal. The electrical signal, or action potential, caused the release of a fixed amount of a chemical transmitter (one of several known to exist), which diffused across the synaptic gap between neurons. There, by opening channels on the postsynaptic membrane, it produced a transient electrical signal (called the postsynaptic potential). This signal, along with those generated by other presynaptic neurons, was integrated by the next neuron in the chain.
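The mid-century picture of the neuron as a binary device can be sketched in a few lines of code. This is a deliberately crude illustration of the idea described above, not a biophysical model; the threshold and the synaptic input values are invented for the example.

```python
# The neuron as a mid-century "binary device": sum the excitatory (+) and
# inhibitory (-) postsynaptic potentials, and fire an all-or-nothing spike
# only if the total reaches a critical threshold. All values illustrative.

def fires(postsynaptic_potentials, threshold=1.0):
    """Return True if the summed synaptic input reaches the firing threshold."""
    return sum(postsynaptic_potentials) >= threshold

# Enough excitation, despite some inhibition: the neuron fires.
print(fires([0.5, 0.4, 0.3, -0.1]))  # True (sum = 1.1)
# Inhibition holds the cell below threshold: no spike.
print(fires([0.5, -0.4, 0.3]))       # False (sum = 0.4)
```

The all-or-nothing output, not a graded one, is what invited the comparison with binary logic in digital computers.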
No one questioned the existence of an internal biochemistry of the cell. All these “guts” were thought merely to provide the infrastructure that kept the electrical and chemical messages ﬂowing.
GAPS IN THE FOUNDATION
Although the task was daunting, the next step in understanding the hardware of the brain was to explore its wiring to see how billions of similar neurons might be connected to process information.
Some nagging questions, and the implications of certain new discoveries, however, also began to suggest gaps in our knowledge. The computer analogy was already looking inadequate. First, two neurotransmitters seemed sufficient (one to excite and another to inhibit); but more were known to exist, and perhaps there were still more to be discovered. Was more than simple excitation and inhibition going on? Or did some circuits in the brain have specific neurotransmitters dedicated to their exclusive use?
Nor did this brain model, as it stood, go very far to solve the mystery of learning and memory. How could richly interconnected but stable neurons store a memory? Perhaps memories were stored in reverberating loops of electrical activity in interconnected neural networks. Or perhaps synapses themselves were plastic, able to change and grow. If so, exactly how did they change? Finally, and perhaps most important, what were the underlying mechanisms of various brain diseases? Despite progress at the level of basic understanding, by the 1950s not a single neurological or psychiatric disorder was understood at a mechanistic level. There were precious few treatments, few drugs available, and no understanding of why the drugs that did exist worked.
THE NEW ARCHITECTS
At mid-century, the very different paths of Arvid Carlsson, Paul Greengard, and Eric Kandel began to converge on these problems. For Carlsson, the path was anything but predetermined. Growing up in Sweden in an academic family, he had done his doctoral work at the University of Lund on calcium metabolism, and planned to pursue this research as an independent faculty member.
Fate would have it otherwise. As he related in his Nobel lecture, he was rejected for a professorship because it was deemed that work on calcium metabolism was not central to pharmacology. Discouraged, he left the field and decided to become involved in biochemical pharmacology, which he felt had a great future. His decision was a loss for the field of calcium metabolism (which turned out to have a great future, too) but a red-letter day for the emerging field of neuroscience. Another such day came when Carlsson, in 1955, decided to spend five months in Bernard Brodie’s laboratory at the U.S. National Institutes of Health. Brodie, studying the neurotransmitter serotonin, had shown that the antipsychotic drug reserpine led to depletion of serotonin from the brain. Moreover, Brodie had worked out a technique called spectrophotofluorometry that enabled him to monitor the levels of serotonin in the brain. Equipped with this new technology, Carlsson returned to Sweden to examine whether reserpine might also affect the levels of catecholamines, a family of substances structurally related to serotonin. He was about to discover a Rosetta stone for understanding neuronal signaling.
Paul Greengard, a bit younger than Carlsson and born half a world away in New York City, started his scientific career in neurochemistry, but it would be many years before he embarked on the work that would eventually lead to the Nobel Prize. His doctoral work at the Johns Hopkins University examined chemical changes associated with degeneration and loss of function in nerve cells. He then spent five years in England, where he did postdoctoral research at the University of London, Cambridge University, and the National Institute for Medical Research. There followed a stint as director of the department of biochemistry at Geigy Research Laboratories in Ardsley, New York, where he worked on drugs to treat depression.
A change of course occurred in 1968, when he accepted an invitation to spend a year as a visiting professor in Earl Sutherland’s laboratory at Vanderbilt University. It was Sutherland’s work on the stimulation of glucose production in liver cells that inspired Greengard’s pioneering work on signaling in the nervous system. Scientists knew that the formation of glucose from glycogen was catalyzed by the enzyme phosphorylase, but Sutherland (and later Edwin Krebs) found that phosphorylase was activated through a complex series of actions by intermediary chemical messengers.
When Greengard left Sutherland’s lab for Yale University, his goal was to find out whether the multistep signaling that seemed critical for hormones in the liver was also used for signaling by neurotransmitters in the brain. It turned out that it was, and that in the brain this signaling was present in bewildering diversity and complexity.
Eric Kandel’s career likewise took several twists before he settled into investigation of the role of the synapse in memory. Although he did start out in neuroscience, Kandel’s particular subdiscipline, psychiatry, was at the time intellectually farther from the synapse than any other subdiscipline of neuroscience. During his last year in medical school and during his internship, Kandel had become fascinated with memory. To deepen his understanding, he spent three years working in the neurophysiology laboratory of Wade Marshall at the NIH, where he began to collaborate with Alden Spencer, another young postdoctoral fellow who shared his interest in memory. Their collaboration led to the first intracellular recordings from the hippocampus, a brain structure important for some forms of memory. Although the results were a technical breakthrough, the mechanisms of memory remained elusive.
A moment of classic creative inspiration took place in 1959 when Kandel was attending a lecture by Ladislav Tauc, a visiting scientist from Paris who described his work on the marine mollusk Aplysia. Kandel realized immediately that the simple nervous system of Aplysia, with just a few hundred huge, readily accessible neurons, might be exploited to get at the mechanisms of memory. In 1962-63 he spent 15 months working with Tauc, during which time he discovered a new type of synaptic plasticity called “heterosynaptic facilitation,” an increase in the synaptic strength of one neuron resulting from activation of a second, independent pathway. Here was a mechanism that, at least in principle, could mediate learning.
Admittedly, Aplysia would not be a model system for understanding complex forms of memory, such as remembering a face, but it might be a model of simpler forms of conditioning such as those described by the Russian physiologist Ivan Pavlov. Why not, after all? If studies on the bacterium E. coli and the fruit fly Drosophila could lead to breakthroughs in understanding molecular biology and genetics, respectively, perhaps Aplysia could do the same for memory. Encouraged by this possibility, Kandel returned to the United States and directed his energies to developing a behavior in Aplysia—a defensive flick of the gill—that could be modified by learning and would be amenable to a cell-biological analysis.
Three young scientists with the courage to tackle the daunting mystery of brain and mind had set their very different trajectories, but trajectories that in years ahead would cross, re-cross, and finally converge—and bring them together in Stockholm at century’s end.
THE NEW CORNERSTONE
Returning to the University of Lund, Carlsson began examining how the serotonin-like catecholamines were affected by reserpine, the antipsychotic drug. He improved the fluorimetric technique and used it to discover that reserpine led to the depletion not only of serotonin in the brain but also of catecholamines. Moreover, injection of animals with L-DOPA (one of the substances from which norepinephrine is formed) led to a rapid reversal of the decreased functioning caused by the reserpine. This reversal turned out to be caused not by increased synthesis of norepinephrine but by an increase in the catecholamine dopamine, the immediate precursor of norepinephrine. Carlsson realized, therefore, that dopamine itself was a neurotransmitter.
What is more, this transmitter became a key to understanding and eventually treating Parkinson’s disease. When Carlsson examined the distribution of dopamine in the brain, he found that it was highly localized in bilateral structures called the basal ganglia, which are important for motor function. This led him to the hypothesis that the motor deficits associated with Parkinson’s disease result from loss of dopaminergic function. Carlsson proved to be right. By 1967, L-DOPA was being used to treat Parkinson’s disease.
Carlsson’s work also had a major impact on understanding depression and schizophrenia. He showed that drugs commonly used to treat schizophrenia acted by blocking dopamine receptors. Carlsson’s work also paved the way for the development of selective serotonin reuptake inhibitors such as Prozac and Paxil, and so for today’s revolution in the treatment of depression.
Although Carlsson made enormous strides in establishing the existence of catecholamine transmitters in the central nervous system, it was not clear how they functioned. For example, did they briefly bind directly to ion channels and cause them to open, as was believed for other neurotransmitters that had been studied? It was Greengard who found that in fact they did not function like conventional transmitters. The catecholamines—and in some cases the classical transmitters, too—had actions that were slow and long lasting and worked via very different mechanisms.
Inspired by the work of Sutherland and Krebs, Greengard, in his new laboratory at Yale, began investigating the role of biochemical pathways in neuronal signaling. His rapid succession of early discoveries became the foundation not only for his continued work but also for work in hundreds of other labs around the world. In 1971, Greengard found that dopamine activated adenylyl cyclase, the enzyme that leads to the synthesis of cAMP, a previously identified intermediary messenger. Then he showed that the brain, like the liver, has the next link in the relay, an enzyme called cAMP-dependent protein kinase, which is activated by cAMP. What was unfolding with remarkable speed was an early glimpse of the principles of a new class of signaling by neurotransmitters.
Neurotransmitters, it turned out, bind to two fundamentally different types of receptors. One type of receptor led to the direct opening of an ion channel and thus changed the potential of the postsynaptic neuron. The second type was not linked to ion channels in the conventional way, but rather to enzymes that led to the production of second messengers (like cAMP). These second messengers activated protein kinases that added phosphate onto proteins to change their function. There is a vast diversity of affected proteins. Indeed, Greengard himself identified more than 100, including membrane channels, ion pumps, receptors for transmitters, and proteins that regulate gene expression. Consequently, a transmitter like dopamine can profoundly affect the target neuron.
The initial action of such a transmitter is slow because it takes several seconds for all the biochemical pathways to be engaged, but the process can be long-lasting because it takes time for the proteins to be dephosphorylated. Nor are these pathways simply linear cascades. As Greengard and others found, multiple slow-acting transmitters converge on a single target neuron. Within the cell there are feed-forward and feed-back pathways interacting within a given cascade as well as influencing different cascades. Thus, the engagement of one second-messenger cascade by a neurotransmitter can affect the extent to which another transmitter can activate its downstream second-messenger cascade and produce or modulate a cellular response. With these discoveries, Greengard had laid the foundation for understanding the neuron’s biochemical software.
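The timing argument above (slow onset because the cascade takes seconds to engage, persistence because dephosphorylation is slow) can be illustrated with a toy simulation. All rate constants below are invented for illustration; they are not measured values from the literature.

```python
# Toy second-messenger cascade: a brief transmitter pulse raises cAMP, which
# drives phosphorylation of substrate proteins; the phosphate comes off only
# slowly, so the cellular change far outlives the stimulus. Rates are made up.

def simulate(t_stim=2.0, dt=0.01, t_end=60.0):
    camp, phospho = 0.0, 0.0
    trace = []
    t = 0.0
    while t < t_end:
        stim = 1.0 if t < t_stim else 0.0          # transmitter present briefly
        # cAMP is produced while the receptor is occupied, degraded quickly
        camp += dt * (5.0 * stim - 1.0 * camp)
        # phosphorylation is fast while the kinase (tracking cAMP) is active;
        # dephosphorylation is slow, so the change is long-lasting
        phospho += dt * (2.0 * camp * (1.0 - phospho) - 0.05 * phospho)
        trace.append((t, camp, phospho))
        t += dt
    return trace

trace = simulate()
# Long after the two-second stimulus, cAMP is gone but substantial
# phosphorylation remains.
t30 = min(trace, key=lambda row: abs(row[0] - 30.0))
print(f"at t=30 s: cAMP={t30[1]:.3f}, phospho={t30[2]:.3f}")
```

The qualitative shape, not the numbers, is the point: the slow step in, the slower step out.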
WHEN GENES CREATE THE STUFF OF MEMORY
But what exactly did this software compute? Returning from an extremely productive stint with Tauc in Paris, Kandel set up a new laboratory at New York University to investigate whether the synapse was the site for learning and storage of memory. He and his colleagues exploited the experimental advantages of Aplysia to establish a new sphere of knowledge in the field. In a key series of papers in 1970, he showed that the animal could exhibit simple forms of learning and, for the first time, revealed that a site for that learning was at the synaptic connection between a sensory neuron and its postsynaptic motor neuron. The next challenge was to determine the mechanisms for these changes; and in 1976, he showed that the synaptic enhancement associated with one form of learning, sensitization, was due to an increased level of cAMP in the sensory neuron and that cAMP enhanced the release of neurotransmitters.
Here, now, was a remarkable convergence in the thinking of Greengard and Kandel. In the early 1980s they even collaborated on several papers. Putting their work together, we see how the synapse was modified: learning led to the activation of second-messenger cascades, which, through protein phosphorylation, modified proteins that regulate the release of neurotransmitters. Kandel went on to show that there were at least two time frames for these changes in synaptic strength. For long-term memory, there was a form of synaptic plasticity that was dependent on synthesis of new protein, while short-term memory was independent of protein synthesis. There was, however, a common thread. Both involved the activation of cAMP and phosphorylation of proteins. Key in inducing long-term changes was the protein CREB, a “transcription factor” that regulates the activation of genes. Kandel found that among the affected genes are those associated with growth of projections of the neuron that allow for additional connections.
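The two time frames Kandel distinguished can be caricatured in code: each training trial produces a transient, cAMP-driven enhancement, while only repeated training pushes a CREB-like signal past a threshold that switches on a lasting, protein-synthesis-dependent change. Every number here (trial interval, decay rate, threshold) is an invented placeholder, not a measured quantity.

```python
import math

# Two time scales of memory at one toy synapse: a short-term component that
# decays between trials, and a long-term component switched on only when an
# accumulated "CREB" signal crosses a threshold. All numbers are placeholders.

def train(n_trials, trial_interval=10.0, decay_rate=0.2, creb_threshold=2.5):
    """Return synaptic strength (just after training, long after training)."""
    short_term = 0.0   # transient cAMP/PKA enhancement of transmitter release
    creb_signal = 0.0  # signal accumulating toward the nucleus
    long_term = 0.0    # protein-synthesis-dependent structural change
    for _ in range(n_trials):
        short_term += 1.0
        creb_signal += 1.0
        if creb_signal >= creb_threshold:
            long_term = 1.0  # new gene expression, growth of new connections
        short_term *= math.exp(-decay_rate * trial_interval)  # fades between trials
    baseline = 1.0
    just_after = baseline + short_term + long_term
    long_after = baseline + long_term  # only the structural change survives
    return just_after, long_after

print(train(1))  # one trial: a short-lived boost, then back to baseline
print(train(5))  # repeated training: a lasting change persists
```

The caricature captures one real feature of the biology: the short- and long-term components share an induction pathway (cAMP signaling) but differ in what maintains them.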
Although there are some differences in the details, Kandel’s learning synapse bears some conceptual resemblance to the signaling pathways in Greengard’s cell. Now, more than 30 years after Kandel’s pioneering work began, the idea that the synapse is a site of memory storage is almost taken for granted. So, too, is the principle that learning involves activating second messenger systems. Kandel did more than make seminal discoveries.
His work led to a paradigm shift in the field of learning and memory. For the first time, an important aspect of cognitive function was demystified, as memory became amenable to cell biological and molecular analyses. The way was also clear to begin analyzing other memory systems in the brain.
RETHINKING THE COMPUTER ANALOGY?
Beyond doubt, there is more to be learned about the catecholamine neurotransmitters, including dopamine, elucidated by Carlsson. Additional modulatory transmitters that target neurons will be discovered, as will additional second messengers, protein kinases, and substrate phosphoproteins. If the historical trend continues, it is likely that our understanding of the guts of the learning synapse will also expand. It seems equally plausible, however, that the basic conceptual insights will not change—that we can now move confidently to the next level of the tower of knowledge of the nervous system.
What problems will we encounter there? All three laureates contributed to our understanding of what actually happens in the brain. In particular, the model of feed-forward and feed-back communication between neurons illustrates the complex processing that can occur. But the very complexity makes it difficult to predict exactly how a neuron will respond to any one stimulus, much less simultaneous, multiple stimuli. Here intuition and visual illustration may break down; mathematical modeling and computer simulations will be required to achieve a deeper understanding of the software. The very metaphor of the brain as a computer and the neurons as its integrated circuits may need revision. The brain is more like a computer network, with each neuron—indeed, each synapse—a separate computer on that network.
Kandel’s brilliant analysis led to the discovery that the synapse is a site for memory storage. The memory for a conditioned response can be stored in a reflex pathway in Aplysia and other animals. Kandel’s recent work, and that of others, indicates that synaptic changes occur in areas of the brain involved in more complex memory tasks. But a single synapse, while very smart, cannot store a complex memory like a mother’s face. It can be likened to a single pixel on a computer monitor or TV screen, where the memory is the entire screen. Complex memories also have multiple modalities (smells, sounds, sights), each seemingly stored in a different brain region. Thus when we recall a complex memory, our brains reconstruct it from fragments drawn from different brain areas. Memory recall has been likened to a paleontologist who reconstructs a dinosaur from partial fragments of bone. Currently, we have no idea of the neural representation of the paleontologist within our brain.
The awarding of the 2000 Nobel Prize acknowledges great achievements in understanding how the nervous system communicates, but these discoveries also have profound implications for understanding brain diseases. Carlsson’s work led directly to a rational treatment for Parkinson’s disease; it seems likely that the discoveries of Kandel and Greengard will benefit the treatment of other diseases such as age-related memory loss, mental retardation, and Alzheimer’s disease. But much remains to be done. There is a treatment for Parkinson’s disease, but not a cure. What are the mechanisms that lead to the degeneration of the dopamine neurons, and how can we prevent this process? Similarly, there are treatments for schizophrenia and depression, but the underlying mechanisms are unknown.
WHAT ARE THE CHALLENGES FOR THE FUTURE?
Awarding the Nobel Prize to three brain scientists at the end of the Decade of the Brain is a fitting tribute to progress made during that decade and indeed the entire 20th century. It is sobering, though, to contemplate what still must be learned. A new generation of architects is building upon the foundation laid by past Nobel Laureates—including Arvid Carlsson, Paul Greengard, and Eric Kandel—to understand the mysteries of the brain and the ways in which it can be repaired when it malfunctions or is injured.
It is not at all clear what the ultimate shape of our tower of knowledge of the brain will be, or when—or even if—it will be completed. The age-old question of whether the brain can ever fully understand itself remains. Indeed, when asked this question during an interview, the three new laureates answered: “No,” “Yes,” and “Probably.”