Friday, October 01, 1999

Inside Modern Memory Research

Memory: From Mind to Molecules

By: Richard Morris

The latter part of the twentieth century has seen spectacular progress in our understanding of both the psychological and molecular aspects of memory. The new findings have nowhere been so effectively summarized as in Larry Squire and Eric Kandel’s new book.

Beautifully illustrated and engagingly written, the book is built around three core ideas. One is a fundamental distinction between two qualitatively different forms of memory called “declarative” and “nondeclarative” memory. The second is the possibility of achieving a detailed understanding of the cell-biological mechanisms that underlie these types of memory in the brain. Third is the hope that by understanding things at the level of the molecules that carry out mnemonic processes, we will not only achieve a scientifically satisfying account of how memory operates but also pave a rational path towards developing better treatments of memory disorders. These may be disorders of old age, as in Alzheimer’s disease, or the equally significant problem of learning difficulties in young children. The subtitle of the book, From Mind to Molecules, conveys this hope. These treatments will incorporate both our better understanding of the cell-biological mechanisms involved (leading to better pharmacological treatments) and, no less important, better appreciation of the mental processes underlying normal memory and how some continue to function long after others have broken down.

An article in the June 1 London Times illustrates this last point. Describing the resident of a nursing home, the article said “Bob was physically fit and strong, but withdrawn and lost in his own thoughts. His memory had gone and he seemed to know nothing of his past or present.” But the nursing staff discovered that he was once a keen gardener, so they outfitted him with old clothes, a pitchfork, and a spade, and took him to a patch of lawn outside the nursing home that had been put aside as a potential vegetable plot. “He knew instinctively what to do and for a few hours each day, gardening drew him back to a familiar world and gave his life renewed meaning.” Bob’s vegetables became the pride of mealtimes at the home. Designers of nursing-home gardens often use props such as old-fashioned clothes lines to help elderly patients feel at home in the lost world of long ago, which they can, in fact, easily recall.

This is Ribot’s Law in action. At the end of the nineteenth century, Théodule Ribot developed his now famous law (infamous to some) that forgetfulness induced by head injury or disease generally impairs recent memories more than those of the distant past. The basis of this law has been the subject of the most exacting behavioral studies by one author (Larry Squire) and its molecular basis is yielding to the neurobiological studies of the other (Eric Kandel).

Memory: From Mind to Molecules is also gracious; whenever an experiment was carried out by colleagues in the authors’ laboratories, or by others elsewhere, those researchers are credited. Good science involves teamwork, guided by inspiring leaders such as the authors of this book, but teamwork nonetheless. That leadership is reflected throughout the book, with the text often taking readers aside to explain concepts that were initially introduced in quite different fields (such as principles governing the release of chemical transmitters in the brain, first described by the British neurophysiologist Sir Bernard Katz). The text then doubles back to explain the relevance of this other work to the main thread of the argument.

Quite apart from reflecting deep scholarship, this feature makes the book akin to a “patchwork quilt” of individual bits and pieces that can only be fully appreciated when one stands back to see the bigger picture. Reading through from cover to cover, and then again for the detail, I began to see the different levels at which the book communicates. Parts of the quilt are straightforward; one can snuggle down and profitably skim-read. Other parts demand a more trained eye to see the poetry in the patchwork. Fortunately, we are guided in the more difficult parts by sophisticated diagrams.

THE “HOW” AND THE “THAT” OF MEMORY

Consider the first core idea of this book, namely, that there are multiple types of memory and, in particular, that they can be classified as either propositional (declarative) or non-propositional (nondeclarative). The gist of this distinction is that some experiences in life are remembered in a manner that later enables them to be called to mind and consciously “declared” (usually in the form of language). This is the everyday sense of memory, as in remembering facts or events such as that Mickey Mantle played for the New York Yankees, that Ernest Hemingway often propped up Harry’s Bar in Venice or, more prosaically, that one had toast for breakfast.

In contrast to this declarative form of memory, other life experiences lead to the development of skills, or influence the way we behave in similar situations in the future, but do not encode information in a form that we need to recall consciously when carrying out a behavior. Motor skills are a case in point. We may have a vivid memory of a skiing holiday in the French Alps, but recalling this does not help us perform again the delicate turn on the ice that we learned there. The “knowledge” that accrues in mastering a skill, such as riding a bicycle or performing a triple jump, becomes embedded or “encapsulated” into sensorimotor sequences that, of necessity, are run rapidly in executing it. There is simply no time to stop and think in a conscious manner. This kind of “knowledge” can be manifested but not declared.

Two of Squire’s best contributions to research on the cerebral organization of memory have been to drive home the relevance of this distinction for the different kinds of memory dysfunction seen in certain patients; and to devise or adapt tasks that can be used to show the astonishing depth of mnemonic processing that can be carried out in the absence of conscious awareness. Several chapters of the book describe this process well—such as Squire’s demonstrations of absolutely normal non-declarative learning in otherwise amnesic patients whose memory of events is exceedingly poor. In work with Neal Cohen, now of the University of Illinois, he showed that such patients can learn to read mirror-reversed words quickly despite being unable to remember the experience of learning to read in a mirror. With Barbara Knowlton of the University of California, Los Angeles, he showed that patients can learn to make accurate, probabilistic weather predictions but fail to remember the individual facts upon which these predictions are based. Such dissociations between abnormal memory for events and normal memory for cognitive skills fascinate because they seem to reveal, in the British neurologist Sir Henry Head’s phrase, “normal function laid bare.”

DISQUIET—TRIVIAL AND PROFOUND

The distinction between declarative and nondeclarative memory, albeit in other terminology, belongs in any accessible treatment of memory. But readers could be forgiven for thinking that the distinction between declarative and nondeclarative forms of memory is widely accepted as beyond dispute. This it most certainly is not. There are both trivial and profound reasons for disquiet. 

The trivial semantic point is that there are almost as many ways of presenting the classification of types of memory processing as Baskin-Robbins has flavors of ice cream. The authors rightly suspect that some of the current confusion stems from downright jealousy. “Certain scientists,” Squire tells audiences, “would no more use another’s terminology than they would use another person’s toothbrush.” The implication is that we are all really talking about the same conceptual distinction—declarative vs. non-declarative—and should stop using other terminology. If that were all there was to it, I would gladly accept his toothbrush.

But no one is sure which of many distinctions about memory—short-term vs. long-term, episodic vs. semantic, explicit vs. implicit—are physically reflected in the nervous system. Some scientists, such as the Canadian experimental psychologist Endel Tulving and the British neuropsychologist Lawrence Weiskrantz, doubt the adequacy of binary distinctions or simple taxonomies. They favor the idea that remembering is different from knowing, that knowing is different from representing, and that representing is, in turn, different from merely doing. Like Squire, they link “remembering” to consciousness but, importantly, they argue that this consciousness must involve a sense of “the self.” A recalled event did not merely happen, it may have happened “to me” or “to them,” where “me” or “them” are agents that the person (or animal) doing the remembering knows about and can distinguish. They also doubt that the propositional components of memory can be so neatly identified with the medial temporal lobe, as Squire argues here, and I am bound to say that there are good grounds for their caution. Several emerged in recent brain imaging studies using PET and fMRI, a frenetically active field only lightly touched upon in the book.

Problems with the binary declarative/nondeclarative distinction are actually on display in the descriptions of nondeclarative memory offered here. Of nonassociative learning (such as habituation), we are told that “a subject learns about the properties of a single stimulus—such as a loud noise—by being exposed to it repeatedly.” Of associative conditioning, we are told that “the animal will learn to associate pressing a bar or a key with the delivery of food: when it presses the bar, it expects to receive something to eat.” But hold on a moment. What is “nondeclarative” about either description? Learning the “properties” of a noise is surely information that we can declare—that it is high- or low-pitched, loud or soft, and so on. Similarly, if the animal learns an expectation, is this not a mnemonic representation that can be declared?

My own understanding of both these forms of learning is somewhat different. Edward Thorndike, whose development of the first paradigms for instrumental conditioning is recognized in the book, claimed that the cat in his Columbia University puzzle box did not learn to expect anything; it merely learned to increase the probability of whatever action resulted in a satisfying state of affairs. This is certainly one way in which instrumental conditioning might work, but ingenious experiments by the Cambridge University experimental psychologist Anthony Dickinson have established that animals not only sometimes learn habits (as Thorndike claimed) but also sometimes learn actions (in which the outcome of the behavior is encoded as an expectation). Thus, the classification of conditioning as “nondeclarative” is misleading. And merely putting “simple” in front of “conditioning” will not do.

Interestingly, elsewhere in this book Kandel’s account at the cellular level of both nonassociative and associative conditioning is similar to Thorndike’s in that the marine mollusk Aplysia that Kandel has studied so intensively does not appear to learn about the “properties of a stimulus” (sic) at all—merely to change its reactivity to it. The mollusk does not know whether it has been touched gently or prodded angrily—it does not have the representational capacity or appropriate circuitry to acquire such knowledge. But that does not mean it cannot learn. It can and it does, and no one has taught us more about how it learns what it does learn than Eric Kandel and his colleagues.

CONSCIOUS LEARNING

There is yet more to ponder, notably in Squire’s inconsistent insistence that declarative memory be “conscious.” The caption for an illustration in the book puts it strongly: “The faculty of declarative memory,” we are told in a commentary on a painting by Milton Avery, is “...essential for virtually all conscious mental activity.” The inconsistency emerges when this very criterion is relaxed for non-human animals—declarative memory then becoming any type of memory of which they are capable that is impaired by lesions of the medial temporal lobe. This circularity of the theory as applied to animals has rightly been of concern to others, there being no very obvious rubric to predict in advance whether a task will be learned declaratively or nondeclaratively.

My sympathies on this issue are divided, however, because I wonder if critics favoring alternative classifications are any better at identifying episodic, semantic, or even spatial memories in animals. Take the last by way of example. If a rat swims from the sidewalls of a circular water maze to the escape platform hidden in one quadrant, we presently classify this as “spatial learning.” But beyond a merely descriptive statement, how can we be sure that all it is learning is where things are located? Perhaps the animal is also remembering events (even if it does lack a sense of the self); perhaps it is learning other facts about the world as well (not tested in the usual measures of performance). The concept of “spatial learning” deserves no less critical scrutiny than the concept of “declarative memory.”

A further difficulty with insisting on consciousness in any definition of memory is that, to be clear rather than vague (always a good thing in a scientist), one must specify what a person has to be conscious of and when. Endel Tulving’s demand that we possess what he calls “autonoetic consciousness,” i.e., a sense of the self, strikes me as overly restrictive but, if we allow ourselves to be more liberal, what sort of consciousness are we talking about? Do I have merely to be awake—the neurological definition? Surely it is more than that. Do I have to be attentive, too? But this is also a requirement for nondeclarative learning. If awake, attentive, and conscious, what do I have to be conscious of? Do I have to be conscious of the material at the time I encode a memory or only when I later recall it? I confess that I am barely attentive at breakfast, still less do I consciously encode what I am eating. But ask me a few hours later (but not much longer) and I could tell you with near perfect accuracy whether it was toast or cereal and could recreate the scene in my mind’s eye.

This leads me to think that it is consciousness at the time of recall that is critical. Even quite severe amnesiacs, such as the two patients described in this book as H.M. and E.P., are “conscious” in the accepted everyday sense of the term as you talk to them; and they will also make a conscious effort to remember something when asked to do so even though they will inevitably fail. Thus, the criterion of consciousness for declarative memory turns out to be a criterion having to do with the character of what eventually gets recalled. Ironically, retrieval and recall are memory processes whose importance is underplayed in the theory of declarative memory.

In my judgment, we need to be much more precise about what it is a person has to be conscious of for their memory to be labeled declarative. Fortunately, there are other anchors for propositional knowledge and episodic memory such as, respectively, the ability to use it inferentially or to remember the spatio-temporal context in which an event happened. Experiments by Howard Eichenbaum and others, some discussed in the book, may yet provide alternative foundations on which a revised theory can be built.

THE CELLULAR NUTS AND BOLTS

Knowing the different types of memory is one step; knowing that different bits of the brain are involved in each is another. Still another, more challenging step is working out how activity in these brain circuits allows memories to be encoded. In the 1890s, the Spanish anatomist Santiago Ramón y Cajal had suggested a likely site where this might happen—the connections between neurons. Fifty years later, the Polish scientist Jerzy Konorski and the Canadian neuropsychologist Donald Hebb had each proposed rules governing the circumstances in which synaptic changes may occur during learning. Now, a further 50 years later, Kandel’s contribution has been to produce the key evidence that such changes do occur.
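
Hebb’s rule is often paraphrased as “cells that fire together wire together.” Purely as an illustrative aside, and not something drawn from the book, a minimal rate-based sketch of such a rule might look like the following; the function name, learning rate, and toy inputs are my own assumptions:

```python
import numpy as np

# A minimal Hebb-style update: a connection strengthens when its
# presynaptic and postsynaptic units are active at the same time.
def hebbian_update(weights, pre, post, learning_rate=0.01):
    return weights + learning_rate * np.outer(post, pre)

# Toy example: two presynaptic inputs onto one postsynaptic unit.
weights = np.zeros((1, 2))
pre = np.array([1.0, 0.0])   # only the first input is active
post = np.array([1.0])       # the postsynaptic cell is active too
for _ in range(10):
    weights = hebbian_update(weights, pre, post)
print(weights)  # only the co-active connection has grown: [[0.1 0. ]]
```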

My impression from reading this book is that the early studies of Bernard Katz were a particular inspiration to Kandel. It was Katz who worked out the basic “quantal” principles (all-or-none response) of synaptic transmission using model systems of synapses formed by the squid giant axon and the frog neuromuscular junction. The success of this, and of other important work going on at the time, led Kandel to choose his own model system in which to study the neurobiology of learning. He chose a marine mollusk, Aplysia californica, focusing on the few hundred neurons of the abdominal ganglion, where the animal’s breathing apparatus of gill and siphon is neurally controlled. To the layman, it may seem an odd choice to make, but the large size of the nerve cells and the apparent simplicity of its circuitry (which had to be painstakingly worked out) made the problem potentially tractable.

In a series of brilliant experiments in the 1970s, Kandel and his colleagues demonstrated that synapses change in efficiency during learning, increasing or decreasing as appropriate, and that these changes may be either short- or long-lived. They used a combination of behavioral and neurophysiological techniques to show how learning that resulted in increases or decreases in the responsivity of organs in the Aplysia was associated with exactly corresponding changes in the amount of chemical transmitter released onto the “motor” neurons in those organs. When the changes are long-lasting, structural changes accompany this increased release of neurotransmitters. This work, which many regard as worthy of the Nobel Prize, is beautifully described in Chapter 3.

A REDUCTIONIST TREASURE HUNT

The reductionist is, however, never satisfied. There is always one more chamber to dig down to where, it is hoped, yet more hidden treasure is to be found. In the 1980s, Kandel began to wonder about identifying the proteins and other molecules involved in mediating the synaptic changes that he had identified. Which are involved in short-term changes? Which in longer-lasting changes? Scientists also became aware that any “general theory of the neurobiology of learning and memory” would have to take into account that there are multiple types of memory: not just short- and long-term, but also the subtypes within the domain of long-term memory. Kandel and his colleagues therefore diversified their efforts. Some stayed loyal to Aplysia, or at least to its neurons, growing them in tissue culture and then applying the techniques of modern cell biology to try to work out the biochemical cascades mediating synaptic change.

Others in Kandel’s laboratory, together with many laboratories worldwide, began to study a phenomenon that had been serendipitously discovered by Terje Lømo, a Ph.D. student in a Norwegian laboratory. Working on the very same brain area that, if damaged, causes amnesia in humans, Lømo and a British colleague, Tim Bliss, found that they could augment the strength of otherwise stable synaptic connections by specific patterns of electrical stimulation. This discovery, soon to be called long-term potentiation (LTP), rapidly became the basis of many studies. Struck by the similarity to what occurred in Aplysia during learning, everyone wanted to know what the physiological properties of LTP were, whether it occurred during other kinds of learning, and what its underlying mechanisms of induction and expression were. This field of research has been a roller coaster of a scientific ride, with experiment after experiment shifting the balance of opinion from year to year about first one detail and then another. Chapter 6 tells much of this story, but it is a long and complicated tale, which needs a book of its own.

Kandel’s own journey has immersed him in two quite separate controversies. First, do you find evidence of the enhanced synaptic efficacy that underlies LTP on the sending (presynaptic) or the receiving (postsynaptic) side of the synapse, or on both? Those favoring an exclusively postsynaptic account point out that that is where you find the receptor protein that is critical for triggering the change in synaptic efficacy. Kandel and his colleagues, perhaps hoping for closer parallels to the synapses in Aplysia than the evidence has quite allowed, have sought a presynaptic explanation. He is by no means alone in holding this view, which, if vindicated, would probably mean that regulation of transmitter release is the exclusive mechanism by which memories are stored in both the vertebrate and invertebrate brain. This would be a marvelously parsimonious explanation, not to mention a momentous discovery. 

The jury is still out on this particular issue, and in science, unlike in diplomacy, there is no room for compromise unless the evidence justifies it. What is exciting for scientifically literate spectators of this debate is that both positions are associated with radical new ideas about the nervous system. It is not just a fight about facts; it is also about new ideas. Kandel finds himself questioning the classical concept of the polarity of synapses. Not for him does communication go only in one direction from “pre” to “post”; he believes it must go both ways. This hypothesis requires the postulation of a mysterious “retrograde” messenger that can carry information back to the presynaptic side and indicate that critical things have happened on the postsynaptic side. The prize here, if he is right, is an astonishing similarity to enhancement of transmitter release in Aplysia neurons.

The postsynaptic camp likewise finds itself with a mystery, wondering not just about how to make synapses stronger, but how they ever become functional at all. Perhaps synapses get made but, so the metaphor goes, they remain silent until asked to speak. LTP may be a process of turning silent synapses into communicative ones. This, it turns out, is quite a clever way in which a genetically programmed developing nervous system can “fine-tune” its connectivity in response to neural activity. Tricks used during the development of the brain may also hang around and get put to a new purpose in the adult brain. Squire and Kandel quote the lovely French word “bricolage” (odd job) to capture the sense in which the adult nervous system tinkers with tricks left over from an earlier stage of development and of evolution.

THE “ONE-TO-MANY” AND “MANY-TO-ONE” PROBLEMS

Kandel is also immersed in the debate about how long synaptic changes may last. LTP is, in truth, something of a misnomer because the alteration in synaptic efficacy can often be quite short-lasting, a matter of hours at most. Study of behavioral learning in Aplysia and other animals has shown that protein synthesis is critical for getting memory traces to last much longer than this, although no one knows definitively exactly what proteins are involved. Until very recently it has been widely assumed that most of the relevant protein synthesis is carried out inside the cell body. If so, we have two problems on our plate, which Hirohiko Bito of Kyoto University has aptly called “the many-to-one problem” and “the one-to-many problem.” If memory traces are laid down at the thousands of synapses of each neuron and the proteins that render any such changes long-lasting are made in the one nucleus of that same neuron, we have to work out how the many synapses tell the one nucleus to make proteins and how, once made, these proteins find their way back to the appropriate synapses.

Kandel’s answer to the “many-to-one” problem is to argue that changes at the synapses activate an enzyme called PKA (cAMP-dependent protein kinase) that then “translocates” to the nucleus of the cell. This hypothesis is a serious contender, but not the only one: other enzymes may also translocate. Ingenious experiments in which mice are engineered to lack a particular enzyme, an exciting approach in which Kandel and his colleagues have also been pioneers, are helping to sort this out. A German colleague from Magdeburg, Uwe Frey, and I have suggested that the traffic problem would be greatly simplified if potentiated synapses set a tag (a kind of neuronal catcher’s mitt) that could gather together proteins related to plasticity. That way, the proteins would not need to know where they were going; they could travel relatively diffusely in the neuron’s dendrites and still end up caught where they were needed. My own further research and that of Kelsey Martin and Kandel have vindicated the general idea.
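
To make the tag idea concrete, here is a deliberately simplified toy sketch in Python. It is my own illustration rather than anything from the book or from the original experiments, and every name, threshold, and number in it is invented: strongly stimulated synapses set a local tag, plasticity-related proteins made in the cell body are available to every synapse, but only tagged synapses capture them and keep their potentiation.

```python
import random

# Illustrative parameter only; it does not correspond to a measured value.
TAG_THRESHOLD = 0.5   # how strong a potentiation event must be to set a tag

def consolidate(synapses, protein_supply=True):
    """Only synapses tagged by strong activity capture the diffusely
    available proteins and so keep their potentiation."""
    for s in synapses:
        if s["tagged"] and protein_supply:
            s["weight"] += s["pending"]   # the change becomes long-lasting
        s["pending"] = 0.0                # untagged changes simply decay away
    return synapses

# One neuron with many synapses; only some receive strong stimulation.
synapses = []
for _ in range(1000):
    stimulation = random.random()
    synapses.append({
        "weight": 1.0,
        "pending": stimulation,                # transient, early-phase change
        "tagged": stimulation > TAG_THRESHOLD, # strong events set a local tag
    })

consolidate(synapses)
kept = sum(1 for s in synapses if s["weight"] > 1.0)
print(f"{kept} of {len(synapses)} synapses kept their potentiation")
```

The point of the sketch is simply that the proteins need no address: every synapse can see them, but only the tagged ones make use of them.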

So where does this leave us? Clearly we have come a long way from the very general statements about the basis of memory by Cajal, Konorski, and Hebb. Through the work of Kandel and others, we now know some of the key molecular players involved in altering transmitter release, enhancing postsynaptic sensitivity, and making such changes structurally stable over time. We are therefore beginning to understand how memory works on the cellular and molecular level. Charles Stevens of the Salk Institute calls it “the beginning of a dream.” That is, by any standards, a substantial achievement—much of it realized during the last 20 years of this century. Memory: From Mind to Molecules is an excellent introduction to this astonishing body of work.

THE FUNCTIONAL ARCHITECTURE OF NEURAL CIRCUITRY: THE CHALLENGE OF THE NEXT CENTURY

The last chapter of Memory: From Mind to Molecules sketches out some examples of the likely impact of this work. One important implication of the concept of synaptic plasticity is that no two brains are exactly alike, not even those of identical twins, because neural activity constantly shapes and fine-tunes the intricate connectivity of the brain. Sherrington once likened the brain to an “enchanted loom.” The concept may seem quaint today, but its modern variant is that the different experiences of a person’s life allow many different patterns of connectivity to be woven. This idea has, in the hands of Michael Merzenich of the University of California, San Francisco, whose work is discussed in this book, even led to the development of computer games that dyslexic children can play to learn letter-sound correspondences and so help them to read.

While benign forgetfulness can sometimes be a blessing, major breakdown of the systems responsible for memory can be catastrophic. The book is brutally honest, as it should be, about the fact that we still do not know why the tombstone plaques and neuritic tangles of Alzheimer’s disease wreak the havoc that they do. But Kandel and Squire hold out the promise, and it is right to do so, that understanding the biological basis of memory mechanisms is our best hope for developing treatments for this and other afflictions of old age.

The scientific chasm that still needs to be bridged, in my view, is our relatively poor understanding of what information is represented in different circuits in the brain (including those involved in memory). We need to know more about how information is represented as patterns of activity (including patterns in time) and how the biophysical properties of neurons, their dendrites and axons, enable the bits and pieces of information represented at different sites to be bound together into a coherent whole. This level of analysis is one step below the behavioral domain so well described by Squire, and one step up from the cellular level occupied by Kandel.

But a gap remains between these two levels of analysis: exactly how the myriad local circuits that have been worked out by anatomists and physiologists are tailored to the differing information-processing tasks they perform. This is the level of the functional architecture of neural circuitry. While reductionists like Kandel are avidly pursuing the path of identifying proteins and the intracellular roles they perform, I am inclined to pause in a back eddy of this “post-genomics” age. There are secrets in the circuitry yet to be gleaned.

EXCERPT

From Memory: From Mind to Molecules by Eric R. Kandel and Larry R. Squire. © 1999 by Scientific American Library. Used with permission of W.H. Freeman and Company.

The unification of molecular biology and cognitive neuroscience is specifically illuminating the two components of memory that we have considered in this book: the memory systems of the brain and the mechanisms of memory storage...

Consider just three findings central to our current understanding that have emerged from the study of the brain’s memory systems. First, memory is not a unitary faculty of the mind but is composed of two fundamental forms: declarative and nondeclarative. Second, each of these two forms has its own logic—conscious recall as compared to unconscious performance. Third, each has its own neural systems.

The molecular study of the mechanisms of memory storage, in turn, has revealed previously unsuspected similarities between declarative and nondeclarative forms of memory. Both forms of memory have a short-term form lasting minutes and a long-term form lasting days or longer. With both forms of memory, the short-term and long-term forms rely on a change in synaptic strength. In both cases, short-term storage calls for only a transient change in synaptic strength. Ultimately, in each case, the activation of genes and proteins is necessary for converting short-term to long-term memory. Indeed, both forms of memory storage seem to share a common signaling pathway for activating shared sets of genes and proteins. Finally, both kinds of memory appear to use the growth of new synapses—the growth of both presynaptic terminals and dendritic spines—to stabilize long-term memory.

One further achievement of this new synthesis is the appreciation that we commonly use the memory systems of the brain together. Consider, for example, the viewing of a vase sitting on a table. Our perception of the vase gives rise to a number of different unconscious and conscious effects that can persist as memory. The unconscious memories are particularly diverse. First, the ability to detect and identify the same vase later will be enhanced through the phenomenon of priming. Second, the vase could serve as a cue in the gradual acquisition of a new behavior or habit, shaped by reward. Presentation of the cue (the vase) signals that expressing the behavior will be rewarded. Third, the vase could serve as a conditioned stimulus (CS) and come to elicit a response that is appropriate to coping with an unconditioned stimulus (US) such as a loud noise. Fourth, if the encounter leads to a distinctly pleasant or unpleasant outcome, then one may develop strong positive or negative feelings about the vase. Learning feelings of like or dislike requires the amygdala, learning habits requires the neostriatum, and learning a discrete motor response to a CS requires the cerebellum.

All these memories, potentially triggered by the vase, are unconscious. They are expressed without the experience of any memory content and without the feeling that memory is being used. Moreover, all these memories are the result of cumulative change. Each new moment of experience adds to, or subtracts from, whatever has just preceded. The resulting neural change is the sum of these moment-to-moment changes laid upon each other. There is no sense in which these kinds of memories separate out and store the various individual episodes, each with their own context of time and place, that together make up the cumulative record. In these cases, the vase is a basis for improved perception or a basis for action, but the vase is not remembered as something encountered in the past.

Conscious declarative memory is very different and provides the possibility of recreating in memory a specific episode from the past. In the case of the vase, we can later recognize it as familiar and also remember the encounter itself, the specific time and place when a unique combination of events converged to create a moment involving this particular vase. For each declarative or non-declarative memory that might be formed from the encounter with the vase, the starting point is the same distributed set of cortical sites that are engaged when one perceives the vase. Declarative memory uniquely depends on the convergence of input from each of these distributed cortical sites into the medial temporal lobe and ultimately into the hippocampus, and the convergence of this input with other activity that identifies the place and time in which the vase was encountered. This convergence establishes a flexible representation such that the vase can be experienced as familiar and also be remembered as part of a previous episode...

Although much has been learned, all the work to date on the molecular biology and cognitive neuroscience of memory has provided us with just a beginning. We still know relatively little about how and where memory storage occurs. We know in broad outline which brain systems are important for different kinds of memory, but we do not know where the various components of memory storage are actually located and how they interact. We do not, as yet, understand the functions of the various subdivisions of the medial temporal lobe system and how they interact with the rest of the cortex. We also do not understand how declarative information becomes available to conscious awareness. We know almost nothing about how an earlier encounter with a vase is retrieved from memory, or what actually happens as the vase is gradually forgotten, or why it is so easy to confuse a memory with a dream or with something we have only imagined.

Similarly, although we have identified a small number of genes and proteins that actually switch short-term to long-term memory, we have a long way to go before the molecular steps required for the establishment of the structural changes of long-term memory are completely understood...

Some answers to these questions will come from using imaging techniques designed to visualize the human brain while it carries out cognitive tasks of learning, remembering, and forgetting. These experiments will give us correlations between cognitive activities and neural systems for memory. To obtain an understanding of causal mechanisms, scientists can turn to genetic techniques in mice that allow genes to be expressed or eliminated in specific regions and even in specific cells. The most penetrating insights, however, will come from the continued interplay of these two approaches—the molecular biological analysis of cognition and the use of anatomy, physiology, and behavior to analyze the functions of the brain systems that support cognition.



