For more than a century, great minds in psychology, medicine and philosophy have searched for the stuff of which memories are made. Earlier this year, an interdisciplinary research team led by Gary Lynch, a professor of psychiatry and human behavior at the University of California, Irvine, may have discovered physical evidence of the neurobiological basis of a memory.
Using an innovative microscopy technique, Lynch and his team believe they have found evidence of an engram: a lasting physical change in the brain produced by learning.
In recent years, scientists have made major discoveries about what genes, neurotransmitters and molecules do to cause a memory to form, but there has been a missing link—evidence of any lasting physical change from all that activity. Lynch's team offers the first proof of such a change, completing the picture implied by the earlier work.
The idea that memories were encoded by connections between brain cells was first suggested in the late 1800s by Théodule Ribot, a French philosopher and psychologist. “Ribot said this before we had the word synapse for those connections,” says Lynch. “And since then, this idea that synapses in the brain are going to change when you learn has stuck around.”
The memory “holy grail”
Many researchers have searched for ways to test this idea empirically: to find evidence of engrams in the brain and then map their pattern and location, in order to better understand how humans learn.
“Since the early 20th century, it’s been one of the holy grails of behavioral neuroscience, to discover if a memory is widely distributed across the cortex or located in a few key areas,” Lynch says. “If we had a good idea of where in the brain a memory was located, and how it was located there, we might be able to define what a memory actually is. Right now, we can’t do that. There really is not a description of memory from a neurobiological perspective.”
In previous experiments, Lynch’s group studied brain slices from rats after a learning exercise. The researchers identified chemical markers in synaptic connections in the hippocampus that had recently undergone long-term potentiation (LTP), an increase in synaptic strength produced by repeated stimulation.
In their newer study, published in the July 24 issue of the Journal of Neuroscience, Lynch and colleagues examined the hippocampal area of live rats that had learned to navigate a complex environment. The researchers used an advanced three-dimensional microscopic technique, called restorative deconvolution microscopy, that focuses on infinitesimally small objects by deconvoluting, or computationally washing out, the light from other nearby objects. This permitted the researchers not only to look at objects as small as a synapse but also to measure them.
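The core computational idea behind deconvolution microscopy can be illustrated with a much simpler classical method. The sketch below, in Python, uses Wiener deconvolution on a toy image to show how dividing out a known blur in the frequency domain recovers a sharp point from a smeared one; the function name, toy image, and parameters are illustrative assumptions, not details from the study.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, balance=0.01):
    """Sharpen a blurred image via Wiener deconvolution.

    `blurred` is the observed image, `psf` the point-spread function
    (the blur kernel, same shape as the image, centered at [0, 0]),
    and `balance` a regularization constant that keeps noise from
    being amplified where the blur destroyed most of the signal.
    """
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    # Divide out the blur where |H| is large; damp the result where
    # |H| is small, which would otherwise blow up noise.
    F = np.conj(H) * G / (np.abs(H) ** 2 + balance)
    return np.real(np.fft.ifft2(F))

# Toy demo: blur a single bright point, then deconvolve it back.
image = np.zeros((64, 64))
image[32, 32] = 1.0  # a lone point source, standing in for a tiny object

# Gaussian point-spread function centered at the origin (wrap-around).
y, x = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
d2 = np.minimum(y, 64 - y) ** 2 + np.minimum(x, 64 - x) ** 2
psf = np.exp(-d2 / (2 * 2.0 ** 2))
psf /= psf.sum()

blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
```

After deconvolution, the restored image concentrates the light back near the original point, with a much higher peak than the blurred version, which is the sense in which the technique lets one resolve and measure objects that the raw optics smear together.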
“After we train the animals, there is a substantial increase in [dendritic] spines with the LTP marker,” says Lynch. “And when we look at those spines more closely, the synapses are indeed larger. With pretty good probability, we can point to this particular synapse, its increased size and number of receptors, and say that it was something created by memory. The face of the memory, or the actual physical appearance of the memory in this one part of the hippocampus, is that enlarged synaptic connection.”
The Kantian perspective
Linda Palmer, a research scientist in the Department of Philosophy at Carnegie Mellon University and member of the Center for the Neural Basis of Cognition, developed the learning environment used in the study. A Kantian philosopher, she became involved with neuroscientific work to see whether there was a neurobiological basis to Immanuel Kant’s philosophical theories about how people mentally represent the world around them.
“Hume and Locke, philosophers who came a little before Kant, believed that when we walk into a room and see an object, maybe a phone or a desk, our mental representation of that object is just a copy of our sensory impressions,” says Palmer. “Kant thought this was wrong; that a representation had to be actually constructed.”
Kant hypothesized that in order to create a mental representation, a person had to take sensory impressions from an experience, arrange them in space and time and then categorize them in terms of a concept.
“These are things you have to learn from experience,” says Palmer. “If you walk into a room, see objects like a stethoscope or an examining table, you know that it is a doctor’s office. But when you see the telephone in that same room, you can think about it on a lower level, that it is a telephone. There’s a concept there, a right way of organizing this data. But how do you know that you’ve come up with the right way?”
Partnering with Lynch and another of the current study's co-authors, Christine M. Gall, professor of anatomy and neurobiology at the University of California, Irvine, Palmer hopes to understand more about how the brain connects the dots when learning and applying concepts, as well as to see how much Kant got right.
The importance of learning type
William T. Greenough, a professor of psychology at the University of Illinois at Urbana-Champaign, studies the creation of new synapses and potential differences in the properties of previously existing synapses after memory or memory-like processes occur in the brain. “I’ve been in the synaptic shape and size business since the 1970s. There is nearly 40 years of history that shows a relationship between synaptic shape change and behavioral learning.”
As such, he is reluctant to call Lynch’s team’s finding entirely novel. He is also wary of calling it an engram.
“The engram is a term that is used for the memory trace or the actual change in the nervous system that underlies memory,” he says. “The ideal engram would be fairly complicated – a comprehensive description of the changes in the entire nervous system after learning of some sort.”
But that’s not to say that this finding is unimportant for memory research. The type of learning Lynch’s team used for the study is of specific interest, Greenough says.
“Arguably, Lynch has really opened things up by putting an animal in a natural situation – what he calls unsupervised learning – where the animal learns on its own as it explores a novel environment,” explains Greenough. “If you think about the number of different things the animal may be learning, the number of things that different animals could be learning, it is awesome that the changes in the brain that one sees are as focused as they are.”
Greenough believes that this type of learning paradigm could tell us a lot more about memory in the future. “In a sense, it’s a new paradigm. It may well be as powerful, ultimately – time will tell – as more traditional training environments like the Morris water maze or the eight-arm radial maze.”
Wider implications of the stuff of memory
Lynch and his colleagues will be the first to tell you that there is still much to understand about LTP effects in the cortex. “We don’t know nearly as much as we’d like to.”
But he does believe that his team's finding will be the first of many to help cognitive psychologists and neurobiologists find some common ground when it comes to memory.
“There is a huge disjunction when you talk about memory in neuroscience versus psychology,” he says. “The gulf is largely there because we can’t tell you what memory is actually composed of.”
Linda Palmer thinks that this work can also extend to the philosophical community. “We’re getting to a stage where neuroscientists and philosophers will actually have something to say to each other,” she says.
Future work in this area could help us figure out which areas of the brain encode new memories and help formulate a true neurobiological theory of memory, Lynch says. He also believes that, down the road, these findings might help scientists better understand why memory fails and point to possible medical interventions to prevent it.
“Assuming that the same LTP effects hold true in the cortex, there is nothing standing between us and understanding how memory works at this point besides a monumental amount of work.”