Computational Models Reveal New Insights in Neuroscience

by Kayt Sukel

August 6, 2008

Neuroscientists have learned a great deal about how the brain works, from the molecular to the functional level, in the past few decades. But that knowledge has brought other questions, including a big one: How can researchers, often trained in a particular neuroscience discipline, understand and use all that data to develop the right kind of hypotheses to test?

“It’s one of the most daunting challenges of neuroscience,” says Michael Frank, a researcher who studies the basal ganglia at the University of Arizona. “Not that there is not enough data but rather that, in some ways, there is just too much to link among the different types of technologies and data.”

One way that researchers are tackling that challenge is with computational models, often called neural networks. A computational model is a computer program built to mimic some aspect of the brain’s behavior, grounded in the results of established experimental studies.
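
At their simplest, such models are built from units like the one sketched below: each unit takes a weighted sum of its inputs and passes it through a saturating nonlinearity, a crude stand-in for a neuron’s firing rate. This is a generic connectionist illustration with arbitrary numbers, not a reconstruction of any specific model discussed in this article:

```python
import numpy as np

def sigmoid(x):
    # Squash any input into (0, 1), loosely analogous to a
    # neuron's bounded firing rate.
    return 1.0 / (1.0 + np.exp(-x))

# Activity of three upstream units and the synaptic weights
# connecting them to our unit (positive = excitatory,
# negative = inhibitory). Values here are arbitrary.
inputs = np.array([0.2, 0.8, 0.5])
weights = np.array([1.5, -0.7, 0.3])

# The unit's output: weighted sum of inputs, then the nonlinearity.
rate = sigmoid(inputs @ weights)
print(rate)  # a single "firing rate" between 0 and 1
```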

“Whenever you talk about function, you have an idea of how something works,” says David Plaut, a researcher at Carnegie Mellon University’s Center for the Neural Basis of Cognition. “But to communicate those ideas to other scientists, you need a language in which to describe that function. Computational modeling is one of the only ways, an inherently dynamic way, where you can specify that function precisely.”

Many models are programmed to imitate a specific level of the nervous system: its functional, neural, circuit, or molecular properties. But as technology progresses, many researchers are attempting to cross those boundaries and build more comprehensive systems. And more and more researchers are looking to computers to help them construct and test new neuroscience theories.

Using models to represent meaning

For decades, cognitive psychologists and neuroscientists have struggled to explain how the brain represents and processes language. In the May 30 issue of Science, a cross-disciplinary team of scientists at Carnegie Mellon described a new computational model that suggests how the brain represents the meaning of concrete nouns (words for objects you can see, hear, feel, taste, or smell) by predicting the activation patterns those nouns evoke during a functional magnetic resonance imaging (fMRI) scan.

“Before the days of neuroimaging, it was plausible to think that the meaning of a word could be represented in a single location in the brain,” says Marcel Just, director of CMU’s Center for Cognitive Brain Imaging and one of the lead authors of the study. “But as neuroimaging advanced, it became clear that everything—all thought—has a distributed representation that involves many parts of the brain.”

The model was built using previously collected fMRI data showing activation patterns as human subjects thought about individual words. For 60 concrete nouns, Just and collaborator Tom M. Mitchell, head of Carnegie Mellon’s machine learning department, used a trillion-word text corpus to link those nouns to the verbs with which they commonly co-occur. “If you think about what words occur with the noun ‘book,’ you get verbs like ‘read’ but not ‘eat,’ ” says Mitchell. “You add up the fMRI activation contributions for all of the verbs that occur with ‘book,’ for this corpus filled with trillions of words, and then see what you can predict with an arbitrary noun.”
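
The scheme Mitchell describes is, at heart, a linear model: the predicted activation of each voxel for a given noun is a weighted sum of that noun’s co-occurrence statistics with a set of intermediate verb features, with the weights learned from the observed fMRI images. Here is a minimal sketch of that idea; the array names, dimensions, and random placeholder data are illustrative, and the paper’s actual feature set and preprocessing differ:

```python
import numpy as np

n_nouns, n_verbs, n_voxels = 60, 25, 500  # illustrative sizes

# features[i, j]: normalized co-occurrence of noun i with verb j,
# as computed from a large text corpus (random placeholder here).
features = np.random.rand(n_nouns, n_verbs)

# images[i, :]: observed fMRI activation image for noun i
# (random placeholder standing in for real scan data).
images = np.random.rand(n_nouns, n_voxels)

# Learn each verb's per-voxel activation contribution by solving
# images ≈ features @ weights in the least-squares sense.
weights, *_ = np.linalg.lstsq(features, images, rcond=None)

def predict_image(noun_features):
    """Predicted activation image: sum of verb contributions,
    weighted by how often the noun co-occurs with each verb."""
    return noun_features @ weights
```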

After training on 58 of the 60 words, the model was able to correctly identify the distributed activation images for the two held-out nouns 77 percent of the time. “If it was just guessing, you’d only see the system get half of them correct,” says Mitchell. “Instead, in three-quarters of the cases, it was able to match up fMRI images that it had never seen before.”
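
The 77 percent figure comes from repeating a leave-two-out test: train on 58 nouns, predict images for the two held-out nouns, and count the trial as correct if the predictions line up with the right observed images rather than the swapped pairing. A rough sketch, reusing the placeholder arrays above and cosine similarity as the match score (the paper’s exact similarity measure and voxel selection are not reproduced here):

```python
from itertools import combinations

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

correct = 0
pairs = list(combinations(range(n_nouns), 2))
for i, j in pairs:
    # Refit the model with nouns i and j held out.
    train = [k for k in range(n_nouns) if k not in (i, j)]
    w, *_ = np.linalg.lstsq(features[train], images[train], rcond=None)
    pred_i, pred_j = features[i] @ w, features[j] @ w
    # Correct if matching each prediction to its own image beats
    # the swapped assignment.
    matched = cosine(pred_i, images[i]) + cosine(pred_j, images[j])
    swapped = cosine(pred_i, images[j]) + cosine(pred_j, images[i])
    correct += matched > swapped

print(f"accuracy: {correct / len(pairs):.2f}")  # chance level is 0.50
```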

Just and Mitchell plan to take the model further; they’d like to test word combinations and more-abstract language concepts. But they are also interested in using the model for disease studies.

“We’ve started a little work to see how the representation of certain concepts may be altered in particular neurological conditions,” says Just. “The fMRI image of concepts like friend or adversary or parent may be represented slightly differently in individuals with autism. And now we have the tools to assess that.”

Models are informing neuroscience

Plaut, who was not involved in the Just/Mitchell study, also uses computational models to study language at Carnegie Mellon; his work examines how the brain represents words with multiple meanings, both in healthy people and in those with brain damage. He argues that modeling is a kind of experimentation on its own terms.

“You have ideas about how some function operates, and one way to examine those ideas is to run an experiment to see,” he says. “But what you really need to know is the real implications of those ideas. If you claim that the information is represented in a particular way, what is the implication to other processes, to certain kinds of behaviors or under certain kinds of damage? Modeling is a kind of test bed for trying out new theories.”

Frank uses models to help understand the circuits between the basal ganglia and the frontal cortex of the brain. “It’s a very complicated area,” Frank says. “There are many different connections; a lot of them inhibit the one after that, which then inhibits the one after that, and so on. With all of those effects of the different neurotransmitters and neuromodulators, it becomes difficult to envision in your head how they dynamically interact and lead to the bigger picture of what the system is trying to do.”
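
The wiring Frank describes, where one inhibitory connection feeds into another, is known as disinhibition, and it is exactly the kind of dynamic that is easier to trace in a simple model than in one’s head. Below is a toy three-node chain (purely illustrative, not Frank’s actual basal ganglia model): driving node A suppresses B, and because B tonically suppresses C, exciting A ends up releasing C.

```python
import numpy as np

def relu(x):
    # Firing rates cannot go negative.
    return np.maximum(x, 0.0)

def steady_state(drive_a, tonic_b=1.0, tonic_c=1.0, w=1.0):
    """Steady-state rates of a toy inhibitory chain A -| B -| C."""
    a = relu(drive_a)
    b = relu(tonic_b - w * a)  # A inhibits B
    c = relu(tonic_c - w * b)  # B inhibits C
    return a, b, c

for drive in (0.0, 0.5, 1.0):
    print(drive, steady_state(drive))
# As input to A rises, B falls and C rises: two inhibitory steps
# chain into a net excitatory (disinhibitory) effect on C.
```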

Both Plaut and Frank argue that the neuroscience community could benefit from making greater use of models that incorporate data from across its many subfields. “I hope there’s greater involvement of computational modeling in the field as a whole,” says Plaut. “To me, it’s not very satisfying to just identify brain areas associated with function and then stop. We need to see how information is represented in real time and how the whole chain from perception to action works. And modeling can play a larger role, a kind of theoretical scaffolding if you will, to help us find explanations with more details.”