Progress Report 2008: Sense and Body Function
The 2008 Progress Report on Brain Research


February 2008

In 2007, scientists continued to explore how the brain processes and responds to sensory stimuli. Researchers at Harvard University investigated the mechanism by which we feel sick and took the first steps toward reducing those sensations in patients with certain conditions. Researchers at Duke and Johns Hopkins universities made progress in the complicated investigation of sound perception, studying music and speech, respectively.

The Fever Response

A person who feels as though he or she is getting sick typically experiences a familiar set of symptoms: body aches, fatigue, poor appetite, and the chills and hot flashes associated with a fever. The body develops a fever in response to several situations it perceives as threats. Bacterial infections are the most common fever-producing events, but some viral infections, along with noninfectious diseases involving the immune system, such as rheumatoid arthritis and Crohn’s disease, can also prompt the body to raise its temperature above the normal 98.6 degrees Fahrenheit.

Although running a fever is an unpleasant experience, fevers aid the body in its fight against infection. White blood cells, which are part of the body’s immune system, become more active when the body’s temperature rises, mounting a stronger defense against the invading organisms.1 Infectious agents also have a more difficult time surviving and flourishing in a system that is getting hotter. Until recently, however, scientists did not fully understand the mechanism by which fevers are produced.

Scientists knew that a fever occurs when prostaglandin E2 (PGE2), a hormone made by blood vessels on the edge of the brain, is released into the blood, crosses into the brain, and binds to EP3 prostaglandin receptors (EP3Rs). These receptors are located in the part of the hypothalamus called the median preoptic nucleus as well as in other parts of the central nervous system.

The question that Clifford B. Saper and his research team at Harvard sought to answer in 2007 was this: which receptors respond to the PGE2 hormone by triggering the body to run a fever?

Median preoptic nucleus
Researchers were able to prevent the development of fevers in mice by blocking EP3 prostaglandin receptors (stained white) above the third ventricle, a normal opening in the brain. The dark cells have been affected by the injection of a gene that blocks EP3 receptors. The inset shows a higher magnification of this process. (Image courtesy of Dr. Saper)

Saper’s team investigated receptor response via a viral vector—a benign virus modified to deliver specific genetic material—called adeno-associated virus. In this case, adeno-associated virus selectively “chopped out” the EP3 gene, thereby preventing any PGE2 hormone from binding at that site. The team incapacitated receptors in the brains of mice, working with one specific, tiny area at a time, and then tested the animals’ fever response.

When the EP3Rs located in the median preoptic nucleus were incapacitated, that is, when the EP3 genes there were “chopped out,” the mice did not develop fevers in response to infection.2

Saper’s team suspects that the PGE2 hormone and its EP3Rs are responsible for the range of familiar symptoms one experiences when sick, because drugs such as aspirin and ibuprofen, which block the synthesis of prostaglandins, reduce both fever and pain. The team decided to begin by investigating the fever response for two reasons. First, body temperature is relatively easy to measure (easier than aches or fatigue). Second, research on fever was further along than research on the other responses to infection. In 2008, Saper and his colleagues will again use mice to explore the role that the PGE2 hormone and its EP3Rs play in generating the pain response to illness.

If the mechanism by which the body experiences pain when it is sick can be deciphered as precisely as the mechanism by which it runs a fever, pain could then be controlled by managing the PGE2 hormone and its receptors. This advance would potentially give clinicians an alternative to narcotics and other pain management remedies when they are treating the discomfort of patients with chronic or terminal diseases, situations in which the pain response is no longer protective and adaptive. Ideally, physicians could simply “dial down” the pain response in these patients to improve their quality of life.

The Universal Human Appreciation of Music

The human ear can hear a wide variety of tones, but musicologists who study music across different cultures have determined that people everywhere use approximately the same small subset of tones, organized into scales, to create music. Dale Purves and his research team at Duke wondered why, and they hypothesized that the answer had something to do with the tones present in human speech. In 2007, these researchers set out to decode the connection between human speech and the musical tones that all humans find agreeable.

Initially, the team thought the preferred intervals in music mimicked the rise and fall of pitch when humans speak. They expected to be able to map common voice modulation over commonly used scales, but the intervals were not the same. The team then turned to what are called formants.

When an instrument, including the human voice box, produces a note, that note can be represented as a spectrum: the set of frequencies present and their relative strengths. Formants are the strongest of those frequency components. When a person speaks a vowel sound, it is those strongest pitches, or formants, that make the sound distinguishable from other vowel sounds.
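
To make the idea concrete, here is a minimal sketch, using a synthetic vowel-like signal rather than recorded speech; the fundamental frequency, resonance centers, and bandwidths are invented for the example, not taken from the study. It builds a note as a sum of harmonics, computes its spectrum, and reports the strongest frequency components, which is roughly what identifying formants amounts to.

```python
import numpy as np

# Hypothetical vowel-like note: harmonics of a 120 Hz fundamental, with
# extra energy near two invented "formant" regions (700 Hz and 1200 Hz).
fs = 16000                      # sampling rate, Hz
t = np.arange(0, 0.5, 1 / fs)   # half a second of signal
f0 = 120.0                      # fundamental frequency of the "voice"

signal = np.zeros_like(t)
for k in range(1, 40):
    freq = k * f0
    # Boost harmonics that fall near the two resonance centers.
    gain = (np.exp(-((freq - 700) ** 2) / 1e5)
            + np.exp(-((freq - 1200) ** 2) / 1e5))
    signal += gain * np.sin(2 * np.pi * freq * t)

# The note represented as a spectrum: frequencies and their strengths.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# The strongest components play the role of formants.
strongest = np.sort(freqs[np.argsort(spectrum)[-5:]])
print("Strongest components (Hz):", strongest)
```

Run as written, the strongest components cluster around the two invented resonance regions, just as the formants of a spoken vowel cluster around the resonances of the vocal tract.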

Purves and his colleagues statistically analyzed visual representations of the spectra produced by music and by spoken vowels and discovered that, 68 percent of the time, the same intervals that create the music deemed pleasing by humans across time and geography were also emphasized when people spoke vowel sounds.3

The emphasized harmonics in human speech, the frequencies that harmonize and form what we recognize as a person speaking a vowel sound, often stand in the same ratios as the intervals of our chromatic musical scale. In other words, the tones of music are actually embedded in our speech.
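
The mapping from a pair of frequencies to a chromatic interval is easy to state. The sketch below assumes twelve-tone equal temperament, in which each semitone multiplies frequency by 2**(1/12); the function name and the one-octave simplification are ours, not the researchers’.

```python
import math

# Names of the chromatic intervals, from unison (0 semitones) to octave (12).
INTERVAL_NAMES = [
    "unison", "minor second", "major second", "minor third", "major third",
    "perfect fourth", "tritone", "perfect fifth", "minor sixth",
    "major sixth", "minor seventh", "major seventh", "octave",
]

def nearest_chromatic_interval(f_low: float, f_high: float) -> str:
    """Name the chromatic interval closest to the ratio f_high / f_low.
    Assumes the two frequencies lie within one octave of each other."""
    # Interval size in semitones: each semitone is a factor of 2**(1/12).
    semitones = 12 * math.log2(f_high / f_low)
    return INTERVAL_NAMES[round(semitones)]

# Two emphasized frequencies in a 3:2 ratio land on the perfect fifth.
print(nearest_chromatic_interval(440.0, 660.0))  # prints "perfect fifth"
```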

The principles of natural selection suggest that humans’ aesthetic taste is rooted in something practical. By this logic, the harmonies the brain finds pleasing identify aspects of our environment that carry important information, or did at one time. Paying attention to another person speaking used to spell the difference between life and death (it still can); those who found speech most pleasant listened, reaped its lifesaving benefits, and went on to reproduce. The descendants of those early humans eventually used those same lovely intervals to create music, this theory suggests.

Exploring music in this way has piqued Purves’s interest, and he plans to investigate the link between music and emotions next. Humans interpret music played in a major scale as bright and hopeful, while a tune in a minor scale seems melancholic. Purves speculates that changes in the larynx, which occur in response to activity in the nervous system, cause formant changes when we speak that reflect these major and minor scales. According to this theory, a happy person’s nervous system cues the larynx to produce major-scale formants, while a sad person’s cues it to produce minor-scale formants.

The Complex Perception of Spoken Language

In the 1970s, Murray Sachs and Eric D. Young of Johns Hopkins University uncovered the mechanism by which the brain codes, and therefore understands, speech. They discovered that hair cells in the ear vibrate in response to sound, and that this vibration is translated into an electrical signal, a nerve spike, that the auditory nerve conducts to other parts of the brain.

In the 1980s, the two shed light on how the brain represents the variety of information that is carried in through the ears. Each of the 30,000 auditory nerve fibers represents a very small number of specific frequencies. The dominant frequencies, or formants (the same patterns examined by Purves’s team), are then extracted in the cochlear nucleus, which interprets the auditory nerve fibers’ responses to frequency.
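
As a loose computational analogy, ours rather than the researchers’ model, one can picture the fibers as a bank of narrow frequency channels and ask how much of a sound’s energy lands in each; the channel width and frequency range below are arbitrary choices for the sketch.

```python
import numpy as np

def band_energies(signal, fs, band_width=100.0, f_max=4000.0):
    """Return (band start in Hz, energy in band) for narrow bands up to f_max,
    each band standing in for a group of fibers tuned to nearby frequencies."""
    power = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)    # frequency of each bin
    return [(lo, power[(freqs >= lo) & (freqs < lo + band_width)].sum())
            for lo in np.arange(0.0, f_max, band_width)]

# Example: a pure 500 Hz tone concentrates its energy in one channel,
# much as a dominant frequency drives one small set of fibers hardest.
fs = 16000
t = np.arange(0, 0.25, 1.0 / fs)
tone = np.sin(2 * np.pi * 500.0 * t)
loudest = max(band_energies(tone, fs), key=lambda pair: pair[1])
print("Loudest channel starts at", loudest[0], "Hz")  # 500.0 Hz
```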

Xiaoqin Wang, who has since joined the research group, is interested in how the brain processes speech-like stimuli in the auditory cortex. He began by using marmoset monkeys to study how animals determine which auditory stimuli are worth their attention. Marmosets were selected for the range of their vocal repertoire; they chirp to convey many kinds of social and practical information, and they continue to chirp in meaningful ways in captivity. Wang and his team played recorded monkey calls forward (as they are normally heard) and then backward, and found that monkeys and cats process monkey calls differently. The cats’ response to the calls did not change based on how the calls were played, but neurons in the monkeys responded more strongly to the forward, familiar version of their own species’ call. The researchers concluded that animals process the sounds of their own species uniquely, and those differences showed up in a part of the brain called the inferior colliculus.

The inferior colliculus, which Young has studied extensively, introduces time as a factor in understanding speech. When we listen to speech, we hear, decipher, and store individual sounds in our short-term memory, and we anticipate the sounds to come. When we listen to multiple speakers simultaneously, as in a group discussion, those streams are understood separately and kept distinct. The speed at which the brain makes sense of speech is what allows speech to be a practical way for humans to convey information.

Currently, Young is investigating how the auditory system uses short-term memory along with moment-to-moment processing of sound to make sense of language. The next step in his inquiry will be to study the mechanisms by which we are able to anticipate what a person will say next.

Sachs plans to rejoin Young and Wang in the lab in 2008 to begin studying how one marmoset monkey distinguishes the calls of another specific monkey when many, both seen and unseen, are chirping away. This isolation of all the sounds from one source is referred to as forming an auditory object. The researchers are looking for neurons in the inferior colliculus that do this analysis, the same sort of analysis that allows humans to make sense of speech in a crowd or identify the sound of a particular instrument in a band or orchestra.

The group also plans to study the process of perceiving music. Like Purves, Sachs is interested in how sound affects emotions.

Notes

1. Schaffer A. Research identifies brain site for fever. New York Times online August 7, 2007:1.

2. Lazarus M, Yoshida K, Coppari R, Bass CE, Mochizuki T, Lowell BB, and Saper CB. EP3 prostaglandin receptors in the median preoptic nucleus are critical for fever responses. Nature Neuroscience 2007 10(9):1131–1133.

3. Ross D, Choi J, and Purves D. Musical intervals in speech. Proceedings of the National Academy of Sciences 2007 104(23):9852–9854.
