Including sections: Brain Exploration: Past and Present; Electric Avenues; Lessons from the Animal World; New Tools for Illuminating the Brain; Combining Technologies; At the Level of Our Genes; Frontiers of Neuroscience
The human brain, the three pounds of soft tissue inside your cranium, is a marvel of complexity. Nobel laureate James Watson, codiscoverer of DNA’s structure, has called the brain “the most complex thing we have yet discovered in our universe.” In round numbers, the organ contains some 100 billion nerve cells, each of which branches out to thousands of others, forming trillions of connections, all told. The cells’ communication and activity produce electrical currents and the ebb and flow of chemicals, with millions of operations occurring simultaneously. How do we trace all these tiny elements to create a composite picture of the brain and nervous system?
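Those round numbers can be checked with back-of-envelope arithmetic. The sketch below, in Python, is purely illustrative; both quantities are rough textbook estimates rather than measurements from this chapter.

```python
# Rough scale of the brain's wiring, using round estimates.
neurons = 100e9              # ~100 billion nerve cells
synapses_per_neuron = 1_000  # if each cell connects to ~1,000 others
total_connections = neurons * synapses_per_neuron
print(f"roughly {total_connections:.0e} connections")  # ~1e+14, about 100 trillion
```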
For thousands of years scientists, philosophers, and other scholars tried to understand the workings of their own brains, to little avail. But in the twentieth century, with progress in techniques and technology, brain research began to gather momentum, and as the century closed, a much repeated theme among neuroscientists was that more had been learned about the brain in the previous 25 years than in all of human history. A key advance, made a century ago, paved the way: the discovery of dyes that would make the cells in brain tissue visible under a microscope, a feat that won a joint Nobel Prize for two fierce scientific rivals, Santiago Ramón y Cajal of Spain and Camillo Golgi of Italy.
Subsequent decades brought better ways to see and perform experiments on those cells and their chemical and electrical activity, especially with the inventions of the electron microscope and, in the 1950s, new devices that could record the firing of single brain cells in living, behaving animals, starting with cats and rabbits and later with monkeys and rodents. Such investigative tools were remarkably productive in the hands of skilled scientists. Those researchers’ pioneering demonstrations of the brain’s dynamic activity spelled out fundamental principles, such as that some brain circuitry will simply not operate if deprived of stimulation from the external world, that neurons communicate using both electrical action and chemical messengers (the neurotransmitters), and that the vast network of cells in the brain includes precise “pathways” participating in various functions.
These “first principles” established a frontier to which new cadres of neuroscientists would flock in the 1980s and 1990s to make the brain yield more and more secrets of its advanced functions. Now scientists can draw on a host of new tools to probe the human brain in exquisite detail. This technology includes not only the repertoire of genetics and molecular biology but also a valuable array of brain imaging or scanning devices handed to neuroscientists by the similarly exuberant field of physics, permitting researchers to study the living human brain’s machinery in action.
With these powerful tools, knowledge of the brain has spread from the laboratory to the clinic, and from there to the world at large, where, in places as different as the classroom and the courtroom, everyone is learning about the brain’s structure and function, gaining insight on such basic tasks as learning, memory, thinking, and feeling.
We are also discovering what can go wrong: how injury, disease, and developmental miscues can make these vital functions break down. As neuroscience progresses, we all—scientist and layman alike—are peering with ever better vision inside the once unseeable “black box” of the human brain and understanding some of the mysteries that have intrigued us for so long.
A note about the more recent findings described in this chapter: after one research group publishes any scientific study, other laboratories must examine, replicate, and refine the findings before the scientific world as a whole accepts its value. That is how progress takes place in medicine, or in any science. Most of the examples you will read about here fall into the category of promising results still under scrutiny as the new century opens. Many have a bright future as definite statements about how our brains work, but all are illustrations of the creative ferment in which teams of scientists all over the world are engaged, wedding their fascination and curiosity to new technology and new ideas to explore the brain.
Brain Exploration: Past and Present
While visiting London in 1873, Mark Twain entered the office of Lorenzo N. Fowler, a “practical phrenologist.” During the examination, Fowler informed Twain of a cavity in his brain in a place where, according to phrenologists, humor normally resided.
This dubious assessment of Twain, one of the world’s great humorists, highlights the fallibility of phrenology—a long discredited attempt to judge human character and intellect by the size and shape of bumps on the head. First promoted in the early 1800s by the German-born physician Franz Joseph Gall, who practiced in Vienna, phrenology associated specific bumps on the skull with brain regions responsible for mirth, dexterity, wit, amativeness, and other qualities. The theory attracted followers in Europe and the United States for many decades, but the medical community ultimately rejected it as pseudoscience.
Researchers concluded long ago that the organization of the brain’s soft tissue has absolutely no bearing on the shape of the skull. Nevertheless, phrenology had stumbled onto a significant aspect of how we understand the brain today: different parts of the brain are indeed involved in executing different tasks, though with an intricacy and subtlety that Gall’s fanciful theory had no hope of explaining.
More authentic scientific approaches have provided a more direct, and less conjectural, means of exploring brain structure and function. The most venerable and still valuable technique, and certainly the most straightforward way to inspect the human brain, is to slice it open after its owner has died. The Greek physician Galen, for example, dissected the brains of humans and other animals in the second century A.D. He concluded that the brain’s vital parts are its fluid-filled cavities rather than its soft tissue—an erroneous notion that prevailed for nearly 1,500 years.
Centuries before Galen, around 300 B.C., investigators in Egypt also probed human anatomy through dissection. Herophilus, a physician with a keen interest in the brain and nerves as well as other organs, distinguished nerve networks from the tendons and blood vessels throughout the body. He also noted that nerves come in two varieties, sensory and motor.
His successor, the anatomist Erasistratus, differentiated the cerebrum, the brain’s main component, from the cerebellum, the smaller section behind it. Shortly after that, Egyptian religious authorities decided that the human body should remain intact after death, bringing a sudden end to such dissections. Yet the basic approach, called postmortem exam or autopsy, was revived in recent centuries, providing a fertile avenue for scientific and medical progress.
Relying on microscopic examination of brain cells and nerves from dead humans and animals, scientists began to trace the circuitry of the nervous system. A pioneer in this area, Camillo Golgi of Italy specialized in histology—the study of how tissue is organized at various levels, from single cells to an entire organism. In 1873, Golgi introduced a staining technique that selectively darkened neurons with silver nitrate. This made the cells and their fibrous extensions—axons, which transmit nerve signals, and dendrites, which receive signals—easy to see under the microscope.
About 15 years later, the histologist and neuroanatomist Santiago Ramón y Cajal of Spain improved on Golgi’s stain, using gold to reveal the structure of the nervous system in even finer detail. Ramón y Cajal focused in particular on how neurons communicated through a “synapse,” a tiny gap between one cell and the next. Golgi disagreed with this finding, maintaining that nerve cells formed a physically interconnected net. (Ramón y Cajal eventually proved to be right.) Despite their differences, the two scientists shared the 1906 Nobel Prize in medicine for their contributions in making the nervous system visible.
Scientists have also gained hints about the brain by studying human behavior. This strategy was especially fruitful when doctors came across patients who behaved oddly and whose brains—it was learned after their death—were impaired in specific, obvious ways. The approach came to be called lesion studies because it focused on the location of lesions, or damaged areas, in these people’s brains.
Of course, researchers also need to look at healthy brains, which serve as controls, so that they can spot the deviations caused by disease, injury, or congenital defects. In the 1860s, for example, the French neurologist Paul Broca conducted postmortem exams of stroke victims, linking damage to a region in the left frontal lobe (now called Broca’s area) with the inability to speak. A decade later the German neurologist Carl Wernicke discovered that damage to a part of the left temporal lobe (Wernicke’s area) affects the ability to understand language. In 1906, the German physician Alois Alzheimer described an autopsy of a 51-year-old woman who had suffered from memory problems. Her brain was riddled with two kinds of microscopic protein clumps, “plaques” and “tangles”—the first reported case of Alzheimer’s disease.
Lesion studies, now sometimes called clinical pathologic correlations, remain an important research avenue. For example, many physicians have used the method in postmortem studies of brain tissue to try to find the roots of schizophrenia, a complex disorder that affects many functions of the brain. Teams at the University of California at Los Angeles examined the brains of people who had had schizophrenia and reported misplaced neurons in the hippocampus—a seahorse-shaped structure involved in learning and the formation of long-term memories. Other analyses of schizophrenics’ brains, performed at the University of California at Irvine, found out-of-place neurons in various parts of the cerebral cortex. Researchers at the Brain Tissue Research Center of McLean Hospital in Massachusetts have found that the brains of schizophrenics have more excitatory neurons and fewer inhibitory neurons than the brains of unaffected individuals—a finding consistent with evidence that people with the disorder are often overwhelmed by sensory stimulation.
Similar studies of people who succumbed to Huntington’s disease have shown a marked deterioration in the brain’s caudate nucleus—an area involved in controlling movements, among other responsibilities. Investigators are exploring strategies for curbing brain cell death in that region by blocking the action of a protein responsible for the devastation.
Electric Avenues

For decades, neuroscientists have measured electrical currents in the brain and seen how this activity goes haywire in conditions like epilepsy. Efforts to understand the physiology of animals and humans through electrical stimulation of the brain and other body parts have a long tradition. In 1791, the Italian scientist Luigi Galvani (from whose name we have the term galvanic) made disembodied frogs’ legs jerk spasmodically when he ran a current through them. His colleague Alessandro Volta (for whom the volt is named) extended this research over the next several years, finding he could induce motions in an unconscious frog by stimulating nerves rather than muscles. Reports like these established that our nervous systems depend on bioelectrical currents.
In 1870 the German researchers Eduard Hitzig and Gustav Fritsch brought this line of research to the brain. Hitzig and Fritsch demonstrated that electrically stimulating certain areas of a dog’s cerebral cortex produced movements in the animal’s limbs—a pioneering approach to localizing function in the brain.
Continuing this work, the American neurosurgeon Wilder Penfield began a series of operations in the 1930s to locate the source of seizures in human patients. To identify sites with abnormal electrical activity, Penfield placed small exposed wires called electrodes throughout the cerebral cortex of hundreds of patients. He used the electrodes to stimulate particular areas and observed the effects. Penfield found that stimulating adjacent parts of the cortex, for example, affected adjacent parts of the arm. Specific sections of the cortex, he concluded, correspond to specific parts of the body surface.
Brain surgery is unusual among major operations in that patients can be awakened to respond to their surgeon’s questions and actions, after their scalp, skull, and meninges (the covering around the brain) have been cut open under local anesthesia. In one such operation, Penfield stimulated a particular part of a teenage girl’s brain, and she suddenly recalled a terrifying incident from her youth when a man holding a sack appeared and asked, “How would you like to get into this bag with the snakes?” In this way, Penfield showed that vivid childhood memories could be retrieved by stimulating parts of the temporal lobes. A similar procedure caused a young woman to laugh whenever one area of her left cerebral cortex was stimulated. Her doctors asked why, and she replied that she suddenly found them very funny.
Although electrical stimulation techniques yield interesting results and avenues for further exploration, they can be used only on patients already undergoing brain surgery. So their greatest value remains medical: assisting neurosurgeons in identifying important areas to protect in surgeries for illnesses such as epilepsy and brain cancer. This would be a significant research limitation except that many other approaches, all noninvasive, have become available.
Lessons from the Animal World
To probe more deeply into the human brain, scientists have always studied animal anatomy and behavior. One of the most famous experiments involving animals began when the Russian scientist Ivan Pavlov noticed that his laboratory dogs salivated at the sight of the white-coated attendants who brought them their food. That is, the dogs salivated before they saw the food itself. By ringing a bell or flashing a light before mealtimes, Pavlov was able to condition the animals to salivate in response to those stimuli as well. They even came to salivate and wag their tails when they received electric shocks, so long as those shocks were consistently followed by food. Pavlov thus showed how events could condition the brain to produce a specific physical behavior.
Other animal studies involve making small brain lesions, implanting electrodes to monitor activity, or disrupting peripheral, sensory, or motor nerves to see how these nerves can repair themselves. Many people find it hard to read about this, and thus it is important to take a moment here to point out that cruelty has never been an intended consequence of research using animals, except in extremely rare instances by unprincipled scientists who were readily stopped by their peers.
Keeping the animals’ health and environment as humane and as near normal as possible has been the rule. This means cages are kept clean, the animals are well nourished, procedures that cause pain are done with anesthesia, and if pain itself is the object of study, the least amount necessary to make the needed observations—such as a hot pepper ointment on a paw—is used. Why? Not only because gratuitous suffering in animals offends most scientists, who are decent people, but also because findings cannot be trusted if unnecessary pain and its biological responses have crept in.
That said, studies of fruit flies, yeast, snails, worms, mice, rats, birds, and certain fish are the mainstay of neuroscience, providing useful models for understanding the human brain and nervous system and disorders, particularly at the genetic and cellular level. Monkeys, dogs, cats, and other animals are studied mainly at the stage when the research question must be pursued in a nervous system closer to that of humans. Most responsible investigators consider animal subjects indispensable for basic research; almost every scientific and medical advance described in the Dana Guide has passed through an animal testing stage. The alternative, to experiment on or prescribe treatments and medications for humans without first confirming safety and efficacy in animals, is unthinkable.
Scientists took a big step forward in the 1950s and 1960s by learning how to record the electrical activity of individual neurons in live laboratory animals, at first with cats and rabbits, then later with monkeys and rodents. In this technique, researchers anesthetize the animal and insert extremely fine electrodes directly into the targeted brain cell. When a neuron is active, it discharges an electrical impulse. Once the electrodes are surgically implanted, the scientists can record the animals’ brain activity repeatedly, during wakefulness and sleep, without causing them harm. The technique offers a way to gauge the response of cells to various stimuli, confirming ideas about nerve cell function in particular brain regions. Such exquisitely fine observations of living brain activity would not have been possible without electrode recording.
In 1957, National Institutes of Health neuroscientists Eric Kandel and Alden Spencer obtained the first recordings of cell activity in a mammal’s hippocampus. Kandel’s goal was to understand the cellular basis of memory, and he soon switched to studying how that worked in a simpler brain: the sea snail Aplysia californica, which has some of the largest nerve cells of any animal. Studying this creature at Columbia University over several decades, Kandel showed that when the snail (like Pavlov’s dogs) learned a response to a repeated stimulus, the synapses between its nerve cells strengthened as well. Scientists now believe that this kind of synaptic strengthening—called long-term potentiation in the mammalian brain—is vital to the formation of memories in humans as well as in snails. Kandel won a Nobel Prize for this work in 2000.
Around the same time that Kandel and Spencer started probing the hippocampus with microelectrodes, David Hubel and Torsten Wiesel began using such miniature electrodes to probe animal brains, one cell at a time. In 1958, while studying the primary visual cortex of cats at Johns Hopkins University, Hubel and Wiesel made an astonishing discovery: neurons in one part of the visual cortex responded only to vertical lines moving across the cat’s field of vision. Nearby brain cells responded just to horizontal lines, and still others responded to diagonal lines. This showed just how specialized brain cells can be. Hubel and Wiesel also learned that the cells that responded to the same kind of stimuli—such as shapes of a specific orientation—were stacked in vertical columns extending from the top of the visual cortex to the bottom.
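To make the idea concrete, here is a minimal sketch of an orientation-tuned cell of the kind Hubel and Wiesel described. The von Mises tuning curve and every number in it are standard textbook conventions, not their original data.

```python
import numpy as np

# A model cell that fires most for its preferred angle (90 degrees,
# a vertical line) and falls off as the stimulus rotates away.
def firing_rate(stimulus_deg, preferred_deg=90, peak=50.0, width=4.0):
    # The factor of 2 makes the curve repeat every 180 degrees,
    # since a bar of light looks the same when flipped end to end.
    delta = np.deg2rad(2 * (stimulus_deg - preferred_deg))
    return peak * np.exp(width * (np.cos(delta) - 1))

for angle in (0, 45, 90, 135):
    print(f"{angle:3d} deg -> {firing_rate(angle):5.1f} spikes/s")
```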
In later studies, the two discovered that a newborn cat must be able to see in order to develop vision—that is, to “wire up” the visual cortex. They closed a kitten’s eye using surgical sutures and removed them when the kitten was about eight weeks old. The kitten was left without vision through that eye for life. This finding led to today’s understanding of the “critical” period for vision in infants, when an impairment, such as a cataract, with which some babies are born, must be corrected for sight to develop normally. Hubel and Wiesel collaborated for two decades, earning a Nobel Prize for their labors in 1981.
In the early 1960s, the American physiologist Vernon Mountcastle uncovered more details about the vertical organization of the cortex. By applying movable electrodes to the brains of anesthetized animals, Mountcastle showed that the neurons that respond to stimulation of the body’s surface are also arranged one on top of another. Each column within this part of the brain seems to have a particular function in sensing the outside world. Another pioneer of neurophysiology, Edward Evarts, developed systems for recording from single neurons in awake, moving animals; Mountcastle later advanced these methods in his own studies of the waking brain.
In the 1980s and 1990s, the University of Minnesota neuroscientist Apostolos Georgopoulos found additional evidence of brain cell specialization in experiments with monkeys. Georgopoulos and his colleagues showed that neurons in the motor cortex—the part of the brain that directs simple movements—can be active even when an animal remains still. By monitoring the firing of individual cells in the motor cortex, Georgopoulos could accurately predict the direction in which a monkey would extend its arm before the movement took place.
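The logic behind such predictions is often described as a “population vector”: each cell votes for its preferred direction, weighted by how strongly it fires, and the summed vector anticipates the arm movement. The sketch below illustrates that computation with invented numbers; it is not Georgopoulos’s actual analysis code.

```python
import numpy as np

# Six hypothetical motor-cortex cells, each with a preferred
# reach direction and a measured firing rate (invented data).
preferred = np.deg2rad([0, 60, 120, 180, 240, 300])
rates = np.array([5.0, 18.0, 30.0, 12.0, 4.0, 2.0])   # spikes/s

# Sum the preferred-direction vectors, weighted by firing rate.
vx = (rates * np.cos(preferred)).sum()
vy = (rates * np.sin(preferred)).sum()
predicted = np.rad2deg(np.arctan2(vy, vx)) % 360
print(f"predicted reach direction: {predicted:.0f} degrees")
```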
Scientists at Johns Hopkins University have studied the physiological basis of attention by recording brain cell activity in monkeys. The investigators placed microelectrodes in a part of the cortex located in the parietal area of the monkeys’ brains. The animals performed visual tasks for seven to eight minutes, picking out targets on a computer screen, and then performed tactile tasks in which they manipulated a touch pad and keyboard for a similar period. When the monkeys switched between these two different kinds of tasks, each shift in attention was accompanied by sharp “spikes” of neurons firing together in the cortical region under study. The researchers suggest that a “chorus” of neural activity, with multiple nerve cells firing in unison, may help the brain focus attention on one item amid the flood of incoming sensory information.
Surgery on laboratory animals has also yielded revealing results in a burgeoning area of brain research: “plasticity,” the brain’s ability to change. In experiments at the Massachusetts Institute of Technology published in 2000, scientists “rewired” the brains of newborn ferrets so that signals from the eyes traveled to the auditory cortex—the part of the brain devoted to hearing—instead of to the visual cortex. The scientists rerouted nerves in the ferrets’ brains in the same way that a mechanic might switch around wires in a car’s engine. They tested the ferrets after the animals had matured into adults, finding physical changes in the auditory region that made it more closely match the visual center. The studies demonstrated the brain’s remarkable ability to adapt—a capacity that might be exploited in the future to treat people’s brain disorders during early stages of development.
New Tools for Illuminating the Brain
Spurred by rapid advances in computers over the past several decades, new technologies for brain imaging have helped change the way we look at the brain and suggested new ways to think about it. Although the imaging tools we use today still cannot tell us everything we would like to know about how the brain works, they provide important clues on where to look for answers in the near future.
The oldest technology for “looking” at what is going on in the living brain is electroencephalography (EEG), which measures the electrical activity of neurons inside the brain by means of electrodes attached to the patient’s scalp. A major application of EEG, employed clinically for more than 50 years, has been to monitor brain activity during sleep and chart the different sleep phases, including the rapid eye movement (REM) and non-REM (slow-wave) stages. (For more on sleep, see the section on basic drives.)
Another common use of EEG is the clinical investigation of brain wave activity in epilepsy, in an effort to locate and understand the misfiring neurons. And researchers studying infant behavior have found EEG particularly useful because it easily picks up signals from within a baby’s thin skull and involves simply placing a cap covered with sensors on the baby’s head. Using EEG and these little caps, investigators studying how babies acquire language have made dramatic observations of changes that occur in infant brain waves in response to sounds in the language of their parents, compared with sounds in foreign languages.
But EEG cannot map the entire brain in detail because it lacks the resolution to monitor activity deep within the organ. Nevertheless, it remains a valuable research device for scientists studying important areas lying close to the surface.
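To make the notion of “brain waves” concrete, the following minimal sketch estimates the power of a single EEG channel in the classic frequency bands used, among other things, to score sleep stages. The signal here is synthetic, standing in for a real recording.

```python
import numpy as np

fs = 256                                  # samples per second
t = np.arange(0, 30, 1 / fs)              # one 30-second epoch
# Synthetic channel: a 2 Hz (delta-band) oscillation plus noise.
signal = np.sin(2 * np.pi * 2 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Classic EEG bands; deep sleep, for example, shows strong delta power.
bands = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    power = spectrum[(freqs >= lo) & (freqs < hi)].sum()
    print(f"{name}: {power:.1f}")
```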
X-ray computed tomography (CT), developed in the early 1970s, offered the first opportunity to produce X-ray images of the body’s soft tissue, including the brain. (Traditional X rays can see only the skull, not the brain.) During brain scans, which may take just a few minutes, patients lie on a table that slowly moves through a doughnut-shaped scanner. X-ray beams pass through the brain at various angles and are collected as they emerge, changed by their interactions with brain tissue of varying density. A computer takes the information obtained from numerous X-ray transits to draw a three-dimensional picture of the brain. Clinicians commonly use these pictures to identify brain abnormalities.
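That reconstruction step can be illustrated in miniature. The toy example below back-projects simulated one-dimensional X-ray projections to recover a two-dimensional slice; real scanners add a sharpening filter (filtered back-projection) or use iterative methods. The sketch assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.ndimage import rotate

# A 64x64 "phantom" slice containing one block of dense tissue.
phantom = np.zeros((64, 64))
phantom[20:40, 25:45] = 1.0

# Take a 1-D projection every 5 degrees, then smear each one
# back across the image at its original angle.
angles = np.arange(0, 180, 5)
recon = np.zeros_like(phantom)
for a in angles:
    proj = rotate(phantom, a, reshape=False).sum(axis=0)  # 1-D projection
    smear = np.tile(proj, (64, 1))                        # back-project
    recon += rotate(smear, -a, reshape=False)

recon /= angles.size   # unfiltered reconstruction: blurry but recognizable
print(f"peak of reconstructed slice: {recon.max():.1f}")
```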
Then physics joined computer science to produce positron-emission tomography (PET), invented soon after CT scans. PET was the first technology that could peer beyond anatomy to infer changes in brain activity as mental acts were performed. In typical PET studies, patients are injected with water in which the oxygen molecule bears a radioactive “label” that emits a low level of radiation for about 15 or 20 minutes. Researchers then track brain activity by monitoring blood as it flows through the brain, delivering the labeled oxygen to brain cells. The highest level of radioactivity indicates the site of the greatest blood flow and therefore the most cellular activity at a given moment.
Early PET used a chemically modified form of glucose, the fuel of brain cells, but because the cells trap this nutrient, it was not easy to tell when activity stopped—in other words, it was difficult to distinguish whether the cell was still using fuel or just sitting there. As a result, researchers had to keep their study questions very simple, so that some neurons would do a lot more work than usual and would therefore stand out from all the other neurons that already normally used a lot of glucose. Some favored early experiments were to move one hand or wriggle a few fingers to image that part of the motor cortex that represents the hand, and to flash lights in one eye to image the primary visual cortex. In contrast, oxygen molecules and their radioactive labels wash away when the cell is through with them. Also, the oxygen isotope (oxygen 15) that’s used to emit positrons decays many times faster than the modified glucose. This meant scientists could observe many more patterns and much finer mental activity, such as seeing, reading, speaking, and rhyming words.
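The difference comes down to the exponential law of radioactive decay. Assuming the modified glucose in question is the standard fluorine 18–labeled form, the relevant half-lives are about 2 minutes for oxygen 15 versus roughly 110 minutes for fluorine 18, so oxygen 15’s decay constant is about 55 times larger:

$$
N(t) = N_0\, e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}}, \qquad
\frac{\lambda_{\text{O-15}}}{\lambda_{\text{F-18}}} = \frac{t_{1/2}^{\text{F-18}}}{t_{1/2}^{\text{O-15}}} \approx \frac{110 \text{ min}}{2 \text{ min}} \approx 55
$$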
In the late 1980s and early 1990s at UCLA, Michael Phelps (who with Michel Ter-Pogossian had invented PET while both were at Washington University in St. Louis, Missouri) and John Mazziotta used the device to study the process of learning. Among their findings was that the extent of brain area devoted to a task shrinks and energy consumption diminishes as people learn to perform the task better. Moreover, responsibility for the action shifts over time from the cortex, a “high level” brain region, to more primitive brain structures, where the task is carried out with much less deliberation. In sum, tasks we do often truly do become easier and more routine.
In an intriguing 1996 experiment, Harvard researchers used PET to watch the brain making a mistake. Experimenters read a list of words to adult subjects. Ten minutes later, the subjects looked at another list and tried to identify the words they had just heard. The brain scans revealed a difference between those who remembered correctly and those who remembered incorrectly. Both sets of scans showed heightened activity in an area near the left hippocampus, a region important for memory formation. But scans associated with the correct memories also showed activity in the left temporal parietal area, where word recognition occurs, providing the researchers with a way to distinguish between accurate memory and errors—if only in this controlled situation.
The next advance, magnetic resonance imaging (MRI), is similar to CT technology, except that it probes the body with a combination of radio waves and a powerful magnetic field rather than X rays. Like CT, MRI yields anatomical, not functional, images of the brain and other interior regions of the body, and these images have many useful medical applications. Neuroscientists usually use it to identify structural differences in the brain associated with psychiatric and neurodegenerative conditions.
For example, researchers at Massachusetts General Hospital used MRI to spot early warning signs of Alzheimer’s disease, long before symptoms appeared. The entorhinal cortex, a tiny brain structure connected to the hippocampus, was 37 percent smaller (presumably due to nerve cell death) in patients who later developed Alzheimer’s, as compared with subjects who remained disease-free.
As the inventors of PET technology had confirmed, blood flow increases in active parts of the brain. But the oxygen-carrying red blood cells also alter those areas’ magnetic fields—and MRI machines are set up to measure magnetic fields. Therefore, in the early 1990s, scientists at Massachusetts General Hospital, the University of Minnesota, Washington University in Saint Louis, the University of Pittsburgh, and Carnegie Mellon developed ways to use a series of MRI scans to monitor blood flow and oxygen consumption. Scientists now compare MRI images of the brain at rest and in the midst of an activity such as listening to music, looking for the areas of increasing and decreasing activity. This technology, which came of age in the 1990s, is called functional MRI (fMRI).
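In spirit, that rest-versus-task comparison is a voxel-by-voxel subtraction. The sketch below shows the idea on synthetic arrays; real fMRI analysis adds motion correction, spatial smoothing, and hemodynamic modeling, and the threshold here is arbitrary.

```python
import numpy as np

# Hypothetical data: 50 scans per condition, each a 64x64x30 volume.
rest = np.random.rand(50, 64, 64, 30)
task = np.random.rand(50, 64, 64, 30)

# Voxelwise difference of means, scaled by pooled variability:
# a crude t-like statistic highlighting candidate "active" regions.
mean_diff = task.mean(axis=0) - rest.mean(axis=0)
pooled_sd = np.sqrt((task.var(axis=0) + rest.var(axis=0)) / 50)
t_map = mean_diff / (pooled_sd + 1e-9)

# Voxels with large positive values suggest task-related increases
# in blood flow (the BOLD signal).
active = t_map > 3.0
print(f"{active.sum()} voxels exceed threshold")
```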
Functional MRI made brain scanning a hugely popular research technique, not only because of its fine resolution but also because fMRI does not require the injection of radioactive tracers, which makes it safer for study volunteers, including children. Thus, researchers are using these scans in an effort to understand a wide variety of activities in a healthy brain, such as reading, speaking, looking at pictures, hearing a joke, experiencing pain, or recalling a disturbing memory.
They are able to do this because, with fMRI, people’s brains can be imaged while they participate in traditional cognitive psychology experiments. In 1998, for instance, scientists at Massachusetts General Hospital in Boston used fMRI to capture what they suggested was “the birth of a memory.” During the experiments, volunteers looked at a series of words while researchers monitored brain activity. Much of the neural firing occurred in the left parahippocampal cortex, a structure in the temporal lobe that is linked to the hippocampus. However, people who remembered those words later also displayed a characteristic pattern of brain activity in their left frontal and temporal lobes. That pattern did not appear in the fMRIs of people who forgot. For the first time, researchers looked inside other people’s heads and predicted whether they would remember what they were seeing.
Functional MRI has also provided the first direct means of monitoring neural activity in a developing fetus, an impressive technological feat. Researchers at the University of Nottingham in England reported detecting changes in the fetal brain—namely, the activation of the temporal lobe—in response to the mother’s voice.
But the utility of these techniques is not limited to normal brains. Both PET and fMRI have enabled researchers to find new aspects of brain activity to examine in the study of brain disorders, including addiction, autism, depression, dyslexia, epilepsy, and schizophrenia.
Of course the creative minds of imaging inventors are not leaving things there. Most recently, computer programmers have improved fMRI by exploiting the technology’s reliance on advanced mathematical computations to assemble a series of readings into a single picture of a brain. By applying more sophisticated mathematical tools, programmers can improve the resolution of these images, almost as if they were focusing a microscope lens.
For example, researchers at the National Hospital for Neurology and Neurosurgery in London used computer software to compare MRI scans of people who experienced cluster headaches with those who did not. The computer analyzed tiny portions of the brain, one cubic millimeter (about six hundred-thousandths of a cubic inch) at a time. Before this study, conventional wisdom held that the brains of people who suffered from cluster headaches were structurally normal. But the London researchers reported a very slight increase in gray matter in the hypothalamus on the side where the headaches occurred. A cubic millimeter of cortex holds tens of thousands of neurons, so while the technology still has a long way to go, it is heading in the right direction.
Another tool, transcranial magnetic stimulation (TMS), relies on a pair of electromagnets to focus magnetic fields on specific brain areas, briefly inactivating them. The basic idea is to figure out what a brain structure does by seeing what happens when it is silenced for a few thousandths of a second—a time so short that experimental volunteers often do not realize anything is amiss. Applying TMS to the area called V5 (the fifth relay level of the visual system after information has reached the visual cortex), for example, interferes with the perception of motion, supporting theories that V5 is the brain’s motion detection center. When a magnetic pulse is directed to a region on the left side of the head, people momentarily lose the ability to talk, showing the importance of the area for speech.
Combining Technologies

Scientists are combining imaging techniques, such as PET or fMRI, with TMS to see how different parts of the cerebral cortex are connected. Other researchers are investigating how to use TMS to treat such conditions as depression.
Magnetoencephalography (MEG), a technique for measuring neurally generated magnetic fields, can also complement fMRI. Although fMRI has good spatial resolution—it can accurately determine where in the brain something is happening—it is not nearly as good at pinpointing when events happen. Many brain processes occur within about a millisecond. MEG can clarify the picture by detecting rapid shifts in brain activity and describing what is happening millisecond by millisecond. As is the case with PET, MEG’s resolution is enhanced when it is combined with MRI.
Scientists have also begun to combine tools from molecular biology with imaging techniques—a trend that should become even more prevalent in the future. PET scans, for instance, have become more versatile because scientists have learned how to create radioactive tracers that bind to specific molecules in the brain, including neurotransmitter receptors. Using PET scans to watch where these tracers end up can reveal, for example, a deficiency in receptors in a particular part of the brain. Researchers can now assess the course of Parkinson’s disease by using tracers with imaging to reveal the extent to which dopamine-producing neurons have died—a hallmark of the disease. The technique thus allows for the evaluation of new therapies to see how well they stave off that damage. Similarly, targeting specific dopamine receptors with PET may identify people at risk for Huntington’s disease before symptoms are evident.
It is important to note that these exciting tools for examining the brain have definite limits: while the scans reveal pathways and sequences of brain activity, as well as many of the brain structures that participate, they do not resolve all the molecular questions about, for example, what genes and neurotransmitters are in use and what they contribute to the brain activity. For that, scientists must return to their petri dishes, jars of fruit flies, and cages of mice and rats and perform their traditional labors. These technologies have not replaced earlier methods but augmented them.
At the Level of Our Genes
Probably the greatest advance in medical research in the last half-century has been the discovery of the role of genes. Neuroscience has benefited from this knowledge as much as any other branch of medicine. Since the 1950s, we have known that each brain cell, like almost every other cell in our bodies, contains 23 pairs of chromosomes and that each chromosome is a long, twisted strand of deoxyribonucleic acid (DNA). In 2000 we learned the currently estimated number of DNA segments—30,000 to 40,000—that contain the blueprints for the proteins we need to live. These segments are what we call genes.
Genes do much more than determine the traits that we are born with; they operate throughout our lives, turning on and off at particular times to initiate functions or when triggered by experiences or other cues.
The field of genetics is aiding brain exploration in many ways. In 1996 scientists at the Massachusetts Institute of Technology used a gene-splicing technique to create a breed of “knockout” mice that were missing the gene that makes a particular receptor for the neurotransmitter glutamate in the hippocampus. Lacking the receptor, the mice had difficulty figuring out how to navigate through a maze—a finding that supports theories tying molecular processes in the hippocampus to spatial learning.
In recent years, scientists have identified genes associated with a variety of neural conditions. The gene responsible for Huntington’s disease, for instance, was discovered in 1993 after an exhaustive search lasting more than 15 years. The first breakthrough came in 1983, with the discovery of a genetic marker for the disease: a DNA variation found only in people who had Huntington’s. That marker provided a hint about the general location of the defective gene. Ten years later the gene itself was identified, owing to a telltale pattern—a “triplet” of DNA bases repeated an abnormally high number of times. Further research showed that the more repetitions that appeared in an affected person’s genes, the earlier he or she showed the signs of Huntington’s.
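Counting such triplet repeats is, computationally, a simple pattern search. The sketch below finds the longest run of consecutive CAG triplets (the repeat involved in Huntington’s) in a toy DNA string; the sequence and function name are invented for illustration.

```python
import re

def max_cag_repeats(dna: str) -> int:
    """Return the longest run of consecutive CAG triplets."""
    runs = re.findall(r"(?:CAG)+", dna)
    return max((len(run) // 3 for run in runs), default=0)

# Toy sequence containing a run of 6 CAG repeats. (For scale:
# disease-associated Huntington's alleles carry roughly 36 or more.)
seq = "GGTA" + "CAG" * 6 + "TTAC"
print(max_cag_repeats(seq))  # -> 6
```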
These discoveries about the Huntington’s gene have inspired scientists to link genes with similar patterns of increasing repetitions to other diseases, thus suggesting how those diseases are passed down in some families with earlier and earlier onset as the number of repetitions grows. When Huntington’s disease is inherited, the age at which it is first recognized can fall from one generation to the next: a father may show it at 35; his sons may show it in their 20s—a phenomenon called anticipation. Researchers also hope that by experimenting with the Huntington’s gene in the laboratory, they will learn the basic mechanisms of the disease and be able to develop new therapies. So far there is increasing evidence that the dysfunctional Huntington’s gene causes problems with other genes, leading to the disorder’s neuronal breakdown. Years after the discovery of the original gene, however, no effective treatment has been developed, and the research continues.
Frontiers of Neuroscience
When we deal with a system as complicated as the human brain, we can ask simple questions but cannot expect many simple answers. Nor can we hope to comprehend such sophisticated mental processes as attention, awareness, and consciousness just by seeing which parts of the brain “light up” during scans as a person performs all manner of tasks.
Through imaging studies, animal experiments, and research in genetics and biology, we are clarifying some elements of brain function, bit by bit. The challenge ahead lies in putting these pieces together to form a cohesive picture. Through that effort, we hope to understand how the brain coordinates myriad processes and assimilates vast amounts of information to function as an integrated whole.
Do not expect a grand synthesis anytime soon, as progress toward this goal can be difficult to gauge. Indeed, sometimes it may seem as if we’re going backward. Further explorations of the brain will uncover more, rather than less, complexity, bringing to light things we cannot yet fathom. Given the immensity of the challenge, it may take a century or more to truly understand how the brain works.
This is no cause for despair, since—as scientific disciplines go—neuroscience is still in its relative infancy. Indeed, there is good reason for optimism, given that the brain is not the inscrutable black box it once was. The field has made great strides since Franz Gall tried to divine the brain’s inner properties from the contours of the skull. With new tools at our disposal, and other powerful instruments continually being developed, it is now possible to study the brain in a much more systematic fashion. As a result, scientists are making steady progress, uncovering some of the brain’s tightly held secrets, while leaving many other mysteries for future explorers.