Magnetic Resonance Imaging, or MRI, is the most advanced and powerful technology we have ever possessed for looking inside the brain. Like every new technology that human genius puts in our hands, it presses upon us the question: How will we use it? Let me propose two seemingly contradictory statements about the role of MRI today:
- MRI bids fair to be the most important medical contribution to the alleviation of human misery since the invention of anesthesia.
- MRI is being badly misused, overused, and abused, not in the treatment of patients but in the quest for understanding how thoughts are represented by the brain.
The first of these statements is supported almost beyond dispute by the many instances in which MRI has saved people from unnecessary surgery, provided an early diagnosis of an obscure disease, or simply reassured a frightened patient whose symptoms raised the possibility of a dire outcome. The second is, or should be, a matter of much discussion.
We can begin to explore the contradiction by ﬁrst evaluating MRI technology and speculating about the expanding range of applications to which these powerful machines are likely to be put in the brain sciences. But we must then critically evaluate what I see as the conceptual errors of MRI studies that seek the neural equivalents of cognitive functions: that is, the attempt to locate this supposed modular cognitive function in that narrowly circumscribed region of the brain. I trust that both my admiration of the technology and my dismay at another false movement in psychology’s long history of failed localization theory will become clear.
During the heady days of the 1930s, when quantum theory was born, physicists such as Isidor I. Rabi realized that one of its implications was that we could now examine the properties of the atomic nucleus. That ability depended upon the fact that the disordered magnetic orientations of the axes of the constituent protons would line up in the same direction if subjected to a strong magnetic field. After a brief interval, the protons would return to their usual disordered orientations, and as they did so they would broadcast a radio frequency signal characteristic of the type of atomic nucleus of which they were part.
In 1937, Rabi worked with beams of ions, but in 1946 other workers, most notably Felix Bloch and Edward M. Purcell, made similar measurements in solids. Then, in 1975, Richard R. Ernst brought forth the first system for translating these radio frequency signals into two- or three-dimensional pictures. The principle of MRI was born. It is evidence of the importance of their basic science research that all four of these scientists received Nobel Prizes.
The practical application of MRI to medical diagnosis was not made until Raymond V. Damadian, a physician and engineer, took two additional conceptual leaps. First, he demonstrated in 1971 that cancerous tissue and normal tissue emitted distinctly different radio frequency signals when tested in a beaker. Second, he suggested that it might be possible to make these same kinds of measurements in the human body without surgery. Damadian was seeking a diagnostic tool for cancer, but the technique he pioneered has electriﬁed the brain sciences.
Damadian’s first machines were crude but, in 1977, he succeeded in detecting the distinctive radio frequencies from a spot the size of a pea within the body of a living human. Today, such resolution seems coarse, but it was a huge stride forward from the work of Rabi and his successors. For the interested reader, my recent book1 presents the complete story of this fascinating technology, and the history of MRI is nowhere better told than in Mattson and Simon.2
Several themes have characterized the headlong pace of MRI development since then. One is the continued effort to improve the spatial resolution of the images produced. Damadian had taken MRI from cup-sized to pea-sized resolution, thus making it possible to image the human body. As spatial resolution becomes finer, it will be possible to discern ever smaller internal details. As temporal resolution (that is, the ability to determine that two nearly simultaneous events actually occur at different times) approaches that of a snapshot, it will become possible to image the actual movement of specific substances in the living brain. Modern engineering focuses on enhancing these two kinds of resolution.
MRI, FMRI, AND BEYOND
Whatever difﬁculties plague the application of MRI to cognitive neuroscience, further advances in this technology will unquestionably have profound positive effects on brain science in both the clinic and the laboratory. I believe, though, that the most important contributions of MRI will be studies of the structure of the brain, not today’s conceptually ﬂawed attempts to localize ill-deﬁned cognitive functions. To understand where MRI is going, we must look at recent developments in the technology of imaging devices. In what follows, I discuss both conventional and functional MRI, or fMRI, which is intended to emphasize the detection of neural activity (function) as opposed to anatomy (structure).
The fMRI technique was invented in 1990 by a group at Bell Laboratories led by Seiji Ogawa.3 A startling amount of progress has been made in the few years since then. One estimate is that 800 papers about fMRI or its applications are published each month, and as many as 75,000 are in print already. The pivotal discovery made by Ogawa’s group was that the fMRI image varied with the level of oxygenation of hemoglobin, the oxygen-carrying protein in blood. These scientists were the first to suggest, therefore, that since oxygen is used up by active neurons, fMRI could distinguish neurons that were functionally active from those that were relatively inactive. Interestingly, it was not until 2001 that their hypothesis was confirmed by Nikos Logothetis and his group at the Max Planck Institute in Tübingen.
Considerable progress has been made in improving the spatial resolution of fMRI imaging systems. At present, clinical applications of fMRI have a spatial resolution approaching one millimeter (a thousandth of a meter). Research now aims at reducing resolution to the size of an individual neuron, the smallest of which measure only a few microns (millionths of a meter). That goal may seem elusive, but biophysicists agree that there is no physical constraint on how fine the spatial resolution of MRI techniques can become. There are formidable practical problems, however, in attempting to work with such fine discrimination. With the overwhelming amounts of data that can be collected at such a scale, experimenters also can be seduced into ignoring effects occurring at broader, global levels—much as microelectrode studies of neurons have deflected attention from larger brain regions.
There are other practical constraints on ultra-high spatial resolution equipment. In today’s machines, even a millimeter of movement can be a problem. Small motions of the brain, even when the skull is stabilized, can blur images, effectively canceling out much of the higher resolution. “Microscopic” MRI systems have nevertheless been successfully developed by David G. Cory and his colleagues at the Massachusetts Institute of Technology. Their devices permit resolutions as fine as 10 microns. The technique has been used to examine the activity of the large neurons of the blowfly.
The microscopic MRI system promises to extend the extraordinary macroscopic capabilities of the fMRI system to the level of the activity of individual neurons. This could offer insights into extremely subtle anomalies in the brain—anomalies that simply would not show up at the lower resolutions. As the spatial resolution of fMRI equipment improves, we may begin to witness the living functions of brain matter previously known only by inference from postmortem structural examinations. For example, it is known that the neurons in the visual cortex are arranged in vertical columns. The future of high-resolution fMRI systems holds the promise of investigating in great detail not only the structure but also the living function of such minuscule neural networks. Similarly, we know that the visual cortex is a horizontally layered structure, with each layer only a couple of neurons thick. High-resolution fMRI would enable us to conﬁrm the role played by each layer, a role now only inferred from its anatomy or from isolated microelectrode experiments.
Nor are the advantages of high-resolution fMRI limited to studying the neural components of the brain. Given that fMRI is sensitive to the oxygenation of blood, the ﬁne structure of the smallest capillaries of the brain may be amenable to examination. Early diagnosis of strokes is one likely application of micro-fMRI devices. As spatial resolution improves, we should be able to detect cancerous lesions much earlier than we can now. And it should be possible to determine the effects of a drug therapy daily, rather than after months or years of such therapy.
THE PROBLEMS WITH SPEED
The enhancement of fMRI speed has two goals. The ﬁrst is to make the MRI system fast enough to follow responses in real time: that is, fast enough to keep up with changes as they occur in the brain. The second goal is enhanced resolution to help researchers distinguish between two events that occur close together in time. Unlike spatial resolution, however, temporal resolution in fMRI systems faces a fundamental limit. That limit is determined by the time it takes for the magnetic axes of protons to realign to their disordered rest state after the aligning force produced by the large magnetic ﬁeld has been removed. In other words, an fMRI system can operate no faster than the protons it affects, and protons take time to align, then unalign.
Another problem is the trade-off between spatial and temporal resolution. If high spatial discrimination is desired, longer data-gathering times are typically required. A very fast method therefore tends to produce blurrier, lower-contrast images than one that allows for extensive image processing time. This is a high-tech version of the choice snapshot photographers make when opting for fast film to capture movement versus slow film that produces a sharp image.
Beyond these physical constraints, the same practical problem that confronts high spatial resolution also limits very fine resolution in time. The problem is the classic one of too much data. The enormous amount of data obtained from an fMRI system must be converted into a picture: a two- or three-dimensional representation that summarizes the raw data picked up by the radio frequency receivers. Major efforts have therefore gone into the development of high-speed data processing programs. The brute-force way, of course, is just to use a faster computer, but far more ingenious methods have been proposed. One method for speeding up image-generation time is known as single-shot echo planar imaging (EPI), which allows MRI data to be processed in 20 to 40 msec from a single cycle of magnetization and reorientation produced by an especially strong magnetic field. Recently, EPI has been combined with other accelerated methods to enhance temporal resolution even further. An alternative method is called spiral scanning. All such techniques are designed to scan the raw MRI data into the computer more rapidly than the standard procedure allows.
Once the image is acquired, other data processing techniques can speed things up to achieve real-time imaging. One novel procedure is simply to ignore the redundant information in the original data. Since some of the data in effect cancels out other data, one has only to process a part of the data to approach real-time imaging. All these fast procedures hold promise of replacing invasive cerebral and cardiac angiography with noninvasive MRI techniques.
Once it becomes possible to capture a single image in a few tens of milliseconds, it also becomes possible to improve the signal-to-noise ratio by averaging signals over repeated trials. The trick in this procedure (event-related fMRI) is to carry out that averaging out of presentation order: a strategy, first developed in 1997, that allows randomly mixed trials to be selectively averaged by condition.
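The selective-averaging idea can be sketched in a few lines of Python. This is a toy illustration with made-up numbers, not an fMRI analysis pipeline: trials of two types are presented in random order, then regrouped by condition after the fact and averaged to raise the signal-to-noise ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy event-related experiment: two trial types, "A" and "B", presented in
# random order. Each trial yields a noisy 20-sample response trace.
true_response = {
    "A": np.sin(np.linspace(0, np.pi, 20)),        # strong response
    "B": 0.5 * np.sin(np.linspace(0, np.pi, 20)),  # weaker response
}
trial_order = rng.permutation(["A"] * 50 + ["B"] * 50)
trials = [(label, true_response[label] + rng.normal(0.0, 1.0, 20))
          for label in trial_order]

# Selective averaging: regroup the trials by condition *after* acquisition,
# ignoring the randomized presentation order, and average within each group.
def selective_average(trials, label):
    traces = np.array([trace for lab, trace in trials if lab == label])
    return traces.mean(axis=0)

avg_a = selective_average(trials, "A")
avg_b = selective_average(trials, "B")

# Averaging 50 trials shrinks the noise by about sqrt(50), so the two
# underlying responses, invisible in any single noisy trial, separate cleanly.
```

The step that matters is the regrouping: because the averaging is done after acquisition, the presentation order itself can be fully randomized.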
Now that the equipment has been reduced to a manageable size, it has become possible to perform surgery under intraoperative MRI (iMRI). Rather than depending on stereotaxic atlases, which give a series of average three-dimensional coordinates for the brain, or on images taken prior to surgery, surgeons are now able to examine internal structures during the course of the operation and guide their actions by immediately available data. The development of small MRI magnets that can image the brain and then be moved out of the way has been pioneered by Odin Medical Technologies in Israel. Their PoleStar N-10 equipment has so far mainly been applied to brain tumor surgery, but as the technology improves it may come to be used much more widely in other types of surgery.
The extensive literature on standard and functional MRI techniques is bursting with other dazzling ideas for future applications. One that is likely to be important to the brain sciences is a technique called “diffusion tensor imaging” (another application of fMRI), which is sensitive to the passive ﬂow of water in the brain and is capable of distinguishing between epileptic and normal brains. This technique has the advantage of responding differently to myelinated neuronal ﬁbers (white matter) and unmyelinated cell bodies (gray matter). Thus, in principle, it can track out the ﬁbers of the brain, as well as diagnose diseases such as multiple sclerosis that degrade the myelin coating.
Another trend in MRI is the search for techniques that accurately distinguish between different kinds of materials, not just concentrations of water or blood. One strategy of exceptional promise is MRI spectroscopy, by which MRI signals are picked up simultaneously from different chemicals. Current efforts are aimed at producing two- and three-dimensional image maps that display specifics of chemical composition across the whole brain. Success would permit qualitative as well as quantitative study of the brain in living subjects. Imagine being able to track out the distribution (or, better yet, dynamically changing distributions) of transmitter substances in the living brain. Diseases such as Parkinson’s would become amenable to early diagnosis and therapy in ways we can hardly imagine at present.
This discussion only scratches the surface of possibilities. At the rate things are going, a decade or two from now these speculations may have become realities, and even more ambitious speculations will be before us. Expect wonderful things to happen in two closely related areas. Research on the brain leading to fundamental understanding of how this wonderful organ works will continue to be forthcoming, as will new developments in curing illnesses now considered incurable.
HIGH TECH CANNONS AIMED AT PHANTOM TARGETS
MRI and other imaging machines are wondrous, but they can be used as readily to confuse and mystify as to enlighten and clarify.
Traditionally, observation of behavior was psychology’s main source of data, knowledge, and theory about our thought processes. But, like all other natural scientists, psychologists have felt the urge to seek the underlying brain mechanisms that account for that observed behavior. In pursuit of this program of “neuroreductionism,” they have rushed to adopt new technologies for examining the physiological or anatomical correlates of behavior and thought. Techniques such as the electroencephalogram, sensory evoked brain potentials, the galvanic skin response, and similar physiological indicators of brain activity have all been quickly adopted by psychologists in their continuing search for noninvasive entrée into the material workings of the brain. Sometimes this has led to widely, but uncritically, accepted nonsense. Among the most egregious abuses are devices such as the polygraph, which purportedly evaluates the truth or falsity of an uttered statement. The so-called lie detector is just a few short steps along the charlatanism axis from the “orgone box” of yesteryear, an infamous bit of quackery peddled in the 1940s that was supposed to accumulate health-giving amounts of an unobservable substance called orgone. Its inventor died in jail for this deception.
With the introduction of the new imaging devices, particularly fMRI systems, it seemed that the long-sought goal of a valid noninvasive method for correlating brain and cognitive activity was at hand. The application of this technique, however, is based on certain highly questionable assumptions. The most vulnerable of these assumptions is that subjectively reported cognitive processes (to be associated by researchers with the indisputably objective measures of neural activity in the brain) are sufﬁciently well deﬁned to be localized in a certain anatomical spot. As it turns out, a careful reconsideration of how we deﬁne cognitive processes suggests that, in fact, most of them are rather imprecisely or even circularly deﬁned or, even worse, are only names applied to experimental protocols. In other words, not only is it not easy to deﬁne “consciousness” or “mind,” but even such terms as “short-term memory” and “decision making” may not refer to psychobiological realities.
The key conceptual problem faced by those who would correlate cognitive processes with brain activity is their implicit assumption that the mind comprises separable modules that can be isolated, examined independently of each other, and thus separately localized. This premise assumes that the hypothetical cognitive processes produced by the brain interact linearly (one can simply add or subtract one from another, as opposed to their being complex multiplicative functions of each other) and that they maintain the same properties when used in different tasks. For example, it assumes that a component of a reaction-time process (such as the time it takes to select a response) remains the same regardless of how many stimuli are simultaneously presented. This latter criterion is one of the most fragile assumptions underlying the current stampede to locate in the brain what are, I believe, more likely the products of highly interconnected neural mechanisms, none of which operates in complete isolation from other cerebral regions. Robert G. Pachella was one of the first to point out this problem, in 1974, in a remarkably prescient, though not yet fully appreciated, article.4 He summed it up well when he questioned the assumption “that it is possible to delete (or insert) completely mental events from an information processing task without changing the nature of the other constituent mental operations.” Pachella’s conclusion was that “There is nothing in the application of the method itself, or in the data collected therefrom, that can justify this assumption.”
In short, mental or cognitive activity is more likely to be an indivisible entity that cannot be broken up into modules. The philosopher Jerry Fodor has probed this question and concludes, as I have, that (with the exception of the sensory and motor systems) the assumption of cognitive modularity is an experimental convenience inappropriately based on Descartes’ classic idea of “divide and study.”
Thus, some of the cognitive processes that we seek to correlate with brain activity may not be definable entities that can be isolated in the way a region of the brain can be. The history of cognitive processes and faculties is replete with a vocabulary of items that were once popular and then disappeared. Phrenology, once a great scientific fad, invoked mental processes such as “cautiousness”; later came a vogue for compartmentalizing mental life into “faculties,” such as a faculty for “friendship.”
Unfortunately, this process of trying to turn operationally defined processes into actual concrete entities continues to this day. Many of the “cognitions” that we seek to correlate with localized cerebral activity simply may not exist in the same discrete sense that various lobes of the brain exist. The important point about this holist-modularist controversy is that if it turns out that cognition is not divisible into modules in any meaningful way, then the great project of localizing these nonexistent modules in particular regions of the brain becomes unrealizable in principle, as well as in practice.
This brings us to some related problematical technical issues. Many of the studies carried out in cognitive imaging laboratories compare two brain images, one taken during the activation of the cognitive process under study—for example, “thinking of a word”— and one taken in a control condition in which the cognitive process is suppressed in some way. The two pictures are subtracted from each other on a pixel-by-pixel basis, and the locus of any residual difference is proposed as the localized site of the cognitive process.
There are many difﬁculties with this procedure, perhaps the foremost being that the image produced by this subtraction ignores the fact that virtually all of the cerebral cortex is active during virtually any cognitive process. The subtraction method supposedly highlights the places where the activity is maximally different in the two conditions, but it does so by erasing the “background activity.” In so doing, it reinforces the a priori assumption of localized modularity but ignores the alternative possibility that the neural mechanisms encoding cognitive processes are widely distributed in the brain. In other words, it begs the question.
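The subtraction procedure, and the objection to it, can be made concrete with a small sketch (hypothetical numbers; not an actual imaging pipeline). Two "activation maps" that are both globally active differ at only one pixel, and subtracting them discards everything they share.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical 8x8 "activation maps." In BOTH conditions the whole
# cortex is active: a baseline of roughly 1.0 everywhere, plus noise.
baseline = 1.0 + 0.05 * rng.standard_normal((8, 8))
control = baseline.copy()
task = baseline.copy()
task[2, 3] += 0.3   # one small region responds slightly more during the task

# Pixel-by-pixel subtraction: everything the two conditions share vanishes,
# leaving only the difference -- the putative "locus" of the cognitive process.
difference = task - control
hot_spot = tuple(int(i) for i in
                 np.unravel_index(np.argmax(difference), difference.shape))

print(hot_spot)   # (2, 3): the subtraction finds the differential spot
```

The subtraction does find the differentially responsive pixel, but the resulting image says nothing about the near-uniform activity present in both conditions; that background has been erased by construction.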
Furthermore, the exact size of the purported brain locus is arbitrary. Depending upon statistical criteria for what constitutes “active,” a spot may be highly localized or appear to be spread widely across the surface of the cerebral hemisphere. Even then, the presence of a “hot spot” is by no means evidence that the particular cognitive process being studied is actually encoded, represented, or localized at that point. The ﬁndings of such an experiment do not support the conclusion that a particular region is sufﬁcient to encode a particular activity. On the contrary, the region may be simply a necessary part of a complex system of cerebral locations that collectively represent that particular mental task. Or, alternatively, the hot spot may represent some completely unrelated activity that is simply released from inhibition as the subject “thinks” about something else.
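The arbitrariness of the threshold can likewise be illustrated with made-up numbers: a single smooth bump of activity looks like a pinpoint locus under a strict criterion and like a broad swath of cortex under a lenient one.

```python
import numpy as np

# A smooth, hypothetical bump of activity centered on one voxel of a
# 32x32 image, falling off gradually with distance (Gaussian profile).
x, y = np.meshgrid(np.arange(32), np.arange(32))
activation = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / (2 * 6.0 ** 2))

# How many voxels count as "active" depends entirely on the cutoff chosen:
strict = int((activation > 0.9).sum())    # only the peak and its neighbors
lenient = int((activation > 0.1).sum())   # a broad region lights up

print(strict, lenient)   # the "hot spot" grows many-fold as the cutoff drops
```

The underlying activity is identical in both cases; only the statistical criterion for "active" has changed.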
Beyond these problems in logic, there are technical and statistical considerations that also undermine the simplistic idea that cognitive modules are local to particular regions of the cerebrum. A report by Nikos Logothetis pointed out a further difficulty in interpreting fMRI signals: an fMRI signal is much weaker than the corresponding electrical signals obtained at the same time from the same regions. The discrepancy is so great that considerable amounts of neural activity could be going on below the thresholds of the fMRI system or of the statistical methods used to separate relevant activity from background noise. Indeed, it is also not entirely certain that the accumulated level of neural activity directly reflects cognitive processing; vastly different microscopic patterns of neuronal activity may result in no net difference in an fMRI image.
Perhaps even more important is the problem of replication. Do comparable cognitive processes produce the same cerebral images in different trials, in different experiments, in different laboratories? This is still an empirical, active research question. A recent comprehensive analysis by Roberto Cabeza and Lars Nyberg of 275 published PET and fMRI studies on attention, perception, imagery, language, and memory posed this reliability question.5 Unfortunately, the ﬁndings are somewhat ambiguous, but there was no indication of any very narrow, highly constrained localization for any of the higher-level cognitive processes. (The only exception to this generality of function comes from evidence of narrow localization in the sensory and motor areas of the cerebrum, but even this classic idea is currently undergoing reevaluation.) Rather, comparable experimental protocols generally led to responses that were scattered over at least a quadrant of the brain, and frequently over an entire hemisphere (ignoring for the moment the problem of bilateral hemispheric representation). For example, “problem solving” was associated with activity in regions that varied from the occipital to the frontal lobes of the cerebral mantle; several studies also indicated correlated activity in the cerebellum. Where there were concentrations of comparable results from different laboratories, at best the “localization” was to the front or back of the brain, rather than to a smallish spot—the form in which such ﬁndings are usually reported.
The interpretations of this comprehensive review vary with one’s theoretical orientation. That comparable responses are not precisely localized is grist for those of us who would prefer to believe that the brain activities associated with complex cognitive processes are more dispersed than not. That there is some semblance of localization, at least to quadrants of the brain, encourages those who hold that the arguably ill-defined modular cognitive functions are processed in specific regions of the brain. The authors of the analysis themselves point out that the data they have reviewed apply equally well to local, global, and network theories of how the mind/brain is organized. As we can see, there are conceptual and logical problems, as well as technical ones, involved in the localization enterprise.
DO COGNITIVE MODULES HAVE A FUTURE IN PSYCHOBIOLOGY?
This brings us to my ﬁnal question: What would the discovery of extreme localization of cognitive functions mean to our understanding of how cognition emerges, or how it is produced by the brain? Although it may be of interest to conﬁrm that interconnections exist among different parts of the brain, determining the hierarchical structuring of such a system may still be impossible, according to Claus C. Hilgetag and his colleagues, because of the computational complexity of the system.6 Furthermore, there is an argument that the game is simply being played on the wrong court. Focusing our attention on interactions among these macroscopic cerebral regions does not attack the critical question of the origin of thought. Rather, I think the neural equivalents of cognitive processes are much more likely to be found in the myriad microscopic details of the interconnections among individual neurons. The sheer number of these neurons may make it computationally impossible to carry out the analysis necessary to understand how this happens, even for the simplest mental process.
As we have seen, the effort to localize hypothetical cognitive modules in narrowly circumscribed regions of the brain is based upon some tenuous assumptions. There is no question that whatever mental activity is occurring, it must ultimately be explained by some associated neural activity in the brain. However, there are constraints on what can be learned, even with the fabulous new imaging devices. The hypothesis that we can isolate cognitive modules that are encoded in narrowly prescribed regions of the brain seems far too simplistic to have much hope as the guiding theory of a future psychobiology.