Stories on 60 Minutes and 20/20 about a supposed connection between cellular telephone use and brain tumors have not crimped cell phone sales. Is this just the latest scare about environmental threats to our brains, with anxiety-provoking headlines followed by studies reporting “We don’t know”?
Bioethicist Mark Parascandola looks at epidemiology, the science that brings us these warnings and subsequent relief. Epidemiology is a powerful tool for spotting hazards in the environment before the biological mechanisms are known, Parascandola says, and if we grasp what epidemiology can and cannot do, we will not be alarmed by hype or lulled by too many false alarms. We will demand more evidence and more timely assessment of potential hazards to our brains.
Almost weekly, it seems, another product or environmental agent is accused of injuring our brains. Anyone following the news has heard about cellular telephones, aluminum, Gulf War service, Agent Orange, power lines, and low-level lead exposure. Headlines have identiﬁed all of these with unwanted
health effects: “Alzheimer’s linked to aluminum in water,” “Hearing a call of danger: health harms from cell antennas debated,” “Cancer risk for children: scientists ﬁnd an increase among those who live near power lines,” “MRIs reveal Gulf War scars.” Yet, despite many scientiﬁc studies and much public debate, no one has reached ﬁnal verdicts in any of these cases. Do cell phones cause brain cancer? We do not know.
At the epicenter of these controversies is the discipline called epidemiology. Epidemiologists examine patterns of disease among groups of people to identify causes. Who gets a disease, where, and under what circumstances? In the 1950s, epidemiologists compared the fates of smokers and nonsmokers to show that cigarette smoking was the leading cause of lung cancer. Policy makers and the public rely on such studies in legislating, distributing research money, and deciding, for example, what is healthful to eat.
The problem is that some epidemiologic studies generate more controversy than certainty, particularly risk-factor studies seeking to uncover causes of chronic diseases in the environment and in our bad habits. The public complains that experts tout one conclusion this week, the opposite the next. Today margarine is the healthy choice; tomorrow it is butter. With experts like this, we may ask, what purpose does epidemiologic research serve? Despite these perceptions, I believe epidemiology is a key to resolving debates over human health risks and that the answer to this crisis of uncertainty is more, not less, research.
Epidemiology, like any science, often falls short of neat yes or no answers to questions like, “Will that nearby cellular telephone tower cause me to develop brain cancer?” Yet, when carefully interpreted, such studies yield powerful answers to questions answerable in no other way. Three subjects of recent epidemiologic study—aluminum, wartime chemical exposures, and cell phones—illuminate the challenges facing epidemiologists in linking environmental exposures to neurologic effects. While uncertainties remain, these cases argue the urgent need for more large-scale data collection and research. They demonstrate, too, how lawsuits and entrenched ﬁnancial interests can be powerful obstacles to resolving environmental health debates.
EXPERIMENTS NATURAL AND UNNATURAL
At the start of the 21st century, biomedical science still labors in the shadow of legendary 19th-century French physiologist Claude Bernard, whose Introduction to the Study of Experimental Medicine, published in 1865, became a manifesto of modern experimental medicine. Bernard lamented that medicine relied, at best, on anecdotal testimony of physicians about what worked for a particular patient. He disdainfully compared this informal approach to the successful example set by the basic sciences. What do physicists and chemists do? They perform experiments in the laboratory. To become a true science, Bernard insisted, medicine must imitate them.
At the same time, however, some of Bernard’s contemporaries were using statistical health data to track down environmental causes of disease. London anesthesiologist John Snow identiﬁed the cause of a cholera epidemic by making statistical comparisons of cholera cases throughout the city. Households served by the Southwark and Vauxhall Company had six times as many cholera cases as did households served by other water companies. Snow reasoned that Southwark and Vauxhall water was contaminated, perhaps by a broken pipe. He later famously proved the link between contaminated water and cholera cases by removing the handle of a water pump on London’s Broad Street, abruptly ending a disease outbreak in that neighborhood.
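The arithmetic behind Snow's comparison is simple but worth making explicit. The sketch below uses illustrative counts, not Snow's actual figures, chosen only to mirror the roughly sixfold difference in attack rates described above:

```python
# Rate comparison of the kind Snow made: cholera cases per 10,000 households.
# Counts are illustrative, not the historical data.
def attack_rate(cases, households):
    """Cases per 10,000 households."""
    return cases / households * 10_000

sv_rate = attack_rate(cases=315, households=10_000)     # Southwark & Vauxhall customers
other_rate = attack_rate(cases=52, households=10_000)   # households served by other companies

rate_ratio = sv_rate / other_rate
print(f"attack rate ratio: {rate_ratio:.1f}")  # roughly sixfold
```

A ratio that large, holding across thousands of households, is hard to explain by chance, which is what made Snow's inference compelling without any microscope.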
Consider what makes John Snow’s work so impressive. He demonstrated his conclusions before bacteriologists could see the cholera bacillus through a microscope. Snow suspected that a small waterborne organism might cause the disease, but his contemporaries had never seen one. The only connection he observed was in the data on rates of disease among different groups. Snow did have a hypothesis about the source of the problem and designed his rigorous study to test it; it did not matter that the test took place outside the laboratory. Snow did not control the conditions of the experiment, but that did not stop him from making the kind of comparative observation laboratory scientists do.
Claude Bernard was not impressed. “Statistics,” he wrote, “can never yield scientiﬁc truth.” Public health practitioners poring over tables of disease and death saw only broad trends and accidental associations—the forest but not the trees. To understand what caused those trends, Bernard insisted, one must observe biological mechanisms in action. Unlike epidemiologists, laboratory scientists could uncover mechanisms by cutting open animals and performing experiments under carefully controlled conditions. Bernard’s attacks on nonlaboratory science were relentless and unforgiving, probably because he was forced throughout his career to defend laboratory science (and himself) against animists, who believed biology was largely mystical and beyond the reach of empirical science, and antivivisectionists, who disapproved of his using animals in research.
As rigorous as Bernard’s approach may appear, its scope is severely limited. The laboratory is an unnatural setting for investigating human health. For example, complex conditions of human exposures to multiple environmental toxins simply cannot be reproduced in a laboratory. Nor can laboratory scientists intentionally expose people to contaminated water to see if it causes a cholera outbreak. The ingenuity of early epidemiologists was in taking advantage of existing natural experiments. Epidemiologists are opportunists, often creating learning experiences out of public health tragedies. Most knowledge of the effects of radiation exposure on humans comes from follow-up study of Japanese atomic bomb casualties at Hiroshima and Nagasaki, and more recently victims of the accident at the primitive Chernobyl nuclear power plant in the old Soviet Union (now Ukraine).
Following John Snow, epidemiologists continue to identify environmental health hazards even when understanding of the biological causes and mechanisms is limited. For more than two decades, for example, research linking electromagnetic fields (EMF) to cancer risk has continued despite the absence so far of a plausible biological mechanism. Because of this missing biological link, however, scientists have been especially cautious in interpreting results of these studies. There is no smoking gun. But if enough epidemiologic studies reach the same conclusion, that mass of evidence could be as powerful as a biological hypothesis. Few today doubt the connection between smoking and lung cancer, although scientists still do not understand how tobacco smoke initiates cancer. Unfortunately, getting consistent results is more easily said than done in epidemiology, as debates over aluminum and Alzheimer’s disease demonstrate.
ALUMINUM AND ALZHEIMER’S
In the early 1960s, neurologist Igor Klatzo at the National Institutes of Health used rabbits to study the workings of the immune system in the brain. When he injected various solutions into the rabbits’ brains to observe the immune system response, the rabbits went into severe convulsions. Investigating further, Klatzo learned that it was not active ingredients in the solutions that caused this response, but aluminum added to aid the solutions’ action. Klatzo dissected the animals and saw their brain cells appeared to have suffered a kind of degeneration characteristic of Alzheimer’s patients. Here, as with many scientiﬁc discoveries, serendipity played a decisive role. Klatzo’s paper began: “The origin of this study is rather accidental.”1
Klatzo’s finding generated little attention at first because its relevance to humans seemed remote. While aluminum injected directly into rabbit brains had dramatic effects, the experiment bore little resemblance to typical human exposures outside the laboratory. True, a few years earlier British industrial doctors had described how a worker in an aluminum mill developed progressive dementia; an autopsy later showed 20 times the normal amount of aluminum in his brain. People generally, however, are exposed to relatively small doses of aluminum, most from food additives in dairy products, grains, desserts, and beverages. What are the effects of smaller doses over time? Also, of course, eating aluminum is different from having it injected into your brain; people who consume aluminum in food might not experience the build-up in the brain that Klatzo observed in his rabbits.
Eventually, researchers at the University of Toronto, led by neurologist Donald McLachlan, decided Klatzo’s ﬁndings merited further study and set out to look for aluminum in the brains of Alzheimer’s patients. Using autopsy samples, they measured aluminum levels in various parts of the brain and compared levels in Alzheimer’s patients with those in patients who died of unrelated conditions, such as heart disease. The Alzheimer’s patients had aluminum levels two to three times higher.
This kind of clinical study has significant limitations, however. According to an old social-science adage, “Association does not prove causation.” Critics of McLachlan’s work maintained that even if Alzheimer’s patients had higher aluminum levels, it did not follow that aluminum caused the disease. The buildup may be a consequence of the disease; the disease may cause changes in the brain that help the metal accumulate. Several researchers who tried to confirm the original study’s findings failed to find differences in aluminum measurements. The reason for this inconsistency is unclear, but it is possible the original findings were an artifact of the procedure: aluminum is everywhere and may have been deposited accidentally through careless handling of the samples. While Claude Bernard would be loath to admit it, even a controlled laboratory is vulnerable to impurities.
Finally, in 1989, the ﬁrst epidemiologic study was completed. British researchers collected records from hospital CT scanning units in England and Wales and information about the aluminum content of drinking water in those regions. Water is a relatively minor source of aluminum in the average diet, but perhaps the body would absorb more from water because the metal is not bound into complex molecules as it is in many foods. The researchers took advantage of differences in water content to create a natural experiment, like Snow’s. The result? People who lived in districts with the highest aluminum content in their water had a 50 percent greater risk of developing Alzheimer’s disease.
Other studies soon followed. Some supported the British findings; others did not. By 1993, there were 10 studies; 7 supported a link between aluminum and Alzheimer’s disease and 3 did not. McLachlan and colleagues were not easily discouraged; they designed a rigorous study that improved on weaknesses of earlier studies. For example, they asked subjects how long they had lived at their current address; without this information, people who had recently moved from a low to a high aluminum area would be misclassified. After analyzing the results, they provocatively concluded that 15,000 to 27,000 cases of Alzheimer’s in Ontario, Canada, could have been prevented by keeping aluminum levels below a recommended threshold, and they urged the government to take appropriate steps. About the same time, however, a study from Northern England that collected information about patients diagnosed with pre-Alzheimer’s dementia found no such trend; people in high-aluminum areas were no more likely to have dementia.
Legendary epidemiologist Sir Richard Doll (coauthor of a pivotal early paper on smoking and lung cancer) commented on this scientific standoff: “In some circumstances, the results of epidemiological findings are so clear that an association can be conﬁdently interpreted as indicating cause and effect and action taken to remove or reduce exposure to the putative cause.”2 Sometimes an association can be taken as proof of cause and effect. But such cases are rare, he insisted, and this was not one of them. To reach such a conclusion, consistent results are necessary, and results on aluminum were anything but consistent.
For now, aluminum has receded from the spotlight. The current view of the Alzheimer’s Association is that “there is no proof that aluminum causes Alzheimer’s disease.” At the same time, there is no proof that aluminum does not cause Alzheimer’s disease. Indeed, proving the absence of such an effect is even more difﬁcult than proving its presence. We are left with frustrating, lingering uncertainty.
A CRISIS OF PUBLIC CONFIDENCE
The aluminum story is one of many that seemed to appear with increasing frequency during the 1990s. Front-page headlines announced a new association, but further studies would fail to confirm the initial findings, and it came to seem that more research only engendered more controversy. Epidemiology did, and still does, suffer a crisis of public confidence.
Since Claude Bernard’s day, epidemiology has consistently drawn its share of vocal critics. Yale University epidemiologist Alvan Feinstein has devoted a substantial portion of his career to questioning his colleagues’ methods. In 1988, he published a vehement critique in Science, “Scientific Standards in Epidemiologic Studies of the Menace of Daily Life,” charging that epidemiologists were too quick to point fingers at potential health hazards. In particular, he referred to an infamous 1983 study linking coffee drinking and pancreatic cancer, an association that, after extensive media coverage, was shown to be wrong. Like Bernard, Feinstein compared epidemiology unfavorably to sciences such as physics and chemistry, lamenting epidemiologists’ “apparent complacency about fundamental methodologic flaws.” The discipline, he concluded, was still in the dark ages.
More recently, attacks on epidemiology have increased, even within its own ranks. In 1995, Science ran a long article by science writer Gary Taubes titled “Epidemiology Faces Its Limits.” The article brimmed over with quotes from epidemiologists who lamented the sad state of their discipline. “We are fast becoming a nuisance to society,” said Dimitrios Trichopoulos, head of the epidemiology department at the Harvard School of Public Health. Did their ﬁndings, epidemiologists asked, do more harm than good?
One particular warning was against the dangers of over-interpreting subtle observations. It is easier to be conﬁdent about big effects, like the 3,000 percent increase in lung cancer risk from smoking, than about small ones. Many environmental associations uncovered more recently involve increases in risk of less than 100 percent. For example, studies that ﬁnd increased brain cancer risk in workers exposed to electromagnetic ﬁelds show increases of 10 to 20 percent. Epidemiologists argue that effects of this size are hard to distinguish from statistical noise.
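A small Monte Carlo sketch makes the point concrete. The parameters below (a 1 percent baseline risk, 2,000 people per group) are illustrative, not drawn from any study discussed here; the simulation asks how often a cohort study of that size would detect a 15 percent excess risk versus a 30-fold one:

```python
# Monte Carlo sketch with illustrative parameters: how often does a study
# of a given size detect a small relative risk versus a huge one?
import math
import random

random.seed(0)

def detection_rate(rr, baseline=0.01, n=2000, trials=200):
    """Fraction of simulated cohort studies whose 95% confidence interval
    for the risk ratio excludes 1 (i.e., that detect the true effect rr)."""
    detected = 0
    for _ in range(trials):
        a = sum(random.random() < baseline * rr for _ in range(n))  # exposed cases
        b = sum(random.random() < baseline for _ in range(n))       # unexposed cases
        if a == 0 or b == 0:
            continue
        log_rr = math.log((a / n) / (b / n))
        se = math.sqrt(1 / a + 1 / b)   # approximate SE of the log risk ratio
        if log_rr - 1.96 * se > 0:      # lower CI bound above 1
            detected += 1
    return detected / trials

small = detection_rate(rr=1.15)   # 15% excess risk
large = detection_rate(rr=30.0)   # 30-fold risk, like smoking and lung cancer
print(f"15% excess risk detected in {small:.0%} of simulated studies")
print(f"30-fold risk detected in {large:.0%} of simulated studies")
```

The large effect is detected essentially every time; the small one usually vanishes into the noise, which is why modest relative risks demand far larger studies.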
Big effects are relatively easy to see. Extremely high doses of lead have immediate, visible effects: coma, convulsions, and death. There is no question that lead can be hazardous. But is it hazardous in very small doses over a long period? That question is much more difﬁcult to answer. Some research suggests that children with low but higher-than-normal blood-lead levels have impaired neurological development and more behavioral problems. Such effects would develop slowly over several years; measuring their impact is far less certain.
Still, it is foolish to say that in colliding with these more subtle risks epidemiology has reached its limits. Subtle or not, such effects can only be fully understood by studying large numbers of people. It is impossible to tell by looking at a single child whether he would be smarter if he had not swallowed lead paint. It is easy to show that a chemical is acutely poisonous in laboratory animals, but long-term, low-level exposures that most people experience are impossible to replicate in short, inexpensive animal studies. The problem is not too much epidemiologic data, as Taubes implies, but not enough.
It has long been observed that soldiers returning from war exhibit a host of psychological symptoms. After World War I, it was called “shell shock.” The intense stress of conﬂict, being near death and witnessing the deaths of fellow soldiers, led to recurring nightmares, insomnia, anxiety, memory loss, headache, and vertigo. Veterans returning from Vietnam reported similar symptoms. Studies showed that soldiers who experienced greater war-zone stress had higher rates of psychological disorders; returning veterans also had higher-than-average rates of accidental death, such as from car accidents.
But stress was not the sole protagonist. Between 1962 and 1971, U.S. soldiers sprayed about 19 million gallons of herbicide around Vietnam to destroy enemy crops and clear jungle trails and hideaways. One of these was Agent Orange. At the time, little was known about its effects, but a 1969 scientiﬁc report concluded that Agent Orange could cause birth defects in laboratory animals. By the time the United States stopped using Agent Orange in 1970, many soldiers were exposed. After returning home, veterans began to report a seemingly high number of serious illnesses and birth defects; by the mid-1970s some placed blame for the illnesses on Agent Orange.
Studying the effects of environmental hazards on the brain is especially difﬁcult under wartime conditions. How can scientists distinguish between the effects of wartime stress and those of a chemical? Veterans who were closer to combat, and thus under the greatest stress, also had the highest chemical exposures, making it almost impossible for scientists to tease out distinct effects. A bigger problem was that no one had bothered to track the soldiers who took part in the spraying or the extent of their exposure to Agent Orange. In 1994, a National Academy of Sciences committee looked at the evidence and concluded that it was insufﬁcient to determine whether or not Agent Orange exposure caused cognitive or psychological defects. They did not have enough data.
More recently, returning Gulf War veterans reported diverse psychological and physical symptoms that have come to be known as Gulf War syndrome. The ﬁrst challenge was that the syndrome was vaguely deﬁned and difﬁcult to diagnose. A group of scientists asked in the American Journal of Epidemiology: “How would we know a Gulf War syndrome if we saw one?” Moreover, many symptoms of the syndrome (headaches, dizziness, fatigue) are subjective, so early debate focused not on chemical exposure but on whether the veterans had an identiﬁable ailment at all.
Officials of the U.S. Departments of Defense and Veterans Affairs insisted that the symptoms of Gulf War syndrome resulted from the usual wartime stress. If the symptoms could be explained as posttraumatic stress disorder (PTSD), there was no need to invoke chemical causes. The two departments released a study concluding that the Gulf War veterans exhibited no excess illness. In response, epidemiologist Robert Haley at the University of Texas Southwestern Medical Center examined the government’s findings and charged its researchers with reaching the wrong conclusions from their own data. The government compared the exposed soldiers with the general population—an inappropriate comparison, Haley argued, because young soldiers sent to fight tend to be healthier than the average citizen (the “healthy soldier” bias). The standoff continues.
In the most recent development, Haley and James Fleckenstein reported last year that they identified evidence of physical brain damage in some Gulf War veterans. Specifically, they used magnetic resonance imaging (MRI) techniques to measure levels of certain brain chemicals. Veterans complaining of illness had lower levels of the chemical N-acetylaspartate, indicating cell loss in the brain stem and basal ganglia. Damage to these brain areas could explain symptoms of depression, difficulty concentrating, and pain. Thus the study is significant in linking subjective reports of symptoms to measurable physical effects.
The greatest challenge in this study and others like it is that no one knows who was exposed to what. Ofﬁcials are not likely to know the details of chemical releases by opposing forces, and U.S. defense ofﬁcials are less than forthcoming with information about chemicals they themselves may have used in the ﬁeld. The Department of Defense moved slowly in releasing information about troop exposures to chemical and biological weapons, and its records of immunizations and use of anti-nerve-gas agents were incomplete.
In the end, perhaps all we can say is that more data is needed. Over the past few years, discussions about Gulf War syndrome have moved from the fringe to the mainstream, but information is still insufﬁcient for us to draw a ﬁrm conclusion either way. Unfortunately, because secrecy and urgency inherent in war worked against collecting detailed health and exposure information, the debate may never be resolved.
CELL PHONES AND BRAIN CANCER
After painstakingly drawing conclusions from limited data, aspiring young epidemiologists must resist the temptation, perhaps more acute in their discipline than in any other, to hype their conclusions. In Taubes’s Science article, epidemiologists lamented that competition for grant support and publication pressured young researchers to play up the importance of positive associations they found between environmental agents and dangers to health, even when evidence was weak. Journals, research sponsors, and the media are less intrigued by a study concluding that high aluminum levels are not associated with health effects than by one that does link aluminum to Alzheimer’s. Feinstein warned against statistical ﬁshing expeditions, where researchers collect all sorts of data about exposures without a clear hypothesis. This practice is likely to turn up positive associations simply by chance, Feinstein argued, peppering the public with weekly health alarms—many never conﬁrmed—that create what Lewis Thomas called an “epidemic of anxiety.”
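Feinstein's worry is easy to demonstrate. The simulation below (illustrative parameters, no real exposures) screens 100 exposures that in truth have no effect at all; at the conventional 5 percent significance threshold, a handful will look "significant" by chance alone:

```python
# Sketch of a statistical "fishing expedition": test 100 exposures that
# truly do nothing, and some will pass a significance test anyway.
import math
import random

random.seed(1)

def looks_significant(n=500, p=0.1):
    """Simulate one null exposure: identical disease risk p in both groups,
    then run a two-proportion z-test at the 5% level."""
    a = sum(random.random() < p for _ in range(n))  # cases among "exposed"
    b = sum(random.random() < p for _ in range(n))  # cases among "unexposed"
    pooled = (a + b) / (2 * n)
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    if se == 0:
        return False
    z = (a / n - b / n) / se
    return abs(z) > 1.96

false_alarms = sum(looks_significant() for _ in range(100))
print(f"{false_alarms} of 100 null exposures looked 'significant'")
```

Screening many hypotheses without a prior hypothesis or correction almost guarantees a few spurious positives, exactly the raw material for an "epidemic of anxiety."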
Such an epidemic erupted on January 21, 1993, when Florida businessman David Reynard announced on the Larry King Live television show his intention to sue the manufacturer of his wife’s cellular telephone. Reynard insisted her fatal brain tumor came from using the phone, a conclusion he reached after seeing an MRI that showed the tumor “directly next to the antenna, and [it] seemed to be growing inward from that direction.” Cellular stocks plunged, although no studies had tested Reynard’s hypothesis. Yet often such episodes of panic are the impetus for epidemiologic research. Facing a public relations nightmare, the Cellular Telecommunications Industry Association donated $25 million for research on the health effects of cell phones.
With some of that support, epidemiologist John Muscat of the American Health Foundation, a nonprofit organization that studies cancer prevention, interviewed brain cancer patients about cell phone use, comparing their reports with those of patients hospitalized for other reasons. At first, he found no difference between the two groups. Brain cancer patients did not spend appreciably more time on the telephone than those without brain cancer. As a next step, Muscat focused his analysis on neuroepithelial tumors, which grow from the periphery of the brain inward. Here he found a statistical association between cellular phone use and this cancer. Of 35 patients with neuroepithelial tumors, 18 used cell phones. From this, Muscat estimated that for neuroepithelial cancer alone, using a cell phone almost triples risk. An alarming discovery? Perhaps, but narrowing the focus of the study to a particular cancer and fewer patients increases the danger of statistical cherry picking: finding a positive correlation due to chance. Muscat himself urged caution: “It’s not a finding that deserves a lot of attention in supporting recommendations for the public.”
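A rough calculation shows why Muscat's caution was warranted. The case side of the 2x2 table comes from the article (18 of 35 neuroepithelial tumor patients used cell phones); the control counts below are hypothetical, chosen only to produce an odds ratio near 3. With numbers this small, the 95 percent confidence interval is enormous:

```python
# Odds ratio with a Woolf confidence interval for a 2x2 case-control table.
# Case counts follow the article; control counts are hypothetical.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of the log odds ratio
    lo, hi = or_ * math.exp(-z * se), or_ * math.exp(z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=18, b=17, c=9, d=26)  # controls are illustrative
print(f"OR = {or_:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")
```

An interval stretching from about 1.1 to over 8 is consistent with anything from a negligible effect to a very large one, which is why a single small subgroup analysis cannot support public recommendations.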
Epidemiologic research is not carried out in a vacuum; findings often directly affect public policy and the fate of businesses. Everyone, it seems, has an opinion about what the evidence means; inconclusive findings can be overstated—even hyped. George Carlo, head of Wireless Technology Research (WTR), the group formed to coordinate industry research, took Muscat’s results a step further, arguing that the evidence was sufficient to warn the public about the potential hazard. He produced a consumer information book detailing the results of Muscat’s study and other findings and appeared on 20/20 and 60 Minutes talking about the dangers of cell phones. In the long run, overzealous interpretations of limited evidence will hurt more than help public health by reducing public confidence in scientific research.
Muscat’s intriguing ﬁndings are not sufﬁcient for public policy recommendations, but they should be viewed as a red ﬂag in an area where more research is urgently needed. Kenneth Rothman and Nancy Dreyer at the Epidemiology Research Institute outside Boston began another industry-funded epidemiologic study. They planned to link phone subscriber records with the National Death Index, the federal government’s database of the deceased. When a cell phone user died, the National Death Index would identify cause of death, and subscriber records would show how much time the deceased had spent on what kind of phone. This methodology would be far more powerful than Muscat’s because it would use data for one million people and rely on subscriber records rather than personal memory to measure radiation exposure.
Rothman and Dreyer had processed only a year’s worth of data before a Chicago lawyer filed a privacy lawsuit that blocked their access to subscriber records. In a year’s worth of data, only six deaths had occurred—too few to draw firm conclusions either way. Nonetheless, WTR director Carlo makes much of the fact that the rate of brain cancer deaths was slightly higher among users of handheld phones than among users of car-mounted phones (where the antenna is farther from the brain). But Rothman and Dreyer did not find this small (statistically insignificant) difference worthy of mention.
More research on cell phones is under way. The National Cancer Institute has launched a big brain cancer study that examines possible causes, including cell phone use. The industry has teamed with the Food and Drug Administration to support further epidemiologic studies. The earliest results will not be available until 2003.
Responding to international epidemics of infectious diseases, public health ofﬁcials recently called for better disease surveillance to spot potential epidemics before they spread. Another need is to track environmental hazards that may have chronic toxic effects. The Pew Environmental Health Commission and 13 public health organizations called on Congress to pay for a nationwide system to count and monitor chronic diseases, such as Alzheimer’s, asthma, and childhood cancers, and toxic exposures that may cause disease. Unfortunately, a historically entrenched and distinctly American preoccupation with personal privacy has so far thwarted efforts to establish more comprehensive government databases on individual disease outcomes, even though registries could be designed to protect confidentiality. Without more systematic surveillance, epidemiologic studies will continue to be initiated only after some national tragedy—or a lawsuit—has struck.
In the case of Gulf War syndrome, a vague diagnosis and incomplete exposure data severely hampered investigations. Ideally, the U.S. military should keep detailed records on how it uses chemical and biological agents (such as vaccines) on its own soldiers. Trying to reconstruct the information based on individual memories is far more difﬁcult and more costly, and far less reliable. Monitoring the soldiers could have picked up adverse effects earlier. New products such as cell phones appear on the market without prior safety testing or post-market observation, and manufacturers are unlikely to voluntarily collect such data. Thus there is a long way to go to help scientists amass more and better raw data for study.
Because new potential hazards will still be shrouded in uncertainty, the most important question is what should be done in the interim, while scientists continue to gather information. During such periods the attitude that scientists, regulators, and the public take toward the role of epidemiology is crucial. What makes epidemiologic ﬁndings contentious is not that they are unscientiﬁc but that they invariably threaten someone’s interests. The cell phone industry is not eager to see its products linked to cancer. Residents who do not want a cellular communications tower located near their homes may play up unproven dangers. To further complicate the situation, epidemiologic ﬁndings increasingly are brought into toxic tort lawsuits, where a plaintiff alleges that some environmental exposure caused his cancer. Some cases are well founded; others, such as David Reynard’s cell phone lawsuit, encourage creative interpretations of scientiﬁc evidence. Too often the outcome rests on a battle of experts, with opponents touting or trashing epidemiologic studies for the sake of a favorable verdict. Contrast this situation with basic research in physics and chemistry. Physics experiments rarely make front page news, unless they reveal a purportedly revolutionary ﬁnding like cold fusion. But epidemiologic findings like Haley’s recent study of brain chemical levels in Gulf War veterans regularly make headlines.
In this charged atmosphere, conclusions drawn from epidemiologic research must be viewed with an appropriate balance of caution and enthusiasm. Be suspicious of broad conclusions supposedly gleaned from a single study. Epidemiologic studies can uncover cause-and-effect relationships, but conﬁdence in the conclusions requires consistency across several studies. Philosophers of science like to point out that standards of evidence are different for generating hypotheses than for proving cause and effect. While a single epidemiologic study may suggest a new hypothesis, several (and sometimes more) are required to prove that, say, aluminum exposure causes Alzheimer’s.
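The demand for consistency across studies has a quantitative counterpart in meta-analysis. The sketch below pools three hypothetical studies (illustrative relative risks and standard errors, not data from the aluminum literature) by the standard inverse-variance method; each study alone is borderline, but together they sharpen the estimate considerably:

```python
# Minimal fixed-effect (inverse-variance) pooling of several studies'
# log relative risks. The three studies are hypothetical.
import math

def pool(log_rrs_and_ses):
    """Pooled log relative risk and its SE, weighting by 1/SE^2."""
    weights = [1 / se ** 2 for _, se in log_rrs_and_ses]
    pooled = sum(w * lr for (lr, _), w in zip(log_rrs_and_ses, weights)) / sum(weights)
    return pooled, math.sqrt(1 / sum(weights))

# Three illustrative studies, each individually borderline (RR ~1.5, wide SEs).
studies = [(math.log(1.4), 0.25), (math.log(1.6), 0.30), (math.log(1.5), 0.20)]
log_rr, se = pool(studies)
print(f"pooled RR = {math.exp(log_rr):.2f}, SE of log RR = {se:.2f}")
```

Each study's z-score hovers around 1.4 to 2, short of conventional significance, while the pooled estimate is nearly three standard errors from zero. That is the statistical sense in which several consistent studies prove more than any one alone.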
Having said that, there is a point where skepticism ceases to be useful. Bernard’s uncompromising opposition to using statistics has significant drawbacks for public health. Tobacco company executives took a quite similar line, denying that smoking causes lung cancer long past the point where the evidence was conclusive.
It is presumptuous to suppose that all big health hazards have been identified and that epidemiology has outlived its usefulness or carried its methods beyond their limits. Epidemiologists like to point out that an absence of evidence is not the same as evidence of absence—evidence that there is no effect on health. We must not be too quick to infer that an agent is harmless simply because it has not yet been put to the test. The verdict is still out on countless agents to which we are exposed daily. Our best hope of getting answers lies in research: above all, perhaps, in the powerful, complex, oft-misunderstood discipline of human epidemiology.