The Long, Sometimes Bumpy Road of Drug Development
The current arduous process of drug development may be improved through brain imaging.
We often read that understanding the human genome promises to transform medicine, because finding the genes associated with major diseases provides potential new targets for therapy. So what are the obstacles to developing new drugs?
To understand this formidable challenge, let us consider the example of creating a new drug to treat Alzheimer’s disease. Scientists had long known that a protein called beta-amyloid formed “plaques” in the brains of people with Alzheimer’s, but not whether it was involved in causing the disease. Identifying abnormalities in genes regulating the synthesis of beta-amyloid in the brains of patients with an inherited form of Alzheimer’s helped remove that doubt in most scientists’ minds. But how can this help us develop a new treatment?
Studies with nerve cells grown in culture outside the body showed that beta-amyloid can kill them. Together with the information from genetics research, this suggests that the progression of Alzheimer’s might be controlled (or, in early stages, reversed) by clearing amyloid from the brain or blocking its production.1 Different strategies for doing this have been proposed, such as “washing” beta-amyloid from the brain using large molecules called antibodies that bind irreversibly to it. Such therapeutic antibodies could be either injected (passive immunization) or generated through vaccination. Just imagine a future in which children are given a quick jab to prevent mumps, measles, rubella, whooping cough—and Alzheimer’s disease. Another strategy would be to develop chemicals that inhibit the brain enzymes (large molecules made by cells in the body that perform particular chemical tasks) responsible for producing beta-amyloid in the brain.
Having established a biological strategy and rationale for the treatment of Alzheimer’s disease, the drug industry faces the next challenge: to engineer molecules that can perform the required tasks. This means engineering an antibody (or other binding molecule) that can potentially pull beta-amyloid out of deposits in the brain or designing a chemical that can block the actions of enzymes that produce beta-amyloid. The rapidly growing predictive power of computational biology makes designing such molecules increasingly scientific, but important elements of the process remain an art. Many, many antibodies must be produced and screened in order to discover the few that work well, or researchers must synthesize a variety of small molecules expected to bind to the enzyme target. Luck continues to play a role, and not all biologically validated targets can be further developed to become potential drugs.
The next hurdle is to show that a molecule can do what is intended in a test tube or in cells in a culture dish, such as selectively recognizing beta-amyloid or blocking the target enzyme without interfering with other enzymes that have important functions. And successful test tube experiments are not enough. Researchers must show that the potential drug can be administered safely to an animal and, ideally, that it can reduce beta-amyloid in an animal model of Alzheimer’s disease.
Many molecules appear, in the test tube or even in animal models, to function in ways that suggest they could be drugs, but either they fail to have the right effect in humans or they cause unacceptable side effects. So the most challenging hurdle for drug development is testing a molecule in humans. The first question is whether a safe dose range can be determined, one that will allow high enough doses to have the action predicted in the earlier animal experiments. For instance, in our example of a hypothetical drug for Alzheimer’s, researchers would have to establish that the beta-amyloid antibody can be given at high enough doses to bind significant amounts of beta-amyloid while not triggering undesirable activation of the body’s immune responses. Or they would need to show that an enzyme-inhibiting small molecule does not damage the liver, which is responsible for deactivating many small molecules in the bloodstream. This kind of critical safety information is acquired in Phase I experimental trials, in which the drug is carefully administered to closely monitored healthy volunteers. Success in Phase I is not guaranteed; overall, 35 percent of candidate drugs fail here for one reason or another.
But even finding that potential drugs work in an animal model and are safe in humans does not mean that they can be used to treat a disease. Candidate drugs that survive Phase I trials must next be tested in Phase II clinical studies in patients. In our example, Phase II trials would test whether the new molecules are likely to be effective in Alzheimer’s disease. At this point researchers encounter a whole new set of challenges.
Consider how this would work with the much more straightforward problem of developing an antibacterial drug. Phase II trials might involve testing whether administration of the compound leads to faster reduction of fever or other signs of infection. This can be done relatively quickly, because the trial uses clinical measures and signs that are easy to interpret; the fever goes down, for example, or swelling is reduced. But with neurological disorders such as Alzheimer’s disease, which have time courses extending over years or decades, it can be difficult to assess the potential utility of a possible drug, even in a preliminary way, if only conventional clinical measures are used.
To appreciate this problem, consider that in late 2005 more than 30 new potential drugs for Alzheimer’s disease were being developed by different companies. Using even the most sensitive current clinical measures of memory function and cognition, and even assuming that any of the compounds achieved as much as a 50 percent slowing of rates of cognitive decline, investigators would need more than 300 patients in order to make a preliminary assessment of the efficacy of each one of these agents after the first year of follow-up.2 In fact, setting up studies, recruiting patients, and analyzing results take far longer. What this means is that a traditionally designed Phase II study would take at least a couple of years for Alzheimer’s disease—and more than one Phase II trial is typically needed, because different populations must be studied and a range of information must be acquired. Just do the math: 300 patients per trial × years per trial × more than one trial per possible drug. And only about a third of the new molecules entering Phase II can progress to Phase III.
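To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The 30 candidate drugs, the 300-patient minimum, and the roughly two-year trial length come from the paragraph above; the figure of two Phase II trials per drug is an illustrative stand-in for “more than one,” not a number from the article.

```python
# Rough tally of the Phase II burden described above.
# Values marked "assumed" are illustrative stand-ins, not data from the article.

candidate_drugs = 30      # potential Alzheimer's drugs in development (late 2005)
patients_per_trial = 300  # minimum needed to detect a 50% slowing of decline
years_per_trial = 2       # "at least a couple of years" per traditional trial
trials_per_drug = 2       # assumed: "more than one" Phase II trial per drug

total_patients = candidate_drugs * trials_per_drug * patients_per_trial
total_trial_years = candidate_drugs * trials_per_drug * years_per_trial

print(f"Patients enrolled across all Phase II trials: {total_patients:,}")
print(f"Cumulative trial-years (if run one after another): {total_trial_years}")
```

Even under these conservative assumptions, screening the field of candidates would tie up roughly 18,000 patients and, run end to end, 120 trial-years of effort at the Phase II stage alone.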
Completion of Phase II with proof that a molecule might have useful activity in treating a disease still does not make a drug. Larger, longer Phase III studies must follow, in which the drug is carefully tested in a population more representative of the patients who would ultimately be treated. The goal is to assess not only whether the new potential drug is effective but also whether it is better than existing treatments. Equally important is understanding any risks associated with taking the new candidate drug. For Alzheimer’s disease, this might involve recruiting a larger number of patients from several different medical centers. Just under two-thirds of the molecules entering Phase III will be developed further. Only after Phase III testing can approval for marketing of a drug be sought from government regulators. Then, and only then, can the drug reach the person with Alzheimer’s who needs it so desperately.
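Chaining the approximate survival rates quoted in this article gives a sense of the cumulative attrition. The short sketch below uses rounded, illustrative interpretations of those rates (about 65 percent of candidates surviving Phase I, about a third progressing past Phase II, and just under two-thirds surviving Phase III); it is not a precise industry statistic.

```python
# Chain the approximate phase-by-phase survival rates quoted in the text
# to estimate how many molecules entering Phase I reach a marketing application.
# Rates are rounded interpretations of the article's figures.

survival_rates = {
    "Phase I":   0.65,  # ~35% of candidates fail Phase I
    "Phase II":  0.33,  # only about a third progress to Phase III
    "Phase III": 0.65,  # just under two-thirds are developed further
}

cumulative = 1.0
for phase, rate in survival_rates.items():
    cumulative *= rate
    print(f"Fraction still in development after {phase}: {cumulative:.2f}")

print(f"Roughly 1 in {round(1 / cumulative)} molecules entering Phase I "
      "survives to the point of seeking regulatory approval.")
```

By this rough reckoning, only about one in seven molecules that enter human testing makes it to the point of seeking approval, which is why so many candidates must be in the pipeline at once.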
References
1. Schenk D, Games D, Seubert P. Potential Treatment Opportunities for Alzheimer’s Disease Through Inhibition of Secretases and Abeta Immunization. Journal of Molecular Neuroscience 2001; 17(2): 259-267.
2. Thal LJ, Kantarci K, Reiman EM, Klunk WE, Weiner MW, Zetterberg H, Galasko D, Pratico D, Griffin S, Schenk D, Siemers E. The Role of Biomarkers in Clinical Trials for Alzheimer Disease. Alzheimer Disease and Associated Disorders 2006; 20(1): 6-15.