For Western physicians, the 1950s and 1960s were a time of tremendous optimism. Nearly every week the medical establishment declared another “miracle breakthrough” in humanity's war with infectious disease. Antibiotics, first put into widespread clinical use in the early 1940s, were growing in number and potency, so much so that clinicians and scientists shrugged off bacterial diseases; in the industrialized world, former scourges such as staphylococcal infections and tuberculosis had been deftly moved from the “extremely dangerous” column to that of “easily managed minor infections.” Medicine was viewed as a huge chart depicting disease incidence over time: by the twenty-first century every infectious disease on the chart would have hit zero. Few scientists or physicians of the day doubted that humanity would continue on its linear course of triumphs over the microbes.
Dr. Jonas Salk's 1955 mass experimental polio vaccination campaign was so successful that cases of the disease in Western Europe and North America plummeted from 76,000 in 1955 to fewer than 1,000 in 1967.¹
The excitement engendered by that drama prompted optimistic declarations that the disease would soon be eradicated from the planet.
Similar optimism enveloped discussion of nearly every infectious disease affecting human beings. In 1948, U.S. Secretary of State George C. Marshall declared at the Washington, D.C., gathering of the Fourth International Congress on Tropical Medicine and Malaria that the conquest of all infectious diseases was imminent.²
Through a combination of enhanced crop yields to provide adequate food for humanity and scientific breakthroughs in microbe control, Marshall predicted, all the earth's microscopic scourges would be eliminated.
By 1951 the World Health Organization was so optimistic that it declared that, through careful local management, Asian malaria could soon reach a stage wherein “malaria is no longer of major importance.”³
A key reason for the excitement was the discovery of the insect-killing power of DDT and the related class of chemicals known as organochlorines, all of which possessed the remarkable capacity to kill mosquitoes and other insect pests on contact and to go on killing, for months or even years, any insects that alighted on pesticide-treated surfaces.
In 1954 the Fourteenth Pan American Sanitary Conference resolved in its Santiago, Chile, meeting to eliminate malaria completely from the Western Hemisphere, and PAHO (Pan American Health Organization) was instructed to draw up an ambitious eradication plan. The following year the World Health Organization decided to eliminate all malaria on the planet. Few doubted that such a lofty goal was possible: nobody at the time could imagine a trend of worsening disease conditions; the arrow of history always pointed toward progress.
Every problem seemed conquerable in the decade following World War II: humanity could reach the moon; bombs too terrifying ever to be used could create a balance of terror that would prevent all further worldwide wars; American and European agriculturalists could “green” the poor nations of the world and eliminate starvation; civil rights legislation could erase the scars of slavery and bring racial justice; democracy could shine in startling contrast to communism and provide a beacon to which the nations of the world would quickly flock. Huge, gasoline-hungry cars cruised freshly paved highways, and their passengers dreamed of a New Tomorrow.
From the capitalist world came thousands of zealous public health activists who rolled up their sleeves and dived like budding Dr. Pickerbaughs into amazingly complex health crises. Sinclair Lewis had lambasted just such zealous health optimism in Arrowsmith, creating the character of Almus Pickerbaugh, physician-congressman-poet, whose gems included:
You can't get health
By a pussyfoot stealth,
So let's every health-booster
Crow just like a rooster.
Never mind the seemingly daunting obstacles presented by, for example, cholera control in India; all was possible in the Age of Boosterism.
The notion of what came to be called the Health Transition took hold. The concept was simple: as nations moved out of poverty and the basic food and housing needs of their populations were met, scientists could use the pharmaceutical and chemical tools at hand to wipe out parasites, bacteria, and viruses.
What would remain were the slower chronic diseases that primarily struck in old age, particularly cancer and heart disease. Everybody would live longer, disease-free lives.
Such glowing forecasts were not limited to the capitalist world. Soviet and Eastern bloc health officials presented ever-rosier medical statistics each year, suggesting that their societies were also well on their way to conquering infectious diseases. And Mao Zedong, leader of the roughly 700-million-strong Chinese nation, declared in 1963:
The Four Seas are rising, clouds and waters raging,
The Five Continents are rocking, wind and thunder roaring.
Away with all pests!
Our force is irresistible.⁴
Throughout the 1950s and 1960s, the Chinese Communist Party waged a peasant-based war on infectious diseases, mobilizing millions of people to walk through irrigation ditches and pluck schistosome-carrying snails from the banks.⁵
According to British physician Joshua Horn, who fully embraced the campaign and Maoism, virtually no new cases of schistosomiasis, a serious parasitic disease of the liver, occurred in China in 1965–66, a result, he claimed, of the Communist Party campaign.⁶
Though the ideological frameworks differed markedly, both the capitalist and communist worlds were forecasting brighter futures in which there would be a chicken in every pot, a car in every garage, and a long, infectious-disease-free life ahead for every child. Both sides of the Iron Curtain agreed that mass mobilization of the global populace to fight disease would inevitably result in victory. Never mind in what rhetoric public health campaigns might be wrapped, humanity would triumph over the microbes.
In September 1966 the U.S. Centers for Disease Control assessed the status of American health:
The status of diseases may be classified as follows:

1. Diseases eradicated within the United States (bubonic plague, malaria, smallpox, etc.)
2. Diseases almost eradicated (typhoid, infantile paralysis, diphtheria, etc.)
3. Diseases that still are health problems, although technology exists for effective control (syphilis, tuberculosis, uterine cervix cancer, injury, arthritis, breast cancer, gonorrhea, etc.)
4. Diseases where technology is in early developmental stages or nonexistent, and where little capability exists for alleviating or preventing health impairment (leukemia and some other neoplasms, some respiratory diseases and strokes)⁷
As the 1960s opened, the U.S. Department of Health, Education, and Welfare convened a team of medical experts to decide the future mission of the entire government public health effort. Praising the accomplishments of the 1950s, the advisory team declared that “science and technology have completely transformed man's concepts of the universe, of his place in it, and of his own physiological and psychological systems. Man's mastery over nature has been vastly extended, including his capacity to cope with diseases and other threats to human life and health.”⁸
By 1967 U.S. Surgeon General William H. Stewart would be so utterly convinced of imminent success that he would tell a White House gathering of state and territorial health officers that it was time to close the book on infectious diseases and shift all national attention (and dollars) to what he termed “the New Dimensions” of health: chronic diseases.⁹
“In the words of the song, ‘The fundamental things go by,’ polio and measles can be eradicated and should be eradicated,” Stewart would tell his exuberant audience. “Venereal disease and tuberculosis can be sharply reduced and should be sharply reduced. These are tasks that no one will perform for us. So long as a preventable disease remains, it must be prevented, and public health must be the primary force for prevention.”
Not content to stop with the predicted eradication of all known infectious diseases, the optimists set out in search of rare and remote disease agents. Biology research stations were established throughout the Southern Hemisphere, staffed largely by scientists from the Northern Hemisphere. All sorts of agencies funded and administered these outposts, including the Rockefeller Foundation, agencies of the governments of France, the United States, Germany, and the United Kingdom, as well as a variety of small private interests.
Johnson's Panama Canal Zone laboratory was just such an outpost. The U.S. government alone operated twenty-eight overseas laboratories, and the Rockefeller Foundation's Virus Program operated facilities in eight countries through which over sixty viruses would be discovered between 1951 and 1971.¹⁰
But much of what these searching scientists were to find proved terrifying. As officials prepared to uncork celebratory champagne, Johnson and his colleagues were unlocking some of nature's nastiest secrets.
Boosters of the 1950s and early 1960s had some basis, born of ignorance, for their optimism: they knew comparatively little about genetics, microbial evolution, the human immune system, or disease ecology. Given the state of knowledge in the public health world of that day, it may have seemed appropriate to view infectious diseases in simple cause-and-effect terms. Seen in such a reductionist manner, problems and solutions appeared obvious and readily conquerable, and the bravado seemed warranted.
As early as the 1930s scientists guessed that the genetic traits of large creatures, such as plants, animals, and humans, were carried in packages called chromosomes. These structures, which, when examined through a microscope, resembled dark, squiggly worms, were inside the central core, or nucleus, of every cell in a plant or animal. By manipulating chromosomes in test tubes, scientists could change the ways cells looked or grew; exposing chromosomes to radiation, for example, could transform healthy tissue into cancer colonies.
True, Gregor Mendel had shown in 1865 that some characteristics were passed on as dominant traits from one generation to the next, while other genetic characteristics were recessive. But nobody knew exactly how it all worked: why blue-eyed parents had blue-eyed children, or why a bacterium could seem to suddenly develop the ability to withstand temperatures higher than those normally tolerated by its species.
Until 1944 nobody knew what was responsible for this neat passage of genetic information, from the tiniest virus to the largest elephant. That year, Oswald Avery and his colleagues at the Rockefeller Institute in New York showed that if they destroyed a specific molecule found inside all living cells, the organisms could no longer pass on their genes.
The molecule was called deoxyribonucleic acid, or DNA.
By early 1953 researcher Rosalind Franklin, working at King's College in London, had made X-ray photographs of DNA showing that the molecule had a unique helical structure composed of various combinations of the same five key chemicals.
Later that year, America's James Watson and Britain's Francis Crick, working at Cambridge University, figured it all out. One of the chemicals, a sort of carbon chain linked by powerful phosphate chemical bonds, created parallel curved structures similar to the poles of a long, winding ladder. Forming the rungs of the ladder were four other chemicals, called nucleotides. The order of those nucleotide rungs along the carbon/phosphate poles represented a code which, when deciphered properly, revealed the genetic secrets of life.
DNA, then, was the universal code used by one meningococcal bacterium as the basis for making another meningococcal bacterium. It was the material wrapped up inside the chromosomes of higher organisms. Sections of DNA equaled genes; genes created traits. When the chromosomes of one parent combined with those of another parent, the DNA was the key, and which traits appeared in the children (blue versus brown eyes) was a function of the dominant or recessive genes encoded in the parents' DNA.¹¹
While government officials were bragging that everything from malaria to influenza would soon disappear from the planet, scientists were just beginning to use their newfound knowledge to study disease-causing viruses, bacteria, and parasites. Scientists like Johnson were of the first generation of public health researchers to know the significance of DNA. Understanding how DNA played a direct role in the emergence of disease would take still another generation.
Starting at nature's most basic level, scientists at Cold Spring Harbor Laboratory on Long Island, New York, showed in 1952 that viruses were essentially capsules jam-packed with DNA. Much later, researchers discovered that some other viruses, such as the poliovirus, were filled not with DNA but with its sister compound, RNA (ribonucleic acid), which also carries the genetic code hidden in sequences of nucleotides.
When Karl Johnson was virus hunting in Bolivia, scientists had a limited understanding of the vast variety of viruses in the world, of the ways these tiniest of organisms mutated and evolved, and of how the microbes interacted with the human immune system. The state of the art in 1963 was best summarized in Frank Fenner's animal virus textbook, the bible for budding microbiologists of the day: