Suppose that we have isolated a new virus and have managed to produce a suspension of purified particles. How can we classify the virus, and how do we find out about its chemical composition? A lead may be provided by its past history: the species of animal from which it was isolated and whether or not it was related to a disease. This information, in conjunction with that obtained by electron microscope examination of … particles, might be enough for us to make a preliminary identification.[12]
Scientists could “see” viruses with the aid of microscopes powerful enough to magnify to the visual level objects that were nearly a million times smaller than a dime. With that power of magnification they could detect clear differences in the appearance of various species of viruses, from the chaotic-looking mumps virus, which resembled a bowl full of spaghetti, to the absolutely symmetrical polio virus, which looked as if it were a Buckminster Fuller-designed sphere composed of alternating triangles.
Researchers also understood that viruses had a variety of different types of proteins protruding from their capsules, most of which were used by the tiny microbes to lock on to cells and gain entry for invasion. Some of the most sophisticated viruses, such as influenza, sugarcoated those proteins so that the human immune system might fail to notice the disguised invaders.
In 1963 laboratory scientists knew they could also distinguish one virus species from another by testing immune responses to those proteins protruding from the viral capsules. Humans and higher animals made antibodies against most such viral proteins, and the antibodies, which were themselves large proteins, were very specific. Usually an antibody against parts of the polio virus, for example, would not react against the smallpox virus. Indeed, some antibodies were so picky that they might react against a 1958 Chicago strain of the flu, but not the strain that hit the Windy City the following winter.
Jonas Salk used this response against outer capsule proteins of the polio virus as the basis of his revolutionary vaccine, and by 1963 medical and veterinary pioneers all over the world were finding the pieces of various viruses that could be used most effectively to raise human and animal antibody responses.
Back in the lab, they could also use antibody responses to find out what might be ailing a mysteriously ill person. Blood samples containing the victim's attacking microbe would be dotted across a petri dish full of human or animal cells. Antibodies would also be dotted across the dish, and scientists would wait to see which antibody samples successfully prevented viral kill of the cells in the petri dish.
Of course, if the virus was something never before studied, all the scientists would be able to get was a negative answer: “It's not anything that we know about, none of our antibodies work.” So in the face of something new, like Machupo, scientists could only say after a tedious process of antibody elimination, “We don't know what it is.”
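For readers who think in code, the logic of that elimination process can be sketched in a few lines. Everything in the sketch below, including the antibody panel, the virus names, and the outcomes, is a hypothetical illustration, not data from any real assay:

```python
# A minimal sketch of virus identification by antibody elimination.
# The antibody panel and virus names are hypothetical examples.

antibody_panel = {
    "anti-polio": "poliovirus",
    "anti-smallpox": "smallpox virus",
    "anti-flu-Chicago-1958": "influenza, 1958 Chicago strain",
}

def identify(neutralizing_antibodies):
    """Return the virus named by whichever antibody protected the cells,
    or report an unknown agent if nothing on the panel worked."""
    for antibody in neutralizing_antibodies:
        if antibody in antibody_panel:
            return antibody_panel[antibody]
    return "unknown: none of our antibodies work"

# A known agent is named by the antibody that saved the cells...
print(identify(["anti-polio"]))  # -> poliovirus
# ...but a novel agent such as Machupo defeats the whole panel.
print(identify([]))              # -> unknown: none of our antibodies work
```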
With bacteria the process of identification was far easier because the organisms were orders of magnitude larger than viruses: whereas a virus might be about one ten-millionth of an inch in size, a bacterium would be a thousandth of an inch long. To see a virus, scientists needed powerful, expensive electron microscopes, but since the days of Dutch lens hobbyist Anton van Leeuwenhoek, who in 1674 invented a microscope, it has been possible for people to see what he called “wee animalcules” with little more than a well-crafted glass lens and candlelight.
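Those two figures make the scale gap easy to check with simple arithmetic:

```python
# Scale comparison using the two figures quoted in the text.
virus_size_inches = 1 / 10_000_000   # about one ten-millionth of an inch
bacterium_size_inches = 1 / 1_000    # about one thousandth of an inch

ratio = bacterium_size_inches / virus_size_inches
print(f"A bacterium is roughly {ratio:,.0f} times the size of a virus")
# -> A bacterium is roughly 10,000 times the size of a virus:
#    four orders of magnitude.
```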
The relationship between those “animalcules” and disease was first figured out by France's Louis Pasteur in 1864, and during the following hundred years bacteriologists learned so much about the organisms that young scientists in 1964 considered classic bacteriology a dead field.
In 1928 British scientist Alexander Fleming had discovered that Penicillium mold could kill Staphylococcus bacteria in petri dishes, and dubbed the lethal antibacterial chemical secreted by the mold “penicillin.”[13]
In 1944 penicillin was introduced to general clinical practice, causing a worldwide sensation that would be impossible to overstate. The term “miracle drug” entered the common vernacular as parents all over the industrialized world watched their children bounce back immediately from ailments that just months before had been considered serious, even deadly. Strep throat, once a dreaded childhood disease, instantly became trivial, as did skin boils and infected wounds; tuberculosis followed with the quick discovery of streptomycin and other classes of antibiotics. By 1965 more than 25,000 different antibiotic products had been developed; physicians and scientists felt that bacterial diseases, and the microbes responsible, were no longer of great concern or of research interest.
Amid the near-fanatic enthusiasm for antibiotics there were reports, from the first days of their clinical use, of the existence of bacteria that were resistant to the chemicals. Doctors soon saw patients who couldn't be healed, and laboratory scientists were able to fill petri dishes to the brim with vast colonies of Staphylococcus or Streptococcus that thrived in solutions rich in penicillin, tetracycline, or any other antibiotic they chose to study.
In 1952 a young University of Wisconsin microbiologist named Joshua Lederberg and his wife, Esther, proved that these bacteria's ability to outwit antibiotics was due to special characteristics found in their DNA. Some bacteria, they concluded, were genetically resistant to penicillin or other drugs, and had possessed that trait for aeons; certainly well before Homo sapiens discovered antibiotics.[14]
In years to come, the Lederbergs' hypothesis that resistance to antibiotics was inherent in some bacterial species would prove to be true.
The Lederbergs had stumbled into the world of bacterial evolution. If millions of bacteria must compete among one another in endless turf battles, jockeying for position inside the human gut or on the warm, moist skin of an armpit, it made sense that they would have evolved chemical weapons with which to wipe out competitors. Furthermore, the yeasts, molds, and soil organisms that were the natural sources of the world's then burgeoning antibiotic pharmaceuticals had evolved the ability to manufacture the same chemicals for similar ecological reasons.
It stood to reason that populations of organisms could survive only if some individual members of the colony possessed genetically coded R (resistance) Factors, conferring the ability to withstand such chemical assaults.
The Lederbergs developed tests that could identify streptomycin-resistant Escherichia coli intestinal bacteria before the organisms were exposed to antibiotics. They also showed that using antibiotics on colonies of bacteria in which even less than 1 percent of the organisms were genetically resistant could have tragic results. The antibiotics would kill off the 99 percent of the bacteria that were susceptible, leaving a vast nutrient-filled petri dish free of competitors for the surviving resistant bacteria. Like weeds invading an untended open field, the resistant bacteria rapidly multiplied and spread out, filling the petri dish within a matter of days with a uniformly antibiotic-resistant population.
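The takeover dynamic the Lederbergs observed can be illustrated with a toy simulation. Every number here, from the starting populations to the two-hour doubling time and the dish's carrying capacity, is an assumption chosen for illustration, not a figure from their experiments:

```python
# Toy model of a resistant subpopulation taking over a petri dish.
# All parameters are illustrative assumptions, not measured values.

susceptible = 99_000_000   # the ~99% of the colony that is drug-susceptible
resistant = 1_000_000      # the ~1% carrying R Factors
dish_capacity = 10**12     # assumed carrying capacity of the dish

susceptible = 0            # the antibiotic kills the susceptible majority

# With the nutrients to themselves, the survivors double every couple
# of hours (an assumed generation time) until the dish is full.
hours = 0
while resistant < dish_capacity:
    resistant *= 2
    hours += 2

print(f"Resistant population fills the dish in about {hours / 24:.1f} days")
```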
Clinically this meant that the wise physician should hit an infected patient hard, with very high doses of antibiotics that would almost immediately kill off the entire susceptible population, leaving the immune system with the relatively minor task of wiping out the remaining resistant bacteria. For particularly dangerous infections, it seemed advisable to initially use two or three different types of antibiotics, on the theory that even if some bacteria had R Factors for one type of antibiotic, it was unlikely a bacterium would have R Factors for several widely divergent antibiotics.
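The reasoning behind combination therapy is simple probability: if resistance traits to unrelated drugs occur independently, the chance that one bacterium carries all of them is the product of the individual chances. The per-drug rates below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical, independent per-drug resistance rates in a population.
p_resist = {"drug_A": 1e-6, "drug_B": 1e-6, "drug_C": 1e-7}

# Probability that a single bacterium resists all three at once,
# assuming the traits are inherited independently.
p_all = 1.0
for p in p_resist.values():
    p_all *= p

print(f"Chance of triple resistance: {p_all:.0e}")  # -> 1e-19
```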
If many young scientists of the mid-1960s considered bacteriology passé, a field commonly referred to as “a science in which all the big questions have been answered,” the study of parasitology was thought to be positively prehistoric.
A parasite, properly defined, is “one who eats beside or at the table of another, a toady; in biology, a plant or animal that lives on or within another organism, from which it derives sustenance or protection without making compensation.”[15]
Strictly speaking, then, all infectious microbes could be labeled parasites, from viruses to large ringworms.
But historically, the sciences of virology, bacteriology, and parasitology have evolved quite separately, with few scientists, other than “disease cowboys” like Johnson and MacKenzie, trained or even interested in bridging the disciplines. By the time hemorrhagic fever broke out in Bolivia, a very artificial set of distinctions had developed between the fields. Plainly put, larger microbes were considered parasites: protozoa, amoebae, worms. These were the domain of parasitologists.
Their scientific realm had been absorbed by another, equally artificially designated field dubbed tropical medicine, which often had nothing to do with either geographically tropical areas or medicine.
Both distinctions, parasitology and tropical medicine, set off the study of diseases that largely plagued the poorer, less developed countries of the world from those that continued to trouble the industrialized world. The field of tropical medicine did so most blatantly, encompassing not only classically defined parasitic diseases but also viruses (e.g., yellow fever and the various hemorrhagic fever viruses) and bacteria (e.g., plague, yaws, and typhus) that were by the mid-twentieth century extremely rare in developed countries.
In the eighteenth century the only organisms big enough to be studied easily without the aid of powerful microscopes were larger parasites that infected human beings at some stage of the creature's overall life cycle. Doctors could, without magnification, see ringworms or the eggs of some parasites in patients' stools. Without much magnification (on the order of hundreds-fold versus the thousands-fold necessary to study bacteria) scientists could see the dangerous fungal colonies of Candida albicans growing in a woman's vagina, the mites of scabies acariasis in an unfortunate victim's skin, or cysticercosis tapeworms in the stools of individuals fed undercooked pork.
As British and French imperial designs turned increasingly, in the late eighteenth century, to colonization of areas such as the Indian subcontinent, Africa, and Southeast Asia, tropical medicine became a distinct and powerful science that separated itself from what was then considered a more primitive field, bacteriology. Science historian John Farley concluded that what began as a separation designed to lend parasitology greater resources and esteem (and did so in the early nineteenth century) ended up leaving it science's stepchild.[16]
Ironically, parasites, classically defined, were far more complex than bacteria, and their study required a broader range of expertise than was demanded by typical E. coli biology. Top parasitologists, or tropical medicine specialists, if you will, were expected in the mid-1960s to have vast knowledge of tropical insects, disease-carrying animals, the complicated life cycles of over a hundred different recognized parasites, human clinical responses to the diseases, and the ways in which all these factors interacted in particular settings to produce epidemics or long periods of endemic, or permanent, disease.
Consider the example of one of the world's most ubiquitous and complicated diseases: malaria. To truly understand and control the disease, scientists in the mid-twentieth century were supposed to have detailed knowledge of the complex life cycle of the malarial parasite, the insect that carried it, the ecology of that insect's highly diverse environment, other animals that could be infected with the parasite, and how all these factors were affected by such things as heavy rainfall, human migrations, changes in monkey populations, and the like.
It was known that several different strains of Anopheles mosquitoes could carry the tiny parasites. The female Anopheles would suck parasites out of the blood of infected humans or animals when she injected her syringe-like proboscis into a surface capillary to feed. The microscopic male and female sexual stages of the parasites, called gametocytes, would make their way up the proboscis and into the female mosquito's gut, where they would unite sexually and make a tiny sac in the lining of the insect's stomach.
Over a period of one to three weeks the sac would grow as thousands of sporozoite-stage parasites were manufactured inside it. Eventually the sac would explode, flooding the insect's gut with microscopic one-celled parasites that caused no harm to the cold-blooded insect; their target was a warm-blooded creature, one full of red blood cells.
Some of the sporozoites would make their way into the insect's salivary glands, from which they would be drawn up into the “syringe” when the mosquito went on her nightly sundown feeding frenzy, and be injected into the bloodstream of an unfortunate human host.
At that point the speed and severity of events (from the human host's perspective) would depend on which of four key malarial parasite species had been injected by the mosquito. A good parasitologist in the 1950s knew a great deal about the differences between the four species, two of which were particularly dangerous: Plasmodium vivax and P. falciparum.