In the autumn of 1991, an official memo reported: “The low harvest and the inability to expand imports, together with the refusal
of farms to turn over their grain to the state, may put the country and the republic on the brink of famine. The only way
out of this situation is to allow the farms to sell grain freely at market prices with further liberalization of retail prices
for bread. Without a transition to free pricing in conjunction with an accelerated reduction of state control in agriculture
and trade, there will be no incentive for growth in production.” Finally, the penny had dropped. The Soviet policies of centralizing
control of agriculture and controlling prices had failed. The only way forward, politicians conceded, was free trade and liberalization—in
other words, capitalism. By this time the Soviet Union’s disintegration was well advanced, and it formally ceased to exist
on December 26, 1991, dissolving into its constituent states.

THE DEMOCRACY OF FOOD

Is it a coincidence that the worst famine in history happened in a Communist state? Not according to Amartya Sen, an Indian
economist who won the Nobel Prize in Economics in 1998. In his view, the combination of representative democracy and a free
press makes famines much less likely to occur. “In the terrible history of famines in the world, no substantial famine has
ever occurred in any independent and democratic country with a relatively free press,” he wrote in 1999.

We cannot find exceptions to this rule, no matter where we look: the recent famines of Ethiopia, Somalia, or other dictatorial
regimes; famines in the Soviet Union in the 1930s; China’s 1958–61 famine with the failure of the Great Leap Forward; or earlier
still, the famines in Ireland or India under alien rule. China, although it was in many ways doing much better economically than India, still managed (unlike India) to have a famine, indeed the largest recorded famine in world history: Nearly
30 million people died in the famine of 1958–61, while faulty governmental policies remained uncorrected for three full years.
The policies went uncriticized because there were no opposition parties in parliament, no free press, and no multiparty elections.
Indeed, it is precisely this lack of challenge that allowed the deeply defective policies to continue even though they were
killing millions each year.

Famines, Sen pointed out, are often blamed on natural disasters. But when such disasters strike democracies, politicians are
more likely to act, if only to maintain the support of voters. “Not surprisingly, while India continued to have famines under
British rule right up to independence (the last famine, which I witnessed as a child, was in 1943, four years before independence),
they disappeared suddenly with the establishment of a multiparty democracy and a free press,” Sen wrote.

The rise of democracy, which Sen calls “the preeminent development” of the twentieth century, would therefore explain why
the use of food as an ideological weapon, like its use as a military weapon, has become much less widespread. A rare but striking
example, at the time of writing in mid-2008, is its use by Robert Mugabe, Zimbabwe’s dictator. He has presided over a collapse
of Zimbabwe’s agriculture, which has turned the country from a regional breadbasket into a disaster area. Between 2000 and
2008 agricultural output fell by 80 percent, unemployment increased to 85 percent, inflation rose to more than 100,000 percent,
life expectancy fell below forty, and three million Zimbabweans, or about one fifth of the population, fled the country. With
Zimbabwe in crisis, Mugabe maintained his grip on power through violence and intimidation, by rigging a series of elections,
and by channeling food aid to members of his government and regions where his support was strongest, while denying it to people
in areas known to be sympathetic to the opposition.

In June 2008 Mugabe was accused of offering food to people in opposition areas only if they gave up the identification documents
needed to vote in the presidential election, to prevent them voting for the opposition candidate. A spokesman for the U.S.
State Department, Sean McCormack, told reporters that Mugabe was “using food as a weapon, using the hunger of parents’ children
against them to prevent them from voting their conscience for a better kind of Zimbabwe.” Mugabe responded that it was Western
aid agencies that were using food for political ends, and he banned them from distributing food in opposition areas. “These
western-funded NGOs also use food as a political weapon with which to campaign against government, especially in the rural
areas,” he said.

The overt use of food as a weapon in this way is now mercifully rare. In Western democracies, however, food has found another,
more subtle political role. It is no longer a weapon, but has instead become a battlefield on which broader political fights
take place. This is a consequence of the variety of food now available to Western consumers as a result of global trade, growing
interest in the consequences and politics of food choices, and food’s unusual status as a consumer product that acts as a
lightning rod for broader social concerns. For almost any political view you want to express, there is a relevant foodstuff
to buy or avoid.

Concerns over the environment can therefore be expressed by advocacy of local and organic products; “fair-trade” products
aim to highlight the inequity of global-trade rules and the buying power of large corporations, while also funding social
programs for low-paid workers and their families; arguments about genetically modified foodstuffs give expression to worries
over the unfettered march of new technologies, and the extent to which farmers have become dependent on large agribusinesses.
Shoppers can buy dolphin-friendly tuna, bird-friendly coffee, and bananas that support educational programs for growers in
Costa Rica. They can express a desire for reconciliation in the Middle East by buying “peace oil” made in olive groves where
Israelis and Palestinians work side by side. They can signal opposition to large companies by boycotting supermarkets in favor
of small shops or farmers’ markets.

Food can also be used to make specific protests against companies or governments. In 1999 when José Bové, a French political
activist, wanted to express his opposition to the might of the United States and to the impact of multinational corporations
on French traditions and local companies, he did so by dismantling a McDonald’s restaurant in the town of Millau, loading
the rubble onto tractors, and dumping it outside the town hall. More recently, in South Korea in 2008 there were huge public
protests against American beef imports, ostensibly on safety grounds; but the protests really gave voice to broader unease
about the removal of trade barriers and to concerns that South Korea’s ruling party was allowing itself to be pushed around
by the country’s superpower patron.

The idea of using food to make wider political points can be traced back to 1791, when British consumers who wanted to express
their opposition to slavery began to boycott sugar. A stream of pamphlets ensued, including the Anti-Saccharine Society’s
deliberately shocking manifesto, illustrated with a cross-section of a slave ship to show how tightly the shackled men were
packed into it. A newspaper advertisement placed by James Wright, a Quaker merchant, in 1792 was representative of the mood:
“Therefore being impressed with the Sufferings and Wrongs of that deeply-injured People, and also with an Apprehension, that
while I am a Dealer in that Article, which appears to be a principal support of the Slave Trade, I am encouraging Slavery,
I take this Method of informing my Customers, that I mean to discontinue selling the Article of Sugar till I can procure it
through Channels less contaminated, more unconnected with Slavery, and less polluted with Human Blood.”

Campaigners claimed that if just thirty-eight thousand British families stopped buying sugar, the impact on the planters’
profits would be severe enough to bring the trade to an end. At the boycott’s peak, one of the leaders of the campaign claimed
that three hundred thousand people had given up sugar. Some campaigners smashed teacups in public, since they were tainted
by sugar. Tea parties became social and political minefields. It was a faux pas to ask for sugar if it was not offered by
an abstaining hostess. But not all sugar was equally bad. Some people regarded more expensive sugar from the East Indies as less ethically problematic—until it transpired that it, too, was very often grown by slaves. When the slave trade was abolished
by Britain in 1807, it was unclear whether the boycott, or a series of slave revolts, had made the most difference. Some even
argued that the boycott had made things worse: As planters’ profits fell, they might well have treated their slaves even more
cruelly. But there was no doubt that the sugar boycott had drawn attention to the slavery question and helped to mobilize
political opposition.

The same is true of today’s food debates. Their real significance lies not so much in their direct impact, but in the way
in which they can provide a leading indicator to governments about policy, and encourage companies to change their behavior.
Food has a unique political power, for several reasons: food links the world’s richest consumers with its poorest farmers;
food choices have always been a potent means of social signaling; modern shoppers must make dozens of food choices every week,
providing far more opportunities for political expression than electoral politics; and food is a product you consume, so eating
something implies a deeply personal endorsement of it. But there are limits to its power. Real change—such as abolishing slavery
in the nineteenth century, or overhauling world trade or tackling climate change today—ultimately requires political action
by governments. Voting with your food choices is no substitute for voting at the ballot box. But food provides a valuable
arena in which to debate different choices, a mechanism by which societies indicate what they feel strongly about, and a
way to mobilize broader political support. Those in positions of power, whether in politics or business, would be foolish
to ignore such signals.

[Agriculture’s] principal object consists in the production of nitrogen under any form capable of assimilation.

—JUSTUS VON LIEBIG, 1840

THE MACHINE THAT CHANGED THE WORLD

Compared with the flight of the Wright brothers’ first plane or the detonation of the first atomic bomb, the appearance of a few
drips of colorless liquid at one end of an elaborate apparatus in a laboratory in Karlsruhe, Germany, on a July afternoon
in 1909 does not sound very dramatic. But it marked the technological breakthrough that was to have arguably the greatest
impact on mankind during the twentieth century. The liquid was ammonia, and the tabletop equipment had synthesized it from
its constituent elements, hydrogen and nitrogen. This showed for the first time that the production of ammonia could be performed
on a large scale, opening up a valuable and much-needed new source of fertilizer and making possible a vast expansion of the
food supply—and, as a consequence, of the human population.
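
The underlying chemistry can be stated in one line. For reference, and as standard chemistry rather than a detail given in the passage above, the reaction combines one molecule of nitrogen with three of hydrogen to yield two of ammonia:

\[ \mathrm{N_2} + 3\,\mathrm{H_2} \;\rightleftharpoons\; 2\,\mathrm{NH_3} \]

The reaction is reversible and proceeds at a useful rate only at high temperature and pressure over a catalyst, which is why coaxing even a few drops of ammonia out of a tabletop apparatus was such an achievement.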

The link between ammonia and human nutrition is nitrogen. A vital building block of all plant and animal tissue, it is the
nutrient responsible for vegetative growth and for the protein content of cereal grains, the staple crops on which humanity
depends. Of course, plants need many nutrients, but in practice their growth is limited by the availability of the least abundant
nutrient. Most of the time this is nitrogen. For cereals, nitrogen deficiency results in stunted growth, yellow leaves, reduced
yields, and low protein content. An abundance of available nitrogen, by contrast, promotes growth and increases yield and
protein content. Nitrogen compounds (such as proteins, amino acids, and DNA) also play crucial roles in the metabolisms of
plants and animals; nitrogen is present in every living cell. Humans depend on the ingestion of ten amino acids, each built
around a nitrogen atom, to synthesize the body proteins needed for tissue growth and maintenance. The vast majority of these
essential amino acids comes from agricultural crops, or from products derived from animals fed on those crops. An inadequate
supply of these essential amino acids leads to stunted mental and physical development. Nitrogen, in short, is a limiting
factor in the availability of mankind’s staple foods, and in human nutrition overall.

The ability to synthesize ammonia, combined with new “high-yield” seed varieties specifically bred to respond well to chemical
fertilizers, removed this constraint and paved the way for an unprecedented expansion in the human population, from 1.6 billion
to 6 billion, during the course of the twentieth century. The introduction of chemical fertilizers and high-yield seed varieties
into the developing world, starting in the 1960s, is known today as the “green revolution.” Without fertilizer to nourish
crops and provide more food—increasing the food supply sevenfold, as the population grew by a factor of 3.7—hundreds of millions
of people would have faced malnutrition or starvation, and history might have unfolded very differently.

The green revolution has had far-reaching consequences. As well as causing a population boom, it helped to lift hundreds of
millions of people out of poverty and underpinned the historic resurgence of the Asian economies and the rapid industrialization
of China and India—developments that are transforming geopolitics. But the green revolution’s many other social and environmental
side effects have made it hugely controversial. Its critics contend that it has caused massive environmental damage, destroyed
traditional farming practices, increased inequality, and made farmers dependent on expensive seeds and chemicals provided
by Western companies. Doubts have also been expressed about the long-term sustainability of chemically intensive farming.
But for better or worse, there is no question that the green revolution did more than just transform the world’s food supply
in the second half of the twentieth century; it transformed the world.

THE MYSTERY OF NITROGEN

The origins of the green revolution lie in the nineteenth century, when scientists first came to appreciate the crucial role
of nitrogen in plant nutrition. Nitrogen is the main ingredient of air, making up 78 percent of the atmosphere by volume;
the rest is mostly oxygen (21 percent), plus small amounts of argon and carbon dioxide. Nitrogen was first identified in the
1770s by scientists investigating the properties of air. They found that nitrogen gas was mostly unreactive and that animals
placed in an all-nitrogen atmosphere suffocated. Yet having learned to identify nitrogen, the scientists also discovered that
it was abundant in both plants and animals and evidently had an important role in sustaining life. In 1836 Jean-Baptiste Boussingault,
a French chemist who took a particular interest in the chemical foundations of agriculture, measured the nitrogen content
of dozens of substances, including common food crops, various forms of manure, dried blood, bones, and fish waste. He showed
in a series of experiments that the effectiveness of different forms of fertilizer was directly related to their nitrogen
content. This was odd, given that atmospheric nitrogen was so unreactive. There had to be some mechanism that transformed
nonreactive nitrogen in the atmosphere into a reactive form that could be exploited by plants.

Some scientists suggested that lightning created this reactive nitrogen by breaking apart the stable nitrogen molecules in
the air; others speculated that there might be trace quantities of ammonia, the simplest possible compound of nitrogen, in
the atmosphere. Still others believed that plants were somehow absorbing nitrogen from the air directly. Boussingault took
sterilized sand that contained no nitrogen at all, grew clover in it, and found that nitrogen was then present in the sand.
This suggested that legumes such as clover could somehow capture (or “fix”) nitrogen from the atmosphere directly. Further
experiments followed, and eventually in 1885 another French chemist, Marcelin Berthelot, demonstrated that uncultivated soil
was also capable of fixing nitrogen, but that the soil lost this ability if it was sterilized. This suggested that nitrogen
fixation was a property of something in the soil. But if that was the case, why were leguminous plants also capable of fixing
nitrogen?

The mystery was solved by two German scientists, Hermann Hellriegel and Hermann Wilfarth, the following year. If nitrogen-fixing
was a property of the soil, they reasoned, it should be transferable. They put pea plants (another kind of legume) in sterilized
soil, and they added fertile soil to some of the pots. The pea plants in the sterile soil withered, but those to which fertile
soil had been added flourished. Cereal crops, however, did not respond to the application of soil in the same way, though
they did respond strongly to nitrate compounds. The two Hermanns concluded that the nitrogen-fixing was being done by microbes
in the soil and that the lumps, or nodules, that are found on the roots of legumes were sites where some of these microbes
took up residence and then fixed nitrogen for use by the plant. In other words, the microbes and the legumes had a cooperative,
or symbiotic, relationship. (Since then, scientists have discovered nitrogen-fixing microbes that are symbiotic with freshwater
ferns and supply valuable nitrogen in Asian paddy fields; and nitrogen-fixing microbes that live in sugarcane, explaining
how it can be harvested for many years from the same plot of land without the use of fertilizer.)

Nitrogen’s crucial role as a plant nutrient had been explained. Plants need nitrogen, and certain microbes in the soil can
capture it from the atmosphere and make it available to them. In addition, legumes can draw upon a second source of nitrogen,
namely that fixed by microbes accommodated in their root nodules. All this explained how long-established agricultural practices,
known to maintain or replenish soil fertility, really worked. Leaving land fallow for a year or two, for example, gives the
microbes in the soil a chance to replenish the nitrogen. Farmers can also replenish soil nitrogen by recycling various forms
of organic waste (including crop residues, animal manures, canal mud, and human excrement), all of which contain small amounts
of reactive nitrogen, or by growing leguminous plants such as peas, beans, lentils, or clover.

These techniques had been independently discovered by farmers all over the world, thousands of years earlier. Peas and lentils
were being grown alongside wheat and barley in the Near East almost from the dawn of agriculture. Beans and peas were rotated
with wheat, millet, and rice in China. In India, lentils, peas, and chickpeas were rotated with wheat and rice; in the New
World, beans were interleaved with maize. Sometimes the leguminous plants were simply plowed back into the soil. Farmers did
not know why any of this worked, but they knew that it did. In the third century B.C., Theophrastus, the Greek philosopher
and botanist, noted that “the bean best reinvigorates the ground” and that “the people of Macedonia and Thessaly turn over
the ground when it is in flower.” Similarly, Cato the Elder, a Roman writer of the second century B.C., was aware of beneficial
effects of leguminous crops on soil fertility, and he advised that they should “be planted not so much for the immediate return
as with a view to the year later.” Columella, a Roman writer of the first century A.D., advocated the use of peas, chickpeas,
lentils, and other legumes in this way. And the “Chhi Min Yao Shu,” a Chinese work, recommended the cultivation and plowing-in
of adzuki beans, in a passage that seems to date from the first century B.C. Farmers did not realize it at the time, but growing
legumes is a far more efficient way to enrich the soil than the application of manure, which contains relatively little nitrogen
(typically 1 to 2 percent by weight).

The unraveling of the role of nitrogen in plant nutrition coincided with the realization, in the mid-nineteenth century, of
the imminent need to improve crop yields. Between 1850 and 1900 the population in western Europe and North America grew from
around three hundred million to five hundred million, and to keep pace with this growth, food production was increased by
placing more land under cultivation on America’s Great Plains, in Canada, on the Russian steppes, and in Argentina. This raised
the output of wheat and maize, but there was a limit to how far the process could go. By the early twentieth century there
was little remaining scope for placing more land under cultivation, so to increase the food supply it would be necessary to
get more food per unit area—in other words, to increase yields. Given the link between plant growth and the availability of
nitrogen, one obvious way to do this was to increase the supply of nitrogen. Producing more manure from animals would not
work, because animals need food, which in turn requires land. Sowing leguminous plants to enrich the soil, meanwhile, means that the land cannot be used to grow anything else while they are in the ground. So, starting as early as the 1840s, there was growing
interest in new, external sources of nitrogen fertilizer.

Solidified bird excrement from tropical islands, known as guano, had been used as fertilizer on the west coast of South America
for centuries. Analysis showed that it had a nitrogen content thirty times higher than that of manure. During the 1850s, imports
of guano went from zero to two hundred thousand tons a year in Britain, and shipments to the United States averaged seventy-six
thousand tons a year. The Guano Islands Act, passed in 1856, allowed American citizens to take possession of any uninhabited
islands or rocks containing guano deposits, provided they were not within the jurisdiction of any other government. As guano
mania took hold, entrepreneurs scoured the seas looking for new sources of this valuable new material. But by the early 1870s
it was clear that the guano supply was being rapidly depleted. (“This material, though once a name to conjure with, has now not much more than an academic interest, owing to the rapid exhaustion of supplies,” observed the Encyclopaedia Britannica in 1911.) Instead, the focus shifted to another source of nitrogen: the huge deposits of sodium nitrate that had been discovered
in Chile. Exports boomed, and in 1879 the War of the Pacific broke out between Chile, Peru, and Bolivia over the ownership
of a contested nitrate-rich region in the Atacama Desert. (Chile prevailed in 1883, depriving Bolivia of its coastal province,
so that it has been a landlocked country ever since.)

Even when the fighting was over, however, concerns remained over the long-term security of supply. One forecast, made in 1903,
predicted that nitrate supplies would run out by 1938. It was wrong—there were in fact more than three hundred years of supply,
given the consumption rate at the time—but many people believed it. And by this time sodium nitrate was in demand not only
as a fertilizer, but also to make explosives, in which reactive nitrogen is a vital ingredient. Countries realized that their
ability to wage war, as well as their ability to feed their populations, was becoming dependent on a reliable supply of reactive
nitrogen. Most worried of all was Germany. It was the largest importer of Chilean nitrate at the beginning of the twentieth
century, and its geography made it vulnerable to a naval blockade. So it was in Germany that the most intensive efforts were
made to find new sources of reactive nitrogen.

One approach was to derive it from coal, which contains a small amount of nitrogen left over from the biomass from which it
originally formed. Heating coal in the absence of oxygen causes the nitrogen to be released in the form of ammonia. But the
amount involved is tiny, and efforts to increase it made little difference. Another approach was to simulate lightning and
use high voltages to generate sparks that would turn nitrogen in the air into more reactive nitric oxide. This worked, but
it was highly energy-intensive and was therefore dependent on the availability of cheap electricity (such as excess power
from hydroelectric dams). So imported Chilean nitrate remained Germany’s main source of nitrogen. Britain was in a similarly
difficult situation. Like Germany, it was also a big importer of nitrates, and was doing its best to extract ammonia from
coal. Despite efforts to increase agricultural production, both countries relied on imported wheat.
