
Although Bajish is a gifted athlete and effortlessly acquired new strokes during swimming classes in Japan, he has been struggling to shed a few extra pounds over the years. At first glance, it seems nearly everyone around Kerala has a bit of a love handle or two around the waist. Nutritionists have blamed the high saturated-fat content of the traditional Keralan coconut-based diet as a factor in obesity and heart disease, but when Bajish and I chat with Dr. K. Jithendranath, an anesthesiologist with a strong interest in nutrition and traditional foods, he sputters at the attack upon coconut. “They said that coconut oil is bad for you, you get heart attacks. In that case all of my grandparents, everybody should have died! It's not the coconut oil, it's how they use the coconut oil. All that deep-fried stuff, browny-browny.”

An analysis of the history of coconut diets and research on saturated fats supports Dr. Jithendranath's frustration. Coconuts are deemed objectionable by many nutritionists because of their high saturated-fat content. The unpopularity of saturated fats largely began after nutritionist Ancel Keys and his wife visited Naples in the 1950s. Dr. Keys's enthusiastic account of the Mediterranean diet enshrined it as the gold standard for the following generations. Keys and his followers argued that the diet of olive oil, whole-grain bread, fruits and vegetables, pasta, fish, legumes, nuts, and moderate portions of red wine, cheese, and meat led to lower rates of heart disease than the dreaded "Western diet." Subsequently, nutritionists trained in the Western sciences tended to laud the merits of the Mediterranean diet over other traditional and "fad" diets, such as the Atkins and Paleo diets, which are heavy in meat and saturated fats. However, critics noted that the statistical arguments of Keys's original papers were heavily flawed, relying on a cherry-picked group of seven countries to create an impression that lower consumption of saturated fat correlated with lower rates of cardiovascular disease. When the group of countries used to construct these arguments was broadened, the apparent correlation no longer existed. Since that time, the evidence linking saturated fats to heart disease has been tenuous and has not supported the stridency with which saturated fats have been vilified.22 In fact, the particular kind of saturated fat found in coconuts, lauric acid, may actually increase "healthy cholesterol" (HDL) in the body.23

Moreover, in countries where coconuts were a staple in the traditional diet, chronic diseases only became prominent after Western foods and lifestyles were introduced (and coconuts phased out). For example, among the Tokelau Islanders in the South Pacific, the historical diet consisted primarily of coconuts, fish, and breadfruit. It was a high-fat diet: Over half of the calories came from fat, mainly saturated fat (roughly one-third of coconut flesh is saturated fat).24 As the population of these atolls increased, the New Zealand government offered to resettle the Tokelau Islanders in New Zealand. About half of the Tokelau Islanders took up the offer and left for the mainland. Their new diet now included sugar, flour, bread, potatoes, meat, chicken, and dairy products. The result in the migrating population was an increase in obesity, type 2 diabetes, heart disease, gout, and osteoarthritis, even though fat intake actually declined after the move to New Zealand. On the other hand, daily sugar intake increased, along with carbohydrate and alcohol consumption. Among those who stayed on the Tokelau Islands, new European foodstuffs were also added to the diet, and rates of chronic diseases also increased, but not to the same degree as among the migrants.25
Like olive oil for those living in the Mediterranean, coconuts make sense as part of a South Pacific or South Asian cuisine; the high fat content of coconuts complements lean fish and a largely vegetarian diet. Removing the anchoring effect of coconut invites dietary abuse in the form of novel fatty or oily substitutes such as fried foods, a known risk factor for diabetes and inflammatory disease. As will be discussed later, fried foods contain trans fats and AGEs (advanced glycation end products) and have a high glycemic index, all novel and harmful characteristics in the human diet; coconuts, by contrast, contain saturated fat, a substance to which our ancestors had moderate but steady exposure over millions of years, mainly in the form of animal fats.

Bajish and I travel with a medical convoy into the hills around Kerala. The tribal people we chat with often use coconut in their diet, yet obesity, heart disease, and type 2 diabetes are not medical issues among them. We also note their vigorous lifestyles: they work the land with hoes and their hands and walk long distances to get around, rather than riding motorcycles like the majority of Keralans. Kerala has among the best roads and highest income levels in India, but also the highest levels of type 2 diabetes. Petrol is heavily subsidized by the government, making it even easier to ride rather than walk. The risk of diabetes is strongly linked to a decrease in physical activity rather than to coconut in the diet.

*   *   *

Another key fruit of contemporary Keralan cuisine, chili, has also been viewed with suspicion by Western-trained nutritionists. The spiciness of chilies comes from the peppers' store of capsaicin, a chemical compound employed with excruciating effectiveness in pepper sprays (some spider venoms work through the same pain channel).26 Chili plants seem admirably protected against predators, which might seem like a straightforward chapter out of plant evolution, but the saga of chili is wrapped in enigmas.27 For starters, chili plants retain their pain-inducing capsaicin protection even after the fruits mature; most plants, by contrast, reduce toxins and make their fruits tasty at that stage, to invite animals to eat the fruits and spread the seeds widely. Also, not only have we humans come to enjoy chilies, but many people seek out the wickedest varieties (XXX!, the hot sauce bottle labels trumpet, as if parading the temptations of adult entertainment). Why do humans, the only mammals known to do so, enjoy the pain inflicted by chili's protective capsaicin?

The most popular explanation of why we enjoy chilies is that their capsaicin compounds kill off fungal infection and other microbial invaders, and thus we come to enjoy chili dishes because we don't get sick from eating them. If this explanation is true, it would put chilies in the company of a long list of spices that humans use not only to perk up dull dishes but also to keep meats and sauces from spoiling (and people from throwing up and running to the toilet). When Paul Sherman, a biologist at Cornell University, and his then-student Jennifer Billing looked at spice usage from recipes around the world, they found that hotter countries used more spices. This makes sense, since increased temperature boosts bacterial growth and encourages food spoilage, thus making the need for spices more urgent. In particular, three powerful spices that inhibit many varieties of bacteria are more frequently called for in the dishes of warm regions. The knock-'em-dead spices? Most likely they are familiar to you and are tucked away in your kitchen cupboards right now: garlic, onion, and of course, our favorite sadomasochistic temptress, chili.28

However, there are some gaps in this explanation. The bacteria-busting hypothesis doesn't explain why chili seasoning is becoming popular in countries where food safety standards are high and food poisoning incidents are low, or why some countries that are geographically close to each other, such as Japan and Korea, have different levels of spiciness (Japanese food is considered relatively bland, whereas Koreans use chili in almost all of their dishes). If bacterial warfare were the only basis for eating spicy foods, then humans would get addicted to irradiated or canned foods, which seems not to be the case. The hypothesis also fails to explain why people steadily become addicted to eating chili, requiring ever greater amounts to feel satisfied. In fact, the more one looks at the behavioral pattern of chili consumption, the more it resembles thrill-seeking or recreational drug use.

Paul Rozin, a psychologist at the University of Pennsylvania, has suggested that humans are hardwired thrill-seekers, and we therefore enjoy blistering our tongues in the same way we (or some of us at least) savor a stomach-churning roller coaster ride and other forms of voluntary terror.29 While equating roller coasters with chilies (and perhaps by extension garlic and onions and other spices) seems a little strange at first, back in the 1970s and 1980s an American psychologist, Richard L. Solomon, pointed out that positive and negative emotions tend to come in pairs. Survivors of lightning strikes first experience terror, then elation. A similar thing happens to parachutists, who experience terror as they plunge through free fall; after landing, they warm up to a feeling of elation. People who take sauna baths go through an analogous sequence of discomfort followed by relief. The reverse is also true: When Solomon allowed babies to suck on a plastic nipple, they cried when the nipple was taken away. Solomon gave the article announcing his theory the clunky title of "The Opponent-Process Theory of Acquired Motivation," but fortunately he found a memorable subtitle: "The Costs of Pleasure and the Benefits of Pain"; that is, positive experiences are invariably followed by a drop in mood, while painful experiences are followed by relief.30 Solomon argued that over time, the pain and the relief paired with it both diminish, so a person is compelled to repeat the experience with gathering intensity, resulting in addiction to mildly painful experiences.
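
For readers who want the escalation spelled out, here is a minimal toy sketch in Python of the dynamic just described. It is only an illustration under assumed parameters; the function names, the habituation rate, and the relief-to-burn ratio are invented for this sketch and come neither from Solomon's work nor from this book.

```python
# A toy sketch of the escalation dynamic described above: with repeated
# exposure, both the burn (Solomon's "a-process") and the relief that follows
# it (the "b-process") fade, so a bigger dose is needed to recover the same
# after-glow. Every number here is an illustrative assumption, not a figure
# from Solomon's papers or from this book.

def chili_response(dose, exposures, habituation=0.9, relief_ratio=0.8):
    """Return (burn, relief) for one chili dose after `exposures` prior hits.

    dose         -- intensity of the chili hit (arbitrary units)
    exposures    -- number of previous exposures at similar doses
    habituation  -- assumed per-exposure damping of both processes
    relief_ratio -- assumed size of the relief relative to the burn
    """
    damping = habituation ** exposures      # both processes shrink with habit
    burn = dose * damping                   # immediate pain
    relief = relief_ratio * dose * damping  # pleasurable after-reaction
    return burn, relief

def dose_for_relief(target_relief, exposures, habituation=0.9, relief_ratio=0.8):
    """Dose needed to recover a given relief once habituation has set in."""
    return target_relief / (relief_ratio * habituation ** exposures)

if __name__ == "__main__":
    novice_burn, novice_relief = chili_response(dose=1.0, exposures=0)
    print(f"novice: burn={novice_burn:.2f}, relief={novice_relief:.2f}")
    for n in (5, 10, 20):
        needed = dose_for_relief(novice_relief, n)
        print(f"after {n:2d} exposures, dose needed for the same relief: {needed:.2f}")
    # With these assumed parameters, the habituated eater needs roughly eight
    # times the novice's dose after 20 exposures -- the "gathering intensity"
    # the text describes.
```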

Although psychologists today view the Solomon hypothesis as too simplistic to describe drug addiction behavior, it may help to explain the pleasure-pain paradox of spices. Ingesting chili is initially an aversive experience, but at small doses, the pain fades and a pleasurable state arises afterward. Other spices, perhaps many, have the characteristic of being initially distasteful but pleasing afterward. Not all aversive foods have these tendencies, though; for example, getting sick from food poisoning produces a prolonged period of nausea that no one wants to reexperience.31

We have addressed only one part of our original dilemma over spices. The second question remains: Why do humans alone come to enjoy mildly aversive experiences like chilies (and parachuting)? One possible answer is that humans are masters of gratification delay and brain rewiring. With practice, the discomfort of jumping out of an airplane, climbing onto a stage before crowds of thousands, or chomping on chilies gradually eases; but so do the hits of pleasure, hence the ever-increasing need for more punishment and more pain.

In other words, even though spicy food has antibacterial properties, we may eat these foods not to avoid getting sick, or even because they taste good at first, but primarily because they induce a paradoxical hit of pleasure after the displeasure; the benefits of pain, Professor Solomon might have observed. One consequence of his theory is that it explains why tropical cuisines tend to be spicy: The lack of meat, and especially of fat, in them makes it necessary for cooks to drop in dollops of spices, to increase the feeling of pleasure that fat and meat would otherwise induce. When I lived in Korea, cooks who saw me about to ladle a spoonful of rice and vegetables without adding red chili paste would cry out in horror, seize the nearest bottle of chili paste, and try to squeeze it over my bowl, assuming that my meal would otherwise taste bland. But I had not yet been desensitized to chili, so in my view the pain did not merit the pleasure. Solomon's theory also helps explain why Japanese and Korean cuisines differ so much in their spiciness. As an isolated, fertile island surrounded by rich coastal waters, Japan historically had access to much higher levels of animal flesh than peninsular Korea, and Japanese food therefore requires relatively small amounts of wasabi compared to the full-force application of chili in Korean dishes. The same contrast could apply to England, with its relatively spice-light and meat-heavy fare, and France, with its more flavorful but less meaty cuisine. The fact that spices inhibit bacteria would certainly have been helpful in promoting their adoption, but this may be an additional rather than the sole reason they're so widely used.

It seems logical that spicier, more flavorful food would make us fatter. However, chili may make people lose weight by increasing metabolism, body temperature, and the burning of fat.32 These weight-sloughing effects are modest unless chili is eaten in large doses, though, which limits its usefulness for populations unused to chilies, such as those in the United States, Canada, and Europe. By contrast, in one Mexican study, the average person ate the capsaicin equivalent of seventeen jalapeño peppers a day. Unfortunately, there is some evidence that eating copious quantities of chili could increase the risk of stomach, liver, bladder, and pancreatic cancer. Scientists at Kyoto University have developed a new variety of chili, CH-19 Sweet, that could offer the health benefits of capsaicin without the pain.33

*   *   *

Between 40 million and 16 million years ago, something curious happened to our ancestors: Their uric acid levels started to rise as they progressively lost the gene for manufacturing uricase, the enzyme that helps dispose of uric acid. Uric acid, a by-product of a diet rich in purines (organic compounds found in seafood and beer) and fructose (the sugar in fruit), can be a very inconvenient, nasty substance. It's responsible for gout, a debilitating condition in which uric acid crystals build up in a sufferer's joints. As a result of losing the ability to manufacture uricase, humans have uric acid levels three to ten times higher than other mammals, and unfortunately a greater predisposition to gout and possibly hypertension. The loss of uricase is one of the greatest unsolved mysteries in the evolution of the human diet. Because high uric acid levels are dangerous to health, it's extremely puzzling that our ancestors progressively shed the ability to deal with uric acid. Like losing a kidney or a lung, it may not be fatal, but it's a considerable inconvenience. Why did our evolution take us down such a hazardous path? Around 70 percent of our uric acid is reabsorbed by our kidneys rather than excreted, evidence that uric acid must play some positive role in the human body rather than simply being a nuisance by-product of purine metabolism, as scientists had formerly believed.
