3. Reasons for Modern Longevity
Last time we saw that the transition from a hunter-gatherer to an agricultural regime led to much larger and sicker populations. We also saw that agriculture led to the rise of centralized government, extreme inequality, infectious disease, and chronic warfare. This decline in health and well-being persisted from the dawn of agriculture until the past 150 years or so.
So we are faced with a paradox. If agriculture and modern lifestyles are so much worse for us, then how did average lifespans double in the past 150 years?
Most of this development is due to modern medicine. Variolation, which had been practiced in China and the Middle East as early as the fifteenth century, began to be introduced into Western Europe and North America in the early 1700s. In 1796 Edward Jenner administered the first smallpox vaccine in England. Eventually strong, centralized governments required all citizens to be immunized, and made vaccines available to all, including the poor (guaranteeing enough workers for industry and soldiers for the military). Later, vaccinations were systematically developed for polio, measles and pertussis (whooping cough), among other maladies. Immunizations and vaccinations are one of the leading causes of reduced infant mortality and longer average lifespans. It should be noted that one of the main reasons for shorter life expectancies and higher infant mortality in poorer regions of the world with less stable governments is the lack of immunization.
In 1854, Dr. John Snow, through heroic efforts, traced an outbreak of cholera to the use of the Broad Street Pump in central London. After the handle of the pump was removed, the outbreak tapered off. This was the beginning of modern epidemiology, later enhanced by the work of Pasteur and Koch. Governments began serious efforts to provide adequate sanitation and safe drinking water for urban populations. These efforts have led some historians to attribute one-half of the overall reduction in mortality, two-thirds of the reduction in child mortality, and three-fourths of the reduction in infant mortality to clean water.
In 1899 the Boer War broke out in South Africa. It lasted three years. When questions were asked about why it had taken so long for the world’s largest military to defeat a group of Dutch farmers, government surveys revealed the atrocious health of the English public. Up to nine out of ten recruits were rejected for military service in some areas due to poor health. There was a very real worry that the population was so frail and sickly that there would not be enough soldiers to defend the British empire.
In the crowded cities of the early Industrial Revolution, housing, especially for the poor, was often dark, damp, and poorly ventilated, leading to periodic outbreaks of communicable diseases like cholera and tuberculosis. Friedrich Engels documented the deplorable conditions of 1840s Manchester in “The Condition of the Working Class in England.” Charles Booth and Seebohm Rowntree made extensive social surveys of poverty in Victorian England. Jacob Riis documented the slums of New York in the 1880s in “How the Other Half Lives.”
Governments began clearing out crowded slums and replacing them with clean, well-ventilated housing for workers. Filthy tenements were replaced by apartments and row housing with indoor plumbing. Urban renewal projects created sewers, wide streets, and public parks. Building codes were adopted. Areas prone to disease outbreaks were quarantined. The “liberal reforms” in Great Britain from the late nineteenth to mid-twentieth century provided all sorts of beneficial health outcomes. These occurred alongside similar reforms in Germany under Otto von Bismarck and in America under the Progressive Movement.
By far the biggest “predators” of humans for all of history have been microbes. The germ theory of disease was resisted up until the nineteenth century. When it finally became widely accepted, doctors began to wash their hands, cutting infections down dramatically. Disinfection techniques decreased deaths during surgery and childbirth. Soap and water use became common. Food was inspected and pasteurized. Awareness of the dangers of microbes penetrated all facets of society and altered people’s behavior.
In 1928 Alexander Fleming discovered penicillin when, returning to his lab after a two-week vacation, he noticed that mold had developed on a staphylococcus plate because he had accidentally left the window open. The mold retarded the growth of the bacteria. This discovery led to the development of antibiotics.
Surgery, which used to be extremely deadly, could now be performed relatively safely. During World War I, the death rate from pneumonia in the American Army was 18%. In World War II, it fell to less than 1%. Many illnesses could now be successfully treated with antibiotics: strep throat, scarlet fever, diphtheria, syphilis, gonorrhea, meningitis, tonsillitis, rheumatic fever, and many other diseases.
Famines slowly withered away. The last major famine to affect Western Europe was the Irish potato famine of 1845–1852. Malnutrition, both acute and chronic, which lowers resistance to disease and stunts growth, receded into the past for wealthy, industrialized nations.
The link between high blood pressure and heart disease was discovered, leading to early intervention. The role of vitamins and minerals in childhood development became well-known. Medicines were developed for all sorts of diseases thanks to the science of chemistry.
Even automobiles and telecommunications have played a part. Today, a heart attack or stroke victim can instantaneously summon an ambulance or helicopter to whisk him away to a hospital where a team of physicians with sterilized modern medical equipment stands by in a temperature-controlled steel and concrete building, ready to save him. Bicycle and motorcycle riders wear helmets, and cars now have seat belts. Even air conditioning has contributed to longevity by preventing the elderly from dying during heat waves.
There are many more reasons, too numerous to go into. I encourage you to read Slate magazine’s series on health and longevity, from which much of the above information is taken.
Thus we see that almost all the increase in longevity is due to investments in public health undertaken by national governments, along with the ongoing march of scientific discovery. All of it is very recent and none of it was due to changing our diets, except by curtailing severe malnutrition.
We also see that none of it is due to office work, sedentary lifestyles, pollution, chronic stress, television, processed food, higher education, consumerism, or any other phenomena of modern life. In fact, longer life spans largely occurred in spite of those things. We live longer, but much less healthy, lives. We are, in a very real sense, the walking wounded, both mentally and physically.
What about the negative effects of the consumption of meat? It’s often argued that heavy meat consumption is linked with shorter life spans and worse health outcomes like heart disease and high cholesterol. This alleged fact makes many people bemoan paleo diets as harbingers of death.
It turns out that this claim is just plain wrong. At a dinner party, Thomas Jefferson once noted that all of the Americans in the room were a full head taller than his European guests. American soldiers in the Revolutionary War were taller than their British counterparts. The secret was the greater amount of meat available in North America, as well as an abundance of food for even poor Americans:
...for the first 250 years of American history, even the poor in the United States could afford meat or fish for every meal. The fact that the workers had so much access to meat was precisely why observers regarded the diet of the New World to be superior to that of the Old.
In the book Putting Meat on the American Table, researcher Roger Horowitz scours the literature for data on how much meat Americans actually ate. A survey of 8,000 urban Americans in 1909 showed that the poorest among them ate 136 pounds a year, and the wealthiest more than 200 pounds.
A food budget published in the New York Tribune in 1851 allots two pounds of meat per day for a family of five. Even slaves at the turn of the 18th century were allocated an average of 150 pounds of meat a year. As Horowitz concludes, “These sources do give us some confidence in suggesting an average annual consumption of 150–200 pounds of meat per person in the nineteenth century.”
About 175 pounds of meat per person per year—compared to the roughly 100 pounds of meat per year that an average adult American eats today. And of that 100 pounds of meat, about half is poultry—chicken and turkey—whereas until the mid-20th century, chicken was considered a luxury meat, on the menu only for special occasions (chickens were valued mainly for their eggs).
Meanwhile, also contrary to our common impression, early Americans appeared to eat few vegetables. Leafy greens had short growing seasons and were ultimately considered not worth the effort. And before large supermarket chains started importing kiwis from Australia and avocados from Israel, a regular supply of fruits and vegetables could hardly have been possible in America outside the growing season. Even in the warmer months, fruit and salad were avoided, for fear of cholera. (Only with the Civil War did the canning industry flourish, and then only for a handful of vegetables, the most common of which were sweet corn, tomatoes, and peas.)
So it would be “incorrect to describe Americans as great eaters of either [fruits or vegetables],” wrote the historians Waverly Root and Richard de Rochemont. Although a vegetarian movement did establish itself in the United States by 1870, the general mistrust of these fresh foods, which spoiled so easily and could carry disease, did not dissipate until after World War I, with the advent of the home refrigerator. By these accounts, for the first 250 years of American history, the entire nation would have earned a failing grade according to our modern mainstream nutritional advice.
During all this time, however, heart disease was almost certainly rare. Reliable data from death certificates is not available, but other sources of information make a persuasive case against the widespread appearance of the disease before the early 1920s.

Early settlers of North America were awed by the height of Native Americans. In fact, the Plains Indians, who lived a “poor” hunter-gatherer lifestyle, were among the tallest people in the world.
Austin Flint, the most authoritative expert on heart disease in the United States, scoured the country for reports of heart abnormalities in the mid-1800s, yet reported that he had seen very few cases, despite running a busy practice in New York City. Nor did William Osler, one of the founding professors of Johns Hopkins Hospital, report any cases of heart disease during the 1870s and eighties when working at Montreal General Hospital.
The first clinical description of coronary thrombosis came in 1912, and an authoritative textbook in 1915, Diseases of the Arteries including Angina Pectoris, makes no mention at all of coronary thrombosis. On the eve of World War I, the young Paul Dudley White, who later became President Eisenhower’s doctor, wrote that of his 700 male patients at Massachusetts General Hospital, only four reported chest pain, “even though there were plenty of them over 60 years of age then.”
About one fifth of the U.S. population was over 50 years old in 1900. This number would seem to refute the familiar argument that people formerly didn’t live long enough for heart disease to emerge as an observable problem. Simply put, there were some 10 million Americans of a prime age for having a heart attack at the turn of the 20th century, but heart attacks appeared not to have been a common problem.
Ironically—or perhaps tellingly—the heart disease “epidemic” began after a period of exceptionally reduced meat eating. The publication of The Jungle, Upton Sinclair’s fictionalized exposé of the meatpacking industry, caused meat sales in the United States to fall by half in 1906, and they did not revive for another 20 years.
In other words, meat eating went down just before coronary disease took off. Fat intake did rise during those years, from 1909 to 1961, when heart attacks surged, but this 12 percent increase in fat consumption was not due to a rise in animal fat. It was instead owing to an increase in the supply of vegetable oils, which had recently been invented.
It is also important to note that the most deadly microbes that preyed on the human species emerged after the Neolithic transition, and thus did not affect our hunter-gatherer ancestors. This is because hunter-gatherers lived in small, isolated tribes, so diseases could not spread as easily. Also, many of the most deadly diseases are zoonotic diseases transmitted from animals to humans. These only came about after humans started domesticating animals and living in close proximity to them. Influenza, chicken pox, mumps and measles are just a few of these. Agriculture even caused malaria to spread by clearing forests and creating more habitats for mosquitoes. Here’s Spencer Wells again (emphasis mine):
In Plagues and Peoples, McNeill traces the origin of many diseases common today back to changes in human society during the Neolithic period. Many of these changes we are familiar with from the last chapter, including the increasing number of people living in a relatively small space, allowing rapid transmission of diseases by infected individuals, and a large enough pool of uninfected people to permit the emergence of epidemics. Perhaps the most important factor, though, was the domestication of animals. As the human population increased in early farming communities, hunting was no longer a viable option--as with wild seed-bearing grasses, the supply of wild animals was limited by the natural carrying capacity of the land. This meant that many were soon hunted to near extinction. The necessity of creating a stable food supply led human populations in the Middle East to begin domesticating sheep, goats, pigs, and cattle from their wild progenitors by around 8000 B.C., and the Southeast Asian population to domesticate the chicken by around 6000 B.C. This created a reliable source of meat in the Neolithic diet, but the large numbers of people and animals cohabitating also created an environment that had never before existed in human history.

Dr. Wells is simply wrong on that last part - as we saw before, at least a quarter of people make it into their sixties even in hunter-gatherer tribes, and as we'll see next time, these so-called 'diseases of civilization' are mostly absent.
For the first time, people and animals were living in the same communities. While Paleolithic hunters had certainly come into contact with their prey after a successful hunt, the number of wild animals contacted was a small fraction of those living in the newly domesticated Neolithic herds. Also, most of these animals were dead; this would have decreased the chances of transmitting many diseases, but perhaps facilitated the transfer of blood-borne infections. When we started living close to animals throughout our lives--particularly as children--the odds of diseases being transmitted increased significantly. Although some such infections had probably always existed to a lesser extent in both the animal and human populations, suddenly there was a brand-new opportunity to swap hosts. The microorganisms had a field day.
McNeill wrote that of the diseases shared by humans and their domesticated animals, twenty-six are found in chickens, forty-two in pigs, forty-six in sheep and goats, and fifty in cattle. Most of the worst scourges of human health until the advent of vaccination in the eighteenth century were imports from our farm animals, including measles, tuberculosis, smallpox, and influenza. Bubonic plague was transmitted to us by fleas from rats living in human settlements. As far as we can tell from the archaeological record, none of these so-called zoonotic diseases...afflicted our Paleolithic ancestors--all seem to have arisen in the Neolithic with the spread of farming. McNeill suggests that many of the plagues described in the Bible may coincide with the explosion of zoonotic diseases during the emergence of the urban civilizations of the Neolithic, Bronze and Iron Ages.
What is clear is that a new source of human mortality had arrived on the scene. This does, however, raise the question of what people had been dying of before the development of agriculture. Were there really no diseases in the human population? Of course there were. It's likely that macroparasites--things such as tapeworms that can be seen by the naked eye--were problems for our distant ancestors. Most of these infections generally would have produced little beyond feelings of malaise, though--not acute, debilitating symptoms like high fevers, organ failure and death--in part because we had probably been evolving together with these parasites for such a long time. Over millions of years, an evolutionary process known as mutualism would have led the parasites to produce less acute physical symptoms in their hosts (us), since it does a parasite little good to kill its host and thus its source of food, and we would have adapted to their presence. In general, the longer an infection has been around, the less virulent it is, the symptoms it elicits in the host becoming less severe over many generations. New diseases that erupt suddenly into a previously unexposed population often have extreme outcomes, including death.
If macroparasites couldn't have produced a significant amount of mortality during the Paleolithic period, and most disease-causing microorganisms hadn't yet had a chance to pass from animals to humans, what did our hunter-gatherer ancestors die of? According to British evolutionary biologist J.B.S. Haldane, traumatic injuries were the most likely cause of death throughout most of human history. Does this mean we spring from a race of klutzes, who tripped and fell their way through the Paleolithic? No: such injuries would have included wounds sustained during hunting and skirmishes with other groups, the traumas associated with childbirth (a significant source of mortality for both mother and child until quite recently), and accidental falls and drownings. All of these hazards, coupled with infections from the wounds, would have been the main cause of hunter-gatherer morbidity and mortality.
So, we seem to have evidence for an interesting pattern--three waves of mortality as we move from Paleolithic times to the present. The first is trauma, primarily from the time of our hominid ancestors until the dawn of the Neolithic period. As people settled down and began to domesticate animals rather than hunt them, infectious disease began to supersede trauma as a significant cause of mortality. The second wave, of infectious disease, continued to be the most significant cause of death until antibiotics were developed in the mid-twentieth century. The final wave has happened since the mid-twentieth century, in developed countries, where vaccinations and widespread antibiotic use have reduced infectious diseases to a fraction of their former threat. Now that we have stemmed the joint threats of trauma and infection, chronic diseases are becoming a larger threat. Most people prior to the twentieth century would have died relatively young, before these maladies--primarily diabetes, hypertension, stroke and cancer--would have had a chance to develop. With modern medicine, we've traded the scourges of trauma and infection for a threat from within our own bodies.
Why Are You Not Dead Yet? (Slate)
The Liberal Reforms (BBC bitesize history)
Penicillin, the Wonder Drug (University of Hawaii)
How Americans Got Red Meat Wrong (The Atlantic)
The Tall-but-Poor ‘Anomaly’ (Social Evolution Forum)
Raise a glass of clean water to John Snow (The Guardian)
How yam farming contributed to the rise of sickle-cell anemia (Slate)