The Beginning of Infinity: Explanations That Transform the World

Also, in the case of our civilization, the precautionary principle rules itself out. Since our civilization has not been following it, a transition to it would entail reining in the rapid technological progress that is under way. And such a change has never been successful before. So a blind pessimist would have to oppose it on principle.

This may seem like logic-chopping, but it is not. The reason for these paradoxes and parallels between blind optimism and blind pessimism is that those two approaches are very similar at the level of explanation. Both are prophetic: both purport to know unknowable things about the future of knowledge. And since at any instant our best knowledge contains both truth and misconception, prophetic pessimism about any one aspect of it is always the same as prophetic optimism about another. For instance, Rees’s worst fears depend on the unprecedentedly rapid creation of unprecedentedly powerful technology, such as civilization-destroying bio-weapons.

If Rees is right that the twenty-first century is uniquely dangerous, and if civilization nevertheless survives it, it will have had an appallingly narrow escape. Our Final Century mentions only one other example of a narrow escape, namely the Cold War – so that will make two narrow escapes in a row. Yet, by that standard, civilization must already have had a similarly narrow escape during the Second World War. For instance, Nazi Germany came close to developing nuclear weapons; the Japanese Empire did successfully weaponize bubonic plague – and had tested the weapon with devastating effect in China and had plans to use it against the United States. Many feared that even a conventionally won victory by the Axis powers could bring down civilization. Churchill warned of ‘a new dark age, made more sinister and perhaps more protracted by the lights of perverted science’ – though, as an optimist, he worked to prevent that. In contrast, the Austrian writer Stefan Zweig and his wife committed suicide in 1942, in the safety of neutral Brazil, because they considered civilization to be already doomed.

So that would make it three narrow escapes in a row. But was there not a still earlier one? In 1798, Malthus had argued, in his influential essay On Population, that the nineteenth century would inevitably see a permanent end to human progress. He had calculated that the exponentially growing population at the time, which was a consequence of various technological and economic improvements, was reaching the limit of the planet’s capacity to produce food. And this was no accidental misfortune. He believed that he had discovered a law of nature about population and resources. First, the net increase in population, in each generation, is proportional to the existing population, so the population increases exponentially (or ‘in geometrical ratio’, as he put it). But, second, when food production increases – for instance, as a result of bringing formerly unproductive land into cultivation – the increase is the same as it would have been if that innovation had happened at any other time. It is not proportional to whatever the population happens to be. He called this (rather idiosyncratically) an increase ‘in arithmetical ratio’, and argued that ‘Population, when unchecked, increases in a geometrical ratio. Subsistence increases only in an arithmetical ratio. A slight acquaintance with numbers will shew the immensity of the first power in comparison of the second.’ His conclusion was that the relative well-being of humankind in his time was a temporary phenomenon and that he was living at a uniquely dangerous moment in history. The long-term state of humanity must be an equilibrium between the tendency of populations to increase on the one hand and, on the other, starvation, disease, murder and war – just as happens in the biosphere.
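Malthus’s two ratios can be sketched numerically. The growth figures below are arbitrary illustrative assumptions, not his data; they only show why any proportional (‘geometrical’) increase must eventually overtake any fixed-increment (‘arithmetical’) one:

```python
def population(generations, initial=1.0, growth_rate=0.25):
    # 'Geometrical ratio': each generation adds a fixed *proportion*
    # of the existing population, so growth compounds exponentially.
    return initial * (1 + growth_rate) ** generations

def subsistence(generations, initial=1.0, increment=0.5):
    # 'Arithmetical ratio': each improvement adds a fixed *amount*,
    # regardless of how large the population already is.
    return initial + increment * generations

for g in (0, 5, 10, 20):
    print(g, round(population(g), 1), round(subsistence(g), 1))
```

However generous the fixed increment, the compounding curve crosses it in finitely many generations – which is all Malthus’s ‘slight acquaintance with numbers’ amounts to.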

In the event, throughout the nineteenth century, a population explosion happened much as Malthus had predicted. Yet the end to human progress that he had foreseen did not, in part because food production increased even faster than the population. Then, during the twentieth century, both increased faster still.

Malthus had quite accurately foretold the one phenomenon, but had missed the other altogether. Why? Because of the systematic pessimistic bias to which prophecy is prone. In 1798 the forthcoming increase in population was more predictable than the even larger increase in the food supply not because it was in any sense more probable, but simply because it depended less on the creation of knowledge. By ignoring that structural difference between the two phenomena that he was trying to compare, Malthus slipped from educated guesswork into blind prophecy. He and many of his contemporaries were misled into believing that he had discovered an objective asymmetry between what he called the ‘power of population’ and the ‘power of production’. But that was just a parochial mistake – the same one that Michelson and Lagrange made. They all thought they were making sober predictions based on the best knowledge available to them. In reality they were all allowing themselves to be misled by the ineluctable fact of the human condition that
we do not yet know what we have not yet discovered
.

Neither Malthus nor Rees intended to prophesy. They were warning that unless we solve certain problems in time, we are doomed. But that has always been true, and always will be. Problems are inevitable. As I said, many civilizations have fallen. Even before the dawn of civilization, all our sister species, such as the Neanderthals, became extinct through challenges with which they could easily have coped, had they known how. Genetic studies suggest that our own species came close to extinction about 70,000 years ago, as a result of an unknown catastrophe which reduced its total numbers to only a few thousand. Being overwhelmed by these and other kinds of catastrophe would have seemed to the victims like being forced to play Russian roulette. That is to say, it would have seemed to them that no choices that they could have made (except, perhaps, to seek the intervention of the gods more diligently) could have affected the odds against them. But this was a parochial error. Civilizations starved, long before Malthus, because of what they thought of as the ‘natural disasters’ of drought and famine. But it was really because of what we would call poor methods of irrigation and farming – in other words, lack of knowledge.

Before our ancestors learned how to make fire artificially (and many times since then too), people must have died of exposure literally on top of the means of making the fires that would have saved their lives, because they did not know how. In a parochial sense, the weather killed them; but the deeper explanation is lack of knowledge. Many of the hundreds of millions of victims of cholera throughout history must have died within sight of the hearths that could have boiled their drinking water and saved their lives; but, again, they did not know that. Quite generally, the distinction between a ‘natural’ disaster and one brought about by ignorance is parochial. Prior to every natural disaster that people once used to think of as ‘just happening’, or being ordained by gods, we now see many options that the people affected failed to take – or, rather, to create. And all those options add up to the overarching option that they failed to create, namely that of forming a scientific and technological civilization like ours. Traditions of criticism. An Enlightenment.

If a one-kilometre asteroid had approached the Earth on a collision course at any time in human history before the early twenty-first century, it would have killed at least a substantial proportion of all humans. In that respect, as in many others, we live in an era of unprecedented safety: the twenty-first century is the first ever moment when we have known how to defend ourselves from such impacts, which occur once every 250,000 years or so. This may sound too rare to care about, but it is random. A probability of one in 250,000 of such an impact in any given year means that a typical person on Earth would have a far larger chance of dying of an asteroid impact than in an aeroplane crash. And the next such object to strike us is already out there at this moment, speeding towards us with nothing to stop it except human knowledge. Civilization is vulnerable to several other known types of disaster with similar levels of risk. For instance, ice ages occur more frequently than that, and ‘mini ice ages’ much more frequently – and some climatologists believe that they can happen with only a few years’ warning. A ‘super-volcano’ such as the one lurking under Yellowstone National Park could blot out the sun for years at a time. If it happened tomorrow our species could survive, by growing food using artificial light, and civilization could recover. But many would die, and the suffering would be so tremendous that such events should merit almost as much preventative effort as an extinction. We do not know the probability of a spontaneously occurring incurable plague, but we may guess that it is unacceptably high, since pandemics such as the Black Death in the fourteenth century have already shown us the sort of thing that can happen on a timescale of centuries. Should any of those catastrophes loom, we now have at least a chance of creating the knowledge required to survive, in time.
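The arithmetic behind the asteroid-versus-aeroplane comparison is easy to reconstruct. Only the one-in-250,000 annual rate comes from the text; the lifespan, the fraction of humanity killed by an impact, and the aeroplane figure below are rough illustrative assumptions:

```python
IMPACT_RATE_PER_YEAR = 1 / 250_000   # rate stated in the text for ~1 km impacts
LIFETIME_YEARS = 80                  # assumed typical lifespan
FRACTION_KILLED = 0.25               # assumed share of humanity killed by an impact

# Chance such an impact happens during one lifetime (small-probability
# approximation), times the chance it kills any given person if it does.
p_asteroid = IMPACT_RATE_PER_YEAR * LIFETIME_YEARS * FRACTION_KILLED

# Assumed lifetime risk of dying in an aeroplane crash for a typical
# person worldwide (order of magnitude only; most people rarely fly).
p_plane = 1 / 1_000_000

print(f"asteroid ~ 1 in {round(1 / p_asteroid):,}")
print(f"plane    ~ 1 in {round(1 / p_plane):,}")
```

Even with these deliberately conservative assumptions the asteroid risk comes out tens of times larger, which is the point: rarity of the event does not mean a small personal risk when the event kills so many at once.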

We have such a chance because we are able to solve problems. Problems are inevitable. We shall always be faced with the problem of how to plan for an unknowable future. We shall never be able to afford to sit back and hope for the best. Even if our civilization moves out into space in order to hedge its bets, as Rees and Hawking both rightly advise, a gamma-ray burst in our galactic vicinity would still wipe us all out. Such an event is thousands of times rarer than an asteroid collision, but when it does finally happen we shall have no defence against it without a great deal more scientific knowledge and an enormous increase in our wealth.

But first we shall have to survive the next ice age; and, before that, other dangerous climate change (both spontaneous and human-caused), and weapons of mass destruction and pandemics and all the countless unforeseen dangers that are going to beset us. Our political institutions, ways of life, personal aspirations and morality are all forms or embodiments of knowledge, and all will have to be improved if civilization – and the Enlightenment in particular – is to survive every one of the risks that Rees describes and presumably many others of which we have no inkling.

So – how? How can we formulate policies for the unknown? If we cannot derive them from our best existing knowledge, or from dogmatic rules of thumb like blind optimism or pessimism, where can we derive them from? Like scientific theories, policies cannot be derived from anything. They are conjectures. And we should choose between them not on the basis of their origin, but according to how good they are as explanations: how hard to vary.

Like the rejection of empiricism, and of the idea that knowledge is ‘justified, true belief’, understanding that political policies are conjectures entails the rejection of a previously unquestioned philosophical assumption. Again, Popper was a key advocate of this rejection. He wrote:

The question about the sources of our knowledge . . . has always been asked in the spirit of: ‘What are the best sources of our knowledge – the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist – no more than ideal rulers – and that all ‘sources’ are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’

‘Knowledge without Authority’ (1960)

The question ‘How can we hope to detect and eliminate error?’ is echoed by Feynman’s remark that ‘science is what we have learned about how to keep from fooling ourselves’. And the answer is basically the same for human decision-making as it is for science: it requires a tradition of criticism, in which good explanations are sought – for example, explanations of what has gone wrong, what would be better, what effect various policies have had in the past and would have in the future.

But what use are explanations if they cannot make predictions and so cannot be tested through experience, as they can be in science? This is really the question: how is progress possible in philosophy? As I discussed in Chapter 5, it is obtained by seeking good explanations. The misconception that evidence can play no legitimate role in philosophy is a relic of empiricism. Objective progress is indeed possible in politics just as it is in morality generally and in science.

Political philosophy traditionally centred on a collection of issues that Popper called the ‘who should rule?’ question. Who should wield power? Should it be a monarch or aristocrats, or priests, or a dictator, or a small group, or ‘the people’, or their delegates? And that leads to derivative questions such as ‘How should a king be educated?’ ‘Who should be enfranchised in a democracy?’ ‘How does one ensure an informed and responsible electorate?’

Popper pointed out that this class of questions is rooted in the same misconception as the question ‘How are scientific theories derived from sensory data?’ which defines empiricism. It is seeking a system that derives or justifies the right choice of leader or government, from existing data – such as inherited entitlements, the opinion of the majority, the manner in which a person has been educated, and so on. The same misconception also underlies blind optimism and pessimism: they both expect progress to be made by applying a simple rule to existing knowledge, to establish which future possibilities to ignore and which to rely on. Induction, instrumentalism and even Lamarckism all make the same mistake: they expect explanationless progress. They expect knowledge to be created by fiat with few errors, and not by a process of variation and selection that is making a continual stream of errors and correcting them.

The defenders of hereditary monarchy doubted that any method of selection of a leader by means of rational thought and debate could improve upon a fixed, mechanical criterion. That was the precautionary principle in action, and it gave rise to the usual ironies. For instance, whenever pretenders to a throne claimed to have a better hereditary entitlement than the incumbent, they were in effect citing the precautionary principle as a justification for sudden, violent, unpredictable change – in other words, for blind optimism. The same was true whenever monarchs happened to favour radical change themselves. Consider also the revolutionary utopians, who typically achieve only destruction and stagnation. Though they are blind optimists, what defines them as utopians is their pessimism that their supposed utopia, or their violent proposals for achieving and entrenching it, could ever be improved upon. Additionally, they are revolutionaries in the first place because they are pessimistic that many other people can be persuaded of the final truth that they think they know.
