Emergence is also responsible for the fact that discoveries can be made in successive steps, thus providing scope for the scientific method. The partial success of each theory in a sequence of improving theories is tantamount to the existence of a ‘layer’ of phenomena that each theory explains successfully – though, as it then turns out, partly mistakenly.

Successive scientific explanations are occasionally dissimilar in the way they explain their predictions, even in the domain where the predictions themselves are similar or identical. For instance, Einstein's explanation of planetary motion does not merely correct Newton's: it is radically different, denying, among many other things, the very existence of central elements of Newton's explanation, such as the gravitational force and the uniformly flowing time with respect to which Newton defined motion. Likewise, the astronomer Johannes Kepler's theory, which said that the planets move in ellipses, did not merely correct the celestial-sphere theory: it denied the spheres' existence. And Newton's did not substitute a new shape for Kepler's ellipses, but a whole new way for laws to specify motion – through infinitesimally defined quantities like instantaneous velocity and acceleration. Thus each of those theories of planetary motion was ignoring or denying its predecessor's basic means of explaining what was happening out there.

This has been used as an argument for instrumentalism, as follows. Each successive theory made small but accurate corrections to what its predecessor predicted, and was therefore a better theory in that sense. But, since each theory's explanation swept away that of the previous theory, the previous theory's explanation was never true in the first place, and so one cannot regard those successive explanations as constituting a growth of knowledge about reality. From Kepler to Newton to Einstein we have successively: no force needed to explain orbits; an inverse-square-law force responsible for every orbit; and again no force needed. So how could Newton's 'force of gravity' (as distinct from his equations predicting its effects) ever have been an advance in human knowledge?

It could, and was, because sweeping away the entities through which a theory makes its explanation is not the same as sweeping away the whole of the explanation. Although there is no force of gravity, it is true that something real (the curvature of spacetime), caused by the sun, has a strength that varies approximately according to Newton's inverse-square law, and affects the motion of objects, seen and unseen. Newton's theory also correctly explained that the laws of gravitation are the same for terrestrial and celestial objects; it made a novel distinction between mass (the measure of an object's resistance to being accelerated) and weight (the force required to prevent the object from falling under gravity); and it said that the gravitational effect of an object depends on its mass and not on other attributes such as its density or composition. Later, Einstein's theory not only endorsed all those features but explained, in turn, why they are so. Newton's theory, too, had been able to make more accurate predictions than its predecessors precisely because it was more right than they were about what was really happening. Before that, even Kepler's explanation had included important elements of the true explanation: planetary orbits are indeed determined by laws of nature; those laws are indeed the same for all planets, including the Earth; they do involve the sun; they are mathematical and geometrical in character; and so on. With the hindsight provided by each successive theory, we can see not only where the previous theory made false predictions, but also that wherever it made true predictions this was because it had expressed some truth about reality. So its truth lives on in the new theory – as Einstein remarked, 'There could be no fairer destiny for any physical theory than that it should point the way to a more comprehensive theory in which it lives on as a limiting case.'
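In its usual textbook form, the inverse-square dependence referred to above says that the attractive force between the sun (mass M) and a planet (mass m) separated by a distance r is

$$ F = \frac{GMm}{r^{2}}, $$

with G the gravitational constant; general relativity reproduces this dependence as the weak-field, slow-motion limit of spacetime curvature, which is the sense in which Newton's theory lives on as a limiting case.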

As I explained in Chapter 1, regarding the explanatory function of theories as paramount is not just an idle preference. The predictive function of science is entirely dependent on it. Also, in order to make progress in any field, it is the explanations in existing theories, not the predictions, that have to be creatively varied in order to conjecture the next theory. Furthermore, the explanations in one field affect our understanding of other fields. For instance, if someone thinks that a conjuring trick is due to supernatural abilities of the conjurer, it will affect how they judge theories in cosmology (such as the origin of the universe, or the fine-tuning problem) and in psychology (how the human mind works) and so on.

By the way, it is something of a misconception that the predictions of successive theories of planetary motion were all that similar. Newton's predictions are indeed excellent in the context of bridge-building, and only slightly inadequate when running the Global Positioning System, but they are hopelessly wrong when explaining a pulsar or a quasar – or the universe as a whole. To get all those right, one needs Einstein's radically different explanations.

Such large discontinuities in the meanings of successive scientific theories have no biological analogue: in an evolving species, the dominant strain in each generation differs only slightly from that in the previous generation. Nevertheless, scientific discovery is a gradual process too; it is just that, in science, all the gradualness, and nearly all the criticism and rejection of bad explanations, takes place inside the scientists' minds. As Popper put it, 'We can let our theories die in our place.'

There is another, even more important, advantage in that ability to criticize theories without staking one’s life on them. In an evolving species, the adaptations of the organisms in each generation must have enough functionality to keep the organism alive, and to pass all the tests that they encounter in propagating themselves to the next generation. In contrast, the intermediate explanations leading a scientist from one good explanation to the next need not be viable at all. The same is true of creative thought in general. This is the fundamental reason that explanatory ideas are able to escape from parochialism, while biological evolution, and rules of thumb, cannot.

That brings me to the main subject of this chapter: abstractions. In Chapter 4 I remarked that pieces of knowledge are abstract replicators that 'use' (and hence affect) organisms and brains to get themselves replicated. That is a higher level of explanation than the emergent levels I have mentioned so far. It is a claim that something abstract – something non-physical, such as the knowledge in a gene or a theory – is affecting something physical. Physically, nothing is happening in such a situation other than that one set of emergent entities – such as genes, or computers – is affecting others, which is already anathema to reductionism. But abstractions are essential to a fuller explanation. You know that if your computer beats you at chess, it is really the program that has beaten you, not the silicon atoms or the computer as such. The abstract program is instantiated physically as a high-level behaviour of vast numbers of atoms, but the explanation of why it has beaten you cannot be expressed without also referring to the program in its own right. That program has also been instantiated, unchanged, in a long chain of different physical substrates, including neurons in the brains of the programmers and radio waves when you downloaded the program via wireless networking, and finally as states of long- and short-term memory banks in your computer. The specifics of that chain of instantiations may be relevant to explaining how the program reached you, but they are irrelevant to why it beat you: there, the content of the knowledge (in it, and in you) is the whole story. That story is an explanation that refers ineluctably to abstractions; and therefore those abstractions exist, and really do affect physical objects in the way required by the explanation.

The computer scientist Douglas Hofstadter has a nice argument that this sort of explanation is essential in understanding certain phenomena. In his book I Am a Strange Loop (2007) he imagines a special-purpose computer built of millions of dominoes. They are set up – as dominoes often are for fun – standing on end, close together, so that if one of them is knocked over it strikes its neighbour and so a whole stretch of dominoes falls, one after another. But Hofstadter's dominoes are spring-loaded in such a way that, whenever one is knocked over, it pops back up after a fixed time. Hence, when a domino falls, a wave or 'signal' of falling dominoes propagates along the stretch in the direction in which it fell until it reaches either a dead end or a currently fallen domino. By arranging these dominoes in a network with looping, bifurcating and rejoining stretches, one can make these signals combine and interact in a sufficiently rich repertoire of ways to make the whole construction into a computer: a signal travelling down a stretch can be interpreted as a binary '1', and the lack of a signal as a binary '0', and the interactions between such signals can implement a repertoire of operations – such as 'and', 'or' and 'not' – out of which arbitrary computations can be composed.
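As a rough illustration of how such signals can compose into computation, here is a minimal sketch in Python, with a travelling wave of falling dominoes modelled simply as a boolean; the gate functions are illustrative stand-ins for domino arrangements, not a description of Hofstadter's actual layout:

    # A domino 'signal' is modelled as a boolean: True for a travelling wave
    # of falling dominoes ('1'), False for the absence of a wave ('0').

    def domino_and(a: bool, b: bool) -> bool:
        # a stretch that propagates onward only if waves arrive from both inputs
        return a and b

    def domino_or(a: bool, b: bool) -> bool:
        # a junction where a wave from either input continues onward
        return a or b

    def domino_not(a: bool) -> bool:
        # an inhibiting arrangement: an output wave appears only when no input wave does
        return not a

    # Arbitrary computations can be composed from these, e.g. exclusive-or:
    def domino_xor(a: bool, b: bool) -> bool:
        return domino_or(domino_and(a, domino_not(b)),
                         domino_and(domino_not(a), b))

    print(domino_xor(True, False))   # True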

One domino is designated as the 'on switch': when it is knocked over, the domino computer begins to execute the program that is instantiated in its loops and stretches. The program in Hofstadter's thought experiment computes whether a given number is a prime or not. One inputs that number by placing a stretch of exactly that many dominoes at a specified position, before tripping the 'on switch'. Elsewhere in the network, a particular domino will deliver the output of the computation: it will fall only if a divisor is found, indicating that the input was not a prime.
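In conventional code, the computation the network performs amounts to a search for a divisor; a minimal Python sketch of that logic (the function name is illustrative, not from the book) is:

    def output_domino_falls(n: int) -> bool:
        # The output domino falls only if some divisor of n is found,
        # i.e. only if n is not prime. Trying every candidate divisor in
        # turn mirrors the network's lengthy, inefficient search.
        for d in range(2, n):
            if n % d == 0:
                return True        # divisor found: the output domino falls
        return False               # no divisor: the output domino stays standing

    print(output_domino_falls(641))  # False, because 641 is prime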

Hofstadter sets the input to the number 641, which is a prime, and trips the 'on switch'. Flurries of motion begin to sweep back and forth across the network. All 641 of the input dominoes soon fall as the computation 'reads' its input – and snap back up and participate in further intricate patterns. It is a lengthy process, because this is a rather inefficient way to perform computations – but it does the job.

Now Hofstadter imagines that an observer who does not know the purpose of the domino network watches the dominoes performing and notices that one particular domino remains resolutely standing, never affected by any of the waves of downs and ups sweeping by.

The observer points at [that domino] and asks with curiosity, ‘How come that domino there is never falling?’

We know that it is the output domino, but the observer does not. Hofstadter continues:

Let me contrast two different types of answer that someone might give. The first type of answer – myopic to the point of silliness – would be, ‘Because its predecessor never falls, you dummy!’

Or, if it has two or more neighbours, ‘Because none of its neighbours ever fall.’

To be sure, this is correct as far as it goes, but it doesn’t go very far. It just passes the buck to a different domino.

In fact one could keep passing the buck from domino to domino, to provide ever more detailed answers that were ‘silly, but correct as far as they go’. Eventually, after one had passed the buck billions of times (many more times than there are dominoes, because the program ‘loops’), one would arrive at that first domino – the ‘on switch’.

At that point, the reductive (to high-level physics) explanation would be, in summary, 'That domino did not fall because none of the patterns of motion initiated by knocking over the "on switch" ever include it.' But we knew that already. We can reach that conclusion – as we just have – without going through that laborious process. And it is undeniably true. But it is not the explanation we were looking for because it is addressing a different question – predictive rather than explanatory – namely, if the first domino falls, will the output domino ever fall? And it is asking at the wrong level of emergence. What we asked was: why does it not fall? To answer that, Hofstadter then adopts a different mode of explanation, at the right level of emergence:

The second type of answer would be, ‘Because 641 is prime.’ Now this answer, while just as correct (indeed, in some sense it is far more on the mark), has the curious property of not talking about anything physical at all. Not only has the focus moved upwards to collective properties . . . these properties somehow transcend the physical and have to do with pure abstractions, such as primality.

Hofstadter concludes, ‘The point of this example is that 641’s primality is the best explanation, perhaps even the only explanation, for why certain dominoes did fall and certain others did not fall.’

Just to correct that slightly: the physics-based explanation is true as well, and the physics of the dominoes is also essential to explaining why prime numbers are relevant to that particular arrangement of them. But Hofstadter's argument does show that primality must be part of any full explanation of why the dominoes did or did not fall. Hence it is a refutation of reductionism in regard to abstractions. For the theory of prime numbers is not part of physics. It refers not to physical objects, but to abstract entities – such as numbers, of which there is an infinite set.

Unfortunately, Hofstadter goes on to disown his own argument and to embrace reductionism. Why?

His book is primarily about one particular emergent phenomenon, the mind – or, as he puts it, the 'I'. He asks whether the mind can consistently be thought of as affecting the body – causing it to do one thing rather than another, given the all-embracing nature of the laws of physics. This is known as the mind–body problem. For instance, we often explain our actions in terms of choosing one action rather than another, but our bodies, including our brains, are completely controlled by the laws of physics, leaving no physical variable free for an 'I' to affect in order to make such a choice. Following the philosopher Daniel Dennett, Hofstadter eventually concludes that the 'I' is an illusion. Minds, he concludes, can't 'push material stuff around', because 'physical law alone would suffice to determine [its] behaviour'. Hence his reductionism.
