It would seem that, in these cases, new knowledge is being created more rapidly, spreading faster, and causing changes in things that we had not even realized were in the realm of mesofacts.
On the other hand, there seem to be examples where the change in facts is slowing down. While Moore’s Law has had an incredible run that has lasted decades, and it has incorporated successive generations of technology, there are many who feel that it only has a couple of decades or so more to go. At that point, in the near future, we will start to bump up against limits imposed by physics, such as the size of atoms, which will ultimately limit how many components we can cram onto a circuit.
The same sort of limits could be argued to hold with transportation speeds. We have had an astonishing sequence of technologies that have allowed us to go faster and faster, but an exponential pace doesn’t seem sustainable forever. While going to the moon for our lunch break sounds wonderful, it just isn’t likely.
This does not mean that technological change stops adhering to mathematical rules. But in the long term it might very well adhere to a logistic curve, with an eventual slowing toward a limit. We are simply in the fast-changing portion in the middle right now, so it is hard to see the eventual slowdown. That being said, as humans we are very good at being pessimistic and at underestimating our capacity for continued innovation. Even though each individual technology might reach its limits, new ones come along often enough to innovate around those limits that the change around us might not slow down for a long time to come.
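To see why the slowdown is so hard to notice from the inside, here is a minimal sketch in Python comparing the two shapes; every parameter is illustrative, chosen only so the curves start out together, and none of the numbers come from the book:

    import math

    def exponential(t, rate=0.1):
        # Unbounded exponential growth: e^(rate * t).
        return math.exp(rate * t)

    def logistic(t, ceiling=1000.0, rate=0.1, midpoint=69.0):
        # Logistic growth: matches the exponential early on, then levels
        # off at `ceiling`. Midpoint ~ ln(ceiling)/rate aligns the curves.
        return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

    for t in (0, 30, 60, 90, 120):
        print(f"t={t:3d}  exponential={exponential(t):9.1f}  logistic={logistic(t):6.1f}")

Run it and the divergence appears only late: at t = 30 the two curves are nearly identical, while by t = 120 the exponential has left the leveling logistic far behind. From inside the steep middle stretch, the two are indistinguishable.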
But what of scientific knowledge? While we are nowhere near the end of science—the sum of what we don’t know is staggering—we might very well be on a logistic curve of ever-changing knowledge as well, rather than one of exponential growth. One of the reasons I believe this could be true is simple: demographics. It seems unlikely that the world’s population will keep growing faster and faster. Whenever a country has become industrialized, its development has gone hand in hand with a drop in its birth rate. Therefore, as the world as a whole advances technologically, population will cease to grow at the frenetic pace of previous decades and centuries. Combined with energy constraints—we are nowhere near our limits, but our energy resources are certainly not unbounded—exponential knowledge growth cannot continue forever. On the other hand, as computational power advances, computer-aided scientific discovery could push this slowdown far off into the future.
Nevertheless, there are regularities to factual change and growth: Facts will continue to grow and be overturned, albeit at a slower pace, and we certainly do not seem to be leaving the exponential regime anytime soon.
But even if everything continues to grow rapidly, there might be certain limits to how we perceive this change and adapt to it.
. . .
When Carl Linnaeus worked out his methodology for organizing all living things in the eighteenth century, his taxonomy had three kingdoms—roughly translated as animal, vegetable, and mineral—and further divisions into classes, orders, genera, and species. Biologists now have five kingdoms, new subdivisions between kingdoms and classes called phyla (singular phylum), families between orders and genera, and even three larger overarching divisions above kingdoms known as domains. As our knowledge has grown from thousands of species to millions, so too has our system of classification.
Similarly, the way we categorize different diseases has grown rapidly. In 1893, the International List of Causes of Death was first adopted; it contained about 150 different categories. As of 2012, we are up to the tenth revision of the International Statistical Classification of Diseases and Related Health Problems, known as ICD-10. Released in 1990, it contains 12,420 codes, nearly double the number in the previous revision, ICD-9, which had come out only a little more than ten years earlier. As facts have proliferated, how we manage knowledge and think about it has also had to grow, with our classification systems ramifying in sophisticated and complex ways.
On the one hand, being exposed to more complexity, whether it be in the realm of categorization of diseases, living things, or the many other classification systems we use—from types of occupations to Internet domain names—could make us more intelligent. Just as being exposed to cognitively demanding television shows and video games seems to increase our ability to think critically, so too could more facts, and their attendant complex classification systems, make us smarter.
However, as humans, we seem to have certain cognitive limits on what we can know and what we can handle in our daily lives.
Our brains are only so big. And it seems that the sizes of our brains actually dictate how many social connections we can have and how many people we can regularly interact with and keep in our minds. Dubbed Dunbar’s Number, after its discoverer, Robin Dunbar, who examined the brain sizes of different primates, the number of people we can know and have meaningful social ties with seems to be limited to between about 150 and 200. This is about the number of soldiers that make up a fighting unit—whether in ancient Rome or in modern-day armies—and fits the size of a small
village. Surprisingly, despite technological advancements in the social networking sphere, our number of Facebook friends still adheres to Dunbar’s Number and is about 190, as of 2011.
Similarly, if we look at the number of close ties we each have, we discover another trade-off. While we know a lot of people, each of us really has only a handful of very close social ties, such as a spouse or best friend. For most people this number is around four. In my own research I have found that as we increase the number of people we are close to, we lower how close we are to each of them, on average. So if I have five friends instead of four, I am less close to each of these five people than I would be if I eliminated one of them from my tight inner circle. There seems to be some sort of conservation of attention: We have only so much attention to give, and as we add people to the circle of those we attend to, that fixed amount gets spread evenly, and more thinly, among them.
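As a toy model of that trade-off, here is a minimal sketch in Python; the attention budget of 1.0 and the function name are illustrative assumptions for this sketch, not figures from the research:

    def closeness_per_tie(num_ties, attention_budget=1.0):
        # A fixed total amount of attention, split evenly across the
        # inner circle: more ties means less closeness per tie.
        return attention_budget / num_ties

    print(closeness_per_tie(4))  # 0.25 with four close ties
    print(closeness_per_tie(5))  # 0.20: a fifth tie dilutes every bond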
Our brains have a certain capacity, at least when it comes to social ties. Is the same thing true for changing knowledge? Upon being confronted with his ignorance of the Copernican notion that the Earth orbits the Sun, Sherlock Holmes argued this very point:
“You see,” he explained, “I consider that a man’s brain originally is like a little empty attic, and you have to stock it with such furniture as you choose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things, so that he has a difficulty in laying his hands upon it. Now the skilful workman is very careful indeed as to what he takes into his brain-attic. He will have nothing but the tools which may help him in doing his work, but of these he has a large assortment, and all in the most perfect order. It is a mistake to think that that little room has elastic walls and can distend to any extent. Depend upon it there comes a time when for every addition of knowledge you forget something that you knew before. It is of the highest importance, therefore, not to have useless facts elbowing out the useful ones.”
We very likely can’t handle every piece of knowledge that comes our way, and while being exposed to more and more might help us to think better, we no doubt have our limits when it comes to dealing with rapidly changing facts. This sounds like bad news: Our brains simply won’t be able to handle all of this knowledge and information, or the rapidity with which it changes. There are workarounds, such as the online search engines mentioned in the last chapter. But, happily, it turns out that even when rapid change happens, it’s not as overwhelming as we might think.
Many futurists are concerned with what are termed singularities, periods of such rapid and profound change due to technology that the state of the world is forever altered. Like some of the changes mentioned in chapter 7, these phase transitions happen so quickly that they can forever alter humanity’s relationship with its surroundings. The quintessential singularity that futurists dwell on is the potential creation of superhuman machine intelligence. While many scientists think this is either very far off or will never happen at all, how would singularities affect us? Would a singularity tax our cognitive limits, or would we be able to cope?
Chris Magee, the MIT professor who studies the rapid technological change around us, and Tessaleno Devezas of the University of Beira Interior in Portugal, decided to use history as a guide. Focusing on two episodes that have already happened, they examined how humanity has dealt with fast change. They first looked at how the Portuguese gained control over increasingly large portions of the Earth’s surface over the course of the fifteenth century, as their empire grew. They also looked at the progression of humanity’s increasingly accurate measurement of time over the last millennium or so. In both cases certain facts shifted rapidly, following exponentially fast growth and culminating in what many would argue was the crossing of some sort of singularity threshold. In the case of Portugal, the country established a nearly globe-encompassing maritime empire; in the case of clocks, timepieces became so advanced that the measurement of time was far more precise than human perception.
But humanity assimilated these changes quite well. When speaking about the innovation in timekeeping, Magee and Devezas wrote:
These large changes were absorbed over time apparently without major disruption; for example, no mention is made of “clock riots” even though there was resistance and adaptation was needed. In given communities, the large changes apparently happened within less than a generation.
So I think it is safe to assume a somewhat optimistic tone, recognizing that change, while it might be surprising to many of us, is not entirely destabilizing. Humans are very adaptable, and are capable of understanding how knowledge changes.
And, of course, that’s the message of this book itself.
As I hope I’ve shown, facts can change in a startlingly complex variety of ways. But far from our knowledge fluctuating at random, the changes are systematic and predictable. Whether they concern nature or the man-made world, and whether they change because of new measurements or the identification of errors, facts change in recognizably regular ways.
In addition to looking up facts on the Internet, or having glowing orbs on our desks that respond to changes in the market, another way to avoid being surprised by changes in knowledge is simply to recognize that such change is not that surprising.
We are getting better at internalizing this. For example, many medical schools inform their students that within several years half of what they’ve been taught will be wrong, and the teachers just don’t know which half. But too often—whether because change is still too slow to notice or because of quirks in how we learn and observe our surroundings—we don’t really live our lives with the concept that facts are always changing.
In an interview, the novelist Jonathan Franzen noted: “Seriously, the world is changing so quickly that if you had any more than 80 years of change, I don’t see how you could stand it
psychologically.” Many of us still maintain this attitude, unable to deal with change. But it doesn’t have to be this way. We have to begin educating ourselves and our children to recognize that knowledge will always be changing, and to show them the regularities behind how these changes happen. More important than simply learning facts is learning how to adapt to changing facts. Until we begin to do that, we will continue to be caught flat-footed by new information.
Facts don’t change arbitrarily. Even though knowledge changes, the astounding thing is that it changes in a regular manner; facts have a half-life and obey mathematical rules. Once we recognize this, we’ll be ready to live in the rapidly changing world around us.
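To give a flavor of what such a rule looks like, here is a minimal sketch in Python of the half-life idea; the 45-year half-life below is purely illustrative, a stand-in for whatever decay rate a particular field actually exhibits:

    def fraction_still_true(t_years, half_life=45.0):
        # Exponential decay, by analogy with radioactive half-lives:
        # after one half-life, half of the original facts remain accepted.
        return 0.5 ** (t_years / half_life)

    for t in (10, 45, 90):
        print(f"after {t:3d} years: {fraction_still_true(t):.0%} still accepted")
    # after  10 years: 86% still accepted
    # after  45 years: 50% still accepted
    # after  90 years: 25% still accepted

No single overturned fact can be predicted this way, but the aggregate rate of overturning can be, and that regularity is the point.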
In June 2012, a screenshot from Back to the Future went viral. The film snippet showed that on June 27, 2012, the DeLorean hurtled forward in time. After a certain amount of excitement, posts, and retweets, people soon realized that the image had been modified: The “actual” date wasn’t for three more years, with the DeLorean slated to arrive in the future on October 21, 2015. And it turns out this wasn’t even the first time this had happened: A similarly fudged screenshot of the DeLorean’s time counters had spread across the Internet just two years prior, a jokingly altered image that reached far more people than were in on the joke.
Not only were people spreading incorrect information, but the collective Internet consciousness didn’t even recognize that it had been tricked by this same manipulation before. But even when we do recognize such errors, we can’t fix them as easily as we might like.
Knowledge changes around us all the time, but that doesn’t mean we always have the most up-to-date facts. Even though we live in an age of instant and massive information dissemination, and despite our unprecedented ability to rapidly learn new things and crowdfix mistakes, Knowledge—and its sinister twin, Error—continues to propagate in complex and intriguing ways. Errors persist far longer than they should, even when more accurate knowledge exists elsewhere. For example, medical misinformation thrives and spreads on the Internet alongside Web sites that provide correct information and even actively work to debunk the bad science. Newer knowledge does not spread as fast as it should, and it weaves its way unevenly throughout society.
The problem isn’t just epistemological—it can have serious consequences. Doctors might not realize there is a newer and better treatment for a disease. Teachers might not have the most current materials on what dinosaurs looked like. Entire fields of science invest time and money recapitulating the findings of others due to their ignorance of other fields’ advances.