The Half-Life of Facts, by Samuel Arbesman

In April 1668 Wilkins proposed at a meeting of the Royal Society that it was time for measurement to be standardized. Among those measurement standards he proposed was that of length. He argued that the base unit of length, from which all others would be derived, should be known as the standard and should be defined as follows: the length of a pendulum that causes it to swing from one side to the other—known as a single half-period—every second. This definition used the insight gained by Galileo decades earlier that all pendulums of equal length—regardless of the weights at the end of the pendulums—swing at the same rate. Furthermore, no matter at what height you let go of the pendulum, it takes the same amount of time to go from one side to the other.

This suggestion, which was made to Wilkins by Christopher Wren, yields the definition of a standard as thirty-nine and a quarter inches, remarkably similar to the current measure of a meter. Wilkins went on to define a regular system of lengths derived from the standard, such as a tenth of a standard being denoted a foot, and ten standards equaling a pearch. A cube with sides of a standard was proposed to be equal to a bushel.
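As a rough sanity check on Wren’s figure: for small swings, a simple pendulum has period T = 2π√(L/g), so a one-second half-period (T = 2 s) gives L = g/π². Here is a minimal sketch in Python, assuming the modern standard value of g, which the Royal Society of course did not have:

```python
import math

# Simple-pendulum period: T = 2*pi*sqrt(L/g) for small swings.
# A half-period of one second means a full period T = 2 s, so
# L = g * (T / (2*pi))**2 = g / pi**2.
g = 9.80665   # standard gravity, m/s^2 (modern value, an assumption here)
T = 2.0       # full period in seconds: one second per half-swing

length_m = g * (T / (2 * math.pi)) ** 2
length_in = length_m / 0.0254   # 1 inch = 0.0254 m exactly

print(f"seconds pendulum: {length_m:.4f} m = {length_in:.2f} in")
# -> about 0.9936 m, or 39.1 inches: close to Wren's thirty-nine and a
#    quarter inches, and strikingly close to the modern meter.
```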

It’s probably clear that these derived measurements didn’t stick. In fact, I would be hard-pressed to find anyone who knows the term pearch (in case you’re wondering, it’s similar to our decameter, which seems to be similarly unused). However, the standard, which was transmuted into the French metre, or meter, continues to exist today.

But in the eighteenth century another approach to defining our unit of length eventually won out. Rather than using time to calculate a meter—which Wilkins argued was uniform throughout the universe and would therefore be hard to beat as the basis for a unit of measurement—the other approach derived the meter from the distance between the equator and the North Pole. A meter then becomes one ten-millionth of this distance. Due to the variation in gravity over the surface of the Earth, which would affect a pendulum’s swing, the French Academy of Sciences chose the distance-based measurement in 1791.

But there was a hitch. In addition to no one having actually visited the North Pole by 1791, the measurement methods used to calculate the distance from there to the equator were of varying quality. Unlike in the previous cases discussed, not only were the properties of the Earth not completely known, but these unknown properties had a curious effect on the measurements themselves. Since the imprecision of measuring the world affects the units that are being measured in the first place, there is a certain circularity when it comes to measurement. This creates a feedback loop in which the better we know how to measure various quantities, the more we improve the very nature of measurement itself.
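A toy model makes the feedback loop concrete: define the unit from a survey, and every later measurement made in that unit inherits the survey’s error. (The real survey famously did err; the resulting meter came out roughly 0.2 millimeters short of one ten-millionth of the actual quadrant.) The numbers below are illustrative assumptions, not historical values:

```python
# Toy model of the circularity: survey the quadrant with some relative
# error, define the meter as one ten-millionth of the surveyed value,
# and watch every later measurement inherit the error.
# The 0.02% error is an illustrative assumption, not the historical one.

true_quadrant = 10_000_000.0          # pretend-true arc length, in "ideal" meters
survey = true_quadrant * (1 + 2e-4)   # the survey overshoots by 0.02%

meter_stick = survey / 10_000_000     # the official unit, in ideal meters
# meter_stick = 1.0002: every stick cut to this standard is 0.02% long.

true_height = 2.0                       # an object 2 ideal meters tall
measured = true_height / meter_stick    # reading taken with the flawed sticks
print(f"{measured:.4f}")                # -> 1.9996, biased by the survey error
```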

The story of the meter has been one of ever-changing definition: as technologies have advanced and new techniques have been proposed, the definition has evolved. And with each change, its precision has increased, which ultimately is the point of any effective definition.

While knowing the approximate length of a meter is helpful for many tasks, such as cutting a carpet or measuring one’s own height, it will not do when it comes to finer and more precise tasks, such as designing a circuit board. As the world’s complexity has progressed alongside technological and scientific development, more detailed and more exact measurements have become necessary. While I don’t particularly care if my height is off by a half centimeter or so, when it comes to measuring the size of microscopic organisms, I’m going to be a bit more punctilious.

So, in 1889, an actual metal bar, made of iridium and platinum, was constructed to be the official meter and to avoid the ambiguity of the distance to the North Pole. All measuring sticks would then be based on the dimensions of this literally quintessential meter.

But that still didn’t suffice, for this bar could still undergo deterioration. In addition, any change in the atmosphere or temperature could alter its size, albeit very slightly. These considerations were incorporated into the definition by specifying the pressure and temperature of the bar’s environment, but such precise conditions are very difficult to maintain.

An international group of scientists then constructed more fundamental definitions, first using the wavelength of the emission of a certain isotope of the gas krypton, and finally arriving at our current definition, which involves the distance light travels in a
fantastically small, though extremely precisely defined, span of time. In this way, the speed of light and the length of the meter are now inextricably and definitionally linked. As our measurements become more precise, the speed of light doesn’t change; instead, the definition of the meter does.
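The passage leaves the numbers unstated, so for the record: the 1983 definition fixes the speed of light at exactly 299,792,458 meters per second, and the meter is the distance light travels in 1/299,792,458 of a second. The identity is exact by construction:

```python
from fractions import Fraction

c = 299_792_458                 # m/s, exact by definition since 1983
dt = Fraction(1, 299_792_458)   # the fantastically small time span, in seconds

print(c * dt)   # -> 1: exactly one meter, with no rounding anywhere
```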

.   .   .

THE world of measurement involves much more than just the meter. If you wish to see how far down the rabbit hole of measurement it is possible to go, I recommend the Encyclopaedia of Scientific Units, Weights, and Measures: Their SI Equivalences and Origins. Compiled by François Cardarelli, a French Canadian chemical engineer, it is truly a wide-ranging document. It has conversion tables for units of measurement throughout history, from Abyssinian units of length to Egyptian units of weight, from the Roman system of distance, which uses a gradus (a single stride when walking) and passus (two strides) to denote distance, to the Assyrio-Chaldean-Persian measurement system. This book is exhaustive.

Interested in moving beyond light-years and parsecs (about three and a quarter light-years) to describe outer space? Then consider the siriusweit, which is equal to five parsecs. Or wondering about the details of the fother, a British unit of mass for trading lead bullion, or the kannor, a Finnish unit of volume? This book can fulfill your needs.

There are even various discrete units included, such as the perfect ream, which is 516 sheets of paper, and the warp, which is four herrings, a unit used by British fishermen and old men at kiddush.

In addition to all this useful and possibly not so useful information, the book includes an intriguing table that shows how each definition of the meter reduced errors in measurement overall: Each successive definition made the meter a bit less uncertain. The chart below displays the table in graphic form.

These data points aren’t pinned down merely to the year. Each redefinition occurred on a very precise date: We know the very day when the meter became as precise as what we have now. Furthermore, the precision of the meter has increased in a regular fashion, its error declining exponentially (a straight line when the error axis is logarithmic). Just as scientific prefixes have changed in an exponential fashion, allowing for more precise terminology, so has the way we define measurement itself.
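Exponential decay is easy to check for in data like these: if error(t) = error₀ · e^(−λt), then log(error) falls on a straight line in t. A minimal sketch of that log-linear fit, using placeholder values in place of Cardarelli’s actual table:

```python
import math

# Hypothetical (year, relative error) pairs standing in for the real
# table -- placeholder values chosen only to illustrate the fit.
points = [(1795, 1e-4), (1889, 1e-6), (1960, 1e-8), (1983, 1e-10)]

# err(t) = err0 * exp(-lam * t) means log(err) = log(err0) - lam * t,
# so an ordinary least-squares line through (t, log err) recovers lam.
xs = [t for t, _ in points]
ys = [math.log(e) for _, e in points]
n = len(points)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
lam = -slope
print(f"error shrinks about {1 - math.exp(-lam):.1%} per year (toy data)")
```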

Figure 8. Measurement error of the meter over time. The precision in the definition of the meter has increased, with an exponential decay in error (the error axis is logarithmic) over time. Line shows general downward trend. Data from Cardarelli, Encyclopaedia of Scientific Units, Weights, and Measures: Their SI Equivalences and Origins (Springer, 2004).

The meter’s shift in precision, as well as definition, is not an aberration. Scientists have been driving toward ensuring that metric units in general be based on physical constants of the universe. In addition to the meter’s tie to the speed of light, time too is known precisely: A second is defined in terms of the vibration rate of a certain type of cesium atom.
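Specifically, since 1967 the second has been defined as exactly 9,192,631,770 periods of the radiation from the hyperfine ground-state transition of cesium-133. The arithmetic, for the curious:

```python
# The SI second: exactly 9,192,631,770 cycles of the cesium-133
# hyperfine transition radiation. One cycle therefore lasts:
CS_HYPERFINE_HZ = 9_192_631_770   # cycles per second, exact by definition

one_cycle_s = 1 / CS_HYPERFINE_HZ
print(f"one cesium cycle = {one_cycle_s:.4e} s")          # ~1.0878e-10 s
print(f"cycles in a second: {round(1 / one_cycle_s):,}")  # 9,192,631,770
```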

The last basic unit in the metric system to undergo this transition to definition in terms of physical constants is the kilogram. For a long time the official kilogram was defined as the mass of a physical cylinder of platinum and iridium stored in a basement vault outside Paris. Over the past few years, metrologists—the scientists preoccupied with matters of measurement—have been bandying about alternative definitions, such as the mass of a sphere of silicon with an exact number of atoms or a precise amount of electromagnetic energy. In October 2011, they finally convened outside Paris at the Twenty-fourth General Conference on Weights and Measures and agreed on a definition based on a physical constant that, once ratified, will become the official description of a kilogram.
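As a back-of-envelope for the silicon-sphere idea: fix the number of atoms, and the mass follows from the molar mass and Avogadro’s number. The figures below are rounded reference values, shown only for illustration:

```python
# How many silicon-28 atoms make a kilogram? Rounded reference values.
AVOGADRO = 6.02214076e23   # atoms per mole (the now-exact SI value)
MOLAR_MASS_SI28 = 27.9769  # grams per mole of silicon-28, approximate

atoms_per_kg = 1000.0 / MOLAR_MASS_SI28 * AVOGADRO
print(f"about {atoms_per_kg:.3e} atoms per kilogram")
# -> roughly 2.15e25: the sphere project had to pin down this count
#    indirectly, from the crystal's geometry and lattice spacing.
```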

Even though our units of measurement have become unbelievably exact, we don’t normally reach that level of precision: A certain amount of error and uncertainty is baked into our daily lives. Most people use a fairly basic ruler, despite the recent advances in precision. I can distinctly recall the yardstick my family owned when I was growing up—it was so worn at the ends that it was probably missing nearly an entire inch. Whatever I measured was objectively wrong, but it was close enough for everyday purposes. Similarly, when we exchange money from one currency to another, we don’t mind that these conversions are necessarily approximate, inaccurate by several thousandths of a dollar or euro.

But understanding why we have measurement error, and properly understanding precision, can help us better understand how facts change and how measuring our world can lead to changes in knowledge.

.   .   .

IN 1980, A. J. Dessler and C. T. Russell published a tongue-in-cheek paper in Eos, a journal of the American Geophysical Union. In it, they examined estimates of Pluto’s mass over time. We still don’t know Pluto’s mass, at least not exactly. Since it vents gases, astronomers often have trouble telling its size, sometimes viewing its self-generated haze as part of the surface.

Since Pluto’s first sighting, when it was judged to be about the size of the Earth, estimates of its mass have dropped greatly over time. Dessler and Russell explained this by arguing something simple: Pluto itself is actually shrinking. By fitting the curve of Pluto’s diminishing size to a bizarre mathematical function using the irrational number π, they argued that Pluto would vanish in 1984. But don’t worry! According to their function, Pluto would reappear 272 years later (its mass would go from being mathematically imaginary to real again).

Of course, this is ridiculous. A far more reasonable explanation is that our tools improved over time, via the Moore’s-Law-like regularities that inexorably advance technology, allowing us to resolve Pluto better. While there’s still a certain amount of uncertainty in Pluto’s mass, we now have a much better handle on this fact: There is a clear relationship between what our facts are, advances in technology, and improvements in measurement.

When it comes to error, measurement revolves around two terms: precision and accuracy. Any measurement method inherently has these two properties, and it’s important to be aware of them when examining the true value of something. We can understand precision and accuracy through a rather whimsical scenario.

Imagine we are trying to determine the position of a point on a far-off surface by using a laser pointer. We have two different people tasked with trying to locate this point by aiming the laser pointer at it: a young boy who is lazy, and an older man who is very careful in his measurements. The young boy, endowed with the steady hand of youth, is physically capable of pointing the laser exactly at the point on the wall. But he doesn’t want to hold his arm up for very long, because in our rather contrived example he is inherently lazy, so he always rests his laser-pointer arm on a nearby surface. In this case, no matter how many times the boy points the laser at the point, it always lands a little bit low, because he rests his wrist on a lower surface.

On the other hand, the old man tries his best. He points the laser exactly at the point, but due to his age he has a slight tremor. So the laser is always hovering around the point, within a certain range, but it is rarely exactly where it should be.

In case this hasn’t yet become clear, the old man embodies accuracy and the young boy embodies precision. Precision refers to how consistent one’s measurements are from time to time: how tightly they cluster around one another, regardless of where the true value lies. If the true length of something is twenty inches, a method that always yields values of twenty-five inches is more precise than one that yields variable values within half an inch of twenty inches, even though the first method’s results are wrong.

Accuracy refers to how similar one’s measurements are to the real value. If your measurements are always five inches too high, even if your measurements are very consistent (and therefore are highly precise), you lack accuracy.

Of course, no method is perfectly precise or perfectly accurate; every one is characterized by some mixture of imprecision and inaccuracy. But we can keep trying to improve our measurement methods. When we do, changes in precision and accuracy affect the facts we know, and sometimes force a more drastic overhaul of our facts.
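A minimal simulation, with made-up numbers, separates the two ideas: systematic offset (bias) captures accuracy, while scatter (standard deviation) captures precision:

```python
import random
import statistics

# The two laser pointers: the boy is precise but biased (steady hand,
# rested on a low surface); the old man is accurate but imprecise
# (aimed at the target, but with a tremor). All numbers are made up.
random.seed(1)
TARGET = 0.0   # true position of the point

boy = [TARGET - 2.0 + random.gauss(0, 0.05) for _ in range(1000)]
old_man = [TARGET + random.gauss(0, 1.0) for _ in range(1000)]

for name, shots in [("boy", boy), ("old man", old_man)]:
    bias = statistics.mean(shots) - TARGET   # accuracy: systematic offset
    spread = statistics.stdev(shots)         # precision: scatter
    print(f"{name:>7}: bias = {bias:+.3f}, spread = {spread:.3f}")
# boy:     bias ~ -2.0, spread ~ 0.05  -> precise but not accurate
# old man: bias ~  0.0, spread ~ 1.0   -> accurate but not precise
```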
