The Physics of Star Trek
Lawrence M. Krauss

Even at the cutting edge, the improvement has been impressive. The fastest computers used
for general-purpose computing have increased in speed and memory capability by a factor of
about 100 in the past decade. And I am not including here computers built for
special-purpose work: these little marvels can have effective speeds exceeding tens of
billions of instructions per second. In fact, it has been shown that in principle certain
special-purpose devices might be built using biological, DNA-based systems, which could be
orders of magnitude faster.

One might wonder where all this is heading, and whether we can extrapolate the past rapid
growth to the future. Another valid question is whether we need to keep up this pace. I
find already that the rate-determining step in the information superhighway is the end
user. We can assimilate only so much information. Try surfing the Internet for a few
hours, if you want a graphic example of this. I often wonder why, with the incredible
power at my disposal, my own productivity has not increased nearly as dramatically as my
computer's. I think the answer is clear. I
am not limited by my computer's capabilities but by my own capabilities. It has been
argued that for this reason computing machines could be the next phase of human evolution.
It is certainly true that Data, even without emotions, is far superior to his human
crewmates in most respects. And, as determined in “The Measure of a Man,” he is a genuine
life-form.

But I digress. The point of noting the growth of computer capability in the last decade is
to consider how it compares with what we would need to handle the information storage and
retrieval associated with the transporter. And of course, it doesn't come anywhere close.

Let's make a simple estimate of how much information is encoded in a human body. Start
with our standard estimate of 10^28 atoms. For each atom, we first must encode its location,
which requires three coordinates
(the x, y, and z positions). Next, we would have to record the internal state of each
atom, which would include things like which energy levels are occupied by its electrons,
whether it is bound to a nearby atom to
make up a molecule, whether the molecule is vibrating or rotating, and so forth. Let's be
conservative and assume that we can encode all the relevant information in a kilobyte of
data. (This is roughly the amount of information on a double-spaced typewritten page.)
That means we would need roughly 10^28 kilobytes to store a human pattern in the pattern
buffer. I remind you that this is a 1 followed by 28 zeros.

Compare this with, say, the total information stored in all the books ever written. The
largest libraries contain several million volumes, so let's be very generous and say that
there are a billion different books in existence (one written for every five people now
alive on the planet). Say each book contains the equivalent of a thousand typewritten
pages of information (again on the generous side), or about a megabyte. Then all the
information in all the books ever written would require about 10^12, or about a million
million, kilobytes of storage. This is about sixteen orders of magnitude, or about one
ten-millionth of a billionth, smaller than the storage capacity needed
to record a single human pattern! When numbers get this large, it is difficult to
comprehend the enormity of the task. Perhaps a comparison is in order. The storage
requirements for a human pattern are ten thousand times as large, compared to the
information in all the books ever written, as the information in all the books ever
written is compared to the information on this page.
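
For readers who like to check such back-of-the-envelope arithmetic, here is a minimal sketch
in Python. The round numbers (10^28 atoms, a kilobyte per atom, a billion books of about a
megabyte each) are simply the estimates quoted above, not new data.

# Rough check of the storage estimates discussed in the text.
atoms = 1e28                       # atoms in a human body (the text's estimate)
kb_per_atom = 1.0                  # kilobytes assumed to encode one atom's state
pattern_kb = atoms * kb_per_atom   # about 1e28 kilobytes for one human pattern

books = 1e9                        # "a billion different books"
kb_per_book = 1e3                  # a thousand typewritten pages at about a kilobyte each
library_kb = books * kb_per_book   # about 1e12 kilobytes in all the books ever written

page_kb = 1.0                      # one double-spaced page, about a kilobyte
print(pattern_kb / library_kb)                             # about 1e16: sixteen orders of magnitude
print((pattern_kb / library_kb) / (library_kb / page_kb))  # about 1e4: "ten thousand times as large"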

Storing this much information is, in an understatement physicists love to use, nontrivial.
At present, the largest commercially available single hard disks store about 10 gigabytes,
or about 10,000 megabytes, of information. If each disk is about 10 cm thick, then if
we stacked all the disks currently needed to store a human pattern on top of one another,
they would reach a third of the way to the center of the galaxy, about 10,000 light-years,
or about 5 years' travel in the Enterprise at warp 9!
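
The stack-height figure follows directly from these numbers; a minimal Python sketch,
assuming the 10-gigabyte disks and 10-centimeter thickness quoted above:

# Height of a stack of 10 GB hard disks holding one human pattern.
pattern_bytes = 1e28 * 1e3     # 1e28 kilobytes, expressed in bytes
disk_bytes = 10e9              # one disk holds about 10 gigabytes
disk_thickness_m = 0.10        # each disk is about 10 cm thick

n_disks = pattern_bytes / disk_bytes    # about 1e21 disks
stack_m = n_disks * disk_thickness_m    # about 1e20 meters
light_year_m = 9.46e15                  # meters in one light-year
print(stack_m / light_year_m)           # roughly 10,000 light-years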

Retrieving this information in real time is no less of a challenge. The fastest digital
information transfer mechanisms at present can move somewhat less than about 100 megabytes
per second. At this rate, it would take about 2000 times the present age of the universe
(assuming an approximate age of 10 billion years) to write the data describing a human
pattern to tape! Imagine then the dramatic tension: Kirk and McCoy have escaped to the
surface of the penal colony at Rura Penthe. You don't have even the age of the universe to
beam them back, but rather just seconds to transfer a million billion billion megabytes of
information in the time it takes the jailor to aim his weapon before firing.

I think the point is clear. This task dwarfs the ongoing Human Genome Project, whose
purpose is to scan and record the complete human genetic code contained in microscopic
strands of human DNA. This is a multibillion-dollar endeavor, being carried out over at
least a decade and requiring dedicated resources in many laboratories around the world. So
you might imagine that I am mentioning it simply to add to the transporter-implausibility
checklist. However, while the challenge is daunting, I think this is one area that could
possibly be up to snuff in the twenty-third century. My optimism stems merely from
extrapolating the present growth rate of computer technology. Using my previous yardstick
of improvement in storage and speed by a factor of 100 each decade, and dividing it by 10
to be conservative (and given that we are about 21 powers of 10 short of the mark now), one
might expect that 210 years from now, at the dawn of the twenty-third century, we will
have the computer technology on hand to meet the information-transfer challenge of the
transporter.
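
The arithmetic behind that estimate is short enough to write out; a minimal Python sketch,
using the conservative factor of 10 per decade assumed above:

# Extrapolating a conservative factor-of-10 improvement per decade.
orders_of_magnitude_short = 21   # how far present storage and speed fall short
orders_per_decade = 1            # a factor of 10 per decade (100 per decade, divided by 10)
decades_needed = orders_of_magnitude_short / orders_per_decade
print(decades_needed * 10, "years")   # 210 years: the dawn of the twenty-third century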

I say this, of course, without any idea of how. It is clear that in order to be able to
store in excess of 10^28 kilobytes of information in any human-scale device, each and every
atom of the device will
have to be exploited as a memory site. The emerging notions of biological computers, in
which molecular dynamics mimics digital logical processes and the 10^25 or so particles in a
macroscopic sample all act simultaneously, seem to me to be the most promising in this
regard.

I should also issue one warning. I am not a computer scientist. My cautious optimism may
therefore merely be a reflection of my ignorance. However, I take some comfort in the
example of the human brain, which is light-years ahead of any existing computational
system in complexity and comprehensiveness. If natural selection can develop such a fine
information storage and retrieval device, I believe that there is still a long way we can
go.

THAT QUANTUM STUFF: For some additional cold water of reality, two words: quantum
mechanics. At the microscopic level necessary to scan and re-create matter in the
transporter, the laws of physics are governed by the strange and exotic laws of quantum
mechanics, whereby particles can behave like waves and waves can behave like particles. I
am not going to give a course in quantum mechanics here. However, the bottom line is as
follows: on microscopic scales, that which is being observed and that which is doing the
observation cannot be separated. To make a measurement is to alter a system, usually
forever. This simple law can be parameterized in many different ways, but is probably most
famous in the form of the Heisenberg uncertainty principle. This fundamental law, which
appears to do away with the classical notion of determinism in physics (although in fact
at a fundamental level it doesn't), divides the physical world into two sets of observable
quantities: the yin and the yang, if you like. It tells us that no matter what technology
is invented in the future, it is impossible to measure certain combinations of observables
with arbitrarily high accuracy. On microscopic scales, one might measure the position of a
particle arbitrarily
well. However, Heisenberg tells us that we then cannot know its velocity (and hence
precisely where it will be in the next instant) very well at all. Or, we might ascertain
the energy state of an atom with arbitrary precision. Yet in this case we cannot determine
exactly how long it will remain in this state. The list goes on.
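
In the standard notation of quantum mechanics, the two trade-offs just described are written

\Delta x \, \Delta p \ge \frac{\hbar}{2}, \qquad \Delta E \, \Delta t \ge \frac{\hbar}{2},

where \Delta x and \Delta p are the uncertainties in a particle's position and momentum,
\Delta E and \Delta t are the uncertainties in an atom's energy and in the time it spends in
that energy state, and \hbar is Planck's constant divided by 2\pi.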

These relations are at the heart of quantum mechanics, and they will never go away. As
long as we work on scales where the laws of quantum mechanics apply (which, as far as all
evidence indicates, is at least larger than the scale at which quantum gravitational effects
become significant, or at about 10^-33 cm), we are stuck with them.

There is a slightly flawed yet very satisfying physical argument that gives some heuristic
understanding of the uncertainty principle. Quantum mechanics endows all particles with a
wavelike behavior, and waves have one striking property: they are disturbed only when they
encounter objects larger than their wavelength (the distance between successive crests).
You have only to observe water waves in the ocean to see this behavior explicitly. A
pebble protruding from the surface of the water will have no effect on the pattern of the
surf pounding the shore. However, a large boulder will leave a region of calm water in its
wake.

So, if we want to “illuminate” an atom, that is, bounce light off it so that we can see
where it is, we have to shine light of a wavelength small enough so that it will be
disturbed by the atom. However, the laws of quantum mechanics tell us that waves of light
come in small packets, or quanta, which we call photons (as in starship “photon
torpedoes,” which in fact are not made of photons). The individual photons of each
wavelength have an energy inversely related to their wavelength. The greater the
resolution we want, the smaller the wavelength of light we must use. But the smaller the
wavelength, the larger the energy of the packets. If we bombard an atom with a high-energy
photon in order to observe it, we may ascertain exactly where the atom was when the photon
hit it, but the observation process itself, that is, hitting the atom with the photon, will
clearly transfer significant energy to the atom, thus changing its speed and direction of
motion by some amount.
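
To make the inverse relation between wavelength and energy concrete, here is a minimal
Python sketch; the 0.1-nanometer wavelength is an illustrative assumption, chosen to be
roughly the size of an atom:

# Photon energy E = h * c / wavelength: the shorter the wavelength, the harder the "kick".
h = 6.626e-34            # Planck's constant, in joule-seconds
c = 3.0e8                # speed of light, in meters per second
wavelength_m = 1e-10     # about 0.1 nm, roughly atomic size (assumed for illustration)

energy_joules = h * c / wavelength_m
energy_ev = energy_joules / 1.602e-19
print(energy_ev)         # about 12,000 electron volts, in the X-ray range,
                         # compared with the few electron volts binding an outer electron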

It is therefore impossible to resolve atoms and their energy configurations with the
accuracy necessary to re-create exactly a human pattern. Residual uncertainty in some of
the observables is inevitable. What this would mean for the accuracy of the final product
after transport is a detailed biological question I can only speculate upon.

This problem was not lost on the Star Trek writers, who were aware of the inevitable
constraints of quantum mechanics on the transporter.

Possessing something physicists can't usually call upon, namely artistic license, they
introduced “Heisenberg compensators,” which allow “quantum resolution” of objects. When an
interviewer asked the Star Trek technical consultant Michael Okuda how Heisenberg
compensators worked, he merely replied, “Very well, thank you!”

Heisenberg compensators perform another useful plot function. One may wonder, as I have,
why the transporter is not also a replicator of life-forms. After all, a replicator exists
aboard starships that allows glasses of water or wine to magically appear in each crew
member's quarters on voice command. Well, it seems that replicator technology can operate
only at “molecular-level resolution” and not “quantum resolution.” This is supposed to
explain why replication of living beings is not possible. It may also explain why the crew
continually complains that the replicator food is never quite the same as the real thing,
and why Riker, among others, prefers to cook omelets and other delicacies the
old-fashioned way.

SEEING IS BELIEVING: One last challenge to transporting, as if one more were needed. Beaming
down is hard enough. But beaming up may be even more difficult. In order to transport a
crew member back to the ship, the sensors aboard the Enterprise have to be able to spot the
crew member on the planet below. More than that, they need to scan the individual prior to
dematerialization and matter-stream transport. So the Enterprise must have a telescope
powerful enough to resolve objects on and often under a planet's
surface at atomic resolution. In fact, we are told that normal operating range for the
transporter is approximately 40,000 kilometers, or about three times the Earth's diameter.
This is the number we shall use for the following estimate.

Everyone has seen photographs of the domes of the world's great telescopes, like the Keck
telescope in Hawaii (the world's largest), or the Mt. Palomar telescope in California. Have
you ever wondered
why bigger and bigger telescopes are designed? (It is not just an obsession with bigness, as
some people, including many members of Congress, like to accuse science of.) Just as
larger accelerators are needed if we wish to probe the structure of matter on ever smaller
scales, larger telescopes are needed if we want to resolve celestial objects that are
fainter and farther away. The reasoning is simple: Because of the wave nature of light,
anytime it passes through an opening it tends to diffract, or spread out a little bit.
When the light from a distant point source goes through the telescopic lens, the image
will be spread out somewhat, so that instead of seeing a point source, you will see a
small, blurred disk of light. Now, if two point sources are closer together across the
line of sight than the size of their respective disks, it will be impossible to resolve
them as separate objects, since their disks will overlap in the observed image.
Astronomers call such disks “seeing disks.” The bigger the lens, the smaller the seeing
disk. Thus, to resolve smaller and smaller objects, telescopes must have bigger and bigger
lenses.
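
As a rough illustration of the seeing-disk argument, the smallest separation a lens of
diameter D can resolve at a distance L, using light of wavelength w, is about 1.22 w L / D
(the Rayleigh criterion). A minimal Python sketch, assuming visible light, a Keck-class
10-meter mirror, and the 40,000-kilometer transporter range quoted earlier:

# Diffraction-limited resolution: smallest separation ~ 1.22 * wavelength * distance / aperture.
wavelength_m = 5e-7      # visible light, about 500 nanometers
aperture_m = 10.0        # a Keck-class 10-meter mirror
distance_m = 4.0e7       # 40,000 kilometers, the quoted transporter range

resolution_m = 1.22 * wavelength_m * distance_m / aperture_m
print(resolution_m)      # about 2.4 meters: nowhere near atomic resolution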

There is another criterion for resolving small objects with a telescope. The wavelength of
light, or whatever radiation you use as a probe, must be smaller than the size of the
object you are trying to scan, according to the argument I gave earlier. Thus, if you want
to resolve matter on an atomic scale, which is about several billionths of a centimeter,
you must use radiation that has a wavelength of less than about one-billionth of a
centimeter. If you select electromagnetic radiation, this will require the use of either X
rays or gamma rays. Here a problem arises right away, because such radiation is harmful to
life, and therefore the atmosphere of any Class M planet will filter it out, as our own
atmosphere does. The transporter will therefore have to use nonelectromagnetic probes,
like neutrinos or gravitons. These have their own problems, but enough is enough....

