But What If We're Wrong?
by Chuck Klosterman
But What If We're Right?

When John Horgan published his book The End of Science in 1996, he'd been a staff writer for Scientific American for ten years. A year later, he was fired from the magazine. According to Horgan, his employers suggested his book had caused a downturn in advertising revenue. This claim seems implausible, until you hear Horgan's own description of what his book proposed.

“My argument in The End of Science is that science is a victim of its own success,” he tells me from his home in Hoboken. “Science discovers certain things, and then it has to go on to the next thing. So we have heliocentrism and the discovery of gravity and the fundamental forces, atoms and electrons and all that shit, evolution, and DNA-based genetics. But then we get to the frontier of science, where there is still a lot left to discover. And some of those things we may never discover. And a lot of the things we are going to discover are just elaborations on what we discovered in the past. They're not that exciting. My belief is that the prospect for really surprising insights into nature is over, and the hope for future revolutionary discoveries is pretty much done. I became a science journalist because I thought science was the coolest thing that humans have ever done. So if you believe the most important thing about life is the pursuit of knowledge, what does it mean if that's over?”

It's now been twenty years since the release of The End of Science. Horgan has written four additional books and serves as the director of the Center for Science Writings at the Stevens Institute of Technology (he's also, somewhat interestingly, returned to Scientific American as a blogger). The central premise of his book—that the big questions about the natural world have been mostly solved, and that the really big questions that remain are probably impossible to answer—is still marginalized as either cynical or pragmatic, depending on the reader's point of reference. But nothing has happened since 1996 to prove Horgan wrong, unless you count finding water on Mars. Granted, twenty years is not that long, particularly if you're a scientist. Still, it's remarkable how unchanged the conversational landscape has remained. Horgan's most compelling interview in The End of Science was with the relatively reclusive Edward Witten, a Princeton professor broadly viewed as the greatest living theoretical physicist (or at least the “smartest,” according to a 2004 issue of Time magazine). One of the first things Witten noted in that interview was that Horgan had been journalistically irresponsible for writing a profile on Thomas Kuhn, with Witten employing much of the same logic Neil deGrasse Tyson used when he criticized Kuhn in our 2014 conversation for this book.

Now, there's at least one significant difference between those two interviews: I was asking if it's possible that science might be wrong. Horgan was proposing science has been so overwhelmingly right that all that remains are tertiary details. Still, both tracts present the potential for an awkward realization. If the answer to my question is no (or if the answer to Horgan's question is yes), society is faced with a strange new scenario: the possibility that our current view of reality is the final view of reality, and that what we believe today is what we will believe forever.

“One of the exercises I always give my [Stevens Institute] students is an essay assignment,” Horgan says. “The question is posed like this: ‘Will there be a time in our future when our current theories seem as dumb as Aristotle's theories appear to us now?' And the students are always divided. Many of them have already been infected by postmodernism and believe that knowledge is socially constructed, and they believe we'll have intellectual revolutions forever. You even hear that kind of rhetoric from mainstream science popularizers, who are always talking about science as this endless frontier. And I just think that's childish. It's like thinking that our exploration of the Earth is still open-ended, and that we might still find the lost city of Atlantis or dinosaurs living in the center of the planet. The more we discover, the less there is to discover later. Now, to a lot of people, that sounds like a naïve way to think about science. There was a time when it once seemed naïve to me. But it's really just a consequence of the success of science itself. Our era is in no way comparable to Aristotle's era.”

What Horgan proposes is mildly contradictory; it compliments and criticizes science at the same time. He is, like Witten and Tyson, blasting Kuhn's relativist philosophy and insisting that some knowledge is real and undeniable. But he's also saying the acquisition of such knowledge is inherently limited, and we've essentially reached that limit, and that a great deal of modern scientific inquiry is just a form of careerism that doesn't move the cerebral dial (this is a little like what Kuhn referred to as “normal science,” but without the paradigm shift). “Science will follow the path already trodden by literature, art, music, philosophy,” Horgan writes. “It will become more introspective, subjective, diffuse, and obsessed with its own methods.” In essence, it will become a perpetual argument over a non-negotiable reality. And like all speculative realities, it seems like this could be amazingly good or amazingly bad.

“By the time I finally finished writing The End of Science, I'd concluded that people don't give a shit about science,” Horgan says. “They don't give a shit about quantum mechanics or the Big Bang. As a mass society, our interest in those subjects is trivial. People are much more interested in making money, finding love, and attaining status and prestige. So I'm not really sure if a post-science world would be any different than the world of today.”

Neutrality: the craziest of all possible outcomes.

[2]
When I spoke with Horgan, he'd recently completed his (considerably less controversial) fifth book, The End of War, a treatise arguing against the assumption that war is an inescapable component of human nature. The embryo for this idea came from a conversation he'd had two decades prior, conducted while working on The End of Science. It was an interview with Francis Fukuyama, the political scientist best known for his 1989 essay “The End of History?” The title of the essay is deceptive, since Fukuyama was mostly asserting that liberal capitalist democracies were going to take over the world. It was an economic argument that (thus far) has not happened. But what specifically appalled Horgan was Fukuyama's assertion about how a problem-free society would operate. Fukuyama believed that once mankind eliminated all its problems, it would start waging wars against itself for no reason, almost out of boredom. “That kind of thinking comes from a kind of crude determinism,” Horgan insists. “It's the belief that what has always been in the past must always be in the future. To me, that's a foolish position.”

The level to which you agree with Horgan on this point reflects your level of optimism about human nature (and Horgan freely admits some of his ideas could be classified as “traditionally hippie-ish”). But it can be securely argued that Fukuyama's perspective is much more common, particularly among the kind of people who produce dystopic sci-fi movies. Whether it's Avengers: Age of Ultron, The Matrix, the entire Terminator franchise, or even a film as technologically primitive as War Games, a predictable theme inexorably emerges: The moment machines become self-aware, they will try to destroy people. What's latently disturbing about this plot device is the cynicism of the logic. Our assumption is that computers will only act rationally. If the ensuing assumption is that human-built machines would immediately try to kill all the humans, it means that doing so must be the most rational decision possible. And since this plot device was created by humans, the creators must fractionally believe this, too.

On the other end of this speculatory scale—or on the same end, if you're an especially gloomy motherfucker—are proponents of the Singularity, a techno-social evolution so unimaginable that attempting to visualize what it would be like is almost a waste of time. The Singularity is a hypothetical super-jump in the field of artificial intelligence, rendering our reliance on “biological intelligence” obsolete, pushing us into a shared technological realm so advanced that it will be unrecognizable from the world of today. The best-known advocate of this proposition, futurist Ray Kurzweil, suggests that this could happen as soon as the year 2045, based on an exponential growth model. But that is hard to accept. Everyone agrees that Kurzweil is a genius and that his model makes mathematical sense, but no man truly believes this is going to happen in his own lifetime (sans a handful of people who are already living their lives very, very, very differently). It must also be noted that Kurzweil initially claimed this event was coming in 2028, so the inception of the Singularity might be a little like the release of Chinese Democracy.

Even compared with Bostrom's simulation hypothesis or the Phantom Time conspiracy, the premise of the Singularity is so daunting that it can't reasonably be considered without framing it as an impossibility. The theory's most startling detail involves the option of mapping and downloading the complete content of a human brain onto a collective server, thus achieving universal immortality—we could all live forever, inside a mass virtual universe, without the limitations of our physical bodies (Kurzweil openly aspires to create an avatar of his long-dead father, using scraps of the deceased patriarch's DNA and exhaustive notes about his father's life). The parts of our brain that generate visceral sensations could be digitally manipulated to make it feel exactly as if we were still alive. This, quite obviously, generates unfathomable theological and metaphysical quandaries. But even its most practical aspects are convoluted and open-ended. If we download the totality of our minds onto the Internet, they—we—would effectively become the Internet itself. Our brain avatars could automatically access all the information that exists in the virtual world, so we would all know everything there is to know.

But I suppose we have a manual version of this already.

[3]
I was born in 1972, and—because I ended up working in the media—I feel exceedingly fortunate about the timing of that event. It allowed me to have an experience that is not exactly unique, but that will never again be replicated: I started my professional career in a world where there was (essentially) no Internet at all, and I'll end my professional career in a world where the Internet will be (essentially) the only thing that exists. When I showed up for my first day of newspaper work in the summer of '94, there was no Internet in the building, along with an institutional belief that this would be a stupid thing to want. If I aspired to send an e-mail, I had to go to the public library across the street and wait for the one computer that was connected to a modem (and even that wasn't an option until 1995). From a journalistic perspective, the functional disparity between that bygone era and the one we now inhabit is vast and quirky—I sometimes made more phone calls in one morning than I currently make in two months. But those evolving practicalities were things we noticed as they occurred. The amplification of available information and the increase in communication speed was obvious to everyone. We talked about it constantly. What was harder to recognize was how the Internet slowly reinvented the way people thought about everything, including those things that have no relationship to the Internet whatsoever.

In his autobiography Chronicles, Bob Dylan (kind of) explains his motivation for performing extremely long songs like “Tom Joad,” a track with sixteen verses. His reasoning was that it's simply enriching to memorize complicated things. Born in 1941, Dylan valued rote memorization, a proficiency that had been mostly eliminated by the time I attended grade school in the eighties (the only long passages I was forced to memorize verbatim were the preamble to the Constitution, the Gettysburg Address, and a whole bunch of prayers). Still, for the first twenty-five years of my life, the concept of intelligence was intimately connected to broad-spectrum memory. If I was having an argument with a much older person about the 1970 Kent State shootings, I'd generally have to defer to her analysis, based on the justifiable fact that she was alive when it occurred and I was not. My only alternative was to read a bunch of books (or maybe watch a documentary) about the shooting and consciously retain whatever I learned from that research, since I wouldn't be able to easily access the data again. It was also assumed that—anecdotally, speaking off the cuff—neither party would be 100 percent correct about every arcane detail of the shooting, but that certain key details mattered more than others. So a smart person had a generalized, autodidactic, imperfect sense of history. And there was a circular logic to this: The importance of any given memory was validated by the fact that someone remembered it at all.

But then the Internet started to collect and index everything, including opinions and reviews and other subjective non-facts. This happened Hemingway-style: gradually (I wrote most of my first book in 1999 and the Internet was no help at all) and then suddenly (that book somehow had its own Wikipedia page by 2005). During the last half of the nineties, the Internet still felt highly segregated—to a mainstream consumer, it was hard to see the ideological relationship between limitless porn and fantasy football and Napster and the eradication of travel agents. What unified that diaspora was the rise of blogging, spawning what's now recognized as the “voice” of the Internet. Yet that voice is only half the equation; the other half is the mentality that came along with it. The first successful groundswell of bloggers came from multiple social classes and multiple subcultures. As a collective, they were impossible to define. But they did have one undeniable thing in common: They were, almost by definition, early adopters of technology. They were into the Internet before most people cared what it was. And in most cases, this interest in early adoption was not restricted to computers. These were the kind of people who liked grunge music in 1989. These were the kind of people who subscribed to Ray Gun magazine and made a point of mentioning how they started watching Seinfeld when it was called The Seinfeld Chronicles. These were the kind of people who wore a Premier League jersey to the theatrical premiere of Donnie Darko. These are consumers who self-identify as being the first person to know about something (often for the sake of coolness, but just as often because that's the shit they're legitimately into). It's integral to their sensibility. And the rippling ramifications of that sensibility are huge.
