The Pentagon's Brain


Author: Annie Jacobsen


“At the heart of limb regeneration is evolution,” Dr. Gardiner adds. What his wife is pointing out, he says, is that “at the heart of genetics is diversity.”

“Some people make mega-scars,” says Dr. Bryant. “The scars can be bigger than the wound. If you cut the scar tissue off, it grows back. There is the same evidence at the other end of the scarring spectrum. Some people produce scars that can go away.”

Dr. Gardiner suggests looking at cancer research as an analogy. “Cancer equals our bodies interacting with the environment,” he says. “Cancer shows us we have remarkable regenerative ability. The pathways that drive cancer are the same pathways that cause regeneration. In the early days, no one had any idea about cancer. There was one cancer. Then along came the idea of ‘cancer-causing’ carcinogens. Well, we have found salamanders are very resistant to cancer. Inject a carcinogen into a salamander and it regulates the growth and turns it into an extra limb.”

“Where is this leading?” I ask.

“We are driving our biology toward immortality,” Dr. Gardiner says. “Or at least toward the fountain of youth.”

In April 2014, scientists in the United States and Mexico announced they had successfully grown a complex organ, a human uterus, from tissue cells, in a lab. And in England, that same month, at a North London hospital, scientists announced they had grown noses, ears, blood vessels, and windpipes in a laboratory as they attempt to make body parts using stem cells. Scientists at Maastricht University, in Holland, have produced laboratory-grown beef burgers, grown in vitro from cattle stem cells, which food tasters say taste “close to meat.”

“Can science go too far?” I ask Dr. Gardiner and Dr. Bryant.

“The same biotechnology will allow scientists to clone humans,” says Dr. Gardiner.

“Do you think the Defense Department will begin human cloning research?” I ask.

“Ultimately, it needs to be a policy decision,” Gardiner says.

In 2005 the United Nations voted to adopt the Declaration on Human Cloning, prohibiting “all forms of human cloning inasmuch as they are incompatible with human dignity and the protection of human life.” But in the United States there is currently no federal policy banning the practice. The Human Cloning Prohibition Act of 2007 (H.R. 2560) did not pass. So the Defense Department could be cloning now. And while neither Dr. Bryant nor Dr. Gardiner has the answer to that question, we agree that what is possible in science is almost always tried by scientists.

“These are discussions that need to be had,” Dr. Gardiner says.

In the twenty-first-century world of science, almost anything can be done. But should it be done? Who decides? How do we know what is wise and what is unwise?

“An informed public is necessary,” Dr. Bryant says. “The public must stay informed.”

But for the public to stay informed, the public has to be informed. Dr. Bryant and Dr. Gardiner’s program was never classified. They worked for DARPA for four years, then both parties amiably moved on. What DARPA is doing with the limb regeneration science, DARPA gets to decide. If DARPA is working on a cloning program, that program is classified, and the public will be informed only in the future, if at all.

If human cloning is possible, and therefore inevitable, should American scientists be the first to achieve this milestone, with Pentagon funding and military application in mind? If artificial intelligence is possible, is it therefore inevitable?

Another way to ask, from a DARPA frame of mind: Were Russia or China or South Korea or India or Iran to present the world with the first human clone, or the first artificially intelligent machine, would that be considered a Sputnik-like surprise?

DARPA has always sought the technological and military edge, leaving observers to debate the line between militarily useful scientific progress and pushing science too far. What is right and what is wrong?

“Look at Stephen Hawking,” says Dr. Bryant.

Hawking, a theoretical physicist and cosmologist, is considered one of the smartest people on the planet. In 1963 he was diagnosed with motor neuron disease and given two years to live. He is still alive in 2015. Although Hawking is paralyzed, he has had a remarkably full life in the more than fifty years since, working, writing books, and communicating through a speech-generating device. Hawking is a proponent of cloning. “The fuss about cloning is rather silly,” he says. “I can’t see any essential distinction between cloning and producing brothers and sisters in the time-honored way.” But Hawking believes that the quest for artificial intelligence is a dangerous idea. That it could be man’s “worst mistake in history,” and perhaps his last. In 2014 Hawking and a group of colleagues warned against the risks posed by artificially intelligent machines. “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Stephen Hawking is far from alone in his warnings against artificial intelligence. The physicist and artificial intelligence expert Steve Omohundro believes that “these [autonomous] systems are likely to behave in anti-social and harmful ways unless they are very carefully designed.” In Geneva in 2013, the United Nations held its first-ever convention on lethal autonomous weapons systems, or hunter-killer drones. Over four days, the 117-member coalition debated whether or not these kinds of robotic systems should be internationally outlawed. Testifying in front of the United Nations, Noel Sharkey, a world-renowned expert on robotics and artificial intelligence, said, “Weapons systems should not be allowed to autonomously select their own human targets and engage them with lethal force.” To coincide with the UN convention, Human Rights Watch and the Harvard Law School International Human Rights Clinic released a report called “Losing Humanity: The Case Against Killer Robots.”

“Fully autonomous weapons threaten to violate the foundational rights to life,” the authors wrote, because robotic killing machines “undermine the underlying principles of human dignity.” Stephen Goose, Arms Division director at Human Rights Watch, said, “Giving machines the power to decide who lives and dies on the battlefield would take technology too far.”

In an interview for this book, Noel Sharkey relayed a list of potential robot errors he believes are far too serious to ignore, including “human-machine interaction failures, software coding errors, malfunctions, communication degradation, enemy cyber-attacks,” and more. “I believe there is a line that must not be crossed,” Sharkey says. “Robots should not be given the authority to kill humans.”

Can the push to create hunter-killer robots be stopped? Steve Omohundro believes that “an autonomous weapons arms race is already taking place,” because “military and economic pressures are driving the rapid development of autonomous systems.” Stephen Hawking, Noel Sharkey, and Steve Omohundro are three among a growing population who believe that humanity is standing on a precipice. DARPA’s goal is to create and prevent strategic surprise. But what if the ultimate endgame is humanity’s loss? What if, in trying to stave off foreign military competitors, DARPA creates an unexpected competitor that becomes its own worst enemy? A mechanical rival born of powerful science with intelligence that quickly becomes superior to our own. An opponent that cannot be stopped, like a runaway train. What if the twenty-first century becomes the last time in history when humans have no real competition but other humans?

In a world ruled by science and technology, it is not necessarily the fittest but rather the smartest that survive. DARPA program managers like to say that DARPA science is “science fact, not science fiction.” What happens when these two concepts fuse?

CHAPTER TWENTY-SIX
The Pentagon’s Brain

In April 2014 I interviewed Charles H. Townes, the Nobel Prize–winning inventor of the laser. When we spoke, Professor Townes was just about to turn ninety-nine years old. Lucid and articulate, Townes was still keeping office hours at the University of California, Berkeley, still writing papers, and still granting reporters’ requests. I felt delighted to be interviewing him.

Two things we discussed remain indelible. Charles Townes told me that once, long ago, he was sharing his idea for the laser with John von Neumann and that von Neumann told him his idea wouldn’t work.

“What did you think about that?” I asked Townes.

“If you’re going to do anything new,” he said, “you have to disregard criticism. Most people are against new ideas. They think, ‘If I didn’t think of it, it won’t work.’ Inevitably, people doubt you. You persevere anyway. That’s what you do.” And that was exactly what Charles Townes did. The laser is considered one of the most significant scientific inventions of the modern world.

The second profound thing Charles Townes said to me, and I mentioned it earlier in this book, was that he was personally inspired to invent the laser after reading the science-fiction novel The Garin Death Ray, written by Alexei Tolstoi in 1926. It is remarkable to think how powerful a force science fiction can be. That fantastic, seemingly impossible ideas can inspire people like Charles Townes to invent things that totally transform the real world.

This notion that science fiction can profoundly impact reality remains especially interesting to me because in researching and reporting this book, I learned that during the war on terror, the Pentagon began seeking ideas from science-fiction writers, most notably a civilian organization called the SIGMA group. Its founder, Dr. Arlan Andrews, says that the core idea behind forming the group was to save the world from terrorism, and to this end the SIGMA group started offering “futurism consulting” to the Pentagon and the White House. The group’s motto is “Science Fiction in the National Interest.”

Those responsible for safeguarding the nation “need to think of crazy ideas,” says Dr. Andrews, and the SIGMA group helps the Pentagon in this effort, he says. “Many of us [in SIGMA] have earned Ph.D.’s in high tech fields, and some presently hold Federal and defense industry positions.” Andrews worked as a White House science officer under President George H. W. Bush, and before that at the nation’s nuclear weapons production facility, Sandia National Laboratories, in New Mexico. Of SIGMA members he says, “Each [of us] is an accomplished science fiction author who has postulated new technologies, new problems and new societies, explaining the possible science and speculating about the effects on the human race.”

One of the SIGMA group members is Lieutenant Colonel Peter Garretson, a transformation strategist at the Pentagon. In the spring of 2014 Garretson arranged for me to come to the Pentagon with two colleagues, Chris Carter and Gale Anne Hurd. Chris Carter created The X-Files, one of the most popular science-fiction television dramas of all time. The X-Files character the Cigarette Smoking Man is a quintessential villain who lives at the center of government conspiracies. Gale Anne Hurd co-wrote The Terminator, a science-fiction classic about a cyborg assassin sent back across time to save the world from a malevolent artificially intelligent machine called Skynet. In The Terminator, Skynet becomes smarter than the defense scientists who created it and initiates a nuclear war to achieve machine supremacy and rid the earth of humankind.

Carter and Hurd have joined me on a reporting trip to the Pentagon not to offer any kind of futurism consulting but to listen, discuss, and observe. It’s a warm spring day in 2014 when we arrive at the Pentagon. The five-sided, five-floored, 6.5-million-square-foot structure looms like a colossus. We pass through security and check in. Security protocols require that we are escorted everywhere we go, including the bathroom. We head into the Pentagon courtyard for lunch, with its lawn, tall trees, and wooden picnic tables. Garretson’s colleague Lieutenant Colonel Julian Chesnutt, with the Defense Intelligence Agency, Defense Clandestine Service, tells us a story about the building at the center of the Pentagon courtyard, which is now a food court but used to be a hot dog stand. Chesnutt explains that during the height of the Cold War, when satellite technology first came into being, Soviet analysts monitoring the Pentagon became convinced that the building was the entrance to an underground facility, like a nuclear missile silo. The analysts could find no other explanation as to why thousands of people entered and exited this tiny building, all day, every day. Apparently the Soviets never figured it out, and the hot dog stand remained a target throughout the Cold War—along with the rest of the Pentagon. It’s a great anecdote and makes one wonder what really is underneath the Pentagon, which is rumored to have multiple stories belowground.

During lunch, seated at a long picnic table, we engage in a thought-provoking conversation with a group of Pentagon “future thinkers” about science fact and science fiction. These defense intellectuals, many of whom have Ph.D.s, come from various military services and range in age from their late twenties to early sixties. Some spent time in the war theater in Iraq, others in Afghanistan. The enthusiasm among these futurologists is palpable, their ideas are provocative, and their commitment to national security is unambiguous. These are among the brains at the Pentagon that make the future happen.

After lunch we are taken to the E-Ring, home to the Joint Chiefs of Staff and the secretary of defense. The maze-like corridors buzz with fluorescent lighting as we pass through scores of security doors and travel up and down multiple flights of stairs. Finally, we arrive in the hallway outside the office of the secretary of defense. Hanging on the corridor walls are large life-sized oil portraits of the nation’s former defense secretaries. I see the five past secretaries of defense portrayed in this book. Neil McElroy asked Congress to approve the creation of DARPA, which he promised would steward America’s vast weapons systems of the future, and it has. Robert McNamara believed that intellect and systems analysis could win wars, and peopled the upper echelons of the Pentagon with whiz kids to accomplish this goal. Harold Brown, hydrogen bomb weapons engineer, became the first physicist secretary of defense and gave America its offset strategy—the ability of commanders to fight wars from a continent’s distance away. Dick Cheney demonstrated to the world that overwhelming force could accomplish certain goals. Donald Rumsfeld introduced the world to network-centric warfare.

As we walk the corridors looking at artwork and photographs of weapons systems adorning the Pentagon’s walls, our group expands, as does the conversation about science fact and science fiction. One officer says he has a poster of the Cigarette Smoking Man hanging on his office wall. Another says that for an office social event, his defense group made baseball caps with Skynet written across the front. Science fiction is a powerful force. Because of the fictional work of Carter and Hurd, many sound-minded people take seriously at least two significant science-fiction concepts: that (as in The Terminator) artificially intelligent machines could potentially outsmart their human creators and start a nuclear war, and that (as in The X-Files) there are forces inside the government that keep certain truths secret. As a reporter, I have learned that these concepts also exist in the real world. Artificially intelligent hunter-killer robots present unparalleled potential dangers, and the U.S. government keeps dark secrets in the name of national security. I’ve also found that some of the most powerful Pentagon secrets and strategies are hidden in plain sight.

The day after the Pentagon reporting trip, I went to see Michael Goldblatt, the man who pioneered many of DARPA’s super-soldier programs. Goldblatt, a scientist and venture capitalist, ran DARPA’s Defense Sciences Office from 1999 until 2004, and oversaw program efforts to create warfighters who are a mentally and physically superior breed. Goldblatt asked me to come to his home for our interview, and as a car took me from my hotel room in Pentagon City out to where Goldblatt lives in the suburbs, the trip took on the feel of an X-Files episode. Traveling through the woodsy environs of McLean, Virginia, down Dolley Madison Boulevard (Dolley’s husband, James Madison, called war the dreaded enemy of liberty), we passed by the entrance to CIA headquarters, Langley, and turned in to a nearby residential neighborhood.

Inside his home, Michael Goldblatt and I discussed transhumanism, DARPA’s efforts to augment, or increase, the performance of warfighters with machines, pharmaceuticals, and other means. Under Goldblatt’s tenure, unclassified programs included Persistence in Combat, Mechanically Dominant Soldier, and Continually Assisted Performance. These programs focused on augmenting the physical body of warfighters, but today I am most interested in the DARPA programs that focus on augmenting the human brain. Not just the brains of brain-wounded warriors but those of healthy soldiers as well. DARPA calls this area of research Augmented Cognition, or AugCog. The concept of AugCog sits at the scientific frontier of human-machine interface, or what the Pentagon calls Human-Robot Interaction (HRI). In DARPA’s robo-rat and Manduca sexta moth programs, scientists created animal-machine biohybrids that are steerable by remote control. Through Augmented Cognition programs, DARPA is creating human-machine biohybrids, or what we might call cyborgs.

DARPA has been researching brain-computer interfaces (BCI) since the 1970s, but it took twenty-first-century advances in nanobiotechnology for BCI to really break new ground. DARPA’s AugCog efforts gained momentum during Goldblatt’s tenure. By 2004, DARPA’s stated goal was to develop “orders of magnitude increases in available, net-thinking power resulting from linked human-machine dyads.” In 2007, in a solicitation for new programs, DARPA stated, “Human brain activity must be integrated with technology.” Several unclassified programs came about as a result, including Cognitive Technology Threat Warning System (CT2WS) and Neurotechnology for Intelligence Analysts (NIA). Both programs use “non-invasive technology” to accelerate human capacity to detect targets. The CT2WS program was designed for soldiers looking for targets on the battlefield and for intelligence operatives conducting surveillance operations in hostile environments. The NIA was designed for imagery analysts looking for targets in satellite photographs. The program participants wear a “wireless EEG [electroencephalography] acquisition cap,” also called a headset, which jolts their brains with electrical pulses to increase cognitive functioning. DARPA scientists have found that by using this “non-invasive, brain-computer interface,” they are able to accelerate human cognition exponentially, to make soldiers and spies think faster and more accurately. The problem, according to DARPA program managers, is that “these devices are often cumbersome to apply and unappealing to the user, given the wetness or residue that remains on the user’s scalp and hair following removal of the headset.” A brain implant would be far more effective.
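The basic signal-processing idea behind EEG-based target detection of this kind can be sketched in a toy simulation. To be clear, everything below is invented for illustration, not DARPA's actual algorithms or data: a target-evoked brain response is too faint to see on any single noisy trial, but averaging across many trials makes it stand out.

```python
import math
import random

random.seed(0)

SAMPLES = 50          # time points per simulated EEG epoch
EVOKED_PEAK = 30      # sample index where the evoked response peaks

def simulate_trial(is_target):
    """One noisy EEG epoch; target trials carry a small evoked bump."""
    trial = [random.gauss(0.0, 1.0) for _ in range(SAMPLES)]
    if is_target:
        for t in range(SAMPLES):
            # Gaussian-shaped evoked response centered on EVOKED_PEAK,
            # with amplitude comparable to the noise on a single trial
            trial[t] += 2.0 * math.exp(-((t - EVOKED_PEAK) ** 2) / 20.0)
    return trial

def average_trials(trials):
    """Average the trials sample by sample; noise cancels, signal stays."""
    n = len(trials)
    return [sum(tr[t] for tr in trials) / n for t in range(SAMPLES)]

target_avg = average_trials([simulate_trial(True) for _ in range(200)])
nontarget_avg = average_trials([simulate_trial(False) for _ in range(200)])

# The averaged target waveform peaks near the evoked-response latency,
# while the non-target average stays near zero.
peak_index = max(range(SAMPLES), key=lambda t: target_avg[t])
print(peak_index, max(target_avg), max(nontarget_avg))
```

With 200 trials, the noise on the average shrinks by a factor of roughly fourteen (the square root of 200), so the bump that was invisible in any one trial dominates the averaged waveform. Real systems refine this with per-trial classifiers, but the averaging intuition is the starting point.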

After Goldblatt left the agency, DARPA researchers writing in scientific journals identified a series of “groundbreaking advances” in “Man/Machine Systems.” In 2014 DARPA program managers stated that “the future of brain-computer interface technologies” depended on merging all the technologies of DARPA’s brain programs, the noninvasive and the invasive ones, specifically citing RAM, REPAIR, REMIND, and SUBNETS. Was DARPA conducting what were, in essence, intelligence, surveillance, and reconnaissance missions inside the human brain? Was this the long-sought information that would provide DARPA scientists with the key to artificial intelligence? “With respect to the President’s BRAIN initiative,” write DARPA program managers, “novel BCI [brain-computer interface] technologies are needed that not only extend what information can be extracted from the brain, but also who is able to conduct and participate in those studies.”

For decades scientists have been trying to create artificially intelligent machines, without success. AI scientists keep hitting the same wall. To date, computers can only obey commands, following rules set forth by software algorithms. I wondered if the transhumanism programs that Michael Goldblatt pioneered at DARPA would allow the agency to tear down this wall. Were DARPA’s brain-computer interface programs the missing link?

Goldblatt chuckled. He’d left DARPA a decade ago, he said. He could discuss only unclassified programs. But he pointed me in a revelatory direction. This came up when we were discussing the Jason scientists and a report they published in 2008. In this report, titled “Human Performance,” in a section called “Brain Computer Interface,” the Jasons addressed noninvasive interfaces including DARPA’s CT2WS and NIA programs. Using “electromagnetic signals to detect the combined activity of many millions of neurons and synapses” (in other words, the EEG cap) was effective in augmenting cognition, the Jasons noted, but the information gleaned was “noisy and degraded.” The more invasive programs would produce far more specific results, they observed, particularly programs in which “a micro-electrode array [is] implanted into the cortex with connections to a ‘feedthrough’ pedestal on the skull.” The Jason scientists wrote that these chip-in-the-brain programs would indeed substantially improve “the desired outcome,” which could allow “predictable, high quality brain-control to become a reality.”

So there it was, hidden in plain sight. If DARPA could master “high quality brain-control,” the possibilities for man-machine systems and brain-computer interface would open wide. The wall would come down. The applications in hunter-killer drone warfare would potentially be unbridled. The brain chip was the missing link.

But even the Jasons felt it was important to issue, along with this idea, a stern warning. “An adversary might use invasive interfaces in military applications,” they wrote. “An extreme example would be remote guidance or control of a human being.” And for this reason, the Jason scientists cautioned the Pentagon not to pursue this area, at least not without a serious ethics debate. “The brain machine interface excites the imagination in its potential (good and evil) application to modify human performance,” but it also raises questions regarding “potential for abuses in carrying out such research,” the Jasons wrote. In summary, the Jason scientists said that creating human cyborgs able to be brain-controlled was not something they would recommend.
