This warning echoed an earlier Jason warning, back during the Vietnam War, when Secretary of Defense Robert McNamara asked the Jasons to consider using nuclear weapons against the Ho Chi Minh Trail. The Jasons studied the issue and concluded it was not something they could recommend. Using nuclear weapons in Vietnam would encourage the Vietcong to acquire nuclear weapons from their Soviet and Chinese benefactors and to use them, the Jasons warned. This would in turn encourage terrorists in the future to use nuclear weapons.

In their 2008 study on augmented cognition and human performance, the Jason scientists also said they believed that the concept of brain control would ultimately fail because too many people in the military would have an ethical problem with it. “Such ethical considerations will appropriately limit the types of activities and applications in human performance modification that will be considered in the U.S. military,” they wrote.

But in our discussion of the Jason scientists’ impact on DARPA, Goldblatt shook his head, indicating I was wrong.

“The Jason scientists are hardly relevant anymore,” Goldblatt said. During his time at DARPA, and as of 2014, the “scientific advisory group with the most influence on DARPA,” he said, “is the DSB,” the Defense Science Board. The DSB has offices inside the Pentagon. And where the DSB finds problems, it is DARPA’s job to find solutions, Goldblatt explained. The DSB had recently studied man-machine systems, and it saw an entirely different set of problems related to human-robot interactions.

In 2012, in between the two Pentagon roadmaps on drone warfare, “Unmanned Systems Integrated Roadmap FY 2011–2036” and “Unmanned Systems Integrated Roadmap FY 2013–2038,” the DSB delivered to the secretary of defense a 125-page report titled “The Role of Autonomy in DoD Systems.” The report unambiguously calls for the Pentagon to rapidly accelerate its development of artificially intelligent weapons systems. “The Task Force has concluded that, while currently fielded unmanned systems are making positive contributions across DoD operations, autonomy technology is being underutilized as a result of material obstacles within the Department that are inhibiting the broad acceptance of autonomy,” wrote DSB chairman Paul Kaminski in a letter accompanying the report.

The primary obstacle, said the DSB, was trust—much as the Jason scientists had predicted in their report. Many individuals in the military doubted that coupling man and machine to create autonomous weapons systems was a good idea. The DSB found that resistance came from all echelons of the command structure, from field commanders to drone operators. “For commanders and operators in particular, these challenges can collectively be characterized as a lack of trust that the autonomous functions of a given system will operate as intended in all situations,” wrote the DSB. The overall problem was getting “commanders to trust that autonomous systems will not behave in a manner other than what is intended on the battlefield.”

Maybe the commanders had watched too many X-Files episodes or seen the Terminator films one too many times. Or maybe they read Department of Defense Directive 3000.09, which discusses “the probability and consequences of failure in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.” Or maybe commanders and operators want to remain men (and women), not become cyborg man-machines. But unlike the Jason scientists, the Defense Science Board advised the Pentagon to accelerate its efforts to change this attitude—to persuade commanders, operators, and warfighters to accept, and to trust, human-robot interaction.

“An area of HRI [human-robot interaction] that has received significant attention is robot ethics,” wrote the DSB. This effort, which involved internal debates on robot ethics, was supposed to foster trust between military personnel and robotic systems, the DSB noted. Instead it backfired. “While theoretically interesting, this debate on functional morality has had unfortunate consequences. It increased distrust in unmanned systems because it implies that robots will not act with bounded rationality.” The DSB advised that this attitude of distrust needed to change.

Perhaps it’s no surprise that DARPA has a program on how to manipulate trust. During the war on terror, the agency began working with the intelligence community’s own DARPA-like division, the Intelligence Advanced Research Projects Activity, or IARPA, on what it calls Narrative Networks (N2), to “develop techniques to quantify the effect of narrative on human cognition.” One scientist leading this effort, Dr. Paul Zak, insists that what DARPA and the CIA are doing with trust is a good thing. “We would all benefit if the government focused more on trusting people,” Zak told me in the fall of 2014, when I visited his laboratory at Claremont Graduate University in California. When I asked Zak if the DARPA research he was involved in was more likely being used to manipulate trust, he said he had no reason to believe that was correct.

Paul Zak is a leader in the study of neuroeconomics and morality, the neurochemical roots of making economic decisions based on trust. Zak has a Ph.D. in economics and postdoctoral training in neuroimaging from Harvard. In 2004 he made what he describes as a groundbreaking and life-changing discovery. “I discovered the brain’s moral molecule,” Zak says, “the chemical in the brain, called oxytocin, that allows man to make moral decisions [and that] morality is tied to trust.” In no time, says Zak, “all kinds of people from DARPA were asking me, ‘How do we get some of this?’” Zak also fielded interest from the CIA. For DARPA’s Narrative Networks program, Zak has been developing a method to measure how people’s brains and bodies respond when oxytocin, the “moral molecule,” is released naturally.

Researchers at the University of Bonn, not affiliated with DARPA, have taken a different approach in their studies of oxytocin. In December 2014, these researchers published a study on how the chemical can be used to “erase fear.” Lead researcher Monika Eckstein told Scientific American that in the study she administered oxytocin into the noses of sixty-two men, in hopes that their fear would dissipate. “And for the most part it did,” she said. A time might not be too far off when we live in a world in which fear can be erased.

Why is the Defense Science Board so focused on pushing robotic warfare on the Pentagon? Why force military personnel to learn to “trust” robots and to rely on autonomous robots in future warfare? Why is the erasure of fear a federal investment? The answer to it all, to every question in this book, lies at the heart of the military-industrial complex.

Unlike the Jason scientists, the majority of whom were part-time defense scientists and full-time university professors, the majority of DSB members are defense contractors. DSB chairman Paul Kaminski, who also served on President Obama’s Intelligence Advisory Board from 2009 to 2013, is a director of General Dynamics, chairman of the board of the RAND Corporation, chairman of the board of HRL (the former Hughes Research Labs), chairman of the board of Exostar, chairman and CEO of Technovation, Inc., and a trustee and advisor to both the Johns Hopkins Applied Physics Laboratory and MIT’s Lincoln Laboratory—all organizations that build robotic weapons systems for DARPA and for the Pentagon. Kaminski, who also serves as a paid consultant to the Office of the Secretary of Defense, is but one example. His fellow DSB members, roughly fifty people in all, serve on the boards of defense contracting giants and laboratories including Raytheon, Boeing, General Dynamics, Northrop Grumman, Bechtel, Aerospace Corporation, Texas Instruments, IBM, Lawrence Livermore National Laboratory, Sandia National Laboratories, and others.

One might look at DARPA’s history and say that part of its role—even its entire role—is to maintain a U.S. advantage in military technology, in perpetuity. Former DARPA director Eberhardt Rechtin clearly stated this conundrum of advanced technology warfare when he told Congress, back in 1970, that it was necessary to accept the “chicken-and-egg problem” DARPA will always face: the agency must forever conduct “pre-requirement research,” because by the time a technological need arises on the battlefield, it becomes apparent, too late, that the research should already have been done. DARPA’s contractors are vital parts of a system that allows the Pentagon to stay ahead of its needs, and to steer revolutions in military affairs. To dominate in future battles, never to be caught off guard.

One might also look at DARPA’s history, and its future, and say that at some point the technology may itself outstrip DARPA as it is unleashed into the world. This is a grave concern of many esteemed scientists and engineers.

A question to ask might be, how close to the line can we get and still control what we create?

Another question might be, how much of the race for this technological upper hand is now rooted in the reality that corporations are very much invested in keeping DARPA’s “chicken-and-egg” conundrum alive?

This is what President Eisenhower warned Americans to fear when he spoke of the perils of the military-industrial complex in his farewell speech in January 1961. “We have been compelled to create a permanent armaments industry of vast proportions,” the president said.

In the years since, the armaments industry has only grown bigger by the decade. If DARPA is the Pentagon’s brain, defense contractors are its beating heart. President Eisenhower said that the only way Americans could keep defense contractors in check was through knowledge. “Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals, so that security and liberty may prosper together.”

Anything less, and civilians cede control of their own destiny.

The programs written about in this book are all unclassified. DARPA’s highest-risk, highest-payoff programs remain secret until they are unveiled on the battlefield. Given how far along DARPA is in its quest for hunter-killer robots, and for a way to couple man with machine, perhaps the most urgent question of all is whether civilians already have.

Can military technology be stopped? Should it be? DARPA’s original autonomous robot designs were developed decades ago, in 1983, as part of its Smart Weapons Program. The program was called “Killer Robots,” and its motto offered prescient words: “The battlefield is no place for human beings.”

This book begins with scientists testing a weapon that at least some of them believed was an “evil thing.” In creating the hydrogen bomb, scientists engineered a weapon against which there is no defense. With regard to the thousands of hydrogen bombs in existence today, the mighty U.S. military relies on wishful optimism—hope that the civilization-destroyer is never unleashed.

This book ends with scientists inside the Pentagon working to create autonomous weapons systems, and with scientists outside the Pentagon working to spread the idea that these weapons systems are inherently evil things: that artificially intelligent hunter-killer robots can and will outsmart their human creators, and that against them there will be no defense.

There is a perilous distinction to call attention to: when the hydrogen bomb was being engineered, the military-industrial complex—led by defense contractors, academics, and industrialists—was just beginning to exert considerable control over the Pentagon. Today that control is omnipotent.

Another difference between the creation of the hydrogen bomb in the early 1950s and the accelerating development of hunter-killer robots today is that the decision to engineer the hydrogen bomb was made in secret, while the decision to accelerate hunter-killer robots, though not widely known, is not secret. In that sense, destiny is being decided right now.

The 15-megaton Castle Bravo thermonuclear bomb, exploded in the Marshall Islands in 1954, was the largest nuclear weapon ever detonated by the United States. If unleashed on the eastern seaboard today, it would kill roughly 20 million people. With this weapon, authorized to proceed in secret, came the certainty of the military-industrial complex and the birth of DARPA. (U.S. Department of Energy)

An elite group of weapons engineers rode out the Castle Bravo thermonuclear explosion from inside this bunker, code-named Station 70, just nineteen miles from ground zero. (The National Archives at Riverside)
