Medusa's Gaze and Vampire's Bite: The Science of Monsters

By Matt Kaplan

Such discussion of the psychological journeys of Frankenstein's creature and Dren from innocents to killers might seem out of place in an analysis of the role science plays in making them monsters, but it is not. To get at the fears underlying these creations, it is crucial to realize that the biological work responsible for making their horrific forms believable and terrible to behold plays only a partial role in their transformation into evil beings. The science wielded by Dr. Frankenstein and the researchers in Splice creates only hideous creatures. The infusion of evil into them depends upon their interactions with humans. In Frankenstein, the evil arises after Dr. Frankenstein's horrified withdrawal from his own creation, the violent response of the family in the countryside, and the heated words shouted by the young brother whom the monster kidnaps. With Dren, the birth of evil stems from her imprisonment and ultimate treatment as nothing more than a specimen.

Thus, there appear to be two fears crucial to the formation of these monster stories. There is the fear of what horrific things science is capable of creating, and then there is the more subtle fear of society’s inability to recognize a creature’s needs and react appropriately such that the creature does not become so wounded that it turns against humanity. There is also the fact that both Frankenstein’s monster and Dren are physically very powerful. We all know that power corrupts, but in recent years researchers have discovered that giving a person a combination of high power and low social status creates a particularly horrific psychological effect.

In a study conducted by Nathanael Fast at the University of Southern California and published in 2011 in the Journal of Experimental Social Psychology, 213 participants were randomly assigned to one of four situations that manipulated their status and power. All participants were informed that they were taking part in a study on virtual offices and would be interacting with, but not actually meeting, a fellow student who worked in the same fictional office. Participants were then assigned either to the high-status role of "idea creator" and asked to generate important ideas, or to the low-status role of "worker" and tasked with menial jobs like checking for typos.

To manipulate power, participants were told there would be a draw for a $50 prize at the end of the study and that, regardless of their role, each participant would be able to dictate which activities his or her partner had to engage in to qualify for the draw. Participants given a sense of high power were further informed that their partner would have no such control over them. In contrast, low-power participants were advised that while they could determine the tasks their partner had to complete, their partner could remove their name from the draw at will.

Participants were asked to select one or more tasks for their partner to perform from a list provided by the researchers. Some of these tasks were rated by a separate pool of participants as deeply demeaning, such as requiring participants to “say ‘I am filthy’ five times” or “bark like a dog three times,” while others were deemed neutral, like “tell the experimenter a funny joke” or “clap your hands fifty times.” Fast found that participants with high status and high power, low status and low power, and high status and low power all chose few, if any, demeaning activities for their partners to perform. In contrast, participants who were low in status but high in power were much more likely to choose demeaning tasks for their partners.
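
To make the structure of the experiment concrete, here is a minimal sketch in Python of the 2x2 status-by-power design. The selection probabilities are invented for illustration only; they merely encode the qualitative pattern Fast reported, not his actual data.

```python
import random

# Toy simulation of the 2x2 status-by-power design. The probabilities
# below are invented and only encode the qualitative finding: low
# status combined with high power yields the most demeaning choices.
P_DEMEANING = {
    ("high status", "high power"): 0.05,
    ("high status", "low power"):  0.05,
    ("low status",  "low power"):  0.05,
    ("low status",  "high power"): 0.40,  # the toxic combination
}

def simulate(n_participants=213, seed=1):
    """Randomly assign participants to the four cells and tally how
    many pick at least one demeaning task for their partner."""
    rng = random.Random(seed)
    cells = list(P_DEMEANING)
    counts = {cell: [0, 0] for cell in cells}  # [demeaning, assigned]
    for _ in range(n_participants):
        cell = rng.choice(cells)
        counts[cell][1] += 1
        if rng.random() < P_DEMEANING[cell]:
            counts[cell][0] += 1
    return counts

for (status, power), (demeaning, assigned) in simulate().items():
    print(f"{status}, {power}: {demeaning}/{assigned} chose demeaning tasks")
```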

To a certain extent, these results provide a psychological explanation for the behavior of the prison guards at Abu Ghraib in Iraq. They were locked, loaded, and very high in power, but they were prison guards; they knew they were viewed by society as low in social status. Similarly, Fast’s findings make the evil transformations seen in the socially excluded but physically powerful Frankenstein’s monster and Dren all the more believable. We fear this type of transformation because it actually happens in humans all too often.

Of silicon and metal

It is shortsighted, however, to focus only on monsters spawned from biology. While Dr. Frankenstein's monster was a product of biological science, there are many recent monsters bearing a striking resemblance to these creations that are not flesh and blood.

Like Dr. Frankenstein's monster, the computer HAL 9000 in Arthur C. Clarke's 2001: A Space Odyssey and Stanley Kubrick's film version of the story is created by humans. Represented by a single red eye in the film, HAL is found throughout the spaceship that it is meant to help run. However, during the mission, something goes dreadfully wrong. An electronic malfunction leads HAL to make a mistake and declare equipment to be in need of repair when it is operating normally. The astronauts grow concerned about HAL's error and consider shutting down the computer. They discuss this in private inside a small space pod that they believe HAL cannot eavesdrop on, but HAL, suspicious of the astronauts' behavior, reads their lips through the pod window and works out what they are planning. This leads HAL to start killing off the crew. The computer is able to rationally explain the reason for its murderous activities since it views itself as critical to completing the space mission, but rational or not, the way HAL snuffs out the lives of the humans on board is undeniably creepy.

In Andy and Lana Wachowski's 1999 film The Matrix, the sagelike character Morpheus comments, "We marveled at our own magnificence as we gave birth to AI [artificial intelligence]," as he explains to the protagonist Neo that this marvelous technology turned on humanity and effectively declared a war that it mostly won. In James Cameron's 1984 film The Terminator, a similar plot unfolds, with intelligent machines invented by people rising up against their creators. Even television has carried this story, with the successful Battlestar Galactica series always opening with the bold lines "The Cylons were created by man. They rebelled… ," providing an explanation for why the robotic Cylons are constantly chasing humans around the galaxy.

The machines in The Terminator and Cameron's 1991 sequel, Terminator 2: Judgment Day, have the same reasons for attacking humanity that HAL does. They become self-aware, humans attempt to shut them down, and the machines retaliate in self-defense. While Skynet, the artificial intelligence that controls the Terminator, and HAL are both bent on killing off humans, is it right to classify a species fighting for its survival as a monster? Are Skynet and HAL any more evil than a bear that tears the arms off a hunter who has just taken a shot at its cub? To a certain extent, the answer is yes, because a bear that mauls a hunter does not go on to maul every human it meets. With Skynet and HAL, a paranoid logic develops in which all humans, even those who are harmless, must be killed, and this is where the evil begins to seep in.

But why do monsters venture into the world of computers in the first place? Unlike a decomposing corpse or a deformed lab animal, computers are not inherently grotesque. HAL's lightbulb, on its own, is just a light. There is nothing inherently frightening about it. With The Terminator, The Matrix, and Battlestar Galactica, this changes somewhat as computers are given more physical form. The Terminator is an eerie-looking skeletal robot covered in human skin, the Cylons are large and powerful with weapons on their arms (or in some cases programmable humanlike machines), and the lethal programs that function as guardians of the computer system in The Matrix appear as spooky and dispassionate government agents. But it does not seem to be the physicality of computers that leads creative minds to transform them into monsters. It is what computers are capable of that drives this process.

A team led by Louis-Philippe Morency at the University of Southern California is showing that, when properly programmed and hooked up to video cameras, computers are becoming adept at reading human body language. More specifically, Morency and his colleagues have taught computers how to read the all-important human nod.

This might sound insignificant, but nods made at the right time in a conversation can mean "I understand," while nods made at the wrong time can indicate either a lack of understanding or a lack of interest. Teaching robots and computer avatars to identify these different sorts of nods, and to nod back properly, has been a nightmare because there has never been a precise definition of when each sort of nod is supposed to happen.

Psychologists have tried for ages to figure out the many subtle elements produced by a speaker that lead a listener to nod, and the results have been poor. To solve this problem, Morency turned to computers. By recording movements and sounds during human interactions, he has generated lists of conversation cues—like pauses, gaze shifts, and the word “and”—that lead people to nod. He has also collected facial details that indicate what sorts of nods are being made. This information is now being fed into computer programs and used to teach robots when to nod during conversation and what human nods really mean.
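
As a rough illustration of how such cue lists can be put to work, the sketch below scores a moment in a conversation for how likely a listener nod is. This is a minimal sketch, not Morency's actual model; the cue names and weights are invented.

```python
# Score a conversational moment for "backchannel nod" likelihood from
# the kinds of cues Morency's team extracted (pauses, gaze shifts,
# connectives like "and"). Weights are invented for illustration.
NOD_CUE_WEIGHTS = {
    "speaker_pause": 0.5,     # speaker stops talking briefly
    "gaze_at_listener": 0.3,  # speaker shifts gaze to the listener
    "word_and": 0.2,          # speaker says the connective "and"
}

def nod_probability(active_cues):
    """Combine active cues into a rough probability that a listener
    would nod at this moment, clamped to [0, 1]."""
    score = sum(NOD_CUE_WEIGHTS.get(cue, 0.0) for cue in active_cues)
    return min(score, 1.0)

print(nod_probability({"speaker_pause", "gaze_at_listener"}))  # 0.8
print(nod_probability({"word_and"}))                           # 0.2
```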

At the most basic level, Morency's work, and similar face-analysis technology, could ultimately prove rather valuable for authorities keen to identify expressions associated with malice and deceit. But there is much more. As computers are required to interact with humans more often, their ability to understand everything that is communicated, rather than just what is typed or spoken, is going to vastly improve, opening up communication pathways so computers can start playing a larger role in social interactions. Imagine educational software that can detect the glazed look of someone who is totally lost during a lesson, or a spaceship computer that suspects two astronauts are lying and reads their lips to learn what they are really thinking. This sort of work is a big step forward. And just in case you thought that all of the Terminator and Matrix films might have left people wary enough of artificial intelligence to keep such systems out of war machines, be assured that you are wrong.

A team led by the computer scientist Yale Song at the Massachusetts Institute of Technology is teaching military drones to read the body language of deck officers on aircraft carriers and follow their commands. The ultimate goal of the work is to have the drones read the silent signs and signals just as well as human pilots do. At the moment, the drones understand what they are being told only 75 percent of the time, but they will get much better as the work progresses.
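
A minimal sketch of what sequence-based gesture recognition can look like follows; the pose vocabulary, templates, and matching rule are invented for illustration and are far simpler than the learned video models Song's team actually uses.

```python
# Each deck gesture is treated as a sequence of coarse arm poses over
# time; an observed sequence is matched to its nearest template.
GESTURE_TEMPLATES = {
    "all_clear":  ["arms_up", "arms_up", "arms_out"],
    "brakes_on":  ["fists_up", "fists_up", "fists_up"],
    "move_ahead": ["arms_out", "arms_forward", "arms_forward"],
}

def sequence_distance(a, b):
    """Count mismatched poses, padding the shorter sequence."""
    length = max(len(a), len(b))
    a = a + ["none"] * (length - len(a))
    b = b + ["none"] * (length - len(b))
    return sum(x != y for x, y in zip(a, b))

def classify(observed):
    """Return the template gesture closest to the observed sequence."""
    return min(GESTURE_TEMPLATES,
               key=lambda g: sequence_distance(GESTURE_TEMPLATES[g], observed))

print(classify(["arms_up", "arms_up", "arms_forward"]))  # all_clear
```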

Along similar lines, Andrew Gordon at the University of Southern California has designed computers that are adept at reading blogs and constructing meaning from what they find. For example, after scanning millions of personal stories online and correlating them with incidents taking place in the real world, his computers were able to work out that rainy weather was related to increased car accidents and that guns were associated with hospitalization. Connect these sorts of developments to computers that are capable of crushing the brightest humans at chess and beating geniuses on Jeopardy! and something emerges in the imagination that certainly gives one pause.
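
A minimal sketch of the underlying idea, with an invented four-post "corpus," shows how mining text for co-occurring topics can surface such relationships. Gordon's real system works at vastly larger scale and with far subtler language processing.

```python
# Count keyword co-occurrences across (hypothetical) blog posts and
# compare how often one topic appears alongside another.
posts = [
    "rain all day and a car accident on the highway",
    "sunny drive to the beach, no trouble at all",
    "heavy rain again; saw a fender bender accident downtown",
    "quiet sunny afternoon in the garden",
]

def mentions(post, words):
    """True if the post contains any of the given keywords."""
    return any(w in post for w in words)

rain = [mentions(p, ["rain"]) for p in posts]
crash = [mentions(p, ["accident", "crash"]) for p in posts]

# How often do accidents appear in rainy posts versus non-rainy ones?
rainy = [c for r, c in zip(rain, crash) if r]
dry = [c for r, c in zip(rain, crash) if not r]
print(sum(rainy) / len(rainy))  # 1.0 in this toy corpus
print(sum(dry) / len(dry))      # 0.0 in this toy corpus
```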

The day when a computer is capable of truly studying the world around it, learning from what it finds, engaging in flawless social interactions, and acting independently is not that far off. Just as Shelley’s early readers were frightened by how far transfusion and transplant technologies could be taken, so too are modern readers frightened by what sort of form artificially intelligent computers will take. Really, the idea of self-aware war machines getting tired of being treated as servants or simply malfunctioning and going on a killing spree is not hard to imagine. And it is from this fear that robotic monsters arise.

Yet mixed with this fear is a lot of hope. In Terminator 2, a reprogrammed robot, physically identical to the robot monster in the first film, is sent back in time to protect the future leader of the resistance movement against the machines from assassination when he is just a boy. The boy alters the Terminator's programming, giving it the ability to learn. He orders it not to kill humans, teaches it to express emotion, and encourages it to question human behavior, leading to an unexpectedly tender moment where the machine asks him why humans cry.

In the end, the two build a bond, and the robot, which was a vile killing machine in the first film, concludes the second film by displaying an understanding of the value of human life and willingly sacrificing itself to save humanity.

Isaac Asimov, the creator of some of the most profound robotic literature ever written, presents a similar tale in "Robbie," the first story of I, Robot. The tale explores the social interactions that develop between a human child and a robotic nursemaid named Robbie. After two years of happy bonding, the pair are separated by the child's mother, who decides it is socially inappropriate for robots to become so closely attached to humans. This drives the child into a depression that leads her on a desperate search for her lost companion. Toward the end of this search, she finds Robbie installed in a factory. As she rushes up to meet the robot, she fails to notice an oncoming vehicle. Robbie quickly saves her, proving to the mother that she was wrong to believe robots cold and soulless.

"Robbie" raises a vital point that deserves some reflection. In this story, it is the mother, not the constructed creature, who is the antagonist. She is not a monster, but she is definitely the force the leading characters must struggle against. In Splice, Frankenstein, Battlestar Galactica, The Terminator, and so many other stories of human creations becoming monsters, humans might not be the antagonists, but they are definitely responsible for the monsters that come to haunt them. But when does a sheer lack of responsibility shift a character from simply being incompetent to being a villain? Can such monster-constructing villains become monsters themselves? It is with these questions in mind that it is worth taking a look at Jurassic Park.
