“What people think of when you say ‘artificial intelligence’ is basically stuff they’ve glommed onto via the media. HAL 9000 or Neuromancer—artificial consciousness. But consciousness—we know how that shit works these days, via analytical cognitive neurobiology and synthetic neurocomputing. And it’s not very interesting. We can’t do stuff with it. Worst case—suppose I were to sit down with my colleagues and we come up with a traditional brain-in-a-box-type AI, shades of HAL 9000. What then? Firstly, it opens a huge can of ethical worms—once you turn it on, does turning it off again qualify as murder? What about software updates? Bug fixes, even? Secondly, it’s not very useful. Even if you cut the Gordian knot and declare that because it’s a machine, it’s a slave, you can’t make it do anything useful. Not unless you’ve built in some way of punishing it, in which case we’re off into the ethical mine-field on a pogo-stick tour. Human consciousness isn’t optimized for anything, except maybe helping feral hominids survive in the wild.
“So we’re not very interested in reinventing human consciousness in a box. What gets the research grants flowing is applications—and that’s what ATHENA is all about.”
You’re listening to his lecture in slack-jawed near-incomprehension because of the sheer novelty of it all. One of the crushing burdens of police work is how inanely stupid most of the shit you get to deal with is: idiot children who think ‘the dog ate my homework’ is a decent excuse even though they knew you were watching when they stuck it down the back of their trousers. MacDonald is . . . well, he’s not waiting while you take notes, for sure. Luckily, your specs are lifelogging everything to the evidence servers back at HQ, and Kemal’s also on the ball. But even so, MacDonald’s whistle-stop tour of the frontiers of science is close to doing your head in. Then the aforementioned Eurocop speaks up.
“That is very interesting, Doctor. But can I ask you for a moment”—Kemal leans forward—“what do you think of the Singularity?”
MacDonald stares at him for a moment, as if he can’t believe what he’s being asked. “The what—” you begin to say, just as his shoulders begin to shake. It takes you a second to realize he’s laughing.
“You’ll have to excuse me,” he says wheezily, wiping the back of his hand across his eyes: “I haven’t been asked that one in years.” Your sidelong glance at Kemal doesn’t illuminate this remark: Kemal looks as baffled as you feel. “I, for one, welcome our new superintelligent AI overlords,” MacDonald declaims, and then he’s off again.
“What’s so funny?” you ask.
“Oh—hell—” MacDonald waves a hand in the air, and a tag pops up in your specs: “Let me give you the dog and pony show.” You accept it. His office dissolves into classic cyberspace noir, all black leather and decaying corrugated asbestos roofing, with a steady drip-drip-drip of condensation. Blade Runner city, Matrixville. “Remember when you used to change your computer every year or so, and the new one was cheaper and much faster than the old one?” A graph appears in the moisture-bleeding wall behind him, pastel arcs zooming upward in an exponential plot of MIPS/dollar against time—the curve suddenly flattening out a few years before the present. “Folks back then”—he points to the steepest part of the upward curve—“extrapolated a little too far. Firstly, they grabbed the AI bull by the horns and assumed that if heavier-than-air flight was possible at all, then the artificial sea-gull would ipso facto resemble a biological one, behaviourally . . . then they assumed it could bootstrap itself onto progressively faster hardware or better-optimized software, refining itself.”
A window appears in the wall beside you; turning, you see a nightmare cityscape, wrecked buildings festering beneath a ceiling of churning fulvous clouds: Insectile robots pick their way across grey rubble spills. Another graph slides across the end-times diorama, this one speculative: intelligence in human-equivalents, against time. Like the first graph, it’s an exponential.
“Doesn’t work, of course. There isn’t enough headroom left for exponential amplification, and in any case, nobody needs it. Religious fervour about the rapture of the nerds aside, there are no short-cuts. Actual artificial-intelligence applications resemble us about the way an Airbus resembles a sea-gull. And just like airliners have flaps and rudders and sea-gulls don’t, one of the standard features of general cognitive engines is that they’re all hard-wired for mirrored self-misidentification. That is, they all project the seat of their identity onto you, or some other human being, and identify your desires as their own impulses; that’s standard operating precaution number one. Nobody wants to be confronted by a psychotic brain in a box—what we really want is identity amplification. Secondly—”
Kemal interrupts again. You do a double-take: In this corner of the academic metaverse he’s come over all sinister, in a black-and-silver suit with peaked forage cap, mirrored aviator shades. “Stop right there, please. You’re implying that this, this field is mature? That is, that you routinely do this sort of thing?”
MacDonald blinks rapidly. “Didn’t you know?”
You take a deep breath. “We’re just cops: Nobody tells us anything. Humour us. Um. What sort of, uh, general cognitive engines are we talking about? Project ATHENA, is that one?”
“Loosely, yes.” He rubs at his face, an expression of profound bafflement wrinkling his brows. “ATHENA is one of a family of research-oriented identity-amplification engines that have been developed over the past few years. It’s not all academic; for example, TR/Mithras, Junkbot.D, and Worm/NerveBurn.10143 are out there now. They’re malware AI engines; the Junkbot family are distributed identity simulators used for harvesting trust, while NerveBurn . . . we’re not entirely sure, but it seems to be a sand-boxed virtual brain simulator running on a botnet, possibly a botched attempt at premature mind uploading . . .” He rubs his face again. “ATHENA is a bit different. We’re an authorized botnet—that is, we’re legal; students at participating institutions are required to sign an EULA that permits us to run a VM instance on their pad or laptop, strictly for research in distributed computing. There’s also a distributed screen-saver project for volunteers. ATHENA’s our research platform in moral metacognition.”
“Metacognition?”
“Loosely, it means we’re in consciousness studies—more prosaically, we’re in the business of telling spam from ham.” He shrugs apologetically. “Big contracts from telcos who want to cut down on the junk traffic: It pays our grants. The spambots have been getting disturbingly convincing—last month there was a report of a spearphishing worm that was hiring call girls to role-play the pick-ups the worm had primed its targets to expect. Some of them are getting very sophisticated—using multiple contact probes to simulate an entire social network—big ones, hundreds or thousands of members, with convincing interactions—e-commerce, fake phone conversations, the whole lot—in front of the victim. Bluntly, we’re only human; we can’t tell the difference between a spambot and a real human being anymore without face-to-face contact. So we need identity amplification to keep up.
“The ATHENA research group is working on the spam-filtering problem by running a huge distributed metacognition app that’s intended to pick holes in the spammers’ fake social networks.”
MacDonald magicks up a big diagram in place of the graphs; it looks like a tattered spider-web. “Here’s a typical social network. Each node is a person. They’ve got a lot of local connections, and a handful of long-range ones.” Thin strands snake across the web, linking distant intersections. “Zoom in on one of the nodes, and we have a bunch of different networks: their email, chat, phone calls, online purchases . . .” A slew of different spider-webs, cerise and cyan and magenta, all appear centred on a single point. They’re all subtly different in shape. “Spambots usually get their networks wrong, too regular, not noisy enough. And we can deduce other information by looking at the networks, of course. You know the old one about checking the phone bills for signs that your partner’s having an affair, right? There are other, more subtle signs of—well, call it potential criminality. Odds are, before your partner snuck off for some illicit nookie, there was a warm-up period, lots of chatter with characteristic weighted phrases—we’re human: We talk in clichés the whole time, framing the narrative of our lives. Or take some of the commoner personality disorders: pre-ATHENA, we had diagnostic tools that could diagnose schizophrenia from a sample of email messages with eerie accuracy. Network analysis lets us learn a lot about people. Network injection lets us steer people—subject to ethics oversight, I hasten to add—frankly, the possibilities are endless, and a bit frightening.”
“Can you give me an example of what you mean by steering people?” Kemal nudges.
“Hmm.” MacDonald’s chair squeals as he leans back. “Okay, let’s talk hypotheticals: Suppose I’m wearing a black hat, and I want to fuck someone up, and I’ve paid for a command channel to Junkbot.D. First, I build a map of their social connections. Then I have Junkbot establish a bunch of sock puppets and do a friend-of-friend approach via their main social networks—build up connections until they see the sock puppets’ friend requests, see lots of friends in common, and accept the invite. Junkbot then engages them in several conversation scripts in parallel. A linear chat-up rarely works—people are too suspicious these days—but you can game them. Set up an artificial-reality game, if you like, built around your victim’s world, with a bunch of sock puppets who are there to sucker them in to the drama. Finally, you use the client-side toolkit to hire some proxies—neds in search of the price of a pint—who’ll hand your target a package and leg it, five minutes ahead of your colleagues, who have received an anonymous tip-off that the quiet guy living at number seventy-six is a nonce.”
Kemal is rapt, listening intently. He nods, perhaps once every ten seconds. MacDonald has got him on a string with this spiel. You look back at MacDonald. “You wouldn’t dream of doing that,” you say.
MacDonald grins and nods. “Indeed not. Morality aside, it’s stupid small-scale shit. What’d be the point?”
You peg him then. He’s not your typical aspie hacker, and he’s not a regular impulse-control case. MacDonald’s the other, rare kind: the sort of potential offender who does a cold-blooded risk-benefit calculation and refrains from action not because it’s wrong, but because the trade-off isn’t right. You won’t be seeing him in the daily arrest log anytime soon, because he kens well the opportunity cost of a decade in prison: It’d take a bottom line denominated in millions to lure him off the straight and narrow. But if he sees such a pay-off . . .
“What do you use ATHENA for?” you ask, bluntly.
“Right now we’re tracing spammers. ATHENA can scope out the fake networks: It can also tell us who’s running them.” There’s something about MacDonald’s body language that puts you on red alert. Something evasive. “ATHENA then probes the spammers to determine whether they’re human or sock-puppet. We’re working on active countermeasures, but that’s not green-lit yet; I gather there’s a working group talking to some staff at the Ministry of Justice about it, but—”
“What kind of active countermeasures?”
“Spoiler stuff, but more active than usual: using their own tools against them. You know it’s an international problem? Crossing lines of jurisdiction—a lot of them live in countries that aren’t signatory to or don’t enforce anti-netcrime treaties. So we’re examining a number of tactics that’d need to be approved by a court order before we could use them. So far it’s just theoretical, but: reverse-phishing the spammers to grab their control channels and shut down the botnets. Fucking with their phishing payloads to make them expose their real identity so you folks can arrest them. Stealing their banking credentials and applying civil-forfeiture protocols. Using their ID protocols to fuck with their personal lives—hate mail to the mother-in-law, that kind of thing. Having their computer report itself as stolen. In an extreme instance, ask the USAF to send a drone to zap them.”
“Uh-huh.” You glance down and try to look as if you’re making notes, so that he can’t see your face. One by one, the alarm bells are going off inside your head. “But you haven’t done any of this yet.”
“No.”
“But?”
“ATHENA is an international effort.” MacDonald leans forward on his elbows, fingers laced before him. “We are just academic researchers. We’re trying to find a way to, shall we say, enforce communal standards without turning the corner and ending up with a panopticon singularity, ubiquitous maximal law enforcement by software—nobody wants that, so we’re looking for something more humane. Crime prevention by automated social pressure rather than crime prosecution by AI. But . . . once you get into that territory? People don’t all agree on what constitutes crime, or moral behaviour. Some of our associate members live in jurisdictions where there are melted stove-pipes between academia and government, or intelligence. And I canna vouch for what those third parties might do with our work.”
ANWAR: Bluebeard
As soon as you open the front door, you know something’s not right.
“Honey? I’m home . . .”
It’s like that inevitable, deterministic scene in every horror video you’ve ever lost two hours of your life to: the dawning sense of wrongness, of a life unhinged. From the subtle absence of expected sounds to the different, unwelcome noise from upstairs in the bedroom, all is out of order.