(Law, Cambridge)
Some friend, huh? The answer to this question must be simply ‘yes’. Even with no exit fee, simply by locking you in, your friend has deprived you of your liberty. Your freedom was restricted the moment the key was turned. What matters is that your choice to come and go was curtailed. Your friend’s demand for £5 before the door is unlocked simply compounds the issue.
*
One other possibility is thrown up by the question. It may be that you are a young child and that your friend is the person responsible for your care. A carer may legally do whatever is reasonable in their duty of care, custody and control over that child. This may sometimes include locking you in your room when you’re naughty, but certainly wouldn’t include locking you in to go out and party. The exchange of money to buy your freedom would suggest a very irresponsible carer and would probably give the authorities good grounds for taking you into care.
*
Liberty has long been seen in the West as one of the most fundamental human rights – so deeply engrained that most of us find any restriction on our freedom at best annoying and at worst a cause for outright defiance. ‘To renounce liberty,’ Rousseau said, ‘is to renounce being man.’ The problem, of course, is that we are not alone in the world, and so we cannot always be free to do anything we like, even in the most liberal of cultures. We cannot be free to rob, beat or kill other people, for instance. And so we accept that there are circumstances in which people’s freedom can reasonably be restricted. Criminals can be deprived of their physical liberty by prison sentences. Those who cause offence may find their freedom of expression curtailed by libel and slander laws. It’s all part of what philosophers such as Hobbes and Locke call the ‘social contract’ – the deal in which we give up some of our freedom to the state and in return the state maintains order. Rousseau argued that we may give up natural independence but in exchange we get real freedom.
The crucial point, though, is that the law has to specify the circumstances in which someone can be locked up – and it also generally forbids anyone who is not legally authorised from depriving someone of their physical liberty. So although forceful abduction and kidnapping are more serious crimes than your friend’s game with the door lock, by shutting the door and turning the key your friend is nonetheless committing a crime. By insisting on a payment, they are probably adding extortion to their list of offences! Of course, it may be that your friend is an officer of the law who is legally entitled to lock you in a room such as a prison cell because you have committed a crime, in which case the offer to accept payment to let you out becomes a different crime!
There are certain circumstances in which someone may be deprived of liberty against their will even if they have not committed a crime and are not about to. UK mental health legislation – the deprivation of liberty safeguards that came into force in April 2009 – allows authorised hospital staff to deprive patients considered mentally unfit of their liberty for their own good, but only with checks to ensure that the patient really is incapable of deciding for him- or herself, and so on.
In recent years, of course, the rise of terrorism has thrown into the spotlight the issue of how long a suspected criminal can be detained by the authorities without a trial, and whether, for the safety of society, it’s right to deprive someone of their liberty because they might commit a crime in the future but have not yet done so. Fear of what terrorists may do has definitely swung more people behind a willing trade-off of freedom to reduce the dangers, and yet the US government came under fire in the Bush era for the long-term detention without trial at Guantanamo Bay of people suspected of terrorist links. In 2008, the UK government pushed to extend the time that terrorist suspects could be detained before being either released or brought to trial from 28 to 42 days. After fierce opposition in the House of Lords, it was forced to accept that any such extension could only be allowed under specially introduced short-term emergency legislation.
It’s easy to say that we can lock people up when they’ve done wrong. The problem is that not everyone agrees just what is right and what is wrong. That’s why the great nineteenth-century philosopher John Stuart Mill argued in his 1859 book On Liberty that right and wrong are irrelevant; the only justification for any restriction on individual liberty is to prevent harm to others. By Mill’s argument, it doesn’t matter if nearly everyone considers something immoral; it should never be restricted by law if no one is harmed by it. There was a famous legal debate in the 1960s between H.L.A. Hart, who argued that there should be no laws against ‘victimless crimes’ such as homosexual acts between consenting adults, and Sir Patrick Devlin, who insisted that society has a right to enforce morals to prevent damage to the social fabric. Hart won then, but it remains a hot topic, surfacing in the debates over whether people should be allowed to air views encouraging racial tension or terrorism. When does their right to freedom of speech impinge on others’ rights to be free from harm?
(Physics, Oxford)
Back in 1895, H.G. Wells excited the imagination with the idea of travelling through time in his brilliant fantasy The Time Machine, in which a man uses a machine to travel into an imaginary future – but no one could conceive how it could actually be done. Then, just a decade later, Einstein’s theories demonstrated that time runs at different speeds in different places and is just another dimension, like length and breadth, and suddenly it didn’t seem so impossible after all. Ever since, some people have wondered if we could travel through time just as we can travel through space.
Einstein himself believed travelling through time would mean travelling faster than light – and that, he said, was impossible. And yet his theories show how we are all time-travellers. As time passes, and our lives progress, we are, of course, travelling continuously along the time dimension. It might seem, though, that what we cannot do is change the direction or speed of our journey, and that’s what Einstein was getting at. And yet, because time passes at different speeds in different places, we could travel through time in one sense simply by travelling through space. That this time-shift is real is shown by the slowing of time (demonstrable with highly accurate atomic clocks) aboard spacecraft travelling to the moon and back. So if you travelled as an astronaut to the moon, you would actually come back to earth having aged very slightly less than if you’d stayed on earth. The further and faster you travel into space before coming back, the less you age (relative to the stay-at-homes).
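For a sense of just how slight the effect is, here is a rough back-of-the-envelope calculation using the Lorentz factor from special relativity. The figures are purely illustrative assumptions (a constant 11 km/s sustained over an eight-day round trip, chosen only for the sake of the example, not real mission data), and gravitational time dilation, which also affects a real lunar trip, is ignored.

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, metres per second

def traveller_time_lag(speed_m_s: float, earth_time_s: float) -> float:
    """Seconds by which a traveller's clock lags an Earth clock after
    moving at a constant speed for earth_time_s of Earth time, using
    the Lorentz factor gamma = 1 / sqrt(1 - v**2 / c**2)."""
    gamma = 1.0 / math.sqrt(1.0 - (speed_m_s / C) ** 2)
    return earth_time_s * (1.0 - 1.0 / gamma)

# Illustrative numbers only: 11 km/s held for an eight-day round trip.
lag = traveller_time_lag(11_000.0, 8 * 24 * 3600.0)
print(f"The traveller returns roughly {lag * 1e6:.0f} microseconds younger")
```

On those made-up numbers the traveller comes home only a few hundred microseconds younger – real, and measurable with atomic clocks, but hardly the stuff of science fiction.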
Recently, too, it’s been found that light’s speed is not quite so fixed, and so not quite the ultimate arbiter of time that Einstein thought. At one end of the scale, the physicist Lene Vestergaard Hau slowed a beam of light to a crawl in 1999 and then, in 2001, brought it to a complete standstill by sending it through a Bose-Einstein condensate (a gas chilled to the point where its atoms are virtually motionless). At the other, in 2000 Lijun Wang sent pulses of laser light through a canister of caesium gas at what appeared to be 310 times the speed of light, so that the pulses seemed to have travelled back in time, emerging from the container before they entered.
Of course, by time travel, most people don’t mean little relative time-shifts or clever tricks like this. They mean things like fast-forwarding into the distant future, or voyaging back through history to eyewitness one of Cleopatra’s wildest parties, and this is where scientific theories take on more of a fantastic air. Back in 1949, the Austrian-born mathematician Kurt Gödel showed that someone could at least theoretically travel through time if they found a way of ‘bending’ spacetime. Spacetime is a way of describing space as a continuum including both time and the more familiar dimensions of length, breadth and depth. Einstein’s general relativity shows that spacetime is curved by the matter and energy within it, so Gödel figured you could travel through time by taking a short cut straight across the curve of space.
To create such a short cut, you have to ‘bend’ spacetime, and you can do that with gravity – so perhaps a would-be time traveller might exploit the unbeatable gravitational power of a black hole. Theory links black holes to white holes (reverse black holes that spew out matter just as black holes draw it in) via tunnels through spacetime known as ‘wormholes’. The US physicist Kip Thorne believes that artificially created wormholes might be just the ticket for short cuts through spacetime. It’s possible that tiny wormholes may be created in particle accelerators such as the Large Hadron Collider at CERN, but for time travel you’d need something slightly bigger – far bigger than is yet remotely practical. There is a problem, too, in that according to Stephen Hawking (who, incidentally, insists that if anyone in the future succeeds in time travelling we should have seen them coming back to us by now), wormholes are so unstable that they’d snap shut before you could jump inside. So you’d also need a kind of anti-gravity to hold your wormhole open – negative energy, which might be supplied by a quantum phenomenon called the Casimir effect.
The US physicist Frank Tipler has another idea that we might use. He suggests rolling a piece of superdense material into a cylinder a few billion miles long, then setting it spinning. Once it’s spinning fast enough, space and time will bend around it, and if we plot a spiral course around it in our spacecraft, we should emerge in another galaxy and time.
Of course, there are lots of paradoxes that imply that you simply can’t time-travel whatever kind of machine you build. One of the most famous is the idea of a man who travels back to a time before his parents were born and kills his grandfather. This would mean that one of his parents and he himself could never have been born – and if so, then how could he have killed his grandfather? Kip Thorne argues that there are infinite possible lines of cause and effect – each event generating multiple outcomes. If so, these paradoxes are irrelevant; when you go back, you simply start another sequence of events. Maybe you could make a quantum entangled duplicate of yourself and just teleport instantly through time and space …
(Law, Oxford)
Conscience is essentially our ability to judge between right and wrong. It’s the voice in our heads that tells us that we should do this and we shouldn’t do that – and it makes us feel racked with guilt if we ignore it. But it’s hard to pin down just where this judgemental voice comes from. It’s the voice of God, many early Christian philosophers asserted. No, said Thomas Aquinas, it’s simply a God-given ability to make decisions. According to Freud, conscience is our superego at work, doling out the lessons learned at our parents’ knees. Many contemporary sociobiologists describe it as an evolved part of culture imprinted on our brains like language.
Wherever conscience comes from, it’s hard to imagine a computer ever being tortured by guilt. As Pablo Picasso apocryphally and neatly (though not entirely accurately) said, ‘Computers are useless; they can only give you answers’. And it’s hard to imagine a computer with ‘feelings’, despite the efforts of Disney to persuade us otherwise. It may be that a computer might one day be programmed to mimic human guilt so well that it appears to be reacting guiltily. But there are two further hoops the computer has to jump through before it reaches that all-too-human affliction of a guilty conscience. The first is for it to be sufficiently self-aware to direct its display of guilty feelings. The second is for it to really suffer because of them. To suffer real guilt, a computer has to feel, in the immortal words of George Michael, that it’s never going to dance again. Even the first of these hoops seems remote, at least until the best efforts of scientists can tell us rather more about human self-awareness.
However, it’s much easier to imagine a computer which can, at least, tell right from wrong. Indeed, medical systems are already being programmed to honour a kind of Hippocratic Oath, releasing patients’ confidential information only in certain circumstances. Just as a computer can be programmed to make the decisions needed to play chess, so it might be programmed to make moral judgements. Superficially, this is not so very different from Freud’s superego lessons learned from your parents, or Aquinas’s God-given reason – both of which imply that the judgement-making process is supplied from without, like the computer’s programming. It may also not be so very different from the biologists’ conscience imprint. Interestingly, a computer is less likely to lie than humans are. As Isaac Asimov said, ‘Part of the inhumanity of a computer is that once it is competently programmed and working smoothly, it is completely honest’ – unless, of course, it has been programmed to be dishonest.
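To make that idea concrete, here is a minimal, entirely hypothetical sketch – the roles, field names and rules are invented for illustration and are not drawn from any real medical system – of how such a programmed ‘oath’ might work: disclosure is refused unless an explicit rule permits it.

```python
from dataclasses import dataclass

@dataclass
class DisclosureRequest:
    requester_role: str            # e.g. "treating_clinician" or "researcher"
    patient_has_consented: bool    # has the patient agreed to this disclosure?
    life_threatening_emergency: bool

def may_release_record(request: DisclosureRequest) -> bool:
    """Toy rule-based check: confidential records are released only
    when an explicit rule says so; the default answer is 'no'."""
    if request.life_threatening_emergency:
        return True    # assumed overriding duty to preserve life
    if request.requester_role == "treating_clinician" and request.patient_has_consented:
        return True
    return False

# A curious researcher without consent is refused; a treating clinician
# with the patient's consent is allowed.
print(may_release_record(DisclosureRequest("researcher", False, False)))         # False
print(may_release_record(DisclosureRequest("treating_clinician", True, False)))  # True
```

The point of the sketch is that the ‘moral judgement’ is supplied wholly from outside, in rules a programmer chose – exactly the sense in which the computer’s conscience resembles the superego’s borrowed lessons.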
As artificial intelligence develops, it does seem feasible that a computer might one day take control of itself. Programmed to learn and develop by itself, it could choose its own responses and extend its scope until, to all intents and purposes, it has an intelligence that acts with intention. Computers have already far surpassed the human mind in some limited respects. Some fear that one day a highly sophisticated intelligent computer that developed its own ways of extending its activities could pose a threat to humans. The worry stems from the biological theory that we humans developed conscience and reciprocal altruism only because evolution directed our ‘selfish genes’ that way – a machine would have no such evolutionary inheritance. An ‘amoral’ computer like this could be a massive intelligence with only its own interests at heart, and none of the saving virtues of conscience that allow humans to live together. Fortunately, such a possibility is, as yet, only science fiction.
It may be that, as artificial intelligence develops, a conscience – that is, a feedback program that makes the computer respond in a way that mimics human morality – should be made an integral part of every computer system. The program might be set up in such a way that the computer develops its moral judgements as it learns. In some ways, that would be no different from giving a growing child lessons in right and wrong and then letting the child learn through its interactions with the world.
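As a purely illustrative sketch of that last idea – the actions, feedback values and learning rule below are all invented for the example, not any real system – such a feedback program might simply nudge its preferences up or down in response to approval and disapproval, much as a child’s sense of right and wrong is shaped by praise and reproach.

```python
import random

class LearnedConscience:
    """A toy feedback learner: each action has a score that is nudged up
    when the environment approves of it and down when it disapproves."""

    def __init__(self, actions):
        self.scores = {action: 0.0 for action in actions}

    def choose(self) -> str:
        # Prefer the action currently judged 'most right'; break ties randomly.
        best = max(self.scores.values())
        return random.choice([a for a, s in self.scores.items() if s == best])

    def feedback(self, action: str, approval: float, rate: float = 0.5) -> None:
        # Positive approval reinforces the action; negative approval discourages it.
        self.scores[action] += rate * (approval - self.scores[action])

conscience = LearnedConscience(["tell the truth", "tell a lie"])
for _ in range(20):
    action = conscience.choose()
    # A hypothetical 'upbringing': lying always meets with disapproval.
    conscience.feedback(action, approval=1.0 if action == "tell the truth" else -1.0)

print(conscience.scores)  # truth-telling ends up with much the higher score
```

After a handful of rounds of this toy ‘upbringing’, truth-telling ends up strongly preferred – a crude mimicry of moral learning, with no suggestion that any guilt is actually felt.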