Robots and computers that can think and that have personalities were once the stuff of science fiction. In fact, they still are, but recent advances in artificial intelligence mean that thinking, self-aware robots and computers may be science fact before long. So much so that in January 2017, the European Parliament voted on a proposal to grant robots the status of ‘electronic persons’. The main stated purpose of this proposal is to make robots responsible in law for any ‘acts or omissions’ that may result in harm to humans. There is, I think, a problem with this. Could a robot understand its responsibilities? Would a robot be able to distinguish morally right actions from morally wrong ones? And would a robot be able to anticipate the consequences of its actions? Personhood doesn’t automatically make someone legally responsible for their actions (or failures to act). Children and the mentally ill, for example, are not necessarily held legally responsible for their actions, even if these result in harm to others, for the very good reason that young children and the mentally ill are not always able to understand or foresee the potential consequences of their actions, or to understand that a particular action is morally or legally wrong. We would not, however, deny that such individuals were persons. They retain both the status and the rights of personhood.
Rights are, of course, the flip-side of responsibilities. If we are to give robots the status of electronic persons so that we can confer on them responsibilities, then it follows that this status should also give them rights. If a robot is capable of thinking and reasoning and is self-aware and by virtue of that self-awareness has an interest in preserving its existence, then that robot would properly be entitled to certain rights, such as the right to existence (for humans, the right to life), and the right not to be subjected to cruelty and pain. But robots can’t feel pain, can they? Not at the moment, but there are people working on that. A team in Germany is working to develop an artificial nervous system that would allow a robot to feel pain. While this at first might seem unnecessary and even perhaps sadistic, I suspect (although I don’t know) that a robot would be given the ability to feel pain for the same reason that we feel pain – as a warning of potential harm to the body. In other words, it might be right to give a robot pain sensors if that would give it the ability to recognise and withdraw from certain situations or conditions in which it might be irreparably damaged. But this, too, raises a moral question: would such a robot have the capability of overriding that warning system – could a robot commit acts of bravery and self-sacrifice?
The concept of personhood for robots raises many interesting ethical questions in relation to robots themselves. But it also raises another ethical question, namely: are there other individuals who have a more immediate claim to personhood, specifically non-human animals? Numerous studies of animal behaviour and cognition have demonstrated that animals are sentient. They are self-aware, they experience emotions such as joy and fear, they feel physical pain and pleasure; many, perhaps most, species can remember past events and can anticipate future ones. Despite this, they are not accorded the status of persons, and as a result are subject to exploitation and cruelty on a terrible scale.
As might have become clear by now, I have to declare an interest here. I am a passionate advocate of animal rights and animal welfare. This derives from a sense of empathy and compassion, yes, and from my Christian faith, but not from sentimentality. There is strong evidence of animals’ capacity to feel pain and fear, pleasure and joy. Many mammal species, including the herd species that we raise for food, such as sheep and cattle, develop strong emotional bonds with their kin and other individuals. And yet we routinely, as a matter of ‘business’, remove infants from their mothers and split up herds to sell off individuals, disrupting the complex social and emotional bonds that these animals form with one another. There are many other settings, as well, in which animals are treated as ‘resources’ or commodities, to be used to further human ends. Maybe we are reaching a point in history where we need to think again about the whole concept of what makes a ‘person’.
I am not alone in thinking that animals’ physical, cognitive and emotional capacities make them candidates for the status of personhood, and the rights that this would impart to them. There have been efforts made to have primates and cetaceans declared non-human persons. Personally, I think that many other species also qualify for personhood. So why, if we find it so difficult to accept that other living, breathing, feeling creatures might be persons, does it seem likely that we will find it easy to accept that robots – machines that we have made – might be? I’d suggest two reasons: looks and language. We can build robots that look at least approximately human in shape, and we can programme them to speak our language. Doing that makes it easier for us to think of them as being ‘like us’. Non-human animals, however, don’t look like us and can’t communicate with us in our own human languages. That makes it easier for us to think of them as being utterly different from us, and for us to ignore or dismiss them.
It is unsurprising that granting personhood to animals is resisted. To afford animals the status of persons, and the rights that go with it, would force a wholesale change in our societies and lifestyles, and that would not be easy. If animals had the right to be ‘persons’, we would no longer be free to use them for our own ends with little or no restriction. Those who want to grant electronic person status to robots should take this into consideration, too. If robots have the status of persons, and the rights that go with it, will that include the right to refuse to do certain types of work? Will it include the right to leave their employer or owner?
Christianity understands human beings to be special because we are made in the image of God (Genesis 1.26), but this is not a claim that humans are the only ‘persons’. Unfortunately, we are not perfect images of God, and it is human nature to want to feel, and to be, special; to keep for ourselves rights and privileges that we think others don’t deserve. Jesus countered this in his parable of the workers (Matthew 20.1-15). In the parable, the landowner pays all of the workers the same amount, regardless of how much of the day they worked. Unsurprisingly, those who worked the whole day complained because those who had worked only a couple of hours got paid the same. The landowner replies that they were paid the amount agreed, and so had no right to complain. Their real grievance wasn’t that they were unfairly paid, but that others whom they thought less worthy received the same status.
Questions about who, or what, should be legally recognised as persons are unlikely to go away. These questions will force us to think about what qualities define personhood and whether we really have grounds to keep this status exclusive to ourselves. If we want to deny other sentient individuals, animals or robots, personhood, we must ask ourselves: is our real grievance that we don’t want to lose our sense of being special?