Game Theory by Matt S.
It’s going to happen.
The technology’s already there. The expectation is that sex robots will be commonplace within a few decades. Alternatively, advances in virtual reality or augmented reality, coupled with haptic feedback technology, will allow much the same effect, no robot needed.
From there it’s only a matter of time before some entrepreneurial mind figures out that creating these things in the likeness of popular video game characters or digital celebrities would be a good way to make money. They’ll be unofficial uses of IP, to be sure, as the IP rights holders won’t ever want to take their characters down this way; but from Lara Croft to Final Fantasy’s Lightning, and from the ladies of Love Live through to Hatsune Miku, there are almost certainly going to be robots or VR applications that people can use to… express just how much of a fan they are.
But will these things be ethical? That’s a whole other debate, and quite a loaded one. So, I had a chat with the University of NSW’s Dr. Matthew Beard to talk through some of the issues that might arise once interaction with digital beings becomes real.
On issues of exploitation
The most obvious potential issue with digital people, once they’ve reached the point where they can be interacted with in a physical, or virtual, manner close enough to feel realistic, is whether exploitation and/or consent becomes an issue. When the people at Koei Tecmo first showed off the VR update to Dead or Alive Xtreme 3, for example, it was possible to “touch” the girls in the game, and they would respond when touched. This led to a surge of criticism, with many critics raising exactly these questions of exploitation and consent. One can only guess how controversial things will get when the interactions become physical, or realistically emulate physical interaction.
But from a practical ethics perspective, according to Dr. Beard, digital beings aren’t people. There is a host of criteria that establishes ‘personhood’ in ethical philosophy, and they aren’t the kind of criteria that a digital person or robot, no matter how well made or “smart” its AI, will meet any time soon. These beings are tools, to be owned and used at the owner’s discretion. Because exploitation only really matters if someone or something is able to feel the effects of being victimised in the process, it’s largely a non-issue from an ethical perspective.
“My ethical alarm bell rings at a term like ‘exploitation’ because it is immediately associated with personhood,” Dr. Beard said. “You can use something to your benefit, but to ‘exploit’ it is to use it in a way that you’re not entitled to, or in which you would be wrong to do so. The way to talk about wrongfulness is to talk about rights. If we exploit a human being we wrong their rights in some way. There are other forms of exploitation – for example, of animals or ‘nature’ – but in relation to this topic that’s why I see there being such a strong link between personhood and some of these other ethical questions.”
On issues of consent
So on that level alone, because there is no victimisation involved, using a digital character or robot for the purposes of sexual fulfilment would not be unethical. They’re not people, therefore they can’t be exploited.
But there’s more to it than that. Digital beings are not people and therefore can’t actively consent to sexual interaction, and this is important, not because of what it does to the digital being, but rather what it would say about the human in the tryst. As Dr. Beard explains:
“We can talk about the ethics of engagement and interaction with those entities, but we need to re-frame the question, if we’re not going to talk about personhood and rights,” Beard said.
“It’s similar in the way that someone like Immanuel Kant, who was an enlightenment philosopher back in the 18th century, talked about animals. His argument was essentially that ‘there are ethical questions regarding the way that we interact with animals, but those questions aren’t about the animal itself. They’re about us. And they’re about what is an appropriate type of behaviour for a human being to participate in, relative to those animals.’
“Kant’s argument now seems a little bit dated with regards to animals, but we can apply the same notion to the idea of digital beings and ask the same question. We would say ‘well, perhaps if we were to create a program that allowed virtual reality connoisseurs to participate in virtual sex with this kind of digital person, that person isn’t the type of agent that has the capacity to suffer. They aren’t the kind of agent that has the capacity to feel like their dignity has been violated. But the person that is engaging in that kind of activity is the kind of person that has dignity and is the type of person that wants to live their life in a particular way.’
“People are expected ethically to live their life in a particular way, and so we would reframe the question about whether it’s acceptable to have sexual interaction with a being that can’t give consent to ‘what does this behaviour say about the human being that’s participating in it’, rather than ‘how does this behaviour victimise the digital person.’”
Another way to think about this in simple terms is to imagine those really cute robot dog pets that Sony once made a bundle from in Japan. You might have seen documentaries about them; now that Sony is no longer producing spare parts for the dogs, it’s getting harder and harder to fix them, meaning that some people are having their robot dogs ‘die.’
These robot dogs are just toys, but people become enormously attached to them, and some even hold funerals when they’re not able to repair their broken robot dog. Now, on the one hand, everyone is aware that these robots are not real dogs, but if you saw someone maliciously destroying one, would you let them near your real dog? You might be less inclined to; even if the person in question would never engage in animal cruelty, their behaviour suggests a capacity for cruelty that makes people uncomfortable about that person’s morality.
So too, you could argue that sexual interaction with digital beings suggests an unpleasant personal attitude in the individual that, even were they never to victimise a person, makes their overall moral stance towards concepts like “consent” questionable. And, of course, much the same applies to the people responsible for making this content in the first place.
A question of social impact
The third issue that needs to be considered is the morality of making these things in the first place without due consideration for the impact that they might have on society as a whole. This is not something I discussed with Dr. Beard, but a prior conversation I had with an evolutionary psychologist, Dr. Donald Hoffman, brings this topic to light.
There exists a phenomenon called “supernormal stimulus,” which is fairly self-explanatory; it is an exaggerated form of a stimulus that elicits a stronger response in the subject than the natural stimulus does.
To highlight the effect that this can have: in Australia, there was a species of beetle nearly driven extinct because beer bottles tossed out onto the road closely resembled the female of the species, only superior in every way, as far as the males were concerned. So the male beetles were trying to breed with these bottles instead.
While you could argue that humans are more intelligent than beetles (though in some cases…), it’s nonetheless true that supernormal stimulus affects humans too. So there is the risk that – if we’re able to replace the real world with a “more perfect” space, as robots, VR and AR potentially allow – these things will start to dull a person’s interest in the real world and in real-world interactions. Japan’s culture already features a great deal of supernormal stimulus, and while the country faces a large number of social and cultural challenges, that stimulus is part of the reason for the “shut-in” hikikomori and the plummeting interest in dating, marriage, and childbirth that is threatening the country with a drastic decline in population. The Japanese are busy, busy people, and it’s simply easier to get satisfaction from other sources than to go through the exhausting process of finding a mate.
Research, according to Dr. Hoffman, shows that no other culture is any more “immune” to supernormal stimulus, so the impact that this might have on behaviours and interactions is an ethical consideration and a debate worth having in itself.
As silly as the topic sounds, sex with digital beings and/or robots is real and nothing’s going to actually stop it. It may well have the longest early adopter period in technology history, as numerous social and cultural barriers will need to be overcome before it could ever hope to approach mainstream acceptance, but it is going to happen; and when it does, ethical conversations like this need to be had in order to ensure there isn’t a significant negative impact on our social and cultural structures.
– Matt S.
Find me on Twitter: @digitallydownld