
Should we be polite to chatbots?

Suddenly they’re everywhere, destroying the idea of homework as we know it and worrying writers across the globe. Few technologies have captured the public imagination quite like the modern chatbot: it took ChatGPT just two months to reach 100 million users, a feat that took Instagram two years. Understandably, tech firms are now racing to launch their own bots. Microsoft has baked conversational AI into Bing search, and Google’s Bard is due to launch any day.

Of course, chatbots are nothing new. These systems are more or less the next generation of voice assistants like Alexa or Siri, and arguably all share a common ancestor: the ELIZA program created in 1966. While the underlying technology has changed, the promise is the same: a machine that talks to us in everyday language, not techie syntax.

If we’re using normal language, questions of etiquette soon crop up. How should we interact with chatbots? Does it matter if we’re rude to them?

Artificial feelings?

One reason we value being polite is that we don’t want to upset others. But can we hurt a chatbot’s feelings? For that, it would need to have feelings to hurt. In other words, the chatbot would need to be sentient.

If we ever agreed an AI was sentient, we’d probably have to treat it very differently to today’s machines. In theory, we should do whatever’s reasonable to avoid causing it pain, although you could argue our agriculture and food-production practices already involve turning a blind eye to the widespread suffering of sentient animals.

Photo by Alex Knight on Unsplash

We’ll likely never know for sure whether a machine is sentient. Since we can’t see into others’ minds, we can never know what it’s truly like to be them. And sentience could look quite different in a machine than in a human or an animal. If we’re waiting for a machine to show animal or human-like behaviour, we might miss important evidence of sentience.

For now at least, experts agree machine sentience is a long way off. That said, there has been one notable dissenter. While testing an early version of what has since become Bard, Google software engineer Blake Lemoine publicly claimed that the system might be sentient. The company soon fired him for an alleged breach of confidentiality.

A lack of understanding

But sentience isn’t the whole story. Words only hurt if you understand them. It’s okay to be rude to a pig – so long as you’re not physically harming it – because it can’t understand what you’re saying. So it’s not enough for our hypothetical chatbot to be sentient: it would also need to understand language.

Modern chatbots don’t understand language the way humans do, and don’t form proper models of what the words and concepts in a sentence actually mean. Instead, using vast compilations of written text, they’re trained to recognise which words tend to follow other words, which lets them generate realistic text of their own. Essentially, they are highly sophisticated autocomplete systems, much like the predictive text on your smartphone’s keyboard.
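To make the autocomplete idea concrete, here is a minimal, purely illustrative sketch in Python: it counts which word tends to follow which in a made-up snippet of training text, then predicts the most likely next word. Real chatbots use vastly larger neural networks rather than simple counts, but the underlying task of guessing the next word is the same.

```python
from collections import Counter, defaultdict

# A tiny, made-up training text (real systems learn from vast amounts of writing).
corpus = "the cat sat on the mat . the cat ate the fish . the dog sat on the rug ."

# Count which word tends to follow which.
next_word_counts = defaultdict(Counter)
words = corpus.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' – the most frequent follower of 'the' above
print(predict_next("sat"))  # 'on'
```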

So perhaps we can relax. Chatbots aren’t sentient, nor can they really understand what we’re saying. It’s very far-fetched to claim they can be harmed by our words. So why bother being polite?

Human/AI partnerships

One answer is that these systems aren’t fully independent. Even the best chatbots need correction. Experimental tools like ChatGPT work unsupervised, apart from the odd behind-the-curtain policy tweak, but when accuracy really matters, humans will work directly with chatbots in real time. These human helpers will act as fact-checkers and translators, correcting notoriously unreliable chatbot output and clarifying ambiguous questions.

We can assume some companies won’t disclose this hybrid partnership unless they’re forced to by law. Depending on the context, companies will want users to think they’re either being served by a loyal and attentive human, or a magical, all-knowing chatbot.

So if you swear at a chatbot in the years to come, you may well be swearing at a human too. Ask any call centre worker about the emotional toll of abuse from faceless customers and you’ll understand how a human chatbot partner might feel after a day spent sifting through hundreds of hostile or inappropriate messages.

Setting an example

When Amazon’s Echo was launched, many parents were unhappy. The problem? Alexa complied with kids’ demands even if they didn’t say ‘please’. In response, Amazon eventually added a Magic Word mode that thanks users for asking nicely, in an attempt to reinforce good manners.

Setting a good example, then, is another argument for politeness. Every parent knows that kids learn by mimicking adult behaviour, so being polite even towards objects that don’t necessarily need to be treated kindly might encourage better habits in children, and perhaps even adults too.

Gender plays a part too. The last generation of voice assistants has mostly had female-sounding voices, which introduces subtle but unwelcome connotations: that women should be secretarial, eager to please, and subservient to others’ desires. Text chatbots like ChatGPT are programmed to have no gender presentation, but as these new AI models find their way back into voice systems, tech teams will again have to choose how these assistants should sound and what personas they project. If we are habitually rude to female-gendered AIs, we may unwittingly be supporting a damaging stereotype of female subservience.

Are we harming ourselves?

This, for many ethicists, is the point: even if our rudeness doesn’t harm someone else, it harms us.

In 2013, MIT researcher Kate Darling asked conference attendees to torture and dismember robot dinosaurs. She found most people wouldn’t do it, even though they knew the creatures couldn’t suffer. It seems we’re squeamish about doing things that feel wrong even if they don’t do any real harm.

For the philosopher Immanuel Kant, treating non-human animals well keeps us noble as a species and decent towards each other: ‘he who is cruel to animals becomes hard also in his dealings with men.’ Kant says rational humans have a duty to show empathy and shared feeling, so if we’re unkind, we violate this important obligation. Unfortunately, Kant seems to argue this is the only reason to be kind to animals, overlooking the feelings of the animal itself. More recent ethicists have corrected this omission and agree we should be kind to animals simply because they can suffer.

That aside, even when it comes to non-sentient machines, I think Kant has a point. Our shock at mistreatment, even if it doesn’t cause direct harm, is a good thing. It’s important that rudeness feels rude, that violence feels violent. If we found ourselves losing our emotional responses to, say, robot torture or even just needless hostility to a chatbot, I’d worry we’d started to lose a small part of what makes us ethical people.

A second opinion

Not every tech ethicist shares my view. Many have no qualms about being rude to chatbots, arguing we should see and treat AI as a mere tool, or even a slave.

But perhaps there’s another opinion we can draw upon. So I asked a chatbot.

‘Rude behaviour can reflect poorly on the user: Being rude to a chatbot can be seen as a reflection of the user’s character and behaviour, which can negatively impact their personal or professional reputation.’

I quite agree. Even though politeness may seem fussy or even superfluous when talking to machines, good manners cost nothing. I think it’s worth it.


Featured photo by D koi on Unsplash
