Even as a professional ethicist, I’ll admit it’s not easy to live an ethical life. Heck, it’s a slog at times. Choosing to live well means thinking critically about choices that most people make automatically. Sometimes it means committing to courses of action we don’t enjoy, drawing solace from the knowledge that the greater good matters more than our own immediate desires. It’s a right and important thing to do, but let’s face it: the ethical journey is sometimes lonely and difficult.
Sometimes it’s the decisions themselves that cause us anguish. How much of our income should we donate to charity? Should we take a lucrative job offer from a company with a chequered ethical history? In unlucky moments we may even face impossible ethical dilemmas in which there’s no perfect answer, just a selection of bad options we must choose from.
As Aristotle taught, it’s not enough to want to do good: what makes a person good is actually doing good things. So we also need to follow through on our decisions. But it can be hard to turn even straightforward moral decisions into action. A lifelong carnivore might find the moral argument for veganism compelling but still be unable to resist a juicy ribeye; a guilty teen may nonetheless struggle to own up to cheating on a test.
An artificial leg-up
Perhaps we could use some help. What if we had a trusted artificial advisor, an impartial ethics AI that could not only walk us through our toughest decisions but also give practical advice on applying those decisions in real life?
The idea isn’t so far-fetched. Ask Delphi, a prototype from the nonprofit Allen Institute for AI, already does a surprisingly decent job of suggesting answers to moral questions. Its makers claim 98% accuracy on certain topics, although we should perhaps be sceptical of such precise figures when it comes to ethics. General-purpose large language models like ChatGPT, meanwhile, tend to shy away from contentious moral issues, offering only the vaguest advice, likely on the counsel of their legal teams. But given how lucid and fluent modern LLMs are on other subjects, it seems clear they could offer moral advice if only they were allowed to.
But is it actually a good idea to consult AIs for moral guidance?
The main advantage an artificial ethics advisor could offer is an outsider’s perspective. This is, of course, the main reason we discuss tricky problems with other humans, whether chatting with a trusted mentor or soliciting the advice of strangers through forums like Reddit’s Am I The Asshole? (AITA) community, which is only too happy to pass retrospective judgement on worried posters’ actions.
If two heads are better than one, surely more heads are better still. Trained on millions of written texts from across the globe, an AI could hopefully draw upon the collective crowdsourced wisdom of a wide and diverse public, helping us overcome our own biases, blindspots, and self-centred temptations. Perhaps triangulating moral views through an AI would help us make more thoughtful choices, ultimately helping us all treat each other better.
Better still, we could infuse the AI with the moral wisdom of the greatest philosophers of all time. Imagine a system that would not only tell us how many people would approve of our decision, but also what Confucius himself would say about it. A broad, pluralistic AI studded with the insight of ethical experts could help us reach new levels of moral sophistication.
Trouble in philosophical paradise
One worry is that these machines could be too effective. To paraphrase media theorist Marshall McLuhan, every technological extension is also an amputation. Perhaps we’d find these machines so convenient we’d end up relying on them as moral crutches, leaving us unable to make ethical decisions ourselves. If you’re only doing the right thing because a computer said you should, are you behaving admirably or just becoming subservient to a digital overseer?
Should we even trust technologists to accurately train machines that understand ethical complexity? This may end up being one of the hardest challenges in modern-day AI. The stakes are inevitably higher for humans. An AI may lose consumer confidence if it does something wrong, but that’s nothing compared to the way our personal reputations can be trashed by an unwise moral decision. Without a genuine stake in the future, an AI doesn’t have to live with the consequences of its recommendations like a human does.
An absence of reason
But perhaps the biggest cause for hesitation is how an AI would actually make an ethical recommendation. Ethics needs reasons. If you want to defend a particular point of view, you should be able to justify it. Why is stealing wrong? Because it unjustly deprives someone of their property and infringes upon the rights society has agreed they have. Why should we keep our promises? Because breaking them undermines collective trust and hampers the prospect of genuine friendship, which is essential to our wellbeing.
Reasoning is the downfall of today’s AI systems. Modern LLMs don’t understand concepts like we do: they essentially use probability formulae to regurgitate and remix humans’ words they’ve previously ingested as training data. ChatGPT can’t weigh up the duties involved in an ethical dilemma, nor can it truly consider the type of person you aspire to be. All it can do is draw on the information it’s been fed and string together a plausible explanation from that. This explanation might even look like a valid reason from the outside, but without the AI really understanding the situation, trade-offs, and consequences of ethical decisions, it’s hard to say an AI can genuinely justify its advice.
This may change in time. It’s possible that future AIs will be able to understand what words really mean and how ideas relate to one another, meaning they can truly start to understand the context of our ethical questions. I’d certainly be tempted to listen to the ethical advice of a system that understood the world more deeply than I do. But until then, I suggest we ignore the moral advice of machines. In the end, ethics is about how people treat other people and other living beings. We alone are responsible for our actions, and it’s a mistake to palm that responsibility off on a computer. The road to living well is never easy, but for now at least it’s a road we must walk down ourselves.