
Future Ethics with Cennydd Bowles at SustainableUX

This is a transcription of Cennydd Bowles’s talk from SustainableUX 2019, offered with kind permission.

Cennydd is a London-based digital product designer with 16 years of experience advising clients including Twitter, Ford, Cisco, and the BBC. His focus today is on the ethics of emerging technology. He’s lectured on the topic at Facebook, Stanford University, Google, and, most prestigiously of all, SustainableUX. His new book, Future Ethics, was published last year and features a great chapter specifically on environmental ethics and technology, which is what we’re all about.


Read the transcription:

In the days of empire, the British started to grow concerned about the number of cobras there were within India. So the governors came up with a simple economic remedy: why don’t we just offer a bounty for cobra hides? And that policy was a hit. In fact, it was so successful that enterprising Indians started to breed cobras just for the bounty. Unsurprisingly, that caused a suspicious uptick in the number of bounties paid out. And eventually the Brits clocked on and cancelled the scheme.

Now this posed a problem for the breeders. They now had all these worthless snakes. What were they going to do with them? So they chose to loose these surplus serpents into the wild, causing the wild cobra population to surge past its previous levels and, of course, thoroughly defeating the point of the program.

Almost every action we take introduces the risk of unintended consequences. Technology is no different. The cultural theorist Paul Virilio, who died fairly recently, knew this well. In his famous words — When you invent the ship, you also invent the shipwreck. Every technology carries its own negativity, which is invented at the same time as technical progress.

And boy, have we discovered these negativities within our field. We’ve served up countless examples of technology gone wrong. The press is now as likely to call technology a danger as a savior.

Now to be clear, these terrible mistakes aren’t intentional. They’re mostly the result of these unintended consequences. But that doesn’t mean the industry gets to escape blame for them. There may have been no malice involved, but there was unfortunately plenty of negligence. And unless this changes, I worry that this will hamper the growth of our industry.

Over the next 10 to 20 years, the tech industry is going to require an enormous amount of trust from its users. We will ask people to connect more and more of their lives to the products and the services that we build. We’ll ask them to trust us with their homes, their vehicles, even the safety of their families. The potential harms implicit within emerging technology exceed those that we see even in technology today. The stakes will only get higher with time.

Algorithms could reinforce and cement bias in access to work, amenities, even justice. Thirsty data centers and aggressive upgrade cycles could contribute to climate change. Surveillance systems could silently assemble the building blocks of totalitarianism. And autonomous machines could be weaponized, putting death-dealing power in the hands of tiny groups. Automation could ravage the world of work, deepening inequality and upending our economy. And of course any kind of advanced artificial intelligence could force us to redefine what it means to be a person.

It’s time, therefore, that our industry took its ethical responsibilities seriously. And this has been my focus now for the past three years or so. So today I’d like to talk about what I’ve learned along that journey and give some thoughts on how we as a community can respond to those challenges.

Perhaps one of the most fundamental shifts that we have to enact in our mindset is to recognize that technology isn’t neutral. And of course it never was.

In 1980 the philosopher Langdon Winner wrote a now-famous paper called “Do Artifacts Have Politics?” And his conclusion — essentially, yeah, damn right they do. And he gave the example of Robert Moses. Robert Moses was the city planner of New York City in the middle of the last century. And Moses was a racist. We have plenty of evidence now of Moses’s prejudiced views. And Winner goes on to set forth the case of how Moses actually racially segregated the entire New York City area through the use of technologies, particularly things like low bridges, such as we see in this slide here, built intentionally low over the parkways leading to Long Island.

Now the point of this design feature, if you like, was that black people, mostly, tended not to have access to private transport within the city at that time. And by building these bridges intentionally low, buses couldn’t pass under them. And as a result, buses were denied access to the beaches of Long Island. So Moses had actually used these hulking great big, inert structures of concrete and steel and stone to effect his racist social policies on the entire city.

So if something as inert as a bridge can have this kind of moral, social and political impact, then of course the technologies that we build day to day can as well, being full of language and light and energy.

It’s a mistake to separate technical capabilities from human capabilities. These things act together. They become interwoven, hybridized actors. Things change what people can do and how they do it. It is true to an extent, that old adage that guns don’t kill people; people do. But a gunman, the hybrid actor of a human and a technology coming together, sure as hell can.

So we have to abandon this belief that the things we build are just neutral tools. We have to recognize therefore that we can’t wash our hands of the social, ethical and political consequences of our work. This can be a tough sell to some. Technology and algorithms, and the bedrock of it all, data, are often seen as clean, objective, neutral things.

Ironically, the main ideology that a lot of technologies carry is that technologies carry no ideology. But of course we know that that isn’t the case. Professor Geoffrey Bowker came up with this memorable phrase — Raw data is an oxymoron. All algorithms, all data carry the biases of the people and the cultures that collect, process, analyze and present that data.

Here you can see a fantastic piece of information design by Simon Scarr at the South China Morning Post, and Scarr has chosen the very explicit and in fact biasing metaphor of dripping blood to represent, or to tell the story of, the heavy human cost of the Iraq conflict. As I say, I think it’s a wonderful piece of design. It almost literally drips with meaning.

However, on the right here, we see essentially a presentation of exactly the same raw data that completely transforms the meaning of that data. This is by Andy Cotgreave, who works with Tableau Software. And Cotgreave completely transforms the inherent meaning of this same data with just three changes: he flips the data on the vertical axis, he changes the hue away from red to blue, and he rewrites the headline to point to the decline in deaths. The same raw data with a significantly different meaning, with only a couple of fairly trivial design changes.
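To make this concrete, here is a minimal sketch in Python, using made-up illustrative figures rather than the SCMP’s actual dataset, of how those three presentation choices (axis direction, hue, headline) can flip the story told by identical numbers:

```python
# A minimal sketch with hypothetical figures -- not the real dataset -- showing
# how axis direction, hue and headline change what the same data seems to say.
import matplotlib.pyplot as plt

years = [2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011]
deaths = [12000, 10800, 14800, 27800, 24600, 9200, 4700, 4000, 4100]  # illustrative only

fig, (ax_left, ax_right) = plt.subplots(1, 2, figsize=(10, 4))

# Left: downward red bars evoke dripping blood; the headline stresses the toll.
ax_left.bar(years, deaths, color="darkred")
ax_left.invert_yaxis()
ax_left.set_title("Iraq's bloody toll")

# Right: the same numbers, upright and blue; the headline stresses the decline.
ax_right.bar(years, deaths, color="steelblue")
ax_right.set_title("Iraq: deaths on the decline")

plt.tight_layout()
plt.show()
```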

A further mental shift we have to make is to recognize that design actually is the application of ethics. Now sometimes that relationship is explicit. If you’re, say, designing razor wire, then whether you realize it or not, you’re making a very clear statement that someone’s right to private property is so important that we should injure someone who chooses to contravene that right.

But even if you’re not making something as morally laden as, say, razor wire, or weapons, all design and all making is really a statement about the future. Whenever we design, whenever we create, we are making a claim about how we should interact with technology in the years to come, and therefore by extension, how we should interact with each other and with our governments and with society, and of course with our planet.

Now at the same time, in making that decision, we’re discounting thousands of alternative futures. There is therefore a clear ethical component, because we’re making a case for how we should all live.

Ideally, we should get to a perspective where ethics becomes an ethos, a lens through which we see the world around us, and transforms the way that we work. But of course we also have to start somewhere. We’re coming from quite a long way behind where we should be, and there are concrete steps that we can take all the way through the design process to make sure that ethical considerations are foregrounded.

So this is my oversimplification of the design process. Of course, designers in the audience will know that it’s far messier than this.

Right at the start of the project, we have an opportunity to make sure that we are putting ethics at the heart of how we define a project, its success criteria and things like that.

Perhaps the first place to start is to redefine who we think of as a stakeholder for our project. If you open any MBA textbook, then there’ll be plenty of information on stakeholder analysis. And this will be some quite detailed work on how to identify people who can affect your work. Typically these are of course your colleagues, or they might be regulators or partners, generally people in suits. But of course there’s a second group of stakeholders for any project and it’s a cohort that we often overlook — the people who can be affected by a project. Too often we’ve overlooked those people and we’ve not really considered their needs.

If we consider Airbnb, I think this is a classic example. Airbnb, according to the canonical mindset of user-centered design is a very well designed service, but it’s a well designed service for precisely two groups of people. The first group is people who have spare property that they want to put on the market or make available for short term lettings. And the second group is of course people who want to rent that property, usually in a different city. And for those folks, that service works particularly well.

But of course it doesn’t end there. The problems, the costs of that system, the costs of the decisions that Airbnb have taken, the “externalities” to use economic language, all fall on people outside of those two groups. They fall on local residents, local neighborhoods. If you’re unfortunate enough to live next door to an Airbnb rental property, then you no longer have a neighbor. You have a new set of neighbors every few days, each of them dragging those damn wheelie suitcases over the cobbles at 4AM. They’re not going to spend their money locally. They’re going to spend it, more likely, in the tourist traps. They’re not going to be interested in forming a neighborhood and a society within your particular region. Instead, they will obviously bring their own set of assumptions and norms of behavior. And of course, in doing so, they help to push up the cost of renting within your area as well. So those are the costs paid by people who weren’t really part of the definition of stakeholders that Airbnb came up with.

So we have to find a way to move beyond user needs, or at least to add the idea of focusing on society’s needs. User-centered design, I think, has become significantly individualistic and narrow. We have to find a way to consider these large cohorts within this definition phase, and to try and use those lenses to extrapolate where those harms might potentially fall and how we can ensure that we work for the benefit of broader society.

We might even want to try and consider abstract concepts here. If, for example, you work for Facebook, then I think there’s a reasonable case that some of the work the company has come up with over the last few years has harmed things like democracy or the freedom of the press, or at least it’s started to tease apart the fabric of these things. Perhaps we need to find a way to explicitly protect these values, these abstract concepts, within the design and development work that we do.

One way we can try to extract or extrapolate risk within this phase is to create what I call a persona non grata. This is essentially an inverse of a regular persona, almost a dark persona, used to shed light on the bad actors within a system. So this might be, say, a hacker, a troll, a terrorist, a harasser, and they would get the full persona treatment. We would explicitly list out their goals and motivations when they want to use our service for ill. And of course our job then becomes to hamper these individuals. It’s almost a reversal of the typical user-centered design process: how can we try to reduce their task success with what they want to try and do? Exposing them in this way, I think, is an important step to try and treat the problem.
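As a rough illustration (the names, fields and countermeasures below are hypothetical, not from the talk), a persona non grata can be captured in the same structured way a team might record its regular personas:

```python
# A minimal sketch of a "persona non grata" as a structured record: the same
# treatment as a regular persona, but the design goal is inverted -- we list the
# bad actor's goals and then the countermeasures meant to reduce their success.
# All names and fields here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PersonaNonGrata:
    name: str
    archetype: str                      # e.g. "harasser", "troll", "data thief"
    goals: list[str] = field(default_factory=list)
    motivations: list[str] = field(default_factory=list)
    countermeasures: list[str] = field(default_factory=list)  # how we hamper them

harasser = PersonaNonGrata(
    name="Sam",
    archetype="harasser",
    goals=["contact a target repeatedly", "evade blocking"],
    motivations=["intimidation", "attention"],
    countermeasures=["rate-limit messages from new accounts", "detect block evasion"],
)
```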

When we come to the idea generation phase, we have I think a couple of potentially very valuable strategies. When we talk about unintended consequences, it’s helpful to be precise. Just because a consequence is unintended, it doesn’t mean that it’s unforeseeable. We can actually take steps to tease out the second and third order effects of our work.

If you talk to an ethicist about this, they’ll often refer to the idea of a moral imagination, the idea that we should anticipate what might happen next and then figure out the ethical impacts that that might have. But that feels pretty abstract, right? That’s a thought experiment. It’s easier to do if you have some kind of prompt.

So I like to borrow from the field of speculative design and to create something that I call a “provocatype.”

It’s easiest if I just walk through an example. So here’s one. This is by Marcel Schouwenaar and Harm van Beek, two designers in the Netherlands, and this was a provocatype that they built for two energy clients, also in the Netherlands.

And the scenario, the brief that they were given, essentially brings together two quite likely futures: one, a future of scarce energy provision, and second, a future of widespread electric vehicles. And it’s a reasonable assumption to make that people won’t all have private charging infrastructure for electric vehicles, because the drain is significant and these are expensive things to install. So this is looking at some kind of public charging infrastructure within that type of future.

And the challenge we have then, if we have scarce energy and increased demand on the grid from electric vehicles — how do we prioritize, how do we allocate those precious resources? So this provocatype is an attempt to answer that question.

This is a quite large physical device. It’s perhaps two meters high and, as you see from the picture here, it’s made of metal and steel and LEDs and things like that. And I’ll run through how it works from the bottom. So at the bottom, as part of the provocatype, they’ve actually added these charging cables. So you pick one up and you plug it into the slot. Above that we have LEDs; these light up to show the color that you’ve been allocated within the system. Then above that you have an RFID reader. And you take the RFID card that you’ve been allocated, you tap it onto that reader and it authenticates you and allocates your status, if you like, within that system. Above that we have a couple of controls, these little colored knobs that we see. And the idea behind this is you can twist these to request different types of energy. Maybe you just need a quick charge — just 10 minutes, a quick top-up to carry out some errands. Or maybe you need a full charge for the vehicle. So you use these knobs to control your request.

And then above that we see essentially the algorithmic output, how the system has decided to allocate energy provision. Starting at the bottom, that’s our current time, and you see red and blue have a little bit each. Then as we go up a little bit further into the future, blue gets their turn, and then green will finally get a chance as well. So this shows essentially the priority and the queue for energy. Notice also that we have different amounts of energy bandwidth, if you like. We have points of extreme constraint and we have points further up of wider energy availability.

So this is really an attempt to expose what this algorithmically driven world — maybe we could call it an “algocracy” — might look like in a decade when applied to say, public charging infrastructure.

But the most interesting thing for me is that the designers didn’t stop there. They actually went on to prototype the RFID cards themselves. These are the cards, remember, that you use to authenticate with the system. So people are issued with these cards that allocate different energy status and different social status. So in the middle we see a card given to a doctor. The doctor has top priority. But there’s a caveat: use this with discretion; there’s a threat that unauthorized use is punishable. Beneath that we see a probation ID. This might be something given to, say, a recent offender or someone out on bail. And their energy use is capped. They’re given low priority within the system. So we see how social status and potential inequality could be encoded, and in fact reinforced, through these kinds of systems and through energy scarcity.
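To show the kind of logic such a provocatype makes visible, here is a minimal sketch of my own, not the designers’ actual system: scarce charging capacity allocated by card status, with capped allowances for lower-status cards:

```python
# A minimal sketch (my own illustration, not the designers' real logic) of the
# priority scheduling the provocatype exposes: scarce charging capacity is
# allocated by card status, with capped allowances for low-status users.
from dataclasses import dataclass

PRIORITY = {"doctor": 0, "standard": 1, "probation": 2}   # lower = served first
CAP_KWH = {"doctor": None, "standard": 40, "probation": 10}

@dataclass
class Request:
    card_type: str
    requested_kwh: float

def allocate(requests: list[Request], available_kwh: float) -> list[tuple[Request, float]]:
    """Allocate scarce energy by card priority, respecting per-status caps."""
    allocations = []
    for req in sorted(requests, key=lambda r: PRIORITY[r.card_type]):
        cap = CAP_KWH[req.card_type]
        want = req.requested_kwh if cap is None else min(req.requested_kwh, cap)
        granted = min(want, available_kwh)
        allocations.append((req, granted))
        available_kwh -= granted
    return allocations

# A doctor's full charge is served before a probationer's capped top-up.
print(allocate([Request("probation", 30), Request("doctor", 50)], available_kwh=60))
```

Writing the priority table out this explicitly is, in a sense, the provocatype’s point: the social sorting is no longer hidden inside the interface.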

It’s important to note the designers here aren’t saying this is the best solution. Instead, they’ve created a design that gets the right conversations happening, that stimulates some moral imagination. If we make certain assumptions about how technology should happen over the coming years, what are the consequences of that? How might those benefits and harms fall upon this wider set of actors within the system?

So for me, the provocatype is an interesting sort of artifact that creates a wormhole between making and research. It is a prototype; it is a designed object, but it exists mostly as a research probe.

Another reframing that’s particularly important I think at this phase of the process is to think carefully about how we view ethics.

Ethics obviously can be seen as a drag — something that might slow down or hamper innovation. But I like the phrasing from Peter-Paul Verbeek, who’s a philosopher of technology also from the Netherlands. He says that ethics should accompany technological progress, it doesn’t necessarily have to directly oppose it. So I think it’s helpful to start to see ethics as a constraint, a way to infuse ethics into our work, a way to view the opportunities that we have in front of us.

Designers understand constraints quite well. We know that they may well indeed trim off some possibilities. I mean, that’s indeed the point of ethics, in some way: to prevent certain types of harmful development. But designers also know that constraints can be seeds of innovation. They can in turn inspire new design solutions that otherwise wouldn’t have existed. I think that’s an important reframing, and quite a subtle one, but one that can work strongly to our advantage when we’re talking about bringing this stuff into our work.

When we come to the design phase proper, if you want to call it that, perhaps one of the most powerful approaches we can take is to materialize things that have previously been hidden. So much of the ethical impact of technology comes because certain things have been obfuscated or hidden beneath the hood. Under the mantras of simplicity that run through particularly the design industry, we’ve treated things like data flows as complexity. We’ve trained the user that they have no business poking around and understanding what’s happening beneath the surface. But so many of tech’s ethical dangers stem from this invisibility.

Sometimes people say we should make a case for transparency here, to try and change those assumptions and the functions there. But I think that’s entirely the wrong framing. Transparency of course means that we can see through something. And I’m much more interested in bringing things into the visible spectrum, making them material. By shifting things like, say, data flows or algorithmic persuasion and bringing those into the light, people can make more informed choices.

Perhaps one of the best examples in current consumer tech is the energy monitor that we see in a lot of hybrid vehicles. This is from the Prius, I think quite an old version, 2003 or so. But what I love about this display, this piece of design, is that it gives insight into a complex system of energy flow within a vehicle, something that’s previously been completely hidden, even within some hybrids. And this means that you can learn the effects of your driving style, in other words your interactions with the device, and then see how that affects the flow of this invisible resource. And that means then, of course, that you can adjust your actions accordingly: you can start to shift gear differently or modify your acceleration and braking to try to preserve energy and obviously to maximize your mileage. So by materializing the invisible, we’re actually helping people act in more sustainable ways.

Here’s an example for some hypothetical home hub. Think something like a Nest or one of these IoT devices that aggregates and coordinates a lot of home automation. And let’s say, for example, that we have a system that’s provided free of charge to users, and in exchange you agree to see some adverts on the device. A system like this, taking that idea of exposing, materializing the energy flows within the Prius, and applying that to data, would allow us to visualize at a glance just what’s happening within the system, materializing these previously invisible data flows. So we might have, say, updates on the data flows that are going to home automation devices, but also the data that’s being taken from users and then passed through some system of anonymization to advertisers. What I particularly like about an approach like this is that it gets us closer to informed consent, which is something that a lot of these systems currently pretty much lack. For example, if I’m a user of the system, I can then say — well, actually I don’t think it’s right that you have access to certain biometrics — and I could then go in and intervene and remove that, take it out of that agreed data exchange. So I think this brings us closer to the utopia of a fair, consensual data exchange, which as I say is sadly lacking.
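A minimal sketch of that idea, under my own assumptions rather than any real device’s API: each data flow becomes a visible, named object that the user can inspect and revoke, instead of an invisible background exchange.

```python
# A minimal sketch (hypothetical home hub, not a real product API) of materialized
# data flows: each flow is a named, visible object the user can inspect and revoke.
from dataclasses import dataclass

@dataclass
class DataFlow:
    category: str        # e.g. "biometrics", "location", "room temperature"
    destination: str     # e.g. "advertisers (anonymized)", "home automation"
    consented: bool = True

flows = [
    DataFlow("room temperature", "home automation"),
    DataFlow("location", "advertisers (anonymized)"),
    DataFlow("biometrics", "advertisers (anonymized)"),
]

def revoke(flows: list[DataFlow], category: str) -> None:
    """Let the user withdraw a category from the agreed data exchange."""
    for flow in flows:
        if flow.category == category:
            flow.consented = False

revoke(flows, "biometrics")   # "I don't think it's right that you have my biometrics"
for flow in flows:
    status = "sharing" if flow.consented else "withheld"
    print(f"{flow.category:16} -> {flow.destination:28} [{status}]")
```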

Another design tenet we often hear, and one that the industry has relied upon for so long, is the idea of — don’t make me think — this is of course the title of Steve Krug’s bestseller.

And yes it is often true. It is generally good to minimize friction in interfaces. But the downside is that at times it can cause us to take agency away from the user. Sometimes that principle is better inverted. Occasionally we should make the user think, and actually increase the friction within the things that we build.

Coming back to this same hypothetical device: if we have, say, a request for sensitive data, if that home hub would like to access some fairly personal information like my location or my home location, some biometrics for example, then I think we have an important role here to ensure that the user really understands the value exchange that they’re about to enter into.

We will of course promise that the data is anonymized. But anyone who’s worked in the privacy field knows that any anonymized data always carries some risk of future re-identification, with better algorithms or other data sets to cross-correlate against that data. So I think it’s important that we educate the user about that risk and actually make them pause and assess whether this is a value exchange they want to enter into freely. So I’m suggesting here we actually increase the friction — I’m going to make them get a stylus or their finger out and physically write the words “I agree.” Of course, this is a contradiction of classic usability; we want to reduce friction. But I think in this case it’s certainly warranted.
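As a small hypothetical sketch of that added friction (not a prescription for the exact interaction), the consent step could look something like this:

```python
# A minimal sketch of deliberately added friction (my own illustration): before
# enabling a sensitive data request, the user must type the consent phrase
# themselves, after seeing a plain-language note about re-identification risk.
def request_sensitive_access(category: str) -> bool:
    print(f"This device is requesting access to your {category}.")
    print("The data will be anonymized, but anonymized data can sometimes be")
    print("re-identified in future when combined with other data sets.")
    typed = input('To consent, write the words "I agree": ')
    return typed.strip().lower() == "i agree"

if request_sensitive_access("biometrics"):
    print("Access granted.")
else:
    print("Access denied. No data will be shared.")
```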

Another method we can employ at this stage is to consider what’s called the veil of ignorance. This is an idea that comes from the philosopher John Rawls in his book “A Theory of Justice.” The idea behind the veil of ignorance is essentially that a fair system is one designed as if we don’t know our place, we don’t know where we’ll end up in the system. So let’s say we’re designing a welfare program. If we’re designing behind the veil of ignorance, that system should be fair whether we end up as a taxpayer, a welfare recipient, an administrator of the scheme, however the dice land, essentially, we should feel like we’ve got a fair deal within that system. Essentially, this is about agreeing the rules of the game before dealing the hands.
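One toy illustration of that reasoning, my own framing rather than anything from the talk: compare candidate policies by the outcome of whoever ends up worst off (the “maximin” reading of the veil), rather than by the average.

```python
# A minimal sketch (hypothetical scores, my own framing) of veil-of-ignorance
# reasoning: since you could land in any role, judge each policy by how its
# worst-off role fares, rather than by the average outcome.
policies = {
    "policy_a": {"taxpayer": 7, "welfare_recipient": 2, "administrator": 6},
    "policy_b": {"taxpayer": 6, "welfare_recipient": 5, "administrator": 5},
}

def worst_off(outcomes: dict[str, int]) -> int:
    return min(outcomes.values())

best = max(policies, key=lambda name: worst_off(policies[name]))
print(best)  # policy_b: fairer to whoever ends up worst placed
```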

But I think perhaps the easiest vehicle for some of these conversations to happen is through critique. It lends itself very well to this sort of ethical discourse.

One way we can do this is to appoint what’s called a designated dissenter. This is an idea that I came across in Eric Meyer and Sara Wachter-Boettcher’s book “Design for Real Life.” And this designated dissenter is essentially a role of constructive antagonism. So they [inaudible] the occasional grenade to challenge the assumptions that the team has made.

An example I like to give would be, say, if we have a form that’s asking for marital status. Then this designated dissenter could role-play as someone going through a messy divorce. Like — how dare you, why do you want this information? — just to challenge those assumptions. And I think that can be a very healthy way to encourage debate about the moral implications of the decisions that we’re about to make.

Of course, this is a role you want to rotate. You don’t want one person to be the designated dissenter for months at a time, because the team will very quickly find a way to route around them and that person won’t exactly have a fun time. But rotating that role helps everyone to be attuned to the potential dangers of the decisions that you’re making.

The second thing that I think is particularly important at this stage is to rely a little bit on what’s come before us. There’s a somewhat frustrating tendency, I think, in the tech industry to try to solve everything from first principles, to believe that we’re the first on any new shore. But of course when it comes to ethics, there are millennia of thought before us. So this is the ideal opportunity to consider what’s come before and to try to apply what I call the four ethical tests, which are inspired by the three main schools of contemporary ethics. So I’ll go through those now.

The first test I’ll offer is simple. What if everyone did what I’m about to do? Would a world in which this was a universal law of behavior — would that be a better place or a worse place?

And this idea comes from Immanuel Kant. It’s what’s known as a deontological theory of ethics, and deontologists believe that ethics is about duty. It’s about following some kind of moral obligation, some kind of set of rules. So let’s apply this question to, say, dark patterns. Dark patterns, as we all know, are intentionally deceptive interfaces that are designed to try to push the user towards certain behaviors that are profitable to us, but not necessarily for them.

So let’s say I have someone in my organization, I mean, I hesitate to throw a product manager under a bus, but it’s likely to be a product manager, I’m afraid, saying — I’d like you to ship a particular dark pattern. Let’s say I have a subscription product and my PM wants me to make the auto-renew flag default to On rather than Off. Well, what if everyone did what I’m about to do? If we believe that this is an intentionally deceptive solution, then I think this question gives us pretty clear guidance. If everyone shipped all the deceptive interfaces, all the dark patterns that they could think of, well, it’s pretty clear what happens. The world of technology gets significantly worse. We will trust our devices far less. We really won’t want to engage with connected technologies in anything like the way that we do today. So this, I think, gives us ammunition to rebuff that approach and say, I don’t think this is something that’s for a broader benefit.
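As a tiny hypothetical illustration of what that choice looks like in configuration (neither of these is a real product’s settings):

```python
# A minimal sketch (hypothetical settings) contrasting the dark-pattern default
# with the honest one for the subscription example above. Defaulting auto-renew
# to On quietly commits users who never actually made that choice.

# Dark pattern: the deceptive default the hypothetical PM is asking for.
dark_signup_defaults = {"auto_renew": True, "renewal_reminder_email": False}

# The universalizability test points the other way: an explicit opt-in.
honest_signup_defaults = {"auto_renew": False, "renewal_reminder_email": True}
```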

Another deontological question, another test that we can apply, is this: am I treating people as ends or as means? This also comes from Immanuel Kant. This probably takes a little bit of unpacking. When we think about this in the context of digital technology and design, this is mostly about users — am I treating them as free individuals with their own goals that are as important, or potentially even more important, than my own? Or am I seeing them as ways for me to hit my own targets?

I don’t think this is a question that most designers should struggle with, but I do see, within larger data-driven companies, the answer to this question becoming rather blurred. In some of these companies, the framing of users slowly shifts over time. Users become not the raison d’être, not the reason that we’re doing this in the first place; they start to become experimental subjects. They start to become means for us to hit our own targets, our KPIs, our OKRs, whatever they may be. In short, sometimes in these types of company, people become masses. And when that happens, unethical design is the natural result.

The third ethical test we can offer reflects the second main branch of modern ethics, called utilitarianism — am I maximizing happiness for the greatest number of people? And by extension, am I minimizing suffering for as many people as I can?

This utilitarian approach is sometimes called a consequentialist perspective, and it’s called that because these folks, these utilitarians, aren’t really interested in moral rules or duty or obligation; they care about impact. They care about what actually happens and the effects that their choices have. And this feels somewhat promising, I think, to the tech industry, because it has a hint of measurability. It feels like we might be able to do something here, to assess whether our decisions have had a positive outcome or not.

There’s an offshoot of ethics, if you like, a growing interest in what’s called scientific morality: the idea that we can actually try to calculate, through looking at MRI scans, galvanic skin response, self-reported satisfaction and so on, what decisions and what rules we should live by that do indeed maximize happiness for the greatest number of people.

I must admit, I’m personally not convinced by those efforts, but there is a growing interest in that. And this is perhaps one of the advantages of utilitarianism: it gets us a little bit closer to having some kind of replicable process for making these decisions.

But of course it comes with some downsides. One is, well, are we really expected to perform this arithmetic for every decision that we make? It’s just going to turn us into number crunchers rather than living, breathing people. And it’s also particularly prone to what’s called the tyranny of the majority. If you have, say, 99% of a population that wants to exile or execute a 1% minority, [inaudible] you were utilitarian, it at least leaves the door open for that conversation to happen. It can lead to some fairly counterintuitive moral results.

The fourth ethical test that we can offer in critique is this: would I be happy for this to be a front-page story? And this is grounded in the third main branch of modern ethics, which is called virtue ethics. Virtue ethicists aren’t interested in rules, or really in outcomes. Instead, they’re motivated by moral character. The choices that we make as individuals, do they live up to certain values? Do they live up to the sort of virtues that we want to aspire to? So this is really a question about accountability. Would I sign my name to a particular decision? It speaks to our identity, and I think that’s an important hub, a real focal point for ethical reasoning.

Again, there’s perhaps a downside here: if you’re acting only out of fear of embarrassment, well, that sounds like fairly shaky ground for true moral action. If you’re only doing something because you’re worried people would criticize you, well, I think if you’re Immanuel Kant, you’d probably object to that.

So any of these four ethical tests we can draw upon throughout the design process. But I think they’re particularly salient to the critique phase, to try and draw out those deeper questions and to lend some rigor to the way that we talk about ethics.

Finally, when we get to the testing phase, of course we have an opportunity to test whether our assumptions were right, whether the impacts of our technologies are actually going to be as positive as we’d hoped.

Kate Crawford, the well-known researcher at the [inaudible] foundation, and at Microsoft Research as well, talks about fairness forensics. These are essentially mathematical and statistical techniques that we can apply to training data sets, for example, to assess whether they might be skewed and biased, and therefore obviously biasing the decisions that algorithms make.
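Here is a minimal sketch, with toy data and not one of Crawford’s actual techniques, of the kind of check this implies: compare how a labelled training set treats different groups before a model bakes that skew in.

```python
# A minimal sketch (toy data, my own illustration) of a basic fairness check on a
# training set: compare positive-label rates across groups and report the ratio.
from collections import Counter

# (group, label) pairs from a hypothetical hiring dataset: 1 = "shortlisted"
samples = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

totals, positives = Counter(), Counter()
for group, label in samples:
    totals[group] += 1
    positives[group] += label

rates = {g: positives[g] / totals[g] for g in totals}
print(rates)                                   # {'A': 0.75, 'B': 0.25}
print("disparate impact ratio:", min(rates.values()) / max(rates.values()))  # ~0.33
```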

But I think it’s important to balance these quantitative approaches with qualitative approaches too. And this is really where deep research has no substitute: testing our assumptions with diverse participants, because of course, who better to act as that designated dissenter than someone who has every cause to dissent? Getting people in, therefore, to understand those contexts of abuse, to test the provocatypes that we’ve built.

A lot of the time these folks will actually be from vulnerable groups. Per the discussion I had right at the start, the way that technologies and humans tend to operate together, it sadly ends up being that most of the harms, or many of the harms, from technology fall upon society’s most vulnerable. So we need to ally with those people. We need to listen and understand when they start to warn us about the consequences of the decisions that we make. We need to bring activists who are representative of those communities into our companies, into our design studios, to really deeply understand those contexts before our products are released and actually enacted upon them.

Underpinning all of this, we have an opportunity to create what I would call ethical infrastructure, ways to improve the company’s resilience and lend it the habit, if you like, of making proper ethical decisions.

The idea of diversity in business is certainly a politicized issue at the moment, but I make no apologies for bringing it up here. I think diverse teams can act as early warning systems for ethics. So hiring teams with high levels of both inherent and acquired diversity can really help inoculate yourself against ethical risk. I’ve been in critique sessions where I’ve seen people who come from different backgrounds, who are not essentially like a lot of the other folks within that team, stick a hand up and say: well, you do realize this decision we’re about to make wouldn’t play that well with the folks I know from my communities, where we would behave in a different way or we would look at it through a different lens. Hiring for diversity helps to surface that risk within the design process rather than externally, once you’ve shipped the product.

If you’re fortunate enough to be a leader within your organization, then you of course can help to instill the right behaviors. You can make it clear that ethical behavior is part of everyone’s responsibility, and you can set the right rewards in place for that, rewarding behaviors and not just results.

Another way to concretize this is to instill good core values and design principles. These become essentially tie-breakers, I suppose, for ethical debates. If we have two perspectives at loggerheads, then we can retreat to those core values and say: well, as a company we have agreed that we will always operate by these foundational principles.

Now some of these questions do start to get into the realm of politics. Tech workers are finally starting to see the power of collective action. We look at what’s happened, particularly at Google with Project Maven and the Google Walkout. We’re seeing that employees are starting to recognize that while they may be individually quite weak, collectively they’re strong. Tech workers are hard to hire, they’re expensive, they’re hard to replace, and of course they’re able to stop production: quite simply, lift your hands off the keyboard and not a whole lot gets built.

The tech industry is starting to be pressured now by both political wings. The pressure on our field will increase externally. More regulation is guaranteed. GDPR, for example, has just started that process. So I think there is perhaps a duty on those of us who are literate in these topics to try to get involved in that process, in the regulatory process, to shape that future. Because no one, not even the most pedantic bureaucrat, wants bad regulation. If there is to be regulation imposed upon our field, we all want to make sure that it’s good regulation, regulation that truly protects the people it needs to but doesn’t hamper some of the upsides, if you like, of the technologies that we build.

Ultimately, this kind of change has to come through practice rather than theory. Morality is a muscle that needs exercise. The choice to live well is just that: it is a choice. It’s not some abstract quality that gets bestowed on some lucky people. So we have to ask ourselves uncomfortable questions — how might I be screwing up in my everyday work? Thinking back to those ethical lenses, those ethical tests, which of those makes the most sense to me? How can I deepen my knowledge of that? What would my limit be? What would be something that would make me have to refuse a request, and then how would I have that conversation? How would I begin to broach that topic with my colleagues, my peers, my superiors and so on?

Ultimately, I believe where it’s safe to do so, the road to moral behavior involves increasing personal risk.

Cass Sunstein, who some of you may know as one of the architects of Nudge theory, coined the idea of the norm entrepreneur. It’s kind of an ugly phrase, but it’s a label for someone who stands up and says: well, things don’t have to be the way that we’ve always done them.

Now, of course, we have to recognize that it’s a privileged position. Not everyone has the safety and the comfort to be able to do that. But I will say if you feel comfortable and safe and respected within your team, within your industry, then you’re in the perfect position to use up a little bit of that goodwill to push for moral change.

But we have to be realistic as well. That journey, I can tell you from experience, can be difficult. That work can be emotionally draining. It becomes easy to punish yourself for your own mistakes and your own ethical inconsistencies.

The good news and what I cling to, and what gives me hope in this, is that this journey can still help bring us all closer to the kinds of people that we want to be in the long run. But it does mean we have to draw from each other. Given the challenges of the future, we’re going to need all the thoughtful technologists that we can get — people who care enough to make a difference in this crucial industry and want to help it navigate toward better futures.

Of course, I dearly hope that you will choose to be one of those people.

So I do have a book here, “Future Ethics,” which has been out a few months, and is available now in paperback and digital. Obviously I’d be thrilled if you chose to look it up.
