Humane: A New Agenda for Tech with Tristan Harris – Presentation and Transcript

This is a transcript of the presentation offered by Tristan Harris at the “Humane: A New Agenda for Tech” event –  shared with kind permission.

Paleolithic Emotions, Medieval Institutions, and Godlike Technology

What’s going on in the tech industry right now is a kind of cacophony of grievances and scandals and — oh, they screwed this up, and now there’s this election thing over here — and it’s kind of all over the place.

So if you ask people — what's wrong, what are we trying to fix here? What we wanted to do was say — OK, how can we get behind a common understanding of what's actually the problem? And how can we fix it? So that's my hope for today: that we can do that in this room together.

To do that, having seen everything you just saw — what is the problem in the tech industry? We've got tech addiction, polarization, election manipulation, outrage-ification, vanity-ification, teen mental health, bots…

I'm going to claim that there's actually one thing driving all of these problems. And it has to do with something that E.O. Wilson said, which is that the real problem of humanity is that we have Paleolithic emotions, Medieval institutions, and Godlike technology.

This is kind of the problem statement of humanity. Because while we have these ancient instincts, these Paleolithic ancient instincts, we’ve got exponential tech.

Where I first got exposed to these Paleolithic instincts was actually as a kid, as a magician. Magic is kind of a study of the limits of these evolutionary features of our minds, right? You're looking at the evolutionary limits of our attention, distraction, menus, the limits of misdirection. These are the kinds of features of our minds. And later, as a technologist, learning how you get into people's brains when you're designing software.

At the Persuasive Technology Lab, which I've talked about so many times, you learn that you can pull on these little strings in how people's brains work, and then you can actually do that in their relationships, and control and shape the way people play out their relationships.

And then of course at Google as a design ethicist doing this with 2 billion people’s thoughts — how are you affecting 2 billion people’s Paleolithic instincts?

Because with all this technological progress, we were all looking out for the moment when technology would overwhelm human strengths and intelligence. When is it going to cross the singularity, replace our jobs, be smarter than humans? This is the thing everyone's talking about.

But there's this much earlier moment when technology actually overwhelms human weaknesses. And I want to claim to you today that this point being crossed is at the root of bots, addiction, information overload, polarization, radicalization, outrage-ification, vanity-ification: the entire thing, which is leading to human downgrading. Downgrading our attention spans, our relationships, our stability, our community, our habits. Downgrading humans.

The Race to The Bottom of the Brain Stem

And we’ve been missing a name for this problem, which is kind of the system of results that takes place when you have an extractive attention economy. So when I say extractive attention economy, I mean that there’s this race to the bottom of the brain stem, which everybody knows, which is this race to reverse engineer human instincts.

Who's the better magician, the better evolutionary biologist, who can figure out these tricks in the human mind? I'm going to go through the basic stuff that everybody knows, but just bear with me.

Obviously, turning our phones into slot machines, turning Tinder and dating into slot machines, our email into a slot machine — we check our email 74 times a day — tapping into our dopaminergic systems, overloading social proof, social availability (we have to be there for each other), social reciprocity with streaks. We're playing with all these different cognitive biases in people. And it's all in the name of getting our attention.

But in the race to the bottom of the brain stem, it’s not just about getting your attention. I have to get you addicted to getting attention because that’s how I get more attention. So it works better when you sit there wanting to up your follower count, when you sit there wanting to get more likes, when you sit there wanting to get more attention for what you look like. And it’s created a world where you get rewarded only by looking different than how you actually are.

And we spend now about a fourth of our lives on the screen in these artificial social systems. And it’s not just on the screen, it’s also off the screen, because it shapes the way that we now think about our relationships. It’s colonized how we see our relationships.

It's had serious results. Here's a graph of the percentage of women with high depressive symptoms. It starts in 1991 at 17%, and if you go all the way to 2013, the rise of the modern age of social media, it hits about 20, 21%. But then look what happens after that.

So, this is Jean Twenge's research, which shows pretty deterministically that, especially for 10 to 14 year old girls, when you hook up social validation from friends to a variable reward schedule, a slot machine tied to your self-image, we know what this is doing. It's downgrading and overwhelming who we are, our identities. So downgrading our attention, downgrading our need for attention from other people, and downgrading our identity.

Downgrading Humans

But then it starts to downgrade something else, which is our free will. So when I say that, you might think I’m going too far, but let me give you an example.

Let’s say you’re about to hit play on a YouTube video. How many people have done this? You’re about to hit play, you think I’m going to watch one video, just one, and then I’m going to do something else. And then of course what happens is you wake up from a trance and it’s been two hours and you’re like — what the hell just happened to me?

And it's because when you hit play on that YouTube video, what actually happens is that it wakes up a little avatar Voodoo doll version of you inside a Google server somewhere. Based on all your clicks, all your likes, everything you've watched, and how that compares to the other Voodoo dolls (those clicks are like your nail clippings and hair clippings), we're just making this Voodoo doll look and act more and more like you.

And we don't have to manipulate you, because we just have to manipulate the Voodoo doll. We test a bunch of videos on it and we ask: which one's going to keep you here the longest? And it's like playing chess against your mind. I'm going to win, because I can see way more moves ahead on the chessboard.

So much so that YouTube's average watch time is now more than 60 minutes a day on mobile. And it's specifically because of what the recommendation engines are putting in front of you, according to YouTube's chief product officer.

And this has actually colonized our choices, because of the over a billion hours of YouTube watched daily, 70% are now from the recommendation system. So the AIs are actually downgrading and overwhelming our free choice. Free will is being colonized.

But then it's deciding what we believe, downgrading what we believe. Because if I'm YouTube, imagine a spectrum: on one side of YouTube you have the calm Walter Cronkite, rational-science, Carl Sagan side, and on the other side of the spectrum you have Crazytown: Bigfoot, UFOs, etc.

If I'm YouTube and I want you to watch more, I've got the regenerative section and I've got the extractive section. Which direction am I going to steer you if I want you to watch more? I'm always going to tilt it towards Crazytown. You could start in the calm section, you could start in the crazy section, but if I want you to watch more, I'm always going to tilt you that way.

And three examples of that. About a year ago, if you were a teen girl and you started on a dieting video, it would recommend anorexia videos, because those were better at keeping people's attention. If you were watching a 9/11 news video, it would recommend conspiracy theories about 9/11; it recommended Alex Jones 15 billion times. Just consider for one moment: 15 billion times, Alex Jones recommended to people. And if you were watching a NASA moon landing, it would recommend Flat Earth.

But it's not just the videos it recommends. This is just from a week ago, from Guillaume Chaslot, the YouTube recommendations whistle-blower. It has also polarized and changed the language that we're exposed to on a daily basis.

So from a week ago, what are the most recommended keywords?

  • gets schooled
  • shreds
  • debunks
  • dismantles
  • debates
  • rips
  • confronts
  • destroys
  • hates
  • demolishes
  • obliterates

So even if you solve fake news, this is kind of the background radiation of what a population the size of Islam is exposed to on a daily basis for 60 minutes a day.

So now let's take a look at one of these examples. If you look at flat earth, for example: what percentage of results were positively in favor of flat earth?

This is from data about two years ago, if you looked on Google search, the results were 20%. So 20% of the Google search results were in favor of flat earth. If you looked on YouTube search, it was 35%. But if you looked at YouTube recommendations, it was 90%.

So it's not like anybody wants this to happen; it's just what the recommendation system is doing. So much so that Kyrie Irving, the famous basketball player, said he believed the earth was flat, and he apologized later, blaming it on a YouTube rabbit hole.

When he later came on to NPR to say — I'm sorry for believing this, I didn't want to mislead people — a bunch of students in a classroom were interviewed saying, oh, the round-earthers got to him. Which shows you something really critical and important: once you tilt people in these crazy directions, it's actually really hard to bring them back.

So now think about a population the size of Islam, in languages the engineers don't speak, being tilted in this direction, not because anyone wants it to happen, but because that's what the extractive attention economy is better at: getting your attention.

This also happened with Facebook. About two years ago, Mark Zuckerberg said — our new mission is not to make the world more open and connected, it's to bring the world closer together. And they said one of the ways we're going to do that is to build an artificial intelligence to start suggesting Facebook groups to people.

He says — and it works because in the first six months we helped 50% of people join more meaningful communities.

So what does that mean? There you are, about to join a Facebook group; let's say it's a new-moms Facebook group. It wakes up the avatar Voodoo doll version of you, based on all your clicks and everything you've ever joined. And it asks: which of these groups, if I got you to join them, would keep you here the longest? Which would be the most engaging groups?

And what do you think one of the most popular groups was? Anti-vaccine conspiracy theories.

If you join anti-vax, then you're in what Renee DiResta calls a conspiracy correlation matrix, because from anti-vax it recommends chemtrails, flat earth, etc. And now anti-vaccine sentiment is a top-10 global health threat, not entirely because of social media, but it's been exacerbated massively by social media.

So now we can hire 10,000 content moderators in English. We can start hiring people who speak the languages, in Burma, to start moderating all this content. But imagine that tilt graph: you've hired 10,000 boulder catchers trying to stop the boulders from rolling down the hill, but you've created a system that's impacting 2 billion people. It doesn't scale.

So how many engineers… By the way, YouTube has fixed the flat-earth, teen-girl anorexia, and anti-vaccine conspiracy issues; those have been cranked down now.

But the point is, for each one of those three issues, which are only the ones reported on in English because we have journalists looking into this stuff, how much is this happening around the world in languages the engineers don't speak? How many engineers have looked at 9/11 conspiracy theories in Arabic?

And we know from an MIT Twitter study that fake news spreads six times faster than true news. And in languages the engineers don't even speak: in Burma, for example, where a genocide is happening, only a couple of years ago Facebook had just a few Burmese speakers for a population of 7.3 million users in Burma who are not digitally literate, amplifying a genocide.

This is the situation that’s tilting the world in this crazy-making direction. And we have about 4 billion new people coming online into this environment in the next six years.

So this is just situational awareness.

Artificial Social Systems Overpowering Human Nature

What is happening? So are humans just bad? Is this who we are? Is this humans choosing all of this? No.

That's not why this is happening. It's happening because we have these artificial social systems that have hijacked and overpowered human nature, combined with overwhelming AI that has, again, overwhelmed human weaknesses by building these Voodoo dolls, combined with an extractive attention economy built on extracting attention from people.

And this is what’s causing the downgrading of humans. Because while we’ve been upgrading the machines, we’ve been downgrading our humanity. And this is existential because our problems are going up — climate change, inequality — our ability and need to agree with each other and see the world the same way and have critical thoughts and discourse is only going up. But our capacity to do that is going down because of human downgrading. So we’re going the wrong way.

So that's the situation, everything up until now. And that's why we need to change course. We need to change course now, because this situation is about to get even more dangerous.

How many people here have been with a friend and been convinced that Facebook was listening to your microphone, because an ad for the thing you were just talking about shows up in your Facebook feed? How many people have had that experience?

I want you to look around, there’s a large number of people who have had this experience.

Well, it turns out that data forensics show that they’re not listening to the microphone. It’s a conspiracy theory. But the reason why you believe it is because they don’t have to listen to your microphone, because they have a little Voodoo doll version of you and they just listen to what that guy says. And because they know and can predict what that person’s saying, they know exactly what you were thinking about anyway.

And that's the point: we went from a world of just hijacking people's minds and cognitive biases to now completely out-predicting human nature. I don't even need Cambridge Analytica to steal your data anymore; there's a new paper out showing that, with 80% accuracy, I can get your Big Five personality traits just by looking at your mouse movements and click patterns. You can also do it with eye gaze, by the way.

And now AI can actually completely generate faces. None of these faces that you’re seeing are real. This is not a cross morph between existing faces. These are all generated from scratch and they can be generated to be ones that you would find completely trustworthy.

For example, this person might look eerily familiar and you don’t even know why. He might be the combination of people that you might’ve seen on this stage, and you can have these faces animate to say the things you want them to say, and in the future, whisper in your ear the words that you will find the most trustworthy.

With technology, you don’t have to overwhelm people’s strengths. You just have to overwhelm their weaknesses. This is overpowering human nature. And this is checkmate on humanity.

Because while we were all watching out for the moment technology crosses our strengths, it's been manipulating our weaknesses. And as a magician, that's the only thing you need to know about someone.

How to Solve the Problem

So… just take a moment, for just a second, to take in what you've heard. So, what should we do about this?

I know. Let’s go gray scale. Let’s turn off those notifications. Let’s put those apps into some folders. It’ll be great.

That's like looking at climate change and being like — I got it; let's ban straws; we got this.

OK, so this is clearly massively insufficient. It’s a systemic problem that needs systemic solutions. So what are we going to do?

What if we train the engineers in ethics? Let's have them study Jeremy Bentham and Immanuel Kant and consequentialism; that will clearly solve the problem.

Believe me, I’m far in support of training everyone in ethics, but this is not sufficient to the problem we just laid out of human downgrading.

I know, let's put it on the blockchain. Let's just put the attention economy on the blockchain. This will be great. That's not going to solve the problem, although the blockchain community is involved in the solution, so we can talk about that later.

What if they paid us for our data? Well, yeah, the companies should pay us for all the data they're using to profit off of us. But that's like this digital Frankenstein wrecking the world and handing us pennies every now and then while it's doing it. That's not enough. And if they protect our data? Well, that's nice, but we just showed how I don't need your data, because I can predict everything I need to know about you. So that doesn't solve the problem.

What if we wait for more research? We’ll just figure this thing out. No. Because this problem is urgent. It’s urgent.

There's no question that Silicon Valley is sophisticated about technology. But the thing we have not been very sophisticated about, and need to get sophisticated about, is human nature. And that means having a much more full-stack understanding of how we really work. A magician's view of ourselves. An anthropologist's view of ourselves. A meditation teacher's view of ourselves.

And that’s like taking this evolutionary diagram and saying, maybe we need to look in the mirror and see a full appraisal of all of our human weaknesses, frailties, and also our brilliance. Where are we brilliant?

And per what Aza said at the beginning, this is a quote from his father, Jef: an interface is humane if it is responsive to human needs and considerate of human frailties.

That's why we've been calling this humane technology. Because we have to start with the view of human frailties, design ergonomically to wrap around those human frailties and human needs, and figure out how we leverage our brilliance.

And that's why we invited all of you here today: to launch a coherent agenda for technology that actually addresses the core concerns affecting people's real lives — real elections, the mental health of kids, polarization, vanity-ification, outrage-ification, the entire thing. The good news is that all of it has to do with just one thing, which is the overpowering of human nature. And we can bring it back into alignment.

So how are we going to do that?

A Full Stack Socio-Ergonomic Model of Human Nature

Well, the three things that got us to this world of human downgrading are: artificial social systems, social systems we have to bend ourselves to fit inside of, that are hijacking our minds; overwhelming AIs, AIs that out-predict human nature; and extractive incentives. These three things are what have led to the downgrading of humans.

But the good news is that by changing these three things, we can go from the downgrading of humans to recognizing and amplifying the brilliance of human beings: going from artificial to humane social systems that bring all of our social instincts back into alignment and recognize our natural brilliance; to humane AIs that are a fiduciary to our values and to the limits of human nature instead of exploiting them; and from extractive incentives to regenerative incentives based on competing in a race to the top to help us. The brilliance of our relationships, our civility, our creativity, our common ground, our shared truth, everything.

So this is what we have to do.

Let’s take a look at how we do that. How do you build humane social systems? I mean, after all — we got here by trying to do human-centered design. Somehow that wasn’t enough.

A joke about this between Aza and me: the bug in human-centered design is that it puts human bugs at the center.

So we can only protect what we can see. We have to start learning how to see new things about ourselves that we haven't been protecting. And that means, again, looking in the mirror and having a full-stack socio-ergonomic model of human nature.

And we're going to introduce just a little overview of how we might do that. So this is a full-stack socio-ergonomic model. And it involves different layers: our physiology, emotions, cognition, decision making, sense making, choice making, group dynamics. You have to have this full understanding of how we work.

And I'm not going to go through all of this, but I'm going to give you a few examples. There are two sides to this: individual ergonomics for how we work individually (our attention, our physiology, our cognition), with a whole model for how these things are Paleolithically wired, a kind of ergonomics of how this works; and then our social ergonomics. How do we work in groups? How do we do social reasoning?

And if you have this ergonomic model, then you can use it to start diagnosing problems.

So let's take an example that we all face. How many of you here: you're writing an email, you get midway through writing it, and suddenly, boom, a new tab opens. How many of you have done this? You self-interrupt. A new tab opens…

We actually do this about every 40 seconds. That's the average time that we focus on a computer screen now. Can you believe that? And this is actually from two and a half years ago. 40 seconds.

Now notice this wasn’t from an external interruption. So if we’re trying to solve this problem with technology, what are we going to do here? Should we build better AI? Do we need more data? Do we need better machine learning? Do we need a “do not disturb” button? Well notice, where did the problem come from? It wasn’t from the outside. The call was coming from inside the house. So the problem was inside of us. So where inside of us was it coming from?

So let's take a look at this full-stack socio-ergonomic model. Let's start with our physiology. How are we breathing in that moment?

There’s actually something called email apnea. We don’t breathe when we read our email. And that has a large effect on your cognition.

Take a breath right now. Did you notice what your breathing was like before you took that breath? When you're subject to not breathing, you can feel stress, anxiety, all these other feelings. But it's as simple as putting your attention back on it, and suddenly you gain agency over the thing that had been downgrading you.

So that's the theme we're going to show you: that putting attention back on the thing we can't see can actually reclaim these things. Because our physiology, our heart rate, our stress, our cortisol, affects our emotions, and our emotions affect our attention and our cognition.

So starting with this deep understanding, we’re starting at the micro and we’re going to go to the macro.

So that’s how we start. And then now let’s take a look at an example of how this affects our social reasoning.

Leveling up. So what's a good example of that? Well, when we feel socially isolated. This is one of the most common experiences with technology. Today, only half of Americans say they have meaningful daily face-to-face social interactions. Only half. 18% say they rarely or never feel like there are people they can even talk to.

All of us have those moments when you feel you’re alone and you don’t even know who to call. How many people have had that feeling of that you’re alone, you don’t even know who to call, and you’re just sort of trapped in it…

Our brains are brilliant at social reasoning, but somehow we’re getting confused. I mean, for a second you could say, well, hold on. We have things like this. Why don’t we just do this when we’re lonely? Why don’t we just do this? Maybe we’re not doing this because our revealed preference is that we want to be alone. We actually want to just sit there alone. It’s like, no.

Well, our nervous system is confused, because we're not given a really great menu of choices. The burden is all on our minds to think about which of the thousand relationships I might want, whose name to type in, which letter to type first. It's not very ergonomic to the social signaling information that our brains are looking for.

And imagine if instead the kinds of information our brain is looking for are those signals that our friends care about us. For example, our brains are really good at receiving the information from our friends when they say — call me anytime you need to talk — when your friend says that to you. We know what that feels like in the moment. But our brains are not so good at remembering that, when we feel down.

But imagine if technology was designed differently because it recognized that fact about human nature and made those signals visible when you were feeling alone. So imagine you were alone and you would see this as the opening menu on, say, a screen. And imagine if technology and Facebook and these things were designed to help deepen those one or two "being there for each other" relationships.

And what if it was easy to get support from each other? What if this was as easy as getting knowledge from Wikipedia? Or as easy as making content go viral on social media? What if this was the thing we’re making easy? What would that do to addiction, to polarization, to conspiracy thinking? If instead of being isolated, we were more connected because we could get support from each other.

So that was social reasoning. Now let’s go up a level and look at polarization.

So, consider where you are in the political spectrum. Raise your hand if you think common ground exists with people on the other side. Yep. Keep your hand up if you find it easy to find it.

We find it really hard to find common ground with people on the other side. We think we can't agree. And polarization has been going up, from 1994 to 2004 to 2014, some of it exacerbated by social media. But are we actually so polarized that we can't agree? Or are we being presented with experiences that don't bring out our ability to find that natural common ground?

Here's a study from 2004 asking Americans to estimate how much of the total wealth the top 20% own.

If you ask Democrats, they thought it was about 55 or 60%. And if you ask Republicans, they actually agreed with the Democrats on how much they thought the top 20% owned. But then if you ask them: OK, maybe that's fine, but what should the top 20% own? What's the ideal amount? How much should the top 20% actually have? And Democrats and Republicans also basically agree on what they think the top 20% should have. And yet the actual wealth the top 20% has is much, much, much more than both the ideal and the estimate.

So if we agree, why are we fighting about this radioactive topic? Maybe we're not being asked the right questions. What if we agree on more than we think? And maybe it's a matter of social media and technology putting us into the wrong kinds of group dynamics, these mass-broadcast-at-each-other groups, which our brains are not designed and evolved for. So think about where our minds are naturally brilliant at finding common ground.

So, things like campfires, where we find it much easier to find reasonableness and openness and civility. Dinner tables, as ancient as any human experience. Getting a drink with one another. And these things are based on certain features of those groups: How big is the group? How much trust is there? How many people are at the table? What are the well-framed questions?

And there are things like living room conversations that give our brains the kind of trust signals we need, because they create small safe spaces where a small number of people can engage with well-framed questions, good group facilitation, and diverse perspectives. This has actually been working.

Or things like Change My View, which was actually born out of a Reddit community, based on noticing that you can give people the incentive to say — hey, I want you to change my view; I'm giving you an invitation to change my view about vaccination.

And people… if you see this little delta 12 there, people are rewarded the more they actually change each other's minds. And this works so well that they've actually spun it out into its own thing.

Now, what if finding common ground was easy? What if we made this as easy as accessing Wikipedia? Or this as easy as making content go viral on social media? What happens to polarization when finding common ground is easy? What happens to conspiracy thinking when common ground is easy? What happens to addiction when finding common ground is easy, and it’s easier to have these conversations?

It has rippling-out cascading benefits. And what happens, more importantly, to our existential threats that depend on us agreeing and finding that common ground? And imagine that scaling up at scales where this was the default way that things like Facebook and Twitter were actually designed for — smaller spaces with good group facilitation, with good shared questions. And some of you in the audience are actually working on problems like this.

So this is an example of a more full-stack ergonomic view: how do you take technology and wrap it around a more subtle and sophisticated view of human nature? And that means starting by asking: what do we want to protect? What are the zones we want to protect?

Because children are naturally brilliant at doing playful stuff with each other, we need to wrap technology around these experiences instead of replacing them or manufacturing synthetic alternatives.

OK, so that was humane social systems: realigning technology to work with human social systems.

Amplifying Our Natural Brilliance

Now we have overwhelming AI and we need to go to humane AI. So how would we do that?

So we’ve got these Voodoo dolls lying around. And we’ve got this warehouse of 2 billion Voodoo dolls. One out of four people on Earth, we’ve got a Voodoo doll for all of them. And each one can be used to perfectly predict what will persuade us, what can keep us watching, what can politically manipulate us. What do we do with all these Voodoo dolls?

Well, we need a new kind of fiduciary, because this is a new kind of power, a new kind of power asymmetry. If you have technology that can perfectly manipulate you into doing and feeling whatever it needs you to do, you need to make sure it acts in our interest. And imagine if this were like your AI sidekick, designed to protect the limits of human nature and act in our interests, as opposed to sitting on the other side, working for extraction.

And lastly, we need to go from extractive incentives to regenerative incentives. Because if you look at the scope of human downgrading, the scope of how it affects our attention spans, our polarization, our civility, our trust, our decency, we can’t just solve this with a bunch of band-aids. We need a new set of incentives that accelerate a competition to fix these problems. It’s like making the market work for solving climate change. We need a new set of incentives that create a race to the top to align our lives with our values.

So what I mean by this is, you might say — well, hold on a second; what about the business model?

How many of you are thinking: what about the business model? Because we have Free right now. What are we going to do to replace these free products? Well, great: we’re getting free social isolation, free downgrading of attention spans, free destruction of our shared truth, free incivility. Free is the most expensive business model we’ve ever created.

Seven years ago, you would never have thought you might be taking Lyft and Uber rides everywhere, constantly, and paying for them. But we do it, and we pay for it gladly, because it makes life happen. It unlocks more life, more economic opportunity, because it takes us where we want to go.

But imagine we had incentives that set off a race to the top, so that everyone is competing to help you find a relationship instead of competing to keep you swiping on screens forever, where things are competing for our trust, to take us where we want to go in our lives.

This is just like a better party. This isn’t just something we should do because it would be nice. I want to live in a world where Facebook and Tinder are actually saying: no, get out of the way, I’ve got a better idea about how to help this person with their dating life.

And imagine a world where you love how you make choices and you love how you’re paying attention to things because everything is actually competing to find that natural brilliance in us, and coordinate those experiences to make that happen.

So this is the agenda — going from artificial to humane social systems, from overwhelming and overpowering AI to humane AI, and from extractive incentives to regenerative incentives. This is a new agenda for technology that actually addresses the human consequences and harms that we’re currently experiencing. And this might seem really far off, but let me tell you why I have hope.

Because it was about six years ago that I was at Google and I made this presentation, saying — hey, we have this moral responsibility in how we shape 2 billion people’s attention and we’re hosting this race to the bottom of the brain stem and we have to do something about it.

And when I was at Google, I felt completely hopeless. There were literally days I went to work and would read Wikipedia all day and check my email, with no idea how a system this big could ever change once you see something as massive as the attention economy and its perverse incentives. I truly felt hopeless. I was depressed.

At some point, actually through Aza’s help, I got an opportunity to give a TED talk on time well spent. And I saw that there’s a lot of power in shared language. A lot of people were feeling the same way, but there wasn’t language for it. And I saw how language like “the attention economy,” “brain hacking,” “hijacking our minds” — how that shared language and understanding creates a unified surround sound, and things can start to change.

Because what would happen is, over the last few years, people would go to work and hear these phrases coming around. You go to three meetings in one day, someone tells you about it, and what happens? That’s pressure. Think about it: where does pressure exist? Political pressure? Show me the atoms of pressure. What is the material reality of pressure? They don’t exist anywhere. Where they exist is in common surround sound, and in people starting to speak up about these problems.

I saw the power of shared understanding and of people speaking up about these problems — Roger McNamee; Jaron Lanier; Justin Rosenstein, who invented the Like button; Sandy Parakilas at Facebook; Renee DiResta; Guillaume, the ex-YouTube recommendations engineer; Marc Benioff. When people like the people in this room start speaking up and saying there’s a problem here, with shared language, things can start to change. Because what happened was, Common Sense Media published a report in September 2018: 72% of teens now believe they’ve been manipulated into spending more time on these services. No one was thinking about that two years prior.

The Verge said the time-well-spent debate is over, and time well spent won. Why did they say that? Because a year ago Mark Zuckerberg said that Facebook’s new big goal is to make sure Facebook is time well spent. Apple launched time-well-spent features to help you manage your screen time on your phone. So did Google, and now they have grayscale on phones by default at night.

YouTube included time-well-spent features too. You have a billion phones now basically running tiny time-well-spent features.

Without writing any lines of code, just by creating a shared understanding, you can move an entire ecosystem from where it is to where you want it to be. Now, these are baby steps. These are the tiniest baby steps. But the point is that we’ve actually set off a race to the top for well-being.

Now Apple and Google are competing over who can better provide a well-being experience for people. But we need to upgrade that race to be a race to find humans’ natural brilliance. We need to raise the bar: competing to align with a deeper, more sophisticated model of human nature. So that’s what we have to do.

This is a civilizational moment in a way I’m not sure we’re all reckoning with. It’s a historical moment when an intelligent species builds technology that can simulate a puppet version of its creator, and the puppet can control the master. That’s an unprecedented situation to be in. It could be the end of human agency, when you can perfectly simulate, again, not the strengths of these people but their weaknesses, and the surround sound of their social environment, and the mental health and social norms of all their friends, and pull the mental health of an entire generation of teenagers in a direction. That is a dark and important and civilizational moment. And it could either be the end of human agency, or… That first outcome would happen by staying unconscious to what’s happening. If we let the self-driving car of the extractive attention economy keep going, we know exactly where it leads, which is human downgrading. So we could let it do that.

But just like Jack and Trudy did at the beginning, and as I can ask you to do now — taking a breath — when you gain awareness of something, you suddenly go from being subject to it to having choice.

So this is like civilization taking a breath and saying — let’s see the ways that this stuff has hijacked human nature and let’s have a design agenda that fixes it.

So we are just one group at the Center for Humane Technology that wants to drive this change. But our role is to help support the entire community in catalyzing this change.

We’re launching a design kit that relates to the things I’ve shown you today, to help us get better at examining these subtle features of human nature. We’re also going to launch a podcast, interviewing experts in Russian disinformation, magicians, conflict mediators — the people who are the experts on this subtle terrain of how human nature works from the inside — so we can rapidly accelerate the tech industry’s common understanding of these issues.

We also want to host a conference where we’re going to be bringing many of you, and everybody in this room who’s already working on these topics, together to accelerate change in this area.

Human downgrading is like the global climate change of culture. Like climate change, it can be catastrophic. But unlike climate change, only about a thousand people need to change what they’re doing. And guess what — many of us are here in this room, and many of us are watching this. Product teams can start integrating full-stack humane design into their products. Tech workers can start raising their voices about human downgrading, three times a day. Journalists can shine light on the systemic problems and solutions instead of the scandals and grievances. Voters, once they understand it, can demand policy from policymakers, saying: hey, we don’t want our kids downgraded. Policymakers can respond by protecting citizens and shifting incentives for tech companies. That leads to shareholders saying: hey, we want to demand commitments from these companies to shift away from human-downgrading-based business models. VCs can fund the transition to helping humanity be brilliant. And entrepreneurs can build products that are sophisticated about our humanity, culminating in platforms that provide these incentives: that Uber-and-Lyft competition for who gets you there first, Apple and Google competing to let apps compete not for our attention but for our trust, getting us to align with our values. Together, this creates a race to the top. And doing this can move it from being impossible to inevitable.

Again, this is a civilizational moment. For the first time in history, we could be facing the end of human agency if we let this car run on autopilot, or we can simply become aware of what human downgrading is and how it’s happening. The good news is that it comes down to just one thing: how human nature gets hijacked. And once we become aware of that, we have choice.

And this is the Humane Agenda that we are hoping all of us can get behind. Doing that, we can answer E.O. Wilson: we can embrace our Paleolithic emotions and upgrade our medieval institutions, which gives us the wisdom to wield the godlike technology.

What’s Next?

This is the beginning of a long journey. Instead of a race to the bottom of “how can we most effectively manipulate you,” we must create a new race to the top to completely reverse human downgrading. To start facilitating this transition, we are announcing the following initiatives:

  • Opportunities for key stakeholders to plug into working groups to take action.
  • “Your Undivided Attention” — a new podcast launching in June where Tristan and Aza gather insights about the invisible limits and ergonomics of human nature from a wide range of experts to address human downgrading.
  • Design guides to facilitate assessment across human sensitivities and social spaces to help guide designers in redesigning their products.
  • A Humane Technology conference in the next year to bring together people working on many different aspects of Human Downgrading.
  • We’re hiring to take on the scope and urgency of this work. The best candidates often come through referrals, so please point outstanding candidates to our Jobs page.

Check out the Video Presentation