AI and the Ever-Increasing Distortion of Reality

The methods we use to discern what is true will become the most effective way to deceive us

Paulo Esteves
6 min read · Sep 23, 2020

The Outrage Machine

More and more people are aware of the tragic consequences of the ad-based business model many tech companies adhere to. Here’s the recursive algorithm that’s been changing the world:

1 — To earn more ad revenue, these platforms (social media, YouTube, and so on) need users to spend more time on them, which happens when they serve the content that most effectively captures each person’s attention. AI algorithms calculate, for each person individually, what that content is, and quite often it turns out to be fake, misleading, or outrageous. The algorithm doesn’t really know whether the content is deceptive, out of context, or outrageous; it only knows that it increases engagement (see the sketch after this list).

2 — Different people watching different outrageous content will become outraged for different reasons.

3 — Let the whole thing run its course for a while. The result is a society divided into countless factions, each with a completely distorted view of the world, whether by overestimating or underestimating real problems or by believing outright conspiracy theories. Each group, frightened, furious, and shocked at how the other groups seem to ignore what is obviously “true” (informed by content that never actually shows up in any other group’s feed), gets ready to furiously defend what is “right.”

4 — But that’s mostly fine. Because as violent and tragic consequences materialize in real life over, and over, and over again, everyone will get outraged all over again and share that on social media and YouTube, thankfully increasing what matters most: ad revenue. Back to step #1.
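
To make step 1 concrete, here is a minimal, hypothetical sketch (in Python) of what an engagement-only ranking step can look like. The model, fields, and scores are invented for illustration; the point is structural: nothing in the objective refers to accuracy, context, or honesty, only to predicted engagement.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def predict_engagement(user_history: list[str], post: Post) -> float:
    """Hypothetical stand-in for a trained model that predicts how long
    this particular user will linger on this particular post."""
    # In a real system this would be a learned model; here it is a placeholder.
    return float(sum(word in post.text.lower() for word in user_history))

def rank_feed(user_history: list[str], candidates: list[Post]) -> list[Post]:
    # The objective is predicted engagement and nothing else: there is no
    # term for truthfulness, context, or the emotional state it induces.
    return sorted(candidates,
                  key=lambda p: predict_engagement(user_history, p),
                  reverse=True)

# Toy usage: whatever correlates with this user's attention floats to the top.
feed = rank_feed(
    user_history=["election", "outrage", "scandal"],
    candidates=[
        Post("a", "Calm, well-sourced explainer on local policy"),
        Post("b", "SCANDAL: shocking election outrage you won't believe"),
    ],
)
print([p.post_id for p in feed])  # -> ['b', 'a']
```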

One of the most insidious aspects of this whole thing is that, even if no actually fake content were being shared, users would still be misled by the sheer frequency with which they are exposed to outrageous content, overestimating the radicalization of their adversaries. That overestimation, in turn, produces actual radicalization as a response to the exaggerated perception of others’ radicalization, a tragic feedback loop that works like a self-fulfilling prophecy. Once radicalization genuinely rises, everyone involved will have their biases confirmed. No one will spend more than two minutes wondering about the impact that recommendation engines might have had on all of this.
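
To see why the loop is self-fulfilling, here is a toy simulation with made-up numbers. Each group reacts not to the other’s actual radicalization but to an inflated perception of it (the feed over-samples the most outrageous content); remove the inflation and nothing escalates. The numbers are arbitrary, and the only point is that the escalation comes entirely from the gap between perceived and actual radicalization.

```python
def run(steps: int = 6, inflation: float = 1.5, response: float = 0.5) -> None:
    """Toy model: two groups, each shifting toward what it *thinks* it faces."""
    a = b = 0.10  # actual radicalization of groups A and B, on a 0-to-1 scale
    for t in range(steps):
        perceived_b = min(1.0, b * inflation)  # A's distorted view of B
        perceived_a = min(1.0, a * inflation)  # B's distorted view of A
        # Each group moves part of the way toward the threat it perceives.
        a = min(1.0, a + response * (perceived_b - a))
        b = min(1.0, b + response * (perceived_a - b))
        print(f"step {t}: A={a:.2f}, B={b:.2f}")

run()                 # with distortion, both sides escalate every step
run(inflation=1.0)    # with accurate perceptions, nothing moves
```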

These companies don’t intend to cause social havoc. In fact, they would certainly prefer to make money while bringing all sorts of benefits (which they do), without any drawbacks. That’s certainly what most executives thought they were doing when they started. But this isn’t about intentions; it’s about the consequences that follow from certain incentives. And the incentives have to change, whether through government regulation, boycotts, or some new, more ethical monetization model.

An Even More Outrageous Future

The scary thing here is that what I wrote above is where we are right now. But unless incentives change, this can get much worse.

Here are two examples: deep fakes indistinguishable from reality, and individually targeted, automatically generated text. The former will be used intentionally as a source of disinformation, whereas the latter, whether or not it is used to spread lies, will inevitably lead to polarization.

Deep Fakes

Deep fakes aren’t perfect yet; you can usually tell that something is off, but soon that won’t be the case anymore. Hopefully, the same technologies that enabled deep fakes will also get better at detecting them, but that likely won’t entirely prevent the proliferation of fake videos, since indiscriminately removing every video that uses deep-fake technology might not be desirable. The technology can be used for satire or art, but the line between humor and deception can become tenuous. And even if misleading fake videos get correctly identified and reported, the mere fact that any video can be bogus will be enough to demoralize people and give them the sense that nothing can be trusted anymore. That, in the end, is the most significant risk.

Despite being known for having few qualms about censorship, China is trying to address this by making it illegal to share deep fakes unless users label them as such. This could help respect the principle of freedom of expression while preventing the proliferation of deceptive practices. Whichever solutions appear, both governments and tech companies must admit that this is a severe problem that needs addressing.

Since most people can’t spend hours investigating the timestamps and origins of videos (though blockchain technology might help with this; see the sketch after this paragraph), there will come a time when people debate which video came first, the deep fake or the original, depending on which version makes the other side look good or bad. At the very minimum, scams using deep fakes will become much more common.
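
On the provenance point, here is a minimal sketch of the hashing half of the idea: publish a cryptographic fingerprint of the original file at release time, on a blockchain or any other tamper-evident public log, so anyone can later check which version existed first. The file name and demo bytes are invented for illustration; where the fingerprint gets published, and how viewers look it up, is the genuinely hard part.

```python
import hashlib
from datetime import datetime, timezone

def fingerprint(path: str) -> str:
    """SHA-256 digest of a (potentially large) video file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# 'original_interview.mp4' is a hypothetical file; a few bytes stand in for
# real video data so the demo actually runs.
with open("original_interview.mp4", "wb") as f:
    f.write(b"placeholder bytes standing in for a real recording")

# At publication time, the creator records (digest, timestamp) somewhere
# tamper-evident: a blockchain, a public transparency log, even a newspaper.
record = {
    "sha256": fingerprint("original_interview.mp4"),
    "published_at": datetime.now(timezone.utc).isoformat(),
}
print(record)

# Later, anyone holding a copy of either video can recompute its digest and
# check which recorded fingerprint, the original's or the alleged fake's,
# appeared first.
```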

Automatically Generated Text

In the podcast Your Undivided Attention, Tristan Harris and Aza Raskin discuss a hypothetical use of AI: generating text targeted at the individual level. An incredibly narrow form of microtargeting, this would allow a company like Facebook to use the history of all messages a given user has sent and received to generate an ad that feels as familiar as possible to that user. Imagine if everyone received a separate, perfectly suited message, whether a text message or a sponsored post, that read like a conversation with your best friend or sounded like the public intellectuals you follow online and trust the most. Tone and choice of words would become useless as tools for gauging credibility; instead, they would be used to tailor the message most likely to move you. Without the public exposure of a billboard or a news channel, tracking these campaigns would be nearly impossible, and having the messages tailored individually would raise deceit and polarization to new heights. With the recent release of the GPT-3 system, generated text indistinguishable from a human’s is already here.

There’s an issue here that shouldn’t be overlooked: even if the content isn’t explicitly misleading, the bubble effect of being exposed only to the content most likely to engage you will distort your perception of reality, for instance by leading you to overestimate the frequency of certain events (the availability heuristic).

What Can We Do

The widespread consequences of these companies’ recklessness are very real: the alarming increase in depression, self-harm, and suicide among teenagers; multiple murders in India sparked by fake rumors; the killing of specific ethnic groups; the decline of democratic principles worldwide. All fueled by drastically distorted perceptions of what’s true and what matters.

Tech companies transcend borders. They affect the world horizontally, cutting across jurisdictions, which makes it difficult to regulate their behavior effectively. Some countries are developing laws to tackle these problems, but only a few, and those laws are far from enough. Moreover, the accelerating pace at which technology evolves makes it incredibly hard for lawmakers to know what to look for unless they have deep knowledge and experience of this industry, which is quite rare. Because of all this, the amount of public discussion of this problem has to increase drastically.

There are two things we should do right away:

1 — Get informed. As I was writing this text, the Center for Humane Technology released a documentary called The Social Dilemma, which is the best introduction to this problem I know of. It leaves out many important details that are discussed in their podcast Your Undivided Attention. I also recommend their website, which includes calls to action, suggestions for improving our relationship with tech, and forums where all of this is discussed in more depth.

2 — Inform and help others understand the problem so we can all incentivize lawmakers and tech companies to do something about this. One good example of this is the Stop Hate For Profit campaign.

The software these companies develop should work as tools that serve us, not the other way around. Many of these companies started with a commitment to improve communication, build connections, and strengthen relationships. Let’s make sure they stand by their commitments.

If you found this article useful, feel free to 👏 or leave a comment.
