The “Big” in Big Tech Breeds Extremism

Tech companies hit a turning point this week and actually started wielding their power against far-right groups aiming to incite violence. Most dramatically, Twitter permanently banned Donald Trump and Facebook suspended him indefinitely. Parler, a “free speech” social network that minimizes moderation and has become a favorite among white supremacists, was removed from Google’s and Apple’s app stores, and in an unprecedented move, Amazon kicked Parler off its AWS cloud infrastructure, forcing the entire service offline. As far as I know, Amazon has never before banned a company of this size for failing to police its users’ conduct.

This has inevitably led to conversations about whether Big Tech has gone too far, and about the role big tech companies should play in moderating what’s on their platforms.

I don’t want to get too far into the weeds on whether tech companies made the right move; my longstanding opinion has been that these are their platforms, it’s up to them what they let people do on them, and I wish they prioritized the safety of their most vulnerable users. I will also acknowledge that as much as Parler had it coming, when you see an entire business get pulled offline in just a couple of days by a few external companies, it’s only natural to have the panicked realization that these big companies could just as easily do the same to your business if they really wanted to. As much as conservatives have been pretending to be oppressed by tech companies (and that’s a whole post unto itself, but I can guarantee you that “anti-conservative bias” is 100% bullshit), it really is true that most of the tech companies you interact with could pull the plug on your account if they wanted to, and you would have little to no recourse.

I do agree that it’s complicated for owners of big platforms hosting user-generated content to effectively come up with a set of policies that govern what people can talk about on these sites. These are hard questions because a huge chunk of the world’s communication happens on networks like Facebook and Twitter. If Facebook and Twitter get really strict about what you can say and do on these networks, they could be stifling much of the communication that occurs on the internet in general.

These will never stop being hard questions, but there is one straightforward way we can make these questions less necessary to even ask: take the “Big” out of “Big Tech”.

It’s the Scale, Stupid

Put simply: our society and institutions are not equipped to correctly handle the existence of massive tech companies that have literally billions of users.

Social networks aren’t exactly new; they’ve existed in one form or another since before the internet, and while there has always been some concern about fringe groups online, none of those groups or networks previously posed an existential threat to democracy in the US.

But that changed when companies that run social networks hit real scale. And when I say “real scale,” I’m talking tens of millions, hundreds of millions, and even billions of users.

Facebook isn’t just a community; it’s a community of communities, centrally managed by Facebook itself, and Facebook holds an incredible amount of data about you. In the US, there was a national debate for years about creating a national ID card to replace state-issued IDs, and a central objection was the concern about concentrating that much control with the federal government. And yet here Facebook stands: a central entity that tracks the identities of more people than live in the most populous country on earth.

A network as large as Facebook is essentially a government; Facebook’s CEO has said as much on the record. But unlike in most democratic governments around the world, Facebook’s policies aren’t determined by its users or by representatives its users choose; Facebook is free to make these decisions unilaterally, and its users don’t have much recourse. “If you don’t like it, leave” is a tough sell when the network has pretty much every online user in the world on it.

Automatic radicalization at scale

On massive social sites like Facebook, you don’t need to find new communities; new communities will find you. Facebook analyzes your profile and activity and recommends new groups to join. On the surface, that sounds perfectly innocent: Facebook helps you find groups you might like. But in a world where Facebook is home to scores of hidden extremist groups, it’s super dangerous, because now Facebook is doing the heavy lifting of recruitment for those groups. Facebook knows what the existing members are like, so it can identify other people who might be sympathetic to the same causes and casually recommend the hidden group to them. And just like that, Facebook has unwittingly become a tool for radicalizing people at scale. Whoops.
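To make that concrete, here’s a deliberately oversimplified sketch of similarity-based group recommendation. Everything here is invented for illustration (the interest tags, the group names, the use of Jaccard similarity); it is not Facebook’s actual system. What it shows is the core problem: the matching logic has no idea what a group is actually for.

```python
# Toy sketch of similarity-based group recommendation.
# Invented data and scoring; NOT Facebook's actual system.

def jaccard(a: set, b: set) -> float:
    """Similarity of two tag sets: overlap size divided by union size."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend_groups(user_tags: set, groups: dict, threshold: float = 0.3) -> list:
    """Suggest any group whose pooled member interests resemble this user's.

    Note what's missing: nothing here asks what a group is *for*. A hidden
    extremist group whose members look like you gets recommended exactly
    the same way a knitting club would.
    """
    scored = sorted(
        ((jaccard(user_tags, tags), name) for name, tags in groups.items()),
        reverse=True,
    )
    return [name for score, name in scored if score >= threshold]

# Hypothetical groups, described by their members' pooled interest tags.
groups = {
    "local_hiking_club": {"outdoors", "fitness", "photography"},
    "armed_insurrection_group": {"guns", "prepping", "outdoors", "anti-government"},
}
print(recommend_groups({"outdoors", "guns", "prepping"}, groups))
# -> ['armed_insurrection_group']
```

The recommender is doing exactly what it was built to do: match people to communities they’ll engage with. The danger comes entirely from what those communities are.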

This problem isn’t unique to Facebook; I just keep referring to them because it’s easier to point at a concrete example. It isn’t unique to social networks, either. YouTube has a similar issue: follow the recommendations from certain innocuous videos for long enough and you descend a rabbit hole that often leads to increasingly extremist videos.
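If you want intuition for how a rabbit hole emerges from individually reasonable recommendations, here’s a toy simulation. Everything about it is an assumption for the sake of the sketch: videos sit on a made-up 0-to-100 “extremity” scale, and I’m positing that more extreme content is slightly more engaging, so the recommender gives it slightly more weight. This is my illustration, not YouTube’s algorithm.

```python
import random

# Toy rabbit-hole model. Invented "extremity" scale from 0 to 100;
# NOT YouTube's actual recommendation system.

def next_video(current: int, rng: random.Random) -> int:
    """Pick a video near the current one, slightly favoring more 'engaging'
    (here: more extreme) content. No single hop looks alarming."""
    candidates = list(range(max(0, current - 10), min(100, current + 10) + 1))
    weights = [1.0 + 0.05 * max(v - current, 0) for v in candidates]
    return rng.choices(candidates, weights=weights)[0]

def average_final_extremity(steps: int = 30, trials: int = 1000) -> float:
    """Average where viewers land after `steps` clicks, over many trials."""
    rng = random.Random(0)
    total = 0
    for _ in range(trials):
        position = 5  # start on something innocuous
        for _ in range(steps):
            position = next_video(position, rng)
        total += position
    return total / trials

print(f"average extremity after 30 clicks: {average_final_extremity():.1f}")
```

Each hop stays close to the previous video, and the bias at any single step is tiny; the drift only shows up compounded over dozens of clicks, which is part of why it’s so hard to notice from the inside.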

Critically, these tools to radicalize people through recommendations are only possible because companies like Facebook and YouTube have absolutely enormous numbers of users.

Facebook and YouTube are so big that when they tweak the algorithms that promote content, they can destroy businesses and livelihoods. And individual bans can be useful (Twitter banned Milo Yiannopoulos a few years ago, and he and his toxic fan base haven’t resurfaced meaningfully since), but individually banning people is at best a band-aid when your network is continuously producing new extremists.

Scale is hard

For years, there has been sizable public pressure on these companies to moderate more. Initially, companies seemed to hope they could automate this kind of moderation, but in practice that works poorly: algorithms are bad at understanding the full context behind what people post, so they can’t make accurate determinations.
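As a tiny illustration of the context problem, consider the most naive possible filter: a blocklist. The blocklist and posts below are invented, and real classifiers are far more sophisticated, but the failure mode is the same in kind: the identical string reads as a threat, a news report, or song lyrics depending on context the filter never sees.

```python
# Minimal sketch of context-blind moderation. Blocklist and posts are invented.
BANNED_PHRASES = {"kill you"}

def naive_moderate(post: str) -> bool:
    """Flag a post if it contains any banned phrase, regardless of context."""
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

posts = [
    "I will kill you if you show up tomorrow.",          # genuine threat
    "He was arrested after shouting 'I'll kill you'.",   # reporting on a threat
    "The chorus goes 'this job will kill you slowly'.",  # figurative lyrics
]
for post in posts:
    print(naive_moderate(post), "--", post)
# All three are flagged, but only the first should be.
```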

So now Facebook’s moderation is powered by an army of humans that must toil away around the clock, slogging through deeply disturbing content and trying to make human decisions at the pace of a machine. It’s a mentally taxing job.

In reality, the viability of tech companies running networks of user-generated content at this scale is a myth. Moderation is a nightmare, these companies can barely even pretend to keep up with it, and they only manage to tread water by subjecting teams of people to terrible working conditions.

But big tech companies want you to ignore all of that and just let them keep existing; they don’t want you to even fathom a world without companies with billions of users.

But that scale, and that scale alone, is the very core of this problem. If we stop having companies with billions of users, we suddenly stop worrying about companies that function as pseudo-governments. If we stop having companies with billions of users breathlessly recommending that those users join communities bent on overthrowing democracy, democracy gets safer.

The question, then, is how we might do that, and what a world that rejects scale might look like.
