Taming the Tech Giants
Everyone’s angry at the tech industry these days! Tech companies continue to cement their place as some of the most powerful companies in the world, and taking shots at them has become a popular sport. Most recently, Facebook and Twitter suppressed a controversial New York Post article (www.npr.org/2020/10/14/923766097/facebook-and-twitter-limit-sharing-new-york-post-story-about-joe-biden), raising accusations that the social networks are putting their thumbs on the scale of the upcoming election.
In response, conservatives — led by President Donald J. Trump, who tweeted that “If Big Tech persists, in coordination with the mainstream media, we must immediately strip them of their Section 230 protections. When government granted these protections, they created a monster!” (twitter.com/realDonaldTrump/status/1316821530769149952) — have set their sights on Section 230, the legislation that protects the right of Internet companies to moderate content (www.eff.org/issues/cda230).
“Repeal Section 230” has become a popular rallying cry from people who believe that large social networks are abusing this ability to enact a political agenda. There’s a lot of rhetoric about “publishers” and “platforms” (the idea being that if you decide to moderate content on your app or website, you should assume legal liability for it) and claims that Internet companies are breaking the rules by deciding what content to allow.
Naturally, the left disputes the claim that conservatives are being censored. But we can still analyze the power of gatekeeper platforms even if we disagree about how they’re wielding it.
As we’ll see, the law as it exists today expressly permits the moderation in which tech companies engage. More to the point, the platform/publisher dichotomy is rooted in constraints of traditional media that don’t apply to the Internet.
Those constraints — or the lack thereof — should guide our efforts to make the Internet a more equitable place. The web in particular was built with the promise of giving everyone a voice; it’s only in the last decade or so that power became truly centralized. We keep that promise not by forcing gatekeepers to play fair, but by getting rid of them entirely.
In the analog era, the act of publishing was subject to physical constraints. A newspaper, for example, printed a set number of pages a few times daily. If they wanted to publish content by third parties, they had to read it all and make an editorial decision about what made the cut. As a result, publishers were responsible for vetting everything that ran under their masthead.
By contrast, it was unreasonable to expect a newsstand to check every word in every newspaper it sold every day. These entities were called “distributors.” If an article in a newspaper turned out to be libelous, only the publisher was on the hook legally, while the distributor enjoyed legal immunity.
The Internet turned this model on its head. Space to publish became infinite, time to publish became instant and distributors became unnecessary. Websites could host discussions between thousands of people in real time, and checking every single comment was infeasible. That then raised the question: who was liable for the comments?
Two lawsuits that are often cited as catalysts for Section 230 are Cubby, Inc. v. CompuServe, Inc. (en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc) and Stratton Oakmont, Inc. v. Prodigy Services Co. (en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Stratton Oakmont,_Inc._v._Prodigy_Services_Co)1 — both defamation cases against Internet service providers. In the former case, the court ruled that because CompuServe didn’t review any of the content on its forums, it was acting as a distributor and therefore not liable for defamatory content. In the latter case, the court ruled the opposite: because Prodigy moderated users’ comments, it was acting as a publisher.
To resolve this ambiguity, Congress added Section 230 to the Communications Decency Act, which it passed in 1996. The portion most people are talking about (www.law.cornell.edu/uscode/text/47/230) is printed below:
(1) Treatment of publisher or speaker No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
Both the letter and the spirit of the law are intended to allow Internet services to moderate third-party content however they wish, while shielding them from legal liability if that content turns out to be defamatory. Legally, the “publisher vs. platform” dichotomy does not exist.2
The logic behind the repeal of Section 230 seems to be that doing so would force platforms like Twitter to not moderate content. This isn’t strictly true; surely everyone, no matter where they sit on the political spectrum, can think of a biased publisher or two! Twitter would still be free to censor conservative content. The difference is that they would then be liable for any unrelated content they allowed that happened to be defamatory.
A more nuanced read is that the threat of litigation would essentially be a stick that would prevent platforms from moderating. Users couldn’t sue Twitter for removing content — but Twitter would choose not to anyway, for fear of getting sued by someone with deep pockets if they accidentally let a defamatory tweet go viral.
There’s no way to know exactly what would happen if 230 were repealed, but here’s a guess at how things might shake out.
At first, a lot of platforms might stop moderating. This would seem like a win for Team Repeal, until they realized what moderation keeps at bay. Blatant racism, doxxing and threats of violence would be commonplace. Spam filtering is a form of moderation, so discussions would be filled with links to porn sites and scams. Actual conversations would get drowned out.
That would be a deal breaker for a lot of people, who would stop using those platforms. It’s possible things would end here, and the more thick-skinned would just deal with the death threats and spam. On the other hand, since there’s no actual legislation indemnifying platforms if they don’t moderate — just a shaky legal precedent resting on some decades-old cases — it’s possible that we’d start seeing lawsuits anyway.
In response to the legal risk and decaying user bases, companies would start heavily moderating — spending a lot of money to make sure they only allow content that doesn’t risk a lawsuit. This might function similarly to the New York Times comment section, where comments are screened before being published. Most of the content people say is being censored now (like the story in the New York Post, which was explicitly chosen as a publisher because it wouldn’t vet the story: www.nytimes.com/2020/10/18/business/media/new-york-post-hunter-biden.html) would probably end up on the cutting room floor — it might give someone grounds to sue, so better to just err on the side of caution.
Smaller platforms wouldn’t have the resources to do this. They’d end up either limping on, severely compromised, or just going out of business. There would be a steep drop in the number of platforms created, since the spam issue would be especially overwhelming for those in their infancy.
Independent websites would suffer even more. Let’s say I added a comments section to this blog, and then one day I started getting flooded with spam. I’d face a dilemma: delete the spam and risk a lawsuit, or let it continue to suffocate the conversation? (Assuming that allowing a free-for-all would indemnify me, which, again, is not guaranteed). I’m not even making any money from this website — is it worth the legal risk?
Meanwhile, some crucial web infrastructure simply could not exist without incurring liability. Ranking, for example, is a fundamentally biased act that depends on inferences about both the intent of the reader and the nature of the content being ranked.3 Search engines would be the most obvious casualty, as every search result would present a legal liability. Requiring humans to check each of the 1.7 billion websites that exist (www.weforum.org/agenda/2019/09/chart-of-the-day-how-many-websites-are-there/) would result in far less of the web being searchable. The added difficulty and expense would further entrench Google’s search monopoly, which is already the target of a Justice Department antitrust lawsuit (www.wsj.com/articles/justice-department-to-file-long-awaited-antitrust-suit-against-google-11603195203).
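To make the subjectivity of ranking concrete, here’s a toy scoring function: a minimal sketch in Python, with signals and weights invented purely for illustration (no real search engine works this simply). Every constant in it encodes an editorial judgment.

```python
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    text_match: float  # how well the page matches the query, 0..1
    backlinks: int     # crude popularity signal
    age_days: int      # days since publication

def score(page: Page) -> float:
    """Blend three signals into a single number. The weights are
    editorial choices: boost recency and breaking news wins; boost
    backlinks and established sites win. There is no neutral setting."""
    w_match, w_links, w_recency = 0.6, 0.3, 0.1
    links = min(page.backlinks, 1000) / 1000  # cap runaway popularity
    recency = 1 / (1 + page.age_days / 30)    # decays over roughly a month
    return w_match * page.text_match + w_links * links + w_recency * recency

results = [
    Page("tabloid.example/breaking-scoop", text_match=0.7, backlinks=50, age_days=1),
    Page("encyclopedia.example/background", text_match=0.9, backlinks=900, age_days=400),
]
for page in sorted(results, key=score, reverse=True):
    print(f"{score(page):.3f}  {page.url}")
```

Nudge the weights and the order flips. That judgment call is harmless today; in a world where ranking creates liability, it becomes evidence.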
Whatever the case, the end result would be a chilling effect at all levels of Internet discourse. Most platforms would be far more censorious than they are today, and the rest would be overrun by trolls and spammers. Small platforms would go under — and worse, far fewer would be created in the first place. Interaction on personal websites would almost entirely disappear. Many people, intimidated by abuse and threats, would withdraw from public life online. Others would just get fed up with the spam.
A common response to these scenarios is that platforms should have to be neutral with regard to ideology, but still be free to filter spam.
Okay, so what counts as spam? Some of it is obvious: “I made $1,000 from my couch” type stuff. But the majority falls in a gray area that would create huge headaches for platforms trying to comply with the law and for courts trying to enforce it.
Most people who have been online have seen messages for erectile dysfunction drugs or porn sites. But how do you disambiguate between that and online sex workers promoting themselves? What’s the fundamental difference between a sketchy online pharmacy and Roman, a digital health clinic for men (www.getroman.com)?4
If you think those examples are a bit contrived, it’s easy to come up with an example where it’s not only ambiguous whether something is spam, but also unclear whether blocking it would be political censorship.
Say I run a pro-Trump forum on which I occasionally encourage people to buy Trump campaign merch. One day, someone shows up and starts posting links to Biden merch. Is that spam, or political discourse? I can’t very well say advertising campaign merch is prohibited, since I do it myself. Could I ban all promotion of campaign merch? What if they’re not posting links, but just repeatedly mentioning how much they like their Biden shirt in otherwise innocuous comments?
We could respond by trying to make sophisticated rules about what counts as spam. Anyone who’s tried to run an online community can probably tell you how well that would work (eev.ee/blog/2016/07/22/on-a-technicality/). You’d get a lot of people finding ways to post things that aren’t quite spam, barely fitting within the letter of the rules while clearly violating their intent.
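To illustrate, here’s a minimal sketch (in Python, with a made-up rule and made-up comments) of the gap between a rule’s letter and its intent:

```python
import re

# Hypothetical bright-line rule for our forum: "no links to merch stores."
MERCH_LINK = re.compile(r"https?://\S*(?:shop|store|merch)\S*", re.IGNORECASE)

def violates_letter_of_rule(comment: str) -> bool:
    """Enforce exactly what the written rule says, nothing more."""
    return bool(MERCH_LINK.search(comment))

comments = [
    "Get yours at https://biden-merch.example/shirts",             # caught
    "I love my Biden shirt! Search 'Biden shirt', first result.",  # evades
    "b i d e n - m e r c h dot example. You know what to do.",     # evades
    "Official store: https://shop.trump-campaign.example",         # caught
]
for comment in comments:
    verdict = "FLAGGED" if violates_letter_of_rule(comment) else "allowed"
    print(f"{verdict}: {comment}")
```

The spammer slips through with trivial obfuscation, while the forum owner’s own merch links now count as violations. Rules lawyering cuts in both directions.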
Conversely, we could define spam loosely and leave it to the discretion of the platform. This would just put the shoe on the other foot — the platforms would be doing the rules lawyering rather than the spammers. You’d hear a lot of justification for why some content a platform wants to censor technically fits within the definition of spam. This would end up a lot like things are today.
You might object that we’ve been fairly successful at fighting email spam, so it shouldn’t be hard for other platforms to do so. Unfortunately, without Section 230, even that becomes risky legal territory — what if an email provider doesn’t block an email that turns out to be defamatory? There are laws regarding email spam (the CAN-SPAM Act, www.law.cornell.edu/uscode/text/15/chapter-103) — mostly regulating unsubscribe links and misleading content — but they impose burdens on the senders, not the email platforms. We’ve never really had a test of the legality of filtering spam, because Section 230 has made those questions moot.
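For context, the reason email filtering works as well as it does is statistical: filters score each message against word frequencies learned from mail that users have already labeled. Here’s a minimal naive-Bayes-style sketch (the training counts are invented for illustration):

```python
import math
from collections import Counter

# Toy word counts from previously labeled mail (invented numbers).
spam_counts = Counter({"pills": 30, "free": 25, "winner": 20, "meeting": 1})
ham_counts = Counter({"meeting": 40, "report": 30, "lunch": 15, "free": 5})

def spam_score(message: str) -> float:
    """Log-likelihood ratio: positive means more spam-like.
    Laplace (+1) smoothing handles words we've never seen."""
    spam_total = sum(spam_counts.values())
    ham_total = sum(ham_counts.values())
    vocab = len(set(spam_counts) | set(ham_counts))
    score = 0.0
    for word in message.lower().split():
        p_spam = (spam_counts[word] + 1) / (spam_total + vocab)
        p_ham = (ham_counts[word] + 1) / (ham_total + vocab)
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("free pills winner"))     # strongly positive: spam
print(spam_score("lunch meeting report"))  # strongly negative: ham
```

The catch is that this is inherently probabilistic: even a production-grade filter will sometimes block legitimate mail and pass spam. Without Section 230, every one of those mistakes is a potential lawsuit.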
Hopefully it’s clear that “platforms must be neutral except for this special case” is a fraught proposition.
Proposed solutions like these emerge when you don’t clearly articulate the actual problem. For those who see platforms suppressing conservative content, the knee-jerk reaction is to prevent them from doing that.
But hold on — there are plenty of platforms where conservative content is welcome. The Internet is a decentralized place, and it’s easy to create a community that doesn’t moderate content if you want to. And indeed, there are plenty of communities where aggrieved conservatives can seek refuge from heavy-handed moderation. The issue is that the content is basically what you’d expect from a platform where 90% of the members are alt-right, and most people don’t want to spend time there.5
As I see it, the problem is not that conservatives are wanting for places to post conservative content online. It’s that they want to do it on Facebook and Twitter and YouTube.
That’s the crux of the issue. No one wants to be consigned to a marginal social network. Conservatives want to be welcome in the spaces that everyone frequents.
In a way, I sympathize with this. Even if you believe — and I do — that society writ large should get to decide what speech is socially acceptable, it shouldn’t be hard to see that there’s a difference between norms determined by an organic consensus of people and norms set by a handful of executives at a gatekeeper corporation. And it’s not just conservatives who are occasionally out of step with those executives. TikTok, for example, has admitted to censoring posts by users it identified as overweight, disabled or LGBTQIA+ (www.theguardian.com/technology/2019/dec/03/tiktok-owns-up-to-censoring-some-users-videos-to-stop-bullying), while Instagram has taken down photos showing periods (www.salon.com/2015/03/27/instagram_tries_to_fix_its_period_problem/) and Facebook has banned the use of eggplant and peach emojis in a sexual manner from all its social networks (www.fastcompany.com/90424513/facebook-has-a-new-policy-on-sexual-emoji-yes-that-includes-the-peach).
One suggestion is writing neutrality regulations that would only kick in when platforms reach a certain size. That way, we could avoid overburdening small communities and personal websites, while platforms that reach “public square” size couldn’t play favorites.
First, we’d have to figure out what it means to be a public square. But even if we managed to do that, we’d face a new problem: platforms would end up intentionally staying small to avoid incurring the additional regulations. Since companies wouldn’t want to grow that large unless they could be more profitable despite the public square rules, the rules would act as regulatory moats protecting the current incumbents.
Some people propose that network effects make it so hard to compete that social networking sites should be considered natural monopolies, the way we treat physical infrastructure like electric and gas utilities. I don’t buy that. There are several key differences between social networks and actual utilities, not least the fact that people often use multiple social networks simultaneously. And frankly, the number of dominant social networks — both present and past — makes me very skeptical: if we accept the utility argument, there are at least three current social networks that could credibly be described as natural monopolies.
Moreover, we’d still be at the mercy of tech giants. The “neutrality” would be an uneasy truce between government officials and profit-seeking companies, the latter of which would be trying to get away with as much as they could without raising the ire of the former. We’d still have corporations controlling our public discourse, but they’d be benevolent dictators, to the extent that any highly regulated industry is “benevolent”.
For giant companies, whether or not to comply with regulations is just a profit/loss calculation. Last year, the FTC hit Facebook with a record-breaking $5 billion fine for privacy violations — and their stock went up (www.theverge.com/2019/7/12/20692524/facebook-five-billion-ftc-fine-embarrassing-joke). They’re very sorry, and I’m sure they’ll never do it again (mashable.com/article/house-antitrust-report-facebook-privacy-misinformation/, www.theverge.com/2019/9/20/20876021/facebook-developers-apps-suspensions-data-privacy-cambridge-analytica, www.zdnet.com/article/facebook-says-5000-app-developers-got-user-data-after-cutoff-date/, www.wired.com/story/facebook-social-media-privacy-dark-patterns/).
Combine that with the fact that the people in control of the government can twist well-meaning laws in order to intimidate organizations that are trying to do the right thing — the DoE’s selective investigation of Princeton for nominally grappling with its own systemic racism (www.washingtonpost.com/education/2020/09/25/scores-college-presidents-urge-education-department-stop-its-investigation-princeton/) leaps to mind — and you have a recipe for things being much worse than they are today.
Given all that, I think there’s exactly one way to solve this issue: keep Section 230 and break up giant tech platforms.
Writing laws that forbid bias is an attempt to put users on equal footing. Breaking up the platforms, on the other hand, would put the platforms themselves on equal footing. It would be fine if a platform decided it didn’t want to host a certain type of content, because it wouldn’t have a de facto monopoly.
That’s not to say that some platforms wouldn’t be bigger than others. Just like now, we’d have bigger ones and smaller ones. The important thing is that we not let the bigger ones grow so large that they can act as stand-ins for public infrastructure.
It’s possible that many platforms would make the same moderation choices the tech giants are making now; that the same content would end up being marginalized. I think that’s fine. The important thing is that we, the people, decide the bounds of acceptable discourse, rather than unaccountable gatekeepers enforcing it by fiat.
Obviously, breaking up these platforms is easier said than done. But if the goal is to avoid the country’s discourse being controlled by a small group of executives and politicians, I don’t think there’s another solution that checks all the boxes. We’d still have to figure out that size threshold. But as a rule of thumb, if a company is big enough to be a “public square”, it’s too big.
The promise of technology was that it would be an equalizing force, inverting power structures and giving a voice to people. I believe in that promise — and I believe that it’s fundamentally incompatible with corporations controlling our discourse.
There are many things wrong with the Internet today. But Section 230 isn’t one of them. It was instrumental in the development of the Internet we have today, and removing it would harm individuals far more than the platforms it protects.
For the Internet to truly empower people, it’s not enough to try to force the gatekeepers to be neutral. We have to neutralize the gatekeepers.
Footnotes
1. You may remember Stratton Oakmont as the fraudulent brokerage in The Wolf of Wall Street (www.imdb.com/title/tt0993846/). ↩
2. One thing that gets lost in the discussion is that the immunity granted by Section 230 is specifically for third-party content. Platforms are still treated as the publisher of content they produce themselves. If Section 230 were repealed, it’s likely that e.g. Twitter would still be able to place fact check warnings around the President’s tweets. They would just be liable if those warnings turned out to be defamatory — as they are today. ↩
3. Here’s a simple example to illustrate why ranking is so subjective. Which result is more relevant when searching for “Giuliani scandal” — the New York Post article in which he presented alleged evidence of corrupt behavior by Joe and Hunter Biden, or the MSNBC article claiming that the real scandal is the federal investigation into whether he was given the evidence by a hostile foreign power targeting the US presidential election (www.msnbc.com/rachel-maddow-show/instead-creating-scandal-giuliani-s-middle-one-n1243996)? Maybe the most relevant result isn’t news at all, but the Wikipedia article for the Trump–Ukraine scandal (en.wikipedia.org/wiki/Trump%E2%80%93Ukraine_scandal), in which “Giuliani” appears 223 times? Which is more important, recency or text match or backlinks? There’s no objective answer to these questions; ranking consists entirely of tradeoffs. ↩
4. Some people, hearing about sex workers and sketchy pharmacies, think that those are acceptable tradeoffs. That’s the other problem with making exceptions like this: it shifts the goalposts from “I’m pro-free speech” to “I’m pro-free speech for speech I care about.” Which is a fine position — almost no one is actually a free speech absolutist — but it’s certainly not a principled stance against censorship. Spam is speech too, after all! I would welcome a discussion about what the boundaries of acceptable speech should be, just not on the terms of someone who thinks their own boundaries are some sort of objective standard. ↩
5. There’s a quote by Scott Alexander that I think about sometimes: “[I]f you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.” Although the bulk of his post was about conservatives withdrawing from mainstream traditional media, this quote refers to a “free speech” Reddit clone, and it’s not hard to envision “free speech” clones of other social networks following a similar trajectory. I end up disagreeing with his conclusion, but it’s an interesting piece (slatestarcodex.com/2017/05/01/neutral-vs-conservative-the-eternal-struggle/). ↩