by Kate Ruane

ACLU Senior Legislative Counsel

Tomorrow, the Senate Commerce Committee is holding a hearing entitled “Does Section 230 Enable Big Tech Bad Behavior?” This is just the latest attempt by Congress and the Trump administration to amend, reinterpret, or eliminate Section 230, a key provision of federal law that generally ensures online platforms, including social media, can’t be held liable for the speech and content their users post on these platforms. This law means Yelp can’t be held legally responsible every time one of its users posts a false negative review. The Bed Bug Registry doesn’t have to visit every hotel with a magnifying glass to confirm reports from the public. And Facebook can offer a forum for billions of users to share their thoughts, pictures, memes, and videos freely without having to approve every post before they go up. 

Over the summer, Donald Trump issued an executive order calling for the Federal Communications Commission to commence a rulemaking to reinterpret Section 230 in ways entirely contrary to its purpose. Meanwhile, Congress has put forward numerous legislative proposals to amend 230. These efforts are confused at best. Many Republicans believe that the platforms at issue display an anti-conservative bias, disproportionately censoring and fact-checking conservative speakers. Many Democrats are concerned about censorship of communities of color, LGBTQ voices, and women and nonbinary people. Others are concerned that platforms promote disinformation, conspiracy theories, misinformation about voting, violence, and hate speech.

To sum up critics’ views: platforms are censoring people too much and not enough all at once. Somehow, policy makers think that the solution to these alleged problems is to expand the circumstances under which platforms can be liable for their users’ speech by amending Section 230. 

To be clear, amending this provision will not solve any of these concerns. In fact, many of the proposed changes would exacerbate over-censorship, and other proposals would promote the proliferation of misinformation about voting. Yet President Trump and Congress continue to advocate for changes to the law in an effort to encourage the censorship they like and discourage the censorship they don’t. 

Section 230, in addition to providing a shield against liability for users’ speech, enables online platforms to cultivate orderly, pleasant, and useful sites. While the biggest social media companies, responsible for hosting the speech of billions, should resist calls to censor lawful speech, Section 230 allows sites to delete abusive accounts, remove content that violates the site’s terms of service, or eliminate voter misinformation without risking liability for the speech that they do host.

Almost as important as what Section 230 protects is what it does not. Section 230 does not shield bad actors. If you harass or defame someone online, you are responsible for your illegal conduct. 

Moreover, platforms are liable for their own illegal conduct. They can be sued or prosecuted for the content they create and the conduct they engage in, or materially contribute to, that violates any law. That is why Illinois residents were able to sue Facebook under Illinois’ Biometric Information Privacy Act when the platform used facial recognition technology on them without their consent. And that’s why we were able to file charges with the Equal Employment Opportunity Commission to challenge Facebook’s targeting of employment ads to younger men only, excluding all women, non-binary people, and older men. 

Section 230 has permitted platforms to host all kinds of user content, and created space for social movements like #MeToo and content creators on sites like Instagram, TikTok and Twitch to flourish. It has also enabled platforms to host the speech of activists and those organizing protests, from the Arab Spring to today’s protests against police brutality. If it weren’t for Section 230, website owners would be far more reluctant to freely permit public posts knowing that the site could be investigated, shut down, sued, or charged with a felony over one user’s illegal tweet or post — including that of the President. And if that protection is unavailable only for a certain category of content, as some proposed reinterpretations of the law would have it, platforms will censor far more speech related to that category than they could constitutionally be held liable for, simply to spare themselves the massive court costs they could face.

This is exactly what has happened in the past. Congress last amended Section 230 in 2018, through a law called SESTA/FOSTA. It was purportedly aimed only at creating platform liability for illegal sex trafficking. We warned at the time that rather than face liability, platforms would engage in excessive censorship that would harm the LGBTQ and sex worker communities in particular. As predicted, after SESTA/FOSTA’s passage, entire websites that provided forums for sex workers to connect, share information to protect themselves, and build community disappeared. Speech suffered, and so did the health and safety of vulnerable communities. If the EARN IT Act, which is currently moving through the Senate, becomes law, it will create similar harm.

Amending 230 to narrow the scope of the editorial decisions platforms can make while still receiving 230’s immunity shield can also be dangerous. For example, the Online Content Policy Modernization Act, which the Senate Judiciary Committee is set to mark up this week, would create new liability risks for platforms that “editorialize” or make virtually any other changes to user-generated content. That means sites that label and flag misleading or incorrect user speech, or point users to trusted and fact-checked sources to counter falsehoods, could be exposed to greater liability risk. Such a change could discourage sites like Twitter from providing links to factual information about mail-in ballots under tweets that blatantly lie about voting by mail, not unlike the incident that sparked President Trump’s executive order.

The desire on the part of policy makers to do more to create platform accountability is understandable. The ACLU shares that goal, and has long advocated for strong consumer privacy protections at the federal and state level for that very reason. We have also pressed the platforms to provide transparency and meaningful review processes for their content moderation practices. However, we should be wary of proposals that risk harming online expression and be skeptical of focusing on Section 230 as a method of requiring platform accountability. 

Section 230 protects people’s ability to create, communicate, and build community online. The ACLU will remain vigilant in ensuring that the internet continues to be a place for self-expression and creation for all. We urge members of Congress to do the same, particularly as they examine proposals to amend Section 230.