Supreme Court Punts Flimsy Laws Limiting Social Media Content Moderation

Rex Mullens


What if the US Supreme Court upheld a law that let the government force The New York Times and The Wall Street Journal to publish stories against their will? What if the editors at those outlets no longer had the power to turn down stories they found objectionable, such as ones containing hate speech or misinformation?

State-Run Content Moderation Is Inherently Un-American

The Supreme Court just sent two cases back to the lower courts. Both concern laws passed in Florida and Texas that would restrict social media companies from moderating objectionable content on their platforms. The pushback centers on one question: Does the First Amendment protect the editorial decisions of social media platforms?

It should. Social media companies are not government entities; they are private businesses run by private citizens, and as such, they have the right to allow whatever content they see fit on their platforms. If consumers disagree with how a platform moderates, they can leave. In fact, this March, over half (54%) of US online adults said that social media companies have the right to moderate content based on their own terms and conditions, and 46% believe that social media companies are protected by the First Amendment when they deplatform users for posting misinformation or hate speech.

An Unmoderated Social Media Experience Would Be, At Best, Frustrating

Social media platforms argue that without moderation, their feeds would be filled with harmful content and spam. Consumers already think these platforms are overrun with misinformation and hate speech, and laws like these would only make it worse. Eighty-one percent of US online adults said there’s a lot of fake news and misinformation on social media, and 74% said it’s easy to be tricked by scams. This exposure worries consumers. In Forrester’s Global Government, Society, And Trust Survey, 2024, US online adults said they are concerned about their online safety when exposed to the following content:

  • 72%: Disinformation, misinformation, or fake news
  • 71%: Child exploitation or abuse material
  • 62%: Hate speech

Content Moderation Promotes Responsible Governance, Not Censorship

As we’ve written previously about Section 230 and misinformation in media, safety shouldn’t mean censorship. Content moderation promotes responsible governance that attracts advertisers and consumers alike. If the US government dismantles content moderation on social media platforms:

  • Misinformation will take over. As it stands, social media algorithms often amplify misinformation and disinformation despite existing moderation efforts. During the 2020 US presidential election, news publishers known for publishing misinformation got six times the engagement on Facebook compared with trustworthy news sources. Pulling back content moderation would further unleash an already untamed beast.
  • Consumers will spend less time on social media. Forty-three percent of consumers already say they’re spending less time on social media than they did in the past. If their experience becomes inundated with spam and hateful content, consumers won’t want to spend their time there.
  • Marketers will shift spend to other media channels. Brands suspended their media spend on X (formerly Twitter) after their ads appeared next to neo-Nazi content. X bills itself as a free speech platform, but in practice it’s an unmoderated platform, one that many brands deem unworthy of their advertising. If safety concerns become untenable across social media platforms, brands will abandon them for safer media spaces.

Forrester clients, set up a guidance session to discuss this topic further or to pressure-test your social media strategy.
