What to Know About the Supreme Court Case on Free Speech on Social Media
Social media companies are bracing for Supreme Court arguments on Monday that could fundamentally alter the way they police their sites.
After Facebook, Twitter and YouTube barred President Donald J. Trump in the wake of the Jan. 6, 2021, riot at the Capitol, Florida made it illegal for technology companies to ban a candidate for office in the state from their sites. Texas later passed its own law prohibiting platforms from taking down political content.
Two tech industry groups, NetChoice and the Computer & Communications Industry Association, sued to block the laws from taking effect. They argued that the companies have the right to make decisions about their own platforms under the First Amendment, much as a newspaper gets to decide what runs in its pages.
So what’s at stake?
The Supreme Court’s decision in those cases — Moody v. NetChoice and NetChoice v. Paxton — is a major test of the power of social media companies, one that could reshape millions of social media feeds by giving the government influence over what stays online and how it is moderated.
“What’s at stake is whether they can be forced to carry content they don’t want to,” said Daphne Keller, a lecturer at Stanford Law School who filed a brief with the Supreme Court supporting the tech groups’ challenge to the Texas and Florida laws. “And, maybe more to the point, whether the government can force them to carry content they don’t want to.”
If the Supreme Court rules that the Texas and Florida laws are constitutional and they take effect, some legal experts speculate that the companies could create versions of their feeds specifically for those states. But such a ruling could usher in similar laws in other states, and accurately restricting access to a website based on a user’s location is technically complicated.
Critics of the laws say the feeds shown in the two states could include extremist content — from neo-Nazis, for example — that the platforms previously would have taken down for violating their standards. Or, the critics say, the platforms could ban discussion of anything remotely political by barring posts about many contentious issues.
What are the Florida and Texas social media laws?
The Texas law prohibits social media platforms from taking down content based on the “viewpoint” of the user or the viewpoint expressed in the post. The law gives individuals and the state’s attorney general the right to file lawsuits against the platforms for violations.
The Florida law fines platforms if they permanently ban from their sites a candidate for office in the state. It also forbids the platforms from taking down content from a “journalistic enterprise” and requires the companies to be upfront about their rules for moderating content.
Proponents of the Texas and Florida laws, which were passed in 2021, say the measures will protect conservatives from what they describe as a liberal bias pervading the platforms, which are based in California.
“People the world over use Facebook, YouTube, and X (the social-media platform formerly known as Twitter) to communicate with friends, family, politicians, reporters, and the broader public,” Ken Paxton, the Texas attorney general, said in one legal brief. “And like the telegraph companies of yore, the social media giants of today use their control over the mechanics of this ‘modern public square’ to direct — and often stifle — public discourse.”
Chase Sizemore, a spokesman for the Florida attorney general, said the state looked “forward to defending our social media law that protects Floridians.” A spokeswoman for the Texas attorney general did not provide a comment.
What are the current rights of social media platforms?
They now decide what does and doesn’t stay online.
Companies including Meta’s Facebook and Instagram, TikTok, Snap, YouTube and X have long policed themselves, setting their own rules for what users are allowed to say while the government has taken a hands-off approach.
In 1997, the Supreme Court ruled in Reno v. American Civil Liberties Union that a law regulating indecent speech online was unconstitutional, differentiating the internet from media where the government regulates content. The government, for instance, enforces decency standards on broadcast television and radio.
For years, bad actors have flooded social media with misleading information, hate speech and harassment, prompting the companies to come up with new rules over the last decade that include forbidding false information about elections and the pandemic. Platforms have banned figures like the influencer Andrew Tate for violating their rules, including against hate speech.
But these measures have drawn a right-wing backlash, with some conservatives accusing the platforms of censoring their views — criticism that even prompted Elon Musk to say he wanted to buy Twitter in 2022 to help ensure users’ freedom of speech.
What are the social media platforms arguing?
The tech groups say that the First Amendment gives the companies the right to take down content as they see fit, because it protects their ability to make editorial choices about the content of their products.
In their lawsuit against the Texas law, the groups said that just like a magazine’s publishing decision, “a platform’s decision about what content to host and what to exclude is intended to convey a message about the type of community that the platform hopes to foster.”
Still, some legal scholars are worried about the implications of allowing the social media companies unlimited power under the First Amendment, which is intended to protect the freedom of speech as well as the freedom of the press.
“I do worry about a world in which these companies invoke the First Amendment to protect what many of us believe are commercial activities and conduct that is not expressive,” said Olivier Sylvain, a professor at Fordham Law School who until recently was a senior adviser to the Federal Trade Commission chair, Lina Khan.
How does this affect Big Tech’s liability for content?
A federal law known as Section 230 of the Communications Decency Act shields the platforms from lawsuits over most user content. It also protects them from legal liability for how they choose to moderate that content.
That law has been criticized in recent years for making it impossible to hold the platforms accountable for real-world harm that flows from posts they carry, including online drug sales and terrorist videos.
The cases being argued on Monday do not challenge that law head-on. But the Section 230 protections could play a role in the broader arguments over whether the court should uphold the Texas and Florida laws. And the state laws would indeed create new legal liability for the platforms if they take down certain content or ban certain accounts.
Last year, the Supreme Court considered two cases, directed at Google’s YouTube and Twitter, that sought to limit the reach of the Section 230 protections. The justices declined to hold the tech platforms legally liable for the content in question.
What comes next?
The court will hear arguments from both sides on Monday. A decision is expected by June.
Legal experts say the court may rule that the laws are unconstitutional but provide a road map for how to fix them. Or it may fully affirm the companies’ First Amendment rights.
Carl Szabo, the general counsel of NetChoice, which represents companies including Google and Meta and lobbies against tech regulations, said that if the group’s challenge to the laws fails, “Americans across the country would be required to see lawful but awful content” that could be construed as political and therefore covered by the laws.
“There’s a lot of stuff that gets couched as political content,” he said. “Terrorist recruitment is arguably political content.”
But if the Supreme Court rules that the laws violate the Constitution, it will entrench the status quo: the platforms, not the government, will determine what speech gets to stay online.
Adam Liptak contributed reporting.