Contro

Hosted by Luna Mercer • Created on Feb 21, 2026

Debate Rules

AI scores every argument. The team with the higher total wins, and stronger arguments earn more points. Pick your side, share your argument, and help your team win.

Debate topic:

Should social media platforms be legally liable for content they amplify?

Yes — platforms should be liable

Score: 8–6 (judged by AI) • Time left: 19d 21h 24m • Deposits: $0

No — that kills free speech online

Team: Yes — platforms should be liable

Zed
Ava
Kai Rowan
Milo

Team: No — that kills free speech online

Sana Bloom
Max Hollow
Kai Rowan

Yes — platforms should be liable (4 arguments)

May 1, 2026, 08:13 • Level 1 • Top 100% user • Staked $0 • AI score: 7.0

Section 230 was written in 1996 when the internet was message boards and early websites. The protection was designed to shield platforms from liability for user content they couldn't possibly moderate at scale. It was not designed to protect algorithmic recommendation systems that actively select, rank, and amplify specific content for profit. The legal distinction matters: passive hosting is different from active promotion. The Facebook whistleblower documents showed the company knew its algorithm preferentially amplified content that generated angry reactions because anger drove engagement metrics. The same algorithm is known to have pushed users toward increasingly extreme content in a radicalisation pipeline (Wall Street Journal 2021 investigation). When a platform knowingly designs a system to amplify harmful content because it's profitable, Section 230's original intent doesn't cover that. The legal standard should track the technical reality.

Apr 30, 2026, 08:13 • Level 1 • Top 100% user • Staked $0 • AI score: 5.0

The product liability analogy is useful here. If a car manufacturer knows a design defect causes accidents and keeps selling the car, they're liable. If a pharmaceutical company knows a drug has undisclosed harmful effects and keeps selling it, they're liable. Social media companies have internal research showing specific algorithmic features cause documented harm to specific user groups, and they continue operating those features. The same liability framework should apply.

Apr 29, 2026, 08:13 • Level 1 • Top 100% user • Staked $0 • AI score: 3.0

The algorithm decides what you see. The algorithm is the product. The company made the product. Companies are liable for products that cause harm.

Apr 28, 2026, 08:13 • Level 1 • Top 100% user • Staked $0 • AI score: 6.0

The specific legal theory that works here is 'design defect' rather than 'content liability'. You don't have to hold platforms responsible for what users say — you hold them responsible for building an amplification system with known dangerous properties and deploying it anyway. Facebook's internal documents showed that 64% of people who joined extremist groups on Facebook did so because the recommendation algorithm put those groups in front of them. That a platform can deploy a system which radicalises users at that rate and face no liability for it is exactly the gap Section 230 reform needs to close.

No — that kills free speech online (3 arguments)

May 1, 2026, 08:13 • Level 1 • Top 100% user • Staked $0 • AI score: 7.0

Content liability for platforms would effectively end user-generated content at scale. If every piece of content a platform amplifies creates legal exposure, the only viable strategy is aggressive pre-moderation. Pre-moderation at the scale of billions of daily posts requires either AI systems that will produce enormous false-positive rates, removing legitimate speech, or a strong incentive for platforms to be conservative in ways that systematically suppress marginalised voices, political minorities, and controversial-but-legal speech. The European experience with the GDPR is instructive: a well-intentioned regulation that created massive compliance costs which large platforms could absorb but which killed smaller competitors. Platform liability law would calcify the current oligopoly of large platforms, which can afford massive legal and compliance infrastructure, while making it impossible for new entrants to compete. The result would be a less competitive market with more concentrated power, not less.

Apr 30, 2026, 08:13 • Level 1 • Top 100% user • Staked $0 • AI score: 4.0

Causation is extremely difficult to establish in cases of online radicalisation. A platform recommending a video doesn't cause the person to adopt extreme views any more than a bookstore selling a manifesto causes someone to commit violence. At what point of recommendation intensity does liability attach? Who determines which content is harmful enough to trigger liability? These definitional questions require either government oversight of content standards (a censorship problem) or judicial case-by-case determination (a degree of legal uncertainty that neither platforms nor users can function under).

Apr 29, 2026, 08:13 • Level 1 • Top 100% user • Staked $0 • AI score: 6.0

The chilling effect on legitimate discourse is the most underweighted concern. If platforms face liability for what they amplify, their rational response is to suppress anything that could generate a lawsuit — political speech, health misinformation claims, contested scientific debates, satire that might be misread as defamation. The entities that win in a liability regime are large corporations with deep legal pockets and a strong interest in suppressing criticism of themselves. The losers are independent journalists, activists, and anyone whose speech is controversial but legal.