Liability & Digital Ethics

Social Media’s Crime of Negligence: When Platforms Know but Don’t Act

Published Jan 2026 • 14 min read • By Charu Bigamudra

Social media platforms have long been marketed as tools for building community. Yet the evidence suggests a pattern of routine negligence: companies often hold internal data indicating serious dangers to users, yet fail to act until public or regulatory pressure forces their hand.

Internal Knowledge, External Harm

In 2021, internal Facebook documents revealed that the company knew its Instagram algorithm contributed to body-image issues, depression, and anxiety among teenage girls. Although staff described the findings as "pretty crazy," the company's response was delayed and continued to prioritize engagement over user welfare.

“The platform’s systems are designed first for engagement, not minimal harm.”
— European Commission Official

Evidence of Missing Signals

Negligence often manifests as a failure to act on clear indicators. From coordinated efforts to incite violence ahead of the January 6 Capitol riot to the persistent presence of extremist content and hate speech in the EU, platforms frequently lack the "will to enforce standards consistently," even when they possess the detection technology to do so.

Exhibit A: Child Safety

The National Center for Missing & Exploited Children (NCMEC) emphasizes that swift detection of child sexual abuse material (CSAM) saves lives. Yet critics argue that companies often hide behind privacy justifications (such as encryption policies) to avoid the responsibility of protecting vulnerable users.

The Choice Not to Act

A 2024 U.K. Online Safety Commission report found that platforms such as TikTok and Snapchat allowed self-harm content to remain visible for days, even though their automated systems can flag high-risk keywords almost instantly. Dr. Sarah Greene, an expert in cyberpsychology, notes that this is a policy decision, not a technical limitation.
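
To make concrete how low the technical bar is, here is a minimal sketch of keyword-based flagging, assuming a hypothetical `HIGH_RISK_TERMS` watchlist and a `flag_post` helper (neither drawn from any platform's actual code). The match itself is effectively instantaneous; what happens after a flag is raised, such as review queues, demotion, or removal, is where policy decides the outcome.

```python
from dataclasses import dataclass

# Hypothetical watchlist with placeholder entries; real systems maintain
# curated, regularly reviewed term lists and far richer classifiers.
HIGH_RISK_TERMS = {"example-risk-phrase-1", "example-risk-phrase-2"}


@dataclass
class FlagResult:
    flagged: bool
    matched_terms: list[str]


def flag_post(text: str) -> FlagResult:
    """Return which high-risk terms appear in a post, if any."""
    lowered = text.lower()
    matches = [term for term in HIGH_RISK_TERMS if term in lowered]
    return FlagResult(flagged=bool(matches), matched_terms=matches)


if __name__ == "__main__":
    # The flag is cheap and immediate; acting on it is a policy choice.
    print(flag_post("A sample post containing example-risk-phrase-1."))
```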

Are Business Models at Odds with Safety?

The deeper tension lies in the incentive structure. Ranking algorithms are tuned to maximize viewing time, and they often elevate sensational or harmful content precisely because it drives revenue. As consumer protection lawyer Ed Mierzwinski notes, these engagement incentives routinely outweigh safety metrics.
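
The incentive problem can be illustrated with a toy ranking objective. The sketch below is purely illustrative: `predicted_watch_seconds`, `harm_score`, and `rank_score` are hypothetical signals and weights, not any platform's actual system. It simply shows that when the safety term carries no weight, the riskier post wins on engagement alone, and that flipping the outcome is a matter of choosing different weights.

```python
from dataclasses import dataclass


@dataclass
class Post:
    title: str
    predicted_watch_seconds: float  # hypothetical engagement-model output
    harm_score: float               # hypothetical classifier output, 0.0 (benign) to 1.0 (high risk)


def rank_score(post: Post, engagement_weight: float = 1.0, safety_weight: float = 0.0) -> float:
    """Toy objective: engagement minus a (possibly zero-weighted) safety penalty."""
    return engagement_weight * post.predicted_watch_seconds - safety_weight * post.harm_score * 100


feed = [
    Post("Calm explainer", predicted_watch_seconds=40, harm_score=0.05),
    Post("Sensational rage-bait", predicted_watch_seconds=90, harm_score=0.80),
]

# With safety_weight=0 the riskier post ranks first; raising the weight flips the order.
for w in (0.0, 1.0):
    ordered = sorted(feed, key=lambda p: rank_score(p, safety_weight=w), reverse=True)
    print(f"safety_weight={w}: top result = {ordered[0].title}")
```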

The Regulatory Shift: Duty of Care

With the EU’s Digital Services Act (DSA) threatening fines of up to 6% of global annual turnover, the legal landscape is shifting. Courts and regulators must decide whether platforms are passive intermediaries or entities that owe users a "duty of care." Failure to mitigate known risks, such as teen suicide or radicalization, is increasingly being viewed as actionable negligence.
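
For a sense of scale, here is a back-of-envelope calculation of the DSA's 6% ceiling applied to hypothetical turnover figures, not any company's actual revenue:

```python
# The DSA allows fines of up to 6% of worldwide annual turnover.
DSA_MAX_FINE_RATE = 0.06


def max_dsa_fine(annual_turnover_eur: float) -> float:
    """Upper bound on a DSA fine for a given (hypothetical) annual turnover."""
    return annual_turnover_eur * DSA_MAX_FINE_RATE


for turnover in (10e9, 50e9, 100e9):  # EUR 10B, 50B, 100B (illustrative only)
    print(f"Turnover EUR {turnover / 1e9:.0f}B -> max fine EUR {max_dsa_fine(turnover) / 1e9:.1f}B")
```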

The focus for policymakers is no longer whether platforms *should* act, but how to compel them to act before the next tragedy occurs.