Section 230
47 U.S.C. § 230 — Communications Decency Act of 1996
Twenty-six words that created the internet: the legal foundation shielding online platforms from liability for user-generated content and empowering voluntary content moderation -- now under unprecedented political and technological pressure.
If you have ever posted a photo on social media, left a product review, or commented on a news article, Section 230 was quietly working in the background. This short provision of US law is the reason platforms like Facebook, YouTube, and Reddit can host billions of posts without being sued for every one of them. Without it, most of the internet as we know it could not exist.
Here is the basic idea: if someone writes something harmful online, the person who wrote it can be held responsible -- but the website that hosted it generally cannot. This is different from traditional publishing. A newspaper that prints a defamatory letter can be sued alongside the letter's author. A website that lets users post the same letter typically cannot. Section 230 draws that line. The law also encourages platforms to clean up their spaces -- if a social media company removes, in good faith, a post it considers objectionable, it will not be punished for that decision.
Today, that bargain is under strain from all sides. Critics on the right say platforms use their moderation power to silence conservative voices. Critics on the left say platforms profit from amplifying harmful content while hiding behind immunity. Courts, Congress, state legislatures, and foreign governments are all pushing for change. The Supreme Court had an opportunity to narrow Section 230 in Gonzalez v. Google (2023) but declined to rule on the question, leaving the broad interpretation intact.
The rise of AI-generated content adds an entirely new dimension that the law's 1996 authors never imagined. When a platform's own AI system produces harmful content, it is no longer merely hosting someone else's speech -- it may be the speaker. Whether Section 230 can stretch to cover this scenario is one of the defining legal questions of the next decade.
Section 230 shapes the global internet. Any company that operates a website, app, or digital service accessible to US users is affected by how American courts interpret platform liability. Swiss companies hosting user-generated content, running online marketplaces, or deploying AI chatbots need to understand where Section 230's protections apply -- and where the EU's Digital Services Act imposes stricter obligations.
Switzerland has no domestic equivalent of Section 230. Swiss intermediary liability is governed by general tort law principles and the Federal Supreme Court's case law, which imposes a duty to act once a host has knowledge of unlawful content. Companies operating across both US and EU markets face a fundamental divergence: broad immunity in America versus conditional protection tied to active compliance obligations in Europe.
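To make the divergence concrete, the following is a minimal TypeScript sketch of how exposure for hosted content differs by jurisdiction. All names (`Jurisdiction`, `ContentReport`, `assessExposure`) are illustrative assumptions, and the boolean flags compress doctrine that is far more nuanced in practice.

```typescript
// Illustrative simplification, not a real compliance API.

type Jurisdiction = "US" | "EU" | "CH";

interface ContentReport {
  contentId: string;
  thirdParty: boolean;      // authored by a user, not by the platform itself
  actualKnowledge: boolean; // e.g., a sufficiently precise notice was received
}

interface Exposure {
  intermediaryProtection: boolean;
  requiredAction: "none" | "act expeditiously on the notice";
}

function assessExposure(where: Jurisdiction, report: ContentReport): Exposure {
  // Platform-authored content (including, arguably, generative AI output)
  // falls outside intermediary protection in every jurisdiction.
  if (!report.thirdParty) {
    return { intermediaryProtection: false, requiredAction: "none" };
  }
  switch (where) {
    case "US":
      // Section 230(c)(1): immunity does not depend on what the platform knew.
      return { intermediaryProtection: true, requiredAction: "none" };
    case "EU":
    case "CH":
      // DSA Art. 6 / Swiss Federal Supreme Court case law: protection holds
      // only until actual knowledge, which then triggers a duty to act.
      return report.actualKnowledge
        ? {
            intermediaryProtection: false,
            requiredAction: "act expeditiously on the notice",
          }
        : { intermediaryProtection: true, requiredAction: "none" };
  }
}
```

The asymmetry is the point: in the US model, knowledge changes nothing; in the EU and Swiss models, knowledge is the hinge on which protection turns.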
Section 230 provides two distinct protections: subsection (c)(1), which shields platforms from being treated as the publisher or speaker of third-party content, and subsection (c)(2), which shields good-faith decisions to remove or restrict objectionable material. Understanding which shield applies -- and where each has limits -- is essential for any platform operator; the sketch below summarizes the distinction.
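As a rough mental model -- the names below are ours and the statutory text is paraphrased -- the two shields answer different kinds of claims:

```typescript
// Paraphrase of the statute's structure; the actual language and
// case-law carve-outs control, so treat this as orientation only.

type Claim =
  | { kind: "over-hosted-content" } // e.g., defamation in a user post
  | { kind: "over-a-takedown" };    // e.g., suit over removal or suspension

function applicableShield(claim: Claim): "(c)(1)" | "(c)(2)" {
  // (c)(1): the platform is not treated as the publisher or speaker of
  // information provided by another information content provider.
  // (c)(2): no liability for good-faith restriction of material the
  // platform or its users consider objectionable.
  return claim.kind === "over-hosted-content" ? "(c)(1)" : "(c)(2)";
}
```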
Key Supreme Court decisions that have shaped -- or pointedly avoided shaping -- Section 230's scope.
Dozens of bills to amend or repeal Section 230 have been introduced since 2019. These are the most significant active proposals in the current Congress.
While Section 230 is federal law, states are increasingly passing their own platform regulations. Many of these laws face legal challenges on First Amendment and preemption grounds.
The rise of generative AI creates novel questions that Section 230's 1996 text never anticipated. These are the key open issues.
The world's two largest digital markets take fundamentally different approaches to platform liability. Section 230 grants broad immunity; the DSA conditions limited protection on active compliance.
| DIMENSION | US SECTION 230 | EU DIGITAL SERVICES ACT |
|---|---|---|
| Liability framework | Broad immunity for third-party content, subject to statutory carve-outs (e.g., federal criminal law, intellectual property). No knowledge standard required for (c)(1) protection. | Conditional exemption (Articles 4-6): must act expeditiously upon obtaining actual knowledge of illegal content. |
| Content moderation | Voluntary. Platforms may moderate in good faith without liability. No obligation to monitor or remove content. | Mandatory notice-and-action system. Must provide clear mechanisms for reporting illegal content. Reasoned explanations for removals. |
| Transparency | No transparency requirements. Platforms may operate content policies without disclosure. | Extensive transparency reports required. Algorithmic transparency for very large platforms. Annual audits. |
| Risk assessment | No risk assessment obligation. | Very large platforms must conduct annual systemic risk assessments covering illegal content, fundamental rights, elections, and public health. |
| Enforcement | Private litigation. No federal regulator. FTC has general consumer protection authority but no Section 230-specific enforcement. | Digital Services Coordinators in each Member State. European Commission as direct supervisor for very large platforms. Fines up to 6% of global turnover. |
| Scope | Any "interactive computer service." No size thresholds. Same rules for a personal blog and a platform with 3 billion users. | Tiered obligations by size: intermediary services, hosting services, online platforms, and very large online platforms (45M+ EU users). |
| Algorithm regulation | No algorithmic regulation. Courts have generally held that algorithmic curation of third-party content is protected activity. | VLOPs must offer a non-profiling-based recommendation option. Must assess systemic risks from recommender systems. Researchers granted data access. |
| Children and minors | No specific minor protection provisions. Separate laws (COPPA) address children's data. KOSA pending. | Platforms must take measures to ensure a high level of privacy, safety, and security for minors. Profiling-based targeted advertising to minors prohibited. |
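For a sense of what "notice and action" plus "reasoned explanations" means operationally on the DSA side, here is a small sketch. The field names are our own assumptions; Articles 16 and 17 prescribe the required substance of notices and statements of reasons, not a schema.

```typescript
// Sketch of DSA notice-and-action records (Arts. 16-17). Field names are
// illustrative; the Regulation specifies content requirements, not schemas.

interface IllegalContentNotice {
  location: string;               // exact URL(s) of the allegedly illegal item
  explanation: string;            // why the notifier considers it illegal
  notifier?: string;              // identity, with exceptions for some offences
  goodFaithConfirmation: boolean; // bona fide statement required by Art. 16
  receivedAt: Date;
}

interface StatementOfReasons {
  decision: "removal" | "visibility restriction" | "no action";
  groundsReliedOn: "illegal content" | "terms and conditions";
  factsAndCircumstances: string;  // the reasoning given to the affected user
  automatedMeansUsed: boolean;    // use of automation must be disclosed (Art. 17)
  redress: string[];              // e.g., internal complaints, ADR, courts
}
```

A sufficiently precise notice of this kind is what creates the "actual knowledge" that ends the Article 6 hosting exemption, so intake pipelines and response deadlines are a liability question, not merely a product one.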