To: Congress
Remove AI automation decisions on YouTube
**Petition to Congress: Enact Federal Legislation Requiring Meaningful Human Review and Transparency in AI-Driven Content Moderation on Online Platforms to Prevent Wrongful Deplatforming and Economic Harm**
**To the United States Congress, including:**
- Members of the Senate Committee on Commerce, Science, and Transportation
- Members of the House Committee on Energy and Commerce
- Members of the House Judiciary Subcommittee on Courts, Intellectual Property, and the Internet
- All Members of the House and Senate
**We, the undersigned citizens and residents of the United States, respectfully petition Congress to address the growing harms caused by fully automated AI decision-making in online content moderation on platforms such as YouTube (Google/Alphabet).**
**The Problem**
Major online platforms increasingly rely on AI systems for content moderation, monetization decisions, recommendation algorithms, and enforcement actions—including strikes, demonetization, feature limitations, and permanent channel terminations. While AI enables scale for billions of videos uploaded daily, current implementations frequently result in:
- Wrongful terminations and demonetization of legitimate creators, often with no meaningful human review before irreversible actions.
- Opaque criteria and training data leading to false positives, bias, and suppression of protected speech.
- Ineffective or fully automated appeals processes that deny due process, destroying livelihoods and channels built over years and chilling free expression.
- Amplification of low-quality or harmful content while original, compliant voices are penalized.
These issues have escalated in recent years, with widespread creator reports of mass erroneous bans and CEO statements defending expanded AI use despite documented errors. Such practices raise serious concerns about accountability, fairness, discrimination, and First Amendment values in the digital public square.
**The Solution We Demand**
Congress must enact federal legislation to protect users and creators by establishing baseline standards for AI-driven moderation on large platforms. Specifically, we urge introduction and passage of a bill that includes:
1. **Mandatory meaningful human review** — No final destructive action (e.g., strike, demonetization, termination, or major algorithmic suppression) may be taken solely by AI/automation. Platforms must provide timely, substantive human review before enforcement.
2. **Transparency requirements** — Platforms must publicly disclose key AI moderation criteria, decision thresholds, training data categories (anonymized), error rates, and appeal outcomes to enable understanding, accountability, and independent auditing.
3. **Robust appeal rights** — Guaranteed access to a human-led appeal process with detailed explanations, reasonable timelines (e.g., 7–30 days), and the option for independent oversight in high-stakes cases.
4. **Algorithmic impact assessments** — Require platforms to conduct and report regular assessments for bias, error rates, and civil rights impacts in moderation systems, with remedies for identified harms.
5. **Enforcement mechanism** — Empower the FTC, NTIA, or a dedicated body to investigate violations, impose penalties, and promulgate rules, building on frameworks like the AI Accountability Act.
These reforms would promote responsible innovation while safeguarding economic opportunity, free expression, and due process—without banning AI outright, which is impractical at scale.
**Why Now?**
The absence of federal standards allows unchecked harms to continue, eroding trust in online platforms and the digital economy. Congress has already studied AI risks in communications; it is time to act with targeted protections.
We urge you to introduce, cosponsor, and advance legislation addressing these issues. Your leadership can restore fairness and accountability in the platforms that shape public discourse and creator livelihoods.
**Thank you for your attention to this critical matter.**
**Petitioner:** Lethal Shadows (Ghost of Gueber)
**Date:** 3/6/26
Why is this important?
Dear [Senator/Representative/Committee Staffer],
Thank you for considering my petition calling for federal legislation to require meaningful human review, transparency, and due process in AI-driven content moderation on platforms like YouTube.
The need for reform is not hypothetical—it is causing widespread, documented harm to American creators, small businesses, and free expression in real time. As of early 2026, YouTube's increasing reliance on fully automated AI systems for enforcement decisions has led to:
- Mass wrongful terminations and demonetization: In 2025 alone, over 12 million channels were terminated, with many legitimate creators (including tech reviewers, educators, and animators with hundreds of thousands of subscribers) reporting sudden, irreversible bans due to AI errors. High-profile cases include tech creator Enderman's channels (350k+ subscribers) terminated over false associations, and others reinstated only after public outcry or legal action—yet many remain lost.
- Irreversible economic damage: Creators lose years of work, revenue streams, and livelihoods overnight. Appeals are often automated or denied with boilerplate responses and little human involvement. Prominent voices like MoistCr1TiKaL have publicly called the system "delusional" for punishing original content while failing to address real issues.
- Platform leadership doubling down despite backlash: In late 2025 and into 2026, YouTube CEO Neal Mohan repeatedly defended and expanded AI moderation, stating it improves "every week" and is essential for scale, even as creators report escalating false positives. In his January 2026 letter, Mohan prioritized fighting "AI slop" (low-quality generated content) via more AI tools, but this has amplified risks to innocent users without adequate safeguards.
These problems erode trust in the digital economy, suppress diverse voices in the public square, and raise serious questions about fairness, bias, and due process under the First Amendment. Without federal standards, platforms face no incentive to prioritize accuracy over speed and cost savings—leaving creators without recourse and viewers with a degraded experience.
Congress has already advanced related efforts, such as the AI Accountability Act (H.R. 1694) for studying AI risks in communications platforms, and the Platform Accountability and Transparency Act (S.3292) for moderation reporting. Building on these, targeted requirements for human review in high-stakes decisions would protect innovation while preventing unchecked corporate overreach.
I urge swift introduction or cosponsorship of legislation to establish these baseline protections. This is a bipartisan issue affecting creators across the political spectrum, small businesses in every state, and the future of online expression.
Please let me know how I can provide more details, personal testimony, or connect with affected constituents in [your district/state].
Thank you for your leadership on this pressing matter.
Sincerely,
Lethal (@GhostOfGueber)