Social media platforms have an obligation to their users to moderate content and combat election disinformation and other harmful material online. These platforms cannot become overrun with extremists and lies, so we need rigorous content moderation policies and strong enforcement.
Why is this important?
Social media platforms are woefully unprepared to stop election deniers and disinformation.
Elon Musk’s Twitter is the perfect example: one of his first moves after taking control was to fire the head of content moderation, signaling a free pass to anyone who wants to spread election disinformation, transphobia, homophobia, racism, and more.
We’ve even learned that Twitter has stopped enforcing its policy against the ‘Big Lie,’ even though rampant election denialism on sites like Twitter fed the violence that culminated in the January 6th attack on our democracy.
It doesn’t end with Twitter. Our Social Media Monitoring team spent Election Day tracking blatant, evidence-free attempts to convince people that their votes don’t count or that election officials can’t be trusted, then flagging those posts to platform officials for removal.
But this year, the response we received was haphazard, to say the least. Companies like Meta, Twitter, and TikTok failed to learn from the 2020 elections. We, along with 120 other groups, called on them months ago to make specific policy changes to prevent this, but they didn’t, allowing harmful content to keep spreading, to say nothing of right-wing sites like Gab and Parler that are designed to spread disinformation.
Tell social media platforms: we need them to stand up against election disinformation, hate speech, harassment, and other harmful content.