To: Microsoft, Google, Twitter/X, Facebook, Amazon, and other leading AI labs, social media platforms, and cloud providers
Protect Taylor Swift & All Women from Non-Consensual A.I. Deepfakes
What happened to Taylor Swift is horrible: fake images spread across the internet showing her performing sexual acts, seen by over 47 million people. But it’s not just Taylor. A growing number of women, including underage girls, have been targets of non-consensual deepfakes for years, with devastating consequences including lost jobs and depression. And while anyone can be the target of a deepfake, in a variety of ways and for different reasons, 96% of all deepfakes on the internet are non-consensual explicit images and videos of women.
It’s been amazing to see Taylor’s supporters come together to expose what was happening. Yet for most women subjected to non-consensual deepfake porn, there’s no army to defend them—they’re left to deal with the consequences on their own, without recourse, and usually in silence.
Creating real consequences for perpetrators is important, and Congress is crafting new legislation to do that. It’s a great start, but it doesn’t go far enough. It focuses only on the users who make and distribute non-consensual deepfake porn, not the companies that enable it at every level, and it shifts the burden to victims to find and go after their perpetrators after the harm has already been done.
To put a real stop to the harm, we need to look at the root of the problem: tech companies like Microsoft, Facebook, Twitter/X, and Google, which built, and continue to manage and profit from, the technology that makes it easy to create and distribute deepfake porn. Even as it has become clear these companies are perpetuating harm, they have failed to take adequate steps to stop it. Worse, many have actually pulled back safeguards and disbanded the teams that would help curb the problem.
Join us in holding these companies accountable, and demanding they make changes to bring an end to the harm caused by non-consensual deepfakes.
Why is this important?
In addition to the companies that have unleashed the technology that enables deepfakes, hosting and cloud computing providers like Amazon have continued to supply the infrastructure needed to run large websites specializing in non-consensual deepfake porn, even though the harm those sites do has become clear. They, too, are part of the problem.
Normally, when a company releases a product that causes unexpected harm, it is responsible for fixing it. When Toyota found it had cars on the road with faulty gas pedals that could cause crashes, and when Samsung got reports of Galaxy smartphones overheating and catching fire, they recalled their products. Tech companies releasing, distributing, and hosting content generated by cutting-edge AI technology should be no exception.
Now that deepfakes have become a major media story, Microsoft CEO Satya Nadella says “we have to act” while neither specifying nor committing to real action, and most other tech leaders remain completely silent.
There are good conversations happening in the AI safety community about the best approaches for reining in deepfakes, but until the tech companies at the root of the problem act, those conversations will remain meaningless.
Some parts of solving the problem are harder than others, but there is plenty that can and should be done right now:
- AI companies (like Microsoft) should stop releasing software that has been shown to create harmful, non-consensual deepfakes, until they can prove that it is safe.
- Social media platforms (like Facebook and Twitter/X) should take much stronger steps to detect deepfakes; freeze accounts that appear to have distributed harmful, non-consensual deepfakes; and permanently ban those that have been determined to have done so.
- Cloud providers (like Amazon) should drop large websites that are clearly and overtly in the business of creating and distributing non-consensual deepfakes.
Congress is now working on the DEFIANCE Act of 2024, which would allow victims to sue those who produce and distribute non-consensual deepfake images, audio, and video. That is an important part of what’s needed, and the Sexual Violence Prevention Association has recently started a campaign to support it.
But Congress has yet to hold accountable the tech companies that are at the core of the problem and that control the means of producing and distributing deepfakes. These companies have poured billions into the technology that makes deepfake creation and sharing possible. It’s time they prioritized addressing the harms they’ve created and invested in making that technology safe.
Until they can demonstrate that a baseline level of safety has been achieved, so that no one in the future is subjected to what Taylor Swift and others are experiencing now, these companies need to do everything in their power to stop the harm created by the spread of non-consensual deepfakes.