To: The brave employees from AI labs speaking out
AI scientists and engineers sounding the alarm about safety – Thank you!
To the brave employees speaking out:
Thank you for sounding the alarm on AI safety. We appreciate and support you, and hope that others in your field will follow your lead and take the same step.
Most of us are not in a position to know what you know, or to influence the behavior of the major AI labs or government officials. Putting your careers and livelihoods at risk for the benefit of all is a selfless, beautiful act, both remarkable and commendable.
Why is this important?
So far, more than a dozen employees from OpenAI and Google DeepMind, two of the most prominent AI labs, have put their careers and livelihoods at risk. They’re sounding the alarm about the dangers of the “move fast and break things” approach to AI development, which has, among other problems, resulted in the proliferation of deepfake porn.
These engineers and scientists see the potential dangers of AI firsthand. Right now, they’re stepping out alone. We can and should let them know we have their back. If we want to ensure that AI technologies have guardrails and that the AI labs prioritize safety, we need to make sure AI lab workers know the public is watching.
Please join us in thanking these lab workers, and encourage others to join their ranks.
Challenging the “move fast and break things” culture that permeates Silicon Valley is a bold undertaking. This way of doing business underlies the US tech sector’s lead in global innovation and has created massive changes in society, everything from the introduction of the microprocessor and smartphone to services like Airbnb and Uber. But this culture has another common adage, “ask for forgiveness, not permission,” which, when it comes to groundbreaking technologies like social media and now AI, can produce harms too damaging to correct after the fact.
In the case of AI, the scale and pace of change are drastic. As AI technology rapidly develops without guardrails, we are already seeing unanticipated, negative consequences, like the damage deepfake porn is doing to young women across the globe. And that is likely just the tip of the iceberg.
And that’s why a number of those closest to the technology are risking their careers and personal economic futures, in some cases to the tune of millions of dollars, in order to sound the alarm.
When Geoffrey Hinton, “the godfather of AI,” left Google just over a year ago so he could speak freely about the dangers of AI, it sparked a movement among lab employees who wrestle internally with balancing their professional pursuits and their social responsibility. When the OpenAI board, which included Hinton’s former student and OpenAI co-founder Ilya Sutskever, attempted but failed to oust CEO Sam Altman, many reported that safety concerns inside the company were at the center of the boardroom battle. And more recently, when thirteen more researchers and scientists from OpenAI and Google DeepMind stepped out, risking their jobs by calling for a “right to warn” for all lab employees, it further showed how seriously those closest to the situation are taking the matter.
In contrast, the response of AI lab leadership and some of their biggest investors is essentially “trust us; we understand the technology better than anyone, and we can’t risk slowing down and losing this technological race.” The problem is that history makes it clear that companies racing to win almost never police themselves or worry enough about the harms to society, and they often punish and even destroy whistleblowers (think tobacco, fossil fuels, pharmaceuticals, airplane manufacturers).
Thankfully, we now have those who are trying to blow the whistle, many anonymously, and others, like Geoffrey Hinton, Jan Leike, and Ilya Sutskever, willing to be named. Their sacrifice, making themselves the canaries in the coal mine, is a gift to us. You can support them, and amplify their voices as they try to help all of us, simply by adding your name.