Twenty leading technology companies, including Google, Meta, Microsoft, OpenAI, TikTok, X, Amazon and Adobe, vowed Friday to help prevent deceptive uses of artificial intelligence from interfering with global elections.
In 2024, 4 billion people in more than 40 countries around the world are expected to vote. Experts said easily accessible generative AI tools falling into the hands of bad actors could impact those elections and sway votes.
Generative AI tools allow users to create images, videos, or audio from simple text prompts. Some lack the safeguards needed to prevent users from creating content that depicts politicians or celebrities saying things they've never said or doing things they've never done.
Just last month, a fake robocall impersonating President Joe Biden's voice discouraged individuals from voting in the New Hampshire primary. Taiwan saw its fair share of deepfakes circulating on social media ahead of its presidential election on Jan. 13.
At the Munich Security Conference, these companies announced the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections," a voluntary agreement laying out eight specific commitments to deploy technology that counters harmful AI content.
"Democracy rests on safe and secure elections," said Kent Walker, president of global affairs at Google at the Munich conference. "Google has been supporting election integrity for years, and today's accord reflects an industry-side commitment against AI-generated election misinformation that erodes trust. We can't let digital abuse threaten AI's generational opportunity to improve our economies, create new jobs, and drive progress in health and science."
Those commitments include detecting the distribution of such content on their platforms and developing and implementing technology to mitigate the risks posed by deceptive AI content.
"We are happy to see these companies taking steps to voluntarily rein in what are likely to be some of the worst abuses. Committing to self-police will help, but it's not enough," Lisa Gilbert, executive vice president of Public Citizen, a non profit that's been advocating for the legislation of political and explicit deepfakes, said in a press release.
"The AI companies must commit to hold back technology – especially text-to-video – that presents major election risks until there are substantial and adequate safeguards in place to help us avert many potential problems," she added.
This comes on the heels of OpenAI touting its new text-to-video tool Sora, which is not yet available to the general public and is being tested by safety experts for potential risks.
"This is a constructive step forward. We appreciate the companies involved for stepping up to the plate. Time will tell how effective these steps are and if further action is needed. We all have a role to play in protecting the integrity of our elections and upholding the democratic process, and we welcome this critical step by technology companies," said U.S. Senators Mark Warner, D-Va., and Lindsey Graham, R-S.C., in a joint statement in support for the accord.
As of today, the signatories of the accord are: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X.
The companies said they hope more will come on board, and they encouraged governments to take meaningful action to legislate against harmful uses of AI.