Experts tell ABC News that the rise of generative artificial intelligence is making it more challenging for the public to tell fact from fiction -- and with the 2024 presidential race only a little more than a year away, some are worried about the risk from deceptively fake political content.
Generative AI is the use of artificial intelligence tools capable of producing content including text, images, audio and video with a simple prompt.
From images falsely depicting what appears to be President Joe Biden in a Republican Party ad to an outside political group supporting Florida Gov. Ron DeSantis' White House bid using AI technology to fabricate former President Donald Trump's voice, new tools are giving candidates or their supporters the ability to produce hyper-realistic fakes in order to advance partisan messages.
But a coalition of companies, working together as the Content Authenticity Initiative, is developing a digital standard that they hope will restore trust in what users see online.
"If you don't actually have transparency and a level of authenticity on the images and videos you're seeing, you could be easily misled without knowing the difference," explained Truepic's Mounir Ibrahim, who told ABC News in a segment that aired Sunday on "This Week" that the company's camera technology adds verified content provenance information -- like date, time, location -- to content taken with their tool.
Truepic said its technology is currently used both by nongovernmental organizations documenting war crimes and by commercial partners, like insurance companies, to verify the authenticity of images of damage. But Ibrahim thinks there's also a use case for 2024 candidates who want to prove the content they post is authentic.
"Think about the way in which we make our decisions on who we vote for, what we believe: So much of it is coming from what we see or hear online," he said.
Adobe's chief trust officer and general counsel, Dana Rao, agreed: "I think it's really critical for governments to think about this seriously."
"They're communicating directly with our citizens, and they're doing it more than ever on the internet, through social media platforms and other online digital audio and video content," Rao said.
He told ABC News that the Content Authenticity Initiative's digital standard would allow creators to display "content credentials" documenting the entire history of a piece of content -- including how it was captured and if and how it was edited.
The goal is to have those credentials displayed wherever the piece of content publishes online, whether through a website or a social media platform.
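To make the concept concrete: a content credential can be thought of as provenance metadata bound to a file's cryptographic hash and then signed, so any platform can check that neither the content nor its history has been altered. The sketch below is a simplified illustration of that idea, not Truepic's or Adobe's actual implementation -- real systems such as the C2PA standard use certificate-based digital signatures rather than a shared demo key.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for illustration only; production systems use
# asymmetric signatures tied to a verifiable certificate, not a shared secret.
SIGNING_KEY = b"demo-signing-key"

def make_credential(content: bytes, capture_info: dict, edit_history: list) -> dict:
    """Bind provenance metadata to the content via its hash, then sign the record."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "capture": capture_info,        # e.g. date, time, location of capture
        "edit_history": edit_history,   # whether and how the content was edited
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(content: bytes, credential: dict) -> bool:
    """A platform displaying the credential re-checks both the hash and the signature."""
    manifest = {k: v for k, v in credential.items() if k != "signature"}
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["signature"])
            and manifest["content_sha256"] == hashlib.sha256(content).hexdigest())
```

Under this model, a doctored copy of an image fails verification because its hash no longer matches the signed manifest -- which is what lets viewers "decide for themselves" whether to trust what they see.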
"The key part of what we're offering is this is a solution to let you prove it's true," Rao said. "And that means the people who are using content credentials, they're trying to tell you what happened. They want to be transparent."
"[And as a consumer] you get to look at that information. You get to decide for yourself whether or not you want to believe it," Rao said.
Both he and Ibrahim acknowledged that bad actors trying to deceive people would not use this standard -- but the hope is that if creators adopt it broadly enough, their content will be set apart by information attesting to its authenticity.
Adobe said it is having productive conversations with social media platforms, but none have so far joined the Content Authenticity Initiative or agreed to let users display the new content credentials on their sites.
ABC News has reached out to Meta, which owns Facebook and Instagram, and TikTok for comment as well as X, the platform formerly known as Twitter.
"They could do this tomorrow. There's no barrier to entry here," said University of California, Berkeley, computer science professor Hany Farid, who said that content credentials are a free open-source technology that companies can easily implement.
Farid specializes in digital forensics and said generative AI threatens to erode already embattled information ecosystems.
"For the last few [presidential] election cycles, the difference between one candidate and the other is measured in tens of thousands of votes. There's a handful of states, a handful of districts, where you move 50,000 votes in one direction or another -- that's the ballgame," Farid said. "And between social media, outward manipulation, fake content, existing distrust of governments and media and scientists, I don't think that's out of the question. And that, to me, is worrisome that our very democracy we're talking about here is at stake."
But Farid said he's hopeful that the conversations happening now -- not just with technology companies but lawmakers -- will lead to industry-wide change.
"I think our regulators are asking a lot of good questions, and they're having hearings, and we're having conversations and we're doing briefings and I think that's good," Farid said. "I think we have to now act on all of this."