Billionaire entrepreneur Elon Musk announced plans this week to create an AI-driven conversation tool called "TruthGPT," after criticizing the popular AI text bot ChatGPT for being "politically correct."
"There's certainly a path to AI dystopia, which is to train AI to be deceptive," Musk, the CEO of Tesla and owner of Twitter, cautioned in an interview with Fox News host Tucker Carlson on Monday.
AI chatbots pose significant risks centered on political bias, since the models can generate vast amounts of speech, potentially shaping public opinion and enabling the spread of misinformation, experts told ABC News.
However, the comments from Musk underscore the fraught challenge raised by the issue, because content moderation itself has become a polarizing topic and Musk has voiced opinions that place his approach within that hot-button political context, some experts added.
"Musk is correct that if we can't solve the truthfulness problem and the reliability problem, that poses a risk for AI safety," Gary Marcus, a professor emeritus of psychology and neuroscience at New York University, who specializes in AI, told ABC News.
"But tying that question to political correctness might actually be a mistake," he added. "You have to separate the truth from the politics if you want to be credible on the truth issue. It's a mistake to try to tie the two together."
Created by artificial intelligence firm OpenAI, ChatGPT is a chatbot -- a computer program that converses with human users.
Neither Musk nor OpenAI responded to a request for comment from ABC News.
ChatGPT uses an algorithm that selects words based on lessons learned from scanning billions of pieces of text across the internet. The tool has gained popularity for viral posts that demonstrate it composing Shakespearean poetry or identifying bugs in computer code.
But the technology has also stoked controversy with some troubling results. In response, the designers of ChatGPT programmed safeguards that prevent it from taking up controversial opinions or expressing hate speech.
Content moderation on AI poses legitimate challenges for designers, who must determine which messages are sufficiently offensive or odious to warrant intervention, experts told ABC News.
"If a product is being used by millions of people, safeguards are something the designers have to put in place," Ruslan Salakhutdinov, a professor of computer science at Carnegie Mellon University, told ABC News.
"The question is: How do you make it fair or neutral? That's a little bit of a judgment of the designers," he added. "You could imagine a GPT that's politically biased."
Further, the responses from AI conversation tools depend heavily on the text with which the model is trained, said Kathleen Carley, another professor of computer science at Carnegie Mellon University.
"There's this view that the majority of information that it was trained on is more left-leaning and has certain political biases and certain political agendas built into it," Carley said. "That's where that argument is coming from."
Musk, who co-founded OpenAI but left the organization in 2018, in a December tweet accused OpenAI of "training AI to be woke."
While AI chatbots deserve scrutiny over political bias, Musk stands as an imperfect spokesperson for such criticism because of his own high-profile political views, some experts said.
Musk has taken up a slew of conservative stances in recent months, including an expression of support for Republican candidates in the midterm elections last year and repeated criticism of "woke" politics.
"I think what he means by 'truth' is 'agrees with me,'" Oren Etzioni, CEO of Allen Institute for AI and a computer science professor at the University of Washington, told ABC News.
Still, the polarized political environment poses a challenge for any AI chatbot developer attempting to moderate responses, experts said.
"Politics as it exists today doesn't draw lines that finely," Eliezer Yudkowsky, a decision theorist at the Machine Intelligence Research Institute, told ABC News. "You would have to know where to draw the line in a sensible place to get AI to draw the line in a sensible place."