What to know about Twitter's fact-checking labels
Twitter's new fact-checking label has been thrust into the spotlight after it was used to mark tweets from President Donald Trump as potentially misleading.
Here's what to know about how the new labels from the social media giant work to identify false claims.
A Twitter spokesperson told ABC News that Trump's two tweets from Tuesday "contain potentially misleading information about voting processes and have been labeled to provide additional context around mail-in ballots."
"This decision is in line with the approach we shared earlier this month," the spokesperson added, linking to a blog post by Twitter's Yoel Roth, head of site integrity, and Nick Pickles, director of Global Public Policy Strategy and Development, from when the feature was announced on May 11.
While Trump's tweets aren't in violation of Twitter's rules, as they don't directly try to dissuade people from voting, they do contain misleading information about the voting process, specifically mail-in ballots, according to Twitter.
The fact-checking labels were rolled out earlier this month as a way to combat misinformation related to COVID-19, Roth and Pickles wrote. Initially, the labels were mostly used to link back to medical authorities' information about the virus when people posted false claims or misleading information.
The labels appear below a tweet and link to a page curated by Twitter staff or "external trusted sources" with more information about the claims made in a tweet.
For Trump's tweets, which made unsubstantiated claims that mail-in ballots lead to voter fraud, the label took Twitter users to a page with links to media reports and bullet points such as "fact-checkers say there is no evidence that mail-in ballots are linked to voter fraud."
Initially, Twitter rolled out the feature with three categories of labels: "Misleading information" (statements that have been confirmed to be false or misleading by experts), "Disputed claims" (statements whose truth or credibility is contested or unknown) and "Unverified claims" (information that is unconfirmed at the time it is shared).
"Moving forward, we may use these labels and warning messages to provide additional explanations or clarifications in situations where the risks of harm associated with a Tweet are less severe but where people may still be confused or misled by the content," Roth and Pickles said. "This will make it easier to find facts and make informed decisions about what people see on Twitter."
Tuesday's labeling of Trump's tweets on mail-in ballots marked the first time the fact-checking labels have been used on the president's tweets.
The president did not take it well, threatening on Wednesday that Republicans would try to "close" down social media platforms that "silence conservative voices."
Roth and Pickles said Twitter identifies tweets for labeling using "internal systems" designed to ensure the platform does not amplify labeled tweets and to detect highly visible content quickly. The company also said it relies on "trusted partners to identify content that is likely to result in offline harm," though it did not specify who those partners are or how they fact-check tweets.
The move comes at a time when social media giants have faced growing criticism for their role in the spread of misinformation online.
"Serving the public conversation remains our overarching mission," Roth and Pickles wrote, "and we’ll keep working to build tools and offer context so that people can find credible and authentic information on Twitter."