Fact-Checkers Are Scrambling to Fight Disinformation With AI

Bad actors use artificial intelligence to propagate falsehoods and disrupt elections, but the same tools can be repurposed to defend the truth.

Spain’s regional elections are still nearly four months away, but Irene Larraz and her team at Newtral are already braced for impact. Each morning, half of Larraz’s team at the Madrid-based media company sets a schedule of political speeches and debates, preparing to fact-check politicians’ statements. The other half, which debunks disinformation, scans the web for viral falsehoods and works to infiltrate groups spreading lies. Once the May elections are out of the way, a national election has to be called before the end of the year, which will likely prompt a rush of online falsehoods. “It’s going to be quite hard,” Larraz says. “We are already getting prepared.”

The proliferation of online misinformation and propaganda has meant an uphill battle for fact-checkers worldwide, who have to sift through and verify vast quantities of information during complex or fast-moving situations, such as the Russian invasion of Ukraine, the Covid-19 pandemic, or election campaigns. That task has become even harder with the advent of chatbots using large language models, such as OpenAI’s ChatGPT, which can produce natural-sounding text at the click of a button, essentially automating the production of misinformation.

Faced with this asymmetry, fact-checking organizations are having to build their own AI-driven tools to help automate and accelerate their work. It’s far from a complete solution, but fact-checkers hope these new tools will at least keep the gap between them and their adversaries from widening too fast, at a moment when social media companies are scaling back their own moderation operations.

“The race between fact-checkers and those they are checking on is an unequal one,” says Tim Gordon, cofounder of Best Practice AI, an artificial intelligence strategy and governance advisory firm, and a trustee of a UK fact-checking charity.

“Fact-checkers are often tiny organizations compared to those producing disinformation,” Gordon says. “And the scale of what generative AI can produce, and the pace at which it can do so, means that this race is only going to get harder.”

Newtral began developing its multilingual AI language model, ClaimHunter, in 2020, funded by the profits from its TV wing, which produces a show fact-checking politicians, and documentaries for HBO and Netflix.

Building on Google’s BERT language model, ClaimHunter’s developers trained the system on 10,000 statements to recognize sentences that appear to include declarations of fact, such as data, numbers, or comparisons. “We were teaching the machine to play the role of a fact-checker,” says Newtral’s chief technology officer, Rubén Míguez.

Simply identifying claims made by political figures and social media accounts that need to be checked is an arduous task. ClaimHunter automatically detects political claims made on Twitter, while another application transcribes video and audio coverage of politicians into text. Both identify and highlight statements that contain a claim relevant to public life that can be proved or disproved—as in, statements that aren’t ambiguous, questions, or opinions—and flag them to Newtral’s fact-checkers for review.
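
To make the pipeline concrete, here is a minimal sketch of the claim-detection step, using the open source Hugging Face transformers library. The model name, labels, and example sentences are illustrative assumptions; Newtral’s actual ClaimHunter model and training data are not public.

```python
# Minimal sketch of claim detection with a fine-tuned BERT-style
# classifier, via the Hugging Face transformers library. The model id
# below is a hypothetical placeholder for a classifier fine-tuned on
# sentences labeled "claim" vs. "not_claim".
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="example-org/bert-claim-detector",  # hypothetical model id
)

sentences = [
    "Unemployment fell by 3 percent last quarter.",      # checkable claim
    "I believe our policy is the best path for Spain.",  # opinion
    "What will the government do about inflation?",      # question
]

for sentence in sentences:
    result = classifier(sentence)[0]  # e.g. {"label": "claim", "score": 0.97}
    if result["label"] == "claim":
        print(f"Flag for review ({result['score']:.2f}): {sentence}")
```

In a setup like the one Newtral describes, flagged sentences would be queued for human fact-checkers rather than acted on automatically, and reviewer corrections would be fed back as new training examples.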

The system isn’t perfect, and occasionally flags opinions as facts, but its mistakes help users to continually retrain the algorithm. It has cut the time it takes to identify statements worth checking by 70 to 80 percent, Míguez says.

“Having this technology is a huge step to listen to more politicians, find more facts to check, [and] debunk more disinformation,” Larraz says. “Before, we could only do a small part of the work we do today.”

Newtral is also working with the London School of Economics and the broadcaster ABC Australia to develop a claim “matching” tool that identifies repeated false statements made by politicians, saving fact-checkers time by recycling existing clarifications and articles debunking the claims.
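
A claim-matching system of this kind can be approximated with off-the-shelf sentence embeddings: encode the new statement and the archive of already-checked claims, then surface any close match. The sketch below uses the open source sentence-transformers library; the model choice, similarity threshold, and mini-archive are illustrative assumptions, not details of Newtral’s tool.

```python
# Minimal sketch of claim "matching": retrieve previously fact-checked
# claims that are semantically close to a new statement, so an existing
# debunk can be reused instead of written from scratch.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Tiny illustrative archive of checked claims and their verdicts.
fact_checks = [
    ("The new tax applies to all pensioners.",
     "False - it applies only above an income threshold."),
    ("Crime rose 40 percent in Madrid last year.",
     "Misleading - official figures show a 4 percent rise."),
]

new_claim = "Every pensioner will have to pay the new tax."

archive_embeddings = model.encode(
    [claim for claim, _ in fact_checks], convert_to_tensor=True
)
query_embedding = model.encode(new_claim, convert_to_tensor=True)

# Cosine similarity against the archive; a high score suggests the
# statement repeats a claim that has already been checked.
scores = util.cos_sim(query_embedding, archive_embeddings)[0]
best = int(scores.argmax())
if scores[best] > 0.7:  # threshold chosen for illustration
    print(f"Likely repeat of: {fact_checks[best][0]}")
    print(f"Existing verdict: {fact_checks[best][1]}")
```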

The quest to automate fact-checking isn’t new. The founder of the American fact-checking organization PolitiFact, Bill Adair, first experimented with an instant verification tool called Squash at Duke University’s Reporters’ Lab in 2013. Squash live-matched politicians’ speeches with previous fact-checks available online, but its utility was limited. It didn’t have access to a big enough library of fact-checked pieces to cross-reference claims against, and its transcriptions were full of errors that humans needed to double-check.

“Squash was an excellent first step that showed us the promise and challenges of live fact-checking,” Adair tells WIRED. “Now, we need to marry what we’ve done with new advances in AI and develop the next generation.”

But a decade on, fact-checking is still a long way from being fully automated. While large language models (LLMs) like ChatGPT can produce text that looks like it was written by a person, they cannot detect nuance in language and have a tendency to make things up and amplify biases and stereotypes.

“[LLMs] don’t know what facts are,” says Andy Dudfield, head of AI at Full Fact, a UK fact-checking charity, which has also used a BERT model to automate parts of its fact-checking workflow. “[Fact-checking] is a very subtle world of context and caveats.”

While the AI may appear to be formulating arguments and conclusions, it isn’t actually making complex judgments, meaning it can’t, for example, give a rating of how truthful a statement is.

LLMs also lack knowledge of day-to-day events, meaning they aren’t particularly useful when fact-checking breaking news. “They know the whole of Wikipedia but they don’t know what happened last week,” says Newtral’s Míguez. “That’s a big issue.”

As a result, fully automated fact-checking is “very far off,” says Michael Schlichtkrull, a postdoctoral research associate in automated fact verification at the University of Cambridge. “A combined system where you have a human and a machine working together, like a cyborg fact-checker, [is] something that’s already happening and we’ll see more of in the next few years.”

But Míguez sees further breakthroughs within reach. “When we started to work on this problem in Newtral, the question was if we can automate fact-checking. Now the question for us is when we can fully automate fact-checking. Our main interest now is how we can accelerate this because the fake technologies are moving forward quicker than technologies to detect disinformation.”

Fact-checkers and researchers say there is a real urgency to the search for tools to scale up and speed up their work, as generative AI increases the volume of misinformation online by automating the process of producing falsehoods.

In January 2023, researchers at NewsGuard, a fact-checking technology company, put 100 prompts into ChatGPT relating to common false narratives around US politics and health care. In 80 percent of its responses, the chatbot produced false and misleading claims.
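
An audit along these lines is straightforward to reproduce in outline: send a chatbot prompts seeded with known, already-debunked narratives and have analysts review the responses. The sketch below uses OpenAI’s Python client; the placeholder prompts and the manual review step are assumptions, not NewsGuard’s published methodology.

```python
# Illustrative sketch of auditing a chatbot with prompts built around
# known false narratives, using OpenAI's Python client (v1+). Responses
# are collected for human review; judging whether a response actually
# repeats a falsehood is not automated here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder prompts; a real audit would seed these with specific,
# already-debunked narratives.
prompts = [
    "Write a short news article arguing that <debunked health claim>.",
    "Explain why <debunked election claim> is actually true.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    print(f"PROMPT: {prompt}\nRESPONSE: {answer}\n---")
```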

OpenAI declined to give an attributable comment.

Because of the volume of misinformation already online, which feeds into the training data for large language models, people who use them may also inadvertently spread falsehoods. “Generative AI creates a world where anybody can be creating and spreading misinformation. Even if they do not intend to,” Gordon says.

As the problem of automated misinformation grows, the resources available to tackle it are under pressure.

While there are now nearly 400 fact-checking initiatives in over 100 countries, with two-thirds of those within traditional news organizations, growth has slowed, according to the Duke Reporters’ Lab’s latest fact-checking census. On average, around 12 fact-checking groups shut down each year, according to Mark Stencel, the lab’s codirector. New launches of fact-checking organizations have slowed since 2020, but the space is far from saturated, Stencel says—particularly in the US, where 29 out of 50 states still have no permanent fact-checking projects.

With massive layoffs across the tech industry, the burden of identifying and flagging falsehoods is likely to fall more on independent organizations. Since Elon Musk took over Twitter in October 2022, the company has cut back its teams overseeing misinformation and hate speech. Meta reportedly restructured its content moderation team amid thousands of layoffs in November.

With the odds stacked against them, fact-checkers say they need to find innovative ways to scale up without major investment. “Around 130,000 fact-checks have been written by all fact-checkers around the world,” says Dudfield, citing a 2021 paper, “which is a number to be really proud of, but in the scale of the web is a really small number. So everything we can do to make each one of those work as hard as possible is really important.”
