In an age where artificial intelligence (AI) and digital technologies are reshaping the media landscape at an unprecedented pace, the issue of trust has never been more critical.
As AI becomes more integrated into newsrooms, maintaining and building that trust amid technological change is paramount.
The trust dilemma
Trust is the bedrock upon which any successful media organisation is built. Readers trust that the news they consume is accurate, fair and unbiased.
However, with the advent of AI, there is a growing concern about whether these technologies can uphold the standards of trust that audiences expect.
According to the Edelman Trust Barometer, trust in media companies has been declining for some years.
Reversing this trend will be one of the many challenges facing the media industry.
AI algorithms can personalise content, predict trends, create summaries of articles, and even write entire articles or generate other content formats.
However, these capabilities raise questions about transparency, bias and accountability.
The latest research by the Reuters Institute shows people are generally more comfortable with news produced by human journalists than by AI, especially on sensitive topics like international affairs and politics.
People are less wary about soft news topics such as fashion, entertainment or sport.
This is also reflected in how people currently use GenAI tools such as ChatGPT or Claude.
According to the research, only 5% of people use GenAI to get the latest news, whereas more than 10% use GenAI for answering factual questions or getting advice.
One reason for this is likely that the free version of the most widely used generative AI chatbot, ChatGPT, has no real-time access to the Internet.
This will change as systems with live web access, such as Bing AI, Microsoft Copilot or Perplexity, become more mainstream.
However, the fact that people, at least for the moment, trust people more than machines is something we should exploit by being highly selective about where and how we use AI in the journalistic process.
AI in the newsroom: promise and peril
AI offers numerous benefits to newsrooms.
It can help identify breaking news faster, inspire journalists to ask less obvious questions, and automate routine tasks such as transcribing interviews or translating documents.
AI can also analyse vast amounts of data to uncover stories that might be missed by human journalists and personalise news feeds to ensure readers receive relevant content in their preferred format.
However, the use of AI is not without its pitfalls.
One major concern is algorithmic bias. AI systems are only as good as the data they are trained on.
If this data is biased, the AI’s outputs will also be biased, potentially leading to skewed reporting.
Currently that skew is toward white, male, academic, formal, English-language perspectives, simply because of where the majority of training sources originate.
Although broader training data and techniques such as retrieval-augmented generation (RAG) and model fine-tuning can mitigate this issue, it remains a significant concern.
Additionally, there is the issue of transparency. AI algorithms can be complex and opaque, making it difficult for journalists and readers alike to understand how decisions are made.
To build and maintain trust, media organisations need to be transparent about their use of AI, explaining how these systems work, what data they use, and the steps taken to mitigate biases.
Transparency, important as it is, is not a black-and-white matter. Some organisations label headlines that were suggested by generative AI as AI-generated. Other organisations do not label anything at all.
The Reuters Institute research indicates people take a differentiated view of what should be labelled.
Around one-third of people responded that “editing the spelling and grammar of an article” (32%) and “writing a headline” (35%) should be disclosed.
When it comes to “writing the text of an article” or “data analysis”, almost half of the respondents believe it should be labelled.
These views reflect common sense. Spell and grammar checkers have been available to every journalist for the past 20 years, and search engines have long been a standard research tool, yet I cannot recall any news organisation labelling an article with “this article was created with the help of Google search and a spell checker.”
Finding the right balance will be important to maintain or increase trust without labelling every element with an AI sticker.
At the same time, the European Union’s Artificial Intelligence Act sets out certain rules that must be followed regardless.
In any case, human oversight is crucial and should be a non-negotiable part of every news organisation’s code of conduct for AI and GenAI, whether the task is writing a headline, developing a teaser, or improving the style and tone of an entire article.
While AI can assist in news gathering and reporting, the final responsibility must always lie with human journalists.
Every newsroom should treat everything that the machine outputs as an unverified source.
Accountability and engaging with the audience
Transparency as a trust-building measure extends beyond the mechanics of AI.
Media organisations must also be transparent about their editorial processes, decision-making criteria, and how they address mistakes.
Accountability mechanisms should be in place to address errors promptly and effectively, so readers can feel confident the information they receive is reliable and that any lapses are handled with integrity.
Engaging with the audience is another crucial aspect of building trust. AI can assist here by helping media organisations foster a dialogue with their readers, understand their needs and concerns, and involve them in the process of change.
Regular feedback loops, reader surveys, and open forums for discussion can be instrumental in this regard.
The integration of AI into newsrooms presents both opportunities and challenges. By prioritising transparency, maintaining human oversight, engaging with the audience, and adhering to ethical standards, media organisations can build and sustain trust in this era of change.
AI should be a tool that supports journalists in their work, not a substitute for human judgement and insight.
Ensuring AI-driven content adheres to the same ethical standards as human-generated content is essential.
- Dietmar Schantin is a digital media strategist and has helped to transform the editorial and commercial operations of media brands around the world.