THE technology wave is moving faster than ever before. It used to take time before new technologies were fully adopted or became familiar.
In journalism, digital developments such as streaming video and other multimedia formats once seemed a remote prospect.
But with audience preferences shifting, and more of them inclining towards digital platforms, there is no longer an option B except adoption.
Long ago there were town criers, heralded as novel carriers of communication and the best thing ever to happen in the history of communication.
Even in traditional African communities, there was the “nhume” (envoy) or the sound of the “hwamanda” (greater kudu’s horn) to pass messages, give signals and even deliver news.
The printing press of the 15th century changed all that in a flash, and the publishing industry later flourished.
Then came the legacy broadcast era, a huge milestone that changed fortunes for media conglomerates and investors.
All these innovations were setting the stage for the biggest technological disruption yet, and only progressive media houses will remain standing.
No doubt artificial intelligence (AI) has the potential to enhance journalism, but in the absence of a code of ethics and regulation, it can be a disaster.
AI has created unique legal and ethical challenges for regulators, journalists, policymakers and even digital platform owners.
In Zimbabwe, much of the training on AI has leaned towards adoption and use: creating news stories and the general risks associated with deploying the technology. It has been mostly generic, nothing specific.
This could be because we do not have customised tools in use in our newsrooms.
People spend days being taught how to generate stories and images using AI tools.
But the benefits in productivity and innovation cannot be overlooked.
What is most needed are guidelines for use and deployment.
Clearly, AI technologies have outpaced governmental oversight and regulation, and the longer the delays, the greater the risk of being caught at the receiving end.
Governments such as those of Russia and China are said to be directly or indirectly facilitating, and to some extent controlling, the development of AI.
Of course, there are still regulatory deficiencies across the world, exposing the difficulties of regulating AI.
Two broad strands of AI dominate newsroom tools: machine learning (ML) and natural language processing (NLP).
Many tools used in journalism, such as those for fact-checking and verification, automated transcription and translation, data visualisation, and sentiment analysis and opinion mining, are built using machine learning, while tools like Word2Vec, TextBlob and BERT (Bidirectional Encoder Representations from Transformers) rely on natural language processing.
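To make sentiment analysis, one of the techniques mentioned above, concrete, here is a minimal toy sketch. The word lists and scoring rule are illustrative assumptions only, not any newsroom's actual tool; real systems such as TextBlob or BERT use trained models rather than hand-written lexicons.

```python
# Toy lexicon-based sentiment scorer. The word lists below are
# hypothetical examples, not taken from any real sentiment tool.
POSITIVE = {"good", "great", "progress", "innovative", "benefit"}
NEGATIVE = {"disaster", "fake", "risk", "nightmare", "bias"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: below zero is negative tone,
    zero is neutral, above zero is positive."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [1 for w in words if w in POSITIVE]
    hits += [-1 for w in words if w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("AI brings great benefit to journalism"))    # positive score
print(sentiment_score("Unregulated AI is a disaster and a risk"))  # negative score
```

A production tool would replace the fixed word lists with a model trained on labelled text, which is where machine learning enters the picture.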
Preparing for an AI-driven media space
The general sentiment across the world of journalism has been fear of what AI can do if left unregulated.
However, AI regulation for news should be part of a broader set of policies that clearly define how AI use differs per industry.
Potential threats to creativity, copyright infringement and the amplification of biases present some of the fears around the adoption of AI technology.
Even credibility and accountability fears have become visible due to the amount of fake news, misinformation and disinformation generated and churned out with the aid of AI.
But the biggest dilemma in regulation would be defining specific practices and types of content that could be put under legislative policy.
Already, there is confusion over how to break regulation down to cover the many nuances of AI.
For instance, if AI-generated news stories are published, who owns them and do they need their own copyright law?
If that is the case, who gets the copyright between algorithm developers and the media house?
Further to this, if AI models are trained on copyrighted creative work, who has the right to profit?
These are all critical issues that need a policy constantly reviewed to keep pace with technological advancement.
A legal policy at government level will help guide newsrooms as they develop principles, guidelines and policies on the adoption of AI in journalism.
At the moment, some media executives are excited about AI, envisaging empty newsrooms where AI does the bulk of the labour and saves the money currently gulped by remuneration.
And with regulation comes protection for journalists too, because limitations will be clearly defined. Even the extent to which an organisation can rely on AI will work best under regulation.
Currently, there is the European Union AI Act, agreed in 2023, a risk-based regulatory model that bans AI systems posing unacceptable levels of risk.
In 2021, Brazil approved the Marco Legal da Inteligência Artificial to regulate the development of AI technologies and promote research on AI ethics and accountability.
In Canada, the Artificial Intelligence and Data Act (AIDA) is poised to address AI bias, transparency, risk mitigation and record-keeping.
China has already published new ethical guidelines on the use of AI.
These cases have set the stage for the much-needed regulation of AI use, not only for journalistic purposes, but across industries.
What is worse for Zimbabwe is that we are already struggling to fight fake news, cyberbullying and other internet crimes because of our failure to regulate big tech companies like Facebook and Google. Setting the stage for AI regulation would go a long way in protecting journalism.
- Silence Mugadzaweta is digital & online editor for Alpha Media Holdings and content strategies blogger for International News Media Association.