Several countries are today scrambling to draft, finalise or enact laws or policy frameworks for governing Artificial Intelligence (AI).
Some have made much noise about how effective these laws are at governing the development and deployment of AI technologies.
Arguably, at least in my assessment, the most effective and well-written AI law so far is the European Union (EU)'s AI Act. Still, the EU AI Act relies on human oversight.
Human oversight is one of the key ingredients that will ensure that the law can mitigate risks and prevent AI-related harm.
In this week's opinion, I argue that AI governance laws can indeed be effective, but that existing legal frameworks rely too heavily on human oversight, which has many shortcomings.
The basis of my argument is that human oversight is extremely difficult to implement, and here is why.
Human oversight in the EU AI Act
As I alluded to earlier, I am convinced that the EU AI Act is the most recognised, well-written, effective and efficient AI governance framework in the world so far.
One such article is Article 14, which establishes human oversight as one of the requirements for high-risk AI systems: “1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use”.
So, Article 14 requires that all AI technologies deemed and categorised as high-risk under the dictates of the EU AI Act must be subjected to human oversight. Interestingly, the act is also emphatic that such systems be “effectively overseen”. This serves to emphasise that the oversight of AI technologies should not be a formalistic endeavour; it must be robust, effective and efficient in its purpose of governing AI technologies, and the results must be there for all to see.
The act also emphasises, in paragraph 2 of Article 14, that human oversight should exist to protect people from any harm that may come from AI technologies.
“2. Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section.”
What we can deduce from paragraph 2 is that the act is very clear that AI technologies should be monitored to ensure that they are used only for their intended purpose, and that human oversight should ensure that AI technologies are not misused.
“3. The oversight measures shall be commensurate with the risks, level of autonomy and context of the use of the high-risk AI system,” it states.
A careful reading of the act shows that human oversight serves the following purposes: understanding the capabilities and limitations of AI technologies, monitoring those technologies, maintaining awareness of automation bias, interpreting their output, and interrupting them if something amiss is detected in their operations.
This is a well-written governance framework that other regions and countries, including Zimbabwe, can use as a basis for developing their own tailor-made frameworks.
It is comprehensive, well-articulated and fit for purpose. (The EU AI Act can be downloaded here: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf)
Well, as effective as it may be, I believe that human oversight is very difficult to implement.
A closer look at the human oversight measures outlined by the EU AI Act reveals that implementing them will be extremely challenging. The reason I think so is that AI technologies trained on vast amounts of data are constantly evolving and fine-tuning themselves, and in the process they develop a kind of autonomy.
Because of this autonomy, it is extremely difficult, if not impossible, for human beings to pinpoint exactly how an AI technology arrived at a particular decision.
All humans can do is recognise the decision reached by an AI application; it is far harder to explain why the application chose one option over another.
This happens particularly in AI applications that are used in employment, education, public services, and law enforcement. This opacity is referred to in the AI field as the black box paradox.
“The ‘black box paradox’ in AI refers to the difficulty in understanding and explaining the decision-making processes of complex AI models, especially deep learning models, leading to a lack of transparency and trust” (IBM).
The black box paradox presents a serious problem for human oversight. In most AI technologies, it is very difficult to pinpoint or explain the reasoning behind each decision.
According to the EU AI Act, the human responsible for oversight measures must be able to understand how the AI system operates and interpret its outputs, intervening when necessary to prevent harm to fundamental rights.
Here is the problem: how can they do that when AI technologies are highly complex and function like a black box, operating in an opaque manner?
How are humans supposed to have a detailed comprehension of their functioning and reasoning to oversee them properly?
If we accept that humans often won't fully grasp an AI system’s decision-making, can they decide whether harm to fundamental rights has occurred? And if not, can human oversight truly be effective?
- Sagomba is a chartered marketer, policy researcher, AI governance and policy consultant, ethics of war and peace research consultant. — esagomba@gmail.com; LinkedIn: @Dr. Evans Sagomba; X: @esagomba.