Artificial intelligence (AI) presents enormous opportunities to improve the quality of life of people across the world.
There are vast potential applications in all sectors, particularly education, healthcare, agriculture, infrastructure, mining, trade facilitation, banking/finance, creative industries, and governance.
However, there are also potential dangers and risks associated with the technology – the dark side of artificial intelligence.
This space is characterised by risky applications of AI by well-meaning actors and, of course, by AI tools in the hands of bad actors with malicious intent.
The use of AI in military operations creates fertile ground for both good and bad actors to partake in the dark side of artificial intelligence.
Autonomous weapons systems (AWS) consist of combat equipment or technology that can identify, target, and engage the enemy without human intervention.
These systems use AI, sensors, and other technologies to perform tasks that traditionally require human decision-making.
AWS have also been referred to as lethal autonomous weapons systems or killer robots.
They range from armed drones and unmanned aerial vehicles (UAVs) to ground-based robots and naval vessels.
Such systems are designed to carry out missions autonomously, such as surveillance, reconnaissance, and combat operations, without direct human control.
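To make the core distinction concrete, here is a minimal sketch, in Python, of the sense-decide-engage cycle. It is purely illustrative: the names, classes, and thresholds are all hypothetical inventions, and no real weapon system is described. Its only point is to show where "meaningful human control" enters the loop.

```python
# A toy model of the sense-decide-engage cycle. Illustrative only:
# all names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    classified_as: str   # e.g. "combatant", "civilian", "unknown"
    confidence: float    # classifier confidence in [0, 1]

def machine_decision(track: Track) -> bool:
    """The machine's rule: a statistical classification threshold."""
    return track.classified_as == "combatant" and track.confidence > 0.9

def human_review(track: Track) -> bool:
    """Stand-in for an operator's judgement; this cautious stub
    withholds approval unless the classifier is near-certain."""
    return track.confidence > 0.99

def engagement_loop(tracks: list[Track], human_in_the_loop: bool) -> list[int]:
    """Return the IDs of the tracks the system would engage."""
    engaged = []
    for track in tracks:
        if not machine_decision(track):
            continue
        # With a human in the loop, lethal force needs explicit approval;
        # without one, the classifier's verdict is final.
        if human_in_the_loop and not human_review(track):
            continue
        engaged.append(track.track_id)
    return engaged

tracks = [Track(1, "combatant", 0.95), Track(2, "civilian", 0.97)]
print(engagement_loop(tracks, human_in_the_loop=True))    # []
print(engagement_loop(tracks, human_in_the_loop=False))   # [1]
```

Everything contentious about AWS is compressed into that one flag: set it to False, and a statistical classifier, with all its error modes, makes the kill decision.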
The concern with autonomous weapons systems lies in their potential to make life-and-death decisions without meaningful human oversight.
There are ethical, moral, legal, and humanitarian concerns regarding their use, including issues related to accountability, unintended harm to civilians, and the potential for escalating conflicts.
Of particular interest is the moral and ethical dilemma of whether AI (a machine) should make the call to kill a human!
It is instructive to note that both good actors (national governments and armies) and bad actors (terrorists, thieves, and fraudsters) can gain access to AWS.
Both groups could deploy AWS irresponsibly, with devastating effects.
Various international organisations and advocacy groups have called for regulations or outright bans on the development and deployment of autonomous weapons systems.
The key objective is to ensure that humans remain in control of decisions regarding the use of lethal force.
However, debates about the appropriate regulation of such systems continue among policymakers, ethicists, military leaders, and technology experts.
Comparison of the United States' and China's approaches to AWS
Three nations are leading the development of AWS: China, Russia, and the United States.
It is prudent and illustrative to review the approaches to AWS by two of these countries: China and the United States.
China and the United States take different approaches to autonomous weapons systems: while both countries are actively developing AWS, the specifics of their approaches differ.
China has been investing extensively in modernising its military, including developing advanced AI and robotics technologies for combat operations.
The People’s Liberation Army has been exploring the integration of AI and autonomy into various weapons systems, including drones, unmanned vehicles, and other platforms.
Similarly, the United States has a long history of investing in military technology and has been a leader in developing and deploying unmanned systems and AI-enabled weapons.
The US military services, including the Army, Navy, and Air Force, together with the Defense Advanced Research Projects Agency (DARPA), have been researching and testing autonomous systems for various military purposes.
These efforts have included reconnaissance, surveillance, and combat operations.
There are also differences between China and the United States in the policies and regulations governing AWS.
The US has engaged in discussions and debates regarding the ethical and legal implications of autonomous weapons systems.
While no specific international treaties or agreements regulate AWS, the US Department of Defense has issued policy directives and guidelines on the development and use of autonomous weapons.
On the other hand, China’s approach to policy and regulation regarding AWS may be less transparent than that of the United States.
It has not been as involved in international discussions on the regulation of AWS and tends to prioritise national sovereignty and security interests in its policy decisions.
However, China is a party to international arms control agreements, and its stance on AWS may evolve as the technology develops and international norms emerge.
The United States has been actively engaged in diplomatic efforts to address concerns about AWS through international forums such as the United Nations. It has participated in discussions on arms control and disarmament, including debates on the regulation of autonomous weapons systems.
China’s approach to international cooperation and diplomacy on AWS may be influenced by its broader foreign policy objectives and strategic interests.
While China has participated in international discussions on emerging military technologies, it may prioritise bilateral or regional partnerships over multilateral initiatives on AWS regulation.
The specifics of the Chinese and US approaches to AWS may evolve in response to technological advancements, geopolitical dynamics, and international norms.
Current status of AWS technology
The increased autonomy of weapons through the introduction of AI will fundamentally transform the future of armed conflict.
As explained earlier, AWS raise profound questions from a legal, ethical, humanitarian and security perspective.
What are the implications of AI systems making killing decisions without humans in the loop?
Obviously, ceding killing decisions to machines leads to autonomous warfare.
There is also autonomous cognitive warfare, which entails using autonomous AI systems to take out, disable or disorient opponents in military operations.
The primary objective of AWS is to reduce human losses while increasing combat power.
Given these new battlefield advantages, there is a danger that political and military leaders will find armed and confrontational options less costly and less prohibitive.
Thus, it becomes easier for countries to go to war, as the perceived cost of the decision to fight is reduced.
Once AWS are commonplace, there is also the challenge of: “How do we end wars?”
How can humans end a war in which they do not control the military operations?
What if the AI system makes a mistake and identifies a wrong target? What of other harmful and egregious technology errors?
What about autonomous AI-based military cyberattacks?
Indeed, humanity confronts an existential challenge – an unprecedented crossroads – that demands collective and binding global rules and regulations for these weapons.
Widely deployed autonomous weapons integrated with other aspects of military digital technologies could result in a new era of AI-driven warfare.
There has to be worldwide ownership and buy-in for any meaningful AWS regulatory framework.
In 2023, a fully autonomous weapon that uses AI to make its own decisions about who to kill on the battlefield was developed in Ukraine.
The drone carried out autonomous attacks on a small scale.
While this was a baby step technologically, it is a consequential moral, legal, and ethical development.
The next stage is the production of fully autonomous weapons capable of searching out, selecting and attacking targets without human involvement.
The unconstrained development of autonomous weapons could lead to wars that expand beyond human control, with fewer protections for both combatants and civilians.
Clearly, a wholesale ban on AWS is neither realistic nor practical.
Once the genie is out of the bottle, you cannot put it back!
AWS cannot be un-invented.
However, governments can adopt many practical regulations to mitigate the worst dangers of autonomous weapons.
Without limits, humanity risks gravitating towards a future of dangerous, machine-driven warfare.
Countries worldwide have used partially autonomous weapons in limited, defensive circumstances for decades.
These include air and missile defence systems or anti-rocket protection systems for ground vehicles that have autonomous modes.
Once activated, these AI-driven defensive systems can automatically sense incoming rockets, artillery, mortars, missiles, or aircraft, and intercept or disrupt them.
However, in semiautonomous weapons systems, humans are still in charge.
They supervise the operations and can intervene if something goes awry.
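A minimal sketch of this "human on the loop" arrangement, with hypothetical names and invented timings, might look as follows: the machine intercepts automatically unless the supervising operator vetoes within a short window.

```python
# Illustrative sketch of "human on the loop" supervision in a defensive
# system: the machine intercepts automatically unless a human vetoes in
# time. Names and timings are hypothetical.
import time

VETO_WINDOW_SECONDS = 2.0   # incoming fire leaves very little decision time

def operator_vetoes(threat: str) -> bool:
    """Stand-in for the supervising operator; True would abort the engagement."""
    return False  # in this toy run, the operator stays silent

def defensive_engagement(threat: str) -> str:
    deadline = time.monotonic() + VETO_WINDOW_SECONDS
    while time.monotonic() < deadline:
        if operator_vetoes(threat):
            return f"intercept of {threat} aborted by operator"
        time.sleep(0.1)
    # No veto arrived before the deadline: the system acts on its own.
    return f"{threat} intercepted automatically"

print(defensive_engagement("incoming mortar round"))
```

The uncomfortable detail is the veto window itself: as incoming threats get faster, the window shrinks towards zero, which is precisely the pressure pushing systems from supervised autonomy towards full autonomy.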
The war in Ukraine has led to accelerated adoption of commercial AI innovations such as drones into weapon systems by both belligerents – Moscow and Kyiv.
They have used drones extensively for reconnaissance and attacks on ground forces.
Counter-drone measures have been achieved through AI systems that detect and disrupt drones' communications links or identify and eliminate the operators on the ground.
This strategy works because most drones are remotely controlled.
Without human operators, remotely controlled drones lose their utility.
This creates the rationale for autonomous drones, which are not dependent on vulnerable communication links to human operators.
With further advances in AI technologies, drones that are currently remotely controlled can be upgraded to operate autonomously, retaining their utility even when communications links are destroyed or operators are eliminated.
Consequently, such autonomous drones can be used to target air defences or mobile missile launchers without any human involvement.
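The logic of this fallback can be sketched in a few lines of illustrative Python (hypothetical names, invented timeout): a remotely piloted drone loses its utility when the command link dies, while an autonomy-capable one simply switches to its onboard mission logic.

```python
# Illustrative sketch: why jamming the command link motivates autonomy.
# All names and the timeout value are hypothetical.

def link_alive(last_heartbeat: float, now: float, timeout: float = 3.0) -> bool:
    """The link counts as dead if no heartbeat arrived within `timeout` seconds."""
    return (now - last_heartbeat) <= timeout

def control_step(last_heartbeat: float, now: float, autonomous_capable: bool) -> str:
    if link_alive(last_heartbeat, now):
        return "execute operator command"
    if autonomous_capable:
        # No operator reachable: onboard AI continues the mission alone.
        return "switch to onboard autonomy"
    # A purely remote-controlled drone has no fallback.
    return "loiter, return to base, or lose the mission"

# The link was jammed 10 seconds ago:
print(control_step(last_heartbeat=0.0, now=10.0, autonomous_capable=False))
print(control_step(last_heartbeat=0.0, now=10.0, autonomous_capable=True))
```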
Battlefield singularity
The development of ground autonomous weapons has lagged behind that of air and sea AWS, but future possibilities include autonomous weapons deployed on battlefield robots or gun systems.
Military AI applications can accelerate information gathering, data processing and scenario selection. This will shorten decision cycles.
Thus, the adoption of AI reduces the time it takes to find, identify, and strike enemy targets.
Theoretically, this could allow humans more time to make thoughtful, deliberate and precise decisions.
However, adversaries will feel pressured to respond in kind, using AI to speed up execution.
This will inevitably lead to the escalation of automation away from human control.
Hence, autonomous warfare becomes unavoidable!
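A toy simulation, with entirely invented numbers, illustrates this dynamic: two adversaries each delegate more of their decision cycle to machines whenever the opponent decides faster, and neither can rationally stop until both are fully automated.

```python
# A toy model of the automation race. The numbers are invented for
# illustration; only the dynamic matters.

HUMAN_STEP_SECONDS = 60.0     # assumed human share of one decision cycle
MACHINE_STEP_SECONDS = 0.5    # assumed machine share of one decision cycle

def cycle_time(automation: float) -> float:
    """Decision-cycle length for a given automation level in [0, 1]."""
    return (1 - automation) * HUMAN_STEP_SECONDS + automation * MACHINE_STEP_SECONDS

a, b = 0.0, 0.0   # both sides start with humans making the decisions
rounds = 0
while (a < 1.0 or b < 1.0) and rounds < 20:
    # Whoever is slower delegates another 25% of its cycle to machines.
    if cycle_time(a) >= cycle_time(b):
        a = min(1.0, a + 0.25)
    else:
        b = min(1.0, b + 0.25)
    rounds += 1
    print(f"round {rounds}: cycles A={cycle_time(a):5.1f}s  B={cycle_time(b):5.1f}s")
# End state: both sides fully automated -- humans priced out of the loop.
```

The end state this race converges to is precisely the battlefield singularity described below.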
Swarms of drones could autonomously coordinate their behaviour, reacting to changes on the battlefield at speeds beyond human capability, with accuracy and efficacy far superior to those of the most talented military commander.
When this happens, it is called battlefield singularity.
This entails a stage where the AI's decision-making speed, capacity and effectiveness far surpass those of the most intelligent human – a point wherein the pace of machine-driven warfare outstrips the speed of human decision-making.
When this occurs, an unassailable rationale exists for removing humans from the battlefield decision loops. Thus, autonomous, AI-driven warfare becomes a reality.
Battlefield singularity can be restated as a condition in the combat zone where humans must be taken out of the loop for maximum speed, efficiency, and efficacy.
It is a tipping point that forces rational humans to surrender control to machines for tactical decisions and operational-level war strategies.
When that condition is achieved, an army that does not remove humans from decision loops will lose a competitive advantage to the enemy.
Hence, with the attainment of battlefield singularity, using autonomous weapons systems becomes an existential matter.
It is no longer a “nice to have” or some intellectual curiosity.
AWS have to be deployed for survival!
With AWS, machines would select individual targets, plan the battlefield strategy and execute entire military campaigns.
Furthermore, autonomous reactions at AI-determined speeds and efficiency could drive faster execution of battle operations, accelerating the pace of military campaigns to defeat or victory.
Humans' role would be reduced to switching on the AI systems and passively monitoring the battlefield. They would have a diminished capacity to control wars.
Even the decisions to end conflicts might be inevitably ceded to machines.
What a brave new world!
What are the implications of autonomous battles and wars?
There is a concern that autonomous weapons could increase civilian casualties in conflict situations.
Admittedly, these weapons could conceivably reduce civilian casualties by precisely targeting combatants.
However, this is not always the case.
In the hands of bad actors or rogue armies that are not concerned about non-combatant casualties – or whose objective is to punish civilians – autonomous weapons could be used to commit widespread atrocities, including genocide.
Swarms of communicating and cooperating autonomous weapons could be deployed to target and eliminate both combatants and civilians.
Autonomous nuclear weapons systems (ANWS)
The most dangerous type of AWS is the autonomous nuclear weapons system (ANWS).
These result from integrating AI and autonomy into nuclear weapons, leading to partial or total machine autonomy in the deployment of nuclear warheads.
In the extreme case, the decision to fire or not fire a nuclear weapon is left to the AI system without a human in the decision loop.
Now, this is uncharted territory, fraught with unimaginable dangers, including the destruction of all civilisation.
However, it is an unavoidable scenario in future military conflicts.
Why?
Well, to avoid this devastatingly risky possibility, binding global collaboration is necessary among all nuclear powers, particularly Russia, China, and the United States.
Given their unbridled competition and rivalry regarding weapon development and technology innovations, particularly AI, there is absolutely no chance of such a binding agreement.
The unrestrained race for AI supremacy among Chinese, Russian and US researchers does not augur well for cooperation.
This is compounded by the bitter geopolitical contestations among these superpowers, as exemplified by the cases of Ukraine, Taiwan, and Gaza.
Furthermore, there is deep-seated distrust and non-cooperation among the nuclear powers on basic technologies, as illustrated by the unintelligent, primitive and incompetent bipartisan decision (352 to 65) of the US House of Representatives on 13 March 2024 to ban TikTok in the United States unless its Chinese parent company divests it.
Also instructive is the 2019 Huawei ban, which effectively bars the company from doing business with organisations operating in the United States.
There is also restricted use of Google, Facebook, Instagram, and Twitter in China and Russia.
Clearly, the major nuclear powers are bitter rivals in everything technological!
Given this state of play, why would the Chinese and Russians agree with the United States on how and when to deploy AI in their weapons systems, be they nuclear or non-nuclear?
As it turns out, evidence of this lack of appetite for cooperation is emerging. In 2022, the United States pledged that it would always retain a “human in the loop” for all decisions to use nuclear weapons. In the same year, the United Kingdom adopted a similar posture.
Guess what?
Russia and China have made no such commitment.
Given the prevailing state of play – conflict, competition, geopolitical contestation, rivalry and outright disdain – described above, why would the Russians and Chinese play ball?