Welcome to the age of Killer Robots

Image Credit: Rawpixel - Daily Montanan

Stanley Kubrick’s cult classic 2001: A Space Odyssey prophetically warned of autonomous machines through the unsettling computer HAL 9000. Today, HAL no longer feels entirely fictional. With Labour recently pledging to “mainline AI into the veins” of Britain, AI has truly begun to shape daily life. Robots powered by AI spend their time delivering food, driving cars, stacking warehouses, disarming bombs and caring for the elderly, a list that increasingly sounds like the plot of a Black Mirror episode. In true human fashion, someone has now asked whether they can also fight our wars.

Autonomous Weapon Systems (AWS) use AI to select and engage targets without the need for human intervention. These systems can be deployed as vehicles, drones, underwater systems or missile defence systems. As the tech improves, so will the levels of autonomy in each individual AWS. Eventually they may begin to engage in acts unexpected or not reasonably foreseeable by the user or manufacturer. For now, much of what we know about AWS capability is speculative or classified. However, reports from the United Nations note that countries such as the United States, Russia, China and Israel are investing heavily in this new technology.

Proponents argue that AWS serve two functions: improving a military’s lethality and making decisions in warfare more ethical. US defence secretary Pete Hegseth asserted recently that he was dismissing AI “that won’t allow you to fight in wars.” In Eastern Ukraine, Uncrewed Ground Vehicles autonomously move, observe and detect the enemy, but sensibly the decision to fire is still made by a human.

AWS may protect soldiers by supporting their missions and/or taking their place, and their precise targeting and unemotional decision making may help to safeguard civilians. A realpolitik assessment would highlight their use as a deterrent against their inevitable use by bad actors. Even so, we should still take a stance on their legal use and moral permissibility.

Firstly, AWS cannot be more ethical than soldiers because they do not possess moral judgement. An AWS acts on code and patterns, performing specific tasks according to a set of rules. Judgement, by contrast, relies on interpretation and context. If AWS can only follow moral rules, how can they wage war more ethically than humans?

Secondly, there is the question of blame for the actions of autonomous weapon systems. We cannot blame a robot, because it has no moral agency. We also cannot blame the user, because an AWS may act in unintended ways beyond what the user could reasonably control. Finally, if a manufacturer has taken all reasonable and foreseeable precautions, we cannot blame them either: responsibility would lose its meaning if a manufacturer were blamed for a truly unforeseeable malfunction. An AWS’ mistake would resemble something more akin to bad luck than moral responsibility. This responsibility gap means AWS can act immorally while no one can be blamed.

Pope Francis expressed worry about handing over lethal decisions to non-humans. He was referring to the fact that AI treats war like a maths problem rather than a profound tragedy. The taking of another human life is not something that should be decided by calculations and programming. Instead, it should be done for the right reasons. Imagine two billionaires. One donates to charity because she genuinely cares. The other donates in the hope of appearing on the front page of The Gryphon. The outcomes are identical, but we find the first morally superior because she acts for the right reasons. Machines, therefore, should not be allowed to make the decision to kill, because they do so for the wrong reasons. This responsibility should fall only on those with moral judgement, conscience, empathy and an understanding of life: namely, (most) human beings.

Finally, it seems AWS may not even help save civilian lives. Their capabilities mean their mistakes could cause significantly more harm than the mistakes of ordinary soldiers. They could unintentionally escalate a conflict by failing to understand principles of proportionality, or commit atrocities by mistaking civilians for a battalion of soldiers. They also lower the threshold for war, allowing force to be used with fewer consequences and perhaps encouraging leaders to forego diplomacy in favour of war.

The development of AWS will almost certainly lead to a global arms race. The delegation of authority over lethal decisions to AWS is as dangerous as it sounds. Just as industrialisation forced skilled workers to surrender their roles in factory processes, AWS are the first step onto a slippery slope in which humans begin to surrender decision making to machines. If sci-fi has taught us anything, it’s that once humans hand over power to machines, the consequences are rarely simple.

Just ask HAL. 

Words by James Center

(You can track the buying and selling of AWS at autonomousweaponswatch.org)

(Visit https://www.stopkillerrobots.org/)