JUDGEMENT CALL: Weapons are increasingly automated, raising new ethical questions. Picture: Reuters
Last month more than 100 robotics and artificial intelligence (AI) company chiefs signed an open letter to the UN warning of the dangers of autonomous weapons.

For the past three years, countries have gathered at the UN in Geneva under the auspices of the Convention on Certain Conventional Weapons to discuss the role of automation and human decision-making in future weapons. The central question for nations is whether the decision to kill in war should be delegated to machines.

As in many other fields, weapons involve increasing amounts of automation and autonomy.

More than 30 nations have or are developing armed drones, but these are largely controlled remotely. Some drones have the ability to take off and land autonomously or fly preprogrammed routes, but any engagement of their weapons is controlled by people. Advances in AI and object recognition raise the spectre of future weapons that could search for, identify and decide to engage targets on their own.

Autonomous weapons that could hunt their own targets would be the next step in a decades-long trend towards greater automation in weapon systems. Since World War II, nations have employed “fire-and-forget” homing munitions such as torpedoes and missiles that cannot be recalled once launched.

Homing munitions have on-board seekers to sense enemy targets and can manoeuvre to correct for aiming errors and zero in on moving targets.

Unlike autonomous weapons, they do not decide which targets to engage. The human decides to destroy the target and the homing munition merely carries out the action. Some weapons also use automation to aid humans in deciding whether to fire. Today, radars use automation to help classify objects, but humans still make the decision to fire - most of the time.

More than 30 nations employ human-supervised, autonomous weapons to defend ships, vehicles, and land bases from attack. This means humans intervene if something goes awry, but once the weapon is activated, it can search for, decide on, and engage targets on its own.

Advances in robotics and autonomy raise the prospect of future offensive weapons that could hunt for and engage targets on their own. A number of major military powers are developing stealth combat drones to penetrate enemy airspace.

They will need the ability to operate autonomously deep inside enemy lines with limited or no communications links with human controllers.

What would the consequences be of delegating to weapons the authority to search for, decide on, and engage targets offensively, without human supervision? We don’t know. It’s possible they would work fine. It’s also possible they would malfunction and destroy the wrong targets. With no human supervising, they might even continue attacking the wrong targets until they ran out of ammunition.

In the worst cases, fleets of autonomous weapons might be manipulated, spoofed or hacked by adversaries into attacking the wrong targets and perhaps even friendly forces.

A growing number of voices are raising the alarm about the potential consequences of autonomous weapons. While no country has stated that it intends to develop them, few have renounced them. Most major military powers are leaving the door open to their development, even if they say they have no plans to do so today.

In response to this, more than 60 NGOs have called for an international treaty banning autonomous weapons before they are developed.

Two years ago, more than 3 000 robotics and AI researchers signed an open letter similarly calling for a ban, albeit with a slightly more nuanced position.

Rather than a blanket prohibition, they proposed to only ban “offensive autonomous weapons beyond meaningful human control” (terms which were not defined).

One of the biggest challenges in grappling with autonomous weapons is defining terminology. The concept seems simple enough. Does the human decide whom to kill or does the machine make its own decision? In practice, greater automation has been slowly creeping into weapons with each successive generation for the past 70 years.

As with cars, where automation is incrementally taking over tasks such as emergency braking, lane keeping, and parking, what might seem like a bright line from a distance can be fuzzy up close.

Where is this creeping autonomy taking us? It could be to a place where humans are further and further removed from the battlefield, a place where killing is even more impersonal and mechanical than before - is that good or bad? It is also possible that future machines could make better targeting decisions than humans, sparing civilian lives and reducing collateral damage.

If self-driving cars could potentially reduce vehicular deaths, perhaps self-targeting weapons could reduce unnecessary killing in war.

Much of the debate around autonomous weapons revolves around their hypothesised accuracy and reliability.

Proponents of a ban argue that such weapons would be prone to accidentally targeting civilians. Opponents of a ban say that might be true today, but the technology will improve and may someday outperform humans.

These are important questions, but knowing their answers is not enough.

Technology is bringing us to a fundamental crossroads in humanity’s relationship with war.

It will become increasingly possible to deploy weapons on the battlefield that can search for, decide on, and engage targets on their own. If we had all of the technology we could imagine, what role would we want humans to play in lethal decision-making in war?

To answer this, we need to get beyond overly broad concepts like whether or not there is a human “in the loop”. Just as driving is becoming a blend of human control and automation, decisions surrounding weapons engagement already incorporate automation and human decision-making.

The International Committee of the Red Cross has proposed exploring the “critical functions” related to engagements in weapon systems. Such an approach could help to understand where human control is needed and for which tasks automation may be valuable.

Some decisions in war have factually correct answers: “Is this person holding a rifle or a rake?” It is possible to imagine machines that could answer that question. Machines already outperform humans in some benchmark tests of object recognition, although they also have significant vulnerabilities to spoofing attacks.
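To make that vulnerability concrete, the short sketch below is a hypothetical illustration - it assumes the open-source PyTorch and torchvision libraries, a standard pretrained ResNet-18 classifier and a placeholder image rather than any real military system - of how a deliberately crafted, nearly imperceptible change to an image can alter what an object-recognition model reports seeing:

```python
# A hypothetical sketch of "spoofing" an image classifier via the fast gradient
# sign method. Assumes PyTorch and torchvision are installed; the model and the
# random placeholder image are illustrative, not taken from any weapon system.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Standard pretrained object-recognition model, in evaluation mode.
model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

# Placeholder input: a random 224x224 RGB "image" so the sketch runs end to end.
# A real demonstration would load a photo and apply the model's preprocessing.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Ordinary recognition: the model's most likely label for the image.
logits = model(image)
label = logits.argmax(dim=1)

# Spoofing step: measure how the loss changes with each pixel, then nudge every
# pixel a tiny amount in the direction that most undermines the current label.
loss = F.cross_entropy(logits, label)
loss.backward()
epsilon = 0.03  # perturbation size - small enough to be hard for a human to see
spoofed = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

# The perturbed image can receive a different label, even though it looks
# essentially unchanged to a person.
with torch.no_grad():
    spoofed_label = model(spoofed).argmax(dim=1)

print("original label:", label.item(), "| after perturbation:", spoofed_label.item())
```

The same basic idea - tiny, targeted changes that flip a classifier’s output - is what researchers mean when they warn that recognition systems can be spoofed.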

Some decisions in war require judgement - a quality difficult to program into machines.

The laws of war require that any collateral damage from attacking a target not be disproportionate to the military advantage.

But deciding what number of civilian deaths is “proportionate” is a judgement call.

It’s possible that someday machines may be able to make these judgements if we can anticipate the specific circumstances, but the current state of AI means it will be difficult for machines to consider the broader context for their actions. Even if future machines can make these judgements, we must ask: Are there some decisions we want humans to make in war, not because machines can’t, but because they ought not to? If so, why?

  • Paul Scharre is a senior fellow at the Center for a New American Security.