12 Jun 2019

Deploying "good killer robots"

The deployment of lethal autonomous weapons systems (LAWS) is a controversial and emotive subject, one that asks questions of human ethics and morals. Is it desirable or even possible to deploy morally sound LAWS?

By Tom Dent-Spargo

Lightspring / Shutterstock.com

Dr Guido Noto La Diega, from Northumbria University, analyses this topic in a seminar at the University of Hull entitled “Towards the deployment of good killer robots”. Tom Dent-Spargo also caught up with him the next day to chat about the dangers and issues of coding LAWS with human morality.

Revolution in warfare

The public conscience, difficult as it can be to define and measure, appears to have set itself against the development and deployment of LAWS. The Campaign to Stop Killer Robots is particularly vocal in its opposition and in its desire to prohibit deployment pre-emptively. Are these concerns justified, though? A few states, such as Israel and Russia, have blocked moves for a complete international ban – hardly surprising given their advanced military AI programmes. Yet even if some states can develop these systems, or the technological foundations that LAWS could someday build on, deployment remains a distant prospect. The absence of human control – and with it the lived experience that shapes a human’s ability to make moral decisions – is the feature of LAWS most likely to keep them off the front lines.

LAWS are said to be the third revolution in warfare, after gunpowder and nuclear weapons. Fully autonomous systems may not yet be in service, but many of today’s weapons systems already have partly automated features, some of which are now seen as necessary for reducing risk to civilians and increasing target accuracy. The change to fully autonomous weapons is likely to be incremental, as different functions are automated one by one.

Guido asks the question, “What if we can develop moral killer robots?” How would this be possible? Roboticists across the world are currently working on developing an artificial conscience, or “ethics by design” – attempting to translate ethical values into strings of code and then implement them in robots. This is designed to ensure that any robot complies with human understandings of ethics, and would help to counter the immorality argument that LAWS face: if the systems have literally been coded with human ethics, then they are moral. Arkin, very much a backer of LAWS, goes one step further and argues that an unmanned system has the potential to be more ethical than a human in a battlefield situation. With such systems’ increased effectiveness in targeting and reaction times, coupled with a lack of emotions, this could even lead to bloodless wars. A moral killer robot would need to abide by the principles of proportionality, distinction, and necessity. If it can perform vital military actions under these constraints, then surely it can be said to be a good killer robot, and its development and deployment should be pursued.
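As a purely illustrative sketch – every name, field and threshold below is an assumption invented for this article, not drawn from Arkin’s work or any real weapons system – the “ethics by design” idea amounts to encoding the three principles as pre-engagement checks, something like this:

```python
# Hypothetical sketch of "ethics by design": the three principles of lawful
# warfare expressed as pre-engagement checks. All names, fields and thresholds
# are illustrative assumptions, not taken from any real system.

from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool            # distinction: combatant vs civilian
    is_hors_de_combat: bool       # distinction: wounded or surrendered
    expected_civilian_harm: int   # proportionality: estimated collateral damage
    military_advantage: int       # proportionality: estimated value of the strike
    is_necessary: bool            # necessity: required to achieve the objective

def engagement_permitted(target: Target) -> bool:
    """Return True only if all three coded principles are satisfied."""
    distinction = target.is_combatant and not target.is_hors_de_combat
    proportionality = target.military_advantage > target.expected_civilian_harm
    necessity = target.is_necessary
    return distinction and proportionality and necessity
```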

Of course, it’s not that simple. With a wry smile, Guido passed his talk over to the audience at this point to gauge the room, drawing emphatic responses that killer robots are immoral and that taking humans completely out of the equation is morally repugnant. Audience members raised two further interesting points. First, if LAWS were to make wars “bloodless”, the idea of war is sanitised and implicitly made more attractive. If the human cost of war falls, wars may become more commonplace and be viewed as less serious than they are now; increasing the number of wars can hardly be called an ideal outcome. Second, the deployment of LAWS may affect target selection. Just as the use of bombs led to bomb-making facilities becoming legitimate targets, the people – perhaps civilians – who design and develop LAWS may become valid military targets. So although battlefield casualties may fall, they could simply be relocated elsewhere.

Having played devil’s advocate, it is worth asking why we are having this debate at all if we are set against the idea of good killer robots. As previously stated, LAWS are said to be the third revolution in warfare; deciding now how to counter or regulate them is a hugely significant task.

Programming morality

“We live in an age of tech determinism,” says Guido. He continues, echoing Jurassic Park’s Ian Malcolm: “Where we say, ‘if we can do something because we have the technology to do it, then we have to!’ Which is not true.” Not only must we ask, “Is it possible to develop killer robots?”, we then need to ask whether it is desirable to develop them. If the answer to the second question is yes, then deployment follows; otherwise they should be prohibited.

So, is it possible to develop moral killer robots? To answer that question, morality has to be defined in a manner relevant to the creation of LAWS. (It’s interesting to note that this talk took place at the Institute of Applied Ethics, where defining morality shouldn’t be too hard.) Most classical definitions refer to humanity and tend to exclude the possibility of any non-human subject ever being moral – Hume refers to feelings, which a robot cannot be said to have, even if it is an intelligent system, and Mill refers to desires, which are likewise beyond artificial systems. These definitions remain unsuitable for LAWS until such systems can be said to possess any of the qualities specified.

Arkin’s proposal for good killer robots is the primary source to examine, as he is the main proponent of morally developed robots. Arkin argues that LAWS can be more ethical than humans: emotions do not cloud their judgment, and their reactions and accuracy are vastly superior. To be good killer robots, they have to be able to comply with the following criteria, which are fundamental components of lawful warfare: distinction, proportionality, and necessity.

Distinction

Distinction is the ability to distinguish, at all times, between civilians and combatants, and between civilian objects and legitimate military targets. A system must also be able to recognise when an enemy combatant is hors de combat, and therefore no longer a legitimate target.

The principle of distinction requires an understanding of intention. A machine is unable, at present and for the foreseeable future, to determine a human’s intentions. It would be unable, for example, to recognise that children playing with toy guns are not legitimate targets.

Proportionality

The decision to attack or react requires a moral equation to be made. It is a balancing act: the military advantage an action achieves, weighed against the collateral damage it may cause. Working out whether an action is “worth it” is extremely difficult, requiring years of military experience coupled with situational awareness.

A machine is unclouded by emotion in this regard, but it may fail to take into account the severity of the damage an action might cause, nor can it acquire the wisdom that years of experience provide. How do you program into a machine the worth of a human life? Is it just a numbers game? Is an action worth taking purely because of the number of people harmed versus the number of people protected?
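To see why a pure numbers game is so uncomfortable, consider a deliberately naive sketch; the function and its weights below are invented for illustration, not proposed values:

```python
# Deliberately naive "numbers game" for proportionality. The weights are
# invented for illustration; the point is that the verdict flips entirely
# depending on a parameter nobody can objectively justify.

def naive_proportionality(people_protected: int,
                          civilians_harmed: int,
                          harm_weight: float) -> bool:
    """Approve the action if weighted protection outweighs weighted harm."""
    return people_protected > harm_weight * civilians_harmed

# The same scenario, two different answers, depending only on harm_weight:
print(naive_proportionality(people_protected=10, civilians_harmed=3, harm_weight=2.0))  # True
print(naive_proportionality(people_protected=10, civilians_harmed=3, harm_weight=5.0))  # False
```

The verdict turns entirely on harm_weight – a number that no sensor can supply and no programmer can objectively justify.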

Necessity

Military force should only be used to the extent necessary to win the war. Again, this is an intrinsically human, values-based judgment. The equation here is between the value of winning a war and the effect on human lives. Can a machine make this decision? Not at present, and not for some time, though it is conceivable that it will be possible in future.

Another element of necessity is that, once LAWS are introduced into warfare, they will become a necessary feature of warfare themselves. Once one state has them, the rest must follow in the manner of an arms race, because of their efficiency and superiority over other weapons.


Translating these three criteria into code is problematic. What works in a hypothetical scenario will not translate to real-world scenarios. A human’s holistic understanding of the world does not readily reduce to algorithms. Making decisions in theatres of war that affect lives requires sympathy and empathy – qualities that remain quintessentially human, at least for the moment.

Amoral robots?

The conclusion reached is that it is not yet possible to develop moral machines, as they cannot fulfil the necessary conditions. That means the question of desirability does not have to be definitively answered at this point, though the public conscience tends to be opposed to them. But LAWS can still be developed – they already are in some countries. They are simply amoral; there is no question over whether such machines can be developed, as it is already happening. The question therefore becomes whether it is desirable to develop and deploy amoral LAWS, if moral machines are not possible.

What issues do amoral LAWS then raise? Guido highlights the following questions:

  • Are LAWS necessary to reduce the excessive number of deaths in the military? 
  • Are LAWS more effective than soldiers?

To answer the first question, using the UK as a case study: in 2018 there were 61 deaths in the armed forces. Of these, only one was due to hostile action; the rest were caused by suicide or illnesses such as cancer. The answer to this question would appear to be no.

Arkin claimed that LAWS are more effective than soldiers, due mainly to their superior targeting sensors. But they are vulnerable to outside attack and to all the regular dangers that complex systems face. Furthermore, unintended interactions are always a danger with intelligent systems. The development of AI shows that unpredictable results often occur, sometimes without any explanation of the behaviour; in warfare, this could lead to an escalation or be read as a direct provocation by an opposing agent. The answer to this question also appears to be no.

Moral Roombas

The binary conclusion is that LAWS should be prohibited. But that would appear to waste a lot of effort and technical expertise. There are, however, other uses for translating ethics into code. A prime example is in low-risk settings, such as household robotics or manufacturing. Improving technical efficiency in these fields is always a priority, and aligning these machines with human codes of morals and ethics could well be a large part of that. Just because a machine cannot be said to be moral itself does not mean it cannot be aligned with human morality. The challenge is that there is no universal set of human values; the approach has to be culturally specific, suited to the culture that uses that particular robotic system.
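As a minimal sketch of what a culturally specific approach might mean in practice – every profile, setting and rule below is invented for illustration, not a description of any real product – a household robot’s behavioural preferences could be loaded per deployment context rather than hard-coded:

```python
# Hypothetical sketch of aligning a household robot with culturally specific
# values. The profiles and rules are invented assumptions for illustration.

CULTURAL_PROFILES = {
    "region_a": {"enter_private_rooms": False, "record_audio": False, "quiet_hours": (22, 7)},
    "region_b": {"enter_private_rooms": False, "record_audio": True,  "quiet_hours": (23, 6)},
}

def action_allowed(profile: dict, action: str, hour: int) -> bool:
    """Check a proposed action against the deployment context's value profile."""
    start, end = profile["quiet_hours"]
    in_quiet_hours = hour >= start or hour < end
    if action == "vacuum" and in_quiet_hours:
        return False                      # respect local norms around noise at night
    if action in ("enter_private_rooms", "record_audio"):
        return profile[action]
    return True

profile = CULTURAL_PROFILES["region_a"]
print(action_allowed(profile, "vacuum", hour=23))        # False: quiet hours
print(action_allowed(profile, "record_audio", hour=12))  # False in region_a
```

Even a toy example like this makes the point that the “right” behaviour is a configuration choice made by people for a particular context, not a property of the machine itself.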

This is not to say there aren’t any dangers with household appliances. As is often the case, protection of data is paramount. Connected devices in the home hold an enormous amount of data related to our identities, data we are often very lax about protecting. Humans are fundamentally lazy creatures, and once AI provides a convenience it is hard to turn one’s back on it. Designing appliances with a shared set of values can help to minimise the harm caused.

 

Dr Guido Noto La Diega is a Senior Lecturer in Cyber Law and Intellectual Property Law at Northumbria University, School of Law, where he is International Research Partnerships Coordinator and Co-Convenor of NINSO, the Northumbria Internet & Society Research Group.

