Now that the United States and Russia have withdrawn from the INF Treaty, a new nuclear arms race looms – including on the European continent. As dangerous as this development may be, the reflexes on both sides of the Atlantic are well-practised: in the end, a balance will be reached making the actual deployment of medium-range nuclear missiles highly unlikely. A nuclear exchange would be too cut-and-dried and too predictable: a matter of ‘If you hit me, I'll hit you back.’ In the end, both sides lose.
Of course, we should not underestimate or downplay the risks associated with nuclear armament. But neither should the muscle-flexing of Trump, Putin or even ourselves be allowed to push other, often low-profile disarmament issues into the background.
A completely new dynamic is unfolding in the field of lethal autonomous weapons systems. While there has yet to be a broad public debate on the pros and cons of these weapons, experts have been discussing them for several years. The international community has been addressing such systems since 2013 within the framework of the United Nations Convention on Certain Conventional Weapons (CCW) in Geneva, but some central questions remain unanswered. They concern a generally accepted definition, as well as the question of liability for errors committed by autonomous machines.
Deciding about a human life
Another open question is how to reconcile lethal autonomous armed force with human dignity and fundamental rights. The responsibility for the use of force, and thus for the killing of another human, shouldn’t be delegated to machines. This conclusion has less to do with apocalyptic horror scenarios in which autonomous robotic armies fight each other than with the fact that an algorithm, possibly even a self-learning variant, would be making a decision about a human life.
This notion is deeply contrary to the values of the Enlightenment and human civilisation as a whole. There’s only one possible, and indeed inevitable, consequence: human control must never be removed from weapons systems that engage living targets. However sensible it may be to debate whether acts of war can ever be ‘humane’ or ‘moral’, it is clear that the transfer of responsibility to an algorithm – however mathematically certain it may seem – would entail a dangerous shift in values.
Simply put: autonomy in weapons systems must not reduce the human role to that of the last link in the operational chain, one that merely gives the ‘final blessing’ to the firing of a weapon. On the contrary, a commanding soldier must be able to actively engage in and control the selection, prioritisation and tracking of the target. And there must be adequate opportunity for ethical assessment and for the exercise of conscience.
Why we urgently need regulation
The talks in Geneva, which have lasted years, show that the international community is still struggling to reach a consensus on these issues and to gauge the possible consequences of these weapons systems for international law. In fact, autonomous weapons confront international humanitarian law with challenges it was never designed to address. What defines a combatant when the battlefield of the future is dominated by autonomous machines? Rapid progress towards a ban on lethal autonomous weapons systems, or at least their comprehensive regulation, is urgently needed for two reasons.
First, autonomy in weapons systems will not emerge suddenly overnight. Rather, it is a creeping process in which the degree of autonomy of a growing number of individual (assistance) systems increases step by step. Contrary to the popular image of a ‘killer robot’, for the foreseeable future there will be no single system with a ‘soul’ of any kind; autonomy will instead be the sum of those individual systems, which together form a functioning whole. If regulation is too slow or does not exist, technical progress will gallop ahead and be hard to rein back in. We cannot afford to be caught off-guard by such developments, as we were by extrajudicial killings, which dissolved the boundaries of defined conflict zones.
Secondly, this is a matter of urgency not only technologically but also politically. The Geneva negotiations are in danger of failing for lack of common interests. The goals of the states playing in the top military-technological league differ too sharply from those of the states that can only lose in this race and therefore push for the strictest possible regulation. The most likely consequence of this standoff is an agreement concluded outside of, or parallel to, the United Nations – by a smaller circle of participating states with correspondingly limited legal and moral force.
Where we see progress
Undoubtedly, the road to a universal ban on lethal autonomous weapons is a rocky one, especially as the military potential of these weapons systems is huge, as is the possible gain in civilian use. However, the situation is not hopeless. The first requirement is that national governments, international organisations, national and international parliaments and civil society all make their contribution. The Nobel Peace Prize-winning International Campaign to Abolish Nuclear Weapons has demonstrated how international pressure can be successfully asserted.
Last year, the Belgian Parliament passed a resolution calling for a ban on lethal autonomous weapons systems. Shortly thereafter, the European Parliament reaffirmed that call. The parliamentary group of the German Social Democrats (SPD) has now followed suit and is calling for a ban on systems that elude human control.
It’s important for the German Parliament as a whole to fortify this position, to encourage other parliaments and governments to join it and, in the long term, to work towards an effective ban on lethal autonomous weapons systems. The political tailwind for this plan comes from industry: the president of the Federation of German Industries, for example, recently called for a ban along the lines of the Chemical Weapons Convention.
The extent to which autonomous weapons systems will play a role on future battlefields is still unclear. Whether they will one day even be considered weapons of mass destruction remains to be seen. One thing, however, is already obvious: the previously clear structures of the nuclear arms race – first- and second-strike capability, mutual assured destruction, the nuclear triad – would erode even faster in an arms race involving autonomous weapons. Add the closely related field of cyber warfare, and symmetrical forms of warfare dissolve entirely. Should these technologies ever become available to non-state actors, the danger will increase massively once again. The termination of the INF Treaty is a heavy burden, but our biggest concern should be the technologies of the future.