Yesterday, Monday 27 July, over a thousand people, mostly scientists specialising in artificial intelligence and robotics, called for a ban on autonomous weapons capable of “selecting and engaging targets without human intervention”. They made their point by signing an open letter published by the Future of Life Institute, an American non-profit organisation working on “the potential risks of human-level artificial intelligence” and on “mitigating the existential risks facing humanity”.
Signatories included Elon Musk, CEO of Tesla and SpaceX, as well as the British astrophysicist Stephen Hawking, both of whom have publicly voiced their concerns about artificial intelligence.
The letter was published on the occasion of IJCAI, an international conference on artificial intelligence held in Buenos Aires from 25 to 31 July, to take a stand against what its authors see as a real threat: “Artificial intelligence has reached a point where the deployment of such systems will be materially, if not legally, feasible within years, not decades, and the stakes are high. Autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear weapons.”
The experts who signed the open letter fear that states will embark on an “arms race”, driven by the idea that “replacing men with machines will limit the number of casualties on either side”. “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting”, they explain.
If any major military power pushes ahead in the domain of lethal autonomous weapons, escalation will be “inevitable”, they insist. “Unlike nuclear weapons, these arms require no costly or hard-to-obtain raw materials”, the experts explain. Consequently, “it will only be a matter of time before they appear on the black market and in the hands of terrorists, of dictators wishing to control their populations more tightly, and of warlords determined to carry out ethnic cleansing”.
These warnings are not directed against AI as such. Rather, the scientists fear that military applications of AI will “tarnish” the field and provoke a “major public backlash against AI, thereby forfeiting all its future societal benefits”.
The question of whether to ban lethal autonomous weapons was the subject of a UN meeting in April. The UN special rapporteur, Christof Heyns, has been arguing for more than two years for a moratorium on the development of these systems until a suitable legal framework has been defined.
The NGO Human Rights Watch has meanwhile denounced “the absence of legal accountability” for the acts of these “killer robots”.
Le Monde (Morgane Tual) 28/07/2015