The Biden administration on Thursday rejected demands for a binding international agreement banning or tightly regulating the use of so-called killer robots, autonomous weapons that campaigners fear will make war more deadly and entrench a global norm of “digital dehumanization.”
During a meeting in Geneva, State Department official Josh Dorosin said the U.S. prefers “the development of a non-binding code of conduct” on Lethal Autonomous Weapons Systems (LAWS), which have already been used in conflicts to track and kill without a human operator.
While dozens of countries—most recently New Zealand—have expressed support for a global ban on the use of autonomous weapons systems, the U.S. has been a major obstacle to progress for years. On Thursday, Dorosin reiterated U.S. opposition to prohibiting killer robots through a “legally-binding instrument.”
John Tasioulas, director of the University of Oxford's Institute for Ethics in AI, called the Biden administration's position "sad but unsurprising."
New Zealand, for its part, announced Tuesday that it would join the international coalition demanding a ban on LAWS, declaring that “the prospect of a future where the decision to take a human life is delegated to machines is abhorrent.”
“This is an issue with significant implications for global peace and security, and I’m optimistic New Zealand, alongside the international community, is well placed to push for action,” said Phil Twyford, New Zealand’s minister of disarmament and arms control.
Clare Conboy of the Stop Killer Robots coalition applauded New Zealand’s stand as “a powerful demonstration of political and moral leadership.”
“We look forward to supporting the government of New Zealand in their work to establish new law and to further build upon their proud history of leading international disarmament efforts and centering human rights, peace, and disarmament in their foreign policy,” she added.
In a report issued ahead of the latest round of United Nations talks, Human Rights Watch and the Harvard Law School International Human Rights Clinic warned that “it would be difficult for fully autonomous weapons systems, which would select and engage targets without meaningful human control, to distinguish between combatants and non-combatants as required under international humanitarian law.”
“The emergence of autonomous weapons systems and the prospect of losing meaningful human control over the use of force,” the report states, “are grave threats that demand urgent action.”
Bonnie Docherty, senior arms researcher at Human Rights Watch, said Wednesday that “much opposition to killer robots reflects moral repulsion to the idea of machines making life-and-death decisions.”
“A new treaty would fill the gap in international treaty law and protect the principles of humanity and dictates of public conscience in the face of emerging weapons technology,” Docherty argued.