December 2019
By Michael Klare
On Oct. 31, after 15 months
of private deliberation and public meetings, the Defense Innovation Board
(DIB), an independent advisory arm of the Office of the Secretary of Defense,
issued a set of recommendations on “ethical principles” for the use of artificial
intelligence (AI) by the Defense Department.
Eric Schmidt, former executive chairman of Google's parent company Alphabet Inc., speaks during a National Security Commission on Artificial Intelligence conference on Nov. 5. He chaired the Defense Innovation Board, which recently issued recommendations on the military use of artificial intelligence. (Photo by Alex Wong/Getty Images)
The DIB had originally been asked to devise such recommendations in 2018 by Defense Secretary Jim Mattis following a revolt by workers at Google over the company’s AI work for the department. Some 4,000 employees signed a petition calling on Google to discontinue its work on Project Maven, a pioneering Pentagon effort to employ AI in identifying suspected militants, possibly for elimination through drone attacks. Google subsequently announced that it
would not renew the Maven contract and
promised never to develop AI for “weapons or other technologies whose principal
purpose or implementation is to cause or directly facilitate injury to people.”
Knowing that the military
would have to rely on Silicon Valley for the talent and expertise it needed to
develop advanced AI-empowered weapons and fearful of further resistance of the
sort it encountered at Google, the Defense Department leadership sought to
demonstrate its commitment to the ethical use of AI by initiating the DIB
study. This effort also represents a response of sorts to growing public
clamor, much of it organized by the Campaign to Stop Killer Robots, for a
treaty banning fully autonomous weapons systems.
The DIB, chaired by Eric Schmidt,
former executive chairman of Alphabet, Google’s parent company, held three
closed-door meetings with academics, lawyers, company officials, and arms
control specialists in preparing its recommendations. A representative of the
Arms Control Association submitted a formal statement to the board, emphasizing the need for human control over all weapons systems and for the automatic deactivation of autonomous systems that lose contact with their human operators.
In its final report, the DIB
appears to have sought a middle course, opening
the way for expanded use of AI by the military while trying to reassure
skeptics that this can be done in a humane and ethical fashion.
“AI is and will be an
essential capability across [the Defense Department] in non-combat and combat
functions,” the board stated. Nevertheless, “the use of AI must take place
within the context of [an] ethical framework.”
The military has long
embraced new technologies and integrated them in accordance with its
long-standing ethical guidelines, the DIB indicated. But AI poses distinct problems because such systems possess a capacity
for self-direction not found in any other weapons. Accordingly, a set of specific “AI ethics principles” is needed when employing these technologies for military purposes: such systems must be “responsible, equitable, traceable, reliable, and governable,” the DIB wrote.
Each of these five principles seeks to address concerns raised by
meeting participants over the use of AI in warfare. The DIB
report led with the principle of ultimate human responsibility over all AI
systems deployed by the Defense Department. Similarly, in response to
concerns about biases built into AI target-identification systems—one of the issues raised by rebel workers at
Google—it offers equity. The Defense Department,
it affirmed, should “avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.”
The precepts of traceability and reliability are responses to scientific
critics who worry that AI-empowered machines may act in ways that humans cannot
understand or behave erratically, causing unintended harm. Accordingly, those
principles state that it is essential that AI systems’ decision-making
processes be traceable, hence correctable, by humans and that any programming
flaws be detected and repaired before such munitions are deployed on the
battlefield.
The final principle, governability, is of particular concern to the arms
control community as it bears on commanders’ ability to prevent unintended
escalation in a crisis, especially a potential nuclear crisis. The draft text stated that all AI systems must be
capable of detecting and avoiding unintended harm and disruption and be able to
“disengage or deactivate deployed systems that demonstrate unintended
escalatory or other behavior.” Some DIB members argued that this left too much
to chance, so the final text was amended to read that such systems must possess
a capacity “for human or automated
disengagement or deactivation” of systems that demonstrate escalatory
behavior.
The DIB recommendations will be forwarded to the defense secretary, but their ultimate fate is unknown.
Nevertheless, the principles articulated by the board are likely to remain a
source of discussion for some time to come.