[Peace-discuss] New "Ethical Principles" for killer drones have arrived.

J.B. Nicholson jbn at forestfield.org
Tue Feb 25 21:39:53 UTC 2020


The Dept. of Defense has a 2012 directive which says that humans must remain in full 
control over U.S. drones. But how do you maintain a global empire without automating 
some of the spying and killing? That work is increasingly done with drones, which 
makes drone warfare a special consideration beyond traditional bomber planes.

https://www.youtube.com/watch?v=mOknrO-JVEo -- Manila Chan interview with Ben Swann 
on "In Question"

From https://on.rt.com/able

> As the United States looks to develop artificial intelligence weapons to keep up
> with Russia and China, the Pentagon has adopted a set of guidelines that it says
> will keep its killer androids under human control.
> 
> The Department of Defense adopted a set of “Ethical Principles for Artificial
> Intelligence” on Monday, following the recommendations by the Defense Innovation
> Board last October.
> 
> "AI technology will change much about the battlefield of the future, but nothing
> will change America's steadfast commitment to responsible and lawful behavior,”
> Defense Secretary Mark Esper said in a statement.
> 
> According to the Pentagon, its future AI projects will be “Responsible, Equitable,
> Traceable, Reliable,” and “Governable,” as the Defense Innovation Board
> recommended. Some of these five principles are straightforward (once you decipher
> the Pentagon-speak anyway); “Governable,” for example, means that humans must be
> able to flip the off switch on “systems that demonstrate unintended behavior.” But
> others are more ambiguous.
> 
> What exactly the department will do to “minimize unintended bias in AI
> capabilities,” as it says it will do to keep these systems “equitable,” is vague,
> and may cause problems down the line if left undefined. Trusting a machine to scan
> aerial imagery in search of targets is a legal and ethical minefield, and Google
> already pulled out of a Pentagon project in 2018 that would have used machine
> learning to improve the targeting of drone strikes.
> 
> Similarly, the Pentagon’s promise that its staff will “exercise appropriate levels
> of judgment and care” when developing and fielding these new weapons is a lofty,
> but ultimately meaningless pledge.

See 
https://admin.govexec.com/media/dib_ai_principles_-_supporting_document_-_embargoed_copy_(oct_2019).pdf 
for the exact language:

> The following principles represent the means to ensure ethical behavior as the
> Department develops and deploys AI. To that end, the Department should set the
> goal that its use of AI systems is:
> 
> 1. Responsible. Human beings should exercise appropriate levels of judgment and
> remain responsible for the development, deployment, use, and outcomes of AI
> systems.
> 
> 2. Equitable. DoD should take deliberate steps to avoid unintended bias in the
> development and deployment of combat or non-combat AI systems that would
> inadvertently cause harm to persons.
> 
> 3. Traceable. DoD’s AI engineering discipline should be sufficiently advanced such
> that technical experts possess an appropriate understanding of the technology,
> development processes, and operational methods of its AI systems, including
> transparent and auditable methodologies, data sources, and design procedure and
> documentation.
> 
> 4. Reliable. AI systems should have an explicit, well-defined domain of use, and
> the safety, security, and robustness of such systems should be tested and assured
> across their entire life cycle within that domain of use.
> 
> 5. Governable. DoD AI systems should be designed and engineered to fulfill their
> intended function while possessing the ability to detect and avoid unintended harm
> or disruption, and disengage or deactivate deployed systems that demonstrate
> unintended escalatory or other behavior.


jbn: None of the new language says that humans must remain in direct, explicit 
control of U.S. weapons rather than handing that control over to AI.

Automating the details -- where a drone flies, how long a drone spies, whom a drone 
causes to die -- is a viable way to kill a lot more people with no warning, not even 
the sound of an aircraft overhead or a declaration of war. That will, in turn, 
manufacture a lot more enemies (thus averting the "enemies gap" Col. Powell warned 
us about). The beneficiaries remain the same -- weapons contractors, mercenaries 
("contractors"), and all of the traditional permanent-government interests.

Perhaps Rep. Tulsi "surgical strike" Gabbard[1] or Sen. Bernie "drones is a weapon" 
Sanders[2] will comment on this later.

[1] https://theintercept.com/2018/01/17/intercepted-podcast-white-mirror/
[2] Meet the Press interview, 2016, and 
https://thehill.com/policy/national-security/252270-sanders-i-wouldnt-end-drone-program, 
which recaps an interview on ABC’s "This Week with George Stephanopoulos" in which 
Sanders said he would not end the drone program.

-J
