Articles  |  January 1, 2020

The Ethics of Acquiring Disruptive Technologies: Artificial Intelligence, Autonomous Weapons, and Decision Support Systems

Article by C. Anthony Pfaff. Published in Prism: A Journal of the Center for Complex Operations. 28 pages.


Last spring, Google announced that it would not partner with the Department of Defense’s “Project Maven,” which sought to harness the power of artificial intelligence (AI) to improve intelligence collection and targeting. Google’s corporate culture, which one employee characterized as “don’t be evil,” attracted a number of employees who were opposed to any arrangement in which their research would be applied to military or surveillance purposes. As a result, Google faced a choice between retaining these talented and skilled employees and pursuing potentially hundreds of millions of dollars in defense contracts. It chose the former.

In fact, a number of AI-related organizations and researchers have signed a “Lethal Autonomous Weapons Pledge” that commits signatories not to participate in developing machines that can decide to take a human life. This kind of problem is not going to go away. Setting aside whether the kind of absolute pacifism exhibited by Google employees is morally preferable to its alternatives, it is worth taking these concerns seriously. Persons who enjoy the security a state provides are well within their rights to make personal commitments to avoiding violence of any kind. As Stanley Hauerwas puts it, a complete commitment to nonviolence, even if not entirely philosophically consistent, is often the only way to convince a society not just to consider, but to privilege, nonviolent approaches to conflict resolution over violent ones. [ . . . ]