DOD’s Ethical Principles Challenges for AI
The Department of Defense issued five Ethical Principles for Artificial Intelligence (AI) in February 2020: Responsible, Equitable, Traceable, Reliable, and Governable. The principles were developed by the Defense Innovation Board, whose recommendations were produced over a 15-month period by AI leaders in government and the private sector and grounded in existing Law of War principles and statutes. They are designed to address new ethical issues raised by AI.
The DoD General Counsel, Paul C. Ney Jr., has commented on the importance of applying existing law-of-war principles to new legal issues, and to AI in particular, saying: "The advantage of artificial intelligence and other autonomy-related emerging technologies is the use of software or machine control of the systems rather than manual control by a human being."
DoD has established an AI policy team in the Pentagon’s Joint Artificial Intelligence Center (JAIC) under the command of Air Force Lt. Gen. Jack Shanahan. Shanahan quickly hired attorney Alka Patel to head the policy team implementing the principles. Patel had been the executive director of the Risk & Regulatory Services Innovation Center at Carnegie Mellon University.
Implementation of the DoD principles
Among the first steps Patel and her colleagues at the JAIC have taken to apply the new DoD principles:
- Incorporating the principles as applicable standards in requests for proposals, including a May award to Booz Allen Hamilton.
- JAIC participation, through Patel, in a Responsible AI subcommittee, part of a larger DoD working group writing a broader DoD policy document.
- Establishment of a pilot program, "Responsible AI Champions," bringing together a broad group inside the JAIC to apply the principles across the entire life cycle of AI programs.
- Early work on the creation of a Data Governance Council involving the U.S. government and other countries.
Issues to be examined
Several issues must be looked at carefully when applying the ethical principles to real-world business and national security problems:
- Clarification of the terms used in the principles, including, but not limited to, what is "appropriate" in the first principle (Responsible) and what constitutes "unintended bias" in the second (Equitable).
- As acknowledged by Ney, Stuart Russell and many others, what will be the scope of human control of AI involved in military applications?
- As also broadly discussed, what will be the scope of ultimate human accountability for AI decision-making?
- And finally, the overarching problem of "moral machines": what to think about machines that think.
Implications for the private sector
For defense contractors and the private sector, the implications are immense. The Pentagon is still litigating disputes over its $10 billion cloud-computing contract, which Microsoft won in 2019 and which Amazon and others are contesting.