Pentagon Takes a Stab At Machine Morality, What’s New?

An important development in the artificial intelligence space occurred last month when the Pentagon’s Defense Innovation Board released draft recommendations [PDF] on the ethical use of AI by the Department of Defense. The recommendations, if adopted, are expected to “help guide, inform, and inculcate the ethical and responsible use of AI – in both combat and non-combat environments.”

For better or for worse, much of the debate around the development of autonomous systems today centers on ethics. By definition, autonomous systems are predicated on self-learning and reduced human involvement. As Andrew Moore, head of Google Cloud AI and former dean of computer science at Carnegie Mellon University, defines it, artificial intelligence is just “the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.”

How then do makers of these systems ensure that the human values that guide everyday interactions are replicated in decisions that machines make? The answer, the argument goes, lies in coding ethical principles that have been tested for centuries into otherwise “ethically blind” machines.

Critics of this argument posit that this recent trend of researching and codifying ethical guidelines is just one way for tech companies to avoid government regulation. Major companies like Google, Facebook and Amazon have all either adopted AI charters or established committees to define ethical principles. Whether these approaches are useful is still open to debate. One study, for example, found that priming software developers with ethical codes of conduct had “no observed effect” [PDF] on their decision making. Does this then mean that the whole conversation around AI and ethics is moot? Perhaps not.

In the study and development of autonomous systems, the content of ethical guidelines is only as important as the institution adopting them. The primary reason ethical principles adopted by tech companies are met with cynicism is that they are voluntary and do not, in and of themselves, ensure implementation in practice. On the other hand, when similar principles are adopted by institutions that treat the prescribed codes as red lines and have the legal authority to enforce them, these ethical guidelines become enormously important documents.

The Pentagon’s recommendations – essentially five high-level principles – must be lauded for moving the conversation in the right direction. The draft document establishes that AI systems developed and deployed by the DoD must be responsible, equitable, traceable, reliable, and governable. Of special note among these are the calls to make AI traceable and governable. Traceability in this context refers to the ability of a technician to reverse engineer the decision-making process of an autonomous system and glean how it arrived at the conclusion that it did. The report calls for “auditable methodologies, data sources, and design procedure and documentation.” Governable AI similarly requires systems to be developed with the ability to “disengage or deactivate deployed systems that demonstrate escalatory or other behavior.”
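To make the two requirements concrete, the sketch below shows one way a developer might operationalize them in code: every decision is logged with its inputs and model version so it can be audited later (traceability), and a human-controlled switch can halt further autonomous decisions (governability). This is a minimal illustration only; the class and method names (GovernableClassifier, decide, disengage) are hypothetical and do not come from the DoD document, and the wrapped model is assumed to expose a simple predict method.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")


class GovernableClassifier:
    """Hypothetical wrapper that makes a model's decisions auditable and deactivatable."""

    def __init__(self, model, model_version):
        # `model` is assumed to expose .predict(features) -> (label, confidence)
        self.model = model
        self.model_version = model_version
        self.engaged = True  # governability: a human-controlled kill switch

    def disengage(self, reason):
        """Deactivate the system; further autonomous decisions are refused."""
        self.engaged = False
        audit_log.warning("System disengaged: %s", reason)

    def decide(self, features):
        if not self.engaged:
            raise RuntimeError("System is disengaged; no autonomous decisions allowed.")
        label, confidence = self.model.predict(features)
        # Traceability: record inputs, model version, and output so the
        # decision can be reconstructed and audited after the fact.
        audit_log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            "inputs": features,
            "decision": label,
            "confidence": confidence,
        }))
        return label
```

The point of the sketch is not the specific implementation but the design choice it embodies: auditability and the ability to deactivate are properties that have to be built into a system from the start, not bolted on after deployment.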

Both of these aspects are frequently the most overlooked in conversations around autonomous systems, and yet they are critical for ensuring reliability. They are also likely to be the most contested, as questions of accountability arise when machines malfunction, as they are bound to. They are also likely to make ‘decision made by algorithm’ a less viable defense when creators of AI are confronted with questions of bias and discrimination – as Apple and Goldman Sachs’ credit-limit algorithm recently was.

While the most direct application of the DoD’s principles is in the context of lethal autonomous weapon systems, their relevance will likely be felt far and wide. The private technology companies currently pursuing and building autonomous systems for military use – Microsoft, with its $10 billion JEDI contract to overhaul the military’s cloud computing infrastructure, and Amazon, whose facial recognition system is used by law enforcement – will likely have to invest in building new fail-safes into their systems to comply with the DoD’s recommendations. These efforts will likely have a bleed-through effect on systems being developed for civilian use as well. The DoD is certainly not the first institution to adopt these principles. Non-governmental bodies such as the Institute of Electrical and Electronics Engineers (IEEE) – the largest technical professional organization in the world – have also called [PDF] for the adoption of standards around transparency and accountability in AI to provide “an unambiguous rationale” for all decisions taken. While the specific questions around which ethical principles can be applied to machine learning will continue for the foreseeable future, the Pentagon’s draft could play a key role in moving the needle forward.
