Accountability is a professional ethic and a legal term that identifies which people or organizations are responsible, or even legally liable, for an artificial intelligence system at each stage of its lifecycle.
Accountability is one of the most cited AI principles. It refers to the process by which the humans who design, deploy, and use AI systems uphold professional codes of practice across various sectors of society and follow the civil and criminal laws of their legal jurisdictions.
When harm occurs, stakeholders initiate an accountability procedure to establish remedies, a process for repairing the situation. Companies and courts have used accountability practices to determine whether a harm was caused by a person’s intent, negligence, or reckless disregard. Legally speaking, negligence means a failure to take reasonable care, and reckless disregard means ignoring known risks or evidence of harmful effects.
When assigning accountability, the focus should be on asking, “Who are the humans and organizations responsible?” rather than on which AI machine or algorithm caused the harm.
Fair accountability practices distribute the assigned fault among the relevant parties (e.g., investors, researchers, developers, manufacturers, users, regulators) in proportion to each party’s role. This distributive process is intended to ensure that all relevant human parties take shared responsibility for an AI system’s impact.
Effective accountability practices draw on insight and foresight to mitigate current harms and deter future ones. See nonmaleficence.
For Further Reading: Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI.” Berkman Klein Center for Internet & Society at Harvard University, Research Publication No. 2020-1, January 15, 2020.