Accountability refers to the professional and legal standards determining who is responsible and legally liable for artificial intelligence throughout its lifecycle—from design and development to deployment, use, and monitoring.
Accountability is a foundational principle of AI ethics that gauges how effectively technologies uphold human values, professional codes of practice, civil and criminal laws, and human rights across all sectors of society. Accountability practices establish remedies when harm occurs, whether it results from intent to harm, negligent action (failure to take reasonable care to prevent harm), reckless disregard (ignoring known risks), or evidence of harmful effects.
When assigning accountability, the focus should be on asking, “Who are the humans and organizations responsible?” rather than on the AI technology itself. Accountability should be distributed in proper proportion among the relevant parties, such as companies, investors, researchers, developers, manufacturers, users, and regulators.
This mutually dependent approach ensures that all relevant parties take shared responsibility for an AI system’s impact and for mitigating its harms, embodying the principle of nonmaleficence: the promise to “do no harm.”
For Further Reading
Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. "Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI." Berkman Klein Center for Internet & Society at Harvard University, Research Publication No. 2020-1, January 15, 2020.
Contributor(s): Nathan C. Walker
Reviewers: TBD
Editor(s): TBD
Updated: January 16, 2024
Edition 3.0 Review: This article is undergoing peer review.