Accountability refers to the professional and legal standards determining who is responsible and legally liable for artificial intelligence throughout its lifecycle—from design and development to deployment, use, and monitoring.
Accountability is a foundational AI ethic that measures how effectively technologies uphold human values, professional codes of practice, civil and criminal laws, and human rights across all sectors of society. Accountability practices establish remedies when harm occurs, whether the harm results from intent to harm, negligent action (failure to take reasonable care to prevent harm), reckless disregard (ignoring known risks), or demonstrably harmful effects.
When assigning accountability, the focus should be on asking, “Who are the humans and organizations responsible?” rather than on the AI technology itself. Accountability should be distributed among the relevant parties, such as companies, investors, researchers, developers, manufacturers, users, and regulators, in proportion to each party’s role.
This mutually dependent approach ensures that all relevant parties share responsibility for an AI system’s impact and for mitigating harm, embodying the promise to “do no harm” (nonmaleficence).
For Further Reading
Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. "Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI." Berkman Klein Center for Internet & Society at Harvard University, Research Publication No. 2020-1, January 15, 2020.