Privacy by Design is the principle that privacy must be built into a technology from the outset, not bolted on as an afterthought. It requires developers and operators of artificial intelligence (AI) systems to integrate protections for personal data throughout a system's entire lifecycle, from initial design through everyday use to eventual retirement.
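Although this entry concerns the principle rather than any particular implementation, a minimal sketch can make "building privacy in" concrete. The Python example below is illustrative only: the field names, the ALLOWED_FIELDS allowlist, and the PSEUDONYM_SALT variable are all hypothetical. It shows two common Privacy by Design practices applied at the point of collection, dropping fields the system does not need (data minimization) and replacing a direct identifier with a salted one-way hash (pseudonymization) before anything is stored.

```python
import hashlib
import os

# Hypothetical allowlist: collect only what the stated purpose requires
# (data minimization). Everything else is dropped before storage.
ALLOWED_FIELDS = {"age_range", "country", "user_id"}

# A per-deployment secret salt so pseudonyms cannot be reversed by
# rehashing common identifiers (assumes the salt is managed securely,
# e.g. in a key management service; the fallback here is for local demo).
SALT = os.environ.get("PSEUDONYM_SALT", "dev-only-salt").encode()

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()

def ingest(record: dict) -> dict:
    """Apply privacy safeguards at the point of collection."""
    # Minimize first: fields outside the allowlist never enter the system.
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    # Pseudonymize the remaining direct identifier.
    if "user_id" in minimized:
        minimized["user_id"] = pseudonymize(minimized["user_id"])
    return minimized

raw = {"user_id": "alice@example.com", "age_range": "25-34",
       "country": "NL", "ssn": "123-45-6789"}  # ssn is never stored
print(ingest(raw))
```

The design point is that the safeguards sit in the ingestion path itself rather than in a later cleanup step, which is what distinguishes privacy built in by design from privacy added after the fact.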
This concept matters because the way AI handles personal data has direct consequences for human dignity, autonomy, and trust. When privacy is ignored, people’s most sensitive information can be exposed, misused, or turned against them. By embedding privacy safeguards early on, developers show respect for individuals’ rights and help prevent harms that cannot always be undone after the fact.
Ethically and legally, Privacy by Design affirms that privacy is not optional. It is a baseline requirement for any system that processes personal data. In the context of AI ethics and law, this principle reinforces the idea that technology companies, governments, and institutions have a duty to protect individuals, not simply because it is efficient or profitable, but because it is right.
For Further Reading: Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI.” Berkman Klein Center for Internet & Society at Harvard University, Research Publication No. 2020-1, January 15, 2020.