Generative Artificial Intelligence (generative AI) is a type of artificial intelligence that can create new content such as text, images, audio, or video. Instead of only analyzing or classifying existing information, generative AI learns patterns from large amounts of data and then produces outputs that are similar, but not identical, to what it has seen before. Its defining feature is the capacity to generate original material that appears creative or novel.
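The idea of learning patterns from data and then producing similar-but-new output can be illustrated with a deliberately simple toy: a word-level Markov chain. This is only a sketch of the underlying concept; real generative AI systems use large neural networks, and every name and the tiny training corpus below are hypothetical choices for illustration.

```python
import random

def train_markov(text):
    """Learn a simple pattern from data: which words tend to follow which."""
    words = text.split()
    model = {}
    for current, following in zip(words, words[1:]):
        model.setdefault(current, []).append(following)
    return model

def generate(model, start, length, seed=0):
    """Produce new text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    output = [start]
    for _ in range(length - 1):
        choices = model.get(output[-1])
        if not choices:
            break
        output.append(rng.choice(choices))
    return " ".join(output)

# Hypothetical miniature "training data".
corpus = ("the model learns patterns from data and "
          "the model produces new text from patterns it learns")

model = train_markov(corpus)
print(generate(model, "the", 8))
```

The generated sentence recombines words in an order the corpus never contained verbatim, which is the essence, at toy scale, of producing output that resembles the training data without copying it.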
Generative AI matters for ethics and law because it reshapes how societies produce and share knowledge, art, and communication. It raises questions about human agency, dignity, and control over technology, as machines now imitate human expression in ways that may blur the distinction between truth and falsehood. The power of these systems makes them both promising and dangerous. They can expand human creativity, but they also risk spreading misinformation, amplifying bias, or infringing on privacy.
From a human rights perspective, the ethical responsibility is clear: generative AI should never be deployed in ways that deceive, manipulate, or strip people of their autonomy. Its use must be bound by safeguards that uphold fairness, accountability, and the inherent dignity of all people. When unregulated, generative AI threatens to erode trust in democratic institutions and distort shared realities. When guided by strong rights-based frameworks, however, it can serve as a powerful tool for human flourishing.
Disclaimer: Our global network of contributors to the AI & Human Rights Index is currently writing these articles and glossary entries. This particular page is in the recruitment and research stage; please check back later to follow its progress through the editorial workflow. Thank you! We look forward to learning with and from you.