Agentic science is an approach to research in which AI systems act as active partners in the scientific process rather than passive tools. These AI agents can generate hypotheses, design and run experiments, analyze results, and revise their approach through ongoing feedback. In other words, the system can pursue scientific goals with a degree of independence while remaining subject to human oversight and ethical safeguards.
This concept matters for AI ethics and law because agentic systems raise new questions about responsibility, transparency, and human control. When an AI system can act with partial autonomy in scientific settings, society needs clear rules to ensure that the system's decisions remain accountable, traceable, and aligned with human values. Agentic science carries real promise for advancing knowledge, yet without proper governance it risks concentrating power, creating opaque decision pipelines, and weakening human judgment. Ethical governance is essential to ensure that these systems strengthen rather than undermine human rights, including the rights to safety, dignity, and the enjoyment of the benefits of scientific progress.
Agentic science is closely connected to ideas such as autonomy, transparency, and accountability, since each principle helps determine whether an agentic system acts in ways that are lawful, trustworthy, and oriented toward the public good.