Autonomous Weapons are AI-driven systems capable of selecting and engaging targets without direct human supervision. Often called lethal autonomous weapons systems (LAWS), they rely on sensors and algorithms to interpret battlefield conditions and to decide, on their own, when to apply lethal force.
Their significance lies in the profound challenges they pose to accountability and international humanitarian law. When machines rather than humans decide matters of life and death, it becomes unclear who bears ultimate responsibility for unlawful harm: the commander who deployed the system, the developer who built it, or no one at all. These weapons risk escalating conflicts, diminishing human oversight, and endangering human rights, since algorithms may fail to uphold the principle of distinction between civilians and combatants. Detaching human conscience from lethal force raises serious moral questions and threatens the principle that warfare should remain under direct human command. Many legal experts and activists therefore argue that these technologies should be heavily regulated, or banned outright, to protect civilian lives and preserve human dignity in armed conflict.