Predictive justice refers to the use of artificial intelligence and data analytics to forecast judicial outcomes or to predict the likelihood of future criminal behavior. These systems draw on past legal data, such as court decisions and sentencing patterns, to generate predictions about verdicts, risks, or the future actions of individuals. While often presented as a tool for efficiency or objectivity, predictive justice raises serious ethical and legal concerns.
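To make the mechanism concrete, the sketch below shows in outline how such a system might work: a statistical model fit to historical case records that outputs a "risk score" for a new defendant. This is a minimal sketch assuming a scikit-learn-style workflow; the features, data, and two-year re-arrest label are hypothetical stand-ins, not the design of any deployed system.

```python
# Hypothetical sketch of a risk-scoring model; all feature names
# and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a past defendant: [prior_arrests, age_at_first_arrest].
# The label records whether the person was re-arrested within two years.
X_history = np.array([
    [3, 19], [0, 34], [5, 17], [1, 28], [4, 21], [0, 40],
])
reoffended = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_history, reoffended)

# The model now maps a new defendant's record to a probability,
# which a court or parole board might read as a "risk score".
new_defendant = np.array([[2, 22]])
risk = model.predict_proba(new_defendant)[0, 1]
print(f"Predicted reoffense risk: {risk:.0%}")
```

Everything downstream, including a bail, sentencing, or parole recommendation, then rests on a single number extrapolated from how other people behaved in the past.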
The significance of predictive justice lies in its potential to reshape how decisions are made in courts, prisons, and policing. It directly affects fundamental rights, including liberty, privacy, and equality before the law. Because these systems rely on historical data, they can reproduce existing bias and discrimination, especially against marginalized groups. The lack of transparency in many algorithmic models also undermines accountability, making it harder to challenge errors or unfair outcomes. Errors are not abstract: they can mean wrongful restrictions on a person’s freedom, harsher punishments, or unjust profiling.
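A toy simulation can show how this reproduction of bias happens even when underlying behavior is identical across groups. The scenario below is entirely invented: one hypothetical group is policed more heavily, so its behavior is more often recorded as an arrest, and a model trained on those arrest records then treats group membership itself as predictive of risk.

```python
# Toy illustration of bias reproduction; all numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# group = 1 for a hypothetical over-policed group, 0 otherwise.
group = rng.integers(0, 2, size=n)
# Assume identical underlying behavior in both groups (20% base rate)...
behavior = rng.random(n) < 0.2
# ...but heavier policing means group 1's behavior is recorded as an
# arrest 90% of the time, versus 40% for group 0.
detection = np.where(group == 1, 0.9, 0.4)
arrested = (behavior & (rng.random(n) < detection)).astype(int)

# A model trained on the arrest records learns that group membership
# "predicts" risk, even though behavior was identical by construction.
model = LogisticRegression().fit(group.reshape(-1, 1), arrested)
for g in (0, 1):
    p = model.predict_proba([[g]])[0, 1]
    print(f"Predicted 'risk' for group {g}: {p:.0%}")
```

The model is statistically "correct" about the arrest records it was given, which is precisely the problem: it faithfully encodes the bias in how those records were produced.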
From an ethical standpoint, using machines to predict human behavior in the justice system is deeply problematic. It risks turning justice into a statistical exercise rather than a moral and legal judgment. Predictive justice, if unregulated, undermines the principle that every person deserves to be judged as an individual, not as a data point.