Predictive policing is the use of artificial intelligence to forecast where crimes might occur or who might commit them, based on patterns found in past data. By analyzing information such as prior arrests, reported incidents, or neighborhood statistics, vendors and proponents claim these systems help police departments decide where to focus attention and resources.
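To illustrate the basic mechanism, the following is a minimal, hypothetical sketch in Python of place-based forecasting. The neighborhood names, counts, and the simple averaging rule are invented for illustration; deployed systems use far richer data and models, but the underlying logic of ranking areas by past recorded incidents is the same.

```python
from collections import defaultdict

# Hypothetical history of recorded incidents per area, per past week (invented numbers).
history = [
    ("Northside", 4), ("Northside", 6), ("Northside", 5),
    ("Eastdale", 1), ("Eastdale", 2), ("Eastdale", 1),
    ("Riverview", 9), ("Riverview", 8), ("Riverview", 11),
]

def forecast(history):
    """Predict next week's incidents in each area as its average recorded count."""
    totals, weeks = defaultdict(float), defaultdict(int)
    for area, count in history:
        totals[area] += count
        weeks[area] += 1
    return {area: totals[area] / weeks[area] for area in totals}

# Areas are ranked by predicted incidents; top-ranked areas receive extra patrols.
for area, score in sorted(forecast(history).items(), key=lambda kv: -kv[1]):
    print(f"{area}: predicted {score:.1f} incidents next week")
```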
This approach raises serious ethical and legal concerns. If the data reflects biased practices of the past, the algorithm will likely reinforce those same patterns of injustice. For example, when crime data is already shaped by unequal policing of certain racial or economic groups, predictive policing does not remove the bias but multiplies it: neighborhoods flagged by the model receive more patrols, more patrols generate more recorded incidents, and those new records feed back into the model as apparent confirmation. The result is a cycle in which marginalized communities are unfairly targeted, deepening mistrust and discrimination (see the sketch after this paragraph). Predictive policing also threatens privacy, since it often relies on constant surveillance of individuals’ movements and behaviors. Furthermore, because the algorithms are often kept secret and shielded from independent scrutiny, they lack transparency and accountability, making it nearly impossible for communities to challenge unjust outcomes.
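To make the feedback loop described above concrete, here is a deliberately simplified simulation in Python. All numbers and the patrol rule are assumptions chosen for illustration: both areas have the same true crime rate, but one starts with a larger recorded history because it was over-policed, and each week the extra patrol goes to whichever area has the larger record.

```python
# Toy feedback-loop simulation (all values are assumptions, not real data).
TRUE_CRIMES = 10              # identical true weekly crimes in each area
REPORTED_WITHOUT_PATROL = 3   # crimes recorded via resident reports alone

recorded = {"A": 8, "B": 3}   # biased starting data: Area A was over-policed

for week in range(1, 11):
    # The "prediction" is simply: patrol the area with the larger recorded history.
    patrolled = max(recorded, key=recorded.get)
    for area in recorded:
        # Crimes are fully recorded only where the patrol is present.
        recorded[area] += TRUE_CRIMES if area == patrolled else REPORTED_WITHOUT_PATROL
    print(f"week {week}: patrol sent to {patrolled}, records now {recorded}")

# Area A's recorded total pulls further ahead every week, which the system
# treats as confirmation, even though both areas have the same true crime rate.
```

Even in this toy setting, the over-policed area's record grows fastest, so a model trained only on recorded incidents keeps directing patrols there.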
From a human rights perspective, predictive policing is dangerous because it shifts law enforcement from responding to actual wrongdoing to subjecting people to suspicion and intervention for what an algorithm predicts they might do. This undermines due process and the presumption of innocence, both of which are essential for justice. Technology should never be used to justify surveillance that erodes basic freedoms.