The false promise of objectivity: Examining bias in predictive policing

Authors: Silván Oskar Waidmann Irastorza, Sofija Mitrova, Arianna Congedi, Gilles Jacot, Cynthia Jucker, Elisabeth Hubmann, Simon Knoch, Katja Blaser, Dorota Grajewska


Bringing together 25 young thinkers from a variety of academic fields, we used Policy Kitchen to develop visions of how artificial intelligence is likely to affect our futures. This four-part blog series on how novel tools will shape our lives and societies was made possible by the Hirschmann Foundation.

Science & Tech

Predictive policing refers to the use of data analysis and machine learning to predict crime. What is often portrayed as science fiction has already become reality in Switzerland – albeit in a less spectacular and more bureaucratic way. Attempts to predict crime are nothing new, but digitisation and the hype around machine learning have spearheaded broader rollouts. Databases and algorithms have been part of criminal investigations for years, but advances in AI technology have now led police forces to attempt to foresee where, when and by whom a crime will be committed, before any law has even been broken. Swiss researchers Ahmed Ajil and Silvia Staubli have highlighted that this marks a shift from reactive policing to proactive police work aimed at quelling potential harm and preventing criminal acts from occurring in the first place.

Reporting by the online newspaper Republik shows that Switzerland is a pioneer in predictive policing. Swiss police forces regularly use risk assessment questionnaires such as the Swiss tool Octagon, the German DyRiAS software or the Ontario Domestic Assault Risk Assessment (ODARA) framework, in digital or analogue form, to make person-based assessments. The most controversial prediction tool currently in use in Switzerland is the location-based PRECOBS (Pre-Crime Observation System) software, which relies on a complex algorithm to find patterns in the occurrence of home break-ins. PRECOBS is used, among others, in the city of Zurich to generate heat maps of geographical areas where an increased risk of burglary is suspected.

Hand in hand: potential and problems in smart criminal justice technology

Algorithms and analyses of big databases are set to become even more deeply integrated into policing as technology grows more powerful. In a 2018 SRF News article, representatives of investigative authorities cite early detection and threat assessment tools as ways to prevent tragic cases of violent crime in which the perpetrators were previously known to the police, such as cases where reports of domestic violence preceded a femicide. Police departments further argue that burglary heat maps, such as those generated by PRECOBS, allow for more efficient patrol deployment. But what do such efficiency gains cost innocent people?

A study by the University of St. Gallen (HSG) on Switzerland’s pioneering role in predictive policing within the region highlighted two aspects of smart criminal justice technology: these tools develop future scenarios through probability calculations based on past data, and they rest on the assumption that crimes are subject to statistical regularities. Researchers have called into question the general validity of the patterns underlying smart policing software.
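To make these two assumptions tangible, here is a deliberately minimal sketch in Python. It is not how PRECOBS or any deployed system actually works – the grid cells, counts and alert threshold are invented – but it captures the logic the study describes: a probability is computed from past records and assumed to hold for the future.

```python
# A deliberately simplified, hypothetical sketch of location-based forecasting:
# treat past recorded frequency as next week's probability and flag cells above
# a threshold. All cell names, counts and the threshold are invented.
PAST_WEEKS = 52
ALERT_THRESHOLD = 0.25

# Recorded (not necessarily true) break-in counts per grid cell over the past year.
recorded_breakins = {"cell_A": 18, "cell_B": 4, "cell_C": 26}

for cell, count in recorded_breakins.items():
    estimated_prob = count / PAST_WEEKS  # statistical-regularity assumption
    status = "patrol alert" if estimated_prob >= ALERT_THRESHOLD else "no alert"
    print(f"{cell}: estimated weekly risk {estimated_prob:.2f} -> {status}")
```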

Furthermore, in recent years, many AI companies have been accused of using biased and therefore discriminatory algorithms. Those used in predictive policing are no exception. Even with perfectly fair algorithms, predictive policing tools would behave in discriminatory ways, because the underlying data and statistics are the result of decades of potentially discriminatory policing, rooted in societal issues and police prejudice.

How robo-cop became racist: examining the illusion of technological objectivity

Let us retrace how such biases arise and why we cannot trust in technology alone to make objective choices:

In her book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”, Cathy O’Neil argues that not all groups in society can rely on the police to treat them fairly. This leads whole communities – for example ethnic minorities – to stop reporting crimes committed against them. The result is a flawed dataset that over-represents crimes against the affluent.

Another reason for concern with location-based forecasting is that stigmatisation and profiling lead to self-fulfilling prophecies. These manifest themselves in so-called feedback loops: the strategic deployment of police forces in a particular neighbourhood leads to more crime being found in that area, which in turn influences and justifies the strategic decision to increase police presence in that neighbourhood. Such up- or downgrading bias is not purely due to AI tools; one study found that areas marked as high risk by prediction models often already had an increased police presence resulting from the bias of human officers. Similar psychological mechanisms were also registered in research with Swiss police: in interviews for the HSG study, respondents stated that they “valued algorithmic tools because when they confirmed subjective findings, they lent confidence to the decision”.
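A minimal simulation sketch makes the feedback loop visible. It does not model any deployed system – the districts, rates and starting counts are invented – but it shows how a purely record-driven patrol strategy amplifies an initial imbalance even when both districts have exactly the same underlying crime rate.

```python
import random

random.seed(0)

TRUE_CRIME_RATE = 0.3    # identical underlying crime rate in both districts
BASE_DETECTION = 0.3     # chance an incident is recorded under routine patrols
HOTSPOT_DETECTION = 0.8  # chance an incident is recorded under intensified patrols

# Historical records are already skewed: district A was policed more heavily in the past.
recorded = {"district_A": 60, "district_B": 40}

for year in range(10):
    # "Predictive" step: flag the district with the most recorded crime as the hotspot.
    hotspot = max(recorded, key=recorded.get)
    for district in recorded:
        detection = HOTSPOT_DETECTION if district == hotspot else BASE_DETECTION
        # Both districts produce the same number of true incidents ...
        incidents = sum(random.random() < TRUE_CRIME_RATE for _ in range(100))
        # ... but the hotspot's incidents are far more likely to be recorded.
        recorded[district] += sum(random.random() < detection for _ in range(incidents))

print(recorded)  # the initial gap has widened, "confirming" the hotspot label every year
```

Because the hotspot label is refreshed each year from the very records it helped create, the initial gap never corrects itself – it grows, without any new information about actual crime.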

Person-based programmes have also been found to produce inaccurate results. For example, some programmes deployed – though not developed – in the Swiss context have been found to flag gun owners as threats, even though firearm ownership is common in Switzerland due to the country’s mandatory military service. In addition, O’Neil warns of the discrimination that results from the practice of using proxies: instead of a person’s race or nationality, a programme uses their postcode. But in segregated cities, a particular postcode often indicates a high likelihood of belonging to a particular ethnic minority group.
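A stylised numerical example – using invented numbers, not real Swiss data – shows why removing the protected attribute does not remove the discrimination when a postcode proxy remains: the model never sees who belongs to a minority, yet its postcode-based scores differ sharply between the two groups.

```python
# Invented numbers for a hypothetical, strongly segregated city: the model never
# sees ethnicity, only the postcode – yet the two are almost interchangeable.
residents = (
    [("8001", False)] * 95 + [("8001", True)] * 5 +   # (postcode, belongs_to_minority)
    [("8050", False)] * 15 + [("8050", True)] * 85
)

# A postcode-based "risk score" learned from past (already biased) records.
risk_by_postcode = {"8001": 0.1, "8050": 0.7}

def average_risk(minority: bool) -> float:
    scores = [risk_by_postcode[pc] for pc, m in residents if m == minority]
    return sum(scores) / len(scores)

print(f"average score, majority residents: {average_risk(False):.2f}")  # ~0.18
print(f"average score, minority residents: {average_risk(True):.2f}")   # ~0.67
```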

Faced with the inaccuracy of predictive policing’s results, the Swiss police often argue that the final decision is always made by a real, human officer, thus minimising false suspicions; the algorithms merely serve as a helping hand. However, error-prone programmes should not be used to replace or justify human decisions. In addition, Republik is concerned that the programmes’ results could lead police officers to develop tunnel vision, and that algorithms could strengthen confirmation biases by lending seeming legitimacy to prejudiced suspicions. In light of the discriminatory risks identified here, strict regulation – and in some cases bans – of certain law enforcement technologies, along with a rethinking of established practices, are essential to ensure the safety of communities that already face discrimination. Furthermore, the privacy invasion and bias of real-time facial recognition should be prevented at all costs.

The concept of algorithmic accountability requires that developers as well as software users comply with anti-discrimination laws. A strict and nationally unified application of this principle to predictive policing tools would set limits on experimentation with these dangerous technologies. To promote justice, we should go beyond simply acknowledging biases and feedback loops: we need to hold institutions accountable for the discriminatory data they produce, especially in fields like policing, where discriminatory algorithms are evidence of systemic injustices and long-standing discriminatory practices. Biased algorithms should be recognised as a symptom of these larger issues, and addressing them means holding the institutions behind them accountable.

We thank the Hirschmann Foundation for their kind support of this four-part blog series.

The cover image was created by foraus using an AI image generator.