Reports  |  April 1, 2016

Stuck in a Pattern: Early evidence on “predictive policing” and civil rights

Report by David Robinson & Logan Koepke.
Prepared for Upturn.

Executive Summary:

The term “predictive policing” refers to computer systems that use data to forecast where crime will happen or who will be involved. Some tools produce maps of anticipated crime “hot spots,” while others score and flag people deemed most likely to be involved in crime or violence.

Though these systems are rolling out in police departments nationwide, our research found pervasive, fundamental gaps in what’s publicly known about them.

How these tools work and make predictions, how they define and measure their performance, and how police departments actually use these systems day-to-day are all unclear. Further, vendors routinely claim that the inner workings of their technology are proprietary, keeping their methods a closely held trade secret, even from the departments themselves. And early research findings suggest that these systems may not actually make people safer — and that they may lead to even more aggressive enforcement in communities that are already heavily policed.

Predictive policing systems typically rely, at a minimum, on historical data held by the police — records of crimes reported by the community, and of those identified by police on patrol, for example. Some systems seek to enhance their predictions by considering other factors, like the weather or a location’s proximity to liquor stores. However, criminologists have long emphasized that crime reports, and other statistics gathered by the police, are not an accurate record of all the crime that occurs in a community; instead, they are partly a record of law enforcement’s responses to what happens in a community. This means that predictive systems that rely on historical crime data risk fueling a cycle of distorted enforcement.
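The feedback cycle described above can be illustrated with a toy simulation. This sketch is our own illustration, not a model of any vendor's actual system; all parameters (crime rates, patrol counts, detection probabilities) are invented for the example. Two areas have identical true crime rates, but one starts with a higher *recorded* count. Patrols are then allocated in proportion to recorded crime, and more patrols mean more offenses get recorded — so the initial disparity perpetuates itself.

```python
import random

random.seed(0)

# Hypothetical parameters, chosen only for illustration.
TRUE_CRIME_RATE = 0.1         # same underlying offense rate in both areas
POPULATION = 1000             # residents per area
TOTAL_PATROLS = 100           # patrol units allocated each round
DETECTION_PER_PATROL = 0.005  # chance an offense is recorded, per patrol unit

# Area B starts with a higher *recorded* count (e.g., historical
# over-policing), even though true crime is identical in both areas.
recorded = {"A": 50, "B": 60}

for step in range(20):
    total = recorded["A"] + recorded["B"]
    # "Predictive" allocation: patrols follow past recorded crime.
    patrols = {area: TOTAL_PATROLS * recorded[area] / total for area in recorded}
    for area in recorded:
        # More patrols in an area -> higher chance each offense is recorded.
        detection_prob = min(1.0, patrols[area] * DETECTION_PER_PATROL)
        new_records = sum(
            1 for _ in range(POPULATION)
            if random.random() < TRUE_CRIME_RATE * detection_prob
        )
        recorded[area] += new_records

# Area B keeps accumulating records faster, and so keeps drawing more
# patrols, even though true crime never differed between the areas.
print(recorded)
```

In this toy model the recorded gap between the two areas keeps widening round after round, because the data the "prediction" consumes is itself a product of where police were sent — the distortion the criminologists cited above warn about.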

Predictions that come from computers may be trusted too much by police, the courts, and the public. People who lack technical expertise have a natural and well-documented tendency to overestimate the accuracy, objectivity, and reliability of information that comes from a computer, including from a predictive policing system. As one RAND study aptly put it, “[p]redictive policing has been so hyped that the reality cannot live up to the hyperbole. There is an underlying, erroneous assumption that advanced mathematical and computational power is both necessary and sufficient to reduce crime [but in fact] the predictions are only as good as the data used to make them.”1

The fact that we even call these systems “predictive” is itself a telling sign of excessive confidence in them. The systems really make general forecasts, not specific predictions. A more responsible term, and one that more accurately evokes the uncertainty inherent in these systems, would be “forecasting.”

The systems we found also appear not to track details about enforcement practices or community needs, which means that departments are missing potentially powerful opportunities to assess their performance more holistically and to avoid problems within their ranks.

In an overwhelming majority of cases, departments operate predictive systems with no apparent governing policies, and open public discussion about the adoption of these systems seems to be the exception to the rule. Though federal and state grant money has helped fuel the adoption of these systems, that money comes with few real strings in terms of transparency, accountability, and meaningful public involvement.

In our survey of the nation’s 50 largest police forces, we found that at least 20 of them have used a predictive policing system, with at least an additional 11 actively exploring options to do so. Yet some sources indicate that 150 or more departments may be moving toward these systems with pilots, tests, or new deployments.

Our study finds a number of key risks in predictive policing, and a trend of rapid, poorly informed adoption in which those risks are often not considered. We believe that conscientious application of data has the potential to improve police practices in the future. But we found little evidence that today’s systems live up to their claims, and significant reason to fear that they may reinforce disproportionate and discriminatory policing practices. [ . . . ]