
Big Data Policing and the "Predictive Policing" Movement

Let's dive deep into this topic as we examine what big data policing is and why it matters. Ask yourself: before you read this article, how much did you know about big data policing? Most of you knew very little, if anything at all. The problem I see, often in the United States, is that most people are ignorant of serious technological changes yet super informed about what Beyoncé wore to the Grammys. If you don't believe me: net neutrality. Yup! A policy that was horribly revoked right under the American population's nose. It's all fun and games until you can't afford to stream the Grammys (a bit of satire for those of you who are informed).


Big Data Policing Explained

So what exactly is "big data policing"? The basic model is a database composed of data collected with your consent but without your knowledge. Let's let that marinate for a second. The data is then processed through algorithms with the notion that they will "predict" something about the people and environments from which the data was collected. This technology has been used and tested in many locations across the world.
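To make that basic model concrete, here is a minimal sketch in Python of what such a pipeline could look like. Every field, weight, and name in it is something I invented for illustration; it is not any department's actual system.

```python
# A toy sketch of the basic model: a database of records collected
# about people feeds an algorithm that emits a "prediction".
# All fields, weights, and names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Record:
    person_id: str
    location: str
    prior_contacts: int      # e.g., past police stops
    flagged_associates: int  # e.g., flagged social connections

def predict_risk(r: Record) -> float:
    """Toy scoring rule: a weighted sum of collected attributes."""
    return 0.6 * r.prior_contacts + 0.4 * r.flagged_associates

database = [
    Record("p1", "downtown", prior_contacts=3, flagged_associates=1),
    Record("p2", "suburb", prior_contacts=0, flagged_associates=0),
]

for r in database:
    print(f"{r.person_id} ({r.location}): risk {predict_risk(r):.1f}")
```

The point is not the arithmetic; it's that whatever ends up in the database, however it got there, drives the "prediction."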


In China, the government has been using this technology since 2016. More complex than our system, "the 'predictive policing' platform combines feeds from surveillance cameras with other personal data such as phone use, travel records and religious orientation, and then analyzes the information to identify suspicious individuals" (Chin). The government argues that the technology is used to crack down on suspected extremists; however, reports dispute that claim:

"For the first time, we are able to demonstrate that the Chinese government's use of big data and predictive policing not only blatantly violates privacy rights but also enables officials to arbitrarily detain people," said Maya Wang, a Hong Kong-based researcher at HRW. (Reuters)


I will let that speak for itself.

"Predictive policing" technology is also in use in the UK where their bail system utilizes predictive technologies. The large critique here is that it is AI(Artificial Intelligence) profiling. In the UK they saw poor people being targeted with this trial-and-error system to the point where area code of addresses was removed from the algorithms data feed to avoid bias based of off where you live. As you know, area code can say a lot about a person. If you don't think that is true: 90210.


But how can an AI be biased? It is designed by people, and people are, by nature, biased. "So society needs to maintain a critical perspective on the use of AI on moral and ethical grounds. Not least because the details of the algorithms, data sources and the inherent assumptions on which they make calculations are often closely guarded secrets." (Feldman) Just take a second and think about all the data out there collected with your permission but without your knowledge. This theory again? Absolutely!
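And removing location data doesn't necessarily remove the bias. Here is a hypothetical toy simulation, assuming two neighborhoods with the same true offense rate but unequal patrol intensity. The model never sees the neighborhood, only "recorded police contacts," yet its scores still diverge, because patrol intensity leaks into that feature. Every number here is invented.

```python
import random

random.seed(42)

# Hypothetical setup: identical true offense rates, unequal patrols.
TRUE_OFFENSE_RATE = 0.05
PATROL_INTENSITY = {"A": 0.8, "B": 0.4}  # A is watched twice as closely

def recorded_contacts(neighborhood, months=60):
    """An offense only becomes a 'contact' if a patrol observes it."""
    contacts = 0
    for _ in range(months):
        offended = random.random() < TRUE_OFFENSE_RATE
        observed = random.random() < PATROL_INTENSITY[neighborhood]
        if offended and observed:
            contacts += 1
    return contacts

def risk_score(contacts):
    """A 'postcode-blind' score based only on recorded contacts."""
    return min(500, 1 + contacts * 100)

for hood in ("A", "B"):
    scores = [risk_score(recorded_contacts(hood)) for _ in range(1000)]
    print(f"Neighborhood {hood}: mean score {sum(scores) / len(scores):.0f}")
```

Neighborhood A ends up with roughly double the average score of B, even though, by construction, its residents offend at exactly the same rate.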


Thoughts on Big Data Policing

In an interview that Andrew Ferguson, author of Big Data Policing in the Big Apple and The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement, gave to the Cato Institute in December 2017, he explained initiatives within police forces to use big data predictive systems. They discuss technology put in place in Chicago that provides a heat map of where crime is predicted to occur and to what extent. This gives officers the ability to see patterns in crime and adjust which neighborhoods they police and when. It provides a "predictive forecast" of the crime for that day. What is this, the predictive policing weather channel? I think they are forgetting that crime happens even in the most prestigious of neighborhoods. There is a fine line between analysis and profiling. This sounds like profiling to me, but wait, there's more. The technology also assigns people an "at risk score" from 1 to 500, which identifies whether a person is at risk of being involved in a crime. We can't forget to add that this technology fails to distinguish between victim and perpetrator; "at risk" is at risk.
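As a rough illustration of what a "predictive forecast" could look like under the hood, here is a minimal sketch that bins hypothetical past incidents into a grid and ranks cells by count. The coordinates, grid size, and scoring are all invented; this is emphatically not Chicago's actual system.

```python
from collections import Counter

GRID = 4  # carve the city into a 4x4 grid of cells

# (x, y) coordinates of hypothetical past incidents, scaled to 0..1
incidents = [(0.10, 0.20), (0.15, 0.22), (0.80, 0.90), (0.12, 0.18),
             (0.50, 0.50), (0.11, 0.25), (0.82, 0.88), (0.13, 0.21)]

def cell(x, y):
    """Map a coordinate to its grid cell."""
    return (min(int(x * GRID), GRID - 1), min(int(y * GRID), GRID - 1))

counts = Counter(cell(x, y) for x, y in incidents)

# "Today's forecast": cells ranked by historical incident counts.
for c, n in counts.most_common():
    print(f"cell {c}: {n} past incidents -> {n / len(incidents):.0%} of recorded crime")
```

Notice what the forecast actually is: yesterday's reports, re-plotted. Cells with no patrols and no reports stay quietly "low risk."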


Logic Mic Drop

I have many holes to poke in this logic, but the first is something we have all experienced at least once in our lifetime: being in the wrong place at the wrong time. I have two words: Trayvon Martin. What if you were traveling through a "high crime" neighborhood and you were stopped because you had a high "at risk" score? The officer cannot tell if you are the victim or the perpetrator. Even bigger questions: Where do these scores come from? Can they change easily, like credit scores? What data is collected that can tell me how "at risk" I am, and why can't it be used for more important things, like stopping sexual assault? It seems to me that this is a giant guessing game, and we pay the price for the false accusations. They have even admitted that "it makes sense to target them" but "we don't know if it works"; their justification is that people in these dangerous communities don't trust the police and are likely to take justice into their own hands. (Ferguson) Sounds like a cycle.


Let's dive into the next can of worms. Crime will happen regardless. My prediction is that we will start to see a surge in IQ among the perpetrators of these crimes the more this technology is used. The perpetrators will change from your average Joe Schmoe to someone who can analyze the patterns of the police officers and predict an unmonitored location. But let's get back to the facts. The fact of the matter is that it will take years for this technology to observe enough data to be accurate. And what is accuracy?


What Do I Agree With?

I agree with Ferguson when he says police have a hard job. People want them to decrease crime when that should be a primary political issue. We need to prevent it at the root by opening more schools and getting more community resources to get kids off the streets. (Ferguson) But the reality is that those police officers are dealing with what they have. And what they have is a technology that's being forced on them by technologists who believe they can help. (Ferguson)


What Needs to Change?

The reality is that this technology is invasive. Technologists don't think about how it infringes on personal liberties. They don't think about the moral or ethical dilemma. They don't think about whether the trade-off is even worthwhile for this predictive software. Technology has already come so far. The reality is that "everyone has a spy in their pocket called a smartphone" (Ferguson), and we have rules about what data can be collected but not about what can be done with it. So before we dive deeper into the realm of AIs and predictive algorithms, let's figure out what we really expect to see from these programs.


Police departments should test this technology not by using it to change their behavior, but by letting it run in the background and comparing its predictions against statistical evidence daily. For example, if a house is robbed in a place that came up as a high-risk area, that would support the AI. When people are convicted of crimes, the police should run the AI and see how accurate it was. This check-and-balance system would help counteract bias among officers as well as properly test the algorithm.
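Here is a minimal sketch of that background-testing idea, assuming hypothetical daily logs of flagged areas and actual incidents; the metric names and data are invented for illustration.

```python
# Shadow-mode scoring: log predictions without acting on them,
# then compare against where incidents actually occurred.

def precision(predicted: set, actual: set) -> float:
    """Fraction of flagged areas that really saw an incident."""
    return len(predicted & actual) / len(predicted) if predicted else 0.0

def recall(predicted: set, actual: set) -> float:
    """Fraction of incident areas the model managed to flag."""
    return len(predicted & actual) / len(actual) if actual else 1.0

# One hypothetical day of logs:
predicted = {"cell_3_1", "cell_0_0", "cell_2_2"}  # flagged high-risk
actual = {"cell_0_0", "cell_1_3"}                 # where crime occurred

print(f"precision: {precision(predicted, actual):.0%}")
print(f"recall:    {recall(predicted, actual):.0%}")
```

Tracked daily, numbers like these would tell a department whether the algorithm earns any trust before it is allowed to steer a single patrol.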


Works Cited

Chin, Josh. "About to Break the Law? Chinese Police Are Already On To You." The Wall Street Journal, Dow Jones & Company, 27 Feb. 2018.


Feldman, Noah. "Artificial Intelligence in Policing: Advice for New Orleans and Palantir." Bloomberg.com, Bloomberg, 28 Feb. 2018.

Ferguson, Andrew, and Caleb O. Brown. “The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement.” Cato Institute, 14 Dec. 2017.

Reuters, and Thomas Peter. “Report: China Using 'Big Data' to Crack down on Suspected Extremists.” Public Radio International, OZY Media News, 1 Mar. 2018.


Rowe, Mike. “AI Profiling: the Social and Moral Hazards of 'Predictive' Policing.” The Conversation, The Conversation US, Inc., 7 Mar. 2018.


