Predicting crime with big data is profiling and will not help predict big crime

While I was writing my last post, I was also thinking of this article about using big data to predict crime, but I didn’t find a way to fit it into that narrative.  The external article was a review of another, more detailed study that I did not read.  That study investigates the utility of anonymous mobile data to predict crime.  The connection with my earlier post is that the crimes predicted are probably not the more serious and socially disruptive ones we should be most concerned about.  The crimes we most want to predict are the far more damaging ones: widespread fraud, white-collar crimes (such as Ponzi investment schemes), community intimidation, or terrorism.  Because these crimes are much rarer than individual-level crimes, there is little or no data to analyze.  Without data to analyze, predictive analysis is not going to be helpful for these more serious and destabilizing crimes.  The parallel with my previous post is that this project will not find the more frightening kind of crime.

The focus of using big data to predict crime appears to be on individual-level crimes of theft, robbery, assault, or even murder or rape.  The following is the quote that the above article highlights from the study:

The main contribution of the proposed approach lies in using aggregated and anonymized human behavioral data derived from mobile network activity to tackle the crime prediction problem. While previous research reports have used either background historical knowledge or offenders’ profiling, our findings support the hypothesis that aggregated human behavioral data captured from the mobile network infrastructure, in combination with basic demographic information, can be used to predict crime. In our experimental results with real crime data from London we obtain an accuracy of almost 70% when predicting whether a specific area in the city will be a crime hotspot or not. Moreover, we provide a discussion of the implications of our findings for data-driven crime analysis.
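As a rough sketch of what this kind of area-level hotspot classification might look like, consider the following.  The feature names, data, and model here are my own invented placeholders, not the study’s actual pipeline; the point is only to show the shape of the problem: aggregated activity and demographic features per area, and a yes/no hotspot label.

```python
# Minimal sketch of an area-level "crime hotspot" classifier.
# Feature names and data are synthetic placeholders; the study's actual
# features and methodology are not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_areas = 2000

# Hypothetical aggregated features per geographic cell:
# night-time mobile activity, daytime mobile activity,
# population density, and median age (basic demographics).
X = np.column_stack([
    rng.poisson(50, n_areas),          # night-time call/data events
    rng.poisson(200, n_areas),         # daytime call/data events
    rng.normal(5000, 1500, n_areas),   # residents per sq km
    rng.normal(35, 8, n_areas),        # median age
])

# Synthetic labels: 1 = "hotspot", 0 = "not a hotspot".
# The label is just a noisy function of the fake features, standing in
# for historical crime reports.
score = 0.02 * X[:, 0] - 0.01 * X[:, 3] + rng.normal(0, 1, n_areas)
y = (score > np.median(score)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```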

While police departments may welcome innovations to improve their ability to police these crimes, the suggested use of big data is only a modest incremental improvement over the tools they have already been using for a long time.  It does not take much computing power to conclude that certain demographic groups in certain locations deserve more policing, because we have abundant police reports that locate more crimes in those areas and groups.  The fact that there is more crime means there is more data, and that data is easy to interpret when allocating policing resources.

We often criticize the older approaches with terms such as racial or ethnic profiling.  These criticisms receive much debate as we attempt to balance fairness with the demographic statistics about the prevalence of crime.  In my opinion, the criticisms make a valid point: this type of profiling results in unfair and repeated harassment of innocent, law-abiding citizens who happen to share the qualities of the targeted demographic.

The study presents a different way to establish demographics, using clues drawn from modern mobile-communications data.  The approach uses anonymized metadata to preserve privacy protections.  The innovation is to define more subtle distinctions between demographic groups in order to better separate the innocent, law-abiding group from the criminal group.  However, even with the impressive 70% accuracy noted in the above quote, roughly 30% of the classifications will be wrong, and the people in the wrongly flagged group will be subject to unfair police harassment because they will appear likely to commit a crime.
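To make the arithmetic behind that worry concrete, here is an illustrative calculation with entirely invented numbers; none of them come from the study.  It simply shows how a 70%-accurate classifier can still flag many safe areas, and the people living in them, incorrectly.

```python
# Illustrative arithmetic only: the area counts and rates below are
# invented to show how a 70%-accurate classifier still produces many
# false positives; they are not taken from the study.
total_areas = 1000          # hypothetical number of city cells scored
true_hotspots = 200         # hypothetical cells that really are hotspots
accuracy = 0.70

# Assume errors are spread evenly across both classes (an assumption).
error_rate = 1 - accuracy
false_positives = error_rate * (total_areas - true_hotspots)  # safe areas flagged
false_negatives = error_rate * true_hotspots                  # hotspots missed

print(f"safe areas wrongly flagged as hotspots: {false_positives:.0f}")
print(f"real hotspots missed:                   {false_negatives:.0f}")
# With these made-up numbers, 240 safe areas get extra police attention --
# the kind of systematic error that falls on innocent residents.
```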

The purported increase in accuracy of these new big-data-derived demographic profiles will likely encourage harsher and more prolonged police harassment of innocent individuals who happen to fall into the more precisely defined categories.

Using big data to identify finely distinguished demographics that are more prone to committing crime does not change the fact that this is profiling.  Profiling is controversial.  Using big data in this way is likely to make the controversy more heated as we learn of harsher and more prolonged harassment of innocents with the misfortune of being categorized in some demographic with a higher likelihood of committing crime.  It is not progress if we reduce crime in a way that increases social disapproval and unrest.

We already have some form of demographic approach for identifying groups and locations with a higher likelihood of crime.  We already use this data to allocate policing resources, to regularly patrol those areas, and to be nearby when a crime is reported.  Over time, we have refined policing practices to delay police action until a crime is in progress or its progression is imminent.  While we may welcome additional data tools to fine-tune this process, at best this will be only an incremental improvement over what we already have.

In order to achieve a radical change in policing, we would have to adopt a more aggressive approach that stops criminals before they commit the crime.  The above article makes this point by alluding to the film “The Minority Report” to suggest that a big data approach can eliminate crime by stopping a criminal before he commits a crime.  This preemptive policing effectively charges a defendant with a crime that has not yet been committed.  This is the worst kind of profiling and merits our strongest objections.  But I suspect we will experiment with it anyway, especially in the current environment of heightened optimism about the benefits of big data predictive algorithms.  The bad consequences will take longer to become apparent.

Replacing ethnic or racial stereotyping with big-data-generated demographic distinctions is not going to change the fact that we are prejudging people before they commit a crime.  Even if those charged would have eventually committed a crime, we will object to premature demographic-based harassment or arrest.

I have two concerns about the introduction of big data for crime prediction.

The first concern is about an eventual breakdown in social cohesion as we learn of the harm caused by our earlier acceptance of crime prediction based on big data.  We may initially accept the big data approach because it appears to resolve the existing controversial form of profiling based on racial or ethnic identity.  Eventually, we will learn of stories of even harsher and more prolonged harassment of innocents who happen to share a targeted demographic defined by big data.  By the time we notice, the large number of documented cases of this unfairness will overwhelm us and motivate strong condemnation or socially disruptive protests.

The second concern is that the investment focuses on the wrong type of crimes.  Without diminishing the harm of individual-level crimes, we already have reasonably effective methods to manage that problem in a way that results in tolerable crime rates.  We should instead invest in improving our ability to manage crimes against large populations: organized crime, serial criminals, large-scale fraud schemes, and terrorism against populations.  These social-level crimes consistently catch us by surprise and result in great damage.  We never suspected the perpetrators or anticipated the scale of their crime.  Even after the crime occurs, we often fail to prosecute the most culpable individuals: the ones who conceived and led the crime.

The category of crimes that would most benefit from improved predictions is the one that creates victims out of a large number of people.  These crimes are very rare.  When such a crime does occur, its details lack historical precedent.  Large-scale crimes are inherently surprising, in part because the perpetrators are intelligent people who innovate new ways of pursuing their goals.

Big data analytics is inherently unable to help with this larger-scale crime problem because, almost by definition, there is no historical precedent for the next crime.  The promoted advantages of big data are not available when we lack data relevant to an innovative crime against a population.

The promises of big data may encourage us to attempt to use big data tools for predicting big crimes.  The analytic tools are readily available and affordable, and many have earned credibility with successes on other types of data.  In order to use the same tools for predicting big crime, we will need to invent data.  To make up for the lack of historical precedents for a big crime, we will use computer simulations that model big-crime scenarios.  These simulations can produce enough data to satisfy the statistical needs of predictive analytic tools.  The problem is that this is invented data instead of historical data.

The value of predicting crimes is obtained by preemptive harassment or arrest of the likely perpetrators.  If we instead wait for the crime to occur, then the crime itself provides evidence so there is no need for a prediction.  Preemptive prosecution requires a high degree of trust in the analytic results.

The problem with predicting big crime is that we need to supply invented data to make up for the lack of historical data.  In many of my earlier posts on data science, I characterized model-generated data as dark data, in analogy to cosmology’s invented data for missing matter (dark matter) or missing energy (dark energy).  Dark data is invented data that substitutes for missing data.  I criticize dark data as inherently dangerous in most data science projects because they lack the resources we devote to cosmology in the form of financing large communities of scientists.  In most cases, model-generated data (what I call dark data) is no substitute for historical data when using predictive analytics.  Often the models that generate the data will reinforce the models underlying the predictive analytics because both use algorithms conceived by humans.

Feeding model-generated data to predictive analytics is unlikely to uncover a crime that would surprise us.  After all, the model that generates the data presupposes some crime occurring.  We don’t need a predictive algorithm to tell us what we already had to know in order to produce the data in the first place.  The hope is that if we simulate a large enough variety of big crimes we may find some common pattern that will apply to an unexpected crime.  This may be possible, but I am not convinced it will be any more effective than what we are already doing through other means.  Again, I point out that in order to produce a high-fidelity simulation we had to already know where the weak points are.  We could skip the simulation-then-predict approach and start addressing those already-known weak points directly.
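A minimal sketch of that circularity, with rules and thresholds I invented purely for illustration, looks like this: a simulator encodes an assumed fraud pattern, and the model trained on the simulator’s output merely rediscovers the assumption we wrote into the simulator.

```python
# Sketch of the "dark data" circularity: a hand-written simulator encodes
# an assumed fraud pattern, and the model trained on its output can only
# rediscover that same assumption.  All rules and thresholds are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_scenarios = 5000

# Simulated "big crime" scenarios: each row is a hypothetical organization
# described by made-up risk indicators (e.g. leverage, opacity, growth).
X_sim = rng.normal(size=(n_scenarios, 3))

# The simulation's own rule for which scenarios end in a large fraud.
# This rule *is* the analyst's prior belief, written as code.
y_sim = (1.5 * X_sim[:, 1] + 0.5 * X_sim[:, 2]
         + rng.normal(0, 0.5, n_scenarios)) > 1

model = LogisticRegression().fit(X_sim, y_sim)
print("learned weights:", model.coef_.round(2))
# The learned weights simply approximate the coefficients the simulator
# used (roughly [0, 1.5, 0.5] up to scaling), so the "prediction" tells us
# nothing we did not already assert when we wrote the simulator.
```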

Applying predictive analytics with simulated data to predict big crime has a high risk of failing.  Despite our investment in this technology, we may not enjoy any reduction in how often we are surprised by successfully executed big crimes.  Equally troubling is that this approach can misidentify suspects for a crime that had little chance of actually succeeding.  The latter case is what causes complaints of unfair prosecution or demographic-based discrimination.  Those complaints can get out of control, with socially destabilizing consequences.

The recent promotion of big data is best recognized as a reinforcement of our long-standing respect for evidence in the form of actual historical data.  This respect for evidence includes a presumption of innocence.  We also demand that a crime actually occur before we prosecute someone.  These concepts of justice restrain the predictive value of big data for preventing crimes, and especially big crimes, because we lack evidence to overcome reasonable doubt.  As long as we retain our demand for compelling evidence to overcome reasonable doubt, big data techniques offer only limited incremental value over existing techniques.

My concern is that the excessive promotion of big data will convince us that our old concepts of evidence and crime are obsolete.  This promotion demands that we trust the results of big data predictive analytics even when they use invented data.  In order to fully enjoy the benefits of big data, we must obligate ourselves to follow the big data recommendations to preemptively harass or arrest the targeted demographic predicted to be about to commit some big crime.

We will likely go along with this approach initially.  Eventually, we will accumulate a large number of documented cases of unfair treatment of innocent persons.  We will protest then, when it may already be too late.  Injudicious application of big data techniques to reduce the frequency of big crimes may come at the expense of destabilizing a free society.
