Essay by Tomás Premoli on "The Scope of Artificial Intelligence in Crime Prevention". Presented at the University of Exeter, England (available in English).
Are We Ready for an AI-based Minority Report?


1 Introduction

1.1 Artificial Intelligence (AI) and Society

Since 2015, Artificial Intelligence (AI) has grown exponentially, and its ability to disrupt traditional workplace structures and the role of employees in corporations is unparalleled. The industry is expected to grow from $9.5 billion in 2018 to over $110 billion by 2025, and its profound effects on societies around the globe are only starting to be seen. Despite AI's tremendous potential for increasing productivity, a two-year study conducted by the McKinsey Global Institute projected that by 2030, as many as 800 million jobs could be displaced by the increased levels of automation brought about by artificial intelligence.[13] As this innovation is adopted across industries, many are looking into ways our societies can be improved by these technologies, and recently, a number of states and private bodies have been exploring the use of AI in the criminal justice system.

1.2 The “Minority Report”

Minority Report is a 2002 Steven Spielberg film revolving around a police department that can predict crimes before they take place, allowing it to apprehend criminals before the crimes are committed.[20] The conflicts between advancing technology and political systems portrayed in the film are more relevant now than ever, and the concept of anticipating crimes through predictive systems seems an ever more realistic possibility. Since as early as 2009, China has been working on a system that leverages artificial intelligence for facial recognition, tracking, and predictive algorithms in order to assign its citizens social credit scores, which it has slowly begun incorporating into its policing systems.[22] Beyond that, in 2018, Japan began investing in the development of artificial intelligence systems designed to identify money laundering schemes and terror attacks through path analysis of various vehicles and boats.[7] However, despite the many innovations and improvements brought about by artificial intelligence, it is highly contestable that using these systems in policing is the right way forward. As a whole, current technologies are not yet at a point where an AI-based minority report is a viable and egalitarian solution to crime in our increasingly tech-based societies.


2 The Problem

2.1 Crime as a Social Issue

For decades, crime has been one of the predominant issues plaguing societies around the world. According to a UN report on crime, although the majority of western countries have generally experienced a downward trend in homicide, many developing countries (particularly in the Americas and Africa) have experienced a stark increase over the last 20 years.[5] Beyond the decrease in homicide, however, western countries have generally seen a considerable increase in assault and notable increases in rape and robbery, and developing states have experienced similar rises in these crimes.[10] These trends could be connected to many different issues, particularly gang violence related to drug cartels in developing nations, and increasing wealth inequality across the western world.[17]

2.2 Crime in the Information Age

In recent years, due to increased access to computing and internet technologies, new forms of crime have emerged that are often significantly more difficult to define, track, and convict for, owing to the largely anonymous nature of the web. Offenses such as email scams existed in more rudimentary forms before the internet, but the advent of malware such as viruses and Trojans allows crimes like theft, fraud, and illegal espionage to be conducted far more covertly.[23] Cybercrime has proven a difficult area for many states to legislate against effectively, particularly due to the sheer breadth of potential methods of attack and the many difficulties that come with tracking people on the web. If ever there were a time when new policing strategies would be helpful, it is now.


3 The Solution

A proposed solution to this problem is the implementation of systems that deal with the concept of pre-crime. Pre-crime is a term used to describe crimes that have not yet been committed, and most criminal justice implementations of AI focus on it. The benefit of focusing on pre-crime is that instead of waiting for crimes to happen and then having to track down suspects, authorities can stop the crime from happening in the first place.[14] In some ways, systems targeting pre-crime already exist, specifically when it comes to counter-terrorism. However, the solution presented by an AI-based "minority report" primarily focuses on analyzing behavioral and social patterns to convict for more common crimes, such as homicide, assault, or theft. Such a system could potentially also be used to determine an appropriate length of sentencing, the risk of reoffense, the capability of rehabilitation, and other factors relevant to a case.
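At its core, such a system is a risk classifier trained on historical records. The sketch below is a deliberately simplified illustration of that idea, using invented features and training data; it describes no deployed system.

    # A toy recidivism risk model: a minimal sketch, not any deployed system.
    # The features and training data below are invented for illustration.
    from sklearn.linear_model import LogisticRegression

    # hypothetical features per defendant: [age, prior offenses, employed (0/1)]
    X_train = [[19, 3, 0], [45, 0, 1], [31, 1, 1], [23, 5, 0]]
    y_train = [1, 0, 0, 1]  # 1 = reoffended within two years, 0 = did not

    model = LogisticRegression().fit(X_train, y_train)

    # the "risk score" is the predicted probability of reoffense
    risk = model.predict_proba([[27, 2, 0]])[0][1]
    print(f"Estimated reoffense risk: {risk:.2f}")

Everything that follows in this essay ultimately turns on what goes into X_train and y_train: the records such a model learns from determine whose behavior it flags.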


4 The Case for a Minority Report

4.1 Increased Efficiency

As is the case with most AI-assisted software, the primary advantage of a minority report-style system is the increase in efficiency. Vaak, a Japanese startup, has developed a system that analyzes body language in security camera footage to identify potential shoplifters. Although addressing shoplifting is usually left to store owners or security officers, implementing Vaak's solution in store security camera feeds has the potential not only to catch shoplifting but to prevent it from happening in the first place: if an individual is flagged as suspicious, workers are notified so they can intervene and prevent the goods from being stolen.[12] This is significantly more efficient than the current approach, as in most situations it makes shoplifting much easier to identify and act against. Knowing when a crime is going to happen before it happens allows individuals to take proactive action, reducing the burden of crime in many industries. The government of China has implemented similar protocols to address crime in its major cities, where the use of facial recognition has enabled the state to keep records of individuals' daily activities and automatically send them fines if they are caught jaywalking or breaking traffic laws.[16] Recent developments have even managed to track people based entirely on their gait, removing the need for an individual's face to be visible on camera. According to Nimrod Kozlovski, a cybersecurity consultant and investor, this profiling system has been so effective that since its inception, "successful fraud with credit cards is negligible."[11] Crimes such as shoplifting and traffic violations are difficult to enforce because of the level of attention they demand; by automating these processes, human workload is decreased while enforcement is increased. Beyond that, tracking individuals to such an extent has allowed security systems to function much more effectively overall, which in turn has a positive impact on individuals' wellbeing. These real-life implementations have proven effective in their respective areas, and more widespread solutions could potentially mitigate the effects of crime across the globe.
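In outline, the pipeline described above reduces to scoring observed behavior and alerting staff once a score crosses a threshold. The following is a minimal, hypothetical sketch of that alerting step; Vaak's actual model is proprietary, and the scores and threshold here are invented.

    # A minimal sketch of the alert step described above (Vaak's model is
    # proprietary); the suspicion scores here stand in for the output of a
    # body-language classifier, and the threshold is an invented cutoff.
    SUSPICION_THRESHOLD = 0.8

    def review_person(person_id, suspicion_score):
        """Notify staff when a tracked person's score crosses the threshold."""
        if suspicion_score >= SUSPICION_THRESHOLD:
            print(f"ALERT: person {person_id} flagged "
                  f"(score {suspicion_score:.2f}); staff notified to intervene")

    # scores as they might arrive from the camera-analysis model
    for pid, score in [("A12", 0.35), ("B07", 0.91), ("C03", 0.62)]:
        review_person(pid, score)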


4.2 Counter-Terrorism and Wider Reach

Focusing on pre-crime is already a vital part of counter-terrorism, and the various implementations of predictive AI systems in this area could likely be expanded to wider security goals. When looking at AI in counter-terrorism, it is difficult to ignore SKYNET, an NSA surveillance program that monitored the public mobile records of over 55 million Pakistanis to identify potential terrorists in the country. With this data, the program was able to pin down a pattern of cellular activity (such as location changes, app usage, and calls) correlated with extremist groups, and then take note of those who follow similar patterns. This program, while massively controversial, managed to be extremely accurate in its identification of terrorists, with a false positive rate of just 0.008 percent.[15] One of the primary issues with implementing AI in counter-terrorism is the lack of data; terrorist attacks are very uncommon occurrences, and machine learning works best with large datasets.[21] A false positive rate of just 0.008 percent despite this constraint is therefore a remarkable feat. This investigation into counter-terrorism with AI demonstrates how data as basic as cell phone usage can carry tremendous implications, revealing more than we might expect. If such a system were applied to the general population in order to track crime, states would have significantly more data (and therefore increased accuracy), given how much more common ordinary crime is than terrorism. Beyond that, focusing on pre-crime is a huge development for counter-terrorism, where a single event can have catastrophic human, social, and economic costs. These systems allow people to feel more secure within their country, without many direct sacrifices being made to achieve it.
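Reporting on SKYNET describes a classifier over mobility and usage features drawn from mobile records.[15] As a purely illustrative stand-in for that kind of pattern matching, the sketch below flags users whose behavioral features sit close to a known pattern; the features, values, and tolerance here are all invented.

    # A toy stand-in for the pattern matching described above; SKYNET's
    # actual classifier is not public, and these features are invented.
    def matches_pattern(user_features, flagged_pattern, tolerance=0.15):
        """Flag a user whose behavioral features sit close to a known pattern."""
        diffs = [abs(u - p) for u, p in zip(user_features, flagged_pattern)]
        return max(diffs) <= tolerance

    # hypothetical normalized features: [travel frequency, call volume, SIM swaps]
    known_pattern = [0.9, 0.2, 0.8]
    user = [0.85, 0.25, 0.7]
    print(matches_pattern(user, known_pattern))  # True -> flagged for review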

4.3 Reduction of Individual Bias

Oftentimes, when it comes to sentencing, it is difficult to account for the individual biases of both judges and juries. Humans are flawed creatures; despite everyone's best efforts, all individuals have unconscious biases that affect their judgment. According to a study in the Notre Dame Law Review, "judges, like the rest of us, possess implicit biases" and these biases can "influence judgments in criminal cases."[18] Justice must be impartial to be fair, and if judges' biases are influencing criminal cases, then our current system has to be improved. By employing artificial intelligence in these systems, we can both identify and reduce bias. A Stanford study applied AI-based linguistic models to body camera footage to quantify disparities in the way officers speak to people of different racial categories, finding a bias notably favoring white people: regardless of the officer's race, white community members were addressed with an overall higher degree of respect than their black counterparts.[24] Not only does this study reveal clear biases in how people are treated in routine operations, but it also demonstrates how AI can help diagnose these issues. The study required the analysis of over 1,400 police stops, which, if done by hand, could easily have consumed hundreds of hours of manual work. However, because of AI's effectiveness at processing huge datasets, the researchers were able to draw important, quantitative conclusions that would have been nearly impossible to reach otherwise. By employing artificial intelligence in these analytical fields, and then using the results to improve how the law is applied, we can create a system that is fairer to everyone. Consistency in the criminal justice system can be achieved, and AI provides analytical tools that will assist our efforts to reach it. By incorporating machine learning algorithms into the way policing is conducted, we can potentially mitigate the impact of individual biases and create a more egalitarian system for everyone.
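The analytical step at the heart of such a study is a comparison of model-assigned scores across groups. The toy version below assumes made-up "respect" scores per stop; in the actual study, those scores came from linguistic models applied to body camera transcripts.[24]

    # A toy version of the aggregation step in the Stanford study [24]:
    # compare mean model-assigned "respect" scores per group.
    # The per-stop scores below are invented for illustration.
    from statistics import mean

    respect_scores = {
        "white": [0.71, 0.66, 0.74, 0.69],
        "black": [0.58, 0.61, 0.55, 0.63],
    }

    for group, scores in respect_scores.items():
        print(f"{group}: mean respect score {mean(scores):.2f}")
    # the published analysis ran this kind of aggregation over 1,400+ stops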


5 The Case Against a Minority Report

5.1 Control Issues

One of the big roadblocks to relying on machine learning algorithms is the theory of the black box. A black box is a model for describing a system that takes in an input, processes it, and returns an output; we are not sure of the exact calculations that take place within the box, and can only understand it in terms of the outputs it gives for certain inputs.[9] AI is a "black box" in this sense because it is trained on data rather than explicitly programmed; we are not necessarily sure what exact processing is taking place.[6] In a pre-crime-based system, any error must be resolved as soon as possible: if people are being jailed as a result of an algorithm, a small error in processing could destroy lives. The black-box dilemma means both that these errors can only be identified after they happen and that they are much more difficult to resolve. No machine learning algorithm is 100 percent accurate. Looking back at the example of SKYNET, despite its tremendous accuracy (especially for a machine learning algorithm), it still had a false positive rate of 0.008 percent. On this point, American computer security specialist Bruce Schneier said, "If Google makes a mistake, people see an ad for a car they don't want to buy. If the government makes a mistake, they kill innocents."[8] AI can be employed in low-risk commercial applications without too much fear about the consequences of errors, but once people's lives are at stake, immense care must be taken to ensure that errors do not occur. AI is still a new technology; we are not yet at a point where these programs are accurate enough to account for the harm that errors could cause. These systems must be extremely reliable and maintainable before they can be deployed, and that point has not yet been reached by our industries.
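The scale of this problem is easy to understate. A back-of-the-envelope calculation using the SKYNET figures cited above shows what even a "negligible" error rate means across a monitored population, and gives Schneier's warning its arithmetic:

    # Base-rate arithmetic using the SKYNET figures cited above [15]:
    # even a tiny false positive rate is large at population scale.
    population = 55_000_000        # monitored mobile subscribers
    false_positive_rate = 0.00008  # 0.008 percent

    wrongly_flagged = population * false_positive_rate
    print(f"Innocent people flagged: {wrongly_flagged:,.0f}")  # -> 4,400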

5.2 Collectivisation of Systemic Bias

Despite AI’s ability to decipher huge sums of data, if we are not careful, an AI-based justice system could end up aggregating our individual biases rather than eliminating them. An artificial intelligence’s behavior is entirely a function of the data and methods used to train it; if the training data is skewed in any way, the resulting system can work very differently than intended. COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) was software that used machine learning algorithms to assess criminal defendants and their likelihood of committing further crimes. Its intended use was to estimate recidivism when deciding whether to allow bail; however, it was controversially used to determine sentencing in many situations.[3] According to a data analysis by ProPublica, black defendants “were nearly twice as likely to be misclassified as higher risk [in recidivism] compared to their white counterparts,” with black defendants misclassified as higher risk 45 percent of the time as opposed to white defendants’ 23 percent.[1] Although these errors could be dismissed as a mere flaw in the program, they point to greater issues within the criminal justice system. A report from the United States Sentencing Commission noted that even when accounting for factors such as age, education, and prior criminal history, black males serve sentences approximately 20% longer than white males with similar backgrounds convicted of the same crime.[2] This sort of data skewed the COMPAS system to treat black defendants with less leniency than white defendants, making it a reflection of systemic bias rather than a solution to it. Despite the expectation that employing a machine learning algorithm would improve equality, the algorithm ended up collectivizing these biases, delivering results that were ultimately no better than those of individuals with little to no training.[3] Beyond any help that AI can provide in data analysis, it is vital to consider how these algorithms manifest largely as a reflection of our societies. Practical applications of these programs have often become a solidification of systemic bias rather than a clear solution to it.
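The kind of audit ProPublica performed can be expressed very compactly: compare how often non-reoffenders were wrongly labeled high risk, broken down by group. The sketch below uses invented records purely to illustrate the calculation; it is not ProPublica's data or code.

    # A minimal sketch of the audit ProPublica performed on COMPAS [1]:
    # false positive rate per group, i.e. how often people who did NOT
    # reoffend were nevertheless flagged high risk. Records are invented.
    def false_positive_rate(records):
        """records: list of (flagged_high_risk, actually_reoffended) pairs."""
        non_reoffenders = [flagged for flagged, reoffended in records
                           if not reoffended]
        return sum(non_reoffenders) / len(non_reoffenders)

    records_by_group = {
        "black defendants": [(True, False), (True, False), (False, False), (True, True)],
        "white defendants": [(False, False), (True, False), (False, False), (False, True)],
    }

    for group, records in records_by_group.items():
        print(f"{group}: false positive rate {false_positive_rate(records):.0%}")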

5.3 Moral and Societal Conflicts

Although one could assume that, by its algorithmic nature, an AI-based minority report would apply the law equally to all, one must consider the power that such a system could have over a populace, and how it could be abused. Despite the innovations that the Chinese government has made in the realm of AI, its use has been far from benevolent. In the Xinjiang region of China, there has been a massive government crackdown on the religious and cultural freedoms of the Uyghur Muslim ethnic group. Uyghurs are being systematically detained and incarcerated in so-called “reeducation camps.” This mass incarceration has been widely criticized by the global community, with many comparing these camps to Germany’s concentration camps during the Second World War.[19] Beyond that, there have been various reports of facial scanning technology created by Huawei being used to aid the Chinese government in these goals. This system has been described as a “Uyghur alarm” that uses artificial intelligence to identify people’s ethnicity, with people identified as Uyghur being flagged for detainment.[4] The AI policing systems in China were not deployed merely to enforce the law for the good of the people, but also to further the oppressive agenda of the state. The program was ultimately successful; it appeared to watch over the population, but the authoritarian nature of the Chinese state ensured that the system was corrupt from its creation. With these sorts of systems, exploitation is not only the most dangerous threat they pose but also the most likely. China’s approach is only one example; corrupt governmental bodies could potentially use such programs to persecute those with differing political ideologies or those who wish to act in ways the state deems “unruly.” In many ways this may seem like a worst-case scenario, but China has already proven that it is very much possible. Once an authoritarian body takes control of such an initiative, it will create an iron grip on the society it governs, with no real freedom for those who live in it.


6 Considerations and Conclusions

6.1 Considerations

Artificial intelligence is still a new technology that is being innovated upon every day. It has shown incredible success in data analytics, which has allowed the development of powerful policing tools. These tools have already been approved and employed by governments such as China, the US, and Japan, with new applications continuing to emerge. However, despite these successes, the reliability and safety of these systems may not yet be satisfactory for public roll-out. Issues with controlling these algorithms, their reflection of systemic bias, and their potential for abuse remain significant roadblocks to implementing such protocols on a wider level.

6.2 Conclusions

The world is not yet ready for an AI-based minority report. Although these algorithms can achieve low false-positive rates, the risk of misuse is too high for them to be viable. As seen with China, a government can take over such a program and use it to further its own goals, ultimately oppressing marginalized groups and controlling the population to a point where people’s fundamental freedoms are at stake. Nor should we forget the racial bias seen in many real-world experiments with AI-assisted policing strategies; while theoretically solvable, no algorithm has yet been released that did not exhibit at least some degree of bias towards certain groups of people. Although AI in the criminal justice system can increase efficiency and better enforce order amongst a population, it is also the beginning of an authoritarian nightmare for many. It is doubtful whether these issues will be resolved, and beyond that, it is uncertain that AI policing is the best way forward for society.
Tomas Premoli Muniagurria