The AI Revolution in Law Enforcement and Healthcare

By Larisa Kasler


Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, particularly computer systems, to perform tasks such as learning, reasoning, and problem-solving. It encompasses technologies such as machine learning, natural language processing, and robotics (McDaniel & Pease, 2021). Although AI once seemed to belong only to the science fiction genre, modern technology continues to expand its use. As AI tools grow more sophisticated, law enforcement agencies, healthcare professionals, and corporate security experts alike are using these advancements to strengthen their operational effectiveness. AI applications improve crime prevention, optimise resource allocation, and streamline investigations, but they also raise ethical questions. This article explores AI’s impact on public safety and healthcare, with particular attention to its use in policing, most notably predictive policing.

Predictive policing involves collecting and analysing data from multiple sources to predict and address potential criminal activity. It affects almost all aspects of the criminal justice system, using risk assessment tools to forecast future crimes and recidivism (a convicted offender’s likelihood of reoffending) (McDaniel & Pease, 2021). This approach represents a shift from reactive to proactive measures. As Charlie Beck, former Chief of the Los Angeles Police Department (LAPD), explained, “The predictive vision moves law enforcement from focusing on what happened to focusing on what will happen” (Pearsall, n.d.).

Predictive policing is often divided into two approaches: place-based and person-based. One of the earliest place-based techniques is hot spot detection, often referred to as ‘hot spots policing’. This method involves mapping crime locations to identify geographically defined clusters of criminal activity, known as ‘hot spots’ (Pearsall, n.d.). By focusing on these clusters, law enforcement agencies transition from general geographic patrols to more targeted, incident-based deployments. The goal of this method is to direct resources to the areas most affected by crime, ensuring that high-crime locations receive the attention needed to prevent future offences. Evidence suggests that concentrating efforts on these areas can improve policing efficiency. While Pearsall (n.d.) notes that targeting specific areas may displace crime to surrounding neighbourhoods, hot spots policing is also often associated with a diffusion of crime control benefits to adjacent areas, reducing crime not only within the hot spots but also in their vicinity.
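
To make the clustering step concrete, the sketch below groups a handful of invented incident coordinates into hot spots using DBSCAN. The coordinates, distance threshold, and minimum cluster size are purely illustrative assumptions, not parameters from any system discussed in this article.

```python
# A minimal sketch of hot spot detection: clustering historical incident
# coordinates with DBSCAN (scikit-learn). All coordinates are made up.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical incident locations as (x, y) map coordinates in metres.
incidents = np.array([
    [100, 105], [102, 98], [95, 110], [99, 101],   # dense cluster A
    [540, 520], [545, 515], [538, 525],            # dense cluster B
    [300, 800], [910, 40],                         # isolated incidents
])

# eps: maximum distance between neighbouring incidents; min_samples:
# minimum number of incidents needed to form a hot spot. Both are
# analyst-chosen tuning values.
labels = DBSCAN(eps=25, min_samples=3).fit_predict(incidents)

for cluster_id in sorted(set(labels)):
    if cluster_id == -1:
        continue  # -1 marks noise, i.e. incidents outside any hot spot
    members = incidents[labels == cluster_id]
    print(f"Hot spot {cluster_id}: {len(members)} incidents, "
          f"centroid {members.mean(axis=0)}")
```

The cluster centroids would then guide where targeted patrols are deployed; real systems work on far larger datasets and geographic coordinates, but the principle is the same.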

Expanding on hot spot detection, risk terrain modelling (RTM) offers a more advanced predictive technique. RTM begins by identifying environmental factors associated with specific crimes. For instance, in analysing future burglary risks, relevant factors might include the proximity of retail businesses, as these locations could attract offenders (McDaniel & Pease, 2021). Once these factors are determined, RTM assigns values that represent the presence, absence, or intensity of each factor across a geographic area. Each factor is mapped individually, and these maps are then combined to produce a composite risk terrain map. The resulting map assigns risk values to every location within the study area, indicating where crimes are most likely to occur based on the interplay of environmental factors. RTM provides a framework for understanding the spatial dynamics of crime and for guiding police deployments to high-risk locations (Pearsall, n.d.). As predictive policing has advanced, software programs incorporating machine-learning algorithms have been developed to forecast crime more accurately. Some of these build upon RTM by integrating a broader range of variables, including weather patterns, the schedules of major sporting events, and school calendars, in addition to historical crime data (McDaniel & Pease, 2021). These predictive tools enable law enforcement agencies to identify potential future crime locations and allocate patrol resources more effectively.
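
The layering idea behind RTM can be illustrated with a toy example: each factor is scored on the same grid, and the layers are combined into one composite surface. The factors, grid values, and weights below are invented for illustration and are not drawn from any published RTM implementation.

```python
# A toy sketch of risk terrain modelling: each environmental factor is
# scored on the same grid, and the layers are combined into a single
# composite risk surface. Factors, scores, and weights are all invented.
import numpy as np

# Layer 1: proximity to retail businesses (1.0 = adjacent, 0.0 = far away).
retail_proximity = np.array([
    [0.2, 0.4, 0.9, 1.0],
    [0.1, 0.3, 0.8, 0.9],
    [0.0, 0.2, 0.4, 0.5],
    [0.0, 0.1, 0.2, 0.3],
])

# Layer 2: density of past burglaries on the same grid, scaled to [0, 1].
past_burglaries = np.array([
    [0.1, 0.2, 0.7, 0.9],
    [0.0, 0.1, 0.5, 0.6],
    [0.3, 0.4, 0.2, 0.1],
    [0.6, 0.5, 0.1, 0.0],
])

# Analyst-chosen weights expressing how strongly each factor is believed
# to relate to the crime being modelled.
composite = 0.6 * retail_proximity + 0.4 * past_burglaries

# The highest-risk cell is where the weighted factors overlap most.
row, col = np.unravel_index(composite.argmax(), composite.shape)
print(f"Highest-risk cell: row {row}, column {col} (score {composite[row, col]:.2f})")
```

Real implementations use much finer grids and many more layers, but the principle is the same: risk at each location is a weighted combination of environmental indicators.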

Person-based approaches in predictive policing focus on analysing data about offenders and victims to predict who is most likely to commit or fall victim to future crimes. Since victimisation often correlates with an individual’s proximity to high-risk groups, individuals, or locations, person-based approaches frequently overlap with place-based strategies such as hot spot detection. For instance, offender-focused approaches such as targeted offender lists can also help anticipate likely victims of crime. Targeted offender lists identify individuals known to be chronic offenders or those deemed most likely to offend. The Chicago Police Department (CPD) implemented a prominent example of this approach with its ‘heat list’ or ‘strategic subject list’, which used an algorithm to identify individuals at high risk of future criminal involvement, either as offenders or as victims. The CPD initially faced criticism for allegedly targeting individuals unfairly for police attention, leading the department to adapt its approach. One modification was the introduction of ‘custom notification’, in which officers proactively informed individuals on the list of their inclusion and warned them that any future criminal activity would be closely monitored and prosecuted. Despite these efforts, a 2016 study revealed mixed outcomes: those included on the list were no more likely to become victims of shootings or homicides than individuals in a comparison group but were more likely to be arrested for shootings. These findings prompted further revisions to the system, culminating in its rebranding as the Crime and Victimization Risk Model (CVRM). However, a subsequent 2019 evaluation concluded that the CVRM was still “not operationally suitable” (Pearsall, n.d.).
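
The full details of the CPD’s algorithm were never made public, so the sketch below is only a generic illustration of the person-based idea: a risk score computed as a weighted sum of history-based features. The features, weights, and subjects are entirely invented.

```python
# A generic, purely illustrative person-based risk score: a weighted sum
# of history-based features. This is NOT the CPD's algorithm, whose full
# details were never published; all features and weights are invented.
from dataclasses import dataclass

@dataclass
class Subject:
    prior_arrests: int
    shooting_involvements: int  # past incidents as victim or offender
    age: int

def risk_score(s: Subject) -> float:
    """Higher score = higher assumed risk. Weights are arbitrary."""
    score = 2.0 * s.prior_arrests
    score += 5.0 * s.shooting_involvements
    score += 0.5 * max(0, 30 - s.age)  # youth treated as an assumed risk factor
    return score

subjects = [Subject(3, 1, 22), Subject(0, 0, 45), Subject(7, 2, 19)]
for s in sorted(subjects, key=risk_score, reverse=True):
    print(f"{s} -> score {risk_score(s):.1f}")
```

Even this toy version shows why such lists draw criticism: the inputs are records of past police contact, so whoever the police have already recorded most heavily scores highest, regardless of actual future behaviour.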

The risks and ethical issues associated with predictive policing and AI cannot be ignored. A central problem is data-driven bias, which can lead to discrimination by race (e.g. racial bias against African Americans in drug crime enforcement) and gender (e.g. males are more likely to be suspected of a crime) (Harcourt, 2007). Another problem is incomplete crime data: sexual assault, domestic violence, and fraud often go unreported. Reported data has also been shown to be inaccurate and occasionally falsified, as some police officers are reluctant to engage in paperwork and report writing, so some police reports lack reliability and completeness. This makes reliance on AI in public safety questionable, as the systems are trained on this biased and incomplete data (Ferguson, 2017).
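
The circularity critics describe can be shown with a stylised simulation, not a model of any deployed system: two areas have identical underlying crime, patrols are sent wherever recorded crime is highest, and a patrol presence raises the recording rate. Every number below is invented.

```python
# A stylised simulation of the feedback loop critics describe: two areas
# with identical true crime, where patrols follow *recorded* crime and a
# patrol presence raises the recording rate. All numbers are invented.
true_crime = [100, 100]      # identical underlying crime in areas A and B
recorded = [60, 50]          # area A happens to have more historical records
BASE_REPORT_RATE = 0.5       # fraction of crime recorded without patrols
PATROL_BONUS = 0.2           # extra fraction recorded in the patrolled area

for year in range(1, 6):
    patrolled = 0 if recorded[0] >= recorded[1] else 1  # "follow the data"
    for area in (0, 1):
        rate = BASE_REPORT_RATE + (PATROL_BONUS if area == patrolled else 0.0)
        recorded[area] = int(true_crime[area] * rate)
    print(f"Year {year}: recorded crime = {recorded}, patrolled area = {'AB'[patrolled]}")
```

Area A’s head start in historical records, not any difference in actual crime, permanently attracts the patrols and keeps its recorded figures inflated, illustrating how biased records can become self-reinforcing.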

AI is also used in surveillance, for example, in facial recognition technology. This tool has been widely adopted by law enforcement agencies, private companies, and governments to identify individuals, track their movements, and even infer emotional states from facial expressions. Using advanced algorithms, facial recognition systems compare facial features with databases of known individuals, making it possible to monitor public spaces more effectively and identify suspects in real time. Proponents argue that this enhances public safety by aiding in criminal investigations, preventing terrorism, and maintaining order in high-risk areas (Spair, 2024).

However, facial recognition technologies also come with substantial ethical and privacy concerns. One critical issue is the potential for mass surveillance, where individuals are continuously monitored without their knowledge or consent. In public areas such as airports, train stations, and shopping centres, citizens may be tracked even if they are not involved in any illegal activity. This pervasive surveillance can limit free expression and personal autonomy. Moreover, studies have highlighted biases in facial recognition systems, with lower accuracy rates for people of colour and women. For instance, research by the MIT Media Lab revealed that these systems misidentify black women at significantly higher rates than white men. Such inaccuracies raise concerns about fairness and the risk of wrongful identification, particularly in critical areas where errors can have severe consequences (Spair, 2024).
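
The matching step these systems perform can be sketched as comparing a probe face embedding against a gallery of known embeddings. In a real system the embeddings come from a trained neural network; here random vectors stand in for them, and the similarity threshold is an arbitrary assumption.

```python
# A minimal sketch of the matching step in face recognition: a probe
# face embedding is compared against a gallery of known embeddings via
# cosine similarity. Real embeddings come from a trained neural network;
# random vectors stand in for them here.
import numpy as np

rng = np.random.default_rng(seed=0)
gallery = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}
probe = gallery["bob"] + rng.normal(scale=0.1, size=128)  # a noisy view of bob

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.8  # below this, the system should report "no match"
scores = {name: cosine(probe, emb) for name, emb in gallery.items()}
best = max(scores, key=scores.get)
print(scores)
print(f"Match: {best}" if scores[best] >= THRESHOLD else "No confident match")
```

The threshold is exactly where the fairness issues above surface: if the embeddings are less discriminative for some demographic groups, a single fixed threshold produces more false matches for those groups.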

AI is also widely used in healthcare, especially in radiology, where AI systems help analyse medical images such as X-rays, CT scans, and MRIs. These tools use advanced algorithms to find patterns that human radiologists might miss, allowing for earlier and more accurate diagnoses of diseases such as cancer, stroke, and heart problems. For example, AI has been able to detect early-stage cancer by spotting small nodules in CT scans and has proved very effective in finding signs of breast cancer in mammograms. This helps doctors detect serious illnesses sooner and improve treatment outcomes (Spair, 2024). However, Spair (2024) suggests that using AI in radiology also has challenges. One problem is that doctors might rely too heavily on AI systems, which are not always correct and can make mistakes; for instance, AI might wrongly flag a harmless finding as dangerous or misjudge a serious problem. If doctors feel pressured to follow AI suggestions, they might set aside their own experience.
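
One way to read the over-reliance concern is as a workflow design question. The sketch below uses a stand-in “model” that returns fake probabilities and merely reorders a worklist so that high-scoring scans receive priority human review; the model, threshold, and scan identifiers are all placeholders, not a clinical tool.

```python
# A schematic of AI-assisted reading: a (hypothetical) model returns a
# nodule probability per scan, and the tool flags studies for priority
# human review rather than issuing a diagnosis. Everything here is a
# placeholder, not a clinical system.
import random

def model_nodule_probability(scan_id: str) -> float:
    """Stand-in for a trained image model; returns a fake probability."""
    random.seed(scan_id)  # deterministic fake score per scan
    return random.random()

FLAG_THRESHOLD = 0.7  # tuning this trades missed findings against false alarms

worklist = ["ct_0001", "ct_0002", "ct_0003", "ct_0004"]
for scan in sorted(worklist, key=model_nodule_probability, reverse=True):
    p = model_nodule_probability(scan)
    status = "PRIORITY REVIEW" if p >= FLAG_THRESHOLD else "routine queue"
    print(f"{scan}: p(nodule)={p:.2f} -> {status}  (radiologist decides)")
```

Designing the tool to prioritise rather than decide keeps the radiologist’s judgement in the loop, which is one practical answer to the pressure-to-follow-the-AI problem described above.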

In conclusion, artificial intelligence has become a powerful tool across fields ranging from law enforcement to healthcare. However, these advancements are not without challenges. Ethical issues such as bias, privacy concerns, and over-reliance on AI systems raise important questions about the responsible use of this technology. Biases in data and algorithms can result in unfair outcomes, while the potential for mass surveillance and misidentification threatens personal freedoms. In healthcare, the risk of errors caused by over-dependence on AI highlights the need to balance AI tools with human experience. As AI continues to develop, it is essential to address these concerns through regulation, transparency, and ongoing curation of the data on which AI systems are trained. This will help maximise AI’s benefits while minimising harm, creating a future where technology supports fairness, accountability, and trust in both public safety and healthcare systems.


Reference list:

Ferguson, A. G. (2017). Policing predictive policing. Washington University Law Review, 94(5), 1109–1190.

Harcourt, B. E. (2007). Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age. The University of Chicago Press.

McDaniel, J., & Pease, K. (2021). Predictive policing and artificial intelligence. Routledge.

Pearsall, B. (n.d.). Predictive policing: The future of law enforcement? NIJ Journal, (266), 16–18. https://www.ojp.gov/pdffiles1/nij/230414.pdf

Spair, R. (2024). Navigating AI Ethics: Building a Responsible and Equitable Future. Self-published.


