Felicia Sych

Artificial intelligence and the criminal legal system.

The era of artificial intelligence (AI) has begun. 


C-3PO from Star Wars. J.A.R.V.I.S. from Iron Man. Baymax from Big Hero 6. What were once eccentric visions from Hollywood studios might soon become a reality. From automating tasks and improving medical diagnoses to creating music that mimics real artists, AI has already substantially impacted our lives.


But what happens when AI gets it completely wrong? 


Robert Julian-Borchak Williams’ story is particularly troubling. In January 2020, he was arrested by the Detroit Police Department and charged with felony larceny. What no one knew at the time was that Williams was the first known person in the United States to be arrested based on a flawed facial recognition match. The Detroit Police Department detained Williams for 30 hours and, although prosecutors eventually dropped the charges against him, the harm had already been done.


Williams may have been the first victim of a wrongful arrest based on faulty AI data, but he certainly was not the last. In fact, Williams is just one of three people whom the Detroit Police Department has arrested based on erroneous matches generated by AI. As of August 2023, six people across the United States have reported being falsely accused of a crime because of facial recognition technology, all of whom are Black.


Despite the technology’s potential to find missing people, solve crimes, and monitor crowds, research has found that AI is 34% more likely to misidentify a Black woman than a lighter-skinned man. Its shortcomings disproportionately affect minority communities, perpetuating existing biases and deepening social inequities. As more cities and police departments across the country seek to use AI, ensuring equity across all races and genders will be critical.


However, it is not just the technology itself that poses a risk; humans, too, are exploiting this technology to create “evidence,” particularly through the use of deepfakes. A deepfake is AI-generated or AI-manipulated media that can make it look like someone said or did something they never did. As the U.S. Department of Homeland Security reported in 2021, the rise of deepfakes poses a significant threat to many aspects of the criminal legal system. In short, the problem cuts two ways: defendants must prove that a deepfake is indeed fake, while bad actors try to evade accountability by alleging that something real is a deepfake. Attorneys are already accustomed to the ritual of authenticating evidence, but the emergence of deepfakes adds another layer of complexity and poses a serious challenge to justice when the truth is hardly distinguishable from a lie.


The era of AI has begun, and its potential is both limitless and troublesome. Even ChatGPT, an AI chatbot developed by OpenAI, recognizes the changing landscape in the practice of law. I asked ChatGPT, “How will AI impact the practice of criminal law?” The response offered a substantive list, including predictive policing, legal research, and sentencing and parole decisions. Importantly, the response ended with a caveat:

“It is important to note that while AI offers many potential benefits to the practice of criminal law, it also raises ethical and legal challenges. Ensuring the fairness and transparency of AI algorithms, addressing bias, protecting privacy, and upholding due process rights are important considerations in the integration of AI in the criminal justice system.” 

ChatGPT is certainly right—practitioners will need to strike a balance between harnessing the good of AI and confronting and dismantling the bad. I asked ChatGPT, “Can the balance be accomplished?” In response, ChatGPT said, “Yes. Artificial intelligence can be used ethically by criminal law practitioners, but it requires adherence to ethical guidelines and responsible practices.” And if an artificial chatbot says it can be done, who are we to argue otherwise?

