Emily Moran

Facial recognition inaccuracies leading to wrongful arrest.

TW: Racial Discrimination


The inaccuracy of facial recognition technology is a problem that has become apparent with the rise of artificial intelligence (“AI”) and other facial recognition software. First, wrongful identifications have led law enforcement to make wrongful arrests. Reports as recent as January 2024 describe wrongful accusations stemming from facial recognition technologies used by law enforcement. A man in Texas was held in jail for nearly two weeks because of a false identification produced by software that linked him to a crime he did not commit. The man was later exonerated and the accusations were withdrawn. This issue raises concerns not only about privacy interests but also about the broader effects that increased use of this technology could entail. In a similar case, that of Porcha Woodruff, law enforcement arrested Woodruff for a suspected armed carjacking while she was eight months pregnant. Woodruff was interrogated for hours and later exonerated when law enforcement determined there was insufficient evidence to connect her to the crime.


Additionally, there are issues with facial recognition technology that disproportionately affect people of color. A National Institute of Standards and Technology (“NIST”) study evaluated “face recognition algorithms submitted by industry and academic developers on their ability to perform different tasks.” The study found higher rates of false identifications for Asian and Black people compared to Caucasian people, and higher rates of misidentification of Black women specifically. The NIST study is critical to understanding the effects and impacts of the development of AI and facial recognition technology in the criminal justice arena. AI and facial recognition technologies also pose threats relating to deepfake technology and the development of programs like OpenAI's ChatGPT. As my colleague Felicia Sych stated, “ChatGPT is certainly right—practitioners will need to find the balance between harnessing the good of AI while considering and dismantling the bad.”


In 2021, the Government Accountability Office found that 14 of 42 federal agencies surveyed had utilized facial recognition technology. As the number of federal and local agencies that use facial recognition software increases, so do the implications for communities of color. As this technology continues to develop with little to no oversight, innocent people may continue to be accused of crimes they did not commit. Scholars have argued “for greater reliability checks on both before use against a criminal defendant.” Without such improvements to the facial recognition process, the technology will continue to develop rapidly and unchecked. Senators have urged the Department of Justice to improve its oversight of facial recognition technology and to pursue legislation on the issue.


The Innocence Project has found that six people who reported being misidentified or wrongfully accused of a crime by law enforcement due to facial recognition were Black. In addition, Detroit Police Chief James Craig acknowledged that if he were to rely on facial recognition software alone, it would misidentify suspects 96% of the time. Advocates for increased oversight of facial recognition technology have stated they “believe there are dozens of similar cases of wrongful arrests, but they are difficult to identify because the police department isn't required to share when matches were made using facial recognition software.” As facial recognition software continues to develop, oversight is necessary to stop innocent people from being arrested and charged with crimes they did not commit.
