Ben Crawford

Latent fingerprint identification: A lack of scientific process


Image courtesy of: https://www.fbi.gov/services/laboratory/biometric-analysis/latent-print

Fingerprint analysis was first proposed for criminal identification in the 19th century and has since become an evidentiary staple of forensic science for law enforcement agencies both on and off the screen. Despite its establishment in the criminal legal system and in television, the science behind latent fingerprint identification, or rather the lack thereof, needs to be addressed.


To understand the basis of the critiques against latent fingerprint identification, one must first understand the process of identification itself. The basic process is referred to as the ACE-V process, which stands for Analyze, Compare, Evaluate and Verify.


In the Analyze stage, the fingerprint examiner analyzes the latent print found at the crime scene to determine the quality and quantity of identifiable features, such as ridges and loops. If the latent print is not of sufficient size, quality or detail to be compared to known prints, the process stops here. Keep in mind that latent fingerprints are often incomplete or distorted because of the surface being touched and how it is touched. If the latent print does pass these standards, the examiner then subjects known prints to the same analysis.

If the examiner judges that both the latent and known fingerprints are thoroughly detailed, the examiner moves on to the Compare stage. Here, they identify via visual comparison the number of corresponding details and similarities that the prints have in common.

Next, during the Evaluate stage, the examiner determines whether the points of agreement between the prints are sufficient to make an identification. The Evaluate stage can end in one of three pronouncements: a source determination (positive match), an exclusion (no match), or inconclusive.

The final stage is Verify, in which a different fingerprint examiner repeats the first three stages (Analyze, Compare and Evaluate) to validate the results of the first examiner.
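The ACE-V stages above can be sketched as a simple decision procedure. This is only an illustration: the names, data structures and numeric thresholds below are hypothetical, since real examinations rest on an examiner's visual judgment rather than on scored comparisons.

```python
# Illustrative sketch of the ACE-V workflow. All thresholds and
# structures are hypothetical; real examinations are visual judgments.
from dataclasses import dataclass

@dataclass
class Print:
    features: set[str]   # identifiable ridge details (e.g., ridges, loops)
    quality: float       # 0.0 (unusable) to 1.0 (pristine)

MIN_QUALITY = 0.5        # hypothetical suitability cutoff
MIN_AGREEMENT = 12       # hypothetical similarity threshold

def analyze(p: Print) -> bool:
    """Analyze: is the print of sufficient quality and detail?"""
    return p.quality >= MIN_QUALITY and len(p.features) > 0

def compare(latent: Print, known: Print) -> int:
    """Compare: count corresponding details the prints share."""
    return len(latent.features & known.features)

def evaluate(agreement: int) -> str:
    """Evaluate: reach one of three pronouncements."""
    if agreement >= MIN_AGREEMENT:
        return "source determination"   # positive match
    if agreement == 0:
        return "exclusion"              # no match
    return "inconclusive"

def ace(latent: Print, known: Print) -> str:
    """Analyze, Compare, Evaluate; stops if either print is unsuitable."""
    if not (analyze(latent) and analyze(known)):
        return "unsuitable"
    return evaluate(compare(latent, known))

def ace_v(latent: Print, known: Print) -> str:
    """Verify: a second pass repeats A-C-E to check the first result."""
    first, second = ace(latent, known), ace(latent, known)
    return first if first == second else "inconclusive"
```

Note that in this sketch the second pass is mechanical and therefore always agrees with the first; the article's point is that human verification is not so deterministic, especially when the second examiner knows the first conclusion.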


To positively identify a match, an examiner must find a sufficient number of similarities between the two prints. Unfortunately, the United States has no established standard for the number of similarities required to declare a match, leaving that number entirely up to the judgment of the examiner. This level of discretion gives latent fingerprint identification an inherent subjectivity that makes the process hard to repeat. By comparison, as David Harris notes in his book “Failed Evidence: Why Law Enforcement Resists Science,” Australia requires 12 similarities, France and Italy require 16, and Brazil and Argentina require 30 to identify a match. Because the process is subjective, examiners in the United States cannot ground their determinations in statistics or quantify their certainty. The Verify stage poses an additional problem within the ACE-V process: when the second examiner repeats the Analyze, Compare and Evaluate stages, they typically already know the conclusion the first examiner reached. This lack of blinding can bias the second examiner toward confirming the first determination.
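The consequence of divergent national standards can be made concrete with a small sketch. Using the thresholds Harris reports, the same count of corresponding points can yield a match in one jurisdiction and no identification in another (the `verdict` function and the example count of 14 points are hypothetical illustrations):

```python
# Point standards as reported in Harris, "Failed Evidence";
# the United States has no fixed number.
THRESHOLDS = {
    "Australia": 12,
    "France": 16,
    "Italy": 16,
    "Brazil": 30,
    "Argentina": 30,
}

def verdict(points_of_agreement: int, country: str) -> str:
    """Hypothetical rule: match if the count meets the national threshold."""
    return ("match" if points_of_agreement >= THRESHOLDS[country]
            else "no identification")

# With 14 corresponding points, the outcome diverges by jurisdiction:
for country in ("Australia", "Italy", "Brazil"):
    print(country, "->", verdict(14, country))
```

In the United States, by contrast, there is no entry in such a table: the threshold lives in each examiner's head, which is precisely the repeatability problem the article describes.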


The problem with fingerprint identification as a forensic tool in the United States is that the process is currently subjective and prone to human biases. To improve the process, law enforcement agencies need to adopt practices that guard against bias, to the extent possible, and move toward a more scientifically driven approach that would make fingerprint identification as accurate and reliable as possible. By raising the standards of the identification process, actors within the criminal legal system can operate with a lower risk of wrongful convictions and greater confidence in the accuracy of fingerprint evidence.


