Legal Affairs
July|August 2002
Under the Microscope

For more than 90 years, forensic science has been a cornerstone of criminal law. Critics and judges now ask whether it can be trusted.

By Brendan I. Koerner

Frayed news clippings from murder trials, blow-ups of spent bullets, and collages culled from medical textbooks adorn the corridors of the Connecticut State Forensic Science Laboratory. One of the macabre mementos is a poster-sized array of photos connected to an old attempted homicide. In a corner of the frame is a snapshot of a state trooper's jacket, badly creased and caked with dirt; in the opposite corner is a close-up of a tractor-trailer that's also in need of a wash.

A prank gone awry made that truck's driver a cop killer. "There was a process called 'dusting' that was fairly common across the country," explains Kenneth Zercie, the lab's beefy, good-natured assistant director. "The vortex created as a tractor-trailer goes by would dust the Stetson right off [a trooper's] head. It was a big joke, to see if you could dust the hat off." But the trucker whose rig now graces the lab's walls misjudged his distance from the roadside's white line; instead of merely spooking Smokey, he killed a patrolman who had stopped to help a motorist. And then he sped off.

Forensic scientists built the state's case against him by comparing an imprint crushed into the trooper's jacket to the front right corner of the truck. "You look at the imprint pattern caused by the impact, you can see where the pattern on the jacket corresponds to the pattern [on the truck]," says Zercie, clapping his hands together to dramatize the vehicle-meets-man collision. "Made it real hard for him to say he wasn't there."

So it has gone for almost a century in countless criminal cases, not to mention Law & Order episodes. Zercie's identification techniques—analyzing indentations, fingerprinting, comparing fibers—yield forensic evidence, cherished by prosecutors as beyond reproach. The idea is that unlike some eyewitnesses, the match between a print on a jacket and a truck doesn't lie. "The science tends to be more objective," explains the retired detective, who switched to forensics more than 25 years ago. "You're not depending on your prejudice, your visual acuity, the lighting..."

But as judges have begun to realize, naked-eye observation is at the heart of what experts like Zercie do. The question now is whether that's enough. Critics contend that forensics lacks the rigors of "real" science—clinical trials, measurable error rates, and demanding peer review—and relies instead on untested assumptions about how one fingerprint, hair follicle, or screwdriver can be distinguished from another. Michael J. Saks, a law professor at Arizona State University, compares present-day forensics to World War II-era surgery, which was guided by tradition rather than clinical trial results. "Every field thinks they know what they're doing," he says. "And then they go out and do systematic testing, and they find out they were right about some things and wrong about others."

In January, a respected federal judge, Louis H. Pollak, agreed. Pollak barred an FBI expert from testifying that a suspect's fingerprints matched those found at a crime scene. Two months later, he decided his landmark ruling had gone too far. (See "Why Judges Rarely Change Their Minds," pp. 39-40.) But the question that triggered the judge's initial misgivings—What are the methods underlying forensic science?—is being raised in challenges by defense lawyers around the country.

The mantra of forensic evidence examination is "ACE-V." The acronym stands for Analysis, Comparison, Evaluation, and Verification, which forensic scientists compare with the step-by-step method drilled into countless chemistry students. "Instead of hypothesis, data collection, conclusion, we have ACE-V," says Elaine Pagliaro, an expert at the Connecticut lab who specializes in biochemical analysis. "It's essentially the same process. It's just that it grew out of people who didn't come from a background in the scientific method."

In the case of the hit-and-run trucker, Pagliaro and Zercie's colleagues first inspected the trooper's jacket and the vehicle to determine whether there were enough clear markings to allow for a reliable comparison. They placed the jacket next to photos of the truck and looked for "points of similarity"—oil streaks skewed at identical angles, dirt blotches of comparable shape. "There was a take-away imprint that removed the surface soil and dirt [from the truck]," says Pagliaro, a former prep-school science teacher who holds a master's degree in forensics from the University of New Haven. "You could see that on the shirt there was the dark oil and soil residue from the tractor-trailer."

The examiner ruled that enough similarities existed between the two patterns to establish that the truck and jacket had likely smooshed together. Then a second lab worker went through the whole process again, as a means of double-checking the results. (The soil wasn't analyzed, Pagliaro says, because there would be no way to prove whether the dirt on the jacket got there via the truck or from the ground next to the highway.) The big-rig duster pleaded guilty when confronted with the pattern evidence.

At its best, forensic analysis is meticulous and precise. In describing how an examiner might go about identifying whether a particular crowbar was used to jimmy open a particular door, Pagliaro sounds like a biologist outlining a complex dissection. She notes how the crowbar would be examined to check for manufacturing imperfections or tiny striations caused by use. A lead impression of the crowbar's edge would then be made, and that test impression compared against the damage to the door at the crime scene.

As with the comparison between the trooper's jacket and the truck, the examiner would look for points of similarity between the two impressions, for nooks and crannies that correspond to the crowbar's striations. "When you do that kind of comparison, you clearly have to consider things such as the angle in which the original tool mark was made, and the pressure that was used," says Pagliaro. What's the key to making those nuanced judgments? "As much as possible, it's based on the experience of the examiner."

The value of that experience, forensic scientists claim, is found in courtroom tales of justice well served. Zercie likes to recall a murder trial in Bridgeport, Conn., in which the defendant, from jail, repeatedly professed his innocence. The evidence was unusually skimpy: a vague description from eyewitnesses and a broken beer bottle used as the weapon. Fingerprints were lifted from the bottle's neck, but the ridge detail—the smudge that reveals the print's loops and whorls—was of lackluster quality.

When Zercie looked more closely, however, an idea popped into his head. Perhaps the ridge detail wasn't a fingerprint at all, but rather an imprint made by the webbing between the murderer's thumb and forefinger on one hand. Though little mainstream research exists to support the uniqueness of webbings, Zercie convinced a judge that a comparison was the court's best hope for justice. "The judge put the trial on recess, and I did the analysis," he says. "And the area of ridge detail, it wasn't from the individual. We could exclude him." After spending 18 months in jail, the defendant was freed.

A happy ending. But given the paucity of research on webbings, and the lack of an established protocol for obtaining and examining webprints, should the defendant have walked based on Zercie's work?

Fingerprinting, handwriting analysis, fiber comparison, and the like have been courtroom staples since the days of Sir Arthur Conan Doyle; the first fingerprint case in the United States dates back to 1911. "That was a big era of enthusiasm for science and technology," notes Simon Cole, author of Suspect Identities: A History of Fingerprinting and Criminal Identification. "Saying, 'Hey, we're doing scientific policing' had a lot of cachet."

The phrase may have dazzled the public, but the practitioners' credentials were often less than scientific. The British anatomist Sir Francis Galton pioneered fingerprinting for criminal identification, but records clerks implemented his techniques in police precincts. These low-level bureaucrats helped devise the procedures for lifting "latent" fingerprints, the often partial impressions found at crime scenes, and then comparing them to complete "rolled" prints. The basic method has changed little over the decades.

Other early "experts" also lacked scientific pedigrees. Penmanship teachers—scriveners, in effect—were the earliest handwriting experts, dubiously claiming to understand how nib angles influenced swoops and curls. The initial cases involving tool marks in the late 1920s starred a man named Luke S. May, who once testified in a rape case that for there to be an exact match of the blade allegedly used in the crime, "every one of the hundred million people in the United States" at that time would have to have "six hundred and fifty quadrillion knives each." How he arrived at that astronomical figure remains a mystery.

Yet for most of the 20th century, courts seldom set limits on what experts could say to juries. The 1923 case Frye v. United States mandated that expert witnesses could discuss any technique that had "gained general acceptance in the particular field in which it belongs." Courts treated forensic science as if it were as well-founded as biology or physics.

When geneticists pioneered DNA testing in the late 1980s, that premise got a second look. A fiber analyst would merely opine that two hair follicles were similar. But a DNA tester could wow juries by saying, "There's only a one in 60 million chance that the samples don't match"—and produce data to back up the claim. Next to DNA testimony, the claims of old-time forensics didn't sound so scientific.

In 1993, the Supreme Court set a new standard for evidence that took into account the accelerated pace of scientific progress. In a case called Daubert v. Merrell Dow Pharmaceuticals, the plaintiffs wanted to show the jury some novel epidemiological studies to bolster their claim that Merrell Dow's anti-nausea drug Bendectin caused birth defects. The trial judge didn't let them. The plaintiff's evidence, he reasoned, was simply too futuristic to have gained general acceptance.

When the case got to the Supreme Court, the justices seized the opportunity to revolutionize the judiciary's role in supervising expert testimony. Writing for the court, Justice Harry Blackmun instructed judges to "ensure that any and all scientific testimony or evidence admitted is not only relevant, but reliable." Daubert turned judges into "gatekeepers" responsible for discerning good science from junk before an expert takes the stand. Blackmun suggested that good science must be testable, subject to peer review, and feature a "known or potential rate of error."

As different kinds of forensics have been measured by the Daubert criteria, some have been found wanting. For years, arson investigators looked for telltale signs of chipped concrete based on the assumption that fire accelerants like gasoline cause such fragmentation. Recent laboratory tests have shown that they don't. Investigators also long assumed that fire accelerants burn hotter than wood. A guidebook published by the National Fire Protection Association firmly disagrees.

Serious questions also have been raised about the reliability of hair evidence, which is examined microscopically for color and texture variations. A study of the first 74 prisoners exonerated by DNA evidence found that 26 had been wrongly convicted largely on the strength of hair follicles found at the crime scenes. "I keep thinking about the Coleman case," says Paul Giannelli, a law professor at Case Western Reserve University, referring to Roger Coleman, a Virginia man executed in 1992. At his murder trial, the prosecutor told the jury that it was "extremely unlikely that anyone else would have hair" that matched the hair sample taken from Coleman. "But there could have been a million [others]," Giannelli said. "He didn't know. He couldn't know."

Still, most courts are wary of scuttling useful techniques for Daubert's sake. The "proven reliability" of hair and fiber evidence made "an independent reliability determination" unnecessary, Hawaii's Supreme Court decided in a 1997 case. About 35 challenges to fingerprint evidence have been similarly rebuffed.

There are a few exceptions, though. In 1999, Judge Nancy Gertner of the Federal District Court in Massachusetts set limits on the kinds of conclusions a handwriting expert could draw before a jury in United States v. Hines. The expert could point out similarities between the defendant's handwriting and the writing on a stick-up note, the judge said, but she could not "make any ultimate conclusions on the actual authorship." The judge questioned "the validity of the field" of handwriting analysis, noting that "one's handwriting is not at all unique in the sense that it remains the same over time, or unique[ly] separates one individual from another."

Early this year, Judge Pollak stunned the legal world by similarly reining in fingerprint experts in the murder-for-hire case United States v. Plaza. Pollak was disturbed by a proficiency test finding that 26 percent of the crime labs surveyed in different states did not correctly identify a set of latent prints on the first try. "Even 100 years of 'adversarial' testing in court cannot substitute for scientific testing," he said. He ruled that the experts could show the jury similarities between the defendants' prints and latent prints found at the crime scenes, but could not say the prints matched.

An aghast prosecution scrambled to assemble a series of in-house FBI proficiency tests, conducted annually since 1995, in which less than 1 percent of examiners erred when asked to look for matches between latent and rolled prints. Pollak pulled back and allowed the fingerprint experts to present their conclusions. At the same time, he urged the National Institute of Justice to sponsor academic research on the validity of fingerprint evidence. And he fretted that the FBI's proficiency tests were "less demanding than they should be," since they used latent prints of much sharper clarity than those typically lifted from crime scenes.

Forensic experts were relieved by Pollak's change of heart, which they believe was guided in large part by the prosecution's adroit re-explanation of how painstaking ACE-V really is. They suspect that the judge, a former dean of the law schools at Yale and at Penn, was initially biased against their profession for its lack of Ivy League credentials. In his initial opinion, Pollak said that top forensic experts "tend to be skilled professionals who have learned their craft on the job without any concomitant advanced academic training." He concluded, "It would thus be a misnomer to call fingerprinting examiners a 'scientific community' in the Daubert sense."

That assessment is a bit outdated. An increasing number of forensic scientists hold graduate degrees in chemistry or molecular biology, and rigorous interdisciplinary programs are cropping up at colleges; West Virginia University recently offered the nation's first-ever four-year degree in biometrics, the science of identifying humans by unique physical traits like iris patterns and hand geometry. These students, however, typically specialize in newer techniques like DNA testing. Traditional forensics is still dominated by ex-cops and examiners "educated at the school of hard knocks," as Zercie says.

Critics wonder how forensics experts without advanced degrees can design double-blind studies, or use statistics to calculate error rates. Why can't crime-lab folks just admit that they're technicians, not scientists? But forensic experts despise the technician label. "It implies to other scientists that we don't understand the theoretical basis of what we're doing and we're not capable of drawing conclusions," says Pagliaro of the Connecticut lab. "That's clearly not the truth."

Yet as David Wayne Kunze might protest, gilding a conclusion with scientific credentials does not necessarily make it true. In 1997, the Vancouver, Wash., man was accused of bludgeoning his ex-wife's fiancé to death. The victim's son described the attacker as a 25- to 30-year-old man with dark-to-black hair; Kunze was in his mid-40s and had reddish-blond hair. Yet investigators fingered him on the strength of a latent ear print found on a bedroom door. At trial, a Dutch expert named Cor Van der Lugt testified that he was "100 percent confident" that the print matched the defendant. The jury returned a guilty verdict, and a sentence of life without parole.

On appeal, however, the ear-print evidence crumbled. A three-judge panel seemed stunned that the jury had swallowed Van der Lugt's hyperbolic testimony; no peer-reviewed studies attest to the individualizing characteristics of ear prints, and the FBI does not use such evidence. A second effort to prosecute Kunze ended in a mistrial last March. The case was subsequently dropped.

To the skeptics, Van der Lugt's overreaching reveals forensics as the science that has no clothes. "Certainty was a hallmark of science a hundred years ago," says Suspect Identities author Cole. "It's a hallmark of pseudoscience now."

Zercie counters that the whole discipline shouldn't be impugned by a few shoddy experts. If a forensic scientist's credentials seem suspect, or his methods dubious, it's the defense lawyer's job to take apart the testimony on cross-examination. "Two and two we know equals four," says Zercie. "But if I add two and two and make it three, the error rate is not in the technique of mathematics."

Forensic scientists say they control their errors through mandatory double-checks and vigorous proficiency testing, exactly as stipulated in Daubert. The Connecticut lab sends out pre-examined fibers or fingerprints to police departments, which are asked to return the samples for analysis as if they were part of real investigations. The lab then examines the samples without knowing they're part of a proficiency test; when the cops reveal the fake, the lab results are compared to the known properties of the evidence.

That testing model might not fly in a university setting, since the data and methodology are not published and made available for public inspection, as scientific custom demands. In fact, few forensic proficiency tests are seen by outsiders. Collaborative Testing Services, which conducts studies on behalf of crime labs nationwide, does not release data collected before 2001 without a court order. The few CTS results that have been made public are not encouraging. In a series of fingerprint tests conducted between 1995 and 2001, misidentification rates for latent prints ranged from 3 to 20 percent. And the tests that produce low error rates—that is, that crime labs pass—often strike impartial observers as laughably easy; in his second Plaza ruling, Judge Pollak noted that "the FBI examiners got very high proficiency grades, but the tests they took did not."

If Daubert is to have teeth, judges will have to demand tougher tests and better results. Zercie acknowledges that forensic scientists need to do a better job of allaying their critics' fears—as well as satisfying juries who've been spoiled by DNA's precision. "In some of our identification fields, it's catch-up time," he says. "We have to catch up with DNA."

Zercie hopes high-tech equipment will bring traditional forensics up to date. But computer-generated charts and spectrographs won't satisfy the naysayers unless forensic scientists clinically test their assumptions. "They think reliability means validity," says David Faigman, a professor at the University of California, Hastings College of Law. "But in science, reliability just means consistency. A thermometer that is 10 degrees wrong is 100 percent reliable, but 0 percent valid."
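Faigman's thermometer analogy can be made concrete with a short numerical sketch. The figures below are purely illustrative, not drawn from any forensic study: a measuring instrument with zero spread across repeated readings is perfectly reliable, yet a systematic bias makes it invalid all the same.

```python
import statistics

TRUE_TEMP = 98.6  # the actual value being measured (hypothetical)

# A thermometer that always reads exactly 10 degrees high:
# its readings never vary, so it is perfectly consistent (reliable),
# but every reading is systematically wrong (invalid).
readings = [TRUE_TEMP + 10.0 for _ in range(5)]

# Reliability: spread across repeated measurements (0.0 = perfectly consistent)
reliability_spread = statistics.pstdev(readings)

# Validity: systematic error relative to the true value (here, about 10 degrees)
validity_error = statistics.mean(readings) - TRUE_TEMP

print("spread:", reliability_spread)
print("bias:", validity_error)
```

The point of the sketch is that consistency checks alone, like double-reads of the same latent print, measure only the spread; they cannot detect a shared systematic error in the method itself.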

Brendan I. Koerner is a Markle Fellow at the New America Foundation.
