With this year's large scandal involving the FBI's bungling of hair analysis, it is easy to wonder what other types of unreliable scientific evidence have made it through our legal system. The short answer is that the failure in hair analysis is only the tip of a much more troubling iceberg – unreliable science makes it into the courts far more often than we like to admit (e.g., bitemark evidence, silicone implants). This blog has stated before that the answer to this problem is simple: admit only expert opinions that are "borne out by observation of predictions made by [a] theory." This is not controversial. Indeed, it is in some sense the very definition of "science," and it is the standard baked into Daubert (and, to a lesser extent, Frye). The harder question, however, is how a system can be created to ensure that this actually happens. Who decides what is "good science" in litigation?
As litigators well know, testifying experts are chosen specifically because they support the client's position. The "science" being presented may or may not actually be accepted by mainstream scientists. Cross-examination is the traditional method of questioning experts – but both sides use this technique, resulting in a confusing zero-sum game with little clarity and plenty of erroneous outcomes. What I proposed in my article in Bloomberg BNA is that the best way to ensure valid science is to have neutral, mainstream scientific peers blindly evaluate expert opinions. Indeed, peer review is the system scientists have used for centuries to validate scientific theories and hypotheses. It presents the best available option for ensuring that only valid science survives admissibility under Daubert/Frye.
Peer Review as a Tool for Litigators
Daubert and Frye motions to exclude experts are arguably underutilized in the current system. Many lawyers undoubtedly fear showing their cards early or gambling on a motion that will ultimately be decided by dueling experts. In many ways, excluding experts under Daubert/Frye relies on the same arguments and questions that a litigator will raise at a deposition or on cross-examination.
Frye, which is still employed in many states, requires judges to determine whether an expert's opinions have been generally accepted by the scientific community from which they came. Unfortunately, judges have no mechanism by which to ask scientists this question. In practice, judges today typically either inquire whether other courts have accepted similar testimony from other experts, or ask the expert himself whether his methods are accepted (with the predictable "yes" response). This makes exclusion under Frye risky – how can a judge truly assess admissibility when she has no external, reliable source of information?
In an opinion concurring in part and dissenting in part, Chief Justice Rehnquist complained in Daubert that the rule as interpreted would require judges to become "amateur scientists." And indeed it might, since it calls upon trial court judges to be "gatekeepers" and assess the validity of expert opinion by analyzing the methods and principles on which it is based. This is a tall order for judges, who might come across scientific evidence ranging from acoustics to zoology. Certainly, not even scientists are able to analyze the methods or principles of other fields. Would you ask a psychologist to evaluate medical causation? As a result, Daubert motions are correspondingly risky as well – only a "sure bet" might have a chance of convincing a judge that an expert is not sound.
Ultimately, the risky nature of Frye and Daubert motions stems from the adversarial bias a judge confronts in evaluating expert evidence. Without external feedback to assess reliability, judges (like most scientists) have no "reliable" way to evaluate expert conclusions. In the scientific community, such feedback comes from disinterested peer reviewers. In the legal context, peer review of expert testimony provides the most promising avenue for making Daubert and Frye exclusion more predictable and effective.
Fitting Peer Review into Litigation
Peer review fits into multiple places in any litigation. First, peer review can help to solidify a draft expert report. In a recent case study by JuriLytics – City of Pomona v. SQM – a groundwater expert was excluded by a district court, and the exclusion was then reversed on appeal. In that case, peer review of the expert's draft report would have addressed many of the concerns the judge had with the expert's conclusions and might very well have saved him from exclusion – and thereby saved the city $100k+ in appellate costs. The moral of the story: no matter how confident an expert or litigator is in an expert opinion, peer review will always add valuable input to an expert report. Further, reviewers would be treated like any other consulting expert – they would remain confidential under the work-product doctrine and would not be discoverable.
Second, peer review can be used as a sword or shield when deciding whether Daubert or Frye is a good avenue for pursuing (read: winning) your case. As I said above, expert exclusion has traditionally been a gamble – will a judge understand enough to determine why an opinion is (un)reliable? This calculation changes drastically with peer review. Now, litigators can test whether expert exclusion could be used to win the case. Positive reviews can be cited in Daubert or Frye motions as appendices. Reviewers might get deposed, but only if the reviews are actually employed (i.e., only if they are favorable).
All told, peer review is a general tool that lets litigators bring credible, mainstream science into the courtroom without losing control of the litigation. Many other applications of this core concept remain to be imagined. It is nice to know that there is an expert willing to testify to your position. But the sure bet is when you convince the judge that mainstream science is on your side.
(Many thanks, Dr. Faigman – when I read that you'd clerked for Reavley, I figured (as they still say down here) that your rebuttal would be well and concisely argued. — DAO)
David L. Faigman is the John F. Digardi Distinguished Professor of Law at the University of California Hastings College of the Law and the Co-Founder and CEO of JuriLytics, LLC. He holds an appointment as Professor in the School of Medicine (Dept. of Psychiatry) at the University of California, San Francisco. He received both his M.A. (Psychology) and J.D. from the University of Virginia. Professor Faigman clerked for the Honorable Thomas Reavley of the U.S. Court of Appeals for the Fifth Circuit. He is the author of numerous articles and essays. He is also the author of three books: Constitutional Fictions: A Unified Theory of Constitutional Facts (Oxford, 2008), Laboratory of Justice: The Supreme Court's 200-Year Struggle to Integrate Science and the Law (Henry Holt & Co., 2004), and Legal Alchemy: The Use and Misuse of Science in the Law (W.H. Freeman, 1999). In addition, Professor Faigman is a co-author/co-editor of the five-volume treatise Modern Scientific Evidence: The Law and Science of Expert Testimony (with Blumenthal, Cheng, Mnookin, Murphy & Sanders). The treatise has been cited widely by courts, including several times by the U.S. Supreme Court. Professor Faigman was a member of the National Academies of Science panel that investigated the scientific validity of polygraphs, and he is a member of the MacArthur Law and Neuroscience Network.