Evidence and Causality

If you're interested in the issue of causation, a great place to start is pages 4 through 9 of the Institute of Medicine's new report "Adverse Effects of Vaccines: Evidence and Causality". There's lots to consider. For example, support for the Texas Supreme Court's recent determination that plaintiffs will need two well-done epi studies to support a claim of general causation:

      "The committee does not consider a single epidemiologic study - regardless of how well it is designed, the size of the estimated effect, or the narrowness of the confidence interval - sufficient to merit a weight of 'high' or, in the absence of strong or intermediate mechanistic evidence, sufficient evidence to support a causality conclusion other than 'inadequate to accept or reject a causal relationship.' This requirement might seem overly rigorous to some readers. However, the Agency for Healthcare Research and Quality advises the Evidence-based Practice Centers that it has funded to produce evidence reports on important issues in health care to view an evidence base of a single study with caution (Owens et al., 2010). It does so due to the inability to judge consistency of results, an important contributor to a strength of evidence, because one cannot 'be certain that a single trial, no matter how large or well designed, presents the definitive picture of any particular clinical benefit or harm for a given treatment' (Owens et al., 2010)." (pg. 6)

You'll also get an introduction to the similar yet different GRADE v. EPC approaches to assessing and weighing scientific evidence when making causal judgments. Hopefully, as we've said in the past, courts will one day demand of experts the same sort of transparency in evidence accumulation, assessment and weight assignment that sound science demands.

Along the way you'll note the conspicuous absence of the pronouncements of experts among the things to be determinative in reaching causal judgments. By now that ought not be surprising. As Dr. Steven N. Goodman reported in his journal article "Judgment For Judges: What Traditional Statistics Don't Tell You About Causal Claims" the US Preventive Services Task Force, which is committed to evidence-based medicine, has ranked by reliability the different sorts of evidence that go into making supportable causal judgments. At the top of the list, the strongest sort of evidence, is the well-done randomized controlled trial. Then comes the non-randomized controlled trial. Then cohort or case-control studies. Then multiple time series. Dead last, and weakest of all, comes "opinions of respected authorities ..." If science hasn't much use for the mere ipse dixit of credentialed experts, it's hard to imagine why the law should hold otherwise.

Anyway, to find out what vaccines cause, and don't cause, and how sound causal judgments are made, this new IOM report is well worth your time.



"The Cost to the Health of Our Microbial Ecosystems"

Gina Kolata has another good read at the NYTimes in "The New Generation of Microbe Hunters". The word, as you can see, is quickly getting out; the old ways of thinking about the determinants of human health are crumbling as the discovery that we are "super-organisms" (more bacterial than human, at least from a genetic perspective) sweeps away old notions about what makes us sick, what keeps us healthy and even what (and maybe who) we are.

For other dispatches from the revolution you might want to read about just how big a deal this is, how much we know, how much remains to be understood and the promise of biotherapeutics; or maybe, since there's a little Gilgamesh in each of us, how changing the bacteria in the gut of mice makes the rodents live significantly longer; then there's a dysregulated microbiome and rheumatoid arthritis; new insights into how H. pylori causes gastric cancer; and gut microbes can cause cancer of the liver and breast (in mice anyway); and changing the gut microbiota to treat type 2 diabetes and, and, and ... There's a torrent of literature but that'll give you an idea of what's out there and what's coming.

None of that is to say "Eureka!", that they've found the answer. Likely (as it's wise to hedge bets) the causation onion has many layers yet to be uncovered. No, the point is twofold. First, the 40-year-old idea championed by public health advocates pushing what they call social, or environmental, justice - that much if not most human suffering is due to bad industrial chemicals or the bad habits inculcated in consumers by nefarious corporations bent on selling them things they don't want or need - was never sound, but now it's just silly. Second, if you've been paying attention, you'll understand that an awful lot of illness and suffering has been caused by stuff nobody, we presume, ever fretted about. But who knows? Maybe somebody somewhere has the disrupted microbiome version of the Sumner Simpson Papers. Wouldn't that be something?

Havner: Now We're Really Confused

The Texas Supreme Court just decided Merck v. Garza. The relatively short opinion rolls along (1) reaffirming Havner; (2) apparently adding the further requirement of a second well-done epidemiological study "statistically significant at the 95% confidence level" that shows a doubling of risk; (3) rejecting the "totality of the evidence" ipse dixit of plaintiff's expert; but then suddenly (4) utterly confounding us by holding "when parties attempt to prove general causation using epidemiological evidence, a threshold requirement of reliability is that the evidence demonstrate a statistically significant doubling of the risk". What?!

The whole purpose of the "doubling of the risk" requirement had been, we thought, to ensure that when a plaintiff has nothing but probabilistic evidence, that evidence actually supports a "more likely than not" causal inference as to her specific illness. There are numerous agents that produce small effects (i.e., relative risks less than 2.0) and yet are unquestionably causative of human disease. Hopefully, the court meant "specific" where it wrote "general" regarding risk doubling.
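The "more likely than not" logic is simple arithmetic: the probability that an exposed case is attributable to the exposure is the attributable fraction (RR - 1)/RR, which exceeds 50% only when the relative risk exceeds 2. A minimal sketch (assuming, as courts typically do, that the study's relative risk is causal and transferable to the plaintiff):

```python
def prob_of_causation(rr):
    """Attributable fraction among the exposed: the chance that an exposed
    case is due to the exposure, assuming the relative risk is causal and
    applies to the plaintiff. Exceeds 0.5 only when rr > 2."""
    return (rr - 1) / rr

assert prob_of_causation(2.0) == 0.5    # exactly at the doubling threshold
assert prob_of_causation(1.5) < 0.5     # a real but small effect: under 50%
```

A relative risk of 1.5 can reflect a perfectly real general cause yet yields only a one-in-three probability for any particular exposed case, which is why the doubling requirement makes sense for specific, not general, causation.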

Yet there's another problem on the very next page. Apparently courts are now to "examine the design and execution of epidemiological studies using factors like the Bradford Hill criteria to reveal any biases that might have skewed the results of the study." Again: What?! We thought (and we're pretty sure we're right) that Hill's list of factors was his way of assessing a given claim of general causation. And anyway, that's not how you look for bias. This is how you look for bias: "Excess Significance Bias in the Literature on Brain Volume Abnormalities".
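The bias-hunting method applied in that brain-volume paper (an Ioannidis/Trikalinos-style excess-significance test) asks whether a literature contains more statistically significant results than its studies had the power to produce. A simplified sketch with invented numbers, treating the average power as a common per-study success probability:

```python
import math

def excess_significance(powers, observed_significant):
    """Sketch of an excess-significance test. `powers` holds each study's
    estimated power to detect the pooled effect; their sum is the expected
    number of significant results. Returns (expected, one-sided
    P(X >= observed)) under a binomial model with the mean power as a
    common success probability -- a simplification of the real method."""
    n = len(powers)
    p = sum(powers) / n
    expected = sum(powers)
    tail = sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(observed_significant, n + 1))
    return expected, tail

# Invented literature: 10 studies, each with ~30% power, yet 8 report
# significant results where only ~3 would be expected.
expected, p_value = excess_significance([0.3] * 10, 8)
```

A tail probability that small (here well under 1%) is the statistical fingerprint of selective reporting; that, and not a tour through Hill's factors, is how one actually looks for bias.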

In sum, we of course liked the court's conclusion that when each piece of plaintiff's supposedly supportive evidence is flawed "a plaintiff cannot prove causation by presenting different types of unreliable evidence." Yet, recognizing that causal inference is hard (nearly maddening sometimes) and that statistical inference is complicated and counterintuitive, we wish the court had done a better job on this one. The deviations from standard analysis will only support those who complain that the current court is "merely results oriented".

The Malleability of Memory

Upon reading the New Jersey Supreme Court's decision in State v. Larry R. Henderson, the first thought that occurred to me was that if courts hearing toxic tort cases in which product identification is an issue were to scrutinize such testimony under a similar standard, our toxic justice problem would soon be solved.

The science stuff starts on page 40. The social science is, per usual, weak, with small sample sizes, unrepresentative subjects, and analysis that too often consists of counting the hands of credentialed experts. Nevertheless, the evidence that "memories fade with time" and that "memory decay is irreversible" ought to be kept in mind any time a witness claims to recall the incidental use of a product decades before.


Speaking of Dusty Death

 ... there are lots of new papers associating various sorts of dusts with cancer, pneumonia and cardiovascular disease. We've previously discussed the correlation between endotoxins and a reduced risk of lung cancer but how to square those studies with ones like "Occupational Exposure to Organic Dust Increases Lung Cancer Risk in the General Population" (which identifies exposure to dust from "microbial, plant or animal" sources)?

On the inorganic dust front there's "Increased Mortality From Infectious Pneumonia After Occupational Exposure to Inorganic Dust, Metal Fumes and Chemicals". It's a study of 320,143 workers in the construction industry that finds a fairly large increase in risk of death from pneumonia among those exposed to a mixture of inorganic dusts, yet the opposite outcome for those exposed to just one sort of dust.

Finally, on the PM2.5 front there's "The Effect of Particle Size on Cardiovascular Disorders - The Smaller the Worse", focusing, obviously, on size rather than substance; "The Effects of Particulate Matter Sources on Daily Mortality: A Case-Crossover Study of Barcelona, Spain", which again finds that the observed correlation with increased cardiovascular risk is a matter of size over substance; and "Particulate Air Pollution and Socioeconomic Position in Rural and Urban Areas of the Northeastern United States", which offers evidence that the association is not, perhaps, the result of confounding whereby people more prone to cardiovascular disease (for which socioeconomic status is a big risk factor) are more likely to live near sources of PM2.5.



An Early Return Indeed

For more evidence that Milward v. Acuity is “one of the most significant toxic tort cases in recent memory” check out footnote 53 of "Introduction: The Third Restatement of Torts in a Crystal Ball". To see what else is in the works, check out those portions of the paper discussing the assault on the so-called shares defense and the effort to wholly expunge foreseeability from duty and thereby ensure that the ordinarily prudent person test is replaced with an any-risk test.


Way To Go, Joe

I've got to brag on my partner, Joe Lonardo. He just prevailed in a benzene/leukemia case before the Vermont Supreme Court and the opinion is well worth the time required to read and digest it. Building on its decision in Estate of George v. Vermont League of Cities and Towns, the court embraced critical thinking and a Bayesian approach to causal reasoning, and so held that empty evidence can't change prior, or baseline, beliefs and that plaintiff's argumentum ad ignorantiam won't fly in Vermont. Here's how it went.

Plaintiff's claim can be distilled to the following: 1) plaintiff has lymphoma of the central nervous system (CNS lymphoma); 2) CNS lymphoma is a subtype of non-Hodgkin's lymphoma (NHL); 3) benzene has been shown in some studies to double the risk of NHL; 4) plaintiff was exposed to benzene; 5) an alternate cause of CNS lymphoma was ruled out by his expert; so, 6) plaintiff's CNS lymphoma was caused by his benzene exposure. Q.E.D.

Not so fast, said the Vermont Supreme Court.

The vast majority of cases of CNS lymphoma "are of unknown etiology". Accordingly, our initial belief must be that plaintiff's CNS lymphoma is similarly likely to be of unknown etiology. So what evidence does plaintiff have that might reasonably move a sensible jury away from the belief that any given case of CNS lymphoma is due to some unknown cause and towards benzene? Plaintiff could have shown that he was exposed to a level of benzene that so increased his risk of CNS lymphoma that we ought to consider it as a likely cause. But this plaintiff couldn't show even by a rough approximation what his exposure might have been, much less that the dose experienced appreciably increased his risk of developing the disease. Then again, he could have shown that the manner or circumstances in which he was exposed, whatever the dose, has been found to be the likely cause of CNS lymphoma in some similarly exposed group of individuals. But he had no evidence of that either. There was then nothing to cause a sensible person to move off the baseline belief - that plaintiff's was an ordinary disease of life.
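The court's Bayesian framing reduces to simple arithmetic in odds form. A minimal sketch, with the baseline number purely hypothetical (the opinion doesn't quantify it):

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood
    ratio. Evidence that is equally probable whether or not benzene caused
    the disease has a likelihood ratio of 1 and moves nothing."""
    return prior_odds * likelihood_ratio

# Hypothetical baseline (not from the opinion): if, say, 95% of CNS
# lymphomas are of unknown etiology, the starting odds that any given
# case was benzene-induced are at most 5:95.
prior = 5 / 95

# "Empty" evidence -- an exposure of unknown dose, with no study tying
# the circumstances of exposure to CNS lymphoma -- has likelihood ratio 1:
assert posterior_odds(prior, 1.0) == prior
```

That's the court's holding rendered as arithmetic: evidence equally consistent with both hypotheses can't move a sensible juror off the baseline.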

Plaintiff next tried the differential diagnosis dance but got nowhere. The court clearly understood that an unweighed risk factor, abstract and disconnected from the circumstances (i.e. dose/exposure) in which it was detected is not the same thing as a potential cause to be weighed in a differential diagnosis or process of elimination exercise. Thus it held that any attempt to establish benzene as the cause of plaintiff's CNS lymphoma by ruling out everything else "must fail" because plaintiff couldn't demonstrate that his benzene exposure belonged among the potential causes to be considered in the first place.

The court then demonstrated that critical thinking isn't just for good scientists. Plaintiff had found an expert who could rule out one cause of CNS lymphoma and so he constructed the following argument: 1) plaintiff has CNS lymphoma; 2) some cases of CNS lymphoma are caused by an immunodeficiency disorder; 3) plaintiff doesn't have an immunodeficiency disorder; therefore, 4) benzene caused plaintiff's CNS lymphoma. That's about the purest form of the fallacy known as the appeal to ignorance and the Vermont Supreme Court would have none of it. (It's also, by the way, a sort of enthymeme, an argument with a hidden or unexcavated premise - here, that any benzene exposure is a potential cause of CNS lymphoma - and it lies at the heart of most "reasoning to the best explanation" seen at the courthouse). The court held that when the cause of most cases of a disease is unknown the ruling out of one cause cannot be evidence in favor of some other cause. 

Finally, and quite interestingly, the court briefly elaborated on its decision in George, an opinion which has come in for criticism from those hoping to lower the barriers meant to keep out all but sound science (see, e.g., "The 'Reshapement' of the False Negative Asymmetry in Toxic Tort Causation"). The court, it seems, holds to the same view as organizations like the National Academies of Science and the U.S. Preventive Services Task Force - that experts weighing scientific studies ought to be able to say how they did the weighing and to state "the weight given to each study". There is, after all, not much left of the scientific method without measurements and methods.

That's just my take. Go read it for yourself: Blanchard v. Goodyear.


From Post-Normal Science to Post-Normal Law?

Carl Cranor certainly understands the impact of Milward v. Acuity as you can see from his recent blog post at the Center for Progressive Reform. Over the coming days we'll examine several of Cranor's points but for today let's start with his enthusiastic approval of the appellate court's rejection of "an 'atomistic' study-by-study assessment of the scientific basis of expert testimony." 

In the sort of scientific induction that is the basis for most expert opinion in toxic tort cases, broad conclusions are drawn from specific data.  Thus in Milward, plaintiff's expert extrapolated from four particular sets of data to his conclusion that benzene is generally capable of causing acute promyelocytic leukemia (APL) in humans.  The trial court, however, reviewed each particular bit of data which supposedly supported the inference and found each to be wanting and so excluded the opinion.  But the US Court of Appeals for the First Circuit held that "[t]he district court erred in reasoning that because no one line of evidence supported a reliable inference of causation, an inference of causation based on the totality of the evidence was unreliable."  That court then went even further and adopted a view of how science is done that is advanced by just a small cadre of non-scientist academics. It approved Cranor's conception of scientific induction holding that "[t]he hallmark of the weight of the evidence approach is reasoning to the best explanation for all of the available evidence."  The problem, for those of us stuck in The Enlightenment, is that an argument founded on false premises cannot, save by sheer accident, lead to the truth.

Let's say there are four studies recording the incidence of some disease in a work force and the dose, or exposure, to the chemical in question sustained by the workers being studied.  The data, according to plaintiff's expert, looks like this:

From such data a scientist could reasonably infer that there is a relationship between dose and the incidence of disease and specifically that doubling the exposure doubles the risk of disease. Data like that is generally a powerful indicator of a true causal connection if subsequently confirmed by other studies.  But let's say that we examine the four data points and find that the expert has either misreported or misinterpreted the data and that it really looks like this:

What the appellate court has said in Milward is that somehow, based solely on the subjective weight given each bit of data and his interpretation of "the totality" of the data, an expert is free to testify to a conclusion that not only is unsupported by, but is completely at odds with, the premises from which it was derived.  What's going on here?  
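The contrast the hypothetical draws can be restated with purely invented numbers (not the Milward record): data in which doubling the exposure doubles the risk supports the inference; data in which risk wanders independently of dose does not.

```python
# Purely invented (dose, relative risk) pairs -- not the Milward data.
as_reported = [(1, 2.0), (2, 4.0), (4, 8.0), (8, 16.0)]  # risk tracks dose
as_found    = [(1, 2.1), (2, 0.9), (4, 1.2), (8, 1.0)]   # risk wanders

def doubling_consistent(data, tol=0.25):
    """Rough check that doubling the dose roughly doubles the risk,
    i.e. that risk per unit dose is close to constant across studies."""
    ratios = [risk / dose for dose, risk in data]
    return max(ratios) / min(ratios) <= 1 + tol

assert doubling_consistent(as_reported)
assert not doubling_consistent(as_found)
```

An expert whose conclusion survives the second data set unchanged isn't weighing the evidence; he's ignoring it.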

What's up is that the Court has bought into, whether it recognizes it or not, the concept of "post-normal science". It's an idea advanced by Jerome Ravetz and embraced by Carl Cranor and many in the movement that seeks to incorporate the precautionary principle into our laws. The idea is explicated most clearly in "Towards a Non-Violent Discourse in Science" in which Ravetz explains that the Enlightenment's view of science which has prevailed to this day - that "in the natural sciences, whose conclusions are true and necessary and have nothing to do with human will" ... we must "give up this idea and this hope of [ours] that there may be men so much more learned, erudite and well-read than the rest of us as to be able to make that which is false become true in defiance of nature" (Galileo Galilei) - is yielding to a new conception of science necessitated by our modern scary world. A world in which "facts are uncertain, values in dispute, stakes high and decisions urgent". A world in which Enlightenment-style science too often serves "the morally dubious worlds of profit, power and privilege".

Essentially the idea is that science has become "authoritarian". It imposes what it claims to be truth on people who have genuinely held beliefs that lead them to a very different conception of how the universe works. And when it comes to risk, it ignores social constructions of risk that lead people like Cranor to believe that, for example, autism is linked to pharmaceuticals, alcohol and living too close to a freeway. Consequently, "housewife epidemiology" and the fervently held beliefs of activists ought to be weighed in the scales of judgment alongside the data and the test results of such theories.  (See "Legally Poisoned: How the Law Puts Us at Risk From Toxicants"). Most importantly it embraces the view that there are indeed people, experts, "so much more learned, erudite and well-read than the rest of us" as to hold a view of truth immune to, and indeed beyond the reach of, "normal science".

It's all, in our view, a dreadful misreading of Thomas Kuhn's "The Structure of Scientific Revolutions". It twists the conclusion that scientists are no happier to admit their errors than regular folks into a claim that all science is a sort of social construct - that there is no truth, and that what scientists really do is to weigh the facts they find relevant to a nicety in the scales of their subjective judgment. And by incorporating such a view into our law, originally, at least, derived from and founded upon the empiricism of the Enlightenment, we adopt a view in which the law exists not to guide our future actions as citizens but rather, typically ex post facto in the case of toxic torts, to support whatever fad or fear motivates us in the moment.

More in coming days.