You'd think that courts would be leery about dressing their Daubert gatekeeping opinions in the "differential etiology method". After all, as you can see for yourself by running the query on PubMed, the U.S. National Library of Medicine / National Institutes of Health's massive database of scientific literature, apparently nobody has ever published a scientific paper containing the phrase "differential etiology method". Of the mere 22 articles ever to contain the sub-phrase "differential etiology", none uses it in the sense meant by the most recent court to don its invisible raiment: to rule in a heretofore unknown cause. Even mighty Google Scholar can manage to locate only 6 references to the "method", and all are law review articles resting not upon some explication and assessment of a scientific method known as differential etiology but rather upon the courtroom assertions of paid experts who claimed to have used it.
You'd also hope courts would understand that scientific evidence is no different from any other kind of evidence. It must still be something that has been observed or detected, albeit with techniques (e.g. nuclear magnetic resonance spectroscopy) or via analyses (e.g. epidemiology) beyond the ken of laymen. Yet, while they'd never allow into evidence (say in an automobile case) the testimony of someone who had witnessed neither the accident nor the driving habits of the Defendant but who was prepared to testify that he thought the Defendant was speeding at the time of the accident because Defendant looks like the sort of person who would speed and because he can't think of any other reason for the wreck to have occurred, some courts will allow that very sort of testimony so long as it comes from a PhD or an M.D. who has used the "weight of the evidence method". Can you guess how many references to scientific papers using the "weight of the evidence method" PubMed yields? The same 0 as the "differential etiology method".
Nevertheless, another (memorandum) opinion has joined the embarrassing procession of legal analyses bedecked in these ethereal methods; this time it's a radiation case styled McMunn v. Babcock & Wilcox Power Generation Group, Inc.
Plaintiffs suffering from a wide variety of cancers allegedly caused by releases of alpha particle emitting processed uranium from the Apollo, PA research and uranium fuel production facility sued Babcock and other operators of the site. Following battles over a Lone Pine order and extensive discovery, the sides fired off motions to exclude each other's experts. The magistrate to whom the matter had been referred recommended that plaintiffs' general causation expert Dr. Howard Hu, specific causation expert Dr. James Melius, emissions and regulations expert Bernd Franke and nuclear safety standards expert Joseph Ring PhD be excluded. The plaintiffs filed objections to the magistrate's recommendations, the parties filed their briefs, and the District Court rejected the magistrate's recommendations and denied defendants' motions.
Dr. Hu had reasoned that since 1) ionizing radiation has been associated with lots of different kinds of cancer; 2) alpha particles ionize; and 3) IARC says alpha particles cause cancer, it makes sense that 4) the allegedly emitted alpha particles could cause any sort of cancer a plaintiff happened to come down with. It's not bad as hunches go, though it's almost certainly the product of dogma - specifically, the linear no-threshold dose model - rather than the wondering and questioning that so often leads to real scientific discoveries. But whether a hunch is the product of the paradigm you're trapped in or the "what ifs" of day dreaming, it remains just that until it's tested. Unfortunately for Dr. Hu's hunch, it has been tested.
Thorotrast (containing thorium - one of the alpha emitters processed at the Apollo facility) was an X-ray contrast medium that was directly injected into numerous people over the course of decades. High levels of radon could be detected in the exhaled breath of those patients. If Dr. Hu's hunch were correct, you'd expect those patients to be at high risk for all sorts of cancer. They're not. They overwhelmingly get liver cancer and have a fivefold increase in blood cancer risk, but they're not at increased risk for lung cancer or the other big killers. Why? It's not clear, though the fact that alpha particles can't penetrate paper or even skin suggests one reason. Look for yourself and you'll find no evidence (by which we mean that the result predicted by the hunch has actually been observed) to support the theory that alpha particles can cause all, most or even a significant fraction of the spectrum of malignancies, whether they're eaten, injected or inhaled and whether at home or at work. Be sure to check out the studies of uranium miners.
But let's assume that alpha particles can produce the entire spectrum of malignancies, that the emissions from the facility into the community were sufficiently high, and that the citizenry managed to ingest the particles. What would you expect the cancer incidence to be for that community? Probably not what repeated epidemiological studies found; they concluded that "living in municipalities near the former Apollo-Parks nuclear facilities is not associated with an increase in cancer occurrence".
Dr. Hu attacked the studies of uranium miners and of the communities around Apollo by pointing out their limitations. This one didn't have good dose information and that one had inadequate population data. Perfectly reasonable. It's like saying "I think you were looking through the wrong end of the telescope" or "I think you had it pointed in the wrong direction". He's saying "your evidence doesn't refute my hunch because your methods didn't test it in the first place."
Ok, but where's Dr. Hu's evidence? It's in his mind. His hunch is his evidence; and it's his only evidence. He weighed some unstated portion of what is known or suspected about alpha particles and cancer in the scales of his personal judgment and reported that CLANG! the result of his experiment was that the scale came down solidly on the side of causation for all of plaintiffs' cancers.
At the core of the real scientific method is the idea that anyone with enough time and money can attempt to reproduce a scientist's evidence; which is to say, what he observed using the methods he employed. Since no one other than Dr. Hu has access to Dr. Hu's observations and methods, his hunch is not science. Furthermore, there's no way to assess to what extent the heuristics that bias human decision-making impacted Dr. Hu's "weighing it in my mind" approach.
Given that there's no way to reproduce Dr. Hu's experiment and given that none of the reproducible studies of people exposed to alpha particles demonstrate that they're at risk of developing the whole gamut of cancer Dr. Hu's argument boils down to that of the great Chico Marx: "Who are you going to believe, me or your own eyes?" Alas, the court believed Chico and held that "Dr. Hu's opinions have met the pedestrian standards required for reliability and fit, as they are based on scientifically sound methods and procedures, as opposed to 'subjective belief or unsupported speculation'".
Next, having already allowed plaintiffs to bootstrap alpha emitters into the set of possible causes of all of the plaintiffs' cancers, the court had no problem letting the specific causation expert, Dr. Melius, conclude that alpha emitters were the specific cause of each plaintiff's cancer merely because he couldn't think of any other cause that was more likely. At first that might seem sensible. It's anything but.
Don't you have to know how likely it is that the alpha emitters were the cause before you decide if some other factor is more likely? Obviously. And where does likelihood/risk information come from? Epidemiological studies in which dose is estimated. Of course the plaintiffs don't have any such studies (at least none that support their claim against alpha emitters), but couldn't they at least use the data for ionizing radiation from, say, the atomic bomb or Chernobyl accident survivors? After all, the court decided to allow plaintiffs' experts to testify that "radiation is radiation".
Well, just giving a nod to those studies raises the embarrassing issue of dose, and the one lesson we know plaintiffs' counsel have learned over the last several years of low-dose exposure cases is to never, ever, ever estimate a dose range unless they're ordered to do so. That's because dose is a measurement that can be assessed for accuracy and used to estimate likelihood of causation. Estimating a dose thus opens an avenue for cross examination, but more devastatingly it leads to the argument that runs: "Plaintiff's own estimate places him in that category in which no excess risk has ever been detected."
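The link between dose and likelihood of causation can be made concrete. In radiation litigation the usual tool is the "probability of causation" (or assigned share), computed from the relative risk that epidemiological studies associate with the plaintiff's estimated dose. A minimal sketch of the arithmetic, with hypothetical relative-risk numbers not drawn from the McMunn record:

```python
def probability_of_causation(relative_risk: float) -> float:
    """Assigned share: the probability that the exposure, rather than
    background causes, produced the disease, given the relative risk
    (RR) observed at the plaintiff's dose.  PC = (RR - 1) / RR.
    When RR <= 1 the exposure adds no detectable risk, so PC is 0."""
    if relative_risk <= 1.0:
        return 0.0
    return (relative_risk - 1.0) / relative_risk

# Hypothetical illustration: a dose placing a plaintiff at RR = 2.0
# yields PC = 0.5, the "more likely than not" threshold; a dose at
# RR = 1.2 (typical of low-dose studies) yields PC of only about 0.17.
print(probability_of_causation(2.0))            # 0.5
print(round(probability_of_causation(1.2), 2))  # 0.17
```

This is exactly why an estimated dose is dangerous to a plaintiff: once the dose is pinned down, the relative risk at that dose, and hence the probability of causation, can be checked against the published studies.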
Fortunately for plaintiffs the court held that Dr. Melius' differential diagnosis or differential etiology method does not require that he estimate the likelihood that radiation caused a particular cancer before he can conclude that radiation is the most likely cause among many (including those unknown).
First the court held that it was not its job to decide which method is the best among multiple methods so long as the method is reliable. For this it relies upon In re TMI Litigation. When In re TMI Litigation was decided (1999) the Nobel Prize in Physiology or Medicine for the discovery that Helicobacter pylori and not stress was the cause of most peptic ulcers was six years in the future. The method of observational epidemiology and application of the Hill causal criteria had generated the conclusion that peptic ulcers were caused by stress. The method of experimentation, observation and application of Koch's postulates established H. pylori (nee C. pyloridis) as the real cause; and, for the umpteenth time, experimentation as the best method. So could a court allow a jury to decide that the peptic ulcer in a plaintiff with a H. pylori infection was caused by stress at work? Apparently the answer in the Third Circuit is "Yes"; scientific knowledge be damned.
Second, citing In re Paoli and other Third Circuit cases, the court held that differential etiology (without knowledge of dose) has repeatedly been found to be a reliable method of determining causation in toxic tort cases. As we've written repeatedly, this is a claim wholly without support in the scientific literature. Are there studies of the reliability of differential diagnoses made by radiologists? You bet (here for example). Are there studies of immunohistochemical staining for the purpose of differential diagnosis in the case of suspected mesothelioma? Yep. Here's a recent example. There are (as of today) 236,063 pages of citations to articles about differential diagnosis on PubMed, and none of them (at least none I could find via key word searches) suggests that a methodology such as Dr. Melius' exists (outside the courtroom), and none represents an attempt to test the method to see if it is reliable.
Third, the court held that because Dr. Melius' opinions were the result of his "qualitative analysis", the fact that plaintiffs were living in proximity to the facility during the times of the alleged radiation releases and the fact that Babcock failed to monitor emissions and estimate process losses to the environment were enough to allow a jury to reasonably infer that plaintiffs were "regularly and frequently exposed to a substantial, though unquantifiable dose of iodized [ionized?] radiation emitted from the Apollo facility." How such reasoning can be anything other than argumentum ad ignorantiam is beyond our ability to glean.
Worse yet is this sentence appearing after the discussion about the absence of data: "A quantitative dose calculation, therefore, may in fact be far more speculative than a qualitative analysis." What would Galileo ("Measure what is measurable, and make measurable what is not so") the father of modern science make of that? Yes an estimate could be wrong and a guess could be right but the scientist who comes up with an estimate makes plain for all to see her premises, facts, measurements, references and calculations whereas the expert peddling qualitative analyses hides his speculation behind his authority. Besides, dose is the only way to estimate the likelihood of causation when there are multiple, including unknown, alternate causes. Then again, in addition to everything else Galileo also proved that questioning authority can land you in hot water so we'll leave it at that.
Finally, in the last and lowest of the low hurdles set up for Dr. Melius, the court found that he had "adequately addressed other possible cause of Plaintiffs' cancers, both known and unknown." How? By looking for "any risk factor that would, on its own, account for Plaintiffs' cancers", reviewing medical records, questionnaires, depositions, work histories and interviewing a number of plaintiffs. Presumably this means he looked for rare things like angiosarcoma of the liver in a vinyl chloride monomer worker and mesothelioma in an insulator, commoner things like lung cancer in heavy smokers and liver cancer in hepatitis C carriers, and hereditary cancers (5% to 10% of all cancers) like acute lymphoblastic leukemia in people with Down syndrome or soft tissue sarcomas in kids with Li-Fraumeni Syndrome. You can make a long list of such cancers but they represent perhaps one fourth of all cases. Of those cancers that remain there will be no known risk factors so that once you're allowed to rule in alpha emitters as a possible cause ("radiation is radiation") and to then infer from both "qualitative analysis" and the absence of data that a "substantial" exposure occurred you've cleared the substantial factor causation hurdle (which at this point is just a pattern in the courtroom flooring). Having gotten to the jury all that remains is to make the argument plaintiffs' counsel made before Daubert: "X is a carcinogen, Plaintiff was exposed to X, Plaintiff got cancer; you know what to do."
We're living through an age not unlike Galileo's. People are questioning things we thought we knew and discovering that much of what the Grand Poohbahs have been telling us is false. There's the Reproducibility Project: Psychology, the genesis of which included the discovery of a widespread "culture of 'verification bias'" (h/t ErrorStatistics) among researchers and their practices and methodologies that "inevitably tends to confirm the researcher's research hypotheses, and essentially render the hypotheses immune to the facts...". In the biomedical sciences only 6 of 53 papers deemed to be "landmark studies" in the fields of hematology and oncology could be reproduced, "a shocking result" to those engaged in finding the molecular drivers of cancer.
Calls to reform the "entrenched culture" are widespread and growing. Take for instance this recent piece in Nature by Regina Nuzzo in which one aspect of those reforms is discussed:
It would have to change how statistics is taught, how data analysis is done and how results are reported and interpreted. But at least researchers are admitting that they have a problem, says (Steven) Goodman [physician and statistician at Stanford]. "The wake-up call is that so many of our published findings are not true."
How did we get here? A tiny fraction of the bad science is the result of outright fraud. Of the rest some is due to the natural human tendency to unquestioningly accept, and overweigh the import of, any evidence that supports our beliefs while hypercritically questioning and minimizing any that undercuts it (here's an excellent paper on the phenomenon). Thanks to ever greater computing power it's becoming easier by the day to "squint just a little bit harder" until you discover the evidence you were looking for. For evidence that some researchers are using data analysis to "push" until they find something to support their beliefs and then immediately proclaim it read: "The life of p: 'Just significant' results are on the rise." For evidence that it's easy to find a mountain of statistical associations in almost any large data set (whereafter you grab just the ones that make you look smart) visit ButlerScientifics which promises to generate 10,000 statistical relationships per minute from your data. Their motto, inadvertently we assume, makes our case: "Sooner than later, your future discovery will pop up."
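The data-dredging point is easy to demonstrate for yourself. The sketch below (pure Python, entirely made-up random data) generates dozens of mutually independent noise variables, tests every pairwise correlation at the conventional p < 0.05 level, and finds dozens of "statistically significant" associations where, by construction, there is nothing to find:

```python
import math
import random

random.seed(7)

n_obs, n_vars = 200, 40
# Forty variables of pure noise: no variable has any real
# relationship to any other.
data = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Approximate two-sided p < 0.05 cutoff for |r| at n = 200.
cutoff = 1.96 / math.sqrt(n_obs)

pairs = [(i, j) for i in range(n_vars) for j in range(i + 1, n_vars)]
hits = sum(1 for i, j in pairs if abs(pearson_r(data[i], data[j])) > cutoff)

# Roughly 5% of the 780 pairs come up "significant" anyway.
print(f"{hits} 'significant' correlations out of {len(pairs)} pairs")
```

Scale the same exercise up to a large observational data set and thousands of automated tests, and "your future discovery will pop up" indeed.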
Of the remaining bad science, i.e. that not due to fraud or cognitive biases, apparently a lot of it arises because researchers often misunderstand the very methods they use to draw conclusions from data. For example, read "Robust misinterpretation of confidence intervals" and you'll get the point:
In this study, 120 researchers and 442 students - all in the field of psychology - were asked to assess the truth value of six particular statements involving different interpretations of a CI (confidence interval). Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever.
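For readers wondering what a 95% confidence interval actually does mean: it is a statement about the procedure, not about any single interval. Across repeated samples, roughly 95% of the intervals the procedure produces will contain the true parameter; any particular interval either does or does not, with no "95% probability" attached to it. A small simulation (pure Python, hypothetical numbers, normal approximation) makes the point:

```python
import math
import random

random.seed(42)

TRUE_MEAN = 10.0   # known here only because we built the simulation
n, trials = 50, 2000

covered = 0
for _ in range(trials):
    sample = [random.gauss(TRUE_MEAN, 3.0) for _ in range(n)]
    mean = sum(sample) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    half_width = 1.96 * sd / math.sqrt(n)  # normal approximation to t
    if mean - half_width <= TRUE_MEAN <= mean + half_width:
        covered += 1

# Coverage is a property of the procedure: close to 95% of the
# intervals trap the true mean over many repetitions.
print(f"coverage: {covered / trials:.3f}")
```

That the correct reading is this subtle goes some way toward explaining why researchers and students alike endorsed the false interpretations.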
We suppose such results would have surprised us before we got out of law school. Nowadays we just nod in agreement; our skepticism regarding the pronouncements of noted scientific authorities becoming complete after recently deposing an epidemiology/causation expert who didn't even know what "central tendency" meant. He also had never heard of "The Ecological Fallacy"; which explains why he committed the error throughout his report. He couldn't estimate how likely it was that the chemical in question was causative nor did he know the rate of the disease in plaintiff's age/gender bracket. No matter. His opinions came wrapped in the same non-existent scientific methods and the court has been duly impressed with his extensive credentials and service on panels at NCI, IARC, etc. So it goes.
Hopefully courts will begin to take notice of the rot that sets in when scientists substitute personal judgment, distorted by cognitive biases to which they are blind and intuitions which can easily lead their causal inferences astray, for measurement and experiment in their quest to generate scientific knowledge. That, and the fact that the methods some experts parade about in are not in fact a way of doing science but rather just a way of shielding their unscientific opinions from scrutiny.
Too bad about the method though. If it worked we could hire the appropriate scientific authorities to come up with a cure for cancer. They would ponder the matter, render their opinions as to the cure and testify about why their "qualitative analysis" obviously points to the cure. The jury would pick the best treatment among those on offer and the court would enter a judgment whereby cancer was ordered to yield to the cure. That's not how it works and it wasn't how it worked in the early 1600s when the sun refused to revolve about the earth, heedless of the pronouncements of scientific authorities and courts of the day. Maybe history will repeat itself in full and scientific knowledge, the product of observation, measurement, hypothesis and experiment will again mean just that and not the musings of "experts" bearing resumes and personal biases rather than facts. Maybe if we're lucky the rot will be cut out and the tide will finally turn in the long war on cancer. We'll see.