Mass Torts: State of the Art

About The Virus That Makes You Dumb

Posted in Causality, Microbiology, Reason

A colleague asked me yesterday what I thought about a story she’d seen in the media regarding a virus often found in algae that supposedly can impair human cognition. I told her that, as a matter of fact, I’d been working up a brief blog post on the topic because of its implications for mass tort litigation. I gave her the short version and promised to finish the post and send it along once I’d finished working on a presentation. Now I have, so here it is.

A couple of weeks ago, "Chlorovirus ATCV-1 is Part of the Human Oropharyngeal Virome and is Associated With Changes in Cognitive Functions in Humans and Mice" was published in Proceedings of the National Academy of Sciences. It’s an excellent example of good science revealing just how strange our world really is, how little we actually know, and how poor our guesses can be about the causes of human disease.

I’ll get to the strangeness and the upset applecart of prior beliefs shortly, but first it’s important to point out that this is a classic case of scientific discovery: a direct observation of something never seen before, followed by an experiment designed to test the implications of that discovery. It is not a case of finding a correlation between one event and another that seemingly followed it and thereafter creating a model/explanation of how the observed association might be causal. That’s just generating a hypothesis. Nor is it the usual null hypothesis testing of small effects that drives most modern toxic tort litigation. That’s just pretending to be surprised when a system that has been slightly perturbed turns out to be slightly perturbed.

Like most modern discoveries this one depended on a new method of observation and curious scientists deploying that new method to peer about the world around them. The new method (methods really, but I’ll cram them all into one category as they’re evolving rapidly and besides it shortens the post) involves rapidly (and cheaply) sequencing all of the genetic material in a sample, comparing it to libraries of microorganisms, human cancer cells (see e.g. The Cancer Cell Line Encyclopedia), etc., and thereby "seeing" the biological diversity present in the sample. The name for it is metagenomics.

Gene sequencing isn’t new of course but what is new is the ability to sequence the unknown unknowns in a sample and to do so rapidly and (relatively) inexpensively. In the past you’d take a sample, culture whatever was present in a petri dish (multiple techniques here too but anyway …) and then sequence whatever was in each of the clumps of clones you’d cultured. The problem is that not every microbe is culturable. In some cases the right medium has yet to be found. In other cases the beastie is dead – an invisible casualty of an unseen battle between the immune system and an invader. Either way we were blind to much of what was going on.
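
For readers who want a concrete, if cartoonishly simplified, picture of what "comparing it to libraries" means, here’s a toy sketch in Python. The reference fragments and the shared-k-mer scoring are illustrative assumptions only; real metagenomic pipelines use curated databases and purpose-built classifiers.

```python
# Toy illustration of metagenomic classification: match a sequencing read
# against a tiny "library" of reference genomes by counting shared k-mers.
# The reference snippets below are made up for illustration; real pipelines
# use curated databases and purpose-built aligners.

def kmers(seq, k=8):
    """Return the set of all k-length substrings of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Hypothetical reference "library" (organism name -> sequence fragment)
library = {
    "Homo sapiens (host)":    "ATGGCCCTGTGGATGCGCCTCCTGCCCCTGCTGGCGCTGCTG",
    "Streptococcus (throat)": "ATGAAAAAGATTGTAAAATCAGCAGTTGTTGCTGGTTTAGCA",
    "Chlorovirus ATCV-1":     "ATGTCTAAGCGTAAAGCTCCTACTGATGATGCTAAACGTGCT",
}

def classify(read, library, k=8):
    """Score each reference by the number of k-mers it shares with the read."""
    read_kmers = kmers(read, k)
    scores = {name: len(read_kmers & kmers(ref, k)) for name, ref in library.items()}
    best = max(scores, key=scores.get)
    return best, scores

if __name__ == "__main__":
    # A read that (by construction) came from the chlorovirus fragment
    read = "AAGCGTAAAGCTCCTACTGATGATGCT"
    best, scores = classify(read, library)
    print("shared k-mer counts:", scores)
    print("best match:", best)
```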

In this study scientists took throat swabs from people participating in an investigation of cognitive ability (among other things) to see what they could see with their new method. Upon comparing the genetic data they found to the libraries available they were surprised to find a match with a virus that infects algae; one that had not previously been identified as being infectious in humans. They then looked to see if there were differences between those with and without the virus and found a marked decrease in cognitive ability among those infected. To test the hypothesis that springs so readily from such a correlation they then ran an experiment on lab animals. Sure enough, cognitive ability including memory declined in those mice inoculated with the virus. But that wasn’t the end of it. They next looked to see what genetic changes the virus had wrought and it turned out that it altered the expression of genes that have to do with cognition. Not quite proof but pretty compelling.

As the use of this new way of seeing, metagenomics, has expanded, so has the number of discoveries. Have a look at Elevated Levels of Circulating DNA in Cardiovascular Disease Patients: Metagenomic Profiling of Microbiome in the Circulation and Pathogenic Microbes, the Microbiome, and Alzheimer’s Disease (AD) to see what may await.

And along with the discovery of new causes come ways to track them back up the causal chain (at least until a deep pocket is found). See for example Seeking the Source of Pseudomonas aeruginosa Infections In a Recently Opened Hospital: An Observational Study Using Whole-Genome Sequencing.

One question raised is what duty is owed to those of your workers and customers who might come in contact with what was thought to be harmless green algae occasionally infected with a virus nobody suspected until now of having the ability to fiddle with your brain’s fine tuning. Another is whether you ought to have your cognitively impaired plaintiff who blames the landfill tested for Chlorovirus. Or how about whether there ought to be a REACh for food (since the companies will come to have, thanks to the use of metagenomic techniques to ensure food safety, vast libraries of the presumably harmless microbes and their ever-evolving offspring that ride to market in and on their products)? And what about record-keeping requirements, because even helpful bugs sometimes go bad? Is it foreseeable that an innocent bug shipped far from home and thrown in with a strange crowd might pick up a plasmid coding for virulence?

If scientific discoveries raise more questions than they answer then the intersection of the law and new discoveries ought to keep us busy.

Just When the USPSTF Recommends Type 2 Diabetes Screening for Everyone 45 and Over This Gets Published:

Posted in Rhetoric

Korea’s Thyroid-Cancer "Epidemic" – Screening and Overdiagnosis. The short version is as follows: the ability to screen for thyroid cancer led to an explosion (an almost unbelievable 15-fold increase) in its diagnosis. In fact, it’s now the most common cancer in South Korea. Nevertheless, the mortality rate hasn’t budged. That means the only thing screening managed to accomplish was thousands of needless biopsies and surgeries and their occasional complications, including death.

There’s an excellent Op-Ed piece by one of the paper’s authors in The New York Times that lays it all out but this line really caught my attention:

Too many epidemiologists concern themselves not with controlling infectious disease, but with hoping to find small health effects of environmental exposures – or worse, uncertain effects of minor genetic alterations. Perhaps they should instead monitor the more important risk to human health: epidemics of medical care.

Will this finding, which clearly echoes the results of similar studies on the impact of mass screening for prostate cancer and breast cancer on their respective diagnosis and mortality rates, finally dampen the enthusiasm for screening? Not if the new screening recommendation for type 2 diabetes, and the public’s response to prior news about the impact of mass screenings, are any indication.

The U.S. Preventive Services Task Force (USPSTF) now recommends blood glucose screening for adults at increased risk of type 2 diabetes. Among those many risk factors are being aged 45 or older, having a BMI > 25 and having a close relative with type 2 diabetes. That’s not exactly everybody but it’s certainly a lot of bodies. So how many lives will be saved? Given that this is a Grade B recommendation and that the USPSTF’s review of the evidence led it to conclude that first-line therapy for diabetes prevention (aggressive lifestyle modifications) currently results in a lower incidence of diabetes, cardiovascular mortality and overall mortality, you’d assume the answer would be "many". However, "the task force found inadequate direct evidence that measuring blood glucose leads to improvements in mortality or cardiovascular morbidity."

That doesn’t sound very promising. So why make such a recommendation? Who knows? Given the enormous political pressure recently brought to bear on the USPSTF (especially after its recommendation against screening mammography in women under 50) it would be easy to slide into cynicism and speculate about the potential motives of those (other than the undiscovered incipient diabetics) who stand to benefit from this new recommendation (remember: under the ACA, if the USPSTF recommends it, it gets paid for). Instead, since this is a mass torts blog, I’ll speculate about why, the new thyroid-screening study notwithstanding, most folks will enthusiastically line up for another screening opportunity.

Despite widespread coverage and laudatory editorials in leading newspapers, both left-leaning and right, the evidence that mammography screening in those at negligible risk did more harm than good apparently had little effect. No Fall In Mammogram Rates After USPSTF Recommendations was the conclusion drawn from surveys of the attitudes of almost 28,000 women about screening. The same was true for elderly men and prostate cancer screening (though it must be noted that urologists and middle-aged men have demonstrated significant reductions in rates of screening / being screened). So why do so many people go to the bother of having a diagnostic test that is sure to be an annoyance and almost just as sure to do no good? I propose it’s because of one of the greatest decision drivers of all, and one effective plaintiff lawyers use to devastating effect: fear of regret.

Think about the power of fear of regret in the context of the reluctance of parents exposed to stories about autism and vaccines to vaccinate their children. They’re weighing the chance of autism times its cost against the chance of irreversible brain damage from measles times its cost. Should they decide that the former outweighs the latter it can only be that, given the grossly unequal risks, they’ve added something quite heavy to the side of the scale holding the cost of their children developing autism in order to bring it even with the side holding the cost of their children being brain damaged by measles. What could it be? I suspect that it’s fear of regret / fear of becoming blameworthy.
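
To see just how heavy that added weight has to be, here’s a back-of-the-envelope sketch in Python. Every probability and "cost" in it is a hypothetical placeholder chosen only to make the arithmetic visible; the point is the structure of the weighing, not the inputs.

```python
# Hypothetical expected-cost comparison for the vaccination decision.
# Every probability and "cost" below is a made-up placeholder, not data; the
# point is that the measles side dwarfs the autism side unless something
# extra -- fear of regret at having actively caused a harm -- is piled on.

p_autism_feared   = 1e-4       # what a worried parent might believe (hypothetical)
cost_autism       = 1_000_000  # arbitrary units of harm

p_measles_brain_damage = 1e-3       # hypothetical risk if unvaccinated
cost_brain_damage      = 1_000_000  # same arbitrary units

expected_cost_vaccinate     = p_autism_feared * cost_autism
expected_cost_not_vaccinate = p_measles_brain_damage * cost_brain_damage

print(f"expected cost if vaccinating:     {expected_cost_vaccinate:,.0f}")
print(f"expected cost if not vaccinating: {expected_cost_not_vaccinate:,.0f}")

# The scales only tip toward "don't vaccinate" if something extra is added
# to the vaccinate pan -- here, the minimum size of that regret/blame weight.
gap = expected_cost_not_vaccinate - expected_cost_vaccinate
print(f"regret/blame weight needed to tip the scale: > {gap:,.0f}")
```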

Looking back across some of the most effective jury arguments I’ve seen (as measured by the damages subsequently awarded), it’s those taking the form of "If you don’t stop this company, one day you’ll learn of another death and that victim’s blood will be on your hands" that work best. It’s the argument that appeals not to sympathy but rather instills the social fear of shame that does the most work. Be en garde.

Ignorance is $$$

Posted in Risk

Kip Viscusi has just published a very good paper about the GM ignition switch recall controversy. In it he argues that blockbuster jury awards in cases where corporations had made design and recall decisions based on cold economic calculations influenced GM’s decision not to do the same sort of risk analysis on its defective ignition switch problem. The result was a delayed recall and lost lives. Viscusi further argues against the admissibility of potentially inflammatory risk analysis evidence on the basis that when such analyses fairly (and highly) value human life they actually promote consumer welfare and so ought not be discouraged. The paper is "Pricing Lives for Corporate Risk Decisions" and can be found here.

GM isn’t the only company to have forsworn risk/benefit assessment. After a presentation to the in-house counsel of an energy company on recent trends among the courts regarding causation and risk, in which I discussed how not to do risk/benefit analyses, the company’s GC commented "we don’t do them (risk/benefit assessments) at all and we routinely counsel our business units not to engage in the practice." The sentiment is surely understandable given the usual media reaction to anything that sounds remotely like weighing profits and people on the same scales; and Viscusi’s paper lends further support to it with empirical evidence demonstrating that people, given identical facts about a defect and subsequent harm, tend to award significantly larger punitive damages against those defendants that examined the risks and benefits before they went to market than against those that either did nothing at all or consciously chose not to see how the numbers added up.

How then do companies that eschew risk/benefit analysis deal with risk? Sometimes by clinging to the belief that regulatory compliance renders their product or activity risk-free and sometimes by ignoring it altogether. Unfortunately, in either case it’s just whistling past the graveyard.

Remember that, generally speaking, a wrong is an action that was taken without proper regard for others and that caused some harm. Since every action entails risk (because every action entails change and every change entails risk, assuming the actor isn’t omniscient) any reasonable person who acts must either (a) be ignorant of the risk; (b) suspect a risk but choose to leave things to the Fates; or (c) estimate that the benefits outweigh the risks. Only the diligent, then, those who have undertaken (c), have actually made an attempt at calculating the "proper regard for others" given the risk inevitably created.

Now of course it could be the case that a diligent actor is also a nefarious one who has grossly undervalued human life and/or has calculated that the chances his perfidy will be uncovered are slight, but such a case is the very one for which punitive damages were designed. However, if the studies cited by Viscusi can be relied upon, then even the honest risk assessor (c) who highly values human life will get hammered harder by a jury than the ignorant (a) or the careless (b) manufacturer. Thus the perverse result: in an age when knowledge is ever more accessible, when questions about the likelihood and cost of harm posed by a particular design incorporated into millions of vehicles are readily calculable, and when a sound risk analysis can send a signal that saves lives, we have a legal system that threatens enormous costs to any manufacturer who dares to look up the answers.

Viscusi’s solution is essentially to set the value of a human life (he suggests $9.1 million) at a level which, if used by a company in its risk/benefit analysis and if the company is thereafter truly guided by the answer, would mean that the company was not putting its profits before people. Accordingly, such an analysis could not be offered into evidence by the Plaintiff to demonstrate knowledge, callousness, etc., though it could be offered by the Defendant, at its option, to show proper regard. It’s a good idea and soundly reasoned. Read the whole thing as it’s well worth your time.
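
For a sense of what such an analysis looks like in practice, here’s a minimal sketch using Viscusi’s suggested $9.1 million value of a statistical life. The defect probability, fleet size and per-vehicle remedy cost are hypothetical placeholders; nothing below comes from the GM record.

```python
# Minimal sketch of the risk/benefit calculation Viscusi argues should be
# encouraged rather than punished. The value of a statistical life is his
# suggested $9.1M; every other number is a hypothetical placeholder.

VSL = 9_100_000            # value of a statistical life (Viscusi's figure)

vehicles         = 2_000_000   # hypothetical fleet size
p_fatality       = 1e-6        # hypothetical per-vehicle probability of a fatal failure
remedy_cost_each = 5.00        # hypothetical per-vehicle cost of the fix

expected_deaths = vehicles * p_fatality
expected_harm   = expected_deaths * VSL
recall_cost     = vehicles * remedy_cost_each

print(f"expected deaths avoided: {expected_deaths:.1f}")
print(f"expected harm avoided:   ${expected_harm:,.0f}")
print(f"cost of the remedy:      ${recall_cost:,.0f}")
print("fix the defect" if expected_harm > recall_cost
      else "harm priced below remedy -- proceed only with proper regard documented")
```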

Yet Another Opinion in Which a Court Mistakes Hypothesis for Theory

Posted in Causality, Epidemiology, Reason, The Law

While some may imagine that scientific hypotheses are the product of highly educated people with brilliant minds drawing straightforward inferences from compelling evidence, the fact remains that all scientific hypotheses are nothing more than guesses; and as every middle schooler taught the scientific method knows, even the best-pedigreed hypotheses are usually false. On the other hand, sometimes it’s the hypothesis with the most dubious provenance that gets promoted to the status of scientific theory (i.e. one that has survived rigorous testing and is powerfully explanatory), as in the case of benzene’s structure:

I was sitting writing at my textbook but the work did not progress; my thoughts were elsewhere. I turned my chair to the fire and dozed. Again the atoms were gambolling before my eyes. This time the smaller groups kept modestly in the background. My mental eye, rendered more acute by the repeated visions of the kind, could now distinguish larger structures of manifold conformation: long rows, sometimes more closely fitted together, all twining and twisting in snake-like motion. But look! What was that? One of the snakes had seized hold of its own tail, and the form whirled mockingly before my eyes.

Because a hypothesis is nothing more than the assembly (by hard work or daydreaming) of a few bits of what is known/believed into a plausible narrative that explains some phenomenon (e.g. gastric lymphoma), because so little is known about the causes of a complex disease like gastric lymphoma that the discovery of H. pylori suddenly and completely overturned prior views about its causes, and because we can’t know (or factor into our hypotheses) what we don’t know (you’ve heard of the human gut microbiome, but what about the human gut virome?), hypotheses are nothing more than speculation. That’s why every epidemiological study you’ve ever read puts the burden of proof squarely on the hypothesis and resolves all doubt in favor of the "null hypothesis" (i.e. the hypothesized causal agent has no effect).
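
As a concrete (and entirely hypothetical) illustration of where that burden sits, consider a toy cohort in which the exposed group shows a slightly higher disease count. The counts are invented; the point is that the analysis starts from the assumption that the agent has no effect and asks whether the data are surprising enough to abandon it.

```python
# Toy illustration of null hypothesis testing in an epidemiological setting.
# Counts are invented. The null hypothesis -- "the agent has no effect" --
# is presumed true unless the data make it very hard to believe.

from scipy.stats import fisher_exact

#             disease  no disease
exposed   = [   12,       988   ]   # hypothetical exposed cohort
unexposed = [    9,       991   ]   # hypothetical unexposed cohort

odds_ratio, p_value = fisher_exact([exposed, unexposed])

print(f"odds ratio: {odds_ratio:.2f}")
print(f"p-value:    {p_value:.2f}")

alpha = 0.05
if p_value < alpha:
    print("data surprising enough (under the null) to reject 'no effect'")
else:
    print("doubt resolved in favor of the null: no effect demonstrated")
```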

Unfortunately many courts either don’t understand the difference or refuse to distinguish between hypothesis and theory. A recent example is Walker v. Ford. In Walker plaintiff’s expert was allowed to opine on the basis of his hypothesis that asbestos is a cause of Hodgkin’s lymphoma and thereafter to deduce from another of his hypotheses (Hodgkin’s lymphoma is caused by either Epstein-Barr virus, smoking or asbestos) that plaintiff’s lymphoma must have been caused by asbestos as he hadn’t the virus and didn’t smoke. And it isn’t just another case of a court conflating hypothesis generation (guessing) with the scientific method (testing guesses) so that guesswork by a properly credentialed witness is turned into a "scientifically valid method" and Rule 702 can be deemed satisfied. It’s worse. Not only has the hypothesis that asbestos causes Hodgkin’s lymphoma never been verified, it has in fact been repeatedly tested and serially refuted. Furthermore, the most important observation that spawned the hypothesis in the first place (an increased risk of gastric lymphoma among a sample of asbestos workers) has never been reproduced (and will never be reproduced) because when the study was done nobody outside two researchers in Australia even knew H. pylori existed, much less thought to look for it in gastric lymphoma patients – several years would elapse between its discovery and the determination that it is the leading cause of gastric lymphoma worldwide.

The general causation opinion of plaintiff’s expert rested on these studies:

1) Cancer Morbidity of Foundry Workers in Korea. A slightly increased risk of stomach cancer and non-Hodgkin’s lymphoma was found among foundry workers exposed to a laundry list of things including asbestos. No exposure assessment was done for any substance and no increase in Hodgkin’s disease was reported. The mortality study of the workforce published this year isn’t any more persuasive – here’s the SMR table for malignant diseases: SMR table.

2) Extranodal marginal zone lymphoma of mucosa-associated lymphoid tissue type arising in the pleura with pleural fibrous plaques in a lathe worker. Guess what? Asbestos isn’t the only cause of pleural plaques and so I stopped reading this article when I got to "He had not been exposed to asbestos."

3) Asbestos exposure and lymphomas of the gastrointestinal tract and oral cavity. This is the study mentioned above that suffers fatally from the understandable ignorance of the confounder H. pylori, though it also appears to have the multiple comparison problem (see the short sketch after this list), as evidenced by the fact that subgroupings of lymphomas, here GI and oral, produced a higher risk than for lymphomas in general. Finally, being a case-control study, there was no estimation of exposure in any of the cases.

4) Does asbestos exposure cause non-Hodgkin’s lymphoma or related hematolymphoid cancers? A review of the epidemiologic literature. I didn’t get past the abstract which concludes that a review of the literature reveals "no increased risk of NHL (non-Hodgkin’s lymphoma) or other HL-CAs (hematolymphoid cancer) associated with asbestos exposure."
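
As promised above, here’s a quick sketch of why slicing lymphomas into subgroups invites spurious "positive" findings. The only input is the nominal significance level; the arithmetic is the standard family-wise error calculation for independent tests.

```python
# If each subgroup analysis is run at a nominal 5% significance level, the
# chance of at least one false-positive "association" grows quickly with the
# number of subgroups examined (assuming independent tests and no true effect).

alpha = 0.05
for k in (1, 2, 5, 10, 20):
    p_at_least_one_false_positive = 1 - (1 - alpha) ** k
    print(f"{k:>2} subgroup tests -> {p_at_least_one_false_positive:.0%} chance of a spurious hit")
```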

Not discussed in Walker, but apparently the last nail in the asbestos-causes-lymphoma hypothesis’ coffin (and the last sign of any scientific interest in this apparently dead issue) was driven 10 years ago with the publication in the Annals of Epidemiology of Occupational asbestos exposure and the incidence of non-Hodgkin lymphoma of the gastrointestinal tract: an ecologic study. The study found "no support for the hypothesis that occupational asbestos exposure is related to the subsequent incidence of GINHL (gastrointestinal tract non-Hodgkin’s lymphoma)."

These articles, along with the expert’s belief that "as long as asbestos reaches an area, regardless of where it is, it can cause different types of cancer" and that asbestos can make its way to the lymph nodes, were all he needed to opine that asbestos causes lymphoma, including plaintiff’s Hodgkin’s lymphoma (because after all "a lymphoma is a lymphoma" save "for therapeutic purposes"). That’s too much nonsense to unpack in one blog post so I’ll just focus on the claim that wherever asbestos goes in the body it causes cancer. The Institute of Medicine was tasked with answering this very question – is there evidence for a causal relationship to asbestos for cancer of everything from the larynx to the rectum – and generally found that what was in the literature was suggestive but insufficient to reasonably conclude that there is a causal link. See: Asbestos: Selected Cancers.

To save plaintiff’s expert and his hypothesis the appellate court held that it doesn’t matter if an expert’s conclusions are correct. All that matters is that the method whereby he reaches his opinion is reliable, and plaintiff’s expert’s method, guessing about the cause of Hodgkin’s lymphoma by creating a narrative about the causation of Hodgkin’s lymphoma from a few studies (that didn’t actually study Hodgkin’s lymphoma), counts as a reliable one. But who, other than the hopelessly ironic, would label as "reliable" a method (i.e. the guessing that constitutes a scientific hypothesis) of causal determination the product of which is usually incorrect? Recall that not only are most scientific hypotheses false but that even most of those that garner statistically significant support are probably false.
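
Here’s the standard back-of-the-envelope arithmetic behind that last claim. The prior probabilities, power and significance level are assumed for illustration; the formula is just the usual positive-predictive-value calculation.

```python
# Why a "statistically significant" finding is still probably false when the
# hypothesis being tested was a long shot to begin with. Inputs are assumed
# for illustration; the formula is the usual positive-predictive-value calculation.

def ppv(prior, power=0.8, alpha=0.05):
    """Probability that a significant result reflects a true effect."""
    true_positives  = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01):
    print(f"prior probability the hypothesis is true: {prior:>5.0%}"
          f" -> chance a 'significant' finding is real: {ppv(prior):.0%}")
```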

Only scientific theories get the Seal of Reliability, which is to say they make predictions on which you can rely. And they gain that status only by being put to the test, and passing; and by passing I mean that the predictions they make actually come to pass. So what prediction would follow from "asbestos exposure causes Hodgkin’s disease"? Wouldn’t it be "people exposed to asbestos are more likely to get Hodgkin’s disease than those who aren’t"? And what follows from the fact that no study of asbestos-exposed workers has shown an increased risk of Hodgkin’s lymphoma? That the claim "asbestos causes Hodgkin’s disease" isn’t reliable.

So if hypotheses are unreliable in general because by definition they have not been tested, and if the specific hypothesis "asbestos causes Hodgkin’s lymphoma" is unreliable because it has been tested and failed to predict the future it entails, in what sense is the opinion of Walker’s expert "reliable"? Let me know if you figure it out.

Discretizations

Posted in Causality, Microbiology, Risk, Toxicology

Jumping the Snark: Erionite in Mexican Town Tied to High Rate of Mesothelioma (or, how "Sir, have you ever been to Turkey?" became too cute by half)

Plaintiffs’ Experts Were Really, Really Wrong About the Mechanism Underlying MDS and AML

Chromium VI is Weakly Associated With Stomach Cancer

To Reduce the Spread of Pathogens in Common Areas Consider Shark Skin

Telomere Length is Like Life: You Gotta Take the Malignant Melanoma Risk With the Lifespan Boost

A Plaintiff Win and a Very Good Daubert Opinion from the U.S. Court of Appeals for the Ninth Circuit

Posted in Reason

… we have a universally recognized Supreme Court, to which all disputes are taken eventually, and from whose verdict there is no appeal. I refer, of course, to direct experimental observation of the facts.

- E. T. Jaynes, physicist, in Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science (1976)

Recently we’ve been grousing about Daubert-invoking opinions that are actually derived from the strange belief that reliable scientific knowledge does not depend upon the existence of supporting observable facts. Today, however, we’re applauding City of Pomona v. SQM North America Corporation, an opinion that gets Daubert and sound science right because it gets the scientific method right.

At issue was the trial court’s order excluding an expert witness for the City of Pomona in a groundwater perchlorate contamination case. The expert, Dr. Neil Sturchio, was prepared to testify that the perchlorate detected in the city’s groundwater had most likely originated in the Atacama Desert of Chile. Such evidence, if admissible, would implicate defendant SQM as it had imported for sale to California’s agricultural industry many thousands of metric tons of sodium nitrate, an inorganic nitrogen fertilizer, from Chile’s Atacama Desert – and that sodium nitrate contains on average about 0.1% perchlorate.

It’s a big world though and there are plenty of potential sources of perchlorate in groundwater. Perchlorate occurs naturally and has been found in relative abundance in some arid regions of the American Southwest. It’s also synthesized industrially for use in solid fuel rocket propulsion systems (manufacturers of which were located in and around Pomona) and fireworks so that defense contractors, the Cal Poly Rocketry Club and 4th of July celebrations all fall under suspicion. How then did Dr. Sturchio determine that the perchlorate in Pomona’s groundwater came from a desert almost 6,000 miles away? Well, he didn’t conduct an experiment in his mind (a la McMunn and Harris) involving unobservable facts (a la Messick) in order to generate an untestable conclusion (a la Milward). Instead he followed the scientific method.

Perchlorate consists of one chlorine atom and four oxygen atoms. Since both chlorine and oxygen atoms come in different varieties (called isotopes) the question arose as to whether perchlorate also comes in different varieties defined by the ratio (or distribution) of the various isotopes making up its component parts. If it does, and if those varieties vary according to where they’re from or how they’re made, then something akin to a fingerprint or signature could be generated by analyzing the relative distribution of isotopes in a perchlorate sample. Finding a match in a database of known perchlorate signatures would thereafter yield a suspect and so satisfy plaintiff’s evidentiary burden of production.

According to Dr. Sturchio there is a method (it’s a subtype of mass spectrometry known as isotope-ratio mass spectrometry, or IRMS) for assessing the ratio of isotopes (or variety) of a particular perchlorate sample and there is a list, albeit a short one, of known perchlorate samples and their IRMS signatures. Most importantly there’s a detailed report available that sets out the theory, its rationale, the prediction it makes, the method by which it was tested and the results obtained. Using that method and comparing the results from prior tests on other samples showed, according to Dr. Sturchio, that the perchlorate in the Pomona, CA groundwater had a fingerprint, a signature if you will, just like that of Chilean perchlorate, and not at all like that of either man-made or indigenous perchlorate. Thus his opinion on its origin.
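
The logic of the fingerprint comparison can be sketched in a few lines. The isotope-ratio values below are invented placeholders (real perchlorate signatures are reported as δ37Cl, δ18O and Δ17O values measured by IRMS); the point is only that matching a sample to a source reduces to asking which reference signature it sits closest to.

```python
# Toy sketch of source attribution by isotope-ratio "fingerprint": compare a
# sample's signature to a small reference database and report the closest match.
# All numeric values are invented placeholders, not real perchlorate data.

import math

# Hypothetical reference signatures: (delta_37Cl, delta_18O, cap_delta_17O)
reference_signatures = {
    "Atacama (natural, Chilean)": (-12.0,  -5.0,  +9.0),
    "Indigenous US Southwest":    ( -9.0,  +2.0, +18.0),
    "Synthetic (man-made)":       ( +1.0, -17.0,   0.0),
}

def distance(a, b):
    """Euclidean distance between two isotope signatures."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_source(sample, references):
    """Return the reference source whose signature is nearest to the sample."""
    return min(references, key=lambda name: distance(sample, references[name]))

if __name__ == "__main__":
    groundwater_sample = (-11.4, -4.2, +8.5)   # hypothetical Pomona-like signature
    for name, sig in reference_signatures.items():
        print(f"{name:<30} distance = {distance(groundwater_sample, sig):6.2f}")
    print("closest match:", closest_source(groundwater_sample, reference_signatures))
```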

The district court found Dr. Sturchio’s opinion to be unreliable for three reasons. The first was that no government agency had yet certified the method he employed and that it was still being revised. The second was that the particular procedure he had followed in this instance had not been tested and could not be retested. The third was that the database of known perchlorate varieties and their signatures was too small.

Because the theory that perchlorate comes in different varieties has been tested and corroborated by at least one study, and because those varieties can per the theory be distinguished by use of a widely available method, the Ninth quite rightly held that the criteria at the heart of Daubert, testability and reliability, had been satisfied. Defendant was free to test the theory as well as the method and to refute the former and/or explain why the latter was either inappropriate for the task or so inaccurate as to be unreliable. Without evidence of its own to refute plaintiff’s corroborated theory or the accuracy of the method used to test it, Dr. Sturchio’s opinions could not be excluded. In such a case, the court held, arguments aimed at the potential for holes in the theory or error in the methodology go to the weight and not the admissibility of the proffered evidence.

The court did a nice job (with one exception to be discussed below) in setting out its reasoning and so rather than go on about it we’ll just refer you to the opinion. Instead we’ll make one comment and then try to explain why the court’s argument against the need for certainty isn’t the usual straw man argument but is rather an understandably awkward attempt to address the problem known as "the glory of science and the scandal of philosophy".  

Our comment goes to the defendant’s argument that the reliability of a scientific method depends upon final government certification of the particular technique. We file it under "Be careful what you wish for" and remind our readers that the power to define what is and what is not a telescope determines what can and cannot be seen through one.

Now for the hard part. Defendant’s argument that the database of perchlorate signatures was too small and the court’s response that the law doesn’t require certainty is just a version of a very old argument that goes like this:

Scientist: If all swans are white then every swan that is seen shall be white. Since all swans ever seen have been white, all swans must be white. It is thus my opinion that if the color of a particular swan is at issue then it will have been white.

Inquisitor: I shall demonstrate that you have fallen into the fallacy known as Affirming the Consequent; which is recognized by all learned men and women to constitute a notorious error of logic. Have you seen, or seen a record of, every swan that lives or has ever lived?

Scientist: I have not.

Inquisitor: So there are swans that have been but were never seen, and swans now living that remain undiscovered, and yet you have no knowledge of their color?

Scientist: It is undeniable; and I would add that there are swans yet to be and I am of the opinion that they too will be white.

Inquisitor: Then you further admit that there is an entire category of things, that of swans yet to be, about the color of which you are willing to opine despite having not a single observation of any member of that category?

Scientist: I do. It is after all prediction on which a scientist stakes her reputation.

Inquisitor: A risky gamble, is it not? For despite ten thousand observations of white swans the sighting of a single black one would do what to your theory that all swans are white?

Scientist: It would refute it, utterly.

Inquisitor: How then can your theory be anything more than the most tenuous sort of idle speculation - built out of nothing more than anecdotes, a collection of sightings of some unknown portion of all swans, and at risk of collapse at any moment?

Scientist: Because all those confirmatory sightings with none to the contrary make it at least very probably true.

Inquisitor: Alas, I had hoped you had something more and might save me from my skepticism. Oh well. Do you agree that probability is a measure of uncertainty?

Scientist: Yes, it plays that role in logic.

Inquisitor: And what is meant by uncertainty?

Scientist: To take the case of a coin toss, for example, while I cannot tell you whether when flipped it will come up heads or tails I can tell you the expected frequency of each occurrence; or, given past experience flipping the same coin, I can tell you to what degree I believe the coin will come up say heads. Any probability calculated other than 100% is then a measure of uncertainty.

Inquisitor: But  to make such calculations you must know how many sides the coin has or how many cards are in the deck?

Scientist: True.

Inquisitor: How many swans have there ever been?

Scientist: I do not know.

Inquisitor: And how many swans are yet to be?

Scientist: I do not know that either.

Inquisitor: So you cannot say whether your record of white swan observations constitutes a large, small or merely infinitesimal fraction of all the swans that have ever been or will ever be?

Scientist: I cannot.

Inquisitor: And so when you use the word probability you use it in neither the mathematical nor the logical sense, since you have insufficient information to even estimate the probability that your opinion is correct?

Scientist: That is true.

Inquisitor: So now, at last, do you admit that you really do not know?

Scientist (revealed as Sir A. B. Hill): Who knows, asked Robert Browning, but the world may end tonight? True, but on available evidence most of us make ready to commute at 8:30 the next day.

In City of Pomona plaintiff’s expert had real evidence – facts that are directly observable and subject to testing – and not just a hunch. Those facts and the method by which they had been gathered had been tested by others, and confirmed. And defendant was free to find a black swan. That, in our view, is good enough.

Three Straw Men on a Witch Hunt

Posted in Reason

He who is accused of sorcery should never be acquitted, unless the malice of the prosecutor be clearer than the sun; for it is so difficult to bring full proof of this secret crime, that out of a million witches not one would be convicted if the usual course were followed!

- 17th century French legal authority

While looking for some references to help make sense of the (tortured?) reasoning responsible for Messick v. Novartis Pharmaceuticals Corp. (a recent U.S. Ninth Circuit opinion arising out of a California intravenous bisphosphonate/osteonecrosis-of-the-jaw (ONJ) case) we came across the quote above in the very enjoyable Does Your Model Weigh the Same as a Duck? Though the aim of that paper is to expose two "particularly pernicious" fallacies of logic infecting drug research methodology, the point it makes when referring to the pre-Enlightenment era’s view of the appropriate evidentiary burden in the trial of witches applies equally to modern courts that lower standards of proof in toxic tort cases lest, they fear, all present-day witches (i.e. chemicals/pharmaceuticals) go unburnt.

A strong suspicion that the usual course, i.e. skeptical gatekeeping, won’t be followed in Messick arises early on when the court chooses to demonstrate the strength of its argument (that the trial court erred when it found the opinions of plaintiff’s causation expert to be irrelevant and unreliable) by fighting not one but three causal straw men. As usual there’s Certainty, which advances the quixotic claim that plaintiff needs to prove causation essentially by deduction (failing to recall Hume, D., A Treatise of Human Nature: "all knowledge degenerates into probability"). Then there’s How, which stands for an argument nobody makes – e.g. we can’t reasonably infer that aspirin reduces the risk of heart disease until it is proven how it does so. Last there’s Sole, which defends the defenseless argument that a putative cause must also be the sole (i.e. only and sufficient) cause of plaintiff’s injury. Unsurprisingly, each straw man is dispatched in a paragraph or less.

What is surprising is the length to which the court goes to save the plaintiff from her own expert, Dr. Richard Jackson. Jackson admitted that the fact that bisphosphonates are a cause of ONJ doesn’t mean that they were the cause of her ONJ. He even admitted that the plaintiff had multiple risk factors for ONJ and that he could not determine "which of those particular risk factors is causing [the ONJ]." You would think that such equivocal testimony would put an end to plaintiff’s quest to prove causation, and that’s exactly what the District Court below had held; but the Ninth Circuit thought otherwise.

The appellate court held that while plaintiff’s causation expert "never explicitly stated that Messick’s bisphosphonate use caused her [ONJ]", Dr. Jackson had analogized plaintiff’s use of bisphosphonates to "the oxygen necessary to start a fire." Also, he had said that "[bisphosphonate use] was at least a substantial factor in her development of [ONJ]." Finally he was prepared to opine, based on his "extensive clinical experience", that "a patient without cancer or exposure to radiation in the mouth area would not develop ONJ lasting for years (as had plaintiff) without IV bisphosphonate treatments". Somehow, that’s enough for a plaintiff to get to the jury on causation.

The problem with the assertion that bisphosphonates are to ONJ as oxygen is to fire is that a quick PubMed search reveals numerous cases of ONJ in cancer patients decades before bisphosphonates were ever marketed to them. ONJ has been attributed to bacteria, dental work, radiation and cancer all by itself. Perhaps, given that Dr. Jackson diagnosed plaintiff with bisphosphonate-related ONJ (or BRONJ), he’s really saying: "plaintiff has BRONJ; therefore she has ONJ related to bisphosphonates". But that would just be begging the question and presumably not persuasive to the court. Either way, how an argument that’s either demonstrably false or a logical fallacy can support plaintiff’s causal claim escapes us.

Next, what should we make of the court’s reliance on Dr. Jackson’s "it’s at least a substantial factor" opinion? Apparently what the court is saying is that while (1) there are multiple causes of ONJ including bisphosphonates; and (2) plaintiff had cancer, and perhaps other risk factors, known to cause ONJ; even though (3) her expert can’t say which one did it; because (4) he’s prepared to testify "that Messick’s bisphosphonate use was a substantial factor"; (5) such testimony satisfies California’s substantial factor standard and is admissible. However, the only way that (5) follows from (1 – 4) is if proof of "but for", or counterfactual, causation is not an element of California’s substantial factor causation test and "maybe" causes are good enough. Yet California’s substantial factor standard actually "subsumes the ‘but for’ test". We’re again left scratching our heads.

The final (and apparently to the court most compelling) causation argument was that Dr. Jackson, on the basis of things seen only by himself (i.e. his clinical experience), had ruled out leading alternate causes and thereby reliably ruled in bisphosphonates. This is of course just ipse dixit making an appearance in its de rigueur guise as "differential diagnosis" – the court admitting as much when it writes "[m]edicine partakes of art as well as science …" while pretending not to notice the impact of the evidence-based medicine revolution. We’ve taken the position in prior posts that believing differential diagnosis (a/k/a differential etiology a/k/a inference to the best explanation) to be akin to the scientific method, and to produce the sort of reliable scientific knowledge contemplated by Rule 702, is simply the sort of pre-scientific thinking common among those prone to being mesmerized by credentials and jargon. Instead of rehashing those arguments consider the case of Dr. Franz Mesmer and what is revealed when the scientific method is applied to the beliefs of doctors drawn from their clinical experience.

After having seen many patients Dr. Mesmer came up with a hunch about how the body worked and how good health could be restored to the sick. His hypothesis was called animal magnetism and it entailed that an invisible force ran through channels in the body which, when properly directed, could effect all manner of cures. Redirecting that force via mesmerization became wildly popular and Dr. Mesmer became quite famous. In what would become the first recorded "blind" experiment clinicians who practiced mesmerism – the art of redirecting the invisible forces to where they were needed – proved unable, when they did not know what it was they were mesmerizing, to distinguish a flask of water from a living thing and neither could they produce any cures. On the commission overseeing the experiment in 1784 was none other than one of the leading lights of the American Enlightenment and rebel against authority – Benjamin Franklin.

Two hundred and ten years later doctors were still seeing in their patients what their hypotheses predicted rather than what was actually occurring. A classic research paper demonstrating the phenomenon is The impact of blinding on the results of a randomized, placebo-controlled multiple sclerosis clinical trial. Investigators assessing the efficacy of a new treatment for multiple sclerosis (MS), in a flash of brilliance, decided to "blind" some of the neurologists who would be clinically assessing patients undergoing one of three treatments while letting the rest of the neurologists involved in the effort know whether a particular patient was getting the new treatment, the old treatment or the sham treatment. While the blinded neurologists and even the patients who had correctly guessed their treatment assignments (a check for the placebo effect) saw no improvement over the old treatment, the unblinded neurologists not only saw a significant positive effect that wasn’t there but continued to see it for two years. Is there some workaround, a way to test after the fact for the distortion of the lens through which a clinician in the know observes his or her patients? You could try, but it would appear to be a mug’s game and furthermore, by the very nature of the bias produced (unblinded clinicians blind to the very existence of their own bias), beyond the ability of cross-examination to uncover.

A clinician’s art and a differential diagnosis derived from that art saved the day in Messick. Along the way to deciding that objective, verifiable evidence is not required to prove causation in such cases the court listed its sister circuits said to be of like mind and in the first footnote added that the Fifth Circuit was now alone in not having similarly lowered the gates. How the Fifth Circuit feels about being Daubert‘s last redoubt is unknown to us but we’re pretty sure that a plaintiff would win on causation in a bisphosphonate-ONJ case before that court. That’s because there are five years’ worth of objective and verifiable (and verified) evidence that (a) ONJ incidence in bisphosphonate-treated cancer patients is drastically and consistently increased; and (b) the likelihood that ONJ in a bisphosphonate-treated cancer patient was due to the treatment is slightly over 98%. See: 2014 AAOMS Position Paper on Medication-Related Osteonecrosis of the Jaw.

We know why plaintiffs’ counsel don’t want courts to embrace the sort of causal reasoning that would make a case like Messick easy for both general and specific causation. It’s because the day a court holds that "a probability estimate of 98% obviously passes the ‘more likely than not’ test" is the prelude to doomsday in low-dose asbestos/benzene/etc. litigation, when that same court holds that "a probability estimate of 2% obviously does not". What we can’t understand is why so many courts refuse to enforce the test by demanding something more than the musings of experts. Witches or bewitchment are our two working hypotheses.
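
The arithmetic that makes the 98%/2% contrast so dangerous is the standard attributable-fraction calculation; whether the AAOMS figure was derived exactly this way is our assumption, but the structure of the estimate is the same.

```python
# Probability of causation (attributable fraction among the exposed) from a
# relative risk: PC = (RR - 1) / RR. The relative risks below are illustrative;
# the first is the kind of drastic increase seen for ONJ in bisphosphonate-treated
# cancer patients, the second the kind of marginal increase typical of low-dose cases.

def probability_of_causation(relative_risk):
    """Fraction of exposed cases attributable to the exposure."""
    return (relative_risk - 1) / relative_risk

for label, rr in [("bisphosphonate-ONJ (drastic RR, illustrative)", 50.0),
                  ("low-dose exposure (marginal RR, illustrative)", 1.02)]:
    pc = probability_of_causation(rr)
    verdict = "passes" if pc > 0.5 else "fails"
    print(f"{label}: RR={rr:>5.2f} -> PC={pc:.0%} ({verdict} 'more likely than not')")
```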

Discretizations

Posted in Microbiology, Molecular Biology

Some months ago we decided to put Discretizations on hiatus while we tried to figure out what to do about the tsunami of scientific papers washing up on PubMed (which was already piled deep in the flotsam and jetsam of publish or perish) now that China, India, Korea, etc. are getting in on the fun. On the one hand we’re sorely tempted to take advantage of a situation that presents us daily with awesome posting opportunities like: Hay fever causes testicular cancer! On the other hand we know that most statistically significant findings are in fact false - in no small part because "it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not desire". So, rather than being a part of the problem by propagating noise and not signal we’ve decided to limit our Discretizations to papers that 1) report an actual observation and not just the spectre of one drawn from data dredging and statistical analysis; 2) reflect an effort to reproduce a previously reported finding (e.g. testing a hypothesis previously drawn from statistical inference but in a different population); or, 3) extend (marginally) some previously established finding. Here goes:

Your word for the day is "immunobiotics": good bacteria that can aid in the production of certain blood cells

Obesity is a risk factor for lumbar radicular pain and sciatica – and it may impair healing mechanisms

Poor folks still have (on average) poor ways (which explains a large portion of the disparity in life expectancy between those in the bottom quartile of socioeconomic status and everyone else) 

Hospital staff and visitors can spread nosocomial pathogens to nearby businesses

How does metformin help patients with diabetes? Maybe by making the gut a happier place for A. muciniphila

No evidence found to support the hypothesis that parental smoking (and the benzene exposure that goes with it) during pregnancy increases a child’s risk of developing acute lymphoblastic leukemia (ALL)

Does the debate about segregating cystic fibrosis patients by lung/bronchial microbiome, going so far as to prevent some/most/all-but-one CF sufferers from attending indoor CF Foundation events, presage future debates as more and more diseases are found to have bacterial components? Here are the pros and cons of the CF debate.

The Human Cost of Bad Science

Posted in Reason

Because of the aggressiveness of a disease, its stage when detected and/or the requirement that patients enrolled in clinical trials not simultaneously pursue multiple treatments, "patients with progressive terminal illness may have just one shot at an unproven but promising treatment." Too often their last desperate shots are wasted on treatments that had no hope of success in the first place. Two new comment pieces in Nature highlight the extent of the problem.

In Preclinical research: Make mouse studies work, Steve Perrin demonstrates that, just like cancer patients, ALS/Lou Gehrig’s disease patients are betting their lives on treatments that showed great promise in lab animals only to find that they do no good in humans. So why are 80% of these treatments failing? It’s not a story of mice and men. It’s a story of bureaucratic science. Of going through the motions. Of just turning the crank. And of never, ever, daring to critique your methods lest you find, to take one example, that the reason your exciting new ALS treatment works so well in mice is because your mice didn’t have ALS to begin with – you having unwittingly bred the propensity to develop it out of your lab animals.

Then read Misleading mouse studies waste medical resources. It continues the story of how drugs that should have been discovered to be useless in mice instead made their way into clinical trials where they became false promises on which thousands of ALS patients and their families have pinned their hopes.

We hope those courts that have bought into the idea that reliable scientific knowledge can be gained without the need for testing and replication are paying attention.

A Memorandum Opinion And The Methods That Aren’t There At All

Posted in Causality, Reason, The Law

You’d think that courts would be leery about dressing their Daubert gatekeeping opinions in the "differential etiology method". After all, as you can see for yourself by running the query on PubMed, the U.S. National Library of Medicine / National Institutes of Health’s massive database of scientific literature, apparently nobody has ever published a scientific paper containing the phrase "differential etiology method". Of the mere 22 articles ever to contain the sub-phrase "differential etiology" none use it in the sense – to rule in a heretofore unknown cause – meant by the most recent court to don its invisible raiment. Even mighty Google Scholar can manage to locate only 6 references to the "method" and all are law review articles resting not upon some explication and assessment of a scientific method known as differential etiology but rather on the courtroom assertions of paid experts who claimed to have used it.

You’d also hope courts would understand that scientific evidence is no different than any other kind of evidence. It must still be something that has been observed or detected, albeit with techniques (e.g. nuclear magnetic resonance spectroscopy) or via analyses (e.g. epidemiology) beyond the ken of laymen. Yet, while they’d never allow into evidence (say in an automobile case) the testimony of someone who had witnessed neither the accident nor the driving habits of the Defendant but who was prepared to testify that he thought the Defendant was speeding at the time of the accident because Defendant looks like the sort of person who would speed and because he can’t think of any other reason for the wreck to have occurred, some courts will allow that very sort of testimony so long as it comes from a PhD or an M.D. who has used the "weight of the evidence method". Can you guess how many references to scientific papers using the "weight of the evidence method" PubMed yields? The same 0 as the "differential etiology method".

Nevertheless another (memorandum) opinion has joined the embarrassing procession of legal analyses bedecked in these ethereal methods; this time it’s a radiation case styled McMunn v. Babcock & Wilcox Power Generation Group, Inc.

Plaintiffs suffering from a wide variety of cancers allegedly caused by releases of alpha-particle-emitting processed uranium from the Apollo, PA research and uranium fuel production facility sued Babcock and other operators of the site. Following battles over a Lone Pine order and extensive discovery the sides fired off motions to exclude each other’s experts. The magistrate to whom the matter had been referred recommended that plaintiffs’ general causation expert Dr. Howard Hu, specific causation expert Dr. James Melius, emissions and regulations expert Bernd Franke and nuclear safety standards expert Joseph Ring, PhD, be excluded. The plaintiffs filed objections to the magistrate’s recommendations, the parties filed their briefs and the District Court rejected the magistrate’s recommendations and denied defendants’ motions.

Dr. Hu had reasoned that since 1) ionizing radiation has been associated with lots of different kinds of cancer; 2) alpha particles ionize; and 3) IARC says alpha particles cause cancer, it makes sense that 4) the allegedly emitted alpha particles could cause any sort of cancer a plaintiff happened to come down with. It’s not bad as hunches go, though it’s almost certainly the product of dogma – specifically the linear no-threshold dose model – rather than the wondering and questioning that so often leads to real scientific discoveries. But whether a hunch is the product of the paradigm you’re trapped in or the "what ifs" of daydreaming, it remains just that until it’s tested. Unfortunately for Dr. Hu’s hunch, it has been tested.

Thorotrast (containing thorium – one of the alpha emitters processed at the Apollo facility) was an X-ray contrast medium that was directly injected into numerous people over the course of decades. High levels of radon could be detected in the exhaled breath of those patients. So if Dr. Hu’s hunch is correct you’d expect those patients to be at high risk for all sorts of cancer, right? They’re not. They get liver cancer overwhelmingly and have a fivefold increase in blood cancer risk but they’re not at increased risk for lung cancer or the other big killers. Why? It’s not clear, though the fact that alpha particles can’t penetrate paper or even skin suggests one reason. Look for yourself and you’ll find no evidence (by which we mean an actual observation of the result predicted by the hunch) to support the theory that alpha particles can cause all, most or even a significant fraction of the spectrum of malignancies, whether they’re eaten, injected or inhaled and whether at home or at work. Be sure to check out the studies of uranium miners.

But let’s assume that alpha particles can produce the entire spectrum of malignancies, that the emissions from the facility into the community were sufficiently high, and that the citizenry managed to ingest the particles. What would you expect the cancer incidence to be for that community? Probably not what repeated epidemiological studies concluded: that "living in municipalities near the former Apollo-Parks nuclear facilities is not associated with an increase in cancer occurrence."

Dr. Hu attacked the studies of uranium miners and of the communities around Apollo by pointing out their limitations. This one didn’t have good dose information and that one had inadequate population data. Perfectly reasonable. It’s like saying "I think you were looking through the wrong end of the telescope" or "I think you had it pointed in the wrong direction". He’s saying "your evidence doesn’t refute my hunch because your methods didn’t test it in the first place."

Ok, but where’s Dr. Hu’s evidence? It’s in his mind. His hunch is his evidence; and it’s his only evidence. He weighed some unstated portion of what is known or suspected about alpha particles and cancer in the scales of his personal judgment and reported that CLANG! the result of his experiment was that the scale came down solidly on the side of causation for all of plaintiffs’ cancers.

At the core of the real scientific method is the idea that anyone with enough time and money can attempt to reproduce a scientist’s evidence; which is to say what he observed using the methods he employed. Since no one has access to Dr. Hu’s observations and methods other than Dr. Hu his hunch is not science. Furthermore, there’s no way to assess to what extent the heuristics that bias human decision-making impacted the "weighing it in my mind" approach of Dr. Hu.

Given that there’s no way to reproduce Dr. Hu’s experiment, and given that none of the reproducible studies of people exposed to alpha particles demonstrate that they’re at risk of developing the whole gamut of cancer, Dr. Hu’s argument boils down to that of the great Chico Marx: "Who are you going to believe, me or your own eyes?" Alas, the court believed Chico and held that "Dr. Hu’s opinions have met the pedestrian standards required for reliability and fit, as they are based on scientifically sound methods and procedures, as opposed to ‘subjective belief or unsupported speculation’".

Next, having already allowed plaintiffs to bootstrap alpha emitters into the set of possible causes of all of the plaintiffs’ cancers, the court had no problem letting the specific causation expert, Dr. Melius, conclude that alpha emitters were the specific cause of each plaintiff’s cancer merely because he couldn’t think of any other cause that was more likely. At first that might seem sensible. It’s anything but.

Don’t you have to know how likely it is that the alpha emitters were the cause before you decide if some other factor is more likely? Obviously. And where does likelihood/risk information come from? Epidemiological studies in which dose is estimated. Of course the plaintiffs don’t have any such studies (at least none that support their claim against alpha emitters) but couldn’t they at least use the data for ionizing radiation from, say, the atomic bomb or Chernobyl accident survivors? After all, the court decided to allow plaintiffs’ experts to testify that "radiation is radiation".

Well, just giving a nod to those studies raises the embarrassing issue of dose, and the one lesson we know plaintiffs’ counsel have learned over the last several years of low-dose exposure cases is to never, ever, ever estimate a dose range unless they’re ordered to do so. That’s because dose is a measurement that can be assessed for accuracy and used to estimate likelihood of causation. Estimating a dose thus opens an avenue for cross examination but, more devastatingly, it leads to the argument that runs: "Plaintiff’s own estimate places him in that category in which no excess risk has ever been detected."
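
Here, in rough outline, is why a dose estimate is so useful to everyone but the plaintiff. The linear no-threshold form and the excess-relative-risk coefficient below are assumptions chosen for illustration, not values from the McMunn record or any radioepidemiological table.

```python
# Why dose matters: under a (hypothetical) linear no-threshold model the excess
# relative risk scales with dose, and the probability of causation follows as
# ERR / (1 + ERR). The coefficient and doses below are illustrative assumptions.

ERR_PER_SIEVERT = 0.5   # hypothetical excess relative risk per sievert

def probability_of_causation(dose_sv, err_per_sv=ERR_PER_SIEVERT):
    """PC = ERR / (1 + ERR) under a linear no-threshold assumption."""
    err = err_per_sv * dose_sv
    return err / (1 + err)

for label, dose in [("occupational-scale dose (illustrative)", 0.5),
                    ("community-scale dose (illustrative)",    0.005)]:
    pc = probability_of_causation(dose)
    print(f"{label}: {dose} Sv -> probability of causation = {pc:.1%}")
```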

Fortunately for plaintiffs the court held that Dr. Melius’ differential diagnosis or differential etiology method does not require that he estimate the likelihood that radiation caused a particular cancer before he can conclude that radiation is the most likely cause among many (including those unknown).

First the court held that it was not its job to decide which method is the best among multiple methods so long as the method is reliable. For this it relies upon In re TMI Litigation. When In re TMI Litigation was decided (1999), the Nobel Prize in Physiology or Medicine for the discovery that Helicobacter pylori, and not stress, was the cause of most peptic ulcers was six years in the future. The method of observational epidemiology and application of the Hill causal criteria had generated the conclusion that peptic ulcers were caused by stress. The method of experimentation, observation and application of Koch’s postulates established H. pylori (née C. pyloridis) as the real cause and, for the umpteenth time, established experimentation as the best method. So could a court allow a jury to decide that the peptic ulcer in a plaintiff with an H. pylori infection was caused by stress at work? Apparently the answer in the Third Circuit is "Yes"; scientific knowledge be damned.

Second, citing In re Paoli and other Third Circuit cases, the court held that differential etiology (without knowledge of dose) has repeatedly been found to be a reliable method of determining causation in toxic tort cases. As we’ve written repeatedly, this is a claim wholly without support in the scientific literature. Are there studies of the reliability of differential diagnoses made by radiologists? You bet (here for example). Are there studies of immunohistochemical staining for the purpose of differential diagnosis in cases of suspected mesothelioma? Yep. Here’s a recent example. There are (as of today) 236,063 pages of citations to articles about differential diagnosis on PubMed, and none of them (at least none I could find via key word searches) suggests that a methodology such as Dr. Melius’ exists outside the courtroom, and none represents an attempt to test the method to see if it is reliable.

Third, the court held that, since Dr. Melius’ opinions were the result of his "qualitative analysis", the fact that plaintiffs were living in proximity to the facility during the times of the alleged radiation releases and the fact that Babcock failed to monitor emissions and estimate process losses to the environment were enough to allow a jury to reasonably infer that plaintiffs were "regularly and frequently exposed to a substantial, though unquantifiable dose of iodized [ionized?] radiation emitted from the Apollo facility." How such reasoning can be anything other than argumentum ad ignorantiam is beyond our ability to glean.

Worse yet is this sentence, appearing after the discussion about the absence of data: "A quantitative dose calculation, therefore, may in fact be far more speculative than a qualitative analysis." What would Galileo ("Measure what is measurable, and make measurable what is not so"), the father of modern science, make of that? Yes, an estimate could be wrong and a guess could be right, but the scientist who comes up with an estimate makes plain for all to see her premises, facts, measurements, references and calculations, whereas the expert peddling qualitative analyses hides his speculation behind his authority. Besides, dose is the only way to estimate the likelihood of causation when there are multiple, including unknown, alternate causes. Then again, in addition to everything else, Galileo also proved that questioning authority can land you in hot water, so we’ll leave it at that.

Finally, in the last and lowest of the low hurdles set up for Dr. Melius, the court found that he had "adequately addressed other possible cause of Plaintiffs’ cancers, both known and unknown." How? By looking for "any risk factor that would, on its own, account for Plaintiffs’ cancers", reviewing medical records, questionnaires, depositions and work histories, and interviewing a number of plaintiffs. Presumably this means he looked for rare things like angiosarcoma of the liver in a vinyl chloride monomer worker and mesothelioma in an insulator, commoner things like lung cancer in heavy smokers and liver cancer in hepatitis C carriers, and hereditary cancers (5% to 10% of all cancers) like acute lymphoblastic leukemia in people with Down syndrome or soft tissue sarcomas in kids with Li-Fraumeni Syndrome. You can make a long list of such cancers but they represent perhaps one fourth of all cases. For the cancers that remain there will be no known risk factors, so once you’re allowed to rule in alpha emitters as a possible cause ("radiation is radiation"), and then to infer from both "qualitative analysis" and the absence of data that a "substantial" exposure occurred, you’ve cleared the substantial factor causation hurdle (which at this point is just a pattern in the courtroom flooring). Having gotten to the jury, all that remains is to make the argument plaintiffs’ counsel made before Daubert: "X is a carcinogen, Plaintiff was exposed to X, Plaintiff got cancer; you know what to do."

We’re living through an age not unlike Galileo’s. People are questioning things we thought we knew and discovering that much of what the Grand Poohbahs have been telling us is false. There’s the Reproducibility Project: Psychology, the genesis of which included the discovery among researchers of a widespread "culture of ‘verification bias’" (h/t ErrorStatistics) and of practices and methodologies that "inevitably tends to confirm the researcher’s research hypotheses, and essentially render the hypotheses immune to the facts…". In the biomedical sciences only 6 of 53 papers deemed to be "landmark studies" in the fields of hematology and oncology could be reproduced, "a shocking result" to those engaged in finding the molecular drivers of cancer.

Calls to reform the "entrenched culture" are widespread and growing. Take for instance this recent piece in Nature by Regina Nuzzo in which one aspect of those reforms is discussed:

It would have to change how statistics is taught, how data analysis is done and how results are reported and interpreted. But at least researchers are admitting that they have a problem, says (Steven) Goodman [physician and statistician at Stanford]. "The wake-up call is that so many of our published findings are not true."

How did we get here? A tiny fraction of the bad science is the result of outright fraud. Of the rest, some is due to the natural human tendency to unquestioningly accept, and overweigh the import of, any evidence that supports our beliefs while hypercritically questioning and minimizing any that undercuts them (here’s an excellent paper on the phenomenon). Thanks to ever greater computing power it’s becoming easier by the day to "squint just a little bit harder" until you discover the evidence you were looking for. For evidence that some researchers are using data analysis to "push" until they find something to support their beliefs and then immediately proclaim it, read: "The life of p: ‘Just significant’ results are on the rise." For evidence that it’s easy to find a mountain of statistical associations in almost any large data set (whereafter you grab just the ones that make you look smart), visit ButlerScientifics, which promises to generate 10,000 statistical relationships per minute from your data. Their motto, inadvertently we assume, makes our case: "Sooner than later, your future discovery will pop up."
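It takes only a few lines of code to watch such "discoveries" pop up. The sketch below, in Python, is ours and assumes nothing about any real dataset: it simply runs a thousand correlation tests on pairs of variables that are pure random noise and counts how many clear the conventional p < 0.05 bar. Roughly five percent do, by chance alone.

# Data dredging in miniature: correlate many pairs of pure-noise variables
# and count how many clear the conventional p < 0.05 bar by chance alone.
import math
import random
from statistics import NormalDist

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def two_sided_p(r, n):
    # Large-sample normal approximation via Fisher's z-transform.
    z = math.atanh(r) * math.sqrt(n - 3)
    return 2 * (1 - NormalDist().cdf(abs(z)))

random.seed(1)
n_subjects, n_tests, alpha = 100, 1000, 0.05
false_positives = 0
for _ in range(n_tests):
    xs = [random.gauss(0, 1) for _ in range(n_subjects)]
    ys = [random.gauss(0, 1) for _ in range(n_subjects)]
    if two_sided_p(pearson_r(xs, ys), n_subjects) < alpha:
        false_positives += 1

print(f"{false_positives} of {n_tests} noise-vs-noise tests were 'significant'")

Run it and something on the order of fifty "findings" fall out of data that, by construction, contain nothing at all; grab just those and you have a publication, or an expert report.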

As for the remaining bad science, i.e. that not due to fraud or cognitive biases, a lot of it apparently arises because researchers often misunderstand the very methods they use to draw conclusions from data. For example, read "Robust misinterpretation of confidence intervals" and you’ll get the point:

In this study, 120 researchers and 442 students – all in the field of psychology – were asked to assess the truth value of six particular statements involving different interpretations of a CI (confidence interval). Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers’ performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever.
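For readers who want to see what the "95%" in a 95% confidence interval actually refers to, here is a short simulation in Python. The parameters are made up and have nothing to do with the study quoted above, and the sketch assumes a known standard deviation purely to keep it simple. The 95% is a property of the procedure, i.e. the long-run fraction of intervals, computed the same way on repeated samples, that cover the true value; it is not a 95% probability that any single published interval contains it.

# What "95% confidence" actually means: over many repeated samples, roughly
# 95% of intervals constructed this way will cover the true mean.
import math
import random

random.seed(2)
true_mean, sigma, n, trials = 10.0, 2.0, 30, 10_000
covered = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, sigma) for _ in range(n)]
    sample_mean = sum(sample) / n
    se = sigma / math.sqrt(n)  # known-sigma case keeps the sketch simple
    lo, hi = sample_mean - 1.96 * se, sample_mean + 1.96 * se
    if lo <= true_mean <= hi:
        covered += 1

print(f"Coverage over {trials:,} repetitions: {covered / trials:.1%}")
# Any single interval either contains the true mean or it doesn't; the 95%
# describes the procedure, not the one interval reported in a paper.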

We suppose findings like those of the CI study would have surprised us before we got out of law school. Nowadays we just nod in agreement, our skepticism regarding the pronouncements of noted scientific authorities having become complete after recently deposing an epidemiology/causation expert who didn’t even know what "central tendency" meant. He also had never heard of "The Ecological Fallacy", which explains why he committed the error throughout his report. He couldn’t estimate how likely it was that the chemical in question was causative, nor did he know the rate of the disease in plaintiff’s age/gender bracket. No matter. His opinions came wrapped in the same non-existent scientific methods and the court was duly impressed with his extensive credentials and service on panels at NCI, IARC, etc. So it goes.
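For readers unfamiliar with the term, the ecological fallacy is the error of inferring individual-level relationships from group-level averages. The toy numbers in the Python sketch below are invented solely to show how the sign of an association can flip between the two levels; they describe no real exposure, outcome or study.

# Ecological fallacy in miniature: the three group averages trend upward
# together, yet within every group the individual-level relationship is
# negative. All numbers are invented for illustration.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Hypothetical (exposure, outcome) pairs for the individuals in three groups.
groups = {
    "Group A": [(1, 4), (2, 3), (3, 2)],
    "Group B": [(4, 7), (5, 6), (6, 5)],
    "Group C": [(7, 10), (8, 9), (9, 8)],
}

for name, points in groups.items():
    xs, ys = zip(*points)
    print(f"{name}: individual-level r = {pearson_r(xs, ys):+.2f}")

mean_xs = [sum(x for x, _ in pts) / len(pts) for pts in groups.values()]
mean_ys = [sum(y for _, y in pts) / len(pts) for pts in groups.values()]
print(f"Group-level r (computed on the group means) = {pearson_r(mean_xs, mean_ys):+.2f}")

An analyst who reasons from the positive group-level correlation to what happened to any individual has committed exactly the error the fallacy names.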

Hopefully courts will begin to take notice of the rot that sets in when scientists substitute personal judgment, distorted by cognitive biases to which they are blind and by intuitions that can easily lead their causal inferences astray, for measurement and experiment in their quest to generate scientific knowledge. That, and the fact that the methods some experts parade about in are not in fact a way of doing science but rather just a way of shielding their unscientific opinions from scrutiny.

Too bad about the method though. If it worked we could hire the appropriate scientific authorities to come up with a cure for cancer. They would ponder the matter, render their opinions as to the cure and testify about why their "qualitative analysis" obviously points to it. The jury would pick the best treatment among those on offer and the court would enter a judgment ordering cancer to yield to the cure. That’s not how it works, and it wasn’t how it worked in the early 1600s when the sun refused to revolve about the earth, heedless of the pronouncements of the scientific authorities and courts of the day. Maybe history will repeat itself in full and scientific knowledge, the product of observation, measurement, hypothesis and experiment, will again mean just that and not the musings of "experts" bearing resumes and personal biases rather than facts. Maybe, if we’re lucky, the rot will be cut out and the tide will finally turn in the long war on cancer. We’ll see.
