Yet Another Opinion in Which a Court Mistakes Hypothesis for Theory

While some may imagine that scientific hypotheses are the product of highly educated people with brilliant minds drawing straightforward inferences from compelling evidence, the fact remains that all scientific hypotheses are nothing more than guesses; and as every middle schooler taught the scientific method knows, even the best-pedigreed hypotheses are usually false. On the other hand, sometimes it's the hypothesis with the most dubious provenance that gets promoted to the status of scientific theory (i.e. one that has survived rigorous testing and is powerfully explanatory), as in the case of benzene's structure:

I was sitting writing at my textbook but the work did not progress; my thoughts were elsewhere. I turned my chair to the fire and dozed. Again the atoms were gambolling before my eyes. This time the smaller groups kept modestly in the background. My mental eye, rendered more acute by repeated visions of the kind, could now distinguish larger structures of manifold conformation: long rows, sometimes more closely fitted together, all twining and twisting in snake-like motion. But look! What was that? One of the snakes had seized hold of its own tail, and the form whirled mockingly before my eyes.

- August Kekulé

Because a hypothesis is nothing more than the assembly (by hard work or by daydreaming) of a few bits of what is known or believed into a plausible narrative that explains some phenomenon (e.g. gastric lymphoma); because so little is known about the causes of a complex disease like gastric lymphoma that the discovery of H. pylori suddenly and completely overturned prior views about its causes; and because we can't know (or factor into our hypotheses) what we don't know (you've heard of the human gut microbiome, but what about the human gut virome?), hypotheses are nothing more than speculation. That's why every epidemiological study you've ever read puts the burden of proof squarely on the hypothesis and resolves all doubt in favor of the "null hypothesis" (i.e. the hypothesized causal agent has no effect).

Unfortunately, many courts either don't understand the difference or refuse to distinguish between hypothesis and theory. A recent example is Walker v. Ford. In Walker, plaintiff's expert was allowed to opine, on the basis of his hypothesis that asbestos is a cause of Hodgkin's lymphoma, and thereafter to deduce from another of his hypotheses (that Hodgkin's lymphoma is caused by either Epstein-Barr virus, smoking, or asbestos) that plaintiff's lymphoma must have been caused by asbestos, as he hadn't the virus and didn't smoke. And it isn't just another case of a court conflating hypothesis generation (guessing) with the scientific method (testing guesses), so that guesswork by a properly credentialed witness is turned into a "scientifically valid method" and Rule 702 can be deemed satisfied. It's worse. Not only has the hypothesis that asbestos causes Hodgkin's lymphoma never been verified, it has in fact been repeatedly tested and serially refuted. Furthermore, the most important observation that spawned the hypothesis in the first place (an increased risk of gastric lymphoma among a sample of asbestos workers) has never been reproduced (and will never be reproduced) because when the study was done nobody outside of two researchers in Australia even knew H. pylori existed, much less thought to look for it in gastric lymphoma patients - several years would elapse between its discovery and the determination that it is, worldwide, the leading cause of gastric lymphoma.

The general causation opinion of plaintiff's expert rested on these studies:

1) Cancer Morbidity of Foundry Workers in Korea. A slightly increased risk of stomach cancer and non-Hodgkin's lymphoma was found among foundry workers exposed to a laundry list of substances including asbestos. No exposure assessment was done for any substance, and no increase in Hodgkin's disease was reported. The mortality study of the workforce published this year isn't any more persuasive - see its SMR table for malignant diseases.

2) Extranodal marginal zone lymphoma of mucosa-associated lymphoid tissue type arising in the pleura with pleural fibrous plaques in a lathe worker. Guess what? Asbestos isn't the only cause of pleural plaques and so I stopped reading this article when I got to "He had not been exposed to asbestos."

3) Asbestos exposure and lymphomas of the gastrointestinal tract and oral cavity. This is the study mentioned above that suffers fatally from the understandable ignorance of the confounder H. pylori, though it also appears to suffer from the multiple comparisons problem, as evidenced by the fact that subgroupings of lymphomas (here GI and oral) produced a higher risk than lymphomas in general. Finally, it was a case-control study in which there was no estimation of exposure for any of the cases.

4) Does asbestos exposure cause non-Hodgkin's lymphoma or related hematolymphoid cancers? A review of the epidemiologic literature. I didn't get past the abstract which concludes that a review of the literature reveals "no increased risk of NHL (non-Hodgkin's lymphoma) or other HL-CAs (hematolymphoid cancer) associated with asbestos exposure."

Not discussed in Walker, but apparently the last nail in the coffin of the asbestos-causes-lymphoma hypothesis (and the last sign of any scientific interest in this apparently dead issue), was the publication 10 years ago in the Annals of Epidemiology of Occupational asbestos exposure and the incidence of non-Hodgkin lymphoma of the gastrointestinal tract: an ecologic study. The study found "no support for the hypothesis that occupational asbestos exposure is related to the subsequent incidence of GINHL (gastrointestinal tract non-Hodgkin's lymphoma)."

These articles along with the expert's belief that "as long as asbestos reaches an area, regardless of where it is, it can cause different types of cancer" and asbestos can make its way to the lymph nodes, were all he needed to opine that asbestos causes lymphoma including plaintiff's Hodgkin's lymphoma (because after all "a lymphoma is a lymphoma" save "for therapeutic purposes"). That's too much nonsense to unpack in one blog post so I'll just focus on the claim that wherever asbestos goes in the body it causes cancer. The Institute of Medicine was tasked with answering this very question - is there evidence for a causal relationship to asbestos for cancer of everything from the larynx to the rectum - and generally found that what was in the literature was suggestive but insufficient to reasonably conclude that there is a causal link. See: Asbestos: Selected Cancers.

To save plaintiff's expert and his hypothesis, the appellate court held that it doesn't matter whether an expert's conclusions are correct. All that matters is that the method whereby he reaches his opinion is reliable; and plaintiff's expert's method - guessing about the cause of Hodgkin's lymphoma by creating a narrative about its causation from a few studies (that didn't actually study Hodgkin's lymphoma) - counts as a reliable one. But who, other than the hopelessly ironic, would label as "reliable" a method of causal determination (i.e. the guessing that constitutes a scientific hypothesis) whose product is usually incorrect? Recall that not only are most scientific hypotheses false, but even most of those supported by statistically significant findings are probably false.
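The arithmetic behind that last claim is worth seeing. A minimal sketch, using hypothetical numbers (the prior, alpha, and power below are illustrative assumptions, not figures from any study): if only a small fraction of tested hypotheses are true to begin with, then even at the conventional 5% false-positive rate, a "statistically significant" result is often wrong.

```python
# Illustrative only: why a "significant" finding can still probably be false.
# prior = assumed fraction of tested hypotheses that are actually true
# alpha = false-positive rate (conventional 0.05); power = true-positive rate

def positive_predictive_value(prior: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Probability that a statistically significant result reflects a true hypothesis."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# If 1 in 10 tested hypotheses is true, a significant result is right ~64% of the time;
# if only 1 in 100 is true, a significant result is wrong ~86% of the time.
print(f"{positive_predictive_value(0.10):.0%}")
print(f"{positive_predictive_value(0.01):.0%}")
```

The point of the sketch: the reliability of a "significant" finding depends heavily on the prior plausibility of the hypothesis being tested, which is exactly what a mere guess lacks.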

Only scientific theories get the Seal of Reliability, which is to say they make predictions on which you can rely. And they gain that status only by being put to the test, and passing; and by passing I mean that the predictions they make actually come to pass. So what prediction would follow from "asbestos exposure causes Hodgkin's disease"? Wouldn't it be "people exposed to asbestos are more likely to get Hodgkin's disease than those who aren't"? And what follows from the fact that no study of asbestos-exposed workers has shown an increased risk of Hodgkin's lymphoma? That the claim "asbestos causes Hodgkin's disease" isn't reliable.

So if hypotheses are unreliable in general because by definition they have not been tested, and if the specific hypothesis "asbestos causes Hodgkin's lymphoma" is unreliable because it has been tested and failed to predict the future it entails, in what sense is the opinion of Walker's expert "reliable"? Let me know if you figure it out.

A Plaintiff Win and a Very Good Daubert Opinion from the U.S. Court of Appeals for the Ninth Circuit

... we have a universally recognized Supreme Court, to which all disputes are taken eventually, and from whose verdict there is no appeal. I refer, of course, to direct experimental observation of the facts.

- E. T. Jaynes, physicist, in Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science (1976)

Recently we've been grousing about Daubert-invoking opinions that are actually derived from the strange belief that reliable scientific knowledge does not depend upon the existence of supporting observable facts. Today, however, we're applauding City of Pomona v. SQM North America Corporation, an opinion that gets Daubert and sound science right because it gets the scientific method right.

At issue was the trial court's order excluding an expert witness for the City of Pomona in a groundwater perchlorate contamination case. The expert, Dr. Neil Sturchio, was prepared to testify that the perchlorate detected in the city's groundwater had most likely originated in the Atacama Desert of Chile. Such evidence, if admissible, would implicate defendant SQM, as it had imported for sale to California's agricultural industry many thousands of metric tons of sodium nitrate, an inorganic nitrogen fertilizer, from Chile's Atacama Desert - sodium nitrate that contains on average about 0.1% perchlorate.

It's a big world though and there are plenty of potential sources of perchlorate in groundwater. Perchlorate occurs naturally and has been found in relative abundance in some arid regions of the American Southwest. It's also synthesized industrially for use in solid fuel rocket propulsion systems (manufacturers of which were located in and around Pomona) and fireworks so that defense contractors, the Cal Poly Rocketry Club and 4th of July celebrations all fall under suspicion. How then did Dr. Sturchio determine that the perchlorate in Pomona's groundwater came from a desert almost 6,000 miles away? Well, he didn't conduct an experiment in his mind (a la McMunn and Harris) involving unobservable facts (a la Messick) in order to generate an untestable conclusion (a la Milward). Instead he followed the scientific method.

Perchlorate consists of one chlorine atom and four oxygen atoms. Since both chlorine and oxygen atoms come in different varieties (called isotopes) the question arose as to whether perchlorate also comes in different varieties defined by the ratio (or distribution) of the various isotopes making up its component parts. If it does, and if those varieties vary according to where they're from or how they're made, then something akin to a fingerprint or signature could be generated by analyzing the relative distribution of isotopes in a perchlorate sample. Finding a match in a database of known perchlorate signatures would thereafter yield a suspect and so satisfy plaintiff's evidentiary burden of production.

According to Dr. Sturchio there is a method (a subtype of mass spectrometry known as isotope-ratio mass spectrometry, or IRMS) for assessing the isotope ratios (and thus the variety) of a particular perchlorate sample, and there is a list, albeit a short one, of known perchlorate samples and their IRMS signatures. Most importantly, there's a detailed report available that sets out the theory, its rationale, the prediction it makes, the method by which it was tested and the results obtained. Using that method, and comparing the results against prior tests on other samples, showed, according to Dr. Sturchio, that the perchlorate in the Pomona, CA groundwater had a fingerprint - a signature, if you will - just like that of Chilean perchlorate, and not at all like that of either man-made or indigenous perchlorate. Hence his opinion on its origin.
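The matching step can be pictured with a toy sketch. This is not Dr. Sturchio's actual procedure, and every number below is invented for illustration: each known source is summarized as a vector of isotope ratios, and an unknown sample is attributed to the nearest known signature.

```python
# Toy illustration of signature matching (all values hypothetical).
# Each source is a point in "isotope-ratio space"; an unknown sample is
# attributed to whichever reference signature lies nearest to it.
import math

REFERENCE_SIGNATURES = {  # hypothetical (delta-37Cl, delta-18O) pairs
    "Atacama (natural Chilean)": (-12.0, -5.0),
    "Synthetic (man-made)": (0.5, -17.0),
    "Indigenous Southwest US": (-3.0, 2.0),
}

def closest_source(sample, references=REFERENCE_SIGNATURES):
    """Return the reference source whose signature is nearest the sample."""
    return min(references, key=lambda src: math.dist(sample, references[src]))

print(closest_source((-11.5, -4.2)))  # nearest to the Atacama signature
```

Note how the district court's third objection maps directly onto this sketch: with only a short list of reference signatures, "nearest known source" is not the same as "actual source" - which is why the Ninth Circuit treated the size of the database as a matter of weight rather than admissibility.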

The district court found Dr. Sturchio's opinion to be unreliable for three reasons. The first was that no government agency had yet certified the method he employed and that it was still being revised. The second was that the particular procedure he had followed in this instance had not been tested and could not be retested. The third was that the database of known perchlorate varieties and their signatures was too small.

Because the theory that perchlorate comes in different varieties has been tested and corroborated by at least one study, and because those varieties can, per the theory, be distinguished by use of a widely available method, the Ninth Circuit quite rightly held that the criteria at the heart of Daubert - testability and reliability - had been satisfied. Defendant was free to test the theory as well as the method, and to refute the former and/or explain why the latter was either inappropriate for the task or so inaccurate as to be unreliable. Without evidence of its own to refute plaintiff's corroborated theory or the accuracy of the method used to test it, Dr. Sturchio's opinions could not be excluded. In such a case, the court held, arguments aimed at the potential for holes in the theory or error in the methodology go to the weight, and not the admissibility, of the proffered evidence.

The court did a nice job (with one exception to be discussed below) in setting out its reasoning and so rather than go on about it we'll just refer you to the opinion. Instead we'll make one comment and then try to explain why the court's argument against the need for certainty isn't the usual straw man argument but is rather an understandably awkward attempt to address the problem known as "the glory of science and the scandal of philosophy".  

Our comment goes to the defendant's argument that the reliability of a scientific method depends upon final government certification of the particular technique. We file it under "Be careful what you wish for" and remind our readers that the power to define what is and what is not a telescope determines what can and cannot be seen through one.

Now for the hard part. Defendant's argument that the database of perchlorate signatures was too small and the court's response that the law doesn't require certainty is just a version of a very old argument that goes like this:

Scientist: If all swans are white then every swan that is seen shall be white. Since all swans ever seen have been white, all swans must be white. It is thus my opinion that if the color of a particular swan is at issue then it will have been white.

Inquisitor: I shall demonstrate that you have fallen into the fallacy known as Affirming the Consequent; which is recognized by all learned men and women to constitute a notorious error of logic. Have you seen, or seen a record of, every swan that lives or has ever lived?

Scientist: I have not.

Inquisitor: So there are swans that once lived that were never seen, and swans now living that remain undiscovered, and you have no knowledge of their color?

Scientist: It is undeniable; and I would add that there are swans yet to be, and I am of the opinion that they too will be white.

Inquisitor: Then you further admit that there is an entire category of things, that of swans yet to be, about the color of which you are willing to opine despite having not a single observation of any member of that category?

Scientist: I do. It is after all prediction on which a scientist stakes her reputation.

Inquisitor: A risky gamble, is it not? For despite ten thousand observations of white swans the sighting of a single black one would do what to your theory that all swans are white?

Scientist: It would refute it, utterly.

Inquisitor: How then can your theory be anything more than the most tenuous sort of idle speculation - built out of nothing more than anecdotes, a collection of sightings of some unknown portion of all swans, and at risk of collapse at any moment?

Scientist: Because all those confirmatory sightings with none to the contrary make it at least very probably true.

Inquisitor: Alas, I had hoped you had something more and might save me from my skepticism. Oh well. Do you agree that probability is a measure of uncertainty?

Scientist: Yes, it plays that role in logic.

Inquisitor: And what is meant by uncertainty?

Scientist: To take the case of a coin toss, for example, while I cannot tell you whether when flipped it will come up heads or tails I can tell you the expected frequency of each occurrence; or, given past experience flipping the same coin, I can tell you to what degree I believe the coin will come up say heads. Any probability calculated other than 100% is then a measure of uncertainty.

Inquisitor: But to make such calculations you must know how many sides the coin has or how many cards are in the deck?

Scientist: True.

Inquisitor: How many swans have there ever been?

Scientist: I do not know.

Inquisitor: And how many swans are yet to be?

Scientist: I do not know that either.

Inquisitor: So you cannot say whether your record of white swan observations constitutes a large, small or merely infinitesimal fraction of all the swans that have ever been or will ever be?

Scientist: I cannot.

Inquisitor: And so when you use the word probability you use it in neither the mathematical nor the logical sense, since you have insufficient information to even estimate the probability that your opinion is correct?

Scientist: That is true.

Inquisitor: So now, at last, do you admit that you really do not know?

Scientist (revealed as Sir A. B. Hill): Who knows, asked Robert Browning, but the world may end tonight? True, but on available evidence most of us make ready to commute at 8:30 the next day.

In City of Pomona plaintiff's expert had real evidence - facts that are directly observable and subject to testing - and not just a hunch. Those facts and the method by which they had been gathered had been tested by others, and confirmed. And defendant was free to find a black swan. That, in our view, is good enough.


Three Straw Men on a Witch Hunt

He who is accused of sorcery should never be acquitted, unless the malice of the prosecutor be clearer than the sun; for it is so difficult to bring full proof of this secret crime, that out of a million witches not one would be convicted if the usual course were followed!

- 17th century French legal authority

While looking for some references to help make sense of the (tortured?) reasoning responsible for Messick v. Novartis Pharmaceuticals Corp. (a recent U.S. Ninth Circuit opinion arising out of a California intravenous bisphosphonate/osteonecrosis-of-the-jaw (ONJ) case), we came across the quote above in the very enjoyable Does Your Model Weigh the Same as a Duck? Though the aim of that paper is to expose two "particularly pernicious" fallacies of logic infecting drug research methodology, the point it makes when referring to the pre-Enlightenment era's view of the appropriate evidentiary burden in the trial of witches applies equally to modern courts that lower standards of proof in toxic tort cases lest, they fear, all present-day witches (i.e. chemicals/pharmaceuticals) go unburnt.

A strong suspicion that the usual course, i.e. skeptical gatekeeping, won't be followed in Messick arises early on when the court chooses to demonstrate the strength of its argument (that the trial court erred when it found the opinions of plaintiff's causation expert to be irrelevant and unreliable) by fighting not one but three causal straw men. As usual there's Certainty, which advances the quixotic claim that plaintiff needs to prove causation essentially by deduction (failing to recall Hume, D., A Treatise of Human Nature: "all knowledge degenerates into probability"). Then there's How, which stands for an argument nobody makes - e.g. we can't reasonably infer that aspirin reduces the risk of heart disease until it is proven how it does so. Last there's Sole, which defends the defenseless argument that a putative cause must also be the sole (i.e. only and sufficient) cause of plaintiff's injury. Unsurprisingly, each straw man is dispatched in a paragraph or less.

What is surprising is the length to which the court goes to save the plaintiff from her own expert, Dr. Richard Jackson. Jackson admitted that the fact that bisphosphonates are a cause of ONJ doesn't mean that it was the cause of her ONJ. He even admitted that the plaintiff had multiple risk factors for ONJ and that he could not determine "which of those particular risk factors is causing [the ONJ]." You would think that such equivocal testimony would put an end to plaintiff's quest to prove causation and that's exactly what the District Court below had held; but the Ninth Circuit thought otherwise.

The appellate court held that while plaintiff's causation expert "never explicitly stated that Messick's bisphosphonate use caused her [ONJ]", Dr. Jackson had analogized plaintiff's use of bisphosphonates to "the oxygen necessary to start a fire." Also, he had said that "[bisphosphonate use] was at least a substantial factor in her development of [ONJ]." Finally he was prepared to opine, based on his "extensive clinical experience", that "a patient without cancer or exposure to radiation in the mouth area would not develop ONJ lasting for years (as had plaintiff) without IV bisphosphonate treatments". Somehow, that's enough for a plaintiff to get to the jury on causation.

The problem with the assertion that bisphosphonates are to ONJ as oxygen is to fire is that a quick PubMed search reveals numerous cases of ONJ in cancer patients decades before bisphosphonates were ever marketed to them. ONJ has been attributed to bacteria, dental work, radiation and cancer all by itself. Perhaps, given that Dr. Jackson diagnosed plaintiff with bisphosphonate-related ONJ (or BRONJ), he's really saying: "plaintiff has BRONJ; therefore she has ONJ related to bisphosphonates". But that would just be begging the question and presumably not persuasive to the court. Either way, how an argument that's either demonstrably false or a logical fallacy can support plaintiff's causal claim escapes us.

Next, what should we make of the court's reliance on Dr. Jackson's "it's at least a substantial factor" opinion? Apparently what the court is saying is that while (1) there are multiple causes of ONJ including bisphosphonates; and (2) plaintiff had cancer, and perhaps other risk factors, known to cause ONJ; even though (3) her expert can't say which one did it; because (4) he's prepared to testify "that Messick's bisphosphonate use was a substantial factor"; (5) such testimony satisfies California's substantial factor standard and is admissible. However, the only way that (5) follows from (1 - 4) is if proof of "but for", or counterfactual, causation is not an element of California's substantial factor causation test and "maybe" causes are good enough. Yet California's substantial factor standard actually "subsumes the 'but for' test". We're again left scratching our heads.

The final (and apparently, to the court, most compelling) causation argument was that Dr. Jackson, on the basis of things seen only by himself (i.e. his clinical experience), had ruled out leading alternate causes and so thereby reliably ruled in bisphosphonates. This is of course just ipse dixit making an appearance in its de rigueur guise as "differential diagnosis" - the court admitting as much when it writes "[m]edicine partakes of art as well as science ..." while pretending not to notice the impact of the evidence-based medicine revolution. We've taken the position in prior posts that the belief that differential diagnosis (a/k/a differential etiology, a/k/a inference to the best explanation) is akin to the scientific method, and that it produces the sort of reliable scientific knowledge contemplated by Rule 702, is simply the sort of pre-scientific thinking common among those prone to being mesmerized by credentials and jargon. Instead of rehashing those arguments, consider the case of Dr. Franz Mesmer and what is revealed when the scientific method is applied to the beliefs of doctors drawn from their clinical experience.

After having seen many patients Dr. Mesmer came up with a hunch about how the body worked and how good health could be restored to the sick. His hypothesis was called animal magnetism and it entailed that an invisible force ran through channels in the body which, when properly directed, could effect all manner of cures. Redirecting that force via mesmerization became wildly popular and Dr. Mesmer became quite famous. In what would become the first recorded "blind" experiment clinicians who practiced mesmerism - the art of redirecting the invisible forces to where they were needed - proved unable, when they did not know what it was they were mesmerizing, to distinguish a flask of water from a living thing and neither could they produce any cures. On the commission overseeing the experiment in 1784 was none other than one of the leading lights of the American Enlightenment and rebel against authority - Benjamin Franklin.

Two hundred and ten years later doctors were still seeing in their patients what their hypotheses predicted rather than what was actually occurring. A classic research paper demonstrating the phenomenon is The impact of blinding on the results of a randomized, placebo-controlled multiple sclerosis clinical trial. Investigators assessing the efficacy of a new treatment for multiple sclerosis (MS) in a flash of brilliance decided to "blind" some of the neurologists who would be clinically assessing patients undergoing one of three treatments while letting the rest of the neurologists involved in the effort know whether a particular patient was getting the new treatment, the old treatment or the sham treatment. While the blinded neurologists and even the patients who had correctly guessed their treatment assignments (a check for the placebo effect) saw no improvement over the old treatment, the unblinded neurologists not only saw a significant positive effect that wasn't there but they continued to see it for two years. Is there some workaround; a way to test after the fact for the distortion of the lens through which a clinician in the know observes his or her patients? You could try but it would appear to be a mug's game and furthermore, by the very nature of the bias produced (unblinded clinicians blind to the very existence of their own bias) beyond the ability of cross examination to uncover.

A clinician's art, and a differential diagnosis derived from that art, saved the day in Messick. Along the way to deciding that objective, verifiable evidence is not required to prove causation in such cases the court listed its sister circuits said to be of like mind and, in the first footnote, added that the Fifth Circuit was now alone in not having similarly lowered the gates. How the Fifth Circuit feels about being Daubert's last redoubt is unknown to us, but we're pretty sure that a plaintiff would win on causation in a bisphosphonate-ONJ case before that court. That's because there are five years' worth of objective and verifiable (and verified) evidence that (a) ONJ incidence in bisphosphonate-treated cancer patients is drastically and consistently increased; and (b) the likelihood that ONJ in a bisphosphonate-treated cancer patient was due to the treatment is slightly over 98%. See: 2014 AAOMS Position Paper on Medication-Related Osteonecrosis of the Jaw.
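The arithmetic behind a "probability of causation" figure like that 98% is simple, and worth spelling out. A minimal sketch, using a hypothetical relative risk (the RR values below are illustrative, not taken from the AAOMS paper): if exposed patients develop the disease at RR times the background rate, the fraction of exposed cases attributable to the exposure is (RR - 1) / RR.

```python
# Attributable fraction among the exposed: the share of cases in exposed
# patients that would not have occurred absent the exposure.
# RR values below are hypothetical, chosen to illustrate the arithmetic.

def probability_of_causation(relative_risk: float) -> float:
    """Return (RR - 1) / RR, the attributable fraction among the exposed."""
    return (relative_risk - 1) / relative_risk

print(f"{probability_of_causation(50):.1%}")    # a 50-fold risk -> 98.0%
print(f"{probability_of_causation(1.02):.1%}")  # a risk barely above background -> ~2%
```

Note that RR = 2 is the break-even point: (2 - 1) / 2 = 50%, which is why "more likely than not" arguments in toxic tort cases so often turn on whether the relative risk exceeds 2.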

We know why plaintiffs' counsel don't want courts to embrace the sort of causal reasoning that would make a case like Messick easy on both general and specific causation. It's because the day a court holds that "a probability estimate of 98% obviously passes the 'more likely than not' test" is the prelude to doomsday in low-dose asbestos/benzene/etc. litigation, when that same court holds that "a probability estimate of 2% obviously does not." What we can't understand is why so many courts refuse to enforce the test by demanding something more than the musings of experts. Witches and bewitchment are our two working hypotheses.



The Human Cost of Bad Science

Because of the aggressiveness of a disease, its stage when detected and/or the requirement that patients enrolled in clinical trials not simultaneously pursue multiple treatments "patients with progressive terminal illness may have just one shot at an unproven but promising treatment." Too often their last desperate shots are wasted on treatments that had no hope of success in the first place. Two new comment pieces in Nature highlight the extent of the problem.

In Preclinical research: Make mouse studies work, Steve Perrin demonstrates that, just like cancer patients, ALS (Lou Gehrig's disease) patients are betting their lives on treatments that showed great promise in lab animals only to do no good in humans. So why do 80% of these treatments fail? It's not a story of mice and men. It's a story of bureaucratic science. Of going through the motions. Of just turning the crank. And of never, ever, daring to critique your methods lest you find, to take one example, that the reason your exciting new ALS treatment works so well in mice is that your mice didn't have ALS to begin with - you having unwittingly bred the propensity to develop it out of your lab animals.

Then read Misleading mouse studies waste medical resources. It continues the story of how drugs that should have been discovered to be useless in mice instead made their way into clinical trials where they became false promises on which thousands of ALS patients and their families have pinned their hopes.

We hope those courts that have bought into the idea that reliable scientific knowledge can be gained without the need for testing and replication are paying attention.


A Memorandum Opinion And The Methods That Aren't There At All

You'd think that courts would be leery about dressing their Daubert gatekeeping opinions in the "differential etiology method". After all, as you can see for yourself by running the query on PubMed, the U.S. National Library of Medicine / National Institutes of Health's massive database of scientific literature, apparently nobody has ever published a scientific paper containing the phrase "differential etiology method". Of the mere 22 articles ever to contain the sub-phrase "differential etiology", none use it in the sense - to rule in a heretofore unknown cause - meant by the most recent court to don its invisible raiment. Even mighty Google Scholar can manage to locate only 6 references to the "method", and all are law review articles resting not upon some explication and assessment of a scientific method known as differential etiology but rather on the courtroom assertions of paid experts who claimed to have used it.

You'd also hope courts would understand that scientific evidence is no different than any other kind of evidence. It must still be something that has been observed or detected, albeit with techniques (e.g. nuclear magnetic resonance spectroscopy) or via analyses (e.g. epidemiology) beyond the ken of laymen. Yet, while they'd never allow into evidence (say, in an automobile case) the testimony of someone who had witnessed neither the accident nor the driving habits of the Defendant but who was prepared to testify that he thought the Defendant was speeding at the time of the accident because the Defendant looks like the sort of person who would speed, and because he can't think of any other reason for the wreck to have occurred, some courts will allow that very sort of testimony so long as it comes from a Ph.D. or an M.D. who has used the "weight of the evidence method". Can you guess how many references to scientific papers using the "weight of the evidence method" PubMed yields? The same 0 as the "differential etiology method".

Nevertheless another (memorandum) opinion has joined the embarrassing procession of legal analyses bedecked in these ethereal methods; this time it's a radiation case styled McMunn v. Babcock & Wilcox Power Generation Group, Inc.

Plaintiffs suffering from a wide variety of cancers allegedly caused by releases of alpha particle emitting processed uranium from the Apollo, PA research and uranium fuel production facility sued Babcock and other operators of the site. Following battles over a Lone Pine order and extensive discovery the sides fired off motions to exclude each other's experts. The magistrate to whom the matter had been referred recommended that plaintiffs' general causation expert Dr. Howard Hu, specific causation expert Dr. James Melius, emissions and regulations expert Bernd Franke and nuclear safety standards expert Joseph Ring PhD be excluded. The plaintiffs filed objections to the magistrate's recommendations, the parties filed their briefs and the District Court rejected the magistrate's recommendations and denied defendants' motions.

Dr. Hu had reasoned that since 1) ionizing radiation has been associated with lots of different kinds of cancer; 2) alpha particles ionize; and 3) IARC says alpha particles cause cancer it makes sense that 4) the allegedly emitted alpha particles could cause any sort of cancer a plaintiff happened to come down with. It's not bad as hunches go though it's almost certainly the product of dogma - specifically the linear no-threshold dose model - rather than the wondering and questioning that so often leads to real scientific discoveries. But whether a hunch is the product of the paradigm you're trapped in or the "what ifs" of daydreaming it remains just that until it's tested. Unfortunately for Dr. Hu's hunch, it has been tested.

Thorotrast (containing thorium - one of the alpha emitters processed at the Apollo facility) was an X-ray contrast medium that was directly injected into numerous people over the course of decades. High levels of radon could be detected in the exhaled breath of those patients. So if Dr. Hu's hunch is correct, you'd expect those patients to be at high risk for all sorts of cancer, right? They're not. They get liver cancer overwhelmingly and have a fivefold increase in blood cancer risk but they're not at increased risk for lung cancer or the other big killers. Why? It's not clear though the fact that alpha particles can't penetrate paper or even skin suggests one reason. Look for yourself and you'll find no evidence (by which we mean that the result predicted by the hunch has actually been observed) to support the theory that alpha particles can cause all, most or even a significant fraction of the spectrum of malignancies whether they're eaten, injected or inhaled and whether at home or at work. Be sure to check out the studies of uranium miners.

But let's assume that alpha particles can produce the entire spectrum of malignancies, that the emissions from the facility into the community were sufficiently high, and that the citizenry managed to ingest the particles. What would you expect the cancer incidence to be for that community? Probably not what the repeated epidemiological studies found; they concluded that "living in municipalities near the former Apollo-Parks nuclear facilities is not associated with an increase in cancer occurrence".

Dr. Hu attacked the studies of uranium miners and of the communities around Apollo by pointing out their limitations. This one didn't have good dose information and that one had inadequate population data. Perfectly reasonable. It's like saying "I think you were looking through the wrong end of the telescope" or "I think you had it pointed in the wrong direction". He's saying "your evidence doesn't refute my hunch because your methods didn't test it in the first place."

Ok, but where's Dr. Hu's evidence? It's in his mind. His hunch is his evidence; and it's his only evidence. He weighed some unstated portion of what is known or suspected about alpha particles and cancer in the scales of his personal judgment and reported that CLANG! the result of his experiment was that the scale came down solidly on the side of causation for all of plaintiffs' cancers.

At the core of the real scientific method is the idea that anyone with enough time and money can attempt to reproduce a scientist's evidence; which is to say, what he observed using the methods he employed. Since no one has access to Dr. Hu's observations and methods other than Dr. Hu, his hunch is not science. Furthermore, there's no way to assess to what extent the heuristics that bias human decision-making impacted the "weighing it in my mind" approach of Dr. Hu.

Given that there's no way to reproduce Dr. Hu's experiment and given that none of the reproducible studies of people exposed to alpha particles demonstrate that they're at risk of developing the whole gamut of cancer Dr. Hu's argument boils down to that of the great Chico Marx: "Who are you going to believe, me or your own eyes?" Alas, the court believed Chico and held that "Dr. Hu's opinions have met the pedestrian standards required for reliability and fit, as they are based on scientifically sound methods and procedures, as opposed to 'subjective belief or unsupported speculation'".

Next, having already allowed plaintiffs to bootstrap alpha emitters into the set of possible causes of all of the plaintiffs' cancers, the court had no problem letting the specific causation expert, Dr. Melius, conclude that alpha emitters were the specific cause of each plaintiff's cancer merely because he couldn't think of any other cause that was more likely. At first that might seem sensible. It's anything but.

Don't you have to know how likely it is that the alpha emitters were the cause before you decide if some other factor is more likely? Obviously. And where does likelihood/risk information come from? Epidemiological studies in which dose is estimated. Of course the plaintiffs don't have any such studies (at least none that support their claim against alpha emitters) but couldn't they at least use the data for ionizing radiation from, say, the atomic bomb or Chernobyl accident survivors? After all, the court decided to allow plaintiffs' experts to testify that "radiation is radiation".

Well, just giving a nod to those studies raises the embarrassing issue of dose, and the one lesson we know plaintiffs' counsel have learned over the last several years of low-dose exposure cases is to never, ever, ever estimate a dose range unless they're ordered to do so. That's because dose is a measurement that can be assessed for accuracy and used to estimate likelihood of causation. Estimating a dose thus opens an avenue for cross examination but more devastatingly it leads to the argument that runs: "Plaintiff's own estimate places him in that category in which no excess risk has ever been detected."
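The arithmetic that a dose estimate makes possible can be sketched in a few lines. Under the linear no-threshold model that plaintiffs' experts implicitly embrace, excess relative risk scales with dose, and excess relative risk in turn fixes the probability of causation. The risk coefficient below is an illustrative placeholder, not a real radiation risk figure:

```python
# Hypothetical sketch: why a stated dose invites cross-examination.
# Under a linear no-threshold (LNT) model, excess relative risk (ERR)
# is proportional to dose, and ERR fixes the probability of causation:
#   PC = ERR / (1 + ERR)
# The slope err_per_sv is an illustrative placeholder, not a real coefficient.
def probability_of_causation(dose_sv: float, err_per_sv: float) -> float:
    err = err_per_sv * dose_sv      # excess relative risk at this dose
    return err / (1.0 + err)        # PC = ERR / (1 + ERR)

# An admitted low dose yields a PC nowhere near "more likely than not":
print(probability_of_causation(dose_sv=0.01, err_per_sv=0.5))  # ~0.005
```

Which is exactly the point: once a dose is on the record, the likelihood of causation it implies can be computed and checked, and a low dose computes to a low likelihood.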

Fortunately for plaintiffs the court held that Dr. Melius' differential diagnosis or differential etiology method does not require that he estimate the likelihood that radiation caused a particular cancer before he can conclude that radiation is the most likely cause among many (including those unknown).

First the court held that it was not its job to decide which method is the best among multiple methods so long as the method is reliable. For this it relies upon In re TMI Litigation. When In re TMI Litigation was decided (1999) the Nobel Prize in Physiology or Medicine for the discovery that Helicobacter pylori and not stress was the cause of most peptic ulcers was six years in the future. The method of observational epidemiology and application of the Hill causal criteria had generated the conclusion that peptic ulcers were caused by stress. The method of experimentation, observation and application of Koch's postulates established H. pylori (nee C. pyloridis) as the real cause; and, for the umpteenth time, experimentation as the best method. So could a court allow a jury to decide that the peptic ulcer in a plaintiff with a H. pylori infection was caused by stress at work? Apparently the answer in the Third Circuit is "Yes"; scientific knowledge be damned.

Second, citing In re Paoli and other Third Circuit cases, the court held that differential etiology (without knowledge of dose) has repeatedly been found to be a reliable method of determining causation in toxic tort cases. As we've written repeatedly, this is a claim wholly without support in the scientific literature. Are there studies of the reliability of differential diagnoses made by radiologists? You bet (here for example). Are there studies of immunohistochemical staining for the purpose of differential diagnosis in the case of suspected mesothelioma? Yep. Here's a recent example. There are (as of today) 236,063 pages of citations to articles about differential diagnosis on PubMed and none of them (at least none I could find via key word searches) suggests that a methodology such as Dr. Melius' exists (outside the courtroom) and none represents an attempt to test the method to see if it is reliable.

Third, the court held that since Dr. Melius' opinions were the result of his "qualitative analysis", the fact that plaintiffs were living in proximity to the facility during the times of the alleged radiation releases and the fact that Babcock failed to monitor emissions and estimate process losses to the environment were enough to allow a jury to reasonably infer that plaintiffs were "regularly and frequently exposed to a substantial, though unquantifiable dose of iodized [ionized?] radiation emitted from the Apollo facility." How such reasoning can be anything other than argumentum ad ignorantiam is beyond our ability to glean.

Worse yet is this sentence appearing after the discussion about the absence of data: "A quantitative dose calculation, therefore, may in fact be far more speculative than a qualitative analysis." What would Galileo ("Measure what is measurable, and make measurable what is not so"), the father of modern science, make of that? Yes, an estimate could be wrong and a guess could be right, but the scientist who comes up with an estimate makes plain for all to see her premises, facts, measurements, references and calculations whereas the expert peddling qualitative analyses hides his speculation behind his authority. Besides, dose is the only way to estimate the likelihood of causation when there are multiple, including unknown, alternate causes. Then again, in addition to everything else Galileo also proved that questioning authority can land you in hot water so we'll leave it at that.

Finally, in the last and lowest of the low hurdles set up for Dr. Melius, the court found that he had "adequately addressed other possible cause of Plaintiffs' cancers, both known and unknown." How? By looking for "any risk factor that would, on its own, account for Plaintiffs' cancers", reviewing medical records, questionnaires, depositions, work histories and interviewing a number of plaintiffs. Presumably this means he looked for rare things like angiosarcoma of the liver in a vinyl chloride monomer worker and mesothelioma in an insulator, commoner things like lung cancer in heavy smokers and liver cancer in hepatitis C carriers, and hereditary cancers (5% to 10% of all cancers) like acute lymphoblastic leukemia in people with Down syndrome or soft tissue sarcomas in kids with Li-Fraumeni Syndrome. You can make a long list of such cancers but they represent perhaps one fourth of all cases. Of those cancers that remain there will be no known risk factors so that once you're allowed to rule in alpha emitters as a possible cause ("radiation is radiation") and to then infer from both "qualitative analysis" and the absence of data that a "substantial" exposure occurred you've cleared the substantial factor causation hurdle (which at this point is just a pattern in the courtroom flooring). Having gotten to the jury all that remains is to make the argument plaintiffs' counsel made before Daubert: "X is a carcinogen, Plaintiff was exposed to X, Plaintiff got cancer; you know what to do."

We're living through an age not unlike Galileo's. People are questioning things we thought we knew and discovering that much of what the Grand Poohbahs have been telling us is false. There's the Reproducibility Project: Psychology, the genesis of which included the discovery of a widespread "culture of 'verification bias'" (h/t ErrorStatistics) among researchers and their practices and methodologies that "inevitably tends to confirm the researcher's research hypotheses, and essentially render the hypotheses immune to the facts...". In the biomedical sciences only 6 of 53 papers deemed to be "landmark studies" in the fields of hematology and oncology could be reproduced, "a shocking result" to those engaged in finding the molecular drivers of cancer.

Calls to reform the "entrenched culture" are widespread and growing. Take for instance this recent piece in Nature by Regina Nuzzo in which one aspect of those reforms is discussed:

It would have to change how statistics is taught, how data analysis is done and how results are reported and interpreted. But at least researchers are admitting that they have a problem, says (Steven) Goodman [physician and statistician at Stanford]. "The wake-up call is that so many of our published findings are not true."

How did we get here? A tiny fraction of the bad science is the result of outright fraud. Of the rest some is due to the natural human tendency to unquestioningly accept, and overweigh the import of, any evidence that supports our beliefs while hypercritically questioning and minimizing any that undercuts them (here's an excellent paper on the phenomenon). Thanks to ever greater computing power it's becoming easier by the day to "squint just a little bit harder" until you discover the evidence you were looking for. For evidence that some researchers are using data analysis to "push" until they find something to support their beliefs and then immediately proclaim it read: "The life of p: 'Just significant' results are on the rise." For evidence that it's easy to find a mountain of statistical associations in almost any large data set (whereafter you grab just the ones that make you look smart) visit ButlerScientifics which promises to generate 10,000 statistical relationships per minute from your data. Their motto, inadvertently we assume, makes our case: "Sooner than later, your future discovery will pop up."
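Just how reliably a mountain of "discoveries" pops out of pure noise is easy to demonstrate. The sketch below (all names and numbers are illustrative) tests every pair of columns in a table of random numbers with no real relationships in it, and counts how many associations clear the conventional p < 0.05 bar anyway:

```python
# A minimal sketch of the multiple-comparisons problem: correlate every
# pair of columns in a table of pure noise and count the "significant"
# findings. With 780 tests at alpha = 0.05, roughly 5% (about 39) false
# positives are expected even though no real relationship exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_variables = 100, 40
noise = rng.normal(size=(n_subjects, n_variables))  # nothing real in here

spurious, n_pairs = 0, 0
for i in range(n_variables):
    for j in range(i + 1, n_variables):
        n_pairs += 1
        _, p = stats.pearsonr(noise[:, i], noise[:, j])
        if p < 0.05:
            spurious += 1

print(f"{spurious} 'significant' associations out of {n_pairs} tests")
```

Report only the hits, stay silent about the 780 tests, and you have a machine for manufacturing discoveries.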

Of the remaining bad science, i.e. that not due to fraud or cognitive biases, apparently a lot of it arises because researchers often misunderstand the very methods they use to draw conclusions from data. For example, read "Robust misinterpretation of confidence intervals" and you'll get the point:

In this study, 120 researchers and 442 students - all in the field of psychology - were asked to assess the truth value of six particular statements involving different interpretations of a CI (confidence interval). Although all six statements were false, both researchers and students endorsed, on average more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever.
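What a confidence interval actually guarantees is itself easy to check by simulation: over many repeated samples, about 95% of the intervals constructed this way will cover the fixed true parameter. It says nothing about the probability that any single, already-computed interval contains it. A short sketch (parameters are illustrative):

```python
# Simulate the actual guarantee of a 95% confidence interval: across
# repeated samples, ~95% of the intervals constructed this way cover the
# fixed true mean. No probability statement attaches to any one interval.
import numpy as np

rng = np.random.default_rng(1)
true_mean, sigma, n = 10.0, 2.0, 30
z = 1.96  # normal approximation critical value for 95%

trials, covered = 10_000, 0
for _ in range(trials):
    sample = rng.normal(true_mean, sigma, n)
    half_width = z * sample.std(ddof=1) / np.sqrt(n)
    if sample.mean() - half_width <= true_mean <= sample.mean() + half_width:
        covered += 1

print(f"coverage: {covered / trials:.3f}")  # close to 0.95
```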

We suppose such results would have surprised us before we got out of law school. Nowadays we just nod in agreement; our skepticism regarding the pronouncements of noted scientific authorities became complete after recently deposing an epidemiology/causation expert who didn't even know what "central tendency" meant. He also had never heard of "The Ecological Fallacy"; which explains why he committed the error throughout his report. He couldn't estimate how likely it was that the chemical in question was causative nor did he know the rate of the disease in plaintiff's age/gender bracket. No matter. His opinions came wrapped in the same non-existent scientific methods and the court has been duly impressed with his extensive credentials and service on panels at NCI, IARC, etc. So it goes.

Hopefully courts will begin to take notice of the rot that sets in when scientists substitute personal judgment - distorted by cognitive biases to which they are blind and by intuitions that can easily lead their causal inferences astray - for measurement and experiment in their quest to generate scientific knowledge. That and the fact that the methods some experts parade about in are not in fact a way of doing science but rather just a way of shielding their unscientific opinions from scrutiny.

Too bad about the method though. If it worked we could hire the appropriate scientific authorities to come up with a cure for cancer. They would ponder the matter, render their opinions as to the cure and testify about why their "qualitative analysis" obviously points to the cure. The jury would pick the best treatment among those on offer and the court would enter a judgment whereby cancer was ordered to yield to the cure. That's not how it works and it wasn't how it worked in the early 1600s when the sun refused to revolve about the earth, heedless of the pronouncements of scientific authorities and courts of the day. Maybe history will repeat itself in full and scientific knowledge, the product of observation, measurement, hypothesis and experiment will again mean just that and not the musings of "experts" bearing resumes and personal biases rather than facts. Maybe if we're lucky the rot will be cut out and the tide will finally turn in the long war on cancer. We'll see.


Back in the day expert witnesses attacked with their credentials and parried with their jargon. Nowadays they're "wonkish". The result is typically something like this recent affidavit of Dr. Arthur Frank - a beyond encyclopedic recitation of the scientific literature and the conclusions he believes obviously flow from it. Yet Dr. Frank is a piker when it comes to wonky reports. Plaintiffs' expert in a recent benzene/AML case generated a report consisting of nearly 400 pages of densely packed text and calculations. Having become somewhat cynical after 25 years of litigation, I can't help but suspect that this trend serves mainly the ethos component of rhetoric by seeming to demonstrate, with an eye on lazy gatekeepers, a deep understanding of the topic at hand. Well, that and making life much more difficult for anyone trying to tease apart and analyze all the pieces that make up these towering works of sophistry.

Take for instance the report of Dr. Frank above. The big issue in asbestos litigation today and the one he supposedly set out to opine about is what to make of de minimis exposures both in the context of "but for" causation and substantial factor causation. Instead he sets up the straw man of a serious dispute about whether heavy amphibole exposure can cause a variety of asbestos-related diseases and pummels him to the ground. Page after page he sits on the straw man's chest punching him in the face and for nearly seventy pages the straw man stubbornly refuses to tap out. Finally Dr. Frank gets to the point but his answer is nothing new and nothing we haven't discussed here a dozen times.

The question of what to do about de minimis exposures is a public policy issue that science cannot resolve. Dr. Frank and I had a very nice exchange recently at a conference in San Francisco where we both spoke and he gets it. He asked me "so what happens when you have someone with mesothelioma whose disease was caused by the sum of several de minimis exposures? Is he left without a remedy?" To which I replied "the only difference between that case and Palsgraf is that each of the micro-events (each being harmless without all the others) is the same in the case of asbestos and different in the case of Mrs. Palsgraf. But why would like-ness change the answer as to whether a micro-event was de minimis or not?" We agreeably agreed to think about good arguments for and against it.

You can find a copy of my PowerPoint from the conference here.


Five Beagles Refused To Die

Thinking about Harris v. CSX Transportation, Inc. and trying to understand how a court could come to believe that an educated guess that has never been tested, or one that has been repeatedly tested and serially refuted, could nevertheless constitute scientific knowledge I thought I'd reread Milward v. Acuity Specialty Products: Advances in General Causation Testimony in Toxic Tort Litigation by Carl Cranor. It was published earlier this year in a Wake Forest Law Review issue devoted to advancing the thinking that produced Milward and now Harris. In it he sets out his argument that (a) "[t]he science for identifying substances as known or probably human carcinogens has advanced substantially" over the last quarter century and (b) where science leads, courts should follow. (Cranor you'll recall is a philosopher and "the scientific methodology expert for plaintiffs in Milward".)

Cranor begins by asking you to imagine having been diagnosed with "early stage bladder cancer that had been caused by MOCA (4,4'-methylenebis(2-chloroaniline))" following exposure to the chemical in an occupational setting. He then reveals that though "IARC classifies MOCA as a known human carcinogen" many judges would nevertheless deny you your day in court because they don't understand the "new science" for identifying the etiology of cancer. You see, while IARC concluded that "[t]here is inadequate evidence in humans for the carcinogenicity of 4,4'-methylenebis(2-chloroaniline)" its overall evaluation was that "4,4'-methylenebis(2-chloroaniline) is carcinogenic to humans (Group 1)" notwithstanding! And its rationale? MOCA (sometimes bearing the acronym MBOCA just to confuse things) is structurally similar to aromatic amines that are urinary bladder carcinogens (especially benzidine), several assays for genotoxicity in some bacteria and fruit flies have been positive, in rats and dogs MOCA can form DNA adducts, mice and rats and dogs exposed to MOCA develop cancer (dogs actually develop urinary bladder cancer), one of those DNA adducts found in dogs exposed to MOCA was found in urothelial cells from a worker known to have been exposed to MOCA, and finally an increased rate of chromosomal abnormalities has been observed in urothelial cells of some people exposed to MOCA.

At that point I stopped re-reading Cranor's paper and started looking into MOCA.

If MOCA really is a human urinary bladder carcinogen and if thousands of people have been exposed to MOCA in their work for many decades why is there no evidence of an increased risk of malignant urinary bladder cancer among them? Cranor claims the reason IARC concluded that there's "inadequate evidence" for MOCA being a human carcinogen is because "there are no epidemiological studies". Are there no such studies? If workers exposed to MOCA develop the same DNA adducts demonstrated in dogs and if four out of five dogs exposed to MOCA develop bladder cancer then where are all the human cases? And what's the story with the dogs?

It turns out there is an epi study of MOCA-exposed workers. The study was initiated in 2001 and its results were published four years ago. Only one death from bladder cancer was identified and it was not known whether the man was a smoker (a leading cause of bladder cancer). There was one bladder cancer registration for a man who had survived his cancer but he was a former smoker. Finally, there was one case of noninvasive, or in situ, bladder carcinoma; that case was excluded from analysis as there is no reference population that has been screened for benign tumors from which a background rate can be generated (take note of this case of a benign tumor, the significance of which no one can say; it will shortly become important). None of the findings allowed the researchers to reject the null hypothesis, i.e. that MOCA doesn't cause bladder cancer in humans.

Then there's "Bladder tumors in two young males occupationally exposed to MBOCA". This study was conceived because "[i]n addition to their chemical similarity MBOCA and benzidine have similar potency to induce bladder tumors in beagle dogs, the species considered to be the best animal model for humans", because 9,000 to 18,000 workers were being exposed to it, because it was not regulated as a carcinogen and because they had a group of 540 workers - workers whose smoking history was known, who hadn't been exposed to other bladder carcinogens and who had been screened for signs of bladder cancer and monitored for exposure since 1968. Why were they screened and monitored? Benzidine and 2-naphthylamine are aromatic amines that long before 1968 were known to consistently produce very high numbers of malignant bladder cancers among exposed workers (with the incidence of malignant bladder cancer reaching 100% in one study of very highly exposed workers) so it was reasonably conjectured that all aromatic amines might cause malignant bladder cancer.

Of the 540 workers none had died of or had symptoms of bladder cancer. However, two workers had been identified for follow up after screening and biopsies on each revealed non-invasive papillary tumors of the bladder. Because, again, there is no reference background population it was impossible to say whether the finding meant the risk of non-malignant asymptomatic tumors among MOCA workers was higher, lower or the same as expected among the unexposed. Nevertheless, after returning to muse about those MOCA-exposed beagles that had developed papillary carcinomas of the bladder the authors concluded that "[t]he detection of the two tumors in young, nonsmoking males is consistent with the hypothesis that MBOCA induces bladder neoplasms in humans."

And that's it for evidence of bladder cancer in humans from studies of humans exposed to MOCA - which is to say that nobody has ever found anyone exposed to MOCA to be at an increased risk of dying from bladder cancer or even at an increased risk of developing clinically apparent bladder cancer. But at least it kills beagles, right?

Before I looked at the animal studies on MOCA I assumed there'd be lots, that they used modern techniques and that they'd been replicated; likely several times. Not so. IARC cited no rodent study done since the 1970s. Its summary of testing listed one study on mice, five for rats and just one for dogs. The mice were fed lots of MOCA for 2/3 of their lives and many developed hemangiosarcomas (mice and dogs get it, you don't and the finding probably isn't relevant to humans in any event) and liver tumors. The rats were fed lots of MOCA in different regimens that varied by dose and protein sufficiency. In one study they got common liver and lung tumors. In the next only liver tumors. In the third lung, liver and even mesotheliomas. Lung, liver and mammary gland tumors were found in the fourth. Plus lung, mammary gland, Zymbal's gland, liver and hemangiosarcoma in the fifth. The rates were often strongly affected by protein insufficiency status. Neither mouse nor rat developed bladder cancer. But those beagles sure did. Well, four beagles did anyway. But that's not what killed them.

In the late 1960s six one-year-old beagles were started on 100 mg of MOCA three days a week. Six weeks later the dosing was increased to five days a week. Another six beagles, which were fed no MOCA, served as controls. Time passed. Man landed on the Moon, Watergate brought down a President, the Vietnam War ended, the PC was launched and the first fiber optic cable laid and yet five of the six MOCA beagles carried on (one having died along the way of an unrelated infection). Eventually, as the dogs approached their second decade of life, urinalysis suggested possible neoplasia so one was killed and dissected. It was healthy (other than the usual ailments associated with being 65 dog-years old) but it did have a papillary bladder cancer - one that had neither metastasized nor invaded surrounding tissues. Eight months later, having enjoyed, it is hoped, seventy happy dog-years of life, the remaining four beagles, undaunted and asymptomatic to the last, were also killed and dissected. Three of the four had non-invasive, non-malignant bladder cancer. None of the controls, which met a similar fate, had non-invasive, non-malignant bladder cancer.

And that's it. Five dogs in a single study (never replicated) that began more than 45 years ago and ended before PCR, etc. were fed MOCA for a lifetime and never noticed, and perhaps never would have noticed, any effect but for having been "sacrificed" once their biggest mortality risk became old age.

OK. Four of five dogs fed MOCA in a study done long ago developed a non-malignant, non-invasive bladder cancer whereas none of the unexposed dogs developed the condition. Two humans out of 500+ exposed developed the same non-malignant, non-invasive disease and in a different study one human exposed to MOCA had a DNA-adduct like that of an exposed dog. Setting aside the growing skepticism about the usefulness of animal studies let's assume this decades-old study proves that exposure to MOCA causes non-invasive and non-malignant bladder cancer. So what to make of it?

First there's the issue of screening bias. You need look no further than the Japanese and Canadian childhood neuroblastoma screening studies to understand that lots of people, including children, get otherwise deadly cancers that simply go away on their own and that screening in such cases merely increases morbidity without decreasing mortality.

Second there's the whole rationale for labeling MOCA a human carcinogen because its metabolites look like those of benzidine (which does cause malignant bladder cancer). So it walks like a duck and quacks like a duck. It still doesn't cause malignant bladder cancer. Shouldn't that give IARC pause? If you decide MOCA's a carcinogen because metabolism turns it into the same things that benzidine gets turned into shouldn't you be scratching your head when it doesn't cause malignant bladder cancer in mice, rats, dogs or humans? Isn't a little humility in order?

Finally, what do you do with such a causal claim in a toxic tort case? If you don't know how many people get non-malignant, non-invasive bladder cancer how do you know whether MOCA increases, decreases or has no impact on the risk of contracting it? In other words, if you don't know what the background rate for non-malignant, non-invasive bladder cancer is, and you don't know by how much MOCA increases the risk, how can you ever say it's more likely than not a cause of any bladder cancer, much less a particular plaintiff's?
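The dependence on the background rate can be made explicit with a little arithmetic. The standard probability-of-causation estimate is PC = (RR - 1) / RR, and the relative risk RR is the rate among the exposed divided by the background rate; the rates below are illustrative placeholders, not real bladder-cancer data:

```python
# A minimal sketch of why the background rate is indispensable. The usual
# probability-of-causation estimate is PC = (RR - 1) / RR, where RR is the
# exposed rate divided by the background rate. The rates are illustrative
# placeholders, not real data.
def relative_risk(exposed_rate: float, background_rate: float) -> float:
    return exposed_rate / background_rate

def probability_of_causation(rr: float) -> float:
    return max(0.0, (rr - 1.0) / rr)

rr = relative_risk(exposed_rate=0.003, background_rate=0.001)  # RR of 3
print(probability_of_causation(rr))  # ~0.667, clearing "more likely than not"
```

"More likely than not" means PC > 0.5, which requires RR > 2 - and with no background rate, RR (and hence PC) simply cannot be computed, no matter how confident the expert sounds.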

You can't, and that's why the Milward plaintiffs lost when they got back to the trial court. They could make no argument other than to conflate risk with causation. Shorn of the fallacious assertion that any risk is post hoc necessarily causative they couldn't say why benzene was more likely than not the cause of decedent's acute promyelocytic leukemia. They simply couldn't make a coherent argument as to why one putative cause among many was the most likely in the absence of any evidence about which cause was the most common.

In the end Milward serves only as a bad example. An example of what happens, of the time and money wasted, when the law tries to outrun science.


The West Virginia Supreme Court of Appeals Doesn't Get The Scientific Method

Milward v. Acuity has spawned another troubling anti-science opinion: Harris v. CSX Transportation, Inc. Whereas Milward held that credentialed wise men should be allowed to testify that an effect that has never been observed (indeed one that could not be detected by any analytical method known to man) actually exists, Harris holds that such seers may further testify that an effect that would be observable (if it existed) and which has been repeatedly sought in the wake of its putative cause yet invariably not observed actually exists nonetheless.

How are plaintiffs pulling it off? By convincing some judges that testing, the essence of the scientific method, need not be done in laboratories and need not be independently reproducible. These courts have decided that biomedical discoveries can reliably be made, at least in the courtroom, merely by having an expert witness test in his mind a suggested causal association by running it through whatever causal criteria he thinks appropriate and weighing the resulting evidence according to his subjective judgment. Really, it's that bad. Two courts have now granted paid-for hypotheses a status equal to or higher than (depending on the jury's verdict) that of scientific knowledge (conjectures that have been severely and repeatedly tested and have repeatedly passed the tests). Now, we could point out that hypotheses generated by the process endorsed by these courts pan out perhaps 1 time in 100 - i.e. the method's rate of error is 99% - and so flunk a key Daubert factor, but that ignores the real ugliness here - an attack on the scientific method itself.

It has been said that the point of the scientific method is to let Nature speak for herself. By observing, measuring and recording scientists listen to her. By generating hypotheses about the order in which the observations occurred they attempt to make sense of what she's saying. By testing their hypotheses (i.e. by attempting to reproduce the observed effect in a controlled setting) scientists ask if they've misunderstood her. By publishing their results scientists communicate what they've learned and invite others to try to reproduce and build upon it. This method of objectively assessing a phenomenon, guessing at what it implies about how the world works, testing that guess and then reporting the results along with the method, materials and measurements involved ushered in the world we know today. It also dislodged those who in the past had sought to speak for Nature; those whose power and place had been derived from their ability to explain the world by way of plausible and compelling stories that served some narrative. They were dislodged first because the scientific method proved a better lodestone and second because the method, once applied to ourselves, revealed human intuition and judgment to be woefully prone to bias, fear, superstition and prejudice.  

Luckily for the would-be oracles who made their living as expert witnesses, it took a long time and extreme abuse of the word "scientific" before the law came to terms with the scientific method. Finally, Daubert accepted the central tenet of the scientific method - i.e. that to be scientific a theory must be testable - and thus necessarily accepted that the law could not lead science, as it was obviously unequipped and ill-suited for testing theories. The law would have to follow science. Other opinions refined the arrangement until we got to where we are today (at least where we are in Texas). Now an expert's opinion must be founded on scientific knowledge and may not reach beyond what can reasonably be inferred from it (i.e. the analytical gap between what is known and what follows from that knowledge doesn't require much of a leap - it's really just a matter of deduction). A case that's kept us busy the last month provides an example.

The plaintiffs' decedent had died of acute myelogenous leukemia (AML) and his family blamed benzene exposure. The battle was fought not over whether benzene can cause AML (though there are some interesting arguments to be made on the subject) but rather over whether plaintiff was exposed to enough and whether the risk posed by the exposure was considerable. The experts did have some leeway on issues like retrospective exposure estimation and whether the latency period was too long, as on both sides there were scientific studies demonstrating the effect in question. Yet in the main the experts' opinions overlapped, differing only according to the testimony of the fact witnesses on which their side relied. The jury thus was to decide which of two competing pictures of plaintiff's workplace made the most sense, and not whether benzene causes AML. Surely that's the sort of case for which trial by jury was designed.

However, many still chafe against Nature's tyranny and argue for the old ways; for human judgment unconstrained by measurement, testing and thus the embarrassing possibility (likelihood, actually) of having their beliefs publicly refuted. So some argue that Nature is too coy and that she refuses to reveal what they're sure must be true. Others just don't like what she has to say. And of course there's the whole financial angle given that a lot more lawsuits could be filed and won if Nature could be made to speak on command or if the subjective judgment of experts could be re-elevated to the status of pronouncements by Nature.

So what to do? One solution is to adopt the "if you can't beat 'em, join 'em" motto and bank on the truism that "if you can't find a way to generate a statistically significant association between an effect and what you suspect is its cause then you're too stupid to be a scientist." But that plan first ran afoul of the courts when it was recognized that improper epidemiological methodology had been employed to generate the results (see e.g. Merrell Dow v. Havner), and more recently it has become evident that there's a crisis in the biomedical sciences: many if not most statistically significant results cannot be reproduced, because many and probably most reported findings involving small effects (generally an increased risk of 3-fold or less) are false.

What to do, what to do? You need a method a court will say is valid and you need a test that can't be mathematically demonstrated to generally produce bad results and that also can't be run by someone else (lest she falsify your theory and ruin all the fun). What about equating a decision-theory process like weighing the evidence or applying the so-called A. Bradford Hill "criteria" to, say, significance testing of statistical inferences (e.g. epidemiology) or to fluorescent labeling of macromolecules for quantitative analysis of biochemical reactions? Now you're on to something! Because the weights assigned to bits of scientific evidence are necessarily matters of judgment, experts can now "test" their own theories by weighing the evidence in the scales of their own judgment. And any theory that passes this "test" gets to be called "scientific knowledge" and, best of all, can never be refuted. A jury can then decide which of two competing pictures of, say, the anthrax disease propagation process (e.g. miasma vs. germ theory) is the correct one. Robert Koch would be appalled, but the Harris court bought it.

The decedent in Harris worked for a railroad and claimed his multiple myeloma (MM) had been caused by exposure to diesel fumes. The problem was that every epidemiological study of railroad workers, i.e. every known test of the potential relationship between working for a railroad and MM, failed to show that MM was associated with railroad work. In fact, every study designed specifically to test the theory that MM follows diesel exhaust exposure by railroad workers has failed to demonstrate an association, much less causation. Plaintiff tried to reframe the question by saying there's benzene in diesel exhaust smoke and that benzene has been associated with MM, but the problem was that there's benzene in cigarette smoke too - far more, in fact, than in diesel smoke - and yet MM risk is not increased by cigarette smoking. Plaintiff then re-reframed the question by arguing that some molecules found in diesel exhaust had been associated with cancer (lung) and, "oh, by the way," some of the chromosomal changes found in Mr. Harris' pathology were sometimes seen in people (with a different disease) exposed to benzene. In sum, there was no evidence drawn from observations of the world, i.e. from the scientific method, to demonstrate that diesel exhaust was a cause of MM in railroad workers; and the trial court excluded the experts' opinions.

On appeal the West Virginia Supreme Court of Appeals latched onto the following quote from Milward which I'll break into its three component sentences:

1) "The fact that the role of judgment in the weight of the evidence approach is more readily apparent than it is in other methodologies does not mean that the approach is any less scientific."

This is where the need for independently verifiable testing is deleted from the scientific method.

2) "No matter what methodology is used, an evaluation of data and scientific evidence to determine whether an inference of causation is appropriate requires judgment and interpretation."

This is where the requirement that a theory have passed a serious test - i.e. that the effect has actually been observed to follow the putative cause in an experiment or retrospective epidemiological study - is eliminated as a condition of the theory's constituting "scientific knowledge."

3) "The use of judgment in the weight of the evidence methodology is similar to that in differential diagnosis, which we have repeatedly found to be a reliable method of medical diagnosis."

This is the punch line. A method for ruling out known diseases to infer the one from which a patient is actually suffering is transformed into a way to rule in, by human judgment, heretofore unknown causes of that disease without any objective evidence that, to paraphrase Hume, whenever the putative cause occurred the effect has routinely been observed to follow.

Given the foregoing as the court's (mis)understanding of the scientific method it should come as no surprise that it concluded "the experts in the instant case did not offer new or novel methodologies. The epidemiological, toxicological, weight of the evidence and Bradford Hill methodologies they used are recognized and highly respected in the scientific community." The effort to conflate statistical hypothesis testing and pharmacokinetic assays with subjective human judgment was complete and the trial court's ruling was reversed.

So now, in West Virginia, it's enough for an expert to say in response to the question: Has your hypothesis been tested? "Yes, I have weighed the data that gave rise to the hunch in my brain pan and I can now report that it convincingly passed that test and may reliably be considered 'scientific knowledge'". Ugh.

Painting By Numbers

It's hard to argue with the decision of the U.S. Court of Appeals for the Seventh Circuit in Joann Schultz v. Akzo Nobel Paints, LLC, a benzene/AML (acute myelogenous leukemia) wrongful death claim filed by the wife of a painter. The opinion frames the question before the court as follows: Is the fact that plaintiff's oncology expert holds to the linear no-threshold model for carcinogens a sufficient reason under Daubert to exclude his opinion that plaintiff's estimated high benzene exposure was causative given published studies demonstrating an eleven-fold risk of AML among those similarly exposed? The obviously correct answer is "no" and so the court held. (Behold the power of framing!)

We haven't read any of the briefing and so don't know if the question as framed was really what the fight was all about (though we certainly hope there was more to it than just that), but we did read the entire opinion and noted a few things you might find of interest. One involves the fact that plaintiffs are finally getting why dose matters (and helps); another is the opportunity that arises when dose is calculated. The last is a lament about the common problem of conflating observational epi studies with the scientific method and about casual causal inference via "differential diagnosis", plus a pointer to a new tool you might find helpful.

Unlike most toxic tort plaintiffs Schultz had her expert perform a sophisticated dose reconstruction. From it the expert generated a cumulative dose estimate which was then compared to risk data drawn from epi studies plaintiff thought she could defend as scientifically sound. The comparison yielded a large increase in her deceased husband's risk of AML attributable to his benzene exposure. Now that's the way to do it. Better yet, it was essential to getting the summary judgment rendered against her reversed. How did she do it?

According to the appellate court Plaintiff reconstructed the decedent's benzene exposure "using Monte Carlo Analysis, a risk assessment model that accounts for variability and uncertainty in risk factors such as the likely variation in [decedent's] exposure to benzene during different periods and at different plants." The court then proceeded to write "[t]he U.S. Environmental Protection Agency (EPA) has endorsed this methodology as a reliable way to evaluate risk arising from environmental exposure." Sound good?

For the unwary it sounds as though Monte Carlo Analysis(/simulation/etc) is (a) some sort of mathematical equation that (b) generates reliable (and therefore presumed to be admissible) risk estimates while (c) accounting (somehow) for missing data, and it comes with (d) Uncle Sam's seal of approval. Unfortunately, it isn't so.

Rather than going into detail about Monte Carlo simulations and their promise and limitations let's briefly discuss what makes them invaluable when it comes to cross examining the other side's expert. It turns out that it's not some sort of mathematically deduced equation that "accounts for variability and uncertainty" in these Monte Carlo exercises - it's the expert who picks which variables matter along with the formula that expresses the pattern wherein the variables are expected to be found. In other words, the expert using a Monte Carlo method has translated her opinions into a mathematical language. So whereas an expert's deposition fought only in English typically yields dissembling about methods and sharp advocacy about results, a translation of the mathematical language of her model reveals her tactics and much of the other side's strategy. Words like "high" and "low" become e.g. 2 ppm and 0.01 ppm, "increasing" and "decreasing" become calculable slopes, and "it varied throughout his shift" becomes e.g. a power law - each suddenly vulnerable to an informed cross examination.
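To see just how much of the modeler ends up inside the model, here's a minimal sketch of a Monte Carlo dose reconstruction (every distribution, parameter and job period below is hypothetical, invented by us for illustration - nothing here comes from the Schultz record):

```python
import math
import random

# Hypothetical Monte Carlo dose reconstruction. Every distribution,
# mean and duration below is an assumption the modeler must choose
# and defend -- the simulation only propagates those choices.

random.seed(1)  # fixed seed so the exercise is reproducible

# (geometric mean ppm, geometric std dev, years) per job period -- invented values
job_periods = [
    (0.5, 2.5, 10),   # e.g. "plant A, drum filling"
    (0.05, 3.0, 15),  # e.g. "plant B, maintenance"
]

def simulate_cumulative_dose() -> float:
    """One draw of cumulative exposure, in ppm-years."""
    total = 0.0
    for gm, gsd, years in job_periods:
        # The lognormal is the modeler's asserted pattern of day-to-day variability.
        total += random.lognormvariate(math.log(gm), math.log(gsd)) * years
    return total

draws = sorted(simulate_cumulative_dose() for _ in range(10_000))
median = draws[len(draws) // 2]
p95 = draws[int(len(draws) * 0.95)]
print(f"median ~{median:.1f} ppm-yrs, 95th percentile ~{p95:.1f} ppm-yrs")
```

Note that the lognormal form, the geometric means and the durations are all inputs chosen by the expert; the simulation merely propagates them - which is precisely what makes the translated model such fertile ground for cross examination.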

Yet while we cheered the use of (potentially) explicit and transparent models (we don't know if plaintiff's estimator divulged his spreadsheets and formulae) for estimating dose we groaned at an embarrassingly shabby standard for causal inference.

First there's the method: the differential diagnosis. Since plaintiff's expert ruled-in most of, or at least the important, "risk factors" for AML and then ruled-out everything besides benzene, his opinion that benzene caused decedent's AML was deemed admissible. Ugh. Saying something is a risk factor is not the same thing as saying that it's a cause. That so-called risk factor epidemiology isn't science, and isn't even likely to turn up real risks, has been known for some time. Then there's the category error of ruling in to, and out of, the set of causes things that aren't even causes. Next there's the "best of a bad lot" problem. If you don't have all the real causes ruled-in, your diagnosis is iffy at best. If you don't have the most likely cause ruled-in, then all you're likely doing is picking the least wrong cause. Since the cause of the vast majority of AML cases is unknown, and as there was nothing to distinguish Schultz's AML from the thousands that arise spontaneously every year, plaintiff's expert's failure to rule-out "whatever it is that causes 90% of all AMLs" should have been fatal to his differential diagnosis. (Note: it wouldn't, however, be fatal to plaintiff's case assuming the admissibility of her claim of an eleven-fold relative risk given decedent's estimated dose.)

Second, there's the claim that because plaintiff's and defendant's epi studies share "the identical methodology - observing AML rates in populations exposed to benzene over time", "Rule 702 did not require, or even permit, the district court to choose between those two studies at the gatekeeping stage." The court would do well to read "The Scandal of Poor Epidemiological Research: Reporting guidelines are needed for observational epidemiology" before pronouncing such a rule. As one of the FDA EMDAC committee members said during the recent Avandia meeting, when it comes to evidence "observational studies are at the bottom of the totem pole." Courts should keep in mind that small effects detected in such studies, even though, and perhaps especially when, statistically significant (i.e. reporting a low p-value), are likely to be false - and that goes for the ones cited by defendants as well as plaintiffs.

If you're still harboring doubts about whether or not there really is an unfolding scandal involving observational epidemiology read the editor's choice, "Improving Observational Epidemiology" in the current edition of International Journal of Epidemiology. If you don't have free access to the entire paper this should encourage you to pay the price for it:

"The ability of observational datasets to generate spurious associations of non-genetic exposures and outcomes is extremely high, reflecting the correlated nature of many variables, and the temptation to publish such findings must rise as the P-values for the associations get smaller. The forces involved - the imperative to publish for a successful research career, the journal publishers' and editors' desire to publish material that gets cited to increase their profiles and the isolation of many epidemiologists working on small,  often non-contributory, studies - are strong. Perhaps epidemiology needs to re-define its training and knowledge base and build in subsequent accreditation routes to promote better standards of epidemiological professional practice. Very few epidemiology departments impose the discipline of laboratory daily log books in which every experiment and analysis is recorded to provide some verification of what was a priori and what was post hoc. Academics involved in 'handle-turning circular research', highlighted nearly a century ago by Paul de Kruif, and commented upon recently in this journal, really do need to find alternative pursuits."

Finally, if you're interested in uncovering abusive practices in observational epidemiology add p-hacking to your vocabulary (you know your meme has gone big time when it's on Urban Dictionary) and "P-Curve: A Key to the File Drawer" to your arsenal. A number of statisticians, alarmed at the realization that the tools of their trade have been used to gin up spurious science on an industrial scale, have developed new tools to detect it. Plaintiffs are turning such tools on drug studies and company-financed employee mortality studies. Meanwhile, in one case of which we're aware, defendants are using them with effect on a whole series of studies resting on nothing more than the curious coincidence that the reported p-values all fell between 0.044 and 0.05. Go figure. Literally.
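The intuition behind such tools can be sketched in a few lines (this is a toy screen of our own devising, with invented p-values, not the published P-Curve procedure): under the null hypothesis p-values are uniformly distributed, so of the results falling below 0.05 only about 12% should land in the narrow band between 0.044 and 0.05, and a genuine effect pushes p-values toward zero, yielding even fewer. Finding nearly all of a study series in that band is therefore a red flag.

```python
# Toy p-hacking screen. Under the null, p-values conditional on p < 0.05
# are uniform on [0, 0.05), so only (0.05 - 0.044) / 0.05 = 12% of
# significant results should fall in [0.044, 0.05). Real effects yield fewer.

def fraction_in_band(p_values, lo=0.044, hi=0.05):
    """Share of significant p-values sitting just under the 0.05 line."""
    significant = [p for p in p_values if p < 0.05]
    if not significant:
        return 0.0
    return sum(lo <= p < hi for p in significant) / len(significant)

# Invented examples: genuine effects pile up near zero...
plausible = [0.001, 0.003, 0.01, 0.02, 0.046]
# ...while a dredged series clusters just under the line.
suspicious = [0.044, 0.045, 0.047, 0.048, 0.049]

print(fraction_in_band(plausible))    # 0.2
print(fraction_in_band(suspicious))   # 1.0
```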

Squeak Squeak

In the run up to the trial of a case in which we're arguing that the B6C3F1 mouse ain't a man and 1,3 butadiene ain't a human carcinogen just because it causes cancer in the B6C3F1 mouse, out comes "Mice Fall Short as Test Subjects for Humans’ Deadly Ills" by Gina Kolata of the NYTimes. And it's a bombshell. Kolata reports on the paper "Genomic responses in mouse models poorly mimic human inflammatory diseases" and its central finding that immune responses in the mouse, including those related to heart disease and cancer, are no more closely correlated with human responses to the same stimuli than the roll of a pair of dice. It's the long-sought explanation as to why e.g.

"every one of nearly 150 drugs tested at a huge expense in patients with sepsis has failed. The drug tests all were based on studies in mice. And mice, it turns out, can have something that looks like sepsis in humans, but is very different from the condition in humans."

Good stuff, though not all that surprising if you've been following the sad tale of the development of drugs that cure cancer in mice yet have no effect in humans. And it doesn't mean that all scientific studies done on mice are worthless. Far from it. The ability to produce for example so-called knockout mice, rodents lacking a particular gene required to make a particular protein, allows an otherwise forbidden glimpse into the workings of the tiny chemical factories that we call cells. Nevertheless, the study does shatter the assumption that those little factories in mice run just like their counterparts in humans.

However, that's not the end of the story. If you read the whole thing you'll find this:

The study’s investigators tried for more than a year to publish their paper, which showed that there was no relationship between the genetic responses of mice and those of humans. They submitted it to the publications Science and Nature, hoping to reach a wide audience. It was rejected from both.

... reviewers did not point out scientific errors. Instead, [one of the authors] said, “the most common response was, ‘It has to be wrong. I don’t know why it is wrong, but it has to be wrong’ ”

which leads to our final point. Daubert's peer review factor was intended to serve as an independent indicator of reliability. The Court assumed that disinterested scientists on the lookout for bad science served as gatekeepers of the journals through which "scientific knowledge" was disseminated. Perhaps when there were far fewer journals and far fewer academics desperate to be published peer reviewers served such a function. Nowadays they too often serve the status quo - barring from publication the sort of disruptive findings that would discomfit the guild they serve. Thus, if we're not careful, Daubert risks being effectively transmuted, at least in part, into Frye - i.e. a test of general acceptance rather than a test of sound science.



Be Careful What You Wish For When You Wish For A Standardless Standard Like Lohrmann

As promised, we're weighing in on Holcomb v. Georgia Pacific, et al. - the most recent effort by a court, this time Nevada's supreme court, to paint a fig leaf over the judicial embarrassment that is modern asbestos litigation.

To recap, by 1989 (twenty years after Clarence Borel filed the complaint that launched the mother of all mass torts) the litigation appeared to be winding down. The personal injury (and school abatement) cases had bankrupted most of the companies that once had constituted the lion's share of the asbestos industry. A little over a decade later Pittsburgh-Corning, and soon thereafter Owens-Corning, sought bankruptcy protection. With that, market share-wise, the vast majority of the American "asbestos industry" had been put out of its misery. The remaining defendants (with the exception of Owens-Illinois which serendipitously exited the business back in 1958) were each responsible for a microscopic share of the asbestos used in the United States. Surely the end was near. Instead, because courts tended to create special causation rules in asbestos cases which conflated risk with causation and because those same courts assiduously avoided the question of whether some risks were so small that liability could not fairly be predicated on them, the litigation continued unabated.

Drowning defendants, desperate for anything that might help against the "every asbestos fiber poses risk, mesothelioma is the actualization of risk, therefore plaintiff's meso was caused by every fiber" sophistry that goes on down at the courthouse, often grab for Lohrmann v. Pittsburgh Corning. It's an '80s-era asbestosis case holding, essentially, that two to three weeks of work cutting/applying Unibestos wasn't enough to impose liability on Pittsburgh Corning (though things went less well for the other defendants). There was no discussion of dose, nor of the relative potency of amosite nor of the risk posed by each of the defendants' products. Instead, Lohrmann accepted proof of frequent, regular and proximate exposure to a defendant's asbestos-containing product as a proxy for a quantitative assessment of exposure and thereby risk or whatever other consideration drove the court's approximation of the line between de minimis and non-de minimis exposures. But how frequent is frequent? How regular is regular? How proximate is proximate?

"It's better than nothing", one assumes the Holcomb defendants thought when they asked the Nevada Supreme Court to adopt the so-called Lohrmann standard. So how did it work out? Would plaintiff's testimony that exposures were "numerous" be sufficiently regular and frequent? Would working "around" a joint-compound user proximate enough?

According to the Nevada Supreme Court it's enough. And thus the problem with Lohrmann.

Today there's an extensive peer-reviewed literature demonstrating the typical distribution of exposures resulting from almost every conceivable use of asbestos, and a well refined risk model for asbestos-induced mesothelioma that lets anyone estimate the risk associated with the exposures described by a plaintiff and his co-workers. Coming up with a plausible range of exposure is neither too hard nor too expensive. Down here in Texas plaintiffs need someone to testify to a supportable dose range, and a review of their experts' bills reveals a typical cost of about $2500 - a fraction of the amount they spend on their experts who testify about the history of the use and recognition of the hazards of asbestos exposure.
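For a sense of what a supportable dose range buys you, linear cumulative-exposure risk models of the kind used for asbestos take roughly the following shape (the potency coefficient below is a placeholder we made up for illustration, not a number from the literature; real coefficients are specific to fiber type, disease and model):

```python
# Sketch of a linear cumulative-exposure risk model of the kind used for
# asbestos. K is a hypothetical potency coefficient, NOT a literature value;
# real coefficients are specific to fiber type, disease and model form.

K = 1.0e-8  # hypothetical excess risk per fiber/cc-year

def excess_risk(concentration_f_cc: float, years: float, k: float = K) -> float:
    """Excess lifetime risk from cumulative exposure under a linear model."""
    return k * concentration_f_cc * years

# A plaintiff's plausible exposure range, low and high bounds (invented numbers):
low = excess_risk(0.01, 2)    # brief, low-level work "around" the product
high = excess_risk(0.1, 10)   # sustained, heavier exposure
print(f"estimated excess risk between {low:.1e} and {high:.1e}")
```

The point isn't the particular numbers - it's that once a supportable range is on the table, the risk it implies can be compared to everyday risks rather than left to the jury's imagination.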

So would it be too much of a burden to make plaintiff state a supportable range of his likely exposure? The Nevada Supreme Court thought so - though it didn't say how much it thought such an estimation would cost, nor how much is too much. It also bought the straw man argument that states like Texas which actually ask "about how much" are instead asking the impossible to answer "state precisely how much". From there the court went on to conclude that an inference of causation may reasonably be drawn from expert testimony that infrequent and low level exposures can cause mesothelioma plus evidence that the plaintiff sustained "numerous" instances of working "around" the asbestos-containing product. And sure enough, that's the gist of Lohrmann.

As for why defendants continue to go from court to court demanding the adoption of the Lohrmann standard, that remains a mystery. As for why plaintiffs hate to say "about how much", we already know the answer - compared to the risks posed by some of the products now caught up in the litigation, taking a shower is a death defying feat.

2013 will be the International Year of Statistics. Maybe it'll be the year more judges and lawyers come to appreciate how much better our decision-making can be when we have the courage to demand the data and to accept what it implies.


Foragers, Farmers, Fabricators

The toxic tort litigation directed against the chemical industry has been propelled by more than just the proclivity of the rat Zymbal gland to turn cancerous, the ease with which associations can be generated by dredging epidemiological data, and the scent of deep pockets. Essential to the success of the litigation has been the widely held belief that humankind's attempt to harness nature and to bend it to our will is both morally wrong and unacceptably dangerous, and that Mother Nature will have her revenge. So imagine the enthusiasm of the plaintiffs' bar as it collectively ponders suits against "industrial food" aka Big Food.

Eating, whether a bountiful Thanksgiving dinner or an energy bar before a workout, is rarely just about filling up on chemical fuel and essential nutrients. Instead food comes garnished with a variety of religious, ethical and cultural traditions and, especially, intentions; whether of recognizing blessings or pursuing good health and a long life. Mix in our tendency to perceive that which is manmade as inherently riskier and it's easy to understand why the ad campaigns of food companies no longer suggest that they've fooled Mother Nature.

Yet science marches on and Mother Nature is not only found out but fooled as well. Synthetic butter is small potatoes compared to what's happening these days. Who could have imagined just a few years ago that today we'd be talking about genetically engineering bacteria and then eating them by the millions in order to tame inflammation, to wage wars against pathogenic bacteria in our gut and to ward off cancer? And when it comes to pathogens, who back then could have believed that a particular outbreak of food-borne illness could be quickly traced to the chanterelle sauce served on the fourth day of a conference? And yet now we can.

Thus the convergence of two trends - 1) the ancient worry that the fruit of knowledge, once unleashed, will be beyond our ability to control (especially when it's the staff of life that's being re-engineered as a result of that knowledge), and 2) the ability to identify the source of food-borne illness (including so-called obesogens, diabetogens and the like) - means we're headed for another era in which uncertainty and fear run parallel to rapid discovery and unsettling change; and uncertain times are when the plaintiffs' bar thrives, because courts are often bewildered by the science and prone to admit simple, comforting narratives with easily vilified bad guys and readily lionized good guys. That's what drove the chemical litigation and we suspect that'll be the plan for food litigation.

We plan to cover the litigation, both the science behind it and the law that decides it, via Twitter. Our first tweet: "Don't eat the chanterelle sauce, whatever that is" can be found here.

"Science Is The Belief In The Ignorance Of Experts"

The title is a quote from the speech, "What is Science?" given by Richard Feynman at the fifteenth annual meeting of the National Science Teachers Association in 1966. We thought of it today upon reading that an Italian court has found seven members of the country's National Commission for the Forecast and Prevention of Major Risks guilty of manslaughter. They were sentenced to lengthy prison terms. The crime? Failure to accurately gauge the risk of the L'Aquila earthquake of 2009.

If the convicted experts, leading seismologists and geologists, had been more humble about the limits of their knowledge some among the 309 killed by the quake might have fled when the tremors began. Instead they stayed, reassured by experts who publicly doubted that the tremors were harbingers of the devastating quake that would soon follow.

And if the court had understood that the business of science is to torment experts by seeking to falsify via observation and experiment the very theories that won them their renown it might have taken pity on the defendants. It might therefore also have understood that Nature has the last word and that she has a wicked habit of reminding us just how often even the most rational and widely held scientific beliefs are falsified when finally put to the test. If you need a more recent example, one from the past week, we suggest Gina Kolata's "Diabetes Study Ends Early With a Surprising Result" in the 10/19/2012 edition of The New York Times.


Remember All The Hubbub About Corexit in City of Orange Beach, AL? Well, Nevermind.

Not long after certain groups decided to panic residents of the Gulf Coast about Corexit, a mixture of chemicals used to disperse oil from the Deepwater Horizon/Macondo disaster, worried residents of City of Orange Beach, AL started collecting dirty water from local waterways and a nearby lab quickly confirmed that Corexit had indeed washed up on their shores. Since Corexit is composed of common chemicals it was never clear how the lab decided it had detected the original formulation and the lab owner refused to shed any light on the matter. Nevertheless, people were outraged, press conferences were held and all manner of maladies and misfortunes among the locals were blamed on Corexit.

Fast forward two years.

Researchers at Auburn University examined the data on the water samples that had been collected and then decided to test the hypothesis that the chemicals detected had come from Corexit (rather than some other more mundane source). So they sampled rainwater runoff to see if the same chemicals were in the water that City of Orange Beach was itself discharging into inland and nearshore waters. Sure enough they were; and at the same levels found in the samples collected by locals. The conclusion? "Our assessment indicates that these compounds are unlikely to be present as a result of the use of Corexit dispersants; rather, they are likely related to point and non-point source storm water discharge." Read all about it in "Provenance of Corexit-related chemical constituents found in nearshore and inland Gulf Coast waters".


Is Reasoning Just A Way To Convince Others That The World Is The Way You Want It To Be?

That, to our minds, is the gist of the debate about the reason for reason going on at "The Stone" at The New York Times. We have a different, though perhaps no less cynical, take.

While observing and thereafter interviewing more than a few juries, we've noticed something peculiar. Most people would rather be wrong, but thought right, than right, but thought wrong. Why? Our guess is that leadership, and all the perks that come with it, has for a very long time tended to be bestowed upon whoever made the most accurate predictions about what the future held (e.g. rain or drought) for his or her tribe. And isn't that ultimately what reason does: improve your forecast for tomorrow?

Of course such a faculty would also be an invaluable tool for anyone who would be king. Thus perhaps the fascination with, and fear of, raw human reason. And thus, perhaps, the hesitation of so many courts to yield to it (however error-prone the alternative might be). Food for thought.


Causation is Hard: Multiverse Edition

While we wait to critique Steve Gold's upcoming defense of Milward v. Acuity (the paper is available at SSRN but, alas, it's flagged as a work in progress, neither to be quoted nor cited) we'll have a go at his new paper on causation titled "When Certainty Dissolves Into Probability: A Legal Vision of Toxic Tort Causation for the Post-Genomic Era".

It takes twenty-eight pages to get to the meat of the paper, and upon arrival, despite some serious disputes with his take on causality, we were pleased (for a while at least). That's because Gold seemingly endorses the same sort of approach to causal apportionment that we've been advocating here (and elsewhere) for years. Consider his suggestion:

I propose that courts should adopt an expressly probabilistic view of causation when the dominating evidence comprises population-based data of toxic effect. To frame the standard, an exposure should be considered a cause of disease if it was a contributing factor to the disease's occurrence. To be a contributing factor, an exposure would be shown by a preponderance of the evidence - not limited to any single favored type of evidence - to have added incremental risk that the plaintiff would develop a disease that the plaintiff has, in fact, developed. Damages should be apportioned to that contributing factor in proportion to its contribution to the plaintiff's risk.
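In arithmetic terms Gold's proposal is simple: each factor's share of the damages is its incremental risk divided by the total risk of the disease the plaintiff actually developed. Here is a toy sketch of that calculation; the factors, risk figures and damages are invented for illustration and come from neither Gold's paper nor any real case.

```python
# Toy apportionment in the spirit of Gold's proposal. All numbers are
# hypothetical: baseline risk, the listed factors, and the award.
baseline = 2.0   # background risk (per 100,000) absent any listed factor
increments = {   # incremental risk each factor allegedly added
    "defendant's solvent": 1.0,
    "smoking": 4.0,
    "inherited mutation": 3.0,
}
total = baseline + sum(increments.values())
damages = 1_000_000.0

# Each factor's share of damages equals its contribution to total risk.
shares = {factor: inc / total for factor, inc in increments.items()}
for factor, share in shares.items():
    print(f"{factor:22s} {share:5.1%}  ${damages * share:,.0f}")
print(f"{'background':22s} {baseline / total:5.1%}  (no one pays)")
```

On these invented numbers the defendant's solvent carries 10% of the risk and so would pay 10% of the award, with the lion's share attributed to non-tortious factors.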

Thereafter he even goes on to state that de minimis contributions to risk ought not be actionable. So what's not to like? Well, lots, actually. And it begins with a fundamental misunderstanding about "but for" causation.

Gold begins with the claim that "but for" (i.e. counterfactual) causal reasoning somehow doesn't work in toxic tort cases. He writes, "[p]roof of toxic tort claims conform poorly to the traditional deterministic legal model of but-for causation, because toxic injuries almost never involve an easily observed chain of physical events connecting a particular defendant's conduct with a particular plaintiff's harm." Gold's worry is that biomarkers, sequenced genes and the data from molecular epidemiology have delivered only more causal complexity rather than the allegedly promised "deterministic" certainty necessary for old-fashioned legal reasoning. So if "but for" causal thinking doesn't lead to the right legal solution, what does?

After pondering the conclusion that necessarily follows from the realization that, e.g., cancer is the consequence of genes + epigenetic regulation + socioeconomic status + gender + microbiota + environmental exposures + unknown unknowns + bad luck (i.e. stochastic processes), Gold thinks he's fallen down the rabbit hole and met Schrodinger's cat and it's both alive and dead all at once. Confusing the uncertainty (which is to say ignorance) described by the statistics used in population studies with quantum indeterminacy, Gold declares that the real problem is one of "scientific indeterminacy". By this he seems to mean that because the precise chain of events leading to disease is not only unknown but ultimately unknowable, and because randomness may lie literally at the root of the universe, it is therefore impossible to say that "but for" a particular exposure plaintiff would not have developed her illness.

That's bad enough because we know it's not the wave function that killed Schrodinger's cat. It was the decaying atom; and every time the atom decays the cat must surely die. But it gets worse.

Gold then suggests that the answer is not to live with the uncertainty - i.e. to make our best judgment about whether a gene or exposure or whatever was more likely than not a cause such that the disease would otherwise not have arisen, determine the risk it posed ex ante, and then apportion liability among those non-de minimis tortious risks ex post. Instead he suggests we throw every black box risk factor that passes his lowered Milward v. Acuity test of so-called general causation out for the jury's consideration, bad genes included, and then let the jury apportion liability among all risk factors, tortious and otherwise. He's thus changed the formula from Liability = Duty + Breach + Causation + Harm, to Liability = Duty + Breach + Risk + Any-disease-that-has-ever-been-associated-with-that-risk-even-if-it-probably-wasn't-causative-in-this-case.

In part II tomorrow we'll explain why such a formulation would vastly expand the number of defendants swept up into toxic tort litigation and why his belief that even with a tiny risk "someone somewhere" is ultimately harmed is true only if A) risk factors mean something beyond how likely it is we're wrong about our causal inferences; and, B) there are a whole lot more universes out there besides our own.

Concentrate And Ask Again

You wouldn't think it reasonable to test a scientific hypothesis by consulting the Magic 8-Ball. You can't imagine any scientist asking: "should I reject the null hypothesis (e.g. that coffee doesn't cause pancreatic cancer)?" and then turning the Magic 8-Ball over to discover the answer. That's because every adolescent who ever wondered hopefully about a pretty girl: "does she really like me?" knows that subtle reformulations of the question and repeat inquiries will eventually cause the Magic 8-Ball to reveal the desired answer. Well, unfortunately, it turns out that some scientists learned the lesson of the Magic 8-Ball only too well.

We've complained repeatedly about the problems resulting from the effort to turn the discovery of knowledge into an academic industrial manufacturing process in which the only quality assurance check is the test of statistical significance as gauged by the p-value. Reliance on a low p-value as some sort of modern Oracle is the reason that most such "science" is wrong. We know why low p-values alone offer little assurance that a conjecture has not been refuted but what accounts for all the low p-values? In psych research, at least, it's now pretty clear that it's the ability to sample until you hit that magic p<0.05 level.

Assume scientists wrote up their experiments, set the sample parameters, ran the tests, gathered the data, analyzed the results and published what they found. The p-values calculated across all hypotheses tested ought to be distributed randomly across the range of potential values, right? What would you think if it turned out that there was an unusually large clump of p-values just ever so slightly <0.05? You might think "publication bias!" Or you might think "editors don't understand that low p-values don't have much to say about one of the essential elements for establishing reliability: reproducibility." But what you should really be considering is whether the researcher, thanks to the power of computers and modern stats packages, was watching the data accumulate and then either rationalized halting the experiment once statistical significance was achieved or rationalized collecting more and more data until it was achieved, i.e. until the Magic 8-Ball proclaimed "You may rely on it"; at which point he quit testing and set about writing up his latest scientific discovery. The paper reporting the suspicious clump and the unsettling reasons for it is "A Peculiar Prevalence of p Values Just Below .05". It's well worth your time if you're interested in knowing why so much "science" is so wrong nowadays.
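The mechanics are easy to demonstrate. Below is a minimal simulation (our own illustration, not the paper's): a one-sample z-test run on pure noise, with a peek at the p-value after every ten observations. Every "discovery" is false by construction, yet peeking inflates the false positive rate well beyond the nominal 5%.

```python
import math
import random

def z_test_p(xs):
    # Two-sided one-sample z-test of mean 0, with known sd = 1.
    z = sum(xs) / math.sqrt(len(xs))
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

def run_experiment(peek, max_n=100, step=10, alpha=0.05):
    xs = []
    for _ in range(max_n // step):
        xs.extend(random.gauss(0, 1) for _ in range(step))
        if peek and z_test_p(xs) < alpha:
            return True          # "significant" -- stop and write it up
    return z_test_p(xs) < alpha  # honest single look at the end

random.seed(1)
sims = 5000
peeking = sum(run_experiment(peek=True) for _ in range(sims)) / sims
honest = sum(run_experiment(peek=False) for _ in range(sims)) / sims
print(f"false positive rate, peeking every 10 samples: {peeking:.3f}")
print(f"false positive rate, single look at n=100:     {honest:.3f}")
```

The honest procedure comes in near the advertised 5%; the peeking procedure, given ten chances to stop on a lucky run of noise, produces "significant" findings several times as often.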

In fact, the problem of irreproducible "science" has become so harmful and so widespread that a plan to enlist independent laboratories to attempt to verify the findings of high-profile research before they're published has been implemented. You can read about it at Nature in "Independent labs to verify high-profile papers".

Remember, only you can prevent experts, bearing "science" resting on nothing more than statistical significance, from depriving your client of her life, liberty or hard earned cash.


Gods of the Analytical Gaps

Kuhn v. Wyeth, one of the many hormone replacement therapy (HRT)-induced breast cancer claims unleashed after one arm of the Women's Health Initiative (WHI) revealed that the risks of HRT outweighed any benefits, is another case in which a court has concluded that a reasonable though untested hypothesis is good enough for Daubert purposes.

The WHI was the first randomized controlled trial (RCT) to examine the effects of HRT on breast cancer risk and it remains the most powerful study yet to address the question. Kuhn's problem was that this central pillar of the litigation showed that there was no increased risk of breast cancer for short term (<3 yrs) users like her. Plaintiff's expert thus needed to find a way around the WHI study while preserving its main finding.

With the help of plaintiff's counsel, her expert rummaged through three observational studies and identified data which, when considered together, are indeed suggestive of an increased risk at exposure durations less than three years. However, 1) none of the studies were RCTs; 2) the largest (The Million Women Study) is laden with potential biases and confounders and doesn't even prove that HRT causes breast cancer; 3) the Calle study relies on self-reporting of past use (a notorious source of bias) and anyway didn't even find much of a risk increase until after the third year of use; and, 4) the final study, by Fournier et al., explicitly disclosed that "[w]idespread use of progesterone is a French peculiarity" such that the drug combination being primarily studied wasn't even the one Kuhn blamed for her breast cancer. After a hearing on a motion to exclude, the magistrate judge deemed the testimony unreliable. On appeal the 8th Circuit reversed.

The court held that "[t]aken together, the Calle study and the foreign studies constitute appropriate validation of and good grounds for" plaintiff's expert's opinion. There's no discussion of what constitutes "appropriate validation" of an expert's opinion nor the "good grounds" for holding it but it appears that plausibility was the only test being applied. What the court did was to examine those subsets of data within the three observational studies relied on by the expert, along with his explanations as to how they fit together, and thereafter conclude that the bits of data as framed by the expert do indeed support the theory of an increased risk for short-term users. And it's actually a plausible hypothesis. But is a plausible hypothesis the sort of "scientific knowledge" that satisfies Rule 702? We don't think so.

There's a whole journal dedicated to clever medical hypotheses - each hypothesis typically resting upon a lot more than just bits of three articles. The journal exists to "publish papers which describe theories, ideas which have a great deal of observational support and some hypotheses where experimental support is yet fragmentary" and the ideas submitted for publication are first "reviewed by the Editor and external reviewers to ensure their scientific merit" and reviewers judge the papers based on their plausibility. And even after all that, what gets published isn't "scientific knowledge". What's published are just ideas; potentially big ideas; yet nothing more than interesting ideas, until they're put to a test - a severe test - and survive it. That's science, and only the ideas that survive its testing can claim to be scientific knowledge.

As for the law, in a courtroom in Arkansas an expert witness will be allowed to advance as scientific knowledge a medical hypothesis conceived by a lawyer and fleshed out by the expert in just 5 hours. Scientific knowledge won't be needed to bridge the analytical gap between a tragic case of cancer and the drug combination claimed to be responsible. An untested yet plausible hypothesis, judged according to the persuasiveness of the expert advancing it, will suffice. We call the experts who can pull it off Gods of the Analytical Gaps.



The False Claims Act, Academic Fraud and Statistical Inference

We thought about throwing up a post discussing US ex rel Jones v. Brigham and Women's Hospital and Harvard Medical School last month but couldn't come up with a good tie-in to mass torts, toxic torts, regular-old-torts or any of the science stuff we like to write about. 

US ex rel Jones is a False Claims Act case arising out of an allegation that a researcher "deliberately manipulated [data] in order to achieve statistically significant results". Had the results not been statistically significant the researcher "could not have reported his findings in published scientific journals or to the NIH in support of an application" for the grant made the basis of the FCA claim, said the government's expert. The trial court granted summary judgment for defendants but the 1st Circuit Court of Appeals reversed. Scientific fraud will at last be on trial.

But ok, somebody(ies) cooked the books to get a bunch of NIH grant money. We're shocked. Shocked! that a tool created for gamblers has been used to burnish entreaties to the National Institutes of Health based on probabilistic reasoning. Alas. That's where we left it. Then we read "Fraud-Detection Tool Could Shake Up Psychology".

It discusses a statistical tool that lets its user surmise whether or not a researcher being investigated has been cherry-picking the data and consciously, or perhaps unconsciously, excluding data that undermine his hypothesis. Apparently, by excluding data points that severely conflict with his hypothesis, a dishonest researcher can (surprisingly) reduce variance and thereby increase significance. The problem for the dishonest researcher is that across all the data a histogram of randomly generated p-values is supposed to look like this whereas the dishonest researcher's p-values produce a histogram like this.
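The intuition behind the tool is straightforward: when the null hypothesis is true, p-values are uniformly distributed, so their histogram should be flat; quietly discarding inconvenient observations bends it toward zero. A toy simulation of that bending (our own illustration, not the tool described in the article):

```python
import math
import random

def one_sided_p(xs):
    # One-sided z-test of H1: mean > 0, with known sd = 1.
    z = sum(xs) / math.sqrt(len(xs))
    return 0.5 * math.erfc(z / math.sqrt(2))

def hist(ps, bins=10):
    # Count p-values falling in each of ten equal-width bins on [0, 1].
    counts = [0] * bins
    for p in ps:
        counts[min(int(p * bins), bins - 1)] += 1
    return counts

random.seed(2)
honest, trimmed = [], []
for _ in range(20000):
    xs = [random.gauss(0, 1) for _ in range(20)]  # pure noise: null is true
    honest.append(one_sided_p(xs))
    # "Cherry-picked": quietly drop the three observations that most
    # undermine the hypothesis (the three smallest values).
    trimmed.append(one_sided_p(sorted(xs)[3:]))

print("honest :", hist(honest))   # roughly flat across the ten bins
print("trimmed:", hist(trimmed))  # piled up in the low-p bins
```

The honest histogram is flat, as theory says it must be; the trimmed one is grossly lopsided toward small p-values, which is exactly the kind of fingerprint such a tool looks for.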

There's nothing that limits the tool to empirical psychology, a discipline which has been rocked by repeated episodes of fraud involving researchers bent on generating "science" that confirms the common sense notions of one group or another. These days pretty much every researcher has the computing power necessary to see which data are critical obstacles in the path of "proving" a hypothesis. Thereafter he can generate ad hoc rationalizations for excluding the troublesome data; or just ignore them altogether, or nudge them off the table and into the trash can. Problem solved!

Now we can see who's been solving problems by ignoring (or deleting) problematic data.

So how did we get here, a place where we need such a tool? Why are most hypotheses supported by statistical data probably false; and why is a lot of what's false actually fraudulent? US ex rel Jones manages to let slip the root cause. Rather than confining significance testing to saying something about whether a hypothesis is probably wrong, it is now commonly, and wrongly, thought "to cause proof of a particular scientific hypothesis to emerge from the data." Thus, rather than discovering if, on our endless ordeal of trial and error, we've gone off the rails yet again (which is to say that our pet theory has likely been falsified), many have come to use these tools to establish the claim that their hypotheses have been verified (or, worse yet, proven).

And that gets us to the essence of science. If, per our reading of Karl Popper, all we know is what isn't so, then the way forward to truth is heralded upon saying of new data: "That's funny". Similarly, the claim that our dogmas, preconceived notions, biases or prejudices have been verified by science ought to immediately summon our inner skeptics. Of course, too many are content to be fed a steady diet of seeming affirmations of their own convictions. But whichever side of the falsificationist/verificationist divide you fall on, our guess is that in the years to come the application of statistical tools like the one described above will open your eyes. Many things we thought of as "science" may turn out to have been the result of an elaborate fraud. Let the chips fall where they may.

The Best Causation Opinion of 2012

In fact, it may well be the best causation opinion of the last half century; and we're willing to bet it will be one of the most important causation opinions going forward. Dixon v. Ford Motor Company nails substantial factor causation and in the process proves that there are still plenty of judges willing to think deeply about causal inference, about uncertainty and about the limits of liability in a world of inevitable risk. Ultimately, the court held that risk is the measure of legal causation in those cases where causal inferences are made probabilistically, and therefore when a causation expert opines that a putative cause was a "substantial factor" without saying how much risk it imparted she fails to answer the question she was called to address; and so has nothing helpful to say about the matter.

Dixon is another tragic mesothelioma case in which a defendant, having contributed to the victim's cumulative asbestos exposure something between nothing and next to nothing, was hammered by the jury. It then appealed the judgment, which even after a few judicial deductions exceeded $3 million. As usual the defendant complained that plaintiff's causation expert (this time an epidemiologist) ought not to have been allowed to testify because she had "extrapolated downward" from the known segment of the asbestos/mesothelioma dose/response curve to an area in which hard data was lacking and thus could not reliably say that the defendant's small contribution to the victim's dose was a substantial causative factor. 1) That's not plaintiffs' game these days, as we said in Small Glasses; but more importantly, 2) as the court made plain in Dixon, "substantial factor" in such cases isn't about causation, it's about risk.

"'Substantiality' is a legal concept and not an objective property testable by the scientific method", wrote the Dixon court. The question thus required to get at the essence of substantial factor causation in a dose/response disease case is not "was it a big cause or a little cause?" It's not even "was it really and truly a cause?" The first question is nonsensical and the second is unanswerable. Instead, the question is: "assuming it was a cause, was the risk it imparted prospectively of such a degree as to justly warrant the imposition of liability?" Risk, said the court, is thereby the measure of (legal) causation and proving that the risk imparted was substantial is plaintiff's burden. Showing that defendant imparted a substantial degree of risk, according to the Dixon court, is thereby what is required to bridge the "analytical gap" between "asbestos causes mesothelioma" and "plaintiff's exposure to defendant's asbestos caused her mesothelioma".

Accordingly, the court held that when all plaintiff's causation expert could say about causation was that "every exposure to asbestos is a substantial contributing cause" the only thing she wound up saying about the risk given plaintiff by Ford was that it was "more than nothing." "For obvious reasons an infinitesimal change in risk cannot suffice to maintain a cause of action in tort". Thus, the opinion of plaintiff's expert "merely implied that there was some non-zero probability that [plaintiff] was exposed to asbestos from Ford's product, and that this resulted in some non-zero increase in her risk of contracting mesothelioma. As such, [the conclusion of plaintiff's expert] that the risk and probability of causation was 'substantial' provided the jury with nothing more than her subjective opinion of 'responsibility,' not scientific evidence of causation."

So how does a plaintiff establish a substantial risk? Not by proving the exact dose and thus exact risk incurred. The Dixon court disposed of the straw man argument that requiring plaintiff to show risk (and thereby dose) quantitatively demands an impossible degree of certainty and precision. Such a rule, wrote the court, would obviously be "folly". Rather, all plaintiff is being asked to do is to "estimate exposure and risk with reasonable scientific or medical certainty." Once such an estimate is made the jury can decide whether it is sound and if so whether the risk imparted was substantial - subject to the law's requirement that it be more than de minimis.

That's exactly what we've been arguing since a client was kind enough to let us write an amicus in Borg-Warner v. Flores. Make the plaintiff say what dose, and thereby what risk, was given. Why? The effect of plaintiff's estimate of dose in one of the first post-Borg-Warner cases should suffice for an answer. Using their expert's dose estimate, the calculated risk of death posed by our client's product was demonstrated to have been less than one in six billion; that's equivalent to the risk of death (from cancer caused by radon) imparted by spending just fifteen minutes in a building constructed of brick or stone. Such calculations and comparative risk exercises serve to vividly demonstrate both the de minimis nature of the risks imparted and the absurdity of imposing liability for them.

PLoS Medicine is Publishing An Attack On "Big Food"

A new series in PLoS Medicine says we're going through another epidemiologic transition; this time it's a "nutrition transition", from a simple traditional diet to a highly processed food diet "resulting in a stark and sick irony: one billion people on the planet are hungry while two billion are obese or overweight". Can you guess who gets the blame? Can you guess what playbook they use? Here are some hints:

"In contrast to the actions of Big Tobacco, soda industry CSR initiatives are explicitly and aggressively profit-seeking."  CSR = corporate social responsibility

Neoliberal policies, including the opening of markets to trade and foreign investment, create environments that are conducive to the widespread distribution of unhealthy commodities by multinational firms.

Big Food attains profit by expanding markets to reach more people, increasing people's sense of hunger so that they buy more food, and increasing profit margins through encouraging consumption of products with higher price/cost surpluses.

So are we in the midst of yet another epidemiologic transition? The last one was a bust. It turned out that we never really left the age of infectious diseases. Our bet is that the war on "Big Food" may generate fees but will do little to alleviate either hunger or the obesity epidemic.

The 5th Circuit Knows Why Differential Diagnosis Can't Be Used To Establish "General Causation"

In Johnson v. Arkema the plaintiff alleged that he had developed chemical pneumonitis as a result of exposure to hydrochloric acid (HCl) and an organotin compound (MBTC) during the course of his work in a factory manufacturing glass bottles. A fume hood made by Arkema, purchased by his employer and used to draw off HCl and organotin fumes after the substances are applied to the bottles as part of the manufacturing process, allegedly failed to work properly. The result, Johnson claimed, was that he was exposed to the fumes and subsequently suffered both acute and chronic lung injuries. However, following the magistrate's recommendation, the trial court granted summary judgment in favor of Arkema, determining that plaintiff's experts had failed to reliably opine as to causation.

The causal reasoning by plaintiff's expert here was particularly bad. The fact that some lung irritants cause fibrosis at high levels hardly proves that all lung irritants cause fibrosis at all levels of exposure. Similarly, a study suggesting that extraordinarily high levels of exposure to the substance increased the risk of pulmonary fibrosis in baboons is hardly evidence that a tiny fraction of that exposure over two days caused a particular case of pulmonary fibrosis in a human. What makes the opinion blogworthy though is the takedown of plaintiff's attempt to use differential diagnosis to "rule in" a substance as a cause of pulmonary fibrosis when no scientific study has ever shown the substance to produce the malady in humans.

Holding that "an expert may not rely on a differential diagnosis to circumvent the requirement of general causation" the court concluded:

Thus, before courts can admit an expert's differential diagnosis, which, by its nature, only addresses the issue of specific causation, the expert must first demonstrate that the chemical at issue is actually capable of harming individuals in the general population, thereby satisfying the general causation standard.

That general causation can't reliably be established merely by having a doctor pick the one putative cause that seems most plausible out of a group of untested or unverified putative causes is pretty obvious but some courts continue to think otherwise. Take for instance the recent ruling in the consumer "popcorn lung" case of Watson v. Dillon.

Relying on Hollander v. Sandoz, and thereby on Turner v. Iowa Fire Equipment, Co., the trial court essentially concluded that doctors who treat patients possess some faculty whereby they are able to discern that a thing is capable of causing illness simply by process of elimination. The idea comes from Turner and finds its way into a variety of cases supporting plaintiffs' novel causation claims despite the fact that Turner resulted in the exclusion of junk science - but only because plaintiff's expert did not "scientifically eliminate" (whatever that means) the other possible causes of her injury. Here's the much quoted reasoning:

The first several victims of a new toxic tort should not be barred from having their day in court simply because the medical literature, which will eventually show the connection between the victims' condition and the toxic substance, has not yet been completed. If a properly qualified medical expert performs a reliable differential diagnosis through which, to a reasonable degree of medical certainty, all other possible causes of the victims' condition can be eliminated, leaving only the toxic substance as the cause, a causation opinion based on that differential diagnosis should be admitted.

But on what basis should such an opinion be admitted? Is there anything beyond the assumption that if something is "toxic" at some level then it's probably capable of causing whatever afflictions mankind suffers unless science has ruled it out? Nope. It's thus just another example of argumentum ad ignorantiam. And where did the idea that toxins ought to be presumptively "ruled in" come from? Once again, it's the idea that we are living in "The Age of Degenerative and Man-Made Diseases" - that we're reaping the bitter harvest of modernity. Too bad they didn't have the PhyloChip back then when chemophobia was hatched - a time when it was believed that the microbial world was not only understood but conquered. A lot of time, money and effort might not have been wasted. And who knows how many lives might have been saved?


The Can-Can. How A Texas Plaintiff Found A Way To Causation.

Out of various bits of data and a dollop of professional judgment the plaintiff's expert in a Texas medical malpractice case created a causal narrative that has survived on appeal. See Constancio v. Shannon Medical Center. Here's how.

Plaintiff was admitted to the hospital with a serious MRSA infection resulting in sepsis and eventually died following an episode of respiratory depression that produced ultimately fatal brain damage. Plaintiff's theory was that a combination of drugs had precipitated the decedent's respiratory distress and that a failure to monitor his oxygen saturation level via pulse oximetry caused the staff to miss the resulting hypoxia, which quickly progressed to devastating brain damage.

Drawing from pieces of the medical records, analyses of the potential interactions among the implicated drugs in the medical literature and his experience treating similar patients, plaintiff's expert came up with the following:

"[I]f you intervene early, you can prevent deaths from sepsis."

If you give Phenergan it "can potentiate the effects of all the drugs."

It "can also increase the effects of sedation."

When this happens "you can have low blood pressure"; and

"the heart can stop";

and when that happens, if pulse oximetry is being used,

"the health care provider can increase the oxygen or have other medical interventions."

So what's the problem? Remember the product rule!

Each of those cans is just begging to be explored with a "What are the odds of ...?" line of questioning. Assuming just 4 "cans" out of the 5 above, the expert would have to opine that the odds of each individual event happening were 85% or greater if the chain of can-cans is to produce a "more likely than not" conclusion. Throw in two more "cans" for "increased oxygen can prevent hypoxia in such cases" and "such patients can go on to survive sepsis" and the expert would need to be over 90% on each link of the causal chain or it wouldn't reach the level of "more likely than not" that the decedent would have survived.
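The arithmetic behind those thresholds is just the product rule: a chain of n independent links, each established with probability p, holds with probability p^n. A quick check of the figures above:

```python
# Product rule: the probability that a chain of n independent links all
# hold, when each individual link is established with probability p.
def chain_prob(p, n):
    return p ** n

print(f"4 links at 85%: {chain_prob(0.85, 4):.3f}")  # 0.522 -- just clears 50%
print(f"4 links at 84%: {chain_prob(0.84, 4):.3f}")  # 0.498 -- falls short
print(f"6 links at 90%: {chain_prob(0.90, 6):.3f}")  # 0.531 -- just clears 50%
print(f"6 links at 89%: {chain_prob(0.89, 6):.3f}")  # 0.497 -- falls short
```

So four links at 85% apiece barely clear the "more likely than not" bar, and six links need better than roughly 90% apiece; every additional "can" raises the hurdle for all the others.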

Now, that having been said, it ought not be up to Plaintiff to test her causal chain once she's created a plausible narrative. That's Defendant's job. And that's why plaintiff prevailed on appeal.


Small Glasses

The Betz v. Pneumo Abex commentary has generally sounded the following themes:

(a) The "novelty" threshold requirement for a Frye hearing has a broader meaning than previously thought. Henceforth, "a Frye hearing is warranted when a trial judge has articulable grounds to believe that an expert witness has not applied accepted scientific methodology in a conventional fashion in reaching his or her conclusion."

(b) The trial judge, who "was unable to discern a coherent methodology supporting the notion that every single fiber from among, potentially, millions is substantially causative of disease", did not abuse his discretion by excluding the "every fiber" causal theory of Dr. Maddox.

(c) Maddox' opinion that "every fiber was a substantial factor in causing plaintiff's mesothelioma" is riven by an "irreconcilable conflict with itself" because "one cannot simultaneously maintain that a single fiber among millions is substantially causative, while also conceding that a disease is dose responsive."

(d) Maddox reached his conclusion about causation not by his claimed "series of 'small bridges'" but rather by improperly extrapolating from known portions of the asbestos/mesothelioma dose-response curve to find causation at much lower levels - levels never demonstrated by epidemiological studies to be associated with the disease.

We're glad, of course, that the court found that novelty doesn't wear off expert opinions (i.e. they don't become immune to scrutiny under Frye) just because they've been peddled to many juries for many years. And we're similarly pleased to see shot down the untestable/unverifiable/unfalsifiable claim that a single fiber not only might have caused, but was in fact a substantial cause of, a given case of mesothelioma. But we're afraid the court missed the subtle game Maddox was really playing.

He wasn't extrapolating down. He wasn't playing the one-hit, linear no-threshold, conflate causation with risk game (though he does in other jurisdictions). And his causal model can account for both a single fiber being a necessary cause of a given plaintiff's mesothelioma and a correlation between dose and disease without any tension. Best of all, it's perfect for exploiting jurisdictions with naive liability attribution schemes such as Pennsylvania's at the time Maddox's opinion was rendered (Pennsylvania finally overhauled its joint and several liability scheme in 2011).

To understand what's up, think about the following opinions of Maddox:

1) "it is the total and cumulative exposure that should be considered for causation purposes."

2) "Cumulative exposure, on a probability basis, should thus be considered the main criterion for the attribution of a substantial contribution by asbestos to lung cancer risk."


3) "[T]he more common analogy that has been used is the example of a glass of water. One drops marbles into the glass of water until the water finally overflows from the glass. Is it the first marble or the last marble that causes the glass to overflow? Well, both, all of them. The marbles cause the glass to overflow. That's a cumulative effect."

So it's clearly not a one-hit model that he's proposing; and for good reason. As more and more courts have become savvy about risk it's been harder and harder for plaintiffs to survive the objection that a de minimis exposure presents a de minimis risk and thus cannot by necessity be a "substantial factor". That's what the Pennsylvania Supreme Court was getting at in Betz when it wondered how "if all Dr. Maddox could say is that a risk attaches to a single asbestos fiber - that he could also say that such risk is substantial when the test plaintiffs may have been (and likely were) exposed to millions of other fibers from other sources including background exposure."

The point is made explicitly a few pages later when the court refers back to its discussion in Gregg about the plaintiff's proof problem in such cases:

"... we do not believe that it is a viable solution to indulge in a fiction that each and every exposure to asbestos, no matter how minimal in relation to other exposures, implicates a fact issue concerning substantial-factor causation in every "direct-evidence" case. The result, in our view, is to subject defendants to full joint-and-several liability for injuries and fatalities in the absence of any reasonably developed scientific reasoning that would support the conclusion that the product sold by the defendant was a substantial factor in causing the harm."

So what sort of model makes each asbestos fiber a substantial factor in causing mesothelioma and yet is consistent with a dose-response relationship between asbestos and mesothelioma? It's something we're seeing more and more often. It's the idea that each person has, to apply Maddox' metaphor from above, her or his own unique water glass of a defense mechanism. Some people's defense capacities are bigger and some are smaller and some are microscopic. But whatever the size, once they're overwhelmed mesothelioma or leukemia or whatever ensues. Thus, when Maddox says that every fiber to which plaintiff was exposed contributed to his risk and "once an individual develops a mesothelioma, the risk becomes the cause" what he's really saying is that though some marbles dropped in the glass might be bigger than others (e.g. crocidolite > amosite > chrysotile) and some might have been added with thimbles while others with coal shovels, the only thing that really matters is whether or not the body's defenses were breached. Once they are, the glass overflows, the mesothelioma develops, and every last fiber was literally a necessary "but for" cause of the disease.

So what does the dose-response curve for asbestos and mesothelioma actually represent in such a model? It describes the outcomes of varying abilities to withstand asbestos across varying levels of exposure to asbestos. A lower defense capacity (small glass) requires less asbestos to trigger mesothelioma; higher defenses require more, etc. Think of the data points on the curve then as small glasses that have overflowed. What such a model suggests is a human population that mostly has large glasses (which fits the observation that even the most heavily exposed workers have less than a 1 in 10 chance of developing mesothelioma); a small portion has small glasses; and a tiny portion has smaller glasses still - all the way down until you get to those rare but unlucky souls who have no defense whatsoever. For them the background exposure dose doesn't increase their risk since it's already 1.
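To see how the "small glasses" model can reproduce a dose-response curve without any fiber-level threshold, here's a toy Python simulation. Everything in it - the lognormal distribution of defense capacities, the dose levels - is an invented illustration of the model's logic, not an estimate from any study:

```python
import random

# Toy simulation of the "small glasses" model: each person has an
# individual defense threshold (glass size); disease follows if and
# only if cumulative dose overflows that glass.  All parameters are
# illustrative inventions.

random.seed(1)

def simulate_incidence(dose, n=100_000):
    """Fraction of a simulated population whose defense threshold
    ('glass size') is exceeded by the given cumulative dose."""
    sick = 0
    for _ in range(n):
        threshold = random.lognormvariate(3.0, 1.5)  # individual glass size
        if dose > threshold:
            sick += 1
    return sick / n

for dose in (1, 5, 25, 125):
    print(f"dose {dose:>3}: incidence {simulate_incidence(dose):.3f}")
```

Incidence climbs with dose simply because higher doses overflow more of the population's glasses; yet within any individual who falls ill, every "marble" contributed.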

It's an especially clever model for litigation because:

a) every fiber is a necessary element of a sufficient causal set (NESS) so causation is a snap;

b) it escapes the no duty problem posed by de minimis exposures in states that view substantial factor causation from a risk/foreseeability perspective;

c) it turns modern victims into eggshell plaintiffs and so makes every case that much more foreseeable; and,

d) because it's a Landers v. East Texas Salt Water Disposal Co sort of cumulative causal claim it's especially dangerous in states with joint and several liability schemes.

The problem is that there's no evidence for the model. It rests solely on the following two generalizations (that do little more than highlight our ignorance of the mechanisms whereby asbestos causes mesothelioma) and the conclusion drawn from them:

Mesothelioma is caused by exposure to asbestos fibers

A victim's asbestos exposure consists of every fiber inhaled

Therefore, a victim's mesothelioma was caused by every asbestos fiber inhaled

As for the equally evidence-free small glasses theory it goes like this:

Not everyone exposed to asbestos develops mesothelioma

But every victim's mesothelioma was caused by her or his asbestos exposure

And every victim has had a different asbestos exposure

Therefore something about the victim (small glasses) determines the outcome of exposure

If you haven't seen it yet it's surely on its way to a courtroom near you. Our advice? Attack the premises - they're made of clouds.






 "'Hunches', even if held by experts, are not scientific knowledge."

Banning plastic grocery bags might be a very bad idea.

There's no such thing as negligent failure to generate the same hypothesis as plaintiff's future experts.

There's a very slight increased risk of lung cancer among Carolina chrysotile textile workers with 100 f-yr/ml exposure.

Apparently, the exception in section 7 of the Restatement (Third) of Torts: Liability for Physical and Emotional Harm, which permits no-duty rules when "relatively clear, categorical, bright-line rules of law" can be promulgated, is big enough to drive the very big take-home truck through. We predict dyspepsia for Michael Green. See Mary Campbell v. The Ford Motor Company.



I thought of fulvene, also known as 5-methylene-1,3-cyclopentadiene, when I read the following in a new law review article (funded, strangely enough, by a National Science Foundation grant):

Tort actions may impel industry to take voluntary steps to redesign chemical molecules ... to be less toxic.

Fulvene, you see, is made up of six carbon and six hydrogen atoms. So is benzene and so are a few other molecules. The point, of course, is that while you might be able to rearrange a car's component parts to make it somehow safer while leaving it a car, you can't rearrange benzene's atoms (or those of any other complex molecule for that matter) without turning benzene into something else. Something with a different boiling point, solubility, reactivity and the like. Something that cannot, as benzene can, be used to make the breast cancer drug tamoxifen.

The law review article is "Litigating Toxic Risks Ahead of Regulation: Biomonitoring Science in the Courtroom" and it dovetails with "How Chemicals Affect Us" which you've likely seen in the NYTimes. Each claims that very low levels of exposure to substances previously thought safe may be causing subtle changes and each ends with a call for regulation; the former by way of lowering evidentiary standards in tort proceedings so as to bring about more claims and bigger awards and the latter by way of the regulatory state. Irrespective of wielder the same tool is urged: one that resolves all uncertainties in favor of stasis, of inaction, i.e. the Precautionary Principle.

"Litigating Toxic Risks", funded under a $366,785 research grant for "Toxic Ignorance and the New Right-to-Know: The Implications of Biomonitoring for Regulatory Science", proceeds from the hypothesis that "toxic tort litigation has emerged as a means of controlling risks." It recounts 1) the number of chemicals that have never been tested for toxicity (tens of thousands); 2) the non-stop synthesis of new ones; 3) the purported shortcomings of TSCA; 4) the fact that asbestos and lead paint are made of chemicals and turned out to adversely affect some of those exposed; 5) the apparently obvious conclusion "it follows that many of today's routine chemical exposures are cause for great health concern"; and, finally, 6) the ability of biomonitoring to demonstrate those chemicals to which we've been exposed. The authors then deduce that the effort to regulate chemicals via toxic tort litigation "depends greatly on whether courts are able to apply tort theories to the scientific data used in appraising the health risks of chemicals".

They lament, however, that there's no cause of action for simply being exposed to the activities of other people; that plaintiffs must show harm - an adverse health effect - before they can prevail. Regarding those chemicals to which everyone is exposed in low doses they complain that it's not practical for plaintiffs to do epidemiological studies since there is (unsurprisingly) no unexposed reference population. Furthermore, the cost and time involved in doing epi and tox studies are significant. So, if standards of proof could just be lowered the class action mechanism would expose potential defendants to existential liability risks for harms they probably didn't cause (see pg. 6) so that vast sums could be extracted from them and the production of synthetic chemicals would be thereby curtailed or eliminated.

Additional helpful measures would include dropping the requirement that class members demonstrate that they have actually been exposed to the substance in question. As support for this assertion the authors write "[t]he courts' current stance contradicts standard scientific procedure, where it is well recognized that sampling can lead to reliable assumptions about population characteristics". (Really? A calculated sample mean is superior to knowledge of the actual population mean for making conclusions about the population? And superior to even knowing the actual exposure of each member of the population?)

To make sure that as many people as possible can assert medical monitoring claims the article's authors urge "implementation of the precautionary principle in the legal standards required to show significant exposure and increased risk of disease". The precautionary principle apparently will turn every "is it likely" hurdle to plaintiffs' recovery into an "is it possible" speed bump.

As for damages "courts can accept, as legally actionable injuries, subtle health and developmental impacts as well as emotional concern and stress related to chemical exposure."

So far some 50 million different chemical substances have been cataloged and 12,000 new ones are added every day. Most were synthesized by nature rather than by man. Over the eons our ancestors managed to survive in this sea of chemicals, surrounded and inhabited by countless biochemical factories constantly synthesizing new molecules in order to survive in and/or exploit their ever-changing environment - and our ancestors largely did it by synthesizing their own new molecules. We've only had trouble when we've been out-engineered by our biochemical competitors or when we've violated the rule: "all things in moderation". So what's with the chemohysteria over trace exposures and the discovery that our bodies notice and adapt to them on the fly?

I think a large part of it stems from the fact that we've come to realize our genetic code is more toolbox than blueprint; that we're far more impermanent than we ever imagined; and, that so much of what we believed about how it all works, especially decades old myths about the principal causes of human diseases, is being swept away by remorseless empiricism. The attempt to incorporate the Precautionary Principle into the law can thus be seen as part of a deeply conservative movement, standing athwart science, yelling Stop!




The Debate About Science, Evidence and Scientific Evidence: A Plaintiff Attorney's Perspective

Competing ideas are sharpened as they're ground against one another. We've posted our thoughts about the role of science in the courtroom - now read: "The Difference Between Scientific Evidence And The Scientific Method" by Max Kennerly of the Beasley Law Firm.



How Science Works (When It Doesn't Work)

In light of the NYTimes' "A Sharp Rise in Retractions Prompts Calls for Reform" we thought it would be a good time to revisit some of our objections to "Reference Manual on Scientific Evidence (Third Edition)"; particularly the second chapter, "How Science Works". Here goes.

Avoiding any pretense of humility the Reference Manual dismisses as woefully naive and inadequate those claims about the essence of the scientific endeavor that were ingrained in us in school. Sir Francis Bacon's scientific method? "[E]ven ... in his own time there were those who knew better." The idea that scientists ought to be unbiased observers of nature? What is seen from the shoulders upon which observers stand in order to see a bit further depends as much on the giant as on the observer; meaning somehow "Bacon has been left behind". In the chapter's section on myths and facts about science the Reference Manual says "Myth: Scientists are people of uncompromising honesty and integrity. Fact: They would have to be if Bacon were right about how science works, but he was not." Talk about cynical.

Then there's the section on Sir Karl Popper, unaffectionately known among some academics as "the man who murdered Karl Marx and Sigmund Freud". Seeing no use for hypotheses that were infinitely explanatory yet unable to accurately predict anything, Popper accepted Hume's problem of induction but found in those theories that made claims about what would follow if they were true a way to distinguish science from pseudoscience. Can the hypothesis be put to a test that it would fail were it not true? This is the criterion of falsifiability which found its way into Daubert.

The Reference Manual (like so many attacks on Popper since Daubert came out) starts by body-slamming pro wrestling-style a straw man: the claim that Popper believed a good scientist should think mightily, perceive a pattern, find a sound explanation and then spend the rest of her days attempting to prove herself a fool; meanwhile taking no sensible action that would follow from her hypothesis. To the contrary, Popper clearly opined that we ought to act on the best available evidence while keeping an open mind.

The Reference Manual then points out that a falsified theory might actually be an indictment of the giant upon whose shoulders the scientist stands rather than her reasoning. Sure. It might be true that everything we thought we knew about something is wrong (and revolutions do happen - think e.g. the human microbiome) but such a claim would be extraordinary and demand extraordinary proof. Barring such proof we're pretty safe concluding that a claim requiring we throw out everything we know is likely false. Unsurprisingly the Reference Manual, operating on the view that objectivity is an illusion, that you can never prove anything is false and that you can never prove anything is true ("the apparent asymmetry between falsification and verification that lies at the heart of Popper's theory thus vanishes") and thus without any track to follow, quickly careens into post-modernism.

Suddenly "science" is about context. Suddenly "science" is no longer about a quest for truth. "It takes a great deal of hard work to come up with a new theory that is consistent with nearly everything that is known in any area of science. Popper’s notion that the scientist’s duty is then to attack that theory at its most vulnerable point is fundamentally inconsistent with human nature." Science is thus about self-interest and power:



Myth: Scientists must have open minds, being ready to discard old ideas in favor of new ones.

Fact: Because science is an adversary process in which each idea deserves the most vigorous possible defense, it is useful for the successful progress of science that scientists tenaciously hang on to their own ideas, even in the face of contrary evidence (and they do, they do).



Of Thomas Kuhn the Reference Manual shrugs and says that all that business about paradigms collapsing to be replaced by new ones is similarly unevolved thinking. Instead "science does not, as Kuhn seemed to think, periodically self-destruct and need to start over again, but it does undergo startling changes of perspective that lead to new and, invariably, better ways of understanding the world. Thus, science does not proceed smoothly and incrementally, but it is one of the few areas of human endeavor that is truly progressive." One imagines the author spray painting "Ptolemy Lives!" on a subway wall.

Finally, even poor Galileo gets thrown in the ash bin of scientific history. Remember "the authority of thousands is not worth the humble reasoning of one single person"? False (or so says the Reference Manual). "[A]uthority is of fundamental importance to science... The triumph of reason over authority is just one of the many myths about science ..." Ugh.

So all the great thinkers were wrong. Objectivity is out. Testability is out. Keeping an open mind is out. Skepticism is right out. The appeal to authority is not a logical fallacy but fundamental to science. And supposedly it all adds up to making 21st century science conducted under such an understanding the best ever since, according to the Reference Manual: "There is no doubt at all that twentieth century science is better than nineteenth century science, and we can be absolutely confident that what will come along in the twenty-first century will be better still." So how's that working out for us so far in the 21st century?

Not so well. From the NYTimes article linked above it appears that bad science, unreproducible science and downright fraudulent science are all way up. We find ourselves in a "dysfunctional scientific climate." The problem? "You can't afford to fail, to have your hypothesis disproven." "It's a much more insidious thing that you feel compelled to put the best face on everything." That's it then. The conduct the Reference Manual calls science, hatching and clinging tenaciously and unquestioningly to a pet theory, even in the face of falsifying evidence, in hopes of becoming an authority in order to get more money just to repeat the cycle all over again, has led to a crisis in science. And the solution? Happily Dr. Casadevall, on the committee that oversees the writing of the Reference Manual, preaches giving graduate students a better understanding of what science ought to be - "the science of how you know what you know". Dollars to donuts the sermon will be long on critical thinking and real short on appeals to authority.

Science gained its prestige and respect not only from its ability to predict (and thus to allow us to make better choices) but also from its promise to respect knowledge, however humble she who reasoned it out. And despite whatever rot and corruption has crept into it in the last 20 years, the science that Daubert embraced still commands enormous respect. It's a science undaunted by authority, unimpressed by mere credentials and unafraid to dip any belief we hold dear in its acid bath of skepticism. And, especially in the biosciences, it's on the verge of sparking changes on par with, or more likely surpassing, those that followed the industrial revolution.



Pretty Good, Except for Footnotes 82 and 105

See: "Admissibility Versus Sufficiency: Controlling the Quality of Expert Witness Testimony in the United States".

Though a co-author hails from a law school named after one of the lawyers who is said to have made hundreds of millions of dollars off of the breast implant hoax and the lead author was one of the first experts called by plaintiffs to testify in Texas' asbestos MDL, they get it right, more or less, when writing that the decision to exclude an expert under Daubert is really a query into the sufficiency of plaintiff's evidence rather than the raw admissibility of the proffered testimony. Can the expert sustain plaintiff's burden of production? That's the question. If not, why bother wasting everyone's time and money to go through the motions of holding a horse race the outcome of which has already been decided?

Their second point, an attack on Daubert's inquiries into falsifiability and into the likelihood that the expert's method has produced a false positive, is less compelling. While it's true that you can find a heap of non-analytical philosophical papers positing that truth is personal, that the quest for objective knowledge is meaningless and that nothing can really be falsified, the people who design and especially the people who board airplanes prefer to have them tested first. So don't expect to see falsifiability or rate of error vanish anytime soon.

The third point, that the abuse of discretion standard on appeal needs a "hard look", seems a bit undeveloped and rates a "meh" in our opinion.

The fourth, and best point, is that courts ought not simply let the jury figure it out and then fix the verdict if they get it wrong. I first ran into the issue in a trace benzene case. Admitting that the plaintiff's exposure to benzene at my client's facility was considerably less than his EPA and OSHA estimated background dose, the plaintiff's expert opined that "naturally occurring benzene has a different electron resonance orbit than synthetic benzene" and that only man-made benzene had the toxic electron resonance orbit. Having an undergraduate chemistry degree came in handy (though a BS-O-Meter would work as well), but not handy enough. The judge refused to exclude the expert, saying he'd "take care of it" if the jury didn't. Needless to say a cost-of-defense settlement was reached. So hooray for the conclusion that "in an adversarial system employing lay fact finders there are multiple reasons for imposing a reliability filter on expert evidence."

Our main gripes are with footnotes 82 and 105. In footnote 82 the authors write: "The idea that a regulatory agency would make a carcinogenicity determination if it were not the best explanations of the evidence, i.e., more likely than not, is silly." First, the idea that critical thinking and regulatory agencies subject to political whims make strange bedfellows ought not be surprising. Second, the idea that "specific causation is not scientific" fails to acknowledge both the usefulness of biomarkers and the fact that the very same probabilistic approach to causal inference that works for general causation works for specific causation. Finally, sneaking in reasoning to the best explanation as a proxy for "more likely than not" ignores the method's provenance. It is rather a "guess" awaiting a test.

Footnote 105 quibbles with the role of statistics in all this, quoting Haack approvingly: "as the phrase 'tested or verified' suggests, what this really says is that the plaintiff's experts have produced no statistically significant evidence supporting the claim that Parlodel increases the risk of postpartum stroke." No, what the cited case says is that what the plaintiff's experts produced was highly likely to have been wrong. The point of the statistical test is to ask how likely it is that chance alone would produce evidence as strong as yours if the null hypothesis were true. If you remember that all we know, as we make our way through Galileo's dark labyrinth towards the truth, is what isn't so you'll appreciate the difference.
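The null-hypothesis logic can be made concrete with a quick simulation. The sketch below (all parameters invented for illustration) runs many studies of an exposure that does nothing at all and counts how often chance alone produces a "statistically significant" association at the conventional 5% level:

```python
import math
import random

# Simulate many studies of a harmless exposure: exposed and unexposed
# groups share the SAME baseline risk, so any apparent difference is
# pure chance.  Parameters are illustrative inventions.

random.seed(7)

def one_null_study(n=500, baseline=0.2):
    """Return the z statistic for the difference in observed rates
    between two groups with identical true risk."""
    exposed = sum(random.random() < baseline for _ in range(n))
    unexposed = sum(random.random() < baseline for _ in range(n))
    p_pool = (exposed + unexposed) / (2 * n)
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
    if se == 0:
        return 0.0
    return (exposed / n - unexposed / n) / se

# Roughly 5% of null studies will cross the |z| > 1.96 threshold anyway.
studies = [one_null_study() for _ in range(2000)]
false_alarms = sum(abs(z) > 1.96 for z in studies) / len(studies)
print(f"false-alarm rate under the null: {false_alarms:.3f}")
```

That roughly 5% is the false-alarm rate the significance test is designed to cap; it is not the probability that any particular rejection of the null is correct.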

All in all it's a good read and its conclusion, that courts indeed ought to filter out those claims that can never be proven, makes it well worth the time invested.


Bending the Dose Response Curve

The linear no-threshold model of dose-response meant that plaintiffs could continue to prevail on toxic tort claims even though their exposures had occurred in the modern era and thus were tiny fractions of those that led to epidemics in years past. Either courts permitted plaintiffs to rely on a one molecule / one particle theory of causation (consistent with the view that some risk is associated with a single molecule or particle) or they allowed plaintiffs to conflate causation with risk.

Eventually some courts began to grasp the absurdity that follows from basing proximate cause on a "one-hit" model in a world of trillions of hits while others began to take notice of the fact that despite probing larger and larger populations with low exposures epidemiology was unable to verify the linear no-threshold model for numerous diseases; thereby suggesting that there is indeed a threshold for diseases including leukemia (a new case making the latter point is Schultz v. Glidden Company). Meanwhile we have argued that the old cases got it right - that causation in an individual toxic tort case is unfathomable and that the most sensible approach is to estimate the risk imparted (e.g. by a single molecule); to ask why it makes sense to impose liability for creating a 1:1,000,000,000,000,000,000 chance of harm; and to ask further why it wouldn't make sense to impose liability for a 1:100,000 or greater risk.

But all of that assumes risk goes to zero or at least continues to decrease as exposure is reduced below previously measured levels. If that assumption is false, if risk starts heading back up as exposure goes down, especially if unpredictably so, then all bets are off. We will have entered another period of great uncertainty. And it's in such times that toxic tort claims flourish. The horsemen of this new age of uncertainty have published a review paper on the topic and if you want to understand what's coming, why it's pitch perfect for the health and wellness movement, and why what happened to BPA will be repeated again and again for other chemicals until some new way is established to either verify or refute their claim that dose doesn't make the poison, you need to read it: "Hormones and Endocrine-Disrupting Chemicals: Low-Dose Effects and Nonmonotonic Dose Responses"


"Autism Diagnoses Were Increasing and Would Soon Become the Metanarrative of Vaccine Fear"

"The Legitimacy of Vaccine Critics: What Is Left After the Autism Hypothesis?" is well worth a look. It's something of a political account of the odd left-right populism that fueled the anti-vaccine movement (and that once made Southeast Texas the epicenter of mass tort litigation)  and of what has become of the movement now that the lawyers have gone off on more promising quests.

As usual the crusade began with an invocation of Science but once the hypotheses were tested and found wanting, carrying the banner of "Science" was left to those scientists on the fringe, on the make and often both. Thanks to journals desperate for content, papers continued to be published but by an ever smaller cadre of "scientists". Yet despite their hypotheses being unreproducible or otherwise found to be evidence-free, these warriors against the medico-industrial complex continue to this day to command large audiences and to rake in significant research dollars to churn out more of the same. Perhaps most tellingly, like those who claim evolution is "just a theory", one popular speaker is reported to have said of the germ theory of disease: "... the theory is that microorganisms are the cause of many diseases ... Might I say that this is just a theory. Germs may play a role in children getting sick, but they may not be the reason that children get sick." (pg. 84). On its best day such a statement is merely an exercise in sophistry.

The article has the same sad ending as so many past mass torts that relied on fear rather than falsificationism: "Vaccine critics have built an alternative world of internal legitimacy that mimics all the features of the mainstream research world - the journals, the conferences, the publications, the letters after the names - and some leaders have gained access to policy-making positions." Enjoy; with antacids.



Gatecrashers, Blue Cabs and Bare Naked Statistics

For reasons identified in the very recent paper "Trial by Mathematics - Reconsidered" and the brand new "Epidemiology in the Courtroom: Mixed Messages From Recent British Experience", courts have long been reluctant to surrender their decision-making function to statistical and probabilistic inference. And it's a shame. A shame that such a powerful tool for uncovering what isn't so (as to us Popperians, all we really know is what isn't so) and the best bet for being so (h/t Thomas Bayes) is ignored at best and banned at worst.

The issue first came up for me as a young lawyer while carrying the briefcase for the partner defending a bad fire/burn case involving an allegedly defective electrical appliance. The client had indeed manufactured a number of the appliances with an appalling defect that undoubtedly led to fires. However, very few had the defect - less than 1%. As the appliance had been all but destroyed in the fire and it was therefore impossible to say whether it had the defect, our client claimed that plaintiff couldn't possibly prove her case since more likely than not she hadn't bought one of the defective variety.

On the other hand, plaintiff had the testimony of the fire marshal who said that he was "99% sure" that it was the appliance that had caused the fire (in large part because he'd had a similar experience investigating a fire involving the defective appliance). Since the appliances without the defect had never been reported to have caused a fire and as plaintiff's expert (along with, obviously, our expert) could think of no way the non-defective appliances could start one, the fire marshal's testimony stood as a form of product ID. Plaintiff thus moved for partial summary judgment as it was, she averred, 99% certain that the product we'd sold her was one of the defective ones.

I got interested in the question and shortly came across base-rate neglect and Thomas Bayes. I excitedly reported to my boss that I had a way to fit all the facts together, including even the fire marshal's testimony, and demonstrate that the appliance was almost certainly not one of the defective ones. I laid it all out and he gave me the look you'd expect had I set out a proof of how 1 and 1 sums to 387. He went on to say that he privately believed the appliance was one of the defective ones. Our client even believed that the fire marshal's testimony meant that plaintiff had purchased one of the defective products. The case settled. For a heap of $$$s.
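For the curious, the base-rate argument can be sketched in a few lines of Python. Only the less-than-1% defect rate comes from the story above; the investigator's accuracy figures are assumptions supplied purely for illustration:

```python
# Bayes' rule applied to the appliance case.  Only the ~1% defect rate
# comes from the narrative; the sensitivity and false-positive rate
# below are invented for illustration.

def posterior(prior, sensitivity, false_positive_rate):
    """P(hypothesis | positive evidence) by Bayes' rule."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Hypothesis: this unit was one of the defective ones (prior ~1%).
# Evidence: an investigator attributes the fire to the appliance.
# Assume he correctly fingers a defective unit 99% of the time, but
# also (wrongly) fingers the appliance in 10% of fires it didn't cause.
p = posterior(prior=0.01, sensitivity=0.99, false_positive_rate=0.10)
print(f"P(defective | investigator's finding) = {p:.2f}")  # ~0.09
```

Even a highly sensitive finding can't overcome a 1% base rate unless the false-positive rate is tiny; that's base-rate neglect in a nutshell.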

Since then I've had similar problems come up and I can report that mock jurors and judges are no more amenable to Bayes than my old boss. The former don't get it and the latter (for the most part) either haven't gotten it or refuse to turn such classically judicial questions over to "mere math". But if it's the truth we're after it's time we started to embrace the mathematical tools that have uncovered and corrected flawed reasoning in so many other areas of human inquiry. And that's the point of the two law reviews I commended to you above. Enjoy.


Check Our Math

What are the odds that a jury, having found for the plaintiff in a typical occupational cancer case, got it right (which is to say, got to the truth)?

Assume the following:

a) The odds that a jury will correctly attribute the cancer to the defendant when, in fact, it was an occupationally-induced cancer are 75% (i.e. the chance of a false negative, that plaintiff loses when she shouldn't, is only 25%)

b) The odds that a jury will find for the defendant when the defendant had nothing to do with plaintiff's cancer are 75% (i.e. the chance of a false positive, that a defendant will be held liable when it had nothing to do with plaintiff's cancer, is only 25%).

c) The odds that a given cancer case is occupationally related is 10% (that's 2.5x the typical estimate of 4% by Sir Richard Doll (of cigarettes-cause-lung-cancer fame) but let's go with it).

In such cases, when the jury has found for the plaintiff, it gets it right only 1 time in 4. Why? Because the vast majority of cancer cases aren't occupationally related. The false positives pile up. It's like the lesson we've learned with PSA tests and mammograms. Imagine 1000 cancer cases. 100 are occupational (using our plaintiff-friendly assumption above). 75 of those 100 will be detected by the jury, but of the remaining 900 non-occupational cancers 225 (25%) will be wrongly assumed to have been occupational. That means the odds that a given verdict in an occupational cancer case is the just one, the one that got it right, are 75/(225 + 75) = .25 = 25%.

Play with the numbers and assume that juries identify the deserving plaintiff 51% of the time and absolve the innocent defendant 51% of the time and use Doll's 4% figure for the number of occupationally related cancers. You'll find that for any given verdict under such a scenario there's less than a 1 in 20 chance that the defendant really caused plaintiff's cancer.
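The arithmetic in both scenarios is the standard positive-predictive-value calculation; here it is as a function you can play with, using the numbers above:

```python
def verdict_ppv(sensitivity, specificity, base_rate):
    """Chance a plaintiff's verdict is correct, given the jury's accuracy
    and the base rate of truly occupational cancers."""
    true_pos = sensitivity * base_rate              # deserving wins
    false_pos = (1 - specificity) * (1 - base_rate) # undeserving wins
    return true_pos / (true_pos + false_pos)

print(f"{verdict_ppv(0.75, 0.75, 0.10):.3f}")  # 0.250: 1 verdict in 4 correct
print(f"{verdict_ppv(0.51, 0.51, 0.04):.3f}")  # 0.042: worse than 1 in 20
```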

To move the number up to a point where we can be confident of the verdict, while sticking to a "featherweight", 51% more-likely-than-not standard for finding a defendant at fault, we have to assume that most cancers are caused by working. That assumption was popular several decades ago but it's no longer one that has anything to do with reality. Nowadays the math really only works for one type of cancer - mesothelioma.



The conjunction paradox isn't one and doesn't require new standards of proof; rather, it's evidence that the asymmetry in the law markedly favors false positives, whatever some may say.

Cancer-causing H. pylori might find its way into you by hitching a ride in food borne yeasts.

Blame for obesity: BPA vs BUG.

The evidence that hormone replacement therapy "causes" breast cancer isn't as strong as you think.

Occupational exposure to animals: evidence for a causal link with cancer mounts.


Trials and Errors and Milward v. Acuity

Today the U.S. Supreme Court denied U.S. Steel's petition for writ of certiorari in Milward v. Acuity. As a result plaintiffs' expert is now free to testify about his untested and in fact untestable hypothesis.  It's a shame because the Court of Appeals for the First Circuit has apparently lost sight of what science, at least reliable science, is all about.

Consider "Trials and Errors:  Why Science is Failing Us".  The article describes what was deemed the shocking failure of torcetrapib - a drug designed to prevent so-called "good" cholesterol (HDL) from converting into "bad" cholesterol (LDL).  Noting that "[t]here was a vast amount of research" supporting the proposition that increasing the HDL:LDL ratio would prevent coronary artery disease, and given that "the cholesterol pathway is one of the best-understood biological feedback systems in the human body", scientists assumed that they knew how torcetrapib would act and that its effect would be positive.  Instead, "the drug appeared to be killing people."

Rather than a failure of science the story of torcetrapib should be viewed as a triumph, at least so long as science is viewed as a method for subjecting big ideas to the acid bath of skepticism rather than the hatching of seemingly clever notions.  Milward, by allowing an expert to base his causation opinion on nothing more than a plausible hypothesis consisting of a proposed biologic mechanism far less well understood than that of the cholesterol pathway, embraces the big idea view of science.

The debate is an old one but one in which common law countries have generally sided with the scientific method over mere idealism.  Perhaps it has something to do with the experience of French and English bridge designers.  The early French suspension bridges had an unfortunate tendency to collapse despite being beautiful whereas the English versions tended to be ugly but sturdy.  For the French designers the thought was "it must stand because it is so beautiful".  For the English the idea was "it must stand or I shall be fired". 

The opinions of experts should be no less tested and no less reliable than bridges as both have so much riding on them.

"... It is Unacceptably Easy to Publish "Statistically Significant" Evidence Consistent with Any Hypothesis"

Want to look and feel younger?  Well, there's a properly done study, statistically significant at p < .05, showing that people who listen to The Beatles "When I'm Sixty-Four" actually became a year and a half younger!  Far-fetched?  Sure, but no more so than umpteen conclusions published in the scientific literature every day purporting to establish some causal connection based on nothing more than a statistical analysis of a series of observations.  That's the point demonstrated conclusively in False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.

Though a paper may validly report that results as extreme as those observed would be expected by chance alone only 5% of the time (i.e. p < .05), "False-Positive Psychology" demonstrates that researchers free to modify as few as four variables (such as the number of observations to be made, sorting those observed by gender, or stratifying outcomes) will more likely than not have "discovered" a causal association that doesn't exist.  The harm done by publishing such false positives is obvious.  As the authors put it:

 "First, once they appear in the literature, false positives are particularly persistent. Because null results have many possible causes, failures to replicate previous findings are never conclusive. Furthermore, because it is uncommon for prestigious journals to publish null findings or exact replications, researchers have little incentive to even attempt them. Second, false positives waste resources: They inspire investment in fruitless research programs and can lead to ineffective policy changes. Finally, a field known for publishing false positives risks losing its credibility."

To vaccinate the scientific literature against such false positives the authors conclude with suggestions similar to those we've seen elsewhere in efforts to promote evidence-based science.  At the heart of the suggested approach is transparency from the moment the experiment is conceived all the way through publication.  There are six requirements for authors and four guidelines for reviewers; be sure to read them all.
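The mechanics of the paper's point are easy to demonstrate with a toy simulation (all parameters here are my own illustrative choices, not the paper's): even when nothing is going on, letting the analyst test two outcome measures and, if neither is significant, add more subjects and try again, pushes the false-positive rate well above the nominal 5%.

```python
import math
import random

def p_value(group_a, group_b):
    """Two-sided z-test on the difference of means (unit-variance data,
    so the normal approximation is exact for this simulation)."""
    n_a, n_b = len(group_a), len(group_b)
    diff = sum(group_a) / n_a - sum(group_b) / n_b
    z = abs(diff) / math.sqrt(1 / n_a + 1 / n_b)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def flexible_study(rng, n=20, extra=10):
    """True if ANY analysis path yields p < .05 under the null."""
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    a2 = [rng.gauss(0, 1) for _ in range(n)]  # a second outcome measure
    b2 = [rng.gauss(0, 1) for _ in range(n)]
    if p_value(a, b) < 0.05 or p_value(a2, b2) < 0.05:
        return True
    # not significant yet? collect a few more observations and retest
    a += [rng.gauss(0, 1) for _ in range(extra)]
    b += [rng.gauss(0, 1) for _ in range(extra)]
    return p_value(a, b) < 0.05

rng = random.Random(0)
trials = 2000
fpr = sum(flexible_study(rng) for _ in range(trials)) / trials
print(f"false-positive rate with flexibility: {fpr:.1%}")  # well above 5%
```

With only two degrees of freedom the error rate roughly doubles or triples; the paper shows four are enough to make "discovering" a nonexistent effect more likely than not.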


The NYTimes says extra sodium threatens all but a new study says normal people just excrete it

The XMRV / CFS (Chronic Fatigue Syndrome) fiasco is simply an example of how science works

European study finds increased risk of death (CVD and heart failure) for Avandia vs pioglitazone beginning shortly after initiation, and disappearing upon discontinuation, of treatment

Can Vitamin D prevent fractures and even cancer? The evidence is mostly all over the board

Paternal, but not maternal, smoking is associated with an increased risk of childhood leukemia

Causation Just Seems To Get Harder and Harder

See for example:

The hygiene hypothesis does not mean that you should expose your kids to pathogens.

Genes + nitrosamines + helicobacter pylori = pancreatic cancer?

Drinking and smoking do lead to head and neck cancers; but not in the sense that you mean.


Parvovirus B19 infection  in children may not be so benign after all; it may be the first step on the path to childhood leukemia

But always remember, correlation does not establish causation. Sal Khan explains it in Correlation and Causation.



It's Not That The Analytical Gap Was Too Wide

It's that analytical gaps are bridged by evidence and there was no evidence that therapeutic doses of Tylenol cause cirrhosis of the liver. Rather than simply saying so the court in Ratner v. McNeil-PPC, Inc.  veered into the always unenlightening analytical gap width assessment:

"The plaintiff did not put forward any clinical or epidemiological data or peer reviewed studies showing that there is a causal link between the therapeutic use of acetaminophen and liver cirrhosis. Consequently, it was incumbent upon the plaintiff to set forth other scientific evidence based on accepted principles showing such a causal link. We find that the methodology employed by the plaintiff's experts, correlating long term, therapeutic acetaminophen use to the occurrence of liver cirrhosis, primarily based upon case studies, was fundamentally speculative (see Lewin v County of Suffolk, 18 AD3d 621), and that there was too great an analytical gap between the data and the opinion proffered. We emphasize that when an expert seeks to introduce a novel theory of medical causation without relying on a novel test or technique, the proper inquiry begins with whether the opinion is properly founded on generally accepted methodology, rather than whether the causal theory is generally accepted in the relevant scientific community."

Why not just say "plaintiff's experts have no evidence that their theory is true"? In the court's defense it was struggling to reconcile its ruling with Zito v Zabarsky in which it had previously held that in the case of a "new drug" plaintiffs need not wait for evidence (e.g. epidemiology) as otherwise:

" [a] strict application of the Frye test may result in disenfranchising persons entitled to sue for the negligence of tortfeasors. With the plethora of new drugs entering the market, the first users of a new drug who sustain injury because of the dangerous properties of the drug or inappropriate treatment protocols will be barred from obtaining redress if the test were restrictively applied."

(I am here reminded of a benzene / CML case I once tried in which plaintiff's counsel in closing bellowed "Mr. Oliver says that not enough workers have gotten CML to prove benzene is the cause of their cancers. I hope when it's his turn to speak that he answers this question: How high must the bodies be stacked before his client will admit that benzene was the cause?")

Anyway, as you can see the court was confronted with its prior opinion in which it had held that a well conceived hypothesis, though lacking utterly any evidence to support it, was enough to get a "new drug" case to the jury. Thus its effort to distinguish the two cases via "Tylenol is not a new drug" (more to the point would have been "there's been ample opportunity to test your theory via retrospective epidemiological studies and yet still you have no evidence") and thereafter a retreat into the bushes of "analytical gaps".

Karl Popper said that science proceeds by conjecture and refutation. You think up a theory about how some aspect of nature works, determine what predictions follow from it, and then check to see if the predictions hold up. That last part is what's called evidence. Without it you're left with nothing but a more or less educated guess; and that isn't enough to warrant depriving a citizen of life, liberty or property. That's what Daubert was all about.

Finally, does it look to you like "deduction" has a different meaning in New York? Here it seems to refer simultaneously to the method by which a general rule is induced from observations and the method by which a causal association is inferred from the rule you've just induced, thus:

 "Generally, deductive reasoning or extrapolation, even in the absence of medical texts or literature that support a plaintiff's theory of causation under identical circumstances, can be admissible if it is based upon more than mere theoretical speculation or scientific hunch (see Zito v Zabarsky, 28 AD3d at 46; see also Black's Law Dictionary [9th ed 2009] [defining "extrapolation" as "(t)he process of estimating an unknown value or quantity on the basis of the known range of variables" and "(t)he process of speculating about possible results, based on known facts"]). Deduction, extrapolation, drawing inferences from existing data, and analysis are not novel methodologies and are accepted stages of the scientific process.

For example, in Zito v Zabarsky (28 AD3d 42), this Court expressly recognized that extrapolation or deduction is warranted in instances where the theory pertains to a new drug."

 Let me know. Thanks.



Hydraulic Fracturing (a/k/a fracing a/k/a fracking) Roundup

Yesterday our energy partners reported on the EPA's claim of water contamination in Wyoming due to hydraulic fracturing fluids used in natural gas production. Today The New York Times is wondering whether earthquakes can be blamed on fracing. Thus it sounds like a good time to provide you some links to recent studies of the process that you may find of interest. Here goes:

Scientific American has the truth about "fracking" and thinks that engineering science has gotten ahead of safety

The comment period for New York's Supplemental Generic Environmental Impact Statement just ended and some public health advocates don't like it

Two miles underground amidst the shale and gas, where the pressures and temperatures are extreme lives a fascinating community

And some of its members traveled there via drilling muds

Finally, some public health advocates and journals tend to overlook one important aspect of the energy business: it provides lots of high paying jobs with benefits ranging from free laundry service to transportation to health care, and often excellent pensions; not to mention an interesting and disciplined work environment. That's a big boost to socioeconomic status, which bestows dramatic economic, physical and even mental health benefits that echo through succeeding generations. So when balancing the risks and benefits of fracing let's not forget to add the profound public health benefits that flow from good jobs to the benefit side of the ledger.

No Two Experts See Cancer Metastasis The Same Way

How confident should courts be in the opinions of expert witnesses testifying at the bleeding edge of science when the life, liberty or property of a citizen hangs in the balance? Not very, if a new study is any indication. The authors of Conflicting Biomedical Assumptions for Mathematical Modeling: The Case of Cancer Metastasis wanted to build an exemplar mathematical model of cancer metastasis and open it up for testing and systematic evaluation. Instead they found that none of the twenty-eight leading academic experts in metastasis could agree on even the basics of the process. There were, in fact, as many opinions about the course of metastasis as there were researchers.

The authors found that a wide range of incompatible assumptions are held by scientists studying the same subject and that no two experts advanced identical scenarios for cancer metastasis. Most tellingly, the differences were largely invisible to the experts themselves.

As the authors wrote: "In their description of metastasis, experts grouped the same symbols/events differently, they varied their ordering of events, and often suggested recurrent events absent in the outline that we showed them (the 'textbook' version of cancer metastasis). While some disagreements were minor, such as proposing that 'some unknown extra steps occur between these events', others were substantial." They went on to write: "It was clear after 28 interviews that despite similarities, experts think differently about metastasis."

So what to make of it?  If nothing else the paper supports the view that (1) finding an expert whose views on an uncertain area of science align with your client's pleadings is probably no more challenging than going through the buffet line at your local cafeteria and picking out what you want; and, (2) a given expert's opinion, made under such conditions of uncertain science, is almost certainly wrong.



The Sixth Circuit (almost) Gets Substantial Factor Causation

In Moeller v. Garlock Sealing Technologies, LLC the 6th Circuit held that while the decedent's exposure to the defendant's gaskets "may have contributed to his mesothelioma, the record simply does not support an inference that it was a substantial cause of his mesothelioma. Given that the Plaintiff failed to quantify [decedent's] exposure to asbestos from [Defendant's gaskets] and that the Plaintiff concedes that [decedent] sustained massive exposure to asbestos from [other] sources, there is simply insufficient evidence to infer that [Defendant's] gaskets probably, as opposed to possibly, were a substantial cause of [decedent's] mesothelioma... On the basis of this record, saying that exposure to [Defendant's] gaskets was a substantial cause of [decedent's] mesothelioma would be akin to saying that one who pours a bucket of water into the ocean has substantially contributed to the ocean's volume. Cf. Gregg v. V-J Auto Parts, Co., 943 A.2d 216, 223 (Pa. 2007)."

So what's the problem? The problem is that the court is not asking whether the exposure in question created a substantial risk - one that may have been (though we'll never know because there were other possible sufficient causes) the cause of plaintiff's injury. No, the court is asking whether the exposure was likely to have been the "actual cause" of plaintiff's injury. That's made clear when the court writes: "Substantial causation refers to the probable cause, as opposed to a possible cause". Thus, it's not an inquiry as to the conduct (i.e. did Defendant produce more than a de minimis risk) but rather an inquiry as to the amount of the exposure to Defendant's product relative to other exposures.

For defendants then, who increasingly face a litigation environment in which their product contributed a bucket of water into an ocean the size of a bathtub, a few more victories like Moeller v. Garlock threaten to utterly undo them.

Hat tip Nina Webb-Lawton

"... the only group of organisms that have been convincingly shown to cause extinction."

What are they? They're responsible for massive worldwide die-offs of frogs and other amphibians. They're killing huge numbers of bats across the United States and threatening some local populations with outright extinction. They've also been convincingly associated with bee-colony collapse disorder which has wiped out 20 - 40 percent of U.S. honeybee colonies.

But they're not all bad. They invented penicillin and make other good things like beer and bread.

So, what are they? Read all about them in the Institute of Medicine's new report: "Fungal Diseases: An Emerging Threat to Human, Animal and Plant Health: Workshop Summary". The partial quote in our blog title comes from one of the participants, Arturo Casadevall, who said "Fungi are the only group of organisms that have been convincingly shown to cause extinction." And if you want more proof that infectious diseases have most certainly not been conquered (and in fact have been invisible to investigators until now) be sure to read the story of the Cryptococcus gattii epidemic that emerged on Vancouver Island and has spread to the Northwestern U.S. killing 40 and sickening over 300 so far.


Once More Unto the Breach, Dear Friends, Once More

The "acid bath" of empiricism is reproducibility; which is to ask, to test, whether the results of an experiment or study can be obtained by someone else following the same methods and using the same materials reported by the original study's authors. How often, would you guess, do academic, peer reviewed and subsequently published study results suggesting molecular biological mechanisms susceptible to intervention, a/k/a "new drug targets", survive the bath? Generally? Nope. Usually? Nope. More often than not? Nope. Sometimes? Ok, even broken clocks are right twice a day. See "Reliability of 'new drug target' claims called into question."

So what's going on here? Unfortunately all the incentives run in favor of confirming pre-existing biases when it comes to academic research and there are no incentives for disrupting the status quo. No one with hopes of getting her or his PhD writes up "How I Propose to Falsify My Department Rainmaker's Pet Theories" when soliciting a grant.

The game's afoot so remember that unreproduced conjectures have a very high probability of being false.


ht Marginal Revolution


"The Cost to the Health of Our Microbial Ecosystems"

Gina Kolata has another good read at the NYTimes in "The New Generation of Microbe Hunters". The word, as you can see, is quickly getting out; the old ways of thinking about the determinants of human health are crumbling as the discovery that we are "super-organisms", more bacterial than human - at least from a genetic perspective, sweeps away old notions about what makes us sick, what keeps us healthy and even what (and maybe who) we are.

For other dispatches from the revolution you might want to read about just how big a deal this is, how much we know, how much remains to be understood and the promise of biotherapeutics; or maybe, since there's a little Gilgamesh in each of us, how changing the bacteria in the gut of mice makes the rodents live significantly longer; then there's a dysregulated microbiome and rheumatoid arthritis; new insights into how H. pylori causes gastric cancer; and gut microbes can cause cancer of the liver and breast (in mice anyway); and changing the gut microbiota to treat type 2 diabetes and, and, and ... There's a torrent of literature but that'll give you an idea what's out there and what's coming.

None of that is to say "Eureka!" they've found the answer. Likely (as it's wise to hedge bets) the causation onion has many layers still uncovered. No, the point is twofold. First, the 40-year-old idea championed by public health advocates pushing what they call social, or environmental, justice - that much if not most human suffering is due to bad industrial chemicals or the bad habits inculcated in consumers by nefarious corporations bent on selling them things they don't want or need - was never sound but now it's just silly. Second, if you've been paying attention, you'll understand that an awful lot of illness and suffering has been caused by stuff nobody, we presume, ever fretted about. But who knows? Maybe somebody somewhere has the disrupted microbiome version of the Sumner Simpson Papers. Wouldn't that be something?

Havner: Now We're Really Confused

The Texas Supreme Court just decided Merck v. Garza. The relatively short opinion rolls along (1) reaffirming Havner; (2) apparently adding the further requirement of a second well done epidemiological study "statistically significant at the 95% confidence level" that shows a doubling of risk; (3) rejecting the "totality of the evidence" ipse dixit of plaintiff's expert; but then suddenly (4) utterly confounding us by holding "when parties attempt to prove general causation using epidemiological evidence, a threshold requirement of reliability is that the evidence demonstrate a statistically significant doubling of the risk". What?!

The whole purpose of the "doubling of the risk" requirement had been, we thought, to ensure that when a plaintiff has nothing but probabilistic evidence such evidence must actually support a "more likely than not" causal inference as to her specific illness. There are numerous agents that produce small effects (i.e. relative risks less than 2.0) and which are nevertheless unquestionably causative of human disease. Hopefully, the court meant "specific" where it wrote "general" regarding risk doubling.
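The arithmetic behind the doubling-of-the-risk rule is simple: if exposed people get the disease at RR times the background rate, the chance that a given exposed victim's disease came from the exposure is (RR - 1)/RR, and only when RR exceeds 2.0 does that clear the 50% "more likely than not" bar.

```python
def prob_specific_causation(relative_risk):
    """Fraction of cases among the exposed attributable to the exposure."""
    return (relative_risk - 1) / relative_risk

for rr in (1.5, 2.0, 3.0):
    print(rr, f"{prob_specific_causation(rr):.0%}")
# RR 1.5: 33% - a real effect, but any single case is probably background
# RR 2.0: 50% - the break-even point
# RR 3.0: 67% - now "more likely than not" for a given plaintiff
```

Which is the point in the text: an agent with RR of 1.5 can be unquestionably causative in the general sense while never supporting a specific plaintiff's 51% burden.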

Yet there's another problem on the very next page. Apparently courts are now to "examine the design and execution of epidemiological studies using factors like the Bradford Hill criteria to reveal any biases that might have skewed the results of the study." Again: What?! We thought (and we're pretty sure we're right) that Hill's list of factors was his way of assessing a given claim of general causation. And anyway, that's not how you look for bias. This is how you look for bias: "Excess Significance Bias in the Literature on Brain Volume Abnormalities".

In sum we liked, of course, the court's conclusion that when each piece of plaintiff's supposedly supportive evidence is flawed "a plaintiff cannot prove causation by presenting different types of unreliable evidence." Yet, recognizing that causal inference is hard (nearly maddening sometimes) and that statistical inference is complicated and counterintuitive, we wish the court had done a better job on this one. The deviations from standard analysis will only support those who complain that the current court is "merely results oriented".

The Malleability of Memory

Upon reading the New Jersey Supreme Court's decision in State v. Larry R. Henderson the first thought that occurred to me was that if courts hearing toxic tort cases in which product identification is an issue were to scrutinize such testimony under a similar standard, our toxic justice problem would soon be solved.

The science stuff starts on page 40. The social science is, per usual, weak with small sample sizes involving unrepresentative subjects and the analysis consists too often of counting hands of credentialed experts. Nevertheless, the evidence that "memories fade with time" and that "memory decay is irreversible" ought to be kept in mind any time a witness claims to recall the incidental use of a product decades before.


From Post-Normal Science to Post-Normal Law?

Carl Cranor certainly understands the impact of Milward v. Acuity as you can see from his recent blog post at the Center for Progressive Reform. Over the coming days we'll examine several of Cranor's points but for today let's start with his enthusiastic approval of the appellate court's rejection of "an 'atomistic' study-by-study assessment of the scientific basis of expert testimony." 

In the sort of scientific induction that is the basis for most expert opinion in toxic tort cases, broad conclusions are drawn from specific data.  Thus in Milward, plaintiff's expert extrapolated from four particular sets of data to his conclusion that benzene is generally capable of causing acute promyelocytic leukemia (APL) in humans.  The trial court, however, reviewed each particular bit of data which supposedly supported the inference and found each to be wanting and so excluded the opinion.  But the US Court of Appeals for the First Circuit held that "[t]he district court erred in reasoning that because no one line of evidence supported a reliable inference of causation, an inference of causation based on the totality of the evidence was unreliable."  That court then went even further and adopted a view of how science is done that is advanced by just a small cadre of non-scientist academics. It approved Cranor's conception of scientific induction holding that "[t]he hallmark of the weight of the evidence approach is reasoning to the best explanation for all of the available evidence."  The problem, for those of us stuck in The Enlightenment, is that an argument founded on false premises cannot, save by sheer accident, lead to the truth.

Let's say there are four studies recording the incidence of some disease in a work force and the dose, or exposure, to the chemical in question sustained by the workers being studied.  The data, according to plaintiff's expert, looks like this:

From such data a scientist could reasonably infer that there is a relationship between dose and the incidence of disease and specifically that doubling the exposure doubles the risk of disease. Data like that is generally a powerful indicator of a true causal connection if subsequently confirmed by other studies.  But let's say that we examine the four data points and find that the expert has either misreported or misinterpreted the data and that it really looks like this:

What the appellate court has said in Milward is that somehow, based solely on the subjective weight given each bit of data and his interpretation of "the totality" of the data, an expert is free to testify to a conclusion that not only is unsupported by, but is completely at odds with, the premises from which it was derived.  What's going on here?  
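The contrast between the two data sets described above can be sketched with hypothetical numbers (doses and incidence rates invented purely for illustration): in the first, doubling the exposure doubles the incidence; in the second, the re-examined data shows no discernible relationship at all.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

doses = [10, 20, 40, 80]         # hypothetical exposures
as_reported = [2, 4, 8, 16]      # doubling dose doubles incidence
as_reexamined = [8, 3, 12, 5]    # what the data "really" looked like

print(f"{pearson_r(doses, as_reported):.2f}")    # 1.00: looks powerfully causal
print(f"{pearson_r(doses, as_reexamined):.2f}")  # near zero: supports nothing
```

A conclusion "weighed" from the second data set is exactly the kind of opinion that is at odds with its own premises.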

What's up is that the Court has bought into, whether it recognizes it or not, the concept of "post-normal science".  It's an idea advanced by Jerome Ravetz and embraced by Carl Cranor and many in the movement that seeks to incorporate the precautionary principle into our laws.  The idea is explicated most clearly in "Towards a Non-Violent Discourse in Science" in which Ravetz explains that the Enlightenment's view of science which has prevailed to this day - that "in the natural sciences, whose conclusions are true and necessary and have nothing to do with human will" ... we must "give up this idea and this hope of [ours] that there may be men so much more learned, erudite and well-read than the rest of us as to be able to make that which is false become true in defiance of nature" (Galileo Galilei) - is yielding to a new conception of science necessitated by our modern scary world. A world in which "facts are uncertain, values in dispute, stakes high and decisions urgent". A world in which Enlightenment-style science too often serves "the morally dubious worlds of profit, power and privilege".

Essentially the idea is that science has become "authoritarian". It imposes what it claims to be truth on people who have genuinely held beliefs that lead them to a very different conception of how the universe works. And when it comes to risk, it ignores social constructions of risk that lead people like Cranor to believe that, for example, autism is linked to pharmaceuticals, alcohol and living too close to a freeway. Consequently, "housewife epidemiology" and the fervently held beliefs of activists ought to be weighed in the scales of judgment alongside the data and the test results of such theories.  (See "Legally Poisoned: How the Law Puts Us at Risk From Toxicants"). Most importantly it embraces the view that there are indeed people, experts, "so much more learned, erudite and well-read than the rest of us" as to hold a view of truth immune to, and indeed beyond the reach of, "normal science".

It's all, in our view, a dreadful misreading of Thomas Kuhn's "The Structure of Scientific Revolutions". It twists the conclusion that scientists are no happier to admit their errors than regular folks into a claim that all science is a sort of social construct - that there is no truth, and that what scientists really do is to weigh the facts they find relevant to a nicety in the scales of their subjective judgment. And by incorporating such a view into our law, originally, at least, derived from and founded upon the empiricism of the Enlightenment, we adopt a view in which the law exists not to guide our future actions as citizens but rather, typically ex post facto in the case of toxic torts, to support whatever fad or fear motivates us in the moment.

More in coming days.

Peer Review in Scientific Publications

The Science and Technology Committee of the UK's House of Commons has just published "Peer Review in Scientific Publications" and it does little to bolster the view that peer review is any sort of a seal of approval of sound science. "We found that despite the many criticisms and the little solid evidence on the efficacy of pre-publication editorial peer review, it is considered by many as important and not something that can be dispensed with." It's a safety blanket at best and one that, with the advent of the internet, is yielding to calls for more transparency rather than better editorial gatekeepers. Critical thinking FTW.



Bostic: Baron & Budd Swings For The Fences

As we've discussed previously, the Dallas Court of Appeals in Bostic made it logically impossible for a plaintiff injured as the result of multiple potentially causative exposures to recover from any of those responsible for the exposures. They did so by holding that a plaintiff must not only show that the aggregate dose was the "but for" cause of his injury, but also that each defendant's component dose was the "but for" cause of his injury. Thus, if plaintiff were exposed to two doses, each sufficient to have caused his injury, both defendants could argue with equal force under a "but for" standard "had my product never existed plaintiff would still have been injured because of the other guy's product and so my product cannot possibly have been the 'but for' cause of plaintiff's injury".
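The logical trap can be sketched in a few lines of code (a toy model of the argument, not anything drawn from the opinion itself): with two independently sufficient doses, a naive "but for" test exonerates each defendant individually even though the aggregate exposure plainly satisfies it.

```python
# Toy model of overdetermined causation under a naive "but for" test.
# Each entry in `doses` is True if that defendant's dose was independently
# sufficient to cause the injury.

def injured(doses):
    """Injury occurs if any remaining dose is independently sufficient."""
    return any(doses)

doses = [True, True]  # two defendants, each dose sufficient by itself

# Apply the but-for test to each defendant's dose in isolation:
for i in range(len(doses)):
    remainder = doses[:i] + doses[i + 1:]
    # Removing either dose leaves the other, so the injury still occurs
    # and neither defendant is a "but for" cause.
    print(f"Defendant {i + 1} a but-for cause? {not injured(remainder)}")  # False

# Apply the but-for test to the aggregate dose:
print(f"Aggregate dose a but-for cause? {not injured([])}")  # True
```

That asymmetry is exactly the argument Bostic hands to each defendant.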

The problem seems to originate in confusion over what the Texas Supreme Court means by "substantial factor". We've been arguing since our amicus in Borg-Warner that the essence of every court's discussion of foreseeability or proximate cause is risk. Since, as the National Academies put it in Science and Decisions, "[v]irtually every aspect of life involves risk", what courts have been doing is drawing boundaries between those risks for which the imposition of liability would be just and those for which it would be unjust. Substantial factor then means substantial risk, and in toxic tort cases risk is measured by exposure, or dose. A plaintiff need only show that "but for" his exposure to asbestos (in the aggregate) he would not have developed mesothelioma, but if he's to carry his burden of showing substantial factor causation he must estimate each defendant's contribution to the overall dose. And that's what we thought Borg-Warner said.

Bostic argues in her brief, however, that any requirement that a plaintiff show what the dose received from an individual defendant's product or premises was likely to have been would make it "scientifically impossible" for any plaintiff to prevail. She says that her expert Dr. Longo testified "that it would be scientifically impossible for him to calculate the precise dose of asbestos" that Bostic experienced as a result of his use of Georgia-Pacific's products. Of course Borg-Warner specifically says that a plaintiff is not required to state with mathematical precision the dose contribution of each defendant. A supportable approximation is good enough. Bostic further implies that coming up with even an approximation of dose is impossible. Is it?

In 1995 Harvey Checkoway wrote: "Quantitative estimation of exposure has become a central focus in occupational epidemiology over the past decade as a result of the increasing emphasis put on exposure-response characterisation for occupational hazards." He concluded by writing: "Methods of assessment of exposure have been given much more attention in recent years. As a result, increasingly sophisticated approaches to retrospective assessment have been developed ... Nevertheless, no amount of foresight and prospective monitoring will replace the need for sound approaches to retrospective estimation of exposure, and the variety of methods now available provide a basis for that work." Not only have such methods been available to expert witnesses for years, their use in benzene and other toxic tort litigation is nowadays utterly unexceptional.

Of course a supportable retrospective dose estimation is possible and it's done all the time. The attempt to substitute Dr. Longo's estimation of the highest dose from a one-time use of Georgia-Pacific's product for Bostic's estimated total dose from its product is akin to substituting the amount of tar and other particulate generated by one cigarette for a plaintiff's pack-years of smoking - it evades the real question, "what was the risk?", and answers instead another question, "was there any risk?" It's an effort to conflate risk and causation and so, without saying so, to get the Texas Supreme Court to adopt the Restatement (Third) of Torts and its attempt to substitute any risk for a substantial risk as the outer boundary of liability. Should they prevail they'll have knocked the cover off the ball.



The Attempted "Reshapement" of Toxic Torts

Building up to the publication of the Restatement (Third) of Torts and now reaching what must surely be a crescendo has come one law review article after another assailing various courts' (re)adoption of the Enlightenment's view of causation and their (re)embrace of empiricism. Last Friday for example we posted a link to a recent paper making the case that loose causation standards (as opposed to those of "classical liberalism") in toxic tort cases are vital to the consolidation and empowerment of "the administrative state". And a couple of days before that we wrote about another new article that attempts to provide intellectual support for the proposition that the consequence of courts' application of strict causation standards has been to tip the scales in favor of defendants. Today we'll address an assertion from the second article.

On page 107 author Gold makes the following claim: "To the extent courts treat general and specific causation as separate elements requiring distinct proof, plaintiffs who already confront scientific uncertainty may be required to jump two hurdles instead of one - increasing the likelihood of false negative adjudications on causation." What he's done is to confuse what happens when we estimate the probability of the conjunction of two events (here, e.g., the likelihood that tetra-methyl death (TMD) can cause prostate cancer and the likelihood that plaintiff's prostate cancer was actually caused by TMD) with the manner in which false positives and negatives are identified and the manner in which the odds of each are estimated. A simple illustration will hopefully suffice.

On chromosome 19, carried by both men and women, is a gene, KLK3, a variant of which is highly correlated with prostate cancer. There's a test for prostate cancer that looks for the KLK3 variant. Since both men and women carry the gene what happens to the number of false negatives (the KLK3 test shows they don't have the variant but later they're found to have prostate cancer after all) and false positives (the test says they have it but they really don't) if a general causation "hurdle" like "can women even get prostate cancer?" is placed before the KLK3 test? The number of false negatives is unchanged (good) and the number of false positives is cut in half (great).
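The arithmetic behind that result can be checked with a quick sketch. The counts below are illustrative assumptions (1,000 men and 1,000 women screened, 2% male prevalence, 90% sensitivity, 5% false-positive rate) rather than real KLK3 test characteristics:

```python
# Illustrative screening arithmetic for the KLK3 example. All numbers are
# made up for the sketch; per the paragraph's premise, prostate cancer
# occurs only in men, so every positive test in a woman is a false positive.

def screen(men, women, prevalence, sensitivity, fp_rate, hurdle):
    """Return (false_negatives, false_positives) for the variant test.
    If `hurdle` is True, a general causation question ("can women even get
    prostate cancer?") excludes women before the test is run."""
    cases = men * prevalence                 # true cases, all male
    false_neg = cases * (1.0 - sensitivity)  # male cases the test misses
    false_pos = (men - cases) * fp_rate      # healthy men wrongly flagged
    if not hurdle:
        false_pos += women * fp_rate         # women wrongly flagged
    return false_neg, false_pos

print(screen(1000, 1000, 0.02, 0.90, 0.05, hurdle=False))
print(screen(1000, 1000, 0.02, 0.90, 0.05, hurdle=True))
# The false negatives are identical with or without the hurdle, while the
# false positives roughly halve - just as the paragraph says.
```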

What we suspect Gold is trying to complain about is the following: If we're 51% sure that TMD is a cause of toe cancer and 51% sure that of the possible causes of plaintiff's toe cancer TMD was actually the cause of it then across the set of toe cancer plaintiffs there will be some who recover nothing because of the extreme uncertainty in the science. But that's really just proof that the law is far too lenient towards plaintiffs in toxic tort cases.

What are the odds that our hypothetical toe cancer plaintiff actually got his toe cancer from TMD? Only 26.01% (51% x 51%) - hardly "more likely than not". Yet every court in the country will let a toxic tort plaintiff stack uncertainty upon uncertainty, "more likely than not" upon "more likely than not", to get to a cumulative "more likely than not" in spite of the fact that mathematically that ain't how it works. Furthermore, given that courts will allow plaintiffs to establish both general and specific causation with a single study demonstrating a relative risk (RR) of 2.0, and given that most published research findings with low (less than 4-ish) RRs are false anyway, the odds that our toe cancer plaintiff waving his toe-cancer/TMD peer-reviewed paper actually got his cancer from TMD are no more than 13% (51% x 51% x 51%) and likely much lower - yet he'll get to a jury, meaning he'll be offered a settlement, in every courthouse in the land.
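The multiplication can be laid out explicitly (the 51% figures are the post's hypotheticals, not data from any case):

```python
# Stacking independent "more likely than not" findings multiplies the
# probabilities; it does not preserve "more likely than not".

def conjunction(*probs):
    """Probability that every one of several independent events occurs."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# General causation (51%) AND specific causation (51%):
print(f"{conjunction(0.51, 0.51):.4f}")        # 0.2601
# AND the single supporting study being true (51%):
print(f"{conjunction(0.51, 0.51, 0.51):.4f}")  # 0.1327
```

Each added 51% hurdle cuts the cumulative probability roughly in half, which is why stacking them can never get a plaintiff back above 50%.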

At the end of the day the "Reshapement" of toxic torts won't increase the odds of truly wronged plaintiffs being compensated but it will, where adopted, swamp unfavored companies with meritless claims - perhaps that's the whole point of it.






Would-be Technocrats: don't forget sovereign immunity!

Ovarian cancer and asbestos: causation or misdiagnosis?

Selenium deficiency may enable food-borne bacteria.

Genetic determinism takes hit after hit. (h/t MarginalRevolution)

The more modern type of drug designer says "I don't see the use of this molecule in the body; let us clear it away." h/t Megan McArdle for the idea.




The Linear No Threshold Hypothesis: Battling Risks Already Below Background

If the (hypothetical) risk posed by radiation from drinking water is already less than the (hypothetical) risk posed by radiation exposure from the earth's crust, cosmic rays, etc., does it make any sense to spend a lot of time and money worrying about it? Only if you're hoping to gain power or money by leading to safety a "populace alarmed ... by an endless series of hobgoblins ..." And when it comes to instigating and propagating public health panics, two things work best when conjuring hobgoblins: 1) recasting one side of a normal distribution of disease into a "disease cluster"; and, 2) the linear no threshold (LNT) hypothesis.

As we recently noted, there's a bill in Congress that rejects (or perhaps implicitly repeals) the stochastic nature of biochemistry and aims for a future in which every community has either average or above average health outcomes. The bill is a response to the fact that none of the hundreds of cancer cluster investigations, conducted at enormous expense, has produced evidence to support the charges brought against the various industrial chemical hobgoblins claimed to have been responsible. About the only cluster to actually be a non-random event, that of childhood leukemia in Fallon, NV, turned out to be due to a virus. Outraged activists are lobbying hard to ensure that nothing other than putatively man-made causes can be investigated in the future.

A similar codification of the LNT hypothesis may be needed to rescue it from obsolescence. Not only is the hypothesis wholly evidence-free, there's sound evidence to support the claim that in the case of radiation low levels cause adaptive responses that make you less likely to develop cancer. An excellent summary of the evidence against LNT and for adaptation is available free online in "Human Health and the Biological Effects of Tritium in Drinking Water: Prudent Policy Through Science - Addressing the ODWAC New Recommendation". It's a discussion about whether or not people ought to worry about tritium in drinking water that produces less radiation than the ground on which we walk and the buildings in which we live. The answer is "no". To paraphrase one of my son's favorite Chuck Norris jokes (don't know how or why these suddenly became popular): 'before getting in bed, hobgoblins check their closets for Canadian health scientists'.


The Differential-Diagnosis "Methodology"

The case of Pluck v. BP Oil Pipeline Company, decided last week by the U.S. Sixth Circuit, turned on whether the opinions of plaintiffs' expert were properly excluded as unreliable and on whether his attempt to salvage them, by subsequently filing a supplemental report stating that upon using the court-approved "differential-diagnosis methodology" the identical evidence-free opinions had (not surprisingly) been reached, was timely. The court answered "yes" and "no", respectively.

Along the way to reaching its decision the court restated its view of the soundness of the differential-diagnosis methodology in deciding the cause of an individual's illness. Lots of courts have been saying the same thing of late. But all they're really saying is that using a decision tree to make a decision is OK. That's like saying that using a digital calculator to calculate the length of the hypotenuse of a right triangle is OK. It doesn't, however, say anything about whether the data used were accurate or even about whether the determinative quantities had been measured in the first place.

When we think about differential diagnosis we ought to be thinking of something like this excellent example from Baylor College of Medicine's Radiology Club. Working from a variety of well-established causes of a liver mass and precisely measured tests for the presence of each, the physicians were able to rule in cholangiocarcinoma while methodically ruling out a variety of other potential causes. We'll save for another day the question of whether or not such a methodology was ever intended to, or is any more capable than cast dice or chicken entrails of, identifying heretofore unrecognized causes of a particular illness a la Milward.

Instead, here's what can happen when a court is satisfied with the utterance of the magic phrase "I used the differential-diagnosis method" and decides it's up to the jury to determine whether the expert's rulings-in and rulings-out are sound. We'll change the facts to those of a well-known example to protect the innocent and to keep us out of trouble with a certain court.

Consider the following, modified from "Judgment Under Uncertainty: Heuristics and Biases". The known, possible causes of plaintiff's illness are either "genes" or  "tetramethyl-death". Bad genes account for 85% of all cases while "tetramethyl-death" is only rarely indicted, accounting for just 15% of the cases. Plaintiff's expert says that plaintiff was exposed to "tetramethyl-death" and didn't have the bad genes. The court believes (perhaps because plaintiff's expert has made the mistake of going out on a Bayesian limb and giving an estimate of his faith in his estimation, hint, hint) that 80% of the time plaintiff's expert is able to accurately distinguish between cases caused by "tetramethyl-death" and those caused by bad genes. What then are the odds that plaintiff's illness was indeed caused by the chemical exposure rather than his genes?

Significantly less than 50%.
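For readers who want the arithmetic behind that answer, here is Bayes' rule applied to the paragraph's hypothetical numbers (85% genes, 15% "tetramethyl-death", expert accurate 80% of the time):

```python
# Base-rate arithmetic for the modified Tversky/Kahneman example above.

def posterior_tmd(base_rate_tmd, accuracy):
    """P(cause was TMD | expert attributes the case to TMD), by Bayes' rule."""
    base_rate_genes = 1.0 - base_rate_tmd
    # The expert says "TMD" either correctly (true TMD cases identified at
    # `accuracy`) or mistakenly (genetic cases misattributed at 1 - accuracy).
    p_says_tmd = base_rate_tmd * accuracy + base_rate_genes * (1.0 - accuracy)
    return (base_rate_tmd * accuracy) / p_says_tmd

print(f"{posterior_tmd(0.15, 0.80):.3f}")  # 0.414 -- well short of 50%
```

Despite the expert's 80% accuracy, the low 15% base rate drags the posterior probability down to about 41%.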

So the question becomes, essentially, should a verdict finding that one and one sums to three be upheld assuming the parties had a full and fair opportunity to cross examine the purveyor of such nonsense? A surprising number of jurists say "yes". That's what happens when you don't examine the alleged support for each branch of the decision tree; and that's what happens when you don't "get" percentages.

The Evidence Mounts: The Role of Bacteria and Viruses in Autism

Wakefield's fraud wasn't in suggesting that the gut had something to do with autism; that's just hypothesis formation - and in his case, maybe a good one. No, what got him in trouble was suggesting that he had confirmation/verification (that empiricism business we've been writing about) of the cause advocated by his plaintiff lawyer backers. Too bad. The cause may not be vaccines but there's growing evidence that disruption of the microbes with whom we share this life may in fact be at the heart of the matter. For more see:

"State of the Art: Microbiology in Health and Disease. Intestinal Bacterial Flora in Autism"; then see "Desulfovibrio Species are Potentially Important in Regressive Autism"; followed by "Gastrointestinal Flora and Gastrointestinal Status in Children with Autism -- Comparisons to Typical Children and Correlation with Autism Severity"; and then finish it off with "Secrets of the MMR Scare. How the Vaccine Crisis Was Meant to Make Money".

Sadly, rather than working to uncover the cause of human suffering we seemingly spend most of our energy trying to determine to whom we ought to assign blame - first for the illness and then for the false claims of causality. For whatever reason we seem, collectively, more interested in rooting out any possible nefarious human agency than in solving, engineer-style, the problem at hand. So, perhaps, back to one of our first posts - "It Takes a Villain". If someone can somehow be blamed for the naughtiness of microbes perhaps we can make some rapid progress on these matters. Pity the victims, but after all every Dark Age, even the mini-version through which we're passing, demands its witches.


Why Do Almost One in Three Americans Experience a Medical Error While Hospitalized?

Start with "Medical Errors in the USA: Human or Systemic", head over to Health Policy Brief: Improving Quality and Safety (04/15/2011) and then try "'Global Trigger Tool' Shows That Adverse Events in Hospitals May Be Ten Times Greater Than Previously Measured". It reminded me of my Great Grandmother who had a serious stroke yet refused to go to the hospital: "You don't ever want to go to the hospital; if you're lucky you come out no worse off than when you went in." Instead she called for big cans of soup from her pantry to use as workout weights to help get her strength back. Ten years later she was given too much of the wrong medication and died soon after. But 102 ain't a bad age to make it to, especially if you're independent to (almost) the very end.

Finally, ponder "Should the Practice of Medicine be a Deontological or Utilitarian Enterprise?" Maybe our problem here in the States, too, is that we're stuck with a promise of heroic effort in every case yet able, obviously, to deliver only the effort that knowledge, time and money allow. So instead we make a big production of hospitalization and in the process gather a mountain of analytical test data that can't possibly be adequately analyzed (in no small part because we don't know what most of it means). In the meantime we subject patients to a staggering number of unnecessary tests and pointless biopsies and expose them to all the attendant risks, including nosocomial infections.

So for now, and for the foreseeable future unless we're willing to admit that medicine knows a whole lot less than it claims, we'll have to settle for beads and rattles.



The Power of Prayer and Belief Influenced by Legal Outcomes

When considering how legal outcomes may affect beliefs (think experts opining on general causation in the face of a lack of science), it is interesting to consider not just science but pure belief.

A beer joint in a small Texas town built an addition. A congregation prayed that it not be built. Lightning struck. The church rejoiced, firm in the belief of the power of prayer. The bar owner sued the church for causing, directly or indirectly, the lightning strike which smote his expansion. The church then denied the power of prayer. No really. True story.

Let’s again consider general causation and the power of process of elimination causation determinations.

Getting at the Truth: Improving Systematic Reviews

Systematic reviews and meta-analyses are considered to be perhaps the best evidence about treatments/exposures and outcomes from which causal inferences can be drawn. The problem is that they're susceptible to a variety of biases. See e.g. "Bias Due to Changes in Specified Outcomes During the Systematic Review Process".

In the litigation context we increasingly see a systematic review launched by an expert-to-be before the first suit is even filed.  They will thus have already gone through the studies; used their "judgment" to select which ones ought to be considered (and which ought not); determined how much weight to give each; and established a protocol for conducting the review which, surprise surprise, demonstrates a causal link supporting the lawsuits. The strong suspicion is that selection bias (a form of the Texas Sharpshooter effect) is at work (and that, of course, is being charitable). And the biggest problem is that uncovering selection bias is notoriously difficult.

Now there's a move afoot to bring transparency to systematic reviews by requiring that plans, methods and protocols be registered before the systematic review is begun. See "Best Practice in Systematic Reviews: The Importance of Protocols and Registration"; "New Initiative to Make Systematic Review Protocols More Transparent"  and "Open Medicine Endorses PROSPERO". Here's the press release from the Cochrane Collaboration and here's a link to PROSPERO.

Until we get courts to make experts disclose their methods and justifications for selecting, weighing and interpreting data before they develop their opinions, articles like the following may come in handy: "How Can We Improve the Interpretation of Systematic Reviews?"

It's World Health Day. What are You Doing to Combat the Spread of Antimicrobial Resistance?

Following the anthrax attack after 9/11 someone I knew began keeping a bottle of Cipro on his desk. He went through it that week. Had his family on it too - more bottles at home. I distinctly remember him telling me that I really needed to get to be buds with a doctor so I could get the good stuff. I worried that I'd put my family at risk by not developing a dealer. Of course I'd never really thought about it before because I had another friend who had always had several Z-paks in her top drawer. "I just call and say I've got a fever and a red throat and he writes me a 'script'." She mainly got them for her mother because her Mom's doctor wouldn't ever give her anything; "he always just says 'it's a virus'."

Now that we have relearned that many bacteria and fungi are keen on killing us and feasting on our corpses shouldn't we be keeping our powder dry? And now that we're learning almost daily of new microscopic bugs suspected of causing cancer (See e.g. "Novel Clues on the Specific Association of Streptococcus gallolyticus subspecies gallolyticus With Colorectal Cancer") shouldn't we hold off on using antibiotics until we see the whites of their colonies cultured in a petri dish lest we make them immune to what few weapons we have?

Read all about it at the WHO's "Combat Drug Resistance" webpage.

Risk, Duty and Foreseeability

The Restatement (Third) of Torts shrivels duty into an if-then statement executable by even obsolete jurists: if an actor's conduct creates a risk of physical harm then he owes a duty to exercise reasonable care.

Duty supposedly needed a new and simple algorithm because opinions turning on the question of duty were seen as incoherent and generally the result of a court having invaded the province of the fact finder (jury, hereafter). Foreseeability, the reporters decided, isn't the sort of legal or policy question judges decide - it's fact and case specific and thus something lay people relying upon common sense and communal norms of behavior ought to decide.

So that judges need not be completely replaced by computers the Restatement's reporters added that in exceptional cases a court may find that due to some other explicitly stated policy a defendant may not owe a duty. Furthermore, a court may on rare occasions properly find that reasonable people could not conclude that an outcome was foreseeable and so hold that the duty auto-generated by the new formulation had not been breached. Very simple indeed. But how's it working out?

If Nebraska (an early adopter of the Restatement's new duty formulation) is any indication the answer is "same results; different justification". Does a landlord who allows a renter to keep a pit bull owe a duty to a third party bitten by the dog? Sure; but it wasn't foreseeable so defendant wins. See Monica S. v. Nguyen. Does the owner of a road grader that can only be turned off while it's still in gear owe a duty to a mechanic called to fix it who twice accidentally bumps the ignition button causing it to start up and run over him? Sure; but it wasn't foreseeable so defendant wins. See Riggs v. Nickel.

What's going on? Look at the gold disk in my graphic. It contains all the acts, however remote, that created the risk of an injury that came to pass (e.g. the risk the road grader owner's great grandmother created by having his grandfather). American courts have pretty much uniformly taken the position that whatever risk the jury is to focus on should not be too remote. Whether because they recognized that "security is mostly a superstition" or that "a man sits as many risks as he runs" courts have in the past made essentially policy decisions to the effect that only a subset of all risks, those that aren't insubstantial, may be subjected to a foreseeability analysis. It's only for that subset of substantial risks that an actor assumes a duty and only for those risks that a jury may find to have been foreseeable that he can be made liable. Now in Nebraska (and Iowa) courts are finding a duty for every risk but then holding that whatever risks they would have formerly found to have been insubstantial are instead simply unforeseeable.

Rather than deciding the limits of tort liability those courts that have adopted the Third Restatement's concept of duty are instead engaged in the business of deciding the limits of human foresight. Hardly sensible and no improvement over the old rule: "you're under no duty to do the impossible i.e. guard against every 1-in-a-million risk you create". Oh well, at least it's frustrating what I suspect was the real purpose of the new duty formulation - to backdoor the Precautionary Principle into the law of torts.

The Process of Expert Elimination

Last week I deposed yet another expert witness who based his opinion on the so-called methodology of "process of elimination"; sometimes referred to as "differential diagnosis". Apparently, the new fad in Daubert-avoidance is to claim that one's opinion was reached via this unassailable method since he's the fourth expert in a row to so claim. Too many judges, all having run the gauntlet of SATs and LSATs, tend to find the method sound and unexceptional. So for them, here's a test.

What is the most likely cause of plaintiff's cancer? (A) "the evil eye"; (B) "bad humours of the blood"; (C) a Voodoo curse; (D) demonic possession; or, (E) Tetra-methyl-whatchamacallit?

Next, say why.

Expert witnesses who play the "process of elimination" game want you to assume, as good high school students do, that the set of all possibly correct answers is encompassed by choices (A) - (E). Of course, not all possible answers are known in real life and seldom do experts present viable alternatives to (E). Nor do they ever, out of fear of all things Bayesian, attempt to lay odds on the theory or even say by how much subsequent evidence has affected their confidence in the proffered theory.

Our job often then is to explain to the courts that the other side's expert opinion (E) is not the result of considering evidence for and against (E) but is rather a form of the argument to ignorance. Real claims to knowledge still require evidence and the statement that a hypothesis is probably true because the straw men put up to oppose it are unconvincing is an absurdity. Hypotheses will always stand or fall on the evidentiary foundations upon which they themselves rest rather than upon the foundations of competing ideas.


Why Most Experts Are Little Better Than Dart-Throwing Chimpanzees

The measure of any theory is its ability to accurately predict some event that must follow if it is true. Experts, especially those with one big idea or theme, tend to do quite poorly when their predictions are put to the test. See "Why Most Predictions Are So Bad" ht: MarginalRevolution.

To avoid the problem, clever experts try to construct opinions that aren't testable (at least not in their working lifetimes) so as to avoid the embarrassment (and financial loss) that would follow falsification of their ideas. Other experts, particularly those of the irrationalist school, deny that bad predictions undermine theories and may even argue that untestable theories can still be good science so long as an expert's subjective judgment convinces him of its truth. The Milward court couldn't find a problem with either approach.

So if henceforth our job will be to discover how firmly an expert is convinced of his opinion rather than the soundness of the foundations on which it rests how should we go about it? Immanuel Kant suggests the answer in "Critique of Pure Reason":

"The usual touchstone as to whether something asserted by someone is mere persuasion, or at least subjective conviction - i.e., firm belief - is betting. Often someone pronounces his propositions with such confident and intractable defiance that he seems to have entirely shed all worry about error. A bet startles him. Sometimes the persuasion which he owns turns out to be sufficient to be assessed at one ducat, but not at ten. For although he may indeed risk the first ducat, at ten ducats he first becomes aware of what he previously failed to notice, viz., that he might possibly have erred after all. If we conceive in our thoughts the possibility of betting our whole life's happiness on something, then our triumphant judgment dwindles very much indeed; we then become extremely timid and thus discover for the first time that our belief does not reach this far, thus pragmatic belief has merely a degree, which according to the difference of the interest involved may be large but may also be small."

The answer then is clear, I think. Let's put the fees of experts in escrow when they opine upon things not yet generally accepted. If they are proved correct, say within a decade (or at least not refuted - ties going to runners after all), then the expert claims his fees and all interest accrued. If on the other hand he is proved wrong he forfeits the fees and pays a 10% penalty with the total going to charity. Assuming the law retains some interest in the truth such a rule would undoubtedly work far better than any losing-party-pays rule.


A Tonic for Radiation Anxiety

Worried about radiation from medical diagnostics like computed tomography colonography (CTC)? Turns out the benefits of getting scanned every 5 years far outweigh any risks. See: "Radiation-Related Cancer Risks From CT Colonography Screening: A Risk-Benefit Analysis".

Fretting about nuclear technology? The 45,970 workers employed at Atomics International from 1948-1999 have a 12% lower risk of all forms of cancer combined than do other Americans of similar age and race. See: "Updated Mortality Analysis of Radiation Workers at Rocketdyne (Atomics International), 1948-2008".

Panicking about meltdowns? Try the newest copy of Clinical Oncology and articles 16 - 18 and 21 - 25. Pay special attention to "A 25 Year Retrospective Review of the Psychological Consequences of the Chernobyl Accident". Worry-induced mental health problems and the ostracization of "victims" turn out to have been the biggest harms.


Milward v. Acuity Specialty Products: Popper Out; Feyerabend In

My write up of Milward v. Acuity Specialty Products Group, Inc. is seven pages long yet far from finished.  Since it's very late I'll post just this for now. Whether the court intended it or not Milward, a benzene/APL case, is a radical opinion and a dramatic departure from Daubert v. Merrell Dow.

Milward adopts the "free for all" view of the scientific method favored by technophobes, chemophobes and especially plaintiff lawyers. Meanwhile, it drops the requirements of testability and falsifiability, turns Sir A.B. Hill from a fan of Hume and Popper into a verificationist with a list of things that don't actually matter, and concludes by holding that if an expert's opinion rests on four propositions, all of which are faulty, his opinion is nevertheless admissible so long as he says that "the totality of the evidence" informed his subjective judgment.

To help it understand the scientific method, and how tort law should incorporate the concept, the Milward court turned to the author of "Legally Poisoned: How the Law Puts Us at Risk from Toxicants" as well as a number of articles going back at least to 1982 that advocate against cost/benefit analysis and in favor of what is now known as the Precautionary Principle and the use of the tort system, once Daubert is out of the way, to effect its purpose. Throw in the court's understanding of comment c from section 28 of the Restatement (Third) of Torts: Liability for Physical and Emotional Harm, then read "The Tyranny of Science" when it's released later this month and you'll understand what Milward is all about and why it's such a big deal.

In coming days I'll go through Milward item by item starting with the elevation of (paraphrasing) 'process of elimination through subjective judgment founded on concepts of society, collective duty and environmental justice' to a scientific method.

Extrapleural Pneumonectomy: Debilitating Complications, Minimal Symptomatic Improvement and Little in the Way of Improved Survival

It runs up plaintiff's damages (in one case costing nearly $1 million) but does extrapleural pneumonectomy (EPP) actually help the plaintiff as a patient suffering from malignant pleural mesothelioma (MPM)? The newest review of the evidence, "Extrapleural pneumonectomy or Supportive Care Treatment of Malignant Pleural Mesothelioma?" establishes persuasively that it doesn't.

Sadly, this review of all the sound clinical literature on EPP is unlikely to change the minds of desperate people facing MPM.


Courts Are Starting To Get Hindsight Bias

One day, out of the blue, a common and widely used chemical which had been tested many times on many different species of lab animals without effect caused a particular strain of mice to develop cancer. Government and industry got together, funded epidemiological and toxicological studies and then agreed on a new and dramatically lower occupational exposure limit. The system worked, right?

Soon those who produced the chemical, even those who produced its utterly innocuous precursors, those who made things with it and those who transported, bought, sold, released or disposed of it were all defending claims by plaintiffs who had been exposed to it decades before. But how could these claimants prove their cases when government and industry had tested the product repeatedly and had found no rat, mouse, dog or monkey that suffered any ill effect short of exposure levels in the range that would produce asphyxiation?

Easy! Find an article detailing the product's important metabolites; find an article suggesting that one of the metabolites is mutagenic; find an article showing that the metabolite was known to result from metabolism of the product; find an article showing that the particular strain of mouse in question was thought by someone, somewhere, at some time to be a good model for human carcinogenesis; and find an article suggesting an exposure protocol which wouldn't be used until years into the future but which would result in the metabolite otherwise not produced by traditional exposure methods. Having found such easily identified articles plaintiffs' counsel was able to claim that any good company truly interested in worker health would have discovered the product's carcinogenicity no later than 1964.

We went and deposed the authors of those old papers strung together to show that what was unknown was in fact knowable, and they unanimously said "Ummm, no". No one cared about the product's metabolites because it wasn't considered to be a carcinogen; the metabolite in question was tested way back when because it was a metabolite of something else, and that something else turned out not to be a carcinogen; and, oh by the way, skin-painting and ingestion studies (not what came two decades later) were state of the art in the early 1960s. Whew.

Then we tried plaintiff's "duty to test / duty to discover" theory before three mock juries. Every one found for the plaintiffs and every one found that what would not be suspected, much less discovered, for another two decades was in fact knowable by the early 1960s. Indeed, the future was so obvious that one mock jury found "wanton indifference" and another decided exposing workers after 1960 amounted to an intentional tort.

Those thirty-six pretend jurors with an average education at about the tenth grade level were hardly oracles. Their extraordinary powers of foresight were, oddly, limited to the product in question. In breakout focus groups after the mock jury research they said that the internet, cell phones and air bags in cars were all unforeseeable in the 1960s (despite the fact that they either already existed or were being discussed in things like Popular Mechanics back then) but the carcinogenicity of the product, well, how could you miss it (despite the fact that it wasn't even hinted at back then precisely because of numerous tests that had come up negative)? So what granted them the power of perfect foresight though only for an obscure issue utterly tangential to their lives? Hindsight bias.

Courts in 2011 are starting to recognize the danger of unintentionally imposing a duty of omniscience if the faulty heuristic is not recognized and dealt with. See Rodriguez v. Stryker but also see Cristiani v. Money. Courts may be aware of the blind spot in our reasoning but how, within existing procedures, do you avoid it, and how do you prove your jury was afflicted by it after the fact?

So Much For The Radioactive Drinking Water Scare

As we noted in "Reruns of the NORM Show", recent stories about drinking water drawn from rivers downstream of water treatment plants that handle waste water from gas production operations in Pennsylvania have been long on worries about naturally occurring radioactive materials temporarily concentrated in flowback water but short on evidence of any increase in radionuclides downstream. To see if there is in fact anything to worry about, Pennsylvania's Department of Environmental Protection (DEP) began monitoring the water downstream of such plants last fall. Yesterday the results came in.

"All samples showed levels at or below the normal naturally occurring background levels of radioactivity." Said DEP's acting Secretary: "Here are the facts: all samples were at or below background levels of radioactivity; and all samples showed levels below the federal drinking water standard for Radium 226 and 228".

Pittsburgh quickly discovered that good news isn't the news EPA wants to hear. See "EPA Wants Tougher Test of Pa. Water" in today's Pittsburgh Post-Gazette. Apparently EPA fired off a letter immediately upon learning of the good news. In it the agency demanded permitting and further testing. Its author writes "I stand ready to provide EPA's support and to utilize our federal authorities to require drinking water and wastewater monitoring if that becomes necessary. In addition, EPA is prepared to exercise its enforcement authorities as appropriate where our investigations reveal violations of federal law".

Apparently EPA refuses to take "it's safe" for an answer.


Prevnar and ActHIB: Here We Go, Again

Post hoc, ergo propter hoc. There'd be far fewer toxic tort claims if it weren't for that little logical fallacy which informs so many opinions.

Today there's word from Japan that the use of Prevnar and ActHIB has been suspended following the deaths of four children shortly after immunization against bacterial meningitis and pneumonia. Though the vaccines came from separate lots, indicating that contamination was not an issue, and despite the fact that pneumonia and bacterial meningitis are dreadful diseases, the news prompted the usual outpouring of vaccine denunciations.

Here in Houston the news produced claims that (a) the autism-derived and fact-free belief that children get too many shots or that they're given "too close together" means lazy doctors are to blame; (b) vaccines lower I.Q. and cause a 700% increase in cancer; (c) the U.S. Supreme Court has put us at the mercy of drug companies so that "you give those vaccines to your children at their peril"; and, (d) there's a dark conspiracy, in which the media is complicit, to bury stories about the harmful effects of vaccinations. See them all at "The MomHouston Blog".

You may not know people like these but they show up on your juries. Ignore them at your peril.

What is Cancer?

In toxic tort cases plaintiffs' attorneys and their experts tend to rely on one of two theories about the cause of cancer. The first is the "one-hit" model in which a single mutagenic molecule, particle or fiber causes DNA damage leading to a malignant cell that self-replicates uncontrollably. The second theory imagines that the damage leading to the malignancy is the result, somehow (the hypothesis is never set out in any great detail), of the cumulative effect of exposure to many molecules, particles or fibers. They say "it's like a glass of water that finally overflows when one last drop is added; each drop in the glass was a necessary cause of the overflow."

The one-hit theory is rolled out in low dose cases involving from one to a handful of exposure sources. Here the idea is that carcinogenesis is like playing the skull and crossbones lottery. The more tickets you buy (i.e. exposures you encounter) the more likely you are to wind up with the losing ticket. "All it takes is one bullet and they shot trillions of bullets at my client".

The cumulative dose theory is deployed when there are many sources of exposure and where those responsible for the biggest portion of the exposures are bankrupt or have already settled.  Here the idea is that once the individual's defenses are overtopped a malignant clone is born (initiation) or conditions for propelling the spread of an existing malignant clone are created (promotion). The most odious example of this argument was directed, despite my objection, against a client in an asbestos trial in state court in Galveston  - "It takes several men to have a lynching. One to hold the man, one to get the horse; one to get the rope, etc. They (meaning my client) want you to believe that each and every man in the lynch mob must go free just because the act of each man alone would not have resulted in my client's death. I know that's wrong and you know that's wrong!"

Either way, whether it's a matter of each cell playing the cancer lottery one molecule at a time or of each cell slowly filling its carcinogenic reservoir over the years, you'd think that the more cells you have in your body the more likely you'd be to hit the losing ticket or see a chemoprotective dam collapse. Even for cancers thought to be caused by mishaps during normal cell division you'd think that if you had a lot more cells you'd have a lot more opportunity for mishaps.
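To see why that intuition seems so compelling, here's the naive one-hit arithmetic as a toy calculation: if every cell division carries some fixed, tiny chance of producing a malignant clone, lifetime risk should climb steeply with cell count. Every number below (cell counts, division counts, the per-division probability) is invented purely for illustration.

```python
import math

# Naive "one-hit" expectation: if each cell division is an independent
# lottery ticket with per-division transformation probability p, then
# P(at least one hit) = 1 - (1 - p)^(cells x divisions).
def naive_lifetime_risk(n_cells, divisions_per_cell, p_hit):
    trials = n_cells * divisions_per_cell
    # Computed stably for tiny p via log1p/expm1
    return -math.expm1(trials * math.log1p(-p_hit))

p = 1e-16  # made-up per-division probability, for illustration only
mouse = naive_lifetime_risk(n_cells=3e9,  divisions_per_cell=100, p_hit=p)
human = naive_lifetime_risk(n_cells=3e13, divisions_per_cell=100, p_hit=p)
whale = naive_lifetime_risk(n_cells=1e17, divisions_per_cell=100, p_hit=p)

# Under this model, risk should explode with body size
print(f"mouse: {mouse:.2e}  human: {human:.2%}  whale: {whale:.2%}")
```

On these made-up inputs the model says a mouse should almost never get cancer while a whale essentially always should, which is precisely the prediction the next paragraph shows to be wrong.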

But you'd think wrong. People don't get cancer more often than mice and neither do whales - even though they (obviously) have a lot more cells and also live long enough to have them and their progeny divide many, many more times. See "The Mere Existence of Whales" and "Why Don't All Whales Have Cancer? A Novel Hypothesis Resolving Peto's Paradox". Hat tip Marginal Revolution.

So what's going on? Do bigger organisms have better cancer defenses? Does size confer some advantage as suggested by the hypertumor hypothesis? 

Maybe it's the underlying deterministic model that needs tweaking. Maybe cancer rates don't scale up with physical size because cancer is a system, or a subsystem, rather than a simple switch. Indeed there's a growing body of literature showing a tight association between reproductive optimization, energy availability, aging and cancer. Maybe the 30% cancer rate seen across mammalian species represents an evolutionarily determined risk-of-cancer/benefit-of-plasticity ratio that holds true from mice to whales.

If so, that would mean that we're programmed to run a high risk of cancer.  Not exactly the "cancer is a man-made problem" meme in which labor, environmentalists and their lawyers found a common purpose and a common tool.

Hammar Time

The Federal MDL's Magistrate Strawbridge has held that Dr. Hammar's opinion "that 'every occupational and bystander exposure to asbestos above background was a substantial contributing factor in causing [plaintiff's] mesothelioma' is sufficiently reliable to meet the admissibility standard of Rule 702 ..." See In Re Asbestos Products Liability Litigation Relating to: Anderson v. Saberhagen Holdings, Inc.

Because Dr. Hammar testified that his theory that every fiber is causative "could be (though it hasn't been) proven from carcinogenic theory (whatever that is) to a reasonable degree of medical certainty" and because "any of the occupational or bystander exposures could have been (though again they haven't been) sufficient to cause [plaintiff's] mesothelioma ..." his testimony ought not be excluded.

So Dr. Hammar gets to testify about legal duty (substantial factor causation) and base his opinions on unverified hypotheses. Oh, and in another case, plaintiffs get to recover money from the trust funds and then deny that their application forms establish substantial factor causation. See In Re Asbestos Products Liability Litigation Relating to: Taylor v. Lucent Technologies NC.

Wow. Just wow.

Cell Phones, Rat Whiskers and Glucose Metabolism

The NYTimes set off another temblor in the interwebz with its story about a study showing that among a few dozen subjects with cell phones strapped to their ears those with an active set emitting microwave radiation showed approximately a 7% increase in brain metabolism of glucose in the area of the brain nearest the antenna. (The NYTimes' article came out the day before the paper was published and was full of quotes from long time cell phone worrywarts - both are obvious red flags). Anyway, the story quickly became the paper's most popular, and soon there were claims that irrefutable "proof" that cell phones affect the brain is now available and that "biologic plausibility" - thanks to a couple of quotes in the NYTimes' article amounting to nothing more than rank speculation - is similarly established.

That living cells respond and adapt to their environment should not come as news to anyone yet it always does. So, for those surprised to learn of it, the finding that cells respond and adapt to microwaves (assuming the observation is confirmed) is in and of itself no biggie. Now about the two proposed mechanisms whereby increased glucose metabolism might lead to brain cancer.

One idea advanced was that extra metabolism might generate extra free radicals (molecular boogeymen, to protect against which people have variously, and at best ineffectually, overdosed on vitamins). The other is that the extra metabolism might set off an inflammatory response (chronic inflammation being implicated in some forms of cancer). So the question that occurs is whether there are other forms of stimulation that set off increased brain glucose metabolism and, if so, whether any of them have been implicated in brain cancer.

As luck would have it there's a new study that answers part of the question. The title is entirely too long to type as I'm heading out the door so here's the link. Now if, by the same analysis (stimulation and PET scan), it turns out that stroking the whiskers of a rat produces an even bigger increase in brain glucose metabolism (which is exactly what the paper demonstrates) does it follow that one should hereafter forego stroking one's whiskers (or those of a loved one) lest the doing of it cause brain cancer?


Cherry Picking on My Cherry Coke

Today's scare du jour was just launched by the Center for Science in the Public Interest. They claim that the caramel coloring in Coke (and in dark beer and lots of other good stuff) is carcinogenic and ought to be banned. See "FDA Urged to Prohibit Carcinogenic 'Caramel Coloring'".

The claim can be summed up as follows: industrial caramel is unnatural and the product of scary-sounding processes involving scary-sounding chemicals; one of the resulting constitutive chemicals, 4-methylimidazole, has been found "in significant levels" of five brands of cola; 4-methylimidazole causes cancer in lab rodents; therefore, my Cherry Coke is a cancer hazard. Is there anything to it?

Well, sure enough there's a study of lab rats and mice that found small increases in the risk of lung cancer and leukemia that grew as doses increased (the rodents got the equivalent of thousands of cans of cola per day worth of 4-methylimidazole). See "Toxicity and Carcinogenicity Studies of 4-Methylimidazole in F344/N Rats and B6C3F1 Mice". But something else very interesting happened along the way to a good health scare - something not mentioned by the CSPI.

It turns out that while there were small and at best equivocal indications that 4-methylimidazole might be associated with one or two rodent cancers there were big, statistically significant and dose-dependent associations between cancer prevention and 4-methylimidazole consumption. For example, compared to the rodents not given 4-methylimidazole the female rodents drinking cola by the barrel were essentially completely protected from mammary tumors as well as a host of other cancers. Overall, rodents on a cola binge experienced a greatly reduced risk of many cancers and saw some tumor rates reduced by orders of magnitude compared to their cousin rats and mice not given 4-methylimidazole.

There was no call for research into the protective effects of caramel coloring. The great big silver lining wasn't even disclosed. Instead, the two insignificant bits of data showing a small risk of tumors in rodents were cherry picked from the forest of data and the big effect, a cancer-protective effect, was completely ignored.

I'll go out on a limb and predict that this scare, like the CSPI's acrylamide-in-bread-chips-and-roasted-coffee-is-going-to-give-everybody-cancer scare, is also headed for the dustbin of history.


The Man Who Didn't Fall

On the matter of negligence in personal injury cases the first draft of the Restatement (Third) of Torts eliminated the element of duty altogether. After an uproar the authors eviscerated duty and stuck back in what was left. Ultimately, while they failed to restate the law as it pertains to duty they did manage to restate the era of unjustified fears and risk aversion that has persisted since about the time the Restatement (Second) was published.

The Restatement (Third) takes the position that case law as well as scholarship pertaining to duty is largely incoherent. The essence of the claim is that limits to liability couched in terms of relationships or foreseeability are nothing but ad hoc justifications for taking the real question, whether defendant breached a duty of reasonable care owed to society, out of the hands of the jury where it belongs. Thus, dismissing as insupportable judicial inquiries about the connectedness of plaintiff and defendant (the relational approach) as well as those as to the proximity of cause and effect (proximate cause), the Restatement (Third) of Torts, Liability for Physical and Emotional Harm Section 7(a) reads "[a]n actor ordinarily has a duty to exercise reasonable care when the actor's conduct creates a risk of physical harm." It goes on to add that save in "exceptional cases" judges "need not concern themselves with the existence or content of this ordinary duty." In other words, once it has been established that a defendant created a risk the court's work regarding duty is finished and the question of whether defendant's conduct was reasonable is exclusively for the jury to decide.

That's a very long way from either position taken in Palsgraf v. The Long Island Railroad Company. On one side Chief Justice Cardozo for the majority wrote "[t]he risk reasonably to be perceived defines the duty to be obeyed, and risk imparts relation; it is risk to another or to others within the range of apprehension". On the other Justice Andrews dissented writing both that "[e]very one owes to the world at large the duty of refraining from those acts that may unreasonably threaten the safety of others" and yet that "there is one limitation (on liability). The damages must be so connected with the negligence that the latter may be said to be the proximate cause of the former." He continues "[w]hat we do mean by the word "proximate" is, that because of convenience, of public policy, of a rough sense of justice, the law arbitrarily declines to trace a series of events beyond a certain point."

Cardozo marks the boundary of liability by a circle of close if fleeting relationships while Andrews bounds it within a circle of causes that are close to the injury producing event. Whether measured by the remoteness of the relationship or of the cause from effect both sides agreed that there was a limit to the duty of reasonableness even when a risk has been created. It's that idea of a limit on the duty of reasonableness that was cut out of duty in the Restatement (Third). Consequently foreseeability, the catchall for the various efforts to limit the scope of the duty of reasonable care, is said to have been purged from duty.

The objection that the collection of limitations on the duty of reasonable care (whether Cardozo's or Andrews' or the many iterations based on foreseeability) is incoherent is based, I suspect, on an understandable misunderstanding of what these jurists are trying to say. What I think they've been trying to say is that risk is an inevitable part of life and that some risks are so small that liability isn't warranted even if an injury should follow their creation.

But foreseeability doesn't sound much like risk. Isn't it about predicting the future; about foretelling future events based on what's already known? Yes, but that ex ante calculation of the effects that likely follow causes is what risk is all about.

But why limit liability by the degree of risk? And if a boundary is drawn how can it be done other than arbitrarily? Isn't it better that we follow the Precautionary Principle? Aren't we Addicted to Risk? Wouldn't the world be a better place if we quit taking risks?

To answer those questions let's go back to the railroad guards who created the risk that caused Ms Palsgraf's injury. As a train began to leave the station two men ran to catch it. One safely boarded but the other, "carrying a package, jumped aboard the car, but seemed unsteady as if about to fall. A guard on the car, who had held the door open, reached forward to help him in, and another guard on the platform pushed him from behind. In this act, the package was dislodged, and fell upon the rails." What might the future have looked like had the guards taken the time to consider all of the possibilities that might follow from their effort to keep the man from falling? What might the future have held for him assuming gravity acted upon him as it did upon his package? And what about the rest of us, if inaction were the only way for our fellow citizens to avoid being hauled into court?

For several decades now the merchants of fear have been telling us that everything we thought was progress is instead the cause of human suffering. Vaccines? Autism or worse. Electricity?  Electromagnetic fields, migraines, MS, cancer. Internal combustion engines? Global warming, pm2.5, etc. Plastics? Endocrine disruption, heart attacks, cancer. Pesticides (e.g. DDT) and herbicides? Sterility, cognitive deficits, cancer, overpopulation. Cars? Runaway acceleration, rollovers, fireballs. Computers? ADHD, too many choices, too little control. Shouldn't we have waited until all the kinks were worked out before acting? Click on the vaccines link in this paragraph and also consider the price of fear of regret as expressed by Benjamin Franklin.

It is, I'm afraid, a duty to stop and not act until you've considered all the possible consequences of every action, however improbable, that the Restatement (Third) ultimately embraces. It is an embrace of the Precautionary Principle - an embrace therefore of a political viewpoint; not a restatement of the law regarding duty.

When you think of foreseeability as risk you see what the law really has to say about the duty of reasonable care and the analysis of any claim that it was breached. What it truly says is that the court, taking an ex ante perspective, is to decide whether the risk created was one for which, if found unreasonable and causative by a jury, liability may fairly be imposed. Thereafter, for any act amenable to liability, it's for the jury to express their community's tolerance for risk. Risk, or foreseeability, is thus sensibly in two places. First in the law where the limits of liability are drawn and thereafter with the fact finder who considers the reasonableness of the conduct given the context in which the various factors played out. 

Is there some bright line that can be drawn? Not always, though we're certainly arguing down here that in cases where good quantitative risk assessment is available it can be. For example, in one case involving a minute exposure to a carcinogen we can take the plaintiff expert's epidemiology studies and show that the exposure his industrial hygienist calculated produced, at most, a 1 in 13 million risk of death. Compared to other known risks, you're much more likely to slip and fatally hit your head on the toilet than to die from the exposure of which plaintiff complains. Our argument is that it makes sense, then, to have the law, i.e. the court, set some reasonable outer limit on liability - perhaps at the 1 in 1 million level. Otherwise we either open up toilet makers and everyone else to ruinous liability for creating risks running towards the impossible end of the spectrum, or we decide, without any sound reason, that some one in a million risks of death are fine while others aren't.
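The screening argument sketched above is simple enough to put in a few lines of code. The 1-in-13-million figure is the one from the case described; the cutoff value and the helper function are hypothetical, offered only to show the shape of a de minimis rule.

```python
# Toy de minimis screen: compare the lifetime risk calculated by the
# plaintiff's own expert against a hypothetical court-set outer limit
# on liability (1 in 1,000,000 here, per the example in the post).

DE_MINIMIS = 1 / 1_000_000  # hypothetical judicial outer limit

def actionable(lifetime_risk, threshold=DE_MINIMIS):
    """Return True only if the alleged risk exceeds the outer limit."""
    return lifetime_risk > threshold

claimed_risk = 1 / 13_000_000  # the expert's calculated risk of death

print(actionable(claimed_risk))  # the claim falls well below the cutoff
```

The point of the sketch is that once a numerical cutoff is chosen, the court's gatekeeping question becomes a one-line comparison; the hard part, of course, is choosing and defending the cutoff.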

Unsurprisingly it didn't take long for the simplicity and predictability arguments advanced in support of the new and hollow version of risk to run off the rails. Consider Feld v. Borkowski. Iowa's supreme court eagerly adopted the new conception of duty in 2009  (see Thompson v. Kaczinski) only to start tortuously creating exceptions to it less than a year later. (Slow pitch softball is a contact sport? Who knew? By the way, with all due deference to the Creighton coach I suspect angular momentum and early application of torque rather than recklessness answers the question). Rather than clarifying duty I'm afraid the Restatement (Third) has only created a risk of even more obscure and incoherent formulations of exceptions so as to avoid the consequences of duty without limit.

Anyway, whatever the facts and whatever the content of duty courts will in the end have to recognize the wisdom of another famous American, Henry David Thoreau, who wrote "A man sits as many risks as he runs." And that "if a man is alive, there is always a danger that he may die ..." It's exactly that essence of the inevitability of risk that the courts, if not the Restatement (Third), have been trying to express when they talk about duty.




Does Recall Bias Explain Past Associations Between Pesticides and Parkinson's?

How do people's memories of pesticide exposures correlate with industrial hygiene estimates of those exposures? Not so well. In fact it's pretty clear that a lot of people with Parkinson's assume that chemicals caused their illness and so are primed to remember past high exposures that had not in fact occurred. For a well done paper showing no association between pesticides and Parkinson's plus a great discussion of recall bias see: "Pesticide Exposure and Risk of Parkinson's Disease - A Population-Based Case-Control Study Evaluating the Potential for Recall Bias".

Those looking for the real cause of the increase in risk of Parkinson's among those involved in farming should pay attention to the endotoxin discussion. I'll check the studies that show endotoxin may protect against lung cancer in cotton textile workers to see if there's any hint of a Parkinson's excess and report back.

Breast Implants and Cancer

I thought it was no biggie when the FDA sent out an email late Wednesday morning saying that an extraordinarily rare malignancy, anaplastic large cell lymphoma (ALCL), had been associated with breast implants. A variety of implants, mainly orthopedic devices, have long been associated with certain rare cancers. Since the site of the cancer tends to coincide with the site of complicating surgical infections it has been thought that an infectious agent was responsible. See e.g. "Soft Tissue Anaplastic Large T-Cell Lymphoma Associated With a Metallic Orthopedic Implant: Case Report and Review of the Current Literature".

A quick review of PubMed showed that concern over ALCL and breast implants had been around for years. See e.g. "Anaplastic Large-Cell Lymphoma in Women With Breast Implants" (free in JAMA) published in 2008. So I went looking for something else to post on. Then, on tonight's 10 o'clock news here in Houston, one of the local stations led off with a story about the late John O'Quinn's litigation against Dow Corning and his claims that silicone implants caused autoimmune disorders and cancer. They made it sound as though O'Quinn had somehow been vindicated by today's FDA press release. Then they went out and found some sympathetic woman who had recently had a radical mastectomy followed by breast reconstruction and asked her what she thought about the "new report on breast implants and cancer". To her everlasting credit she said she was happy with her decision and was confident that she'd made the right one.

ALCL is not breast cancer and the odds of getting it, assuming the association is confirmed (and there is indeed an awful lot of evidence showing that in the areas around implants, whether silicone or metal, where infections can set in, cancers can sometimes follow), are about 1 in 900,000. The odds, by the way, of drowning in your bathtub are significantly higher - somewhere around 1 in 660,000.

The media could have focused on the story of the mounting evidence for a link between pathogens and cancer. Instead they seem to have resorted to a long since discredited narrative about breast implants. It's too bad because the real story is the story of our generation.

A Disingenuous Take On The Vaccine-Autism Fraud

The British Medical Journal has just published an editorial titled "Assuring Research Integrity in the Wake of Wakefield" that addresses what has finally been revealed to have been an elaborate fraud concocted by a scientist and some personal injury lawyers in an effort to launch a mass tort. Unfortunately, rather than addressing the real problem (which is that the majority of the published peer-reviewed papers purporting to find an association between some drug or exposure or gene and a disease are probably false) the authors of the editorial reference a handful of ethical lapses spaced about twenty years apart and ask "[h]ow could this happen again?"; implying rather obviously that scientific fraud is almost as rare as Piltdown Man but nonetheless something about which the academy ought to be vigilant lest the public lose faith in "science".

They conclude with "We must transcend traditional hierarchies and authority gradients to empower everyone in the research enterprise ... to raise questions and 'stop the line'." I've no idea what the first part means though it sounds suspiciously like something out of "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity". The latter part, on the other hand, quite inadvertently I assume, manages to expose the real problem with today's "research enterprise". It refers to the ability of factory workers on an automobile assembly line to halt the process when they detect a problem rather than having to wait until a supervisor calls for a stop. Focus then on the idea that many "researchers" aren't involved in the process of discovery or even design. Rather their part is played down on the assembly line of the Science factory - manufacturing the same sort of science; shift after shift, day after day, year after year. Their job is to identify anything that might throw a wrench in the works or cause the product to be defective and thus rejected by the customer, typically the government, industry or an NGO; to repair or engineer around it; and to keep the line running.

The authors' concern then is with the process and not the product. But of course, if you've been paying attention, you know by now that the product is the real problem. In studies like Wakefield's, in which statistics are trotted out to test hypotheses, the "science" is probably wrong even if the researcher isn't consciously cooking the books in order to gin up a mass tort. Read and re-read "Odds Are, It's Wrong" from ScienceNews. Let the following quote sink in: "There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims."

Whether or not the product of the Science factory is worth its price, or indeed worth anything at all, it keeps on coming in an ever increasing torrent. Take for example genome-wide association studies (GWAS) - one of the most notorious examples of too often useless "research" produced assembly line style. (Note there are efforts to improve it. See e.g. "A Knowledge-Based Weighting Framework to Boost the Power of Genome-Wide Association Studies".) A quick search of PubMed reveals that 45 new genome-wide association studies have rolled off the line in just the last week. That's great news if you're in the business of selling SNP chips to research universities and a sign of a boom (or bubble) in the fortunes of researchers. And maybe it's even good for the economy - being, after all, a form of digging holes and filling them back in. But where does it end?

Ponder the following from "The Future of the Research University", written in 1997: "We need to think seriously, within the community of research universities, about whether we are producing too many Ph.Ds. This is a controversial question, with different answers in different scholarly disciplines, but the general conclusion seems inescapable: The mathematics of exponential growth - each professor producing numerous Ph.Ds who become professors who produce numerous Ph.Ds, etc. - has caught up with us."

With that exponential growth in Ph.Ds desperate for something to research and something to publish, it's no wonder that so many turn to statistical tools which, when rigorously and repeatedly applied to any mound of data, will inevitably produce a publishable, statistically significant, though often false, result. And despite concern that the exponential growth in the number of Ph.Ds "has caught up with us", there's no sign that the factory is cutting back on the number of shifts. Instead, more factories are being built. Indeed, what got me thinking about this was a huge billboard on I-45 announcing that what was once the humble but excellent Teachers' College is now itself a Research University! Sure enough, a stroll through their website reveals that they've bought a great pile of extremely expensive analytical equipment and will soon be adding to the mountain of Science being manufactured. Everybody, it sometimes seems, is getting in on the "research enterprise".
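That guarantee is easy to demonstrate for yourself. Here is a minimal simulation (entirely hypothetical data, Python standard library only, and a crude z statistic rather than any particular study's method): run significance tests across a mound of comparisons in which no real effect exists anywhere, and roughly one in twenty will come up "statistically significant" anyway.

```python
import random

random.seed(1)

def z_two_sample(a, b):
    """Crude two-sample z statistic for equal-sized groups."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# 1,000 comparisons, each of two groups of 30 drawn from the SAME
# distribution - i.e., every null hypothesis here is true.
hits = 0
for _ in range(1000):
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if abs(z_two_sample(a, b)) > 1.96:  # "significant" at the 0.05 level
        hits += 1

print(hits, "publishable 'discoveries' out of 1,000 truly null comparisons")
```

Every one of those "discoveries" is a false positive by construction; sift a big enough mound of data and the method hands you something to publish.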

The point then is that research has become an industry, and an enormous one at that. Best then to stay as skeptical of Big Research and Big Research Publication (note the circle-the-wagons approach of the Lancet's editorial board when they first got a whiff of the fraud) as you are of Big Corporation. And best then as well to train your attention on the product of factory Science even if the process by which it was made is sound. Remember, it's generally not enough to say simply that it was made According to Plan.

El Dorado's Lost City of Uranium and Good Health?

In epidemiology, whenever a lower incidence of cancer and a lower rate of mortality from it occur in a population commonly assumed to be at risk, cognitive dissonance is always lurking. What generally happens is that the good news is shrugged off as "the healthy worker effect" and epidemiologists resolve to re-sift the data in order "to get the right answers". The "right answers" of course are often only those that support our preconceptions.

That means there aren't many people willing even to consider the possibility that working in a chemical plant, or going off to war, or spending a career mining and processing uranium while being exposed to low levels of gamma radiation might actually confer a health benefit. Nevertheless, the so-called healthy worker effect (called the healthy warrior effect for those who served in the armed forces) appears so commonly and across so many trades that you have to wonder if something besides simply screening employees for good health is at work. If you're interested, here are three studies worth pondering.

In this month's issue of Occupational Medicine you'll find an excellent discussion in "The Healthy Worker Effect in US Chemical Industry Workers". The study was of thousands of Dow Chemical employees - three million years' worth of employment combined. The overall risk of death from any cause was lower than expected, as was the risk of dying from nine types of cancer thought to be related to smoking. Nevertheless the authors, an epidemiological team working for the Dow Chemical Company, concluded that the findings were likely due to the healthy worker effect, though with a caveat. Some have suggested that the healthy worker effect arises because employers dismiss employees with health problems. However, in this study the health outcomes of those employed for a decade or more were the same as those of employees who didn't last very long with the company. The finding thus suggests that the presumed healthy worker effect was generated as each prospective employee was considered for employment, such that workers destined to get cancer decades later somehow were never hired in the first place - making it in fact a "healthy hire effect".

For another example see "Psychiatric Diagnoses in Historic and Contemporary Military Cohorts: Combat Deployment and the Healthy Warrior Effect". Despite some claims in the media that might make one assume otherwise, those who serve in the military see lower than expected numbers of ailments, including psychological ones. Here the suggestion is that those prone to psychological illness are screened out before enlistment, since what risk there was of developing psychological issues after combat was concentrated in those with preexisting mental health problems.

Finally, published last month in the journal Radiation Research, there's "Mortality (1950 - 1999) and Cancer Incidence (1969 - 1999) in the Cohort of Eldorado Uranium Workers". With the exception of lung cancer incidence and mortality which demonstrated a small increase in risk, the Eldorado uranium miners managed to have significantly lower risks of dying from any cause, lower risks of dying from all cancers combined (lung cancer included) and a lower risk of developing any type of cancer cumulatively (lung cancer again included).

Now be honest. If someone had asked you yesterday to pick which group - (A) uranium workers, or (B) average Canadian males - was likely to have the much larger risk of getting cancer and of dying of cancer, which would you have chosen?

So what's at work here? Is it simply that those prone to developing cancer in the distant future are somehow weeded out and never hired in the first place? Or does it perhaps have something to do with the nature of blue collar employment over the last 50 years? To me it all looks a lot like the compliance effect - the phenomenon whereby, for example, those who lead very ordered and structured lives, and who thus always take their placebo at the appointed time, manage not only to do better than those less disciplined and also on a placebo, but oftentimes better even than those less disciplined who at least irregularly take real medication. So I'll go out on a limb and guess that, the claims of toxic tort plaintiffs notwithstanding, large employers engaged in manufacturing not only didn't shorten the lives of their workers, they lengthened them by imposing the very order and rigidity about which so many bitterly complain in their depositions.

Of Flame Retardants, Autism and Skepticism

Last October some scientists got together in San Antonio to discuss the potential hazards of flame retardants. They wound up signing the "San Antonio Statement on Brominated and Chlorinated Flame Retardants", co-authored by Ake Bergman. Their claim is that the flame retardants are bio-persistent, toxic (especially when burned), harmful to neurological development in children and, because of their use in electronics (including housings), effectively being dumped in third world countries where the products are recycled.

Bergman was interviewed for The Researcher's Perspective and you can read or listen to the interview at EHP in "The San Antonio Statement, with Ake Bergman". Bergman is quoted as saying of flame retardants "they are acting in a similar way than [sic] the other chlorinated compounds are, which is leading to a number of effects - for example, cancer risks; endocrine-disrupting properties of the chemicals, we have reproductive effects of the chemicals; and not the least, the neurodevelopmental effects that they cause, and for the neurodevelopmental we are talking about young children, the newborns, being affected." He goes on to say that he hopes that five years hence such flame retardants will all be banned and he says that "it's ridiculous to learn that you have nursing pillows with flame retardants..." Pity the poor maker of nursing pillows. A dropped cigarette or a pillow set too close to a space heater and woe be to the manufacturer who made it of cotton and didn't soak it in flame retardants.

Anyway, what should we make of the claim of neurodevelopmental effects? Well, as fate, or luck, would have it Dr Bergman has just co-authored a newly published study titled "Polybrominated Diphenyl Ethers in Relation to Autism and Developmental Delay: A Case-Control Study". The data from the study show that there is no relationship between PBDEs (the flame retardants at issue) and autism. In fact, the researchers managed to find that children suffering from neurological development delay were the ones with the lowest exposures to flame retardants.

So what did the researchers have to say about these findings? See pages 16 - 20 of the paper. Rather than simply reporting "we found no association between neurological development and exposure to flame retardants" the authors spend five pages saying why their exposure data is probably wrong and why even more studies of flame retardants and neurological development are needed.

Wouldn't it be nice, just once, if a scientist found an association between a chemical and some hot button disease and she spent her entire Discussion and Conclusion pointing out the reasons not to panic and not to jump to conclusions? That it doesn't happen when associations are found but does happen when the null hypothesis is confirmed says it all.

Nulliparous Plaintiffs, Fault and Causation

It has been known for a couple of decades now that women who never have children (i.e. women who are nulliparous) and women who do have children but not until they are 30 or older suffer a striking increase in their risk of developing breast cancer. The evidence for the association between breast cancer and never giving birth, or delaying having a child, continues to accumulate, and now it appears that the increased risk is focused on hormone receptor-positive breast cancers. See "Associations of Breast Cancer Risk Factors With Tumor Subtypes: A Pooled Analysis From the Breast Cancer Association Consortium Studies" in the current issue of the Journal of the National Cancer Institute. So let's say you've got a nulliparous plaintiff alleging that your drug or device or chemical caused or accelerated her hormone receptor-positive breast cancer; how do you handle her status?

The first problem a defendant faces in such a case is the risk of inadvertently wandering into the minefield called "blaming the victim". The plaintiff has either freely made a choice or has tragically been unable to have a child. Either way the jury will react strongly and negatively to any discussion about parity status and causation that makes even the slightest trespass into the issue of fault. Keep the discussion limited to risk factors and their relative potency. But that leads to another problem.

In some of the jurisdictions in which I practice, plaintiff's counsel will successfully argue to the trial court that only evidence about the actual cause of plaintiff's injury is admissible. In other words, unless my expert is prepared to say e.g. that "to a reasonable degree of medical probability plaintiff's breast cancer was caused by her not having children when she was young", testimony about "mere risks" is irrelevant and so inadmissible. The practical effect of such a ruling is that only junk science is admissible on the issue of the actual cause of plaintiff's cancer, since my experts tend to be modest about the claims science can make regarding the cause of any individual's cancer. We're stuck then trying to prove a negative, showing we acted reasonably and preserving error.

In this age in which much that was certain (e.g. that we've conquered infectious diseases) is proving not to be so, it's time, I think, for courts to recognize not only that the reasonableness of actions can fairly and effectively be judged according to the risks they conferred, but also that causation is in many cases most precisely weighed when competing risks are allowed to be compared against one another.

Finally, and hopefully still on topic, for more evidence of the complexity of causation see "Does Pregnancy Provide Vaccine-Like Protection Against Rheumatoid Arthritis?" Why would pregnancy protect against auto-immune disorders and what's the connection with breast cancer? There are a variety of hypotheses offered but so far no one knows.


"[E]rroneous. But Unfortunately, It Permeates the Medical Research Community"

What is it? It's the view that "when a p-value is ≤ 0.05 then there is sufficient evidence that the drug works." In what I do it's also the claim that low p-values prove so-called general causation in a toxic tort case. The quotes come from a wonderfully readable paper just published in the Journal of the National Cancer Institute. It's: "Demystify Statistical Significance - Time to Move on From the P Value to Bayesian Analysis".

I particularly liked this: "Statistics in medicine has passed through its infancy and childhood. As it moves into its adolescence, the growing pains of reconciling frequentist and Bayesian views continue... Although the frequentist paradigm has been widely applied and is deeply rooted in medical research, it is time to move on and look for a better solution."
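To see the two paradigms disagree in miniature, here is a sketch (hypothetical data, not from the paper; Python standard library only) of both judging the same binomial experiment - 115 successes in 200 trials - against a null of no effect. With a flat prior on the alternative, the Bayes factor can favor the null even when the p-value slips under the magic 0.05 line.

```python
import math

def binom_pmf(n, k, p=0.5):
    """Exact binomial probability of k successes in n trials."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

n, k = 200, 115  # hypothetical experiment: 115 "successes" out of 200 trials

# Frequentist: exact two-sided binomial test of H0: p = 0.5
tail = sum(binom_pmf(n, i) for i in range(k, n + 1))
p_value = min(1.0, 2 * tail)

# Bayesian: Bayes factor comparing H1 (p uniform on [0,1]) to H0 (p = 0.5)
m_h0 = binom_pmf(n, k)   # probability of the data under the null
m_h1 = 1 / (n + 1)       # marginal probability of the data under a flat prior
bf10 = m_h1 / m_h0       # >1 favors H1, <1 favors H0

print(f"p-value = {p_value:.3f}")  # below 0.05, so "significant"
print(f"BF10    = {bf10:.2f}")     # below 1: the data mildly favor the null
```

Same data, opposite verdicts - the frequentist declares a discovery while the Bayes factor says the null remains the better bet. (The flat prior here is one arbitrary choice among many; the point is structural, not numerical.)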

The frequentist paradigm spawned most of the toxic tort litigation of the last few decades and it also provides an easy way to gin up grant money for researchers and funds for "consumer research" so don't expect it to yield to reason without a long and bitter fight. I predict the Mt Sinai and Collegium Ramazzini folks will man the ramparts of the frequentists' final redoubt.

That having been said, perhaps the legal empiricists, learning from medicine's experience with touching hot stoves, can avoid the pain and skip straight to Bayesian analysis. Yeah. Right.

Anyway, expect the fight over whether a drug should or shouldn't be approved, or should or shouldn't have been approved, to move on to Bayesian grounds.


A Cautionary Tale For Legal Empiricists

Having discovered statistical significance testing and thus p-values, (far too) many legal scholars have set about trying to uncover previously unknown first principles of tort law, otherwise unrecognizable causal relationships between stock markets and telecom regulations, and the secret biases of judges they don't like. Running a statistics app over the top of a bunch of numbers is easy these days. Anyone who can operate a laptop can do it. But finding someone who knows what's going on behind the overlay and why is as rare as finding someone who knows what's going on behind the Windows operating system and why.

Indeed, if you don't understand the bundle of assumptions, limitations and trade-offs that underlies statistical significance testing, especially when it's being misused as a form of semi-automated induction, you can wind up proving that ESP is real. That's the real moral of the NYTimes' story "Journal's Paper on ESP Expected to Prompt Outrage" and it's why the publication of the paper discussed in the story is so deliciously subversive. I'm quite certain that a large part of the outrage isn't that junk science is being published - everybody already knows that most published scientific findings are false anyway. Nor is it that people are going to realize that the peer-review process does not stop obviously untrue conclusions about how nature works from being published, and was never designed to do so (so long as the methods appear sound and the conclusions seem to follow from the data) - most people know that too. No, the real outrage, I'd wager, comes from those who aren't happy that word has gotten out that statistical significance testing, particularly in the social sciences, can be used to "scientifically prove" whatever you want to prove.

This week's NYTimes' Science and Health pages have surely been an eye-opener for those who think that what gets published in science journals is at least probably true. They've covered the XMRV controversy (about a made-up cause of a made-up disease), the debate over whether the autism-vaccine hoax was junk science or outright fraud and now the scientific peer-reviewed evidence that extra-sensory perception, at least when it comes to "erotic images", is real. If nothing else it may at least cause some people to be a bit more skeptical about science, even if it is peer-reviewed.

For those of you defending mass tort cases, or handling any case where significance testing raises its head, be sure to read "An Assessment of the Evidence for Feeling the Future With a Discussion of Bayes Factor and Significance Testing", linked at the end of the Times' story. Remember always that when using such frequentist techniques, say in a chemical plant styrene study, you can never prove that styrene doesn't cause cancer - even if it doesn't - but if you collect enough data and test it enough you're guaranteed to eventually come up with some scientific evidence, evidence that will pass peer review, showing styrene does cause some form of cancer even though it doesn't. The best things about the paper are the graphs. They're at the very end, though referenced early on. If you don't want to wade through it, here's the conclusion: "Our main methodological concern is that inference by p-values fails to seriously consider the null hypothesis as a viable possibility. Consequently, researchers who use it are apt to reject the null on the basis of insufficient evidence. We recommend that researchers adopt Bayes factor methodology, because this approach provides a rational and consistent assessment of the relative evidence between any two hypotheses" (citing Edwards, W., Lindman, H., & Savage, L. J. (1963). Bayesian statistical inference for psychological research. Psychological Review, 70, 193-242).
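The "collect enough data and test it enough" guarantee can be simulated directly. In the sketch below (hypothetical null data, standard library only), each simulated study measures an effect that does not exist, but the analyst re-tests after every new batch of observations and stops the moment p dips below 0.05 - a practice known as optional stopping. Far more than the nominal 5% of these truly null studies end in "significance".

```python
import random

random.seed(0)

def one_sample_z(xs):
    """Crude one-sample z statistic against a true mean of zero."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m / (v / len(xs)) ** 0.5

found = 0
for study in range(200):                      # 200 simulated null studies
    xs = [random.gauss(0, 1) for _ in range(10)]
    for _ in range(50):                       # peek after each batch of 10
        if abs(one_sample_z(xs)) > 1.96:      # nominal 0.05 threshold
            found += 1                        # "significant" - stop, publish
            break
        xs.extend(random.gauss(0, 1) for _ in range(10))

print(f"{found} of 200 truly null studies reached 'significance' by peeking")
```

Compare that count with the roughly 10 of 200 the 0.05 threshold is supposed to allow. Peeking at the data and stopping when the answer looks right quietly inflates the false positive rate severalfold.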


Wealthy and Healthy

Wealth strongly and consistently correlates with good health and an overall reduced mortality risk. A new paper summarizing past research and presenting new data that confirm the link has just been published in the American Journal of Epidemiology. It's titled "Long-Term Effects of Wealth on Mortality and Self-rated Health Status". The study focused on self-reported perceptions of health status, and the results mirrored those of the objective measure, mortality: socioeconomic status is a good predictor of health status.

Why do the least wealthy (aka the poor) tend to be the least healthy? Those pushing the cause of so-called environmental justice claim that the poor, who unsurprisingly live in the cheapest and thus least pristine areas, are exposed to toxic chemicals, electro-magnetic fields and ionizing radiation at levels far higher than the rich (who tend not to build their mansions next to refineries) and that such exposures are to blame. Others, the "real food" activists, claim that the poor live in a junk food environment and, the W.I.C. program notwithstanding, have not the means to come by nutritious food. Still others claim that the poor suffer from inadequate health care. There is though another reason the poor might be so afflicted. It was best stated by a now deceased safety man who spent his career with one of the oil companies in Port Arthur, TX as follows: "Poor folks got poor ways."

We were at the deposition of a retired refinery worker who was suffering from leukemia and who was giving a deposition before he passed away, to be used by his wife in a subsequent gross negligence case against his and the safety man's employer. The deponent was asked by plaintiff's counsel, "If you'd have known about the dangers these chemicals like benzene posed would you have come to work for xxxx Oil Company?" The man, obviously prepped for the question, answered "No way. I'd 'a stayed in the piney woods loggin' like my daddy done. It might not 'a paid as much but I wouldn't 'a got this cancer". The safety man leaned over and said "Yeah, and he'd have died of cirrhosis at 48 just like his daddy done."

That old safety man went on to explain that the men who came out of the woods, and the cane fields and off the pogy boats to work hourly at those refineries often signed their job applications with an "X" but they wound up solidly in the middle class and their children or grandchildren made it to college and beyond. "Some couldn't even write their name when they got here and nobody ever went back to bein' a gawddamned logger from the xxxx Oil Company."

The epidemiological studies of those refinery work forces have repeatedly found that overall the men lived significantly longer, were significantly healthier and had a lower risk of all cancers combined than similar men in the general U.S. population - despite exposures to asbestos, benzene, butadiene and the like. Compared to loggers, boat crews and farm laborers, the average refinery worker bought himself six more years of healthy life just by going to work for better pay in an environment where middle class values were the norm.

Are Jurors Getting Better at Spotting Junk Science?

The answer is: maybe, but only when it comes to the really obvious stuff.

In this month's issue of Law and Human Behavior there's a new paper reporting on a study designed to test jurors' ability to discern and deal with bad science. It's titled "I Spy with My Little Eye: Jurors' Detection of Internal Validity Threats in Expert Evidence". The researchers hypothesized that mock jurors could spot glaring flaws but not subtle though equally fatal ones. They also hypothesized that jurors would, when unable to judge the soundness of a scientific paper, use its publication status as a sort of seal of approval for its validity. Their results confirmed their first hypothesis but partially rejected their second, and along the way seemed to confirm the view that the less people understand about science and the scientific method, the less impressed they are with publication status. Indeed, unpublished science may well be viewed as "cutting edge" and so, like soap powder, "New! And Improved!"

It's a well done paper, infinitely better than so much of the Empirical Legal Silliness out there. It also provides a measure of hope about jurors' ability to deal with scientific issues. For example, these mock jurors were able to identify and discredit a study without a control group. Using data demonstrating, e.g., that 18% of type II diabetes patients taking drug X had heart attacks or strokes over the following decade to show that a particular plaintiff's heart attack was caused by drug X is of course just a version of the post hoc ergo propter hoc fallacy - unless there's some similar group of type II diabetics not on drug X who didn't suffer such a high rate of heart disease and stroke.
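The missing-control-group point fits in a few lines. All numbers below are illustrative, built on the text's hypothetical 18% figure: standing alone, 18% tells you nothing about drug X; what matters is the ratio of the exposed group's risk to an unexposed baseline.

```python
def risk_ratio(exposed_events, exposed_n, unexposed_events, unexposed_n):
    """Risk in the exposed group divided by risk in the unexposed group."""
    return (exposed_events / exposed_n) / (unexposed_events / unexposed_n)

# 18% of diabetics on drug X had heart attacks or strokes over a decade...
# ...but if 18% of comparable diabetics NOT on drug X did too:
print(risk_ratio(18, 100, 18, 100))  # 1.0 - no excess risk to attribute to X

# Only an elevated ratio even begins to suggest an effect:
print(risk_ratio(18, 100, 9, 100))   # 2.0 - exposed risk double the baseline
```

Without the unexposed denominator the calculation can't be done at all, which is exactly why a study lacking a control group proves nothing.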

On the other hand, more subtle but equally invalidating flaws like obvious confounders, biases and reversal of causation went undetected. These observations led the authors to conclude that "[o]ur results indicate that although jurors may be capable of identifying a missing control group, they struggle with more complex internal validity threats such as a confound and experimenter bias. As such, the role of traditional legal safeguards against junk science in court such as cross-examination, opposing expert testimony, and judicial instructions become increasingly important." These findings and others like them underscore the continuing need for judges to act as gatekeepers. Such objective findings also continue to undercut the fact-free reasoning behind Comment c. Toxic substances and disease, Section 28. Burden of Proof, Restatement (Third) of Torts and its effort to loosen the standards for admitting expert testimony in toxic tort cases.

But how are our judges doing? Are they starting, at last, to "get" science? Seventeen years ago in Daubert v. Merrell Dow Chief Justice Rehnquist wrote, concurring in part and dissenting in part, "I defer to no one in my confidence in federal judges, but I am at a loss to know what is meant when it is said that the scientific status of a theory depends on its "falsifiability", and I suspect some of them will be, too." That a Justice of the U.S. Supreme Court could not understand that the demarcation between science and pseudo-science had something to do with being able to test the theory being advanced was shocking in 1993. Surely today's judges grasp a concept first introduced in middle school. Alas, it is not so. One of the references in the paper above is to a study that found that only 5% of state court judges "demonstrated a clear understanding of falsifiability". Worse yet, 80% of those same judges were confident they were up to the task of gatekeeping.

What does it all mean? I think it means that the battle against junk science is far from over, but that lay people are finally becoming a little more skeptical of scientific claims and are at last learning to distinguish between the junk and the science, at least on a rudimentary level.

More Evidence That Exposure to Poultry Viruses and Bacteria Causes Cancer in Humans

Significant increases in mortality risk for some forms of leukemia have again been identified in a cohort of poultry workers. See "Update of Cancer and Non-Cancer Mortality in the Missouri Poultry Cohort". Given the profound changes in the demographics of the poultry industry in recent years, it would be interesting to see if population mixing might have been responsible for some or all of the increased risk.

EWG Press Release on Hexavalent Chromium in Tap Water

The Washington Post is reporting "Probable Carcinogen Hexavalent Chromium Found in Drinking Water of 31 U.S. Cities". It's pretty much just the EWG press release "Chromium-6 is Widespread in US Tap Water: Cancer-Causing Chemical Found in 89 Percent of Cities Sampled". 

So what to make of the claim that 74 million Americans are unknowingly bathing in, cooking with and drinking unsafe levels of "the carcinogenic Erin Brockovich chemical"? Well, as luck would have it there's a brand new (free!) paper that contains a very nice summary of the state of scientific knowledge concerning the impact of exposure to hexavalent chromium (Cr (VI)) in water. See "Application of the U.S. EPA Mode of Action Framework for Purposes of Guiding Future Research: A Case Study Involving the Oral Carcinogenicity of Hexavalent Chromium". EPA didn't think it posed a risk of cancer and epidemiological studies have, with one controversial exception, found no link between hexavalent chromium in drinking water and cancer. Nevertheless, when the National Toxicology Program gave lab rodents water containing Cr (VI) at hundreds of times the concentration of the highest levels found in human drinking water it detected a small but statistically significant increase in risk of a rare (for rodents) lower g.i. cancer. Lower doses, i.e. at levels only 300 times higher than those found in the drinking water of 95% of Americans, produced no increase in rodential risk.

Thereafter the state of California proposed a "public health goal" for Cr (VI) in water of 0.06 parts per billion (ppb) - exactly 1/1000th of the lowest level found to slightly increase the risk of cancer in lab mice and rats. Somehow that 0.06ppb level for Cr (VI) in water became the "safe level" according to the EWG (and thus, necessarily, The Washington Post). Assuming, without explanation, that any level above 0.06ppb is thus unsafe, it's no surprise that the EWG discovered that 89% of our drinking water is probably carcinogenic.

The Washington Post quotes Max Costa who "chairs the department of environmental medicine at New York University's School of Medicine". He calls the EWG's findings "disturbing". The WaPo doesn't say why they emailed Costa for a comment but we can guess. He was a retained expert witness for the Erin Brockovich plaintiffs. He testifies that "[h]exavalent chromium is one of the most potent carcinogens known to man. It can produce any type of cancer depending upon genetic susceptibility, quantity and route of exposure."

Ok, if 89% of Americans are ingesting dangerous levels of one of the most potent carcinogens known to man, one that can cause any form of cancer, what would you think the National Cancer Institute's Annual Report to the Nation on overall cancer rates would show? Would you be surprised if it showed the overall incidence of cancer to be declining significantly? Would you be surprised if it showed that colorectal cancer rates (the closest thing to what was slightly increased in mice) have been dropping for a quarter of a century and that their decline is accelerating? If so you need to read "Annual Report to the Nation Finds Continued Declines in Overall Cancer Rates; Special Feature Highlights Current and Projected Trends in Colorectal Cancer".


Fun With Google Books' Ngram Viewer

Assuming that when, and in what numbers, a word appears in Google's new database of "nearly 5.2 million digitized books" (h/t today's NYTimes) is a decent proxy for what society is talking about and when, what do these graphs have to say about some popular mass tort topics? The x-axis runs from 1900 to 2008. The y-axis is the percentage - the rate of appearance, if you will - of the word or phrase (e.g. mesothelioma). Try it yourself at Google labs.

[graph]

Hormone Replacement Therapy:

[graph]

And how the curve of "the next big thing" tends to look over time:

[graph]

And the current big thing, the microbiome:

[graph]

What To Do About Too Many Calories for "Sedentary Young Children"?

The Center for Science in the Public Interest (CSPI) has sued McDonald's over its Happy Meals. The complaint argues that Happy Meals are "unhealthy" and cause obesity, that their marketing is "unfair" because it makes six year old lead plaintiff Maya pester her mother with her "clamor for Happy Meals", and that McDonald's seeks by way of its Happy Meals, like James Dean before it, "to subvert parental authority".

There is much in the complaint to blog about. Far too much actually given that I've got a brief due in the Texas Supreme Court by Friday. For now though consider the claim that Happy Meals provide too many calories for the typical, which is to say sedentary, child. When and how did the typical child get to be sedentary and so at risk of obesity? I'd argue that it has everything to do with turning schools into warehouses for children.

Want some evidence that even moderate exercise protects children from overeating or too much TV/video-gaming? See "Health Status and Behavior Among Middle-School Children in a Midwest Community: What are the Underpinnings of Childhood Obesity?" See also "The Fat-Mass and Obesity-Associated (FTO) Gene, Physical Activity, and Risk of Incident Cardiovascular Events in White Women". Apparently the "fat gene" only causes problems when combined with inactivity.

All in all it looks like the solution to childhood obesity is more about revving up the body's engine than starving it of fuel.

The Linear No-Threshold Theory: A Crumbling Foundation

The idea that a known cause of cancer, e.g. ionizing radiation, poses a risk of cancer at any dose, no matter how small, is a central thesis informing modern environmental and occupational regulations and modern, which is to say low dose, toxic tort cancer litigation. In the toxic tort context plaintiffs regularly employ the logical fallacy of the appeal to ignorance (argumentum ad ignorantiam) to prove that even the slightest exposure was risky. They say that because defendants cannot establish a safe level of exposure it follows that every exposure is necessarily unsafe. The formal name for the idea that risk doesn't drop to zero until exposure drops to zero is the linear no-threshold dose theory, or LNT. The LNT theory, always longer on theory and politics than evidence, is increasingly under attack. Now even NIOSH has had to concede that, at least in some circumstances, there is indeed a safe dose for a carcinogen.

In "Checking the Foundation: Recent Radiobiology and the Linear No-Threshold Theory" the author states "a large and rapidly growing body of radiobiological evidence indicates that cell and tissue level responses to [radiation damage], particularly at low doses and/or dose-rates, are nonlinear and may exhibit thresholds ... this evidence directly contradicts the assumptions upon which the microdosimetric [LNT] argument is based". The idea that a substance that is harmful at high levels can be harmless or better yet beneficial or protective (the idea of hormesis) at low levels is discussed at length in this month's issue of Human & Experimental Toxicology.

The claim that "if it takes an ounce to kill ten men then a drop will kill thousands" was itself just a theory, based on the idea that carcinogenesis is a stochastic process. Getting cancer was sort of like hitting the anti-lottery: the more tickets you bought (exposures you sustained) the more likely you were to lose, yet if you were unlucky enough just one ticket could do it. Like black box epidemiology, LNT was simply a way to ignore the formerly incomprehensible molecular biological mechanisms responsible for cancer. Now that those mechanisms are being uncovered and understood they can no longer be ignored, as they shatter one paradigm after another.
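The difference between the competing dose-response models is easy to state numerically. Here's a minimal sketch; the slope and threshold values are purely illustrative assumptions, not figures from any study:

```python
def lnt_risk(dose, slope=1e-3):
    """Linear no-threshold model: any dose above zero carries excess risk."""
    return slope * dose

def threshold_risk(dose, threshold=10.0, slope=1e-3):
    """Threshold model: zero excess risk until the dose exceeds the threshold."""
    return slope * (dose - threshold) if dose > threshold else 0.0

# The two models agree at high doses and disagree completely at low ones.
for d in (0.1, 1, 10, 100):
    print(f"dose {d:>5}: LNT risk {lnt_risk(d):.2e}, threshold risk {threshold_risk(d):.2e}")
```

Note that both models fit the same high-dose observations; the fight is entirely over the unobserved left-hand end of the curve.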

Baron & Budd Takes Georgia-Pacific v. Bostic to the Supreme Court of Texas

Georgia-Pacific v. Bostic, which prompted us to write The End of Toxic Tort Litigation in Texas?, may be on its way to the Texas Supreme Court. Here's Baron & Budd's Petition For Review. Bostic is a mesothelioma case in which plaintiff prevailed at trial against a peripheral defendant but lost the claim on appeal when the Fifth Court of Appeals held that she had failed to prove that her decedent's exposure to that peripheral defendant's product was a "but for" cause of his fatal cancer.

Bostic's best argument is that the appellate court's application of a "but for" standard of causation to a particular exposure makes it impossible for any toxic tort plaintiff whose illness was allegedly caused by one or more of several exposures to prevail. Here, plaintiffs get it exactly right. Whenever plaintiff's injury has been proved to have been caused "but for" a bullet (as in the "one hit" case of Summers v. Tice) or "but for" a sufficiently high concentration of salt (as in a cumulative-injury case like Landers v. Texas Salt Water Disposal Co.), and where each actor's conduct was tortious, plaintiff is relieved of the logically impossible task of proving that each tortfeasor's conduct was a "but for" cause. The Texas Supreme Court has already made this point in the asbestos context in Borg-Warner v. Flores.

Baron & Budd gets it wrong, however, when it argues (1) that Borg-Warner only demands a Lohrmann-esque frequency, regularity and proximity exposure qualification in malignancy cases in general and in mesothelioma cases in particular; (2) that dose, and therefore risk, quantification is only required when the parties dispute what, in general, caused plaintiff's injury (e.g. it ought not be required when the parties agree that asbestos exposure was responsible for a plaintiff's mesothelioma); and (3) that the standard for substantial factor causation somehow changes according to the facts of the case.

The whole point of Borg-Warner, and of the almost two decades of Texas Supreme Court cases that preceded it, is to put the requirement of demonstrating wrongful, unreasonable conduct back into the state's law of torts. For Georgia-Pacific to have prevailed on appeal it should have been because its product posed at most a de minimis risk of mesothelioma, not because plaintiff couldn't prove that its asbestos contributed to Bostic's cancer.

If all your product poses is a vanishingly small risk of harm, in this world of inevitable and uncountable risks, then you've acted neither unreasonably nor wrongfully; the substantial factor test, which is a combined query about causation and legal responsibility, will decide the matter and you'll not be deprived of your property. It's precisely because calculations of risk (the measure of the reasonable man, derived from estimates of dose) often show peripheral defendants to have acted neither wrongfully nor unreasonably that plaintiffs' counsel hate dose estimation.

Cell Phones and Brain Cancer: What Was The New York Times Thinking?

Recently the NYTimes published "Should You Be Snuggling With Your Cellphone?", in which it reported that the question of whether cell phones pose a risk of brain cancer is far from settled. Indeed, largely ignoring the overwhelming evidence that electromagnetic radiation does not increase the risk of brain cancer, the article references an unidentified study showing an increased rate of brain cancer in the presumably cellphonophilic 20-29 year old age group, cites "400 scientific papers" that support the theory that radio waves cause "damaged brain DNA", and concludes with this quote from epidemiologist Devra Davis, author of the newly published "Disconnect": "I do think I'm looking at an epidemic in slow motion".

Google serves up a link to Environmental Health Trust, which has front and center handy links to a "press kit" and a variety of write-ups including (1) a claim that the best evidence does support a causal link; (2) a warning to women to keep cell phones away from their chests, as the radio waves "seep directly into soft fatty tissue" and may cause breast cancer; (3) a claim that heavy cell phone use decreases sperm count by 50%; (4) a warning that we should be horrified by the sight of young children using iPhones, as they are frying their young brains; (5) allegations of a conspiracy by industry and lobbyists to obfuscate the facts and prevent urgently needed anti-cell phone legislation; and (6) the inevitable lawsuit by someone with an unidentified form of brain cancer who claims cell phones are as addictive as cigarettes and just as deadly - the evidence that cell phones caused his cancer seems to be limited to the fact that he was athletic, a non-drinker / non-smoker who led "an over-the-top healthy lifestyle".

Well, increasing risk of brain cancer in young adults, "damaged brain DNA", a corporate conspiracy and a plaintiff who talked on his phone four hours per day, 365 days per year - how is this NOT the start of a massive mass tort? Here's how.

Let's take that troubling trend of increasing brain cancer in 20-29 year olds. Open up this month's issue of Neuro-Oncology and you'll find "Brain Cancer Incidence Trends in Relation to Cellular Telephone Use in the United States". Yes, there's a small increase in incidence for males, but the increase began "before cell phone use was highly prevalent". Yes, there was an increase for women in that age group too, but it was limited to frontal lobe cancers; and since I've never seen a woman use her cell phone by holding it to her forehead, I have to wonder if the absence of any increased risk in brain cancers near the ears isn't the most important finding of this huge study. In fact its authors conclude "these incidence data do not provide support to the view that cellular phone use causes brain cancer". See also: "Mobile Phone Base Stations and Early Childhood Cancers: Case-Control Study", which found "no association between risk of early childhood cancers and estimates of the mother's exposure to mobile phone base stations during pregnancy" and, of course, "Brain Tumour Risk in Relation to Mobile Telephone Use: Results of the INTERPHONE International Case-Control Study", which showed that most cell phone users were, if anything, at a decreased risk of cancer.

Damaged brain DNA (whatever that is)? "Analysis of Proteome Response to the Mobile Phone Radiation in Two Types of Human Primary Endothelial Cells".

And the claim of industry bias or outright conspiracy to silence the truth or co-opt scientists? Try "Studies of Mobile Phone Use and Brain Tumor Risk Are Independent of Industry Influence".

Finally, brain cancer is always tragic, whether it strikes down a U.S. Senator or my next-door neighbor at age 48; and their drinking, smoking and exercise status makes no difference, as none are risk factors, positive or negative.

To this day the cause or causes of brain cancer remain unknown. All you can do is drink your coffee or tea and hope that one of life's deadly bolts out of the blue doesn't strike you.

What's the Skinny, Captain Hindsight?

On several occasions we've written about how successful plaintiff lawyers exploit hindsight bias and turn the so-called foreseeability defense back on unsuspecting defendants. Why are emergent phenomena impossible to predict yet obvious in retrospect? Take it away, Captain Hindsight.

What Do Hamburgers and King Tut Have In Common?

The key to preservation well past their expiration date is desiccation.

One of the best things about the Internet is that urban myths don't circulate very long before someone who learned how to think critically puts the smack down on a specious claim. A perfect example is the claim that McDonald's hamburgers aren't "real food" because they don't rot. See e.g. "McDonald's Happy Meal Resists Decomposition for Six Months" and, for the use to which a 12 year old unrotted McDonald's hamburger is put, see: "1996 McDonald's Hamburger".

It didn't take long for someone with critical thinking skills to ask the right question: is a McDonald's hamburger any more resistant to rot than a homemade burger? In "The Burger Lab: Revisiting the Myth of the 12-Year Old McDonald's Burger That Just Won't Rot (Testing Results!)" a simple experiment answered the question. McDonald's hamburgers are no more resistant to decay than ones made by hand from ground beef and fresh buns from the grocery store. Mummies and hamburgers that dry out and are kept away from moisture last a long time, and for the same reason.

The efforts to demonize food prey on the gullible. Stay skeptical.


Bisphenol A Roundup

Since it's detected at low levels in 95% of us and since Americans have been exposed to it for more than 50 years you'd think someone would have noticed if exposure to bisphenol A (BPA) were responsible for widespread illness, deformity and death. Apparently not, at least not if the findings from a recent wave of BPA studies are to be believed.

The new findings are, in no particular order, that BPA: (a) damages sperm; (b) inhibits the normal development of ovaries; (c) alters brain development; (d) causes premature birth; (e) may be a carcinogen like DES; (f) damages blood cells; (g) activates breast cancer cells; (h) impairs the body's defenses against colon cancer, especially in women; and (i) makes offspring anti-social and neurotic. And those are just a few of the "findings" published in the last two months. Obviously the world that existed before 1950 or so, before BPA was used everywhere to seal bacteria out of foods and dental cavities, had to have been a much healthier and more peaceful one. Alas.

Maybe That's Why You Fell For Her. She Cooks Just The Way Your Mom's Gut Microbes Like It.

Biological causation gets more complex by the day and the simple, reductionist model of toxin in - disease out common to toxic tort cases gets more absurd by the day. Imagine what you'd have thought two years ago about this claim: the diet of a fruit fly changes the composition of the bacteria in its gut and those bacteria then change the pheromones she releases so that she will attract a male fruit fly with the same sort of bacteria in his gut resulting in a mating preference change that lasts up to 37 generations even if her offspring's diet changes along the way.

Thus, it's not just a matter of "You Are What You Eat"; it may well also be a matter of "You Are What Your Great Great Great Great Grandmother's Intestinal Bacteria Ate". See "Bacteria Can Drive the Evolution of New Species" in Nature News for the write-up and "Commensal Bacteria Play a Role in Mating Preference of Drosophila melanogaster" for the paper that heralds the discovery.

For more on the idea of a "hologenome" - i.e. the idea of thinking of "your" genetic code as consisting of the genes in your chromosomes plus those in the bacteria that live in and on you - see the following:

"Role of Microorganisms in the Evolution of Animals and Plants: the Hologenome Theory of Evolution"

"The Hologenome Theory of Evolution Contains Lamarckian Aspects Within a Darwinian Framework"

"Whole-Body Systems Approaches for Gut Microbiota-Targeted Preventive Healthcare"

Why Do Aquarians and Geminis Differ About the Efficacy of Bone Marrow Transplants?

Let's say that you had chronic myeloid leukemia (CML) and that your doctor recommended a bone marrow stem cell transplant. Let's also say that you decided to check out the literature so as to be better informed. When you did you found that in a well designed study of 626 CML patients a large and highly statistically significant difference in risk of death from the procedure was revealed. Indeed, those transplant patients with the astrological sign of Aquarius were likely not to die in the following 5 years whereas those born under the sign of Gemini had a much greater risk of death and in fact were more likely than not to die within 5 years of the procedure. Finally, let's say that you are, in fact, a Gemini. Do you go ahead with the transplant?

Ok, ok. Before you make up your mind you go looking for other papers published in respected peer reviewed journals that look at zodiac sign and medical treatments. Guess what? You find other papers also showing that your sign is associated with a statistically significant increase in risk of injury or death! Now what?

Well, the first thing you need to remember is that if you're dredging data for statistically significant associations and you don't find any, all that proves is that you don't know what you're doing. Epidemiology works when you formulate a hypothesis and then test it. When you go dredging for data without having in hand a hypothesis to be tested, knowing that the laws of probability will invariably generate statistically significant yet spurious associations, you're just manufacturing "science"; discovering nothing and flouting the scientific method in the process.
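The zodiac trap is the multiple-comparisons problem in miniature. A quick simulation makes the point; the numbers here (12 signs, 50 patients per sign, an identical true risk for everyone) are illustrative assumptions, not the actual study's design:

```python
import random
from math import erf, sqrt

random.seed(0)

def two_sided_p(deaths, n, p0=0.5):
    """Normal-approximation two-sided p-value for observed deaths vs. rate p0."""
    z = (deaths - n * p0) / sqrt(n * p0 * (1 - p0))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Simulate many studies: 12 zodiac signs, 50 patients each, and every
# patient faces the SAME true risk of death (0.5). Count how many studies
# nonetheless find at least one "significant" sign at p < 0.05.
trials, hits = 2000, 0
for _ in range(trials):
    if any(two_sided_p(sum(random.random() < 0.5 for _ in range(50)), 50) < 0.05
           for _ in range(12)):
        hits += 1
print(f"{hits / trials:.0%} of null studies found a 'significant' sign")
```

With twelve bites at the apple, roughly half of these purely-null studies hand you a publishable "association"; exactly the sort of artifact the dredgers mistake for a discovery.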

For a fun read about why Geminis ought not worry about their sign and how statistically significant associations, based on the most rigorous statistical analyses, can be generated at will to support, if that's the word for it, the most absurd causal inferences, read: "Sign of the Zodiac as a Predictor of Survival for Recipients of an Allogeneic Stem Cell Transplant for Chronic Myeloid Leukaemia (CML): An Artificial Association".

"What Can We Get Away With?"

That's not the kind of question about the current state of medical science that you'd expect to hear from someone charged with understanding and promoting public health. On the other hand, if the point is to demonize food, in this case a drink with sucrose (glucose + fructose), it makes perfect sense.

The NYTimes has gotten a look at the e-mail exchanges between New York's anti-soda crusading health commissioner and his subordinates and published some of them in "E-Mail Reveals Dispute Over City's Antisoda Ad". The email conversation reveals a determined effort to ignore, dispute and ultimately fudge the science in order to come up with an ad campaign that the commissioner hoped would "go viral". The meme he sought to develop was an equivalence between soft drinks and gooey disgusting fat. The ad is propaganda in its purest and, considering its governmental origin, most disturbing form.

The demonization of sugar is a head scratcher. Your brain burns glucose, and during periods of real starvation your body will start to cannibalize itself so that the brain can have that which it cannot do without - sugar in the form of glucose. On a typical day of normal brain functioning the brain alone burns as much sugar as is found in three cans of sugary soda.

Too much of a good thing is never good for you, but those people who've joined the anti-sugar crusade and who are pledging to lead "sugar-free lifestyles" are either brain dead or soon will be.

Don't Point Your Guns at Your Children

For whatever reason CNBC has decided to dredge up an old controversy; one over the safety of the Remington Model 700.

Back in the day we worked on these cases and as a baby lawyer I got a lesson from one of the partners to whom I reported. He'd been an officer in the Army and recounted the tale of how the U.S. Army had instilled in him the good habit of never pointing a weapon at anyone he didn't intend to shoot. It came in handy.

He was going to be away from his wife for the first time in their marriage and she, with two young children, was afraid. So, he showed her how to use his Colt .45. Sitting in their bedroom he demonstrated how it worked just as he had to his men. After dropping out the clip he pointed it towards the ceiling saying "whatever you do, don't point it at anything you don't mean to kill. Everything else can fail but if you're pointing it away from everyone you'll be ok." Having forgotten he'd chambered a round he then put a .45 hole through the roof of his house.

The Model 700 has been extremely popular for decades. Nevertheless, the fact that it's not surprising that pick-up trucks have been in more accidents than Bugattis seems to have been lost on the reporters in "Remington Under Fire: A CNBC Investigation".


A Placebo a Day Keeps the Doctor Away

Do you comply with your doctor's orders? If you're on a daily medication do you always take it, every day, at the same time, with, say, the recommended glass of water? If you do, even if there's no medicine in the pill, your odds of living a long and healthy life are good and getting better, and it's because you've made yourself the beneficiary of the compliance effect.

I was reminded of a recent study of the so-called compliance effect while reading "Secrets of the Centenarians" in today's NYTimes. In the Times piece the author examines the long life and sharp mind of Mrs. Esther Tuttle and her particularly keen insight into the source of good health. She says: "If you respect what the doctors tell you to do, you can live a long life, but you have to do it. You can't ignore the advice."

Though currently inexplicable, it turns out that people who follow their doctor's orders, even if it's just to take a sugar pill a day (they don't know it's a sugar pill, of course), do better than those who take a less disciplined approach to their health. The article of which I was reminded is: "The Relationship Between Bisphosphonate Adherence and Fracture: Is it the Behavior or the Medication? Results From the Placebo Arm of the Fracture Intervention Trial". In it the authors report the outcomes of the women on placebo in a trial of a drug designed to prevent bone loss (osteoporosis). Even though they didn't get any real medicine, and even though they were on the same sugar pill as the women who didn't strictly adhere to their doctor's orders regarding the drug, those women who took their sugar pill religiously had significantly less bone loss and fewer hip fractures.

This and other recent research on the compliance effect confirm what Mrs Tuttle somehow figured out: resolve (along with resourcefulness, resilience and a refusal to succumb to cynicism) somehow, some way, keeps you alive and keeps you healthy.

Why So Much Peer Reviewed Science Is So Wrong

Ever since Daubert, lawyers have been fending off motions to exclude by proclaiming that the scientific studies on which their experts relied were peer reviewed and thus unassailable. Sadly, judges tend to think that a peer reviewed paper must indeed be some incremental addition to the field's body of knowledge. But are the claims in a paper that's been peer reviewed guaranteed to be true? No. But surely they're at least likely to be true, right? No. In fact, the conclusions reported in most peer reviewed scientific literature are wrong. Even those papers published in the journals with the highest impact factors are either wrong or have never even been tested an alarming percentage of the time.

Peer review is not some sort of certification of a high truth value for someone's research. Peer review exists to make sure no paper gets published whose author had gone about measuring temperature with a yardstick.

Science these days isn't what it was 100 or even 25 years ago. Seldom is what's found in prominent journals simply the reporting of observations (e.g. mesothelioma in a 2 year old girl) or the results of laboratory experiments (e.g. drinking a petri dish of H. pylori and developing peptic ulcer). In recent decades "science" has come to mean, in many, many cases, the sifting and re-sifting of data until some result unlikely to have been due to chance alone appears. That result is then deemed knowledge.

But how unlikely, how very rare, must the result have been before it's crowned "Truth"? As unlikely, as rare, as being dealt two pair in a 5 card poker hand.
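The two-pair comparison is just the conventional p = 0.05 threshold in disguise; the combinatorics (standard poker arithmetic, not from the article) bear it out:

```python
from math import comb

# Ways to be dealt exactly two pair in a 5-card poker hand:
# choose 2 ranks for the pairs, 2 suits within each pair,
# then a fifth card from any of the 11 remaining ranks (4 suits).
two_pair = comb(13, 2) * comb(4, 2) ** 2 * 11 * 4   # 123,552 hands
total = comb(52, 5)                                  # 2,598,960 hands
p = two_pair / total
print(f"P(two pair) = {p:.4f}")  # ~0.0475, just under the p = 0.05 cutoff
```

In other words, a result "too unlikely to be chance" at the usual significance level is about as rare as a hand most poker players fold on.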

If you've ever played poker and bet the house on such a hand you're probably homeless. On the other hand, if you've made a living peddling liability theories resting on such odds down at the courthouse you're likely to have several homes.

Anyway, if you want to read about how science went so wrong and why, you can't do better than "Lies, Damned Lies, and Medical Science" just published in The Atlantic. It's the best and most important article you'll read this year about the current state of medical science.

Toxoplasma gondii: Sheep and Goats Have a Vaccine Against It. Why Don't We?

It's becoming apparent that Toxoplasma gondii is responsible for an awful lot of human suffering around the world. The parasitic organism causes birth defects and spontaneous abortions, neurological impairment and eye damage, and is increasingly suspected in Alzheimer's, schizophrenia and Parkinson's.

T. gondii infects human cells and reproduces within them, eventually setting up shop in cysts throughout the central nervous system, heart, muscle, bone marrow and other organs. Persons infected are infected for life. Human infection is most commonly the result of consuming raw or undercooked cyst-bearing meat, though contact with the feces of animals, especially cats, T. gondii's ultimate host, is another avenue of exposure.

While 11% of Americans are infected with the parasite, that figure rises to as high as 70% in some South American countries. The European Food Safety Authority, worried that foodborne infection by T. gondii is on the rise and is responsible for significant yet underreported and undetected diseases within the EU, has recommended Toxoplasma monitoring of livestock. Recent estimates of the impact of T. gondii-induced disease reveal it to be "one of the most significant causes of foodborne disease worldwide."

The good news is that, thanks to demand from sheep and goat producers, there's a vaccine that works well in sheep and goats. The problem is that it's a live cell preparation like the Sabin vaccine discussed during oral argument in Bruesewitz v. Wyeth, Inc. So what's the problem? While a live vaccine works really well, much better than bits of a dead organism, it's more likely to cause adverse effects. Meanwhile, finding all the bits of a dead organism that prime the immune system while weeding out those that might produce harm is a terribly complicated and expensive process. Add to that the threat posed by litigation over the inevitable errors in science's slow but steady progress through trial and error, and it ought not be surprising that sheep and goats get protected while human suffering due to T. gondii spreads.

For a new, free and enlightening paper on the topic see "Vaccination Against Toxoplasma gondii: An Increasing Priority for Collaborative Research?"

Getting the Causation Cart Before the Benzene Horse

Let's assume you're trying to prove that benzene causes a host of cancers of the hematopoietic system - essentially all lymphohematopoietic neoplasms. Wouldn't it be clever to argue that the best studies are those that confirm your bias, i.e. that benzene just has to cause acute nonlymphocytic leukemia (ANLL, AML, etc.)? Better yet, wouldn't it be really clever to enforce plaintiffs' counsels' demand that, thanks to a little ex post reassessment, studies whose subjects were prospectively assessed to have been exposed but after the fact, in an instance or two, found to have spent more time in an office, be marginalized or excluded outright? Finally, wouldn't it be brilliant to assume that those with an increased risk of AML just had to have had higher exposures, thereby bringing all their similarly situated brethren into the study along with them?

Well, when you don't do experiments but merely look at the literature, run statistical tools over the top of the papers, and cherry-pick until the stars align (i.e. until they show that benzene causes everything hematopoietic), it ought not be a surprise that you wind up verifying your preconceptions. And such was the case with "Occupational Benzene Exposure and the Risk of Lymphoma Subtypes: a Meta-Analysis of Cohort Studies Incorporating Three Study Quality Dimensions".

If benzene were indeed some sort of universal blood carcinogen you'd think the statistical evidence wouldn't need to be so tortured before yielding.

All Things in Moderation: Drinking While Pregnant Edition

Does the dose make the poison? In toxic tort litigation plaintiffs have long argued that at the unmeasured and unobserved low dose end of the dose-response curve risk doesn't reach zero until the dose reaches zero. To support their claim they point to regulators' linear no-threshold risk models, they try to throw the burden of proof on defendants and they conclude by saying that since defendants can't prove there's an absolutely safe level for exposure, all levels must therefore be unsafe.

This "no safe level" argument isn't confined to cancer cases, and plaintiff lawyers aren't the only ones who make it. Advocacy groups for a variety of human ailments stake out similarly extreme positions. For example, the March of Dimes claims that some 40,000 American children are born annually with fetal alcohol spectrum disorders (FASDs). In addition to claiming that "no level of alcohol use during pregnancy has been proven safe", it cherry-picks data from weak studies to assert that mothers who consume as little as one alcoholic drink per week have children with: (a) small heads ("a possible indicator of brain size"); (b) a 300% increase in risk of growing up to be delinquents; (c) a variety of emotional and learning disorders; and (d) an increased risk of becoming alcoholics and drug addicts. Finally, the March of Dimes flatly states "[t]here is no cure for FASDs."

The good news is that there never has been much evidence to support these horror stories and the better news is that there's a brand new study showing that not only are the children of light drinkers at no increased risk of cognitive defects (at least through age 5), they're likely to have fewer problems, be less prone to hyperactivity disorders and have higher cognitive test scores. See "Light Drinking During Pregnancy: Still No Increased Risk for Socioemotional Difficulties or Cognitive Deficits at 5 Years of Age?"

There's no doubt that chronic binge drinking during pregnancy can do lasting harm to a woman's fetus. Similarly, there's no doubt that roasting yourself in the sun all summer and continuing to irradiate yourself in a tanning bed the rest of the year can lead to malignant melanoma. Yet extrapolating from such findings to declare that there's no safe level of exposure to sun or alcohol or whatever not merely panics parents needlessly; it may well result in the infliction of needless harm on the very people for whose benefit such advocacy is intended.

TIDE Trial on Hold; Likely Never to be Completed

That's it for Avandia. The FDA has put the TIDE trial "on full clinical hold and the regulatory deadlines for its conduct are rescinded." The plan is to reassess the data from the RECORD trial to see if reliable "ischemic risk can be obtained from [] re-adjudication..." In Europe physicians have been directed to stop prescribing the drug. Meanwhile, in the U.S., patients will still be able to get the drug if they are already on it or have tried other drugs like pioglitazone without success.

Thinking About Risk

What does it mean to say that I've given someone a 1 in 1 million risk of death? Is he now just 99.9999% alive? Is he 0.0001% dead? Is either a "loss"? Must we await the verdict of the Fates to decide the reasonableness of my conduct? If so, why does some other agency decide whether an act in the past was good or bad?

What does it mean to say that I've given one million customers a 1 in 1 million risk of death? Is my conduct judged myopically on the basis of the one who died; or, shall we consider the 999,999 who wound up with a useful product and suffered no harm? And how is my conduct to be judged? Shall we add up all the good, subtract out the bad (consequence's books having been audited by the trial court), and have the jury decide whether my decision-making was, being Monday morning after all, "good" in light of the final tally?

If 1 in 1 million is too risky then so is getting out of bed in the morning. Yet 1 in 1 million is 300 American lives nonetheless. And if 1 in 1 million is a reasonable risk, why isn't 1 in 100,000, or 1 in 1,000, or 1 in 10? Where do we draw the line, and why?
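The "300 American lives" figure is just an expected-value calculation over a population of roughly 300 million; a minimal sketch of that arithmetic, with the other risk levels from the question above thrown in:

```python
# Expected deaths = population x per-person risk.
# The ~300 million figure is the approximate 2010 U.S. population.
population = 300_000_000
for risk in (1e-6, 1e-5, 1e-3, 0.1):
    print(f"1 in {1 / risk:>9,.0f}: {population * risk:>12,.0f} expected deaths")
```

The line-drawing problem is visible immediately: each step down the list multiplies the expected toll by ten or more, yet nothing in the arithmetic itself says where "reasonable" ends.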

Does it, or should it, make any difference whether I actually knew the person on whom I imposed a risk? Does the answer change if the risk materializes? Does it, or should it, make any difference whether I actually estimated the risk before imposing it? Why do jurors punish the diligently knowledgeable while being far less wrathful towards the consciously ignorant? What, if anything, should the law do about it?

Ultimately, how should courts deal with the risks we impose on each other in this world of inevitable risks? A very good discussion can be found in "Statistical Knowledge Deconstructed".

I have one quibble and a few takes. First, the effort seems sometimes to be aimed at reconciling a Bayesian decision-making approach ("degrees of belief" tending to sound rather appealingly deontological to me) with a consequentialist ex post assessment of the ultimate utility of an act. Consequentialism isn't generally thought to be informative on the ex ante side of decision-making - thus my objection to "... I mean to endorse an epistemic and (thusly) Bayesian conception of risk, not a frequentist conception". Second, his comments about cost/benefit analyses are dead on. Companies are abandoning the process and adopting "Nobody gets hurt, ever!" policies instead. One has to wonder about a legal system that advantages willful ignorance. Third, his suggestion that we let risk more openly inform determinations about where an act falls along the intentional - reckless - negligent - non-negligent spectrum would be especially helpful in mass tort cases in which risks are widely distributed.

Finally, we are in the midst of a scientific revolution in which the products of biological systems are being discovered to be unpredictable and invariably greater than the sum of their parts. Emergent phenomena arising out of vastly complex systems mean that the balance sheets needed to make a consequentialist assessment of an act can never be closed nor the credits and debits ever intelligently summed. Perhaps then, like the earth's most ancient and successful organisms, we ought to have rules, or principles, as our guides rather than approaching every problem ad hoc. In that case, knowingly, or willfully ignorantly, imposing a significant risk on your fellow man that manifests might be such a rule for liability.

Avandia's Revenge

The Avandia witch hunt saga just gets more and more absurd. Guess what just popped up from the FDA's Division of Drug Information? Notice of an ongoing safety review of pioglitazone (the "safer alternative" generic pushed by those urging that Avandia be pulled from the market). Apparently "an increased risk of bladder cancer was observed among patients with the longest exposure to Actos (pioglitazone) as well as in those exposed to the highest cumulative dose of Actos."

So does this mean that pioglitazone ought to be banned? Does this mean that the ongoing study of pioglitazone users needs to be discontinued because it's unethical? That's what the anti-Avandia activists argued re: Avandia and the TIDE trial; they wanted everyone off Avandia, on pioglitazone and the experiment halted - pronto.

No; it ought not mean any of those things. What it does mean is that it's foolish to let our decisions be bound by the results of a single data dredge.

For more on the Avandia issue see: "Avandia: Burn Her Anyway?" For more on the perils of data mining see "Lies, Damned Lies and P-Values". Here's the full text of today's FDA missive on pioglitazone:


FDA/CDER/Division of Drug Information (DDI)

The Division of Drug Information (DDI) is CDER's focal point for public inquiries. We serve the public by providing information on human drug products and drug product regulation by FDA.

The U.S. Food and Drug Administration (FDA) is reviewing data from an ongoing, ten-year epidemiological study designed to evaluate whether Actos (pioglitazone), is associated with an increased risk of bladder cancer. Findings from studies in animals and humans suggest this is a potential safety risk that needs further study.

Actos is used along with diet and exercise to control blood sugar or improve control of blood sugar in adults with type 2 diabetes mellitus.

Bladder cancer is estimated to occur in 20 per 100,000 persons per year in the United States and is thought to be higher in diabetics.

The drug manufacturer, Takeda, has conducted a planned analysis of the study data at the five-year mark, and submitted their results to FDA. Overall, there was no statistically significant association between Actos exposure and bladder cancer risk. However, further analyses were also performed looking at how long patients were on Actos and the total amount of the drug they received during that time. An increased risk of bladder cancer was observed among patients with the longest exposure to Actos, as well as in those exposed to the highest cumulative dose of Actos.

At this time, FDA has not concluded that Actos increases the risk of bladder cancer. Its review is ongoing, and the Agency will update the public when it has additional information.

For more information, please visit: Actos


Why is Media Coverage of Medical Science So Bad?

Not all of it is bad, of course. Dedicated health and science journalists often do a fine job of maintaining the necessary level of skepticism when probing the claims of new health research. The recent study of health care coverage "Does It Matter Who Writes Medical News Stories?" confirms as much.

Media, though, is a business, and thus it should come as no surprise that a lot of what passes for journalism is simply the selling of the current product, the narrative that attracted customers in the first place, and the marketing of new and improved products fitting that narrative, e.g. breathless reporting about miracle cures and health scares. It's therefore not surprising that TV human interest programs did the worst job of accurately reporting health news.

Takeaway: Stay skeptical, especially when it sounds too good, or too bad, to be true.



BPA: Theory v. Empiricism

Tonight the NYTimes has an article on bisphenol A (BPA) headlined: "In Feast of Data on BPA Plastic, No Final Answer". It's a very good overview of the current status of the research on BPA; which is to say that despite the hyperbolic claims of some there's precious little evidence that BPA causes harm in humans.

Is there a tsunami of literature showing that on a molecular level cells sometimes adapt to BPA? Sure. Is there evidence that these changes lead to any harm? Not so much.

So what's going on? What's going on is a very public battle between theory, hatched from conventional thinking about what constitutes an endocrine disruptor and how it should work, and what we witness in the world - i.e. dramatically decreased morbidity and mortality among inhabitants of modern economies exposed to lots of BPA - and no evidence of an excess of morbidity or mortality for those exposed to more, rather than less, BPA.

Much is at stake in this contest. Will the effort to turn scientific research into a sort of blue collar affair in which statistical tools supplant genius be successful? Or will modern research efforts, bounded by old paradigms, be overcome by the simple collection of observations and the unanticipated spark of induction that leads us to a wholly new understanding? It's part of a larger political battle and this time everyone's pulling out all the stops. One side or the other will lose and lose big. Moneywise I'm long on the empiricists.

Koch's Postulates Revised

Whenever epidemiological data is used to support a claim of causation in a toxic tort case a fight over causal inference invariably erupts, and for the last twenty years or so that has meant an argument about whether Sir A. B. Hill's so-called causal criteria have been met. Long before Hill tried to prove that smoking causes lung cancer, Robert Koch tried to find a defensible argument for the claim that microbes were the cause of anthrax. What he came up with are known as Koch's postulates and they've been around for well over one hundred years. Now, for a world in which often only bits of long-gone pathogens remain, an attempt to update the postulates has been published.

In "Microbe Hunting" the author summarizes the problem of causal attribution when the responsible bacteria can't be cultured or even found and when their role in disease is the result of an incredibly complex interplay between host, uncounted trillions of microbes and the external environment. He discusses the methods for teasing out pathogens from dense webs of causation and proposes a refined set of causal criteria. It's well worth reading because if you do mass torts you'll be dealing with this issue for years to come.

"[O]ur Old Assumptions About Toxicants and How They Affect Our Bodies Are Being Changed ..."

There's a remarkable but necessary admission in this month's Environmental Health Perspectives. It is that a new (some would say old) paradigm has emerged; that pathogens, sometimes in concert with what for 40 years have been known as toxicants, are responsible for a very large portion of human suffering. Unable to deny any longer that diseases of nature inflict a staggering toll on humanity the "NIEHS Office of the Director will be working with division leaders to develop an initiative on infectious disease and environmental health—to incorporate infectious disease into the toxicological paradigm."

The editorial points to "A Niche for Infectious Disease in Environmental Health: Rethinking the Toxicological Paradigm" just published in the same journal. It's a call for the study of infectious diseases in environmental health research. Ultimately it's a recognition that the simple (and simplistic) models of many diseases are collapsing under the weight of modern microbiology. It's an admission that "the complexity of real-world exposures and multifactorial health outcomes" cannot be captured by the simple one-to-one associations that ruled environmental health research for the past four decades.

Years ago real insight, real genius (at least when it came to environmental illness) was replaced by a sort of blue collar approach to science in which grotesquely simplified statistical data dredges could be automated so that a never ending stream of putative causes of human suffering could be manufactured, studied and regulated. Some of the techniques were so malleable that clever researchers could not only manufacture causes, they could also decide in advance what the causes would be. Now the real causes are being uncovered and it often turns out that our ancient enemies, pathogens, were to blame all along.

The long war continues, but now the scales that covered environmental science's eyes for decades are falling.


How Does One Defectively Design a Molecule?

In Lance v. Wyeth the plaintiff alleged, among other things, that defendant had somehow negligently designed the molecule dexfenfluramine and that such negligent design was the proximate cause of her daughter's death. The defendant however prevailed on its motion for summary judgment having demonstrated to the trial court that plaintiff did not have any cognizable product liability claims under Pennsylvania product liability law as it applies to prescription drug cases. On appeal the trial court's rulings were affirmed one by one until the appellate court came to the negligent design claim.

The court held "that notwithstanding comment k in section 402A Restatement (Second) of Torts, this claim is actionable under Pennsylvania law." Noting that "a strict liability design defect claim is distinct from a negligence design defect claim" the court examined section 395 (negligent design of products) and found no exemption for unavoidably unsafe products like the one for things like pharmaceuticals found in comment k of 402A. Accordingly, the appellate court determined that "Appellant's negligent design claim is not precluded by comment k, and is a valid cause of action upon which relief may be granted. The trial court thus erred in entering summary judgment in favor of Wyeth on Appellant's negligent design defect claim."

In arriving at its conclusion the court relied on just three cases. For the proposition that plaintiff can lose on a strict liability design defect claim yet prevail on a negligent design defect claim the court cites Phillips v. Cricket Lighters. In that case though the court held that on the strict liability side the question went to whether the lighter was safe for the intended user, in that case an adult, whereas on the negligence side the question was whether the lighter's lack of a safety feature was a design defect given the fact that an unintended but foreseeable user, a child, started the fatal fire. In Lance, of course, the person for whom the drug was intended and its user were one and the same so I don't see the point.

For the proposition that their reasoning applies to pharmaceuticals the court pointed to Toner v. Lederle Labs. In Toner though the question wasn't whether a molecule like dexfenfluramine was defectively designed; the question was whether making a vaccine out of a whole library of bacterial molecules, most, necessarily, of unknown nature and function, could have been done more safely by using just a few bits of B. pertussis rather than the whole cell. Ok, but that's mixing apples and sangria.

Finally, for further support for the proposition that you can lose a strict liability design defect claim and yet win on a negligent design defect claim for the same product the court points to Artiglio v. Superior Court. Artiglio was a petition for writ action seeking a reversal of a summary judgment in favor of defendant on plaintiff's strict liability claim in a breast implant case. Plaintiff's petition for writ was denied; the appellate court finding that "the entire category of medical implants available only by resort to the services of a physician are immune from design defect strict liability." The problem with Artiglio and of the case to which it points, Brown v. Superior Court, is that they appear to stand for the proposition that in a prescription drug case there's really no such thing, at least under California law, as a strict liability defective design case - not that it's somehow different and so equally actionable.

Putting all that aside for the moment, what would it mean to defectively design the small organic molecule that is the essence of any modern medicine?

Surely it doesn't mean that the molecule ought to have been made up of different atoms. If that were the case then it necessarily wouldn't have been the same drug - we're not talking alternate design; we're talking alternate universe. Even in Beaumont twenty years ago the judges tossed out plaintiffs' claim that benzene ought to have been designed with all of its useful properties but with a methyl group in place of one of its hydrogens. 

Maybe the claim could have to do with how the atoms are arranged in the molecule. It's true that some organic molecules can be made in two forms akin to our right and left hands and that each form may have different biological properties. Thus if say, to use a trivial example, D-glucose (right handed) was safe and effective and L-glucose (left handed) was unsafe though effective you might have something to talk about if you'd taken the L- variety. Such a circumstance is mainly hypothetical and the typical finding is that one form is ineffective so that it's never consciously made in the first place leaving at best a manufacturing defect claim, if anything.

Perhaps the claim goes to the design process itself. Drug design has gone from trial-and-error to a find it and bind it approach to mindbogglingly complex efforts involving huge numbers of simulations run on supercomputers. But if something's wrong with the approach then the complaint, again, is not with the product consumed but with the failure to discover a different and better product. Something akin to a loss of chance claim perhaps. Surely the courts won't impose a duty of scientific discovery on companies. If they do they could at least solve the problem of induction first.

All in all it's hard to see how this new negligent design defect claim changes things. Really, what can it possibly mean to say that someone defectively designed something she didn't design, or to say that she defectively designed one thing because she didn't design it like something completely different? Can a car be found defectively designed because it's not an airplane? I hope not.

How Quickly Do Bacteria Evolve? Lots Faster Than You Thought.

It has generally been assumed that bacteria evolve to overcome host defenses at a rate much slower than that of viruses. That assumption appears to be wrong. Helicobacter pylori, a cause of multiple cancers in humans, mutates at a rate similar to many viruses. See: "Microevolution of Helicobacter pylori During Prolonged Infection of Single Hosts and Within Families".

Just how fast can bacteria evolve? Would you be surprised to learn that within the lifetime of mice, bred to be free of gastrointestinal bacteria, bacteria from drinking water can not only set up shop and colonize their hosts' guts but evolve into whole new species? If you are surprised, read: "Bacteria From Drinking Water Supply and Their Fate in Gastrointestinal Tracts of Germ-Free Mice: A Phylogenetic Comparison Study".

The long war continues, of course; and while we've been basking in our assumed victory over them for the last forty years our ancient enemies, descended from a long line of brilliant sappers, have been hard at work.

Experts Are Usually Wrong and Peer Review Makes Things Worse. Now What?

We've previously reported on the finding that when it comes to causal associations experts typically get it wrong. From the mass torts' perspective the problem seems to be that courts tend to confuse frequentist data with Bayesian degrees of belief - and thus did I get to see the late and truly very great trial lawyer John O'Quinn once convince a judge that any study showing an association that had a p-value less than .5 ought to be admissible and supportive of a verdict under Texas' "more likely than not" civil evidentiary standard. Armed with such a lax rule plaintiffs' experts ran wild. Heck, even today with a .05 threshold they are free to opine about causal associations that almost certainly do not exist.

The ability to manufacture health scares clearly serves a variety of political and economic interests but what to do about it? Doesn't peer review help? Not really. As long as the standard for statistical significance is low the odds are that even the most rigorous journals will publish more junk than science.
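The claim that rigorous journals will publish "more junk than science" at conventional thresholds can be made concrete with a back-of-the-envelope false-discovery calculation: when true causal associations are rare among the hypotheses tested, the false positives thrown off by the many false hypotheses swamp the true detections. A minimal sketch (the prior probability, study power, and alpha below are illustrative assumptions, not figures from the post):

```python
# Why a 0.05 significance threshold can yield mostly false positives:
# if only a small fraction of tested hypotheses are true, false positives
# from the many false hypotheses outnumber the true detections.
# All numbers below are illustrative assumptions.
def expected_false_discovery_rate(prior_true: float, power: float, alpha: float) -> float:
    true_positives = prior_true * power          # real effects correctly detected
    false_positives = (1 - prior_true) * alpha   # null effects crossing p < alpha
    return false_positives / (true_positives + false_positives)

# Suppose 1 in 100 tested associations is real and studies have 80% power:
fdr = expected_false_discovery_rate(prior_true=0.01, power=0.80, alpha=0.05)
print(round(fdr, 2))  # 0.86 -- most "significant" results are false
```

On these assumptions roughly six of every seven "statistically significant" findings are spurious, which is the arithmetic behind the data-dredging worry: peer review vets the method, not the prior odds that the hypothesis was true in the first place.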

What about getting rid of peer review altogether? It's coming. In the future papers will be dipped in the acid bath of online skepticism and their ultimate worth will be determined by whether they survive the test. No longer will partisans, skirmishing behind the scenes, determine what gets published and what doesn't. See: "Scholars Test Web Alternative to Peer Review".

Saying "Sorry"

Out of thousands that I've handled, in only one toxic tort case did I think my client actually had done anything wrong. The demand was the same as usual, of course; cancer cases are valued on the basis of type and damages and, to a lesser extent, causation - not fault. So what happened at the mediation of that case was completely unexpected.

After the usual presentations by the lawyers my client representative asked to address the widow directly. She rose and, as best as I can recall, said that the company had failed to fulfill its promise to one who worked for it, said we're better than that, and then said "on behalf of those who founded this company and those who work there today I apologize to you and promise this will not happen again". The widow and my representative, both crying, then hugged.

The mediator suggested a break and that we retire to our separate rooms. Shortly thereafter the mediator knocked on our door and upon entering announced that he had a demand we couldn't refuse. The demand was indeed within our authority though at the high end. We accepted on the spot, my representative saying that to negotiate further would be faithless. Plaintiff's counsel was furious and said that day, and has said several times since, that he'd never bring his client to a mediation with that defendant again.

I'm not sure exactly why the apology worked so well. I've seen other apologies. Most did nothing to alter the parties' positions and a couple produced fireworks that made the cases impossible to settle. So if an apology is in order something about its content and how it's delivered will determine its success. But what?

Apparently you cannot craft a good apology unless you understand the plaintiff's own narrative of the story of her life. Once the "components" of your apology fit the apology predicted by that narrative you have a chance at forgiveness. Otherwise, apparently you're wasting your breath. See "When Apologies Work: How Matching Apology Components to Victims' Self-Construals Facilitates Forgiveness". Hat tip: Mind Hacks


Alzheimer's and the Confounding Arrow of Causation

Back in March we reported on the finding that beta amyloid, which accumulates in the brains of Alzheimer's patients, is a potent antibiotic; thereby generating the hypothesis that its presence wasn't the result of a malfunction but rather was an effect of an immune response to a chronic brain infection. If that's really the case, getting rid of it might not be the way to go.

Today Gina Kolata of the NYTimes is reporting on the dramatic failure of a drug designed to reduce the amount of beta amyloid in the brain. It's not that it didn't reduce beta amyloid; it did. The problem is that it made the subjects' Alzheimer's worse; and the more they took the more their conditions worsened.

None of this proves that the real killer is a chronic fungal infection which is slowed down by beta amyloid but it does prove that when it comes to biomarkers the "which is cause and which effect" trap can ensnare the brightest and the savviest.

Nano Nannies

The New York Times has an excellent write up of the new article "Human Milk Glycobiome and its Impact on the Infant Gastrointestinal Microbiota" in today's Science section. Two profoundly important points are made both by the write up and the article; they are (1) that we are each individually a "we", an emergent organism dependent upon billions of organisms within us to promote and to maintain good health; and, (2) just because we don't know what something does that doesn't mean it does nothing - it just means we haven't thought about it hard enough.

It turns out that mother's milk not only nourishes her baby, it also selectively recruits and thereafter employs (with payment made in special milk sugars that baby can't digest) billions of tiny bacteria which earn their keep by aiding digestion, protecting epithelial cells and assisting the immune system in its never ending war against pathogens. And what of the other forms of milk sugars whose purposes have yet to be elucidated? The search for whatever they're feeding is now on in earnest.

As for the now refuted claim that the indigestible (at least by baby) milk sugars serve no purpose it's important to recognize just how many such claims are falling these days. It was a big deal three years ago when the ENCODE researchers announced "You know those non-coding DNA segments? You know, the flotsam and jetsam of millions of years of evolution that doesn't really do anything? Well, guess what?" These days stories about aspects of our selves once thought to be vestigial shadows of distant ancestors turning out instead to be critical determinants of health are common. As a result, we live in a time of great promise but also of great uncertainty.

So what does this have to do with mass torts? First, there's the issue of causality. The task of unraveling causation becomes fantastically complex once we understand the role that genetics, epigenetics, diet and social interactions, including swapping bacteria, play in disease. Second, there's the matter of what bacteria we host unconsciously due to the food that we eat and the impact on our health of the bacteria we will consciously choose to populate our bodies when we consume probiotics. So far the news on probiotics is quite promising but it must always be borne in mind that we are at the beginning of a great revolution whereby we come to understand ourselves as superorganisms and the complex interactions of genetics, epigenetics, diet, social networks and environment (and all that entails, known, unknown and unknown unknowns) that make us what we are. As with most revolutions, because of the uncertainty they bring, it's best to expect the unexpected.

PM2.5: Diabetes, Heart Attack, Lung Cancer, Premature Death, etc

Etc indeed. The list of maladies laid at the feet of inhaled particulate matter smaller than 2.5 micrometers (thus PM2.5) is long and growing. You can add diabetes to the list thanks to "Association Between Fine Particulate Matter and Diabetes Prevalence in the United States" and lung function deficits in early childhood too ("Effect of Prenatal Exposure to Fine Particulate Matter on Ventilatory Lung Function of Preschool Children of Non-Smoking Mothers").

Is it a certain type of ultrafine particle that's responsible? Some studies say yes and others say no. In vitro toxicity testing tends to suggest that altered function is due simply to particle size while epidemiology studies tend to cast blame on one sort of particle rather than another though the findings vary from study to study and often conflict (a common problem when looking for weak effect associations). Do the observed effects meet the so-called specificity criterion for causal inference? At first the reported ill effects of exposure were said to be cardiovascular but now everything's in play especially since several studies have linked PM2.5 and Premature Death - All Causes.

So, is PM2.5 a universal toxicant and among the leading causes of death? Or could it be that people who live in urban areas with higher PM2.5 levels tend to have higher rates of unhealthy living? Is there anything good to be said about PM2.5? For example, why do farmers, who are often exposed to high levels of PM2.5, especially from endotoxins (think bits of bacteria), often have lifelong protection from many allergies that afflict those exposed to lower doses?

There's also the question of what's to be done about PM2.5. Farmers produce lots of it what with their gravel roads, grain bins, diesel tractors and plowed fields. EPA intends to regulate PM2.5 down on the farm, and much more strictly than in the past, but at what cost? And for those who don't like cost-benefit analyses, what if the changes needed to reduce farm PM2.5 simply cause the generation of ultrafine dust to be shifted elsewhere, and to increase markedly? See "The Environmental Cost of Reducing Agricultural Fine Particulate Matter Emissions".

Finally, of course, there's the issue of who shall be sued. The finding that a speck of cotton dust from your shirt is as toxic (or as non-toxic, depending on how things shake out) as soot from combusted diesel fuel is an obvious impediment to the diesel litigation; plus there's a new study of truck drivers demonstrating that their presumed PM2.5 mortality may not be due to their work but rather can be, at least in part, explained by ultrafine dust exposures in and around the home: "Long-Term Ambient Multi-Pollutant Exposures and Mortality". Efforts to target other deep pockets will have to wait until science produces more definitive answers about what's to blame and how it can be determined that the PM2.5 in question was the cause in fact of the plaintiff's demise - likely an impossible task since causation in such circumstances is almost certainly the result of a constellation of factors; a constellation to be explored by something called eco-epidemiology. More on that another day.

Arrogant, Overconfident or Just Like Everybody Else?

Overcoming Bias and Barking Up The Wrong Tree are both commenting about the recent article "Insightful or Wishful: Lawyers' Ability to Predict Case Outcomes". Four hundred eighty-one lawyers were queried about their expectations for one of their cases set for trial. After all was said and done the actual outcomes were compared to the trial lawyers' ex ante predictions and they didn't do too well; not even the ones who'd been out for years. So what gives?

Do lawyers overpromise in order to attract business, as Robin Hanson suspects? I'm sure that's sometimes the case. I once had a front row seat, as it were, that afforded me the opportunity to witness (and my client, formerly the target defendant, to benefit from) another firm's Senior Trial Partner's first jury trial (and believe me his client had to push him to the courthouse kicking and screaming). He was played like a fiddle by the plaintiffs' attorney and by the time he read his closing from across the room he looked like a cornered animal. The verdict was huge.

Yet I don't think overpromising by underperforming trial lawyers accounts for most of the poor predictions. Rather, they likely fell prey to all too common faulty heuristics. They've spent weeks building a narrative that fits the facts and the law and the story ends happily ever after for their clients. And the bad guys' story? They've spent as much time or more poking holes in it so that for every doubt imagined a counterpoint leaps effortlessly from memory. Why isn't this just the usual sort of programmed reasoning exercise that makes most people poor prognosticators?

Finally, how are cases resolved? If settled isn't it usually on terms that nobody's especially thrilled about? And if they're not settled the distribution of outcomes, at least in my experience, tends to be U-shaped with a bunch of $0 verdicts on the left and a bunch of "ring the bell" verdicts on the right and not too many "kiss your sister" verdicts in between. I had a young lawyer recently tell me what he estimated the expected value of his case ought to be. The problem, of course, is that once the bell rings twice (or whatever jurors do in your jurisdiction to signal they've got a verdict) you're about to be, like the owner of Schrödinger's cat, either really really happy or really really sad. When the possible is collapsing into the certain why should it be a surprise that lawyers usually get it wrong when the odds of getting it right are against them?
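The young lawyer's problem can be put in numbers: with a U-shaped verdict distribution, the expected value sits in the empty middle where almost no actual verdicts land, so it predicts the average of many cases but nearly none in particular. A minimal sketch (all figures are hypothetical, chosen only to illustrate the shape of the problem):

```python
# With a bimodal (U-shaped) verdict distribution the expected value falls
# where verdicts almost never do. All figures below are hypothetical.
p_defense = 0.6            # chance of a $0 defense verdict
big_verdict = 10_000_000   # the "ring the bell" plaintiff outcome

# The weighted average of the two outcomes:
expected_value = p_defense * 0 + (1 - p_defense) * big_verdict
print(expected_value)  # 4000000.0 -- a number the jury will almost never return
```

The mean is $4 million, yet the realized outcome is $0 more often than not and $10 million otherwise; valuing the single case at its mean guarantees a "wrong" prediction either way.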

What's Behind the Rise in Food Allergies?

The incidence of food allergies in children is rising. Wheat, milk, egg, fish, peanut, walnut, shellfish and soy allergies have led to recalls of pork, turkey, cream of wheat, mushroom soup, roast beef, ice cream and corn pasta in recent months. What's behind the increase in allergies?

There are at least two good hypotheses for which there's sound evidence. First, despite what our pediatrician told us, it's probably a good idea to introduce babies to e.g. cow milk sooner rather than later (see "Early Exposure to Cow's Milk Protein is Protective Against IgE-Mediated Cow's Milk Protein Allergy") and make sure they get a large enough dose to produce tolerance as it's apparently the low doses of say peanuts that lead to sensitization and allergy (see e.g. "Peanut Sensitization and Allergy: Influence of Early Life Exposure to Peanuts").

The second emerging hypothesis, another of the increasingly common "Grandma was right" sort of ideas, is that lack of sunshine is also responsible for the rise in food allergies in children. It turns out that vitamin D is crucial to a properly functioning immune system and without it you wind up with a gut full of the wrong sorts of bacteria behaving badly. (See "Potential Mechanisms for the Hypothesized Link Between Sunshine, Vitamin D, and Food Allergy in Children" and "The Role of the Gut Mucosal Immunity in the Development of Tolerance Versus Development of Allergy to Food").

And while we're on the topic of the consequences of this needless epidemic of vitamin D deficiency in the U.S. due to four decades of anti-sun/anti-reason activism you should also read: "North-South Differences in U.S. Emergency Department Visits for Acute Allergic Reactions", "Are Active Sun Exposure Habits Related to Lowering Risk of Type 2 Diabetes Mellitus in Women, a Prospective Cohort Study?" and "Vitamin D and Risk of Cognitive Decline in Elderly Persons" along with "Vitamin D: A Place in the Sun?" and "Vitamin D in Asthma: Panacea or True Promise?"

The takeaway here is that by overreacting to rare and likely uncontrollable risks we've been stampeded right into far more common and otherwise avoidable risks. So who should be liable to all of the eggshell plaintiffs manufactured out of junk science? Should it be the companies whose products would not have caused harm but for the activists or should the activists be called to account for what they have done?

"You Got Your Science In My Activism!" "You Got Your Activism In My Science!"

Yuck! Tastes like CBPR.

CBPR? I hadn't heard of it either; not until this week anyway. It's "community-based participatory research" and the idea is to get activists and scientists together to do research into things like the cause of higher than average breast cancer rates in some high net worth communities. The activists want the scientists to prove that pesticides are causing breast cancer in their communities; and so the scientists promptly set out to falsify the activists' claims. Why wouldn't it turn out well?

The answer can be found in "A Review of Advocate/Scientist Collaboration in Federal Environmental Breast Cancer Research". It turns out that "effective" CBPR requires a different sort of "inquiry paradigm." You see "[t]he positivist paradigm remains dominant in much scientific research, emphasizing objective knowledge that is separate from the knower and can only be uncovered through a scientific method of inquiry that is neutral and bias-free." "CBPR challenges this paradigm by contextualizing scientific research within particular communities, including and legitimizing advocates' knowledge, understandings, and priorities regarding issues by which they are personally affected." (Ibid at pg 17).

So researchers, schooled in the long history of how biases, prejudices and failures to challenge closely held beliefs have thwarted science and medicine in the past, are to drop everything they hold dear? Or is it that advocates are to drop their beliefs, acquiesce to all the money they lobbied for being spent on an effort to falsify those beliefs; and, after seeing them falsified, to say "Nevermind", take the hit to their reputations and set about constructing a new narrative for their lives? The former rejects the scientific method; the latter, human nature. 

Twenty Suspected Carcinogens

The American Cancer Society is calling for new research to settle the issue of whether or not twenty different agents do indeed cause the types of cancer in which they've been implicated. The twenty are:

(1) Lead and lead compounds; (2) indium phosphide (used in many flat screen TVs); (3) cobalt with tungsten carbide; (4) titanium dioxide; (5) welding fumes; (6) refractory ceramic fibers; (7) diesel exhaust; (8) carbon black; (9) styrene oxide and styrene; (10) propylene oxide; (11) formaldehyde (does it cause leukemia?); (12) acetaldehyde; (13) methylene chloride; (14) trichloroethylene; (15) tetrachloroethylene; (16) chloroform; (17) PCBs; (18) DEHP (a phthalate); (19) atrazine (a herbicide and the subject of a coordinated attack by various activist groups resulting in a new EPA review); and (20) shift work (the presumed exposure being "light at night" leading to a disruption of circadian rhythms and the most commonly associated malignancy being breast cancer).

You can find the press release here: Report Outlines Knowledge Gaps for 20 Suspected Carcinogens. And you can find the IARC report here: Identification of Research Needs to Resolve the Carcinogenicity of High-Priority IARC Carcinogens. The latter summarizes the past rationale for assigning these suspected carcinogens to Groups 2A - 3, the new evidence forming the basis for the recommendation that their status be updated, and the sorts of epidemiological and mechanistic studies necessary to answer the question of whether they ought to be added to the list of 107 Group 1 agents known to be carcinogenic to humans.

FDA Panel Votes to Keep Avandia Available for Type II Diabetes Patients

Despite a steady drip drip drip of out-of-context memos, disconnected snippets of depositions and dire predictions that rose to a torrent last night, the FDA's panel of experts today voted 20 - 13 to keep Avandia on the market, though, most suggested, with more warnings. More significantly, despite vicious ad hominem attacks on respected academics and physicians, the panel also voted to complete the TIDE study - one of those "gold standard" studies that should in 2015 produce at last some definitive data on the questions of whether Avandia poses an unacceptable risk for diabetics and whether it produces a better outcome than a competitor.

Of all the objectives of those who lead such attacks, their effort to declare science settled and to outlaw further experiments that might falsify their claims is the most frightening. On the other hand, when you consider what happened with the breast implant litigation you have to understand that for those who fed The New York Times stray documents and quotes from discovery, waiting for the truth would only put their businesses at risk. Fama, malum qua non aliud velocius ullum ("Rumor, an evil than which nothing is swifter").


The Doctor Doth Protest Too Much, Methinks

Today The New York Times, which dutifully fanned the flames of the 2007 "prescription drug crisis" started by those pushing for greater FDA powers and fewer new drugs, published "Caustic Government Report Deals Blow to Diabetes Drug". In essence it reports Dr. Thomas Marciniak's criticism of the RECORD study which in 2007 led the FDA, despite congressional histrionics, to vote 22 - 1 to keep Avandia on the market. What the NYTimes is talking about is this.

What sets off the alarms, to a mass tort lawyer anyway, is slide 22. Why, asks the good doctor, should you believe his numbers? After all he declares that he has "nothing to hide" and that "[n]either my job nor (for me) $100,000,000's are riding on the results." The other slides evidence an effort to dig, but not too much, into the data and upon finding seeming errors to imply, without saying so, that the manufacturer somehow managed to beguile honest researchers from around the world into signing off on bad science. It's the kind of drama you'd expect to see from a certain sort of expert witness testifying at the courthouse in Jefferson County; but hardly the sort of presentation typical of scientific gatherings. Then again the FDA has been hyper-politicized so maybe this is the new normal.

Anyway, the implication that this is the "Government Report" is highly misleading. It is in fact but one of many government reports (if by that we agree to mean presentations generated by government employees/contractors). Indeed, another "Government Report" addresses Dr. Marciniak's claims. You'll find it here.

Dr. Marciniak did the easy thing. Post hoc he rummaged around for evidence of errors that would undermine the RECORD study. When examining the outcomes of thousands of people based on many times that number of documents he found a few seeming inconsistencies. Anyone who does mass tort litigation knows that if a bad data point or two were enough to refute any study then we wouldn't have much to talk about down at the courthouse.

What's also interesting is the fact that even throwing in the extra assumed heart attack episodes (including the ones for which there were no biomarkers confirming same) the study still only shows a small increase (1.38) whose confidence interval (0.99 - 1.93) crosses 1.0 - which is to say, one that is not statistically significant at all - and still it does not support the Nissen hypothesis (to say nothing of the study of 227,000 Medicare patients that rejects it).
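For the arithmetically inclined, you can check that for yourself. Given only a reported ratio and its 95% confidence interval, the approximate p-value can be recovered on the log scale - a standard back-of-the-envelope method. A quick sketch in Python, using the figures quoted above:

```python
import math

def p_from_ratio_ci(ratio, lo, hi):
    """Approximate two-sided p-value from a ratio (RR, OR, or HR) and its
    95% confidence interval, working on the log scale where the estimate
    is roughly normal."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE of ln(ratio)
    z = abs(math.log(ratio)) / se                     # test statistic
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# The figures quoted above: 1.38, 95% CI 0.99 - 1.93
print(f"p = {p_from_ratio_ci(1.38, 0.99, 1.93):.3f}")  # just north of 0.05
```

Run it and you get a p-value just over 0.05 - not statistically significant by the usual convention, no matter how the slide deck spins it.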

But most interesting of all is the data on the only question we really, ultimately, want answered. Do the people taking the medication live longer, or die sooner, than those who don't? By that measure, the protesting doctor's numbers still show that patients on Avandia were 14% less likely to die than those who weren't. By this most critical measure, even considering the data cherry picking of Dr. Marciniak, the results for Avandia are "reassuring" - according to that other government report.

Data Dredging For The Masses

Part of what Congress passed in response to the perceived prescription drug crisis of 2007, Title IX, Section 921 of the Food and Drug Administration Amendments Act of 2007 (FDAAA) (121 Stat. 962), includes a directive that the FDA "conduct regular, bi-weekly screening of the Adverse Event Reporting System [AERS] database and post a quarterly report on the Adverse Event Reporting System Web site of any new safety information or potential signal of a serious risk identified by Adverse Event Reporting System within the last quarter."  A single sufficiently serious adverse event may be enough to constitute a "potential signal".  The number of potential signals being detected is increasing rapidly, and on the most recent quarterly Potential Signals report thirteen additional drugs have been listed, including such widely prescribed medications as Zithromax (the highly effective Z-pack) and Premarin, for producing signals of serious risks of liver failure and angioedema, respectively.
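To get a feel for how little a "potential signal" actually says, consider the proportional reporting ratio (PRR), one of the standard disproportionality screens used in pharmacovigilance (whether it is the particular statistic FDA applies to AERS each quarter, I don't know). It is nothing but a ratio of raw report counts; the counts below are made up for illustration:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio for one drug-event pair.

    a: reports of the event of interest for the drug of interest
    b: reports of all other events for that drug
    c: reports of the event for all other drugs
    d: reports of all other events for all other drugs
    """
    return (a / (a + b)) / (c / (c + d))

# Made-up counts: 12 liver-failure reports among 4,000 reports for the drug,
# versus 900 among 1,200,000 reports for everything else in the database.
print(f"PRR = {prr(12, 3988, 900, 1199100):.1f}")  # a PRR above 2 is often flagged
```

Note what's missing: prescription volumes, confounders, reporting stimulated by publicity, and any test of causation whatsoever. A disproportion in raw counts gets flagged, and that's the whole "signal".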

What will be the impact of releasing what is essentially raw data divorced from any analysis of its meaning or discussion of the plausibility of what it implies?  In other words, what good will come from this sort of transparency?  Other than tempting the unwary to the wrong conclusion thanks to the post hoc ergo propter hoc logical fallacy it's hard to see any good coming of this; unless of course you're a personal injury lawyer.  Google "Zithromax lawyer" and you'll find that though the new potential signal report is only a few days old there are already numerous "Zithromax attorneys" anxious to represent you in your claim for liver failure.

At the end of the day it may well be that the so called prescription drug crisis that the Congress was responding to in 2007 was in fact merely a prophecy only now being fulfilled thanks to the AERS Potential Signals reporting system.


Avandia: Burn Her Anyway?

Three years ago Dr. Steven Nissen published "Effect of Rosiglitazone on the Risk of Myocardial Infarction and Death from Cardiovascular Causes" in which he reported that those taking Rosiglitazone (Avandia) had about a 40% increased risk of acute myocardial infarction (heart attack). The study was a meta-analysis of published and unpublished data and drew extensively from the manufacturer's unpublished data.

The media simultaneously reported the findings and propagated the following narrative: The pharmaceutical companies collect vast quantities of data; publish only what supports their products; and are at best willfully ignorant of the risks of their products, risks which are right before their eyes if only they would look. Dr. Nissen, who "has a statistician’s zeal for drilling deep into clinical data, seeking signs that some widely used drugs pose undisclosed risks to patients", was made the hero of the drama. Congress got involved by beating the drums of safety and transparency and demanding that the FDA pay more attention to ensuring that pharmaceuticals are safe.

The FDA reassessed Avandia and panel members voted 21-3 to keep it on the market.

In February of this year, a Congressional investigation conferred near-martyr status on Dr. Nissen after it was revealed that he had secretly tape recorded a meeting with the representatives of the maker of Avandia. The same month, in European Heart Journal, he published the editorial "The Rise and Fall of Rosiglitazone". Meanwhile everyone was waiting for the results of a huge study of Avandia.

Last week that study, "Risk of Acute Myocardial Infarction, Stroke, Heart Failure, and Death in Elderly Medicare Patients Treated with Rosiglitazone or Pioglitazone" (interestingly, for several days JAMA made you click through an ad for a competitor's type 2 diabetes product to get to the article) was published. And what was the risk of heart attack among those elderly patients on Avandia? Essentially no different than those on the older medication - a statistically insignificant 1.06 increase - especially so given the fact that almost three quarters of all type 2 diabetics wind up dying of heart disease in any event. So, was Avandia cleared? Nope. In fact, the calls for removing it from the market have grown even louder.

The new study, while rejecting the original claim that Avandia causes heart attacks, raised the hypotheses that Avandia causes congestive heart failure and stroke and increases the rate of mortality overall. Yet even though the number of patients involved in the study (their records were simply culled from Medicare databases) was large (227,000), the increases were relatively small, and by the time the increases appeared (after six to fifteen months of medication) only a small and rapidly decreasing fraction of the original cohort was actually still on one of the two medications.

At almost the same time Dr. Nissen published an updated meta-analysis ("Rosiglitazone Revisited: An Updated Meta-Analysis of Risk for Myocardial Infarction and Cardiovascular Mortality"  also free and also, though published in the Archives of Internal Medicine, only after an ad for the same competitor's type 2 diabetes product) that purports to show that Avandia increases the risk of heart attack but doesn't increase the risk of mortality. What? Hey, wait a second! How can ... So, siding with those advocating "safety over certainty" the New York Times quickly editorialized in favor of those demanding Avandia be removed from the market.

The LATimes went so far as to publish a piece calling for "an immediate moratorium on sales as soon as a credible study raises questions about safety." In that same piece the journalist was posed the following question by a researcher being interviewed: "Suppose a drug saves five people and kills one person. Do you keep it on the market?" His answer was "I know this: If that one person killed is my loved one - or yours - the answer is readily apparent."

Yikes. Really? If a data dredge shows any risk of a fatal outcome then the drug should be pulled from the market no matter how many people are killed in the process? We'll save for another day the question of how thinking about risk goes so badly astray. For now though consider how likely it is (yes likely - and in fact, if the number of endpoints examined are sufficiently large, how almost certain it is) that a risk will be found where none exists. Start with "Data Dredging, Bias, or Confounding: They Can All Get You Into the BMJ and the Friday Papers" and then for more on the perils of statistical deep drilling read "Your Intuitions Are Not Magic" and the links therein.

At the end of the day the issue isn't safety versus certainty. Claiming that pharmaceutical manufacturers are insisting upon certainty before warnings are issued or products are pulled is just a straw man argument; certainty about the statistical analysis of medical outcomes has never joined "death and taxes" on the short list of sure things. The real question is which course is less uncertain - making vital judgments on the basis of large randomized controlled trials like the one due out on Avandia in 2015, or on the basis of data dredges? The answer ought to be obvious.

But It's Peer Reviewed! That Counts for Something, Doesn't It?

The acquisition of science, which is to say knowledge, is an accretive process. Thus Isaac Newton saw further only because he stood on the shoulders of giants. So how much of today's peer reviewed science winds up being stood upon by those able to peer a bit further? Less than half, and likely a lot less than half.

If you have any doubt read "We Must Stop the Avalanche of Low-Quality Research" at The Chronicle of Higher Education. Don't stop there though. Be sure to read "All Those Worthless Papers" too. And what about those impact factors; don't they count? Yes, but not towards reaching the goal of truth. See: "Lost in Publication: How Measurement Harms Science".

There are so many peer reviewed journals around nowadays that almost anything can be published and then be claimed to have been peer reviewed. So how will we deal with the crush of junk science being published in peer reviewed journals? At this point I don't think anyone knows.





Stay Skeptical

If an article is peer reviewed, published in a decent journal and available for free from the National Institutes of Health you'd think there's a pretty good chance that the headline numbers recounted in it would be fairly accurate, right? Well then if you've read "A Brief Review of Silicosis in the United States", you'd likely believe that 94% of all death certificates in the United States listed silicosis as the underlying or contributing cause. On the other hand, you could stay skeptical and check the data at the U.S. Census Bureau.

It's always enlightening to go from advocacy site to advocacy site and to add up the number of bodies claimed by each and then compare the total to the actual number of recorded deaths. The cumulative total always significantly exceeds the actual number revealing that the activists are simultaneously fighting over corpses and overestimating the number of victims. Even so, the claim in this new silicosis paper that from 1966 to 2002 seventy-four million of a total of seventy-eight million U.S. deaths were due to silicosis has to take the cake.


California Declares Whooping Cough Epidemic

The California Department of Public Health (CDPH) is warning that the rate of pertussis (whooping cough) infection may be the worst in half a century. Five infants have died so far this year and 910 cases have been reported. The CDPH is urging parents to have their children immunized as soon as possible and reiterates that the vaccine "is safe for children and adults".

ABCNews is further reporting that while immunization rates in California are good overall the number of parents refusing to have their children immunized in some schools rises to 20% or more and in a few schools refusal rates exceed 70%.

Junk science like that peddled by activists who claim the DPT (P is for pertussis) vaccine causes autism isn't just a good way to wring money from companies, it's a killer.


A Little Perspective on the Macondo Well (Deepwater/BP) Oil Spill

While hanging out at the condo pool with the kids during my recent Destin vacation someone pointed at the Gulf and said woefully "Look. I think the oil is coming in". It was seaweed. The next day there was a stir down on the beach when what appeared to be a piece of charred wood washed up. Some guy on a 4-wheeler rode up, proclaimed it a piece of charred wood, and tossed it into a nearby trash can. Onlookers seemed more disappointed than relieved.

On the way back to Texas we stopped in Louisiana and talked to, among others, an acquaintance who runs a vacuum truck service. Business is good as his trucks and drivers are under contract 24/7. The only bad thing, he said, is that the work is boring as there's been little to vacuum up; less than a gallon. On the other hand, the media livened things up from time to time by shouting things like "Where are you hiding the oil?" and "Who ordered you not to talk to us?" at his workers.

So where's all the oil? More than 100 million gallons have bubbled up according to the higher estimates. That's a lot, right? It is, of course. But according to an AP report, it wouldn't fill the Superdome. And that's not subtracting the oil that has been burned, dispersed and eaten by microbes.

The point of this post is not to minimize the spill or its impact but rather to explain why oil isn't all over the coast. It's a big Gulf (6,000 trillion gallons of water) and a long coast (over 1,600 miles long - just along the U.S. portion).
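If you want to check the perspective for yourself, the arithmetic is short. The spill and Gulf figures below are the ones quoted above; the Superdome volume (about 125 million cubic feet) is a commonly cited figure I'm supplying, not one from the AP report:

```python
# The post's own numbers, run through the arithmetic:
spill_gallons = 100e6         # high-end spill estimate quoted above
gulf_gallons = 6_000e12       # "6,000 trillion gallons of water"
superdome_cubic_feet = 125e6  # commonly cited figure (my assumption, not the AP's)
gallons_per_cubic_foot = 7.48

spill_cubic_feet = spill_gallons / gallons_per_cubic_foot
print(f"spill: {spill_cubic_feet / 1e6:.1f} million cubic feet, "
      f"{100 * spill_cubic_feet / superdome_cubic_feet:.0f}% of the Superdome")
print(f"fraction of the Gulf: {spill_gallons / gulf_gallons:.1e}")
```

That works out to roughly a tenth of the Superdome, and a concentration on the order of seventeen parts per billion were the oil (counterfactually) mixed evenly through the Gulf.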


Vitamin D Deficiency and Multiple Sclerosis (MS): Who Pays?

In "The Lancet: Neurology" you'll find "Vitamin D and Multiple Sclerosis"  as well as "Vitamin D: Hope on the Horizon for MS Prevention?" Could it be that  the old mocked wisdom of "a healthy dose of sunshine" wasn't so silly after all? Could it be that the health panic precipitated by activists who demanded everyone stay out of the sun actually caused horrific and needless suffering? Could be.

I just got back from a vacation in Destin, Florida. While there I learned that there are people who cover every exposed surface of their kids with zinc oxide before letting them out in the sun. These parents think they're doing the right thing. Their kids seemed about as happy as I'd have been sent out into the world in a powder blue leisure suit. But while powder blue leisure suits don't cause rickets and MS, vitamin D deficiency does. When disease strikes will there be a viable cause of action against the scaremongers who caused it? It's an interesting question.

To Inform; But of What?

According to The New York Times, San Francisco has just passed a law requiring retailers to post readily visible information about the specific absorption rate (SAR) of the cell phones they're selling. This despite the fact that there's no credible evidence that cell phones pose a health risk in the first place, much less any evidence that dose (here SAR) has anything to do with the anxiety-producing phantom menace.

People have been given a meaningless yardstick by which to judge their cellular purchase. Expect the unscrupulous to make $$$ selling overpriced "low SAR" phones to their usual prey - people struggling with decision-making under conditions of uncertainty, buffeted by the hyperbolic claims of activists.


How, and Why, Do Some Bacteria Facilitate Cancer Metastasis?

When you ask a physician or researcher how bacteria cause and/or promote cancer usually the only answer you get is "inflammation" and some hand waving. It sort of makes sense. Lots of new and different stuff is going on, lots of new and different cells are running all around and lots of old cells are busily dividing and multiplying - surely a recipe for an accident.

But what if the bacteria are actively promoting the metastasis? That's the finding in "Bacteria Peptidoglycan Promoted Breast Cancer Cell Invasiveness and Adhesiveness by Targeting Toll-Like Receptor 2 in the Cancer Cells". Why, in the "what's in it for them" sense, would bacteria promote something that kills their host? Something to ponder over the weekend.

An Unusual Benzene/MDS Opinion

In Quillen v. Safety-Kleen Systems, Inc., 2010 WL 2044508 (E.D.Ky.) the court determined that plaintiff's expert, Dr. George Rogers, could properly attribute a case of myelodysplastic syndrome (MDS) to benzene by doing a differential diagnosis. That some courts have taken to using differential diagnosis to identify the root cause of, say, splenomegaly, rather than to distinguish histoplasmosis-induced splenomegaly from Hodgkin's disease-induced splenomegaly, would likely set many physicians' eyes rolling.  Yet that's apparently what the 6th Circuit said in Hardyman v. Norfolk & Western Railway Co., 243 F.3d 255 (6th Cir. 2001), and thus the thinking of the Quillen court.

The point of doing a differential diagnosis, of course, is to rule out possible causes until just one is left - it's a process of elimination. But just because every other cause of splenomegaly has been ruled out in the case of a male patient that doesn't mean that it makes sense to conclude that the cause must be the remaining possibility - a metastatic ovarian cancer. To be considered for elimination in the first place the putative cause has to be one that makes sense. In Quillen though there was no effort to demonstrate that plaintiff's experience with benzene was the sort that would make benzene a reasonably plausible cause of his MDS.

Finally, please ponder the following. In response to the defendant's objection that plaintiff's expert had not ruled out ionizing radiation  the court wrote: "Defendant points to nothing in the record demonstrating that Quillen was ever exposed to a statistically significant amount of such radiation." Somewhere an epidemiologist just fell out of her chair.

Something to Consider When Considering a State of the Art Defense

When putative cause and effect are perceived to be closer in time, the degree of belief in the alleged causal nexus increases. That's one way to look at the results of a new study on how people perceive causal relationships in time being reported at MindHacks. But how does that explain the ease with which many juries find a link between a decades-old exposure and a modern case of mesothelioma? Perhaps because too often defendants fail to adequately explain the long and difficult path from surmise and suspicion to confirmation of the link; and because they often fail to demonstrate (sometimes due to judges who prevent it) in a detailed time line all of the intervening and nowadays usually superseding causes.


The Strange Case of the Report of the President's Cancer Panel

By now you've surely heard or read about "Reducing Environmental Cancer Risk: What We Can Do Now", which is the title of the recently released annual report of the President's Cancer Panel. You've also likely read that the American Cancer Society, no laggard when it comes to stirring up anxiety about cancer, has criticized the report's unfounded claims of wanton devastation from heretofore largely unknown, and still yet to be identified, man-made and occasionally natural carcinogens. But have you read the report itself? You should; it's quite an eye opener.

First, there's the advocacy. There are calls for massive new regulatory schemes based on what the authors think the "precautionary principle" means - i.e. that everything with a CAS number is suspect and can't be trusted until it has been studied not just alone but in every conceivable combination and concentration with the other 80,000 chemicals in use today. There are calls for "environmental justice". There are calls for expensive home water filtering, the end of plastic food and beverage containers, the consumption of "organic" food, lots more "black box" epidemiology and, most importantly for the plaintiffs' bar, "an environmental health paradigm for long-latency diseases ... to enable regulatory action based on compelling animal and in vitro evidence before cause and effect in humans is established". That last bit is all about the emerging "low dose" theory, the newest attack on "the dose makes the poison", explained in the introduction of the report as "harmful effects that may occur only at very low doses".

Then there's the science, or rather, the argumentum ad ignorantiam. The document is largely free of objective data. The authors admit at the outset that cancer incidence and mortality is falling, that not much is known about the causes of cancer and that the evidence concerning the consequences of "cumulative lifetime exposure" to most environmental contaminants is unknown and unstudied. Yet because so much is unknown, because environmental cancers are "grossly under-recognized" and because they are "shattering" and "devastating" the lives of so many Americans, congress must act. And what sorts of things are imperiling our well being? Electricity, clean water, cell phones and juice bottles to name a few. When data does make an appearance it is often put to embarrassingly bad purposes. For example, one section implies the following: ionizing radiation causes cancer; electromagnetic fields are a form of radiation; therefore, electric wires, cell phones and Wi-Fi routers need to be avoided and regulated as carcinogens. 

Early on the authors set out their premise: "Most environmental hazards with the potential to raise cancer risk are the product of human activity ..." The claim is never supported (which is not surprising as it conflicts with what is known about the causes of cancer today) but it's the one consistent theme that runs throughout the paper and it thoroughly accounts for its conclusions. At the end there's a laundry list of chemicals and things to worry about; it's long enough to keep toxic tort lawyers and their experts (some of whom were interviewed by the panel) busy for decades. So it goes.


Frontline Looks at the Vaccine War

I hope you caught tonight's Frontline episode: "The Vaccine War". For "vaccine" you could substitute "Bendectin" or "silicone" or any number of products made the target of baseless health scares over the years and get pretty much the same story; only this time Frontline sticks to the science and lays bare the empty claims and deadly fruits of the opponents of childhood vaccination.


No Fracking in the Catskills

There won't be any drilling for natural gas in the Catskills watershed according to the NYTimes. Of interest to some will be how opposition to drilling was shaped and propagated. Who wouldn't be opposed to flaming kitchen faucets?


Beware of Graphical Representations of Risk Ratios, Odds Ratios and the Like

Would you be surprised to learn that the graphical representations of association (RR, HR, OR) are seriously flawed not just in some but in most of the peer-reviewed articles published in JAMA, The Lancet and NEJM in 2008? I certainly was. Read about it in The Lancet at "Graphical Presentation of Relative Measures of Association". Apparently, rather than improving communication an awful lot of graphics are actually "distorting what the data have to say".


Lies, Damned Lies, and P-Values

In "Odds Are, It's Wrong", Tom Siegfried lays out the argument for the proposition that much of what you read in the scientific literature is wrong because many of the claims being made rely on statistical significance. You see, an impressive sounding statement like "the association between exposure and disease was highly significant (P<0.05)" does NOT mean (a) that there's a 95% chance that the association is causal; (b) that the absence of an association can almost certainly be ruled out; nor does it necessarily mean that (c) the finding is momentous, compelling or even important. It doesn't even mean that if the test were repeated its results would likely hold. A P-value, the arbitrary judge of "statistical significance", won't, and can't, have anything to say about the likelihood that a given hypothesis is or is not true.
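Here's the arithmetic that makes Siegfried's point. Suppose - and these numbers are purely illustrative, not from his article - that only 1% of the hypotheses being tested are true, that studies have 80% power, and that "significance" is declared at P < 0.05:

```python
# Illustrative numbers, not from the article: of 10,000 hypotheses tested,
# 1% are actually true, studies have 80% power, and alpha is 0.05.
hypotheses = 10_000
true_share, power, alpha = 0.01, 0.80, 0.05

true_hits = hypotheses * true_share * power         # real effects detected
false_hits = hypotheses * (1 - true_share) * alpha  # nulls crossing P < 0.05 by chance
ppv = true_hits / (true_hits + false_hits)          # share of "hits" that are real

print(f"{true_hits:.0f} true positives, {false_hits:.0f} false positives")
print(f"chance a 'significant' finding is real: {ppv:.0%}")
```

Under those assumptions only about 14% of the "statistically significant" findings are real - a long way from the 95% that P < 0.05 seems to promise.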

The fact of the matter is that if you have a bunch of data and can't find at least one statistically significant association it only proves one thing - that you're not trying hard enough. The magical P-value level of 0.05 is nothing but a trade-off; a balancing act between finding associations that don't exist (false positives) and missing true associations that do (false negatives). As a result, false associations are not only possible, they're guaranteed when you have enough data and slice it enough ways.
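The "guaranteed" part is easy to demonstrate. Compare two identical populations on enough endpoints and the false positives arrive right on schedule - a simulation sketch (the endpoint and sample counts are arbitrary):

```python
import random

random.seed(1)  # for repeatability

def dredge(n_endpoints=200, n_per_group=50, z_cutoff=1.96):
    """Compare two identical (null) populations on many endpoints and count
    how many differences come out 'significant' at the 0.05 level by chance."""
    hits = 0
    for _ in range(n_endpoints):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        diff = sum(a) / n_per_group - sum(b) / n_per_group
        se = (2 / n_per_group) ** 0.5  # SE of the difference (known unit variance)
        if abs(diff / se) > z_cutoff:
            hits += 1
    return hits

print(dredge(), "of 200 truly null endpoints were 'significant'")  # expect about 10
```

With 200 truly null endpoints you expect roughly ten "significant" findings at the 0.05 level. Now imagine a database holding thousands of drug-outcome pairs.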

Now, lawyers are getting into the act. And while it's bad enough that "[a] lot of scientists don't understand statistics" (Steven Goodman quote from the "Odds Are, It's Wrong" article) it gets awful when lawyers try to deploy statistics to support or rebut claims. Law review articles are littered with claims resting on nothing more than small P-values. Some purport to show that certain appellate courts are biased against accident victims; others that tort reform is good for your health. And hardly a week goes by that I don't see a brief or a pleading asserting that Texas "jurisprudence" requires an epidemiological study with a risk ratio greater than 2 and a P<0.05 before a plaintiff can recover on a toxic tort claim. 

Apparently many lawyers, especially on the defense side, either forgot or never learned that it's easy to gin up false associations that meet the greater than 2 and less than 0.05 test. In fact, that's how most categories of toxic tort claims got started. Enshrining such a test in the law would turn out to be The Full Employment Act for toxic tort lawyers.
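For what it's worth, the "greater than 2" prong comes from the attributable-fraction arithmetic: taking a study at face value, the probability that an exposed plaintiff's disease is due to the exposure is (RR - 1)/RR, which crosses the "more likely than not" threshold of 50% exactly at RR = 2. A sketch of that arithmetic - the point being that a ginned-up RR satisfies the test just as easily as a real one:

```python
def probability_of_causation(rr):
    """Attributable fraction among the exposed: the share of exposed cases
    that would not have occurred absent the exposure, assuming the relative
    risk is real and unconfounded."""
    if rr <= 1:
        return 0.0
    return (rr - 1) / rr

for rr in (1.5, 2.0, 3.0):
    print(f"RR = {rr}: probability of causation = {probability_of_causation(rr):.0%}")
```

Note the assumption doing all the work: the relative risk must be real and unconfounded, which is exactly what a dredged association is not.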

Causal inference from epidemiological statistical analysis is a crude method that nevertheless worked well for finding big effects like that of smoking on lung cancer risk and amphibole exposure on mesothelioma risk. On more subtle effects though, at the population level or molecular level, reliance on 20th century methods has produced so much bad science of late (bad only because statistics are routinely misused and abused and not because statistics aren't powerfully effective tools when properly used) that new methods of causal analysis are beginning to replace them. And these tools can answer the question of "how likely is it that drug A caused injury B?"

To see what the future of causal proof in toxic torts will look like read: "An Introduction to Causal Inference" by Judea Pearl.

Complex Writing Makes You Look Stupid

That's the conclusion of just one of eight studies demonstrating the power of simplicity as summarized by Psyblog. ht Mindhacks



No Association Between Paint Fumes in the Home and Fetal Growth

See "Non-Occupational Exposure to Paint Fumes During Pregnancy and Fetal Growth in a General Population"

Though about half of the mothers surveyed said they'd been exposed to paint fumes in the home while they were pregnant, the data suggested that the more fumes they remembered being exposed to, the lower the risk that their baby would be underweight. What? This study probably says more about the use of interview data as a proxy for exposure than it does about the relationship being examined.

Long Term Smoking Significantly Reduces the Risk of Parkinson's Disease

A greater than 40% decrease in Parkinson's if you smoke more than 30 years? So it seems from this huge NIEHS study of 305,468 Americans. "Smoking Duration, Intensity, and Risk of Parkinson Disease".

Obviously the risk of getting lung cancer, emphysema, etc. from smoking is much higher. Still, if you knew you had the genes that protect you from lung disease (whatever they are) but not the ones that protect you from Parkinson's (whatever they are), would you smoke?


It Doesn't Seem Logical But It Does Seem To Be So

If people with Type 2 diabetes are at a greatly increased risk of heart disease, wouldn't it make sense to get their blood pressure and triglycerides down and their "good" cholesterol up? Quitting smoking and lowering "bad" cholesterol reduce the risk from very, very high to just very high, so attacking these other presumed risk factors should help, right? Besides, pushing systolic blood pressure down closer to normal would obviously yield some benefits. And there's no way it could hurt. Right?

It turns out that these interventions, implemented on the basis of reasoning and not rigorous studies, either do no good, do no good and cause side effects, or do no good and increase the risk of heart attack by 50%. Be sure to read about the just published data and the reaction to it in an excellent write up by Gina Kolata in The New York Times.

How could this be? Well, what if the things everyone thinks are causes of heart disease in diabetics are really just other effects of the real cause? Or, and this is where it really gets scary, what if what everyone thinks is a cause in need of eradication is in fact part of the body's defense mechanism against the real cause? For a discussion about how obesity may be just such a protective mechanism see "One of the Scourges of Modern Life May Have Been Profoundly Misunderstood" in The Economist's Science and Technology section.

The takeaway from all this can be found in the first article. While these treatments seemed logical (and as noted in the article, at every meeting "some academic" would always be going on about how elevated blood sugar after a meal was dangerous and had to be lowered until eventually doctors had put thousands of people on these treatments) it turned out they were instead dangerous and ineffective. That'll always be the danger when we attempt to deduce solutions based on just the known variables of a complex and only partially understood system.

Popular Beliefs About Bisphenol A Have Been Repeatedly Falsified, Yet the Controversy Continues. Why?

Claims that bisphenol A causes hormone disruption have been refuted again and again by large, independent studies the results of which have been published in peer reviewed papers. Yet, based on nothing more than an uninspired theory (that estrogen-like molecules ought to do what estrogen does) and a few, small, poorly controlled studies the results of which can't be reliably reproduced elsewhere, the effort to ban a product that prevents bacteria from infecting much of the food you consume continues to accelerate. How could this be?

You can find Richard M. Sharpe's answer in "Is It Time to End Concerns over the Estrogenic Effects of Bisphenol A?" published in the journal Toxicological Sciences (free access!).

Like the autism/vaccine and limb reduction/Bendectin controversies, the bisphenol A panic has spread like a virus. And if those past controversies are any guide it'll be several more years before civilization's immune response, empiricism, is able to bring us collectively back to our senses. In the meantime expect opportunistic infections to take advantage of the situation.

Hiding in Plain Sight

There are more than 18 million articles in PubMed and more are added in a day than you could hope to analyze in a month. Surely, if someone had the time to digest it all, new associations and patterns would emerge, suggesting new hypotheses and generating new knowledge. But how?

Here's an article available free at PLoS One setting out one possible solution: "A PubMed-Wide Associational Study of Infectious Diseases". In the paper, a sort of proof-of-concept effort, the authors demonstrate that by running focused text mining software (not just key word searches or tabulations of key word rankings) over more than half a million infectious disease articles they could not only uncover cumulative knowledge already confirmed but also generate new hypotheses from this "hidden public knowledge".
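For a sense of what the crudest version of such association mining looks like, here is a toy sketch (the abstracts, terms and counts below are invented for illustration; the study's actual methods were considerably more sophisticated):

```python
from collections import Counter
from itertools import combinations

# A toy stand-in for a corpus of article abstracts (the real study mined
# more than half a million infectious disease articles).
ABSTRACTS = [
    "helicobacter pylori infection and gastric lymphoma in adults",
    "gastric lymphoma regression after helicobacter pylori eradication",
    "influenza surveillance in the northern hemisphere",
    "helicobacter pylori and peptic ulcer disease",
]

# Terms whose co-occurrence we want to tally.
TERMS = ["helicobacter pylori", "gastric lymphoma", "influenza", "peptic ulcer"]

def cooccurrence_counts(abstracts, terms):
    """Count how often each pair of terms appears in the same abstract.
    Ranking the pairs surfaces candidate associations - and hypotheses."""
    pair_counts = Counter()
    for text in abstracts:
        present = sorted(t for t in terms if t in text)
        for pair in combinations(present, 2):
            pair_counts[pair] += 1
    return pair_counts

counts = cooccurrence_counts(ABSTRACTS, TERMS)
top_pair, top_count = counts.most_common(1)[0]
# top_pair: ("gastric lymphoma", "helicobacter pylori"), seen together twice
```

From such raw counts a real system would compute association scores and draw the kind of network maps shown in the article.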

Be sure to have a look at the associational network maps in the article. Then imagine what hidden relationships you might find if you could run similar software over the two million documents just produced in your case and the great demonstratives you could generate to prove them.


Will Your Jurors Decide the Case on Conduct or the Consequences of that Conduct?

Apparently it depends on where they fall along the powerful/powerless spectrum.

"In determining whether an act is right or wrong, the powerful focus on whether rules and principles are violated, whereas the powerless focus on the consequences." - from "How Power Influences Moral Thinking" published in the Journal of Personality and Social Psychology. Hat Tip: Barking Up the Wrong Tree.


You Know Those Mass Screenings for Prostate Cancer? Nevermind.

According to the Houston Chronicle the American Cancer Society has finally come to grips with mounting evidence that indiscriminate screening for prostate cancer causes more harm than good thanks to (a) the inevitable morbidity resulting from needless biopsies and surgeries due to false positive tests; (b) the realization that an awful lot of people who consider themselves "cancer survivors" would never have known they had cancer but for the screening test as their cancers would have gone away on their own or would have grown so slowly that they'd have died of something else before the prostate cancer became threatening; and, (c) the unfortunate fact that early detection, despite what everybody has been led to believe, does not mean that aggressive cancers can be cured - it just means that we get to be treated for them, and worry about them, longer.

Here's a link to the new screening recommendations: "Revised Prostate Cancer Screening Guidelines: What Has -- and Hasn't -- Changed"

Also of interest may be the readers' comments over at the Chronicle and elsewhere. Predictably there are two dominant camps. One sees this change as a nefarious plot by Big Pharma and Big Medicine to prevent early detection so that people must wait until they need more expensive medicines and surgeries. The other sees the new guidelines as a nefarious plot by Big Government to prevent early detection, thereby saving money on treatment and, by hastening the deaths of Americans, saving money on Social Security payments as the cherry on top. I've run across veniremen able to hold both views simultaneously. But that's a discussion for another day.

What, Exactly, is an "Untainted Juror"?

In "Finding Untainted Jurors in the Age of the Internet" The New York Times examines one of the issues raised by former Enron CEO Jeffrey Skilling in his appeal to the U.S. Supreme Court - whether someone, anyone, can get a fair trial in an age when jurors can download a torrent of information once they get back home from a long day of tedious and vexatious "trial". The article presents no solution but that is, I suspect, because there isn't one.

No one comes to jury duty untainted by life; there are no blank slates to be found among your venire. If they find something relevant online it's just going to be something that confirms what they already believed. Your job then, as always, is to uncover their worldview in voir dire and to thereafter present a narrative that accommodates both their perspective and your facts.


JAMA Names 2009 Peer Reviewers

Inquiring defense lawyers will be wondering what article(s) plaintiffs' expert Phil Landrigan refereed for JAMA.


Peer Review: A Good Way to Detect Flaws in Methodology or a Good Way to Silence Heresy?

Back in the day, when the war against junk science was being fought in earnest, the U.S. Supreme Court wrote the following about peer review: "submission to the scrutiny of the scientific community is a component of 'good science,' in part because it increases the likelihood that substantive flaws in methodology will be detected". Today of course we know that most research reported as scientific knowledge in the journals is false. Worse yet, it is becoming increasingly apparent that the peer review process has been hijacked and turned instead to the purpose of safeguarding the prevailing dogma so that "truly original findings may be delayed or rejected" while "[p]apers that are scientifically flawed or comprise only modest technical increments often attract undue profile". These recent observations confirm what I've suspected in the two years since deposing a notorious expert witness who once was regularly excluded as a peddler of junk but who is now a referee for a prominent journal: somehow, some way, the barbarians have become the gatekeepers.

The end may be approaching for peer review, however. Of peer review it has recently been written: "Peer review eludes the immune system of science since it has now been accepted by other bureaucracies as intrinsically valid, such that any residual individual decision-making (no matter how effective in real-world terms) is regarded as intrinsically unreliable (self-interested and corrupt). Thus the endemic failures of peer review merely trigger demands for ever more elaborate and widespread peer review." In the blogosphere you'll find that frustration resonating among those whose papers have been delayed or sabotaged by referees more intent on preserving the paradigm that supports them than on getting closer to the truth. Good examples of those concerns, along with a call to end the ability of reviewers to hide behind their anonymity, can be found at Marginal Revolution and Seth's Blog.

I for one support the move to disclose the identity of reviewers as well as the content of their reviews. As we noted in an earlier post, experts facing the prospect of having their opinions critically reviewed by their own peers tend to stay much closer to the truth.


How Can Something So Green Be So Bad for You?

Under current federal incentives for “green” energy sources, developers have been rushing to place wind turbines in many rural areas. Indeed, even before the Obama administration’s current emphasis on renewable energy, wind farm development had grown rather dramatically in the last decade, as shown by this time-lapse map of installed wind capacity.

In a recent ABA Journal article, the author notes that many neighbors of proposed wind farms have been challenging the placement of the giant turbines near their homes. Most such challenges have occurred before local land use authorities or state public utility boards, but some have been filed in court.

Most interesting from the mass tort perspective is this: at least some challenges to wind turbine developments have been based upon the purported human health effects of the turbines. Nearby homeowners claim that the low level sound, vibration and shadow flicker from the spinning turbines cause a host of non-specific complaints, such as sleep disruption, headaches, nausea and fatigue.



Pretend you're not a tort lawyer but instead a criminal lawyer. The judge is going to decide whether your client should be committed or set free. Her decision will turn on the likelihood of your client committing an act of violence in the future. You and the prosecutor reach an agreement on the factors to be weighed and a risk assessment is thereafter produced. It shows that your client has a 26% chance of future violent behavior.

Question: How should you frame your case?

(a)   there's only a 26% chance that he'll ever commit an act of violence;

(b)   there's a 74% chance that he'll never commit an act of violence; or

(c)   it doesn't make any difference?

If you answered either (a) or (c) you might want to read "The Effect of Framing Actuarial Risk Probabilities on Involuntary Civil Commitment Decisions" just published in the journal Law and Human Behavior.


Will Your Jurors Find Your Expert to be Knowledgeable and Trustworthy?

It's likely to depend on whether his or her opinion supports, ultimately, the values of each juror. And if the scientific consensus supports the expert's opinion it's likely to have less persuasive effect than you imagine. It's not a matter of your juror rejecting that consensus. Rather, it's a consequence of your juror assuming, perhaps because instances of agreement come readily to mind, that most experts actually support whatever conclusion about the issue would be deduced from that juror's values. That's my take anyway from the new paper (hat tip: The Situationist) "Cultural Cognition of Scientific Consensus" by Dan M. Kahan, Hank Jenkins-Smith and Donald Braman (part of the Cultural Cognition Project).

At the conclusion of the paper the authors make a recommendation to those tasked with communicating risk that should be heeded equally by trial lawyers. "It is not enough to assure that scientifically sound information - including evidence of what scientists themselves believe - is widely disseminated: cultural cognition strongly motivates individuals - of all worldviews - to recognize such information as sound in a selective pattern that reinforces their cultural predispositions. To overcome this effect, risk communicators must attend to the cultural meaning as well as the scientific content of information". Swap "trial lawyers" for "risk communicators" and you'll get my point.

One last thing. I wonder what role reputation plays in such matters, and by that I mean the reputation of the subject and not that of the expert. Here's why I ask. I know people who, given the authors' "hierarchical individualist" vs. "egalitarian communitarian" distinction, you'd predict to fall into the "vaccines don't cause autism" camp but who are instead fervent anti-vaccination zealots. And I've found the reverse to be true as well. What actually seems most determinative is a like-minded group of friends. From my wholly unscientific observations, some views about risk spread among a group of friends or social acquaintances more like a virus. A new concern begins to be discussed and is seen as quirky; it slowly spreads; then one day comes a tipping point, and almost everyone in that group is announcing they won't be having their kids vaccinated (in the other sort of group, calls for the water-boarding of Jenny McCarthy are surprisingly typical) while the rest feel like Donald Sutherland in "Invasion of the Body Snatchers" (before the end, anyway) and try to change the subject.


More Insight Into the Power of Metaphor

That metaphors help us communicate new ideas by casting them in the terms of something similar yet familiar is well known. But did you know that your body not only tends to act out metaphors (e.g. leaning into the future) but also to impose them on the brain (shifting one's judgment about someone's personality based on whether the perceiver's hands are cold or warm)? Well, it's true, and I was writing up a summary of the recent work when I came across an excellent article by Natalie Angier in The New York Times that does it better than I could. You can find it here: "Abstract Thoughts? The Body Takes Them Literally".

There's a mind/body debate in there that can wait for another day but in the meantime be conscious of the unconscious impact of metaphors.


Lancet Fully Retracts Paper Linking Autism to MMR Vaccine

Today The Lancet announced: "it has become clear that several elements of the 1998 paper by Wakefield et al are incorrect, contrary to the findings of an earlier investigation. In particular, the claims in the original paper that children were "consecutively referred" and that investigations were "approved" by the local ethics committee have been proven to be false. Therefore we fully retract this paper from the published record."

Though this final, and finally complete, retraction is based on a determination of ethical lapses on the part of the authors, the fact remains that the results reported in the article have never been replicated. Nevertheless, contrary to what most lay people might assume, published scientific papers aren't retracted just because the "science" within them turns out to be wrong.


On to a Fifth Age? How About We Finish the Second?

In a 1971 paper that profoundly influenced how scientists and policy makers approached public health issues, Abdel Omran set out his theory of "The Epidemiologic Transition". He hypothesized that societies went through three different ages, or phases, that defined their experience with regard to mortality and life expectancy. In the first, the "age of pestilence and famine", life expectancy is low and episodes of widespread death are common. In the second, the "age of receding pandemics", infectious diseases are overcome and life expectancy increases dramatically. Finally, in the third, the "age of degenerative and man-made diseases", diseases of aging and self-inflicted suffering become the predominant determinants of mortality. Eventually others, noting the dramatic increase in life expectancy due to the rapid decline in deaths from heart attack and stroke, posited a fourth age: essentially the same as the original third age but with cardiovascular disease removed from the "degenerative disease" category.

Now, in an editorial in this month's JAMA, Dr. Michael Gaziano asserts that we may be entering a fifth phase, or age, of the epidemiologic transition. We are now, he writes, entering the "age of obesity and inactivity" in which ailments due to gluttony and sloth predominate on death certificates. The editorial references two new articles in the same issue purporting to show Americans are fat and getting fatter; especially the children.

But wait a minute. The age of man-made diseases barely materialized. Certainly there have been many, many cases of people suffering terribly as a result of some man-made health hazard. Look no further than the cases of mesothelioma among the men who served aboard amosite-laden Navy ships. And smoking continues to exact its terrible toll. Yet if you throw all the deaths due to occupational diseases and every last lung cancer/COPD death into the same category, you can't get to 10% using worst-case estimates. More sober estimates put the percentage of deaths due to man-made diseases at considerably less than one percent. Nevertheless, this powerful meme - that most of our woes are self-inflicted and due to some failure to live in a natural way - still propels not only mass tort litigation but also much scientific and political thinking.

However, there's more than just AIDS to demonstrate that we never really saw the "disappearance" of infectious diseases. Go to PubMed and do some searches on Helicobacter pylori and human papillomavirus and you'll see just how many cancers are now being attributed to just these two organisms. Investigate mollicutes and you'll find that all sorts of microbes are suddenly being found associated with disease, and they're only now being found because the technology to identify them is only now being refined.

Finally, remember to read the fascinating journey of Barry Marshall and Robin Warren from authors of an abstract rejected as one of the year's worst to winners of the Nobel Prize in Medicine for the very same work. In the end, the view, supported by the work of one of the world's preeminent public health researchers, that peptic ulcers were caused by that most modern of man-made insults, stress, gave way to the understanding that the cause was in fact a bacterium only when the evidence was irrefutable.

Facts Don't Have Much Impact on Values

By now you've likely heard that Andrew Wakefield, the British doctor whose 1998 paper published in The Lancet linked autism to the measles, mumps and rubella vaccine, has been found by that country's medical supervisory board to be guilty of "unethical" research, dishonesty, financial impropriety and "serious professional misconduct". And if you've been following the story you know that the paper has been partially retracted by The Lancet, disowned by most of Wakefield's co-authors, and that its findings have been refuted by subsequent and far more rigorous research. You might even know that the vaccine scare precipitated a sharp drop in vaccinations leading to a 20-fold increase in measles cases and at least 11 unnecessary deaths.

But what you might not know is that for an awful lot of people, none of it matters.

Despite the needless deaths, despite the revelation that Wakefield received $100,000 to conduct his test from lawyers hoping to sue vaccine makers and despite studies of millions of children who received the vaccine (as opposed to the 12 studied by Wakefield) showing no link to autism, as the verdict against Wakefield was read by the board's chairman he was "repeatedly heckled by distraught parents who support Wakefield..." And if you read the comments about the verdict at The Times you'll see that there are an awful lot of people who think that Wakefield is a victim of an elaborate plot to silence him orchestrated by the drug companies (out to make money) and the government (out to save money).

So what gives? One explanation is that our perception of risk is shaped largely by our values. In a recent post at The Situationist you'll find a link to a video from the National Science Foundation in which Dan Kahan discusses the "cultural cognition thesis" - the idea that people perceive risk through the lens of their beliefs about what is and isn't good for society. By way of example he discusses the HPV vaccine Gardasil and the aversion to its administration among people who typically support vaccination. Apparently, for some at least, the perceived risk of green-lighting sexual activity in young women outweighs the known risk of cervical and head and neck cancers.

What values then compel so many people to cling to the scientifically unsupported belief that vaccines cause autism? That profiting from preventing disease is morally wrong? That mandating vaccination of children is a violation of rights? Something else? Whatever the answer, just remember that you won't ever change a juror's values; not in time for the verdict anyway. Instead, find a way to present the facts so that they fit, or at least do not conflict, with those values and if that's not possible then frame the issue so that some other, shared value decides the question.



Simplify, Simplify, Simplify

The hardest thing a trial lawyer does is also the most important thing a trial lawyer does. It is to distill her case down to its essence so that it can be clearly and easily communicated. Yet simplifying doesn't just ensure that your jurors understand your position; simplifying makes it much more likely that your jurors will believe your account to be true.

In a discussion of their recently published findings about how ease of understanding affects judgments about the information being conveyed authors Song and Schwarz report that something as seemingly minor as the font in which a statement is printed can have a profound effect on people's judgments about whether that statement is correct. Judgments about risk are affected by ease of communication as well. For example, a food additive with an easy to pronounce name was repeatedly perceived to be less risky than one with a difficult to pronounce name, despite the fact that the rest of the information, much more substantive information, about the two additives was identical.

Lastly, from this and other research, the authors conclude that easily communicated information benefits in one other respect: we tend not to scrutinize the things we "get". So if you want your narrative nitpicked, be sure to use big words and complicated demonstratives. If, on the other hand, you want your jurors digging through the testimony for facts that confirm your account, be sure to communicate simply and clearly through every avenue of communication.


Is That "Science"?

Imagine: Someone makes a claim about how things work and assigns to that claim a 90% or greater probability of being true. The sole evidence for the assertion is a copy of a nine-year-old popular science magazine in which a telephone interview with someone making the claim was reported. No data. No calculations. No experiments. Nothing.

Would you think that was "science"? Most people wouldn't.

But what if you didn't know the facts? What if all you knew was that the scientists making the claim all had fine educational pedigrees and had won prestigious awards, and that their report was supported by the U.S. government, the UN and most of the rest of the world? Well in that case, figuring that someone else had already done his or her job of fact-checking, you'd assume the science was sound and you'd get back to doing your own job after updating your understanding of the world with this new knowledge.

The New York Times is reporting today that the scientific claim that Himalayan glaciers will disappear within 25 years leaving hundreds of millions of people without water is based on just that sort of "science". 

The revelation that one claim in the 1000+ page IPCC 2007 report is without foundation does not of course mean that the rest of the report is faulty. Nevertheless, the fact that such a profoundly important claim made its way into the final report, based on nothing more than one person's nine-year-old hunch, will undoubtedly make many people wonder whether the scientists who wrote it were doing the sort of critical thinking that unshackles minds or were, like most people, merely seeking confirmation of their beliefs - blinded to whatever flaws or errors come with it.


The Malleability of Memory

Neuronarrative has a write up on a study that goes a long way towards confirming what is often suspected about plaintiffs' exposure testimony in latent disease cases - that showing people pictures of the activity being considered causes them to subsequently remember things that never happened. But how else would you explain the testimony of a witness who swears he used Acme Asbestos Widgets in 1955 on the XYZ jobsite but can't recall for the same year where he lived, his phone number, the brand of toothpaste he used or, in one case, the name of his then wife?

ht The Situationist


The Power of Negative Thinking

Let's say that a telecom tower was built in your neighborhood to broadcast microwaves at a new frequency. Then, after it was up and running, neighbors began claiming all sorts of ailments including rashes, headaches, nausea, tinnitus, sleep disruption (especially among children) and gastrointestinal upsets. Finally, following protests from "a residential community filled with children exposed to uninvited microwaves", the company shut off the tower. Thereafter your neighbors reported that their symptoms had improved dramatically or had disappeared altogether. What conclusions would you draw about the new microwave tower and the risks it poses?

Now let's say that the company announced, and proved, that the tower hadn't been "ON" in the first place. Would that change your conclusions? Well, that's exactly what happened here.

We used to have all sorts of fun like this back in the days of multiple chemical sensitivity and similar litigation. The plaintiffs' lawyers, though, learned to discover everything there was to discover about a defendant's operations first, and only then to present their witnesses, prepared for all such revelations and ready with new and unfalsifiable claims, for deposition. Alas.


More Evidence That Vitamin D Prevents Cancer

For a number of years my great grandmother's admonition to get out of the house and get some "healthy sunshine" as soon as winter eased its grip has been at odds with the consensus in the medical community. Sunlight is officially listed as a human carcinogen. And there's no safe level for carcinogens, right? So stay out of the sun and slather up any uncovered skin with sun block whenever you're forced to venture into the perilous outdoors. That's what the science said anyway.

The problem is that science, which is to say the business of generating knowledge, is as addicted to fads as Madison Avenue. So often when a hot new idea comes along, especially one that confirms part of a dominant narrative, most scientists and physicians seem to buy into it immediately. Thereafter, rather than investigate further whether sunshine is indeed an insidious carcinogen to be avoided at all costs, they investigate ways to block it, or discover where it is brightest (and so riskiest), or find medicines that might ameliorate its dire effects.

At the same time though a vast and uncontrolled experiment gets carried out on the people who buy into the fad. In this case they're depriving their bodies of the Vitamin D manufactured in their skin when sunshine falls on it. Is that something they should worry about?

Well, it's beginning to look as though the good health of Mediterraneans has a lot more to do with getting plenty of sunshine than it does with getting plenty of dolmades and wine. The number of papers demonstrating vitamin D deficiencies in Americans and the relationship between vitamin D deficiency and increased risk of cancer is astounding. The newest I've found, "Plasma 25-hydroxyvitamin D Levels and the Risk of Colorectal Cancer: the Multiethnic Cohort Study", demonstrates a 37% to 46% decrease in colorectal cancer among Hawaiians of Japanese, Latino, African-American, White and Native ancestries with the highest levels of 25-hydroxyvitamin D in their blood.

And vitamin D deficiency isn't just associated with cancer. In another new paper it's associated with infections leading to more illness, costly treatments and long hospital stays. The authors conclude: "Vitamin D deficiency is intimately linked to adverse health outcomes and costs in Veterans with staphylococcal and c. difficile infections in North East Tennessee".

My great grandmother worked in her garden almost to the end of her days. She made it to 101 without a walker or even a cane then died in her sleep after a fall. Simply an anecdote proving nothing about sunlight, I know. Nevertheless, we might do well to consider tradition and the thoughts of the wise before jumping aboard every bandwagon that rolls by.


Don't Pitch the Water Softener

Have you been worrying that your water softener is significantly increasing your risk of dying from a heart attack? I didn't think so. But just because you haven't been feeling vulnerable around your water softener doesn't mean the WHO hasn't been fretting for you.

Thanks to epidemiological studies going back a decade or more (e.g. "Magnesium and Calcium in Drinking Water and Death from Acute Myocardial Infarction in Women"), a worry arose that we were killing ourselves by eliminating the minerals naturally found in most drinking water. Yet subsequent studies have failed to confirm the finding, including the just-published "Effect of water hardness on cardiovascular mortality: an ecological time series approach". So what gives?

Well, what gives is that most of what gets published in peer reviewed journals is probably false; and when it comes to causal inferences drawn from epidemiological studies "the apparently indiscriminate identification of particular aspects of daily life as dangerous to health" is, as witty programmers say, a feature, not a bug.
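The arithmetic behind "most of what gets published is probably false" is straightforward Bayes. Here's a back-of-the-envelope sketch (the prior, power and alpha values are illustrative assumptions, not figures from any particular study):

```python
def ppv(prior, power=0.8, alpha=0.05):
    """Positive predictive value: the chance that a statistically
    'significant' finding reflects a real effect, given the prior
    probability that the hypothesis being tested is true."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# If only 1 in 20 hypotheses tested is actually true, then even with
# well-powered studies most "significant" findings are false:
print(round(ppv(0.05), 2))              # 0.46
# With underpowered studies (power = 0.2) it gets much worse:
print(round(ppv(0.05, power=0.2), 2))   # 0.17
```

On those assumptions fewer than half of nominally significant findings are true, so failures to replicate, as with the water hardness studies, should surprise no one.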


Why Read Old Journal Articles?

The New York Times has published another report by Gina Kolata on the Forty Years' War against cancer. In it you'll find part of the answer to the question I've posed.

"Dr. Barnett Kramer, associate director for disease prevention at the National Institutes of Health, recently discovered a paper that startled him. It was published in the medical journal The Lancet in 1962, about a decade before the war on cancer was announced by President Richard M. Nixon. In it, Dr. D. W. Smithers, then at Royal Marsden Hospital in London, argued that cancer was not a disease caused by a rogue cell that divides and multiplies until it destroys its host. Instead, he said, cancer may be a disorder of cellular organization.

'Cancer is no more a disease of cells than a traffic jam is a disease of cars,' Dr. Smithers wrote. 'A lifetime of study of the internal-combustion engine would not help anyone understand our traffic problems.'

Dr. Kramer said: 'I only wish I had read this paper early in my career. Here we are, 46 years later, still struggling with issues this author predicted we’d be struggling with.'"

There may be lots of science going on these days but review any journal and you'll quickly see that there's often not a lot of deep thinking behind it. Most science is derivative and most of it is false.

What people think of as science nowadays is in fact a vast jobs program, the main purpose of which, after the employment of academics, is to maintain and expand the paradigms on which its various parts rest. Accordingly, research outside the prevailing paradigms is typically starved for cash and efforts to falsify dominant theories don't just go unrewarded, often they are punished. I suspect therefore that the prayer of many if not most scientists today is, to paraphrase St. Augustine, "Lord grant me critical thinking and skepticism, but not yet."

Reading an old journal article is one way to step back from the minutiae of microarrays and data dredging and to consider big ideas from a time when no one had the ability to sequence the genes of a malignant cell or unleash sophisticated software to find never before noticed confirmatory associations among mountains of numbers. A time when ideas were, perhaps, more likely the spark of sudden insight rather than the product of a self-replicating system.



A Surprising Number of Americans Fear the Flu Shot is Unsafe

Reuters is reporting on the results of a new poll conducted by the Harvard School of Public Health into the attitudes of Americans towards getting their children vaccinated against swine flu. Slightly more than twenty percent of the parents surveyed had decided not to immunize their children and the main reason disclosed was fear about the safety of the vaccine.

The CDC has been monitoring those who have been vaccinated and has a web page up about the safety of the vaccine, the weekly updated Vaccine Adverse Event Reporting System report and just about anything else you'd want to know about vaccines in general or this one in particular. Nevertheless, and in spite of the fact that by all measurements the vaccine appears to be safe and effective, a sizeable number of Americans fear the vaccine more than they fear a virus that has sickened millions and killed over 10,000. Why?

Part of the answer can be found in a 2002 study in which researchers compared their subjects' reactions to scientific evidence from reliable scientists that debunked a health scare versus inaccurate non-scientific emotional appeals from activists that merely raised the possibility of an adverse health effect. "The surprising result is that when we presented both positive and negative information simultaneously, the negative information clearly dominated. This was true even though the source of the negative information was identified as being a consumer advocacy group and the information itself was written in a manner that was non-scientific." The authors concluded that "even though the scientific evidence is favorable, claims by opponents, even if they are inaccurate and only suggest potential risks, will tend to reduce consumer demand". Hat tip TheGoodTheBadTheSpin


Why Read Old Books?

"Every age has its own outlook. It is specially good at seeing certain truths and specially liable to make certain mistakes. We all, therefore, need the books that will correct the characteristic mistakes of our own period. And that means the old books. ...

We may be sure that the characteristic blindness of the twentieth century—the blindness about which posterity will ask, "But how could they have thought that?"—lies where we have never suspected it, and concerns something about which there is untroubled agreement between Hitler and President Roosevelt... None of us can fully escape this blindness, but we shall certainly increase it, and weaken our guard against it, if we read only modern books. Where they are true they will give us truths which we half knew already. Where they are false they will aggravate the error with which we are already dangerously ill. The only palliative is to keep the clean sea breeze of the centuries blowing through our minds, and this can be done only by reading old books.  Not, of course, that there is any magic about the past. People were no cleverer then than they are now; they made as many mistakes as we. But not the same mistakes. They will not flatter us in the errors we are already committing; and their own errors, being now open and palpable, will not endanger us. Two heads are better than one, not because either is infallible, but because they are unlikely to go wrong in the same direction."

- C.S. Lewis, Introduction to "Athanasius: On the Incarnation"

The question of whether we should read old books is the subject of a spirited debate over at Overcoming Bias.


Systems. Errors.

If you've dealt with the Chemical Safety Board or the National Transportation Safety Board or any similar organization following an explosion or accident you know about their emphasis on systems and their reluctance to blame individuals for the events being investigated. Obviously, the working assumption that most errors are committed by good people trying to do the right thing has been a sound one and an emphasis on improving systems so those people can in fact do the right thing has yielded some impressive results. But you've likely asked yourself "does there ever come a point when everyone has been trained enough; when safety systems are redundant enough; when an individual needs to be held accountable rather than some unaccountable 'system'"? Apparently, at least in the case of medical errors, there does.

The New York Times is reporting on the growing concern that "a blame-free culture carries its own safety risks." When hand sanitizers are ubiquitous and training about hand sanitation is incessant, yet some doctors and nurses still fail to wash their hands something's wrong. And it's something, according to the researcher interviewed, that can't be fixed by tweaking the system. Someone has to be held accountable.



Is Your Drinking Water Safe? If Not, Why Not?

The New York Times has run an extensive article about the nation's drinking water claiming that our public water is contaminated with thousands of chemicals, hundreds of which are "associated with a risk of cancer". It even has a link to the articles on which the claims are based. Unfortunately, or fortunately depending on your point of view, the evidence cited for the proposition that tap water is putting the citizenry at risk of cancer is pretty thin if that's all there is.

For example, under "Studies Regarding Illnesses and Drinking Water" (of which there are only eight) the only one to make a broad claim of tap water carcinogenesis is a 28 year old study titled "Cancer and Drinking Water in Louisiana: Colon and Rectum" which used the 1970 census to compare 692 rectal cancer deaths from 1969-1975 by where along the Mississippi River they and controls got their drinking water. The authors noted a small increase in risk as the source got closer to the Gulf of Mexico and suggested that the finding may have something to do with an increasing concentration of industries along the river as it approaches the Gulf. More importantly they wondered whether by-products of chlorination might have something to do with the finding.

The link suggests just four other papers that have cited the study and of those only two studied drinking water. Of the two, the first is a Canadian paper from 2000 (the first true Y2K victim I've ever run across - note: "Received 1999 Accepted 1900") which found no association with rectal cancer but a small one for colon cancer among males who drank "chlorinated surface water for 35-40 years".

Following the papers that cited the Canadian paper you quickly find another drinking water paper that finds a small protective effect for all leukemias combined, a large protective effect for chronic lymphocytic leukemia and a small but significant association with chronic myeloid leukemia.

The other papers are similarly all over the place and there appears to be no consensus that U.S. drinking water is a cause for concern about cancer from the perspective of chemical contaminants.

On the other hand, there's a growing body of literature associating drinking water contaminated by microorganisms with cancer. There's a new one discussing the waterborne transmission of helicobacter pylori to be published in next month's Journal of Water and Health and then there's this alarmingly titled paper in the same journal: "Free-living amoebae, Legionella and Mycobacterium in tap water supplied by a municipal drinking water utility in the USA".

It's unclear why the NYTimes focused on trace levels of chemicals as a cause for concern when there does appear to be something to be worried about when it comes to bugs in our water.

Fun With Statistics

The new jobless data show that for every educational category of worker (college graduates, those with some college, high school grads and dropouts) the unemployment rate today is higher than it was at the peak of the 1982 recession. The same data show that the overall unemployment rate today is lower than it was in 1982. How can this be?

Try this example of an unnervingly common flaw that can arise when you reason from percentages alone. There are two treatments for kidney stones, Treatment A and Treatment B. Each treatment is tried out on two different types of kidney stones - small stones and large stones. Here are the results:

Treatment A   small stones - 93% effective    large stones - 73% effective

Treatment B   small stones - 87% effective   large stones - 69% effective

Which treatment do you think would be most effective overall among small and large stones? As it turns out:

Treatment B was effective 83% of the time for either small or large stones

Treatment A was effective 78% of the time for either small or large stones

Huh? Here are the actual numbers:

Treatment A   small stones - 81 out of 87 effective   large stones - 192 out of 263 effective

Treatment B   small stones - 234 out of 270 effective   large stones - 55 out of 80 effective

Thus the overall success rate for Treatment A is (81 + 192) / 350 = 78% whereas the overall success rate for Treatment B is (234 + 55) / 350 = 83%.

This effect, where the results seem to switch between subcategories and overall rates, is known as Simpson's Paradox. I don't think it's so much a paradox as a consequence of an all too common mistake people, including lots of expert witnesses, make with percentages - specifically, thinking of percentages as something independent of the data from which they were generated. The result of this flawed thinking is often a classic, but sometimes hard to perceive, apples to oranges comparison failure.
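The arithmetic is easy to verify for yourself. Here's a minimal Python sketch (the variable names are mine, not from any cited study) confirming that Treatment A wins within each subgroup while Treatment B wins overall:

```python
# Simpson's paradox with the kidney-stone numbers above:
# (successes, total) for each treatment, by stone size.
a = {"small": (81, 87), "large": (192, 263)}
b = {"small": (234, 270), "large": (55, 80)}

def rate(successes, total):
    return successes / total

for size in ("small", "large"):
    # A beats B within every subgroup...
    assert rate(*a[size]) > rate(*b[size])

a_overall = sum(s for s, _ in a.values()) / sum(t for _, t in a.values())
b_overall = sum(s for s, _ in b.values()) / sum(t for _, t in b.values())

# ...yet B beats A overall, because B was tried mostly on the
# easier (small) stones while A drew mostly the harder (large) ones.
assert b_overall > a_overall
print(round(a_overall, 2), round(b_overall, 2))  # 0.78 0.83
```

The asymmetric group sizes are doing all the work: a percentage detached from its denominator invites exactly this apples-to-oranges trap.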

Here's a good discussion of the issue as it relates to the unemployment conundrum at The Wall Street Journal. For further discussion, including a take on why comparing unemployment among education categories over time is even dicier than comparing different treatments for different types of kidney stones there's another good write up at Andrew Gelman's Statistical Modeling, Causal Inference, and Social Science blog.



Negotiation: Which Side Should Set The Starting Price?

According to an excellent write up of this Galinsky paper at Mind Hacks the short answer is: The side that wants to get the best deal.

The first number offered acts as an anchor. Individuals then reflexively tend to look for information that is consistent with, and so to them confirms, that the number is indeed legitimate. People tend to take the first number as a working hypothesis and so, as with any hypothesis that does not produce an immediate rejection, this first number too often induces people to recall and magnify supporting data while failing to recall or underestimating contradictory data. Is there a way to protect against this effect?

The author suggests that 1) despite the advice of many books on negotiation that recommend waiting for the other side to go first, several studies demonstrate that the buyer, or defendant, will do better by going first with a low offer; 2) when faced with a high demand the buyer (defendant) should focus on information inconsistent with the first offer; and, 3) the initial focus should be on the buyer's / defendant's ideal price by recapitulating the basis for the buyer's / defendant's valuation.



Are Big Punitive Awards in HRT Cases Justified?

It is being reported that Philadelphia juries have awarded a total of $103 million in punitive damages alone to two women in separate breast cancer product liability trials. The women claimed that hormone replacement therapy (HRT) was responsible for their subsequent development of breast cancer.

In light of the recent controversy over the use of Bayesian decision-making approaches to mammography and Pap testing in which probabilities of outcomes are estimated and benefits are then weighed against costs (including other bad outcomes) I thought it might be of interest to see if such an approach had been applied to HRT. Sure enough, "Bayesian Meta-analysis of Hormone Therapy and Mortality in Younger Postmenopausal Women" was just published in The American Journal of Medicine.

So what does it show? It shows that across a number of randomized controlled trials of HRT in postmenopausal women under 60 those women had a reduced overall mortality compared to those postmenopausal women under 60 who weren't on HRT.

As is often the case in these modern times science does not yield a cure but does allow one to pick one's poison as it were; not to avoid death but to influence the odds of whether you die of stroke instead of breast cancer.

A Fun Lecture About Good Decision-Making

You'll find an entertaining and enlightening lecture by Dan Gilbert, a professor of psychology at Harvard University.


And Now, New Guidelines for the Pap Test

The New York Times is reporting early this morning that a panel of the American College of Obstetricians and Gynecologists is recommending: a) that women not be tested until age 21; b) that beginning at 30, and assuming three consecutive negative test results, screenings be reduced from every year to every third year; and, c) that testing can end altogether after age 65 with three straight tests without an abnormality in the last ten years.

First the PSA test, then mammograms and now Pap tests. An appreciation of the limitations of these tests, combined with the realization that many of the lesions detected by them never posed a risk, is responsible for this seismic-seeming shift. Changing a decades-long culture of screening early and often to catch cancer "when it's treatable" won't be easy and, as is apparent from the mammography fracas, won't happen without a fight.



Figure the Odds

We ended yesterday's post by promising to show you how to more easily understand the debate over breast cancer screening.  Here's a handy way to calculate odds like the ones being discussed in the breast cancer debate. 

First, let's start with another test. Assume the following:

a) The accuracy rate of mammography is 95%

b) The false positive rate for mammography is only 3%

c) Only 1% of women over 50 have breast cancer

d) A woman over 50 has a positive mammogram
Question:  What are the odds that she actually has breast cancer?
Before we give you the answer let's talk a little bit about percentages.  First, most people think they understand them; second, they don't; and third, even when they do, most people tend to have a very difficult time reaching the right answer to a question like the one above.  On the other hand, people tend to do better when dealing with rates or frequencies.  So before we introduce you to Bayes' theorem (not today) let's try solving the question using rates.

If 1% of women over 50 have breast cancer that means that out of 10,000 women 100 of them will have breast cancer.  If all 10,000 women are screened by mammography and the accuracy rate is 95% then the test will detect 95 of the 100 cases.

However, if all 10,000 women are screened and the false positive rate is 3% then, of the 9,900 who don't have breast cancer, 297 (3% x 9,900) of them will have a mammogram indicating that they do have breast cancer.

The total number of positive mammograms thus equals the 95 who actually have breast cancer and whose cancers were detected plus the 297 who don't have breast cancer but who had a positive mammogram for a total of 392 possible cases of breast cancer.  So, if only 95 of the 392 women with positive mammograms actually have breast cancer what are the odds that your hypothetical patient is one of them?

Well, 95 is only 24% of 392 (95 / 392) - slightly less than a one in four chance that she actually has breast cancer.  So how did you do?
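If you'd rather not redo the counting by hand, the frequency argument above reduces to a few lines of Python (a sketch; the function name and the notional population of 10,000 are my own conveniences):

```python
# Positive predictive value of a screening test, computed the same
# way as the worked example: count true and false positives in a
# notional population and divide.
def ppv(prevalence, sensitivity, false_positive_rate, population=10_000):
    sick = population * prevalence            # 100 women with cancer
    healthy = population - sick               # 9,900 without
    true_pos = sick * sensitivity             # 95 detected
    false_pos = healthy * false_positive_rate # 297 false alarms
    return true_pos / (true_pos + false_pos)  # 95 / 392

print(round(ppv(0.01, 0.95, 0.03), 2))  # 0.24
```

This is just Bayes' theorem dressed up as counting; the punchline is that the 1% prevalence swamps the 95% accuracy.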

Risk is hard because it's counter-intuitive.  Comparing percentages is inevitably an apples to oranges trap.  Instead of thinking that a 95% accuracy rate is really high and 3% false positive rate is really low, maybe try asking yourself whether you'd rather have 95% of $100 or 3% of $10,000.


Doctors and Screening Tests: Usually Wrong but Rarely in Doubt

The controversy over new breast cancer screening guidelines continues unabated today. There are already more than one thousand comments and letters to the editor addressing the issue at The New York Times alone.

Especially interesting are the statements from some of the physicians in both the articles and the comments. Many express a degree of confidence in the ability of mammograms to detect cancer well beyond what the literature would justify. How typical then is this discrepancy between medical opinion and what the numbers actually reveal? Quite.

Here's a test given to a group of obstetricians from a study published in 2006:

There's a blood test available that can detect Down's syndrome in the fetuses of pregnant women. If the baby has Down's syndrome there's a 90% chance the test will catch it. The test has a false positive rate of only 1%. Just 1 in 100 fetuses are likely to have Down's syndrome. A pregnant woman walks into your office; she's had the blood test and it's positive for Down's syndrome. What advice do you give her about whether or not her baby actually has Down's syndrome?

Fifty-seven percent of obstetricians got it wrong. Of those who got it wrong most got it spectacularly wrong, putting the odds of the baby having Down's syndrome at anywhere from 80% to 100%. And those who were most wrong were the most confident that their diagnosis was correct.

In fact the odds are (52.4%) that the woman's baby DOESN'T have Down's syndrome. Think about what advice that woman would probably get. That's a very real and chilling example of the inadvertent harm inflicted on women by doctors who put too much faith in even the most accurate diagnostic tests.
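The 52.4% figure falls out of the same frequency-based reasoning used for any screening test; here's a quick sketch (the population of 10,000 pregnancies is a convenient assumption, not a number from the study):

```python
# Out of 10,000 pregnancies, count who actually tests positive
# and how many of those positives are false alarms.
population = 10_000
affected = population // 100                 # 1 in 100 -> 100 fetuses
true_pos = affected * 90 // 100              # 90% detection -> 90 caught
false_pos = (population - affected) // 100   # 1% false positives -> 99
total_pos = true_pos + false_pos             # 189 positive tests

# Share of positive tests that are false alarms:
print(round(100 * false_pos / total_pos, 1))  # 52.4
```

In other words, a bit more than half of all positive tests in this scenario come from healthy fetuses, which is exactly why the confident 80% to 100% answers were so far off.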

Want a handy way to figure out the odds in such cases? More on that tomorrow.


No Mammogram Until 50? Let's Get Ready to Rumble.

Gina Kolata at The New York Times is reporting on new breast cancer screening recommendations by the United States Preventive Services Task Force. They are: a) routine mammograms for most women shouldn't begin until 50; b) even then they should occur only every two years; c) they shouldn't continue past 74; and, d) self examination is of no benefit and should be discontinued. The recommendations are based on analyses of a series of studies showing the cost (including the physical and emotional harm done to women overtreated due to false positives and unalterable cancers) of mass yearly screenings far outweighs the benefits.

That these recommendations will provoke a fight is obvious. But which side will prevail? In one corner: the probabilities and statistics. In the other: our beliefs, hopes, fears and intuitions. If this were Texas Hold 'Em we'd know which side would win. But this is breast cancer and so one side comes with all the psychological, sociological and political weight that tends to make many people poor judges of fights like this.

Expect a litany of logical fallacies, from supporters on both sides, in the comments section - chief among them, sadly, those of the ad hominem variety.



State of the Art: Turning a Shield Into a Sword

Why is our blog subtitled "State of the Art"? For several reasons, chief among which is our fascination with the effect on mass tort litigation of the error-inducing mental shortcut responsible for what is called hindsight bias.

It turns out that mass tort defendants' claims of "I didn't know" or "I couldn't possibly have known" not only tend to fail but tend to backfire. Plaintiffs' counsel have learned how to exploit our tendency to Monday morning quarterback other peoples' decisions and now hindsight bias has become a problem in every mass tort case. Unwary defendants risk having "I couldn't possibly have known" turned into a belief that "they had to have known and thus their failure to act is evidence of gross neglect."

What is the hindsight bias? Here's an excellent introduction.


It's A Proven Fact: Your Body Language Speaks as Loudly as Your Words

When preparing for a jury trial, lawyers more often focus on what to say rather than how they look when they say it. A recent study on human brain function and its interpretation of words and gestures reminds us that the gestures you make before a jury are as important as the words you say.


When Your Opponent Muddies the Water

Here’s some good advice about what to do when your opponent makes numerous arguments for ruling against your client. The short version is of course don’t take the bait because, as Perelman advises, the very act of acknowledging an argument is tantamount to telling your jurors that it’s worthy of their consideration.


Bayesian Trials 77030

John Cook at The Endeavor posted this comment by Mithat Gönen of Memorial Sloan-Kettering Cancer Center about a recent paper concerning Bayesian clinical drug trials of chemotherapeutics.

"While there are certainly some at other centers, the bulk of applied Bayesian clinical trial design in this country is largely confined to a single zip code."

That zip code is 77030 and it’s the zip code for M.D. Anderson Cancer Center. Here’s a great article about M.D. Anderson just published in the New York Times by Gina Kolata.

Bayesian decision-making approaches are proving their worth every day in a wide variety of fields and more than a few courts are starting to grasp and apply probabilistic decision rules. Expect to see more and more Bayes decision theory as courts take an increasingly modern approach to the question of causal inference in mass tort cases.

Snatching Defeat from the Jaws of Victory

When should you poll the jury? Generally speaking, when you lose. But how about when you win? Maybe it's not such a great idea. Juries makes the point today in a post about a Washington state criminal trial in which a defendant’s attorney decided to have the jury polled after it had returned its verdict – a “not guilty” verdict. The very first juror polled by the judge stated that she did not in fact agree with the verdict and so the jury was sent back for further deliberations. Upon further review the jury came back with a unanimous verdict of “guilty” on a charge of vehicular assault. Oops.


Overdiagnosis is Pure, Unadulterated Harm

That's a quote from an article in today's New York Times by Gina Kolata, the finest journalist covering medical issues out there IMHO.

It appears that the American Cancer Society is about to take a stand, at long last, on excessive screening for breast and prostate cancers. It's a practice that results in few, if any, cures of otherwise lethal cancers but which is responsible for a vast amount of unnecessary morbidity. People whose cancers would never have spread or even produced symptoms are, thanks to modern techniques, diagnosed with the dread disease, thereafter precipitating needless surgeries and treatments, not to mention worry and stress.

This is vindication for a woman I sat next to on a plane back from Washington D.C. several years ago. She was a researcher at one of the best cancer facilities in the country and she'd gone to present evidence to Congress that screening young women for breast cancer did little if anything to arrest the course of aggressive cancers but did lots of serious and needless harm to thousands of women annually. For her troubles she was accused by some breast cancer advocates of being a lackey of the insurance companies as they thought, I presume, that the only reason anyone would oppose mass screenings would be the costs involved. It's nice to see science prevail over emotion.


The Case of The Missing Title

A recent post on Winning Trial Advocacy Techniques entitled "Does your case have a title?" reminds us of the importance of simplifying our trial narrative and that developing a title can provide a shortcut to framing your entire case.

With a title in an instant you convey a hint of what your case is about while helping to raise an inquiring attitude among your jurors that will help them follow your examinations and arguments and understand how they fit within the framework you’ve developed.


The Same Facts Seen From Different Perspectives

Deliberations reminds us that facts are rarely just “facts”. How facts are perceived depends very much on the perspective of the person assessing those facts so that the very same fact may drive two different people to draw two different conclusions from that fact.

Always be mindful that your perspective on how a particular fact impacts the narrative of your case may not be shared by your jurors. A billionaire isn’t just someone with a billion dollars; a billionaire is someone who has succeeded wildly or someone who has exploited the system. She’s someone to be admired, or envied or despised. Facts then, even seemingly simple numeric ones, are often laden with emotive potential and peril.


And Now For Something Completely Different

Yesterday was the 40th anniversary of the first broadcast of Monty Python's Flying Circus. Today the NYTimes gave me a reason to boast about owning all the shows on DVD. In an article titled "How Nonsense Sharpens the Intellect" the author discusses recent research into how people make sense of the world; how they find meaning in the absurd; and, how incomplete and dissonant bits of stories cause our brains to work overtime to find patterns otherwise invisible or nonexistent. One result of the exercise was to tune up the subjects' pattern-recognizing abilities so that they thereafter performed better on tests of implicit learning: knowledge gained without awareness.

I'm not sure that this means you should work a dead parrot into your next voir dire but you should probably be aware of the tendency of jurors to fill in holes in your narrative in ways that you might find nonsensical.


It Takes a Villain

The New York Times is now reporting on the biggest risk to the water supply - nasty little microbes.

"[D]airy owners, some of whom are perceived as among the most wealthy and powerful people in town" are said by some to be behind the contamination.

Post Hoc Ergo Propter Hoc

Many millions of Americans will get sick this year. A couple of million will get very sick. Hundreds of thousands will die. It happens every year. But this year tens of millions will get the swine flu shot. Shortly thereafter some will come down with whatever it was they were going to get anyway. And many of them will blame the swine flu shot. After all, they will have been healthy, then have gotten the shot, and then have gotten sick. Obvious, and obviously wrong.

Wrong and obviously dangerous. If a vaccine health panic erupts people may be frightened into not getting vaccinated and that's the real danger. So public health officials have developed a plan to deal with the expected outbreak of bad causal analysis. You can read more about it here.


The 411 On An Old Health Scare Revived by Congress

Senator Tom Harkin (D-IA), the new head of the Senate Committee on Health, Education, Labor and Pensions promised on Monday to probe deeply into any potential links between cell phone use and cancer. This issue has been extensively studied, particularly in Scandinavian countries where cell phone manufacturers such as Nokia and Ericsson are headquartered. Each study to date has found no statistically significant association between cell phone use and cancer, including brain cancer.

However, there are still some who attribute brain cancer to cell phones on the theory that radio waves, a form of radiation, damage brain cells. The debate comes on the heels of the 1980's and 1990's controversy regarding the potential adverse health effects of electromagnetic fields (EMFs) emanating from power lines. While studies cleared EMFs, they implicated population mixing, likely via some sub-clinical infection, as a cause of cancer in children. More on population mixing to come.

How They Know What Isn't So

Today The New York Times has a very interesting article about reasoning.  It includes a discussion of what is called motivated reasoning which is "processing and responding to information defensively, accepting and seeking out confirming information, while ignoring, discrediting the source of, or arguing against the substance of contrary information". 

Here the authors of a paper linked to in The Times' article examined a particular mental shortcut, the situational heuristic, in which cues as to how to judge a contention are drawn from the nature of the actions in question.  Their hypothesis is that because going to war is an important decision people believed there had to have been important reasons for having done so and went so far as to assume the existence of reasons that didn't exist or weren't suggested.

More evidence that the typical juror is similarly likely to believe, unless disabused of the notion, that lawsuits aren't filed unless there are strong grounds for doing so.


Sugar Pills More Potent Than Ever

Wired has a new article about the placebo effect and evidence that placebos are becoming increasingly potent.

Years ago I was thinking about going to medical school and so hung on every word from a friend's father when he talked about what he did for a living. I remembered being fascinated by his stories about sugar pills and the patients to whom he prescribed them. He said that he'd learned years before that for some patients, the ones without any objective signs of treatable illness, he did his best work by being part priest and part witch doctor.

He'd listen to their stories, affirm their suffering, advise them to live better, forgive them their faults and prescribe powerful new magic - "penta-methyl-tri-something-or-another-cis-this-and-that". And it worked. He and his pharmacist friend had to be creative though as patients would compare pills and often return to demand stronger magic citing a neighbor's far milder symptoms. So they wound up having a number of different sugar pills in varying shapes and sizes. I don't recall him saying much about color other than that one lady, upon discovering that she'd been prescribed a very large red and white pill, returned to the office to say that she wasn't nearly as sick as the doctor apparently thought she was.

So what does this have to do with mass torts? Well, particularly in the realm of adverse effects from the use of psychotropic drugs, there's a huge risk of (or opportunity for) getting the causation arrow pointed in the wrong direction, especially when so little is known about the causes of mental and emotional disorders and the mechanisms by which they are alleviated. After all, trial lawyers thrive in conditions of uncertainty.

An Explanation of the Trolley Problem?

At Overcoming Bias there's a post, Moral Rules Are To Check Power, that may shed some light on yesterday's trolley problem. Could it be that we judge more harshly the person who shoves the fat man under the trolley than the one who flips a switch to divert the trolley so that it kills an innocent pedestrian because the former had physical power and directed it at someone over whom he could exert it, whereas the latter, by flipping a switch anyone could flip, merely made a choice anyone could make? If that's the case, then the context of a defendant's actions, including its ability to impose its will on another, may be strictly scrutinized, and merely utilitarian arguments will fall flat.

So be cognizant of the power relationships in the narratives you construct for your cases and never assume that the good things your client did will be seen to outweigh the suspicions and biases raised when the outcome at issue resulted from an exercise of that power.

Our morals may exist to contain the powerful.

Don't Throw Your Client Under the Trolley

When defending a client's actions, it's important to remember that those actions will be judged not solely on the basis of outcome but also on the basis of your client's freedom to have chosen differently, its relative power vis-a-vis the plaintiff, its intentions and, I suspect, whether any benefit accrued to it as a result of those actions.

Why this is so is explained, perhaps, by findings related to the so-called trolley problems. In the standard trolley problem a runaway trolley is hurtling down the tracks and will run over and kill five people on the track ahead unless something is done. A bystander has the option to throw a switch which, if thrown, will divert the trolley onto another track where it will run over and kill a single pedestrian but spare the five. Various permutations of the problem vary the bystander's range of choices, intentions and physical actions in diverting the trolley. These permutations elicit, from those judging the bystander's conduct, widely differing views of her culpability even though in every iteration, from a strictly utilitarian perspective, five are saved and one is lost if she chooses to act.
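The arithmetic underlying that utilitarian equivalence can be made explicit. A minimal sketch (the scenario names are illustrative, not drawn from the experimental literature) showing that every permutation nets the same number of lives saved, so whatever drives the differing culpability judgments, it cannot be the outcome itself:

```python
# Each trolley-problem permutation, from a strictly utilitarian standpoint,
# trades the same one life for the same five. The scenario labels below are
# hypothetical stand-ins for the usual variants.
scenarios = {
    "flip a switch":        {"saved": 5, "lost": 1},
    "push with a pole":     {"saved": 5, "lost": 1},
    "push with bare hands": {"saved": 5, "lost": 1},
}

for name, outcome in scenarios.items():
    net = outcome["saved"] - outcome["lost"]
    print(f"{name}: net lives saved = {net}")

# Every variant nets +4, yet judged culpability varies widely across them,
# so the moral judgment must track something other than the outcome -
# personal force, intention, power.
```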

In their new paper "Pushing moral buttons: The interaction between personal force and intention in moral judgment" Joshua Greene, et al. examine how an action is judged when personal force is applied. Hat tip: MarginalRevolution, which has a nice discussion of the issue. Perhaps not surprisingly, though the outcome is the same (net four lives saved), people's judgment of the life-saving action seems to vary with the degree of personal force applied.

Choose Your Words Carefully

Do people listen to an argument and then decide what they think of it, or are there mechanisms in the brain which, when triggered by the use of value-laden language, prevent further consideration?

In "Right or Wrong? The Brain's Fast Response to Morally Objectionable Statements" the authors found that words inconsistent with the reader's values cause the reader to judge a statement almost immediately. In fact, readers often made sense of, which is to say judged, a statement before the sentence containing the value-laden word(s) was even completed.

The takeaway is that one misplaced word can destroy the best argument if it triggers a moral objection.

Will Empathize For Food

Men don't read emotions as well as women, right? Maybe not. In a study being discussed at Overcoming Bias it appears that men can indeed read emotions - they just don't bother - unless there's something in it for them. Hmmmmm. Here's a link to the article.

An Epidemic of Depression?

Antidepressants are now the most commonly prescribed class of medications in the U.S. Their use has doubled in the last decade, and the rate of use by women is double that for men. What's going on here? What's behind the increase? Who's to blame? Is anyone to blame? Discuss.

Hat tip: