Discretizations

Some months ago we decided to put Discretizations on hiatus while we tried to figure out what to do about the tsunami of scientific papers washing up on PubMed (which was already piled deep in the flotsam and jetsam of publish or perish) now that China, India, Korea, etc. are getting in on the fun. On the one hand we're sorely tempted to take advantage of a situation that presents us daily with awesome posting opportunities like: Hay fever causes testicular cancer! On the other hand we know that most statistically significant findings are in fact false - in no small part because "it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not desire". So, rather than being a part of the problem by propagating noise and not signal we've decided to limit our Discretizations to papers that 1) report an actual observation and not just the spectre of one drawn from data dredging and statistical analysis; 2) reflect an effort to reproduce a previously reported finding (e.g. testing a hypothesis previously drawn from statistical inference but in a different population); or, 3) extend (marginally) some previously established finding. Here goes:
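An aside before the list, for readers who wonder how "most statistically significant findings are false" can possibly be true: here's a minimal back-of-the-envelope sketch. The prior and the power below are assumed, purely illustrative numbers, not figures from any particular study; the point is only that a literature testing mostly wrong hypotheses ends up with more false positives than true ones.

```python
# Back-of-the-envelope illustration (assumed numbers): why a "statistically
# significant" finding is often false. If only 1 in 20 tested hypotheses is
# actually true, tests run at alpha = 0.05 with 80% power flag more false
# effects than real ones.

prior_true = 1 / 20   # assumed fraction of tested hypotheses that are true
power      = 0.80     # assumed probability of detecting a true effect
alpha      = 0.05     # conventional false-positive rate

true_hits  = prior_true * power            # true effects correctly flagged
false_hits = (1 - prior_true) * alpha      # null effects flagged anyway

ppv = true_hits / (true_hits + false_hits) # chance a "significant" result is real
print(f"P(finding is real | p < 0.05) = {ppv:.2f}")  # ~0.46 under these assumptions
```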

Your word for the day is "immunobiotics": good bacteria that can aid in the production of certain blood cells

Obesity is a risk factor for lumbar radicular pain and sciatica - and it may impair healing mechanisms

Poor folks still have (on average) poor ways (which explains a large portion of the disparity in life expectancy between those in the bottom quartile of socioeconomic status and everyone else) 

Hospital staff and visitors can spread nosocomial pathogens to nearby businesses

How does metformin help patients with diabetes? Maybe by making the gut a happier place for A. muciniphila

No evidence found to support the hypothesis that parental smoking (and the benzene exposure that goes with it) during pregnancy increases a child's risk of developing acute lymphoblastic leukemia (ALL)

Does the debate about segregating cystic fibrosis patients by lung/bronchial microbiome, going so far as to prevent some/most/all-but-one CF sufferers from attending indoor CF Foundation events, presage future debates as more and more diseases are found to have bacterial components? Here are the pros and cons of the CF debate.

The Human Cost of Bad Science

Because of the aggressiveness of a disease, its stage when detected and/or the requirement that patients enrolled in clinical trials not simultaneously pursue multiple treatments "patients with progressive terminal illness may have just one shot at an unproven but promising treatment." Too often their last desperate shots are wasted on treatments that had no hope of success in the first place. Two new comment pieces in Nature highlight the extent of the problem.

In Preclinical research: Make mouse studies work, Steve Perrin demonstrates that just like cancer patients, ALS/Lou Gehrig's disease patients are betting their lives on treatments that showed great promise in lab animals only to find that they do no good in humans. So why are 80% of these treatments failing? It's not a story of mice and men. It's a story of bureaucratic science. Of going through the motions. Of just turning the crank. And of never, ever, daring to critique your methods lest you find, to take one example, that the reason your exciting new ALS treatment works so well in mice is that your mice didn't have ALS to begin with - you having unwittingly bred the propensity to develop it out of your lab animals.

Then read Misleading mouse studies waste medical resources. It continues the story of how drugs that should have been discovered to be useless in mice instead made their way into clinical trials where they became false promises on which thousands of ALS patients and their families have pinned their hopes.

We hope those courts that have bought into the idea that reliable scientific knowledge can be gained without the need for testing and replication are paying attention.


A Memorandum Opinion And The Methods That Aren't There At All

You'd think that courts would be leery about dressing their Daubert gatekeeping opinions in the "differential etiology method". After all, as you can see for yourself by running the query on PubMed, the U.S. National Library of Medicine / National Institutes of Health's massive database of scientific literature, apparently nobody has ever published a scientific paper containing the phrase "differential etiology method". Of the mere 22 articles ever to contain the sub-phrase "differential etiology" none use it in the sense - to rule in a heretofore unknown cause - meant by the most recent court to don its invisible raiment. Even mighty Google Scholar can manage to locate only 6 references to the "method", and all are law review articles resting not upon some explication and assessment of a scientific method known as differential etiology but rather on the courtroom assertions of paid experts who claimed to have used it.

You'd also hope courts would understand that scientific evidence is no different than any other kind of evidence. It must still be something that has been observed or detected, albeit with techniques (e.g. nuclear magnetic resonance spectroscopy) or via analyses (e.g. epidemiology) beyond the ken of laymen. Yet, while they'd never allow into evidence (say in an automobile case) the testimony of someone who had witnessed neither the accident nor the driving habits of the Defendant but who was prepared to testify that he thought the Defendant was speeding at the time of the accident because the Defendant looks like the sort of person who would speed and because he can't think of any other reason for the wreck to have occurred, some courts will allow that very sort of testimony so long as it comes from a Ph.D. or an M.D. who has used the "weight of the evidence method". Can you guess how many references to scientific papers using the "weight of the evidence method" PubMed yields? The same 0 as the "differential etiology method".

Nevertheless another (memorandum) opinion has joined the embarrassing procession of legal analyses bedecked in these ethereal methods; this time it's a radiation case styled McMunn v. Babcock & Wilcox Power Generation Group, Inc.

Plaintiffs suffering from a wide variety of cancers allegedly caused by releases of alpha particle emitting processed uranium from the Apollo, PA research and uranium fuel production facility sued Babcock and other operators of the site. Following battles over a Lone Pine order and extensive discovery the sides fired off motions to exclude each other's experts. The magistrate to whom the matter had been referred recommended that plaintiffs' general causation expert Dr. Howard Hu, specific causation expert Dr. James Melius, emissions and regulations expert Bernd Franke and nuclear safety standards expert Joseph Ring, PhD be excluded. The plaintiffs filed objections to the magistrate's recommendations, the parties filed their briefs and the District Court rejected the magistrate's recommendations and denied defendants' motions.

Dr. Hu had reasoned that since 1) ionizing radiation has been associated with lots of different kinds of cancer; 2) alpha particles ionize; and 3) IARC says alpha particles cause cancer, it makes sense that the allegedly emitted alpha particles could cause any sort of cancer a plaintiff happened to come down with. It's not bad as hunches go, though it's almost certainly the product of dogma - specifically the linear no-threshold dose model - rather than the wondering and questioning that so often leads to real scientific discoveries. But whether a hunch is the product of the paradigm you're trapped in or the "what ifs" of day dreaming, it remains just that until it's tested. Unfortunately for Dr. Hu's hunch, it has been tested.

Thorotrast (containing thorium - one of the alpha emitters processed at the Apollo facility) was an X-ray contrast medium that was directly injected into numerous people over the course of decades. High levels of radon could be detected in the exhaled breath of those patients. So if Dr. Hu's hunch is correct you'd expect those patients to be at high risk for all sorts of cancer, right? They're not. They get liver cancer overwhelmingly and have a fivefold increase in blood cancer risk, but they're not at increased risk for lung cancer or the other big killers. Why? It's not clear, though the fact that alpha particles can't penetrate paper or even skin suggests one reason. Look for yourself and you'll find no evidence (by which we mean that the result predicted by the hunch has actually been observed) to support the theory that alpha particles can cause all, most or even a significant fraction of the spectrum of malignancies, whether they're eaten, injected or inhaled and whether at home or at work. Be sure to check out the studies of uranium miners.

But let's assume that alpha particles can produce the entire spectrum of malignancies, that the emissions from the facility into the community were sufficiently high, and that the citizenry managed to ingest the particles. What would you expect the cancer incidence to be for that community? Probably not what repeated epidemiological studies concluded: that "living in municipalities near the former Apollo-Parks nuclear facilities is not associated with an increase in cancer occurrence".

Dr. Hu attacked the studies of uranium miners and of the communities around Apollo by pointing out their limitations. This one didn't have good dose information and that one had inadequate population data. Perfectly reasonable. It's like saying "I think you were looking through the wrong end of the telescope" or "I think you had it pointed in the wrong direction". He's saying "your evidence doesn't refute my hunch because your methods didn't test it in the first place."

Ok, but where's Dr. Hu's evidence? It's in his mind. His hunch is his evidence; and it's his only evidence. He weighed some unstated portion of what is known or suspected about alpha particles and cancer in the scales of his personal judgment and reported that CLANG! the result of his experiment was that the scale came down solidly on the side of causation for all of plaintiffs' cancers.

At the core of the real scientific method is the idea that anyone with enough time and money can attempt to reproduce a scientist's evidence; which is to say what he observed using the methods he employed. Since no one has access to Dr. Hu's observations and methods other than Dr. Hu his hunch is not science. Furthermore, there's no way to assess to what extent the heuristics that bias human decision-making impacted the "weighing it in my mind" approach of Dr. Hu.

Given that there's no way to reproduce Dr. Hu's experiment and given that none of the reproducible studies of people exposed to alpha particles demonstrate that they're at risk of developing the whole gamut of cancer Dr. Hu's argument boils down to that of the great Chico Marx: "Who are you going to believe, me or your own eyes?" Alas, the court believed Chico and held that "Dr. Hu's opinions have met the pedestrian standards required for reliability and fit, as they are based on scientifically sound methods and procedures, as opposed to 'subjective belief or unsupported speculation'".

Next, having already allowed plaintiffs to bootstrap alpha emitters into the set of possible causes of all of the plaintiffs' cancers, the court had no problem letting the specific causation expert, Dr. Melius, conclude that alpha emitters were the specific cause of each plaintiff's cancer merely because he couldn't think of any other cause that was more likely. At first that might seem sensible. It's anything but.

Don't you have to know how likely it is that the alpha emitters were the cause before you decide if some other factor is more likely? Obviously. And where does likelihood/risk information come from? Epidemiological studies in which dose is estimated. Of course the plaintiffs don't have any such studies (at least none that support their claim against alpha emitters), but couldn't they at least use the data for ionizing radiation from, say, atomic bomb or Chernobyl accident survivors? After all, the court decided to allow plaintiffs' experts to testify that "radiation is radiation".

Well, just giving a nod to those studies raises the embarrassing issue of dose and the one lesson we know plaintiffs' counsel have learned over the last several years of low-dose exposure cases is to never, ever, ever estimate a dose range unless they're ordered to do so. That's because dose is a measurement that can be assessed for accuracy and used to estimate likelihood of causation. Estimating a dose thus opens an avenue for cross examination but more devastatingly the argument that runs: "Plaintiff's own estimate places him in that category in which no excess risk has ever been detected."

Fortunately for plaintiffs the court held that Dr. Melius' differential diagnosis or differential etiology method does not require that he estimate the likelihood that radiation caused a particular cancer before he can conclude that radiation is the most likely cause among many (including those unknown).

First the court held that it was not its job to decide which method is the best among multiple methods so long as the method is reliable. For this it relied upon In re TMI Litigation. When In re TMI Litigation was decided (1999), the Nobel Prize in Physiology or Medicine for the discovery that Helicobacter pylori, and not stress, was the cause of most peptic ulcers was six years in the future. The method of observational epidemiology and application of the Hill causal criteria had generated the conclusion that peptic ulcers were caused by stress. The method of experimentation, observation and application of Koch's postulates established H. pylori (née C. pyloridis) as the real cause; and, for the umpteenth time, experimentation as the best method. So could a court allow a jury to decide that the peptic ulcer in a plaintiff with an H. pylori infection was caused by stress at work? Apparently the answer is "Yes"; scientific knowledge be damned.

Second, citing In re Paoli and other Third Circuit cases, the court held that differential etiology (without knowledge of dose) has repeatedly been found to be a reliable method of determining causation in toxic tort cases. As we've written repeatedly, this is a claim wholly without support in the scientific literature. Are there studies of the reliability of differential diagnoses made by radiologists? You bet (here for example). Are there studies of immunohistochemical staining for the purpose of differential diagnosis in the case of suspected mesothelioma? Yep. Here's a recent example. There are (as of today) 236,063 pages of citations to articles about differential diagnosis on PubMed and none of them (at least none I could find via key word searches) suggests that a methodology such as Dr. Melius' exists (outside the courtroom), and none represents an attempt to test the method to see if it is reliable.

Third, the court held that since Dr. Melius' opinions were the result of his "qualitative analysis", the fact that plaintiffs were living in proximity to the facility during the times of the alleged radiation releases and the fact that Babcock failed to monitor emissions and estimate process losses to the environment were enough to allow a jury to reasonably infer that plaintiffs were "regularly and frequently exposed to a substantial, though unquantifiable dose of iodized [ionized?] radiation emitted from the Apollo facility." How such reasoning can be anything other than argumentum ad ignorantiam is beyond our ability to understand.

Worse yet is this sentence appearing after the discussion about the absence of data: "A quantitative dose calculation, therefore, may in fact be far more speculative than a qualitative analysis." What would Galileo ("Measure what is measurable, and make measurable what is not so"), the father of modern science, make of that? Yes, an estimate could be wrong and a guess could be right, but the scientist who comes up with an estimate makes plain for all to see her premises, facts, measurements, references and calculations, whereas the expert peddling qualitative analyses hides his speculation behind his authority. Besides, dose is the only way to estimate the likelihood of causation when there are multiple, including unknown, alternate causes. Then again, in addition to everything else Galileo also proved that questioning authority can land you in hot water, so we'll leave it at that.

Finally, in the last and lowest of the low hurdles set up for Dr. Melius, the court found that he had "adequately addressed other possible cause of Plaintiffs' cancers, both known and unknown." How? By looking for "any risk factor that would, on its own, account for Plaintiffs' cancers", reviewing medical records, questionnaires, depositions and work histories, and interviewing a number of plaintiffs. Presumably this means he looked for rare things like angiosarcoma of the liver in a vinyl chloride monomer worker and mesothelioma in an insulator, commoner things like lung cancer in heavy smokers and liver cancer in hepatitis C carriers, and hereditary cancers (5% to 10% of all cancers) like acute lymphoblastic leukemia in people with Down syndrome or soft tissue sarcomas in kids with Li-Fraumeni syndrome. You can make a long list of such cancers but they represent perhaps one fourth of all cases. Of the cancers that remain there will be no known risk factors, so that once you're allowed to rule in alpha emitters as a possible cause ("radiation is radiation") and to then infer from both "qualitative analysis" and the absence of data that a "substantial" exposure occurred, you've cleared the substantial factor causation hurdle (which at this point is just a pattern in the courtroom flooring). Having gotten to the jury all that remains is to make the argument plaintiffs' counsel made before Daubert: "X is a carcinogen, Plaintiff was exposed to X, Plaintiff got cancer; you know what to do."

We're living through an age not unlike Galileo's. People are questioning things we thought we knew and discovering that much of what the Grand Poohbahs have been telling us is false. There's the Reproducibility Project: Psychology, the genesis of which included the discovery of a widespread "culture of 'verification bias'" (h/t ErrorStatistics) among researchers and their practices and methodologies that "inevitably tends to confirm the researcher's research hypotheses, and essentially render the hypotheses immune to the facts...". In the biomedical sciences only 6 of 53 papers deemed to be "landmark studies" in the fields of hematology and oncology could be reproduced - "a shocking result" to those engaged in finding the molecular drivers of cancer.

Calls to reform the "entrenched culture" are widespread and growing. Take for instance this recent piece in Nature by Regina Nuzzo in which one aspect of those reforms is discussed:

It would have to change how statistics is taught, how data analysis is done and how results are reported and interpreted. But at least researchers are admitting that they have a problem, says (Steven) Goodman [physician and statistician at Stanford]. "The wake-up call is that so many of our published findings are not true."

How did we get here? A tiny fraction of the bad science is the result of outright fraud. Of the rest, some is due to the natural human tendency to unquestioningly accept, and overweigh the import of, any evidence that supports our beliefs while hypercritically questioning and minimizing any that undercuts them (here's an excellent paper on the phenomenon). Thanks to ever greater computing power it's becoming easier by the day to "squint just a little bit harder" until you discover the evidence you were looking for. For evidence that some researchers are using data analysis to "push" until they find something to support their beliefs and then immediately proclaim it, read "The life of p: 'Just significant' results are on the rise." For evidence that it's easy to find a mountain of statistical associations in almost any large data set (whereafter you grab just the ones that make you look smart) visit ButlerScientifics, which promises to generate 10,000 statistical relationships per minute from your data. Their motto, inadvertently we assume, makes our case: "Sooner than later, your future discovery will pop up."
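If you'd like to see just how cheap such "discoveries" are, here is a minimal simulation sketch (the counts and variable names are ours, purely for illustration) in which both the outcome and a thousand candidate "exposures" are nothing but random noise:

```python
# A minimal sketch of "data dredging": test enough random variables against a
# random outcome and "significant" associations pop up by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_exposures = 200, 1000

outcome = rng.normal(size=n_subjects)                   # pure noise
exposures = rng.normal(size=(n_exposures, n_subjects))  # more pure noise

p_values = [stats.pearsonr(x, outcome)[1] for x in exposures]
hits = sum(p < 0.05 for p in p_values)
print(f"{hits} of {n_exposures} random 'exposures' are significant at p < 0.05")
# Expect about 50 - a publishable "discovery" a minute, from nothing at all.
```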

Of the remaining bad science, i.e. that not due to fraud or cognitive biases, apparently a lot of it arises because researchers often misunderstand the very methods they use to draw conclusions from data. For example, read "Robust misinterpretation of confidence intervals" and you'll get the point:

In this study, 120 researchers and 442 students - all in the field of psychology - were asked to assess the truth value of six particular statements involving different interpretations of a CI (confidence interval). Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever.
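For the record, here is a short simulation sketch (our own, purely illustrative) of the one thing a 95% confidence interval does promise - that the interval-generating procedure captures the true value in roughly 95% of repeated samples. It is a statement about the procedure, not about any single computed interval:

```python
# What a 95% confidence interval actually promises: the *procedure* covers the
# true mean in about 95% of repeated samples. It does not give the probability
# that any one computed interval contains the truth.
import numpy as np

rng = np.random.default_rng(1)
true_mean, n, trials = 10.0, 30, 10_000
covered = 0

for _ in range(trials):
    sample = rng.normal(true_mean, 2.0, size=n)
    m, se = sample.mean(), sample.std(ddof=1) / np.sqrt(n)
    lo, hi = m - 1.96 * se, m + 1.96 * se   # normal approximation
    covered += (lo <= true_mean <= hi)

print(f"Coverage: {covered / trials:.3f}")  # close to 0.95
```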

We suppose such results would have surprised us before we got out of law school. Nowadays we just nod in agreement; our skepticism regarding the pronouncements of noted scientific authorities became complete after recently deposing an epidemiologist/causation expert who didn't even know what "central tendency" meant. He had also never heard of "The Ecological Fallacy", which explains why he committed the error throughout his report. He couldn't estimate how likely it was that the chemical in question was causative, nor did he know the rate of the disease in plaintiff's age/gender bracket. No matter. His opinions came wrapped in the same non-existent scientific methods and the court has been duly impressed with his extensive credentials and service on panels at NCI, IARC, etc. So it goes.

Hopefully courts will begin to take notice of the rot that sets in when scientists substitute personal judgment, distorted by cognitive biases to which they are blind and intuitions which can easily lead their causal inferences astray, for measurement and experiment in their quest to generate scientific knowledge. That, and the fact that the methods some experts parade about in are not in fact ways of doing science but rather just ways of shielding their unscientific opinions from scrutiny.

Too bad about the method though. If it worked we could hire the appropriate scientific authorities to come up with a cure for cancer. They would ponder the matter, render their opinions as to the cure and testify as to why their "qualitative analysis" which points to the cure is obviously the correct one. The jury would pick the best treatment among those on offer and the court would enter a judgment whereby cancer was ordered to yield to the cure. That's not how it works and it wasn't how it worked in the early 1600s when the sun refused to revolve about the earth, heedless of the pronouncements of scientific authorities and courts of the day. Maybe history will repeat itself in full and scientific knowledge, the product of observation, measurement, hypothesis and experiment will again mean just that and not the musings of "experts" bearing resumes and personal biases rather than facts. Maybe if we're lucky the rot will be cut out and the tide will finally turn in the long war on cancer. We'll see.

Overwonked

Back in the day expert witnesses attacked with their credentials and parried with their jargon. Nowadays they're "wonkish". The result is typically something like this recent affidavit of Dr. Arthur Frank - a beyond encyclopedic recitation of the scientific literature and the conclusions he believes obviously flow from it. Yet Dr. Frank is a piker when it comes to wonky reports. Plaintiffs' expert in a recent benzene/AML case generated a report consisting of nearly 400 pages of densely packed text and calculations. Having become somewhat cynical after 25 years of litigation I can't help but suspect that this trend serves mainly the ethos component of rhetoric by seeming to demonstrate, with an eye on lazy gatekeepers, a deep understanding of the topic at hand. Well, that and making life much more difficult for anyone trying to tease apart and analyze all the pieces that make up these towering works of sophistry.

Take for instance the report of Dr. Frank above. The big issue in asbestos litigation today, and the one he supposedly set out to opine about, is what to make of de minimis exposures both in the context of "but for" causation and substantial factor causation. Instead he sets up the straw man of a serious dispute about whether heavy amphibole exposure can cause a variety of asbestos-related diseases and pummels him to the ground. Page after page he sits on the straw man's chest punching him in the face, and for nearly seventy pages the straw man stubbornly refuses to tap out. Finally Dr. Frank gets to the point, but his answer is nothing new and nothing we haven't discussed here a dozen times.

The question of what to do about de minimis exposures is a public policy issue that science cannot resolve. Dr. Frank and I had a very nice exchange recently at a conference in San Francisco where we both spoke, and he gets it. He asked me "so what happens when you have someone with mesothelioma whose disease was caused by the sum of several de minimis exposures? Is he left without a remedy?" To which I replied "the only difference between that case and Palsgraf is that each of the micro-events (each being harmless without all the others) is the same in the case of asbestos and different in the case of Mrs. Palsgraf. But why would likeness change the answer as to whether a micro-event was de minimis or not?" We agreeably agreed to think about good arguments for and against it.

You can find a copy of my PowerPoint from the conference here.


Five Beagles Refused To Die

Thinking about Harris v. CSX Transportation, Inc. and trying to understand how a court could come to believe that an educated guess that has never been tested, or one that has been repeatedly tested and serially refuted, could nevertheless constitute scientific knowledge, I thought I'd reread Milward v. Acuity Specialty Products: Advances in General Causation Testimony in Toxic Tort Litigation by Carl Cranor. It was published earlier this year in a Wake Forest Law Review issue devoted to advancing the thinking that produced Milward and now Harris. In it he sets out his argument that (a) "[t]he science for identifying substances as known or probably human carcinogens has advanced substantially" over the last quarter century and (b) where science leads, courts should follow. (Cranor, you'll recall, is a philosopher and "the scientific methodology expert for plaintiffs in Milward".)

Cranor begins by asking you to imagine having been diagnosed with "early stage bladder cancer that had been caused by MOCA (4,4'-methylenebis(2-chloroaniline))" following exposure to the chemical in an occupational setting. He then reveals that though "IARC classifies MOCA as a known human carcinogen" many judges would nevertheless deny you your day in court because they don't understand the "new science" for identifying the etiology of cancer. You see, while IARC concluded that "[t]here is inadequate evidence in humans for the carcinogenicity of 4,4'-methylenebis(2-chloroaniline)", its overall evaluation was that "4,4'-methylenebis(2-chloroaniline) is carcinogenic to humans (Group 1)" notwithstanding! And its rationale? MOCA (sometimes bearing the acronym MBOCA, just to confuse things) is structurally similar to aromatic amines that are urinary bladder carcinogens (especially benzidine); several assays for genotoxicity in some bacteria and fruit flies have been positive; in rats and dogs MOCA can form DNA adducts; mice, rats and dogs exposed to MOCA develop cancer (dogs actually develop urinary bladder cancer); one of the DNA adducts found in dogs exposed to MOCA was found in urothelial cells from a worker known to have been exposed to MOCA; and finally, an increased rate of chromosomal abnormalities has been observed in urothelial cells of some people exposed to MOCA.

At that point I stopped re-reading Cranor's paper and started looking into MOCA.

If MOCA really is a human urinary bladder carcinogen, and if thousands of people have been exposed to MOCA in their work for many decades, why is there no evidence of an increased risk of malignant urinary bladder cancer among them? Cranor claims the reason IARC concluded that there's "inadequate evidence" for MOCA being a human carcinogen is that "there are no epidemiological studies". Are there no such studies? If workers exposed to MOCA develop the same DNA adducts demonstrated in dogs, and if four out of five dogs exposed to MOCA develop bladder cancer, then where are all the human cases? And what's the story with the dogs?

It turns out there is an epi study of MOCA-exposed workers. The study was initiated in 2001 and its results were published four years ago. Only one death from bladder cancer was identified, and it was not known whether the man was a smoker (smoking being a leading cause of bladder cancer). There was one bladder cancer registration for a man who had survived his cancer, but he was a former smoker. Finally, there was one case of noninvasive, or in situ, bladder carcinoma; that case was excluded from analysis as there is no reference population that has been screened for benign tumors from which a background rate can be generated (take note of this case of a benign tumor, the significance of which no one can say; it will shortly become important). None of the findings allowed the researchers to reject the null hypothesis, i.e. that MOCA doesn't cause bladder cancer in humans.
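For the curious, a toy calculation shows why a single observed death is no evidence of anything. The expected count below is assumed purely for illustration (we don't have the study's person-years in front of us); the point is only that observing one death when roughly one is expected is exactly what the null predicts:

```python
# Hedged sketch (assumed expected count, not the study's actual figure):
# why one observed bladder cancer death can't reject the null hypothesis.
from math import exp

expected = 0.8   # assumed background expectation of deaths for the cohort
observed = 1

# One-sided Poisson test: probability of seeing >= 1 death if MOCA does nothing
p_value = 1 - exp(-expected)   # P(X >= 1) for a Poisson(expected) count
print(f"P(>= {observed} death | null) = {p_value:.2f}")  # ~0.55, nowhere near 0.05
```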

Then there's "Bladder tumors in two young males occupationally exposed to MBOCA". This study was conceived because "[i]n addition to their chemical similarity MBOCA and benzidine have similar potency to induce bladder tumors in beagle dogs, the species considered to be the best animal model for humans", because 9,000 to18,000 workers were being exposed to it, because it was not regulated as a carcinogen and because they had a group of 540 workers - workers whose smoking history was known, who hadn't been exposed to other bladder carcinogens and who had been screened for signs of bladder cancer and monitored for exposure since 1968. Why were they screened and monitored?. Benzidine and 2-naphthylamine are aromatic amines that long before 1968 were known to consistently produce very high numbers of malignant bladder cancers among exposed workers (with the incidence of malignant bladder cancer reaching 100% in one study of very highly exposed workers) so it was reasonably conjectured that all aromatic amines might cause malignant bladder cancer. 

Of the 540 workers none had died of, or had symptoms of, bladder cancer. However, two workers had been identified for follow-up after screening, and biopsies on each revealed non-invasive papillary tumors of the bladder. Because, again, there is no reference background population, it was impossible to say whether the finding meant the risk of non-malignant asymptomatic tumors among MOCA workers was higher, lower or the same as expected among the unexposed. Nevertheless, after returning to muse about those MOCA-exposed beagles that had developed papillary carcinomas of the bladder, the authors concluded that "[t]he detection of the two tumors in young, nonsmoking males is consistent with the hypothesis that MBOCA induces bladder neoplasms in humans."

And that's it for evidence of bladder cancer in humans from studies of humans exposed to MOCA - which is to say that nobody has ever found anyone exposed to MOCA to be at an increased risk of dying from bladder cancer or even at an increased risk of developing clinically apparent bladder cancer. But at least it kills beagles, right?

Before I looked at the animal studies on MOCA I assumed there'd be lots, that they used modern techniques and that they'd been replicated; likely several times. Not so. IARC cited no rodent study done since the 1970s. Its summary of testing listed one study on mice, five for rats and just one for dogs. The mice were fed lots of MOCA for 2/3 of their lives and many developed hemangiosarcomas (mice and dogs get it, you don't, and the finding probably isn't relevant to humans in any event) and liver tumors. The rats were fed lots of MOCA in different regimens that varied by dose and protein sufficiency. In one study they got common liver and lung tumors. In the next, only liver tumors. In the third, lung and liver tumors and even mesotheliomas. Lung, liver and mammary gland tumors were found in the fourth. Plus lung, mammary gland, Zymbal's gland and liver tumors and hemangiosarcoma in the fifth. The rates were often strongly affected by protein insufficiency status. Neither mouse nor rat developed bladder cancer. But those beagles sure did. Well, four beagles did anyway. But that's not what killed them.

In the late 1960s six one-year-old beagles were started on 100 mg of MOCA three days a week. Six weeks later the dosing was increased to five days a week. Another six beagles, which were fed no MOCA, served as controls. Time passed. Man landed on the Moon, Watergate brought down a President, the Vietnam War ended, the PC was launched and the first fiber optic cable laid, and yet five of the six MOCA beagles carried on (one having died along the way of an unrelated infection). Eventually, as the dogs approached their second decade of life, urinalysis suggested possible neoplasia, so one was killed and dissected. It was healthy (other than the usual ailments associated with being 65 dog-years old) but it did have a papillary bladder cancer - one that had neither metastasized nor invaded surrounding tissues. Eight months later, having enjoyed, it is hoped, seventy happy dog years of life, the remaining four beagles, undaunted and asymptomatic to the last, were also killed and dissected. Three of the four had non-invasive, non-malignant bladder cancer. None of the controls, which met a similar fate, had it.

And that's it. Five dogs in a single, never-replicated study that began more than 45 years ago and ended before PCR, etc., were fed MOCA for a lifetime and never noticed, and perhaps never would have noticed, any effect but for having been "sacrificed" once their biggest mortality risk became old age.

OK. Four of five dogs fed MOCA in a study done long ago developed a non-malignant, non-invasive bladder cancer, whereas none of the unexposed dogs developed the condition. Two humans out of 500+ exposed developed the same non-malignant, non-invasive disease, and in a different study one human exposed to MOCA had a DNA adduct like that of an exposed dog. Setting aside the growing skepticism about the usefulness of animal studies, let's assume this decades-old study proves that exposure to MOCA causes non-invasive and non-malignant bladder cancer. So what to make of it?

First there's the issue of screening bias. You need look no further than the Japanese and Canadian childhood neuroblastoma screening studies to understand that lots of people, including children, get otherwise deadly cancers that simply go away on their own and that screening in such cases merely increases morbidity without decreasing mortality.

Second there's the whole rationale for labeling MOCA a human carcinogen because its metabolites look like those of benzidine (which does cause malignant bladder cancer). So it walks like a duck and quacks like a duck. It still doesn't cause malignant bladder cancer. Shouldn't that give IARC pause? If you decide MOCA's a carcinogen because metabolism turns it into the same things that benzidine gets turned into shouldn't you be scratching your head when it doesn't cause malignant bladder cancer in mice, rats, dogs or humans? Isn't a little humility in order?

Finally, what do you do with such a causal claim in a toxic tort case? If you don't know how many people get non-malignant, non-invasive bladder cancer, how do you know whether MOCA increases, decreases or has no impact on the risk of contracting it? In other words, if you don't know what the background rate for non-malignant, non-invasive bladder cancer is, and you don't know by how much MOCA increases the risk, how can you ever say it's more likely than not a cause of any bladder cancer, much less a particular plaintiff's?

You can't, and that's why the Milward plaintiffs lost when they got back to the trial court. They could make no argument other than to conflate risk with causation. Shorn of the fallacious assertion that any risk is, post hoc, necessarily causative, they couldn't say why benzene was more likely than not the cause of decedent's acute promyelocytic leukemia. They simply couldn't make a coherent argument as to why one putative cause among many was the most likely in the absence of any evidence about which cause was the most common.

In the end Milward serves only as a bad example. An example of what happens, of the time and money wasted, when the law tries to outrun science.


The West Virginia Supreme Court of Appeals Doesn't Get The Scientific Method

Milward v. Acuity has spawned another troubling anti-science opinion: Harris v. CSX Transportation, Inc. Whereas Milward held that credentialed wise men should be allowed to testify that an effect that has never been observed (indeed one that could not be detected by any analytical method known to man) actually exists, Harris holds that such seers may further testify that an effect that would be observable (if it existed) and which has been repeatedly sought in the wake of its putative cause yet invariably not observed actually exists nonetheless.

How are plaintiffs pulling it off? By convincing some judges that testing, the essence of the scientific method, need not be done in laboratories and need not be independently reproducible. These courts have decided that biomedical discoveries can reliably be made, at least in the courtroom, merely by having an expert witness test in his mind a suggested causal association by running it through whatever causal criteria he thinks appropriate and weighing the resulting evidence according to his subjective judgment. Really, it's that bad. Two courts have now granted paid-for hypotheses a status equal to or higher than (depending on the jury's verdict) that of scientific knowledge (conjectures that have been severely and repeatedly tested and have repeatedly passed the tests). Now, we could point out that hypotheses generated by the process endorsed by these courts pan out perhaps 1 time in 100 - i.e. the method's rate of error is 99%, and so it flunks a key Daubert factor - but that ignores the real ugliness here: an attack on the scientific method itself.

It has been said that the point of the scientific method is to let Nature speak for herself. By observing, measuring and recording scientists listen to her. By generating hypotheses about the order in which the observations occurred they attempt to make sense of what she's saying. By testing their hypotheses (i.e. by attempting to reproduce the observed effect in a controlled setting) scientists ask if they've misunderstood her. By publishing their results scientists communicate what they've learned and invite others to try to reproduce and build upon it. This method of objectively assessing a phenomenon, guessing at what it implies about how the world works, testing that guess and then reporting the results along with the method, materials and measurements involved ushered in the world we know today. It also dislodged those who in the past had sought to speak for Nature; those whose power and place had been derived from their ability to explain the world by way of plausible and compelling stories that served some narrative. They were dislodged first because the scientific method proved a better lodestone and second because the method, once applied to ourselves, revealed human intuition and judgment to be woefully prone to bias, fear, superstition and prejudice.  

Luckily for the would-be oracles who made their living as expert witnesses it took a long time and extreme abuse of the word "scientific" before the law came to terms with the scientific method. Finally Daubert accepted the central tenet of the scientific method - i.e. that to be scientific a theory must be testable - and thus necessarily accepted that the law could not lead science as it was obviously unequipped and ill-suited for testing theories. The law would have to follow science.  Other opinions refined the arrangement until we got to where we are today (at least where we are in Texas). Now an expert's opinion must be founded on scientific knowledge and may not reach beyond what can reasonably be inferred from it (i.e. the analytical gap between what is known and what follows from that knowledge doesn't require much of a leap - it's really just a matter of deduction). A case that's kept us busy the last month provides an example.

The plaintiffs' decedent had died of acute myelogenous leukemia (AML) and his family blamed benzene exposure. The battle was fought not over whether benzene can cause AML (though there are some interesting arguments to be made on the subject) but rather over whether plaintiff was exposed to enough and whether the risk posed by the exposure was considerable. The experts did have some leeway on issues like retrospective exposure estimation and whether the latency period was too long, as on both sides there were scientific studies demonstrating the effect in question. Yet in the main the experts' opinions mostly overlapped, differing only according to the testimony of the fact witnesses on which their side relied. The jury thus was to decide which of two competing pictures of plaintiff's workplace made the most sense, and not whether benzene causes AML. Surely that's the sort of case for which trial by jury was designed.

However, many still chafe against Nature's tyranny and argue for the old ways; for human judgment unconstrained by measurement, testing and thus the embarrassing possibility (likelihood, actually) of having their beliefs publicly refuted. So some argue that Nature is too coy and that she refuses to reveal what they're sure must be true. Others just don't like what she has to say. And of course there's the whole financial angle given that a lot more lawsuits could be filed and won if Nature could be made to speak on command or if the subjective judgment of experts could be re-elevated to the status of pronouncements by Nature.

So what to do? One solution is to adopt the "if you can't beat 'em, join 'em" motto and bank on the truism that "if you can't find a way to generate a statistically significant association between an effect and what you suspect is its cause then you're too stupid to be a scientist." But that plan first ran afoul of the courts when it was recognized that, for example, improper epidemiological methodology had been employed to generate the results (see e.g. Merrell Dow v. Havner) and more recently as it has become evident that there's a crisis in the biomedical sciences - that many if not most statistically significant results cannot be reproduced, and that it's because many and probably most reported findings involving small effects (generally an increased risk of 3-fold or less) are false.

What to do, what to do? You need a method a court will say is valid, and you need a test that can't be mathematically demonstrated to generally produce bad results and that also can't be run by someone else (lest she falsify your theory and ruin all the fun). What about equating a decision-theory process like weighing the evidence, or applying the so-called A. Bradford Hill "criteria", to, say, significance testing of statistical inferences (e.g. epidemiology) or to fluorescent labeling of macromolecules for quantitative analysis of biochemical reactions? Now you're on to something! Because the weights assigned to bits of scientific evidence are necessarily matters of judgment, experts can now "test" their own theories by weighing the evidence in the scales of their own judgment. And any theory that passes this "test" gets to be called "scientific knowledge" and, best of all, can never be refuted. A jury can then decide which of two competing pictures of, say, the anthrax disease propagation process (e.g. miasma vs. germ theory) is the correct one. Robert Koch would be appalled, but the Harris court bought it.

The decedent in Harris worked for a railroad and claimed his multiple myeloma (MM) had been caused by exposure to diesel fumes. The problem was that every epidemiological study of railroad workers, i.e. every known test of the potential relationship between working for a railroad and MM, failed to show that MM was associated with railroad work. In fact, every study designed specifically to test the theory that MM follows diesel exhaust exposure by railroad workers has failed to demonstrate an association, much less causation. Plaintiff tried to reframe the question by saying there's benzene in diesel exhaust smoke and that benzene has been associated with MM, but the problem was that there's benzene in cigarette smoke too - far more, in fact, than in diesel smoke - and yet MM risk is not increased by cigarette smoking. Plaintiff then re-reframed the question by arguing that some molecules found in diesel exhaust had been associated with cancer (lung) and, "oh, by the way," some of the chromosomal changes found in Mr. Harris' pathology were sometimes seen in people (with a different disease) exposed to benzene. In sum, there was no evidence drawn from observations of the world, i.e. from the scientific method, to demonstrate that diesel exhaust was a cause of MM in railroad workers; and the trial court excluded the experts' opinions.

On appeal the West Virginia Supreme Court of Appeals latched onto the following quote from Milward which I'll break into its three component sentences:

1) "The fact that the role of judgment in the weight of the evidence approach is more readily apparent than it is in other methodologies does not mean that the approach is any less scientific."

This is where the need for independently verifiable testing is deleted from the scientific method.

2) "No matter what methodology is used, an evaluation of data and scientific evidence to determine whether an inference of causation is appropriate requires judgment and interpretation."

This is where the need for a theory to have passed a serious test; i.e. that the effect has been observed to actually follow the putative cause in an experiment or retrospective epidemiological study, is eliminated as a requirement for a theory to constitute "scientific knowledge."

3) "The use of judgment in the weight of the evidence methodology is similar to that in differential diagnosis, which we have repeatedly found to be a reliable method of medical diagnosis."

This is the punch line. A method for ruling out known diseases to infer the one from which a patient is actually suffering is transformed into a way to rule in by human judgment heretofore unknown causes of that disease without any objective evidence that, to paraphrase Hume, whenever the putative cause occurred the effect has routinely been observed to follow.

Given the foregoing as the court's (mis)understanding of the scientific method it should come as no surprise that it concluded "the experts in the instant case did not offer new or novel methodologies. The epidemiological, toxicological, weight of the evidence and Bradford Hill methodologies they used are recognized and highly respected in the scientific community." The effort to conflate statistical hypothesis testing and pharmacokinetic assays with subjective human judgment was complete and the trial court's ruling was reversed.

So now, in West Virginia, it's enough for an expert to say in response to the question: Has your hypothesis been tested? "Yes, I have weighed the data that gave rise to the hunch in my brain pan and I can now report that it convincingly passed that test and may reliably be considered 'scientific knowledge'". Ugh.

Avandia's Posthumous Pardon

We've been covering the Avandia witch hunt story for more than three years now (see e.g. Avandia: Burn Her Anyway? and Avandia: A Fair Cop?) and would like to take this opportunity to say "We told you so" and point you to this month's New England Journal of Medicine, where you can read her posthumous pardon (such as it is) in The Cardiovascular Safety of Diabetes Drugs — Insights from the Rosiglitazone [Avandia] Experience. The story will be a familiar one, made all the more appalling by the fact that the FDA again played the role of the fool who was tricked into thinking he was a superhero - just as it had in the breast implant affair.

In Act I, a meta-analysis of prior epi studies showed an increase in risk of cardiovascular events among those taking Avandia (rosiglitazone). Yet those first epidemiological studies hinting at a problem post-marketing "had substantial methodologic shortcomings, including multiplicity, which meant that a statistically positive finding might be a false positive result". Meanwhile, analyses of pre-marketing data were "relatively insensitive in assessing cardiovascular risk", in large part because nobody suspected that improving glycemic control (an approach taken by a whole class of diabetes drugs) would increase the risk of cardiovascular events. Nevertheless, Avandia was suddenly deemed to be dangerous.

In Act II the RECORD study, one that could actually uncover any increased risk and which in fact had demonstrated none, was denounced as hopelessly flawed "corporate science" based largely (or at least most stridently) on a claim of conflict of interest given that it was done at the behest of the manufacturer.

In Act III The New York Times and other news outlets decided Avandia was a perfect example of a narrative they were advancing; specifically, that new drugs were no better than older generic versions, were only designed and marketed in order to gouge health care consumers, and were more dangerous too. For months they hyped the story, often claiming that tens of thousands of Avandia users had already suffered serious cardiovascular events and often death due to the drug. Eventually The NYTimes would quote the author of the meta-analysis that started it all as calling the licensing of Avandia "one of the worst drug safety tragedies in our lifetime."

Act IV, meant to be the last, began with an emboldened FDA severely restricting the use of Avandia despite the growing need for effective type 2 diabetes treatments. Then the FDA killed the TIDE trial - "a large cardiovascular-outcome trial designed to evaluate the benefit of rosiglitazone and pioglitazone (a/k/a Actos - Avandia's cheaper rival) as compared with placebo ... and the safety of rosiglitazone as compared with pioglitazone". It was the one study that could have answered the question of whether or not Avandia was better and safer than its generic rival (which btw is itself now the subject of considerable litigation). So, Avandia having been hauled off to the stake and the evidence that might have acquitted her having been forbidden from being gathered, it looked like the story would have a heroic and happy ending. A suddenly cocksure and vastly more powerful FDA, heedless of uncertainty, rushes in and rescues the helpless consumer from the clutches of Big Pharma.

But like all true stories this one didn't end so cleanly. Over the usual objection that anything, including data, ever touched by a corporation is forever corrupted, the FDA did what it's actually supposed to do. It was curious and it asked a question. What, it asked, would the RECORD trial data reveal if it was handed off to, and reanalyzed by, a wholly independent group of researchers with impeccable credentials and reputations? The answer came earlier this year and it "provided reassurance that rosiglitazone (Avandia) was not associated with excess cardiovascular risk."

So what had the FDA really done? Because almost nothing was known about the cardiovascular risk posed by other diabetes drugs, "the FDA decision may have had unintended consequences. The intense publicity about the ischemic cardiac risk of rosiglitazone may have diverted attention from the better-established risk of heart failure that is common to the drug class. Restricted access led patients to switch from rosiglitazone to other diabetes drugs of unproven cardiovascular safety." In short, the FDA had snatched consumers from uncertainty and delivered them into greater uncertainty.

The authors of the perspective conclude their piece hopefully and delicately. "Perhaps the recent experience with rosiglitazone will allow the FDA to become more targeted in its adjudication of the cardiovascular safety of new diabetes drugs, focusing the considerable resources needed to rule out a cardiovascular concern only on drugs with clinical or preclinical justification for that expenditure." We can only hope.


Bostic Oral Argument: Plaintiffs Play A Clever Tune

Notwithstanding the briefs of Georgia-Pacific and numerous amici, appellant Bostic, appellee Georgia-Pacific and (seemingly) most of the justices appeared by the end of oral argument on Monday to reach at least partial agreement on the big issue. Specifically, that it ought not be the law that when multiple defendants create conditions each independently capable of causing plaintiff's injury, none of them may be held liable merely because it would be impossible for plaintiff to prove that any one of them was the "but for" cause of her injury. It wasn't exactly a Kumbaya moment but it was close. And the refrain was of Ford v. Boomer and of Merrell Dow v. Havner; and it made us very worried about the future of one of the most important defense victories ever - Borg-Warner v. Flores.

Until the last third of the proceedings a sensible guess as to which way things might go was hard to come by. Much of the briefing and a fair bit of the argument was hopelessly confused due to the tendency of the various parties to attribute decidedly different meanings to identical causal language. It's hardly the fault of the attorneys. Most causal distinctions made in legal opinions still consist, however solemnly invoked, of little more than "moonshine and vapor" (see "Proximate Cause in California" by William L. Prosser, 1950 - a really fun paper btw). Maybe the court in whatever opinion it authors will adopt a modern lexicon of causality with readily translatable and transportable ideas like "necessary cause" and "sufficient cause". We can only hope. But in any event Bostic's counsel cleverly suggested a compromise, the Davidson rule with a little Havner on top, and suddenly, alarmingly, everyone seemed in agreement.

The Davidson rule, like that of Ford v. Boomer, settles the question of where to draw the line for the outer limit of liability for asbestos-related disease by requiring that plaintiff's exposure from any potentially liable defendant's product (or premises) be in and of itself sufficient to have caused the disease. In Texas (see Havner) that would mean, given the current state of uncertainty about the causal mechanisms underlying asbestos exposure and mesothelioma, that plaintiff would have to show that whatever exposure she attributes to a particular defendant doubled her risk of developing mesothelioma. Any defendant whose contribution could be shown to satisfy the risk doubling requirement would have to face a jury; and any defendant whose contribution did not double the risk could have its summary judgment and go home. Seems fair; what's not to like?
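For those wondering where the doubling requirement comes from, the standard argument runs through the attributable fraction: if a relative risk RR is taken at face value (and under the usual simplifying assumptions of no bias and uniform risk), the chance that an exposed plaintiff's disease is due to the exposure is (RR - 1)/RR, which crosses the preponderance threshold only when RR exceeds 2. A quick sketch:

```python
# Why Havner's risk-doubling threshold maps onto "more likely than not":
# under standard assumptions, the probability that an exposed person's
# disease is attributable to the exposure is (RR - 1) / RR, which crosses
# 50% exactly when the relative risk RR exceeds 2.
for rr in (1.2, 1.5, 2.0, 3.0):
    prob_causation = (rr - 1) / rr
    print(f"RR = {rr:.1f} -> P(exposure caused the disease) = {prob_causation:.0%}")
# RR = 2.0 -> 50%; anything less cannot make causation "more likely than not".
```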

Plenty. Remember that causation is necessary but not sufficient for the determination of substantial factor causation. It is not enough for legal causation to show that grabbing a stumbling passenger's arm dislodged a plain brown package containing fireworks that fell to the ground, exploded and caused, some distance away, a scale to bonk Mrs. Palsgraf on the head. The risk posed by the act of grabbing the stumbling passenger's arm must have been of such a degree as to generate a duty for the ordinarily prudent railroad employee to have done something different. Unless we missed it, not once during oral argument did anyone utter "legal cause" or "de minimis risk". Why? We suspect it's not because everyone forgot that the substantial factor test of legal causation requires causation plus a non-de minimis risk but rather because everyone assumes a 100% risk increase must be substantial - an all too common consequence of our cognitive blind spot for percentages.

Something that increases the risk of mesothelioma by 100% hardly sounds de minimis. But think about it this way: your odds, absent asbestos exposure, of developing mesothelioma are 1:1,000,000 (one in a million). Doubling the risk increases the odds to 1:500,000. Those are your odds of being struck by lightning this year according to National Weather Service estimates. And that risk is tiny compared to your risk of having, say, the wind drop a tree limb on your head.
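
If you'd rather see the arithmetic than take our word for it, here's a quick sketch in Python (the one-in-a-million baseline is the illustrative figure used above, not a measured mesothelioma incidence rate):

    # What "doubling the risk" does to a rare baseline risk.
    baseline_risk = 1 / 1_000_000   # illustrative odds absent asbestos exposure
    relative_risk = 2.0             # Havner's risk-doubling threshold
    doubled_risk = baseline_risk * relative_risk

    print(f"Baseline: 1 in {1 / baseline_risk:,.0f}")   # 1 in 1,000,000
    print(f"Doubled:  1 in {1 / doubled_risk:,.0f}")    # 1 in 500,000
    # A relative risk of 2.0 is just another way of saying a 100% increase:
    print(f"Increase: {relative_risk - 1:.0%}")         # 100%

A risk can be doubled, in other words, and still leave you in lightning-strike territory.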

So should a 1:500,000 risk be big enough for the imposition of a duty? That's for the courts to decide by way of public policy analysis, but they ought to go into the exercise understanding what it would mean to say that someone who creates a 1:500,000 risk of death can be subject to liability and even punitive damages. Consider these examples of activities that increase the risk of death by 1:500,000: two days of snow skiing, going horseback riding four times, or eating one peanut butter and jelly sandwich per year (aflatoxin). If that doesn't make the point, consider this: 1:500,000 odds are lower than those of flipping a fair coin 18 times and having it come up heads every time.
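
And if you want to check the coin-flip claim yourself, the computation fits in a few lines (Python again; the only inputs are the figures already in the text):

    # Compare a 1-in-500,000 risk to 18 consecutive heads from a fair coin.
    p_heads_18 = 0.5 ** 18      # = 1/262,144
    p_risk = 1 / 500_000

    print(f"18 heads in a row: 1 in {1 / p_heads_18:,.0f}")  # 1 in 262,144
    print(f"The risk at issue: 1 in {1 / p_risk:,.0f}")      # 1 in 500,000
    print(p_risk < p_heads_18)  # True - the 1:500,000 event is the rarer one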

So here's to hoping the court keeps in mind that whether or not a duty should be imposed in a given case is gauged by whether the defendant has created a substantial risk, not by whether a risk has been altered substantially. Otherwise we'd wind up with the absurd conclusion that doubling an infinitesimal risk creates a substantial one.

P.S. If you're interested in how the counterfactual or "but for" view of causation survives a multiple sufficient causes problem like that seen in Bostic see chapter 10 of Judea Pearl's book "Causality".

"The sophisticated user doctrine is thus not an exception to the duty to warn, but an application of it."

That quote is from an opinion issued at the end of July that we wanted to bring to your attention. Dehring v. Keystone Shipping Co. involved an able-bodied seaman who lost his thumbs to a winch. He brought negligence and unseaworthiness claims against the ship owners and product liability claims against the company that designed and manufactured the winch. The manufacturer moved for summary judgment on the product claims and plaintiff sought a partial summary judgment on the ship owners' contributory negligence defense. The motions were referred to the magistrate, who recommended that the defendant's motion be granted and the plaintiff's denied. Rather than issuing some dull, perfunctory order adopting those recommendations, Judge Ludington authored a gem of an opinion. Concise and insightful, it contains an excellent account of the evolution of thinking about product design defect claims as well as a useful reminder about the nature of, and purpose behind, the duty to warn.

Why, for example, do courts often resort to negligence-like language even when addressing a design defect claim within a strict liability framework? Because design defect was never supposed to be rolled into strict liability. Design defect claims inevitably involve the questions of whether there was a risk, whether it was a risk worth taking, whether it was foreseeable (i.e. an appreciable risk), and whether there was some feasible, risk-reducing alternative. Those, after all, are also the questions at the heart of the negligence inquiry.

And what must plaintiff show in a design defect claim to carry his burden of proving that the risk posed by the product outweighed its usefulness? Is it enough to show that what happened was foreseeable and that the resulting harm outweighed the cost of making the product safer? No. Doing so skips the vital question of how likely the harm was to occur in the first place. The foreseeability inquiry comes after the "how likely was it" question, not before.
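
One classic way to formalize the point is Learned Hand's B < P x L calculus: precaution is required only when its burden is less than the probability of the harm times the magnitude of the loss. Here's a toy sketch in Python, with every number invented purely for illustration (none of it comes from Dehring):

    # Toy risk-utility comparison in the spirit of Learned Hand's B < P*L.
    p_harm = 1 / 100_000    # hypothetical probability of the harm occurring
    loss = 1_000_000        # hypothetical magnitude of the harm, in dollars
    burden = 500            # hypothetical cost of the safer alternative design

    expected_loss = p_harm * loss   # $10 of probability-weighted harm
    print(loss > burden)            # True - but this comparison skips probability
    print(expected_loss > burden)   # False - the burden exceeds the expected loss

Leave out the probability term and nearly any precaution looks worthwhile; weight the harm by its likelihood and the calculus can flip entirely.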

Finally, are sophisticated user cases premised on the "no duty" argument that so many courts reject out of hand? No. There is a duty, but "[t]he purpose of the duty to warn is to inform the audience of a product's non-obvious risks. What risks are non-obvious depends on the audience - risks that may not be obvious to a layman may be obvious to the skilled professional. The sophisticated user doctrine is thus not an exception to the duty to warn, but an application of it."

The winch in question had been in use on the ship for 55 years and plaintiff had been working aboard her for several years when the accident occurred. Nevertheless, he could point to no other accident in which it, or even a winch of similar make on some other ship, had ever caused an injury. His only possible evidence of risk (with himself as the sole data point) was that it was extremely small. And by testifying that he knew (from prior experience and training) that moving the switch on the winch control panel would set in motion a device used to moor a 767-foot freighter with 7/8-inch steel cables, the plaintiff put himself in the category of persons for whom what followed was the opposite of "non-obvious". The winch-maker accordingly prevailed.


Alas, The Maryland Court of Appeals Has Reversed Ford v. Dixon

Sound science and, more importantly, sound reasoning about science have slowly been making their way into appellate decisions for two decades now, but last year's Ford v. Dixon was something special. We called it the best causation opinion of 2012 and, without saying so, thought it a Palsgraf for this age of Big Data. Acknowledging the data available for estimating a given asbestos exposure and the risk attendant to that exposure, the opinion simply asked of plaintiffs that they 1) demonstrate that the risk complained of is not merely de minimis; and 2) explain how they estimated the risk.

The opinion recognized the centrality of risk in so-called substantial factor causation analysis. It also held that establishing the existence of an infinitesimal risk cannot suffice to carry plaintiff's burden of showing a substantial one. Finally, it required that plaintiff estimate the risk by reasonable inference drawn from sound quantitative science. Essentially it replaced crude (and easily distorted) proxies for risk like Lohrmann's frequency, proximity and regularity test (now more than a quarter century old) with a modern approach based on the data and the statistical inferences the data warrant. Its analysis was, as the dissent in last week's opinion stated, excellent.

But it didn't reflect the law in Maryland according to the state's highest court (Dixon v. Ford). Reciting that it had held as recently as 2011 that the "'frequency, regularity, and proximity' test remains 'the common law evidentiary standard used for establishing substantial-factor causation in negligence cases alleging asbestos exposure'", and that a decade before that it had declined to hold that a plaintiff must "present expert testimony as to the amount of respirable asbestos fibers emitted by a particular product", the court found that the plaintiff had satisfied her burden by showing that her husband had done 1,000 brake jobs and by her expert's conclusion that the resulting in-home exposures were "high" and so "a substantial factor" in causing her mesothelioma.

Apparently Ford's main argument was that plaintiff's expert had hung her hat on the dubious "every fiber / every breath" theory. The court found that argument disingenuous as the evidence established years of work on asbestos-containing brakes and routine contamination of the home from the clothing of plaintiff's husband. Ford's better argument (described by the court as "a fallback") was that the failure (or more likely refusal) of plaintiff's expert to estimate the risk from such exposures meant that the question of whether the exposures were a substantial factor (i.e. a "but for" cause plus a not insubstantial risk) went unanswered. The testimony, lacking any information about risk, could not possibly assist a fact finder in determining whether the risk was substantial. The only answer the court could muster to this objection was that it already had a substantial factor test - the frequency, proximity and regularity test - and that it was satisfied by evidence of an asbestos-related disease, 1,000 brake jobs and an exposure opined to be "high".

Ironically, the court then went on to spend many more pixels on a discussion of the Dixon plaintiffs' damages. Imagine what would happen if a plaintiff were to try to sustain an award for lost wages when his evidence consisted only of the following: a) "I planned to keep working"; b) "I worked six days a week"; and c) "I was a very hard worker." Without some numbers behind those statements no jury could sensibly answer even a binary question like "Has he lost more than $50,000?", much less "How much has he lost?" And even if an economist showed up to support whatever sum plaintiff's counsel intended to blackboard/whiteboard/PPT/etc., no court would let him testify unless he could at least opine about the incomes of people who have jobs like the plaintiff's.

So why, in a time when it's easy to find data about the asbestos exposures of people who had jobs like the plaintiff's, don't we demand that experts say what they are? And why, in a time when it's easy to estimate the risk posed by a given exposure, don't we demand that experts say what it is? We know why plaintiffs don't want to have to quantify dose and risk in low dose cases - either the calculated risk comes out too small, or it's too easy to call BS on the way it was calculated if it comes out high. But why so many courts continue to resist quantitative data on the question of substantial factor causation in asbestos cases remains a mystery to us; especially when there's so much data available.

In the end, Ford v. Dixon sought to introduce the law to the sort of decision-making tools that are revolutionizing everything from medical diagnosis to weather forecasting, in hopes of making justice a little less rough and a little more just. If it had a flaw, it was that it was ahead of its time.
