Mass Torts: State of the Art

Tracing Listeria Through Time

Posted in Uncategorized

The aspect of the recent Listeria monocytogenes outbreak that is likely to have the biggest impact on pathogen transmission litigation going forward is the ability to identify victims who acquired the infection years before the outbreak was finally recognized and its source identified. Because in recent years some state health departments have begun preserving samples from patients diagnosed with certain infectious diseases, the CDC, now armed with the ability to use PFGE to “fingerprint” bacteria, has realized that although the ice cream contamination wasn’t suspected until January of this year and wasn’t confirmed until last month, people have been getting sick from it since as far back as 2010.

L. monocytogenes infections are acknowledged to be far more widespread than what’s reflected in the CDC outbreak statistics. Most cases produce nothing more than short-term, mild, flu-like symptoms and go undetected, as patients rarely get to a physician before they’re feeling better and so diagnostic tests aren’t even run. In the very old, the very young and the immune-compromised, however, it can produce a systemic or invasive infection with a significant mortality rate. It is these cases, assuming the infection is detected (it’s estimated that at least 50% of all such cases are accurately diagnosed), that get the attention of state health departments and the CDC. The silent tragedies are the miscarriages and stillbirths caused by L. monocytogenes. An expectant mother can acquire the infection and experience nothing other than the vaguest sense of being under the weather while the bacteria launch an all-out attack on her child. The cause of those deaths regularly goes undetected. This whole thing renders jokes about husbands being sent out on late night runs for pickles and ice cream soberingly unfunny.

From the legal perspective the creation of databases of the genetic fingerprints of pathogens will obviously increase the number of plaintiffs in the future as more silent outbreaks are discovered and previously unknown victims from the past are identified. It will also create some interesting legal issues. Take, for instance, Texas’ two-year statute of limitations in wrongful death cases. There’s no discovery rule to toll the claim, in large part because death is an easily appreciated clue that something has gone wrong and those who suffered the loss have two whole years to figure out the cause. Here, though, the ability to discover the cause didn’t exist in 2010. But of course if we start to draw the line somewhere else, the debate over where is quickly overrun by the horrid thought of people digging up Granny, who died of listeriosis back in 1988, to see if the genetic fingerprint of her killer matches one from a growing list of suspects. And let’s not forget about L. monocytogenes’ aiders and abettors: the other types of bacteria with whom it conspired to form the biofilm that protected L. monocytogenes from the disinfectants used to clean the food processing equipment. The promiscuous bugs likely acquired their special skill thanks to horizontal gene transfer, not just among their own phyla but from any passing bug with a helpful bit of code (like one that protects against the chemical agents, scrubbing and high pressure sprays used to disinfect food processing equipment) – and none of them were spontaneously generated at the ice cream factory. They all came from somewhere else.

Ultimately what makes claims arising out of the transmission of pathogens so different from other mass torts is that there is none of the usual causal uncertainty because, for example, the only cause of the 2010 patient’s listeriosis was Listeria monocytogenes that came from a particular flavor of ice cream that came from a particular plant. So what makes today’s news important is not that science can now answer “where did it come from and how many were infected?” but rather that science now asks in reply “how far back do you want to go?”

Discretizations

Posted in Epidemiology, Microbiology, Molecular Biology

Listeria monocytogenes is estimated to cause the loss of 8,800 years of life annually in the U.S. due to poor health, disability and premature death

Fighting L. monocytogenes in refrigerated milk products with viral drones

A method for relatively rapid (a few hours) and accurate (97%+) detection of L. monocytogenes in milk

And another

L. monocytogenes is a problem for seafood too thanks to biofilms

A coating to make food-handling surfaces resistant to bacterial adhesion and biofilm formation

A tool (PFGE) for tracing L. monocytogenes from food processing equipment to food

PFGE, the gold standard for tracing L. monocytogenes

Dubious About Bringing Scientific Peer Review to Scientific Evidence

Posted in Reason, The Law

Bloomberg’s Toxics Law Reporter recently published a paper by Professor David L. Faigman titled “Bringing Scientific Peer Review to Scientific Evidence” that sets out an idea worth thinking about. Specifically, that the quality of scientific testimony presented to juries would be improved (and presumably the likelihood that justice is done would be increased) if the job of screening proposed testimony were shifted to, or at least augmented by, experts in the relevant field. Surely that would be better than the current system, in which the task falls solely to someone who’s often sitting where he or she is for the very reason that science and math weren’t his or her thing. However, my objection goes to the premise lurking within the idea: that the courtroom is in the first place a proper venue for publishing, testing and ultimately deciding the scientific status of a hypothesis.

Name some established scientific theories that were first published in a courtroom. How about an analytical technique developed for and refined through the adversarial process at the courthouse that has made its way into widespread use beyond the bar? Cue the crickets. On the other hand, if asked for examples of theories and techniques deemed “scientific” at the courthouse yet found to be embarrassingly unscientific and the cause of widespread injustice, you can start with the National Academy of Sciences’ scathing 2009 report on forensic science and update it with the weekend’s news that the FBI now admits that its hair analysis, admitted in hundreds of trials, was almost always “scientifically invalid”. Courtroom-produced science has a long and dismal record, yet courts continue solemnly to admit into evidence opinions of experts that would be laughed at if presented in a venue geared toward the scientific enterprise.

Rather than speculate (again) about the cause of the problem I’d like to remind anyone who’s reading that there’s a simple solution and it doesn’t involve peer review (besides, let’s not forget there’s a reason why most peer reviewers do worse than coin flipping when it comes to separating the scientific wheat from the chaff). The solution is to admit only theories that have been borne out by observation of predictions made by the theory. That’s it. I propose treating scientific evidence like any other evidence.

Imagine a case involving a car wreck at an intersection with a traffic signal and a witness who will testify as to having seen Defendant enter the intersection against a red light. No difficult admissibility issue here. Now imagine the same facts except the witness didn’t actually observe the accident but nevertheless is prepared to testify that Defendant looks like the sort of person who’d run a red light and so probably did. Again, there’d be no angst over the (in)admissibility of such testimony. But when an expert converts “looks like the sort of person who’d run a red light” into the jargon of psychological science and converts “probably did” into something like “assuming independence among the variables I would estimate that there is less than a 1 in 2,000,000 chance that Defendant did not run the light” judges too often allow what would otherwise be seen as obvious baloney to be admitted as filet mignon. Yet all a judge need do to avoid the mistake is to first ask “have you or anyone else actually observed Defendant run a red light?”

We tend to lose sight of the fact that science and math are practical endeavors – the purpose of which is to make accurate predictions about questions like “where will this cannonball land if fired at an angle of elevation of 45 degrees and with a muzzle velocity of 1,054 ft/sec?” Practicality being paramount, conjectures about ballistics or any other aspect of nature are always tested the same way – subsequent observations are compared to predictions and whichever theory comes closest wins. There’s no trophy or ribbon for second place.
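The cannonball question actually has a closed-form answer under textbook assumptions. Here's a minimal sketch (my illustration, not from the post) of the drag-free range formula R = v²·sin(2θ)/g; note that ignoring air resistance is a big assumption, and a real cannonball would land far short of this figure:

```python
import math

G_FT_S2 = 32.174  # standard gravity in ft/s^2

def vacuum_range(muzzle_velocity_ft_s, elevation_deg):
    """Idealized (drag-free) projectile range on level ground: R = v^2 * sin(2*theta) / g."""
    theta = math.radians(elevation_deg)
    return muzzle_velocity_ft_s ** 2 * math.sin(2 * theta) / G_FT_S2

# The post's example: 45 degrees of elevation, 1,054 ft/sec muzzle velocity
r = vacuum_range(1054, 45)
print(f"vacuum range: {r:,.0f} ft ({r / 5280:.1f} miles)")
```

The prediction is then tested the only way it can be: fire the gun, see where the ball lands, and let the closer theory win.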

All “science” really is is a formalization of the way we’ve learned to understand the world: trial and error. It’s the way babies learn how to make Mom come running and it’s how toddlers learn to walk. Eventually we come to know how cars travel, how intersections are laid out, how things are perceived when at a distance and how a certain range of frequencies in the visible light spectrum means a traffic signal is red. Get it? We’re all scientists.

Now should the expert reply that the individual pieces that form the foundation of his theory have been tested and that ought to be enough remind him that that is no test at all of his theory that Defendant is a red light runner. Just as no judge would put her family on a type of airplane that has never been tested, irrespective of the evidence for the airworthiness of its component parts, the fact that the pieces of evidence that generate a hypothesis are sound doesn’t mean the hunch, however clever, will be borne out. One of the essential elements of the scientific revolution was the recognition that Nature is stranger than we can imagine; so much so that her secrets cannot yield even to the keenest mind precisely because we know so little about how the world works. Thus it was that originally through tragedy and investigation and later through extensive pre-maiden flight testing we learned that an airplane is more than the sum of its parts and that it has its own unique resonance vibration characteristics which if not recognized and controlled for can lead to disaster.

My argument then is as follows: if the prediction entailed by an expert’s opinion has never been observed outside the courtroom it is not evidence, because evidence is simply that which has been observed. If the response is that the harm alleged by plaintiff is in fact the observation that proves the predictive power of the expert’s hypothesis your reply should be that it is only evidence of our ability to string cherry picked pieces of the past together into a narrative to explain some other thing that lies in the past and so beyond the reach of science. Real science means placing a bet, on your reputation if nothing else, on a forward looking experiment the results of which you do not know in advance. If on the other hand the opinion follows from some hypothesis that has indeed been tested (and I’ll save for some other day a discussion about why low power studies of small effects don’t constitute tests) then it ought to be admitted.

We’re seeing fewer and fewer cases tried, and the enormous costs associated with expert witnesses and all the battles we have to fight over them just to get to the courthouse play a very big part in it. Costs associated with experts increasingly force cases to be settled and cause others that might have merit never to be filed in the first place. Something needs to be done about it, and it seems to me that the simple rule of requiring scientific evidence to be made out of the same thing as any other evidence, an observation that confirms the predictive power of the theory, would go a long way toward untangling the current mess and would at the same time have the salutary effect of putting “evidence” back into “scientific evidence”.

Discretizations

Posted in Microbiology, Reason, Risk, The Law

The Supreme Court of Connecticut adopts the duty triggering rule “if an outcome made possible by the act is foreseeable then any outcome made possible by the act is foreseeable” – and in the process proves that it doesn’t understand physics (the case involved a child dropping an 18 lb piece of cinder block from the third floor of a building on the head of another child standing below. If you remember that the energy an object delivers on impact is its weight times the height from which it falls, you’ll understand that none of the what ifs proposed by the court as potential risks from the cinder block resting on the ground or in the hands of a child generate anything close to the energy involved in this scenario. There’s a reason fall protection is required for workers on the third level of scaffolding but not for those standing on the ground).
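The point is easy to quantify. A quick sketch (my numbers, not the court's: I'm assuming roughly 25 ft for a third-floor drop and 3 ft for a block held by a child, both illustrative guesses) comparing impact energy and impact speed:

```python
import math

G_FT_S2 = 32.174  # standard gravity in ft/s^2

def impact_energy_ft_lb(weight_lb, drop_ft):
    """Energy delivered at impact = potential energy at release: weight * height (ft-lb)."""
    return weight_lb * drop_ft

def impact_speed_ft_s(drop_ft):
    """Free-fall impact speed: v = sqrt(2 * g * h)."""
    return math.sqrt(2 * G_FT_S2 * drop_ft)

# Illustrative heights (assumptions, not case facts): third floor ~25 ft, child's hands ~3 ft
for label, h in [("third-floor drop", 25.0), ("waist-high drop", 3.0)]:
    print(f"{label}: {impact_energy_ft_lb(18, h):.0f} ft-lb at {impact_speed_ft_s(h):.0f} ft/s")
```

Under these assumptions the third-floor drop delivers roughly eight times the energy of the waist-high one, which is the whole point about scaffolding versus standing on the ground.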

From endoscopes to ultrasound probes here’s an excellent overview of potential reservoirs of deadly pathogens and a discussion of the ability of some bacteria to escape detection by entering a state of suspended animation known as “viable but non-culturable”

Pouring salt into wounds – your body does it to battle infections and this article does the same thing to those suddenly shy public health advocates who once demanded that you go on a low salt diet

The evidence that screening healthy people for diseases like cancer saves no lives is approaching the point of being overwhelming. In the same journal be sure to read the additional commentary here, here and here.

We need to do more to fight urinary catheter infections

A Seat Belt Case Illustrates Why Risk Beats Causation as an Explanation of Tort Liability

Posted in Reason, Risk, The Law

The Texas Supreme Court has finally done away with the prohibition on seat-belt evidence in auto accident cases. See Nabors Well Services, LTD. et al v. Romero, et al. I recall thinking in law school just how odd it was that a defendant couldn’t introduce evidence of plaintiff’s failure to wear his seat belt. After all, the elements of negligence are duty, breach, causation and damages. In a hypothetical case where “but for” plaintiff’s failure to wear his seat belt he probably wouldn’t have impacted the windshield thereby causing the head injury for which he brought suit, surely he breached the duty he owed himself by willfully refusing to use a readily available device that would greatly reduce the risk of smashing his head on the windshield. But that wasn’t the law.

Moving one link up the causal chain, the court four decades earlier had reasoned that liability hinged upon the immediate cause of the cars’ collision rather than the cause of plaintiff’s injury. The thought was that if defendant’s car hadn’t struck plaintiff’s then the plaintiff wouldn’t have hit his head, seat belt or no. But you’re not entitled to a recovery, irrespective of defendant’s reckless driving, just because you’ve been in an accident. You have to show that defendant caused you to suffer damages. So the real question is what was the cause of plaintiff’s damages, and here the liability inquiry ought to encompass all those acts or omissions that bear upon the question of why plaintiff hit his head on the windshield.

Another question is why start the causal inquiry either at the immediate cause of the collision or at the immediate cause of plaintiff’s head hitting the windshield? What if someone had mistakenly called plaintiff to tell him he needed to come back downtown to work when he otherwise wouldn’t have? What if someone else had been texting at a red light, causing the plaintiff in the car behind him to miss the light and thereby, 10 minutes later, bringing plaintiff into the path of defendant’s vehicle (which of course wouldn’t have been there when plaintiff passed that point had he made the first light 11 minutes earlier)? Why doesn’t legal causation extend to these other “but for” causal links? Nobody has a good answer because there isn’t one when courts try to base legal causation on “but for” causation alone. All they can do is pick a link or two in the causal chain and pretend not to notice, or worse yet try to explain away, all the other “but for” causal links in it.

If on the other hand you decide that risk is the better determinant of legal causation you get not only a coherent explanation of tort liability, you also get better public policy. Imagine a simple rule like: everyone who creates a substantial risk has a duty to minimize it. Driving is one of the riskiest things you can do, accounting annually for tens of thousands of deaths. As the court demonstrates with National Highway Traffic Safety Administration data, you can dramatically decrease that risk by wearing your seat belt. On the other hand, the risk of head trauma posed by a single commute or a missed green light ranges from negligible to infinitesimal. Thus basing fault on the creation of, or failure to minimize, a substantial risk rather neatly sorts out, it seems to me, all the causes in the chain leading to plaintiff’s injury into those to which liability may attach and those to which it cannot, while simultaneously disincentivizing and incentivizing conduct that respectively creates or abates a substantial risk.

All in all it’s a good opinion (hooray for the Palsgraf mention!) but it wasn’t the “risk as the unifying theory of negligence jurisprudence” opinion for which I’ve been waiting.

Discretizations

Posted in Uncategorized

Are farm workers at risk from airborne MRSA?

Warren Buffett understands risk far better than your average public health professional

Legal scholars continue to push inference to the best explanation as the form of reasoning our rules of evidence are designed to serve. Just remember that when inference to the best explanation is applied to probably false but nevertheless admissible risk factor studies it turns them into verities (which is why plaintiffs have been pushing it)

Is plaintiff’s lung cancer primary or does it represent a metastasis? There’s a new Roggli paper for that.

A vaccine for norovirus! No more cruise ship outbreak cases for you.

Meta-analysis suggests weak but consistent association between hormone replacement therapy and ovarian cancer

So Much for the Epidemiological Transition Part IV

Posted in Causality, Epidemiology, Reason

The idea (known as the epidemiological transition) that infectious diseases had been or soon would be conquered and that chronic and degenerative diseases, often if not mostly the result of man’s vices and industry’s alleged toxins, would be the primary cause of human mortality has got to rank among the worst ideas of the last 50 years. Coupled with null-hypothesis statistical significance testing (and its propensity for generating false positives in risk factor epidemiology studies) it was the bad idea that launched crusade after crusade against everything from eggs to fat to salt to electricity to vaccines to cell phones. Meanwhile, today’s news that varicella zoster, the virus that causes chicken pox and shingles, was found in 74% of those who died of giant cell arteritis but only 8% of those who died of other conditions, strongly suggests that our ancient predators were anything but conquered.

Herpes zoster is increasingly being implicated in cerebrovascular disease, and cerebrovascular disease in turn in dementia. And there’s direct evidence that herpes zoster encephalitis produces dementia. Could it be that a common virus, another member of the herpesvirus family, is responsible for that terrible scourge of an aging population – Alzheimer’s? (See: Intracerebral propagation of Alzheimer’s disease: strengthening evidence of a herpes simplex virus etiology.) It’s too early to tell of course, but at least they’re looking, and so far there are a number of indications that an infectious process lies at the heart of this degenerative process (see Moving Away from Amyloid Beta to Move on in Alzheimer’s Research, just published in Frontiers in Aging Neuroscience). It’s a shame it took so long to look. (hat tip – LKD)

Dreadful Sentence of the Week

Posted in Causality, Epidemiology, Reason

“… 62 of the plaintiffs … had statistically significantly higher rates of genitourinary and reproductive illness and procedures compared to the rest of the county.”

That’s from Whitlock v. Pepsi Americas, a hexavalent chromium case, and it was part of the reasoning that went into the court’s decision to grant plaintiff leave to supplement her expert report based on this “new scientific information.” I’ll explain just why the reasoning is deeply flawed shortly, but first I’ll answer the question of why you should care. If the sort of risk factor epidemiology on which the court rests its opinion is really science, and if the sort of data dredging that went into the study from which the “new scientific information” was inferred is really the scientific method, then anything can always be shown scientifically to cause everything and Daubert has been finally and thoroughly eviscerated.

Whitlock’s underlying facts are typical of those mass tort cases that follow the closing of a plant that was a sparsely populated county’s largest employer. Toxins are identified and the lawyers file suit on behalf of dozens or hundreds of clients with conditions that might be associated with exposure. Here approximately 1,000 toxic tort cases blamed pollution from Remco Hydraulics, Inc.’s Willits, CA manufacturing plant for a host of ailments. Those cases have spawned numerous interesting orders and opinions, Whitlock being only the most recent.

The district court had previously found the proposed exposure and causation testimony of Plaintiff’s experts to be unreliable and accordingly granted summary judgment in favor of Defendants, but the U.S. Court of Appeals for the Ninth Circuit in an unpublished opinion held that the trial court had abused its discretion; and plaintiff was back in business. Meanwhile, a study to identify possible risk factors associated with living in Willits was being updated, but the results arrived after Plaintiff’s deadline to amend her experts’ reports. This iteration of Whitlock, then, concerns the court’s determination that the study update, which it deemed “newly discovered evidence in support of her claims”, constituted good cause for amending her experts’ reports.

The study, Longitudinal analysis of health outcomes after exposure to toxics, Willits California, 1991-2012: application of the cohort-period (cross-sequential) design, looks at the incidence of groups of ailments and/or procedures defined by “body system” noted at discharge for any patient discharged between 1991 and 2012, sorted by the decade in which each patient was born (’40s, ’50s, ’60s, ’70s or ’80s). Then the rate of each grouping, for each decade of birth, for patients with a residential address containing the Willits ZIP-code, is compared to the rate of each grouping, by decade of birth, for patients who lived in the same county but didn’t have a Willits ZIP-code (a/k/a ROC, or “rest of the county”), thereby generating a relative risk. The authors also calculated the relative risk (Willits ZIP vs ROC) for hospital admissions, discharges and days spent in the hospital.
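For readers who haven't worked with these studies, the relative risk being calculated is just the ratio of two incidence rates. A sketch with entirely hypothetical counts (my illustration; these are not the study's numbers):

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """RR = incidence among the exposed / incidence among the unexposed."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical counts for one body-system grouping and one birth decade:
# 120 of 4,000 Willits-ZIP patients vs 180 of 12,000 rest-of-county (ROC) patients
rr = relative_risk(120, 4000, 180, 12000)
print(f"relative risk (Willits ZIP vs ROC): {rr:.2f}")
```

A relative risk of 1.0 means the two groups were hospitalized for that grouping at the same rate; anything above 1.0 is what gets reported as "increased risk."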

Willits men and women, sorted this way, were more likely to be hospitalized and to have spent more time in the hospital than non-Willits ZIP-code residents of the same county. And as for “body systems” Willits women were “at increased risk for all measures” whereas men “were at increased risk for all measures except genitourinary system diagnoses and procedures, and gender based procedures and cancer.” The authors conclude from their study that the people of Willits were at increased risk of “poor health”, that the burden on the community is “incalculable”, and that the cost to the public is “enormous.” If you think those are reasonable inferences given this data you’re about 20 years late to the scientific community’s realization that risk factor epidemiology isn’t science, generates more false leads than promising hypotheses and is easily exploited.

In 1994 the late Petr Skrabanek wrote The Emptiness of the Black Box. It wasn’t the first journal article to call BS on risk factor epidemiology but it was the best; and coming from a leading epidemiologist and public health advocate it was also the most powerful of its time. Seven years later, reflecting on the fact that risk factor epidemiology had not only failed to uncover the cause of “a disease which showed an epidemic rise in industrialized countries” but had falsely indicted certain exposures, thereby impeding attempts at prevention and cure, the new editors of The International Journal of Epidemiology wrote Epidemiology – is it time to call it a day? In it they discuss the failures, the lack of rigour in the discipline and the already obvious decline in the use of risk factor epidemiology to identify causes of health problems in groups of people. Over the last ten years (as we’ve chronicled repeatedly) the status of risk factor epidemiology has only fallen further. Imagine the money wasted, hopes dashed and time lost in the largely fruitless search for reliable markers of cancer prognosis despite the fact (or actually because of the fact) that Almost All Articles on Cancer Prognostic Markers Report Statistically Significant Results.

Now, the fact that most risk factors identified by risk factor epidemiology turn out to be false does not lead necessarily to the conclusion that having a Willits ZIP-code and having been born sometime between 1940 and 1989 doesn’t put you at greater risk of being hospitalized some time between 1991 and 2012 (though it ought to lead you to be intensely sceptical of such a claim). Furthermore, I’ve no reason to believe that the authors engaged in the sort of post hoc rationalizing, p-hacking, multiple comparison testing and selective publication responsible for much of the now widely recognized crisis of unreproducible “science”. But what I think I can demonstrate rather easily is that any inference about the cause of Whitlock’s ailment that is drawn from this data is fatally flawed.

Remember that business about sorting patients’ reasons for hospitalization not by ICD-9 disease codes but rather by “body systems”? A little rummaging around on the web turned up the “Level 1 of the Multi-level Clinical Classification Software” that does the sorting, along with a handy appendix. It turns out that the purpose of such sorting hasn’t anything to do with discovering the causes of diseases but rather everything to do with analyzing and predicting healthcare costs. That doesn’t mean it can’t be (somehow) used to discover the causes of illness, but it does make it an odd choice. You can find it at Healthcare Cost and Utilization Project – HCUP: A Federal-State-Industry Partnership in Health Data.

In any event, go to appendix C1 and scroll down until you get to body system 10 – Diseases of the genitourinary system. Body system 11 is Complications of pregnancy; childbirth and the puerperium. The list of procedures by body system can be found in Appendix D1. Operations on the urinary system are found in category 10 and operations on the female genital organs are category 12. These are the systems and categories of procedures to which the court was referring when it wrote that “… 62 of the plaintiffs had statistically significantly higher rates of genitourinary and reproductive illness and procedures …”.

The plaintiff’s argument then goes like this: A peer reviewed and published study has shown a statistically significant increased risk of being hospitalized between 1991 and 2012 for treatment of a genitourinary system problem among women with a Willits ZIP-code who were born between 1940 and 1989. I have a Willits ZIP-code, was born between 1940 and 1989 and had a genitourinary system ailment. Therefore my ailment was caused by living within the Willits ZIP-code. Somehow from there must come “and living in the Willits ZIP-code meant I was exposed to hexavalent chromium so hexavalent chromium caused my genitourinary problem!” Since there was no data collected on any of the patients to determine whether they were actually exposed to hexavalent chromium, lived downstream, upstream, worked in or ever drove past the Remco factory, the analytical gap between ZIP-code and hexavalent chromium exposure/dose would appear unbridgeable. But let’s assume it can be bridged, because the argument is still demonstrably absurd.

If you’ve looked through the list of conditions and operations you know what I mean when I write that it’s full of cross-examination gold. However, given the highly personal and sensitive nature of these subsets of body systems and procedures, I’ll use another category that was also statistically significantly elevated among patients with a Willits ZIP-code – Infectious and parasitic diseases (body system 1). In fact, Willits women had a slightly higher risk of infectious and parasitic diseases than of diseases of the genitourinary system. And the last clue you need to figure out what’s going on here is the discovery that Willits women were at a statistically significantly increased risk for all of the categories of body systems and procedures for almost all years.

So let’s take, in honor of the 110th anniversary of Robert Koch’s Nobel Prize in Physiology or Medicine for his discovery of Mycobacterium tuberculosis, 1.1.1 – Tuberculosis from body system 1 and plug it into a hypothetical plaintiff’s argument.

1) A peer reviewed and published study has shown a statistically significantly increased risk of being hospitalized for infectious and parasitic diseases among women with a Willits ZIP-code born any time between 1940 and 1989.

2) A woman with a Willits ZIP-code born between 1940 and 1989 has been afflicted by tuberculosis, a member of the set of infectious and parasitic diseases.

3) A Willits ZIP-code and exposure to hexavalent chromium are (somehow) the same thing

4) Therefore, hexavalent chromium exposure caused plaintiff’s tuberculosis (Koch’s postulates, M. tuberculosis and the Nobel Prize notwithstanding)

Hopefully I’ve made my first point.

My second point arises out of the sentence that launched this post. People don’t have rates of disease. They either get a disease or they don’t. Populations have rates of disease. And when you go from data about populations to inferences about individuals you commit a logical fallacy known as the ecological fallacy. The court’s reasoning is a perfect example of it.

And finally, my third point. If you’ve ever worked on one of these plant closure / toxic tort cases in a down and out county, you know why the people who lived near the plant have more hospitalizations and procedures. They disproportionately had the best jobs in the county, meaning more money and more access to health care. In other words, as is so often the case in these risk factor studies, the authors have probably pointed the arrow of causation in the wrong direction. Living in Willits didn’t cause poor health and hospitalizations. Not living in Willits meant disproportionately poor access to the health care dispensed by hospitals.
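The access-to-care story is easy to demonstrate with numbers. Here's a sketch using entirely hypothetical counts (my invention, chosen only to illustrate the mechanism): within each insurance stratum the Willits hospitalization rate is lower, yet the aggregate rate is higher simply because Willits residents disproportionately had insurance and therefore used hospitals more.

```python
# Hypothetical (hospitalized, total) counts by insurance status - not the study's data
willits = {"insured": (162, 900), "uninsured": (4, 100)}
roc     = {"insured": (40, 200),  "uninsured": (40, 800)}

def rate(cases_total):
    cases, total = cases_total
    return cases / total

# Within each stratum, Willits is actually the LOWER-risk group
for stratum in ("insured", "uninsured"):
    print(f"{stratum}: Willits {rate(willits[stratum]):.1%} vs ROC {rate(roc[stratum]):.1%}")

# Yet the aggregate comparison points the other way
agg_w = sum(c for c, _ in willits.values()) / sum(t for _, t in willits.values())
agg_r = sum(c for c, _ in roc.values()) / sum(t for _, t in roc.values())
print(f"aggregate: Willits {agg_w:.1%} vs ROC {agg_r:.1%}")
```

This reversal when a lurking variable is ignored is the classic Simpson's paradox, and it's exactly the sort of trap that ZIP-code-level comparisons invite.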

“It is indicative of a lack of understanding of the scientific method among many scientists”

Posted in Causality, Molecular Biology, Reason

For several years now we’ve been trying to spread the word to the legal community that a great many people who hold themselves out as scientists, including more than a few who’ve published papers in the most prestigious peer reviewed journals around, aren’t really doing science. They’re not coming up with hypotheses and testing them. Instead of avoiding that pitfall humans are particularly prone to falling into, the one whereby we become so enamored of our clever hypotheses that we simultaneously become blind to any holes and hostile to those who dare point them out, too many scientists are fooled by the ability of statistical analysis to readily generate spurious associations that, with a little bit of post hoc narrative editing, look just like causal associations.

The combination of vast amounts of data quickly sliced and diced by powerful modern computers plus multiple statistical methods which are poorly understood but easy to use has led to the current crisis in biomedical science whereby only a shockingly small fraction of "scientific discoveries" turn out to be true. The essence of the problem is well put by the quote that appears in the subject line of this blog post. It's from Donald Berry, a biostatistician at MD Anderson Cancer Center, who made it during a discussion of the issue at last January's meeting of the President's Council of Advisors on Science and Technology. You can watch that portion of the conference dealing with irreproducible science here; it'll take less than an hour of your time and is well worth it.

If you watch the webcast linked above you'll hear concerned scientists explaining that a lot of other well-meaning scientists fail to comprehend the scientific method, are fooled by statistical tools they don't understand, or both; and that more and better education is the answer. This idea, that with a little more of the right sort of education we'd get better science, assumes that nobody is trying to game the system. We're to assume for example that: (1) no one is hatching his hypothesis after the computer has found the inevitable statistically significant associations that arise from looking at any bucket of data from multiple perspectives (if you doubt that finding something statistically significant in any random batch of numbers is easy then spend 60 seconds on An Exact Fishy Test); (2) no one is p-hacking his way to confirmatory evidence for his favored hypothesis by turning random noise into seeming proof; (3) no one consciously uses a test that is biased in favor of validating his method; and, (4) no one is exploiting the decision-making heuristics of peer reviewers and editors to sneak bad science into leading journals. If the articles in this January's The Cancer Letter are any indication we shouldn't be too sure of such assumptions.
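The point about statistically significant associations lurking in any random batch of numbers can be demonstrated in a few lines. This is only a minimal sketch (not the Fishy Test itself): it generates pairs of variables that are unrelated by construction and counts how many clear the conventional p < 0.05 bar, which for n = 20 pairs corresponds to a Pearson |r| above roughly 0.444:

```python
import math
import random

random.seed(1)

# 1,000 "studies", each correlating two variables that are pure noise.
# With n = 20 observations, |r| > 0.444 is "significant at p < 0.05".
N_TESTS, N, CRIT = 1000, 20, 0.444

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, no libraries needed."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hits = 0
for _ in range(N_TESTS):
    xs = [random.gauss(0, 1) for _ in range(N)]
    ys = [random.gauss(0, 1) for _ in range(N)]  # unrelated by construction
    if abs(pearson_r(xs, ys)) > CRIT:
        hits += 1

# Roughly 5% of the pure-noise "studies" will look "significant".
print(f"{hits} of {N_TESTS} unrelated pairs look 'significant'")
```

Run it and about fifty of the thousand pure-noise comparisons will look like discoveries; run enough comparisons on one dataset and some "finding" is guaranteed.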

You need to read Duke Officials Silenced Med Student Who Reported Trouble in Anil Potti's Lab and Duke Scientist: I Hope NCI Doesn't Get Original Data (h/t Error Statistics) for several reasons. First, it's the story of a brave young man who risked his career by refusing to participate in, and attempting to expose, research practices that were shoddy at best and fraudulent at worst. Second, it's about how an article published in Nature Medicine went from revolutionary to retracted. Third, it details how an institution dedicated to education was willfully blind to the rot that had set in at one of its most prominent laboratories even after the rot was pointed out. Fourth, it reminds us that bad science isn't a victimless crime: desperate cancer patients endure worthless and time-robbing clinical trials as a result of it. Finally, the article reminds us of the power of our adversarial legal system and the good it can do by bringing truth to light. Though the Institute of Medicine had investigated, and Dr. Berry and others had pointed out the flaws in the since-retracted article, in the end everyone, perhaps out of a sense of collegiality, put the failings down to sloppy work, and it looked like the worst thing Potti was guilty of was resume inflation. But then came the lawyers for the patients. They uncovered the emails and audio recordings showing, if intent can be inferred from conduct, that the data dredging, cherry-picking and non-test testing used to construct Potti's revolutionary finding and to justify the clinical trials was done quite deliberately.

So enjoy the read, remember that bad science can be hard to spot, that provenance is no guarantee of good science, and maybe take a little pride in the fact that the tort system once again has helped to advance the cause of truth.

Chance Is Not A Thing: So you can’t lose it

Posted in Causality, Reason, The Law, Uncategorized

Conceptually the loss-of-a-chance doctrine recently reaffirmed in Rash v. Providence Health & Services appears to make sense. The typical facts in such cases include (1) a usually fatal disease (e.g. certain cancers); (2) that was diagnosed later than was possible with proper care (or that was treated less effectively than it might have been); and (3) limited chances of survival that decline further with each successive stage of the disease's progression. Not wanting to "provide a 'blanket' release from liability for doctors and hospitals any time there was less than a 50 percent chance of survival, regardless of how flagrant the negligence", yet unable to come up with a sound reason why a plaintiff ought to be able to recover for an act or omission which probably did not cause the course of her disease to be altered, some courts made the erosion of the chance of survival the harm rather than the subsequent death. With that the causation dilemma seemed to disappear, and a mechanism was created for disincentivizing (via the imposition of tort liability) the provision of anything less than optimal care, even to those unlikely to benefit from it. One problem with the approach is that chance, especially in this setting, is not a thing that can be lost. Another comes from encouraging doctors to treat probability distributions instead of people.

Chance is a word imbued with powerful meanings. Often wrapped up in it are ideas about fate, destiny, fairness and even justice. Take the case of a simple coin flip that settles controversies from who kicks off to who owns a $125,000 car. We may dispute the circumstances of the flip but never the outcome. Somehow, once in the air and spinning, fate, destiny, justice, karma or whatever hands down its unappealable judgment, which is promptly revealed for all to plainly see. This idea of chance as a proxy for justice (or perhaps as a ward against injustice) is a particularly old one. Consider Jonah 1:7:

Then the sailors said to each other, “Come, let us cast lots to find out who is responsible for this calamity.” They cast lots and the lot fell on Jonah.

Of course in the age of "Big Data" chance is supposed to be about the attempt to quantify our uncertainty. When we say "the odds are 50-50" what we're really saying is that we don't have access to any information that would lead us to believe that one side is more likely to come up than the other. In this sense chance may be considered a measure of our ignorance of the mechanisms and/or variables that determine which side comes up.

Now the fact that it's unappetizingly about uncertainty and ignorance wouldn't be a good reason not to compensate someone who lost a chance like the one depicted in the coin toss scene from "No Country for Old Men". That's one chance you wouldn't want to lose. Only in such a pure instance of chance can it become a thing you can lose; and that must be the concept of chance imagined by courts like the one that authored Rash. Unfortunately that's not at all the sort of chance we're talking about when we talk about the chance of surviving cancer.

Where do estimations of the chance of surviving cancer for five years come from? Obviously from other people and not the newly diagnosed. And did those other people all experience identical survival intervals? No. Even the graph of late-stage pancreatic cancer patients has a long tail of the very lucky few. Consequently, any estimation of the central tendency of those other people, usually the median but sometimes the average survival time, homogenizes the experience of all the patients and produces a mathematically "typical" patient with an experience unlike that of any individual patient. Whereas the gas station cashier in the coin toss scene had the opportunity to save his life by choosing "heads", to seize the opportunity presented by the graph of the survival experience of patients undergoing a new treatment the cancer patient would somehow have to be able to choose to be the "typical" patient; and that would mean being able to choose to have whatever currently unknown genetic and epigenetic makeup is responsible for the slightly improved "typical" survival time, which is impossible. You can't buy that chance, and neither can you lose it.
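The homogenizing effect of summary statistics is easy to see in a simulation. The sketch below uses an invented lognormal distribution of survival times, purely for illustration, to produce a long-tailed cohort whose median and mean describe almost no actual patient:

```python
import random
import statistics

random.seed(7)

# 500 hypothetical survival times in months; the lognormal shape is an
# assumption made only to produce the long right tail typical of
# survival curves.
months = sorted(random.lognormvariate(2.0, 0.9) for _ in range(500))

med = statistics.median(months)
avg = statistics.fmean(months)
print(f"median {med:.1f} mo, mean {avg:.1f} mo")
print(f"shortest {months[0]:.1f} mo, longest {months[-1]:.1f} mo")

# The long tail drags the mean well above the median, and only a small
# fraction of individual patients land anywhere near either summary.
near_median = sum(abs(m - med) < 0.5 for m in months)
print(f"{near_median} of 500 patients within two weeks of the median")
```

The "typical" patient the summary produces, whether median or mean, corresponds to essentially nobody in the cohort, which is the point: a patient cannot choose to be that patient.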

The remaining argument for the loss-of-a-chance doctrine is that disincentivizing doctors from providing anything other than the treatment with the longest “typical” survival time at the earliest possible date would save some unidentifiable lives and so produce a benefit to society as a whole. This is where we wade into the widening controversy swirling around the use of statistics, despite (or rather because of) ignorance of underlying mechanisms and variables, to determine treatment. On one side are those who hold the view that “it is obsolete for the doctor to approach each patient strictly as an individual; medical decisions should be made on the basis of what is best for the population as a whole“. The idea here is that if earlier or a newer treatment has shifted the survival curve in the direction of longer survival in a subset of people with the disease then earlier or newer treatment across all people with the disease will surely save lives.

On the other side are those who point out that the medical journals (and law books) are littered with examples of treatments which demonstrated a pattern of better outcomes in a small population but which showed no benefit or worse outcomes once they were widely prescribed. That many researchers, doctors and pharmaceutical companies “find some pattern in their data and they don’t even want to consider the possibility that it might not hold in the general population” is a well-known phenomenon.
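That phenomenon, sometimes called the winner's curse, can be sketched in a few lines of Python. In this invented example ten hypothetical treatments all confer exactly the same true benefit; the one that happens to look best across small trials nevertheless regresses toward the shared true benefit when re-tested at scale:

```python
import random
import statistics

random.seed(42)

# Ten treatments, all with the SAME true mean benefit (an assumption of
# this sketch), each trialed on only 15 noisy patients.
TRUE_BENEFIT = 1.0

def trial(n):
    """Mean observed benefit in a trial of n patients (noise sd = 2)."""
    return statistics.fmean(random.gauss(TRUE_BENEFIT, 2.0)
                            for _ in range(n))

small_results = [trial(15) for _ in range(10)]
winner_small = max(small_results)   # crown the small-sample champion
winner_large = trial(5000)          # re-test the "winner" at scale

print(f"best of 10 small trials: {winner_small:.2f}")
print(f"same treatment, large trial: {winner_large:.2f}")
# The champion's apparent edge was mostly selected noise; at scale its
# result falls back to the true benefit shared by every treatment.
```

Selecting the best-looking result from many small samples builds the noise into the estimate, which is why the pattern so often fails to hold once the treatment is widely prescribed.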

As for our take on the controversy all we can say is that until the underlying mechanisms of cancer are elucidated inferring treatment from statistics is pretty much all we’ve got … but often that ain’t sayin’ much. Hopefully in the not too distant future physicians will look back on our current era and shake their heads at the thought of the primitives who settled upon cancer treatment options essentially by casting lots. That being said, to anchor liability on the claim that the slight positive shift in the probability distribution calculated for a small sample of likely terminal patients (in turn premised on the dubious assumption that patients can be thought of as so many balls in a quincunx machine getting chemotherapy an infinite number of times) will also be seen in a much larger sample of completely different likely terminal patients seems more than just a bit of a stretch.

Consider also the following: if the loss of the (imagined) chance is the harm, why don't the people who lost the chance at the new treatment, but who responded to the old treatment anyway, have a claim? They lost a chance and that's a harm after all. And what would the damages be for the harm? They'd be the same as they would be for the person who lost a chance at a treatment that probably wouldn't have made a difference anyway, right? So why is it that some who are harmed have a claim while others who suffer the identical harm do not? Because the loss-of-a-chance doctrine is incoherent.

In Rash the appellate court ultimately affirmed the dismissal of plaintiff's claim because her expert couldn't quantify the chance she had lost. That's just another example of a court falling into the trap of believing that assigning numbers to things, even to things that are not things, makes them "scientific" so that, as here, damages may be "accurately" calculated. Yet that quantification rests on the assumption that doctors are able to sell, and patients are able to buy, the "typical" (mean or median) outcome of a treatment that actually yielded a wide range of outcomes, none of which were precisely "typical". And the illusion of accuracy created by multiplying the quantified chance of the "typical" patient from a small study by the value of someone else's life to determine her damages is just that: an illusion. But so it goes with the loss-of-a-chance doctrine.

However far science pushes back the shadows to reveal how the universe really works, chance retains its place as a somehow essential and inescapable aspect of our lives. Perhaps, as ably argued by a colleague recently when we were outlining this post, Garth Brooks nailed it when he sang “I’m glad I didn’t know, the way it all would end, the way it all would go. Our lives, are better left to chance, I could have missed the pain, but I’d have had to miss the dance”. Or maybe chance, once revealed as uncertainty, is actually the driving force behind mankind’s quest for truth. That’s my take. But whatever it is it’s not something you can buy at the doctor’s office.
