Mass Torts: State of the Art

Robust Misinterpretation of Confidence Intervals by Courts

Posted in Reason, The Law

“How are courts doing when it comes to interpreting the statistical data that goes into their decision-making?” That was a question posed by someone in the audience at a presentation I gave recently. I was discussing, among other things related to the perils of litigating statistical inferences, the recent paper “Robust Misinterpretation of Confidence Intervals.” It reports on the results of a study designed to determine how well researchers and students in a field that relies heavily on statistical inference actually understand their statistical tools. What it found was a widespread “gross misunderstanding” of those tools among both students and researchers. “[E]ven more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever.” So, returning to the very good question, how are our courts doing?

To find out I ran the simple search “confidence interval” across Google Scholar’s Case Law database with the date range set to “Since 2014”. The query returned 56 hits. Below are eight representative quotes taken from those orders, reports and opinions. Can you tell which ones are correct and which constitute a “gross misunderstanding”?

(A) “The school psychologist noted that there was a 95% confidence interval that plaintiff’s full scale IQ fell between 62 and 70 based on this testing.” And later: “A 90% confidence interval means that the investigator is 90% confident that the true estimate lies within the confidence interval.”

(B) “A 95% confidence interval means that there is a 95% chance that the “true” ratio value falls within the confidence interval range.”

(C) “Once we know the SEM (standard error of measurement) for a particular test and a particular test-taker, adding one SEM to and subtracting one SEM from the obtained score establishes an interval of scores known as the 66% confidence interval. See AAMR 10th ed. 57. That interval represents the range of scores within which “we are [66%] sure” that the “true” IQ falls. See Oxford Handbook of Child Psychological Assessment 291 (D. Saklofske, C. Reynolds, & V. Schwean eds. 2013).”

(D) “Dr. Baker applied his methodology to the available academic research and came up with a confidence interval based on that research. The fact that the confidence interval is high may be a reason for the jury to disagree with his approach, but it is not an indication that Dr. Baker did not apply his method reliably.”

(E) “A 95 percent confidence interval indicates that there is a 95 percent certainty that the true population mean is within the interval.”

(F) Statisticians typically calculate margin of error using a 95 percent confidence interval, which is the interval of values above and below the estimate within which one can be 95 percent certain of capturing the “true” result.

(G) “Two fundamental concepts used by epidemiologists and statisticians to maximize the likelihood that results are trustworthy are p-values, the mechanism for determining “statistical significance,” and confidence intervals; each of these mechanisms measures a different aspect of the trustworthiness of a statistical analysis. There is some controversy among epidemiologists and biostatisticians as to the relative usefulness of these two measures of trustworthiness, and disputes exist as to whether to trust p-values as much as one would value confidence interval calculations.”

(H) The significance of this data (referring to calculated confidence intervals) is that we can be confident, to a 95% degree of certainty, that the Latino candidate received at least three-quarters of the votes cast by Latino voters when the City Council seat was on the line in the general election.

Before I give you the answers (and thereafter some hopefully helpful insights into confidence intervals) I’ll give you the questionnaire given to the students and researchers in the study referenced above along with the answers. Thus armed you’ll be able to judge for yourself how our courts are doing.

Professor Bumbledorf conducts an experiment, analyzes the data, and reports:

The 95% confidence interval for the mean ranges from 0.1 to 0.4.

Please mark each of the statements below as “true” or “false”. False means that the statement does not follow logically from Bumbledorf’s result.

(1) The probability that the true mean is greater than 0 is at least 95%.

Correct Answer: False

(2) The probability that the true mean equals 0 is smaller than 5%.

Correct Answer: False

(3) The “null hypothesis” that the true mean equals zero is likely to be incorrect.

Correct Answer: False

(4) There is a 95% probability that the true mean lies between 0.1 and 0.4.

Correct Answer: False

(5) We can be 95% confident that the true mean lies between 0.1 and 0.4.

Correct Answer: False

(6) If we were to repeat the experiment over and over, then 95% of the time the true mean would fall between 0.1 and 0.4.

Correct Answer: False

Knowing that these statements are all false, it’s easy to see that statements (A), (B), (C), (E), (F), and (H) found in the various orders, reports and opinions are equally false. I included (D) and (G) as examples typical of those courts that were sharp enough to be wary about saying too much about what confidence intervals might be, but which fell into the same trap nonetheless. That trap is believing that confidence intervals have anything to say about whether the parameter being estimated (typically a mean or average – something like the average age of recently laid-off employees) is true or even likely to be true. (G), by the way, manages to get things doubly wrong. Not only does it repeat the false claim that estimates falling within the confidence interval are “trustworthy”, it also repeats the widely held but silly claim that confidence intervals are somehow more reliable than p-values. Confidence intervals, you see, are made out of p-values (see “Problems in Common Interpretations of Statistics in Scientific Articles, Expert Reports, and Testimony” by Greenland and Poole if you don’t believe me), so the argument (albeit unintentionally) being made in (G) is that p-values are more reliable than p-values. Perhaps unsurprisingly, of the 56 hits I found only two instances of courts not getting confidence intervals wrong, and in both cases they avoided any discussion of confidence intervals and instead merely referenced the section on the topic from the Reference Manual on Scientific Evidence, Third Edition.
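
If the Greenland and Poole point seems too cute, it’s easy to check numerically. Below is a minimal sketch (mine, not theirs, with made-up data) showing that a 95% confidence interval is exactly the set of candidate means a t-test fails to reject at p < 0.05 – an interval built entirely out of p-values:

```python
# A 95% confidence interval is the set of candidate means that a one-sample
# t-test would NOT reject at p < 0.05 -- i.e., it is built out of p-values.
# Illustrative sketch only; the data are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.25, scale=0.3, size=30)  # hypothetical measurements

# Invert the test: keep every candidate mean whose p-value is >= 0.05.
candidates = np.linspace(-0.5, 1.0, 3001)
kept = [m for m in candidates
        if stats.ttest_1samp(sample, popmean=m).pvalue >= 0.05]
print(f"interval from inverting the t-test: [{min(kept):.3f}, {max(kept):.3f}]")

# The textbook 95% t-interval matches (up to the grid's resolution).
lo, hi = stats.t.interval(0.95, df=len(sample) - 1,
                          loc=sample.mean(), scale=stats.sem(sample))
print(f"textbook 95% t-interval:            [{lo:.3f}, {hi:.3f}]")
```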

Why do courts (and students and researchers) have such a hard time with confidence intervals? Here’s a guess. I suspect that most people have a pretty profound respect for science. As a result, when they first encounter “the null hypothesis refutation racket” (please see Revenge of the Shoe Salesmen for details) they simply refuse to believe that a finding published in a major peer-reviewed scientific journal could possibly be the result of an inferential method that would shock a Tarot card reader. People are only just coming to realize what the editor of The Lancet wrote this spring: scientific journals are awash in “statistical fairy tales”.

Now there’s nothing inherently suspect about confidence intervals – trouble arises only when they’re put to purposes for which they were never intended and thereafter “grossly misunderstood”. To understand what a confidence interval is, you need to know the two basic assumptions on which it rests. The first is that you know something about how the world works. So, for example, if you’re trying to estimate the true ratio of black marbles to white marbles in a railcar full of black and white marbles, you know that everything in the railcar is a marble and that each is either black or white. Therefore whenever you take a sample of marbles you can be certain that what you’re looking at is reliably distinguishable and countable. The second is that you can sample your little corner of nature over and over again, forever, without altering it and without it changing on its own.

Without getting too deep into the weeds, those two assumptions alone ought to be enough to make you skeptical of claims about largely unexplained, extraordinarily complex processes like cancer or IQ or market fluctuations estimated from a single sample; especially when reliance on that estimate is urged “because it fell within the 95% confidence interval”. And here’s the kicker. Even in the black and white world of hypothetical marbles the confidence interval says nothing about whether the sample of marbles you took is representative of, or even likely to be representative of, the true ratio of black to white marbles. All it says is that if your assumptions are correct, and given the sample size you selected, then over the course of a vast number of samples your process for capturing the true ratio (using very big nets roughly two standard errors on either side of each estimate) will catch it 95% (or 99% or whatever percent you’d like) of the time. There is no way to know (especially after only one sample) whether or not the sample you just took captured the true ratio – it either did or it didn’t. Thus the confidence interval says nothing about whether you caught what you were after; rather, it speaks to the size of the net you were using to try to catch it.
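
The net-versus-fish distinction is easy to demonstrate by simulation. Here’s a minimal sketch of the railcar hypothetical (the true ratio and sample size are invented): over many samples the procedure’s nets catch the true ratio about 95% of the time, but no single interval can tell you whether it was one of the lucky ones:

```python
# Repeated sampling from the railcar of black and white marbles. The nets
# (estimate +/- ~2 standard errors) catch the fixed true ratio ~95% of the
# time over many samples; any ONE net either caught it or it didn't.
# Illustrative sketch; the true ratio and sample size are invented.
import numpy as np

rng = np.random.default_rng(42)
true_ratio = 0.30        # hypothetical true proportion of black marbles
n, trials = 100, 10_000  # marbles per sample, number of repeated samples

caught = 0
for _ in range(trials):
    p_hat = rng.binomial(n, true_ratio) / n          # one sample's estimate
    half_width = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)
    if p_hat - half_width <= true_ratio <= p_hat + half_width:
        caught += 1

print(f"nets that caught the true ratio: {caught / trials:.1%}")  # ~95%
```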

Thus my take on all the judicial confusion surrounding confidence intervals: they’re like judges at a fishing competition where contestants keep showing up with nets but no fish and demanding that their catches be weighed “scientifically” according to the unique characteristics of their nets and their personal beliefs about fish. Who wouldn’t be confused?

Texas: No More No No-Duty

Posted in Reason, The Law

Until last Friday an owner whose premises harbored some known or knowable danger could not avail itself of the argument that it had no duty either to warn invitees or to render its premises safe, even when the danger was “open and obvious” and even when the invitee was aware of it. The “no-duty” rule that had once meant “no money” for plaintiffs who slipped and fell in the very spills they were paid to clean up had been abolished years before, and Texas became a “no no-duty” state. The idea behind abolishing the rule was that Texas’ then-new comparative fault scheme, especially once coupled with a bar on recovery whenever plaintiff’s fault is greater than 50%, would sort things out fairly, and spare judges the bother of untangling knotty issues of duty in the bargain. The results were otherwise.

If a deliberating jury wants to keep turning pages until it gets to one with blanks for the money they want to give away, it’s not hard to figure out how to do it. They just have to solve this riddle: “Answer [the damages question] if you answered “Yes” for Danny Defendant to [the liability question] and answered: 1) “No” for Pauline Plaintiff to [the liability question], or 2) 50 percent or less for Pauline Plaintiff to [the percentage of causation question]”. Without the no-duty rule to screen out obviously meritless claims, juries began to return verdicts which, when compared to the underlying facts, were simply absurd. Compounding the problem, courts had to reconcile two things: the reasoning behind reversing a judgment for, say, a plaintiff who’d been shown a hole by a premises owner and then promptly stepped in it anyway, and a rule whose absence implies that a premises owner owed the plaintiff a duty irrespective of the hole’s obviousness or the plaintiff’s awareness of it. The effort produced a number of appellate opinions which were, to put it kindly, confusing.

With Austin v. Kroger the Texas Supreme Court has declared that the era of “no no-duty” is over. An owner still has a duty to maintain its premises in a reasonably safe condition, but that duty is discharged by warning of hidden dangers. Open and obvious dangers, and those of which an invitee is aware, don’t give rise to a duty to warn or to remediate in the first place. Two exceptions remain and both are quite limited. The first involves criminal activity by a third party and arises when the premises owner “should have anticipated that the harm would occur despite the invitee’s knowledge of the risk”. The second is the necessary-use exception: if an invitee is aware of a hazard and yet must cross it nonetheless, then a duty to lessen the attendant risk likely remains.

All in all it’s a very good opinion though I wish they’d spent a bit of time on the issue of why an open and obvious danger, or one of which an invitee is aware, cannot give rise to a duty, because it’s vitally important to understanding the no-duty rule.

Any system that adjudicates outcomes based on fault rests upon the idea that the parties being judged have agency – that they have both the faculty of reason and the ability to act according to their reason. When a party with agency is confronted with a known and avoidable danger the risk drops to zero so long as the party acts according to her reason, which is to say “reasonably”.

Since duty (in this context, at least) manifests only when a risk rises to a level at which a reasonable person would take action (i.e. warn or remediate) there can be no duty to act in a no risk (i.e. open and obvious) situation.

So (and at last I come to the point) what always bothered me about the no no-duty rule was that it essentially denied that individuals have agency. By refusing to assume that Texans are reasonable – the sort of people who, upon seeing a hole, decide to walk around it – the no no-duty rule denied that they had the faculty of reason and/or the ability to act upon it. Stranger still, the rule assumed that the typical defendant, a corporation, did have agency, which is why it got stuck with the duty. Thus corporations could be assumed to have agency but not so the state’s citizens. Ugh. Glad that chapter’s behind us.

I Dreamed of Genie

Posted in The Law

A month has passed since the Texas Supreme Court delivered its opinion in Genie Industries, Inc. v. Matak. It’s the most thorough explication of Texas products liability jurisprudence that I’ve read in a good while. Nevertheless I struggled to come up with a blog post because I couldn’t quite be sure what to make of it. Did it really, as it seemed upon first reading, finally set the risk component of Texas’ risk/utility analysis on a solidly objective foundation? Or was that just wishful thinking; an illusion produced by the inevitable discussion of foreseeability and prior incidents within the context of a case that turned on weighing risk against utility?

The opinion draws no bright lines. However, the following two sentences finally settled the question for me:

The undisputed evidence is that Genie has sold more than 100,000 AWP model lifts all over the world, which have been used millions of times. But the record does not reflect a single misuse as egregious as that in this case.

Immediately after those words the court summed up its reasoning and concluded that the Genie lift is not unreasonably dangerous. I read these tea leaves to mean the court believed that no reasonable person could conclude that a risk of death at the level of 1 in 1 million (or less) could outweigh a product’s demonstrated utility. If so it’s both sensible and a pretty big deal. Hard data can now trump a jury’s or judge’s subjective risk assessment.
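
The arithmetic behind that reading is worth making explicit. A back-of-the-envelope sketch (the court said only “millions of times”; the 2,000,000 figure below is my placeholder):

```python
# Worst-case risk per use implied by the Genie record: credit this one
# accident as the numerator and "millions of uses" as the denominator.
# 2,000,000 is a placeholder for the court's "millions of times."
uses = 2_000_000
worst_case_incidents = 1
print(f"worst-case risk per use: 1 in {uses // worst_case_incidents:,}")
```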

As noted above the opinion is a very nice summary of Texas product liability law, and below this paragraph I’ll set out a CliffsNotes version in bullet-point fashion. Before getting there, however, I want to touch upon the issue of misuse. The court spends some time talking about misuse but could have done a better job of saying where and exactly how it fits in a risk/utility scheme. The concept of misuse – understood as the likelihood of misuse (gauged by how obvious its consequences would be) multiplied by the gravity of the failure the misuse produces – is really just another dimension of risk. That means it ought to be dissolved back into the general risk construct rather than being precipitated out and given a different name (and causing confusion).
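
To make that dissolution concrete, here’s a toy sketch (every number in it is invented): treat misuse as just one more failure mode whose contribution to risk is its likelihood times its gravity, and weigh the total against utility:

```python
# Misuse folded back into the general risk construct: each failure mode
# contributes likelihood x gravity, and the total is weighed against utility.
# All numbers are invented for illustration.
failure_modes = {
    "ordinary malfunction": (1e-7, 0.2),  # (probability per use, gravity 0..1)
    "egregious misuse":     (5e-7, 1.0),  # obvious consequences, grave failure
}
risk = sum(p * gravity for p, gravity in failure_modes.values())
utility_per_use = 1e-3                    # normalized benefit, invented
print(f"risk per use: {risk:.1e}; unreasonably dangerous? {risk > utility_per_use}")
```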

Here are the takeaways:

To recover on a design defect product liability claim plaintiff must prove:

(1) the product was defectively designed so as to render it unreasonably dangerous;

(2) a safer alternative design existed; and

(3) the defect was a producing cause of the injury for which the plaintiff seeks recovery.

A product is unreasonably dangerous when its risk outweighs its utility.

A safer alternative design is one that would have prevented or significantly reduced the risk of the injury, would not substantially impair the product’s utility, and was economically and technologically feasible at the time.

When weighing risk against utility consider:

(1) the utility of the product to the user and to the public as a whole weighed against the gravity and likelihood of injury from its use;

(2) the availability of a substitute product which would meet the same need and not be unsafe or unreasonably expensive;

(3) the manufacturer’s ability to eliminate the unsafe character of the product without seriously impairing its usefulness or significantly increasing its costs;

(4) the user’s anticipated awareness of the dangers inherent in the product and their avoidability because of the general public knowledge of the obvious condition of the product, or of the existence of suitable warnings or instructions; and

(5) the expectations of the ordinary consumer.

So Much For Science

Posted in Reason

Wired ran an article last week titled “Science Says American Pharoah Won’t Win The Triple Crown“. It consisted of a detailed review of the science of horse racing: the energy demands, the metabolic hurdles a horse must overcome to refuel for the next race, the microscopic injuries produced by any great exertion, the time needed for bone to remodel to meet new demands, the impossible task of balancing treatments that speed recovery of one bodily system only to slow down recovery of another, and the unique challenge posed by Belmont’s long track (1.5 miles). In the concluding paragraph the author offered the doomed American Pharoah consolation: “It’s not your fault. It’s science and those pesky fresh horses.” If you click on the link (which contains the original title) you’ll see that experience has spoiled yet another good theory, and in doing so caused a new title to take the place of the old: “Update: Whoa! American Pharoah Beats Science to Win the Triple Crown“.

The point of this post is not to mock Lexi Pandell, who authored the piece. She is to be commended for having a sort of courage conspicuously absent in most of the expert witnesses I encounter. Specifically, she laid out her theory, the data that led her to it, and then made a testable prediction by which her theory could be judged. That her theory failed the test is no cause for shame – the (vast) majority of all theories meet the same fate.

Rather, the point of this post is to remind you that until it is tested, a clever argument – though painstakingly built fact by fact and expertly cemented together with analysis so that the gaps between them are all fully and solidly filled – remains just that: a clever argument. One hundred accurate predictions add only modestly to its strength, yet a single failed prediction causes it to collapse. That is the essence of science.

And so the only flaw I found in Ms. Pandell’s piece is that she, like too many courts, mistook the impressive argument built out of studies and rhetoric for science. Science wasn’t the clever argument; science was the race.

Revenge of the Shoe Salesmen

Posted in Causality, Epidemiology, Reason

By 1990 Paul E. Meehl had had enough. He’d had enough of lazy scientists polluting the literature with studies purporting to confirm fashionable theories that in fact couldn’t even be tested; enough of cynical scientists exploiting the tendency of low-power statistical significance tests to produce false positive results just so they could churn out more of the same; and enough of too many PhD candidates, eager to get in on what Meehl called “the null hypothesis refutation racket”, who were unashamedly ignorant of the workings of the very mathematical tools they hoped to use to further muddy the waters with their own “intellectual pollution.” He called on them to give up their “scientifically feckless” enterprise and to take up honest work, something more suited to their talents – selling shoes perhaps. The shoe salesmen, as we now know, would not give up so easily.

Rather than a tiresome diatribe in a meaningless war of words among academics, what Meehl wrote in Why Summaries Of Research On Psychological Theories Are Often Uninterpretable is one of the best explanations you’ll ever read of what went wrong with science and why. And if you’re curious about whether he (posthumously) won the argument, the results are now coming in. Of the first 100 important discoveries in the field of psychology tested to see if they are in fact reproducible (all of which “discoveries”, by the way, were peer reviewed and published in prominent journals), only 39 passed the test. The obvious conclusion is that the literature has indeed been thoroughly polluted.

Meehl demonstrated that any time tests of statistical significance are used to test hypotheses involving complex systems where everything is correlated with everything, as in the psyche and the body, a weak hypothesis (which is to say one that is merely a suspicion, not built upon rigorously tested theories about underlying mechanisms) carries an unacceptably high risk of producing a false positive result. This problem is not limited to psychology. It is estimated to arise in the biomedical sciences just as often.
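
A few lines of simulation show how Meehl’s point plays out. In the sketch below (the background correlation and sample size are assumptions of mine) every variable is given a trivial correlation with every other – a “crud factor” of 0.1, with no theory behind it – and a significance test duly “confirms” the researcher’s hunch far more often than the advertised 5%:

```python
# Meehl's "crud factor": where everything correlates a little with everything,
# significance tests "confirm" theory-free hunches at far above the 5% rate.
# Sketch; the crud level (r = 0.1) and sample size are my assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
crud, n, studies = 0.1, 500, 2_000

false_confirmations = 0
for _ in range(studies):
    shared = rng.normal(size=n)  # background factor tying x and y together
    x = np.sqrt(crud) * shared + np.sqrt(1 - crud) * rng.normal(size=n)
    y = np.sqrt(crud) * shared + np.sqrt(1 - crud) * rng.normal(size=n)
    r, p = stats.pearsonr(x, y)
    if p < 0.05:                 # "statistically significant" -- hunch "confirmed"
        false_confirmations += 1

print(f"'significant' results with no theory at all: {false_confirmations / studies:.0%}")
```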

Fortunately, those who in the past funded the “null hypothesis refutation racket” have begun to take notice, and action. The National Children’s Study, which recently got the axe from the NIH (following a review by the NAS), is the most notable example thus far. Criticized for years as being short on robust hypotheses and long on collecting vast amounts of data on environmental exposures and physical, behavioral and intellectual outcomes, the study was finally judged “unlikely to achieve the goals of providing meaningful insight into the mechanisms through which environmental factors influence health and development.” That the study would have found all sorts of statistically significant correlations between environment and outcomes was a given. That none could reliably be said to be causal was the problem.

The shoe salesmen turned scientists had a good run of it. Uncounted billions in grant money went into research founded on nothing more than the ability of computers to find correlations among random numbers and of humans to weave those correlations into a plausible explanation. Scientists in the right fields and also blessed with earnestness or at least the skills of an advocate really got lucky. They became expert witnesses. But now, frustrated with research that never seems to go anywhere and alarmed that good research is being obscured by bad, funders are directing their money towards basic research. And it’s a target rich environment. Take for example the remarkable discovery that an otherwise harmless amoeba can for purposes known only to itself resuscitate a moribund Listeria monocytogenes, let it grow and multiply within itself and then release the bacteria into what was previously thought to be an L. monocytogenes-free environment.

Alas, such research is hard and its chances of success, unlike significance testing’s, are wholly unpredictable. It looks like the shoe salesmen’s luck has run out. That one of their last redoubts has turned out to be the courthouse is perhaps the most remarkable development of all.

Tracing Listeria Through Time

Posted in Uncategorized

The aspect of the recent Listeria monocytogenes outbreak that is likely to have the biggest impact on pathogen transmission litigation going forward is the ability to identify victims who acquired the infection years before the outbreak was finally recognized and the source identified. In recent years some state health departments have begun preserving samples from patients diagnosed with certain infectious diseases. Thanks to those samples, and now armed with the ability to use PFGE to “fingerprint” bacteria, the CDC has realized that though the ice cream contamination wasn’t suspected until January of this year, and wasn’t confirmed until last month, people have been getting sick from it since as far back as 2010.

L. monocytogenes infections are acknowledged to be far more widespread than what’s reflected in the CDC outbreak statistics. Most cases produce nothing more than short-term, mild, flu-like symptoms and go undetected, as patients rarely get to a physician before they’re feeling better, so diagnostic tests aren’t even run. In the very old, the very young and the immune-compromised, however, it can produce a systemic or invasive infection with a significant mortality rate. It is these cases, assuming the infection is detected (it’s estimated that at least 50% of all such cases are accurately diagnosed), that get the attention of state health departments and the CDC. The silent tragedies are the miscarriages and stillbirths caused by L. monocytogenes. An expectant mother can acquire the infection and experience nothing other than the vaguest sense of being under the weather while the bacteria launches an all-out attack on her child. The cause of those deaths regularly goes undetected. This whole thing renders jokes about husbands being sent out on late night runs for pickles and ice cream soberingly unfunny.

From the legal perspective the creation of databases of the genetic fingerprints of pathogens will obviously increase the number of plaintiffs in the future, as more silent outbreaks are discovered and previously unknown victims from the past are identified. It will also create some interesting legal issues. Take for instance Texas’ two-year statute of limitations in wrongful death cases. There’s no discovery rule to toll the claim, in large part because death is an easily appreciated clue that something has gone wrong, and those who suffered the loss have two whole years to figure out the cause. Here, though, the ability to discover the cause didn’t exist in 2010. But of course if we start to draw the line somewhere else, the debate over where is quickly overrun by the horrid thought of people digging up Granny, who died of listeriosis back in 1988, to see if the genetic fingerprint of her killer matches one from a growing list of suspects. And let’s not forget about L. monocytogenes’ aiders and abettors: the other types of bacteria with whom it conspired to form the biofilm that protected it from the disinfectants used to clean the food processing equipment. The promiscuous bugs likely acquired their special skill thanks to horizontal gene transfer, not just among their own phyla but from any passing bug with a helpful bit of code (like one that protects against the chemical agents, scrubbing and high-pressure sprays used to disinfect food processing equipment) – and none of them were spontaneously generated at the ice cream factory. They all came from somewhere else.

Ultimately what makes claims arising out of the transmission of pathogens so different from other mass torts is that there is none of the usual causal uncertainty because, for example, the only cause of the 2010 patient’s listeriosis was Listeria monocytogenes that came from a particular flavor of ice cream that came from a particular plant. So what makes today’s news important is not that science can now answer “where did it come from and how many were infected?” but rather that science now asks in reply “how far back do you want to go?”

Discretizations

Posted in Epidemiology, Microbiology, Molecular Biology

Listeria monocytogenes is estimated to cause the loss of 8,800 years of life annually in the U.S. due to poor health, disability and premature death

Fighting L. monocytogenes in refrigerated milk products with viral drones

A method for relatively rapid (a few hours) and accurate (97%+) detection of L. monocytogenes in milk

And another

L. monocytogenes is a problem for seafood too thanks to biofilms

A coating to make food-handling surfaces resistant to bacterial adhesion and biofilm formation

A tool (PFGE) for tracing L. monocytogenes from food processing equipment to food

PFGE, the gold standard for tracing L. monocytogenes

Dubious About Bringing Scientific Peer Review to Scientific Evidence

Posted in Reason, The Law

Bloomberg’s Toxics Law Reporter recently published a paper by Professor David L. Faigman titled “Bringing Scientific Peer Review to Scientific Evidence” that sets out an idea worth thinking about: that the quality of scientific testimony presented to juries would be improved (and presumably the likelihood that justice is done would be increased) if the job of screening proposed testimony were shifted to, or at least augmented by, experts in the relevant field. Surely that would be better than the current system, in which the task falls solely to someone who’s often sitting where he or she is for the very reason that science and math weren’t his or her thing. However, my objection goes to the premise lurking within the idea – that the courtroom is a proper venue for publishing, testing and ultimately deciding the scientific status of a hypothesis.

Name some established scientific theories that were first published in a courtroom. How about an analytical technique developed for and refined through the adversarial process at the courthouse that has made its way into widespread use beyond the bar? Cue the crickets. On the other hand, if asked for examples of theories and techniques deemed “scientific” at the courthouse yet found to be embarrassingly unscientific and the cause of widespread injustice, you can start with the National Academy of Sciences’ scathing 2009 report on forensic science and update it with the weekend’s news that the FBI now admits that its hair analysis, admitted in hundreds of trials, was almost always “scientifically invalid”. Courtroom-produced science has a long and dismal record, yet courts continue solemnly to admit into evidence opinions of experts that would be laughed at if presented in a venue geared toward the scientific enterprise.

Rather than speculate (again) about the cause of the problem, I’d like to remind anyone who’s reading that there’s a simple solution, and it doesn’t involve peer review (besides, let’s not forget there’s a reason why most peer reviewers do worse than coin flipping when it comes to separating the scientific wheat from the chaff). The solution is to admit only those theories whose predictions have been borne out by observation. That’s it. I propose treating scientific evidence like any other evidence.

Imagine a case involving a car wreck at an intersection with a traffic signal and a witness who will testify as to having seen Defendant enter the intersection against a red light. No difficult admissibility issue here. Now imagine the same facts except the witness didn’t actually observe the accident but nevertheless is prepared to testify that Defendant looks like the sort of person who’d run a red light and so probably did. Again, there’d be no angst over the (in)admissibility of such testimony. But when an expert converts “looks like the sort of person who’d run a red light” into the jargon of psychological science and converts “probably did” into something like “assuming independence among the variables I would estimate that there is less than a 1 in 2,000,000 chance that Defendant did not run the light” judges too often allow what would otherwise be seen as obvious baloney to be admitted as filet mignon. Yet all a judge need do to avoid the mistake is to first ask “have you or anyone else actually observed Defendant run a red light?”
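
It’s worth seeing how an expert gets to a number like 1 in 2,000,000: multiply a handful of base rates as if they were independent. A toy illustration (every trait and rate below is invented):

```python
# How "assuming independence among the variables" manufactures tiny odds:
# multiply base rates as if the traits had nothing to do with one another.
# Every trait and rate here is invented for illustration.
traits = {
    "looks like an aggressive driver": 0.10,
    "prior moving violation":          0.20,
    "running late that morning":       0.25,
    "seen speeding earlier":           0.10,
}
p = 1.0
for rate in traits.values():
    p *= rate  # valid ONLY if the traits are independent -- they aren't

print(f"claimed odds: about 1 in {round(1 / p):,}")  # already 1 in 2,000
# Stack on a few more correlated traits and you reach 1 in 2,000,000. Since
# such traits travel together, the honest probability is far larger.
```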

We tend to lose sight of the fact that science and math are practical endeavors – the purpose of which is to make accurate predictions about questions like “where will this cannonball land if fired at an angle of elevation of 45 degrees and with a muzzle velocity of 1,054 ft/sec?” Practicality being paramount, conjectures about ballistics or any other aspect of nature are always tested the same way – subsequent observations are compared to predictions and whichever theory comes closest wins. There’s no trophy or ribbon for second place.
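
The cannonball question has a concrete, checkable answer, which is the whole point. Here’s the idealized no-drag calculation (real exterior ballistics adds air resistance, so treat this as the textbook first cut):

```python
# Idealized (no air resistance) range for the cannonball in the text.
# A real trajectory needs a drag model; this is the textbook first cut.
import math

v = 1054.0                 # muzzle velocity, ft/s (from the text)
theta = math.radians(45)   # angle of elevation (from the text)
g = 32.174                 # gravitational acceleration, ft/s^2

range_ft = v**2 * math.sin(2 * theta) / g
print(f"predicted range: {range_ft:,.0f} ft (~{range_ft / 5280:.1f} miles)")
# Fire the gun, measure where the ball lands, and the observation -- not the
# elegance of the formula -- decides whether the theory survives.
```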

All “science” really is, is a formalization of the way we’ve learned to understand the world: trial and error. It’s the way babies learn how to make Mom come running and it’s how toddlers learn to walk. Eventually we come to know how cars travel, how intersections are laid out, how things are perceived at a distance and how a certain range of frequencies in the visible light spectrum means a traffic signal is red. Get it? We’re all scientists.

Now should the expert reply that the individual pieces that form the foundation of his theory have been tested and that ought to be enough remind him that that is no test at all of his theory that Defendant is a red light runner. Just as no judge would put her family on a type of airplane that has never been tested, irrespective of the evidence for the airworthiness of its component parts, the fact that the pieces of evidence that generate a hypothesis are sound doesn’t mean the hunch, however clever, will be borne out. One of the essential elements of the scientific revolution was the recognition that Nature is stranger than we can imagine; so much so that her secrets cannot yield even to the keenest mind precisely because we know so little about how the world works. Thus it was that originally through tragedy and investigation and later through extensive pre-maiden flight testing we learned that an airplane is more than the sum of its parts and that it has its own unique resonance vibration characteristics which if not recognized and controlled for can lead to disaster.

My argument then is as follows: if the prediction entailed by an expert’s opinion has never been observed outside the courtroom it is not evidence, because evidence is simply that which has been observed. If the response is that the harm alleged by plaintiff is in fact the observation that proves the predictive power of the expert’s hypothesis your reply should be that it is only evidence of our ability to string cherry picked pieces of the past together into a narrative to explain some other thing that lies in the past and so beyond the reach of science. Real science means placing a bet, on your reputation if nothing else, on a forward looking experiment the results of which you do not know in advance. If on the other hand the opinion follows from some hypothesis that has indeed been tested (and I’ll save for some other day a discussion about why low power studies of small effects don’t constitute tests) then it ought to be admitted.

We’re seeing fewer and fewer cases tried, and the enormous costs associated with expert witnesses, and all the battles we have to fight over them just to get to the courthouse, play a very big part in it. Costs associated with experts increasingly force cases to settle, and cause others that might have merit never to be filed in the first place. Something needs to be done about it, and it seems to me that the simple rule of requiring scientific evidence to be made out of the same thing as any other evidence – an observation that confirms the predictive power of the theory – would go a long way toward untangling the current mess, and would at the same time have the salutary effect of putting the “evidence” back into “scientific evidence”.

Discretizations

Posted in Microbiology, Reason, Risk, The Law

The Supreme Court of Connecticut adopts the duty-triggering rule “if an outcome made possible by the act is foreseeable then any outcome made possible by the act is foreseeable” – and in the process proves that it doesn’t understand physics. (The case involved a child dropping an 18 lb piece of cinder block from the third floor of a building onto the head of another child standing below. If you remember that a falling object arrives with kinetic energy proportional to the height of its fall – and delivers a correspondingly greater force when it stops – you’ll understand that none of the what-ifs proposed by the court as potential risks from the cinder block resting on the ground or in the hands of a child generates anything close to the energy involved in this scenario; a quick calculation following this list makes the point. There’s a reason fall protection is required for workers on the third level of scaffolding but not for those standing on the ground.)

From endoscopes to ultrasound probes here’s an excellent overview of potential reservoirs of deadly pathogens and a discussion of the ability of some bacteria to escape detection by entering a state of suspended animation known as “viable but non-culturable”

Pouring salt into wounds – your body does it to battle infections and this article does the same thing to those suddenly shy public health advocates who once demanded that you go on a low salt diet

The evidence that screening healthy people for diseases like cancer saves no lives is approaching the point of being overwhelming. In the same journal be sure to read the additional commentary here, here and here.

We need to do more to fight urinary catheter infections
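
Returning to the first item in the list: a back-of-the-envelope energy comparison (the drop heights are my assumptions) shows why the court’s what-ifs about the block on the ground or in a child’s hands don’t remotely compare to a third-floor drop:

```python
# Energy at impact for the 18 lb block is proportional to the height of the
# fall; the force delivered scales with this energy divided by the short
# stopping distance. Drop heights below are my assumptions.
weight_lb = 18.0
scenarios = {
    "dropped from the third floor": 24.0,  # ft, assumed ~3 stories
    "slipped from a child's hands":  1.0,  # ft, assumed
}
for label, height_ft in scenarios.items():
    print(f"{label}: {weight_lb * height_ft:,.0f} ft-lb")
# third floor: 432 ft-lb vs. hands: 18 ft-lb -- more than an order of magnitude.
```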


A Seat Belt Case Illustrates Why Risk Beats Causation as an Explanation of Tort Liability

Posted in Reason, Risk, The Law

The Texas Supreme Court has finally done away with the prohibition on seat-belt evidence in auto accident cases. See Nabors Well Services, Ltd. et al. v. Romero, et al. I recall thinking in law school just how odd it was that a defendant couldn’t introduce evidence of plaintiff’s failure to wear his seat belt. After all, the elements of negligence are duty, breach, causation and damages. In a hypothetical case where, “but for” plaintiff’s failure to wear his seat belt, he probably wouldn’t have hit the windshield and suffered the head injury for which he brought suit, surely he breached the duty he owed himself by willfully refusing to use a readily available device that would greatly reduce the risk of smashing his head on the windshield. But that wasn’t the law.

Moving one link up the causal chain, the court four decades earlier had reasoned that liability hinged upon the immediate cause of the cars’ collision rather than the cause of plaintiff’s injury. The thought was that if defendant’s car hadn’t struck plaintiff’s, then the plaintiff wouldn’t have hit his head, seat belt or no. But you’re not entitled to a recovery, irrespective of defendant’s reckless driving, just because you’ve been in an accident. You have to show that defendant caused you to suffer damages. So the real question is what caused plaintiff’s damages, and here the liability inquiry ought to encompass all those acts or omissions that bear upon the question of why plaintiff hit his head on the windshield.

Another question is why start the causal inquiry either at the immediate cause of the collision or at the immediate cause of plaintiff’s head hitting the windshield? What if someone had mistakenly called plaintiff to tell him he needed to come back downtown to work when he otherwise wouldn’t have? What if someone else had been texting at a red light, causing the plaintiff in the car behind him to miss the light, thereby 10 minutes later bringing plaintiff into the path of defendant’s vehicle (which of course wouldn’t have been there when plaintiff passed that point had he made the first light 11 minutes earlier)? Why doesn’t legal causation extend to these other “but for” causal links? Nobody has a good answer, because there isn’t one when courts try to base legal causation on “but for” causation alone. All they can do is pick a link or two in the causal chain and pretend not to notice, or worse yet try to explain away, all the other “but for” causal links in it.

If on the other hand you decide that risk is the better determinant of legal causation, you get not only a coherent explanation of tort liability, you also get better public policy. Imagine a simple rule like: everyone who creates a substantial risk has a duty to minimize it. Driving is one of the riskiest things you can do, accounting annually for tens of thousands of deaths. As the court demonstrates with National Highway Traffic Safety Administration data, you can dramatically decrease that risk by wearing your seat belt. On the other hand, the risk of head trauma posed by a single commute or a missed green light ranges from negligible to infinitesimal. Thus basing fault on the creation of, or failure to minimize, a substantial risk rather neatly (it seems to me) sorts all the causes in the chain leading to plaintiff’s injury into those to which liability may attach and those to which it cannot, while simultaneously disincentivizing conduct that creates a substantial risk and incentivizing conduct that abates one.

All in all it’s a good opinion (hooray for the Palsgraf mention!) but it wasn’t the “risk as the unifying theory of negligence jurisprudence” opinion for which I’ve been waiting.
