Legal scholars continue to push inference to the best explanation as the form of reasoning our rules of evidence are designed to serve. Just remember that when inference to the best explanation is applied to probably false but nevertheless admissible risk factor studies, it turns them into verities (which is why plaintiffs have been pushing it).
The idea (known as the epidemiological transition) that infectious diseases had been or soon would be conquered, and that chronic and degenerative diseases, often if not mostly the result of man’s vices and industry’s alleged toxins, would be the primary cause of human mortality, has got to rank among the worst ideas of the last 50 years. Coupled with null-hypothesis statistical significance testing (and its propensity for generating false positives in risk factor epidemiology studies), it was the bad idea that launched crusade after crusade against everything from eggs to fat to salt to electricity to vaccines to cell phones. Meanwhile, today’s news that varicella zoster, the virus that causes chicken pox and shingles, was found in 74% of those who died of giant cell arteritis but in only 8% of those who died of other conditions strongly suggests that our ancient predators were anything but conquered.
Herpes zoster is increasingly being implicated in cerebrovascular disease, and cerebrovascular disease in turn in dementia. And there’s direct evidence that herpes zoster encephalitis produces dementia. Could it be that a common virus, another member of the herpesvirus family, is responsible for that terrible scourge of an aging population – Alzheimer’s? (See: Intracerebral propagation of Alzheimer’s disease: strengthening evidence of a herpes simplex virus etiology). It’s too early to tell, of course, but at least they’re looking, and so far there are a number of indications that an infectious process lies at the heart of this degenerative process (see Moving Away from Amyloid Beta to Move on in Alzheimer’s Research, just published in Frontiers in Aging Neuroscience). It’s a shame it took so long to look. hat tip – LKD
“… 62 of the plaintiffs … had statistically significantly higher rates of genitourinary and reproductive illness and procedures compared to the rest of the county.”
That’s from Whitlock v. Pepsi Americas, a hexavalent chromium case, and it was part of the reasoning that went into the court’s decision to grant plaintiff leave to supplement her expert report based on this “new scientific information.” I’ll explain just why the reasoning is deeply flawed shortly, but first I’ll answer the question of why you should care. If the sort of risk factor epidemiology on which the court rests its opinion is really science, and if the sort of data dredging that went into the study from which the “new scientific information” was inferred is really the scientific method, then anything can always be shown scientifically to cause everything and Daubert has been finally and thoroughly eviscerated.
Whitlock’s underlying facts are typical of those mass tort cases that follow the closing of the factory that was a sparsely populated county’s largest employer. Toxins are identified and the lawyers file suit on behalf of dozens or hundreds of clients with conditions that might be associated with exposure. Here approximately 1,000 toxic tort cases blamed pollution from Remco Hydraulics, Inc.’s Willits, CA manufacturing plant for a host of ailments. Those cases have spawned numerous interesting orders and opinions, Whitlock being only the most recent.
The district court had previously found the proposed exposure and causation testimony of Plaintiff’s experts to be unreliable and accordingly granted summary judgment in favor of Defendants, but the U.S. Court of Appeals for the Ninth Circuit, in an unpublished opinion, held that the trial court had abused its discretion; and plaintiff was back in business. Meanwhile, a study to identify possible risk factors associated with living in Willits was being updated, but the results arrived after Plaintiff’s deadline to amend her experts’ reports. This iteration of Whitlock thus turned on the court’s determination that the study update, which it deemed “newly discovered evidence in support of her claims”, constituted good cause for amending her experts’ reports.
The study, Longitudinal analysis of health outcomes after exposure to toxics, Willits California, 1991-2012: application of the cohort-period (cross-sequential) design, looks at the incidence of groups of ailments and/or procedures, defined by “body system”, noted at the time of discharge for every patient discharged between 1991 and 2012, sorted by the decade in which each patient was born (’40s, ’50s, ’60s, ’70s or ’80s). The rate of each grouping, for each decade of birth, for patients with a residential address containing the Willits ZIP-code is then compared to the corresponding rate for patients who lived in the same county but didn’t have a Willits ZIP-code (a/k/a ROC, or “rest of the county”), thereby generating a relative risk. The authors also calculated the relative risk (Willits ZIP vs. ROC) for hospital admissions, discharges and days spent in the hospital.
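The relative-risk arithmetic at work here is just division. A minimal sketch follows; every count in it is invented for illustration and comes from nowhere in the Willits study:

```python
# Hypothetical illustration of the relative-risk calculation; all counts
# below are invented for the example, not taken from the Willits study.

def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Ratio of incidence in the exposed group (here, the Willits ZIP)
    to incidence in the comparison group (the rest of the county)."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Say 120 discharges of a given "body system" grouping among 4,000
# Willits residents versus 600 among 30,000 ROC residents (made up):
rr = relative_risk(120, 4000, 600, 30000)
print(round(rr, 2))  # 1.5 -- the Willits rate is 1.5x the ROC rate
```

Note that nothing in the arithmetic knows anything about hexavalent chromium; the "exposure" is nothing more than a ZIP-code.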
Willits men and women, sorted this way, were more likely to be hospitalized and to have spent more time in the hospital than non-Willits ZIP-code residents of the same county. And as for “body systems”, Willits women were “at increased risk for all measures” whereas men “were at increased risk for all measures except genitourinary system diagnoses and procedures, and gender based procedures and cancer.” The authors conclude from their study that the people of Willits were at increased risk of “poor health”, that the burden on the community is “incalculable”, and that the cost to the public is “enormous.” If you think those are reasonable inferences given this data you’re about 20 years late to the scientific community’s realization that risk factor epidemiology isn’t science, generates more false leads than promising hypotheses, and is easily exploited.
In 1994 the late Petr Skrabanek wrote The Emptiness of the Black Box. It wasn’t the first journal article to call BS on risk factor epidemiology but it was the best; and coming from a leading epidemiologist and public health advocate it was also the most powerful of its time. Seven years later, reflecting on the fact that risk factor epidemiology had not only failed to uncover the cause of “a disease which showed an epidemic rise in industrialized countries” but had falsely indicted certain exposures, thereby impeding attempts at prevention and cure, the new editors of The International Journal of Epidemiology wrote Epidemiology – is it time to call it a day? In it they discuss the failures, the lack of rigour in the discipline and the already obvious decline in the use of risk factor epidemiology to identify causes of health problems in groups of people. Over the last ten years (as we’ve chronicled repeatedly) the status of risk factor epidemiology has only fallen further. Imagine the money wasted, hopes dashed and time lost in the largely fruitless search for reliable markers of cancer prognosis despite the fact (or actually because of the fact) that Almost All Articles on Cancer Prognostic Markers Report Statistically Significant Results.
Now, the fact that most risk factors identified by risk factor epidemiology turn out to be false does not necessarily lead to the conclusion that having a Willits ZIP-code and having been born sometime between 1940 and 1989 doesn’t put you at greater risk of being hospitalized some time between 1991 and 2012 (though it ought to make you intensely sceptical of such a claim). Furthermore, I’ve no reason to believe that the authors engaged in the sort of post hoc rationalizing, p-hacking, multiple comparison testing and selective publication responsible for much of the now widely recognized crisis of unreproducible “science”. But what I think I can demonstrate rather easily is that any inference about the cause of Whitlock’s ailment that is drawn from this data is fatally flawed.
Remember that business about sorting patients’ reasons for hospitalization not by ICD-9 disease codes but rather by “body systems”? A little rummaging around on the web turned up the “Level 1 of the Multi-level Clinical Classification Software” that does the sorting, along with a handy appendix. It turns out that the purpose of such sorting hasn’t anything to do with discovering the causes of diseases but rather everything to do with analyzing and predicting healthcare costs. That doesn’t mean it can’t be (somehow) used to discover the causes of illness, but it does make it an odd choice. You can find it at Healthcare Cost and Utilization Project – HCUP: A Federal-State-Industry Partnership in Health Data.
In any event, go to Appendix C1 and scroll down until you get to body system 10 – Diseases of the genitourinary system. Body system 11 is Complications of pregnancy; childbirth and the puerperium. The list of procedures by body system can be found in Appendix D1. Operations on the urinary system are found in category 10 and operations on the female genital organs in category 12. These are the systems and categories of procedures to which the court was referring when it wrote that “… 62 of the plaintiffs had statistically significantly higher rates of genitourinary and reproductive illness and procedures …”.
The plaintiff’s argument then goes like this: A peer reviewed and published study has shown a statistically significant increased risk of being hospitalized between 1991 and 2012 for treatment of a genitourinary system problem among women with a Willits ZIP-code who were born between 1940 and 1989. I have a Willits ZIP-code, was born between 1940 and 1989 and had a genitourinary system ailment. Therefore my ailment was caused by living within the Willits ZIP-code. Somehow from there must come “and living in the Willits ZIP-code meant I was exposed to hexavalent chromium, so hexavalent chromium caused my genitourinary problem!” Since no data was collected on any of the patients to determine whether they were actually exposed to hexavalent chromium, lived downstream, upstream, worked in or ever drove past the Remco factory, the analytical gap between ZIP-code and hexavalent chromium exposure/dose would appear unbridgeable. But let’s assume it can be bridged, because the argument is still demonstrably absurd.
If you’ve looked through the list of conditions and operations you know what I mean when I write that it’s full of cross-examination gold. However, given the highly personal and sensitive nature of these subsets of body systems and procedures, I’ll use another category that was also statistically significantly elevated among patients with a Willits ZIP-code – Infectious and parasitic diseases (which is body system 1). In fact, Willits women had a slightly higher risk of infectious and parasitic diseases than of diseases of the genitourinary system. And the last clue you need to figure out what’s going on here is the discovery that Willits women were at a statistically significantly increased risk for all of the categories of body systems and procedures for almost all years.
So let’s take, in honor of the 110th anniversary of Robert Koch’s Nobel Prize in Physiology or Medicine for his discovery of Mycobacterium tuberculosis, 1.1.1 – Tuberculosis from body system 1 and plug it into a hypothetical plaintiff’s argument.
1) A peer reviewed and published study has shown a statistically significant increased risk of being hospitalized for infectious and parasitic diseases among women with a Willits ZIP-code born any time between 1940 and 1989.
2) A woman with a Willits ZIP-code born between 1940 and 1989 has been afflicted by tuberculosis, a member of the set of infectious and parasitic diseases.
3) A Willits ZIP-code and exposure to hexavalent chromium are (somehow) the same thing.
4) Therefore, hexavalent chromium exposure caused plaintiff’s tuberculosis (Koch’s postulates, M. tuberculosis and the Nobel Prize notwithstanding).
Hopefully I’ve made my first point.
My second arises out of the sentence that launched this post. People don’t have rates of disease. They either get a disease or they don’t. Populations have rates of disease. And when you go from data about populations to inferences about individuals you commit a logical fallacy known as the ecological fallacy. The court’s reasoning is a perfect example of it.
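The fallacy is easy to see with toy numbers. In the sketch below (every count invented for illustration), regions with more “exposure” have more disease, yet within every region an exposed individual runs exactly the same risk as an unexposed one; the group-level pattern tells you nothing about any individual:

```python
# Toy demonstration of the ecological fallacy (all counts invented).
# Each region: (n_exposed, exposed_cases, n_unexposed, unexposed_cases)
regions = [
    (900, 270, 100, 30),   # region A: 30% disease rate in BOTH groups
    (500, 100, 500, 100),  # region B: 20% in both
    (100, 10, 900, 90),    # region C: 10% in both
]

for n_exp, c_exp, n_unexp, c_unexp in regions:
    exposure_prev = n_exp / (n_exp + n_unexp)
    disease_rate = (c_exp + c_unexp) / (n_exp + n_unexp)
    # Group level: the more exposed the region, the more disease...
    print(f"exposure {exposure_prev:.0%} -> disease rate {disease_rate:.0%}")
    # ...yet the individual-level relative risk is 1.0 in every region:
    rr = (c_exp / n_exp) / (c_unexp / n_unexp)
    assert rr == 1.0
```

Looking only at regional rates, exposure and disease march in lockstep (90% exposed, 30% sick; 10% exposed, 10% sick); looking at individuals, exposure does nothing. Inferring the individual from the aggregate is exactly the move the court made.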
And finally, my third point. If you’ve ever worked on one of these plant closure / toxic tort cases in a down and out county you know why the people who lived near the plant have more hospitalizations and procedures. They disproportionately had the best jobs in the county, meaning more money and more access to health care. In other words, as is so often the case in these risk factor studies, the authors have probably pointed the arrow of causation in the wrong direction. Living in Willits didn’t cause poor health and hospitalizations. Not living in Willits meant disproportionately poor access to the health care dispensed by hospitals.
For several years now we’ve been trying to spread the word to the legal community that a great many people who hold themselves out as scientists, including more than a few who’ve published papers in the most prestigious peer reviewed journals around, aren’t really doing science. They’re not coming up with hypotheses and testing them. Instead of avoiding that pitfall to which humans are particularly prone, the one whereby we become so enamored of our clever hypotheses that we simultaneously become blind to any holes and hostile to those who dare point them out, too many scientists are fooled by the ability of statistical analysis to readily generate spurious associations that, with a little bit of post hoc narrative editing, look just like causal associations.
The combination of vast amounts of data quickly sliced and diced by powerful modern computers plus multiple statistical methods which are poorly understood but easy to use has led to the current crisis in bio-medical science whereby only a shockingly small fraction of “scientific discoveries” turn out to be true. The essence of the problem is well put by the quote that appears in the subject line of this blog post. It’s from Donald Berry, a biostatistician at MD Anderson Cancer Center, and he made it during a discussion of the issue at last January’s meeting of the President’s Council of Advisors on Science and Technology. You can watch that portion of the conference dealing with irreproducible science here; it’ll take less than an hour of your time and is well worth it.
If you watch the webcast linked above you’ll hear concerned scientists explaining that a lot of other well-meaning scientists fail to comprehend the scientific method, are fooled by statistical tools they don’t understand, or both; and that more and better education is the answer. This idea, that with a little more of the right sort of education we’d get better science, assumes that nobody is trying to game the system. We’re to assume for example that: (1) no one is hatching his hypothesis after the computer has found the inevitable statistically significant associations that arise from looking at any bucket of data from multiple perspectives (if you doubt that finding something statistically significant in any random batch of numbers is easy then spend 60 seconds on An Exact Fishy Test); (2) no one is p-hacking his way to confirmatory evidence for his favored hypothesis by turning random noise into seeming proof; (3) no one consciously uses a test that is biased in favor of validating his method; and, (4) no one is exploiting the decision-making heuristics of peer reviewers and editors to sneak bad science into leading journals. If the articles in this January’s The Cancer Letter are any indication, we shouldn’t be too sure of such assumptions.
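How easily does pure noise cross the p < 0.05 line? The simulation below (mine, for illustration; nothing from the studies discussed) draws both groups from the identical 10% base rate and runs a textbook two-proportion z-test on each of 200 comparisons. By construction, every “significant” hit is a false positive:

```python
import math
import random

random.seed(0)

def two_prop_p(c1, n1, c2, n2):
    """Two-sided p-value for a difference in proportions
    (standard normal-approximation z-test)."""
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    # P(|Z| > z) for a standard normal Z
    return math.erfc(abs(p1 - p2) / se / math.sqrt(2))

# Both groups drawn from the SAME 10% base rate, so any nominally
# "significant" comparison below is, by construction, a false positive.
trials, false_positives = 200, 0
for _ in range(trials):
    c1 = sum(random.random() < 0.10 for _ in range(1000))
    c2 = sum(random.random() < 0.10 for _ in range(1000))
    if two_prop_p(c1, 1000, c2, 1000) < 0.05:
        false_positives += 1
print(false_positives, "of", trials, "comparisons were 'significant'")
```

Run enough comparisons and roughly one in twenty will come up “significant” on average, despite there being nothing whatsoever to find.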
You need to read Duke Officials Silenced Med Student Who Reported Trouble in Anil Potti’s Lab and Duke Scientist: I Hope NCI Doesn’t Get Original Data (h/t Error Statistics) for several reasons. First, it’s the story of a brave young man who risked his career by refusing to participate in and attempting to expose research practices that were shoddy at best and fraudulent at worst. Second, it’s about how an article published in Nature Medicine went from revolutionary to retracted. Third, it details how an institution dedicated to education was willfully blind to the rot that had set in at one of its most prominent laboratories even after the rot was pointed out. Fourth, it reminds us that bad science isn’t a victimless crime – that desperate cancer patients endure worthless and time-robbing clinical trials as a result of it. Finally, the article reminds us of the power of our adversarial legal system and the good it can do by bringing truth to light. Though the Institute of Medicine had investigated, and Dr. Berry and others had pointed out the flaws in the since-retracted article, in the end everyone, perhaps out of a sense of collegiality, put the failings down to sloppy work, and it looked like the worst thing Potti was guilty of was résumé inflation. But then came the lawyers for the patients. They uncovered the emails and audio recordings showing, if intent can be inferred from conduct, that the data dredging, cherry-picking and non-test testing used to construct Potti’s revolutionary finding and to justify the clinical trials was done quite deliberately.
So enjoy the read, remember that bad science can be hard to spot, that provenance is no guarantee of good science, and maybe take a little pride in the fact that the tort system once again has helped to advance the cause of truth.
Conceptually the loss-of-a-chance doctrine recently reaffirmed in Rash v. Providence Health & Services appears to make sense. The typical facts in such cases include (1) a usually fatal disease (e.g. certain cancers); (2) that was diagnosed later than was possible with proper care (or that a less effective treatment was used); and where (3) the limited chances of survival decline further with each successive stage of the disease’s progression. Not wanting to “provide a ‘blanket’ release from liability for doctors and hospitals any time there was less than a 50 percent chance of survival, regardless of how flagrant the negligence“, yet unable to come up with a sound reason why a plaintiff ought to be able to recover for an act or omission which probably did not cause the course of her disease to be altered, some courts made the erosion of the chance of survival the harm rather than the subsequent death. With that, the causation dilemma seemed to disappear. Meanwhile, a mechanism for disincentivizing (via the imposition of tort liability) the provision of anything less than optimal care, even to those unlikely to benefit from it, is created. One problem with the approach is that chance, especially in this setting, is not a thing that can be lost. Another comes from encouraging doctors to treat probability distributions instead of people.
Chance is a word imbued with powerful meanings. Often wrapped up in it are ideas about fate, destiny, fairness and even justice. Take the case of a simple coin flip that settles controversies from who kicks off to who owns a $125,000 car. We may dispute the circumstances of the flip but never the outcome. Somehow, once in the air and spinning, fate, destiny, justice, karma or whatever hands down its unappealable judgment which is promptly revealed for all to plainly see. This idea of chance as a proxy for justice (or perhaps as a ward against injustice) is a particularly old one. Consider Jonah 1:7 :
Then the sailors said to each other, “Come, let us cast lots to find out who is responsible for this calamity.” They cast lots and the lot fell on Jonah.
Of course in the age of “Big Data” chance is supposed to be about the attempt to quantify our uncertainty. When we say “the odds are 50-50” what we’re really saying is that we don’t have access to any information that would lead us to believe that one side is more likely to come up than the other. In this sense chance may be considered a measure of our ignorance of the mechanisms and/or variables that determine which side comes up.
Now the fact that it’s unappetizingly about uncertainty and ignorance wouldn’t be a good reason not to compensate someone who lost a chance like the one depicted in the coin toss scene from “No Country for Old Men”. There’s one chance you wouldn’t want to lose. Only in such a pure instance of chance can it become a thing you can lose; and that must be the concept of chance imagined by courts like the one that authored Rash. Unfortunately that’s not at all the sort of chance we’re talking about when we talk about the chance of surviving cancer.
Where do estimations of the chance of surviving cancer for five years come from? Obviously from other people and not the newly diagnosed. And did those other people all experience identical survival intervals? No. Even the graph of late stage pancreatic cancer patients has a long tail of the very lucky few. Consequently, any estimation of the central tendency of those other people, usually the median but sometimes the average survival time, homogenizes the experience of all the patients and produces a mathematically “typical” patient with an experience unlike any of the individual patients. Whereas the gas station cashier in the coin toss scene had the opportunity to save his life by choosing “heads”, to seize the opportunity presented by the graph of the survival experience of patients undergoing a new treatment the cancer patient would somehow have to be able to choose to be the “typical” patient; and that would mean being able to choose to have whatever currently unknown genetic and epigenetic makeup is responsible for the slightly improved “typical” survival time – which is impossible. You can’t buy that chance, and neither can you lose it.
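A small simulation makes the homogenization point concrete. The long-tailed survival times below are invented (a lognormal draw, chosen only because it has the right skewed shape), not real patient data:

```python
import random
import statistics

random.seed(42)

# Invented long-tailed survival times (in months) for a hypothetical
# cohort: most patients die early, a lucky few survive far longer.
survival = sorted(random.lognormvariate(2.0, 0.8) for _ in range(500))

median = statistics.median(survival)
print(f"'typical' (median) survival: {median:.1f} months")
print(f"shortest: {survival[0]:.1f}  longest: {survival[-1]:.1f} months")

# A treatment that nudges the median upward a month says nothing about
# which individual patient, with her particular genetic and epigenetic
# makeup, would actually have been the beneficiary.
```

The “typical” patient summarized by the median sits somewhere in a distribution no individual patient actually experienced, which is the trouble with treating the median as a chance any one patient could buy or lose.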
The remaining argument for the loss-of-a-chance doctrine is that disincentivizing doctors from providing anything other than the treatment with the longest “typical” survival time at the earliest possible date would save some unidentifiable lives and so produce a benefit to society as a whole. This is where we wade into the widening controversy swirling around the use of statistics, despite (or rather because of) ignorance of underlying mechanisms and variables, to determine treatment. On one side are those who hold the view that “it is obsolete for the doctor to approach each patient strictly as an individual; medical decisions should be made on the basis of what is best for the population as a whole“. The idea here is that if earlier or a newer treatment has shifted the survival curve in the direction of longer survival in a subset of people with the disease then earlier or newer treatment across all people with the disease will surely save lives.
On the other side are those who point out that the medical journals (and law books) are littered with examples of treatments which demonstrated a pattern of better outcomes in a small population but which showed no benefit or worse outcomes once they were widely prescribed. That many researchers, doctors and pharmaceutical companies “find some pattern in their data and they don’t even want to consider the possibility that it might not hold in the general population” is a well-known phenomenon.
As for our take on the controversy all we can say is that until the underlying mechanisms of cancer are elucidated inferring treatment from statistics is pretty much all we’ve got … but often that ain’t sayin’ much. Hopefully in the not too distant future physicians will look back on our current era and shake their heads at the thought of the primitives who settled upon cancer treatment options essentially by casting lots. That being said, to anchor liability on the claim that the slight positive shift in the probability distribution calculated for a small sample of likely terminal patients (in turn premised on the dubious assumption that patients can be thought of as so many balls in a quincunx machine getting chemotherapy an infinite number of times) will also be seen in a much larger sample of completely different likely terminal patients seems more than just a bit of a stretch.
Consider also the following: if the loss of the (imagined) chance is the harm, why don’t the people who lost the chance at the new treatment, but who responded to the old treatment anyway, have a claim? They lost a chance and that’s a harm after all. And what would the damages be for the harm? They’d be the same as they would be for the person who lost a chance at a treatment that probably wouldn’t have made a difference anyway, right? So why is it that some who are harmed have a claim while others who suffer the identical harm do not? Because the loss-of-a-chance doctrine is incoherent.
In Rash the appellate court ultimately affirmed the dismissal of plaintiff’s claim because her expert couldn’t quantify the chance she had lost. That’s just another example of a court falling into the trap of believing that assigning numbers to things, even to things that are not things, makes them “scientific” so that, as here, damages may be “accurately” calculated. Yet quantification is vital to the assumption that doctors are able to sell, and patients are able to buy, the “typical” (mean or median) outcome of a treatment that actually yielded a wide range of outcomes, none of which were precisely “typical”. And the illusion of accuracy created by multiplying the quantified chance of the “typical” patient from a small study by the value of someone else’s life to determine her damages is just that – an illusion. But so it goes with the loss-of-a-chance doctrine.
However far science pushes back the shadows to reveal how the universe really works, chance retains its place as a somehow essential and inescapable aspect of our lives. Perhaps, as ably argued by a colleague recently when we were outlining this post, Garth Brooks nailed it when he sang “I’m glad I didn’t know, the way it all would end, the way it all would go. Our lives, are better left to chance, I could have missed the pain, but I’d have had to miss the dance”. Or maybe chance, once revealed as uncertainty, is actually the driving force behind mankind’s quest for truth. That’s my take. But whatever it is it’s not something you can buy at the doctor’s office.
A colleague asked me yesterday what I thought about a story she’d seen in the media regarding a virus often found in algae. Supposedly it can impair human cognition. I told her that as a matter of fact I’d been working up a brief blog post on the topic because of its implications for mass tort litigation, gave her the short version and promised to finish the post and send it along once I’d finished working on a presentation. Now I have so here it is.
A couple of weeks ago "Chlorovirus ATCV-1 is Part of the Human Oropharyngeal Virome and is Associated With Changes in Cognitive Functions in Humans and Mice" was published in Proceedings of the National Academy of Sciences. It’s an excellent example of good science revealing just how strange our world really is, just how little we really know, and how poor our guesses about the causes of human disease can be.
I’ll get to the strangeness and the upset applecart of prior beliefs shortly but first it’s important to point out that this is a classic case of scientific discovery. It’s a direct observation of something never seen before followed by an experiment designed to test the implication of its discovery. It is not a case of finding a correlation between one event and another that seemingly followed it and thereafter creating a model/explanation of how the observed association might be causal. That’s just generating a hypothesis. Nor is it the usual null hypothesis testing of small effects that drives most modern toxic tort litigation. That’s just pretending to be surprised when a system that has been slightly perturbed turns out to be slightly perturbed.
Like most modern discoveries this one depended on a new method of observation and curious scientists deploying that new method to peer about the world around them. The new method (methods really but I’ll cram them all into one category as they’re evolving rapidly and besides it shortens the post) involves rapidly (and cheaply) sequencing all of the genetic material in a sample, comparing it to libraries of microorganisms, human cancer cells (see e.g. The Cancer Cell Line Encyclopedia ), etc. and thereby "seeing" the biological diversity present in the sample. The name for it is metagenomics.
Gene sequencing isn’t new of course but what is new is the ability to sequence the unknown unknowns in a sample and to do so rapidly and (relatively) inexpensively. In the past you’d take a sample, culture whatever was present in a petri dish (multiple techniques here too but anyway …) and then sequence whatever was in each of the clumps of clones you’d cultured. The problem is that not every microbe is culturable. In some cases the right medium has yet to be found. In other cases the beastie is dead – an invisible casualty of an unseen battle between the immune system and an invader. Either way we were blind to much of what was going on.
In this study scientists took throat swabs from people participating in an investigation of cognitive ability (among other things) to see what they could see with their new method. Upon comparing the genetic data they found to the libraries available they were surprised to find a match with a virus that infects algae; one that had not previously been identified as being infectious in humans. They then looked to see if there were differences between those with and without the virus and found a marked decrease in cognitive ability among those infected. To test the hypothesis that springs so readily from such a correlation they then ran an experiment on lab animals. Sure enough, cognitive ability including memory declined in those mice inoculated with the virus. But that wasn’t the end of it. They next looked to see what genetic changes the virus had wrought and it turned out that it altered the expression of genes that have to do with cognition. Not quite proof but pretty compelling.
As the use of this new way of seeing, metagenomics, has expanded so have the number of discoveries. Have a look at Elevated Levels of Circulating DNA in Cardiovascular Disease Patients: Metagenomic Profiling of Microbiome in the Circulation and Pathogenic Microbes, the Microbiome, and Alzheimer’s Disease (AD) to see what may await.
And along with the discovery of new causes come ways to track them back up the causal chain (at least until a deep pocket is found). See for example Seeking the Source of Pseudomonas aeruginosa Infections In a Recently Opened Hospital: An Observational Study Using Whole-Genome Sequencing.
One question raised is what duty is owed to those of your workers and customers who might come in contact with what was thought to be a harmless green algae occasionally infected with a virus nobody suspected until now to have the ability to fiddle with your brain’s fine tuning. Another is whether you ought to have your cognitively impaired plaintiff who blames the landfill tested for Chlorovirus. Or how about whether there ought to be a REACh for food (since the companies will come to have, thanks to the use of metagenomic techniques to ensure food safety, vast libraries of the presumably harmless microbes and their ever evolving offspring that ride to market in and on their products)? And what about record-keeping requirements because even helpful bugs sometimes go bad? Is it foreseeable that an innocent bug shipped far from home and thrown in with a strange crowd might pick up a plasmid coded for virulence?
If scientific discoveries raise more questions than they answer then the intersection of the law and new discoveries ought to keep us busy.
Korea’s Thyroid-Cancer "Epidemic" – Screening and Overdiagnosis. The short version is as follows: the ability to screen for thyroid cancer led to an explosion (an almost unbelievable 15-fold increase) in its diagnosis. In fact, it’s now the most common cancer in South Korea. Nevertheless, the mortality rate hasn’t budged. That means the only thing screening managed to accomplish was thousands of needless biopsies and surgeries, along with their occasional complications, including death.
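The arithmetic behind that conclusion is worth making explicit. A minimal sketch, using invented round numbers rather than the Korean study’s actual figures: if screening multiplies diagnoses but deaths don’t move, the excess diagnoses are, to a first approximation, overdiagnosis.

```python
# Hypothetical illustration of overdiagnosis arithmetic.
# All numbers are invented for the sketch, not taken from the Korean study.

def overdiagnosed_fraction(baseline_cases: float, screened_cases: float,
                           baseline_deaths: float, screened_deaths: float) -> float:
    """Fraction of screen-era diagnoses that are excess (overdiagnosed) cases,
    under the simplifying assumption that screening left mortality unchanged."""
    if screened_deaths != baseline_deaths:
        raise ValueError("sketch assumes mortality is unchanged by screening")
    excess = screened_cases - baseline_cases
    return excess / screened_cases

# A 15-fold rise in diagnoses with flat mortality implies roughly 93% of
# screen-era diagnoses are excess cases.
print(overdiagnosed_fraction(100, 1500, 5, 5))  # 0.9333...
```

The point is structural, not numerical: whatever the true baseline, a 15-fold jump in diagnoses with no change in deaths means the overwhelming majority of the new diagnoses could never have killed anyone.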
There’s an excellent Op-Ed piece by one of the paper’s authors in The New York Times that lays it all out but this line really caught my attention:
Too many epidemiologists concern themselves not with controlling infectious disease, but with hoping to find small health effects of environmental exposures – or worse, uncertain effects of minor genetic alterations. Perhaps they should instead monitor the more important risk to human health: epidemics of medical care.
Will this finding, which clearly echoes the results of similar studies on the impact of mass screenings for prostate cancer and breast cancer on their respective diagnosis and mortality rates, finally dampen the enthusiasm for screening? Not if the new screening recommendation for type 2 diabetes, and the public’s response to prior news about the impact of mass screenings, is any indication.
The U.S. Preventive Services Task Force (USPSTF) now recommends blood glucose screening for adults at increased risk of type 2 diabetes. Among those many risk factors are being aged 45 or older, having a BMI > 25 and having a close relative with type 2 diabetes. That’s not exactly everybody, but it’s certainly a lot of bodies. So how many lives will be saved? Given that this is a Grade B recommendation, and that the USPSTF’s review of the evidence led it to conclude that first-line therapy for diabetes prevention (aggressive lifestyle modification) currently results in a lower incidence of diabetes, cardiovascular mortality and overall mortality, you’d assume the answer would be "many". However, "the task force found inadequate direct evidence that measuring blood glucose leads to improvements in mortality or cardiovascular morbidity."
That doesn’t sound very promising. So why make such a recommendation? Who knows? Given the enormous political pressure recently brought to bear on the USPSTF (especially after its recommendation against screening mammography in women under 50) it would be easy to slide into cynicism and speculate about the potential motives of those (other than the undiscovered incipient diabetics) who stand to benefit from this new recommendation (remember: under the ACA, if the USPSTF recommends it, it gets paid for). Instead, since this is a mass torts blog, I’ll speculate about why, the new thyroid-screening study notwithstanding, most folks will enthusiastically line up for another screening opportunity.
Despite widespread coverage and laudatory editorials in leading newspapers, both left-leaning and right, the evidence that mammography screening in those at negligible risk did more harm than good apparently had little effect. No Fall In Mammogram Rates After USPSTF Recommendations was the conclusion drawn from surveys of the attitudes of almost 28,000 women about screening. The same was true for elderly men and prostate cancer screening (though it must be noted that urologists and middle-aged men have demonstrated significant reductions in rates of screening / being screened). So why do so many people go to the bother of having a diagnostic test that is sure to be an annoyance and almost as sure to do no good? I propose it’s because of one of the greatest decision drivers of all, and one that effective plaintiff lawyers use to devastating effect: fear of regret.
Think about the power of fear of regret in the context of the reluctance of parents exposed to stories about autism and vaccines to vaccinate their children. They’re weighing the chance of autism times its cost against the chance of irreversible brain damage from measles times its cost. Should they decide that the value of the former outweighs the value of the latter, it can only be because, given the grossly unequal risks, they’ve added something quite heavy to the scale holding the cost of their children developing autism in order to bring it even with the cost of their children being brain damaged by measles. What could it be? I suspect that it’s fear of regret / fear of becoming blameworthy.
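The weighing described above can be written down as a toy expected-cost comparison. Every probability, cost, and multiplier below is invented purely to show the structure of the decision; the point is that the plain arithmetic favors vaccinating, and only a large regret weight on self-inflicted outcomes flips it.

```python
# Toy model of the parents' decision. All numbers are hypothetical.
# Expected cost of vaccinating = P(autism | vaccinate) * cost(autism)
# (granting, for the sake of the model only, the claimed link).
# Expected cost of declining = P(measles brain damage) * cost(damage).
p_autism, cost_autism = 1e-6, 10.0          # hypothetical, tiny risk
p_measles_damage, cost_damage = 1e-4, 10.0  # hypothetical, far likelier

ev_vaccinate = p_autism * cost_autism
ev_decline = p_measles_damage * cost_damage
print(ev_vaccinate < ev_decline)  # True: plain arithmetic favors vaccinating

# To make declining look "rational" the parents must add a regret weight:
# a penalty for having actively caused harm rather than passively allowed it.
regret_weight = 2000.0  # hypothetical multiplier on self-inflicted outcomes
print(ev_vaccinate * regret_weight > ev_decline)  # True once regret is priced in
```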
In looking back across some of the most effective jury arguments I’ve seen (as measured by the damages subsequently awarded), it’s those taking the form of "If you don’t stop this company, one day you’ll learn of another death and that victim’s blood will be on your hands" that work best. It’s the argument that appeals not to sympathy but rather instills the social fear of shame that does the most work. Be en garde.
Kip Viscusi has just published a very good paper about the GM ignition switch recall controversy. In it he argues that blockbuster jury awards in cases where corporations had made design and recall decisions based on cold economic calculations influenced GM’s decision not to do the same sort of risk analysis on its defective ignition switch problem. The result was a delayed recall and lost lives. Viscusi further argues against the admissibility of potentially inflammatory risk analysis evidence on the basis that when such analyses fairly (and highly) value human life they actually promote consumer welfare and so ought not be discouraged. The paper is "Pricing Lives for Corporate Risk Decisions" and can be found here.
GM isn’t the only company to have forsworn risk/benefit assessment. After a presentation I gave to the in-house counsel of an energy company on recent trends among the courts regarding causation and risk, in which I discussed how not to do risk/benefit analyses, the company’s GC commented, "we don’t do them (risk/benefit assessments) at all and we routinely counsel our business units not to engage in the practice." The sentiment is surely understandable given the usual media reaction to anything that sounds remotely like weighing profits and people on the same scales; and Viscusi’s paper lends further support to it with empirical evidence demonstrating that people, given identical facts about a defect and subsequent harm, tend to award significantly larger punitive damages against those defendants that examined the risks and benefits before they went to market than against those that either did nothing at all or consciously chose not to see how the numbers added up.
How then do companies that eschew risk/benefit analysis deal with risk? Sometimes by clinging to the belief that regulatory compliance renders their product or activity risk-free and sometimes by ignoring it altogether. Unfortunately, in either case it’s just whistling past the graveyard.
Remember that, generally speaking, a wrong is an action taken without proper regard for others that caused some harm. Since every action entails risk (because every action entails change and every change entails risk, assuming the actor isn’t omniscient), any reasonable person who acts must either (a) be ignorant of the risk; (b) suspect a risk but choose to leave things to the Fates; or (c) estimate that the benefits outweigh the risks. Only the diligent, then, those who have undertaken (c), have actually made an attempt at calculating the "proper regard for others" owed given the risk inevitably created.
Now of course it could be the case that a diligent actor is also a nefarious one who has grossly undervalued human life and/or calculated that the chances his perfidy will be uncovered are slight, but such a case is the very one for which punitive damages were designed. However, if the studies cited by Viscusi can be relied upon, then even the honest risk assessor (c) who highly values human life will get hammered harder by a jury than the ignorant (a) or the careless (b) manufacturer. Thus the perverse result: in an age when knowledge is ever more accessible, when questions about the likelihood and cost of harm posed by a particular design incorporated into millions of vehicles are readily calculable, and when a sound risk analysis can send a signal that saves lives, we have a legal system that threatens enormous costs to any manufacturer who dares to look up the answers.
Viscusi’s solution is essentially to set the value of a human life (he suggests $9.1 million) at a level which, if used by a company in its risk/benefit analysis and if guided truly thereafter by the answer, would mean that the company was not putting its profits before people. Accordingly such an analysis could not be offered into evidence by the Plaintiff to demonstrate knowledge, callousness, etc. though it could be offered by the Defendant, at its option, to show proper regard. It’s a good idea and soundly reasoned. Read the whole thing as it’s well worth your time.
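To see how such an analysis would work in practice, here is a sketch of the classic recall calculation using Viscusi’s suggested $9.1 million value of a statistical life. The failure probability, fleet size, and per-unit fix cost are invented for illustration; only the $9.1 million figure comes from the paper.

```python
VSL = 9.1e6  # Viscusi's suggested value of a statistical life (USD)

def recall_warranted(p_fatal_failure: float, units: int,
                     fix_cost_per_unit: float) -> bool:
    """Recall when expected lives lost, priced at the VSL, exceed the fix cost.
    A company guided truly by this answer is, on Viscusi's account, not
    putting profits before people."""
    expected_harm = p_fatal_failure * units * VSL
    total_fix_cost = fix_cost_per_unit * units
    return expected_harm > total_fix_cost

# Hypothetical: a 1-in-100,000 fatal failure rate across 2 million vehicles
# versus a $5 fix. Expected harm = 20 lives * $9.1M = $182M vs. a $10M fix.
print(recall_warranted(1e-5, 2_000_000, 5.0))  # True: recall
```

Note the design of the rule: because the VSL is set high and fixed in advance, a manufacturer that runs this calculation and follows it cannot fairly be accused of discounting human life, which is precisely why Viscusi would shield the analysis from plaintiff’s use while letting the defendant offer it.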
While some may imagine that scientific hypotheses are the product of highly educated people with brilliant minds drawing straightforward inferences from compelling evidence, the fact remains that all scientific hypotheses are nothing more than guesses; and as every middle schooler taught the scientific method knows, even the best-pedigreed hypotheses are usually false. On the other hand, sometimes it’s the hypothesis with the most dubious provenance that gets promoted to the status of scientific theory (i.e. one that has survived rigorous testing and is powerfully explanatory), as in the case of benzene’s structure:
I was sitting writing at my textbook but the work did not progress; my thoughts were elsewhere. I turned my chair to the fire and dozed. Again the atoms were gambolling before my eyes. This time the smaller groups kept modestly in the background. My mental eye, rendered more acute by the repeated visions of the kind, could now distinguish larger structures of manifold conformation: long rows, sometimes more closely fitted together, all twining and twisting in snake-like motion. But look! What was that? One of the snakes had seized hold of its own tail, and the form whirled mockingly before my eyes. (August Kekulé, recalling the daydream that led him to benzene’s ring structure.)
Because a hypothesis is nothing more than the assembly (by hard work or daydreaming) of a few bits of what is known/believed into a plausible narrative that explains some phenomenon (e.g. gastric lymphoma), because so little is known about the causes of a complex disease like gastric lymphoma such that the discovery of H. pylori suddenly and completely overturned prior views about its causes, and because we can’t know (or factor into our hypotheses) what we don’t know (you’ve heard of the human gut microbiome but what about the human gut virome?) hypotheses are nothing more than speculation. That’s why every epidemiological study you’ve ever read puts the burden of proof squarely on the hypothesis and resolves all doubt in favor of the "null hypothesis" (i.e. the hypothesized causal agent has no effect).
Unfortunately many courts either don’t understand the difference or refuse to distinguish between hypothesis and theory. A recent example is Walker v. Ford. In Walker plaintiff’s expert was allowed to opine on the basis of his hypothesis that asbestos is a cause of Hodgkin’s lymphoma and thereafter to deduce from another of his hypotheses (that Hodgkin’s lymphoma is caused by either Epstein-Barr virus, smoking or asbestos) that plaintiff’s lymphoma must have been caused by asbestos, as he hadn’t the virus and didn’t smoke. And it isn’t just another case of a court conflating hypothesis generation (guessing) with the scientific method (testing guesses) so that guesswork by a properly credentialed witness is turned into a "scientifically valid method" and Rule 702 can be deemed satisfied. It’s worse. Not only has the hypothesis that asbestos causes Hodgkin’s lymphoma never been verified, it has in fact been repeatedly tested and serially refuted. Furthermore, the most important observation that spawned the hypothesis in the first place (an increased risk of gastric lymphoma among a sample of asbestos workers) has never been reproduced (and will never be reproduced) because when the study was done nobody outside of two researchers in Australia even knew H. pylori existed, much less thought to look for it in gastric lymphoma patients – several years would elapse between its discovery and the determination that it is, worldwide, the leading cause of gastric lymphoma.
The general causation opinion of plaintiff’s expert rested on these studies:
1) Cancer Morbidity of Foundry Workers in Korea. A slightly increased risk of stomach cancer and non-Hodgkin’s lymphoma was found among foundry workers exposed to a laundry list of things including asbestos. No exposure assessment was done for any substance and no increase in Hodgkin’s disease was reported. The mortality study of the workforce published this year isn’t any more persuasive – here’s the SMR table for malignant diseases: SMR table.
2) Extranodal marginal zone lymphoma of mucosa-associated lymphoid tissue type arising in the pleura with pleural fibrous plaques in a lathe worker. Guess what? Asbestos isn’t the only cause of pleural plaques and so I stopped reading this article when I got to "He had not been exposed to asbestos."
3) Asbestos exposure and lymphomas of the gastrointestinal tract and oral cavity. This is the study mentioned above that suffers fatally from the (understandable) ignorance of the confounder H. pylori, though it also appears to have a multiple comparisons problem, as evidenced by the fact that subgroupings of lymphomas, here GI and oral, produced a higher risk than for lymphomas in general. Finally, though it was a case-control study, there was no estimation of exposure in any of the cases.
4) Does asbestos exposure cause non-Hodgkin’s lymphoma or related hematolymphoid cancers? A review of the epidemiologic literature. I didn’t get past the abstract which concludes that a review of the literature reveals "no increased risk of NHL (non-Hodgkin’s lymphoma) or other HL-CAs (hematolymphoid cancer) associated with asbestos exposure."
Not discussed in Walker but apparently the last nail in the asbestos-causes-lymphoma hypothesis’ coffin (and the last sign of any scientific interest in this apparently dead issue) occurred 10 years ago with the publication in the Annals of Epidemiology of Occupational asbestos exposure and the incidence of non-Hodgkin lymphoma of the gastrointestinal tract: an ecologic study. The study found "no support for the hypothesis that occupational asbestos exposure is related to the subsequent incidence of GINHL (gastrointestinal tract non-Hodgkin’s lymphoma)."
These articles along with the expert’s belief that "as long as asbestos reaches an area, regardless of where it is, it can cause different types of cancer" and asbestos can make its way to the lymph nodes, were all he needed to opine that asbestos causes lymphoma including plaintiff’s Hodgkin’s lymphoma (because after all "a lymphoma is a lymphoma" save "for therapeutic purposes"). That’s too much nonsense to unpack in one blog post so I’ll just focus on the claim that wherever asbestos goes in the body it causes cancer. The Institute of Medicine was tasked with answering this very question – is there evidence for a causal relationship to asbestos for cancer of everything from the larynx to the rectum – and generally found that what was in the literature was suggestive but insufficient to reasonably conclude that there is a causal link. See: Asbestos: Selected Cancers.
To save plaintiff’s expert and his hypothesis, the appellate court held that it doesn’t matter whether an expert’s conclusions are correct. All that matters is that the method whereby he reaches his opinion is reliable; and plaintiff’s expert’s method, guessing about the cause of Hodgkin’s lymphoma by creating a narrative about its causation from a few studies (that didn’t actually study Hodgkin’s lymphoma), counts as a reliable one. But who, other than the hopelessly ironic, would label as "reliable" a method of causal determination (i.e. the guessing that constitutes a scientific hypothesis) whose product is usually incorrect? Recall that not only are most scientific hypotheses false, but even most of those that turn up statistically significant results are probably false.
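That last claim is not rhetoric; it falls out of simple arithmetic on prior plausibility, statistical power, and the significance threshold (the familiar positive-predictive-value calculation). The prior and power values below are illustrative, not drawn from any particular literature:

```python
def ppv(prior: float, power: float = 0.8, alpha: float = 0.05) -> float:
    """Probability a 'statistically significant' finding is actually true,
    given the fraction of tested hypotheses that are true (the prior)."""
    true_positives = power * prior          # true hypotheses that test positive
    false_positives = alpha * (1 - prior)   # false hypotheses that test positive
    return true_positives / (true_positives + false_positives)

# If 1 in 10 tested hypotheses is true, significance only gets you to ~64%.
print(round(ppv(0.10), 2))  # 0.64
# If only 1 in 100 is true, as may be the case when risk-factor studies go
# fishing, then most significant results are false.
print(round(ppv(0.01), 2))  # 0.14
```

The lower the prior plausibility of the hypotheses being tested, the more of the "significant" findings are false positives, which is exactly the regime risk factor epidemiology operates in.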
Only scientific theories get the Seal of Reliability, which is to say they make predictions on which you can rely. And they gain that status only by being put to the test, and passing; and by passing I mean that the predictions they make actually come to pass. So what prediction would follow from "asbestos exposure causes Hodgkin’s disease"? Wouldn’t it be "people exposed to asbestos are more likely to get Hodgkin’s disease than those who aren’t"? And what follows from the fact that no study of asbestos-exposed workers has shown an increased risk of Hodgkin’s lymphoma? That the claim "asbestos causes Hodgkin’s disease" isn’t reliable.
So if hypotheses are unreliable in general because by definition they have not been tested, and if the specific hypothesis "asbestos causes Hodgkin’s lymphoma" is unreliable because it has been tested and failed to predict the future it entails, in what sense is the opinion of Walker’s expert "reliable"? Let me know if you figure it out.
Jumping the Snark: Erionite in Mexican Town Tied to High Rate of Mesothelioma (or, how "Sir, have you ever been to Turkey?" became too cute by half)