Need Something To Attack The Reliability Of Animal Studies?

The Reference Manual on Scientific Evidence, Third Edition has the following to say about animal studies:

Animal studies have a number of advantages. They can be conducted as true experiments, and researchers control all aspects of the animals' lives. Thus, they can avoid the problem of confounding, which epidemiology often confronts. Exposure can be carefully controlled and measured. Refusals to participate in a study are not an issue, and loss to follow-up very often is minimal. Ethical limitations are diminished, and animals can be sacrificed and their tissues examined, which may improve the accuracy of disease assessment. Animal studies often provide useful information about pathological mechanisms and play a complementary role to epidemiology by assisting researchers in framing hypotheses and in developing study designs for epidemiologic studies.

Animal studies have two significant disadvantages, however. First, animal study results must be extrapolated to another species - human beings - and differences in absorption, metabolism, and other factors may result in interspecies variation in responses. For example, one powerful human teratogen, thalidomide, does not cause birth defects in most rodent species. Similarly, some known teratogens in animals are not believed to be human teratogens. In general, it is often difficult to confirm that an agent known to be toxic in animals is safe for human beings. The second difficulty with inferring human causation from animal studies is that the high doses customarily used in animal studies require consideration of the dose-response relationship and whether a threshold no-effect dose exists. Those matters are almost always fraught with considerable, and currently unavoidable, uncertainty.

It turns out that there's a third significant disadvantage, one that ranks ahead of the other two. It's nicely summed up in the title of a new article in Nature - "Animal studies produce many false positives." Put another way, before you start arguing about whether the result of a particular animal study is relevant to the effect of some chemical in humans, you need to assess whether the study is even reliable in the first place. That's because the number of animal studies reporting statistically significant results is far higher than would be expected given the studies' size and statistical power. In fact, it's almost double what you'd expect to find if hypotheses were being fairly tested in animals and the results invariably reported.

Interestingly, small sample size and financial interest seem to correlate with what appears to be p-hacking and/or selective publishing. Thus, as we've said previously, you should assume the p-hacking scandal won't be limited to academia. The article is "Evaluation of Excess Significance Bias in Animal Studies of Neurological Diseases" and it sensibly ends:

In conclusion, the literature of animal studies on neurological disorders is probably subject to considerable bias. This does not mean that none of the observed associations in the literature are true.
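
For the quantitatively inclined, the excess significance test at the heart of the paper boils down to comparing the observed count of "positive" studies against the count you'd expect given the studies' statistical power. Here's a minimal sketch in Python, in the spirit of the method rather than a reproduction of it; every number below is an invented placeholder, not data from the paper.

```python
# Back-of-the-envelope excess significance test: compare the observed
# number of statistically significant studies with the number expected
# given the studies' average power. All inputs are invented placeholders.
from scipy.stats import binomtest

n_studies = 160          # studies in a hypothetical meta-analysis
observed_positive = 100  # studies reporting p < 0.05 (hypothetical)
mean_power = 0.30        # average power to detect the pooled effect (hypothetical)

expected_positive = n_studies * mean_power  # ~48 expected under fair testing

# One-sided binomial test: are there more "positives" than power can explain?
result = binomtest(observed_positive, n_studies, p=mean_power, alternative="greater")
print(f"expected ~{expected_positive:.0f}, observed {observed_positive}, "
      f"p = {result.pvalue:.2g}")
```

When the observed positives run to nearly double the expected count, as reported for the animal literature, that p-value becomes vanishingly small.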


Squeak Squeak

In the run-up to the trial of a case in which we're arguing that the B6C3F1 mouse ain't a man and 1,3-butadiene ain't a human carcinogen just because it causes cancer in the B6C3F1 mouse, out comes "Mice Fall Short as Test Subjects for Humans’ Deadly Ills" by Gina Kolata of the NYTimes. And it's a bombshell. Kolata reports on the paper "Genomic responses in mouse models poorly mimic human inflammatory diseases" and its central finding that immune responses in the mouse, including those related to heart disease and cancer, are no more closely correlated with human responses to the same stimuli than the roll of a pair of dice. It's the long-sought explanation as to why, e.g.,

"every one of nearly 150 drugs tested at a huge expense in patients with sepsis has failed. The drug tests all were based on studies in mice. And mice, it turns out, can have something that looks like sepsis in humans, but is very different from the condition in humans."

Good stuff, though not all that surprising if you've been following the sad tale of the development of drugs that cure cancer in mice yet have no effect in humans. And it doesn't mean that all scientific studies done on mice are worthless. Far from it. The ability to produce for example so-called knockout mice, rodents lacking a particular gene required to make a particular protein, allows an otherwise forbidden glimpse into the workings of the tiny chemical factories that we call cells. Nevertheless, the study does shatter the assumption that those little factories in mice run just like their counterparts in humans.

However, that's not the end of the story. If you read the whole thing you'll find this:

The study’s investigators tried for more than a year to publish their paper, which showed that there was no relationship between the genetic responses of mice and those of humans. They submitted it to the publications Science and Nature, hoping to reach a wide audience. It was rejected from both.

... reviewers did not point out scientific errors. Instead, [one of the authors] said, “the most common response was, ‘It has to be wrong. I don’t know why it is wrong, but it has to be wrong’ ”

which leads to our final point. Daubert's peer review factor was intended to serve as an independent indicator of reliability. The Court assumed that disinterested scientists on the lookout for bad science served as gatekeepers of the journals through which "scientific knowledge" was disseminated. Perhaps when there were far fewer journals and far fewer academics desperate to be published, peer reviewers served such a function. Nowadays they too often serve the status quo - barring from publication the sort of disruptive findings that would discomfit the guild they serve. Thus, if we're not careful, Daubert risks being effectively transmuted, at least in part, into Frye - i.e. a test of general acceptance rather than a test of sound science.


Formaldehyde in Hair Straightening Products - FDA Urged To Do More

The FDA has been urged by three US Congressmen to do more to control formaldehyde in hair straightening products. Last year, FDA sent a warning letter to the makers of “Brazilian Blowout” after finding that the hair straightener contained significant quantities of formaldehyde even though it was marketed as “formaldehyde free.”

While the makers of Brazilian Blowout have settled a lawsuit regarding misbranding, and promised to put on a warning label stating that the product can release formaldehyde when applied, the product is still misbranded, according to the Congressmen, who have urged FDA to do more.

5-Methylene-1,3-Cyclopentadiene

I thought of fulvene, also known as 5-methylene-1,3-cyclopentadiene, when I read the following in a new law review article (funded, strangely enough, by a National Science Foundation grant):

Tort actions may impel industry to take voluntary steps to redesign chemical molecules ... to be less toxic.

Fulvene, you see, is made up of six carbon and six hydrogen atoms. So is benzene, and so are a few other molecules. The point, of course, is that while you might be able to rearrange a car's component parts to make it somehow safer while leaving it a car, you can't rearrange benzene's atoms (or those of any other complex molecule for that matter) without turning benzene into something else. Something with a different boiling point, solubility, reactivity and the like. Something that cannot, as benzene can, be used to make the breast cancer drug tamoxifen.
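
To make the point concrete, here's a minimal sketch using the open-source RDKit cheminformatics toolkit (my illustration, not anything from the article): benzene and fulvene share the molecular formula C6H6, yet they are different chemicals with different properties.

```python
# Same atoms, different molecule: benzene vs. fulvene (both C6H6).
# Requires the rdkit package.
from rdkit import Chem
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

benzene = Chem.MolFromSmiles("c1ccccc1")     # aromatic six-membered ring
fulvene = Chem.MolFromSmiles("C=C1C=CC=C1")  # 5-methylene-1,3-cyclopentadiene

print(CalcMolFormula(benzene))  # C6H6
print(CalcMolFormula(fulvene))  # C6H6 - the identical molecular formula...
# ...but the canonical structures differ, so they are not the same chemical:
print(Chem.MolToSmiles(benzene) == Chem.MolToSmiles(fulvene))  # False
```

Rearrange the atoms and, as far as chemistry is concerned, you have a new product - which is why "redesigning" a molecule is nothing like redesigning a car.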

The law review article is "Litigating Toxic Risks Ahead of Regulation: Biomonitoring Science in the Courtroom" and it dovetails with "How Chemicals Affect Us" which you've likely seen in the NYTimes. Each claims that very low levels of exposure to substances previously thought safe may be causing subtle changes, and each ends with a call for regulation; the former by way of lowering evidentiary standards in tort proceedings so as to bring about more claims and bigger awards, and the latter by way of the regulatory state. Irrespective of the wielder, the same tool is urged: one that resolves all uncertainties in favor of stasis, of inaction, i.e. the Precautionary Principle.

"Litigating Toxic Risks", funded under a $366,785 research grant for "Toxic Ignorance and the New Right-to-Know: The Implications of Biomonitoring for Regulatory Science", proceeds from the hypothesis that "toxic tort litigation has emerged as a means of controlling risks." It recounts 1) the number of chemicals that have never been tested for toxicity (tens of thousands); 2) the non-stop synthesis of new ones; 3) the purported shortcomings of TSCA; 4) the fact that asbestos and lead paint are made of chemicals and turned out to adversely affect some of those exposed; 5) the apparently obvious conclusion "it follows that many of today's routine chemical exposures are cause for great health concern"; and, finally, 6) the ability of biomonitoring to demonstrate those chemicals to which we've been exposed. The authors then deduce that the effort to regulate chemicals via toxic tort litigation "depends greatly on whether courts are able to apply tort theories to the scientific data used in appraising the health risks of chemicals".

They lament, however, that there's no cause of action for simply being exposed to the activities of other people; that plaintiffs must show harm - an adverse health effect - before they can prevail. Regarding those chemicals to which everyone is exposed in low doses, they complain that it's not practical for plaintiffs to do epidemiological studies since there is (unsurprisingly) no unexposed reference population. Furthermore, the cost and time involved in doing epi and tox studies are significant. So, if standards of proof could just be lowered, the class action mechanism would expose potential defendants to existential liability risks for harms they probably didn't cause (see pg. 6), vast sums could be extracted from them, and the production of synthetic chemicals would thereby be curtailed or eliminated.

Additional helpful measures would include dropping the requirement that class members demonstrate that they have actually been exposed to the substance in question. As support for this assertion the authors write "[t]he courts' current stance contradicts standard scientific procedure, where it is well recognized that sampling can lead to reliable assumptions about population characteristics". (Really? A calculated sample mean is superior to knowledge of the actual population mean for making conclusions about the population? And superior to even knowing the actual exposure of each member of the population?)
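
To be fair to the statistics: sampling does let you estimate population characteristics, but only with quantifiable error, and it is never better than actually knowing each member's exposure. A small numpy illustration of the point (all numbers invented):

```python
# Sampling estimates a population mean - with quantifiable error.
# It is never better than actually knowing the population.
# All numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
population = rng.lognormal(mean=1.0, sigma=0.8, size=1_000_000)  # "exposures"
true_mean = population.mean()  # with the full population, nothing to estimate

sample = rng.choice(population, size=100, replace=False)
sample_mean = sample.mean()
std_error = sample.std(ddof=1) / np.sqrt(sample.size)  # uncertainty of the estimate

print(f"true mean   = {true_mean:.3f}")
print(f"sample mean = {sample_mean:.3f} +/- {std_error:.3f} (one standard error)")
```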

To make sure that as many people as possible can assert medical monitoring claims the article's authors urge "implementation of the precautionary principle in the legal standards required to show significant exposure and increased risk of disease". The precautionary principle apparently will turn every "is it likely" hurdle to plaintiffs' recovery into an "is it possible" speed bump.

As for damages "courts can accept, as legally actionable injuries, subtle health and developmental impacts as well as emotional concern and stress related to chemical exposure."

So far some 50 million different chemical substances have been cataloged and 12,000 new ones are added every day. Most were synthesized by nature rather than by man. Over the eons our ancestors managed to survive in this sea of chemicals, surrounded and inhabited by countless biochemical factories constantly synthesizing new molecules in order to survive in and/or exploit their ever-changing environment - and our ancestors largely did it by synthesizing their own new molecules. We've only had trouble when we've been out-engineered by our biochemical competitors or when we've violated the rule: "all things in moderation". So what's with the chemohysteria over trace exposures and the discovery that our bodies notice and adapt to them on the fly?

I think a large part of it stems from the fact that we've come to realize our genetic code is more toolbox than blueprint; that we're far more impermanent than we ever imagined; and, that so much of what we believed about how it all works, especially decades old myths about the principal causes of human diseases, is being swept away by remorseless empiricism. The attempt to incorporate the Precautionary Principle into the law can thus be seen as part of a deeply conservative movement, standing athwart science, yelling Stop!


Bending the Dose Response Curve

The linear no-threshold model of dose-response meant that plaintiffs could continue to prevail on toxic tort claims even though their exposures had occurred in the modern era and thus were tiny fractions of those that led to epidemics in years past. Either courts permitted plaintiffs to rely on a one molecule / one particle theory of causation (consistent with the view that some risk is associated with a single molecule or particle) or they allowed plaintiffs to conflate causation with risk.

Eventually some courts began to grasp the absurdity that follows from basing proximate cause on a "one-hit" model in a world of trillions of hits. Others began to take notice of the fact that, despite probing larger and larger populations with low exposures, epidemiology has been unable to verify the linear no-threshold model for numerous diseases, thereby suggesting that there is indeed a threshold for diseases including leukemia (a new case making the latter point is Schultz v. Glidden Company). Meanwhile, we have argued that the old cases got it right - that causation in an individual toxic tort case is unfathomable and that the most sensible approach is to estimate the risk imparted (e.g. by a single molecule); to ask why it makes sense to impose liability for creating a 1:1,000,000,000,000,000,000 chance of harm; and to ask why it wouldn't make sense to impose liability for a 1:100,000 or greater risk.
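
The back-of-the-envelope arithmetic shows why drawing that line matters. Taking a purely hypothetical exposed population of 300 million:

```latex
% Expected cases = exposed population x individual risk (illustrative numbers)
\begin{align*}
3\times10^{8} \times 10^{-18} &= 3\times10^{-10} &&\text{(effectively zero cases)}\\
3\times10^{8} \times 10^{-5}  &= 3{,}000         &&\text{(a real, population-scale harm)}
\end{align*}
```

Liability for the first imposes costs for a harm that will almost certainly never occur anywhere; liability for the second at least addresses an expected harm.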

But all of that assumes risk goes to zero, or at least continues to decrease, as exposure is reduced below previously measured levels. If that assumption is false, if risk starts heading back up as exposure goes down, especially if unpredictably so, then all bets are off. We will have entered another period of great uncertainty, and it's in such times that toxic tort claims flourish. The horsemen of this new age of uncertainty have published a review paper on the topic, and if you want to understand what's coming, why it's pitch perfect for the health and wellness movement, and why what happened to BPA will be repeated again and again for other chemicals until some new way is established to either verify or refute their claim that the dose doesn't make the poison, you need to read it: "Hormones and Endocrine-Disrupting Chemicals: Low-Dose Effects and Nonmonotonic Dose Responses"

Discretizations

Hydraulic Fracturing (a/k/a fracing a/k/a fracking) Roundup

Yesterday our energy partners reported on the EPA's claim of water contamination in Wyoming due to hydraulic fracturing fluids used in natural gas production. Today The New York Times is wondering whether earthquakes can be blamed on fracing. Thus it sounds like a good time to provide you with some links to recent studies of the process that you may find of interest. Here goes:

Scientific American has the truth about "fracking" and thinks that engineering science has gotten ahead of safety

The comment period for New York's Supplemental Generic Environmental Impact Statement just ended and some public health advocates don't like it

Two miles underground amidst the shale and gas, where the pressures and temperatures are extreme, lives a fascinating community

And some of its members traveled there via drilling muds

Finally, some public health advocates and journals tend to overlook one important aspect of the energy business - that it provides lots of high-paying jobs with benefits ranging from free laundry service to transportation to health care, and often excellent pensions, not to mention an interesting and disciplined work environment. That's a big boost to socioeconomic status, which bestows dramatic economic, physical and even mental health benefits that echo through succeeding generations. So when balancing the risks and benefits of fracing, let's not forget to add the profound public health benefits that flow from good jobs to the benefit side of the ledger.

Discretizations

NTP Adds Formaldehyde to "Known", Glass Wool Fibers and Styrene to "Reasonably Anticipated To Be" Lists of Human Carcinogens

Formaldehyde has been known to be a cause of nasopharyngeal cancer for a long time, but the NTP's determination that it likely causes leukemia and other lymphohematopoietic cancers is a big deal. The inclusion of styrene on the list of things "reasonably anticipated to be a human carcinogen" is the real shocker though. Back in the late 80s, when the butadiene litigation was beginning to unfold, there was considerable worry about whether the other big component of styrene-butadiene rubber might be a carcinogen. Numerous studies settled the question and the litigation never went anywhere (well, they sued the butadiene people instead of the styrene people). Expect styrene litigation.

Finally, the glass fiber determination brings with it a fair share of irony. For years asbestos plaintiff lawyers claimed that glass wool was a safe, non-carcinogenic substitute fiber (and I'd bet it is, actually). For more see: "12th Report on Carcinogens" or the report itself in .pdf.

Cherry Picking on My Cherry Coke

Today's scare du jour was just launched by the Center for Science in the Public Interest. They claim that the caramel coloring in Coke (and in dark beer and lots of other good stuff) is carcinogenic and ought to be banned. See "FDA Urged to Prohibit Carcinogenic 'Caramel Coloring'".

The claim can be summed up as follows: industrial caramel is unnatural and the product of scary-sounding processes involving scary-sounding chemicals; one of the resulting constitutive chemicals, 4-methylimidazole, has been found at "significant levels" in five brands of cola; 4-methylimidazole causes cancer in lab rodents; therefore, my Cherry Coke is a cancer hazard. Is there anything to it?

Well, sure enough there's a study of lab rats and mice that found small increases in the risk of lung cancer and leukemia that grew as doses increased (the rodents got the equivalent of thousands of cans of cola per day worth of 4-methylimidazole). See "Toxicity and Carcinogenicity Studies of 4-Methylimidazole in F344/N Rats and B6C3F1 Mice". But something else very interesting happened along the way to a good health scare - something not mentioned by the CSPI.

It turns out that while there were small and at best equivocal indications that 4-methylimidazole might be associated with one or two rodent cancers, there were big, statistically significant and dose-dependent associations between 4-methylimidazole consumption and cancer prevention. For example, compared to the rodents not given 4-methylimidazole, the female rodents drinking cola by the barrel were essentially completely protected from mammary tumors as well as a host of other cancers. Overall, rodents on a cola binge experienced a greatly reduced risk of many cancers and saw some tumor rates reduced by orders of magnitude compared to their cousin rats and mice not given 4-methylimidazole.
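
For those wondering how rodent doses get translated into cans of cola, the usual tool is body-surface-area scaling (the FDA's standard Km factors are roughly 6 for rats and 37 for humans). The dose and per-can figures below are hypothetical, chosen only to show the arithmetic:

```latex
% Human-equivalent dose (HED) by body-surface-area scaling (illustrative numbers)
\mathrm{HED} = 150\,\tfrac{\text{mg}}{\text{kg}\cdot\text{day}}
  \times \frac{K_{m,\mathrm{rat}}}{K_{m,\mathrm{human}}}
  = 150 \times \frac{6}{37} \approx 24\,\tfrac{\text{mg}}{\text{kg}\cdot\text{day}}
```

For a 70 kg adult that's roughly 1,700 mg per day; at a hypothetical 0.13 mg of 4-methylimidazole per can, you'd need something like 13,000 cans a day - cola by the barrel indeed.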

There was no call for research into the protective effects of caramel coloring. The great big silver lining wasn't even disclosed. Instead, the two insignificant bits of data showing a small risk of tumors in rodents were cherry picked from the forest of data and the big effect, a cancer-protective effect, was completely ignored.

I'll go out on a limb and predict that this scare, like the CSPI's earlier acrylamide-in-bread-chips-and-roasted-coffee-is-going-to-give-everybody-cancer scare, is also headed for the dustbin of history.


Welding and Cardiovascular Disease

When it comes to cholesterol the thing to worry about is too little HDL. Your total cholesterol level can be high and your LDL can be high but if your HDL is up there you're swimming in the shallow end of the risk pool. That's what makes "Acute Decrease in HDL Cholesterol Associated With Exposure to Welding Fumes" so interesting.

The finding of a large decrease in HDL without effect upon other lipids following exposure to PM2.5 in welding fumes provides, if replicated, a biological mechanism for some of the maladies laid at the feet of PM2.5. Low levels of HDL lead to inflammation, and chronic inflammation leads to a variety of illnesses like hardening of the arteries.

The Linear No-Threshold Theory: A Crumbling Foundation

The idea that a known cause of cancer, e.g. ionizing radiation, poses a risk of cancer at any dose, no matter how small, is a central thesis informing modern environmental and occupational regulations and modern, which is to say low dose, toxic tort cancer litigation. In the toxic tort context plaintiffs regularly employ the logical fallacy of the appeal to ignorance (argumentum ad ignorantiam) to prove that even the slightest exposure was risky. They say that because defendants cannot establish a safe level of exposure, it follows that every exposure is necessarily unsafe. The formal name for the idea that risk doesn't drop to zero until exposure drops to zero is the linear no-threshold dose theory, or LNT. The LNT theory, always longer on theory and politics than evidence, is increasingly under attack. Now even NIOSH has had to concede that at least in some circumstances there is indeed a safe dose for a carcinogen.

In "Checking the Foundation: Recent Radiobiology and the Linear No-Threshold Theory" the author states "a large and rapidly growing body of radiobiological evidence indicates that cell and tissue level responses to [radiation damage], particularly at low doses and/or dose-rates, are nonlinear and may exhibit thresholds ... this evidence directly contradicts the assumptions upon which the microdosimetric [LNT] argument is based". The idea that a substance that is harmful at high levels can be harmless or better yet beneficial or protective (the idea of hormesis) at low levels is discussed at length in this month's issue of Human & Experimental Toxicology.

The claim that "if it takes an ounce to kill ten men then a drop will thousands" was itself just a theory based on the idea that carcinogenesis was a stochastic process. Getting cancer was sort of like hitting the anti-lottery and the more tickets you bought (exposures you sustained) the more likely you were to lose yet if you were unlucky enough just one ticket could do it. Like black box epidemiology LNT was simply a way to ignore the formerly incomprehensible molecular biological mechanisms responsible for cancer. Now that those mechanisms are being uncovered and understood they can no longer be ignored as they shatter one paradigm after another.

Bisphenol A Roundup

Since it's detected at low levels in 95% of us, and since Americans have been exposed to it for more than 50 years, you'd think someone would have noticed if exposure to bisphenol A (BPA) were responsible for widespread illness, deformity and death. Apparently not, at least not if the findings from a recent wave of BPA studies are to be believed.

The new findings are, in no particular order, that BPA: (a) damages sperm; (b) inhibits the normal development of ovaries; (c) alters brain development; (d) causes premature birth; (e) may be a carcinogen like DES; (f) damages blood cells; (g) activates breast cancer cells; (h) impairs the body's defenses against colon cancer, especially in women; and (i) makes offspring anti-social and neurotic. And those are just a few of the "findings" published in the last two months. Obviously the world that existed before 1950 or so, before BPA was everywhere used to seal bacteria out of food and dental cavities, had to have been a much healthier and more peaceful one. Alas.

A Man Is Not A Mouse, At Least When It Comes To Butadiene

Why are mice so much more susceptible to butadiene than humans are? Apparently it's because they metabolize it into potent mutagens at 200 times the rate of humans. As a result, while mice exposed to butadiene at current occupational levels promptly yield evidence of genotoxicity, there's no evidence of genotoxicity in humans at current workplace exposure levels. See: "1,3-Butadiene: Biomarkers and Application to Risk Assessment".

Risk Assessment From In Vitro Testing: Staggeringly Complex or Just Impossible?

In vitro testing has been proposed as a way to clear out the backlog of toxicity testing on thousands of chemicals currently in use. It's quicker and cheaper and lab animals needn't be "sacrificed". The plan is to use the results to estimate the dose response curve in humans so that regulatory agencies can regulate accordingly. Too bad it won't be that easy.

In this month's Environmental Health Perspectives, Kenny Crump et al. discuss the daunting task of using data from in vitro testing to set reasonably safe exposure limits. See (free): "The Future Use of In Vitro Data in Risk Assessment to Set Human Exposure Standards".

The problem of course is that it's not a matter of exposing some cells in a petri dish to the chemical of interest and watching what happens. There are multiple pathways and multiple feedback mechanisms involving multiple types of cells that define the pathways to toxicity, not to mention any that work to offset and fix the ill effects. How many might there be and in how many ways might they interact? A model of how E. coli protects against heat shock "consists of a set of 31 differential-algebraic equations with 27 kinetic parameters, data for many of which are not yet available." Just finding these pathways will be a huge undertaking, and billions of dollars in funding are being sought over the next decade to find and elucidate them.
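
To get a feel for why even a "simple" pathway model is data-hungry, here is a toy two-equation sketch of a stress-response feedback loop - nothing like the 31-equation E. coli model, and with invented kinetic constants:

```python
# Toy two-equation sketch of a stress-response feedback loop: a toxicant T
# induces a repair protein R, which in turn clears T. All kinetic constants
# are invented; a real pathway model (like the 31-equation E. coli heat-shock
# model) needs dozens of measured parameters before it predicts anything.
from scipy.integrate import solve_ivp

k_in, k_clear, k_induce, k_deg = 1.0, 0.5, 0.8, 0.2  # invented kinetics

def pathway(t, y):
    T, R = y                       # toxicant load, repair protein level
    dT = k_in - k_clear * T * R    # constant influx minus repair-mediated clearance
    dR = k_induce * T - k_deg * R  # damage-induced synthesis minus decay
    return [dT, dR]

sol = solve_ivp(pathway, (0.0, 50.0), [0.0, 0.1])
T_end, R_end = sol.y[:, -1]
print(f"after 50 time units: toxicant ~ {T_end:.2f}, repair protein ~ {R_end:.2f}")
```

Every additional cell type, pathway or feedback loop multiplies the equations and the kinetic parameters that must be measured before the model predicts anything at all.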

Nevertheless, the authors conclude: "Use of in vitro data in risk assessment has great promise toward allowing chemicals to be tested more quickly and cheaply and for reducing or eliminating the need for subjecting animals to toxic insults. It is our hope that the bar for accepting approaches based on in vitro data will not be set too high. In view of the numerous serious limitations of current approaches, results from these methods based on whole-animal data should not be held up as gold standards. This point is particularly important considering that almost all whole-animal data are obtained from high doses that may operate through different sets of [toxicity pathways] than do low doses."

That last sentence is the key. We're entering a whole new world of toxic torts. One in which many heretofore innocuous chemicals will be claimed to be toxic at very low doses.

What Do Wrinkles, Rheumatoid Arthritis and Multiple Sclerosis Have in Common?

Apparently, whether you get them or not depends on the microbes that live in your gut.

It may not make sense intuitively (undoubtedly a common problem in times of crumbling paradigms), but the bacteria in your intestines may decide whether your skin responds to UV damage with wrinkles or is instead rejuvenated. See "Probiotics for Photoprotection".

Interested in how the right gut microbes suppress central nervous system inflammation and how the wrong ones cause just the sort of chronic brain and spinal cord inflammation thought to be responsible for MS? Read: "Proinflammatory T-Cell Responses to Gut Microbiota Promote Experimental Autoimmune Encephalomyelitis". Here's one of many interesting takeaways: "... mammals are colonized for life with extraordinary multitudes of indigenous bacteria, and the contributions of this enormous and diverse ecosystem to human health remain poorly understood. Recent studies have launched a revolution in biology aimed at understanding how (and more importantly, why) mammals harbor symbiotic bacteria."

Take a mouse predisposed to rheumatoid arthritis and make it germ free. No rheumatoid arthritis. Then expose it to a single microbe, the segmented filamentous bacteria, "and arthritis rapidly ensued." That was the finding of "Gut-Residing Segmented Filamentous Bacteria Drive Autoimmune Arthritis via T Helper 17 Cells". And for a real eye opener read "Segmented Filamentous Bacteria Shape Intestinal Immunity". How could two genetically identical mice have dramatically different immune systems? By having different microbes in their guts - as in the case of B6 mice from two different vendors.

Memo to self: look into whether different sources of B6 mice might correlate with different results re: butadiene's carcinogenic potential.

"[O]ur Old Assumptions About Toxicants and How They Affect Our Bodies Are Being Changed ..."

There's a remarkable but necessary admission in this month's Environmental Health Perspectives. It is that a new (some would say old) paradigm has emerged; that pathogens, sometimes in concert with what for 40 years have been known as toxicants, are responsible for a very large portion of human suffering. Unable to deny any longer that diseases of nature inflict a staggering toll on humanity, the "NIEHS Office of the Director will be working with division leaders to develop an initiative on infectious disease and environmental health—to incorporate infectious disease into the toxicological paradigm."

The editorial points to "A Niche for Infectious Disease in Environmental Health: Rethinking the Toxicological Paradigm" just published in the same journal. It's a call for the study of infectious diseases in environmental health research. Ultimately it's a recognition that the simple (and simplistic) models of many diseases are collapsing under the weight of modern microbiology. It's an admission that "the complexity of real-world exposures and multifactorial health outcomes" cannot be captured by the simple one-to-one associations that ruled environmental health research for the past four decades.

Years ago real insight, real genius (at least when it came to environmental illness) was replaced by a sort of blue collar approach to science in which grotesquely simplified statistical data dredges could be automated so that a never-ending stream of putative causes of human suffering could be manufactured, studied and regulated. Some of the techniques were so malleable that clever researchers could not only manufacture causes, they could also decide in advance what the causes would be. Now the real causes are being uncovered, and it often turns out that our ancient enemies, pathogens, were to blame all along.

The long war continues, but now the scales that covered environmental science's eyes for decades are falling.