An interesting and thoughtful comment on one of our recent posts from Dr. Marc-Andre Gagnon:

I believe your interpretation of the study is simply wrong. The authors refer to a study by Chan et al., published in BMJ, which showed that such differences between protocols and publications were prevalent in a sample of 70 clinical trials, 56 of which received industry funding. The quoted paper never said that non-industry funded studies showed as much difference (or bias) as industry funded studies; it simply made the case that we need access to the full data for every clinical trial to be sure that there is no bias in the reporting of results.

On the link between reporting bias in research and industry funding, one can find a Cochrane systematic review by Lundh et al., published on the topic two months ago.

All the best,

Marc-André Gagnon

School of Public Policy and Administration, Carleton University


Which precipitated an exchange wherein we asked about discrepancies and biases in non-industry funded data and got this equally thoughtful reply:

I think the topic is a very complex one, since you need to take into consideration publication bias: journals like crisp results, and dull ones often go unpublished. I think this type of bias applies in the same way to industry and non-industry funded studies. But this is not the same thing as reporting bias when it comes to the important question of industry funding and research outcomes. A recent Cochrane systematic review showed that the problem is not an imaginary one:

The article analyzes the existing literature on reporting bias.

One thing is likely not in dispute – we’re entering (or have entered) another age of empiricism, in which data, and the inferences statistically drawn from it, presumptively trump even the cleverest idea if the data underpinning that idea is biased, not honestly gathered, or missing (in whole or in part). And that’s a good thing. See also: “In God we trust, all others (must) bring data.”

Thanks for the comments, Dr. Gagnon.