You might remember me from such articles as “Even Systematic Reviews are Pretty Easy to Manipulate” and “I Can Predict the Conclusion of Your Review Even Without Reading It”. If you have been around for a while, you may even remember me from “Industry-based Researchers get More Love Because They are Better Connected”.
In a web browser near you, see the newest installment: “More Money from Industry, More Favourable Reviews”…
With colleagues from here in Sydney and over in Boston, I recently published the latest in a string of related studies on financial competing interests in neuraminidase inhibitor research. This is probably the last paper we will write on the topic for a while (though we do have two more manuscripts related to financial competing interests on the way). In this one we looked at non-systematic reviews of the neuraminidase inhibitor evidence, comparing the number of non-systematic reviews and the proportion of favourable conclusions between authors who had financial competing interests and authors who did not. You will not be at all surprised to learn that authors with relevant financial competing interests wrote more non-systematic reviews on the topic, were more likely to conclude favourably in those reviews, and also wrote more of other kinds of papers in the same area.
After looking in way too much detail at how evidence is created and translated in this area, I thought it would be a good time to write down a few tips on how anyone with deep pockets can control an evidence base and get away with it (for a while). So here are some hints on the easiest and fastest ways to control the research consensus around a clinical intervention, even when it isn’t as effective or safe as it should be.
Step 1. Design studies that will produce the conclusions you want.
- The effects of industry sponsorship on comparator selection in trial registrations for neuropsychiatric conditions in children, PLOS ONE, 8(12):e84951.
Step 2. When publishing trial reports, leave out the outcomes that don’t look good, or just don’t publish them at all.
- Outcome Reporting Among Drug Trials Registered in ClinicalTrials.gov, Annals of Internal Medicine.
- Selective reporting bias of harm outcomes within studies: findings from a cohort of systematic reviews, BMJ, 2014;349:g6501.
- Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias — An Updated Review, PLOS ONE, 8(7):e66844.
Step 3. When publishing reviews, just select whatever evidence suits the conclusion you like best and ignore everything else.
- Citations alone were enough to predict favourable conclusions in reviews of neuraminidase inhibitors, Journal of Clinical Epidemiology, 68(1):87-93.
- Industry influenced evidence production in collaborative research communities: A network analysis, Journal of Clinical Epidemiology, 65(5):535-543.
- Bias due to selective inclusion and reporting of outcomes and analyses in systematic reviews of randomised trials of healthcare interventions, Cochrane Database of Systematic Reviews, 2014;(10):MR000035.
Step 4. If the data fail to paint the picture you want, you can report them as they are and simply write a different conclusion anyway.
- Financial Conflicts of Interest and Conclusions About Neuraminidase Inhibitors for Influenza: An Analysis of Systematic Reviews, Annals of Internal Medicine, 161(7):513-518.
Step 5. Borrow the credibility of reputable and prolific academic researchers by paying them to run trials, write reviews, and talk to the media.
- Financial competing interests were associated with favourable conclusions and greater author productivity in non-systematic reviews of neuraminidase inhibitors, Journal of Clinical Epidemiology.
- Conflict of interest disclosure in biomedical research: A review of current practices, biases, and the role of public registries in improving transparency, Research Integrity and Peer Review, 1:1.
- Set up a public registry of competing interests, Nature, 533:9.
Step 6. Profit.
- Industry Influence in Evidence Production, Journal of Epidemiology & Community Health, 67:537-538.
Two important caveats. First, I am not claiming that any of these things have been done deliberately for neuraminidase inhibitors or any of the interventions described above; I am describing these processes in general, based on multiple sources, and in a flippant way. It may well have happened for some or many clinical interventions in the past, but that is not what we claim here or anywhere else. Second, I am not anti-industry; I am anti-waste and anti-harm.
And everyone should share the blame. Researchers inside industry, researchers outside industry who take industry funding, and researchers completely divorced from all industry activity have each been responsible for the kinds of waste and harm we read about after the damage has been done.
No matter what kind of intervention you work on, poorly designed or redundant studies waste money and time, and can put participants at risk for no reason. Failing to publish the results of trials in full is just as bad, and producing piles of rubbish reviews that selectively cite whatever evidence helps prove your preconceived version of the truth is about as bad as trying to convince people that a caffeine colon cleanse cures cancer.
When I find time, I will continue to add links to specific papers for each of these areas (for now, mostly those from my team and my collaborators). Hundreds of other relevant articles have been written by lots of other smart people, but I am starting with a selection of my own plus some of my favourite examples for each category.