How to predict the conclusion of a review without even reading it…

Short version: We published a new article in the Journal of Clinical Epidemiology all about selective citation in reviews of neuraminidase inhibitors – like Tamiflu and Relenza.

Lots of reviews get written about drugs (especially the ones that get prescribed often), and the drugs used to treat and prevent influenza are no exception. There are more reviews than there are randomised controlled trials, and I think it is hard to justify why doctors and their patients would need so many different interpretations of the same evidence. When the literature fills up with redundant reviews like this, I like to call it “flooding”.

The reason so many reviews get written probably has something to do with a problem that has been covered many times over by people far more eloquent than I am: marketing disguised as clinical evidence.

We recently undertook some research to try to understand how authors of reviews (narrative and systematic) manage to reach conclusions that appear to be diametrically opposed. For the neuraminidase inhibitors (e.g. Tamiflu, Relenza), conclusions varied from recommending early use in anyone who looks unwell, or massive global stockpiling for preventive use, to questioning the very use of the drugs in clinical practice and raising safety concerns. We hypothesised that one of the ways these differences could manifest in reviews was through something called selective citation bias.

Selective citation bias happens when review authors pick and choose what they cite so as to present the evidence in ways that fit their predetermined views. We often associate this problem with conflicts of interest, and in the past it has (repeatedly) led to drugs being presented as safe and effective when they simply aren’t.

By the way, here’s a picture of approximately where I am right now while I’m writing this quick update. I’m on a train between Boston and NYC in the United States, passing through a place called New Haven.

[photo: train]

To test our hypothesis about selective citation bias, we did something quite new and unusual with the citation patterns among the reviews of neuraminidase inhibitors. We looked at 152 different reviews published since 2005, along with the 10,086 citations in their reference lists pointing at 4,574 unique articles. Two members of the team (Diana and Joel) graded each review, and when they both agreed that it presented the evidence favourably, we put it in the favourable pile. The majority of reviews (61%) ended up in this group.
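
For the curious, the data boils down to a very simple structure: one row per review, one 0/1 column per cited article, and a label recording whether both raters judged the review favourable. Here is a minimal sketch of that setup in Python with pandas – the variable names and the toy placeholder data are mine for illustration, not taken from the paper.

```python
import pandas as pd

# Toy placeholder data, just to show the shape of the problem -- the real
# dataset had 152 reviews, 4,574 unique cited articles and 10,086 citation
# pairs extracted from the reference lists.
citations = [
    ("review_01", "article_A"), ("review_01", "article_B"),
    ("review_02", "article_B"), ("review_02", "article_C"),
    ("review_03", "article_A"), ("review_03", "article_C"),
]
# 1 = both raters agreed the review presented the evidence favourably.
labels = {"review_01": 1, "review_02": 1, "review_03": 0}

df = pd.DataFrame(citations, columns=["review_id", "article_id"])
df["cited"] = 1

# Binary review-by-article citation matrix: rows are reviews,
# columns are cited articles, cells are 0/1.
X = df.pivot_table(index="review_id", columns="article_id",
                   values="cited", fill_value=0)
y = X.index.map(labels)
```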

We then did two things. First, we ran a statistical analysis to see whether we could find articles that were, by themselves, much more likely to be cited by favourable reviews. Second, we constructed a set of classifiers using supervised machine learning algorithms to see how well we could predict which reviews were favourable by looking only at the reviews’ reference lists.
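
To give a flavour of what that second step involves, here is a minimal scikit-learn sketch. The paper compared several supervised learning algorithms; plain logistic regression here is just an illustrative stand-in, and X and y carry on from the sketch above.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Fit a classifier on the binary citation features alone -- no review text.
clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)

# In-sample accuracy: how often the model's prediction matches the raters'
# favourable / not-favourable judgement on the reviews it was trained on.
print(accuracy_score(y, clf.predict(X)))
```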

What we found was relatively surprising – we could predict a favourable conclusion with an accuracy of 96.5% (in sample) using only the reference lists, without looking at the text of the reviews at all.

A further examination of the articles that were most useful (in combination) for predicting the conclusions of the reviews suggested that the reviews in the not-favourable pile tended to cite studies about viral resistance much more often than their favourable counterparts did.
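
For anyone wanting to do the same kind of inspection, one rough way (continuing the illustrative logistic-regression sketch above, not the exact models from the paper) is to rank the learned weights for each cited article:

```python
import pandas as pd

# Positive weights push a review towards "favourable";
# negative weights push it towards "not favourable".
weights = pd.Series(clf.coef_[0], index=X.columns).sort_values()

print(weights.head(10))   # citations most associated with not-favourable reviews
print(weights.tail(10))   # citations most associated with favourable reviews
```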

What we expected to find, but didn’t, was that industry-funded studies would be over-represented in favourable reviews. To me, the lack of a finding here means that the method we devised was probably better at finding what was “missing” from the reference lists of the majority rather than what was over-represented in them. The maths points the same way: with 61% of reviews in the favourable pile, an article cited mostly by favourable reviews looks much like the overall baseline, whereas an article cited mostly by the smaller not-favourable pile stands out.

So we think that applying machine learning to the metadata of published reviews could be useful for editors assessing new narrative reviews. More importantly, when faced with multiple reviews that clearly disagree with each other, these methods could help identify what is missing from a review’s reference list, and restore some balance in how the primary clinical evidence is represented in reviews and guidelines.
