How to predict the conclusion of a review without even reading it…

Short version: We published a new article in the Journal of Clinical Epidemiology all about selective citation in reviews of neuraminidase inhibitors – like Tamiflu and Relenza.

Lots of reviews get written about drugs (especially the ones that get prescribed often), and the drugs used to treat and prevent influenza are no exception. There are more reviews than there are randomised controlled trials, and I think it is hard to justify why doctors and their patients would need so many different interpretations of the same evidence. When the reviews pile up like this, I like to call it “flooding”.

The reason there are so many reviews probably has something to do with a problem that has been written about many times over by people far more eloquent than I am: marketing disguised as clinical evidence.

We recently undertook some research to try to understand how authors of reviews (narrative and systematic) manage to come up with conclusions that appear to be diametrically opposed. For the neuraminidase inhibitors (e.g. Tamiflu, Relenza), conclusions ranged from recommending early use in anyone who looks unwell, or massive global stockpiling for preventative use, to questioning whether the drugs should be used in clinical practice at all and raising safety concerns. We hypothesised that one of the ways these differences could manifest in reviews was through something called selective citation bias.

Selective citation bias happens when authors of reviews are allowed to pick and choose what they cite in order to present the evidence in ways that fit their predetermined views. And of course, we often associate this problem with conflicts of interest. This has in the past led to drugs being presented as safe and effective (repeatedly) when they simply aren’t.

By the way, here’s a picture of approximately where I am right now while I’m writing this quick update. I’m on a train between Boston and NYC in the United States, passing through a place called New Haven.

[Image: the view from the train]

To test our hypothesis about selective citation bias, we did something quite new and unusual using the citation patterns among the reviews of neuraminidase inhibitors. We looked at 152 different reviews published since 2005, as well as the 10,086 citations in the reference lists pointing at 4,574 unique articles. Two members of the team (Diana and Joel) graded the reviews as favourable or otherwise, and when they both agreed that the review presented the evidence favourably, we put that in the favourable pile. The majority of reviews (61%) ended up in this group.

We then did two things. First, we undertook a statistical analysis to see whether there were individual articles that were much more likely to be cited by favourable reviews. Second, we constructed a set of classifiers using supervised machine learning algorithms to see how well we could predict which reviews were favourable by looking only at their reference lists.
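This is not our exact pipeline, but a minimal sketch of the classification idea: assume a binary reviews-by-cited-articles matrix and a favourable/not-favourable label for each review (both shown here as random placeholders), with scikit-learn’s logistic regression standing in for whichever supervised algorithm you prefer.

```python
# Minimal sketch only: predict a review's conclusion from its reference list.
# `citations` is a binary matrix (reviews x unique cited articles) and
# `favourable` is a 0/1 label per review; both are random placeholders here,
# not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
citations = rng.integers(0, 2, size=(152, 500))  # stand-in for real data
favourable = rng.integers(0, 2, size=152)        # stand-in labels

clf = LogisticRegression(penalty="l1", solver="liblinear")

# In-sample accuracy (fit and score on the same reviews)...
in_sample = clf.fit(citations, favourable).score(citations, favourable)

# ...versus cross-validated accuracy, the more honest estimate.
cv_accuracy = cross_val_score(clf, citations, favourable, cv=10).mean()
print(f"in-sample: {in_sample:.3f}, cross-validated: {cv_accuracy:.3f}")
```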

What we found was quite surprising – we could predict a favourable conclusion with an accuracy of 96.5% (in sample) using only the reference lists, without actually reading the text of the review at all.

A further examination of the articles that were most useful (in combination) for predicting the conclusions of the reviews suggested that reviews in the not-favourable pile tended to cite studies about viral resistance much more often than their favourable counterparts did.
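Continuing the sketch above (reusing clf, citations and favourable from it, and again an assumption about approach rather than a description of our analysis), one simple way to surface the cited articles doing the most work in a fitted linear model is to rank its coefficients:

```python
# Sketch: which cited articles push predictions towards "favourable"
# (positive weights) or "not favourable" (negative weights)?
# `article_ids` is a hypothetical mapping from matrix columns to articles.
import numpy as np

clf.fit(citations, favourable)
weights = clf.coef_.ravel()
article_ids = [f"article_{j}" for j in range(citations.shape[1])]  # placeholder

for j in np.argsort(np.abs(weights))[::-1][:10]:
    direction = "favourable" if weights[j] > 0 else "not favourable"
    print(f"{article_ids[j]}: {weights[j]:+.2f} ({direction})")
```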

What we expected to find, but didn’t, was that industry-funded studies would be over-represented in favourable reviews. To me, the lack of a finding here means that the method we devised was probably better at finding what was “missing” from the reference lists of the majority rather than what is over-represented in the majority. The maths on this makes sense too.

So we think that applying machine learning to the metadata from published reviews could be useful for editors assessing new narrative reviews. More importantly, when faced with multiple reviews that clearly disagree with each other, these methods could help identify what each one is missing, restoring some balance in how the primary clinical evidence is represented in reviews and guidelines.

Media collection about conflicts of interest in systematic reviews of neuraminidase inhibitors

As usual, I’m keeping a record of major stories in the media related to a recently published paper. I will continue to update this post to reflect the media response to our article in the Annals of Internal Medicine.

When I checked last (24 November 2014), the Altmetric score was 91. Here’s the low-tech way to check that…

[Screenshot: the Altmetric badge for the article]
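If you would rather script the check than squint at the badge, the public Altmetric API can return the same number. A hedged sketch follows; the DOI is a placeholder and the response field names are an assumption to verify against the Altmetric API documentation.

```python
# Sketch: look up an Altmetric score for a DOI via the public v1 API.
# The DOI below is a placeholder, and the "score" field in the JSON
# response is assumed from the public API documentation.
import requests

doi = "10.xxxx/placeholder"  # replace with the article's actual DOI
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
if resp.ok:
    print("Altmetric score:", resp.json().get("score"))
else:
    print(f"No Altmetric record found (HTTP {resp.status_code})")
```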

Should we ignore industry-funded research in clinical medicine?

A quick update to explain our most recent editorial [pdf] on evidence-based medicine published in the Journal of Comparative Effectiveness Research. It’s free for anyone to access.

What do we know?

Industry-funded research does not always lead to biases that are detrimental to the quality of clinical evidence, and biased research can come about for many reasons other than financial conflicts of interest. But there are clear and systematic differences in the research produced by people who have strong financial ties to pharmaceutical companies and other groups. These differences have in the past been connected to problems like massive opportunity costs (ineffective drugs) and widespread harm (unsafe drugs).

[Spoiler alert]

What do we think?

Our simple answer is no, we don’t think industry-funded research should be excluded from comparative effectiveness research. To put it very simply, around half of the completed trials undertaken each year are funded by industry, and despite the overwhelming number of published trials, we still don’t have anywhere near enough of them to properly answer all the questions that doctors and patients have when trying to make decisions together.

Instead, we think improvements in transparency and access to patient-level data, the surveillance of risks of bias, and new methods for combining evidence and data from all available sources at once are much better alternatives. You can read more about all of these in the editorial.

Bonus:

Also, check out the new article from our group on automated citation snowballing, published in the Journal of Medical Internet Research. It describes a recursive search-and-retrieval method that finds peer-reviewed articles online, downloads them, extracts their reference lists, and then follows those references to find and retrieve further articles. It is particularly interesting because it can automatically construct a citation network starting from a single paper.
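The paper describes the full system; below is only a toy sketch of the recursion it relies on, with find_and_download and extract_references left as hypothetical placeholders for the retrieval and reference-parsing steps (they are not functions from the published tool).

```python
# Toy sketch of citation snowballing: start from one seed article, extract
# its reference list, then follow each reference (breadth-first) to retrieve
# more articles, accumulating a citation network along the way.
from collections import deque

def find_and_download(article_id):
    """Hypothetical placeholder: locate and download an article's full text."""
    raise NotImplementedError

def extract_references(full_text):
    """Hypothetical placeholder: parse a reference list into article ids."""
    raise NotImplementedError

def snowball(seed_id, max_depth=2):
    citation_edges = set()          # (citing article, cited article) pairs
    queue = deque([(seed_id, 0)])
    seen = {seed_id}
    while queue:
        article_id, depth = queue.popleft()
        if depth >= max_depth:
            continue
        full_text = find_and_download(article_id)
        for cited_id in extract_references(full_text):
            citation_edges.add((article_id, cited_id))
            if cited_id not in seen:
                seen.add(cited_id)
                queue.append((cited_id, depth + 1))
    return citation_edges
```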

Guerilla open access, public engagement with research, and ivory towers

Despite the growth of open access publishing, there is still a massive and growing archive of peer-reviewed research hidden behind paywalls. While academics can reach most of the research they need through library subscriptions, researchers, professionals and the broader community outside academia are effectively cut off from the vast majority of peer-reviewed research. The growth of file-sharing communities transformed the entertainment industry more than fifteen years ago; is a similar transformation in academic publishing inevitable?

Together with Enrico Coiera and Ken Mandl, I published an article today in the Journal of Medical Internet Research. In the article, we considered the plausibility and consequences of a massive data breach and leak of journal articles onto peer-to-peer networks, and the creation of a functioning decentralised network of peer-reviewed research. Considering a hypothetical Biblioleaks scenario, we speculated on the technical feasibility and the motivations that underpin civil disobedience in academic publishing.

It appears as though academics are not providing pre-print versions of their articles anywhere near as often as they could. For every 10 articles published, 2 or 3 can be found online for free, but up to 8 of them could be uploaded by the authors legally (this is called self-archiving, where authors upload pre-print versions of their manuscripts). Civil disobedience in relation to sharing articles is still quite rare. Examples of article-sharing on Twitter and via torrents have emerged in the last few years, but only a handful of people are involved. There is not yet a critical mass of censorship-resistant sharing that would signal a shift into an era of near-universal access like the one the entertainment industry went through in the late 1990s.

However, as the public come to expect free access to all research as the norm rather than the exception, it becomes more likely that an article-sharing underground will be created from outside academia. What is unknown is whether or not the public actually want to access peer-reviewed research directly. From the little evidence available on this question, it seems that doctors, patients, professionals of all kinds, and the broader community might all benefit from an underground network of article-sharing. It might even serve to reduce the gap between research consensus and public opinion on issues like climate change and vaccination, where large sections of the broader community disagree with the overwhelming majority of scientific experts.

Given the size of recent hacks on major companies, there appear to be no technical barriers to a massive data breach and leak. However, by removing the motivations behind a Biblioleaks scenario, publishers and researchers might be able to avoid (or skip over) a period of illegal file-sharing. University librarians could build the servers that seed the torrents for pre-prints, helping to ensure quality control and improving the impact of the research in the wider community. Researchers can and should learn the self-archiving policies that cover their work and upload their manuscripts as soon as they are entitled or obliged to do so. Prescient publishers might find ways to freely release older articles on their own websites to avoid losing traffic and advertising revenue.

Neuropsych trials involving kids are designed differently when funded by the companies that make the drugs

Over the short break that divided 2013 and 2014, we had a new study published looking at the designs of neuropsychiatric clinical trials that involve children. Because we study trial registrations and not publications, many of the trials that are included in the study are yet to be published, and it is likely that quite a few will never be published.

Neuropsychiatric conditions are a big deal for children and make up a substantial proportion of the burden of disease. In the last decade or so, more and more drugs are being prescribed to children to treat ADHD, depression, autism spectrum disorders, seizure disorders, and a few others. The major problem we face in this area right now is the lack of evidence to help guide the decisions that doctors make with their patients and their patients’ families. Should kids be taking Drug A? Why not Drug B? Maybe a behavioural intervention? A combination of these?

I have already published a few things about how industry-funded and non-industry-funded clinical trials differ. To look at how clinical trials differ based on who funds them, we often use the clinicaltrials.gov registry, which currently holds records for about 158,000 registered trials, roughly half of them conducted in the US and half conducted entirely outside it.
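As a rough illustration of the kind of split we look at (not the study’s actual analysis), here is a sketch that assumes you have exported registry records to a CSV; the file name and the lead_sponsor_class and phase columns are assumptions about your own export, not fields guaranteed by the registry.

```python
# Sketch: tally registered trials by funder type from an exported CSV of
# ClinicalTrials.gov records. The file name and column names are assumed.
import pandas as pd

trials = pd.read_csv("registered_trials.csv")

# Collapse sponsor classes into a simple industry vs non-industry split.
trials["funder"] = trials["lead_sponsor_class"].map(
    lambda c: "industry" if str(c).strip().lower() == "industry" else "non-industry"
)

print(trials["funder"].value_counts())
print(trials.groupby(["funder", "phase"]).size().unstack(fill_value=0))
```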

Some differences are generally expected (by cynical people like me) because industry and non-industry groups have different reasons for doing a trial in the first place. We expect industry trials to focus on the sponsor’s own drugs, to be shorter, to concentrate on outcomes directly related to what the drug claims to do (e.g. lowering cholesterol rather than reducing cardiovascular risk), and of course to be designed so that they nearly always produce a favourable result for the drug in question.

For non-industry groups, there is a kind of hope that clinical trials funded by the public will be for the public good – to fill in the gaps by doing comparative effectiveness studies (where drugs are tested against each other, rather than against a placebo or in a single group) whenever they are appropriate, to focus on real health outcomes in the population, and to be capable of identifying risk-to-benefit ratios for drugs that have had questions raised about safety.

The effects of industry sponsorship on clinical trial designs for neuropsychiatric drugs in children

So those differences you might expect to see between industry and non-industry groups are not quite what we found in our study. For clinical trials that involve children and test drugs used for neuropsychiatric conditions, there really isn’t much difference between what industry chooses to study and what everyone else does. So even though we did find that industry is less likely to undertake comparative effectiveness trials for these conditions, and the different groups tend to study completely different drugs, the striking result is just how little comparative effectiveness research is being done by either group.

[Figure 3 from the paper]

A network view of the drug trials undertaken for ADHD by industry (black) and non-industry (blue) groups – each drug is a node in the network; lines between them are the direct comparisons from trials with active comparators.
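A figure like this can be assembled fairly directly from the trial records. Here is a minimal sketch using networkx, where the comparisons list is made-up illustrative data standing in for the (drug, drug, funder) pairs extracted from registrations with active comparators.

```python
# Sketch: build a drug-comparison network like the ADHD figure.
# Each drug is a node; an edge means at least one head-to-head comparison.
# The `comparisons` list is illustrative only, not the study's data.
import matplotlib.pyplot as plt
import networkx as nx

comparisons = [
    ("drug_a", "drug_b", "industry"),
    ("drug_a", "drug_c", "non-industry"),
    ("drug_b", "drug_c", "non-industry"),
]

graph = nx.Graph()
for drug_1, drug_2, funder in comparisons:
    graph.add_edge(drug_1, drug_2, funder=funder)

# Colour edges by funder, as in the figure (industry black, non-industry blue).
edge_colours = ["black" if d["funder"] == "industry" else "blue"
                for _, _, d in graph.edges(data=True)]
nx.draw_networkx(graph, edge_color=edge_colours, node_color="lightgrey")
plt.show()
```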

To make a long story short, it doesn’t look like either side is doing a very good job of systematically addressing the questions that doctors and their patients really need answered in this area.

Some of the reasons for this probably include the way research is funded (small trials might be easier to fund and undertake), the difficulties of gaining ethics approval and recruiting children to take part in clinical trials, and the complexities of testing behavioural therapies and other non-drug interventions against and alongside drugs.

Of course, there are other good reasons for undertaking trials that involve a single group or only test against a placebo (including safety and ethical reasons)… but for conditions like seizure disorders, where there are already approved standard therapies known to be safe, it is quite a shock to see that nearly all of the clinical trials undertaken for seizure disorders in children are placebo-controlled or test the drug in only a single group.

What should be done?

To really improve the way we produce and then synthesise evidence for children, we need much more cooperation and smarter trial designs that will actually fill the gaps in knowledge and help doctors make good decisions. It’s true that it is very hard to fund and successfully undertake a big coordinated trial even when it doesn’t involve children, but the mess of clinical trials being undertaken today often seems to serve other purposes – to get a drug approved, to expand a market, to fill a clinician-scientist’s CV – or is constrained to the point where the design is too limited to be useful. And these problems flow directly into synthesis (systematic reviews and guidelines) because you simply can’t review evidence that doesn’t exist.

I expect that long-term clinical trials that take advantage of electronic medical records, retrospective trials, and observational studies involving heterogeneous sets of case studies will come back to prominence for as long as the evidence produced by current clinical trials is hampered by compromised design, resource constraints, and a lack of coordinated cooperation. We really do need better ways to know which questions need to be answered first, and to find better ways to coordinate research among researchers (and patient registries). Wouldn’t it be nice if we knew exactly which clinical trials are most needed right now, and we could organise ourselves into large-enough groups to avoid redundant and useless trials that will never be able to improve clinical decision-making?

Bohannon’s Science Sting – playing devil’s advocate and proposing a solution

[Update: I realise that perhaps many of you are not going to have the same perspective about the Science Sting that I have purposefully taken here (hence the title). Apologies in advance.]

It took a journalist (albeit one with a PhD in molecular biology) to reveal the extent of the peer-review problems among predatory journals. Bohannon submitted fatally flawed and boringly bad articles to a set of open access journals that charge fees to publish. Of the 255 submissions that produced a decision, 157 journals (62%) accepted the article. This did not come as a surprise to a lot of people who have been watching or interacting with this vast underworld of predatory publishing.

Simply, fee-charging predatory open access publishers make more money if they accept more articles. It’s not going to be good for business in the long-term (because of reputation problems) but it seems to be working quite well for a number of publishers right now. The need/desire for profits is also a big problem in subscription journals but often for very different reasons (where profit relies on denying access).

But is it working for them? The evidence is limited, but it suggests that business is good for predatory journals. Hindawi (previously listed as borderline predatory) apparently makes 52% profits, which is phenomenal compared to the already obscene 36% operating profits reported by Elsevier. Business is this good as a direct consequence of the publish-or-perish mentality that pervades academia. Add to that the hundreds of thousands of students graduating with PhDs each year and the low overheads associated with starting a journal, and paid gold open access becomes an attractive business proposition for any shonky operator.

Response from those heavily invested in open access

Bohannon tells us that he talked about the work with “a small group of scientists who care deeply about open access”, and in the news article explained carefully that the growth of open access has multiplied the problem of predatory journals. This is because predatory publishers are much more likely to opt for a paid gold open access model. I don’t think anyone can disagree with that. The problem is that people who care deeply about open access appear to have often interpreted the news article in a defensive way because of their particular perspectives on open access. There’s a whole bunch of responses to the article that come from the full spectrum of open access advocates and I have made a list below.

To play the devil’s advocate, here’s what they have often missed in their responses.

  • Bohannon correctly included the suggestion that targeting the low end of subscription journals could produce the same result and explicitly indicated that he did not examine them.
  • The article is directed at a particular subset of journals that charge fees and are listed on the DOAJ or on Beall’s list. The aims match the selection of journals and both are clear. At no point does Bohannon say that the chosen set represents open access generally.
  • It is an article about peer review. The fact that the targeted journals were all paid gold open access journals is important to note, but that point has very little to do with the huge problems in peer review, which we all know are pervasive (and a bigger problem in gold open access).

The results also show that Beall’s list does a reasonably good job of flagging problem publishers. It was nice to see that Beall was (mostly) vindicated in his identification of dodgy publishers. Over 80% of the journals on Beall’s list accepted the article after some kind of a review. For journals on DOAJ, the proportion was 45% – much lower, but still substantial.

It was unsurprising that Beall’s list produced the highest proportion of accepted articles and the highest proportion of missing peer review, and that DOAJ produced lower proportions of both. Glass-half-full types would have focused more heavily on this to show that there are in fact plenty of good open access journals that do charge fees and do undertake peer review (or reject on first principles), and noted that even among a targeted group of journals accused of predatory behaviour, some still undertook peer review.

So what?

There are two important things we need to remember when interpreting the results of Bohannon’s Science Sting. Firstly, predatory journals and open access journals are not synonymous, but there are a lot of predatory journals that are open access. Bohannon did not conflate them in his report and explained his methods better than most journal articles I’ve seen recently. Secondly, if open access is to eventually replace the subscription model completely, then the people best placed to tackle predatory journals using open access models are the people already measuring, curating, listing and analysing open access. Houses in order, so to speak.

DOAJ should be the first to act – directly with the journals that accepted the paper, and then with the wider group of fee-charging journals. It doesn’t matter what proportion of subscription journals *would* have accepted the paper; the problem still exists no matter how often the “lack of a proper control” or “Science conspiracy against open access” arguments get thrown around to dilute the message.

Solving the problem

If you want a radical solution to the problem of bad peer review in predatory journals, there’s an obvious one that no one seems to be suggesting:

Remove profits from publishing entirely.

The method is simple. Only index and recognise articles that are published in non-profit and flat-rate subsidised (platinum open access) journals – like some of those published by the various scientific societies. Don’t pay journals for “how much” but for “how well”. If subscription journals and for-profit open access journals can’t be cited, indexed, or contribute to career progression, then the market for predatory journals disappears.

What others have been saying

Here is a list of other responses to Bohannon’s Science Sting.

1. Graham Steel: The Publishing “Sting”, the reaction, and the outcome

2. Åse Innes-Ker: A publishing sting, but what was stung?

3. Björn Brembs: Science Magazine Rejects Data, Publishes Anecdote

4. Claire Shaw: Hundreds of open access journals accept fake science paper

5. Peter Suber: New “sting” of weak open-access journals.

6. Curt Rice: What Science — and the Gonzo Scientist — got wrong: open access will make research better

7. Martin Eve: Flawed sting operation singles out open access journals  – and a longer version

8. Ernesto Priego: Who’s Afraid of Open Access?

9. Nigel Hawkes: Spoof research paper is accepted by 157 journals

10. Michael Eisen: I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals

11. Mike Taylor (SV-POW): John Bohannon’s peer-review sting against Science

12. Fabiana Kubke (makes good points about navigating open access): Science gone bad

13. Lenny Teytelman (interesting take & new analysis): What hurts science – rejection of good or acceptance of bad?

14. The Directory of Open Access Journals: DOAJ’s response to the recent article in Science entitled “Who’s Afraid of Peer Review?”

15. Open Access Scholarly Publishers Association: OASPA’s response to the recent article in Science entitled “Who’s Afraid of Peer Review?”

16. Jeroen Bosman (an excellent description of the issues): Science Mag sting of OA journals: is it about Open Access or about peer review?

17. John Hawks (interesting take): “Open access spam” and how journals sell scientific reputation

It is also nice to see some of them disclosing their particular set of conflicts in their discussions. I have none. And here are some more reactions on Twitter and a discussion hosted by Science: