How about a systematic review that writes itself?

Guy Tsafnat, Paul Glasziou, Enrico Coiera and I have written an editorial for the BMJ on the automation of systematic reviews. I helped a bit, but the clever analogy with the ticking machines from Player Piano fell out of Guy’s brain.

In the editorial, we covered the state of the art in automating specific tasks in the process of synthesising clinical evidence. The basic problem with systematic reviews is that we waste a great deal of time and effort re-doing them when new evidence becomes available – and in many cases, a systematic review is out of date almost as soon as it is published.

The solution – borrowing an analogy from Kurt Vonnegut’s Player Piano, a dystopian science fiction novel in which ticking automata replicate the actions of human workers after observing them – is to replace standalone systematic reviews with dynamically and automatically updated reviews that change whenever new evidence becomes available.

At the press of a button.

The proposal is that after developing the rigorous protocol for a systematic review (something that is already done), the technology should be in place for clinicians to simply find the review they want, press a button, and have the most recent evidence synthesised in silico. The existing protocol determines which studies are included and how they are analysed. The aim is to dramatically improve the efficiency of systematic reviews and to improve their clinical utility by giving clinicians the best available evidence whenever they need it.
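
To make the idea concrete, here is a minimal sketch in Python of what a protocol-driven update might look like. Everything in it – the Trial and Protocol classes, the screening and synthesis steps – is a hypothetical illustration of the concept, not the design of any actual system discussed in the editorial.

```python
# A hypothetical sketch of the "button press" idea: a stored review protocol
# drives screening and re-synthesis whenever new trials appear. All names
# here are illustrative inventions, not part of any real system.

from dataclasses import dataclass, field

@dataclass
class Trial:
    id: str
    effect: float          # e.g. a log odds ratio reported by the trial
    weight: float          # e.g. an inverse-variance weight
    meets_criteria: bool   # result of applying the protocol's criteria

@dataclass
class Protocol:
    question: str
    included: list = field(default_factory=list)

    def screen(self, candidates):
        """Apply the protocol's pre-specified inclusion criteria."""
        self.included.extend(t for t in candidates if t.meets_criteria)

    def synthesise(self):
        """A simple fixed-effect style weighted average of trial effects."""
        total_weight = sum(t.weight for t in self.included)
        return sum(t.effect * t.weight for t in self.included) / total_weight

def update_review(protocol, new_trials):
    """The 'press of a button': screen new evidence, then re-run synthesis."""
    protocol.screen(new_trials)
    return protocol.synthesise()

review = Protocol(question="Drug A vs placebo for condition X")
latest = [
    Trial("NCT001", effect=-0.40, weight=12.0, meets_criteria=True),
    Trial("NCT002", effect=-0.10, weight=8.0, meets_criteria=False),
]
print(f"Updated pooled effect: {update_review(review, latest):.3f}")
```

In a real pipeline, the screening step would involve searching trial registries against the protocol’s eligibility criteria, and the synthesis would be a proper meta-analysis rather than a one-line weighted average.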

Tsafnat, G., A. G. Dunn, P. Glasziou, E. Coiera (2013). “The automation of systematic reviews.” BMJ 346: f139.

Do pharmaceutical companies have too much influence over the evidence base?

Imagine you are a doctor with a patient sitting in your office. You have already diagnosed your patient with a condition, and treatment will definitely include prescribing one or more drugs. Because the condition is quite common, there are several government-subsidised drugs from which you can choose. Some of the drugs have only recently been approved; the others have been around for more than a decade.

So what do you need to know to choose which drug to prescribe?

Well, you need to know which of the drugs is going to be most effective, which is safest, and which offers the best value [1]. Since all of the drugs you can choose from have been approved and are subsidised, presumably there have been clinical trials comparing those drugs against each other at appropriate doses, right? And those clinical trials were conducted with good intentions and in an objective way [2]?

Well, for a lot of drugs, that is simply not the case.

In fact, around half of the drugs approved in the US do not have enough clinical trials of sufficient quality to allow doctors to answer those questions effectively [3]. And why is that so strange? Well, every day 75 clinical trials and 11 systematic reviews are published [4]. So even though there is already far too much evidence for you as a doctor to ever read [5], there still isn’t enough of the right information to answer those questions. And when it comes to pharmaceutical companies, we know that the trials they conduct tend to produce different results and conclusions from independently funded trials [6], and are often designed differently, too [7]. Oh, and from memory, industry sponsors around 36% of clinical trials, and this proportion has been increasing for decades.

What’s worse is that pharmaceutical companies appear to have disproportionate levels of control over the production of the clinical evidence that feeds into doctors’ decision-making.

I believe that in order to change and hopefully improve the way we do things, we first have to be able to measure them accurately. I mean, we all know that we can’t improve our recipes without trying them out and having a taste-test.

So, along with colleagues at the Centre for Health Informatics at UNSW, I did a taste-test to see who is publishing these clinical trials and to get an idea of exactly where clinical evidence comes from. We took 22 drugs commonly prescribed in Australia and collected all of the published randomised controlled trials (RCTs) written about them [8]. Then we looked at the affiliations of all of the authors to see who was directly affiliated with the pharmaceutical company making the drug.

A co-authorship network for rosiglitazone, as of 2006 when all the fuss started.

We found that when you draw the co-authorship network (authors are linked to each other if they collaborated on an RCT), the authors affiliated with the drug companies tended to be right in the middle of the network [9]. They also tended to receive more citations, and often held the right network position to be able to reach and control the largest and most important part of the community producing the evidence. When it comes to producing meta-analyses, reviews, guidelines and policy decisions, which parts of the evidence base do you expect to be included and to carry the most weight?
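
For anyone curious about the mechanics, here is a small illustrative sketch using the networkx library – with made-up authors and trials, not the study’s actual code or data – showing how a co-authorship network can be built from trial author lists and how centrality can then be checked for industry-affiliated authors:

```python
# An illustrative example (not the study's actual code or data): build a
# co-authorship network from trial author lists, then see where the
# industry-affiliated author sits by a standard centrality measure.

from itertools import combinations
import networkx as nx

# Hypothetical author lists from three RCTs of the same drug.
trials = [
    ["Smith", "Jones", "PharmaCo_Lee"],
    ["PharmaCo_Lee", "Brown", "Jones"],
    ["Brown", "Garcia", "Smith", "PharmaCo_Lee"],
]
industry_affiliated = {"PharmaCo_Lee"}  # assumed affiliation, for illustration

G = nx.Graph()
for authors in trials:
    # Link every pair of co-authors on the same trial.
    G.add_edges_from(combinations(authors, 2))

# Rank authors by betweenness centrality (how often an author sits on the
# shortest paths between other authors in the network).
centrality = nx.betweenness_centrality(G)
for author, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    tag = "industry" if author in industry_affiliated else "academic"
    print(f"{author:15s} {tag:9s} betweenness={score:.3f}")
```

Betweenness centrality is just one of several possible measures; the point is that once the network is drawn, the question of who sits at its core becomes quantifiable.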

So, as a doctor making a decision about your patient’s treatment, how do you know whether you can trust that guideline, the knowledge base underpinning that ‘synthesised information resource’, or even that Google search [10]?

Of course, most doctors already know this: they are careful about the information they assimilate, discuss information about new drugs with their colleagues, or simply avoid prescribing new drugs until they have been on the market long enough to be confident they are safe and effective. So, although we are still in very safe hands when we visit the doctor, wouldn’t it be nice if we could improve the way evidence makes its way into the decision-making process?

tl;dr

We’ve recently published a new article in the Journal of Clinical Epidemiology looking at networks of co-authorship for individual drugs that are commonly prescribed in Australia. Using network analysis, we found that authors directly affiliated with the pharmaceutical companies producing those drugs are much more likely to be central in their networks, receive a greater number of citations, and have the potential to exert influence over the important core of authors publishing the results of clinical trials.

Notes

[1] Indeed, you would also be thinking about whether any of the drugs differ in suitability depending on your particular patient’s genotypic and phenotypic characteristics, but that’s a story for another day.

[2] Better yet, there would be a database of all of the outcomes and adverse reactions experienced by patients around the country since the drug was introduced. But of course that’s not the case either.

[3] Goldberg, N. H., S. Schneeweiss, et al. (2011). “Availability of Comparative Efficacy Data at the Time of Drug Approval in the United States.” JAMA: The Journal of the American Medical Association 305(17): 1786-1789.

[4] Bastian, H., P. Glasziou, et al. (2010). “Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up?” PLoS Medicine 7(9): e1000326.

[5] Fraser, A. G. and F. D. Dunstan (2010). “On the impossibility of being expert.” BMJ 341.

[6] Yank, V., D. Rennie, et al. (2007). “Financial ties and concordance between results and conclusions in meta-analyses: retrospective cohort study.” BMJ 335(7631): 1202-1205.

[7] Lathyris, D. N., N. A. Patsopoulos, et al. (2010). “Industry sponsorship and selection of comparators in randomized clinical trials.” European Journal of Clinical Investigation 40(2): 172-182.

[8] We also included the reviews and meta-analyses that collect RCTs and use them to draw conclusions about the safety and efficacy of the drugs.

[9] Dunn, A. G., B. Gallego, E. Coiera (2012). “How industry influences evidence production in collaborative research communities: a network analysis.” Journal of Clinical Epidemiology: In Press.

[10] Yes, 69% of general practitioners search on Google and Wikipedia weekly, compared to 32% who consult original research weekly. O’Keeffe, J., J. Willinsky, et al. (2011). “Public Access and Use of Health Research: An Exploratory Study of the National Institutes of Health (NIH) Public Access Policy Using Interviews and Surveys of Health Personnel.” J Med Internet Res 13(4): e97.

International collaboration tends to yield higher impact

International collaboration is increasing (via Research sans frontières: Nature News). This reminds me of some recent work looking at the effect of international collaboration on the prestige of publications – international collaboration tends to yield higher-impact papers in higher-impact journals.