Introducing evidence surveillance as a research stream

I’ve taken a little while to get this post done because I’ve been waiting for my recently-published article to go from online-first to being citeable with volume and page numbers.

Last year, I was asked to write an editorial on the topic of industry influence on clinical evidence for the Journal of Epidemiology & Community Health, presumably because I had published a few articles in the area in early 2012. It’s an area of evidence-based medicine that is very close to my heart, so I jumped at the offer.

It took quite a bit of time to find a way to set out the entire breadth of the evidence process – from the design of clinical trials all the way through to the uptake of synthesised evidence in practice. In the intervening period, I won an NHMRC grant to explore patterns of evidence and risks of bias in much more detail, and the theme of evidence surveillance as an entire stream of research started to emerge.

Together with Florence Bourgeois and Enrico Coiera, I reviewed nearly the whole process of evidence production, reporting and synthesis, identifying many of the ways in which large pharmaceutical companies can affect the direction of clinical evidence.

It’s a huge problem because industry influence can lead to the widespread use of unsafe and ineffective drugs, as well as the more subtle problems associated with ‘selling sickness’. Even if 90% of the drugs taken from development to manufacture and marketing are safe, useful and improve health around the world, there’s still that 10% that in hindsight should never have been approved in the first place.

My aim is to find them, and to do so faster than has been possible in the past. It’s what we’ve started to call evidence surveillance around here (thanks Guy Tsafnat), and that’s also what we proposed in the last section of the article.

Note: If you can’t access the full article via the JECH website, you can always have a look at the pre-print article available here on this website. It’s nearly exactly the same as the final version.

Do pharmaceutical companies have too much influence over the evidence base?

Imagine you are a doctor and you have a patient sitting with you in your office. You have already diagnosed your patient with a condition. Treatment for this condition will definitely include prescribing one or more drugs. And, because the condition is quite common, there are several government-subsidised drugs from which you can choose. Some of the drugs have only recently been approved; others have been around for more than a decade.

So what do you need to know to choose which drug to prescribe?

Well, you need to know which of the drugs is going to be most effective, which is safest, and which offers the best value [1]. Since all of the drugs you can choose from have been approved and are subsidised, presumably there have been clinical trials that have compared those drugs against each other at appropriate doses, right? And those clinical trials were conducted with good intentions and in an objective way [2]?

Well, for a lot of drugs, that is simply not the case.

In fact, around half of the drugs approved in the US do not have enough clinical trials of sufficient quality to allow doctors to answer those questions effectively [3]. And why is that so strange? Well, 75 clinical trials and 11 systematic reviews are published every day [4]. So even though there is far too much evidence for you as a doctor to ever read [5], there still isn’t enough information around to help you answer those questions. And when it comes to pharmaceutical companies, we know that the trials they conduct end up producing different results and conclusions [6] and are often designed differently, too [7]. Oh, and from memory, industry sponsors around 36% of clinical trials, and this proportion has been increasing for decades.

What’s worse is that it looks like pharmaceutical companies have disproportionate levels of control over the production of the clinical evidence that ends up informing doctors’ decision-making.

I believe that in order to affect, and hopefully improve, the way we do things, we first have to be able to measure them accurately. I mean, we all know that we can’t improve our recipes without trying them out and running a taste-test.

So, along with colleagues in the Centre for Health Informatics at UNSW, I ran a taste-test to see who is publishing these clinical trials and to get an idea of exactly where clinical evidence comes from. We took 22 common drugs in Australia and collected all of the published randomised controlled trials (RCTs) written about those drugs [8]. Then we looked at the affiliations of all of the authors to see who was directly affiliated with the pharmaceutical company making the drug.

A co-authorship network for rosiglitazone, as of 2006 when all the fuss started.

We found that when you draw the co-authorship network (authors are linked to each other if they collaborated on an RCT), the authors affiliated with the drug companies tended to be right in the middle of the network [9]. They also tended to receive more citations, and often held the right network position to reach and influence the largest and most important part of the community producing the evidence. When it comes to producing meta-analyses, reviews, guidelines and policy decisions, which parts of the evidence base do you expect to be included and carry the most weight?
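For readers who want a concrete picture of what this kind of analysis involves, here is a minimal sketch of how a co-authorship network can be built and its most central author identified. The trial author lists and names below are entirely hypothetical, and I’ve used simple degree centrality as a stand-in for the network measures used in the actual study.

```python
import itertools
from collections import defaultdict

# Hypothetical author lists, one per published RCT; "pharma_a" is a
# stand-in for an industry-affiliated author.
trials = [
    ["pharma_a", "smith", "jones"],
    ["pharma_a", "jones", "lee"],
    ["smith", "patel"],
    ["pharma_a", "lee", "patel"],
]

# Build the co-authorship graph: an edge links each pair of authors
# who appear on the same trial.
neighbours = defaultdict(set)
for authors in trials:
    for a, b in itertools.combinations(authors, 2):
        neighbours[a].add(b)
        neighbours[b].add(a)

# Degree centrality: the fraction of the other authors that each
# author has collaborated with directly.
n = len(neighbours)
centrality = {a: len(nbrs) / (n - 1) for a, nbrs in neighbours.items()}

most_central = max(centrality, key=centrality.get)
print(most_central, centrality[most_central])
```

In this toy network the industry-affiliated author ends up with the highest centrality simply by co-authoring with everyone, which is the sort of structural position the real analysis measured at scale.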

So, as a doctor making a decision about your patient’s treatment, how do you know if you can trust that guideline, the knowledge base underpinning that ‘synthesised information resource’, or even that google search [10]?

Of course, most doctors already know this and are careful about the information they assimilate: they discuss new drugs with their colleagues, or simply avoid prescribing new drugs until they have been on the market long enough to be confident that they are safe and effective. So, although we are still in very safe hands when we visit the doctor, wouldn’t it be nice if we could improve the way evidence makes its way into the decision-making process?


We’ve recently published a new article in the Journal of Clinical Epidemiology looking at networks of co-authorship for individual drugs that are commonly prescribed in Australia. Using network analysis, we found that authors directly affiliated with the pharmaceutical company producing a drug are much more likely to be central in its network, to receive a greater number of citations, and to have the potential to exert influence over the important core of authors publishing the results of clinical trials.


[1] Indeed, you would also be thinking about whether any of the drugs are different depending on your particular patient’s genotypic and phenotypic characteristics but that’s a story for another day.

[2] Better yet, there’s a database of all of the outcomes and adverse reactions that have occurred to patients around the country since the drug was introduced. But of course that’s not the case either.

[3] Goldberg, N. H., S. Schneeweiss, et al. (2011). “Availability of Comparative Efficacy Data at the Time of Drug Approval in the United States.” JAMA: The Journal of the American Medical Association 305(17): 1786-1789.

[4] Bastian, H., P. Glasziou, et al. (2010). “Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up?” PLoS Medicine 7(9): e1000326.

[5] Fraser, A. G. and F. D. Dunstan (2010). “On the impossibility of being expert.” BMJ 341.

[6] Yank, V., D. Rennie, et al. (2007). “Financial ties and concordance between results and conclusions in meta-analyses: retrospective cohort study.” BMJ 335(7631): 1202-1205.

[7] Lathyris, D. N., N. A. Patsopoulos, et al. (2010). “Industry sponsorship and selection of comparators in randomized clinical trials.” European Journal of Clinical Investigation 40(2): 172-182.

[8] We also included reviews and meta-analyses that collect up RCTs and use them to produce conclusions about the safety and efficacy of the drugs.

[9] Dunn, A. G., B. Gallego, E. Coiera (2012). “How industry influences evidence production in collaborative research communities: a network analysis.” Journal of Clinical Epidemiology: In Press.

[10] Yes, 69% of general practitioners search on Google and Wikipedia weekly, compared to 32% who consult original research weekly. O’Keeffe, J., J. Willinsky, et al. (2011). “Public Access and Use of Health Research: An Exploratory Study of the National Institutes of Health (NIH) Public Access Policy Using Interviews and Surveys of Health Personnel.” J Med Internet Res 13(4): e97.