Dealing with industry’s influence on clinical evidence

I co-wrote a piece for The Conversation about a new article published in the Cochrane Database of Systematic Reviews, written by Andreas Lundh and other luminaries from the research area. The authors showed that industry-sponsored clinical trials more often report positive outcomes and fewer harmful side effects.

The most interesting result from the article was that the biases that make industry-funded clinical trials more likely to produce positive results could not be accounted for using the standard tools that measure bias. This is remarkable because it strongly hints that industry funding is an independent source of heterogeneity in the systematic reviews that include these trials.

Too bad it’s the 12th of the 12th 2012 and the world is about to end. We won’t have time to sort it out.

(Feature image from AAP Image/Joe Castro via The Conversation – click the link)

How long does it take for new prescription drugs to become mainstream?

You probably don’t want to hear your doctor proclaiming “I’m so indie, I was prescribing that *way* before it was cool.” Or maybe you do?

If you’re a hipster and you need to know when a particular band stops being underground and teeters on the edge of the mainstream (so you can only like it in the cool, ironic way), then you’d want to know how quickly it passes from the early adopters into the hands of the masses.

It’s the same for prescribed drugs in primary care – we want to know how long it takes for new prescription drugs to become part of mainstream practice.

tl;dr – we had a go at working out how long it takes for prescription drugs to be fully adopted in Australia, and published it here.

We already know quite a lot about how an individual prescriber decides to change his or her prescribing behaviour when a new drug appears on the Pharmaceutical Benefits Scheme (PBS). Sometimes it has something to do with the evidence produced in clinical trials and aggregated in systematic reviews. But often it is all about what the prescriber’s colleagues are saying and the opinions of influential people and companies.

It’s evidence of social contagion. And it’s been shown to be important for innovations in healthcare.

What we haven’t seen are good models for describing (or even better, predicting) the rate of adoption within a large population the size of a country. So in a new paper in BMC Health Services Research I wrote about a well-studied model and its application to prescription volumes in Australian general practice. Together with some of my more senior colleagues, I applied a simple model to more than a hundred drugs introduced in Australia since 1996.

It turns out that, in Australia, the average drug takes more than eight years to reach a steady level of prescriptions.

The model is arguably too simple. It assumes an initial external ‘push’, which falls away as social contagion grows. The problem is that these external pushes don’t all happen at once when a new drug is listed on the PBS; they are more likely a series of perturbations corresponding to new evidence, new marketing efforts, the arrival of competing drugs, and changes in restrictions. So while the model produces some very accurate curves that match the adoption we have seen historically, it wouldn’t be particularly good at predicting adoption from early information alone.
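The shape described above (an external push that gives way to social contagion, flattening out at a ceiling) is the classic Bass diffusion form. Here is a minimal sketch of that idea; the coefficient values and the `bass_adoption` helper are illustrative assumptions for this post, not the model or parameters fitted in the paper.

```python
# Minimal sketch of a Bass-style diffusion curve. The coefficients below
# (p = external 'push', q = social contagion) are illustrative assumptions,
# not values fitted in the paper.
def bass_adoption(p, q, ceiling, months):
    """Euler steps of dN/dt = (p + q*N/M) * (M - N), one step per month."""
    n = 0.0
    curve = []
    for _ in range(months):
        n += (p + q * n / ceiling) * (ceiling - n)
        curve.append(n)
    return curve

# A hypothetical drug with a prescribing ceiling of 10,000 per month,
# tracked over ten years.
curve = bass_adoption(p=0.002, q=0.12, ceiling=10000, months=120)
```

With a weak external push (small `p`) and stronger contagion (larger `q`), the curve is S-shaped: slow early uptake, rapid growth once prescribers start imitating their colleagues, then a plateau near the ceiling, which is the multi-year pattern described above.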

For that, I think we need to create a strong link between the decision-making of individuals and the structure of the network through which diffusion of information and social contagion flows. I’ve started something like this already. I think we still have quite a way to go before we can work out why some drugs take over a decade and some are adopted within a couple of years.

Learning from “Learning from Hackers”

Alongside colleagues (Enrico Coiera and Richard Day) from here in Sydney and (Kenneth Mandl) from near Boston in the US, I wrote an article for Science Translational Medicine in which we related the current system of “clinical trial evidence translation” to the very successful open source software movement. We highlighted the factors in that success – open access, incentives for participation, and interoperability of source code.

In the article, we drew parallels between the production of source code for open source software and the “source code” of clinical trials – the patient level data that says how well an intervention worked for each patient. If the source code of clinical trials were to be made more widely available, we could start to answer much more interesting questions, more accurately. We think it has the potential to dramatically improve the speed at which we detect unsafe drugs, and help doctors provide the right drugs to the right patients.

Just so that I can keep a record, here is a rough timeline of what happened in the media after the article was published:

  • The article was published in Science Translational Medicine on the 2nd of May in the US (early am on the 3rd in Sydney time).
  • The article was covered by the Sydney Morning Herald on page 15 of Thursday’s (3rd May 2012) edition.
  • Joshua Gliddon was very quick to call me up for a chat about the article, and wrote a nice piece about it.
  • Enrico Coiera and I wrote a piece for the Sydney Morning Herald’s National Times talking about the article in more detail (published online on the 4th May 2012).
  • The article was also covered by Higher Education section of The Australian on Friday (4th May 2012).
  • Australian Life Scientist collected a wide selection of information and wrote a summary of the article and our comments (the first recorded example of the phrase “all information should be free” that I found was in Levy’s Hackers, published in 1984, which would pre-date Woz, I believe).
  • @RyanMFierce found irony in the publication because it argues for open data and was published behind a paywall.
  • The article was mentioned in the introduction to a piece on sharing in genetics on The Conversation (an excellent outlet), which quickly became the most read article on the website (3rd May 2012).
  • A summary of the SMH National Times story and the article appeared on Open Health News (4th May 2012).
  • Here is the original media release from UNSW.

Hopefully once this burst of activity falls away, it will leave some lasting resonance and help convince a few people to think harder about how we can fix the problems of evidence translation.

I learnt a couple of lessons from the media activity surrounding the publication. Firstly, I learnt that it is impossible to control the message from your own work – people will read whatever they want and will probably focus on sections you thought were less important. There’s nothing you can do about it other than to faithfully represent your work and push your own agenda. I also learnt that there is a wide and diverse group of people already dealing with open access issues in clinical trial data – many more than I originally realised when I wrote the piece.

The next steps in the research will include learning about how far we can push the limits of patient-level meta-analysis by pooling clinical trial data in clever ways, while maintaining rigorous de-identification. Eventually we may even be able to automate the rapid integration of new evidence into organic, linked and dynamic systematic reviews and guidelines, customised for groups or even individuals.

Repost: Pharma’s influence over published clinical evidence

Below is a copy of an article I wrote for The Conversation. It’s an independent source of information and analysis about things that matter – but from the university and research perspective, which means it’s generally more rigorous than much of the rest.
This article was originally published at The Conversation. Read the original article.
TRANSPARENCY AND MEDICINE – A series examining issues from ethics to the evidence in evidence-based medicine, the influence of medical journals to the role of Big Pharma in our present and future health.

Here Adam Dunn discusses his research into authorship networks, which revealed the position of industry researchers in academic publishing.

There’s growing concern that large pharmaceutical companies are capable of undermining the truth about the published evidence doctors use to treat patients. The suspicion is that pharmaceutical companies may be trading lives for profits.

Clinical trials are one of the main sources of information that guide doctors when they treat patients. But controversial drug withdrawals have given doctors good reasons to be sceptical about the evidence that reaches them, and eroded their trust in the evidence base.

Vioxx gave us the quintessential story of what can go wrong when a big pharmaceutical company exerts influence over the evidence base. The arthritis drug was prescribed millions of times in Australia before it was revealed that it doubled the risk of heart attack. Vioxx was withdrawn in 2004, but the evidence showing its harmful effects was available years earlier.

So when looking for someone to blame, the fingers of prominent academics point directly at the pharmaceutical industry. But are their views justified?

With colleagues from the Centre for Health Informatics, I used network analysis to investigate clinical trial collaboration for a selection of widely prescribed drugs. Much like the way network pictures of Twitter or Facebook are drawn, we connected researchers who had worked together in a clinical trial.

We wanted to see how important each researcher was in their network, especially those who were affiliated with pharmaceutical companies that manufacture the drugs they study.

Our results showed that industry-based authors of clinical trials held more influential positions in their collaborative networks. These authors also received more citations than their non-industry peers.

We concluded that when it comes to clinical trials about drugs, industry researchers occupy influential positions, and their work is more widely cited. These conclusions left us feeling very uneasy about clinical evidence.

It appears that pharmaceutical companies hold disproportionate power over the production of the evidence that supports the safety and efficacy of their own drugs.

Those familiar with clinical trials might ask how this could happen when clinical trials are registered under strict protocols and published after rigorous peer-review processes. In other words, if clinical trials are so tightly controlled, how can they be manipulated to show a drug is safe when it’s not?

The simple answer is that industry groups do trials differently. Industry-sponsored trials are less likely to publish negative results and more likely to be designed to produce positive results in the first place. On top of that, industry is responsible for more evidence now than ever before – over a third of registered clinical trials each year are now funded by pharmaceutical companies.

When important evidence is designed to provide only positive conclusions, the data proving a drug’s safety is simply not made available. This is exactly what happened when the diabetes drug Avandia was shown to increase the risk of heart failure in 2007. Even after a decade on the market, there were simply not enough data available to show the long-term risks or benefits.

Avandia was a key piece of the evidence puzzle for our research because it revealed the clear and direct negative effect of industry influence. An analysis of articles about the drug revealed that researchers with financial conflicts of interest continued to write favourably about the drug even after the negative evidence was published.

And although Avandia was withdrawn in the United Kingdom and New Zealand, it remains available (albeit under much tighter controls) in Australia and the United States.

So it seems that the lessons from this case may not have been learned. How can we know where and when industry influence will next tip the evidence balance in favour of another harmful drug?

We’ll need to know more than whether or not clinical trials demonstrate safety and efficacy – we need to know if the right kinds of clinical trials were done in the first place.

This is the eighth part of Transparency and Medicine. You can read the previous instalments by clicking the links below:

Part One: Power and duty: is the social contract in medicine still relevant?

Part Two: Big debts in small packages – the dangers of pens and post-it notes

Part Three: Show and tell: conflicts of interest undeclared for clinical guidelines

Part Four: Eminence or evidence? The ethics of using untested treatments

Part Five: Don’t show me the money: the dangers of non-financial conflicts

Part Six: Ghosts in the machine: better definition of author may stem bias

Part Seven: Clearing the air: why more retractions are good for science

Feature photo EPA/TANNEN MAURY via The Conversation

Do pharmaceutical companies have too much influence over the evidence base?

Imagine you are a doctor and you have a patient sitting with you in your office. You have already diagnosed your patient with a condition. Treatment for this condition will definitely include prescribing one or more drugs. And, because the condition is quite common, there are several government-subsidised drugs from which you can choose. Some of the drugs have only recently been approved; the others have been around for more than a decade.

So what do you need to know to choose which drug to prescribe?

Well, you need to know which of the drugs is going to be most effective, which is safest, and which offers the best value [1]. Since all of the drugs you can choose from have been approved and are subsidised, presumably there have been clinical trials comparing those drugs against each other at appropriate doses, right? And those clinical trials were conducted with good intentions and in an objective way [2]?

Well, for a lot of drugs, that is simply not the case.

In fact, around half of the drugs approved in the US do not have enough clinical trials of sufficient quality to allow doctors to answer those questions [3]. And here’s what’s strange: every day, 75 clinical trials and 11 systematic reviews are published [4]. So even though there is far too much evidence for you as a doctor to ever read [5], there still isn’t enough of the right information to answer those questions. And when it comes to pharmaceutical companies, we know that the trials they conduct end up producing different results and conclusions [6] and are often designed differently, too [7]. Oh, and from memory, industry sponsors around 36% of clinical trials, and this number has been increasing for decades.

What’s worse is that it looks like pharmaceutical companies have disproportionate levels of control over the production of the clinical evidence that will end up in the doctors’ decision-making.

I believe that in order to affect and hopefully improve the way we do things, we have to first be able to accurately measure them. I mean, we all know that we can’t improve our recipes without trying them out and having a taste-test.

So, along with colleagues in the Centre for Health Informatics at UNSW, I did a taste-test to see who is publishing these clinical trials and to get an idea of exactly where clinical evidence comes from. We took 22 drugs commonly prescribed in Australia and collected all of the published randomised controlled trials (RCTs) written about them [8]. Then we looked at the affiliations of all of the authors to see who was directly affiliated with the pharmaceutical company making the drug.

A co-authorship network for rosiglitazone, as of 2006 when all the fuss started.

We found that when you draw the network of co-authorship (authors are linked to each other if they collaborated on an RCT), the authors affiliated with the drug companies tended to be right in the middle of the network [9]. They also tended to receive more citations and often had the right network position to reach and influence the largest and most important part of the community producing the evidence. When it comes to producing meta-analyses, reviews, guidelines and policy decisions, which parts of the evidence base do you expect to be included and to carry the most weight?
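To make the co-authorship construction concrete, here is a toy sketch using simple degree centrality. The author names, affiliations and trial lists below are entirely made up, and the paper itself worked with real trial data and richer network measures than this.

```python
from collections import defaultdict
from itertools import combinations

# Toy data: each entry lists the authors of one hypothetical RCT, and
# 'industry' marks authors with a direct company affiliation. All made up.
trials = [
    ["A", "B", "C"],
    ["A", "C", "D"],
    ["A", "E"],
    ["D", "E", "F"],
]
industry = {"A"}

# Link every pair of co-authors within each trial.
neighbours = defaultdict(set)
for authors in trials:
    for u, v in combinations(authors, 2):
        neighbours[u].add(v)
        neighbours[v].add(u)

# Degree centrality: the fraction of all other authors that each author
# is directly linked to.
n_authors = len(neighbours)
centrality = {a: len(nbrs) / (n_authors - 1) for a, nbrs in neighbours.items()}
```

In this toy network the industry-affiliated author “A” has co-authored with four of the five other authors and so sits at the centre of the network, which is the qualitative pattern described above.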

So, as a doctor making a decision about your patient’s treatment, how do you know if you can trust that guideline, the knowledge base underpinning that ‘synthesised information resource’, or even that google search [10]?

Of course, most doctors already know this and are careful about the information they assimilate: they discuss new drugs with their colleagues, or simply don’t prescribe new drugs until they have been on the market long enough to be sure they are safe and effective. So, although we are still in very safe hands when we visit the doctor, wouldn’t it be nice if we could improve the way evidence makes its way into the decision-making process?


We’ve recently published a new article in the Journal of Clinical Epidemiology looking at networks of co-authorship for individual drugs that are commonly prescribed in Australia. Using network analysis, we found that authors who are directly affiliated with the pharmaceutical companies that produce the drug are much more likely to be central in their networks, to receive a greater number of citations, and to have the potential to exert influence over the important core of authors publishing the results of clinical trials.


[1] Indeed, you would also be thinking about whether any of the drugs are different depending on your particular patient’s genotypic and phenotypic characteristics but that’s a story for another day.

[2] Better yet, there’s a database of all of the outcomes and adverse reactions that have occurred to patients around the country since the drug was introduced. But of course that’s not the case either.

[3] Goldberg, N. H., S. Schneeweiss, et al. (2011). “Availability of Comparative Efficacy Data at the Time of Drug Approval in the United States.” JAMA: The Journal of the American Medical Association 305(17): 1786-1789.

[4] Bastian, H., P. Glasziou, et al. (2010). “Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up?” PLoS Medicine 7(9): e1000326.

[5] Fraser, A. G. and F. D. Dunstan (2010). “On the impossibility of being expert.” BMJ 341.

[6] Yank, V., D. Rennie, et al. (2007). “Financial ties and concordance between results and conclusions in meta-analyses: retrospective cohort study.” BMJ 335(7631): 1202-1205.

[7] Lathyris, D. N., N. A. Patsopoulos, et al. (2010). “Industry sponsorship and selection of comparators in randomized clinical trials.” European Journal of Clinical Investigation 40(2): 172-182.

[8] We also included reviews and meta-analyses that collect up RCTs and use them to produce conclusions about the safety and efficacy of the drugs.

[9] Dunn, A. G., B. Gallego, E. Coiera (2012). “How industry influences evidence production in collaborative research communities: a network analysis.” Journal of Clinical Epidemiology: In Press.

[10] Yes, 69% of general practitioners search on Google and Wikipedia weekly, compared to 32% who consult original research weekly. O’Keeffe, J., J. Willinsky, et al. (2011). “Public Access and Use of Health Research: An Exploratory Study of the National Institutes of Health (NIH) Public Access Policy Using Interviews and Surveys of Health Personnel.” J Med Internet Res 13(4): e97.

Financial conflicts of interest in guidelines

A new study published in the BMJ shows the prevalence of financial conflicts of interest among the panel members who produce clinical guidelines. For consumers of healthcare delivery (that means everyone), it is valuable to know that doctors get their information from guidelines, and that about half of the people developing those guidelines have financial conflicts of interest (e.g. they receive money from pharmaceutical companies). The fact that this is not a surprise is probably the most worrying issue.

This is the second time that we’ve heard that journals have become “an extension of the marketing arm of pharmaceutical companies”.

Unfortunately, it’s a double-edged sword: many talented people do excellent work and also receive money from pharmaceutical companies. Removing everyone with a financial conflict of interest would remove their talent from the construction of evidence and guidelines.