Learning from “Learning from Hackers”

Alongside colleagues from here in Sydney (Enrico Coiera and Richard Day) and from near Boston in the US (Kenneth Mandl), I wrote an article for Science Translational Medicine in which we compared the current system of “clinical trial evidence translation” to the very successful open source software movement. We highlighted the factors behind that success – open access, incentives for participation, and interoperability of source code.

In the article, we drew parallels between the production of source code for open source software and the “source code” of clinical trials – the patient-level data that records how well an intervention worked for each patient. If the source code of clinical trials were made more widely available, we could start to answer much more interesting questions, more accurately. We think it has the potential to dramatically improve the speed at which we detect unsafe drugs, and to help doctors provide the right drugs to the right patients.

Just so that I can keep a record, here is a rough timeline of what happened in the media after the article was published:

  • The article was published in Science Translational Medicine on the 2nd of May in the US (early am on the 3rd in Sydney time).
  • The article was covered by the Sydney Morning Herald on page 15 of Thursday’s (3rd May 2012) edition.
  • Joshua Gliddon was very quick to call me up and have a chat about the article, writing a nice piece about it at ehealthspace.org
  • Enrico Coiera and I wrote a piece for the Sydney Morning Herald’s National Times talking about the article in more detail (published online on the 4th May 2012).
  • The article was also covered by the Higher Education section of The Australian on Friday (4th May 2012).
  • Australian Life Scientist collected a wide selection of information and wrote a summary of the article and our comments (the earliest recorded example of the phrase “all information should be free” that I could find is in Levy’s Hackers, published in 1984, which I believe would pre-date Woz).
  • @RyanMFierce found irony in the publication because it argues for open data and was published behind a paywall.
  • The article was mentioned in the introduction to a piece on sharing in genetics on The Conversation (an excellent outlet), which quickly became the most read article on the website (3rd May 2012).
  • A summary of the SMH National Times story and the article appeared on Open Health News (4th May 2012).
  • Here is the original media release from UNSW.

Hopefully once this burst of activity falls away, it will leave some lasting resonance and help convince a few people to think harder about how we can fix the problems of evidence translation.

I learnt a couple of lessons from the media activity surrounding the publication. Firstly, I learnt that it is impossible to control the message from your own work – people will read whatever they want and will probably focus on sections you thought were less important. There’s nothing you can do about it other than to faithfully represent your work and push your own agenda. I also learnt that there is a wide and diverse group of people already dealing with open access issues in clinical trial data – many more than I originally realised when I wrote the piece.

The next steps in the research will include learning about how far we can push the limits of patient-level meta-analysis by pooling clinical trial data in clever ways, while maintaining rigorous de-identification. Eventually we may even be able to automate the rapid integration of new evidence into organic, linked and dynamic systematic reviews and guidelines, customised for groups or even individuals.
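
To make that a little more concrete, here is a minimal sketch (in Python, using entirely synthetic data) of what a simple two-stage patient-level meta-analysis could look like: estimate the treatment effect within each trial from individual records, then pool the estimates with inverse-variance weighting. The trial sizes, event rates and helper functions are illustrative assumptions only, not the methods from any of our papers.

```python
# A minimal two-stage, patient-level ("individual patient data") meta-analysis sketch.
# All data here are synthetic; in practice each trial would contribute de-identified
# patient-level records (treatment arm, outcome) rather than summary statistics.
import numpy as np

rng = np.random.default_rng(42)

def simulate_trial(n, p_control, odds_ratio):
    """Simulate one trial's patient-level records: (treated?, event?)."""
    treated = rng.integers(0, 2, size=n)
    odds_control = p_control / (1 - p_control)
    p_treated = (odds_control * odds_ratio) / (1 + odds_control * odds_ratio)
    p = np.where(treated == 1, p_treated, p_control)
    events = (rng.random(n) < p).astype(int)
    return treated, events

def log_odds_ratio(treated, events):
    """Estimate a trial's log odds ratio and its variance from patient-level data."""
    a = np.sum((treated == 1) & (events == 1)) + 0.5  # continuity correction
    b = np.sum((treated == 1) & (events == 0)) + 0.5
    c = np.sum((treated == 0) & (events == 1)) + 0.5
    d = np.sum((treated == 0) & (events == 0)) + 0.5
    return np.log((a * d) / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

# Stage 1: estimate the effect within each (synthetic) trial.
trials = [simulate_trial(n, 0.10, 1.8) for n in (200, 500, 1200)]
effects, variances = zip(*(log_odds_ratio(t, e) for t, e in trials))

# Stage 2: combine the per-trial estimates with inverse-variance (fixed-effect) weights.
weights = 1 / np.array(variances)
pooled = np.sum(weights * np.array(effects)) / np.sum(weights)
se = np.sqrt(1 / np.sum(weights))
print(f"Pooled odds ratio: {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.2f} to {np.exp(pooled + 1.96 * se):.2f})")
```

A one-stage analysis (fitting a single model to the pooled patient-level records, with trial as a covariate) would make fuller use of the shared data, but the two-stage version above is the easiest to reason about.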

Repost: Pharma’s influence over published clinical evidence

Below is a copy of an article I wrote for The Conversation. It’s an independent source of information and analysis about things that matter – written from the university and research perspective, which means it’s generally more rigorous than much of the rest of the media.
This article was originally published at The Conversation. Read the original article.
 
TRANSPARENCY AND MEDICINE – A series examining issues from ethics to the evidence in evidence-based medicine, and from the influence of medical journals to the role of Big Pharma in our present and future health.

Here Adam Dunn discusses his research into authorship networks, which revealed the position of industry researchers in academic publishing.

There’s growing concern that large pharmaceutical companies are capable of undermining the truth about the published evidence doctors use to treat patients. The suspicion is that pharmaceutical companies may be trading lives for profits.

Clinical trials are one of the main sources of information that guide doctors when they treat patients. But controversial drug withdrawals have given doctors good reasons to be sceptical about the evidence that reaches them, and eroded their trust in the evidence base.

Vioxx gave us the quintessential story of what can go wrong when a big pharmaceutical company exerts influence over the evidence base. The arthritis drug was prescribed millions of times in Australia before it was revealed that it doubled the risk of heart attack. Vioxx was withdrawn in 2004, but the evidence showing its harmful effects was available years earlier.

So when looking for someone to blame, the fingers of prominent academics point directly at the pharmaceutical industry. But are their views justified?

With colleagues from the Centre for Health Informatics, I used network analysis to investigate clinical trial collaboration for a selection of widely prescribed drugs. Much like the way network pictures of Twitter or Facebook are drawn, we connected researchers who had worked together in a clinical trial.

We wanted to see how important each researcher was in their network, especially those who were affiliated with pharmaceutical companies that manufacture the drugs they study.
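
For readers who like to see how this sort of thing is put together, here is a toy sketch (in Python, with networkx) of the general approach: link authors who appeared together on a trial, then compare centrality measures for industry-affiliated authors against everyone else. The author names, affiliations and the specific centrality measures below are invented for illustration, not taken from the study itself.

```python
# A toy co-authorship network: authors are linked if they appeared on the same trial.
# Names and the "industry" flag are invented; the real study used published RCT
# author lists for a selection of widely prescribed drugs.
import itertools
import networkx as nx

trials = [
    ["Smith", "Jones", "PharmaCo_A"],
    ["Jones", "PharmaCo_A", "Lee", "Garcia"],
    ["Lee", "Chen"],
    ["PharmaCo_A", "Garcia", "Singh"],
]
industry_authors = {"PharmaCo_A"}

G = nx.Graph()
for authors in trials:
    G.add_edges_from(itertools.combinations(authors, 2))

# How central are industry-affiliated authors relative to everyone else?
betweenness = nx.betweenness_centrality(G)
degree = dict(G.degree())

def mean(values):
    values = list(values)
    return sum(values) / len(values) if values else 0.0

for label, group in [("industry", industry_authors),
                     ("non-industry", set(G) - industry_authors)]:
    print(f"{label}: mean degree = {mean(degree[a] for a in group):.2f}, "
          f"mean betweenness = {mean(betweenness[a] for a in group):.3f}")
```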

Our results showed that industry-based authors of clinical trials held more influential positions in their collaborative networks. These authors also received more citations than their non-industry peers.

We concluded that when it comes to clinical trials about drugs, industry researchers occupy influential positions, and their work is more widely cited. These conclusions left us feeling very uneasy about clinical evidence.

It appears that pharmaceutical companies are disproportionately powerful in coming up with the evidence to support the safety and efficacy of their own drugs.

Those familiar with clinical trials might ask how this could happen when clinical trials are registered under strict protocols and published after rigorous peer-review processes. In other words, if clinical trials are so tightly controlled, how can they be manipulated to show a drug is safe when it’s not?

The simple answer is that industry groups do trials differently. Industry-sponsored trials are less likely to publish negative results and more likely to design trials that will produce positive results in the first place. On top of that, industry is responsible for more evidence now than ever before – over a third of registered clinical trials each year are now funded by pharmaceutical companies.

When important evidence is designed to provide only positive conclusions, the data proving a drug’s safety is simply not made available. This is exactly what happened when the diabetes drug Avandia was shown to increase the risk of heart failure in 2007. Even after a decade on the market, there were simply not enough data available to show the long-term risks or benefits.

Avandia was a key piece of the evidence puzzle for our research because it revealed the clear and direct negative effect of industry influence. An analysis of articles about the drug revealed that researchers with financial conflicts of interest continued to write favourably about the drug even after the negative evidence was published.

And although Avandia was withdrawn in the United Kingdom and New Zealand, it remains available (albeit under much tighter controls) in Australia and the United States.

So it seems that the lessons from this case may not have been learned. How can we know where and when industry influence will next tip the evidence balance in the favour of another harmful drug?

We need to know more than whether clinical trials demonstrate safety and efficacy – we need to know whether the right kinds of clinical trials were done in the first place.

This is the eighth part of Transparency and Medicine. You can read the previous instalments by clicking the links below:

Part One: Power and duty: is the social contract in medicine still relevant?

Part Two: Big debts in small packages – the dangers of pens and post-it notes

Part Three: Show and tell: conflicts of interest undeclared for clinical guidelines

Part Four: Eminence or evidence? The ethics of using untested treatments

Part Five: Don’t show me the money: the dangers of non-financial conflicts

Part Six: Ghosts in the machine: better definition of author may stem bias

Part Seven: Clearing the air: why more retractions are good for science

Feature photo EPA/TANNEN MAURY via The Conversation

Who creates the clinical evidence for cholesterol-lowering drugs?

Last week the US Food and Drug Administration released new warnings about the use of statins for patients in the United States. The warnings added to labels in the US come from concerns about liver injury, memory loss and confusion, increased blood sugar levels, and some new potentially dangerous interactions between one statin (lovastatin) and a range of other drugs.

Statins are used to inhibit the production of cholesterol in the body. The leading drug in the class, atorvastatin (Lipitor, from Pfizer), is the most commonly prescribed drug in the world. In the last financial year in Australia, this drug alone cost the government AUD$637M. More than 14 out of every 100 people in Australia were taking a statin each day.

These are a class of drugs that have been around for a long, long time. Simvastatin, one of the oldest in the class, was first approved in the United States more than twenty years ago. So why are new restrictions being put on the labels of the statins now? And shouldn’t the public have been warned about these safety issues a long time ago?

It’s easy to blame the pharmaceutical industry, given its previous problems with marketing, conflicts of interest and illegal behaviour, but of course the reality is always muddier and more difficult to understand than we’d like it to be. The funding of clinical trials and the influence of industry in trial design is a spectrum – and there are different problems all the way along.

Along with some friends, I recently published a paper in Clinical Pharmacology & Therapeutics about clinical trials for antihyperlipidemics.

My friends were colleagues from my own centre, from St Vincent’s Hospital, the Children’s Hospital Informatics Program in Boston, and the Hospital for Sick Kids in Toronto. In the paper, we looked at all the trials that have been registered for these drugs on clinicaltrials.gov since 2007. Some of these trials had already been completed, some were underway, and some were still recruiting. The different patterns of trials funded by industry and non-industry tell an interesting story about the different agendas.

[Figure: Distribution of trial comparisons across drugs and outcomes]

And there were some surprising results.

Trials funded by industry were (and are) typically larger and completed more quickly. They are also more likely to focus on hyperlipidemia rather than cardiovascular outcomes, and less likely to measure safety-related outcomes. The surprise was that industry funders were more likely to register trials that directly compare two or more drugs.

This is a surprise because studies looking at publications of clinical trials find exactly the opposite – that published industry-funded clinical trials are less likely to compare between drugs. This gives us a pretty good hint about which of the trials are being undertaken and then not published.

The next surprise was that industry and non-industry trials had very similar patterns when it came to choosing which drugs to include. Despite some specific differences in the choice of drugs, publicly funded trials were just as unevenly skewed towards statins (and atorvastatin in particular), and even more likely to test a drug against a placebo instead of another drug.
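
As a rough illustration of the kind of tabulation involved, the sketch below (Python, pandas) cross-tabulates toy registry records by sponsor type, comparator and a couple of design features. The column names and records are made up; the actual analysis used records downloaded from clinicaltrials.gov.

```python
# Toy registry records: each row is one registered trial (invented data).
import pandas as pd

records = pd.DataFrame([
    # sponsor_class,  comparator,    measures_safety, enrolment
    ("industry",      "active drug", False,           1200),
    ("industry",      "active drug", False,            800),
    ("industry",      "placebo",     True,            2500),
    ("non-industry",  "placebo",     True,             150),
    ("non-industry",  "placebo",     False,            300),
    ("non-industry",  "active drug", True,             220),
], columns=["sponsor_class", "comparator", "measures_safety", "enrolment"])

# Proportion of head-to-head (drug vs drug) versus placebo comparisons by sponsor type.
print(pd.crosstab(records["sponsor_class"], records["comparator"], normalize="index"))

# Typical trial size and how often safety outcomes are measured, by sponsor type.
print(records.groupby("sponsor_class").agg(
    median_enrolment=("enrolment", "median"),
    prop_measuring_safety=("measures_safety", "mean"),
))
```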

The work suggests, or confirms, the need to answer the following questions about clinical research on cardiovascular risk and hyperlipidemia in particular:

  1. Why isn’t the pharmaceutical industry compelled to measure safety outcomes more often in clinical trials?
  2. What happens to the data from the comparative effectiveness trials undertaken by the pharmaceutical industry when they aren’t published?
  3. Why aren’t public funds directed more aggressively towards comparative effectiveness research, and towards interventions for which there isn’t already a glut of clinical trials being undertaken?

I think we need to be monitoring the clinical trials registries more closely when guiding research funding for clinical trials.

tl;dr

We’ve recently published a new paper in Clinical Pharmacology & Therapeutics that looks at all the recent clinical trials involving cholesterol-modifying drugs. Specifically, we examined the differences between industry and non-industry funded trials in terms of their design. We wanted to know how the research agendas differ across the funding spectrum, and how that affects their contribution to comparative effectiveness research.

We found that industry-funded trials were more likely to compare drugs directly against each other (a surprise given what we know about published clinical trials), tended to be larger and to finish sooner, and were less likely to examine cardiovascular risk or to measure safety outcomes.

Do pharmaceutical companies have too much influence over the evidence base?

Imagine you are a doctor and you have a patient sitting with you in your office. You have already diagnosed your patient with a condition. Treatment for this condition will definitely include prescribing one or more drugs. And, because the condition is quite common, there are several government-subsidised drugs from which you can choose. Some of the drugs have only recently been approved, and the others have been around for more than a decade.

So what do you need to know to choose which drug to prescribe?

Well, you need to know which of the drugs is going to be most effective, which is safest, and which offers the best value [1]. Since all of the drugs you can choose from have been approved and are subsidised, presumably there have been clinical trials comparing those drugs against each other at appropriate doses, right? And those clinical trials were conducted with good intentions and in an objective way [2]?

Well, for a lot of drugs, that is simply not the case.

In fact, around half of the drugs approved in the US do not have enough clinical trials of sufficient quality to allow doctors to effectively answer those questions [3]. And why is that so strange? Well, every day 75 clinical trials and 11 systematic reviews are published [4]. So even though there is way too much evidence for you as a doctor to ever be able to read [5], there still isn’t enough information around to help you answer those questions. And when it comes to pharmaceutical companies, we know that the trials they conduct end up producing different results and conclusions [6] and are often designed differently, too [7]. Oh, and from memory, industry sponsors around 36% of clinical trials, and this number has been increasing for decades.

What’s worse is that it looks like pharmaceutical companies have disproportionate levels of control over the production of the clinical evidence that will end up in the doctors’ decision-making.

I believe that in order to change, and hopefully improve, the way we do things, we first have to be able to measure them accurately. I mean, we all know that we can’t improve our recipes without trying them out and having a taste-test.

So, along with colleagues in the Centre for Health Informatics at UNSW, I did a taste-test to see who is publishing these clinical trials and to get an idea of exactly where clinical evidence comes from. We took 22 common drugs in Australia and collected all of the published randomised controlled trials (RCTs) written about those drugs [8]. Then we looked at the affiliations of all of the authors to see who was directly affiliated with the pharmaceutical company making the drug.

[Figure: A co-authorship network for rosiglitazone, as of 2006 when all the fuss started]

We found that when you draw the network of co-authorship (authors are linked to each other if they collaborated on an RCT), the authors affiliated with the drug companies tended to be right in the middle of the network [9]. They also tended to receive more citations and often held the right network position to reach and control the largest and most important part of the community producing the evidence. When it comes to producing meta-analyses, reviews, guidelines and policy decisions, which parts of the evidence base do you expect to be included and to carry the most weight?
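
One way to make the idea of “the most important part of the community” concrete is a k-core decomposition: peel away loosely connected authors until only the densest core remains, then check who is left. The sketch below is self-contained, with a toy graph and a single invented industry-affiliated author; it is not the measure used in the paper, just an illustration of the concept.

```python
# Check whether industry-affiliated authors sit in the densest ("main") core of a
# toy co-authorship network. Graph edges and the industry flag are invented.
import networkx as nx

edges = [("Smith", "Jones"), ("Jones", "PharmaCo_A"), ("Smith", "PharmaCo_A"),
         ("PharmaCo_A", "Garcia"), ("Garcia", "Jones"), ("Lee", "Chen"),
         ("Chen", "Smith")]
G = nx.Graph(edges)
industry_authors = {"PharmaCo_A"}

core_number = nx.core_number(G)  # the highest k for which each node sits in a k-core
k_max = max(core_number.values())
main_core = {node for node, k in core_number.items() if k == k_max}

print(f"Main core (k = {k_max}): {sorted(main_core)}")
print(f"Industry authors in the main core: {sorted(industry_authors & main_core)}")
```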

So, as a doctor making a decision about your patient’s treatment, how do you know whether you can trust that guideline, the knowledge base underpinning that ‘synthesised information resource’, or even that Google search [10]?

Of course, most doctors already know this: they are careful about the information they assimilate, discuss new drugs with their colleagues, or simply avoid prescribing new drugs until they have been on the market long enough to be confident they are safe and effective. So, although we are still in very safe hands when we visit the doctor, wouldn’t it be nice if we could improve the way evidence makes its way into the decision-making process?

tl;dr

We’ve recently published a new article in the Journal of Clinical Epidemiology looking at networks of co-authorship for individual drugs that are commonly prescribed in Australia. Using network analysis, we found that authors directly affiliated with the pharmaceutical company that produces the drug are much more likely to be central in their networks, receive a greater number of citations, and have the potential to exert influence over the important core of authors publishing the results of clinical trials.

Notes

[1] Indeed, you would also be thinking about whether any of the drugs are different depending on your particular patient’s genotypic and phenotypic characteristics but that’s a story for another day.

[2] Better yet, imagine there’s a database of all of the outcomes and adverse reactions that have occurred in patients around the country since the drug was introduced. But of course that’s not the case either.

[3] Goldberg, N. H., S. Schneeweiss, et al. (2011). “Availability of Comparative Efficacy Data at the Time of Drug Approval in the United States.” JAMA: The Journal of the American Medical Association 305(17): 1786-1789.

[4] Bastian, H., P. Glasziou, et al. (2010). “Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up?” PLoS Medicine 7(9): e1000326.

[5] Fraser, A. G. and F. D. Dunstan (2010). “On the impossibility of being expert.” BMJ 341.

[6] Yank, V., D. Rennie, et al. (2007). “Financial ties and concordance between results and conclusions in meta-analyses: retrospective cohort study.” BMJ 335(7631): 1202-1205.

[7] Lathyris, D. N., N. A. Patsopoulos, et al. (2010). “Industry sponsorship and selection of comparators in randomized clinical trials.” European Journal of Clinical Investigation 40(2): 172-182.

[8] We also included reviews and meta-analyses that collect up RCTs and use them to produce conclusions about the safety and efficacy of the drugs.

[9] Dunn, A. G., B. Gallego, E. Coiera (2012). “How industry influences evidence production in collaborative research communities: a network analysis.” Journal of Clinical Epidemiology: In Press.

[10] Yes, 69% of general practitioners search on Google and Wikipedia weekly, compared to 32% who consult original research weekly. O’Keeffe, J., J. Willinsky, et al. (2011). “Public Access and Use of Health Research: An Exploratory Study of the National Institutes of Health (NIH) Public Access Policy Using Interviews and Surveys of Health Personnel.” J Med Internet Res 13(4): e97.