How long does it take for new prescription drugs to become mainstream?

You probably don’t want to hear your doctor proclaiming “I’m so indie, I was prescribing that *way* before it was cool.” Or maybe you do?

If you’re a hipster and you need to know when a particular band stops being underground and teeters on the edge of the mainstream, so you can switch to liking it only in the cool, ironic way, then you want to know how quickly it passes from the early adopters into the hands of the masses.

It’s the same for prescribed drugs in primary care – we want to know how long it takes for new prescription drugs to become part of mainstream practice.

tl;dr – we had a go at working out how long it takes for prescription drugs to be fully adopted in Australia, and published it here.

We already know quite a lot about how an individual prescriber decides to change his or her prescribing behaviour when a new drug is listed on the Pharmaceutical Benefits Scheme (PBS). Sometimes it has something to do with the evidence produced in clinical trials and aggregated in systematic reviews. But often it is all about what the prescriber’s colleagues are saying and the opinions of influential people and companies.

It’s evidence of social contagion. And it’s been shown to be important for innovations in healthcare.

What we haven’t seen are good models for describing (or even better, predicting) the rate of adoption within a population the size of a country. So in a new paper in BMC Health Services Research I wrote about a well-studied model and its application to prescription volumes in Australian general practice. Together with some of my more senior colleagues, I applied this simple model to more than a hundred drugs introduced in Australia since 1996.

It turns out that, in Australia, your average sort of drug takes over 8 years to reach a steady level of prescriptions.

The model is arguably too simple. It assumes an initial external ‘push’, which falls away as social contagion grows. The problem is that these external pushes don’t all happen at once when a new drug is listed on the PBS; they are more likely a series of perturbations corresponding to new evidence, new marketing efforts, the arrival of competing drugs, and changes to listings and restrictions. So while the model produces some very accurate curves that match the adoption we have seen historically, it wouldn’t be particularly good at predicting adoption from early information alone.
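To make the shape of the model concrete: a curve driven by an external push that gives way to social contagion is the classic Bass diffusion form, and the sketch below fits such a curve to a toy series of monthly prescription volumes. The data, the parameter values, and the 95% cut-off are illustrative assumptions, not the method from the paper.

```python
# A minimal sketch (not the published method): fit a Bass-style S-curve,
# with an external 'push' p and a social-contagion term q, to a toy
# series of monthly prescription volumes.
import numpy as np
from scipy.optimize import curve_fit

def bass_curve(t, m, p, q):
    """S-shaped adoption curve saturating at m, driven early on by the
    external push p and later by the imitation/contagion term q."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

# Toy data: months since PBS listing, volumes from known parameters plus noise.
rng = np.random.default_rng(0)
t = np.arange(120)
y = bass_curve(t, m=50_000, p=0.002, q=0.08) * rng.normal(1.0, 0.02, t.size)

(m_hat, p_hat, q_hat), _ = curve_fit(bass_curve, t, y, p0=[y.max(), 0.01, 0.1])

# A rough 'time to adoption': the first month at 95% of the steady level.
fitted = bass_curve(t, m_hat, p_hat, q_hat)
months_to_95 = int(np.argmax(fitted >= 0.95 * m_hat))
print(f"p={p_hat:.4f}, q={q_hat:.4f}, months to 95% of steady level: {months_to_95}")
```

Fitted to historical prescription volumes, a curve like this reproduces the adoption we have already seen; the point above is that it tells you little about a drug whose perturbations are still to come.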

For that, I think we need to create a strong link between the decision-making of individuals and the structure of the network through which diffusion of information and social contagion flows. I’ve started something like this already. I think we still have quite a way to go before we can work out why some drugs take over a decade and some are adopted within a couple of years.
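As a toy illustration of that idea, here is a sketch in which each prescriber sits on a small-world contact network and, each month, adopts a new drug with a small external probability plus a probability that grows with the number of colleagues who have already adopted it. The network, the parameters, and the decision rule are all assumptions chosen for illustration, not anything from the paper.

```python
# A toy illustration (all assumptions, nothing from the paper): prescribers
# sit on a small-world contact network and each month adopt a new drug with
# a small external probability plus a probability that grows with the
# number of colleagues who have already adopted it.
import random
import networkx as nx

random.seed(1)
G = nx.watts_strogatz_graph(n=500, k=6, p=0.1)   # hypothetical contact network
adopted = {node: False for node in G}

P_EXTERNAL = 0.002   # marketing/evidence 'push' per prescriber per month
P_SOCIAL = 0.05      # added chance per adopting colleague per month

months_to_half = None
for month in range(1, 241):                      # simulate up to 20 years
    newly_adopted = []
    for node in G:
        if adopted[node]:
            continue
        peers = sum(adopted[nbr] for nbr in G.neighbors(node))
        if random.random() < P_EXTERNAL + P_SOCIAL * peers:
            newly_adopted.append(node)
    for node in newly_adopted:
        adopted[node] = True
    if months_to_half is None and sum(adopted.values()) >= 0.5 * len(G):
        months_to_half = month

print(f"Months to 50% adoption: {months_to_half}")
```

Rewire the network or shift the balance between the external and social terms and the time to adoption changes dramatically, which is roughly why I think the network structure has to be part of any predictive model.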

Learning from “Learning from Hackers”

Alongside colleagues from here in Sydney (Enrico Coiera and Richard Day) and from near Boston in the US (Kenneth Mandl), I wrote an article for Science Translational Medicine in which we related the current system of “clinical trial evidence translation” to the very successful open source software movement. We highlighted the factors in that success: open access, incentives for participation, and interoperability of source code.

In the article, we drew parallels between the production of source code for open source software and the “source code” of clinical trials – the patient level data that says how well an intervention worked for each patient. If the source code of clinical trials were to be made more widely available, we could start to answer much more interesting questions, more accurately. We think it has the potential to dramatically improve the speed at which we detect unsafe drugs, and help doctors provide the right drugs to the right patients.

Just so that I can keep a record, here is a rough timeline of what happened in the media after the article was published:

  • The article was published in Science Translational Medicine on the 2nd of May in the US (early on the morning of the 3rd, Sydney time).
  • The article was covered by the Sydney Morning Herald on page 15 of Thursday’s (3rd May 2012) edition.
  • Joshua Gliddon was very quick to call me up and have a chat about the article, writing a nice piece about it at ehealthspace.org.
  • Enrico Coiera and I wrote a piece for the Sydney Morning Herald’s National Times talking about the article in more detail (published online on the 4th May 2012).
  • The article was also covered by the Higher Education section of The Australian on Friday (4th May 2012).
  • Australian Life Scientist collected a wide selection of information and wrote a summary of the article and our comments. (The first recorded example of the phrase “all information should be free” that I could find was in Levy’s Hackers, published in 1984, which I believe pre-dates Woz.)
  • @RyanMFierce found irony in the publication because it argues for open data and was published behind a paywall.
  • The article was mentioned in the introduction to a piece on sharing in genetics on The Conversation (an excellent outlet), which quickly became the most read article on the website (3rd May 2012).
  • A summary of the SMH National Times story and the article appeared on Open Health News (4th May 2012).
  • Here is the original media release from UNSW.

Hopefully once this burst of activity falls away, it will leave some lasting resonance and help convince a few people to think harder about how we can fix the problems of evidence translation.

I learnt a couple of lessons from the media activity surrounding the publication. Firstly, I learnt that it is impossible to control the message from your own work – people will read whatever they want and will probably focus on sections you thought were less important. There’s nothing you can do about it other than to faithfully represent your work and push your own agenda. I also learnt that there is a wide and diverse group of people already dealing with open access issues in clinical trial data – many more than I originally realised when I wrote the piece.

The next steps in the research will include learning about how far we can push the limits of patient-level meta-analysis by pooling clinical trial data in clever ways, while maintaining rigorous de-identification. Eventually we may even be able to automate the rapid integration of new evidence into organic, linked and dynamic systematic reviews and guidelines, customised for groups or even individuals.

Do pharmaceutical companies have too much influence over the evidence base?

Imagine you are a doctor and you have a patient sitting with you in your office. You have already diagnosed your patient with a condition. Treatment for this condition will definitely include prescribing one or more drugs. And, because the condition is quite common, there are several government-subsidised drugs from which you can choose. Some of the drugs have only recently been approved, and others have been around for more than a decade.

So what do you need to know to choose which drug to prescribe?

Well, you need to know which of the drugs is going to be most effective, which is safest, and which offers the best value [1]. Since all of the drugs you can choose from have been approved and are subsidised, presumably there have been clinical trials comparing those drugs against each other at appropriate doses, right? And those clinical trials were conducted with good intentions and in an objective way [2]?

Well, for a lot of drugs, that is simply not the case.

In fact, around half of the drugs approved in the US do not have enough clinical trials of sufficient quality to allow doctors to answer those questions effectively [3]. And why is that so strange? Well, every day 75 clinical trials and 11 systematic reviews are published [4]. So even though there is far too much evidence for you as a doctor to ever be able to read [5], there still isn’t enough information around to help you answer those questions. And when it comes to pharmaceutical companies, we know that the trials they conduct tend to produce different results and conclusions [6] and are often designed differently, too [7]. Oh, and from memory, industry sponsors around 36% of clinical trials, and that proportion has been increasing for decades.

What’s worse is that it looks like pharmaceutical companies have disproportionate levels of control over the production of the clinical evidence that will end up in the doctors’ decision-making.

I believe that in order to change, and hopefully improve, the way we do things, we first have to be able to measure it accurately. I mean, we all know that we can’t improve a recipe without trying it out and having a taste-test.

So, along with colleagues in the Centre for Health Informatics at UNSW, I did a taste-test to see who is publishing these clinical trials and get an idea of exactly where clinical evidence comes from. We took 22 common drugs in Australia and collected up all of the published randomised controlled trials (RCTs) written about those drugs [8]. Then we looked at the affiliations of all of the authors to see who was directly affiliated with the pharmaceutical company making the drug.
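For a rough sense of how this kind of analysis works, here is a minimal sketch that builds a co-authorship network from a handful of made-up trial records and compares the centrality of company-affiliated authors with everyone else, which is essentially the comparison described below. The trial records, author names, and affiliation flags are all hypothetical, and the published analysis is considerably more involved.

```python
# A minimal sketch of the idea: build a co-authorship network (authors are
# linked if they co-authored an RCT) and compare the betweenness centrality
# of company-affiliated authors with everyone else. All data here is made up.
import itertools
from statistics import mean
import networkx as nx

trials = [
    {"authors": ["Author A", "Author B", "Author C"], "affiliated": {"Author A"}},
    {"authors": ["Author B", "Author D"], "affiliated": set()},
    {"authors": ["Author C", "Author D", "Author E"], "affiliated": {"Author E"}},
    {"authors": ["Author A", "Author E", "Author F"], "affiliated": {"Author A", "Author E"}},
]

G = nx.Graph()
affiliated = set()
for trial in trials:
    affiliated |= trial["affiliated"]
    for a, b in itertools.combinations(trial["authors"], 2):
        G.add_edge(a, b)

centrality = nx.betweenness_centrality(G)
print("affiliated:", mean(centrality[a] for a in G if a in affiliated))
print("others:    ", mean(centrality[a] for a in G if a not in affiliated))
```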

A co-authorship network for rosiglitazone, as of 2006 when all the fuss started.

We found that when you draw the co-authorship network (authors are linked to each other if they collaborated on an RCT), the authors affiliated with the drug companies tended to be right in the middle of the network [9]. They also tended to receive more citations, and often had the right network position to be able to reach and control the largest and most important part of the community producing the evidence. When it comes to producing meta-analyses, reviews, guidelines and policy decisions, which parts of the evidence base do you expect to be included and to carry the most weight?

So, as a doctor making a decision about your patient’s treatment, how do you know if you can trust that guideline, the knowledge base underpinning that ‘synthesised information resource’, or even that google search [10]?

Of course, most doctors already know this and are careful about the information they assimilate, discuss new drugs with their colleagues, or simply don’t prescribe new drugs until they have been on the market for long enough to be sure they are safe and effective. So, although we are still in very safe hands when we visit the doctor, wouldn’t it be nice if we could improve the way evidence makes its way into the decision-making process?

tl;dr

We’ve recently published a new article in the Journal of Clinical Epidemiology looking at networks of co-authorship for individual drugs that are commonly prescribed in Australia. Using network analysis, we found that authors who are directly affiliated with the pharmaceutical companies that are producing the drug are much more likely to be central in their networks, receive a greater number of citations, and have the potential to exert their influence over the important core of authors publishing the results of clinical trials.

Notes

[1] Indeed, you would also be thinking about whether any of the drugs are different depending on your particular patient’s genotypic and phenotypic characteristics but that’s a story for another day.

[2] Better yet, there would be a database of all of the outcomes and adverse reactions that have occurred in patients around the country since the drug was introduced. But of course that’s not the case either.

[3] Goldberg, N. H., S. Schneeweiss, et al. (2011). “Availability of Comparative Efficacy Data at the Time of Drug Approval in the United States.” JAMA: The Journal of the American Medical Association 305(17): 1786-1789.

[4] Bastian, H., P. Glasziou, et al. (2010). “Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up?” PLoS Medicine 7(9): e1000326.

[5] Fraser, A. G. and F. D. Dunstan (2010). “On the impossibility of being expert.” BMJ 341.

[6] Yank, V., D. Rennie, et al. (2007). “Financial ties and concordance between results and conclusions in meta-analyses: retrospective cohort study.” BMJ 335(7631): 1202-1205.

[7] Lathyris, D. N., N. A. Patsopoulos, et al. (2010). “Industry sponsorship and selection of comparators in randomized clinical trials.” European Journal of Clinical Investigation 40(2): 172-182.

[8] We also included reviews and meta-analyses that collect up RCTs and use them to produce conclusions about the safety and efficacy of the drugs.

[9] Dunn, A. G., B. Gallego, E. Coiera (2012). “How industry influences evidence production in collaborative research communities: a network analysis.” Journal of Clinical Epidemiology: In Press.

[10] Yes, 69% of general practitioners search on Google and Wikipedia weekly, compared to 32% who consult original research weekly. O’Keeffe, J., J. Willinsky, et al. (2011). “Public Access and Use of Health Research: An Exploratory Study of the National Institutes of Health (NIH) Public Access Policy Using Interviews and Surveys of Health Personnel.” J Med Internet Res 13(4): e97.

A Ghostwriter’s insight into the industry

From PLoS Medicine, a nice article on ghost-writing by a former ghostwriter, with interesting information about why she did it and why she stopped. It also gives an insight into exactly how the pharmaceutical industry is able to manipulate the publication of articles and the direct education of clinicians. A related article is here.

French guidelines are withdrawn after court finds potential bias among authors

[Lenzer 342 — bmj.com] Formindep, a group that “promotes independent medical education and information”, found that the working groups involved with the guidelines for Alzheimer’s disease had major financial conflicts of interest, and that some members had failed to disclose their financial interests.

It’s fine to demand disclosure of financial interests, but what would they do with the guidelines if the conflicts were disclosed? Leave them in the public domain? And what happens when the clinical trials underpinning the guidelines were mostly (or wholly) funded by a pharmaceutical industry that seeks to profit from the over-use of prescription drugs?


Measuring only skin-deep conflicts of interest won’t help

Conflicts of Interest in Cardiovascular Clinical Practice Guidelines

In the most recent issue of Archives, a group of US researchers have analysed the cardiovascular clinical practice guidelines on which clinicians rely to make informed decisions about how best to treat patients. Conflicts of interest are contentious in this area because they are known to influence how evidence is reported in a number of interesting ways.

The authors found that conflicts of interest were not as large a problem as many might imagine them to be. My argument here is that 56% of the authors of the clinical practice guidelines may be supported partially (or more) by pharmaceutical companies, but they are still writing guidelines based on evidence that may itself depend on big pharma, drawn from clinical trials that may be funded and designed by big pharma, and with the concerted effort of a 900-billion-dollar industry helping them reinforce the need for more pills.