How long does it take for new prescription drugs to become mainstream?

You probably don’t want to hear your doctor proclaiming “I’m so indie, I was prescribing that *way* before it was cool.” Or maybe you do?

If you’re a hipster and you really need to know when a particular band stops being underground and teeters on the edge of going mainstream – so you can only like it in the cool, ironic way – then you’d want to know how quickly it passes from the early adopters into the hands of the masses.

It’s the same for prescribed drugs in primary care – we want to know how long it takes for new prescription drugs to become part of mainstream practice.

tl;dr – we had a go at working out how long it takes for prescription drugs to be fully adopted in Australia, and published it here.

We already know quite a lot about how individual prescribers decide to change their behaviour when a new drug is listed on the Pharmaceutical Benefits Scheme (PBS). Sometimes the decision has something to do with the evidence produced in clinical trials and aggregated in systematic reviews. But often it is all about what the prescriber’s colleagues are saying and the opinions of influential people and companies.

It’s evidence of social contagion. And it’s been shown to be important for innovations in healthcare.

What we haven’t seen are good models for describing (or even better, predicting) the rate of adoption within a large population the size of a country. So in a new paper in BMC Health Services Research I wrote about a well-studied model and its application to prescription volumes in Australian general practice. Together with some of my more senior colleagues, I applied a simple model to over a hundred drugs introduced in Australia since 1996.

It turns out that, in Australia, your average sort of drug takes over 8 years to reach a steady level of prescriptions.

The model is arguably too simple. It assumes an initial external ‘push’, which falls away as social contagion grows. The problem is that these external pushes don’t all happen at once when a new drug is released onto the PBS but more likely arrive as a series of perturbations corresponding to new evidence, new marketing efforts, competing drugs, and changing restrictions. So while the model produces some very accurate curves that correspond to the adoption we have seen historically, it wouldn’t be particularly good at predicting adoption based on some early information.
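The shape of that model – an external push plus growing social contagion – can be sketched in a few lines. This is an illustrative simulation only: the coefficients and market size below are invented, not the fitted values from the paper, and the paper’s exact model specification may differ.

```python
# Illustrative diffusion model: adoption driven by an external 'push' (p)
# plus social contagion proportional to how many have already adopted (q).
# All parameter values here are made up for demonstration.

def simulate_adoption(p, q, m, years, steps_per_year=12):
    """Simulate cumulative adoption N(t) with dN/dt = (p + q*N/m) * (m - N).

    p: external 'push' (innovation) coefficient
    q: social contagion (imitation) coefficient
    m: eventual steady level of prescriptions
    """
    dt = 1.0 / steps_per_year
    n = 0.0
    curve = [n]
    for _ in range(int(years * steps_per_year)):
        n += (p + q * n / m) * (m - n) * dt
        curve.append(n)
    return curve

curve = simulate_adoption(p=0.03, q=0.4, m=100_000, years=15)
# Early on, the p term dominates; later, the q (contagion) term takes over,
# producing the familiar S-shaped adoption curve.
```

With these toy coefficients, adoption approaches the steady level only after more than a decade – the same order of magnitude as the eight-plus years we observed.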

For that, I think we need to create a strong link between the decision-making of individuals and the structure of the network through which diffusion of information and social contagion flows. I’ve started something like this already. I think we still have quite a way to go before we can work out why some drugs take over a decade and some are adopted within a couple of years.

The Health Informatics Conference 2012, Sydney

So #hic12 is nearly here and I’ll be there in a rather unusual capacity. I won’t be giving a talk. I won’t even be standing in front of a poster. I’ll be there as the official twitterer, which means I’ll be flitting around from talk to talk, tweeting from the official @hic_2012 account, and hopefully connecting people in the sort of decentralised organisational process we’ve all come to love about the medium. It’s on from the 30th of July to the 2nd of August and the details are, you know, on the website.

So what’s health informatics all about? Well, at its essence, it’s really about helping doctors, other health practitioners, and clinical researchers practise better medicine. Sometimes it’s also about helping patients to help themselves. And pretty much always, it’s about information – spreading it, keeping it private, fitting it together, and using it to improve things.

For all the money thrown around in supporting new technology in healthcare delivery, we don’t seem to have made the sort of progress you might expect for such a critical part of the community – the bit that looks after you when you’re sick. So when you talk to people from outside medicine and healthcare about what actually happens in hospitals and practices around Australia, it’s no surprise that they’re shocked.

“So the system is paperless, right?” Not even close.

“So the systems aren’t even connected to share information *within* the hospital?” Nope.

“So, I can’t register for an electronic record if my name has a hyphen or an apostrophe?” Haha! No.

It’s hard to believe that this is how things are in healthcare when in the rest of our day-to-day lives we can just download apps on devices to recognise a song/picture we hear/see on the street, connect to people around the world instantaneously, stream live videos of protests to thousands, run away from imaginary zombies to motivate us to stay fit and healthy, and ask Siri to tell us what gets prescribed to patients like us if we visit a doctor. But when it comes to changing technology in the sacred world of medicine there are a few things that get in the way – safety, bureaucracy, the cultural status quo, and profiteering.

And it’s those things that I always want to hear about at conferences on health informatics. Instead of asking what amazing things could be done with the new and ubiquitous technology we have surrounding us, we tend to ask and answer the following:

“How will you make sure that it’s safe?” It will take us many years to evaluate its safety but first we need ethics approval, which will also take way too long.

“How will you know for sure if it is effective and worth the cost?” We will have to test it in the real world, which is in a constant state of flux, so, ummm, actually, we won’t really be able to tell you how effective it is anyway – we’ll guess.

“And it will only cost you a billion dollars!” What?

“How will you convince clinicians to use it?” Oh, there will be resistance. People prefer to maintain the status quo because they work in tightly constrained worlds with little room to move and adapt. So yeah, there will be resistance.

Meanwhile, there are some impressive people doing some rather amazing things to address the problems, break down the bureaucracy where it isn’t needed, and generally make the kinds of changes to the system that we can be proud of. Quite a few of them will even be at the Health Informatics Conference in Sydney at the end of July.

If you’re going to be there, I’d love to hear from you, find out what your Twitter account is, and add your talk or poster to my tweeting itinerary. If you don’t have a Twitter account and you work in health informatics, I’d like to know why. And most importantly I’d love to ask you how your work addresses or side-steps some of the above problems. I’m looking for disruptive technologies.

Learning from “Learning from Hackers”

Alongside colleagues (Enrico Coiera and Richard Day) from here in Sydney and (Kenneth Mandl) from near Boston in the US, I wrote an article for Science Translational Medicine in which we related the current system of “clinical trial evidence translation” to the very successful open source software movement. We highlighted the factors in that success – open access, incentives for participation, and interoperability of source code.

In the article, we drew parallels between the production of source code for open source software and the “source code” of clinical trials – the patient level data that says how well an intervention worked for each patient. If the source code of clinical trials were to be made more widely available, we could start to answer much more interesting questions, more accurately. We think it has the potential to dramatically improve the speed at which we detect unsafe drugs, and help doctors provide the right drugs to the right patients.
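One way to see why the “source code” matters: pooled patient-level records let us ask subgroup questions that published aggregate results cannot answer. Here is a toy sketch – every record and field name below is invented for illustration, and real patient-level meta-analysis involves far more care with harmonisation and de-identification.

```python
# Toy illustration: pooling invented patient-level records from two trials
# to answer a subgroup question neither published summary addresses.

trial_a = [
    {"age": 72, "treated": True,  "adverse_event": True},
    {"age": 45, "treated": True,  "adverse_event": False},
    {"age": 68, "treated": False, "adverse_event": False},
    {"age": 51, "treated": False, "adverse_event": False},
]
trial_b = [
    {"age": 77, "treated": True,  "adverse_event": True},
    {"age": 39, "treated": True,  "adverse_event": False},
    {"age": 74, "treated": False, "adverse_event": False},
    {"age": 42, "treated": False, "adverse_event": False},
]

pooled = trial_a + trial_b

def event_rate(records, treated, min_age=0):
    """Adverse-event rate in one arm, optionally restricted to a subgroup."""
    arm = [r for r in records if r["treated"] == treated and r["age"] >= min_age]
    return sum(r["adverse_event"] for r in arm) / len(arm) if arm else None

# How risky is the drug for patients over 65? Each trial alone has too few
# such patients; the pooled data can at least pose the question.
risk_treated = event_rate(pooled, treated=True, min_age=65)
risk_control = event_rate(pooled, treated=False, min_age=65)
```

The published papers for each trial would typically report only overall rates; the subgroup signal above is only visible once the patient-level rows are combined.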

Just so that I can keep a record, here is a rough timeline of what happened in the media after the article was published:

  • The article was published in Science Translational Medicine on the 2nd of May in the US (early am on the 3rd in Sydney time).
  • The article was covered by the Sydney Morning Herald on page 15 of Thursday’s (3rd May 2012) edition.
  • Joshua Gliddon was very quick to call me up and have a chat about the article, writing a nice piece about it at ehealthspace.org.
  • Enrico Coiera and I wrote a piece for the Sydney Morning Herald’s National Times talking about the article in more detail (published online on the 4th May 2012).
  • The article was also covered by the Higher Education section of The Australian on Friday (4th May 2012).
  • Australian Life Scientist collected up a wide selection of information and wrote a summary of the article and our comments (first recorded example of the phrase “all information should be free” that I found was in Levy’s Hackers published in 1984, which would pre-date Woz, I believe).
  • @RyanMFierce found irony in the publication because it argues for open data and was published behind a paywall.
  • The article was mentioned in the introduction to a piece on sharing in genetics on The Conversation (an excellent outlet), which quickly became the most read article on the website (3rd May 2012).
  • A summary of the SMH National Times story and the article appeared on Open Health News (4th May 2012).
  • Here is the original media release from UNSW.

Hopefully once this burst of activity falls away, it will leave some lasting resonance and help convince a few people to think harder about how we can fix the problems of evidence translation.

I learnt a couple of lessons from the media activity surrounding the publication. Firstly, I learnt that it is impossible to control the message from your own work – people will read whatever they want and will probably focus on sections you thought were less important. There’s nothing you can do about it other than to faithfully represent your work and push your own agenda. I also learnt that there is a wide and diverse group of people already dealing with open access issues in clinical trial data – many more than I originally realised when I wrote the piece.

The next steps in the research will include learning about how far we can push the limits of patient-level meta-analysis by pooling clinical trial data in clever ways, while maintaining rigorous de-identification. Eventually we may even be able to automate the rapid integration of new evidence into organic, linked and dynamic systematic reviews and guidelines, customised for groups or even individuals.

Repost: Pharma’s influence over published clinical evidence

Below is a copy of an article I wrote for The Conversation. It’s an independent source of information and analysis about things that matter – but from the university and research perspective, which means it’s generally more rigorous than much of the media.
This article was originally published at The Conversation. Read the original article.
TRANSPARENCY AND MEDICINE – A series examining issues from ethics to the evidence in evidence-based medicine, the influence of medical journals to the role of Big Pharma in our present and future health.

Here Adam Dunn discusses his research into authorship networks, which revealed the position of industry researchers in academic publishing.

There’s growing concern that large pharmaceutical companies are capable of undermining the truth about the published evidence doctors use to treat patients. The suspicion is that pharmaceutical companies may be trading lives for profits.

Clinical trials are one of the main sources of information that guide doctors when they treat patients. But controversial drug withdrawals have given doctors good reasons to be sceptical about the evidence that reaches them, and eroded their trust in the evidence base.

Vioxx gave us the quintessential story of what can go wrong when a big pharmaceutical company exerts influence over the evidence base. The arthritis drug was prescribed millions of times in Australia before it was revealed that it doubled the risk of heart attack. Vioxx was withdrawn in 2004 but the evidence showing its harmful effects was available years earlier.

So when looking for someone to blame, the fingers of prominent academics point directly at the pharmaceutical industry. But are their views justified?

With colleagues from the Centre for Health Informatics, I used network analysis to investigate clinical trial collaboration for a selection of widely prescribed drugs. Much like the way network pictures of Twitter or Facebook are drawn, we connected researchers who had worked together in a clinical trial.

We wanted to see how important each researcher was in their network, especially those who were affiliated with pharmaceutical companies that manufacture the drugs they study.
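The construction itself is simple: connect any two authors who appear on the same trial, then measure how connected each author is. The sketch below uses degree centrality as a stand-in; the study itself may have used other centrality measures, and the trials, author names, and affiliations here are all invented.

```python
# Minimal co-authorship network sketch on invented data: link authors who
# shared a trial, then rank them by degree centrality.
from collections import defaultdict
from itertools import combinations

trials = [
    {"authors": ["A", "B", "C"], "industry": {"A"}},  # 'A' is industry-affiliated
    {"authors": ["A", "D"],      "industry": {"A"}},
    {"authors": ["B", "E"],      "industry": set()},
]

# Build an undirected co-authorship graph as an adjacency map.
neighbours = defaultdict(set)
for trial in trials:
    for u, v in combinations(trial["authors"], 2):
        neighbours[u].add(v)
        neighbours[v].add(u)

# Degree centrality: fraction of the other authors each author is linked to.
n = len(neighbours)
centrality = {a: len(nbrs) / (n - 1) for a, nbrs in neighbours.items()}

# Does an industry-affiliated author hold the most connected position?
top = max(centrality, key=centrality.get)
top_is_industry = any(top in t["industry"] for t in trials)
```

In this toy network the industry author sits at the centre because they bridge otherwise separate trials – the same structural pattern we looked for at scale.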

Our results showed that industry-based authors of clinical trials held more influential positions in their collaborative networks. These authors also received more citations than their non-industry peers.

We concluded that when it comes to clinical trials about drugs, industry researchers occupy influential positions, and their work is more widely cited. These conclusions left us feeling very uneasy about clinical evidence.

It appears that pharmaceutical companies are disproportionately powerful in coming up with the evidence to support the safety and efficacy of their own drugs.

Those familiar with clinical trials might ask how this could happen when clinical trials are registered under strict protocols and published after rigorous peer-review processes. In other words, if clinical trials are so tightly controlled, how can they be manipulated to show a drug is safe when it’s not?

The simple answer is that industry groups do trials differently. Industry-sponsored trials are less likely to publish negative results and more likely to design trials that will produce positive results in the first place. On top of that, industry is responsible for more evidence now than ever before – over a third of registered clinical trials each year are now funded by pharmaceutical companies.

When important evidence is designed to provide only positive conclusions, the data proving a drug’s safety is simply not made available. This is exactly what happened when diabetes drug Avandia was shown to increase the risk of heart failure in 2007. Even after a decade on the market, there were simply not enough data available to show the long-term risks or benefits.

Avandia was a key piece of the evidence puzzle for our research because it revealed the clear and direct negative effect of industry influence. An analysis of articles about the drug revealed that researchers with financial conflicts of interest continued to write favourably about the drug even after the negative evidence was published.

And although Avandia was withdrawn in the United Kingdom and New Zealand, it remains available (albeit under much tighter controls) in Australia and the United States.

So it seems that the lessons from this case may not have been learned. How can we know where and when industry influence will next tip the evidence balance in the favour of another harmful drug?

We’ll need to know more than whether or not clinical trials demonstrate safety and efficacy – we need to know if the right kinds of clinical trials were done in the first place.

This is the eighth part of Transparency and Medicine. You can read the previous instalments by clicking the links below:

Part One: Power and duty: is the social contract in medicine still relevant?

Part Two: Big debts in small packages – the dangers of pens and post-it notes

Part Three: Show and tell: conflicts of interest undeclared for clinical guidelines

Part Four: Eminence or evidence? The ethics of using untested treatments

Part Five: Don’t show me the money: the dangers of non-financial conflicts

Part Six: Ghosts in the machine: better definition of author may stem bias

Part Seven: Clearing the air: why more retractions are good for science

Feature photo EPA/TANNEN MAURY via The Conversation

Who creates the clinical evidence for cholesterol-lowering drugs?

Last week the US Food and Drug Administration released new warnings about the use of statins for patients in the United States. The warnings that have been added to labels in the US come from worries about liver injury, memory-loss and confusion, increased blood sugar levels and some new potentially dangerous interactions between one statin (lovastatin) and a range of other drugs.

Statins are used to inhibit the production of cholesterol in the body. The leading drug in the class, atorvastatin (Lipitor, from Pfizer), is the most commonly prescribed drug in the world. In the last financial year in Australia, this drug alone cost the government AUD$637M. More than 14 out of every 100 people in Australia were taking a statin each day.

These are a class of drugs that have been around for a long, long time. Simvastatin, one of the oldest in the class, was first approved in the United States more than twenty years ago. So why are new restrictions being put on the labels of the statins now? And shouldn’t the public have been warned about these safety issues a long time ago?

It’s easy to blame the pharmaceutical industry, given its history of problems with marketing, conflicts of interest and illegal behaviour, but of course the reality is always muddier and more difficult to understand than we’d like it to be. The reality is that the funding of clinical trials and the influence of industry in trial design is a spectrum – and there are different problems all the way along.

Along with some friends, I recently published a paper in Clinical Pharmacology & Therapeutics about clinical trials for antihyperlipidemics.

My friends were colleagues from my own centre, from St Vincent’s Hospital, the Children’s Hospital Informatics Program in Boston, and the Hospital for Sick Kids in Toronto. In the paper, we looked at all the trials that have been registered for these drugs on clinicaltrials.gov since 2007. Some of these trials had already been completed, some were underway, and some were still recruiting. The different patterns of trials funded by industry and non-industry tell an interesting story about the different agendas.

Distribution of trial comparisons across drugs and outcomes

And there were some surprising results.

Trials funded by industry were (and are) typically larger and completed more quickly. They are also more likely to focus on hyperlipidemia rather than cardiovascular outcomes, and less likely to measure safety-related outcomes. The surprise was that industry funders were more likely to register trials that directly compare two or more drugs.

This is a surprise because studies looking at publications of clinical trials find exactly the opposite – that published industry-funded clinical trials are less likely to compare between drugs. This gives us a pretty good hint about which of the trials are being undertaken and then not published.
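The comparison behind that finding is easy to sketch: for each funding source, what fraction of registered trials pit drugs directly against each other rather than against a placebo? The registry records below are invented for illustration – the real analysis worked from clinicaltrials.gov records with much richer fields.

```python
# Sketch of the registry comparison on invented records: head-to-head
# comparison rates by funding source.

registry = [
    {"funder": "industry",     "arms": ["atorvastatin", "simvastatin"]},
    {"funder": "industry",     "arms": ["atorvastatin", "placebo"]},
    {"funder": "industry",     "arms": ["rosuvastatin", "atorvastatin"]},
    {"funder": "non-industry", "arms": ["atorvastatin", "placebo"]},
    {"funder": "non-industry", "arms": ["simvastatin", "placebo"]},
    {"funder": "non-industry", "arms": ["atorvastatin", "simvastatin"]},
]

def head_to_head_rate(records, funder):
    """Fraction of a funder's trials comparing drugs directly (no placebo arm)."""
    trials = [r for r in records if r["funder"] == funder]
    h2h = [r for r in trials if "placebo" not in r["arms"]]
    return len(h2h) / len(trials)

industry_rate = head_to_head_rate(registry, "industry")
non_industry_rate = head_to_head_rate(registry, "non-industry")
```

Comparing rates like these at the registry stage, and again at the publication stage, is what exposes the gap between trials undertaken and trials published.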

The next surprise was that industry and non-industry trials had very similar patterns when it came to choosing which drugs to include in trials. Despite some specific differences in the choice of drugs, publicly-funded trials were just as unevenly distributed towards statins and atorvastatin in particular, and even more likely to test a drug against a placebo instead of another drug.

The work suggests, or in some cases confirms, the need to answer the following questions about clinical research on cardiovascular risk, and hyperlipidemia in particular:

  1. Why isn’t the pharmaceutical industry compelled to measure safety outcomes more often in clinical trials?
  2. What happens to the data from the comparative effectiveness trials undertaken by the pharmaceutical industry when they aren’t published?
  3. Why aren’t public funds directed more aggressively towards comparative effectiveness research, and towards interventions for which there isn’t already a glut of clinical trials being undertaken?

I think we need to be monitoring the clinical trials registries more closely when guiding research funding for clinical trials.

tl;dr

We’ve recently published a new paper in Clinical Pharmacology & Therapeutics that looks at all the recent clinical trials involving cholesterol-modifying drugs. Specifically, we examined the differences between industry and non-industry funded trials in terms of their design. We wanted to know how the research agendas differ across the funding spectrum, and how that affects their contribution to comparative effectiveness research.

We found that industry-funded trials were more likely to compare drugs directly (a surprise, given what we know about published clinical trials), were typically larger and finished sooner, were less likely to examine cardiovascular risk, and were less likely to measure safety outcomes.