Repost: Pharma’s influence over published clinical evidence

Below is a copy of an article I wrote for The Conversation, an independent source of information and analysis about things that matter – written from the university and research perspective, which generally makes it more rigorous than much of the rest.
This article was originally published at The Conversation. Read the original article.
 
TRANSPARENCY AND MEDICINE – A series examining issues from ethics to the evidence in evidence-based medicine, the influence of medical journals to the role of Big Pharma in our present and future health.

Here Adam Dunn discusses his research into authorship networks, which revealed the position of industry researchers in academic publishing.

There’s growing concern that large pharmaceutical companies are capable of undermining the truth about the published evidence doctors use to treat patients. The suspicion is that pharmaceutical companies may be trading lives for profits.

Clinical trials are one of the main sources of information that guide doctors when they treat patients. But controversial drug withdrawals have given doctors good reasons to be sceptical about the evidence that reaches them, and eroded their trust in the evidence base.

Vioxx gave us the quintessential story of what can go wrong when a big pharmaceutical company exerts influence over the evidence base. The arthritis drug was prescribed millions of times in Australia before it was revealed that it doubled the risk of heart attack. Vioxx was withdrawn in 2004, but the evidence showing its harmful effects was available years earlier.

So when looking for someone to blame, the fingers of prominent academics point directly at the pharmaceutical industry. But are their views justified?

With colleagues from the Centre for Health Informatics, I used network analysis to investigate clinical trial collaboration for a selection of widely prescribed drugs. Much like the way network pictures of Twitter or Facebook are drawn, we connected researchers who had worked together in a clinical trial.

We wanted to see how important each researcher was in their network, especially those who were affiliated with pharmaceutical companies that manufacture the drugs they study.
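
To make the approach concrete, here is a minimal sketch of that kind of co-authorship analysis in Python with networkx. The author lists, the industry-affiliation flag and the choice of betweenness centrality are all illustrative assumptions, not the actual data or measures from our study.

```python
# Minimal sketch of a co-authorship network analysis (illustrative only).
# The trial author lists and the set of industry-affiliated authors below are
# hypothetical stand-ins, not data from the study described in this post.
from itertools import combinations
import networkx as nx

trials = [
    ["A. Smith", "B. Jones", "C. Lee"],   # authors of trial 1
    ["B. Jones", "D. Patel"],             # authors of trial 2
    ["C. Lee", "D. Patel", "E. Chen"],    # authors of trial 3
]
industry_authors = {"B. Jones"}

# Connect every pair of authors who worked together on the same trial.
G = nx.Graph()
for authors in trials:
    G.add_edges_from(combinations(authors, 2))

# Use a centrality measure as a proxy for how "important" each author is
# in the collaboration network (betweenness here, purely for illustration).
centrality = nx.betweenness_centrality(G)

for author, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    flag = "industry" if author in industry_authors else "non-industry"
    print(f"{author:10s} {flag:12s} centrality={score:.3f}")
```

The actual study compared the network positions of industry-affiliated and non-industry authors across a selection of widely prescribed drugs, as described above; the sketch only shows the basic construction.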

Our results showed that industry-based authors of clinical trials held more influential positions in their collaborative networks. These authors also received more citations than their non-industry peers.

We concluded that when it comes to clinical trials about drugs, industry researchers occupy influential positions, and their work is more widely cited. These conclusions left us feeling very uneasy about clinical evidence.

It appears that pharmaceutical companies are disproportionately powerful in coming up with the evidence to support the safety and efficacy of their own drugs.

Those familiar with clinical trials might ask how this could happen when clinical trials are registered under strict protocols and published after rigorous peer-review processes. In other words, if clinical trials are so tightly controlled, how can they be manipulated to show a drug is safe when it’s not?

The simple answer is that industry does trials differently. Industry sponsors are less likely to publish negative results and more likely to design trials that will produce positive results in the first place. On top of that, industry is responsible for more evidence now than ever before – over a third of registered clinical trials each year are now funded by pharmaceutical companies.

When important evidence is designed to provide only positive conclusions, the data needed to assess a drug’s safety are simply not made available. This is exactly what happened when the diabetes drug Avandia was shown to increase the risk of heart failure in 2007. Even after nearly a decade on the market, there were simply not enough data available to show the long-term risks or benefits.

Avandia was a key piece of the evidence puzzle for our research because it revealed the clear and direct negative effect of industry influence. An analysis of articles about the drug revealed that researchers with financial conflicts of interest continued to write favourably about the drug even after the negative evidence was published.

And although Avandia was withdrawn in the United Kingdom and New Zealand, it remains available (albeit under much tighter controls) in Australia and the United States.

So it seems that the lessons from this case may not have been learned. How can we know where and when industry influence will next tip the evidence balance in the favour of another harmful drug?

We’ll need to know more than whether or not clinical trials demonstrate safety and efficacy – we need to know if the right kinds of clinical trials were done in the first place.

This is the eighth part of Transparency and Medicine. You can read the previous instalments via the links below:

Part One: Power and duty: is the social contract in medicine still relevant?

Part Two: Big debts in small packages – the dangers of pens and post-it notes

Part Three: Show and tell: conflicts of interest undeclared for clinical guidelines

Part Four: Eminence or evidence? The ethics of using untested treatments

Part Five: Don’t show me the money: the dangers of non-financial conflicts

Part Six: Ghosts in the machine: better definition of author may stem bias

Part Seven: Clearing the air: why more retractions are good for science

Feature photo EPA/TANNEN MAURY via The Conversation

Who creates the clinical evidence for cholesterol-lowering drugs?

Last week the US Food and Drug Administration released new warnings about the use of statins for patients in the United States. The warnings added to labels in the US stem from worries about liver injury, memory loss and confusion, increased blood sugar levels, and some new potentially dangerous interactions between one statin (lovastatin) and a range of other drugs.

Statins are used to inhibit the production of cholesterol in the body. The leading drug in the class, atorvastatin (Lipitor, from Pfizer), is the most commonly prescribed drug in the world. In the last financial year in Australia, this drug alone cost the government AUD$637M. More than 14 out of every 100 people in Australia were taking a statin each day.

Statins have been around for a long, long time. Simvastatin, one of the oldest in the class, was first approved in the United States more than twenty years ago. So why are new restrictions being put on statin labels now? And shouldn’t the public have been warned about these safety issues a long time ago?

It’s easy to blame the pharmaceutical industry, given its previous problems with marketing, conflicts of interest and illegal behaviour, but of course the reality is always muddier and more difficult to understand than we’d like it to be. The truth is that the funding of clinical trials and the influence of industry on trial design form a spectrum – and there are different problems all the way along.

Along with some friends, I recently published a paper in Clinical Pharmacology & Therapeutics about clinical trials for antihyperlipidemics.

My friends were colleagues from my own centre, from St Vincent’s Hospital, the Children’s Hospital Informatics Program in Boston, and the Hospital for Sick Children in Toronto. In the paper, we looked at all the trials registered for these drugs on clinicaltrials.gov since 2007. Some of those trials had already been completed, some were underway, and some were still recruiting. The different patterns of trials funded by industry and non-industry tell an interesting story about the different agendas.
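
For a flavour of the kind of comparison involved, here is a rough sketch in Python with pandas. The file name, column names and the head-to-head flag are hypothetical stand-ins for whatever fields you extract from clinicaltrials.gov records; this is not the pipeline from the paper.

```python
# Rough sketch of comparing registered trials by funder type (illustrative only).
# Assumes a CSV exported from clinicaltrials.gov with hypothetical columns:
# 'lead_sponsor_class', 'enrollment', 'start_date', 'completion_date',
# and a boolean 'is_head_to_head' flag for trials that directly compare drugs.
import pandas as pd

trials = pd.read_csv(
    "lipid_trials_2007_onwards.csv",
    parse_dates=["start_date", "completion_date"],
)

trials["industry_funded"] = trials["lead_sponsor_class"].eq("Industry")
trials["duration_days"] = (trials["completion_date"] - trials["start_date"]).dt.days

# Summarise size, duration and design choices by funding source.
summary = trials.groupby("industry_funded").agg(
    n_trials=("enrollment", "size"),
    median_enrollment=("enrollment", "median"),
    median_duration_days=("duration_days", "median"),
    share_head_to_head=("is_head_to_head", "mean"),
)
print(summary)
```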

[Figure: Distribution of trial comparisons across drugs and outcomes]

And there were some surprising results.

Trials funded by industry were (and are) typically larger and completed more quickly. They are also more likely to focus on hyperlipidemia rather than cardiovascular outcomes, and less likely to measure safety-related outcomes. The surprise was that industry-funded trials were more likely to directly compare two or more drugs.

This is a surprise because studies looking at publications of clinical trials find exactly the opposite – that published industry-funded clinical trials are less likely to compare between drugs. This gives us a pretty good hint about which of the trials are being undertaken and then not published.

The next surprise was that industry and non-industry trials had very similar patterns when it came to choosing which drugs to include. Despite some specific differences in drug choice, publicly funded trials were just as unevenly skewed towards statins – and atorvastatin in particular – and even more likely to test a drug against a placebo instead of another drug.

The work suggests or confirms the need to answer the following questions about clinical research in the area of cardiovascular risk and hyperlipidemia in particular:

  1. Why isn’t the pharmaceutical industry compelled to measure safety outcomes more often in clinical trials?
  2. What happens to the data from the comparative effectiveness trials undertaken by the pharmaceutical industry when they aren’t published?
  3. Why aren’t public funds directed more aggressively towards comparative effectiveness research, and towards interventions for which there isn’t already a glut of clinical trials being undertaken?

I think we need to be monitoring the clinical trials registries more closely when guiding research funding for clinical trials.

tl;dr

We’ve recently published a new paper in Clinical Pharmacology & Therapeutics that looks at all the recent clinical trials involving cholesterol-modifying drugs. Specifically, we examined the differences between industry and non-industry funded trials in terms of their design. We wanted to know how the research agendas differ across the funding spectrum, and how that affects their contribution to comparative effectiveness research.

We found that industry-funded trials were more likely to compare between drugs (a surprise, given what we know about published clinical trials) and tended to be larger and to finish sooner, but they were less likely to examine cardiovascular risk and less likely to measure safety outcomes.

Spatial ecological networks – where physics, ecology, geography and computational science meet

It’s part physics, part ecology, and part geography – and that’s probably why it is so much fun. Whenever I fly from city to city my favourite part of the trip is looking out of the window to see the patterns made in the landscapes. Most of the time, the patterns are carved out by humans using the land for agriculture, forestry, mining or just as places to live. Other times the landscape pattern is a consequence of natural stuff like weather and bushfires. It’s even easier to see these patterns with Google Maps – you can just zoom in to the south-west corner of Australia and see a patchwork of farms, towns, roads and less disturbed habitats where the more old-school ecosystems are.

For people working in restoration ecology, the whole idea is to work out the best and most efficient way to improve (or at least maintain) the quality of an ecosystem by helping the right kinds of animals move around and getting the right kinds of plants to disperse seeds around the place. Of course it would be nearly impossible to simply reclaim the vast majority of the land and hand it back over to nature because people still need to eat and also extract stuff out of the ground to make more stuff to put in their homes or store in their garages.

But what restoration can do is look for the best ways to improve connectivity between the areas of land that are safe from most human disturbance – and that is where the modelling of connectivity and corridors has its place. In this type of work, we look for the locations that matter most to connectivity and improve or maintain them, producing a sort of multiplicative positive effect on the surrounding areas. I’ve worked in this area quite extensively in the past and the science still has quite a way to go. Sadly, I’ve since moved on, but the underlying idea – being more efficient with the resources at your disposal – remains a passion of mine.

The science itself essentially comes down to finding efficient ways to model, simulate or otherwise estimate the movement of organisms through a landscape. Over my summer break, I re-implemented four methods (one based on circuit theory, one firmly established in social network analysis, one based directly on third-year shortest path algorithms, and a simulation method I developed based on multi-level cellular automata) and wrote them up succinctly for a book chapter – although the book itself may still be a long way from completion.
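
As a toy illustration of the shortest-path flavour of these methods – and of using a centrality measure to rank locations by their contribution to connectivity – here is a small sketch with networkx. The landscape, resistance values and patch locations are invented, and this is not a re-implementation of any of the four methods above.

```python
# Toy sketch of shortest-path (least-cost) connectivity on a resistance grid.
# The landscape, resistance values and patch locations are made up for illustration.
import networkx as nx

# Each cell has a "resistance" to movement (higher = harder to cross).
resistance = [
    [1, 1, 5, 5, 1],
    [1, 2, 5, 2, 1],
    [1, 1, 1, 1, 1],
    [5, 5, 2, 1, 1],
    [1, 1, 1, 1, 1],
]
rows, cols = len(resistance), len(resistance[0])

# Build a grid graph where each edge weight is the mean resistance of its two cells.
G = nx.grid_2d_graph(rows, cols)
for u, v in G.edges():
    G[u][v]["weight"] = (resistance[u[0]][u[1]] + resistance[v[0]][v[1]]) / 2

# Least-cost path between two habitat patches (opposite corners of the toy landscape).
source, target = (0, 0), (rows - 1, cols - 1)
path = nx.shortest_path(G, source, target, weight="weight")
cost = nx.shortest_path_length(G, source, target, weight="weight")
print("least-cost path:", path)
print("total cost:", cost)

# Weighted betweenness centrality as one proxy for which cells matter most to connectivity.
importance = nx.betweenness_centrality(G, weight="weight")
top = sorted(importance, key=importance.get, reverse=True)[:5]
print("cells most important to connectivity:", top)
```

Circuit-theory approaches differ in that they spread flow across all possible paths rather than a single least-cost route, which is one reason the different methods can rank the same locations quite differently.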

Admittedly, it’s been quite a while since I kept up with the literature on spatial modelling in landscape ecology. I have noticed that one piece of software for analysing corridors has become available, but I haven’t checked whether it fixes the issues I wrote about in Ecography – if it doesn’t, people using the application may not be getting the best results.

It also doesn’t help that other research in the area (not the particular methods I discuss above) is marred by unusual discrepancies in the methods – in one case, I found two papers published with the exact same network, yet claiming completely different methods of construction. Let’s hope a new breed of responsible and rigorous researchers can come along and revolutionise the field.

Dinner for NetSci2011 at Sofitel Budapest

Pictured below is Robin Dunbar (Oxford) making jokes about monogamy, watched by Albert-László Barabási (Harvard, Northeastern), Uri Alon (Weizmann Institute), Alain Barrat (Centre national de la recherche scientifique) and Andrea Baronchelli (UPC Barcelona). Not pictured, but still in the room, are other well-known luminaries such as Brian Uzzi (Kellogg School of Management) and Hawoong Jeong (KAIST). Next year’s NetSci will be at Northeastern and will be organised by Brian Uzzi.

Spreading, Influencing and Cascading

The NetSci2011 satellite workshop on spreading, influencing and cascading in social and information networks.

Here is Brian Uzzi, whose discussion of the adoption of scientific ideas provided some good laughs and set some brains ticking over how they might improve their own citation rates. If only it were as simple as citing the right papers and collaborating across wide distances.

The Central European University is the location for NetSci2011. The first day includes a school and workshops. I’ll be attending Circuits of Profit, which promises to be a practical look at the practice of network analysis in business applications. Hopefully we’ll see lots of good science and not too many palm readers. I’m looking forward to it.