A quick update to explain our most recent editorial [pdf] on evidence-based medicine published in the Journal of Comparative Effectiveness Research. It’s free for anyone to access.
What do we know?
Industry-funded research does not always lead to biases that are detrimental to the quality of clinical evidence, and biased research may come about for many reasons other than financial conflicts of interest. But there are clear and strong systemic differences in the research produced by people who have strong financial ties to pharmaceutical companies and other groups. These differences have in the past been connected to problems like massive opportunity costs (ineffective drugs) and widespread harm (unsafe drugs).
What do we think?
Our simple answer is no, we don’t think that industry-funded research should be excluded from comparative effectiveness research. To put it very simply, around half of the completed trials undertaken each year are funded by industry, and despite the overwhelming number of published trials we see, we still don’t have anywhere near enough of them to properly answer all the questions that doctors and patients have when trying to make decisions together.
Instead, we think improvements in transparency and access to patient-level data, the surveillance of risks of bias, and new methods for combining evidence and data from all available sources at once are much better alternatives. You can read more about all of these in the editorial.
Also, check out the new article from our group on automated citation snowballing, published in the Journal of Medical Internet Research. It describes a recursive search and retrieval method that finds peer-reviewed articles online, downloads them, extracts their reference lists, and follows those references to find and retrieve further articles recursively. It is particularly interesting because it can automatically construct a citation network backwards from a single paper.
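The recursive idea is simple enough to sketch. Below is a minimal, hypothetical illustration of the snowballing loop: a breadth-first traversal that starts from one seed article and keeps following reference lists. In the real system the references come from downloading and parsing full-text articles; here a toy lookup table (`CITATION_GRAPH`, with made-up identifiers) stands in for retrieval, and `get_references` is a placeholder for whatever extraction step is actually used.

```python
from collections import deque

# Toy citation graph standing in for live retrieval: in the real system,
# reference lists are extracted from downloaded full-text articles.
# All identifiers here are hypothetical.
CITATION_GRAPH = {
    "seed-paper": ["ref-a", "ref-b"],
    "ref-a": ["ref-c"],
    "ref-b": ["ref-c", "ref-d"],
    "ref-c": [],
    "ref-d": [],
}

def snowball(seed, get_references, max_depth=3):
    """Breadth-first citation snowballing from a single seed article.

    Returns the set of article identifiers reached by recursively
    following reference lists, up to max_depth hops from the seed.
    """
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        article, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for ref in get_references(article):
            if ref not in seen:
                seen.add(ref)          # each article is fetched once
                frontier.append((ref, depth + 1))
    return seen

network = snowball("seed-paper", lambda a: CITATION_GRAPH.get(a, []))
```

The `seen` set is what makes this practical: citation graphs are full of shared references (here, both ref-a and ref-b cite ref-c), so deduplicating before fetching keeps the traversal from re-downloading the same articles.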
I’ve taken a little while to get this post done because I’ve been waiting for my recently published article to go from online-first to being citeable with volume and page numbers.
Last year, I was asked to write an editorial on the topic of industry influence on clinical evidence for the Journal of Epidemiology & Community Health, presumably after I published a few articles on the topic in early 2012. It’s an area of evidence-based medicine that is very close to my heart, so I jumped at the offer.
It took quite a bit of time to find a way to set out the entire breadth of the evidence process – from the design of clinical trials all the way through to the uptake of synthesised evidence in practice. In the intervening period, I won an NHMRC grant to explore patterns of evidence and risks of bias in much more detail, and the theme of evidence surveillance as an entire stream of research started to emerge.
Together with Florence Bourgeois and Enrico Coiera, I reviewed nearly the whole process of evidence production, reporting, and synthesis, identifying the ways in which large pharmaceutical companies can affect the direction of clinical evidence.
It’s a huge problem because industry influence can lead to the widespread use of unsafe and ineffective drugs, as well as the more subtle problems associated with ‘selling sickness’. Even if 90% of the drugs taken from development to manufacture and marketing are safe, useful and improve health around the world, there’s still that 10% that in hindsight should never have been approved in the first place.
My aim is to find them, and to do so faster than has been possible in the past. It’s what we’ve started to call evidence surveillance around here (thanks Guy Tsafnat), and that’s also what we proposed in the last section of the article.
Note: If you can’t access the full article via the JECH website, you can always have a look at the pre-print article available here on this website. It’s nearly exactly the same as the final version.
NetSci 2011 is nearly here, and this year’s conference will be held in Budapest, Hungary from June 6th to 11th (or thereabouts). The conference is an interesting and important one because of the calibre of the scientists involved – certainly most of the big guns will be there. Considering the recent paper in Nature on the “Controllability of complex networks”, I’m guessing there will be a lot of discussion about the practicalities of decentralised control. That work has direct relevance to my own work on understanding how research consensus evolves in a network of evidence.
The picture below is a sneak peek at the work I’ll be presenting at the conference. The network represents a very new application of network science, in the domain of pharmaceuticals, clinical trials, and evidence-based medicine. It should be very interesting to see what network scientists make of the systems of evidence that feed into clinical decision making, and I’m expecting a broad range of perspectives and levels of understanding.