tl;dr: we have a new article on conflicts of interest, published today in JAMA.
Imagine you are attempting to answer a question about your health or the health of someone in your care. The answer isn’t immediately obvious, so you search online, find two relevant articles from reputable journals, print them out, and put them both down on a table to read. For the moment, don’t worry whether the articles are clinical studies or systematic reviews [and also don’t worry that nobody actually prints things out any more, you’re ruining the imagery]. You flip through both of them and find that they are very similar in design, but their conclusions are different. How do you choose which one to base your decision on?
You then discover that one of the articles was written by authors who have received funding and consulting fees from the company that stands to profit from the decision you will end up making; the other article was not. Would that influence your decision?
Financial conflicts of interest matter. On average, articles written by authors with financial conflicts of interest are different from other articles. Studies on the topic have concluded that financial conflicts of interest have contributed to delays in the withdrawals of unsafe drugs, are an excellent predictor of the conclusions of opinion pieces, and are even associated with conclusions in systematic reviews.
The consequences: around a third of the drugs that are approved will eventually have a safety warning added to them or be withdrawn, and it typically takes more than four years after approval for those issues to be discovered. Globally, we spend billions of extra dollars on drugs, devices, tests, and operations that are neither safer nor more effective than the less expensive care they replaced.
Articles written by authors with financial conflicts of interest are sometimes just advertising masquerading as research. The problem is that if we look at an article in isolation, we usually can’t tell whether it belongs in the sometimes category or not. I will come back to this problem at the end but first…
We might not have found all of the financial conflicts of interest. In the research that was published today, we estimated the prevalence of financial conflicts of interest across all recently published biomedical research articles. To do that we sampled articles from the literature (as opposed to sampling journals). To be included, the articles had to have been published in 2017 and they had to have been published in a journal that was both indexed by PubMed and listed with the International Committee of Medical Journal Editors (which means they are supposed to adhere to standards for disclosing conflicts of interest). We also excluded special types of articles including news, images, poetry, and other things that do not directly report or review research findings.
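To make that screening step concrete, here is a minimal sketch of the inclusion screen. The field names and the exclusion set are hypothetical stand-ins for illustration; this is not the study's actual pipeline or PubMed's real metadata schema.

```python
# Minimal sketch of the inclusion screen described above. The record
# fields and EXCLUDED_TYPES are hypothetical stand-ins, not the
# study's actual pipeline or PubMed's real metadata schema.

EXCLUDED_TYPES = {"news", "images", "poetry"}  # illustrative; the real exclusion list was longer

def include_article(record: dict) -> bool:
    """Keep articles published in 2017, indexed by PubMed, in an
    ICMJE-listed journal, and not of an excluded special type."""
    return (
        record.get("year") == 2017
        and record.get("in_pubmed", False)
        and record.get("icmje_listed", False)
        and record.get("article_type") not in EXCLUDED_TYPES
    )

candidates = [
    {"year": 2017, "in_pubmed": True, "icmje_listed": True, "article_type": "trial"},
    {"year": 2017, "in_pubmed": True, "icmje_listed": True, "article_type": "poetry"},
    {"year": 2016, "in_pubmed": True, "icmje_listed": True, "article_type": "review"},
]
sample = [a for a in candidates if include_article(a)]  # keeps only the trial
```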
We will definitely have missed some conflicts of interest that should have been disclosed but weren’t for one reason or another [This is common – there were even missing disclosures in at least one of the articles published in JAMA this week]. It may be that authors failed to disclose all of their relevant conflicts of interest or that journals failed to include authors’ disclosures in the full text of the articles. Some of these will be in the 13.6% of articles that did not include any statement about conflicts of interest, but my guess is that most of them will be among the 63.6% of articles that disclosed “none”, and that they were missing because of laziness, lack of communication among co-authors and journals, or because authors or journals do not understand what a conflict of interest is.
That disclosures are likely to be missing information is precisely why we need an open, machine-readable system for listing the financial interests of every published author of research, and not just the subset of practicing physicians from a single country. More on this later but first…
Articles from authors with conflicts of interest get more attention. We found that articles written by authors with financial conflicts of interest tend to be published in journals with higher impact factors, and have higher Altmetric scores compared to articles where none of the authors have financial conflicts of interest.
You are clever, so the first thing you are likely to say when you read that statement is: “Isn’t that because journals with higher impact factors publish more big randomised controlled trials, so the mix of study types is different?” A good question, but we also checked within the different categories of articles and the difference still holds. For example, a drug trial written by someone with a financial conflict of interest is, compared to a drug trial without one, much more likely to be in a journal with a higher impact factor, to have been written about in mainstream news media, and to have been tweeted about far more often.
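Here is a toy sketch of what that within-category check looks like: compare conflicted and non-conflicted articles inside each article type, rather than across the pooled sample. Every number below is invented; this is not the study's actual analysis.

```python
# Toy sketch of the within-category check: compare journal impact
# factors for conflicted vs. non-conflicted articles inside each
# article type. All numbers are invented for illustration.
from collections import defaultdict
from statistics import median

articles = [
    {"type": "drug trial",  "coi": True,  "impact_factor": 19.3},
    {"type": "drug trial",  "coi": True,  "impact_factor": 11.8},
    {"type": "drug trial",  "coi": False, "impact_factor": 4.1},
    {"type": "case report", "coi": True,  "impact_factor": 2.9},
    {"type": "case report", "coi": False, "impact_factor": 2.2},
]

strata = defaultdict(lambda: {True: [], False: []})
for a in articles:
    strata[a["type"]][a["coi"]].append(a["impact_factor"])

for article_type, groups in strata.items():
    if groups[True] and groups[False]:
        print(f"{article_type}: COI median IF {median(groups[True])}, "
              f"no-COI median IF {median(groups[False])}")
```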
There are a couple of potential consequences.
If you are keeping track of conflicts of interest in the research you read about in the news and on your social feeds, it might feel like most biomedical research is funded by industry or written by researchers who take money from industry. If you believe that financial conflicts of interest are associated with bias and problems with integrity, then seeing conflicts of interest in most of what you read might make you feel distrustful of research generally. Add that to the sensationalism and over-generalisations that are common in health reporting in the news and on social media, and it is unsurprising that sections of the public take a relatively dim view of biomedical research.
In reality though, most biomedical articles are written by people like me, who have no financial relationship with any industry that might profit from making a drug, device, diagnostic tool, diet, or app look safer and more effective than it really is. So we hope this new research can serve as a reminder that most research is undertaken by researchers who have no financial conflict of interest to influence the ways they design and report their experiments.
Being published in a higher impact journal also tends to mean more attention from the research community, regardless of the quality and integrity of the research. While I can’t tell you about it yet, we have more evidence to show that trials in higher impact journals make their way into systematic reviews more quickly than trials in lesser-known journals. More attention from within the research community might create a kind of amplification bias, where the results and conclusions of articles written by researchers with financial conflicts of interest have greater influence in reviews, systematic reviews, editorials, guidelines, policies, and other technologies used to help clinicians make decisions with their patients.
It might seem impossible but there are steps we can take to improve the system. I have already written quite a bit about why we need to make financial conflicts of interest more transparent and accessible, including in a review we published in Research Integrity and Peer Review as well as a thing I wrote for Nature. It is worth restating the argument here to explain why disclosure alone is not enough to fix the problem.
Back to our first problem again. What are we actually supposed to do with an article if we find that an author has a financial conflict of interest? Completely discounting the research would be a mistake and a serious waste of time and effort (not to mention the risks taken on by trial participants). But trusting it completely would also be a problem, because we know there is a real risk that the research may have been designed specifically to produce a favourable conclusion; that it might be the visible tip of an iceberg under which sits a large mass of unpublished and less favourable results; that the outcomes reported were only the ones that made the intervention look good; or that the conclusions obfuscate safety issues and exaggerate efficacy.
I’ve spent many years looking for exactly these issues in clinical studies and systematic reviews and I still have trouble identifying them quickly. And the evidence shows that even the best systematic reviewers with the best tools for measuring risks of bias still can’t explain why industry-funded studies are more often favourable than their counterparts.
The current reality is that we use financial conflicts of interest as a signal that we need to be more careful when appraising the integrity of what we are reading. It’s not a great signal but the alternative is to spend hours trying to make sense of the research in the context of all the other research answering the same questions.
This gives us a hint at where to go next. If financial conflicts of interest are a poor signal of the integrity of a published article, then we need a better signal. To do this, we need to make sure that conflict of interest disclosures have the following characteristics:
- Complete: missing and incorrect disclosure statements mean that any study of how conflicted and non-conflicted articles differ is polluted by noise. To make disclosures complete, all authors of research (not just physicians in the US) could declare them in a public registry. That would have the added bonus of saving lots of time when publishing an article.
- Categorised: a taxonomy for describing different types of conflicts of interest would also improve the quality of the signal. A researcher who relies entirely on funding from a pharmaceutical company is different from a researcher who gets paid to travel to conferences to talk about how great a new drug is, and both are very different from a researcher who once went to a dinner that was partially sponsored by a company.
- Publicly accessible: there are companies trying to build the infrastructure to capture conflict of interest disclosures per author but they are not public. I think we should couple ORCID and CrossRef to store and update records of conflicts of interest for all researchers.
- Machine-readable: extracting conflicts of interest, classifying them according to a taxonomy, and identifying associations with conclusions or measures of integrity is incredibly time-consuming (trust me), so if we really want to quantify the difference between articles with and without different types of conflicts of interest, we have to be able to do it with data mining methods. A sketch of what such a record might look like follows this list.
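As promised, here is one possible shape for such a record. Everything in it is hypothetical: the field names, the taxonomy labels, and the idea that ORCID or CrossRef would store it. The point is only that a complete, categorised, public, machine-readable disclosure is a small and simple object.

```python
# A hypothetical disclosure record with the four properties above:
# complete (keyed to an author's ORCID iD and an article's DOI),
# categorised (one label from a simple taxonomy), publicly accessible
# (plain data, no proprietary wrapper), and machine-readable (JSON).
# Neither ORCID nor CrossRef currently stores records like this.
from dataclasses import dataclass, asdict
import json

# Illustrative taxonomy only; a real one would need community agreement.
COI_TYPES = {"research_funding", "consulting_fees", "speaker_fees", "travel", "hospitality"}

@dataclass
class Disclosure:
    orcid: str      # author identifier
    doi: str        # article the disclosure is attached to
    entity: str     # company or funder involved
    coi_type: str   # one label from COI_TYPES
    year: int

record = Disclosure(
    orcid="0000-0002-1825-0097",  # ORCID's documentation example iD
    doi="10.1000/example.doi",    # placeholder DOI
    entity="ExamplePharma Ltd",   # invented entity
    coi_type="consulting_fees",
    year=2017,
)
assert record.coi_type in COI_TYPES
print(json.dumps(asdict(record)))  # ready for data mining
```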
Together those elements will make it possible to measure, much more precisely, the uncertainty introduced by the presence of financial conflicts of interest in a cohort of articles. It won’t tell you whether the article you are reading is reliable, but it can tell you how often, historically, articles with the same types of conflicts of interest had conclusions that were substantially different from cohorts of equivalent articles without conflicts of interest.
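A toy example of the kind of historical base rate that would become computable (all records invented; the "more favourable" flag is a placeholder for whatever divergence measure such a study would actually use):

```python
# Toy sketch of the historical base rate described above: for a given
# COI type, how often did past articles with that COI reach conclusions
# more favourable than matched articles without one? All records are
# invented and the outcome field is a placeholder for a real measure.
past_articles = [
    {"coi_type": "consulting_fees", "more_favourable": True},
    {"coi_type": "consulting_fees", "more_favourable": True},
    {"coi_type": "consulting_fees", "more_favourable": False},
    {"coi_type": "travel",          "more_favourable": False},
]

def divergence_rate(records, coi_type):
    cohort = [r for r in records if r["coi_type"] == coi_type]
    return sum(r["more_favourable"] for r in cohort) / len(cohort)

print(divergence_rate(past_articles, "consulting_fees"))  # ~0.67 in this toy cohort
```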
Going back to the two imaginary articles sitting on your table, now imagine that we have our public registry and the tools you need to precisely label the risk of the articles relative to their designs and the authors’ financial conflicts of interest. Instead of wondering, with no guidance, which to base your decision on, you can quickly determine that the conflicts of interest are of the type that typically places the work firmly in the sometimes category; it was incredibly unlikely that the authors could have published a study that didn’t unequivocally trumpet the safety and efficacy of the new expensive thing. Armed with that information, you down-weight the conclusions from that article and base your decision on the more cautious conclusions of the other.