Neuropsych trials involving kids are designed differently when funded by the companies that make the drugs

Over the short break between 2013 and 2014, we had a new study published looking at the designs of neuropsychiatric clinical trials that involve children. Because we study trial registrations rather than publications, many of the trials included in the study are yet to be published, and it is likely that quite a few never will be.

Neuropsychiatric conditions are a big deal for children and make up a substantial proportion of the burden of disease. Over the last decade or so, more and more drugs have been prescribed to children to treat ADHD, depression, autism spectrum disorders, seizure disorders, and a few others. The major problem we face in this area right now is the lack of evidence to help guide the decisions that doctors make with their patients and their patients’ families. Should kids be taking Drug A? Why not Drug B? Maybe a behavioural intervention? A combination of these?

I have already published a few things about how industry and non-industry funded clinical trials are different. To look at how clinical trials differ based on who funds them, we often use the registry, which currently provides information for about 158K registered trials and is made up of about half US trials, and half trials that are conducted entirely outside the US.
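As a rough illustration of the kind of analysis involved, here is a minimal sketch of tallying registered trials by sponsor class from a registry export. The column names and rows are hypothetical, not the registry's actual schema:

```python
from collections import Counter

def tally_by_funder(rows):
    """Count trials per sponsor class from an iterable of dict rows.

    Assumes each row carries a "sponsor_class" field such as
    "Industry" or "Other" (illustrative labels, not a real schema).
    """
    return Counter(row["sponsor_class"] for row in rows)

# Hypothetical sample of registry records:
sample = [
    {"trial_id": "T1", "sponsor_class": "Industry"},
    {"trial_id": "T2", "sponsor_class": "Other"},
    {"trial_id": "T3", "sponsor_class": "Industry"},
]
print(tally_by_funder(sample))  # Counter({'Industry': 2, 'Other': 1})
```

In practice the classification step is the hard part; registry sponsor fields are free text and often need manual curation before a tally like this is meaningful.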

Some differences are generally expected (by cynical people like me) because industry and non-industry groups decide to do a trial for different reasons in the first place. We expect industry trials to be more likely to look at the sponsor’s own drugs, to be shorter, to focus more on the direct outcomes related to what the drug claims to do (e.g. lowering cholesterol rather than reducing cardiovascular risk), and of course to be designed so that they nearly always produce a favourable result for the drug in question.

For non-industry groups, there is a kind of hope that clinical trials funded by the public will be for the public good – to fill in the gaps by doing comparative effectiveness studies (where drugs are tested against each other, rather than against a placebo or in a single group) whenever they are appropriate, to focus on the real health outcomes of the populations, and to be capable of identifying risk-to-benefit ratios for drugs that have had questions raised about safety.

The effects of industry sponsorship on clinical trial designs for neuropsychiatric drugs in children

So the differences you might expect to see between industry and non-industry are not quite what we found in our study. For clinical trials that involve children and test drugs used for neuropsychiatric conditions, there really isn’t much difference between what industry chooses to study and what everyone else does. Even though we did find that industry is less likely to undertake comparative effectiveness trials for these conditions, and the two groups tend to study completely different drugs, the striking result is just how little comparative effectiveness research is being done by either group.


A network view of the drug trials undertaken for ADHD by industry (black) and non-industry (blue) groups – each drug is a node in the network; lines between them are the direct comparisons from trials with active comparators.
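The kind of network shown in the figure can be built with very little code. This is an illustrative sketch (the drug names and trials are made up): each trial contributes a node per active drug and an edge for every pair of drugs compared head-to-head, while single-arm and placebo-controlled trials add nodes but no drug-drug edges.

```python
from collections import defaultdict
from itertools import combinations

def comparison_network(trials):
    """trials: list of active-drug lists, one per trial.

    Returns (nodes, edges) where nodes is the set of drugs and
    edges maps sorted drug pairs to counts of direct comparisons.
    """
    nodes, edges = set(), defaultdict(int)
    for arms in trials:
        nodes.update(arms)
        for a, b in combinations(sorted(set(arms)), 2):
            edges[(a, b)] += 1  # one direct comparison per pair per trial
    return nodes, dict(edges)

# Hypothetical ADHD trials:
trials = [
    ["methylphenidate", "atomoxetine"],  # head-to-head: adds an edge
    ["methylphenidate"],                 # placebo-controlled: node only
    ["methylphenidate", "atomoxetine"],
]
nodes, edges = comparison_network(trials)
# edges == {('atomoxetine', 'methylphenidate'): 2}
```

A sparse, disconnected network produced this way is exactly the signature of a field dominated by placebo-controlled and single-group trials.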

To make a long story short, it doesn’t look like either side is doing a very good job of systematically addressing the questions that doctors and their patients really need answered in this area.

Some of the reasons for this probably include the way research is funded (small trials might be easier to fund and undertake), the difficulties associated with acquiring ethics approval and recruiting children to be involved in clinical trials, and the complexities of testing behavioural therapies and other non-drug interventions against and alongside drugs.

Of course, there are other good reasons for undertaking trials that involve a single group or only test against a placebo (including safety and ethical reasons)… but for conditions like seizure disorders, where there are already approved standard therapies that are known to be safe, it is quite a shock to see that nearly all of the clinical trials undertaken for seizure disorders in children are placebo-controlled or are tested only in a single group.

What should be done?

To improve the way we produce and then synthesise evidence for children, we need much more cooperation and smarter trial designs that will actually fill the gaps in knowledge and help doctors make good decisions. It’s true that it is very hard to fund and successfully undertake a big coordinated trial even when it doesn’t involve children, but the mess of clinical trials being undertaken today often seems to serve other purposes – to get a drug approved, to expand a market, to fill a clinician-scientist’s CV – or is constrained to the point where the design is too limited to be really useful. And these problems flow directly into synthesis (systematic reviews and guidelines) because you simply can’t review evidence that doesn’t exist.

I expect that long-term clinical trials that take advantage of electronic medical records, retrospective trials, and observational studies involving heterogeneous sets of case studies will come back to prominence for as long as the evidence produced by current clinical trials is hampered by compromised design, resource constraints, and a lack of coordinated cooperation. We really do need better ways to know which questions need to be answered first, and to find better ways to coordinate research among researchers (and patient registries). Wouldn’t it be nice if we knew exactly which clinical trials are most needed right now, and we could organise ourselves into large-enough groups to avoid redundant and useless trials that will never be able to improve clinical decision-making?

Learning from “Learning from Hackers”

Alongside colleagues (Enrico Coiera and Richard Day) from here in Sydney and (Kenneth Mandl) from near Boston in the US, I wrote an article for Science Translational Medicine in which we related the current system of “clinical trial evidence translation” to the very successful open source software movement. We highlighted the factors in that success – open access, incentives for participation, and interoperability of source code.

In the article, we drew parallels between the production of source code for open source software and the “source code” of clinical trials – the patient level data that says how well an intervention worked for each patient. If the source code of clinical trials were to be made more widely available, we could start to answer much more interesting questions, more accurately. We think it has the potential to dramatically improve the speed at which we detect unsafe drugs, and help doctors provide the right drugs to the right patients.

Just so that I can keep a record, here is a rough timeline of what happened in the media after the article was published:

  • The article was published in Science Translational Medicine on the 2nd of May in the US (early am on the 3rd in Sydney time).
  • The article was covered by the Sydney Morning Herald on page 15 of Thursday’s (3rd May 2012) edition.
  • Joshua Gliddon was very quick to call me up and have a chat about the article, writing a nice piece about it at
  • Enrico Coiera and I wrote a piece for the Sydney Morning Herald’s National Times talking about the article in more detail (published online on the 4th May 2012).
  • The article was also covered by the Higher Education section of The Australian on Friday (4th May 2012).
  • Australian Life Scientist collected a wide selection of information and wrote a summary of the article and our comments (the first recorded example of the phrase “all information should be free” that I found was in Levy’s Hackers, published in 1984, which would pre-date Woz, I believe).
  • @RyanMFierce found irony in the publication because it argues for open data and was published behind a paywall.
  • The article was mentioned in the introduction to a piece on sharing in genetics on The Conversation (an excellent outlet), which quickly became the most read article on the website (3rd May 2012).
  • A summary of the SMH National Times story and the article appeared on Open Health News (4th May 2012).
  • Here is the original media release from UNSW.

Hopefully once this burst of activity falls away, it will leave some lasting resonance and help convince a few people to think harder about how we can fix the problems of evidence translation.

I learnt a couple of lessons from the media activity surrounding the publication. Firstly, I learnt that it is impossible to control the message from your own work – people will read whatever they want and will probably focus on sections you thought were less important. There’s nothing you can do about it other than to faithfully represent your work and push your own agenda. I also learnt that there is a wide and diverse group of people already dealing with open access issues in clinical trial data – many more than I originally realised when I wrote the piece.

The next steps in the research will include learning about how far we can push the limits of patient-level meta-analysis by pooling clinical trial data in clever ways, while maintaining rigorous de-identification. Eventually we may even be able to automate the rapid integration of new evidence into organic, linked and dynamic systematic reviews and guidelines, customised for groups or even individuals.
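The simplest form of the pooling mentioned above is the standard inverse-variance fixed-effect estimate, which combines per-trial effect estimates weighted by their precision. This sketch is a textbook formula, not a method from the article, and the trial numbers are hypothetical:

```python
import math

def pooled_effect(effects, ses):
    """Fixed-effect inverse-variance meta-analysis.

    effects: per-trial effect estimates (e.g. mean differences)
    ses: their standard errors
    Returns (pooled estimate, pooled standard error).
    """
    weights = [1.0 / se**2 for se in ses]      # precision weights
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total
    return pooled, math.sqrt(1.0 / total)

# Three hypothetical trials of the same comparison:
est, se = pooled_effect([-0.30, -0.10, -0.20], [0.10, 0.20, 0.10])
# est ≈ -0.233, se ≈ 0.067
```

Patient-level pooling goes well beyond this (it allows subgroup and interaction analyses that summary statistics can’t support), which is exactly why access to the underlying data matters.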

Who creates the clinical evidence for cholesterol-lowering drugs?

Last week the US Food and Drug Administration released new warnings about the use of statins for patients in the United States. The warnings that have been added to labels in the US come from worries about liver injury, memory-loss and confusion, increased blood sugar levels and some new potentially dangerous interactions between one statin (lovastatin) and a range of other drugs.

Statins are used to inhibit the production of cholesterol in the body. The leading drug in the class, atorvastatin (Lipitor, from Pfizer) is the most-commonly prescribed drug in the world. In the last financial year in Australia, this drug alone cost the government AUD$637M. More than 14 out of every 100 people in Australia were taking a statin each day.

These are a class of drugs that have been around for a long, long time. Simvastatin, one of the oldest in the class, was first approved in the United States more than twenty years ago. So why are new restrictions being put on the labels of the statins now? And shouldn’t the public have been warned about these safety issues a long time ago?

It’s easy to blame the pharmaceutical industry, given its previous problems with marketing, conflicts of interest and illegal behaviour, but of course the reality is always muddier and more difficult to understand than we’d like it to be. The reality is that the funding of clinical trials and the influence of industry on trial design is a spectrum – and there are different problems all the way along.

Along with some friends, I recently published a paper in Clinical Pharmacology & Therapeutics about clinical trials for antihyperlipidemics.

My friends were colleagues from my own centre, from St Vincent’s Hospital, the Children’s Hospital Informatics Program in Boston, and the Hospital for Sick Kids in Toronto. In the paper, we looked at all the trials that have been registered for these drugs since 2007. Some of these trials had already been completed, some were underway, and some were still recruiting. The different patterns of trials funded by industry and non-industry tell an interesting story about the different agendas.

Distribution of trial comparisons across drugs and outcomes

And there were some surprising results.

Trials funded by industry were (and are) typically larger and completed more quickly. They are also more likely to focus on hyperlipidemia rather than cardiovascular outcomes, and less likely to measure safety-related outcomes. The surprise was that industry-funded trials were more likely to register trials that directly compare two or more drugs.

This is a surprise because studies looking at publications of clinical trials find exactly the opposite – that published industry-funded clinical trials are less likely to compare between drugs. This gives us a pretty good hint about which of the trials are being undertaken and then not published.

The next surprise was that industry and non-industry trials had very similar patterns when it came to choosing which drugs to include. Despite some specific differences in the choice of drugs, publicly funded trials were just as skewed towards statins (and atorvastatin in particular), and even more likely to test a drug against a placebo instead of another drug.

The work suggests or confirms the need to answer the following questions about clinical research in the area of cardiovascular risk and hyperlipidemia in particular:

  1. Why isn’t the pharmaceutical industry compelled to measure safety outcomes more often in clinical trials?
  2. What happens to the data from the comparative effectiveness trials undertaken by the pharmaceutical industry when they aren’t published?
  3. Why aren’t public funds directed more aggressively towards comparative effectiveness research, and towards interventions for which there isn’t already a glut of clinical trials being undertaken?

I think we need to be monitoring the clinical trials registries more closely when guiding research funding for clinical trials.


We’ve recently published a new paper in Clinical Pharmacology & Therapeutics that looks at all the recent clinical trials involving cholesterol-modifying drugs. Specifically, we examined the differences between industry and non-industry funded trials in terms of their design. We wanted to know how the research agendas differ across the funding spectrum, and how that affects their contribution to comparative effectiveness research.

We found that industry-funded trials were more likely to compare between drugs (a surprise given what we know about published clinical trials), were larger and finished sooner, and were less likely to examine cardiovascular risk or to measure safety outcomes.

Of exceptional importance – connecting patients to research

For quite some time, I’ve been very interested in the huge disconnect between the research being undertaken and the questions that people (especially patients and doctors) need answered.

The NEJM has published a short piece on a very well-funded institute, The Patient-Centered Outcomes Research Institute (PCORI), which will use about $500 million each year to provide the evidence that is most important to patients. Their basic aim is to help people make informed healthcare decisions. Even more interesting to me is that the institute plans to “deploy the full arsenal”, which includes not only clinical trials, but also analysis of registries and other databases, as well as data syntheses (read: meta-analysis and review).

A lot can be done with $500 million, to improve some of the major causes of morbidity in the US, which will then have a direct impact on the rest of the world. Let’s hope the opportunity isn’t squandered.

Another approved drug may leave the market

Bevacizumab Treatment for Solid Tumors, February 2, 2011, Hayes 305 (5): 506 — JAMA

Avastin is a really expensive (think 50K a year) and dangerous (people die more often) drug that has been shown to have marginal positive effects on the progression of cancer for some patients. It is very difficult to weigh up efficacy, safety and cost for a drug of such extremes.

JAMA published a meta-analysis demonstrating the danger of the drug, and followed up with an opinion piece (linked above) that attempts to reconcile the issues and reach a conclusion. I think that enough noise has been made for that conclusion to be quite clear.

Ensuring safe and effective drugs: who can do what it takes?

A nice editorial about the kinds of data available from industry-funded clinical trials, which was published yesterday in the BMJ and written by an inter-continental group of authors.
