Neuropsych trials involving kids are designed differently when funded by the companies that make the drugs

Over the short break that divided 2013 and 2014, we published a new study looking at the designs of neuropsychiatric clinical trials that involve children. Because we study trial registrations rather than publications, many of the trials included in the study are yet to be published, and quite a few probably never will be.

Neuropsychiatric conditions are a big deal for children and make up a substantial proportion of the burden of disease. Over the last decade or so, more and more drugs have been prescribed to children to treat ADHD, depression, autism spectrum disorders, seizure disorders, and a few others. The major problem we face in this area right now is the lack of evidence to guide the decisions that doctors make with their patients and their patients’ families. Should kids be taking Drug A? Why not Drug B? Maybe a behavioural intervention? A combination of these?

I have already published a few things about how industry-funded and non-industry-funded clinical trials differ. To look at how clinical trials differ based on who funds them, we often use the clinicaltrials.gov registry, which currently holds information for about 158,000 registered trials, roughly half conducted in the US and half entirely outside it.
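The basic comparison is simple: group each registered trial by the funder class recorded against its lead sponsor, and tally them. A minimal sketch of that step is below; the records are invented for illustration, and the `sponsor_class` field name stands in for the registry’s lead-sponsor “agency class” field.

```python
from collections import Counter

# Invented example records -- real ClinicalTrials.gov entries record an
# "agency class" for the lead sponsor (e.g. INDUSTRY, NIH, OTHER).
trials = [
    {"nct_id": "NCT00000001", "condition": "ADHD", "sponsor_class": "INDUSTRY"},
    {"nct_id": "NCT00000002", "condition": "ADHD", "sponsor_class": "NIH"},
    {"nct_id": "NCT00000003", "condition": "Epilepsy", "sponsor_class": "OTHER"},
    {"nct_id": "NCT00000004", "condition": "Depression", "sponsor_class": "INDUSTRY"},
]

def funder_group(record):
    """Collapse the registry's agency classes into the two groups we compare."""
    return "industry" if record["sponsor_class"] == "INDUSTRY" else "non-industry"

counts = Counter(funder_group(t) for t in trials)
print(counts)  # Counter({'industry': 2, 'non-industry': 2})
```

Everything downstream (comparing design features, drugs studied, and comparator choices) is stratified on that two-way grouping.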

Some differences are generally expected (by cynical people like me) because industry and non-industry groups decide to do a trial for different reasons in the first place. We expect industry trials to be more likely to look at the sponsor’s own drugs, to be shorter, to focus on outcomes directly related to what the drug claims to do (e.g. lowering cholesterol rather than reducing cardiovascular risk), and of course to be designed to nearly always produce a favourable result for the drug in question.

For non-industry groups, there is a kind of hope that clinical trials funded by the public will be for the public good – to fill in the gaps by doing comparative effectiveness studies (where drugs are tested against each other, rather than against a placebo or in a single group) whenever they are appropriate, to focus on the real health outcomes of the populations, and to be capable of identifying risk-to-benefit ratios for drugs that have had questions raised about safety.

The effects of industry sponsorship on clinical trial designs for neuropsychiatric drugs in children

So those differences you might expect to see between industry and non-industry are not quite what we found in our study. For clinical trials that involve children and test drugs used for neuropsychiatric conditions, there really isn’t much difference between what industry chooses to study and what everyone else does. Even though we did find that industry is less likely to undertake comparative effectiveness trials for these conditions, and the different groups tend to study completely different drugs, the striking result is just how little comparative effectiveness research is being done by either group.


A network view of the drug trials undertaken for ADHD by industry (black) and non-industry (blue) groups – each drug is a node in the network; lines between them are the direct comparisons from trials with active comparators.
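The network in the figure can be sketched with a plain adjacency structure: each drug is a node, and each trial with an active comparator contributes an edge between the drugs it compares. The trials below are invented for illustration; the point is that placebo-controlled and single-group trials add nodes but no edges, which is why the real networks are so sparse.

```python
from itertools import combinations

# Invented example trials: each lists the drugs in its arms.
trials = [
    {"drugs": ["methylphenidate", "atomoxetine"], "placebo": False},  # head-to-head
    {"drugs": ["methylphenidate"], "placebo": True},                  # placebo-controlled
    {"drugs": ["lisdexamfetamine"], "placebo": False},                # single group
]

nodes = set()
edges = set()  # undirected drug-drug comparisons
for trial in trials:
    nodes.update(trial["drugs"])
    # Only trials with two or more drugs yield direct comparisons.
    for a, b in combinations(sorted(trial["drugs"]), 2):
        edges.add((a, b))

comparative = sum(1 for t in trials if len(t["drugs"]) >= 2)
print(len(nodes), len(edges), comparative)  # 3 nodes, 1 edge, 1 comparative trial
```

Counting edges relative to nodes gives a quick sense of how connected (or not) the evidence base is for a condition.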

To make a long story short, it doesn’t look like either side is doing a very good job of systematically addressing the questions that doctors and their patients really need answered in this area.

Some of the reasons for this probably include the way research is funded (small trials might be easier to fund and undertake), the difficulties of obtaining ethics approval and recruiting children into clinical trials, and the complexities of testing behavioural therapies and other non-drug interventions against and alongside drugs.

Of course, there are other good reasons for undertaking trials that involve a single group or only test against a placebo (including safety and ethical reasons)… but for conditions like seizure disorders, where there are already approved standard therapies that are known to be safe, it is quite a shock to see that nearly all of the clinical trials undertaken for seizure disorders in children are placebo-controlled or are tested only in a single group.

What should be done?

To really improve the way we produce and then synthesise evidence for children, we really need to consider much more cooperation and smarter trial designs that will actually fill the gaps in knowledge and help doctors make good decisions. It’s true that it is very hard to fund and successfully undertake a big coordinated trial even when it doesn’t involve children, but the mess of clinical trials that are being undertaken today often seem to be for other purposes – to get a drug approved, to expand a market, to fill a clinician-scientist’s CV – or are constrained to the point where the design is too limited to be really useful. And these problems flow directly into synthesis (systematic reviews and guidelines) because you simply can’t review evidence that doesn’t exist.

I expect that long-term clinical trials that take advantage of electronic medical records, retrospective trials, and observational studies involving heterogeneous sets of case studies will come back to prominence for as long as the evidence produced by current clinical trials is hampered by compromised design, resource constraints, and a lack of coordinated cooperation. We really do need better ways to know which questions need to be answered first, and to find better ways to coordinate research among researchers (and patient registries). Wouldn’t it be nice if we knew exactly which clinical trials are most needed right now, and we could organise ourselves into large-enough groups to avoid redundant and useless trials that will never be able to improve clinical decision-making?

Do people outside of universities want to read peer-reviewed journal articles?

I asked a question on Twitter about whether or not people actually tried to read the peer-reviewed journal articles (not just the media releases), and if they encountered paywalls when they tried. This is what happened:


In case you don’t want to read through the whole conversation, it turns out that every person who answered the question said that they have in the past tried to access peer-reviewed journal articles, and that they have been stopped by paywalls. Some said it happened all the time.

There is very little evidence to show the prevalence of access and blocked access by the “interested public” for peer-reviewed journal articles. Some people seem to assume that only other scientists (or whatever) would be interested in their work, or that everything the “public” need to know is contained in a media release or abstract.

I think the results tell us a lot about the consumption of information by the wider community, the importance of scientific communication, the problem with the myth that only scientists want to read scientific articles, and the great need for free and universal access to all published research.

So far, I’ve been collecting whatever evidence I can get my hands on relating to this question, especially in medicine, and I’ll add these pieces one by one below, in case you are interested.

  1. Open access articles are downloaded and viewed more often than other articles, even when they do not confer a citation advantage. This is seen as evidence that people not participating in publishing are accessing the information.
    Davis, P.M., Open access, readership, citations: a randomized controlled trial of scientific journal publishing. The FASEB Journal, 2011. 25(7): p. 2129-2134.
  2. A Pew Internet Report found that one in four people hit a paywall when searching for health information online. Perhaps more importantly, 58% of all people have looked for health information online (and in a country where only 81% use the Internet).
    http://www.pewinternet.org/Reports/2013/Health-online/Part-One/Section-9.aspx
  3. From a UNESCO report on the development and promotion of open access: “First, it is known that [people outside of academia] use the literature where it is openly available to them. For example, the usage data for PubMed Central (the NIH’s large collection of biomedical literature) show that of the 420,000 unique users per day of the 2 million items in that database, 25% are from universities, 17% from companies, 40% from ‘citizens’ and the rest from ‘Government and others’.”
    Swan A. Policy guidelines for the development and promotion of open access, United Nations Educational, Scientific and Cultural Organization, 2012, Paris, France. (Page 30). Available at: http://www.unesco.org/new/en/communication-and-information/resources/publications-and-communication-materials/publications/full-list/policy-guidelines-for-the-development-and-promotion-of-open-access/
    Of course, people accessing PubMed Central from domestic IP addresses might often be academics working late at night at home without a VPN (like I am doing now).

About fifty people responded to my question on Twitter. I realise that my audience is probably biased towards the highly-educated, informed, younger, and information-savvy, but I think there are clear and obvious groups of people outside of universities who would be interested in reading published research. These people include doctors, engineers and developers, parents of sick children, politicians and policy-makers, practitioners across a range of disciplines, museum curators, artists, and basically everyone with an interest in the world around them. That this aspect of open access hasn’t featured in many surveys or studies seems bizarre to me.

Perhaps most importantly, I think we need to know a lot more about just how often people outside of academia want to access published research, and if problems with access are stopping them from doing so.

Surely the impetus to move towards universal and open access to published research would grow if more academics realised that actually *everyone* wants access to the complicated equations, to the raw data and numbers, and to the authors’ own words about the breadth and limits of the research that they have undertaken.

How to do work-life balance: learn to say “no”

I wrote a piece for the Guardian’s Higher Education Network, all about the power of “no”. The piece was designed particularly with early-career researchers in mind, but there might be some resonance for researchers at other stages of their careers, and maybe even more widely.

I always struggle with turning down requests, which tends to make for an interesting, diverse, and very tiring career. I have absolutely enjoyed getting involved in a range of unusual and interesting research (and other) projects in the last six years but it has come at the cost of balance in my life. I’m pretty sure that I will continue to struggle with balancing work and whatever else it is that people are supposed to do when they are not working. At least writing about saying “no” has made me think about my own internal mechanisms for saying no.

In case you missed the link to the article I wrote, here it is: “Early career research: the power of ‘no’”

On open access – practical issues

Upulie Divisekera, prolific tweeter and all-around awesome scientist, wanted to write a thing about open access and was nice enough to ask me for some help. The result, which you can find on Crikey and read for free, captures the costs of publishing and the avenues through which journal publishers make obscene operating profits.

Long story short, it’s because the publishers have convinced academics to give them all their work for free, and to do the quality assurance tasks as well. Then they charge the same communities of academics for access, or they actually charge authors to give them their work for free in the first place. And through all of that, the costs of publishing have probably decreased, because everything is online now instead of in actual printed books that sit in libraries gathering dust. When we think about it like that, it doesn’t make the academics seem very smart. And it’s kind of true. I’ll explain why…

In case you didn’t catch the link to the article, which proved to be quite popular, here it is:
Why science doesn’t belong to everyone (yet)

After the cost of knowledge became a thing, more and more of mainstream academia started to think about the open access movement and jump on the golden bandwagon. Essentially, gold open access just shifts the costs of publishing from the library to the scientist, but the money comes from the public either way, so I personally don’t see how this sort of shake-up will have a direct effect on the actual cost of knowledge.

There’s a simpler approach that should be considered the responsibility of every research academic considering the submission of a piece of research. And that is to check the self-archiving rules for each journal (the SHERPA/RoMEO database makes this easy). It turns out that most of the decent journals to which we might consider submitting work already allow researchers to upload pre-prints (yellow) or post-prints (blue/green), some after a delay, and most of them will publish your work for free. The journals that don’t allow self-archiving are a small enough proportion that they are easily avoided without having to sacrifice readership, impact (and yes, your choice of journal does matter), or good old-fashioned prestige.

And then just let Google work its magic.

Soon enough, your pdf is available as one of “All X versions” on Google Scholar, linked directly to your institution’s (or your own) webpages. And if you are looking for an article of mine that is “behind a paywall”, google it first before you start bitching about it on the internet, because the post-print version is available at the click of a button.

So why aren’t people doing it properly already?

Because it’s not new. It’s not a buzzword. Blue- and green-coloured things might not be as desirable as gold. But it is important. And if you’re a researcher, you owe it to the public to know it and get it right, on time, every time.

Introducing evidence surveillance as a research stream

I’ve taken a little while to get this post done because I’ve been waiting for my recently-published article to go from online-first to being citeable with volume and page numbers.

Last year, I was asked to write an editorial on the topic of industry influence on clinical evidence for the Journal of Epidemiology & Community Health, presumably after I published a few articles on the topic in early 2012. It’s an area of evidence-based medicine that is very close to my heart, so I jumped at the offer.

It took quite a bit of time to find a way to set out the entire breadth of the evidence process – from the design of clinical trials all the way through to the uptake of synthesised evidence in practice. In the intervening period, I won an NHMRC grant to explore patterns of evidence and risks of bias in much more detail, and the theme of evidence surveillance as an entire stream of research started to emerge.

Together with Florence Bourgeois and Enrico Coiera, I reviewed nearly the whole process of evidence production, reporting, and synthesis, identifying nearly all the ways in which large pharmaceutical companies can affect the direction of clinical evidence.

It’s a huge problem because industry influence can lead to the widespread use of unsafe and ineffective drugs, as well as the more subtle problems associated with ‘selling sickness’. Even if 90% of the drugs taken from development to manufacture and marketing are safe, useful and improve health around the world, there’s still that 10% that in hindsight should never have been approved in the first place.

My aim is to find them, and to do so faster than has been possible in the past. It’s what we’ve started to call evidence surveillance around here (thanks Guy Tsafnat), and that’s also what we proposed in the last section of the article.

Note: If you can’t access the full article via the JECH website, you can always have a look at the pre-print article available here on this website. It’s nearly exactly the same as the final version.

Dealing with industry’s influence on clinical evidence

I co-wrote a piece for The Conversation about a new article published in the Cochrane Database of Systematic Reviews, written by Andreas Lundh and other luminaries from the research area. The authors showed that industry-sponsored clinical trials more often report positive outcomes and fewer harmful side effects.

The most interesting result from the article was that the biases that make industry-funded clinical trials more likely to produce positive results could not be accounted for using the standard tools that measure bias. This is amazing because it gives us a strong hint that industry funding is an independent source of heterogeneity in the systematic reviews that include industry-funded trials.

Too bad it’s the 12th of the 12th 2012 and the world is about to end. We won’t have time to sort it out.

(Feature image from AAP Image/Joe Castro via The Conversation – click the link)