Neuropsych trials involving kids are designed differently when funded by the companies that make the drugs

Over the short break between 2013 and 2014, we had a new study published that looks at the designs of neuropsychiatric clinical trials involving children. Because we study trial registrations rather than publications, many of the trials included in the study are yet to be published, and it is likely that quite a few never will be.

Neuropsychiatric conditions are a big deal for children and make up a substantial proportion of the burden of disease. In the last decade or so, more and more drugs have been prescribed to children to treat ADHD, depression, autism spectrum disorders, seizure disorders, and a few others. The major problem we face in this area right now is the lack of evidence to help guide the decisions that doctors make with their patients and their patients’ families. Should kids be taking Drug A? Why not Drug B? Maybe a behavioural intervention? A combination of these?

I have already published a few things about how industry and non-industry funded clinical trials are different. To look at how clinical trials differ based on who funds them, we often use the registry, which currently provides information for about 158K registered trials, roughly half of them US trials and half conducted entirely outside the US.

Some differences are generally expected (by cynical people like me) because of the different reasons why industry and non-industry groups decide to do a trial in the first place. We expect that industry trials are more likely to look at their own drugs, the trials are likely to be shorter, more focused on the direct outcomes related to what the drug claims to do (e.g. lower cholesterol rather than reduce cardiovascular risk), and of course they are likely to be designed to nearly always produce a favourable result for the drug in question.

For non-industry groups, there is a kind of hope that clinical trials funded by the public will be for the public good – to fill in the gaps by doing comparative effectiveness studies (where drugs are tested against each other, rather than against a placebo or in a single group) whenever they are appropriate, to focus on the real health outcomes of the populations, and to be capable of identifying risk-to-benefit ratios for drugs that have had questions raised about safety.

The effects of industry sponsorship on clinical trial designs for neuropsychiatric drugs in children

So those differences you might expect to see between industry and non-industry are not quite what we found in our study. For clinical trials that involve children and test drugs used for neuropsychiatric conditions, there really isn’t much difference between what industry chooses to study and what everyone else does. Even though we did find that industry is less likely to undertake comparative effectiveness trials for these conditions, and that the different groups tend to study completely different drugs, the striking result is just how little comparative effectiveness research is being done by both groups.

[Figure: journal.pone.0084951.g003]

A network view of the drug trials undertaken for ADHD by industry (black) and non-industry (blue) groups – each drug is a node in the network; lines between them are the direct comparisons from trials with active comparators.
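The structure behind a figure like this is simple to sketch: each drug is a node, and every trial with an active comparator adds an edge between the two drugs it compared. Here is a minimal illustrative sketch in Python, using entirely hypothetical trial records (not data from the study):

```python
# Hypothetical trial records: (drug_a, drug_b, sponsor_type).
# These are made up for illustration and are not taken from the study.
from collections import defaultdict

trials = [
    ("methylphenidate", "atomoxetine", "non-industry"),
    ("methylphenidate", "dexamfetamine", "non-industry"),
    ("atomoxetine", "guanfacine", "industry"),
]

# Adjacency structure: drug -> set of drugs it was compared with head-to-head.
network = defaultdict(set)
for drug_a, drug_b, _sponsor in trials:
    network[drug_a].add(drug_b)
    network[drug_b].add(drug_a)

def direct_comparators(drug):
    """Drugs tested head-to-head against `drug` in at least one trial."""
    return sorted(network[drug])

# A sparse network (few edges relative to the number of drugs studied)
# signals a lack of comparative effectiveness evidence.
print(direct_comparators("methylphenidate"))  # ['atomoxetine', 'dexamfetamine']
```

Placebo-controlled and single-group trials add nodes but no edges, which is why the networks for these conditions look so disconnected.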

To make a long story short, it doesn’t look like either side is doing a very good job of systematically addressing the questions that doctors and their patients really need answered in this area.

Some of the reasons for this probably include the way research is funded (small trials might be easier to fund and undertake), the difficulties associated with acquiring ethics approval and recruiting children to be involved in clinical trials, and the complexities of testing behavioural therapies and other non-drug interventions against and with drugs.

Of course, there are other good reasons for undertaking trials that involve a single group or only test against a placebo (including safety and ethical reasons)… but for conditions like seizure disorders, where there are already approved standard therapies that are known to be safe, it is quite a shock to see that nearly all of the clinical trials undertaken for seizure disorders in children are placebo-controlled or are tested only in a single group.

What should be done?

To really improve the way we produce and then synthesise evidence for children, we really need to consider much more cooperation and smarter trial designs that will actually fill the gaps in knowledge and help doctors make good decisions. It’s true that it is very hard to fund and successfully undertake a big coordinated trial even when it doesn’t involve children, but the mess of clinical trials that are being undertaken today often seem to be for other purposes – to get a drug approved, to expand a market, to fill a clinician-scientist’s CV – or are constrained to the point where the design is too limited to be really useful. And these problems flow directly into synthesis (systematic reviews and guidelines) because you simply can’t review evidence that doesn’t exist.

I expect that long-term clinical trials that take advantage of electronic medical records, retrospective trials, and observational studies involving heterogeneous sets of case studies will come back to prominence for as long as the evidence produced by current clinical trials is hampered by compromised design, resource constraints, and a lack of coordinated cooperation. We really do need better ways to know which questions need to be answered first, and to find better ways to coordinate research among researchers (and patient registries). Wouldn’t it be nice if we knew exactly which clinical trials are most needed right now, and we could organise ourselves into large-enough groups to avoid redundant and useless trials that will never be able to improve clinical decision-making?

Do people outside of universities want to read peer-reviewed journal articles?

I asked a question on Twitter about whether or not people actually tried to read the peer-reviewed journal articles (not just the media releases), and if they encountered paywalls when they tried. This is what happened:


In case you don’t want to read through the whole conversation, it turns out that every person who answered the question said that they have in the past tried to access peer-reviewed journal articles, and that they have been stopped by paywalls. Some said it happened all the time.

There is very little evidence showing how often the “interested public” tries to access peer-reviewed journal articles, or how often they are blocked. Some people seem to assume that only other scientists (or whatever) would be interested in their work, or that everything the “public” needs to know is contained in a media release or abstract.

I think the results tell us a lot about the consumption of information by the wider community, the importance of scientific communication, the problem with the myth that only scientists want to read scientific articles, and the great need for free and universal access to all published research.

So far, I’ve been collecting whatever evidence I can get my hands on to relate to this question, especially in medicine, and I’ll add these pieces one by one below, just in case you are interested.

  1. Open access articles are downloaded and viewed more often than other articles, even when they do not confer a citation advantage. This is seen as evidence that people not participating in publishing are accessing the information. (Davis, P.M., Open access, readership, citations: a randomized controlled trial of scientific journal publishing. The FASEB Journal, 2011. 25(7): p. 2129-2134.)
  2. A Pew Internet Report found that one in four people hit a paywall when searching for health information online. Perhaps more importantly, 58% of all people have looked for health information online (and in a country where only 81% use the Internet).
  3. From a UNESCO report on the development and promotion of open access: “First, it is known that [people outside of academia] use the literature where it is openly available to them. For example, the usage data for PubMed Central (the NIH’s large collection of biomedical literature) show that of the 420,000 unique users per day of the 2 million items in that database, 25% are from universities, 17% from companies, 40% from ‘citizens’ and the rest from ‘Government and others’.” (Swan A. Policy guidelines for the development and promotion of open access, United Nations Educational, Scientific and Cultural Organization, 2012, Paris, France. Page 30.) Of course, people accessing PubMed Central from domestic IP addresses might often be academics working late at night at home without a VPN (like I am doing now).

About fifty people responded to my question on Twitter. I realise that my audience is probably biased towards the highly-educated, informed, younger, and information-savvy, but I think there are clear and obvious groups of people outside of universities who would be interested in reading published research. These people include doctors, engineers and developers, parents of sick children, politicians and policy-makers, practitioners across a range of disciplines, museum curators, artists, and basically everyone with an interest in the world around them. That this aspect of open access hasn’t been the feature of many surveys or studies seems bizarre to me.

Perhaps most importantly, I think we need to know a lot more about just how often people outside of academia want to access published research, and if problems with access are stopping them from doing so.

Surely the impetus to move towards universal and open access to published research would grow if more academics realised that actually *everyone* wants access to the complicated equations, to the raw data and numbers, and to the authors’ own words about the breadth and limits of the research that they have undertaken.

All that glitters is not gold: the fallacy of open access evangelism

All-or-nothing open access evangelism perpetuates the problems of scientific publishing. Writing in the Guardian, one advocate has even suggested that publishing behind a pay-wall is immoral. That form of evangelism is wrong – for now – and may do more harm than good.

Yes, there are clear advantages to gold open access. Chief among these is that everyone gets unfettered access to research. There is no doubt that access is fundamental to the way science works. Yet there is a trend towards a simplistic kind of open access evangelism that seems to be gaining traction in mainstream academia, and it has me worried.

Problem 1. It is keeping costs of doing research unsustainable

In its current form, paid gold open access only shifts the cost of access from the libraries to the researchers without making it affordable. Where researchers and libraries are both publicly funded, money flows from the same founts to the same drains, just through different pipes. For hybrid policies (subscription-based journals with paid open access options), the public can sometimes pay twice.

Traditional publishers appear comfortable with gold and hybrid open access. Elsevier manages approximately two thousand journals and nearly 80% have open access options, costing between USD$500 and USD$5000 per article. The average cost of publishing an article in open access is a little over USD$900, and for biomedical disciplines the costs are far higher than that. There are low- or no-cost gold open access alternatives but they are not as popular and do not confer the same levels of prestige.

If advocates (and working groups in the UK) continue to promote the idea that gold open access is the appropriate direction right now, publishing will remain too expensive for researchers with already restricted budgets, and the public will continue to fund ridiculously profitable publishing groups and shonky operators.

The obvious short-term solution, often proposed from within the open access community, is to focus on green open access (authors may freely distribute a version of the article), and dramatically improve the rate of self-archiving. Per article, library subscriptions may still be more expensive than open access fees, but a critical mass of self-archiving is a necessary step in the process.

The gap between the number of pay-walled articles that could be made available and the number that actually are is a shameful indictment of academia. In a soon-to-be-formally-published article, Björk et al. have shown that 81% of published articles could have been self-archived, yet the uptake is around 12%. The blame for the gap is aimed directly at academics.

The civil disobedience of #icanhazpdf (Twitter users requesting and receiving articles from academics with institutional access) is nearly always against institutional policy. No one would recommend the practice, even if tools like TOR and anonymous public dropbox accounts could safely preserve anonymity. But it does provide a hint for what could be done with the 50 million articles already out there. Just as the legitimate use of Spotify replaced the illegal use of Napster for sharing music, we may soon see a tool that will be viewed as both the saviour and destroyer of academic publishing.

Problem 2. It permits the widespread ruination of quality and rigour in research

It is not just the largest publishing houses that are embracing pay-to-publish open access models. There appears to be an endless supply of researchers willing to engage with nearly 9000 open access journals. The number of journals has doubled since 2009, with more than three new journals established every day. While there are paragons of quality among the top tiers of open access journals, many have instead duped researchers and watered down the global research endeavour with misconduct, plagiarism, and inadequate peer-review.

Hindawi, a borderline predatory publisher, is reported to have a higher profit margin (52%) than Elsevier (36%), suggesting that publishing offers exceptional returns on investment, better than mining and pharmaceuticals, and perhaps beaten only by illegal drug and human trafficking. Only a small fraction of open access journals have impact factors. Yes, there are gems among the bulk of gold open access journals (I’ve seen some), but many of these journals and the crap published in them only function to fill the research arena with work that adds very little to science, to innovation, and to improving society.

As a consequence of the publish-or-perish mentality and ease of publishing, scientific research has jumped way over the line from a curated library (little redundant information and sustained relevance) to the kind of vast streams of quickly-forgotten information that gives meaning to the phrase “sipping from the fire hose”.

This is where gold open access and creative commons (with attribution) licenses will eventually become vital. If academic publishing does reach a state in which access to all research is free, immediate and unrestricted, it will likely signal the biggest shift in science since the digital revolution. Many of the new research synthesis ideas involving algorithmic filtering, reanalysis, and meta-analysis are yet to be developed. Where I work, some of us have started to wander around the edges of these possibilities.

We may even be able to do away with journals completely. Michael Eisen, co-founder of PLoS, has argued that publishers add very little to the work done by researchers. Mathematicians have moved to take publishers out of the equation with the Episciences Project. In the future, scientists might open up their lab books and hard disks so that data and models can be freely shared, searched, recycled and linked together like vast open source software communities have been doing for years.

To get there, science needs better ways to attribute and praise individuals for discrete chunks of research. This is where altmetrics are expected to extend citation-based metrics to detail the full range of impact that research (not just publications) can have on scholarship and society. Using citations and journal impact factors to find good science is like trying to fish with explosives.

Lessons for researchers and research policy developers

While we wait for the future of research dissemination to emerge, there are simple ways in which academics can make sure they act in the best interests of the scientific community and the public.

Besides respecting alternative measurements of impact, funders should continue to mandate self-archiving through institutional repositories. Informaticians should investigate tools and motivations for sharing in line with the decentralised #icanhazpdf, the NIH manuscript submission system, Figshare, the Synaptic Leap, or the loophole that ResearchGate uses to encourage uploads to ‘personal’ pages.

Australians are relatively lucky when it comes to self-archiving. We have a mandate from the National Health and Medical Research Council and the Australian Research Council that all publicly-funded research must be shared, and we have seen experts in open access discussing the cost-effectiveness of gold and green open access early and clearly.

As a bare minimum, academics must make their contact details public to entertain requests for inaccessible articles, check journal policies on open access prior to submission, avoid the temptation of predatory open access journals, and most importantly:

dramatically improve the woeful record in self-archiving.

There is no direct route to an academic publishing future where publicly-funded research outputs are both libre and gratis. A sustainable trajectory requires a diversity of affordable ways to disseminate research widely, but it will only work if we can retain our grip on the processes that ensure rigour and quality, which are already being eroded.

[Image credit: The ENIAC]

How to do work-life balance: learn to say “no”

I wrote a piece for the Guardian’s Higher Education Network, all about the power of “no”. The piece was designed particularly with early-career researchers in mind, but there might be some resonance for researchers at other stages of their careers, and maybe even more widely.

I always struggle with turning down requests, which tends to make for an interesting, diverse, and very tiring career. I have absolutely enjoyed getting involved in a range of unusual and interesting research (and other) projects in the last six years but it has come at the cost of balance in my life. I’m pretty sure that I will continue to struggle with balancing work and whatever else it is that people are supposed to do when they are not working. At least writing about saying “no” has made me think about my own internal mechanisms for saying no.

In case you missed the link to the article I wrote, here it is: “Early career research: the power of ‘no’”

On open access – practical issues

Upulie Divisekera, prolific tweeter and all-around awesome scientist, wanted to write a thing about open access and was nice enough to ask me for some help. The result, which you can find on Crikey and read for free, captures the costs of publishing and the avenues through which journal publishers make obscene operating profits.

Long story short, it’s because the publishers have convinced academics to give them all their work for free, as well as do the quality assurance tasks. Then they charge the same communities of academics to access them, or they actually charge authors to give them their work for free in the first place. And through all of that, the costs of publishing have probably decreased because everything is online now, instead of in actual printed books that sit in libraries gathering dust. When we think about it like that, it doesn’t make the academics seem very smart. And it’s kind of true. I’ll explain why…

In case you didn’t catch the link to the article, which proved to be quite popular, then here it is.

Why science doesn’t belong to everyone (yet)

After the cost of knowledge became a thing, more and more of mainstream academia started to think about the open access movement and jumped on the golden bandwagon. Essentially, open access just shifts the costs of publishing from the library to the scientist, but the money comes from the public either way, so I personally don’t see how this sort of shake-up will have a direct effect on the actual cost of knowledge.

There’s a simpler approach that should be considered the responsibility of every research academic considering the submission of a piece of research. And that is to check the self-archiving rules for each journal. It turns out that most of the decent journals to which we might consider submitting work allow researchers to upload pre-prints (yellow) or post-prints (blue/green) already (some after a delay), and most of them will publish your work for free. The journals that don’t give researchers the ability to self-archive are a small enough proportion that they are easily avoided without having to sacrifice readership, impact (and yes, your choice of journal does matter) and good old-fashioned prestige.
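The check described above amounts to a simple lookup: which author version, if any, does the journal let you archive? Here is a hypothetical sketch; the policies dict is made up for illustration and is not real publisher data, so real policies should always be confirmed with the journal (or a policy database like SHERPA/RoMEO) before submitting:

```python
# Hypothetical journal policies, keyed by the most permissive version an
# author may self-archive. These entries are invented for illustration.
ARCHIVING_POLICIES = {
    "Journal A": "post-print",   # accepted manuscript (blue/green)
    "Journal B": "pre-print",    # submitted version only (yellow)
    "Journal C": None,           # no self-archiving allowed
}

def can_self_archive(journal, version="post-print"):
    """Return True if the journal permits archiving the given version.
    A journal that allows post-prints is assumed to allow pre-prints too."""
    allowed = ARCHIVING_POLICIES.get(journal)
    if allowed is None:
        return False
    if allowed == "post-print":
        return True
    return version == "pre-print"

print(can_self_archive("Journal A"))                # True
print(can_self_archive("Journal B", "post-print"))  # False
```

Running this check before submission, rather than after acceptance, is what makes the small proportion of non-archiving journals easy to avoid.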

And then just let Google work its magic.

Soon enough, your pdf is available as one of “All X versions” on Google Scholar, and will be linked directly to your institution’s (or your own) webpages. And if you are looking for an article of mine that is “behind a paywall”, google it first before you start bitching about it on the internet because the post-print version is available at the click of a button.

So why aren’t people doing it properly already?

Because it’s not new. It’s not a buzzword. Blue and green coloured things might not be as desirable as gold. But it is important. And if you’re a researcher, you owe it to the public to know it and get it right, on time, every time.

Introducing evidence surveillance as a research stream

I’ve taken a little while to get this post done because I’ve been waiting for my recently-published article to go from online-first to being citeable with volume and page numbers.

Last year, I was asked to write an editorial on the topic of industry influence on clinical evidence for the Journal of Epidemiology & Community Health, presumably after I published a few articles on the topic in early 2012. It’s an area of evidence-based medicine that is very close to my heart, so I jumped at the offer.

It took quite a bit of time to find a way to set out the entire breadth of the evidence process – from the design of clinical trials all the way through to the uptake of synthesised evidence in practice. In the intervening period, I won an NHMRC grant to explore patterns of evidence and risks of bias in much more detail, and the theme of evidence surveillance as an entire stream of research started to emerge.

Together with Florence Bourgeois and Enrico Coiera, I reviewed nearly the whole process of evidence production, reporting and synthesis, identifying nearly all the ways in which large pharmaceutical companies can affect the direction of clinical evidence.

It’s a huge problem because industry influence can lead to the widespread use of unsafe and ineffective drugs, as well as the more subtle problems associated with ‘selling sickness’. Even if 90% of the drugs taken from development to manufacture and marketing are safe, useful and improve health around the world, there’s still that 10% that in hindsight should never have been approved in the first place.

My aim is to find them, and to do so faster than has been possible in the past. It’s what we’ve started to call evidence surveillance around here (thanks Guy Tsafnat), and that’s also what we proposed in the last section of the article.

Note: If you can’t access the full article via the JECH website, you can always have a look at the pre-print article available here on this website. It’s nearly exactly the same as the final version.