Guerilla open access, public engagement with research, and ivory towers

Despite the growth of open access publishing, there is still a massive and growing archive of peer-reviewed research hidden behind paywalls. While academics can reach most of the research they need through library subscriptions, researchers, professionals and the broader community outside of academia are effectively cut off from the vast majority of peer-reviewed research. File-sharing communities transformed the entertainment industry more than fifteen years ago; is a similar transformation in academic publishing inevitable?

Together with Enrico Coiera and Ken Mandl, I published an article today in the Journal of Medical Internet Research. In the article, we considered the plausibility and consequences of a massive data breach and leak of journal articles onto peer-to-peer networks, and the creation of a functioning decentralised network of peer-reviewed research. Considering a hypothetical Biblioleaks scenario, we speculated on the technical feasibility and the motivations that underpin civil disobedience in academic publishing.

It appears as though academics are not providing pre-print versions of their articles anywhere near as often as they could. For every 10 articles published, 2 or 3 can be found online for free, but up to 8 of them could be uploaded by the authors legally (this is called self-archiving, where authors upload pre-print versions of their manuscripts). Civil disobedience in relation to sharing articles is still quite rare. Examples of article-sharing on Twitter and via torrents have emerged in the last few years, but only a handful of people are involved. There is not yet a critical mass of censorship-resistant sharing that would signal a shift into an era of near-universal access like we saw in the entertainment industry in the late 1990s.

However, as the public come to expect free access to research as the norm rather than the exception, an article-sharing underground seems more likely to emerge from outside academia. What is unknown is whether the public actually want to access peer-reviewed research directly. From the little evidence available on this question, it seems that doctors, patients, professionals of all kinds, and the broader community might all benefit from an underground network of article-sharing. It may even serve to reduce the gap between research consensus and public opinion on issues like climate change and vaccination, where large sections of the community disagree with the overwhelming majority of scientific experts.

Given the size of recent hacks on major companies, there appears to be no technical barriers to a massive data breach and leak. However, by removing the motivations behind a Biblioleaks scenario, publishers and researchers might be able to avoid (or skip over) a period of illegal file-sharing. University librarians could build the servers that would seed the torrents for pre-prints, helping to ensure quality control and improving the impact of the research in the wider community. Researchers can and should learn the self-archiving policies for all their work and upload their manuscripts as soon as they are entitled or obliged to do so. Prescient publishers might find ways to freely release older articles on their own websites to avoid losing traffic and advertising revenue.

Neuropsych trials involving kids are designed differently when funded by the companies that make the drugs

Over the short break that divided 2013 and 2014, we had a new study published looking at the designs of neuropsychiatric clinical trials that involve children. Because we study trial registrations and not publications, many of the trials that are included in the study are yet to be published, and it is likely that quite a few will never be published.

Neuropsychiatric conditions are a big deal for children and make up a substantial proportion of the burden of disease. In the last decade or so, more and more drugs are being prescribed to children to treat ADHD, depression, autism spectrum disorders, seizure disorders, and a few others. The major problem we face in this area right now is the lack of evidence to help guide the decisions that doctors make with their patients and their patients’ families. Should kids be taking Drug A? Why not Drug B? Maybe a behavioural intervention? A combination of these?

I have already published a few things about how industry and non-industry funded clinical trials are different. To look at how clinical trials differ based on who funds them, we often use the clinicaltrials.gov registry, which currently holds information on about 158,000 registered trials, roughly half conducted in the US and half conducted entirely outside it.
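The basic analysis behind this kind of study is straightforward: take the registry records and split them by funder type. A minimal sketch follows; the field names and the NCT identifiers are invented for illustration, loosely modelled on the "lead sponsor" metadata that ClinicalTrials.gov records carry, and are not the exact schema used in our study.

```python
# Hypothetical sketch: tallying registry records by funder type, in the
# spirit of comparing industry and non-industry trials. Field names and
# values below are illustrative, not the real registry schema.

def funder_class(record):
    """Return 'industry' if the lead sponsor is a company, else 'non-industry'."""
    return "industry" if record.get("lead_sponsor_class") == "INDUSTRY" else "non-industry"

def tally_by_funder(records):
    """Count trials in each funder category."""
    counts = {"industry": 0, "non-industry": 0}
    for record in records:
        counts[funder_class(record)] += 1
    return counts

# Invented example records
trials = [
    {"nct_id": "NCT00000001", "lead_sponsor_class": "INDUSTRY"},
    {"nct_id": "NCT00000002", "lead_sponsor_class": "NIH"},
    {"nct_id": "NCT00000003", "lead_sponsor_class": "OTHER"},
]
print(tally_by_funder(trials))  # {'industry': 1, 'non-industry': 2}
```

Everything else in these studies – which drugs were tested, against what comparators, over what duration – is layered on top of this simple split.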

Some differences are generally expected (by cynical people like me) because of the different reasons why industry and non-industry groups decide to do a trial in the first place. We expect that industry trials are more likely to look at their own drugs, the trials are likely to be shorter, more focused on the direct outcomes related to what the drug claims to do (e.g. lower cholesterol rather than reduce cardiovascular risk), and of course they are likely to be designed to nearly always produce a favourable result for the drug in question.

For non-industry groups, there is a kind of hope that clinical trials funded by the public will be for the public good – to fill in the gaps by doing comparative effectiveness studies (where drugs are tested against each other, rather than against a placebo or in a single group) whenever they are appropriate, to focus on the real health outcomes of the populations, and to be capable of identifying risk-to-benefit ratios for drugs that have had questions raised about safety.

The effects of industry sponsorship on clinical trial designs for neuropsychiatric drugs in children

Those differences you might expect between industry and non-industry are not quite what we found in our study. For clinical trials that involve children and test drugs used for neuropsychiatric conditions, there really isn’t much difference between what industry chooses to study and what everyone else does. Even though we did find that industry is less likely to undertake comparative effectiveness trials for these conditions, and the two groups tend to study completely different drugs, the striking result is just how little comparative effectiveness research is being done by both groups.


A network view of the drug trials undertaken for ADHD by industry (black) and non-industry (blue) groups – each drug is a node in the network; lines between them are the direct comparisons from trials with active comparators.
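The structure behind a figure like this is simple to build: each trial contributes its drugs as nodes, and an edge is drawn between any pair of drugs compared head to head in the same trial. A minimal sketch, assuming each trial is represented as a list of the drugs in its active arms (the trial data below are invented for illustration):

```python
from itertools import combinations

def comparison_network(trials):
    """Build a drug-comparison network: nodes are drugs, and an edge joins
    two drugs whenever some trial compares them directly (active comparator)."""
    nodes, edges = set(), set()
    for drugs in trials:
        nodes.update(drugs)
        if len(drugs) > 1:  # single-group trials add a node but no edge
            for a, b in combinations(sorted(drugs), 2):
                edges.add((a, b))
    return nodes, edges

# Invented example trials (drug lists for illustration only)
trials = [
    ["methylphenidate", "atomoxetine"],  # head-to-head comparison
    ["methylphenidate"],                 # single-group trial
    ["atomoxetine", "guanfacine"],
]
nodes, edges = comparison_network(trials)
print(sorted(nodes))
print(sorted(edges))
```

Sparse edges in such a network are exactly the problem described above: lots of nodes, few direct comparisons.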

To make a long story short, it doesn’t look like either side is doing a very good job of systematically addressing the questions that doctors and their patients really need answered in this area.

Some of the reasons for this probably include the way research is funded (small trials are easier to fund and undertake), the difficulty of securing ethics approval and recruiting children for clinical trials, and the complexity of testing behavioural therapies and other non-drug interventions against and alongside drugs.

Of course, there are other good reasons for undertaking trials that involve a single group or only test against a placebo (including safety and ethical reasons)… but for conditions like seizure disorders, where there are already approved standard therapies that are known to be safe, it is quite a shock to see that nearly all of the clinical trials undertaken for seizure disorders in children are placebo-controlled or are tested only in a single group.

What should be done?

To really improve the way we produce and then synthesise evidence for children, we need much more cooperation and smarter trial designs that actually fill the gaps in knowledge and help doctors make good decisions. It is very hard to fund and successfully undertake a big coordinated trial even when it doesn’t involve children, but the mess of clinical trials being undertaken today often seem to serve other purposes – to get a drug approved, to expand a market, to fill a clinician-scientist’s CV – or are constrained to the point where the design is too limited to be useful. And these problems flow directly into synthesis (systematic reviews and guidelines), because you simply can’t review evidence that doesn’t exist.

I expect that long-term clinical trials that take advantage of electronic medical records, retrospective trials, and observational studies involving heterogeneous sets of case studies will come back to prominence for as long as the evidence produced by current clinical trials is hampered by compromised design, resource constraints, and a lack of coordinated cooperation. We really do need better ways to know which questions need to be answered first, and to find better ways to coordinate research among researchers (and patient registries). Wouldn’t it be nice if we knew exactly which clinical trials are most needed right now, and we could organise ourselves into large-enough groups to avoid redundant and useless trials that will never be able to improve clinical decision-making?

Do people outside of universities want to read peer-reviewed journal articles?

I asked a question on Twitter about whether or not people actually tried to read the peer-reviewed journal articles (not just the media releases), and if they encountered paywalls when they tried. This is what happened:


In case you don’t want to read through the whole conversation, it turns out that every person who answered the question said that they have in the past tried to access peer-reviewed journal articles, and that they have been stopped by paywalls. Some said it happened all the time.

There is very little evidence to show the prevalence of access and blocked access by the “interested public” for peer-reviewed journal articles. Some people seem to assume that only other scientists (or whatever) would be interested in their work, or that everything the “public” need to know is contained in a media release or abstract.

I think the results tell us a lot about the consumption of information by the wider community, the importance of scientific communication, the problem with the myth that only scientists want to read scientific articles, and the great need for free and universal access to all published research.

So far, I’ve been collecting whatever evidence I can get my hands on to relate to this question, especially in medicine, and I’ll add these pieces one by one below, just in case you are interested.

  1. Open access articles are downloaded and viewed more often than other articles, even when they do not confer a citation advantage. This is seen as evidence that people not participating in publishing are accessing the information. (Davis, P.M., Open access, readership, citations: a randomized controlled trial of scientific journal publishing. The FASEB Journal, 2011. 25(7): p. 2129-2134.)
  2. A Pew Internet Report found that one in four people hit a paywall when searching for health information online. Perhaps more importantly, 58% of all people have looked for health information online (in a country where only 81% use the Internet).
    http://www.pewinternet.org/Reports/2013/Health-online/Part-One/Section-9.aspx
  3. From a UNESCO report on the development and promotion of open access: “First, it is known that [people outside of academia] use the literature where it is openly available to them. For example, the usage data for PubMed Central (the NIH’s large collection of biomedical literature) show that of the 420,000 unique users per day of the 2 million items in that database, 25% are from universities, 17% from companies, 40% from ‘citizens’ and the rest from ‘Government and others’.” (Swan A. Policy guidelines for the development and promotion of open access, United Nations Educational, Scientific and Cultural Organization, 2012, Paris, France. Page 30. Available at: http://www.unesco.org/new/en/communication-and-information/resources/publications-and-communication-materials/publications/full-list/policy-guidelines-for-the-development-and-promotion-of-open-access/) Of course, people accessing PubMed Central from domestic IP addresses might often be academics working late at night at home without a VPN (like I am doing now).

About fifty people responded to my question on Twitter. I realise that my audience is probably biased towards the highly-educated, informed, younger, and information-savvy, but I think there are clear and obvious groups of people outside of universities who would be interested in reading published research. These people include doctors, engineers and developers, parents of sick children, politicians and policy-makers, practitioners across a range of disciplines, museum curators, artists, and basically everyone with an interest in the world around them. That this aspect of open access hasn’t been the focus of many surveys or studies seems bizarre to me.

Perhaps most importantly, I think we need to know a lot more about just how often people outside of academia want to access published research, and if problems with access are stopping them from doing so.

Surely the impetus to move towards universal and open access to published research would grow if more academics realised that actually *everyone* wants access to the complicated equations, to the raw data and numbers, and to the authors’ own words about the breadth and limits of the research that they have undertaken.

Introducing evidence surveillance as a research stream

I’ve taken a little while to get this post done because I’ve been waiting for my recently-published article to go from online-first to being citeable with volume and page numbers.

Last year, I was asked to write an editorial on the topic of industry influence on clinical evidence for the Journal of Epidemiology & Community Health, presumably after I published a few articles on the topic in early 2012. It’s an area of evidence-based medicine that is very close to my heart, so I jumped at the offer.

It took quite a bit of time to find a way to set out the entire breadth of the evidence process – from the design of clinical trials all the way through to the uptake of synthesised evidence in practice. In the intervening period, I won an NHMRC grant to explore patterns of evidence and risks of bias in much more detail, and the theme of evidence surveillance as an entire stream of research started to emerge.

Together with Florence Bourgeois and Enrico Coiera, I reviewed nearly the whole process of evidence production, reporting and synthesis, identifying the many ways in which large pharmaceutical companies can affect the direction of clinical evidence.

It’s a huge problem because industry influence can lead to the widespread use of unsafe and ineffective drugs, as well as the more subtle problems associated with ‘selling sickness’. Even if 90% of the drugs taken from development to manufacture and marketing are safe, useful and improve health around the world, there’s still that 10% that in hindsight should never have been approved in the first place.

My aim is to find them, and to do so faster than has been possible in the past. It’s what we’ve started to call evidence surveillance around here (thanks Guy Tsafnat), and that’s also what we proposed in the last section of the article.

Note: If you can’t access the full article via the JECH website, you can always have a look at the pre-print article available here on this website. It’s nearly exactly the same as the final version.

How about a systematic review that writes itself?

Guy Tsafnat, Paul Glasziou, Enrico Coiera and I have written an editorial for the BMJ on the automation of systematic reviews. I helped a bit, but the clever analogy with the ticking machines from Player Piano fell out of Guy’s brain.

In the editorial, we covered the state-of-the-art in automating specific tasks in the process of synthesising clinical evidence. The basic problem with systematic reviews is that we waste a lot of time and effort in trying to re-do systematic reviews when new evidence becomes available – and in a lot of cases, systematic reviews are out-of-date nearly as soon as they are published.

The solution – using an analogy from Kurt Vonnegut’s Player Piano, which is a dystopian science fiction novel in which ticking automata are able to replicate the actions of a human after observing them – is to replace the standalone systematic reviews with dynamically and automatically updated reviews that change when new evidence is available.

At the press of a button.

The proposal is that after developing the rigorous protocol for a systematic review (something that is already done), the technology should be in place for clinicians to simply find the review they want, press a button, and have the most recent evidence synthesised in silico. The existing protocols determine which studies are included and how they are analysed. The aim is to dramatically improve the efficiency of systematic reviews and improve their clinical utility by providing the best evidence to clinicians whenever they need it.
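At its core, the "press a button" step is just a re-run of the meta-analysis over whatever studies the protocol admits; the hard automation problems are everything around it (search, screening, extraction). A minimal sketch of that core, using standard inverse-variance fixed-effect pooling with invented effect sizes:

```python
def pool_fixed_effect(studies):
    """Inverse-variance fixed-effect meta-analysis.
    Each study is a (effect_estimate, variance) pair; returns the pooled
    estimate and its variance. Illustrative only: real syntheses also need
    random-effects models, heterogeneity statistics, and bias assessment."""
    weights = [1.0 / var for _, var in studies]
    pooled = sum(w * est for w, (est, _) in zip(weights, studies)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Invented effect sizes (e.g. standardised mean differences)
studies = [(0.5, 0.1), (0.3, 0.2)]
print(pool_fixed_effect(studies))  # approximately (0.433, 0.067)

# "New evidence becomes available": re-synthesis is just a re-run
studies.append((0.4, 0.1))
print(pool_fixed_effect(studies))
```

Because the pooling itself is this cheap, the bottleneck in keeping reviews up to date is identifying and extracting the new studies, which is exactly where the automation effort described in the editorial is aimed.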

G Tsafnat, AG Dunn, P Glasziou, E Coiera (2013) The Automation of Systematic Reviews, BMJ 346:f139

Dealing with industry’s influence on clinical evidence

I co-wrote a piece for The Conversation about a new article that was published in the Cochrane Database of Systematic Reviews, written by Andreas Lundh and other luminaries from the research area. The authors showed that industry sponsored clinical trials more often report positive outcomes and fewer harmful side effects.

The most interesting result from the article was that the biases that make industry-funded clinical trials more likely to produce positive results could not be accounted for using the standard tools that measure bias. This is amazing because it gives us a strong hint that industry funding is an independent source of heterogeneity in the systematic reviews that include these trials.

Too bad it’s the 12th of the 12th 2012 and the world is about to end. We won’t have time to sort it out.
