Twitter users with anti-vaccine opinions are relatively easy to spot if we can measure their misinformation exposure

So…I have been systematically collecting tweets about human papillomavirus (HPV) vaccines since October 2013. We now have over two hundred thousand tweets that include keywords related to HPV vaccines, and the first of two pieces of research we have undertaken using these data has just been published in the Journal of Medical Internet Research. It covers six months of data: 83,551 tweets from 30,621 users connected to each other through 957,865 social connections. The study question is a relatively simple one – we wanted to find out how many people are tweeting “anti-vaccine” opinions about HPV vaccines, the diversity of their concerns, and how misinformation exposure is distributed throughout the Twitter communities.

What we found was in some ways surprising – around 24% of the tweets about HPV vaccines were classified as “negative” (more on this later). To me, this seems like a very large proportion given that only around 2% of adults actually refuse vaccinations for their children. In other ways, I’m less surprised, given how many people hold all sorts of other unusual beliefs, and given the number of surveys suggesting that 20% to 30% of adults believe that vaccines cause autism.

Looking at how people follow each other within the group of 30,621 users, we found that for around 29% of everyone who tweeted about HPV vaccines, the majority of the tweets they were exposed to were “negative”, simply because of who they follow.
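To make the idea of “exposure” concrete: given a follow graph and classifier labels for each user’s tweets, the calculation is essentially counting. Here is a minimal sketch in Python with made-up data structures – the study’s actual pipeline was more involved than this.

```python
# A minimal sketch of exposure measurement with made-up data.

# follows[u] = the set of users that u follows (within the cohort)
follows = {
    "alice": {"bob", "carol"},
    "bob": {"carol"},
    "carol": {"alice"},
}

# tweet_labels[u] = classifier labels for u's tweets
# (True = "negative", False = "neutral/positive")
tweet_labels = {
    "alice": [False, False],
    "bob": [True, True, False],
    "carol": [True],
}

def negative_exposure(user):
    """Fraction of tweets reaching `user` via followed accounts that are negative."""
    seen = [label
            for friend in follows.get(user, ())
            for label in tweet_labels.get(friend, ())]
    return sum(seen) / len(seen) if seen else None

for user in follows:
    print(f"{user}: {negative_exposure(user):.2f} of the tweets they see are negative")
```

In the study’s terms, a user “exposed mostly to negative opinions” is one for whom that fraction passes one half.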

To classify the tweets as either “negative” or “neutral/positive”, we used supervised machine learning classifiers that are slightly different from the usual kind, which judges the sentiment of a tweet from information about the text alone. I’ll be talking about these machine learning classifiers at the MEDINFO conference in São Paulo this August.
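For readers unfamiliar with the setup, a text-only baseline looks something like the sketch below (assuming scikit-learn, with invented example tweets); the classifiers in the paper used more than the text alone.

```python
# Sketch of a text-only baseline classifier (the study's classifiers
# also used information beyond the tweet text). Example tweets are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "HPV vaccine ruined my niece's health",          # negative
    "Another study confirms HPV vaccine safety",     # neutral/positive
    "They are hiding the truth about Gardasil",      # negative
    "Got my daughter vaccinated against HPV today",  # neutral/positive
]
labels = [1, 0, 1, 0]  # 1 = "negative", 0 = "neutral/positive"

# Word unigrams and bigrams weighted by TF-IDF, fed to logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["my doctor says the vaccine is safe"]))
```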

What we really wanted to know was how many Twitter users were being exposed to this negative kind of information – usually anecdotes about harm, conspiracy theories, complete fabrications, or some strange amalgamation of all of them – whether these users mostly grouped together, and how far their information reached across communities that might be making decisions about HPV vaccines for themselves or their children.

[Figure: exposure_follower_network]
A network of 30,621 Twitter users who posted tweets about HPV vaccines in a six-month period. Circles are users; larger circles have more followers within this group. Users in orange were exposed mostly to negative opinions, and users who are more closely connected are generally positioned closer to each other in the picture.

We also wanted to know a bit more about the reach of the actual science and clinical evidence that is being published in the area. As researchers, we know that there are now studies showing that the HPV vaccine is safe and that there is early evidence of effectiveness in the prevention of cervical cancer, but we don’t really know who might be “exposed” to that kind of information.

Perhaps unsurprisingly, the people producing the science of HPV vaccines were located pretty much as far away as they could possibly be from the people exposed mostly to negative opinions. Most of the tweets that linked directly to peer-reviewed articles came from people in the very top-left section of the network illustration above.

The main contribution of our study was to determine how much more likely a user who had previously been exposed to negative opinions was to then tweet a negative opinion. The answer: a lot more likely.

But to explain why users’ opinions were relatively easy to predict from the information they were exposed to in the past, we have to do a lot more work…

It could be that the opinions were “contagious” and spread through the community. It might also be that people end up forming “homophilous” connections with other users who express the same negative opinions about HPV vaccines. The much more likely explanation is that people who share opinions about all kinds of other things besides HPV vaccines (like guns, religion, politics, conspiracies, organic vegetables, crystals, and magical healing water) are more likely to be connected to each other, and their opinions about HPV vaccines are due to the breadth of misinformation that spreads to them from influential news organisations, celebrities, friends, and magical water practitioners.

It is important to be careful here: the study only demonstrates an association between what people were exposed to in the past and the direction of the opinions they expressed afterwards. It does not show causation, and it does not tell us how those people came to believe what they do.

The study does tell us something important about how we might be able to estimate the risk of poor vaccination decision-making within particular communities in space and time. One of the things we would like to do next is to examine how misinformation exposure is distributed geographically in a couple of countries (the US and Australia – because those are the places we know best), as a way of helping public health organisations better understand who might be vaccine-anxious (or at risk of becoming vaccine-anxious), and the specific concerns they might have. Because remember: only 2% of adults are conscientiously refusing to vaccinate their children, but an awful lot more of them might be forming their opinions based on the awful misinformation that spreads through the communities they inhabit.

Neuropsych trials involving kids are designed differently when funded by the companies that make the drugs

Over the short break that divided 2013 and 2014, we had a new study published looking at the designs of neuropsychiatric clinical trials that involve children. Because we study trial registrations and not publications, many of the trials that are included in the study are yet to be published, and it is likely that quite a few will never be published.

Neuropsychiatric conditions are a big deal for children and make up a substantial proportion of the burden of disease. In the last decade or so, more and more drugs have been prescribed to children to treat ADHD, depression, autism spectrum disorders, seizure disorders, and a few other conditions. The major problem we face in this area right now is the lack of evidence to help guide the decisions that doctors make with their patients and their patients’ families. Should kids be taking Drug A? Why not Drug B? Maybe a behavioural intervention? A combination of these?

I have already published a few things about how industry-funded and non-industry-funded clinical trials differ. To look at how clinical trials differ based on who funds them, we often use the clinicaltrials.gov registry, which currently provides information on about 158,000 registered trials – about half of them conducted in the US, and half conducted entirely outside the US.
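For anyone who wants to poke at the registry themselves, here is a rough sketch of pulling registrations and tallying them by lead sponsor type. The endpoint and field names follow my reading of the registry’s v2 API, and should be treated as assumptions – the interface has changed over the years.

```python
# Sketch: counting ClinicalTrials.gov registrations by lead sponsor class.
# The endpoint and JSON field names are assumptions based on the v2 API.

from collections import Counter
import requests

resp = requests.get(
    "https://clinicaltrials.gov/api/v2/studies",
    params={"query.cond": "ADHD", "pageSize": 100},
    timeout=30,
)
resp.raise_for_status()

counts = Counter(
    study.get("protocolSection", {})
         .get("sponsorCollaboratorsModule", {})
         .get("leadSponsor", {})
         .get("class", "UNKNOWN")
    for study in resp.json().get("studies", [])
)
print(counts)  # e.g. Counter({'OTHER': 57, 'INDUSTRY': 38, ...}) -- illustrative only
```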

Some differences are generally expected (by cynical people like me) because of the different reasons why industry and non-industry groups decide to run a trial in the first place. We expect that industry trials are more likely to test the sponsor’s own drugs, to be shorter, to focus on surrogate outcomes directly related to what the drug claims to do (e.g. lowering cholesterol rather than reducing cardiovascular risk), and of course to be designed to nearly always produce a favourable result for the drug in question.

For non-industry groups, there is a kind of hope that clinical trials funded by the public will be for the public good – to fill in the gaps by doing comparative effectiveness studies (where drugs are tested against each other, rather than against a placebo or in a single group) whenever they are appropriate, to focus on the real health outcomes of the populations, and to be capable of identifying risk-to-benefit ratios for drugs that have had questions raised about safety.

The effects of industry sponsorship on clinical trial designs for neuropsychiatric drugs in children

So the differences you might expect to see between industry and non-industry are not quite what we found in our study. For clinical trials that involve children and test drugs used for neuropsychiatric conditions, there really isn’t that much difference between what industry chooses to study and what everyone else does. Even though we did find that industry is less likely to undertake comparative effectiveness trials for these conditions, and the two groups tend to study completely different drugs, the striking result is just how little comparative effectiveness research is being done by either group.

[Figure: journal.pone.0084951.g003]
A network view of the drug trials undertaken for ADHD by industry (black) and non-industry (blue) groups. Each drug is a node in the network; lines between them are the direct comparisons from trials with active comparators.
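Building this kind of comparison network is straightforward; below is a minimal sketch using networkx, with made-up trial records just to show the structure. Each head-to-head comparison becomes an edge between two drugs, and the sparseness of the resulting graph is exactly the problem.

```python
# Sketch: a drug-comparison network built from (made-up) trial records.
# Each record is a head-to-head comparison between two drugs.

import networkx as nx

# Hypothetical trials with active comparators: (drug_a, drug_b, funder)
trials = [
    ("methylphenidate", "atomoxetine", "industry"),
    ("methylphenidate", "dexamfetamine", "non-industry"),
    ("atomoxetine", "guanfacine", "industry"),
]

G = nx.Graph()
for drug_a, drug_b, funder in trials:
    G.add_edge(drug_a, drug_b, funder=funder)

# A sparse, fragmented graph means few head-to-head questions get answered.
print(G.number_of_nodes(), "drugs;", G.number_of_edges(), "direct comparisons")
print("all drugs connected by comparisons:", nx.is_connected(G))
```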

To make a long story short, it doesn’t look like either side is doing a very good job of systematically addressing the questions that doctors and their patients really need answered in this area.

Some of the reasons for this probably include the way research is funded (small trials might be easier to fund and undertake), the difficulties associated with acquiring ethics approval and recruiting children into clinical trials, and the complexities of testing behavioural therapies and other non-drug interventions against, or in combination with, drugs.

Of course, there are other good reasons for undertaking trials that involve a single group or only test against a placebo (including safety and ethical reasons)… but for conditions like seizure disorders, where approved standard therapies that are known to be safe already exist, it is quite a shock to see that nearly all of the clinical trials undertaken in children are placebo-controlled or single-group studies.

What should be done?

To really improve the way we produce and then synthesise evidence for children, we really need to consider much more cooperation and smarter trial designs that will actually fill the gaps in knowledge and help doctors make good decisions. It’s true that it is very hard to fund and successfully undertake a big coordinated trial even when it doesn’t involve children, but the mess of clinical trials that are being undertaken today often seem to be for other purposes – to get a drug approved, to expand a market, to fill a clinician-scientist’s CV – or are constrained to the point where the design is too limited to be really useful. And these problems flow directly into synthesis (systematic reviews and guidelines) because you simply can’t review evidence that doesn’t exist.

I expect that long-term clinical trials that take advantage of electronic medical records, retrospective trials, and observational studies involving heterogeneous sets of case studies will come back to prominence for as long as the evidence produced by current clinical trials is hampered by compromised design, resource constraints, and a lack of coordinated cooperation. We really do need better ways to know which questions need to be answered first, and to find better ways to coordinate research among researchers (and patient registries). Wouldn’t it be nice if we knew exactly which clinical trials are most needed right now, and we could organise ourselves into large-enough groups to avoid redundant and useless trials that will never be able to improve clinical decision-making?

Introducing evidence surveillance as a research stream

I’ve taken a little while to get this post done because I’ve been waiting for my recently published article to go from online-first to being citeable with volume and page numbers.

Last year, I was asked to write an editorial on the topic of industry influence on clinical evidence for the Journal of Epidemiology & Community Health, presumably after I published a few articles on the topic in early 2012. It’s an area of evidence-based medicine that is very close to my heart, so I jumped at the offer.

It took quite a bit of time to find a way to set out the entire breadth of the evidence process – from the design of clinical trials all the way through to the uptake of synthesised evidence in practice. In the intervening period, I won an NHMRC grant to explore patterns of evidence and risks of bias in much more detail, and the theme of evidence surveillance as an entire stream of research started to emerge.

Together with Florence Bourgeois and Enrico Coiera, I reviewed nearly the whole process of evidence production, reporting, and synthesis, identifying the many ways in which large pharmaceutical companies can affect the direction of clinical evidence.

It’s a huge problem because industry influence can lead to the widespread use of unsafe and ineffective drugs, as well as the more subtle problems associated with ‘selling sickness’. Even if 90% of the drugs taken from development to manufacture and marketing are safe, useful and improve health around the world, there’s still that 10% that in hindsight should never have been approved in the first place.

My aim is to find them, and to do so faster than has been possible in the past. It’s what we’ve started to call evidence surveillance around here (thanks Guy Tsafnat), and that’s also what we proposed in the last section of the article.

Note: If you can’t access the full article via the JECH website, you can always have a look at the pre-print article available here on this website. It’s nearly exactly the same as the final version.

Social network analysis in health services

Along with Prof. Johanna Westbrook, I have recently had one of my side projects published as a short report in Social Science & Medicine, and the manuscript is now available online. We looked at some of the issues associated with using hierarchy, clustering, and centrality (three old-school network metrics) in small networks, mainly the social networks of health care organisations. The approach is a neat and simple way to compare lots of small networks against each other, especially when the networks have different sizes and densities.
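To give a flavour of what those metrics look like in practice, here is a small sketch using networkx on a made-up advice network. The metric implementations below are illustrative stand-ins, not the exact definitions we validated in the paper.

```python
# Sketch: hierarchy, clustering, and centrality on a small made-up network.
# These are illustrative networkx metrics, not the paper's exact definitions.

import networkx as nx

# A tiny, hypothetical advice-seeking network within a ward team.
G = nx.DiGraph([
    ("nurse_a", "registrar"),
    ("nurse_b", "registrar"),
    ("registrar", "consultant"),
    ("nurse_a", "nurse_b"),
])

print("hierarchy:", nx.flow_hierarchy(G))  # fraction of edges not in any cycle
print("clustering:", nx.average_clustering(G.to_undirected()))
print("centrality:", nx.degree_centrality(G))
```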

Adam G. Dunn, Johanna I. Westbrook (2010). Interpreting social network metrics in healthcare organisations: a review and guide to validating small networks. Social Science & Medicine (article in press).

Social Science & Medicine is the premier journal for medical sociology (it is ranked as an A* journal in the Australian ERA classification). I’m thankful to have more than enough experts around to make sure that my ideas translate across so many different disciplines.