Spatial ecological networks – where physics, ecology, geography and computational science meet

It’s part physics, part ecology, and part geography – and that’s probably why it is so much fun. Whenever I fly from city to city my favourite part of the trip is looking out of the window to see the patterns made in the landscapes. Most of the time, the patterns are carved out by humans using the land for agriculture, forestry, mining or just as places to live. Other times the landscape pattern is a consequence of natural stuff like weather and bushfires. It’s even easier to see these patterns with Google Maps – you can just zoom in to the south-west corner of Australia and see a patchwork of farms, towns, roads and less disturbed habitats where the more old-school ecosystems are.

For people working in restoration ecology, the whole idea is to work out the best and most efficient way to improve (or at least maintain) the quality of an ecosystem by helping the right kinds of animals move around and getting the right kinds of plants to disperse seeds around the place. Of course it would be nearly impossible to simply reclaim the vast majority of the land and hand it back over to nature because people still need to eat and also extract stuff out of the ground to make more stuff to put in their homes or store in their garages.

But what restoration can do is look for the best ways to improve connectivity between the areas of land that are safe from most human disturbance – and that is where the modelling of connectivity and corridors has its place. In this type of work, we look for the locations that matter most to connectivity and improve or maintain them, producing a sort of multiplicative positive effect on the surrounding areas. I’ve worked in this area quite extensively in the past and the science still has quite a way to go. Sadly, I’ve since moved on, but “being more efficient with the resources at your disposal” remains a passion of mine.

The science itself essentially comes down to finding efficient ways to model, simulate or otherwise estimate the movement of organisms through a landscape. Over my summer break, I re-implemented four methods (one based on circuit theory, one firmly established in social network analysis, one based directly on third-year shortest path algorithms, and a simulation method I developed based on multi-level cellular automata) and wrote them up succinctly for a book chapter, though the book itself may still be a long way from completion.
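To give a flavour of the shortest-path family of methods, here is a minimal sketch in Python (not the chapter’s implementation, and the grid values are made up): the landscape is treated as a grid of movement costs, and Dijkstra’s algorithm finds the least-cost route between two habitat patches.

```python
import heapq

def least_cost_path(resistance, start, goal):
    """Dijkstra's algorithm over a 2D grid of movement costs.

    resistance[r][c] is the cost of entering cell (r, c); the
    accumulated cost of the cheapest route approximates how hard it
    is for an organism to move between two points in the landscape.
    """
    rows, cols = len(resistance), len(resistance[0])
    dist = {start: 0}
    pq = [(0, start)]          # (accumulated cost, cell)
    visited = set()
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            return d
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        # Four-neighbour moves; cost is the resistance of the cell entered.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + resistance[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")        # goal unreachable

# A toy landscape: low-resistance habitat (1) split by a costly road (9).
grid = [
    [1, 1, 9, 1],
    [1, 1, 9, 1],
    [1, 1, 1, 1],
]
print(least_cost_path(grid, (0, 0), (0, 3)))  # detours through the gap: 7
```

The cheapest route detours around the road through the gap in the bottom row (total cost 7) rather than crossing it directly (cost 11), which is exactly the behaviour you want when identifying candidate corridors.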

Admittedly, it’s been quite a while since I monitored the recent literature on spatial modelling in landscape ecology. I have noticed that one piece of software for analysing corridors has since become available, but not whether its authors fixed the issues I wrote about in Ecography – if they haven’t, people using the application may not be getting the best results.

It also doesn’t help that other research in the area (not the particular methods I discuss above) is mired in unusual discrepancies in methods – in one case, I found two papers published with the exact same network, yet claiming completely different methods of construction. Let’s hope a new breed of responsible and rigorous researchers can come and revolutionise the field.

Full access to the Twitter API in Matlab via R

[Update: This page is no longer relevant. If you are here to interact with Twitter via the API using Matlab, then you want Twitty. The rest is here for posterity.]

Having slowly degraded my ability to interact with proper operating systems and obscure programming languages (e.g. NSFW), I find it difficult to keep up with “modern” ways of programming. So when it comes to doing something that might be conceivably trivial for serious programmers, I tend to struggle.

One example is accessing the complete Twitter API, including those searches that require being authorised/authenticated via OAuth. While I had no problem doing search queries and tweeting via a nice Matlab function, I could not find a simple way to retrieve the followers/friends of public users and their retweets/replies using the parts of the API that require authentication. Once I worked it out, I thought it would be appropriate to pass along my approach so that it would be available to other fervent Matlab users who may wish to do the same.

And since I have long lost the ability to do anything complicated in programming, this will necessarily be a beginner’s guide to accessing Twitter via MATLAB. I skip over many of the details and specifics, but I hope to cover the particular steps that tripped me up along the way.

1. Create an app on Twitter. The most important thing to remember on this step is to leave the callback URL blank. Or delete it afterwards, which is what I did after much too much mucking around trying to work out why I could not access a PIN (more on this later). The other thing you will need to remember is to include Read and Write privileges.

2. If you don’t already have it, download and install R. I’m using version 2.14.0. In my version under Windows, I immediately installed the ROAuth and twitteR packages from within the R application.

3. Run the following commands inside R. Rather than elaborate on these here, it will be better if you peruse the documentation and examples associated with the packages to understand how they work [because I don’t]. Note that I am downloading a cacert file, which will be used later on [I think it is necessary].

  • setwd("D:\\blah\\some-directory\\"); [remember to use \\ in directory paths]
  • library(twitteR)
  • library(ROAuth)
  • download.file(url="", destfile="cacert.pem") [make sure it ends up in the right place]
  • KEY <- "********************" [consumer key from your twitter app]
  • SECRET <- "***********************************" [consumer secret from your twitter app]
  • cred <- OAuthFactory$new(consumerKey = KEY,
  •     consumerSecret = SECRET,
  •     requestURL = "",
  •     accessURL = "",
  •     authURL = "")
  • cred$handshake(cainfo="cacert.pem")

At this point you will be presented with a statement containing a URL where you can get a PIN from Twitter. If you have set up your application correctly (no callback URL), you can simply navigate to that page, authorise the app, and paste the PIN back into the R console.

4. Inside R again, save the OAuth object (cred) to a suitable filename with the save command, e.g. save(cred, file = "Cred.RData"). I have saved mine as Cred.RData.

5. Create a new R script with the following commands:

  • setwd("D:\\blah\\some-directory\\")
  • args <- commandArgs(TRUE)
  • library(twitteR)
  • library(ROAuth)
  • load("D:\\blah\\some-directory\\Cred.RData")
  • cred$isVerified()
  • print(cred$OAuthRequest(args[1], "GET", ssl.verifypeer = FALSE))

You will notice that this script includes “args[1]”, which we are going to pass from within MATLAB when we call the script. You will also notice that we have loaded the old OAuth object, which means that you will no longer need to enter the PIN each time you want to access the Twitter API.

Those of you who are following carefully will also notice that this is a particularly unsafe way of requesting information from Twitter (note the ssl.verifypeer = FALSE), and is prone to man-in-the-middle attacks. I can’t imagine how the resulting strings could be dangerous, and there is no private information in what is being sent around, so I am comfortable with this until I am convinced otherwise.

6. In MATLAB, simply create the request URL (stored here as htmlx) that you will use to call the Twitter API, for example, noting the method for producing the correct quotation marks (without them, the command line will not know how to interpret your ampersands correctly):

htmlx = '""';

In this case, the html request will collect up a certain Australian politician’s last two tweets, which always make for interesting reading.

7. Then ask R to run the script you have written. To do this, you need only run the following command from within MATLAB (or within a MATLAB function, of course):

  • [status,result] = system(['D:\Programs\R-2.14.0\bin\RScript --vanilla --no-save --slave query_script.R ' htmlx]);

This will return a bunch of junk that you won’t need, as well as the tweets/followers/friends – whatever you have requested in your properly-formed htmlx variable, passed as an argument to, and parsed by, R.

8. I won’t go into the details of how you can then strip the results of this call to produce what you might be looking for because this depends on the specific API calls you are making. As for me, I have created a miniature library that implements specific API calls, and then devours the results using simple regular expressions to produce structures for the returned tweets and users.
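To illustrate the idea of devouring the results with regular expressions – this is not the MATLAB library described above, just a sketch in Python with a made-up fragment of a response – you might extract fields like this:

```python
import re

# A hypothetical fragment of the text returned by an API call;
# real responses were much larger, but the parsing idea is the same.
raw = '"screen_name":"adamgdunn","text":"Spatial networks are fun"'

# Pull named fields and their values into a dictionary with one
# non-greedy pattern, rather than parsing the whole response.
fields = dict(re.findall(r'"(screen_name|text)":"(.*?)"', raw))

print(fields["screen_name"])  # adamgdunn
print(fields["text"])         # Spatial networks are fun
```

Regular expressions are fragile against escaped quotes and nested structures, so a proper JSON parser is the safer choice where one is available; the regex approach simply mirrors the quick-and-dirty workflow described in the post.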

I am unlikely to make the rest of the code for this public in the near future and I don’t plan to answer questions about this [because there are experts who will be able to do a much better job than I can] but if you decide you really need to contact me, then it is not terribly difficult to find my email address, or you can tweet me at @adamgdunn.

“Sipping from the fire hose” – sampling Twitter streams

A quick link for today, showing a visualisation of Twitter that gives a more practical picture of a nation’s mood (presumably the UK’s, in the picture below). I think it is interesting and beautiful.

Financial conflicts of interest in guidelines

A new study published in the BMJ shows the prevalence of financial conflicts of interest among the panel members producing clinical guidelines. For consumers of healthcare delivery (that means everyone), I think it is valuable to know that doctors get their information from guidelines, and that about half of the people developing those guidelines have financial conflicts of interest (e.g. they receive money from pharmaceutical companies). The fact that this is not a surprise is probably the most worrying issue.

This is the second time that we’ve heard that journals have become “an extension of the marketing arm of pharmaceutical companies”.

Unfortunately, the double-edged sword is that many talented people do excellent work, and get money from pharmaceutical companies. Removing financial conflicts of interest would remove their talent from the construction of evidence and guidelines. 

Australians’ views of our own health system

In a data briefing published in the last couple of days in the BMJ, there was an interesting graphic indicating public perceptions of healthcare systems. Although it isn’t particularly easy to find the source of the information in the Health Affairs article cited by Appleby (an article with open access), the results are particularly striking for Australia.

While over 60% of the public in the UK believe that only minor changes are needed, around 75% of Australians believe that our health system needs fundamental changes or a complete rebuild. This perception is even more negative than in the US, whose system is widely known to be overly expensive and to suffer from huge gaps in access for the disadvantaged.

The Framingham Study, fast food access, and BMI

In the American Journal of Epidemiology, a well-known set of authors who have published widely on the Framingham Study have looked at BMI and proximity to fast food. I find it a bit of a reach to say that “contrary to much prior research, the authors did not find a consistent relation between access to fast-food restaurants and individual BMI” when, at first glance, there are clear confounders.

Regardless of how close the “negatives” like fast food outlets are, easy access to “positives” like parks, swimming pools, gyms and cheap fresh food markets is going to have a significant impact on people’s choices about what they do and eat. More simply, it doesn’t really matter how close that McDonald’s is (see below) if you have access to safe parks, cycleways and a range of good quality cuisines.