The new Tri-agency open access policy
10 July 2015 /posted in: Science
Earlier this year the triumvirate of Canadian science funding bodies, the Natural Sciences and Engineering Research Council (NSERC), the Canadian Institutes of Health Research (CIHR), and the Social Sciences and Humanities Research Council of Canada (SSHRC) (collectively referred to as the Tri-Agencies), announced their new policy on open access to research publications. This followed a period of consultation, begun in the fall of 2013, with the science communities funded by the Tri-Agencies. The policy came into effect on May 1st this year (2015) and applies to all Tri-Agency-funded grants awarded from that date onward. As part of their awareness programme for the policy, the Tri-Agencies have been holding webinars to explain the new policy and allow for questions from researchers. In the main the Tri-Agency policy is pretty clear, but judging by the questions from academics during the webinar session that I attended recently, we can conclude one or both of two things: i) academics don't read things unless they absolutely must, and ii) academics have some interesting views about open access, what it means for them, and what they consider to be good practice or compliance with the new rules. After tweeting about this, I was asked to summarise my notes from the webinar and on the Tri-Agency policy on open access in general.
My aversion to pipes
03 June 2015 /posted in: R
At the risk of coming across as even more of a curmudgeonly old fart than people already think I am, I really do dislike the current vogue in R that is the pipe family of binary operators, e.g. %>%. Introduced by Hadley Wickham, and popularised and advanced via the magrittr package by Stefan Milton Bache, the basic idea brings the forward pipe of the F# language to R. At first I was intrigued by the prospect, and initial examples suggested this might be something I would find useful. But as time has progressed and I've seen the use of these pipes spread, I've grown to dislike the idea altogether. Here I outline why.
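For readers who haven't met the operator, a minimal sketch of the equivalence at issue (assuming the magrittr package is installed):

```r
## the piped form...
library("magrittr")
iris %>%
    subset(Species == "setosa") %>%
    head()

## ...is just another way of writing the nested call
head(subset(iris, Species == "setosa"))
```

Both calls produce identical output; the disagreement is purely about whether the left-to-right style is an improvement.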
Something is rotten in the state of Denmark
02 June 2015 /posted in: R
On Twitter and elsewhere there has been much wailing and gnashing of teeth for some time over one particular aspect of the R ecosphere: CRAN. I'm not here to argue that everything is peachy — far from it in fact — but I am going to argue that the problems we face do not begin and end with CRAN or one or more of its maintainers.
Drawing rarefaction curves with custom colours
16 April 2015 /posted in: R
I was sent an email this week by a vegan user who wanted to draw rarefaction curves using rarecurve() but with different colours for each curve. The solution to this one is quite easy as rarecurve() has argument col, so the user could supply the appropriate vector of colours to use when plotting. However, they wanted to distinguish all 26 of their samples, which is certainly stretching the limits of perception if we only used colour. Instead we can vary other parameters of the plotted curves to help with identifying individual samples.
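A quick sketch of the idea, using vegan's BCI example data and varying line type alongside colour so that fewer distinct colours are needed (the particular colours and repeat pattern here are just for illustration):

```r
## assumes the vegan package is installed
library("vegan")
data(BCI)
n <- nrow(BCI)
## recycle a few colours and line types over the samples
cols <- rep(c("red", "forestgreen", "blue"), length.out = n)
ltys <- rep(1:3, each = 3, length.out = n)
rarecurve(BCI, step = 20, col = cols, lty = ltys)
```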
At the frontiers of palaeoecology
31 March 2015 /posted in: Science
A couple of weeks ago, I had the pleasure of attending and participating in a symposium held to honour John Birks as he retires from the University of Bergen and becomes Professor Emeritus. The symposium, titled “At the Frontiers of Palaeoecology”, took place on 19–20th March in Bergen, Norway, and was a wonderful mix of colleagues old and new discussing John’s contributions to the field of palaeoecology and their collaborations with him. Alongside this reminiscing were several presentations describing new areas of research by colleagues and collaborators of John.
Harvesting Canadian climate data
14 January 2015 /posted in: R
In December I found myself helping one of our graduate students with a data problem; for one of their thesis chapters they needed a lot of hourly climate data for a handful of stations around Saskatchewan. All of these data were, and still are, available for download from the Government of Canada's website, but with one catch: you had to download the hourly data one month at a time, manually! There is no interface to allow a user of the website to specify the date range they want and download all the data from a single station. I figured there had to be a better way, using R to automate the downloading. Thinking the solution I came up with might save some time for other researchers needing to grab data from the Government of Canada's website, I wrote this post to document how we ended up doing it.
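The gist of the automation is just a loop over months. The sketch below uses base R only; the URL template and the number of header lines to skip are illustrative assumptions — check the current form of the bulk-download link on the climate website before relying on them:

```r
## illustrative URL template -- verify against the live site
base <- paste0("http://climate.weather.gc.ca/climate_data/",
               "bulk_data_e.html?format=csv&stationID=%d",
               "&Year=%d&Month=%d&timeframe=1")

getMonth <- function(month, station, year) {
    f <- tempfile(fileext = ".csv")
    download.file(sprintf(base, station, year, month), f, quiet = TRUE)
    read.csv(f, skip = 15)  # header lines precede the data; count may vary
}

## grab all of 2013 for a (hypothetical) station ID, then row-bind
dat <- do.call(rbind, lapply(1:12, getMonth, station = 28011, year = 2013))
```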
Analysing a randomised complete block design with vegan
03 November 2014 /posted in: R
It has been a long time coming. Vegan now has in-built, native ability to use restricted permutation designs when testing effects in constrained ordinations and in a range of other methods. This new-found functionality comes courtesy of Jari's (mainly) and my efforts to have vegan's permutation routines use the permute package. Jari also cooked up a standard interface that we can use to drop this and some extra features neatly into any function we want; this allows us to have permutation tests run on many CPU cores in parallel, splitting the computational burden and reducing the run time of tests, and also provides a mechanism for users to pass a matrix of user-defined permutations to be used in tests. These new features are now fully working in the development version of vegan, which you can find on GitHub, and which should be released to CRAN shortly. Ahead of the release, I'm preparing some examples to show off the new capabilities; first off, I look at data from a randomised complete block design experiment analysed using RDA and restricted permutations.
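In outline, the new interface lets you pass a permute design straight to the test. A sketch for a blocked design, in which the data names (spp, env, and its treatment and block variables) are hypothetical stand-ins:

```r
## assumes the vegan and permute packages are installed
library("vegan")
library("permute")

## permute observations only within blocks, never across them
ctrl <- how(blocks = env$block, nperm = 999)

mod <- rda(spp ~ treatment + Condition(block), data = env)
anova(mod, permutations = ctrl)
```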
analogue 0.14-0 released
14 October 2014 /posted in: R
A couple of weeks ago I packaged up a new release of analogue, which is available from CRAN. Version 0.14-0 is a smaller update than the changes released in 0.12-0 and sees a continuation of the changes to dependencies to have packages in Imports rather than Depends. The main development of analogue now takes place on GitHub, and bugs and feature requests should be posted there. The Travis continuous integration system is used to automatically check the package as new code is checked in. There are several new functions and methods and a few bug fixes, the details of which are given below.
Simulating species abundance data with coenocliner
31 July 2014 /posted in: R
Coenoclines are, according to the Oxford Dictionary of Ecology (Allaby 1998), “gradients of communities (e.g. in a transect from the summit to the base of a hill), reflecting the changing importance, frequency, or other appropriate measure of different species populations”. In much ecological research, and that of related fields, data on these coenoclines are collected and analyzed in a variety of ways. When developing new statistical methods, or when trying to understand the behaviour of existing methods, we often resort to simulating data with known pattern or structure and then torture whatever method is of interest with the simulated data, to tease out how well methods work or where they break down. There’s a long history of using computers to simulate species abundance data along coenoclines, but until recently no R packages were available that performed coenocline simulation. coenocliner was designed to fill this gap, and today the package was released to CRAN.
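To give a flavour of the package, here is a sketch of simulating Poisson counts for a few species with Gaussian responses along a single gradient; the parameter values are arbitrary choices for illustration:

```r
## assumes the coenocliner package is installed
library("coenocliner")
set.seed(1)

locs <- seq(2, 8, length.out = 100)       # gradient locations
opt  <- runif(10, min = 3, max = 7)       # species optima
tol  <- rep(0.5, 10)                      # species tolerances
h    <- ceiling(rlnorm(10, meanlog = 3))  # heights at the optimum

sim <- coenocline(locs, responseModel = "gaussian",
                  params = cbind(opt = opt, tol = tol, h = h),
                  countModel = "poisson")
matplot(locs, sim, type = "l", lty = 1)   # one curve per species
```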
Allaby, M. 1998. A Dictionary of Ecology. Second edition. Oxford Paperback Reference. Oxford University Press.
Simultaneous confidence intervals for derivatives of splines in GAMs
16 June 2014 /posted in: R
Last time out I looked at one of the complications of time series modelling with smoothers: you have a non-linear trend which may be statistically significant, but it may not be increasing or decreasing everywhere. How do we identify where in the series the data are changing? In that post I explained how we can use the first derivatives of the model splines for this purpose, and used the method of finite differences to estimate them. To assess statistical significance of the derivative (the rate of change) I relied upon asymptotic normality and the usual pointwise confidence interval. That interval is fine if we are looking at just one point on the spline (not of much practical use), but when considering more points at once we have a multiple comparisons issue. Instead, a simultaneous interval is required, and for that we need to revisit a technique I blogged about a few years ago: posterior simulation from the fitted GAM.
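The posterior simulation step the post builds on can be sketched briefly; this uses mgcv's own simulated data rather than the time series from the earlier post, so the model and data here are stand-ins:

```r
## assumes the mgcv and MASS packages are installed
library("mgcv")
library("MASS")
set.seed(1)

df  <- gamSim(1, n = 200, verbose = FALSE)
mod <- gam(y ~ s(x2), data = df, method = "REML")

## draw coefficient vectors from the approximate posterior
sims <- mvrnorm(1000, mu = coef(mod), Sigma = vcov(mod))

## evaluate the spline at new locations for each posterior draw
newd <- data.frame(x2 = seq(0, 1, length.out = 100))
Xp   <- predict(mod, newd, type = "lpmatrix")
fits <- Xp %*% t(sims)   # 100 locations x 1000 draws
```

Each column of fits is one plausible realisation of the spline; the simultaneous interval comes from the spread of these realisations taken as whole curves rather than point by point.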