Tuesday 12 June 2012

Measuring up


It’s a funny old business, research. There cannot be another industry that so obsessively tracks its outputs, collectively creating measure upon measure to try to establish some sense of hierarchy. Like a deranged marketing team, we produce ever more complex statistics to understand how we relate to our competitors – as journals, funders and individual scholars.

Yet, when you get down to it, the tools we have at our disposal are fairly crude. Most measures take the published journal article as a proxy for ‘achievement’ or ‘discovery’, and most traditional measures (and some of the newer ones) use citations to gauge how significant that article has been. On the strength of his or her authoring record, then, a researcher might be invited to deliver a keynote at a conference, given a grant or offered a job.

Now, that is all fine in a system where you have a handful of authors on a paper, each of whom has contributed in a way that’s proportionate to their position on that paper. Hah! In practice, of course, the authorship of a paper is far more complex: each discipline has its own conventions, which can be incomprehensible to an outsider, and the number of authors can run into the thousands. And in some cases ‘author’ may not even be the right word any more. Is someone who creates data an ‘author’? What about someone who writes code? Their contributions are vital, but perhaps underplayed within the current system.

This is why I was really interested to learn about a recent workshop, funded by the Wellcome Trust and held at Harvard last month, which looked at contributorship and scholarly attribution. (Note the deliberate rejection of ‘authorship’ in the title, by the way.) The programme brought together a range of perspectives, including those of authors, editors and funders, and looked at many of the factors that might influence the development and uptake of new ways of tracking contributorship. What kinds of taxonomies and ontologies might we need if we are to reflect new ways of doing research and the new roles that are emerging? How would new conventions be introduced and implemented, and how might scholars react? And how would a new way of tracking contributorship intersect with other developments in the scholarly communications environment, especially that old favourite, the article of the future? It’s too soon to say what will come out of the workshop, but apparently there is interest in taking some kind of action based on the discussions, so I’ll look forward to developments.

Another project, mentioned at the workshop, is more advanced, and it’s worth taking a brief look at it before winding up this post. FundRef is an initiative from the clever people at CrossRef (could you tell?). Funders and publishers are collaborating to create a standardised way to acknowledge funders within published articles: a kind of ORCID for funding bodies. This will make it much easier to track the outputs of individual research projects – for those published in participating scholarly journals, at least. Perhaps in time we will see the FundRef ID popping up at conferences, in data centres, even on blogs, to track the wider effect of research funding.
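To make that concrete, here is a minimal, purely hypothetical sketch in Python. It is not FundRef’s actual schema or API (all the identifiers and DOIs below are made up); it simply illustrates why a standardised funder identifier matters: once every article record carries one, gathering a funder’s outputs becomes a simple grouping operation rather than a trawl through free-text acknowledgement sections.

```python
from collections import defaultdict

# Hypothetical records of the kind a standardised funder acknowledgement
# might yield alongside each article's DOI. None of these values are real.
articles = [
    {"doi": "10.1234/example.1", "funder_id": "funder:A", "award": "GR-001"},
    {"doi": "10.1234/example.2", "funder_id": "funder:A", "award": "GR-002"},
    {"doi": "10.1234/example.3", "funder_id": "funder:B", "award": "XY-123"},
]

def outputs_by_funder(records):
    """Group article DOIs under the funder identifier that supported them."""
    grouped = defaultdict(list)
    for record in records:
        grouped[record["funder_id"]].append(record["doi"])
    return dict(grouped)

print(outputs_by_funder(articles))
# {'funder:A': ['10.1234/example.1', '10.1234/example.2'],
#  'funder:B': ['10.1234/example.3']}
```

The point of the sketch is simply that the grouping key is an agreed identifier rather than a funder’s name typed out in prose, which is what makes tracking outputs across publishers feasible.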

Just think of the impact measures we could start to build then…