Wednesday 5 December 2012

Confused? You are not alone

Caren and I spent an interesting couple of days last week at the Academy of Social Sciences' snazzily-titled conference, 'Implementing Finch'. (Dame Janet, presiding over the first day, mentioned how strange it is to suddenly find herself something to be 'implemented', 'debated' or, indeed, ignored!)

The first day focused on academic administrators and the second on learned societies, with (as might be expected) a strong social sciences bent. What follows is my overall impression of the two days, and it can really be summed up in a single word - confusion.

This became apparent pretty early on. Dame Lynne Brindley, about halfway through her opening speech, asked whether she needed to give an introductory primer on open access, the differences between Green and Gold and so forth. A good half of the heads in the room began to nod, rather sheepishly in some cases, suggesting that university administrators are not as on top of this agenda as we might hope.

As the conference went on, it became clear that their confusion is, to some extent, justified. Certain key decisions have yet to be taken; others have been taken in a rather inconsistent manner. And some messages are - well, to call them mixed would be an understatement. Perhaps it's not surprising that the academics on the implementation end of this policy are feeling a bit overwhelmed. I'm going to cover three areas which seemed, to me, to cause particular difficulty - this is (as with all my blogs) subjective, and I'd recommend anyone seeking full coverage of the conference to go to the AcSS website and look at the presentations/reports made available there. (In open access.)

The first area - and it's a biggie - was the question of what is considered 'publicly-funded' research, for the purposes of open access policy. This hasn't been fully defined yet, but it's clear that there's quite a bit of misinformation out there. The RCUK policy is very decidedly focused upon articles, and Paul Hubbard indicated in his presentation that HEFCE will follow suit, requiring OA only for articles and conference proceedings. The precise scope of the HEFCE OA REF requirement (phew! Acronyms!) is to be explored in a consultation early next year - there was a general feeling in the room that this amounted to 'too little, too late', but it's an important opportunity for those in HSS disciplines to make their feelings clear.

Despite these clear, and limited, terms of reference, there were questions from people worried about books, about non-written outputs and about data. These are explicitly excluded from existing requirements, but clearly this message isn't getting across yet. One attendee asked a good question about whether using an ESRC-funded dataset would automatically make a project 'publicly funded', even if the research itself was not funded through RCUK money, and that's something that the ESRC could certainly address. Another questioner, less helpful but widely applauded, suggested that, following the rise in fees, universities should no longer be considered publicly funded. This, in my opinion, is a canard. We don't have comprehensive data yet, but fees make up less than 50% of income for some universities, and in any case they are supposed to pay for teaching, not research. This particular excuse smacks of wriggling.

The second big area of confusion - and those of you who follow my Twitter feed will know that this really got my blood pressure up - was around the CC-BY licence. Now, copyright is confusing. I'm confused by the CC licence terms. But then I don't hold myself up as an expert. When an 'expert' stands up and says that Gold OA takes away your right to be acknowledged as the author of the work (err, that's the 'BY' bit of CC-BY) and that 'you and your work have no protection against derogatory treatment', I can't see how they're adding useful information to the debate. This 'Nazis will steal your credibility' argument was advanced by other presenters too, and it's just not true. Creative Commons is extremely clear about this; in fact it's in the text of the UK CC-BY licence. In international jurisdictions where moral rights exist, CC licences protect them. It took me five minutes to Google all of that. And, despite the factually accurate corrections made by later presenters, I don't doubt that some participants will have gone away from the conference with a fear that their OA work can be used, misattributed or even stolen by all kinds of undesirables. At the very least, they will have understood that there is considerable debate and confusion around this issue. Which is not helpful. Undoubtedly, there are some questions about the CC-BY licence, but these really need to be addressed in a more thoughtful and accurate manner.

The final area of confusion that I noticed is perhaps the most understandable, as (to date) there doesn't seem to be much guidance on it. It relates to how OA policies are actually going to work in institutions. Several speakers and many attendees were concerned about the internal politicking that might be needed to secure publication funds from a limited institutional APC budget. Lynne Brindley - being provocative, by her own admission - suggested that many authors have a better relationship with their publisher than with their institution, and this chimes with things we're hearing on the OAPEN-UK project. It's clear that researchers are concerned about having non-expert university administrators decide which articles should be prioritised for publication funding. The ongoing tension between STM and HSS, evident throughout this conference in several presentations and the questions, is another dimension of this problem. Will STM publications in high-impact journals be prioritised over HSS ones in niche but very important outlets? We all know that REF doesn't take any notice of impact factors (hem hem) but it's not clear that this message has completely percolated through to university administrators...

So what - if anything - is the solution for all this confusion? Well, to quote my mother, fount of all wisdom, 'time is a great healer'. For the issues where there is genuine lack of clarity, time is needed for people to figure out how these new policies are going to work in practice. HEFCE's consultation and subsequent policy for REF 2014 will clear up a lot of the issues around the scope of OA mandates. Individual institutions have to work out how OA is going to fit into their organisational structure, and who will be responsible for allocating funding for APCs. Once researchers know that, they can engage locally to try and address any problems which they see as significant.

On other issues, a more active and cross-sector response is needed. The misinformation, misinterpretations and lack of clarity around the definition of 'public funding' and the CC-BY licence (among other issues) must be addressed if researchers are to feel comfortable with OA publishing. This is the job of funding agencies, research support organisations and institutions. We need some kind of beginner's guide to OA, which can answer FAQs and counter common misperceptions from a researcher's perspective. So, that'll be the next blog post then... ok, next two.

(In a final note of confusion, the sponsored advertising which appeared at the top of my #acssfinch Twitter feed seemed to think we were a bunch of high-net-worth investors. HAH!)



Wednesday 14 November 2012

Will nobody think of the children?

I've been thinking a lot recently about young researchers. May or may not be related to my own recent entry to the fourth decade of my life (erk). Many of my contemporaries who did PhDs are just starting out in academia, which really brings home some of the issues. But it's probably more closely linked to the work I'm doing on the OAPEN-UK project, which looks at open access monographs in the humanities and social sciences. One of the luxuries of a long research project (as this one is) is that unexpected issues and concerns surface and become more important over the course of the work, and - one year in - early career researchers are really beginning to make themselves felt.

Which is interesting, because we haven't actually talked to that many. They're pretty hard to track down, these young researchers, described by one recent interviewee as 'gypsies', moving from job to job, building their experience and trying to write and publish that all-important first monograph or high-impact article. We know from research studies, as well as anecdotal evidence, that younger researchers tend to be more conservative than their senior colleagues when it comes to publication routes. This is a sensible attitude: they need to get the maximum credit for their work if they are to establish themselves in a more stable career, and that means peer-reviewed journals and respected university presses, not experimental publishing houses - no matter how good their editorial board or how noble their ambitions.

But recent studies have also suggested that PhD students are less aware than their older colleagues of open access, and are actively suspicious of the content made available in this way. This, I think, should be a cause for concern for advocates of open access. The current momentum around Finch, and the response to Finch by various government bodies, suggests that there's a window of opportunity for change. But it's no good if this change is embraced by the pre-retirement generation but rejected by the young scholars - more Catholic than the Pope, as the saying goes - who will be responsible for sustaining new ways of working and - perhaps more importantly - new ideas and ideals about the best way to communicate scholarly work.

With all of this in mind, it's strange to find that the recent RCUK statement on block grants for APCs makes no mention of PhD students. A jump? Bear with me. RCUK are allocating their funding for APCs 'in proportion to the amount of direct labour costs awarded on grants that they [institutions] have received over the three years from April 2009 to March 2012'. Now, many PhD students are funded through studentships attached to specific grants, and I would assume that their time would be factored into the direct labour costs and thereby given some recognition in the RCUK grant. Good. These guys have been 'allocated' some money and should, in theory, be able to claim it to publish any papers arising from their PhD in open access.
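To make that allocation rule concrete, here is a minimal sketch of a proportional split. The institution names and figures are entirely invented for illustration; the point is simply that each institution's share of the APC pot follows directly from whatever is counted as its direct labour costs.

```python
# Illustrative only: hypothetical institutions and figures, not real RCUK data.
# The APC block grant is split in proportion to direct labour costs on past grants.

TOTAL_APC_FUND = 10_000_000  # hypothetical national pot, in pounds

# Direct labour costs awarded on grants, April 2009 to March 2012 (invented figures)
direct_labour_costs = {
    "University A": 40_000_000,
    "University B": 25_000_000,
    "University C": 10_000_000,
}

def allocate(pot, costs):
    """Share the pot in proportion to each institution's direct labour costs."""
    total = sum(costs.values())
    return {inst: pot * cost / total for inst, cost in costs.items()}

print(allocate(TOTAL_APC_FUND, direct_labour_costs))
# {'University A': ~5,333,333, 'University B': ~3,333,333, 'University C': ~1,333,333}
# Note that each share depends entirely on what counts as 'direct labour costs'.
```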

But there are lots of students who aren't attached to grants. Many PhD studentships are now awarded through institutions, either by a block grant mechanism or through a Doctoral Training Centre. It's not clear whether the 'direct labour costs' attached to this money (including the student's four years of study) are part of the calculations for APC grants.* If not, we have a problem. Research conducted by these postgraduate students is undoubtedly Research Council-funded. So any journal articles arising from it would presumably fall under the requirement for open access publication. But if the studentships aren't included in RCUK's calculations for APC block grants, there's no money to pay for them. Furthermore, adding a Research Council-funded PhD candidate as co-author on a paper that's otherwise unconnected with Research Council money might mean that paper must now be published in open access even though, again, there's no money to pay for it.

This might seem like a fairly peripheral concern, in the grand scheme of things. After all, who knows how the internal mechanisms of universities will allocate the money that they're given for APCs? Who's to say that research students would see any of it, even if they were factored into the RCUK calculations - or that they won't be given access to it just because they aren't? But I don't think I'm nit-picking for the sake of it. As things stand, we're sending an important message to young researchers - those who are most conservative about open access, let's remember - that we don't care about their work. We don't care if we erect another barrier to achieving those publications which are so critical to establishing a more stable career. Open access becomes yet another hoop to jump through, another difficulty to overcome, and the RCUK requirement becomes another reason to see it in a negative light. I suspect that this message will not turn young researchers into long-term supporters of open access publishing.

*Of course, it's possible that I am making a big fuss about nothing, and that RCUK have considered PhD studentships within their APC block grant calculations. But I haven't seen anything so far to suggest that this is the case.


Wednesday 4 July 2012

'Because it's there'


New technologies, eh. The LIBER conference, which has just finished in Tartu, focused quite a bit on QR codes, augmented reality, mobile platforms and other whizzy tools. Librarians seem to be taking the kind of interest in these technologies that we saw in relation to Web 2.0 a few years ago. But their approach seems to be much more strategic.

Librarian use of Web 2.0 sometimes felt rather like George Mallory's approach to climbing Everest: ‘I’m going to do it because it’s there’. Now, this kind of attitude might be all right for mountains, which don’t tend to care whether they’re climbed or not. But we know from research that academics aren’t really engaging with social media for professional purposes, so one has to question the value of librarians picking up their crampons and ice axes and setting out for the Twitter base camp to climb a mountain that doesn’t actually exist.

A lot of librarians also struggled to fully understand how social media works, and how best to engage with it on its own terms. A 2009 study, which looked at librarian use of Facebook, found that most librarians spend 0-20 minutes a week on their Facebook page: definitely not enough to keep up the steady stream of information and interaction that underpins a successful social media presence. 

But – at least from the small sample of presentations at the LIBER conference – the library approach to new technologies seems to be much more organised and strategic. It also starts from how people use their smartphones and mobile devices, linking this into library services and content to produce tools that will actually get used.

Let me mention a few examples, most of which come from Ellyssa Kroski’s excellent presentation. People use their smartphones to physically locate themselves, often in relation to something else that they can’t find. So it makes sense for a local library to provide maps and directions that respond to where their users are: it makes sense for a big university library to use QR codes on library shelves to direct users to the book they want. People use their smartphones to organise their various contact streams: email addresses, phone numbers, social media. So it makes sense for librarians to display QR codes that directly transfer their contact details into a user’s mobile phone.
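As a concrete illustration of the contact-details idea, here is a minimal sketch using the Python qrcode package; the librarian's name and details are invented. Scanning the resulting code would add the contact straight to a phone's address book.

```python
# A minimal sketch: encode contact details as a vCard inside a QR code.
# Requires the 'qrcode' package with Pillow: pip install qrcode[pil]
# The contact details below are invented for illustration.
import qrcode

vcard = (
    "BEGIN:VCARD\n"
    "VERSION:3.0\n"
    "FN:Jane Librarian\n"
    "ORG:Example University Library\n"
    "TEL:+44 20 7946 0000\n"
    "EMAIL:subject.librarian@example.ac.uk\n"
    "END:VCARD"
)

img = qrcode.make(vcard)           # build the QR code image
img.save("librarian_contact.png")  # print this and pin it to the enquiry desk
```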

Other projects had thought about how to use mobile technologies to bring collections to life in a new way. People use their smartphones to take pictures of their local environment: why not overlay those pictures with augmented reality that links to library images, documents and multimedia? Or even recreate structures that no longer exist, using historic images from the library collection? The Bavarian State Library has done both these things in an incredibly successful app on King Ludwig II.

There are still areas of gimmickry which smack of ‘because we can’. The Ludwig II app, for example, allows users to scan their entry tickets to the king’s fairytale castles of Herrenchiemsee, Neuschwanstein and Linderhof in order to get a 3D visualisation of the attraction: something that feels a bit redundant considering that the users are probably standing right in front of the real thing!

But on the whole, these services are considered: they look at the possibilities offered by social media, and think about how these might interact with library collections to deliver relevant information in new ways. Which can only be a good thing.

Tuesday 12 June 2012

Measuring up


It’s a funny old business, research. There cannot be another industry that so obsessively tracks its outputs, corporately creating measure upon measure to try and establish some sense of hierarchy. Like a deranged marketing team, we produce ever more complex statistics to understand how we relate to our competitors – as journals, funders and individual scholars.

Yet, when you get down to it, the tools we have at our disposal are fairly crude. Most measures rely upon the published journal article as a proxy for ‘achievement’ or ‘discovery’, and most traditional measures (and some of the newer ones) rely on citations as a way of understanding how significant that article has been. Based on his or her authoring record, then, a researcher might be invited to deliver the keynote speech at a conference, given a grant or offered a job.

Now, that is all fine in a system where you have a handful of authors on a paper, and each of them has contributed in a way that’s proportionate to their position on that paper. Hah! In general, of course, the authorship of a paper is much more complex, particularly since each discipline has its own conventions, which can be incomprehensible to an outsider, especially when the number of authors can run into the thousands. And in some cases ‘author’ may not even be the correct word to apply any more. Is someone who creates data an ‘author’? What about someone who writes code? Their contributions are vital, but perhaps underplayed within the current system.

This is why I was really interested to learn about a recent workshop, funded by the Wellcome Trust and held at Harvard last month, which looked at contributorship and scholarly attribution. (Note the deliberate rejection of ‘authorship’ in the title, by the way.) The programme incorporated a number of perspectives, including authors, editors and funders, and looked at many of the factors that might influence the development and uptake of new ways of tracking contributorship. What kind of taxonomies and ontologies might we need, if we are to reflect new ways of doing research and the new roles that are emerging? How would new conventions be introduced and implemented, and what might be the reaction of scholars? And how would a new way of tracking contributorship intersect with other developments in the scholarly communications environment, especially that old favourite, the article of the future? It’s too soon to say what will come out of this workshop, but apparently there is interest in taking some kind of action based on the discussions, so I’ll look forward to developments.
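To make 'tracking contributorship' a little more tangible, here is a purely hypothetical sketch of what a role-tagged contributor record might look like. The role names and structure are my own invention for illustration, not a proposal from the workshop.

```python
# Purely illustrative: a hypothetical contributor record with role tags,
# rather than a flat author list. Names and role labels are invented.
paper = {
    "title": "An example paper",
    "contributors": [
        {"name": "A. Researcher", "roles": ["conceived-study", "wrote-manuscript"]},
        {"name": "B. Analyst",    "roles": ["created-dataset", "analysed-data"]},
        {"name": "C. Developer",  "roles": ["wrote-software"]},
    ],
}

# A role-based record lets you ask questions a byline cannot answer,
# for example who should be credited for the underlying code:
coders = [c["name"] for c in paper["contributors"] if "wrote-software" in c["roles"]]
print(coders)  # ['C. Developer']
```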

Another project, mentioned at the workshop, is more advanced, and it’s worth taking a brief look at it before winding up this post. FundRef is an initiative from the clever people at CrossRef (could you tell?). Funders and publishers are collaborating to create a standardised way to acknowledge funders within published articles: a kind of ORCID for funding bodies. This will make it much easier to track the outputs from individual research projects – at least for outputs published in some scholarly journals. Perhaps in time we will see the FundRef ID popping up at conferences, in data centres, even on blogs, to track the wider effect of research funding.
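Here is a rough sketch of why a standardised funder identifier matters; the identifiers and field names below are invented for illustration rather than taken from the actual FundRef schema.

```python
# Illustrative only: hypothetical article records carrying standardised
# funder identifiers (the IDs and fields are invented, not FundRef's schema).
articles = [
    {"doi": "10.1234/example.1", "funder_id": "funder:0001", "award": "AB/123"},
    {"doi": "10.1234/example.2", "funder_id": "funder:0001", "award": "AB/456"},
    {"doi": "10.1234/example.3", "funder_id": "funder:0002", "award": "XY/789"},
]

# With a shared identifier, listing a funder's outputs becomes a simple lookup
# rather than a trawl through free-text acknowledgement sections.
def outputs_for(funder_id, records):
    return [r["doi"] for r in records if r["funder_id"] == funder_id]

print(outputs_for("funder:0001", articles))
# ['10.1234/example.1', '10.1234/example.2']
```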

Just think of the impact measures we could start to build then…

Friday 30 March 2012

Article of the future, part 3,981

If I could reclaim all the time I’ve spent talking about the article of the future in the last few months, I could probably – I don’t know – watch the entire Lord of the Rings trilogy. Extended Edition. Twice. I have to say, I’m not sure I was any the wiser for all these discussions, but something I saw this week at UKSG has helped me clarify what a, if not the, article of the future might look like.

The organisation producing these articles is the Journal of Visualized Experiments (JoVE), the only PubMed-listed video journal in the life sciences. I must say, it’s pretty nifty. Articles are submitted as text and sent out for peer review. If they’re accepted, the JoVE video team get to work. A PhD-qualified scientist/director produces a script, which is sent back to the authors for review and validation, and then filmed by high-quality videographers. The video is returned to JoVE for editing and post-production (including, in some cases, whizzy 3D), then sent out for a second round of peer review and then, finally, published. The whole business takes 6-9 months, making it roughly comparable with other life science journals.

Now, lots of that sounds really Star Trek. Video! 3D! But what I think makes this model so compelling is that the innovation exists, not for its own sake, but for the sake of the science. Video is a great way to explain complicated techniques and procedures, and many JoVE publications are focused on methods. (This presents some problems in terms of citation measures, but that’s a separate issue.) The results seem impressive: reproducibility of research is a major problem in life sciences, with a 70% failure rate being fairly common in commercial labs, but JoVE has reduced this to around 35% when scientists use their video articles.

The insistence on peer review at several stages of the process also shows the underlying commitment to research outcomes. Apparently the peer review of the final video is a relatively recent addition, done in order to qualify for an impact factor rating, but to this laywoman the videos didn’t look like promotional tools; they certainly looked like Proper Science. And their insistence on underlying text, published alongside the video article, suggests that the video is seen as an enhancement of, rather than a replacement for, traditional content.

The journal’s also exploring Web 2.0 functionality for its video articles, allowing readers to mark points in the video and link them to other relevant content. Again, this reflects a desire to improve the value of the articles, so that they sit within their scholarly context. I’ll be interested to see, though, whether researchers use this functionality, as studies tend to suggest that they’re not too keen on enhancing/marking up content for others to see.

As you might expect, the cost of producing a JoVE article is staggeringly high, and their prices undoubtedly reflect this; for a new, young journal, they are pitching themselves alongside some fairly big hitters. That said, the time saved by researchers who were trying, and failing, to reproduce experiments might justify the cost, especially since prices are fixed at the time of subscription so there are unlikely to be year-on-year rises: JoVE are certainly trying to make themselves affordable. Furthermore, this money question is one that all ‘future articles’ are probably going to have to answer: when you’re doing things in new ways, creating your own infrastructure rather than outsourcing it to established intermediary businesses, how can you become self-sustaining?

If this is going to be an article of the future, the scholarly communications system is going to have to adapt to accommodate it. I’ve mentioned the issue of citations – that’s a problem that we’re aware of in all disciplines with complex technical methods to explain, and which researchers are already trying to address. There are also questions about the skills needed to peer review and edit a video article – are we training those researchers and professional editors to work within the boundaries of a non-textual medium? I look forward to seeing what kinds of answers JoVE and other ‘new’ journal formats come up with…