National Statement of Science Investment

Yesterday Minister Steven Joyce released the NSSI and called for submissions (deadline 22 August). One useful feature is that it explains the current system.
It has become clear that huge changes have been made to the science funding system in the last few years. There have been some very worrying developments, such as the removal of the NZST postdoc scheme, the appallingly cronyistic way the National Science Challenges have been run, the disorganization of MBIE (look at their website sometime!), etc. The sheer amount and rate of change are perhaps the worst problem. It is surely time that some kind of multi-party consensus on science funding be forged, so that such large changes don’t happen so often. Without high quality input from the sector, I don’t see that happening.

I urge everyone in the research sector (and maybe others) to consider participating in a submission. It is annoying that we seem to have to spend so much time on non-core business these days, but this is important enough (in my opinion) to be an exception to the apparent rule that one should ignore such ephemera and concentrate only on one’s own research.

We have met the enemy: part 1, pusillanimous editors

I have been semi-obsessively following developments related to the Elsevier boycott, open access publishing, and related issues for the last 2 years. Perhaps my idealist personality is always in need of a cause to fight. As so often in the past (e.g. the uprisings in Iran and Arab countries in the last few years), my initial hopes that the world would be reorganized in a more reasonable, fairer, and more efficient manner have not been fulfilled. There are many reasons why progressive movements fail. The goals may be unrealistic, there may be powerful individual incentives against collective action, etc. In the next few posts, I want to discuss why progress is so slow in moving to a new system of research publishing that almost everyone seems to think is inevitable and most think is desirable. I am not trying deliberately to be offensive, but I feel that the time is right to start talking more directly about the ethical standards of our research colleagues. Commercial publishers certainly don’t have the interests of science at heart, but they are not the main cause of the current malaise.

One reason for lack of change is the lack of a reason to change. I presume there are some people who still think the current journal system is close to optimal. This is likely a minority opinion, but still needs to be addressed.

Difficulties with the current system (System A)

  • Most journal titles are owned by for-profit companies (usually called “publishers”, but I will call them “owners”).
  • Each journal has a monopoly on its content.
  • Pricing information is deliberately made opaque by owners, using bundling and non-disclosure clauses in contracts.
  • Therefore, journal prices increase at a rate well above true costs, leading to huge profits by the owners and financial strain on libraries. For this, very limited access to content is given to the public.

A better version of the current system (System B)

  • Each journal is owned by a nonprofit society, university library, or similar organization.
  • Any publishing services required by the journal are contracted out transparently and competitively.
  • This should lead to lower overall subscription costs for libraries. The issue of public access is not addressed.

An alternative (System C)

  • Authors pay the up-front cost of publication.
  • Content is freely available to anyone.

This is usually called “gold open access”. Switching to such a system would lead to large savings overall compared to the current system. There are problems with exactly how authors pay, among other things.
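The claim of large savings can be illustrated with back-of-envelope arithmetic. The figures below are round-number assumptions chosen purely for illustration, not sourced data:

```python
# Hypothetical comparison of subscription (System A) vs author-pays (System C)
# costs. All numbers are illustrative assumptions, not measured figures.

papers_per_year = 2_000_000          # assumed global research output
subscription_spend = 10_000_000_000  # assumed total library spend (USD/year)
apc = 2_000                          # assumed average article charge (USD)

cost_per_paper_now = subscription_spend / papers_per_year
gold_oa_spend = papers_per_year * apc
saving = subscription_spend - gold_oa_spend

print(cost_per_paper_now)  # cost per paper under subscriptions: 5000.0
print(gold_oa_spend)       # total spend under System C: 4000000000
print(saving)              # hypothetical annual saving: 6000000000
```

Under these (debatable) assumptions the system-wide cost per paper drops by more than half; the real numbers are contested, but the direction of the argument is the point.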

System C deserves its own post. In the rest of this post, I want to discuss the more traditional options.

How to change from A to B

Owners accustomed to supernormal profits will resist giving up the exclusive right to use journal titles. Methods to achieve the desired result include

  • Asking assertively.
  • Threatening to move the editorial board to another publisher (changing the name, but making it clear that the “real” journal will be moving and the traditionally named one is not supported by the editorial board).
  • Carrying out such a threat.

Why has so little happened?

Tim Gowers’ latest post includes the following:

There were rumblings from the editorial boards of some Elsevier journals, but in the end, while a few individual members of those boards resigned, no board took the more radical step of resigning en masse and setting up with a different publisher under a new name (as some journals have done in the past), which would have forced Elsevier to sit up and take more serious notice. Instead, they waited for things to settle down, and now, two years later, the main problems, bundling and exorbitant prices, continue unabated: in 2013, Elsevier’s profit margin was up to 39%. (The profit is a little over £800 million on a little over £2 billion.)

I find this very hard to understand. There is a clear path to follow, demonstrated by several editorial boards. I read some comments about deliberations by the editors of Journal of Number Theory in 2012. Apparently (attributed to Urs Hartl): “in a recent vote among the 36 editors – 19 wanted the divorce – 6 didn’t – 6 were not ready to commit at this time and abstained – 5 didn’t respond.” It would be very interesting to read public comments from some of the editors.

An anecdote: I was asked to referee a paper by an Associate Editor (whom I don’t know personally) of the Elsevier journal Discrete Mathematics. After rejecting this because of the Elsevier boycott, I received an email from this Associate Editor.

If you have a colleague who is an Elsevier editor, take a look at their tools for managing a journal. Similar open source tools could be developed, but serious dedicated resources would be needed. Working at [redacted] and on software for [redacted] have taught me not to underestimate the task of creating top notch tools. Besides tools, Elsevier provides large databases of potential referees, referee reviewing history, and on-line access to large libraries of papers. They make it very easy to manage the editorial process. Their tools have helped our efforts to improve the journal Discrete Mathematics.

Having these tools available has made it possible to keep plugging away given the turmoil in the peer reviewing process. I have handled a nontrivial number of papers for which finding willing reviewers was a challenge. I’ve found that the Elsevier tools (together with Google Scholar) have made it possible for me to ultimately end up with two reports for even the most troublesome paper. By the way, I feel that referees should be compensated; I’ve articulated this view several times to Elsevier when I’ve had the chance to provide suggestions. However, there seems to be little opportunity to change things, even in token ways. Although I would like to [be] more precise, I think all I can say is that Discrete Mathematics associate editors get a mid-four figure salary (in US$), enough not to feel taken advantage of, but less than it should be. I also know that the chief editor receives quite a bit more, as he should.

… Everyone on the editorial board resonates with many of the complaints raised by the boycott. But we all have decided to continue working on the journal and encouraging Elsevier to change many of its ways. I feel I am doing a service to the authors and to the mathematical community by this work.

Let’s address the main points raised here:

  • The owner provides me with a lot of useful software tools to do my job.

Since I am not an Elsevier editor I can’t comment on their tools. My recollection from the time when I was an Elsevier referee is that the Elsevier Editorial System was nothing special.

However, I do have considerable experience as an editor using Open Journal Systems software. My enquiry to the Associate Editor about which features EES has that OJS doesn’t was met with silence. Does anyone reading this have a good answer?

  • The owner pays me thousands of dollars a year.

My own opinion is that this is scandalous. I am sure not everyone will agree. It certainly creates a strong impression in my mind of conflict of interest.

  • I am working to change the system from the inside.

Concessions made by Elsevier so far to a strong campaign by researchers have amounted to little more than a reduction in the rate of price increases and the freeing up of archives in some subject areas that don’t substantially affect Elsevier’s profitability. It is not remotely enough. My enquiry to the Associate Editor as to his progress in changing things was met with silence.

My conclusions, in the absence of further information: senior researchers by and large are too comfortable, too timid, too set in their ways, or too deluded to do what is needed for the good of the research enterprise as a whole. I realize that this may be considered offensive, but what else are the rest of us supposed to think, given everything written above? I have not even touched on the issue of hiring and promotions committees perpetuating myths about impact factors of journals, etc, which is another way in which senior researchers are letting the rest of us down (there are very few prepared to do what Randy Schekman has). That might be a topic for another post.

I very much hope that this post will stimulate serious discussion and we can really hear some principled reasons why, at the very least, we haven’t seen more progress toward System B, or a cheaper version of System A. Senior researchers invested in the current system, please let us know your views!

University ranking analysis

Warren Smart has analysed the recent performance of Australian and NZ universities in the three most prominent international university rankings (ARWU, QS, THE). There is a lot of detail there, not all of it depressing. It is going to be hard for NZ to keep being satisfied with “punching above its weight” in the face of lower income per student than just about anywhere we want to compare ourselves with. As a country, we may indeed have too many universities for them all to rank well internationally. But the good thing about NZ is that change can happen rapidly. So, please consider the policies of all parties in the areas of tertiary education, research, innovation, etc when voting in the general election on 20 September 2014.

Edit: the situation with NZ university rankings has been discussed quite widely recently. Some links:

Kiwifoo 2014

I was invited this year to KiwiFoo camp run 11-13 April by Nat Torkington and his crew in Warkworth. Before I went, I expected from reading others’ accounts of past camps that it would be (over?)stimulating and not to be missed, and so it proved to be. The opportunity to mingle with and listen to a diverse group of around 200 intelligent and articulate people (mostly with a common belief that technology can make the world better) doesn’t come along often. Certainly it is the first time I have seen journalists, bureaucrats, politicians, scientists, entrepreneurs, programmers and teachers thrown together in this way. Although there is always the danger of sessions degenerating into discussions about society that generalize from the experience of the participants (who are certainly not representative of NZ society) without sufficient data, this must be how major changes in society start. I hope that many good actions are inspired by our discussions.

If you ever get an invitation to KiwiFoo, accept it!

Open access news

Sometimes it is easy to forget that there may still be people who are not informed about this issue.

  • A nice summary by Samuel Gershman. He doesn’t mention one reason for the status quo being so hard to change: each journal has a monopoly on papers, and publisher packages (“the Big Deal”) make it hard to cancel individual journals – in any case, authors are insulated from having to make decisions about publication venues based on price.
  • Peter Murray-Rust is rightly angry about devious/incompetent publishers getting in the way
  • A great title: Causes for the persistence of impact factor mania
  • Meanwhile, Elsevier (anagram of Evil Seer) just keeps on going, with rising profits
  • The University of Waikato now has an open access “mandate” (a bit toothless for that name, really a policy, but a reasonable start). I have seen claims that it is the first in NZ, but it seems Lincoln University beat them to that. I know the University of Auckland has a working group on this issue. So, some slow progress, and maybe in my lifetime we will get where we ought to be already.

Is there a better way to fund research?

As I submit yet another low-probability grant bid that took up too much of my time, once again thoughts that “there must be a better way” come to mind. It seems that many colleagues feel the same way. Some interesting reading:

I have always felt that adding a random component to the grant award system, so as not to waste so much time trying to distinguish between very similar proposals, and giving out more, but smaller, grants, would be improvements. Reading comments on the above articles shows that several others agree. Perhaps it is time to try out some more modelling!
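The kind of scheme I have in mind can be sketched in a few lines. The parameters below (fund half the budget outright on merit, run a lottery over the next band of closely scored proposals) are purely hypothetical choices for illustration, not any agency’s actual rule:

```python
import random

def award_grants(scores, budget, top_frac=0.5, seed=None):
    """Partial-lottery grant allocation (a sketch, not a real agency rule):
    fund the clearly strongest proposals outright, then fill the remaining
    slots by random draw from the next band of similarly scored proposals.
    `scores` maps proposal id -> panel score; `budget` is the number of grants.
    """
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get, reverse=True)

    n_direct = int(budget * top_frac)       # awarded on merit alone
    direct = ranked[:n_direct]

    # Lottery pool: the next band, twice as large as the remaining slots,
    # so the panel need not rank near-identical proposals precisely.
    n_lottery = budget - n_direct
    pool = ranked[n_direct:n_direct + 2 * n_lottery]
    lottery = rng.sample(pool, min(n_lottery, len(pool)))
    return direct + lottery

# Example: 10 proposals, budget for 4 grants.
scores = {f"P{i}": s for i, s in
          enumerate([9.1, 8.8, 8.7, 8.6, 8.5, 8.4, 8.3, 7.0, 6.5, 5.0])}
winners = award_grants(scores, budget=4, seed=1)
print(winners)  # P0 and P1 on merit, plus two lottery picks from P2..P5
```

The appeal of such a design is that panel time is spent only on the decisions that scores can actually support, while the near-ties are settled cheaply and without pretending to a precision the review process does not have.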

The peer review system for research

The disclaimer

In the last few years I have often read about the crisis in scholarly (mostly scientific) peer review.
I share the belief that the current system is surely suboptimal and must be changed. Much of what I say below is not original: I have read so many posts and books by Tim Gowers, Bjoern Brembs, Mike Taylor, Michael Nielsen, Michael Eisen, Noam Nisan, and many other people, that I can’t remember them all. I have a year’s experience as managing editor of an open access no-fee journal, two years’ experience as an editor (= referee) for a Hindawi journal, and many years’ experience as a journal referee. My research area is mathematics and various applications, so there may be some discipline-specific assumptions that don’t work for other fields. And it is not possible to cover every issue in a blog post, so I don’t claim to be comprehensive.

The latest online furore was occasioned by a so-called “sting” operation published in Science (unlike most articles in that magazine, this one is freely readable). I don’t think it worth commenting in detail on that specific article. It tells us little we didn’t know already, and missed a big opportunity to do a more comprehensive study. It does show by example that pre-publication peer review can fail spectacularly. Some other (often amusing) instances from the last few years involve computer-generated papers of much lower quality than the one submitted by Science, presumably accepted by computer-generated editors (even mathematics is not immune, and some journals have done this more than once).

Some people have claimed that these weaknesses in peer review are exacerbated by the pay-to-publish model (they are certainly not exclusive to such journals, as the examples above, some published by Elsevier in toll access journals, show). This model certainly does lead to clear incentives for “Gold OA” journals to publish very weak papers. However, since authors have to pay, there are countervailing incentives for authors. If the reward system is poorly organized (as it seems to be in China, for example), then authors may still choose these predatory journals. But since papers in them are unlikely to be read or cited much, it seems unlikely to create a large problem. Journal reputation counts too – most predatory journals receive few submissions, for good reason. The existence of many low quality outlets (which predates the rise of open access journals) is a nuisance and possible trap for inexperienced researchers, and reduces the signal/noise ratio, but is not the main problem.

The main problem is: the currently dominant model of pre-publication peer review by a small number of people who don’t receive any proper payment for their time, either in money or reputation, is unlikely to achieve the desired quality control, no matter how expert these reviewers are. Furthermore, our post-publication system of review to ensure reliability is rudimentary, and corrections and retractions are not well integrated into the literature.

Both deliberate fraud (still quite rare, given the reputational risks, but apparently much more common than I would have thought) and works that are “not even wrong” and thus can’t be checked (poorly designed experiments, mathematical gibberish, etc) slip through far too often. It is bad enough that there are too many interesting papers to read, and then a lot of solid but uninteresting ones. Having to waste time with, or be fooled by, papers that are unreliable is inefficient for readers, and allowing this to go on creates wrong incentives for unscrupulous authors.

It seems now that “publication” doesn’t mean much, since the barrier is so low. A research paper now has no more status than a seminar talk (perhaps less in many cases). Self-publication on the internet is simple. There are so many journals that almost anything can be published eventually. How can we find the interesting and reliable research?

The only good solution that I can see involves the following steps. Clear proposals along these lines have been made by Tim Gowers and by Yann LeCun.

  • adopt the open research model

    This means more than just making the polished research article freely available. It includes circulation of preliminary results and data. Certainly a paper that doesn’t allow readers to make their own conclusions from the data should be considered anecdotal and not even wrong. Imagine a mathematics paper that doesn’t give any proofs.

  • decouple “peer review” from publication

    There can be two kinds of services: assistance (writing tips, pointers to literature, spotting errors) with the paper before it is ready for “publication”, and comment and rating services (which can give more refined grades on quality, not just the current yes/no score).

    Journal peer review focuses on the second type, but only gives yes/no scores (sometimes, a recommendation to submit to another journal). Computer science conferences are good for the first type of review, in my experience, but bad at the second. The first type of service is offered by Rubriq, Peerage of Science, and Science Open Reviewed. The second type is currently offered by Publons, SelectedPapers.net (no ratings yet), and PubPeer.

    This allows people with more time to specialize in reviewing, rather than writing. And they should get credit for it! A colleague in our mathematics department told me in June that he had just received his 40th referee request for the year. He is too busy writing good papers to do anything like that amount of work. Yet PhD students and postdocs, or retired researchers, or those with good training whose job description does not include intensive research (such as at teaching colleges) could do this job well. To keep this post from getting even longer, I will not discuss anonymity in reviewing, but it is an important topic.

    Other advantages are that post-publication review boards could bid for papers, so the best ones are reviewed quickly by the best reviewers, multiple review boards could review papers, and reviews are not wasted by being hidden in a particular journal’s editorial process.

  • decouple “significance” from inherent quality measures

    Journals also routinely reject on grounds of their own idea of “significance”, which is inefficient (especially when they publish “important” work that is “not even wrong”). The real determination of how important and interesting a paper is can only be done after publication and takes a long time. In some fields, replication must be attempted before importance can be determined. PLoS does this kind of filtering and seems to be successful. Pre-registration of experimental trials which will lead to publication whatever the result, and registered replication reports, are other ways to reduce the bias toward “glamour mag science”.

  • if you want attention for your work, you may have to pay for it

    There ought to be a barrier to consuming expert time. It is limited, and refereeing junk papers for free is a big waste of it. I would like to see a situation where it costs authors something (money, reputation points, in-kind work) to command attention from someone else (if the work is exciting enough that people will do it for free, then so much the better). This doesn’t preclude authors making drafts available and seeking freely given feedback. However, more detailed pre-publication attention might be obtained by various means: give seminar talks and present at conferences, pay via money or formalized reciprocal arrangement. Post-publication attention is another matter.

  • complete the feedback loop

    No system can work well unless information on performance and opinions is allowed to flow freely. Reviewers must themselves be able to be reviewed and compared. Strong ethical guidelines for reviewers should be set, and enforced. The current system allows anonymous referees to do a poor job or an excellent one, and only their editor knows both who they are and their performance level.
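As a sketch of what completing the feedback loop might mean in practice, reviews could be recorded and later rated themselves, so that reviewers accumulate a comparable track record. The data model and names below are entirely hypothetical, a minimal illustration rather than a design for any existing platform:

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Optional

@dataclass
class Review:
    reviewer: str
    paper: str
    days_taken: int
    quality_rating: Optional[float] = None  # later rating of the review itself

@dataclass
class ReviewLedger:
    """Hypothetical ledger closing the feedback loop: reviews are themselves
    rated after the fact, so reviewers can be compared on both turnaround
    time and the quality of their reports."""
    reviews: list = field(default_factory=list)

    def add(self, review: Review) -> None:
        self.reviews.append(review)

    def mean_quality(self, reviewer: str) -> Optional[float]:
        rated = [r.quality_rating for r in self.reviews
                 if r.reviewer == reviewer and r.quality_rating is not None]
        return mean(rated) if rated else None

    def mean_turnaround(self, reviewer: str) -> Optional[float]:
        times = [r.days_taken for r in self.reviews if r.reviewer == reviewer]
        return mean(times) if times else None

ledger = ReviewLedger()
ledger.add(Review("alice", "paper-1", days_taken=20, quality_rating=4.5))
ledger.add(Review("alice", "paper-2", days_taken=30, quality_rating=3.5))
ledger.add(Review("bob", "paper-1", days_taken=90))  # report not yet rated

print(ledger.mean_quality("alice"))   # 4.0
print(ledger.mean_turnaround("bob"))  # 90
```

Even something this simple would change incentives: a reviewer’s record becomes visible and comparable, instead of being known only to one journal’s editor.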

Freedom and security

I have followed the PRISM revelations with dismay. Despite attempts to downplay their significance by those who assert that “privacy is old-fashioned” or “if you have done nothing wrong then you have nothing to fear”, a line has been crossed that ought not to have been crossed without major public discussion. There has been a presumption of privacy for hundreds of years, and totalitarianism is not unthinkable in our so-called “free” societies.

In New Zealand the increasingly unimpressive-looking government has put forward legislation in this area that seems ill-conceived and is at the very least far too rushed.

There is a public meeting tonight, and a national protest planned for Saturday. It is true that some people attend far too many protests, but it seems to me that if you are ever going to protest anything, it should be this. Selling off state assets seems potentially much less serious. I really wonder what Richard Nixon, or even Robert Muldoon, would have done with the proposed spying powers.