Research happenings

Publications 

I have updated my local archive to reflect some new publications:

  • Simulator for the 2011 NZ Referendum (with Geoffrey Pritchard), Parliamentary Affairs, to appear.
  • Best Reply Dynamics for Scoring Rules (with Reyhaneh Reyhani), Proceedings of ECAI 2012.
  • Coordination via Polling in Plurality Voting Games under Inertia (with Reyhaneh Reyhani and Javad Khazaei), Proceedings of COMSOC 2012.
  • Asymptotics of coefficients of multivariate generating functions: improvements for multiple points. Online Journal of Analytic Combinatorics 2012.
  • Power measures derived from the sequential query process (with Geoffrey Pritchard and Reyhaneh Reyhani), Mathematical Social Sciences, to appear.
  • Random Cayley digraphs of diameter 2 and given degree (with Manuel Lladser, Primož Potočnik and Jozef Širáň), Discrete Mathematics and Theoretical Computer Science 2012.


Reinventing Discovery

I highly recommend Reinventing Discovery by Michael Nielsen (published in 2011, but I have only just read it; it’s hard to be on the cutting edge). He “wrote this book with the goal of lighting an almighty fire under the scientific community”. His overview of Open Science, of which Open Access to publications is just one component, is very compelling and optimistic, without losing sight of the difficulties.

Submission to the Electoral Commission Review of MMP

I missed the first deadline for submissions to the review, but now that the Proposals Paper has been released, attention has been focused on a smaller number of issues. With Michael Fowlie (current COMPSCI 380 project student) I have made a submission based on simulations of what we hope are “realistic” elections. We find that the party vote threshold should be lower than the 4% recommended by the commission. I have been told by the EC that our submission will be an appendix to their report, due out on 31 October. It will be interesting to see (a) their recommendations and (b) whether they lead to any actual change.
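
Our actual simulations are considerably more detailed, but a minimal Python sketch of the kind of computation involved might look like the following. To be clear, the random model of party-vote shares, the number of parties, and the omission of the electorate-seat exemption are simplifying assumptions of mine for this illustration, not features of the submission itself. The sketch allocates the 120 seats by the Sainte-Laguë method used in NZ and reports how average disproportionality (the Gallagher index, here computed on vote-share fractions) varies with the party-vote threshold.

```python
import random

SEATS = 120  # size of the NZ House under MMP (ignoring overhang seats)

def sainte_lague(votes, seats):
    """Allocate seats by highest averages with Sainte-Lague divisors 1, 3, 5, ..."""
    alloc = dict.fromkeys(votes, 0)
    for _ in range(seats):
        # The next seat goes to the party with the largest current quotient.
        winner = max(votes, key=lambda p: votes[p] / (2 * alloc[p] + 1))
        alloc[winner] += 1
    return alloc

def mean_gallagher(threshold, n_parties=8, trials=2000):
    """Average Gallagher disproportionality index over simulated elections."""
    total = 0.0
    for _ in range(trials):
        raw = [random.expovariate(1.0) for _ in range(n_parties)]
        s = sum(raw)
        shares = [v / s for v in raw]
        # Parties below the threshold win nothing in this toy model
        # (the real rules exempt parties winning an electorate seat).
        eligible = {i: sh for i, sh in enumerate(shares) if sh >= threshold}
        alloc = sainte_lague(eligible, SEATS)
        total += (0.5 * sum((shares[i] - alloc.get(i, 0) / SEATS) ** 2
                            for i in range(n_parties))) ** 0.5
    return total / trials

for t in (0.00, 0.02, 0.04, 0.05):
    print(f"threshold {t:.0%}: mean Gallagher index {mean_gallagher(t):.4f}")
```

Even a toy model like this exhibits the basic trade-off at issue: raising the threshold wastes more of the party vote, which shows up directly as increased disproportionality.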

Addendum: our submission appears as Appendix D in the commission’s final report to Parliament. They went with the 4% recommendation in the end.

Open access update

There is a lot of new material out there, and some older material I hadn’t yet seen; it may be useful.

Division of labour in prepublication peer review

It seems to me a good idea to separate out the traditional refereeing (pre-publication review) functions. In mathematics at least, a paper should be “true, new and interesting”. It is often easier to check the first than the second, especially for less experienced researchers. It makes sense, then, for more experienced researchers to be asked for a quick opinion on how interesting and new a paper is, while more junior ones check correctness. This has some other advantages: if it becomes widespread, authors will have an incentive to write in a way that is understandable to new PhDs or even PhD students, which will probably improve exposition quality overall. It would reduce the load on senior researchers (I received an email yesterday from a colleague who said he had just received his 40th refereeing request for the year!). Doing a good job as a junior reviewer could become a good CV item, so there would be an incentive to participate. Some sort of rating of reviewers will probably need to be undertaken: just as with papers that pass “peer review”, post-publication feedback from the whole community will be involved.

Peer review

I intend to present ideas (mostly not my own) about how to improve the current peer review system. This is a background post.

What is the purpose of peer review of scholarly publications?

  • Certification of correctness of the work
  • Filtering out work of low interest to the research community, to allocate attention more efficiently
  • Improving the quality of the work

Michael Eisen (among others) has argued that the current system is broken. Michael Nielsen debunks three myths about scientific peer review. Daniel Lemire has several interesting posts, including “the perils of filter-then-publish” and “peer review is an honor-based system”.

Certification is still important, and very discipline-specific. In (parts of?) physics it seems to be a fairly low standard: not obviously wrong. The journal PLoS ONE seems to check more rigorously for correctness, but is very relaxed about significance (see here). Mathematics journals I have experience with seem to be more finicky, and traditional journals with a high reputation are much tougher in assessing significance, often rejecting without looking at the technical details.

It seems clear to me that improvements to the current system are sorely needed. Excessive attention to whether work is “interesting” risks reducing science to a popularity contest, yet there are too many boring but correct papers for anyone to read them all. And who has time to help others improve their work, when refereeing is anonymous and there is so much pressure to publish oneself?


Better citation indices

Daniel Lemire and colleagues are aiming to find a better algorithm for measuring the importance of research articles by incorporating the context in which each citation is made (for example, distinguishing between “courtesy citations” inserted to placate referees and real pointers to important work). They need some data, and providing it looks like a low burden for each researcher. Check out this site for more.
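
To make the idea concrete, here is a toy Python sketch of a context-weighted citation count. The context labels and weights are entirely hypothetical, invented by me for illustration; the actual project aims to learn this kind of information from citation data contributed by researchers.

```python
# Hypothetical context labels and weights, invented for illustration;
# a real system would learn these from labelled citation data.
WEIGHTS = {
    "builds_on": 1.0,    # the citing paper directly uses the cited result
    "comparison": 0.7,   # cited as a baseline or point of comparison
    "background": 0.3,   # generic related-work mention
    "courtesy": 0.1,     # inserted to placate referees
}

def context_weighted_score(citation_contexts):
    """Sum context weights instead of counting all citations equally."""
    return sum(WEIGHTS.get(c, 0.3) for c in citation_contexts)

# Two articles with four citations each: a raw count cannot tell them apart,
# but the weighted score can.
print(context_weighted_score(["builds_on", "builds_on", "comparison", "background"]))  # 3.0
print(context_weighted_score(["courtesy", "courtesy", "courtesy", "background"]))      # 0.6
```

The point of the example is only that the weighting, however it is obtained, separates articles that a raw citation count treats as identical.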

I think we have passed the point of no return with bibliometrics in evaluating researchers and articles. They will be used, so it is to our benefit to ensure that less bad ones are used.