Survey of opinions on mathematical journal reform

I am running a survey (via Google Forms) on behalf of an international group of researchers and librarians interested in improving the overall performance of the publication system in mathematics and other subjects. Its results will be made public later this year. We aim to get responses from a large and representative sample of the world mathematical community. The results will be used to focus efforts on improvements that have broad community support. As far as I know, nothing like this has been tried before. Some commercial publishers have undertaken author surveys on open access, but our survey is much broader in scope.

The survey (for editors, referees, authors, and readers of mathematical journals) can be accessed online. Google login is required for authentication and to safeguard data integrity, but no personal data will be stored.

Average-case analysis of random assignment algorithms

With summer scholarship student Jacky Lo, I have just submitted this paper to COMSOC 2016. This is the first time I have seriously looked at resource allocation in social choice. It was interesting and I may do more on this topic in the future.

Abstract: The problem of one-sided matching without money (also known as house allocation), namely computing a bijection from a finite set of items to a finite set of agents each of whom has a strict preference order over the items, has been much studied. Symmetry considerations require the use of randomization, yielding the more general notion of random assignment. Two algorithms, Random Priority (RP, also known as Random Serial Dictatorship) and the Probabilistic Serial rule (PS), dominate the literature on random assignments.
One feature of our work is the inclusion of several new algorithms for the problem. We adopt an average-case viewpoint: although these algorithms do not have the axiomatic properties of PS and RP, they are computationally efficient and perform well on random data, at least in the case of sincere preferences. We perform a thorough comparison of the algorithms, using several standard probability distributions on ordinal preferences and measures of fairness, efficiency and social welfare.
We find that there are important differences in performance between the known algorithms. In particular, our lesser-known algorithms yield better overall welfare than PS and RP and better efficiency than RP, with small negative consequences for envy, and are computationally efficient. Thus, provided that worst-case and strategic concerns are relatively unimportant, the new algorithms should be seriously considered for use in applications.
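As a quick illustration of the best-known algorithm mentioned above, here is a minimal sketch of one draw of Random Priority (RP). This is not code from the paper; the function and variable names are my own, and I assume every agent ranks every item.

```python
import random

def random_priority(preferences, seed=None):
    """One draw of Random Priority (Random Serial Dictatorship):
    shuffle the agents, then let each in turn take their
    most-preferred item that is still available.

    preferences: dict mapping agent -> list of items in strict
    preference order (every agent ranks every item).
    """
    rng = random.Random(seed)
    agents = list(preferences)
    rng.shuffle(agents)
    available = set(next(iter(preferences.values())))  # all items
    assignment = {}
    for agent in agents:
        pick = next(item for item in preferences[agent] if item in available)
        assignment[agent] = pick
        available.remove(pick)
    return assignment
```

Averaging the resulting assignment matrix over many draws (or over all orderings) gives the random assignment that RP actually computes; PS instead has agents "eat" their favourite items simultaneously at unit speed.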

Los Angeles

Today marks the end of an expensive and rewarding 5-week visit to Los Angeles. Most of it was vacation. We visited some excellent museums (Petersen Automotive, Getty Center, La Brea Tar Pits) and Santa Monica, caught up with some old friends and colleagues, and spent a lot of time with many relatives. As expected, there was a lot of driving and eating, and not a lot of exercise. The weather was excellent, around 16–20°C most days with a lot of sunshine, and smog much lighter than I had expected. In the end we forwent the delights of the big theme parks, couldn’t stomach the crowds around the Chinese Theatre, and missed out on being part of the audience for a TV show recording.

I did manage to do a small amount of professional work: a talk at UCLA (in Igor Pak’s Combinatorics seminar, my first ever visit to the campus – it looks like a wonderful place, and I ran into Terry Tao in the line for lunch!) and my first ever discussant appearance, at a very interesting political science workshop in Laguna Beach organized by Bernie Grofman. Overall this has been the longest break from work I can remember, and it’s time to start serious research and teaching for 2016.

Predicting the 2015 Canadian election

The Canadian general election will be held on 19 October. The most basic prediction method uses the full district (“riding”) vote information from the last election (in 2011), the current poll estimate for national level support for each party, and a model of changes in district votes. There are two main models used in predicting elections under First Past the Post (single-winner plurality in districts), namely Uniform (additive) Swing and Proportional (multiplicative) Swing.
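The two swing models described above can be sketched in a few lines. This is a minimal illustration, not the exact code behind the numbers below, and the party shares in the usage example are made up for demonstration:

```python
def predict_district(last_votes, last_national, current_national, model="additive"):
    """Predict one district's vote shares from the national swing.

    last_votes: party -> district vote share at the last election
    last_national / current_national: party -> national share then and now
    model: "additive" (uniform swing) or "multiplicative" (proportional swing)
    """
    pred = {}
    for party, share in last_votes.items():
        if model == "additive":
            # Uniform swing: add the national change, floored at zero.
            pred[party] = max(0.0, share + current_national[party] - last_national[party])
        else:
            # Proportional swing: scale by the ratio of national shares.
            pred[party] = share * current_national[party] / last_national[party]
    return pred

def winner(shares):
    """First Past the Post: the largest share wins the district."""
    return max(shares, key=shares.get)
```

Summing the district winners over all ridings (with the shares simulated or taken from the previous election) gives the seat totals reported below.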

Based on the current poll aggregate, these two models predict the following point estimates for the seat distributions (after scaling up to account for the increase in parliament size since 2011):

Multiplicative: CON 133 NDP 71 LIB 125 BQ 7 GRE 1
Additive: CON 145 NDP 85 LIB 101 BQ 6 GRE 1

NDP have lost a lot of support in recent weeks, but it still looks as though no party will have an absolute majority and CON will be the largest party.

UPDATE 19 October (NZ time): using the latest poll estimate the models now give:

Multiplicative: CON 131 NDP 72 LIB 128 BQ 3 GRE 1
Additive: CON 137 NDP 86 LIB 109 BQ 5 GRE 1
Another forecaster predicts: CON 120, NDP 71, LIB 141, BQ 5, GRE 1
Toronto Star predicts: CON 124, NDP 71, LIB 140, BQ 2, GRE 1

Let’s see the results tomorrow.

PolNet 2015 and APSA 2015

In June and September I attended two contrasting political science conferences on the West Coast of the USA. PolNet 2015 was small (of the order of 100 attendees) and focused on political networks. APSA 2015 was huge (about 7000 attendees), covering all areas of the field, with many business meetings and other professional activities in addition to research talks. Both were very well organized and had unusually good food available. These trips were very tiring physically, but very stimulating intellectually.

Personal highlights:


  • according to Skyler Cranmer, statistical inference on survey data usually makes an assumption of independence, which makes no sense for networks, and he found 700 published papers making this methodological error
  • David Lazer (and many collaborators) Moneybomb video
  • Kathleen Carley: 25-50% of tweeters are not human – biased data for social scientists!
  • really useful introductory tutorials by Skyler Cranmer, Lorien Jasny, Katherine Ognyanova, Scott Pauls

And from APSA 2015:

  • plenary talk by Robert Reich – in addition to the main substance, the anecdotes were very entertaining, especially his interview by a journalist asking about a date he went on with Hillary Clinton about 50 years ago
  • British Election Forecasting session – interesting to see how so many different predictions were all so wrong, and concerns about “forecaster herding”
  • political networks session where several people agreed that it is a fad, but that is OK because it gives good results
  • APSA business meeting into which I was press-ganged in order to make a quorum. It seems that elections use Australian House of Representatives rules. The quorum was lost partway through, and a complaint made … The political scientists can sure talk!
  • talk by Carey, Masoud and Reynolds showing how close Tunisia came to failing like Libya and Egypt (small differences in the voting rules used)
  • Alex Montgomery’s favourite international organization is the African Groundnut Council
  • many other interesting topics: why is there no right-wing party in many countries?; the effect of preferential voting on representation of minorities; the poll herding hypothesis (low-quality pollsters adjust results to match high-quality ones); how can polarization be measured?


Barriers to open access – reading

Here is a list of references that may be useful when considering barriers to open access.

Measures of partial balance in signed networks

Networks in which the edges can be positive or negative occur in many applications (for example, in international relations, states may be allied or official enemies). A widely-used theory of “balance” predicts that networks should become more balanced over time. However there are no clearly agreed measures of partial balance. Samin Aref and I made a start on improving this situation.

Abstract: Is the enemy of an enemy necessarily a friend, or a friend of a friend a friend? If not, to what extent does this tend to hold? Such questions were formulated in terms of signed (social) networks, and necessary and sufficient conditions for a network to be “balanced” were obtained around 1960. Since then the idea that signed networks tend over time to become more balanced has been widely used in several application areas, such as international relations. However, investigation of this hypothesis has been complicated by the lack of a standard measure of partial balance, since complete balance is almost never achieved in practice.
We formalise the concept of a measure of partial balance, compare several known measures on real-world and synthetic datasets, and investigate their axiomatic properties. We use both well-known datasets from the sociology literature, such as Read’s New Guinean tribes, and much more recent ones involving senate bill co-sponsorship. The synthetic data involve both Erdős–Rényi and Barabási–Albert graphs.
We find that under all our measures, real-world networks are more balanced than random networks. We also show that some measures behave better than others in terms of axioms, computational tractability and ability to differentiate between graphs. We make some recommendations for measures to be used in future work.

Link to preprint:
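For readers unfamiliar with the area, the oldest and simplest measure of partial balance counts balanced triangles: a triangle is balanced when the product of its three edge signs is positive (so "the enemy of my enemy is my friend" holds). This minimal sketch is my own illustration, not code from the paper:

```python
from itertools import combinations

def triangle_balance(nodes, signs):
    """Fraction of triangles whose edge signs multiply to +1.

    signs: dict mapping frozenset({u, v}) -> +1 or -1 for each edge.
    Returns None if the graph contains no triangles.
    """
    balanced = total = 0
    for u, v, w in combinations(nodes, 3):
        edges = [frozenset(e) for e in ((u, v), (v, w), (u, w))]
        if all(e in signs for e in edges):  # all three edges present
            total += 1
            if signs[edges[0]] * signs[edges[1]] * signs[edges[2]] > 0:
                balanced += 1
    return balanced / total if total else None
```

A fully balanced network scores 1.0; one obstacle discussed in the paper is that different measures like this one can disagree badly on the same network, which is what motivates an axiomatic comparison.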

Distance-based voting rules

After a long gestation period in which I seemed to be publishing nothing, a few projects have reached maturity. With Benjamin Hadjibeyli, I have a preprint studying so-called distance rationalizable voting rules, which we recently submitted. These are voting rules in which we specify some notion of consensus winner, set up a distance measure on elections, and then choose the winner(s) based on minimizing the distance to a consensus election. This framework has been used by other authors over the years, particularly by Edith Elkind, Piotr Faliszewski and Arkadii Slinko in a series of papers.
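A standard toy example of the framework (taken from the literature, not from our preprint) is the distance rationalization of plurality: take the consensus class to be elections where every voter ranks the same candidate first, and the distance between two elections to be the number of voters whose ballots differ. A minimal sketch, with names of my own choosing:

```python
def dr_plurality(profile, candidates):
    """Distance-rationalization of plurality.

    profile: list of ballots, each a sequence of candidates in
    strict preference order.
    For each candidate c, the nearest consensus election ranking c
    first differs from the profile in exactly the voters whose top
    choice is not c; minimizing that count recovers the plurality
    winner (ties broken by order in `candidates`).
    """
    def dist_to_consensus(c):
        return sum(1 for ballot in profile if ballot[0] != c)
    return min(candidates, key=dist_to_consensus)
```

Swapping in a different consensus notion (e.g. unanimous rankings) or a different distance (e.g. swap distance) yields other familiar rules, which is what makes the framework a useful unifying lens.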

We deal with a lot of foundational issues and give some insight into the way Minkowski geometry relates to decisiveness of such rules. This includes both positive and negative results.

This work was inspired by discussions with Elchanan Mossel and Miklos Racz in Berkeley in 2013, about whether the Dodgson rule is a so-called hyperplane rule. That question is still not answered, but now that the underbrush has been cleared, it makes sense to go back to it.