Quarkstrudel im Kopf

July 13, 2014 / Martin

TrueSkill in Haskell

I have recently implemented TrueSkill for a world champion bet.

The model assumes that every team (which could be split up into individual players) is characterized by an unobserved skill variable, a single real number: the better the team, the higher this variable. For each game we generate a game-skill variable, a noisy version of the skill variable; you can think of it as the team's skill on that particular day. So even if two teams with fixed skills play against each other over and over again, the outcomes will still vary. The observations (the team won, lost, or drew against its opponent) are modeled by the sign of the difference of the two game skills.
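The generative story above can be sketched as follows. Everything here is my own illustration (the names, the draw margin, and passing the noise in explicitly instead of sampling it), not necessarily how the actual implementation is structured:

```haskell
-- Deterministic core of the generative story: given each team's
-- latent skill and an already-sampled noise term, form the noisy
-- per-game ("daily") skills and compare them.
data Outcome = HomeWin | Draw | AwayWin deriving (Eq, Show)

-- drawMargin: how close the two game skills must be to count as a
-- draw (TrueSkill's epsilon). Sampling noiseA/noiseB from a Gaussian
-- is left out of this sketch.
outcome :: Double -> Double -> Double -> Double -> Double -> Outcome
outcome drawMargin skillA noiseA skillB noiseB
  | d >  drawMargin = HomeWin
  | d < -drawMargin = AwayWin
  | otherwise       = Draw
  where
    perfA = skillA + noiseA  -- game skill of the home team
    perfB = skillB + noiseB  -- game skill of the guest team
    d     = perfA - perfB    -- its sign decides the observation
```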

For inference, message passing is used, and the messages themselves are Gaussian distributions: an actual message consists of a mean and a variance. Most steps can be carried out exactly; where that is not possible, the messages are approximated by expectation propagation.
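One reason Gaussian messages are convenient: written in natural (precision) form, multiplying two Gaussian densities just adds the parameters and dividing subtracts them, which are exactly the operations message passing needs when combining and removing messages. The names below are illustrative, not taken from the repository:

```haskell
-- A Gaussian message in natural (precision) form:
-- rho = 1/variance, tau = mean/variance.
data Gaussian = Gaussian { rho :: Double, tau :: Double }
  deriving (Eq, Show)

fromMeanVar :: Double -> Double -> Gaussian
fromMeanVar m v = Gaussian (1 / v) (m / v)

mean :: Gaussian -> Double
mean (Gaussian r t) = t / r

variance :: Gaussian -> Double
variance (Gaussian r _) = 1 / r

-- Multiplying two Gaussian densities adds natural parameters;
-- dividing subtracts them.
multiply, divide :: Gaussian -> Gaussian -> Gaussian
multiply (Gaussian r1 t1) (Gaussian r2 t2) = Gaussian (r1 + r2) (t1 + t2)
divide   (Gaussian r1 t1) (Gaussian r2 t2) = Gaussian (r1 - r2) (t1 - t2)
```

For example, the product of N(0, 1) and N(2, 1) is proportional to N(1, 0.5), which falls out of the two additions above.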

I mostly followed the paper when it comes to inference. Since there are not too many international soccer games, I decided to go for loopy belief propagation (section 28.7). The original work uses a kind of on-line update, similar to Elo rankings.
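A loopy belief propagation schedule just repeats the message updates over all games until nothing changes much any more. The generic skeleton, with a hypothetical distance-based convergence test, might look like this (my own sketch, not code from the repository):

```haskell
-- Iterate an update step until successive states differ by less
-- than a tolerance: a crude convergence test for sweeps of loopy
-- belief propagation, where `step` would update all messages once.
iterateUntilConverged :: (a -> a) -> (a -> a -> Double) -> Double -> a -> a
iterateUntilConverged step dist tol x0 = go x0 (step x0)
  where
    go prev cur
      | dist prev cur < tol = cur
      | otherwise           = go cur (step cur)
```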

You can find the code on GitHub.

The application expects a training CSV file (as its first command-line argument) that might look something like this:

2010,Guadeloupe,Martinique,2,0,2337,French Territory Cup
2012,Martinique,New Caledonia,2,0,3544,French Territory Cup
2011,Albania,Bosnia and Herzegovina,0,2,2517,European Cup qualifier
2010,Myanmar,Tajikistan,0,1,2449,Asian Challenge Cup

Every row contains the year, home team, guest team, home score, guest score, game ID, and any other information that might be useful for other processing. You can implement a particular loopy belief propagation schedule by repeatedly including rows for the same game (with the same game ID).
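Parsing one such row could look roughly like the following; the field and function names are my own and need not match the repository:

```haskell
import Text.Read (readMaybe)

-- One training row: year, teams, scores, and the game ID used to
-- identify repeated rows of the same game. Trailing fields are
-- ignored here.
data Game = Game
  { year       :: Int
  , homeTeam   :: String
  , guestTeam  :: String
  , homeScore  :: Int
  , guestScore :: Int
  , gameId     :: Int
  } deriving (Eq, Show)

-- Naive comma split; fine as long as team names contain no commas.
splitOn :: Char -> String -> [String]
splitOn c s = case break (== c) s of
  (a, [])       -> [a]
  (a, _ : rest) -> a : splitOn c rest

parseRow :: String -> Maybe Game
parseRow row = case splitOn ',' row of
  (y : h : g : hs : gs : gid : _) ->
    Game <$> readMaybe y <*> pure h <*> pure g
         <*> readMaybe hs <*> readMaybe gs <*> readMaybe gid
  _ -> Nothing
```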

The original work allows one to reason about wins, losses, and draws, but our bet required predicting the final score of each soccer game. I did not really have the time to implement anything better, so in the end I just used a nearest-neighbor search among similar games, which at least produced reasonable results. The second command-line argument hence points to a file that holds all the results to be used for the lookup; it follows the same format as the training file.
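A minimal sketch of such a nearest-neighbor lookup, assuming each historical game is keyed by the skill difference of its two teams (a simplification; the actual code may use a richer similarity measure):

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

type Score = (Int, Int)

-- Given the predicted skill difference of the upcoming game, return
-- the final score of the historical game whose skill difference is
-- closest to it.
nearestScore :: Double -> [(Double, Score)] -> Score
nearestScore diff history =
  snd (minimumBy (comparing (abs . subtract diff . fst)) history)
```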

Querying games is done by piping in a CSV file (again in the same format as above).

I am really curious how this model would perform on richer data, e.g. using full player information for each team. Player transfers would then also transfer skill between teams. This might be particularly interesting for national soccer leagues.