Counter-intuitive updates in TrueSkill

  • Question

  • I am working through Chapter 3, "Meeting your match", of the MBML book (http://mbmlbook.com/index.html). I implemented the basic part of the TrueSkill algorithm and expected to see the skill updates described in the book. The explanations seemed very reasonable to me:

    • If the top-skilled player wins, this comes as little surprise, so his/her skill estimate changes little from prior to posterior.
    • If the top-skilled player loses, this is surprising. It decreases his/her skill posterior but also greatly upgrades the skill estimate of the lower-skilled player who managed to snatch the win (see the sketch after this list).
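
    If I read the chapter correctly, the single-game update has a closed form. As a sketch (assuming the standard no-draw TrueSkill update with performance noise standard deviation β, here 5, which moment matching should give for a one-game model like mine):

    \[
    \mu_{\mathrm{winner}} \leftarrow \mu_{\mathrm{winner}} + \frac{\sigma_{\mathrm{winner}}^2}{c}\, v(t), \qquad
    \mu_{\mathrm{loser}} \leftarrow \mu_{\mathrm{loser}} - \frac{\sigma_{\mathrm{loser}}^2}{c}\, v(t),
    \]
    \[
    c^2 = 2\beta^2 + \sigma_{\mathrm{winner}}^2 + \sigma_{\mathrm{loser}}^2, \qquad
    t = \frac{\mu_{\mathrm{winner}} - \mu_{\mathrm{loser}}}{c}, \qquad
    v(t) = \frac{\mathcal{N}(t;\,0,1)}{\Phi(t)},
    \]

    where v(t) is the quantitative version of "surprise": small for an expected win, large for an upset.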

    The problem is that, having implemented the model with the same factors and parameter values, I don't get the posteriors depicted in Figures 3.29 (Jill, the top player, wins) and 3.30 (Fred, the less skilled player, wins).

    Here's the setting:

    Jill is a skilled player; her skill prior is Normal(120, 40*40). Fred is less skilled but has lower variance in his skill prior: Normal(100, 5*5).

    I separately observe one game outcome: (a) Jill wins = True and (b) Jill wins = False.

    In case (a), the skill marginals are:

    Jskill marginal = Gaussian(140.1, 810)
    Fskill marginal = Gaussian(99.69, 24.81)

    In case (b), the skill marginals are:

    Jskill marginal = Gaussian(75.71, 484.8)
    Fskill marginal = Gaussian(100.7, 24.73)

    Why does Jill's skill estimate mean change a lot while Fred's barely moves? If Jill wins, I expect little update to Jill's skill posterior, since there is little surprise. If Jill loses, I expect Jill's skill posterior mean to decrease a bit, but also Fred's skill posterior mean to go up considerably. Instead, in both cases Fred's skill posterior mean stays almost unchanged while Jill's changes a lot.

    Another question: why does the variance of Jill's skill decrease much more when she loses (1600 -> 484) than when she wins (1600 -> 810)?
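
    To sanity-check the numbers outside Infer.NET, here is a minimal numeric sketch in plain C# that evaluates the closed-form update above for these priors. It assumes performance noise beta = 5 (as in the model below) and uses my own NormalCdf approximation rather than any Infer.NET call:

    // plain C# sketch of the closed-form single-game update (not Infer.NET)
    double NormalPdf(double x) => Math.Exp(-0.5 * x * x) / Math.Sqrt(2 * Math.PI);
    double NormalCdf(double x)
    {
        // Abramowitz & Stegun 26.2.17 approximation of the standard normal CDF
        double u = 1.0 / (1.0 + 0.2316419 * Math.Abs(x));
        double poly = u * (0.319381530 + u * (-0.356563782 + u * (1.781477937
                    + u * (-1.821255978 + u * 1.330274429))));
        double upper = 1.0 - NormalPdf(Math.Abs(x)) * poly;
        return x >= 0 ? upper : 1.0 - upper;
    }
    
    double muJ = 120, varJ = 40 * 40;        // Jill's prior
    double muF = 100, varF = 5 * 5;          // Fred's prior
    double beta2 = 5 * 5;                    // performance noise variance
    bool jillWins = true;                    // set to false for case (b)
    
    double muW = jillWins ? muJ : muF, varW = jillWins ? varJ : varF;
    double muL = jillWins ? muF : muJ, varL = jillWins ? varF : varJ;
    
    double c = Math.Sqrt(2 * beta2 + varW + varL);
    double t = (muW - muL) / c;
    double v = NormalPdf(t) / NormalCdf(t);  // "surprise" factor, shared by both players
    double w = v * (v + t);
    
    // each player's mean shift and variance reduction are scaled by
    // that player's own prior variance (varW or varL)
    Console.WriteLine("winner: mean {0}, variance {1}",
        muW + (varW / c) * v, varW * (1 - varW / (c * c) * w));
    Console.WriteLine("loser:  mean {0}, variance {1}",
        muL - (varL / c) * v, varL * (1 - varL / (c * c) * w));

    If this closed form applies here, each player's mean shift is scaled by that player's own prior variance, so a prior with variance 1600 (Jill) would move far more than one with variance 25 (Fred), whichever way the game goes. The same scaling appears in the variance term, together with the larger w(t) for a surprising result, which would also explain why Jill's variance shrinks more when she loses.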

    Here is my model:

    using System;
    using MicrosoftResearch.Infer;                // InferenceEngine
    using MicrosoftResearch.Infer.Models;         // Variable, Range, VariableArray
    using MicrosoftResearch.Infer.Distributions;  // Gaussian
    
    bool[] outcomes_data = new bool[] { true }; // Jill wins
    //bool[] outcomes_data = new bool[] { false }; // Fred wins
    
    int numGames = outcomes_data.Length;
    Range n = new Range(numGames);
    
    // prior beliefs on skill
    var Jskill = Variable.GaussianFromMeanAndVariance(120, 40 * 40);
    var Fskill = Variable.GaussianFromMeanAndVariance(100, 5 * 5);
    
    // performance for 1+ games for Jill
    var Jperfs = Variable.Array<double>(n);
    Jperfs[n] = Variable.GaussianFromMeanAndVariance(Jskill, 5 * 5).ForEach(n);
    
    // performance for 1+ games for Fred
    var Fperfs = Variable.Array<double>(n);
    Fperfs[n] = Variable.GaussianFromMeanAndVariance(Fskill, 5 * 5).ForEach(n);
    
    // game outcomes (true - Jill wins, false - Fred wins)
    var outcomes = Variable.Array<bool>(n);
    
    // model
    using (Variable.ForEach(n))
       outcomes[n] = Jperfs[n] > Fperfs[n];
    
    // attaching data
    outcomes.ObservedValue = outcomes_data;
    
    InferenceEngine engine = new InferenceEngine();
    
    Gaussian JskillMarginal = engine.Infer<Gaussian>(Jskill);
    Gaussian FskillMarginal = engine.Infer<Gaussian>(Fskill);
    
    Console.WriteLine("Jskill marginal = {0}", JskillMarginal);
    Console.WriteLine("Fskill marginal = {0}", FskillMarginal);


    • Edited by usptact Wednesday, November 8, 2017 8:10 PM
    Wednesday, November 8, 2017 8:05 PM

Answers

  • I think I've found the problem. The issue is that the text in the MBML book and the figures do not correspond. Jill's prior skill variance must be 5*5, not 40*40, and, correspondingly, Fred's prior skill variance must be 40*40, not 5*5. After swapping those two values, the posteriors match the figures.

    The model is so simple that I started to suspect yet another bug in Infer.NET. But no, the framework works just great!

    Sorry for the false alarm!
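
    For reference, a minimal sketch of the fix; only the two prior lines change, and the rest of the model above is assumed to stay the same:

    // skill priors swapped so that the posteriors match Figures 3.29 / 3.30
    var Jskill = Variable.GaussianFromMeanAndVariance(120, 5 * 5);
    var Fskill = Variable.GaussianFromMeanAndVariance(100, 40 * 40);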


    • Marked as answer by usptact Wednesday, November 8, 2017 11:41 PM
    • Edited by usptact Wednesday, November 8, 2017 11:42 PM
    Wednesday, November 8, 2017 11:41 PM