VMP algorithm regularisation / initialisation?

  • Question

  • Hello Infer.net team,

    I have a model with a parameter I'm trying to estimate which changes (fairly slowly) over time. To model this change, I've set up the variable in the same way as the Skill variables in the Chess Analysis example (a variable array indexed by time). However, when I run inference I find that the VMP algorithm tends to underestimate the analogous 'skillChangePrecision' variable, and as a result the parameter estimate fluctuates far too much over time due to noise.

    Is this due to the way the VMP algorithm works, where it tries to maximise the model evidence (and hence tends to decrease the ChangePrecision variable, allowing it to over-fit the model)? Is there a way to introduce some form of regularisation into the algorithm to prevent this, or could initialising the ChangePrecision variable help with this problem?
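    For readers with the same question, two things one might try here (a sketch only; whether either helps will depend on the model) are tightening the Gamma prior on the change precision, which acts as a form of regularisation, and giving VMP an explicit starting point with InitialiseTo, since VMP only finds a local optimum. The variable names below follow this thread; the specific shape/rate and point-mass values are purely illustrative.

```csharp
// Regularisation via the prior: a larger shape (with a matching rate, so the
// prior mean stays at 2/(2*0.1*0.1) = 100) concentrates the prior and
// penalises implausibly small precisions. Values here are illustrative.
var skillChangePrecisionPrior = Gamma.FromShapeAndRate(10, 10 * 0.1 * 0.1);
var skillChangePrecision = Variable.Random(skillChangePrecisionPrior)
                                   .Named("skillChangePrecision");

// Initialisation: start the VMP schedule from a chosen point instead of the
// prior, which can steer the optimisation to a different local optimum.
skillChangePrecision.InitialiseTo(Gamma.PointMass(100.0));
```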

    Saturday, January 3, 2015 8:15 PM

Answers

  • If I fix performancePrecision to the true value, then skillChangePrecision is estimated correctly.  So I think this is just an ambiguity in trying to learn these two parameters at once.
    • Marked as answer by MarkG87 Monday, January 12, 2015 6:46 PM
    Monday, January 12, 2015 3:09 PM
    Owner
  • In the simplified model, if I remove the setting for UseSerialSchedules, then I get:

    skillChangePrecision = 135.543264891683 (truth = 199.564405970014)

    • Marked as answer by MarkG87 Monday, January 12, 2015 6:46 PM
    Monday, January 12, 2015 5:53 PM
    Owner

All replies

  • When you say the precision is underestimated, have you sampled data from the model and found that the true parameter values are not recovered?  If so, what is the simplest model where this problem happens?
    Tuesday, January 6, 2015 9:39 PM
    Owner
    I was running my model against real-world data, so I don't have the true parameters to compare against, but from analysing the changes in the parameter estimates over time, they were fluctuating far too much compared to what would make sense in the real-world situation.

    I've recreated the effect using the model in my other post which is based on simulated data (https://social.microsoft.com/Forums/en-US/0cc91f06-fdb4-41e1-9ae3-9a2173dae883/logistic-factor-problems?forum=infer.net):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using MicrosoftResearch.Infer;
    using MicrosoftResearch.Infer.Models;
    using MicrosoftResearch.Infer.Utils;
    using MicrosoftResearch.Infer.Distributions;
    using MicrosoftResearch.Infer.Maths;
    using MicrosoftResearch.Infer.Factors;
    
    namespace ChessAnalysis
    {
        class Program
        {
            static void Main(string[] args)
            {
                InferenceEngine engine = new InferenceEngine(new VariationalMessagePassing());
                engine.Compiler.UseSerialSchedules = false;
                engine.Compiler.UseParallelForLoops = true;
    
                int nPlayers = 10;
                int nYears = 10;
                Rand.Restart(1);
    
                var skillPrior = new Gaussian(0, 2 * 2);
                var performancePrecisionPrior = Gamma.FromShapeAndRate(2, 2 * 1 * 1);
                var skillChangePrecisionPrior = Gamma.FromShapeAndRate(2, 2 * 0.1 * 0.1);
    
                var performancePrecision = Variable.Random(performancePrecisionPrior).Named("performancePrecision");
                var skillChangePrecision = Variable.Random(skillChangePrecisionPrior).Named("skillChangePrecision");
    
                Range player = new Range(nPlayers).Named("player");
                Range year = new Range(nYears).Named("year");
                VariableArray<int> firstYear = Variable.Array<int>(player).Named("firstYear");
                var skill = Variable.Array(Variable.Array<double>(player), year).Named("skill");
    
                using (var yearBlock = Variable.ForEach(year))
                {
                    var y = yearBlock.Index;
                    using (Variable.If(y == 0))
                    {
                        skill[year][player] = Variable.Random(skillPrior).ForEach(player);
                    }
                    using (Variable.If(y > 0))
                    {
                        using (Variable.ForEach(player))
                        {
                            Variable<bool> isFirstYear = (firstYear[player] >= y).Named("isFirstYear");
                            using (Variable.If(isFirstYear))
                            {
                                skill[year][player] = Variable.Random(skillPrior);
                            }
                            using (Variable.IfNot(isFirstYear))
                            {
                                skill[year][player] = Variable.GaussianFromMeanAndPrecision(skill[y - 1][player], skillChangePrecision);
                            }
                        }
                    }
                }
    
                // Sample parameter values according to the above model
                firstYear.ObservedValue = Util.ArrayInit(nPlayers, i => Rand.Int(nYears));
                Parameters parameters = new Parameters();
                parameters.performancePrecision = performancePrecisionPrior.Sample();
                parameters.skillChangePrecision = skillChangePrecisionPrior.Sample();
                parameters.skill = Util.ArrayInit(nYears, y => Util.ArrayInit(nPlayers, i => skillPrior.Sample()));
                for (int y = 0; y < nYears; y++)
                {
                    for (int i = 0; i < nPlayers; i++)
                    {
                        if (y > firstYear.ObservedValue[i])
                        {
                            parameters.skill[y][i] = Gaussian.Sample(parameters.skill[y - 1][i], parameters.skillChangePrecision);
                        }
                    }
                }
    
                // Sample game outcomes
                int[][] whiteData, blackData, outcomeData;
                GenerateData(parameters, firstYear.ObservedValue, out whiteData, out blackData, out outcomeData);
    
                bool inferParameters = true;  // make this true to infer additional parameters
                if (!inferParameters)
                {
                    // fix the true parameters
                    performancePrecision.ObservedValue = parameters.performancePrecision;
                    skillChangePrecision.ObservedValue = parameters.skillChangePrecision;
                }
    
                // Learn the skills from the data
                int[] nGamesData = Util.ArrayInit(nYears, y => outcomeData[y].Length);
                var nGames = Variable.Observed(nGamesData, year).Named("nGames");
                Range game = new Range(nGames[year]).Named("game");
                var whitePlayer = Variable.Observed(whiteData, year, game).Named("whitePlayer");
                var blackPlayer = Variable.Observed(blackData, year, game).Named("blackPlayer");
                var outcome = Variable.Observed(outcomeData, year, game).Named("outcome");
                using (Variable.ForEach(year))
                {
                    using (Variable.ForEach(game))
                    {
                        var w = whitePlayer[year][game].Named("w");
                        var b = blackPlayer[year][game].Named("b");
                        Variable<double> white_performance = Variable.GaussianFromMeanAndPrecision(skill[year][w], performancePrecision).Named("white_performance");
                        Variable<double> black_performance = Variable.GaussianFromMeanAndPrecision(skill[year][b], performancePrecision).Named("black_performance");
                        Variable<bool> white_delta = Variable.Bernoulli(Variable.Logistic(white_performance - black_performance)).Named("white_delta");
                        using (Variable.Case(outcome[year][game], 0))
                        { // black wins
                            Variable.ConstrainFalse(white_delta);
                        }
                        using (Variable.Case(outcome[year][game], 1))
                        { // white wins
                            Variable.ConstrainTrue(white_delta);
                        }
                    }
                }
                //year.AddAttribute(new Sequential());   // helps inference converge faster
    
                engine.NumberOfIterations = 50;
                var skillPost = engine.Infer<Gaussian[][]>(skill);
    
                // compare estimates to the true values
                if (inferParameters)
                {
                    Console.WriteLine("performancePrecision = {0} (truth = {1})", engine.Infer<Gamma>(performancePrecision).GetMean(), parameters.performancePrecision);
                    Console.WriteLine("skillChangePrecision = {0} (truth = {1})", engine.Infer<Gamma>(skillChangePrecision).GetMean(), parameters.skillChangePrecision);
                }
                int countPrinted = 0;
                for (int y = 0; y < nYears; y++)
                {
                    for (int p = 0; p < nPlayers; p++)
                    {
                        if (y >= firstYear.ObservedValue[p])
                        {
                            if (++countPrinted > 10)
                                break;
                            Console.WriteLine("skill[{0}][{1}] = {2} (truth = {3:g4})", y, p, skillPost[y][p], parameters.skill[y][p]);
                        }
                    }
                }
                Console.ReadKey();  // keep the console window open without spinning the CPU
            }
    
            public class Parameters
            {
                public double performancePrecision, skillChangePrecision;
                public double[][] skill;
            }
    
            public static void GenerateData(Parameters parameters, int[] firstYear, out int[][] whiteData, out int[][] blackData, out int[][] outcomeData)
            {
                int nYears = parameters.skill.Length;
                int nPlayers = parameters.skill[0].Length;
                int nGames = 100000;
                var whitePlayer = Util.ArrayInit(nYears, year => new List<int>());
                var blackPlayer = Util.ArrayInit(nYears, year => new List<int>());
                var outcomes = Util.ArrayInit(nYears, year => new List<int>());
                for (int game = 0; game < nGames; game++)
                {
                    while (true)
                    {
                        int w = Rand.Int(nPlayers);
                        int b = Rand.Int(nPlayers);
                        if (w == b)
                            continue;
                        int minYear = Math.Max(firstYear[w], firstYear[b]);
                        int year = Rand.Int(minYear, nYears);
                        double performance_diff = Gaussian.Sample(parameters.skill[year][w], parameters.performancePrecision)
                            - Gaussian.Sample(parameters.skill[year][b], parameters.performancePrecision);
                        bool white_delta = Bernoulli.Sample(MMath.Logistic(performance_diff));
                        int outcome;
                        if(white_delta == true)
                        {
                            outcome = 1;  // white wins
                        }
                        else
                        {
                            outcome = 0;  // black wins
                        }
                        whitePlayer[year].Add(w);
                        blackPlayer[year].Add(b);
                        outcomes[year].Add(outcome);
                        break;
                    }
                }
                whiteData = Util.ArrayInit(nYears, year => whitePlayer[year].ToArray());
                blackData = Util.ArrayInit(nYears, year => blackPlayer[year].ToArray());
                outcomeData = Util.ArrayInit(nYears, year => outcomes[year].ToArray());
            }
        }
    }

    Even though there are many matches in this simulation, the parameter estimate for the skillChangePrecision variable is significantly underestimated: a model estimate of 23.1 compared to a true value of 177.8.

    I've tried to recreate this effect with an even simpler model (below), but I get errors after model compilation which I'm not sure how to fix:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using MicrosoftResearch.Infer;
    using MicrosoftResearch.Infer.Models;
    using MicrosoftResearch.Infer.Utils;
    using MicrosoftResearch.Infer.Distributions;
    using MicrosoftResearch.Infer.Maths;
    using MicrosoftResearch.Infer.Factors;
    
    namespace ChessAnalysis
    {
        class Program
        {
            static void Main(string[] args)
            {
                InferenceEngine engine = new InferenceEngine(new VariationalMessagePassing());
                engine.Compiler.UseSerialSchedules = false;
                engine.Compiler.UseParallelForLoops = true;
    
                int nPlayers = 10;
                int nYears = 10;
                Rand.Restart(1);
    
                var skillPrior = new Gaussian(0, 2 * 2);
                var skillChangePrecisionPrior = Gamma.FromShapeAndRate(2, 2 * 0.1 * 0.1);
    
                var skillChangePrecision = Variable.Random(skillChangePrecisionPrior).Named("skillChangePrecision");
    
                Range player = new Range(nPlayers).Named("player");
                Range year = new Range(nYears).Named("year");
                VariableArray<int> firstYear = Variable.Array<int>(player).Named("firstYear");
                var skill = Variable.Array(Variable.Array<double>(player), year).Named("skill");
    
                using (var yearBlock = Variable.ForEach(year))
                {
                    var y = yearBlock.Index;
                    using (Variable.If(y == 0))
                    {
                        skill[year][player] = Variable.Random(skillPrior).ForEach(player);
                    }
                    using (Variable.If(y > 0))
                    {
                        using (Variable.ForEach(player))
                        {
                            Variable<bool> isFirstYear = (firstYear[player] >= y).Named("isFirstYear");
                            using (Variable.If(isFirstYear))
                            {
                                skill[year][player] = Variable.Random(skillPrior);
                            }
                            using (Variable.IfNot(isFirstYear))
                            {
                                skill[year][player] = Variable.GaussianFromMeanAndPrecision(skill[y - 1][player], skillChangePrecision);
                            }
                        }
                    }
                }
    
                // Sample parameter values according to the above model
                firstYear.ObservedValue = Util.ArrayInit(nPlayers, i => Rand.Int(nYears));
                Parameters parameters = new Parameters();
                parameters.skillChangePrecision = skillChangePrecisionPrior.Sample();
                parameters.skill = Util.ArrayInit(nYears, y => Util.ArrayInit(nPlayers, i => skillPrior.Sample()));
                for (int y = 0; y < nYears; y++)
                {
                    for (int i = 0; i < nPlayers; i++)
                    {
                        if (y > firstYear.ObservedValue[i])
                        {
                            parameters.skill[y][i] = Gaussian.Sample(parameters.skill[y - 1][i], parameters.skillChangePrecision);
                        }
                    }
                }
    
                // Sample game outcomes
                int[][] whiteData, blackData, outcomeData;
                GenerateData(parameters, firstYear.ObservedValue, out whiteData, out blackData, out outcomeData);
    
                bool inferParameters = true;  // make this true to infer additional parameters
                if (!inferParameters)
                {
                    // fix the true parameters
                    skillChangePrecision.ObservedValue = parameters.skillChangePrecision;
                }
    
                // Learn the skills from the data
                int[] nGamesData = Util.ArrayInit(nYears, y => outcomeData[y].Length);
                var nGames = Variable.Observed(nGamesData, year).Named("nGames");
                Range game = new Range(nGames[year]).Named("game");
                var whitePlayer = Variable.Observed(whiteData, year, game).Named("whitePlayer");
                var blackPlayer = Variable.Observed(blackData, year, game).Named("blackPlayer");
                var outcome = Variable.Observed(outcomeData, year, game).Named("outcome");
                using (Variable.ForEach(year))
                {
                    using (Variable.ForEach(game))
                    {
                        var w = whitePlayer[year][game].Named("w");
                        var b = blackPlayer[year][game].Named("b");
                        Variable<double> skill_difference = (skill[year][w] - skill[year][b]).Named("skill_difference");
                        Variable<double> chance_of_winning = Variable.Logistic(skill_difference).Named("chance_of_winning");
                        Variable<bool> white_delta = Variable.Bernoulli(chance_of_winning).Named("white_delta");
                        using (Variable.Case(outcome[year][game], 0))
                        { // black wins
                            Variable.ConstrainFalse(white_delta);
                        }
                        using (Variable.Case(outcome[year][game], 1))
                        { // white wins
                            Variable.ConstrainTrue(white_delta);
                        }
                    }
                }
                //year.AddAttribute(new Sequential());   // helps inference converge faster
    
                engine.NumberOfIterations = 50;
                var skillPost = engine.Infer<Gaussian[][]>(skill);
    
                // compare estimates to the true values
                if (inferParameters)
                {
                    Console.WriteLine("skillChangePrecision = {0} (truth = {1})", engine.Infer<Gamma>(skillChangePrecision).GetMean(), parameters.skillChangePrecision);
                }
                int countPrinted = 0;
                for (int y = 0; y < nYears; y++)
                {
                    for (int p = 0; p < nPlayers; p++)
                    {
                        if (y >= firstYear.ObservedValue[p])
                        {
                            if (++countPrinted > 10)
                                break;
                            Console.WriteLine("skill[{0}][{1}] = {2} (truth = {3:g4})", y, p, skillPost[y][p], parameters.skill[y][p]);
                        }
                    }
                }
                Console.ReadKey();  // keep the console window open without spinning the CPU
            }
    
            public class Parameters
            {
                public double skillChangePrecision;
                public double[][] skill;
            }
    
            public static void GenerateData(Parameters parameters, int[] firstYear, out int[][] whiteData, out int[][] blackData, out int[][] outcomeData)
            {
                int nYears = parameters.skill.Length;
                int nPlayers = parameters.skill[0].Length;
                int nGames = 100000;
                var whitePlayer = Util.ArrayInit(nYears, year => new List<int>());
                var blackPlayer = Util.ArrayInit(nYears, year => new List<int>());
                var outcomes = Util.ArrayInit(nYears, year => new List<int>());
                for (int game = 0; game < nGames; game++)
                {
                    while (true)
                    {
                        int w = Rand.Int(nPlayers);
                        int b = Rand.Int(nPlayers);
                        if (w == b)
                            continue;
                        int minYear = Math.Max(firstYear[w], firstYear[b]);
                        int year = Rand.Int(minYear, nYears);
                        double performance_diff = parameters.skill[year][w] - parameters.skill[year][b];
                        bool white_delta = Bernoulli.Sample(MMath.Logistic(performance_diff));
                        int outcome;
                        if (white_delta == true)
                        {
                            outcome = 1;  // white wins
                        }
                        else
                        {
                            outcome = 0;  // black wins
                        }
                        whitePlayer[year].Add(w);
                        blackPlayer[year].Add(b);
                        outcomes[year].Add(outcome);
                        break;
                    }
                }
                whiteData = Util.ArrayInit(nYears, year => whitePlayer[year].ToArray());
                blackData = Util.ArrayInit(nYears, year => blackPlayer[year].ToArray());
                outcomeData = Util.ArrayInit(nYears, year => outcomes[year].ToArray());
            }
        }
    }

    Wednesday, January 7, 2015 6:58 PM
  • If I fix performancePrecision to the true value, then skillChangePrecision is estimated correctly.  So I think this is just an ambiguity in trying to learn these two parameters at once.
    • Marked as answer by MarkG87 Monday, January 12, 2015 6:46 PM
    Monday, January 12, 2015 3:09 PM
    Owner
  • Tom,

    I've tried to recreate this by observing the performancePrecision value; however, I don't then get correct skillChangePrecision estimates (an estimate of 19 vs a truth of 178). What have I done differently from you in the code below?

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using MicrosoftResearch.Infer;
    using MicrosoftResearch.Infer.Models;
    using MicrosoftResearch.Infer.Utils;
    using MicrosoftResearch.Infer.Distributions;
    using MicrosoftResearch.Infer.Maths;
    using MicrosoftResearch.Infer.Factors;
    
    namespace ChessAnalysis
    {
        class Program
        {
            static void Main(string[] args)
            {
                InferenceEngine engine = new InferenceEngine(new VariationalMessagePassing());
                engine.Compiler.UseSerialSchedules = false;
                engine.Compiler.UseParallelForLoops = true;
    
                int nPlayers = 10;
                int nYears = 10;
                Rand.Restart(1);
    
                var skillPrior = new Gaussian(0, 2 * 2);
                var performancePrecisionPrior = Gamma.FromShapeAndRate(2, 2 * 1 * 1);
                var skillChangePrecisionPrior = Gamma.FromShapeAndRate(2, 2 * 0.1 * 0.1);
    
                var performancePrecision = Variable.Random(performancePrecisionPrior).Named("performancePrecision");
                var skillChangePrecision = Variable.Random(skillChangePrecisionPrior).Named("skillChangePrecision");
    
                Range player = new Range(nPlayers).Named("player");
                Range year = new Range(nYears).Named("year");
                VariableArray<int> firstYear = Variable.Array<int>(player).Named("firstYear");
                var skill = Variable.Array(Variable.Array<double>(player), year).Named("skill");
    
                using (var yearBlock = Variable.ForEach(year))
                {
                    var y = yearBlock.Index;
                    using (Variable.If(y == 0))
                    {
                        skill[year][player] = Variable.Random(skillPrior).ForEach(player);
                    }
                    using (Variable.If(y > 0))
                    {
                        using (Variable.ForEach(player))
                        {
                            Variable<bool> isFirstYear = (firstYear[player] >= y).Named("isFirstYear");
                            using (Variable.If(isFirstYear))
                            {
                                skill[year][player] = Variable.Random(skillPrior);
                            }
                            using (Variable.IfNot(isFirstYear))
                            {
                                skill[year][player] = Variable.GaussianFromMeanAndPrecision(skill[y - 1][player], skillChangePrecision);
                            }
                        }
                    }
                }
    
                // Sample parameter values according to the above model
                firstYear.ObservedValue = Util.ArrayInit(nPlayers, i => Rand.Int(nYears));
                Parameters parameters = new Parameters();
                parameters.performancePrecision = performancePrecisionPrior.Sample();
                parameters.skillChangePrecision = skillChangePrecisionPrior.Sample();
                parameters.skill = Util.ArrayInit(nYears, y => Util.ArrayInit(nPlayers, i => skillPrior.Sample()));
                for (int y = 0; y < nYears; y++)
                {
                    for (int i = 0; i < nPlayers; i++)
                    {
                        if (y > firstYear.ObservedValue[i])
                        {
                            parameters.skill[y][i] = Gaussian.Sample(parameters.skill[y - 1][i], parameters.skillChangePrecision);
                        }
                    }
                }
    
                // Sample game outcomes
                int[][] whiteData, blackData, outcomeData;
                GenerateData(parameters, firstYear.ObservedValue, out whiteData, out blackData, out outcomeData);
    
                bool inferParameters = false;  // make this true to infer additional parameters
                if (!inferParameters)
                {
                    // fix the true parameters
                    performancePrecision.ObservedValue = parameters.performancePrecision;
                    //skillChangePrecision.ObservedValue = parameters.skillChangePrecision;
                }
    
                // Learn the skills from the data
                int[] nGamesData = Util.ArrayInit(nYears, y => outcomeData[y].Length);
                var nGames = Variable.Observed(nGamesData, year).Named("nGames");
                Range game = new Range(nGames[year]).Named("game");
                var whitePlayer = Variable.Observed(whiteData, year, game).Named("whitePlayer");
                var blackPlayer = Variable.Observed(blackData, year, game).Named("blackPlayer");
                var outcome = Variable.Observed(outcomeData, year, game).Named("outcome");
                using (Variable.ForEach(year))
                {
                    using (Variable.ForEach(game))
                    {
                        var w = whitePlayer[year][game].Named("w");
                        var b = blackPlayer[year][game].Named("b");
                        Variable<double> white_performance = Variable.GaussianFromMeanAndPrecision(skill[year][w], performancePrecision).Named("white_performance");
                        Variable<double> black_performance = Variable.GaussianFromMeanAndPrecision(skill[year][b], performancePrecision).Named("black_performance");
                        Variable<bool> white_delta = Variable.Bernoulli(Variable.Logistic(white_performance - black_performance)).Named("white_delta");
                        using (Variable.Case(outcome[year][game], 0))
                        { // black wins
                            Variable.ConstrainFalse(white_delta);
                        }
                        using (Variable.Case(outcome[year][game], 1))
                        { // white wins
                            Variable.ConstrainTrue(white_delta);
                        }
                    }
                }
                //year.AddAttribute(new Sequential());   // helps inference converge faster
    
                engine.NumberOfIterations = 50;
                var skillPost = engine.Infer<Gaussian[][]>(skill);
    
                // compare estimates to the true values
                //if (inferParameters)
                //{
                    Console.WriteLine("performancePrecision = {0} (truth = {1})", engine.Infer<Gamma>(performancePrecision).GetMean(), parameters.performancePrecision);
                    Console.WriteLine("skillChangePrecision = {0} (truth = {1})", engine.Infer<Gamma>(skillChangePrecision).GetMean(), parameters.skillChangePrecision);
                //}
                int countPrinted = 0;
                for (int y = 0; y < nYears; y++)
                {
                    for (int p = 0; p < nPlayers; p++)
                    {
                        if (y >= firstYear.ObservedValue[p])
                        {
                            if (++countPrinted > 10)
                                break;
                            Console.WriteLine("skill[{0}][{1}] = {2} (truth = {3:g4})", y, p, skillPost[y][p], parameters.skill[y][p]);
                        }
                    }
                }
                Console.ReadKey();  // keep the console window open without spinning the CPU
            }
    
            public class Parameters
            {
                public double performancePrecision, skillChangePrecision;
                public double[][] skill;
            }
    
            public static void GenerateData(Parameters parameters, int[] firstYear, out int[][] whiteData, out int[][] blackData, out int[][] outcomeData)
            {
                int nYears = parameters.skill.Length;
                int nPlayers = parameters.skill[0].Length;
                int nGames = 100000;
                var whitePlayer = Util.ArrayInit(nYears, year => new List<int>());
                var blackPlayer = Util.ArrayInit(nYears, year => new List<int>());
                var outcomes = Util.ArrayInit(nYears, year => new List<int>());
                for (int game = 0; game < nGames; game++)
                {
                    while (true)
                    {
                        int w = Rand.Int(nPlayers);
                        int b = Rand.Int(nPlayers);
                        if (w == b)
                            continue;
                        int minYear = Math.Max(firstYear[w], firstYear[b]);
                        int year = Rand.Int(minYear, nYears);
                        double performance_diff = Gaussian.Sample(parameters.skill[year][w], parameters.performancePrecision)
                            - Gaussian.Sample(parameters.skill[year][b], parameters.performancePrecision);
                        bool white_delta = Bernoulli.Sample(MMath.Logistic(performance_diff));
                        int outcome;
                        if(white_delta == true)
                        {
                            outcome = 1;  // white wins
                        }
                        else
                        {
                            outcome = 0;  // black wins
                        }
                        whitePlayer[year].Add(w);
                        blackPlayer[year].Add(b);
                        outcomes[year].Add(outcome);
                        break;
                    }
                }
                whiteData = Util.ArrayInit(nYears, year => whitePlayer[year].ToArray());
                blackData = Util.ArrayInit(nYears, year => blackPlayer[year].ToArray());
                outcomeData = Util.ArrayInit(nYears, year => outcomes[year].ToArray());
            }
        }
    }


    • Edited by MarkG87 Monday, January 12, 2015 3:23 PM Error in naming variable performancePrecision
    Monday, January 12, 2015 3:20 PM
  • In the simplified model, if I remove the setting for UseSerialSchedules, then I get:

    skillChangePrecision = 135.543264891683 (truth = 199.564405970014)
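    In code terms this is a one-line change to the simplified model above: delete the override (or set it explicitly) so the engine uses its default serial schedule. My understanding, stated as an assumption, is that a serial schedule sweeps the year-indexed skill chain in order, so messages propagate along the whole chain within each iteration, which matters for time-series models like this one.

```csharp
InferenceEngine engine = new InferenceEngine(new VariationalMessagePassing());
// engine.Compiler.UseSerialSchedules = false;  // remove this override...
engine.Compiler.UseSerialSchedules = true;      // ...or set the default explicitly
```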

    • Marked as answer by MarkG87 Monday, January 12, 2015 6:46 PM
    Monday, January 12, 2015 5:53 PM
    Owner
  • Thanks Tom, I get the same result now.

    With regard to the SerialSchedules setting, I read here (http://research.microsoft.com/en-us/um/cambridge/projects/infernet/docs/Inference%20engine%20settings.aspx) that it only applied to EP. Does it now affect the VMP algorithm too? Also, does this mean that the default value is now 'true'?

    Monday, January 12, 2015 7:55 PM
  • It looks like I forgot to update that part of the documentation.  Thanks for pointing that out.  UseSerialSchedules is now used by all algorithms and the default is true.
    Monday, January 12, 2015 8:27 PM
    Owner
  • Thanks once again Tom, glad I could be of some small help!
    Monday, January 12, 2015 10:22 PM