Threading Infer.NET models

  • Question

Dear Infer.NET,

    I have an Infer.NET model which I wish to sample from rapidly under different observed values. The abbreviated model code is essentially:

    Variable<double> x1 = Variable.GaussianFromMeanAndPrecision(mean1, prec1);
    Variable<double> x2 = Variable.GaussianFromMeanAndPrecision(x2, prec2);
    Variable<bool> xBool = x2 > 0;

    What I wish to do is sample xBool repeatedly, given a particular sample of x1, for lots of different samples of x1. To do this in parallel, I created an array of my Infer.NET model class, one element per thread. Each thread then iterates: first it samples a Gaussian built with mean = mean1 and precision = prec1 and assigns the result to x1.ObservedValue, then it infers the Bernoulli distribution for xBool and samples from that to generate my xBool samples. The (abbreviated) code looks something like:

    foreach(thread)
    {
       foreach(x1_trial)
       {
          Model[thread].x1.ObservedValue = x1Gaussian.Sample();
          Bernoulli xBernoulli = Model[thread].engine.Infer<Bernoulli>(Model[thread].xBool);
          foreach(xBool_trial)
          {
             bool sample = xBernoulli.Sample();
             // Now use sample in other simulation...
          }
       }
    }
    

    N.B. I had to create the models using an incremented counter to generate a different .Named() attribute for each variable, otherwise at compile time I got "there's already a variable named vBool48" errors.
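    For illustration, the naming workaround looks roughly like this (a sketch; i is the per-model counter):

    Variable<double> x1 = Variable.GaussianFromMeanAndPrecision(mean1, prec1)
                                  .Named("x1_" + i); // unique name per model copy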

    Whilst this method works, when I use multiple threads it actually runs slower per x1_trial. A concurrency performance analysis suggests the lock in the ObservedValueChanged method is why parallelisation doesn't improve performance. Is there a way I can modify my method so it scales in parallel, or is there a better approach to the problem I should try?


    Thursday, May 7, 2015 5:25 PM

Answers

  • (I assume x2's mean is x1, not x2)

    For a simple model like this you can probably get away with not using the compiler at all. An example of similar sampling is shown in the "Generating data from the model" example here.
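    For this particular model that boils down to a few lines using the distribution classes directly (a sketch, assuming the 2015-era MicrosoftResearch.Infer namespaces; note that P(x2 > 0 | x1) is just a Gaussian CDF, so x2 itself never needs to be sampled):

    using System;
    using MicrosoftResearch.Infer.Distributions;
    using MicrosoftResearch.Infer.Maths;

    // x2 | x1 ~ N(x1, 1/prec2), so P(x2 > 0 | x1) = NormalCdf(x1 * sqrt(prec2)).
    Gaussian x1Gaussian = Gaussian.FromMeanAndPrecision(mean1, prec1);
    double x1Sample = x1Gaussian.Sample();
    double pTrue = MMath.NormalCdf(x1Sample * Math.Sqrt(prec2));
    Bernoulli xBernoulli = new Bernoulli(pTrue);
    bool xBoolSample = xBernoulli.Sample();

    Nothing here touches the inference engine, so there is no ObservedValue lock to contend on (though note Rand is a static sampler, so take care with random number generation across threads).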

    Otherwise, I think what you can do is work with multiple instances of the generated algorithm (one per thread/task). This is explained here, and our two learners (Matchbox and BPM) also do it.
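    A sketch of that pattern (not exact learner code; it assumes the generated algorithm class has a public parameterless constructor, and that the string names passed in match the .Named() names in the model):

    using System;
    using System.Threading.Tasks;
    using MicrosoftResearch.Infer;
    using MicrosoftResearch.Infer.Distributions;

    // Compile once; each worker then gets its own instance of the generated
    // algorithm, so there is no contention on the ObservedValue lock.
    IGeneratedAlgorithm prototype = engine.GetCompiledInferenceAlgorithm(xBool);

    // Draw the x1 samples up front on one thread (Rand is a static sampler).
    double[] x1Samples = new double[numTrials];
    for (int i = 0; i < numTrials; i++) x1Samples[i] = x1Gaussian.Sample();

    Parallel.For(0, numTrials, i =>
    {
        var alg = (IGeneratedAlgorithm)Activator.CreateInstance(prototype.GetType());
        alg.SetObservedValue("x1", x1Samples[i]);   // name must match .Named("x1")
        alg.Execute(engine.NumberOfIterations);
        Bernoulli xBernoulli = alg.Marginal<Bernoulli>("xBool");
        // ... draw the xBool samples from xBernoulli as before ...
    });

    (In practice you would create one instance per thread rather than per trial, e.g. via Parallel.For's thread-local state overload.)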

    -Y-

    • Marked as answer by MarkG87 Thursday, May 7, 2015 8:31 PM
    Thursday, May 7, 2015 8:05 PM

All replies

  • Yes, x2's mean was meant to be x1. Thanks again, Yordan, that's exactly what I was looking for!
    Thursday, May 7, 2015 8:32 PM
  • Yordan,

    I have one question on how to change the pseudo-code I posted before to use the directly compiled model class. Where before in the code I called:

    Bernoulli xBernoulli = Model[thread].engine.Infer<Bernoulli>(Model[thread].xBool);

    In place of Infer(), should I call CompiledModel.Execute() each time I update the observed variables before calling Marginal(), or do I only need to do the Execute() once? Do I need to do a Reset() after every observed value change? When I ran the code before using Infer(), it didn't look like the model was doing any iterations at all (there were no '.....|.....' writes to the console), so I'm a little confused as to what's going on under the hood when I change the ObservedValues and call Infer().

    Thanks very much again.

    Sunday, May 10, 2015 11:57 AM
  • Yordan,

    Apologies - I think I have the answers to all my questions in http://research.microsoft.com/en-us/um/cambridge/projects/infernet/docs/Controlling%20how%20inference%20is%20performed.aspx - problem solved! Thanks very much for all your help again.
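    For anyone who finds this thread later, the pattern from that page is roughly (a sketch):

    // Compile once, then re-Execute after every observed value change;
    // Marginal() then reads off the refreshed posterior.
    IGeneratedAlgorithm ca = engine.GetCompiledInferenceAlgorithm(xBool);
    foreach (double x1Sample in x1Samples)
    {
        ca.SetObservedValue("x1", x1Sample);
        ca.Execute(engine.NumberOfIterations);  // needed after each change
        Bernoulli xBernoulli = ca.Marginal<Bernoulli>("xBool");
        // ... use xBernoulli ...
    }

    As I read the page, Reset() just forces inference to restart from scratch; after an observed value change, the next Execute() recomputes whatever depends on it, so a Reset() isn't normally needed.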

    Sunday, May 10, 2015 1:02 PM