XOR inference (Migrated from community.research.microsoft.com)

    Question

  • redstr posted on 12-06-2009 1:36 PM

    Hello All,

    I tried a simple XOR setup with observed result, as follows:

                Variable<bool> x = Variable.Bernoulli(0.5), y = Variable.Bernoulli(0.5);
                var xt = x & (~y);
                var yt = (~x) & y;
                var z = yt | xt;
                z.ObservedValue = true;
                InferenceEngine ie = new InferenceEngine();
                Console.WriteLine("x={0}", ie.Infer(x));

    I get:

    x=Bernoulli(0)

    Which is no good, since it really should be Bernoulli(0.5). The Gibbs sampler can't infer this either; it complains that the model has zero probability. I suppose I'm doing something wrong, but this still points to some hidden, non-obvious limitations on what can be done. Could somebody explain why I get this result and what the correct setup is for such discrete problems?

    Also, is it possible to get the value of a deterministic variable? For example, if I specify

          var t = ~z;

    in my example, I can't simply infer it, and there is no obvious way to retrieve its value.

    BTW, OpenID login to the forum doesn't work for me. And how do you post colored text from Visual Studio here?

    Thanks!

All Replies

  • minka replied on 12-07-2009 3:31 PM

    Yes, the behavior of approximate inference can often be non-obvious!  In this case, the Bernoulli(0) comes from the way you have defined z.  Infer.NET performs inference by constructing a factor graph from the exact set of operations used in your program.  In your case, the operations are 'not' (twice), 'and' (twice) and 'or'.  Because x is used twice in defining z, the resulting factor graph has a loop.  Also, the loop consists entirely of deterministic operations.  This is a particularly bad case for all of the inference algorithms in Infer.NET.  VMP and Gibbs sampling cannot run at all in this situation and throw errors (admittedly, the error from Gibbs could be more helpful).  Expectation Propagation (the default algorithm) can be executed in this situation, but the result is far from the exact marginal.
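
    For reference, the exact marginal is easy to work out by hand: with x and y independent Bernoulli(0.5) and z = x XOR y observed to be true,

        P(x=true | z=true) = P(x=true, y=false) / [P(x=true, y=false) + P(x=false, y=true)]
                           = 0.25 / 0.5
                           = 0.5

    i.e. Bernoulli(0.5), as you expected.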

    For this particular problem, there is a simple fix.  You can change your definition of z to involve fewer operations.  Here is an equivalent definition of z using one operation:

    z = (x != y);

    Using this definition gives the exact marginal.  So the moral of the story is to simplify your programs as much as possible: not only will they run faster, they will also give more accurate results.
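
    Putting it together, a complete version of the simplified model might look like the following (a minimal sketch: the class/method scaffolding and the using directives are assumptions on my part, since namespace names vary between Infer.NET releases):

        using System;
        using Microsoft.ML.Probabilistic.Models;  // layout of the current Infer.NET package; older releases use different namespaces

        class XorExample
        {
            static void Main()
            {
                // Two independent fair coins.
                Variable<bool> x = Variable.Bernoulli(0.5);
                Variable<bool> y = Variable.Bernoulli(0.5);

                // A single XOR factor instead of two 'not', two 'and' and one 'or',
                // so the factor graph contains no loop of deterministic operations.
                Variable<bool> z = (x != y);
                z.ObservedValue = true;

                InferenceEngine ie = new InferenceEngine();
                Console.WriteLine("x={0}", ie.Infer(x));  // should print x=Bernoulli(0.5)
            }
        }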

    Your other question, about inferring t = ~z: this is not possible in the current release of Infer.NET.  Basically, Infer.NET assumes that you want to infer a random variable.  For a deterministic variable you don't really need Infer.NET, since C# can already compute the value.  However, we could add this functionality in the future.
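
    In the meantime, you can get at these values outside the model (a sketch continuing the code above; the names t, xPosterior and notX are just illustrative):

        // z is observed, so t = ~z is fully determined; ordinary C# computes it
        // directly (note that plain bools negate with '!' rather than '~').
        bool t = !z.ObservedValue;                 // false, since z.ObservedValue == true

        // For an unobserved variable such as x, the posterior of its negation is
        // just the complement of the inferred Bernoulli
        // (the Bernoulli class lives in the *.Distributions namespace).
        Bernoulli xPosterior = ie.Infer<Bernoulli>(x);
        Bernoulli notX = new Bernoulli(1 - xPosterior.GetProbTrue());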

  • redstr replied on 12-07-2009 4:29 PM

    Thank you, that was helpful! Your suggestion worked, except that the Gibbs Sampling algorithm still gives me Bernoulli(0) or Bernoulli(1) depending on its mood.

    So the loop with deterministic variables was the root of the problem in this case. Is there a more or less systematic description of what kinds of models may cause problems for these approximate inference algorithms?

    Inferring deterministic variables can make sense in a scenario where observations may be assigned to different variables: some of the variables then become deterministic and thus "hidden". That was the case with my XOR problem, where I put a statement at the end to print the values of all variables.
