Using Variable.ConstrainPositive on a Gaussian to approximate a Gamma random variable

  • Question

  • Hi guys,

    In this thread Tom mentioned using a Gaussian distribution together with Variable.ConstrainPositive as a replacement for a Gamma distribution for data that was positively real-valued.

    Would it be possible to do something like this:

    Variable<double> x = Variable.GaussianFromMeanAndVariance(0, 1).Named("x");
    Variable.ConstrainPositive(x);
    
    InferenceEngine engine = new InferenceEngine();
                
    Console.WriteLine("Dist over positive x = " + engine.Infer<TruncatedGaussian>(x));
    Running this code right now produces the following error:

    What's the right way to use Variable.ConstrainPositive (as described above), and is it ever possible to infer a TruncatedGaussian from a constrained Gaussian? Or should we just assume that the inferred Gaussian can be approximated by a truncated Gaussian with a lower bound of 0, and moment-matched mean and variance?
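    To make the moment-matching idea above concrete, here is a sketch in plain C# (no Infer.NET dependency; the erf approximation is Abramowitz & Stegun 7.1.26, my addition, not anything from this thread) that computes the mean and variance of a Gaussian after truncating it below at 0, which are the standard quantities any such Gaussian-to-TruncatedGaussian conversion would rely on:

    ```csharp
    using System;
    using System.Globalization;

    public class TruncatedMoments
    {
        // Standard normal pdf.
        static double NormalPdf(double x) =>
            Math.Exp(-0.5 * x * x) / Math.Sqrt(2.0 * Math.PI);

        // Standard normal cdf via the Abramowitz & Stegun 7.1.26 erf
        // approximation (absolute error around 1.5e-7).
        static double NormalCdf(double x)
        {
            double z = Math.Abs(x) / Math.Sqrt(2.0);
            double t = 1.0 / (1.0 + 0.3275911 * z);
            double poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
                            - 0.284496736) * t + 0.254829592) * t;
            double erf = 1.0 - poly * Math.Exp(-z * z);
            return x >= 0 ? 0.5 * (1.0 + erf) : 0.5 * (1.0 - erf);
        }

        // Mean and variance of Gaussian(m, v) truncated below at 0,
        // using the standard truncated-normal formulas.
        public static (double Mean, double Variance) MomentsTruncatedAtZero(double m, double v)
        {
            double s = Math.Sqrt(v);
            double alpha = (0.0 - m) / s;                       // truncation point in standard units
            double lambda = NormalPdf(alpha) / (1.0 - NormalCdf(alpha)); // inverse Mills ratio
            double mean = m + s * lambda;
            double variance = v * (1.0 + alpha * lambda - lambda * lambda);
            return (mean, variance);
        }

        static void Main()
        {
            var (mean, variance) = MomentsTruncatedAtZero(0.0, 1.0);
            Console.WriteLine(string.Format(CultureInfo.InvariantCulture,
                "mean={0:F4}, variance={1:F4}", mean, variance));
        }
    }
    ```

    For Gaussian(0, 1) truncated at 0 the known closed-form answers are mean = sqrt(2/pi), roughly 0.7979, and variance = 1 - 2/pi, roughly 0.3634, which the sketch reproduces.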

    In using a constrained Gaussian instead of a Gamma for modeling purposes, where would we insert the Variable.ConstrainPositive: when the priors are being trained, or when trying to make predictions? Do the priors represent the means/precisions of constrained Gaussian distributions, or of unconstrained distributions that will then be constrained during inference?

    Thanks,

    Andrew

    Monday, January 12, 2015 5:38 AM

Answers

  • In this simple example, to infer a TruncatedGaussian you can either change the marginal prototype of x to be TruncatedGaussian:

    x.AddAttribute(new MarginalPrototype(TruncatedGaussian.Uniform()));

    or you can change x to be drawn from a TruncatedGaussian (thus setting its marginal prototype implicitly):

    var x = Variable.TruncatedGaussian(0, 1, double.NegativeInfinity, double.PositiveInfinity).Named("x");

    The approach that you suggest of converting the inferred Gaussian into a TruncatedGaussian would work, but Infer.NET doesn't provide any routines to do this.

    The ConstrainPositive constraint should be applied only once, during training.
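    Putting the marginal-prototype suggestion together with the code from the question gives a minimal end-to-end sketch. This is untested here and assumes the 2015-era MicrosoftResearch.Infer namespaces:

    ```csharp
    using System;
    using MicrosoftResearch.Infer;
    using MicrosoftResearch.Infer.Models;
    using MicrosoftResearch.Infer.Distributions;

    // Same model as in the question, but with the marginal prototype of x
    // changed to TruncatedGaussian so that Infer<TruncatedGaussian>(x) works.
    Variable<double> x = Variable.GaussianFromMeanAndVariance(0, 1).Named("x");
    Variable.ConstrainPositive(x);
    x.AddAttribute(new MarginalPrototype(TruncatedGaussian.Uniform()));

    InferenceEngine engine = new InferenceEngine();
    Console.WriteLine("Dist over positive x = " + engine.Infer<TruncatedGaussian>(x));
    ```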

    • Marked as answer by Andrew Mao Wednesday, January 14, 2015 9:17 PM
    Monday, January 12, 2015 2:12 PM
    Owner

All replies

  • Hi Tom,

    Thanks for the clarification. I'm still slightly confused about how to use ConstrainPositive in the way you described. I'll try and explain:

    Suppose we are learning a distribution over Gaussians in the canonical way, with a Gaussian prior on the mean and a Gamma prior on the precision. In the model we apply Variable.ConstrainPositive on the Gaussian variable, set an array of observed data to non-negative values, and infer a new Gaussian mean and Gamma precision posterior.

    • Does the Variable.ConstrainPositive do anything during training, given that all of our data is non-negative anyway?
    • What do the posteriors that we learned correspond to? Are they the mean and precision of a Gaussian that would correspond to the right distribution after being truncated at 0, or are they the mean and precision of a Gaussian that approximates a truncated Gaussian? (I know it wouldn't exactly be either, due to approximation, but is one closer than the other?)

    I'm guessing that your last comment above suggests that the former is true, not the latter, and that the non-truncated Gaussian inferred from the posterior would have the same mean as the truncated Gaussian that we're trying to learn.

    What would it mean to apply ConstrainPositive only during training? Does that mean we'll need to create a separate model for prediction that doesn't include the Variable.ConstrainPositive, and use the Gaussian and Gamma posterior distribution that we learned during training?

    Thanks again for your help.

    Monday, January 12, 2015 3:52 PM
  • I assume in your example that ConstrainPositive is being applied to the mean.  It does have an effect during training since it eliminates the possibility of the true mean being negative (which is possible when the data is non-negative).  The Gaussian posterior that you get approximates a truncated Gaussian.  The prediction model does need to be different since it must exclude the constraint.
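    A sketch of what that two-model setup might look like (illustrative only; it assumes the usual Infer.NET pattern of feeding learned posteriors back in via Variable.Random, and the variable names are mine):

    ```csharp
    // Training model: includes the positivity constraint on the mean.
    Variable<double> mean = Variable.GaussianFromMeanAndPrecision(0, 1).Named("mean");
    Variable<double> prec = Variable.GammaFromShapeAndScale(1, 1).Named("prec");
    Variable.ConstrainPositive(mean);
    // ... attach observed data drawn from GaussianFromMeanAndPrecision(mean, prec) ...

    InferenceEngine engine = new InferenceEngine();
    Gaussian meanPosterior = engine.Infer<Gaussian>(mean);
    Gamma precPosterior = engine.Infer<Gamma>(prec);

    // Prediction model: built from the learned posteriors, with NO constraint.
    Variable<double> predMean = Variable.Random(meanPosterior).Named("predMean");
    Variable<double> predPrec = Variable.Random(precPosterior).Named("predPrec");
    Variable<double> xNew =
        Variable.GaussianFromMeanAndPrecision(predMean, predPrec).Named("xNew");
    Gaussian prediction = engine.Infer<Gaussian>(xNew);
    ```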
    Monday, January 12, 2015 5:59 PM
    Owner
  • Hi Tom,

    I had actually thought that Variable.ConstrainPositive was being applied to the variable from Gaussian.FromMeanAndPrecision(meanPrior, precisionPrior). I didn't realize that you were applying it to the mean. Is this what you meant by the following?

    The prediction model does need to be different since it must exclude the constraint.

    Applying it to the mean seems like it would only constrain the mean to be positive, not actual realizations of the variable.

    In either case, I'd be predicting the mean of the posterior distribution. Are you saying that if we add the constraint again in prediction, we'd then be truncating a Gaussian that is already approximating a truncated Gaussian, which would further inflate the mean prediction?

    I'm sorry if I'm being unclear. I'll use some code to illustrate. Suppose I'm trying to train a simple model like the following, where I have some non-negative data and I'd like to learn a posterior distribution. I'd like to use the mean and precision distributions I learned to make some other inferences. Are you suggesting to use Variable.ConstrainPositive in the first way, or the second way?

    Variable<double> mean = Variable.GaussianFromMeanAndPrecision(0, 1);
    Variable<double> prec = Variable.GammaFromShapeAndScale(1, 1);
    
    Range k = new Range(100);
    
    VariableArray<double> x = Variable.Array<double>(k).Named("x");
    x[k] = Variable.GaussianFromMeanAndPrecision(mean, prec).ForEach(k);
    
    // First way: constrain the mean of the Gaussian.
    Variable.ConstrainPositive(mean);
    
    // Second way: constrain each realization of the variable.
    using (Variable.ForEach(k))
    {
        Variable.ConstrainPositive(x[k]);
    }
                
    // Observe some values
    x.ObservedValue = Util.ArrayInit(k.SizeAsInt, i => 42d);
    
    InferenceEngine engine = new InferenceEngine();
                            
    Console.WriteLine(engine.Infer<Gaussian>(mean));
    Console.WriteLine(engine.Infer<Gamma>(prec));

    Thanks,

    Andrew


    Monday, January 12, 2015 6:25 PM
  • In the code you gave, applying ConstrainPositive to x[k] has no effect since x is observed to non-negative values.  So it reduces to only constraining the mean as in my previous post.
    Tuesday, January 13, 2015 2:20 PM
    Owner