# How to model a continuous variable evaluated at discrete values? (Migrated from community.research.microsoft.com)

• ### Question

• rivuletish posted on 12-06-2010 8:44 AM

suppose the model is

p(x)p(y|x)

x is a Gaussian variable. In practice, however, x is evaluated only at discrete values, say discrete pixel positions.

y is the observed variable, say an image of an apple; p(y|x) would be the probability of seeing the apple at pixel location x.

I don't know the form of p(y|x). All I have is a classifier that evaluates the probability of seeing the apple at position x, that is, p(y|x) at discrete x.

How should this case be modeled? If x is modeled as a discrete variable, then for a large image the cardinality of x is large, and inference becomes computationally inefficient.

BTW, can I use Infer.NET to do MAP inference, or to select the most probable joint configuration?

Friday, June 3, 2011 6:09 PM


### All replies

• jwinn replied on 12-07-2010 11:26 AM

I'm not sure I quite understand your situation - do you have an existing classifier giving p(y|x) or are you trying to build one in Infer.NET?

If x is a discrete variable with large cardinality, you may be able to handle this efficiently in Infer.NET by using sparse messages.

For MAP inference, Infer.NET has very limited support for max product belief propagation - otherwise you can just pick the mode of the returned marginals. There is no support for finding the most probable joint configuration, although I can think of ways you could do it using Infer.NET as a subroutine. Alternatively, Gibbs sampling will return a (joint) sample, which may be sufficient for your application. Our philosophy is that it is generally preferable to do inference/sampling rather than optimisation, since the inherent uncertainty in the solution is retained.
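The distinction between picking marginal modes and finding the joint mode can be made concrete with a toy example (plain NumPy, not Infer.NET; the distribution here is illustrative):

```python
import numpy as np

# Toy illustration: the per-variable modes of the marginals need not
# form the most probable joint configuration.
# Joint distribution over two binary variables, P[x, y]:
P = np.array([[0.00, 0.45],
              [0.45, 0.10]])

marg_x = P.sum(axis=1)                                     # [0.45, 0.55]
marg_y = P.sum(axis=0)                                     # [0.45, 0.55]
modes_from_marginals = (marg_x.argmax(), marg_y.argmax())  # (1, 1)
joint_mode = np.unravel_index(P.argmax(), P.shape)         # (0, 1)

# The configuration (1, 1) chosen from the marginals has joint
# probability only 0.10, while the joint mode (0, 1) has 0.45.
```

This is why picking marginal modes is only a heuristic for the most probable joint configuration.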

Best

John W.

• rivuletish replied on 12-07-2010 8:19 PM

Sorry I didn't make myself clear.

My question comes from part-based models of objects. Let us assume the object consists of two related parts, A and B, whose positions are x and y, with y ~ Gaussian(x, Sigma). We want to recognize the two parts in an image of the object.

The model would be p(x) p(y|x) p(A|x) p(B|y). The factor graph is B --[p(B|y)]-- y --[Gau(y|x,Sigma)]-- x --[p(A|x)]-- A (p(x) is omitted from the graph, and [.] represents a factor).

Because modelling p(A|x) and p(B|y) is hard, we can use existing part classifiers to calculate them once the image is observed. These p(A|x) and p(B|y) are not exact, but they work reasonably well in practice.

So the message sent from B to y is known at discrete y, but Gau(y|x,Sigma) is a continuous function. How do I calculate the message from y to x in Infer.NET?
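For concreteness, the message being asked about can be written out numerically (a NumPy sketch outside Infer.NET; `m_B_to_y` stands in for the part classifier's output, and all names are illustrative):

```python
import numpy as np

# Sum-product message from y to x when y is discretised on a pixel grid
# and the pairwise factor is Gaussian(y; x, sigma^2), done the naive way.
N, sigma = 100, 3.0
pos = np.arange(N, dtype=float)

rng = np.random.default_rng(0)
m_B_to_y = rng.random(N)          # stand-in for classifier scores on y
m_B_to_y /= m_B_to_y.sum()

# Pairwise factor tabulated on the grid:
# F[i, j] = exp(-(pos_j - pos_i)^2 / (2 sigma^2))
F = np.exp(-0.5 * ((pos[None, :] - pos[:, None]) / sigma) ** 2)

# m_{y->x}(x_i) = sum_j F[i, j] * m_{B->y}(y_j)  -- O(N^2) overall
m_y_to_x = F @ m_B_to_y
m_y_to_x /= m_y_to_x.sum()
```

This tabulated-factor view is exactly the "giant discrete factor" approach, and its O(N^2) cost is what the more efficient methods below avoid.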

• jwinn replied on 12-09-2010 4:17 AM

Your question is partly an Infer.NET question and partly a research question.

The naive answer is that you can use a giant discrete factor that simply evaluates the Gaussian function for each discretised value of x and y. In Infer.NET, this could be implemented as a custom factor. However, such a factor would take O(N^2) time to evaluate a message, where N is the cardinality of x and y. More efficient methods for evaluating such factors have been proposed - see, for example:
Efficient Belief Propagation for Early Vision
Pedro F. Felzenszwalb and Daniel P. Huttenlocher
International Journal of Computer Vision, Vol. 70, No. 1, October 2006
which suggests distance transform methods for max product belief propagation and also mentions sum-product options.
There is example code implementing the max product methods in this paper in the supplied factor source code file Undirected.cs, as described on this page.
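To see what the distance-transform method computes, here is the same quantity written naively in O(N^2) (a NumPy sketch with illustrative names; the paper's lower-envelope algorithm evaluates it in O(N)):

```python
import numpy as np

# In negative-log space a Gaussian factor is quadratic, so the max-product
# message becomes a "min-convolution":
#   out[i] = min_j ( cost[j] + (i - j)^2 / (2 sigma^2) )
# Felzenszwalb & Huttenlocher's distance transform computes this in O(N).
N, sigma = 50, 2.0
rng = np.random.default_rng(2)
cost = rng.random(N)              # -log of the incoming message (illustrative)

i = np.arange(N, dtype=float)
out = (cost[None, :] + (i[:, None] - i[None, :]) ** 2 / (2 * sigma ** 2)).min(axis=1)
```

The j = i term guarantees out[i] <= cost[i], so the transformed costs lower-bound the inputs while smoothing them quadratically.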

Another alternative would be to transform your discretised representation into a Gaussian (e.g. by moment matching), but then you would lose desirable properties of the discrete distribution such as the ability to capture multiple modes.
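The moment-matching step, and the loss of multi-modality it causes, can be sketched as follows (plain NumPy, with an illustrative bimodal input):

```python
import numpy as np

# Moment matching: fit a Gaussian to a discretised distribution by
# matching its mean and variance. With a bimodal input, the fitted
# Gaussian places its mass between the modes.
pos = np.arange(100, dtype=float)
p = np.exp(-0.5 * ((pos - 20.0) / 3.0) ** 2) \
  + np.exp(-0.5 * ((pos - 70.0) / 3.0) ** 2)   # two well-separated modes
p /= p.sum()

mean = np.sum(pos * p)                 # ~45: between the modes
var = np.sum((pos - mean) ** 2 * p)    # ~634: far broader than either mode
```

The matched Gaussian N(~45, ~634) puts most of its density where the discretised distribution has almost none, which is exactly the drawback mentioned above.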

In general, I would say that finding a really good solution to this problem is still an open research question.

Best

John W.

• rivuletish replied on 12-11-2010 8:04 AM

Hi John,

Thank you very much for your reply. You mentioned that the discretised representation can be fitted with a Gaussian. Maybe a mixture of Gaussians would be better.

BTW, I read the following paper. In the case y ~ Gaussian(x, Sigma), "Message passing can be performed exhaustively and efficiently with convolutions." I am not sure what general conditions are required for the convolution trick to apply. Maybe Infer.NET can take this into account.

Ramanan, D. "Learning to Parse Images of Articulated Bodies." Neural Information Processing Systems (NIPS), Dec 2006.
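The condition for the trick is that the factor depend only on the difference y - x (translation invariance), which Gaussian(y; x, sigma^2) satisfies. A quick NumPy check that the O(N^2) message equals a 1-D convolution (names illustrative, kernel truncated):

```python
import numpy as np

# Convolution trick: for a translation-invariant factor, the sum-product
# message is a convolution of the incoming message with the factor's kernel.
N, sigma = 64, 2.0
pos = np.arange(N, dtype=float)
rng = np.random.default_rng(1)
m_in = rng.random(N)                 # incoming discretised message on y

# Naive O(N^2) message.
F = np.exp(-0.5 * ((pos[None, :] - pos[:, None]) / sigma) ** 2)
m_naive = F @ m_in

# Same message via convolution with a truncated Gaussian kernel:
# O(N K) for kernel width K, or O(N log N) with an FFT.
radius = int(5 * sigma)
kernel = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
m_conv = np.convolve(m_in, kernel, mode="same")
```

The two results agree up to the (negligible) truncated tail of the kernel; a non-translation-invariant factor, by contrast, would need the full matrix product.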

Have a nice weekend.

Kevin
