In the CyclingTime3 example in Infer.NET 101, a Gaussian mixture model is set up and data is passed through it to infer the parameters. The posteriors on the component means come back as Gaussians. Mathematically, however, the posterior
is not Gaussian: it should be bimodal, converging (as data accumulate) to equal-weight point masses at the means of the Gaussians that generated the data.
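To spell out the symmetry argument (my notation; a sketch for the two-component case with known, equal variances and equal mixing weights):

```latex
% Joint posterior over the two component means \mu_1, \mu_2:
\[
  p(\mu_1,\mu_2 \mid x_{1:n}) \;\propto\;
  p(\mu_1)\,p(\mu_2)\,
  \prod_{i=1}^{n}\Bigl[\tfrac12\,\mathcal{N}(x_i \mid \mu_1,\sigma^2)
                     + \tfrac12\,\mathcal{N}(x_i \mid \mu_2,\sigma^2)\Bigr].
\]
% With identical priors, the right-hand side is invariant under swapping
% \mu_1 <-> \mu_2, so the marginal posterior of each mean is the same
% symmetric mixture; as n grows it piles half its mass near each of the
% two generating means -- bimodal, not Gaussian.
```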
One might hope that, since the inference engine is called as InferenceEngine.Infer<Gaussian[]>(AverageTime), the engine is simply being forced to shoehorn the distribution into a Gaussian. However, the Mixture
of Gaussians example in the tutorials and examples section has no such explicit type argument, yet it too seemingly returns a Gaussian posterior. And experimentally, if I remove the type argument from the CyclingTime3 example, the result
is still Gaussian.
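For concreteness, here is roughly the shape of the model I mean (a sketch, not the actual CyclingTime3 source: the names, priors, data, and the choice of VMP are mine, and I am using the open-source Microsoft.ML.Probabilistic namespaces):

```csharp
using Microsoft.ML.Probabilistic.Algorithms;
using Microsoft.ML.Probabilistic.Distributions;
using Microsoft.ML.Probabilistic.Models;

// Two-component Gaussian mixture over travel times.
Range k = new Range(2);   // mixture components
Range n = new Range(10);  // data points

// Identical priors on both component means -- the fully symmetric case.
VariableArray<double> averageTime = Variable.Array<double>(k);
averageTime[k] = Variable.GaussianFromMeanAndPrecision(15, 0.01).ForEach(k);

// Latent component assignments and observed travel times.
VariableArray<int> z = Variable.Array<int>(n);
VariableArray<double> travelTimes = Variable.Array<double>(n);
using (Variable.ForEach(n))
{
    z[n] = Variable.DiscreteUniform(k);
    using (Variable.Switch(z[n]))
    {
        travelTimes[n] = Variable.GaussianFromMeanAndPrecision(averageTime[z[n]], 1);
    }
}
travelTimes.ObservedValue = new double[] { 13, 17, 16, 12, 13, 12, 14, 18, 16, 16 };

InferenceEngine engine = new InferenceEngine(new VariationalMessagePassing());

// With the explicit type argument...
Gaussian[] withCast = engine.Infer<Gaussian[]>(averageTime);
// ...and without it, the marginals still come back as Gaussians.
object withoutCast = engine.Infer(averageTime);
```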
I understand that by initializing the message passing (as in the snippet just below) or by putting different means on the prior distributions over the component means, you can get convergence to (seemingly) useful Gaussian posteriors. However: 1) even those posteriors are wrong, because no matter how
asymmetric the priors are, the true posterior still eventually converges to a bimodal distribution; and 2) the much bigger problem in my mind: what if I actually want the bimodal distribution?
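The message-passing initialization I am referring to is the symmetry-breaking trick from the Mixture of Gaussians tutorial; adapted to the sketch above, it looks like this:

```csharp
using Microsoft.ML.Probabilistic.Distributions;
using Microsoft.ML.Probabilistic.Math;

// Continuing the sketch above: initialize the assignment variables z to
// random point masses so that message passing falls into one labeling
// instead of the symmetric fixed point.
Discrete[] zInit = new Discrete[n.SizeAsInt];
for (int i = 0; i < zInit.Length; i++)
{
    zInit[i] = Discrete.PointMass(Rand.Int(k.SizeAsInt), k.SizeAsInt);
}
z.InitialiseTo(Distribution<int>.Array(zInit));
```

Even with this, what I get back for each mean is a single Gaussian sitting on one mode of the true posterior.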
To expand on 2: I now know that in the second-least-complicated model you can create in Infer.NET, the posteriors do not converge properly. I therefore have very little assurance that the posteriors generated by significantly more complicated
models will bear any resemblance to the correct posteriors.
Are there any flags I can flip to make the computations more exact? Are there any general principles I can follow to know which models will converge properly, or at least usefully?
Thanks.