Hi Zahra

There are two slightly different treatments here: (A) when your shared variable is defined directly from a prior, and (B) when your shared variable is defined in terms of another variable. In each case you also need to decide whether you want (i) full inference or (ii) online inference.

Ai) For full inference, you loop over the batches many times, but you must remove the effect of your previous visit to a batch, otherwise you will double-count its data. This means that after running to convergence on a batch, you must extract and store (keyed by batch index) the 'marginal divided by prior'. Then, before revisiting any batch, you must set its prior to the current marginal divided by that batch's saved 'marginal divided by prior' (this calculation is equivalent to collapsing the messages from all the other batches into one prior).
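To make the bookkeeping concrete, here is a toy numeric sketch of that loop (not Infer.NET code): messages are one-dimensional Gaussians in natural parameters, so multiplying densities adds parameters and dividing subtracts them, and `run_batch` is a stand-in for running inference to convergence on one batch (here, exact conditioning with unit noise). All names and numbers are illustrative assumptions.

```python
class Gaussian:
    """Gaussian message in natural parameters:
    tau = precision, eta = precision * mean."""
    def __init__(self, tau, eta):
        self.tau, self.eta = tau, eta
    def __mul__(self, other):       # product of densities: combine messages
        return Gaussian(self.tau + other.tau, self.eta + other.eta)
    def __truediv__(self, other):   # ratio of densities: remove a message
        return Gaussian(self.tau - other.tau, self.eta - other.eta)

def run_batch(prior, batch):
    # Stand-in for "run to convergence on this batch": exact Gaussian
    # conditioning on the batch's observations with unit noise variance.
    return prior * Gaussian(len(batch), sum(batch))

prior = Gaussian(1.0, 0.0)                      # N(0, 1) prior on the shared variable
batches = [[1.0, 1.2], [0.8], [1.1, 0.9, 1.0]]
saved = {i: Gaussian(0.0, 0.0)                  # 'marginal / prior' per batch,
         for i in range(len(batches))}          # initially a uniform message
marginal = prior

for sweep in range(5):                          # loop over the batches many times
    for i, batch in enumerate(batches):
        batch_prior = marginal / saved[i]       # current marginal / saved message:
                                                # prior + all *other* batches, no double count
        marginal = run_batch(batch_prior, batch)
        saved[i] = marginal / batch_prior       # store the new 'marginal / prior'
```

Because `run_batch` is exact here, `marginal` matches the posterior you would get from processing all the data in one batch; with approximate inference the sweeps are what drive it to convergence.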

Aii) For online inference, you visit each batch only once, you don't need to save the 'marginal divided by prior', and you just set the prior for the next batch to the current posterior.
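The online variant of the same toy sketch (again illustrative Python, not Infer.NET) is just a running update, with no stored messages:

```python
class Gaussian:
    """Gaussian message in natural parameters: tau = precision, eta = precision * mean."""
    def __init__(self, tau, eta):
        self.tau, self.eta = tau, eta
    def __mul__(self, other):        # product of densities: combine messages
        return Gaussian(self.tau + other.tau, self.eta + other.eta)

def run_batch(prior, batch):
    # Stand-in for inference to convergence on one batch (unit-noise conditioning).
    return prior * Gaussian(len(batch), sum(batch))

prior = Gaussian(1.0, 0.0)                      # N(0, 1) prior on the shared variable
batches = [[1.0, 1.2], [0.8], [1.1, 0.9, 1.0]]

for batch in batches:                           # each batch is visited exactly once
    posterior = run_batch(prior, batch)
    prior = posterior                           # prior for the next batch = current posterior
```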

Bi) Collapse the messages as in (Ai). However, this collapsed message must not include the prior message (i.e. we just want the product of the 'marginal divided by prior' messages from each of the other batches), and we implement the constraint enforced by these messages via a ConstrainEqualRandom factor, as described in detail in Yordan's reply to your original post.
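In the same toy Gaussian notation as above, the collapsed message for a batch is just the product of the other batches' saved messages, with the prior deliberately left out. This only illustrates the message arithmetic; the stored values are made-up example numbers, and in Infer.NET the resulting message is what you would attach via the ConstrainEqualRandom factor.

```python
class Gaussian:
    """Gaussian message in natural parameters: tau = precision, eta = precision * mean."""
    def __init__(self, tau, eta):
        self.tau, self.eta = tau, eta
    def __mul__(self, other):        # product of densities: combine messages
        return Gaussian(self.tau + other.tau, self.eta + other.eta)

def collapse_excluding_prior(saved, j):
    """Product of the stored 'marginal / prior' messages from every batch
    other than j; note the prior itself is never multiplied in."""
    msg = Gaussian(0.0, 0.0)         # start from the uniform message
    for i, m in saved.items():
        if i != j:
            msg = msg * m
    return msg

# Hypothetical 'marginal / prior' messages stored after converging on three batches:
saved = {0: Gaussian(2.0, 2.2), 1: Gaussian(1.0, 0.8), 2: Gaussian(3.0, 3.0)}
constraint_for_batch_1 = collapse_excluding_prior(saved, 1)
```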

Bii) For online inference, you visit each batch only once; you just need the 'marginal divided by prior' from the previous batch, which you use to set the constraint for the next batch.

I did not understand your accumulator comment.

John