Mapped Knowledge Producing Odd Results

  • Question

  • Hello,

     

    I'm working on my custom provider, and functionally everything is working. However, I've noticed that items that haven't been updated are still getting placed in the list of changes to make. I've traced it to where I call the Contains method on the mapped knowledge. For some strange reason, items that should be in the mapped knowledge are evaluating as not being there. I think there may be an issue with mapping the destination knowledge to the local knowledge. When debugging, I found that my local knowledge had a scope of (0,2) and the destination provider had a scope of (0,1); however, when the two were mapped together, the scope was (0,0),(1,1). 0 is the base value for the tick count of a replica, but the first time GetChangeBatch or ProcessChangeBatch is called it gets incremented, so it should always be greater than 0. Am I correct in thinking that these results are a little strange, or am I barking up the wrong tree? Thanks in advance for any help.

     

    -A Chapin

    • Moved by Max Wang_1983 Thursday, April 21, 2011 10:13 PM forum consolidation (From:SyncFx - Technical Discussion [ReadOnly])
    Tuesday, March 11, 2008 1:22 PM

Answers

  • (Just a note: It would be really nice if these forums stopped erroring out. This is the third time recently I've had to re-post something)

     

    I deleted the metadata, cleared the destination of all items, and ran the sync for the "first" time. The direction of the sync was set to DownloadAndUpload (to match the NTFS provider). The first "round" went as expected, with the items being brought from the source to the destination. However, on the second pass, the destination knowledge (which was the original source knowledge) didn't have an entry for the other replica. Hence, the mapped knowledge lacked that entry, resulting in every item being added to the ChangeBatch again. After the ProcessChangeBatch method, however, the destination knowledge was correct. On subsequent syncs, the knowledge scopes are correct, but all items are still added.

     

    As far as metadata goes, my provider has a List of ItemMetaData objects. ItemMetaData is an abstract class that requires the basic item metadata (id, creation version, and change version).

     

    I'm going to go through my code and look for any "gotchas" like the SaveItemChange one. If that doesn't solve the problem, is there a chance I could send you my solution for evaluation?

     

    The issue has been FIXED! When I was creating or updating an item, I was creating a new change version using the tick count of the replica. I switched it to using the change version of the ItemChange, and everything now works perfectly. Egg on my face; sorry for wasting your time.
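For anyone hitting the same symptom, here is a toy Python reconstruction of the bug described above. It is not the real Sync Framework API; the `contains` function and the data shapes are simplified stand-ins, modeling a knowledge scope vector as a dict from replica key to tick count. The point it illustrates: a change applied from a remote replica must keep the change version carried by the ItemChange, because a version freshly stamped with the destination's own tick count is never contained in the other side's knowledge, so the item gets re-enumerated on every sync.

```python
# Toy reconstruction of the bug (NOT the real Sync Framework API):
# a knowledge scope vector is modeled as {replica_key: tick_count}.

def contains(scope, version):
    """A version (key, tick) is covered iff tick <= the knowledge's
    tick count for that replica key."""
    key, tick = version
    return tick <= scope.get(key, 0)

# Mapped remote knowledge after a completed sync: key 0 = local, 1 = remote.
mapped_remote = {0: 0, 1: 7}

item_change_version = (1, 7)   # version carried by the remote ItemChange
local_tick = 12                # destination replica's own tick count

# Correct: keep the ItemChange's version -> covered, no re-enumeration.
print(contains(mapped_remote, item_change_version))  # True

# Bug: restamp with a fresh local version -> never in the remote
# knowledge, so the item is reported as a change on every sync.
print(contains(mapped_remote, (0, local_tick)))  # False
```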

     

    (Note: mentioning that in the MSDN entry would be helpful, though!)

    Tuesday, March 18, 2008 3:35 PM

All replies

  • When you remap the knowledge from remote to local, it doesn't do anything to combine the scope vectors contained in the two knowledge objects.  It just remaps the replica ids and replica keys to match the local knowledge.  It's essentially an operation to create a new knowledge that represents the same data as the remote knowledge, but has the replica id of the local knowledge.

     

     Say, for example, that your local knowledge has replica id "A". Its replica key map will map 0 to "A". The remote knowledge has replica id "B", which is mapped to replica key 0 in its knowledge. When you use your local knowledge to remap the remote knowledge, it creates a new knowledge whose replica key map matches A's, but which represents B's knowledge with the replica keys remapped. This means that your remapped knowledge will have a replica key map like this:

     

    0 - "A"

    1 - "B"

     

    Its remote scope vector (in the case of your example destination provider) will be remapped from (0, 1) to (1, 1) to match the changed replica key. Since a knowledge always needs an entry for the local replica (the zero key), it will insert an entry of (0, 0) in the scope vector, since it's not using any information from the local knowledge other than the replica key map.
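The remapping described above can be sketched as a toy Python model (this is not the real MapRemoteKnowledgeToLocal implementation, and the function name and data shapes are invented for illustration): a knowledge is reduced to a replica key map (key → replica id) and a scope vector (key → tick count), and remapping rewrites the remote keys into the local key space, inserting a (0, 0) entry for the local replica.

```python
# Toy model of knowledge remapping (NOT the real Sync Framework API).

def map_remote_to_local(local_key_map, remote_key_map, remote_scope):
    """Express the remote knowledge in the local replica's key space.
    Returns (new_key_map, new_scope)."""
    new_key_map = dict(local_key_map)          # start from the local keys
    id_to_key = {rid: key for key, rid in new_key_map.items()}
    next_key = max(new_key_map) + 1
    # Assign fresh keys to remote replicas the local map hasn't seen.
    for rid in remote_key_map.values():
        if rid not in id_to_key:
            id_to_key[rid] = next_key
            new_key_map[next_key] = rid
            next_key += 1
    # A knowledge always needs an entry for the local replica (key 0);
    # no tick information is carried over from the local knowledge.
    new_scope = {0: 0}
    # Rewrite the remote scope vector using the new keys.
    for key, ticks in remote_scope.items():
        new_scope[id_to_key[remote_key_map[key]]] = ticks
    return new_key_map, new_scope

# The example from this thread: local "A" at (0, 2), remote "B" at (0, 1).
key_map, scope = map_remote_to_local({0: "A"}, {0: "B"}, {0: 1})
print(key_map)  # {0: 'A', 1: 'B'}
print(scope)    # {0: 0, 1: 1} -- the (0,0),(1,1) result from the question
```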

     

    Hope this helps,

    Aaron

    Wednesday, March 12, 2008 6:39 PM
  • Alright, that's understandable. If that's the case, though, then why is it that when I call Contains for a metadatum that hasn't changed, it's still recorded as a change that needs to occur (specifically an Update)? What does that method actually use to determine whether the metadatum is contained in the knowledge or not? Thanks in advance.

     

    -A Chapin

    Wednesday, March 12, 2008 7:03 PM
  • Just a few clarifying questions:

     

    Are your replicas already in sync?  If you do a sync for the first time between replicas, items on the source that haven't changed at all will still be reported, since they are not contained in the knowledge.

     

    Do you know what version you're passing in to the Contains call? Do you know where it comes from (i.e., is it from a change that was saved previously from another replica, or is it a local item, etc.)? Ultimately, if the Contains call returns S_FALSE, it means the version is not contained in the destination knowledge, so there may be an issue with either the item metadata maintenance or the destination provider's knowledge maintenance.

     

    Aaron

    Wednesday, March 12, 2008 7:27 PM
  • Yes, even when the replicas are in perfect sync, every item is added to the ChangeBatch, and nothing is actually changed (because there was no change!). When I make the Contains call, I'm using the change version of the item, which when updated uses the tick count of the replica. In other words, if my replica is at tick count 7 and I make an update, then the item's change version will have a tick count of 7 as well.

     

    I set it up that way because that's how it was done in the ManagedNTFSProvider. That should be how it's done, correct? The documentation, especially for Custom sync providers, has been pretty spotty at times. Thanks again!

     

    -A Chapin

    Wednesday, March 12, 2008 7:50 PM
  • That seems like it should be fine.  The issue is potentially in the way that your knowledge for the replicas is maintained.  If your replicas are in perfect sync, then the destination knowledge should contain an entry for the local replica, and the tick count should be greater than or equal to the tick count of any of the local items, if no changes are made.  Another thing you should see, if the replicas are in perfect sync, is that the remapped knowledge is equal to the local knowledge for the replica ids they have both seen, and they should both have entries for "each other".

     

    Essentially the knowledge that is getting passed from the destination is indicating that the replicas are not in sync, so the issue may be in the way that your knowledge is being maintained.  One thing to check is to make sure that you are replacing your scope knowledge in the INotifyingChangeApplierTarget.StoreKnowledgeForScope method.

     

    Also, if you have specific things in the documentation that you think could be improved, please leave feedback, either in the feed forum or here, so that we can take it into consideration as we move forward :-).

     

    Aaron

    Wednesday, March 12, 2008 8:54 PM
  • Okay... so when I'm debugging, what I'm seeing is:

     

    Local Knowledge: {Scope:[(0,6279134622) (1,6278887440)]}

    Destination Knowledge: {Scope:[(0,6279127498)]}

    Mapped Knowledge: {Scope:[(0,0) (1,6279127498)]}

    (Note, the TickCounts have been changed to be seconds since Jan 1, 2008)

     

    So it appears that, for some reason, the destination knowledge hasn't synced its knowledge with the local knowledge. When does StoreKnowledgeForScope get called? Here's my code for it; it's pretty simple:

     

    void INotifyingChangeApplierTarget.StoreKnowledgeForScope(SyncKnowledge knowledge, ForgottenKnowledge forgottenKnowledge)
    {
        provider.Knowledge = knowledge.Clone();
        provider.KnowledgeForgotten = (ForgottenKnowledge)forgottenKnowledge.Clone();
        provider.SaveKnowledge();
    }

     

    SaveKnowledge just sets up a binary formatter, serializes the knowledge and forgotten knowledge, and writes them to a file stream. My provider implementation is actually an abstract class; I have two different stores saving a similar type of data.

    Thursday, March 13, 2008 4:31 PM
  • Hi,

     

    The StoreKnowledgeForScope method is called by the notifying change applier at the end of each change batch. You also need to make sure that you save the knowledge for each change:

     

    void INotifyingChangeApplierTarget.SaveItemChange(SaveChangeAction saveChangeAction, ItemChange change, SaveChangeContext context)
    {
        // Save the change data and/or version
        // ...

        SyncKnowledge knowledge;
        ForgottenKnowledge forgottenKnowledge;
        context.GetUpdatedDestinationKnowledge(out knowledge, out forgottenKnowledge);

        // Save both knowledge and forgottenKnowledge
        // ...
    }

     

    The reasoning behind this is that after saving a change your destination knowledge is updated, so you need to reflect that.
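The interruption argument above can be sketched with a toy Python model (not the real change applier; the function and the knowledge layout are invented for illustration, with the knowledge held as a scope vector plus per-item single-item exceptions). Each applied change is recorded as an exception immediately, so a crash mid-batch leaves knowledge that covers exactly what was applied; only a completed batch folds into the scope vector.

```python
# Toy sketch of per-change knowledge maintenance (NOT the real API).
# knowledge = {"scope": {replica_key: tick}, "exceptions": {item_id: version}}

def apply_batch(changes, knowledge, crash_after=None):
    """Apply (item_id, (key, tick)) changes one by one, updating the
    knowledge per change; crash_after simulates an interrupted sync."""
    applied = []
    for i, (item_id, version) in enumerate(changes):
        applied.append(item_id)
        # Model of GetUpdatedDestinationKnowledge: the applied change is
        # recorded right away as a single-item exception.
        knowledge["exceptions"][item_id] = version
        if crash_after is not None and i + 1 == crash_after:
            return applied, knowledge  # interrupted: partial work still covered
    # Batch completed (StoreKnowledgeForScope time): fold the exceptions
    # into the more compact scope-vector representation.
    for key, tick in knowledge["exceptions"].values():
        knowledge["scope"][key] = max(knowledge["scope"].get(key, 0), tick)
    knowledge["exceptions"].clear()
    return applied, knowledge

# Interrupted after 1 of 2 changes: knowledge covers X but not Y,
# so only Y is re-sent on the next sync.
applied, know = apply_batch([("X", (1, 5)), ("Y", (1, 6))],
                            {"scope": {0: 3}, "exceptions": {}},
                            crash_after=1)
print(applied)              # ['X']
print(know["exceptions"])   # {'X': (1, 5)}
```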
    Thursday, March 13, 2008 6:08 PM
  • So you're saying that I should be calling StoreKnowledgeForScope every time there's an ItemChange? That doesn't make a lot of sense to me. If it's going to happen automatically at the end, why waste time doing it for every item change?

    Thursday, March 13, 2008 7:23 PM
  • You want to make sure that your knowledge always represents the changes that you have saved.  This guarantees that if sync is interrupted for some reason, your knowledge correctly represents the changes you've seen.  There are also some intricacies about the way knowledge is constructed during sync that make the knowledge passed to StoreKnowledgeForScope a more efficient representation, since it can make decisions knowing that there are no more changes in the batch.

     

    On the last line of the SaveItemChange method in the ManagedNTFSSample, you can see that we don't persist the knowledge to disk (we wait for StoreKnowledgeForScope), since that would be inefficient, but we do replace the local knowledge with the updated destination knowledge in the following line:

     

    Code Snippet
    saveChangeContext.GetUpdatedDestinationKnowledge(
    out myKnowledge, out myForgottenKnowledge);

     

     

     

    Aaron

    Thursday, March 13, 2008 8:04 PM
  • This is a shining example of what I've been saying about the documentation. If you go to the MSDN page for SaveItemChange, it makes no mention of anything even related to knowledge. Yes, it's done in the ManagedNTFSSample, but that sample also deals with the custom MetadataStore class. It is difficult to tell what is required by the Sync Framework to sync properly and what isn't, especially when the information isn't in the MSDN. Perhaps some sort of "___ should happen here" outline or flowchart would be helpful.

     

    Also, I'm still somewhat confused as to what the mapping does, because after this call:

     

    Code Snippet

    SyncKnowledge mappedknowledge = provider.Knowledge.MapRemoteKnowledgeToLocal(destinationKnowledge);

     

     

    I try the following three tests in my immediate window:

     

     

    ? destinationKnowledge.Contains(destinationKnowledge.ReplicaId, item.ItemId, item.ChangeVersion)
    true
    ? mappedknowledge.Contains(provider.ReplicaId, item.ItemId, item.ChangeVersion)
    false
    ? provider.Knowledge.Contains(provider.ReplicaId, item.ItemId, item.ChangeVersion)
    true

     

    If the mapped knowledge is just the destination knowledge with the local knowledge's replica id, then shouldn't the mappedknowledge return true if both the local and destination knowledge return true?

     

    Sorry to be so dogged about this, but I've been trying to get this project finished for a while now, and it feels like once this problem is solved, that everything else will fall into place.

    Monday, March 17, 2008 3:26 PM
  • Thanks for the feedback on the documentation.  I'll make sure the right people know about it so we can take it into consideration going forward.

     

    Regarding the Contains check, it does not necessarily follow that a change version contained in both the local and destination knowledge will also be contained in the mapped knowledge. This is because if you use the same change version against each, the replica key in that change version represents a different replica id in each knowledge.

     

    Consider the following scenario, where the item id is represented as 'I': 

     

    Local knowledge for replica 'A' looks as follows:

     

    Replica Key Map:

    0 - A

    1 - B

     

    Knowledge:

     Scope Vector (key, tick count): (0, 3), (1, 2)

     

    The destination knowledge, which represents replica 'B' looks as follows:

     

    Replica Key Map:

    0 - B

    1 - A

     

    Knowledge:

     Scope Vector: (0, 0)

     Single Item Exception 'I': (0, 3), (1, 2).

     

    Now, if your change version is (0, 3), the Contains call for both will be true.  However, when you remap the destination knowledge using the local knowledge it will look as follows:

     

    Replica Key Map (Same as for A):

    0 - A

    1 - B

     

    Knowledge:

      Scope Vector: (0, 0), (1, 0)

      SingleItemException 'I': (0, 2), (1, 3)

     

    In this case, calling Contains with a version of (0, 3) will return false.
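The scenario above can be replayed with a toy Python model (again, not the real SyncKnowledge API; `contains` and the dict layout are illustrative stand-ins): a version (key, tick) is contained if tick is at most the knowledge's tick count for that key, where an item's single-item exception vector, when present, overrides the scope vector for that item.

```python
# Toy model of SyncKnowledge.Contains (NOT the real API).
# knowledge = {"scope": {key: tick}, "exceptions": {item_id: {key: tick}}}

def contains(knowledge, item_id, version):
    key, tick = version
    # A single-item exception, if present, overrides the scope vector.
    vector = knowledge["exceptions"].get(item_id, knowledge["scope"])
    return tick <= vector.get(key, 0)

# The scenario from above, with item 'I' and change version (0, 3).
local_A = {"scope": {0: 3, 1: 2}, "exceptions": {}}             # 0=A, 1=B
dest_B  = {"scope": {0: 0}, "exceptions": {"I": {0: 3, 1: 2}}}  # 0=B, 1=A
# dest_B remapped into A's key space (0=A, 1=B):
mapped  = {"scope": {0: 0, 1: 0}, "exceptions": {"I": {0: 2, 1: 3}}}

print(contains(local_A, "I", (0, 3)))  # True
print(contains(dest_B,  "I", (0, 3)))  # True
print(contains(mapped,  "I", (0, 3)))  # False: key 0 now means A, tick 2 < 3
```

This is exactly the true/true/false pattern from the immediate-window tests earlier in the thread: the same (key, tick) pair names a different replica once the key map changes.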

     

    Obviously, this only explains how knowledge works, but doesn't solve your problem. 

     

    One thing that might be worth doing (if possible) is to clear out your stored sync metadata for the two endpoints, make it so that your destination endpoint is empty of items, and perform a sync.  At the end of that sync, the knowledge for each side should be the same (with different replica keys), and all of the items should be on both sides.  Also, you seemed to imply that you weren't using the sample metadata store... what metadata store are you using (i.e., the SQL CE-backed metadata store we provide, or one you wrote)?

     

    Aaron

    Monday, March 17, 2008 11:27 PM
  • No worries.  I'm glad you figured out the issue.  We appreciate the feedback, too.

     

    Aaron

     

    Tuesday, March 18, 2008 9:59 PM