Can change appliers skip items as a simple filtering mechanism?

  • Question

  • I want to run sync sessions within a service where the client will only provide a single change at a time. This change needs to be applied within a sync session on the server, so the session needs to be filtered to apply only that single change. I've had a look at the Sync 101 with custom filtering sample, and it seems incredibly complex for such a simple requirement.

    I've written a proof of concept that shows I can filter manually in a change applier by recording a recoverable error for each item that is not in the filter. For example:

            public void SaveItemChange(SaveChangeAction saveChangeAction, ItemChange change, SaveChangeContext context)
            {
                ItemData item = context.ChangeData as ItemData;
    
                // Skip any item that is not the filtered item by recording a
                // recoverable error against it.
                if (item != null
                    && FilterItem != null
                    && FilterItem.Id != item.Id)
                {
                    context.RecordRecoverableErrorForItem(new RecoverableErrorData(new NotSupportedException()));
    
                    return;
                }
    
                switch (saveChangeAction)
                {
                    case SaveChangeAction.Create:
                    case SaveChangeAction.UpdateVersionOnly:
                    case SaveChangeAction.UpdateVersionAndData:
                    case SaveChangeAction.UpdateVersionAndMergeData:
    
                        CreateUpdateData(item, change);
    
                        break;
    
                    // The remaining actions (deletes, renames, ghosts and
                    // change-id updates) are not handled by this provider.
                    case SaveChangeAction.DeleteAndStoreTombstone:
                    case SaveChangeAction.DeleteAndRemoveTombstone:
                    case SaveChangeAction.RenameSourceAndUpdateVersionAndData:
                    case SaveChangeAction.RenameDestinationAndUpdateVersionData:
                    case SaveChangeAction.DeleteConflictingAndSaveSourceItem:
                    case SaveChangeAction.StoreMergeTombstone:
                    case SaveChangeAction.ChangeIdUpdateVersionAndMergeData:
                    case SaveChangeAction.ChangeIdUpdateVersionAndSaveData:
                    case SaveChangeAction.ChangeIdUpdateVersionAndDeleteAndStoreTombstone:
                    case SaveChangeAction.ChangeIdUpdateVersionOnly:
                    case SaveChangeAction.CreateGhost:
                    case SaveChangeAction.MarkItemAsGhost:
                    case SaveChangeAction.UnmarkItemAsGhost:
                    case SaveChangeAction.UpdateGhost:
                    case SaveChangeAction.DeleteGhostAndStoreTombstone:
                    case SaveChangeAction.DeleteGhostWithoutTombstone:
                        break;
    
                    default:
                        throw new ArgumentOutOfRangeException("saveChangeAction");
                }
    
                // Save the knowledge in the metadata store as each change is applied.
                // Saving knowledge as each change is applied is not required, but it
                // is more robust than saving the knowledge only after each change
                // batch: if synchronization is interrupted before the end of a change
                // batch, the knowledge will still reflect all of the changes applied.
                // However, it is less efficient because knowledge must be stored more
                // frequently.
                SyncKnowledge knowledge;
                ForgottenKnowledge forgottenKnowledge;
    
                context.GetUpdatedDestinationKnowledge(out knowledge, out forgottenKnowledge);
    
                StoreKnowledgeForScope(knowledge, forgottenKnowledge);
            }
    The only disadvantages seem to be that the sync statistics report all the other changes as failures (not an issue for my implementation), and the performance cost of this type of filter when the replica holds a significant number of items.

    Is there any other reason why I should not do this and should use the filter sample code instead?
    Friday, January 8, 2010 2:29 AM

Answers

  • Hi Rory,

    The major reason that you probably don't want to use this approach is that when you do another sync, all of the items that weren't in the filter in the first sync will be sent again. And your knowledge size will grow, because it will have to store exceptions for every item you skip. You could maybe get away with doing the filter on the source, but it probably won't be particularly robust for things like items moving in and out of the filter, or other complications that arise.

    Aaron
    SDE, Microsoft Sync Framework
    Friday, January 8, 2010 6:42 PM
    Answerer
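
    Aaron's suggestion of filtering on the source, so that out-of-filter items are never enumerated or sent, could be sketched roughly as follows. This is a minimal sketch of only the filter-negotiation step, not a working provider: the SingleItemFilter class name and _filteredItemId field are illustrative, and in practice TryAddFilter would be implemented on the custom KnowledgeSyncProvider itself, whose change enumeration would then consult the stored id.

            using Microsoft.Synchronization;
    
            // Sketch only: the filter-negotiation half of a source-side item filter.
            // In a real provider ISupportFilteredSync would be implemented on the
            // custom KnowledgeSyncProvider; SingleItemFilter and _filteredItemId are
            // illustrative names, not part of the framework.
            public class SingleItemFilter : ISupportFilteredSync
            {
                private SyncId _filteredItemId;
    
                // The destination requests the filter via IRequestFilteredSync; the
                // framework then offers it to the source here. Returning true commits
                // the source to enumerating only in-filter changes, so no recoverable
                // errors accumulate in the destination's knowledge.
                public bool TryAddFilter(object filter, FilteringType filteringType)
                {
                    SyncId requestedItem = filter as SyncId;
    
                    if (requestedItem == null)
                    {
                        return false; // Unrecognized filter: fall back to unfiltered sync.
                    }
    
                    _filteredItemId = requestedItem;
    
                    return true;
                }
            }

    Because the filter is honoured during change enumeration on the source, the skipped items are never sent at all, avoiding both the repeated re-sends and the knowledge growth described above.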

All replies

  • Hi Rory,

    The major reason that you probably don't want to use this approach is that when you do another sync, all of the items that weren't in the filter in the first sync will be sent again. And your knowledge size will grow, because it will have to store exceptions for every item you skip. You could maybe get away with doing the filter on the source, but it probably won't be particularly robust for things like items moving in and out of the filter, or other complications that arise.

    Aaron
    SDE, Microsoft Sync Framework
    Friday, January 8, 2010 6:42 PM
    Answerer
  • Thanks Aaron. With regard to items moving in and out of the filter, is that an issue when each session filters on a different item? I will only ever be able to apply one item per session.
    Friday, January 8, 2010 10:58 PM
  • This was not as hard to figure out as I expected: item filtering turned out to be quite simple. I've blogged an example of item filtering, because the filtering sample provided is very complex and doesn't include an example of plain item filtering; it only shows examples of change unit filtering and custom filtering.

    Blog post is at http://www.neovolve.com/post/2010/01/15/Filtering-items-in-the-MSF-20.aspx.
    Thursday, January 28, 2010 10:01 PM
  • Rory, 

    Sorry for the delayed response; I totally missed the notification e-mail. The really tricky issues with filters arise when items move in and out of the filter: how should that be represented to the other replicas so that it gets propagated correctly, as opposed to being represented as a delete? I'm not sure what you're using your filter for, but that's what the custom filtering support provides. One thing you want to think about is: what if you delete an item on the source that was previously in the filter on the destination? Do you want that item to be deleted on the destination, or would you rather have it stay?

    Like I said before, the main thing you'll hit is performance. If you filter on the destination, you will always enumerate and send all unknown changes from the source, and your knowledge on the destination will grow (it will eventually hold an exception for every item from the source).

    Aaron
    SDE, Microsoft Sync Framework
    Thursday, January 28, 2010 10:07 PM
    Answerer
  • Hi Aaron,

    I'm assuming here that your response is still with regard to the original suggestion in this thread of simply skipping items when they are applied.

    In my scenario, application processing for a "session" will run an unfiltered sync session in preview mode to identify all the changes between the two replicas, followed by a sync session for each change, using a filter that identifies the item for that change. This is done because the sync framework is operating behind a distributed service.

    As demonstrated in my blog post, my current design implements the ISupportFilteredSync and IRequestFilteredSync interfaces on the provider and then uses GetFilteredChangeBatch on the metadata store to deal only with the item to be changed. I am using the same custom provider for both source and destination, as I found too many problems trying to use the FileSyncProvider as one of the providers (NotImplementedException etc.).

    Will this current design also suffer from what you have mentioned in this thread?
    Friday, January 29, 2010 2:21 AM
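  • For reference, the per-item flow described in this thread could be sketched along these lines. SyncOrchestrator, SyncOperationStatistics, SyncId and KnowledgeSyncProvider are framework types; the changedItemIds sequence (the output of the preview pass) and the createProviderFor factory are hypothetical application-specific pieces standing in for details the thread does not show.

            using System;
            using System.Collections.Generic;
            using Microsoft.Synchronization;
    
            public static class PerItemSync
            {
                // Sketch only: one filtered sync session per changed item.
                // changedItemIds comes from an application-specific preview pass;
                // createProviderFor builds the custom provider for a given item.
                public static void SyncOneItemPerSession(
                    IEnumerable<SyncId> changedItemIds,
                    Func<SyncId, KnowledgeSyncProvider> createProviderFor)
                {
                    foreach (SyncId itemId in changedItemIds)
                    {
                        // The destination provider implements IRequestFilteredSync to
                        // ask for a filter naming just this item; the source honours it
                        // via ISupportFilteredSync, enumerating with
                        // GetFilteredChangeBatch on its metadata store.
                        SyncOrchestrator orchestrator = new SyncOrchestrator();
                        orchestrator.LocalProvider = createProviderFor(itemId);
                        orchestrator.RemoteProvider = createProviderFor(itemId);
    
                        SyncOperationStatistics stats = orchestrator.Synchronize();
                    }
                }
            }

    Because each session is filtered at enumeration time rather than by recording recoverable errors at apply time, this avoids the knowledge growth Aaron describes, at the cost of one full session per item.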