Sync Framework 2.1 static filters issue with existing rows coming into filter

  • Question

  • I have a solution where I would like to use static filters (although it could be dynamic, I prefer the simplicity of static) to filter some data.  The filters need to be applied to a number of tables through a common foreign key.  A simple example of this would be two tables (I have about 60, all of which FK into the equivalent of the user table below):

    1. a person table with information about a person
    2. a user table which identifies which persons a given user can sync

    The static filter on the person table is something like [side].[person_id] in (SELECT person_id from [user] where <insert criteria here, I am using suser_name>)
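
    For reference, this is roughly how the filter is declared when I provision the server scope.  This is a sketch only - the scope name, the login_name column and the connection handling are placeholders, and the WHERE clause stands in for the suser_name criteria I actually use:

    using System.Data.SqlClient;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;

    class FilteredProvisioning
    {
        static void ProvisionServer(string serverConnectionString)
        {
            using (var serverConn = new SqlConnection(serverConnectionString))
            {
                // Describe the scope: the filtered person table plus the user table that drives the filter.
                var scopeDesc = new DbSyncScopeDescription("PersonScope");
                scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("person", serverConn));
                scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("user", serverConn));

                var provisioning = new SqlSyncScopeProvisioning(serverConn, scopeDesc);

                // Static filter: only person rows referenced from [user] for the current login.
                // "login_name" is a placeholder for my real criteria (a suser_name check).
                provisioning.Tables["person"].AddFilterColumn("person_id");
                provisioning.Tables["person"].FilterClause =
                    "[side].[person_id] IN (SELECT person_id FROM [user] WHERE login_name = SUSER_NAME())";

                if (!provisioning.ScopeExists("PersonScope"))
                    provisioning.Apply();
            }
        }
    }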

    This works very well.  An issue arises when a new record is added to the user table that brings an existing person into the filter (and it is important that it is an existing person).  The person record also has to be updated to qualify, per basic sync logic.  In this situation:

    1. The first sync after this change will try to propagate the person record to the client, but the client will treat the record as a delete because (I think) the creation timestamp predates the sync timestamp, so sync (reasonably) decides that the record must have been deleted locally and inserts a tombstone record for that row.
    2. The second sync after this change will propagate the tombstone row back to the server and the row will be deleted.

    This is not what most people would want.  I have a solution, which is:

    1. Prior to sync, detect whether the filter has changed, e.g. are there different users in the server version of the table than in my local version
    2. Sync once only, and when successful delete the client scope.  Repeat for all scopes
    3. Delete the local database and reprovision

    The deletion and reprovisioning of the local database was always an intended course of action, so that's not really the issue.
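
    For completeness, the scope deletion in step 2 above is just a per-scope deprovision call on the client before the local database is dropped and rebuilt.  This is a sketch only, assuming the SqlServer provider; the class, method and scope names are placeholders:

    using System.Data.SqlClient;
    using Microsoft.Synchronization.Data.SqlServer;

    class ClientScopeReset
    {
        // Call once per scope after its last successful sync, then drop and
        // re-create/re-provision the local database as usual.
        static void DropClientScope(string clientConnectionString, string scopeName)
        {
            using (var clientConn = new SqlConnection(clientConnectionString))
            {
                var deprovisioning = new SqlSyncScopeDeprovisioning(clientConn);

                // Removes the scope's sync metadata; tracking objects still used by
                // other scopes are left in place. DeprovisionStore() would remove
                // everything in one go if the whole local store is being rebuilt anyway.
                deprovisioning.DeprovisionScope(scopeName);
            }
        }
    }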

    What I am interested in is, has anyone else had this problem, and how did they address it?


    Nick Caramello

    Friday, February 17, 2012 10:42 PM

All replies

  • Adding the user record for the person that comes into scope is recorded as an insert operation in the metadata, and the update on the person row itself is recorded as an update operation, so sync should not be propagating this as a delete.  If you look up the metadata for those rows in the corresponding _tracking table, you will find that their is-tombstone flag is not set.
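
    A quick way to check is to query the tracking table directly.  This is only a sketch, assuming the default 2.1 naming (a person_tracking table with sync_row_is_tombstone and last_change_datetime columns); adjust the names if you overrode them when provisioning:

    using System;
    using System.Data.SqlClient;

    class TrackingCheck
    {
        // Dump the tracking metadata for one person row so you can see whether
        // the tombstone flag is actually set on that replica.
        static void ShowTrackingRow(string connectionString, object personId)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT sync_row_is_tombstone, last_change_datetime " +
                "FROM [person_tracking] WHERE [person_id] = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", personId);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        Console.WriteLine("tombstone = {0}, last change = {1}", reader[0], reader[1]);
            }
        }
    }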

    When the sync is initiated, the update for the person row will actually fail on the client because there is no row to update (the client doesn't have the record yet), and it may actually result in an UpdateDelete conflict.  If your user table has an FK on the person table, the insert on the client would also fail.

    How are you resolving conflicts?

    Reprovisioning is always the easiest way to handle this.  For these types of tables, where rows go in and out of scope, I would suggest you put them in a separate scope; that way, if you re-provision, only that scope is affected and the other tables don't need to be reinitialized.
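
    Something like the following layout is what I have in mind; the scope and table names are placeholders only - the point is that the tables whose rows realign live in their own scope, so only that scope ever has to be dropped and re-provisioned:

    using System.Data.SqlClient;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;

    class ScopeLayout
    {
        static void ProvisionScopes(SqlConnection serverConn)
        {
            // Scope 1: tables whose rows never move in or out of the filter.
            var stableDesc = new DbSyncScopeDescription("StableScope");
            stableDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("lookup_table", serverConn));
            // ...add the rest of the stable tables here...

            // Scope 2: only the tables whose rows realign when the filter changes.
            var filteredDesc = new DbSyncScopeDescription("FilteredScope");
            filteredDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("person", serverConn));
            filteredDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("user", serverConn));

            new SqlSyncScopeProvisioning(serverConn, stableDesc).Apply();
            new SqlSyncScopeProvisioning(serverConn, filteredDesc).Apply();
        }
    }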

    The other approach would be to do a delete/re-insert operation on the person table instead of an update, but if the person row is on other clients this may pose an issue as well, since they will get the delete and the insert too.

    Saturday, February 18, 2012 2:09 AM
  • June,

    Thanks for the response.

    Your email prompted me to look into whether there was an update failure, and there is: the row is a conflict of type LocalCleanedUpDeleteRemoteUpdate.  We are applying an action of ApplyAction.Continue, which is to say that we are doing "nothing", since that is already the action on the conflict.

    Specifically, from SQL Profiler, this is what is being executed on the client (note that the actual table is Program in this case, not Person):

    declare @p10 int
    set @p10=1
    exec [Program_insertmetadata]
        @P_1='07DE85CF-CE1E-E2D1-3171-650938ABD2B7',
        @sync_scope_local_id=2,
        @sync_row_is_tombstone=1,
        @sync_create_peer_key=0,
        @sync_create_peer_timestamp=24789,
        @sync_update_peer_key=0,
        @sync_update_peer_timestamp=16198,
        @sync_check_concurrency=0,
        @sync_row_timestamp=16198,
        @sync_row_count=@p10 output
    select @p10

    What is certain is that this combination is inserting a tombstone record in the tracking table on the client, which then results in a delete on the server on the next sync.  This is the "default behavior" of sync in FW 2.1 (unless I am missing something, which is possible) and might be a bug.

    I think I can address this using a combination of techniques - I am going to follow the approach I laid out above (detect the filter change, sync, delete the scope, reprovision) because that's really what I need to do from a business perspective, but I am also going to handle the LocalCleanedUpDeleteRemoteUpdate conflict type explicitly.

    I think I have to do this because I am not sure that I can make the detection of a filter change transactional with sync.
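
    Roughly what I have in mind for that second part is a handler on the client provider's ApplyChangeFailed event.  This is only a sketch - the class and method names are placeholders, and I still need to confirm that RetryWithForceWrite really applies the incoming row as an insert here rather than tombstoning it:

    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;

    class ConflictHandling
    {
        // Attach before SyncOrchestrator.Synchronize() runs.
        static void Attach(SqlSyncProvider clientProvider)
        {
            clientProvider.ApplyChangeFailed += (sender, e) =>
            {
                // The row coming into the filter arrives as an update for a row the
                // client has never had; with Continue the provider writes a tombstone
                // locally, which deletes the server row on the next sync. Forcing the
                // write should apply the incoming row instead.
                if (e.Conflict.Type == DbConflictType.LocalCleanedupDeleteRemoteUpdate)
                {
                    e.Action = ApplyAction.RetryWithForceWrite;
                }
            };
        }
    }

    That said, the detect/sync/deprovision/reprovision path is still the primary fix; this is just a guard for the window where the filter has changed but the scope has not yet been rebuilt.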


    Nick Caramello

    Saturday, February 18, 2012 12:37 PM
  • Yes, your findings are right.

    The default behaviour with ApplyAction.Continue is that it checks whether the local metadata is null and, if it is, it creates a metadata entry with the tombstone flag set to true.

    The reprovisioning approach is actually the preferred one, given that the Sync Framework doesn't really support partition realignment or rows going in and out of scope.

    Sunday, February 19, 2012 12:27 AM