Template with static filter: update not matched by target

  • Question

  • Good morning

    I provision my client database from a template with a static filter to reduce the number of rows. The synchronization direction is download only.
    The server has 2,000,000 rows in this table and I synchronize only the useful rows (about 1%).
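
    For reference, a filtered template of this kind is typically provisioned along these lines (a sketch with illustrative table, column, and parameter names, not my actual schema):

    ```vb
    ' Describe the scope from the server database (names illustrative).
    Dim scopeDesc As New DbSyncScopeDescription("customer_template")
    scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("Sales.Customer", serverConn))

    ' Provision a filter template; each client scope supplies its own parameter value.
    Dim template As New SqlSyncScopeProvisioning(serverConn, scopeDesc, SqlSyncScopeProvisioningType.Template)
    template.Tables("Sales.Customer").AddFilterColumn("CustomerType")
    template.Tables("Sales.Customer").FilterClause = "[side].[CustomerType] = @customertype"
    template.Tables("Sales.Customer").FilterParameters.Add(New SqlParameter("@customertype", SqlDbType.NVarChar, 100))
    template.Apply()
    ```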

    The first synchronization is OK: the server has tracking rows for all rows, and the filtered rows are inserted on the client together with their tracking rows.

    In some use cases, the server UPDATEs rows so that they match the filter parameters, which should enlarge the client database.
    These rows are never INSERTED into the client database; instead a tracking row is created on the client with ROW_IS_TOMBSTONE set to 1.

    With SqlProfiler, I can see that the Sync Framework uses the BULKUPDATE stored procedure for these rows (and not BULKINSERT). But since the client database has never seen these rows, the MERGE cannot insert them: the stored procedure created by the framework has no WHEN NOT MATCHED BY TARGET branch.
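
    For context, the MERGE inside the generated bulk-update procedure is shaped roughly like this sketch (illustrative names, not the real generated code); the commented branch at the end is the one that is missing:

    ```sql
    -- Sketch of a generated bulk-update MERGE (illustrative names).
    MERGE [Sales].[Customer] AS base
    USING @changeTable AS changes
        ON base.[CustomerId] = changes.[CustomerId]
    WHEN MATCHED THEN
        UPDATE SET base.[CustomerType] = changes.[CustomerType];

    -- The generated procedure has no insert branch; rows unknown to the
    -- client would need something like:
    --     WHEN NOT MATCHED BY TARGET THEN
    --         INSERT ([CustomerId], [CustomerType])
    --         VALUES (changes.[CustomerId], changes.[CustomerType]);
    ```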

    I think ALTERing the stored procedure created by the framework is not a good approach, and I don't want to synchronize all the rows.

    What solution can I use?

     

    Regards

    Wednesday, January 4, 2012 3:32 PM

Answers

  • Ok, I repro'ed the issue and debugged.  The row comes down to the filtered client as an update, as you mentioned, but the client hasn't seen it before.  Because the item is unknown yet arrives as an update, the client assumes the row must have been deleted locally and its tombstone cleaned up, so it raises a conflict.  The default resolution policy is local wins, so it creates a tombstone.  To make the update go through, you have to set the resolution policy to remote wins.

    It seems like the default behavior in this case is a borderline bug.  I would not expect the provider to choose to resurrect a tombstone in this case - it seems like a dangerous decision.  However, you do have a workaround of resolving the conflict in the correct way.

    -Jesse

    • Marked as answer by Psyko664 Monday, January 9, 2012 11:24 AM
    Friday, January 6, 2012 8:01 PM

All replies

  • Hi there,

    Moving existing rows in and out of the filter is not supported.  The problem is the tracking table filter value is not updated when the row is updated and so it is never enumerated.

    -Jesse

    Wednesday, January 4, 2012 7:11 PM
  • Hello Jesse,

    I understand the problem you mention about the tracking table in the client database, which has no column for the filter parameter. But this is not really my problem: since the direction is download only, I just need the client database to grow.

    As for the tracking table in the server database, the row and its filter column are updated by a trigger.
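
    Such a trigger can be sketched like this (table and column names are illustrative, not my real schema):

    ```sql
    CREATE TRIGGER [Sales].[trg_Customer_SyncFilter]
    ON [Sales].[Customer]
    AFTER UPDATE
    AS
    BEGIN
        -- Copy the filter column into the tracking row so that filtered
        -- change enumeration sees the new value.
        UPDATE t
        SET t.[CustomerType] = i.[CustomerType]
        FROM [Sales].[Customer_tracking] t
        INNER JOIN inserted i ON i.[CustomerId] = t.[CustomerId];
    END
    ```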

    Since the row is updated on the server, the procedure used for the client is BULKUPDATE (without WHEN NOT MATCHED BY TARGET in its MERGE command). So the row is not inserted into the client database, but the associated tracking row IS inserted with is_tombstone set to 1.

    I tried a synchronization after adding WHEN NOT MATCHED BY TARGET to the MERGE command of the BULKUPDATE stored procedure, and it works. But this solution would be hard to maintain in case of scope deprovisioning.

    Thursday, January 5, 2012 9:09 AM
  • Ok I will try to repro and get back to you.

    -Jesse

    Thursday, January 5, 2012 9:25 PM
  • Thank you for the approach to the solution.

    The conflict for this problem was of type DbConflictType.LocalCleanedupDeleteRemoteUpdate.

    In the ApplyChangeFailed event I simply do:

                If sender.Configuration.CollisionConflictResolutionPolicy = CollisionConflictResolutionPolicy.SourceWins AndAlso
                   e.Conflict.Type = DbConflictType.LocalCleanedupDeleteRemoteUpdate Then
                    e.Action = ApplyAction.RetryWithForceWrite
                End If

     Regards


    • Edited by Psyko664 Monday, January 9, 2012 11:30 AM
    Monday, January 9, 2012 11:30 AM
  • Jesse,

    I believe this is a dangerous decision.  The next sync will try to delete the record on the server. 

    A suggestion was made above to detect the situation and apply an Action of RetryWithForceWrite.  This is not, in my opinion, an entirely workable solution, or at least it is not if the filter applies to more than one table.  I have a situation where multiple tables are brought into filter scope because the filter clause is based on a foreign key.  When this occurs, only the tables that have been updated since the last sync will be propagated to the client, and if we use RetryWithForceWrite while not all tables have been updated (which I would presume is the usual situation), then we will either hit a problem with FK constraints or end up with partial replication of a set of related records.

    My solution is different.  I am detecting the situation where records come into filter scope, and I am requiring that the client sync, delete the scope, and reprovision (which is a good solution for us because the overhead is so much lower than synchronizing without a filter).  The issue is that I want to be able to ignore this LocalCleanedupDeleteRemoteUpdate conflict, and I cannot:

    1. RetryWithForceWrite will insert the records locally, which can result in a partial update per the discussion above.
    2. RetryApplyingRow, which looks like it should do the right thing, does not try to apply the row "one more time".  It retries indefinitely rather than once, which has no effect and is just really annoying; I don't like infinite loops in my production code.
    3. Continue inserts a tombstone record, which will cause a delete on the server on the next sync.
    4. RetryNextSync is the best bet since it defers the failed row to the next sync.

    It would be great if there was a way to simply ignore this sync situation.
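
    Under those constraints, option 4 can be combined with a reprovisioning flag, roughly like this (a sketch; the reprovisionNeeded field and handler wiring are my own, not Sync Framework API):

    ```vb
    Private reprovisionNeeded As Boolean = False

    Private Sub OnApplyChangeFailed(sender As Object, e As DbApplyChangeFailedEventArgs)
        ' Defer rows that came into filter scope and remember that the
        ' client scope should be dropped and reprovisioned afterwards.
        If e.Conflict.Type = DbConflictType.LocalCleanedupDeleteRemoteUpdate Then
            e.Action = ApplyAction.RetryNextSync
            reprovisionNeeded = True
        End If
    End Sub
    ```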


    Nick Caramello

    Saturday, February 18, 2012 1:14 PM