How to manually insert a delete in ChangesSelected’s e.Context.DataSet on the remote peer

  • Question

  • I am using filtered scopes ... so I need to manually handle partition management.  In current projects I handle it on the client side by removing a client-deleted record from the dataset in ChangesSelected.  It is now clear that it would be more efficient to handle it on the server side.

    On my SQL-Server peer (remote/server) I have a record I want an SDF peer (local/client) to purge (delete without the delete flowing back to the SQL-Server).

    Here is what I am doing on the server side in the ChangesSelected event (a rough code sketch follows the list):

    -  create a new temp DataTable that contains the KeyField value plus the 4 _tracking fields (timestamps, versions, etc.) for each record to be deleted by the client

    -  AcceptChanges() on the new DataTable (so the RowStates will not be “Added” when I delete them)

    -  delete all rows in the new DataTable (don't AcceptChanges() here)

    -  merge the new DataTable with e.Context.DataSet.Tables[]
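
    For concreteness, here is a minimal sketch of those four steps (the "psv" table, the svKey key column, and the tracking column names are taken from the trace and SQL later in this thread; it is an outline, not production code):

    using System.Data;
    using Microsoft.Synchronization.Data;

    void provider_ChangesSelected(object sender, DbChangesSelectedEventArgs e)
    {
        // 1) temp DataTable: the key column plus the 4 sync tracking columns
        var purge = new DataTable("psv");
        purge.Columns.Add("svKey", typeof(string));
        purge.Columns.Add("sync_update_peer_timestamp", typeof(long));
        purge.Columns.Add("sync_update_peer_key", typeof(int));
        purge.Columns.Add("sync_create_peer_timestamp", typeof(long));
        purge.Columns.Add("sync_create_peer_key", typeof(int));
        purge.Rows.Add("182778", 38154264L, 4, 37656532L, 4);   // values come from the _tracking table

        // 2) AcceptChanges so the RowStates become Unchanged rather than Added
        purge.AcceptChanges();

        // 3) delete every row (no AcceptChanges here, or the deletes are discarded)
        foreach (DataRow row in purge.Rows)
            row.Delete();

        // 4) merge the Deleted rows into the outgoing change batch
        e.Context.DataSet.Tables["psv"].Merge(purge);
    }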

    In my test with one record, this appears to create the desired record in the server’s e.Context.DataSet.  The manually added record shows up on the client side in the ApplyingChanges event containing the expected data and with RowState == Deleted.

    Problem: when the client-side framework processes the e.Context.DataSet in ApplyingChanges ... no errors are thrown; ApplyChangeFailed is not raised; the SyncTrace file reports "1 Deletes Applied" ... but the record is NOT deleted.

    SyncTrace file fragment:

    ----- Deletes for Table "psv" -----
    RowId: psv-182778 UV: 4,38154264 CV: 4,37656532 IsTomb: True
    Deleting row with PK: svKey="182778" on C:\...\abc.sdf
    Local UV: 4,38153965 CV: 4,37656532 IsTomb: False US: SideTable CS: SideTable
    Local peer contains remote change. Returning LocalSupersedes.
    1 Deletes Applied
    --- End Deletes for Table "psv" ---

    Question: What does “Local peer contains remote change. Returning LocalSupersedes.” mean?  (Google returns nothing for either sentence)

    (it sounds like the client is ignoring the server’s record because it thinks it already contains the same change)

    Friday, June 22, 2012 7:45 PM

All replies

  • What values were you setting for the tracking columns?
    Monday, June 25, 2012 1:42 AM
  • I looked at a deleted row (for a non-filtered table) in e.Context.DataSet that was being sent to the client and was able to re-create the delete record with this SQL:

    SELECT  T.pkID,
            CONVERT(BIGINT, T.local_update_peer_timestamp) AS sync_update_peer_timestamp,
            T.local_update_peer_key AS sync_update_peer_key,
            T.local_create_peer_timestamp AS sync_create_peer_timestamp,
            T.local_create_peer_key AS sync_create_peer_key
    FROM    psv_tracking T
    WHERE   T.pkID IN ({0})

    ... so I am using this script to build my manual insert rows also.
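
    For completeness, this is roughly how I fill the temp table from that script (purgeSelect holds the SELECT above, keyList fills the IN ({0}) placeholder, and serverConnectionString is whatever your server connection is; all three names are just illustrative):

    // requires System.Data and System.Data.SqlClient
    string sql = string.Format(purgeSelect, keyList);   // e.g. keyList = "182778"

    var purge = new DataTable("psv");
    using (var adapter = new SqlDataAdapter(sql, serverConnectionString))
    {
        adapter.Fill(purge);    // filled rows arrive with RowState == Added
    }

    // then AcceptChanges / Delete / Merge as described in the original post
    purge.AcceptChanges();
    foreach (DataRow row in purge.Rows)
        row.Delete();
    e.Context.DataSet.Tables["psv"].Merge(purge);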

    Along the lines of your question ... here is the beginning of the SyncTrace file with Source/Destination ReplicaKeyMap info that may be of help:

    Connecting to database: C:\...abc.sdf
    Executing Command: SELECT [schema_major_version], [schema_minor_version], [schema_extended_info] FROM [schema_info]
    Connecting to database: C:\...abc.sdf
    Executing Command: select scope_local_id, scope_id, scope_sync_knowledge, scope_tombstone_cleanup_knowledge, scope_timestamp, scope_tomb_mark, scope_config_id from [scope_info] with (HOLDLOCK, XLOCK) where sync_scope_name = @sync_scope_name
           Parameter: @sync_scope_name Len: 22 Value: myScope
    Closing Connection
    Connecting to database: C:\...abc.sdf
     Executing Command: SELECT [schema_major_version], [schema_minor_version], [schema_extended_info] FROM [schema_info]
    Connecting to database: C:\...abc.sdf
    Executing Command: select scope_local_id, scope_id, scope_sync_knowledge, scope_tombstone_cleanup_knowledge, scope_timestamp, scope_tomb_mark, scope_config_id from [scope_info] with (HOLDLOCK, XLOCK) where sync_scope_name = @sync_scope_name
    Parameter: @sync_scope_name Len: 22 Value: myScope
    Source ReplicaKeyMap: [(0:0689d4157de144029a4e813562b29172) (1:737053a5b8834cbeaf68185921ccfdbf) (2:748811fff2c94b15a863ccfb18e1e427) (3:72ded3e8db914e1aa07215ee536fe96b) (4:b5cc82dd88b04f09a947a77db6620abb)] ScopeRangeSet: [00:[(0:38154482) (1:216356) (2:37252643) (3:11258) (4:35847623)]]
    Destination ReplicaKeyMap: [(0:0689d4157de144029a4e813562b29172) (1:737053a5b8834cbeaf68185921ccfdbf) (2:748811fff2c94b15a863ccfb18e1e427) (3:72ded3e8db914e1aa07215ee536fe96b) (4:b5cc82dd88b04f09a947a77db6620abb)] ScopeRangeSet: [00:[(0:38154469) (1:216812) (2:37252643) (3:11258) (4:35847623)]]
    Min Timestamp 216356

    Monday, June 25, 2012 1:05 PM
  • Update:

    At some point the above delete did get applied and I’m not sure what I did to make it apply.

    In subsequent tests I found other deletes that would not apply … the pattern for them seems to be that the server UV value was equal to the client’s UV value … but this was not the case with the record in my original post.

    Notice the UV values in the 2 rows in the SyncTrace from my original post:

    RowId: psv-182778 UV: 4,38154264 CV: 4,37656532 IsTomb: True
    Local UV: 4,38153965 CV: 4,37656532 IsTomb: False US: SideTable CS: SideTable

    where the server UV:38154264 is greater than client UV:38153965 by 299.

    To get the deletes to apply in the subsequent tests, I found I could force the server UV to be 1 greater than the client UV (i.e., change the SQL statement to CONVERT(BIGINT, T.local_update_peer_timestamp + 1)) and the delete would be applied.

    I have also discovered that if I “touch” a record on the server and then insert it into my e.Context.DataSet, I don’t have this issue … if, on the other hand, I don’t “touch” the record but just add it to my DataSet, I have the issue.

    Monday, June 25, 2012 9:13 PM
  • Sync Fx syncs based on incremental changes. To check whether a change has already been received, it compares timestamps: a change is applied only if its timestamp is greater than what the destination already has.
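
    Conceptually, the rule is something like this (an illustration only, not the actual framework code):

    // A change version is a (replica key, tick count) pair; the trace shows
    // it as e.g. "UV: 4,38154264". Illustration only, not Sync Fx source.
    struct SyncVersion
    {
        public uint ReplicaKey;
        public ulong TickCount;
    }

    static bool ShouldApplyChange(SyncVersion incoming, ulong localTickForThatReplica)
    {
        // apply only if the incoming tick is beyond what the destination's
        // knowledge already covers for that replica; otherwise the change is
        // considered already known and the trace reports "LocalSupersedes"
        return incoming.TickCount > localTickForThatReplica;
    }
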
    Tuesday, June 26, 2012 1:27 AM
  • Thanks. Is there anything that makes sense about the original delete that would not apply?

    Notice the UV values in the 2 rows in the SyncTrace from my original post:

    RowId: psv-182778 UV: 4,38154264 CV: 4,37656532 IsTomb: True
    Local UV: 4,38153965 CV: 4,37656532 IsTomb: False US: SideTable CS: SideTable

    where the server UV:38154264 is greater than client UV:38153965 by 299.

    Do you think I am on the right track by incrementing by 1 the UV at the server so it will be greater than the client?

    Once I increment it by 1, I will need to delete the _tracking record on the client (or else decrement it by 1) after the base record is deleted; otherwise, if the record moves back into scope, the client version will always be higher than the server’s and the record will get a ClientCleanedUpDeleteServerUpdate conflict (or something like that - I can’t remember).  Does this sound reasonable?

    Thanks!

    Tuesday, June 26, 2012 1:38 AM
  • Have you tried comparing the update peer timestamps between the two rows?

    I’m not sure exactly what your scenario is. Are you trying to delete rows on the client without deleting them from the server? Because your original solution of simply deleting them from the client and intercepting the deletes in the ChangesSelected event should work out fine already.

    If it’s the server that determines what needs to be purged, I’d just create a new table containing the keys of the rows to be purged, send them down via sync, and let the client delete them (a rough sketch follows).
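
    Something along these lines on the client after sync completes (the psv_purge table, svKey column, and clientConnectionString are just illustrative names):

    // requires System.Data.SqlServerCe; purge base rows whose keys arrived
    // in the (hypothetical) synced psv_purge table. Note these deletes are
    // still picked up by change tracking, so they need to be intercepted in
    // ChangesSelected (as in your current approach) to keep them local.
    using (var conn = new SqlCeConnection(clientConnectionString))
    {
        conn.Open();
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText =
                "DELETE FROM psv WHERE svKey IN (SELECT svKey FROM psv_purge)";
            cmd.ExecuteNonQuery();
        }
    }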

    I’m not a very big fan of manipulating the timestamps, as it may have unintended results that may be even harder to detect and solve.

    Tuesday, June 26, 2012 2:00 AM
  • "have you tried comparing the update peer timestamps between the two rows?"

    It turns out update_peer_timestamp is called UV in the SyncTrace file (create_peer_timestamp is called CV) ... so those are the values I am comparing/discussing above.  That is what does not make sense ... the server row's timestamp was newer than the client row's timestamp ... but the client wouldn't apply the delete (and the ApplyChangeFailed event never fired).

    "deleting them from the client and intercepting the deletes in the ChangesSelected event should work"

    Yes, it works 100% ... and yes, I am moving to a model where the server determines what needs to be purged ... I am just trying to make the purge process more efficient for the new model.

    "am not a very big fan of manipulating the timestamps as it may have unintended results that may even be harder to detect and solve"

    Agree - that is exactly my fear, especially regarding eventually migrating to Sfx 4!!!  My earlier posts are me trying to make my solution "fit in" with the framework by adding the rows to be purged to the existing e.Context.DataSet tables and letting the framework handle the actual base table deletes and _tracking knowledge on the client ...

    ... but it is looking like I am going to continue to work outside the framework with my server-controlled purges as I currently do with my client-controlled purges.

    Regarding purging orphaned records (no longer in the filter’s scope), do you see any problem with me deleting the _tracking row when I delete the base table row?  I think this will keep me from having to intercept the delete to stop it from flowing back to the server.  Something like the sketch below:
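
    (assuming side-table tracking with psv / psv_tracking keyed on svKey; the base row is deleted first so any delete trigger has already run before its tracking row is removed)

    // requires System.Data.SqlServerCe; hypothetical purge of an orphaned
    // row plus its tracking metadata so no tombstone is left to flow back
    using (var conn = new SqlCeConnection(clientConnectionString))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        using (var cmd = conn.CreateCommand())
        {
            cmd.Transaction = tx;

            cmd.CommandText = "DELETE FROM psv WHERE svKey = @key";
            cmd.Parameters.AddWithValue("@key", "182778");
            cmd.ExecuteNonQuery();

            // remove the tracking row last, after the delete trigger has run
            cmd.CommandText = "DELETE FROM psv_tracking WHERE svKey = @key";
            cmd.ExecuteNonQuery();

            tx.Commit();
        }
    }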

    Tuesday, June 26, 2012 1:07 PM
  • The scope stores some knowledge about tombstones; if you delete the tracking table entries manually, that knowledge will not be cleaned up when you do metadata cleanup.

    Tuesday, June 26, 2012 11:54 PM