Is this a bug with the Sync Framework?

  • Question

  • This appears to be a serious bug in the Sync Framework 1.0 (using Sync Services for ADO.NET 2.0 as the provider).

    When any changes submitted by the local sync provider fail to be applied by the remote sync provider, the UploadChangesFailed counter has a non-zero value in the SyncStatistics returned by SyncAgent.Synchronize(). It appears that the SyncAgent (or the client-side sync framework) still goes ahead and calls the SetSentAnchor() method. As a result, those failed changes are no longer picked up by the subsequent GetChanges() call; they are "forgotten" completely from then on. I can simulate this scenario by simply denying Execute permission on the Insert stored procedure on the destination database.

    This would be catastrophic. Can anyone clarify whether this is a bug, by design, or simply incorrect usage on our part?
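    A minimal sketch of how we observe this (provider setup and table configuration are omitted and specific to our app; SyncAgent, SyncStatistics and UploadChangesFailed are the Sync Services for ADO.NET types named above):

```csharp
using System;
using Microsoft.Synchronization.Data;

// Sketch only: assumes a SyncAgent already configured with
// local and remote providers (connection setup omitted).
SyncAgent agent = new SyncAgent();
// ... assign agent.LocalProvider / agent.RemoteProvider here ...

SyncStatistics stats = agent.Synchronize();

// UploadChangesFailed comes back non-zero, yet the sent anchor has
// still been advanced, so the failed rows are not returned by the
// next GetChanges() call.
if (stats.UploadChangesFailed > 0)
{
    Console.WriteLine("{0} upload change(s) failed and will NOT be retried next sync.",
                      stats.UploadChangesFailed);
}
```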
    Thursday, May 14, 2009 4:02 AM

Answers

  • Thanks for the info.  We're currently looking into an n-tier "peer to peer" model for the previous hub-spoke scenario.

    • Marked as answer by MichaelZh Tuesday, June 16, 2009 4:07 AM
    Tuesday, June 16, 2009 4:06 AM

All replies

  • This is by design. The rows that raised ApplyChangeFailed are considered conflicts, and the user's application should contain the logic to handle them by implementing the ApplyChangeFailed event on the providers. In this case, if no action is chosen (i.e. the event is not handled by your app), the default action (ApplyAction.Continue) is used.

    The final SetSentAnchor() call is needed so that the other changes (those that were applied on the other side) will not be sent again in subsequent syncs.
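    A rough sketch of wiring the event on the server provider (the logging helper is a placeholder for your own code; SyncConflict and ApplyAction are from Microsoft.Synchronization.Data):

```csharp
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.Server;

// serverProvider is your DbServerSyncProvider instance.
serverProvider.ApplyChangeFailed += (sender, e) =>
{
    // e.Conflict describes why the row failed,
    // e.g. e.Conflict.ConflictType == ConflictType.ErrorsOccurred.
    LogConflict(e.Conflict);  // placeholder for your own logging

    // Without a handler the framework defaults to ApplyAction.Continue:
    // the row is skipped and, because the sent anchor still advances,
    // it will not be offered again on the next sync.
    e.Action = ApplyAction.Continue;
};
```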

    thanks
    Yunwen
    This posting is provided "AS IS" with no warranties, and confers no rights.
    Friday, May 22, 2009 7:49 PM
    Moderator
  • Thanks for your reply. 

    To start, let me quote the text from the MS documentation on the SyncGroup class:

    "A synchronization group is a mechanism to ensure consistent application of changes for a set of tables. If tables are included in a synchronization group, changes to those tables are transferred as a unit and applied in a single transaction. If any change in the group fails, changes for the whole group are retried on the next synchronization."

    If I handle the conflict in the ApplyChangeFailed event in this scenario (ConflictType.ErrorsOccurred, because there is no Execute permission on the Insert stored procedure), the most appropriate action is to log the error and then abort the transaction on the server.  Sadly, Sync Services doesn't give me an option to "abort" the transaction.  None of the four available actions (Continue, RetryApplyingRow, RetryWithForceWrite, RetryNextSync) will abort the transaction.  To me, this is a pretty bad design.

    You also indicated that the client provider calls SetSentAnchor() regardless, so that the successful updates won't be sent again in future syncs.  I find this statement even more confusing, because it contradicts your basic synchronisation principle of applying changes as an atomic "batch".  It also suggests that, by design, failed updates will never be retried by the client.

    Michael

    Monday, May 25, 2009 12:57 AM
  • To clarify one thing first: you are using sync between SQL Server and a SQL CE client, right?

    The sync group can ensure that the changes to the tables within it are applied in a single transaction, as the doc states. Since the user app implements the ApplyChangeFailed event, the transaction can be aborted by throwing an application-level exception -- i.e. aborting the sync.

    Change batching is not supported in the Sync Services V2 release; that might be causing the confusion here.
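    As a sketch of that workaround (the exception type is arbitrary, and whether the rollback fully covers the batch depends on your provider setup): throwing from the handler unwinds out of ApplyChanges, so Synchronize() fails instead of advancing the anchor past the failed rows.

```csharp
using System;
using Microsoft.Synchronization.Data;

// serverProvider is your DbServerSyncProvider instance.
serverProvider.ApplyChangeFailed += (sender, e) =>
{
    if (e.Conflict.ConflictType == ConflictType.ErrorsOccurred)
    {
        // Aborting the sync from the application level: the exception
        // propagates out of the apply phase and the server-side
        // transaction is rolled back rather than partially committed.
        throw new InvalidOperationException(
            "Aborting sync: " + e.Conflict.ErrorMessage);
    }
    e.Action = ApplyAction.Continue;
};
```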

    Hope this clears some doubts.

    thanks
    Yunwen


    This posting is provided "AS IS" with no warranties, and confers no rights.
    Tuesday, May 26, 2009 7:48 PM
    Moderator
  • We're syncing from SQL Server to SQL Server using DbServerSyncProvider, and it's upload only.

    It would be bad to throw an exception inside ApplyChangeFailed just to abort the transaction.  Internally, Sync Services/Framework would already have caught an exception before raising the ApplyChangeFailed event, so it doesn't make sense to throw another one here.  There should simply be a way to abort the transaction gracefully.  I believe this is one of the areas of the Framework that Microsoft should look into and improve.

    We're not using batching for upload at all.  However, we did look into the SyncGroupMetadata (passed in) and the SyncContext (returned) in the ApplyChanges method implemented by the server.  What seems odd is that both the MaxAnchor value and the NewAnchor value passed into and returned by the server are null at all times.  They appear to be redundant.  Again, is this by design?

    cheers,

    Michael
    Wednesday, May 27, 2009 4:35 AM
  • Sorry Michael for the delay; I had been heads down on some other stuff for the past couple of days.

    Thanks for your feedback on an action for aborting the sync transaction. The original design thinking was that this is a real corner case and the user application can surely handle it at the application level; hence the option is not there. This is good feedback from a usability perspective.

    Based on the thread, my understanding is that you are implementing SQL Server to SQL Server sync, right?  Regarding the MaxAnchor and NewAnchor: I don't have your implementation details, so I cannot comment on why they are NULL, but they are needed (internally these two values are used as part of the logic to determine whether this is the last batch).

    Some more info on batching support can be found at ms-help://MS.SynchronizationServices.v1.EN/syncdata1/html/5e824364-474b-47a3-8c14-d0dd75b3da33.htm.

    hope this helps.

    thanks
    Yunwen


    This posting is provided "AS IS" with no warranties, and confers no rights.
    Tuesday, June 2, 2009 7:57 PM
    Moderator
  • In regard to MaxAnchor and NewAnchor being null, this can be reproduced with your sample code under the client-server synchronisation scenario.

    The local provider is based on Microsoft.Synchronization.Data.ClientSyncProvider.
    The remote provider is based on Microsoft.Synchronization.Data.Server.DbServerSyncProvider.

    Synchronization is Upload only.

    Put breakpoints at the beginning and end of the server-side ApplyChanges method and examine the incoming and outgoing NewAnchor and MaxAnchor values.  You'll find that they are both null.
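    Alternatively, the same check as a sketch without the debugger, by subclassing DbServerSyncProvider (the trace output is illustrative; MaxAnchor and NewAnchor here are the SyncContext properties mentioned above):

```csharp
using System;
using System.Data;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.Server;

public class TracingServerSyncProvider : DbServerSyncProvider
{
    public override SyncContext ApplyChanges(
        SyncGroupMetadata groupMetadata, DataSet dataSet, SyncSession syncSession)
    {
        SyncContext context = base.ApplyChanges(groupMetadata, dataSet, syncSession);

        // In our upload-only, non-batched sync both anchors come back null.
        Console.WriteLine("MaxAnchor: {0}, NewAnchor: {1}",
            context.MaxAnchor == null ? "null" : "set",
            context.NewAnchor == null ? "null" : "set");

        return context;
    }
}
```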

    Thanks, 

    Michael Zhu

    Thursday, June 4, 2009 12:09 AM
  • Hi Michael,

    With the current release, batching is only supported for DOWNLOAD scenarios. So in your case, which is UPLOAD only, that is likely the reason you are seeing NULLs for those anchors.

    BTW, for hub-spoke scenarios, the "officially" supported client is SQL CE. It looks like you are using SQL Server as your client, so you will need to implement your own db sync provider.

    Hope this clarifies some doubts.

    Thanks
    Yunwen
    This posting is provided "AS IS" with no warranties, and confers no rights.
    Friday, June 12, 2009 7:56 PM
    Moderator
  • Thanks for the info.  We're currently looking into an n-tier "peer to peer" model for the previous hub-spoke scenario.

    • Marked as answer by MichaelZh Tuesday, June 16, 2009 4:07 AM
    Tuesday, June 16, 2009 4:06 AM
  • You can use DbSyncProvider, which supports the peer-to-peer scenario.  I have used it since the beta release; it works nicely but lacks batching support.  You need to write a custom implementation for transferring the dataset between servers (if you use WCF), otherwise it is easy to run into insufficient-memory issues.
    Wednesday, June 17, 2009 5:16 PM