Improve Performance of Sync for Devices

  • Question

  • I'm using Sync Services for Devices and I'm encountering slowness applying changes downloaded from the server.  It seems to be worse on devices with less memory.  The devices are Windows Mobile 6.1.  I've applied this hotfix (http://support.microsoft.com/kb/973058) but I don't really notice much of a change.  Remove Programs now shows SP1, so it looks like it was installed on the device correctly.  We use a WCF service to get the data, but I'm confident the main issue isn't network transfer time.  We're adding compression to help with that, but from our logging I can see the majority of the time is spent after the data is downloaded, while it is being applied.

    Is there anything that can be done to improve performance?  I saw some old threads but didn't get a lot out of them.  Any more ideas appreciated.

    Monday, February 28, 2011 8:37 PM

All replies

  • if you're using WCF, you can clear the DataSets from the SyncContext returned by ApplyChanges. When you upload changes, the same change DataSet is actually returned inside the SyncContext, plus additional DataSets for conflicts (imagine uploading 1,000 rows and getting the same DataSet back on the WCF call).

    you may also want to look at binary serialization. have a look at http://jtabadero.wordpress.com/2010/03/08/sync-framework-wcf-based-synchronization-for-offline-scenario-%e2%80%93-using-custom-dataset-serialization/
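    To make the binary-serialization suggestion concrete, here is a minimal sketch (mine, not from the linked post) of shrinking a DataSet payload on the full-.NET server side by switching its remoting format to binary before it crosses the wire. Note the Compact Framework has no BinaryFormatter, which is why the linked post builds a custom surrogate-based serializer for the device end.

    ```csharp
    using System.Data;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    static byte[] SerializeBinary(DataSet ds)
    {
        // Default is SerializationFormat.Xml, which is much larger on the wire.
        ds.RemotingFormat = SerializationFormat.Binary;
        using (var ms = new MemoryStream())
        {
            new BinaryFormatter().Serialize(ms, ds);
            return ms.ToArray();
        }
    }
    ```

    Comparing the byte count against the XML-serialized form of the same DataSet is an easy way to see whether the switch is worth the effort for your payloads.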

    Monday, February 28, 2011 11:28 PM
  • Also see this post if it helps - Sync Provider for Devices 1 (SP1 + Hotfix) slow during Download state


    This posting is provided AS IS with no warranties, and confers no rights
    Tuesday, March 1, 2011 2:12 AM
  • JuneT,

    My slowness isn't when applying changes from client to server but when applying changes from the server onto the client.  We have some nightly processes that touch a lot of rows (~3000), most of them updates.  Like I said, I've got some custom logic (see this post for the original roughed-out idea) and compression that make the transfer OK.  It seems to be the application step (i.e. doing all the inserts/updates/deletes against the local db) that is slow.

    Your suggestion seems to be about saving some time with what comes back after pushing updates from client to server.  Any time saving is a plus, of course.  So to make sure I understand: are you saying that in my service, or in my server-side class that derives from DbServerSyncProvider, I can clear out the DataSet on the SyncContext that I return in order to reduce traffic and memory usage?



    public override SyncContext ApplyChanges(SyncGroupMetadata groupMetadata, DataSet dataSet, SyncSession syncSession)
    {
        SyncContext syncContext = base.ApplyChanges(groupMetadata, dataSet, syncSession);
        syncContext.DataSet = null;
        return syncContext;
    }

    Tuesday, March 1, 2011 2:21 AM
  • yes. clearing the dataset will minimize the payload and consequently the memory usage on the client (unless you need something from that dataset). there's actually another dataset under SyncProgress as well that contains the conflicting rows (if you have a conflict, you'll have two rows, one for each side; multiply that by the number of conflicting rows).

    binary serialization may also help reduce the payload and memory.

    the blog link i have above contains links to another blog from another Sync Fx team member detailing the memory issues.

    another thing you might want to check is whether there are collation differences between client and server. I seem to recall some issues around SQL CE rebuilding indexes when there's a difference in collation.

    Tuesday, March 1, 2011 2:42 AM
  • Thanks for the thoughts.  I really think my issue is the application of the changes on the client (doing all the inserts, updates, and deletes).  I saw this post about intercepting and using a table-direct command to do the inserts, which I'm exploring.
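    For readers not familiar with the table-direct idea, here is a rough sketch (not the poster's actual code; the table and column names are made up) of bypassing the per-row adapter commands on SQL CE and pushing downloaded inserts through an updatable SqlCeResultSet, which avoids repeated query parsing on the device:

    ```csharp
    using System.Data;
    using System.Data.SqlServerCe;

    static void BulkInsert(SqlCeConnection conn, DataTable downloaded)
    {
        using (SqlCeCommand cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.TableDirect;
            cmd.CommandText = "MyTable"; // base table name, not a SELECT
            using (SqlCeResultSet rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable))
            {
                foreach (DataRow row in downloaded.Rows)
                {
                    // Ordinals follow the base table's column order.
                    SqlCeUpdatableRecord rec = rs.CreateRecord();
                    rec.SetValue(0, row["Id"]);
                    rec.SetValue(1, row["Name"]);
                    rs.Insert(rec);
                }
            }
        }
    }
    ```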
    Tuesday, March 1, 2011 3:15 AM
  • Yeah - I've reviewed that.  Looking for other ideas for speeding up the application of the updates from the server to the client.
    Tuesday, March 1, 2011 3:16 AM
  • One thing you have to note with this approach is handling conflicts, since you're overriding the way Sync Fx inserts/updates/deletes the rows by doing it directly. You'll have to add some more bits to check whether an operation caused a conflict.
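    As a hedged illustration of that extra bookkeeping (a sketch only; the table name, index name, and LastModified tracking column are hypothetical, and real conflict detection would hook into the framework's change-tracking metadata instead), one simple check before applying a downloaded update directly is to seek the local row and see whether it changed locally since the last sync:

    ```csharp
    using System;
    using System.Data;
    using System.Data.SqlServerCe;

    static bool IsLocallyModified(SqlCeConnection conn, int id, DateTime lastSync)
    {
        using (SqlCeCommand cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.TableDirect;
            cmd.CommandText = "MyTable";   // hypothetical table
            cmd.IndexName = "PK_MyTable";  // Seek requires an index
            using (SqlCeResultSet rs = cmd.ExecuteResultSet(ResultSetOptions.Scrollable))
            {
                if (!rs.Seek(DbSeekOptions.FirstEqual, id) || !rs.Read())
                    return false; // row is gone locally: a different kind of conflict
                // Locally touched after the last sync counts as a conflict here.
                return rs.GetDateTime(rs.GetOrdinal("LastModified")) > lastSync;
            }
        }
    }
    ```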
    Tuesday, March 1, 2011 3:45 AM
  • Losing the conflict handling is a good point to weigh against the gains.
    Tuesday, March 1, 2011 1:06 PM
  • Anyone out there have thoughts on speeding up updates?  It looks like it is taking over 30 minutes to apply approximately 11,000 updates to a table.  This is on the very high end of work for us, but even 3,000 updates is taking 10 minutes or so.
    Tuesday, March 1, 2011 4:10 PM
  • Right now, taking over the I/U/D for my main problem table with a table-direct SqlCeResultSet and then clearing the DataSet so the sync framework doesn't apply the changes seems like the best approach. I'll post sample code if I get this working well.
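    One way to wire that up (a sketch under my own assumptions, not the poster's code; "MyBigTable" and ApplyWithResultSet are placeholders) is to handle the client provider's ApplyingChanges event, apply the problem table yourself with table-direct commands, and then clear its rows out of the change DataSet so Sync Fx skips it:

    ```csharp
    using System.Data;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServerCe;

    static void HookApplyingChanges(SqlCeClientSyncProvider clientProvider)
    {
        clientProvider.ApplyingChanges += (sender, e) =>
        {
            if (e.Changes.Tables.Contains("MyBigTable")) // hypothetical table name
            {
                DataTable big = e.Changes.Tables["MyBigTable"];
                ApplyWithResultSet(big); // custom table-direct I/U/D, not shown
                big.Clear();             // nothing left for the framework to apply
            }
        };
    }
    ```

    Clearing the table's rows rather than removing the table itself keeps the change DataSet's schema intact for the rest of the apply pass.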
    Wednesday, March 2, 2011 2:00 AM
  • Hi Bryan,

    how do you manage to "reset" the resulting changes in the local database so your custom I/U/Ds don't get uploaded to the server?

    My current approach is to store all involved tables and after sync finished I call


     I would be interested in alternative, better-performing solutions.



    Wednesday, March 2, 2011 4:05 PM
  • I'm still working that out.  The table that I'm doing the custom work for is download-only.  But I probably still have to clean up the change tracking, right?  Or will the next batch of updates look like conflicts?

    I've added code to call AcceptChanges and the call executes fine, but now I'm getting a NullReferenceException from ApplyingDeletes in the framework.  I'm not sure what the problem is.

    Wednesday, March 2, 2011 6:03 PM
  • Actually, if I move the AcceptChanges(MY_TABLE_NAME) call to the _localProvider_ChangesApplied event handler, I'm okay.  Since this table is download-only and I control the updates to it, I may not need to do this, but it seems like the right thing, so I'm leaving it in.
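    The ordering fix described above could look roughly like this (a sketch, assuming a SqlCeClientSyncProvider and a hypothetical table name; the key point is that AcceptChanges runs only after the framework has finished its own apply pass, not in the middle of it):

    ```csharp
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServerCe;

    static void HookChangesApplied(SqlCeClientSyncProvider localProvider)
    {
        const string MY_TABLE_NAME = "MyBigTable"; // hypothetical

        localProvider.ChangesApplied += (sender, e) =>
        {
            // Clears local change tracking for the rows we wrote ourselves,
            // so they are not echoed back to the server on the next upload.
            localProvider.AcceptChanges(MY_TABLE_NAME);
        };
    }
    ```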
    Wednesday, March 2, 2011 6:40 PM