Sync Services Failing on a Large Table

  • Question

  • I have a custom-built Sync Services application using .NET 3.5, SQL Server CE 3.5, and SQL Server 2008.

    In testing, it has been working fine except for one table that unfortunately contains a large number of rows (~100,000).

    On the initial synchronization after a user installs the application (the first download into an empty .sdf), the application resets all of the anchors so it can pull all of the data down from the server database. On this one table, I get the error "Unable to enumerate changes at the DbServerSyncProvider for table 'myTable1' in synchronization group 'myTable1SyncTableSyncGroup'".

    I have set the retention period to 60 days and recently turned off auto cleanup to see if it makes a difference, but so far the sync continues to fail.
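For reference, assuming SQL Server 2008's built-in change tracking is in use here, the retention and cleanup settings described above map to a database-level option (the database name is a placeholder):

```sql
-- Keep change-tracking data for 60 days and stop the automatic cleanup job,
-- matching the settings described in the post above.
ALTER DATABASE MyDatabase
SET CHANGE_TRACKING (CHANGE_RETENTION = 60 DAYS, AUTO_CLEANUP = OFF);
```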

    I find that if I delete all the data in the problem table, sync the app, and then add the data back into the table, it often works. This leads me to believe there is possibly an issue with change tracking on large data sets, but I'm not sure how to proceed.

    I have also occasionally been getting a System.OutOfMemoryException on the web service, but if I reduce the size of the dataset I'm trying to sync I can get around that issue, so I'm not convinced the two are related.

    Has anyone experienced similar issues with large data sets and found a way to mitigate them?

    The stack trace on the WCF service I'm using is as follows:

    WcfSyncServiceLibrary.custom2009CacheSyncService.GetChanges(SyncGroupMetadata groupMetadata, SyncSession syncSession) in C:\projects\custom\WcfSyncServiceLibrary\custom2009Cache.Server.SyncContract.cs:line 96
    SyncInvokeGetChanges(Object , Object[] , Object[] )
    System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
    System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
    System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
    System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)
    System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc& rpc)
    System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc& rpc)
    System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc& rpc)
    System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)

    Thanks,
    Matthew

    Thursday, March 4, 2010 5:32 PM

Answers

  • Hi Matthew,

    Have you tried turning on tracing, or inspecting the inner exception to see whether it contains any further information beyond "Unable to enumerate changes..."?

    As for your second question, it's not uncommon to encounter memory issues when serializing large DataSets. The DataSet keeps two copies of each row (a before and an after version, if I remember correctly), so for every row you download or upload you actually have two copies in the serialized message. You can turn on WCF message logging to see this for yourself.

    As a workaround to serializing a DataSet, you may want to look up posts on using a DataSetSurrogate (you convert the DataSet to a custom object with the rows contained in a list). In a previous project, I ended up converting the DataSet to a DataSetSurrogate, then converting the DataSetSurrogate to a byte array and passing the byte array to WCF. On the receiving side, you simply do the opposite.
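    The round-trip described above can be sketched roughly as follows. `DataSetSurrogate` here refers to the class from Microsoft's support sample (it exposes a constructor taking a DataSet and a `ConvertToDataSet` method); the `BinaryFormatter` usage and the `DataSetTransport` wrapper are assumptions about how the surrogate gets turned into bytes, not code from this thread:

    ```csharp
    using System.Data;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    public static class DataSetTransport
    {
        // Sender side: DataSet -> surrogate -> byte[] for the WCF message body.
        public static byte[] Pack(DataSet ds)
        {
            var surrogate = new DataSetSurrogate(ds); // rows held in serializable lists
            var formatter = new BinaryFormatter();
            using (var ms = new MemoryStream())
            {
                formatter.Serialize(ms, surrogate);
                return ms.ToArray();
            }
        }

        // Receiver side: byte[] -> surrogate -> DataSet.
        public static DataSet Unpack(byte[] bytes)
        {
            var formatter = new BinaryFormatter();
            using (var ms = new MemoryStream(bytes))
            {
                var surrogate = (DataSetSurrogate)formatter.Deserialize(ms);
                return surrogate.ConvertToDataSet();
            }
        }
    }
    ```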

    Another option you have is to move to a collaboration scenario in Sync Framework v2 and use the built-in memory-size-based batching.
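    The batching mentioned above lives on the v2 relational providers; a minimal sketch (scope name, connection, and directory are hypothetical, and the property names should be verified against the Sync Framework 2.0 API):

    ```csharp
    using Microsoft.Synchronization.Data.SqlServer;

    // Batch changes once the in-memory cache exceeds ~10 MB, spooling to disk.
    var provider = new SqlSyncProvider("MyScope", serverConnection);
    provider.MemoryDataCacheSize = 10240;           // size is specified in KB
    provider.BatchingDirectory = @"C:\SyncBatches"; // where batch files are written
    ```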

    Cheers,

    JuneT 
    • Proposed as answer by Patrick S. Lee Friday, March 5, 2010 12:16 AM
    • Marked as answer by MattMJB Friday, March 5, 2010 4:52 PM
    Thursday, March 4, 2010 6:23 PM
    Moderator
  • Thanks, I will check those out.

    I figured out my initial enumeration problem, and the root cause appears to be the style of testing I was doing. I would load the table with a large number of records based on a date range and then test the sync. If it didn't work, I would delete all the records, resync, and then try loading a slightly smaller subset. I think the net effect was that I was generating far too much change-tracking data, so that even when I tried to sync against the table with a very small record set after resetting the anchor, the sheer volume of previous change-tracking data was crashing it.

    It makes sense now that the only scenario that worked was syncing when the problem table was empty. If I then reloaded the data and resynced, Sync Services was able to properly set my initial min_valid_version and didn't have to go back through all the previous enumerations.
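    Assuming SQL Server 2008 change tracking, the version boundary referred to above can be inspected directly (the table name is taken from the error message earlier in the thread):

    ```sql
    -- If the anchor a client presents is older than MinValidVersion,
    -- incremental enumeration is no longer possible for that client.
    SELECT CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('dbo.myTable1')) AS MinValidVersion,
           CHANGE_TRACKING_CURRENT_VERSION() AS CurrentVersion;
    ```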

    To resolve this in my test environment, I simply turned change tracking off on the table and then turned it back on. Now I'm down to just my memory issue, which I will try to resolve using junet's suggestions.
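    Assuming SQL Server 2008 change tracking, the off-and-on reset described above looks like this (table name taken from the error message; the `TRACK_COLUMNS_UPDATED` setting is an assumption):

    ```sql
    -- Disabling change tracking discards all accumulated tracking data for the
    -- table; re-enabling it starts tracking fresh from the current version.
    ALTER TABLE dbo.myTable1 DISABLE CHANGE_TRACKING;
    ALTER TABLE dbo.myTable1 ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = OFF);
    ```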

    Cheers,
    Matthew
    • Marked as answer by MattMJB Friday, March 5, 2010 4:53 PM
    Friday, March 5, 2010 4:52 PM

All replies

  • Thanks June. 

    I have tried adding additional tracing to see the inner exception but wasn't able to find any. Below are the additional traces I added to the WCF service. Is there anything else that I could add that might be helpful in terms of debugging?

    <sources>
      <source name="System.ServiceModel"
              switchValue="Information, ActivityTracing"
              propagateActivity="true">
        <listeners>
          <add name="xml" />
        </listeners>
      </source>
      <source name="CardSpace">
        <listeners>
          <add name="xml" />
        </listeners>
      </source>
      <source name="System.IO.Log">
        <listeners>
          <add name="xml" />
        </listeners>
      </source>
      <source name="System.Runtime.Serialization">
        <listeners>
          <add name="xml" />
        </listeners>
      </source>
      <source name="System.IdentityModel">
        <listeners>
          <add name="xml" />
        </listeners>
      </source>
    </sources>
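    For the "xml" listener referenced by each source above to emit anything, the config also needs a matching shared listener definition; a typical one looks like this (the log file path is a placeholder):

    ```xml
    <sharedListeners>
      <add name="xml"
           type="System.Diagnostics.XmlWriterTraceListener"
           initializeData="C:\logs\WcfTrace.svclog" />
    </sharedListeners>
    ```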


    Thanks for your workaround suggestions as well. I will look into those further.

    Matthew
    Thursday, March 4, 2010 10:36 PM
  • Here are some links for enabling tracing: http://msdn.microsoft.com/en-us/library/ee617387(SQL.105).aspx and http://msdn.microsoft.com/en-us/library/cc807160(SQL.105).aspx

    It also helps if you subscribe to the different events such as SelectingChanges, ApplyingChanges, etc., so you can see exactly where it's failing.
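    A rough sketch of hooking those server-side provider events (the event-args property names are from memory of the Sync Services for ADO.NET API and should be verified):

    ```csharp
    using System.Diagnostics;
    using Microsoft.Synchronization.Data.Server;

    // Log each phase so a failure can be pinned to a specific table or step.
    var serverProvider = new DbServerSyncProvider();
    serverProvider.SelectingChanges += (sender, e) =>
        Trace.WriteLine("SelectingChanges for group: " + e.GroupMetadata.GroupName);
    serverProvider.ApplyingChanges += (sender, e) =>
        Trace.WriteLine("ApplyingChanges");
    serverProvider.SyncProgress += (sender, e) =>
        Trace.WriteLine("SyncProgress: " + e.TableProgress.TableName);
    ```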

    cheers,

    junet
    Friday, March 5, 2010 1:27 AM
    Moderator
  • JuneT, I know we've talked about this before, but do you have an example of where you used DataSet surrogates?
    Travich
    Friday, March 5, 2010 8:23 PM
  • travich,

    let me write one and post later.

    junet

    Saturday, March 6, 2010 1:47 AM
    Moderator