Batching in Collaboration scenario error

  • Question

  • I am using SQL Server 2008 in a collaboration scenario. I have a database with a table of about 3.5 million records, and I am having trouble doing an initial sync on this table. Initially I was not using any batching, and on some machines I would get timeouts; this was to be expected due to memory issues. So I turned on batching, and it seemed to alleviate the timeout issues, but sometimes I get out-of-memory errors on the first big initial sync. Could anyone explain why I get this error and what the fix is? I have set the batch memory size to 5 MB, 10 MB, 32 MB, etc. When I set it too low I still get timeouts, but when I start setting it higher I sometimes get out-of-memory errors. What is the recommended memory cache size for batching?

    I have looked at this other post about unexpected errors while batching: http://social.microsoft.com/Forums/en-US/syncdevdiscussions/thread/1980f01f-274c-4fb7-aa23-6020e6f4a641, and I split my big table into its own scope, but I still have problems. Can anyone suggest other remedies? Thanks.

     

    Thursday, July 15, 2010 3:12 PM

All replies

  • Anyone have any ideas?
    Thursday, July 15, 2010 11:01 PM
  • have you tried playing around with the WCF config entries?
    Friday, July 16, 2010 4:32 AM
    Moderator
  • I am not using WCF. I am syncing within a private network. Basically, I sync a SQL Server 2008 database down to a SQL Server Express database, using a SqlSyncProvider.
    Friday, July 16, 2010 3:31 PM
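For context, the setup described above (SQL Server 2008 syncing down to SQL Server Express with SqlSyncProvider, no WCF in between) can be sketched roughly as follows. The connection strings, database name, and scope name are placeholders, not details from this thread:

```csharp
using System;
using System.Data.SqlClient;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data.SqlServer;

class InitialSyncDown
{
    static void Main()
    {
        // Hypothetical connection strings and scope name -- substitute your own.
        using (var serverConn = new SqlConnection(@"Data Source=SERVER;Initial Catalog=MyDb;Integrated Security=True"))
        using (var clientConn = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=MyDb;Integrated Security=True"))
        {
            var orchestrator = new SyncOrchestrator
            {
                RemoteProvider = new SqlSyncProvider("MyScope", serverConn),
                LocalProvider  = new SqlSyncProvider("MyScope", clientConn),
                Direction      = SyncDirectionOrder.Download // initial sync down only
            };

            SyncOperationStatistics stats = orchestrator.Synchronize();
            Console.WriteLine("Changes applied: " + stats.DownloadChangesApplied);
        }
    }
}
```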
  • Does anyone else have any insight into this problem? Is there something else I can try?
    Tuesday, July 20, 2010 5:17 PM
  • Hi prem_data,

    Does a particular batch size solve the out-of-memory issue? Also, what is your machine configuration (32/64-bit, amount of RAM)? The recommendation regarding batch size is to tune it based on the data size, machine configuration, etc.

    Tuesday, July 20, 2010 7:48 PM
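For what it's worth, batching on the built-in provider is controlled by two RelationalSyncProvider properties, and MemoryDataCacheSize is expressed in KB, so a "10 MB" cache is 10240, not 10. A minimal sketch under that assumption (the scope name, connection string, and spool directory are placeholders); the same settings would normally be applied to both the local and remote providers:

```csharp
using System.Data.SqlClient;
using Microsoft.Synchronization.Data.SqlServer;

// Hypothetical scope name and connection string.
var conn = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=MyDb;Integrated Security=True");
var provider = new SqlSyncProvider("MyScope", conn);

// MemoryDataCacheSize is in KB: 10240 = 10 MB. Setting it enables batching;
// once the in-memory cache of changes exceeds this size, the provider spools
// them to .batch files on disk.
provider.MemoryDataCacheSize = 10240;

// Directory for the spooled .batch files (a temp directory is used by default).
provider.BatchingDirectory = @"C:\SyncBatches"; // hypothetical path
```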
  • The machine I am running it on right now, where I get the error, is 64-bit with 8 GB of RAM, but I have also seen it on 32-bit machines with 4 GB of RAM. I have not tried anything above 32 MB, so I don't know whether the out-of-memory issues would disappear with a larger batch size. Should I try higher? What size do you recommend?
    Tuesday, July 20, 2010 9:44 PM
  • In theory, if batching is used you should not see "out of memory" errors from the framework as long as the available memory is greater than the batch size (plus ~10% overhead). Can you look at the call stack and confirm that the error is being thrown by the provider? If so, can you please paste the call stack into the thread?

    Also what is the maximum size of any single row in the tables you are trying to sync?

    Tuesday, July 20, 2010 11:17 PM
  • The max size of any single row in the tables I am syncing is 1692.
    Wednesday, July 21, 2010 5:13 PM
  • Can you also answer the first part of my question: "Can you look at the call stack and confirm that the error is being thrown by the provider? If yes, then can you please paste the call stack to the thread?"
    Wednesday, July 21, 2010 5:51 PM
  • Sorry about that; it takes a while to get to the error state.


    Here are the error messages, along with inner exceptions and stack traces:

     

    Initial error message: An unexpected error occurred when applying batch file C:\Users\admin\AppData\Local\Temp\sync_85349be4b4ec4768984ece905934f78a1ce432c355d34af6bb68d0965edc041c\b5cdd4ea-3764-4b13-a56c-b9d5666913f5.batch. See the inner exception for more details.
    
    Stack Trace: at Microsoft.Synchronization.Data.DbSyncBatchConsumer.ApplyBatches(DbSyncScopeMetadata scopeMetadata, DbSyncSession syncSession, SyncSessionStatistics sessionStatistics)
      at Microsoft.Synchronization.Data.RelationalSyncProvider.ProcessChangeBatch(ConflictResolutionPolicy resolutionPolicy, ChangeBatch sourceChanges, Object changeDataRetriever, SyncCallbacks syncCallbacks, SyncSessionStatistics sessionStatistics)
      at Microsoft.Synchronization.KnowledgeProviderProxy.ProcessChangeBatch(CONFLICT_RESOLUTION_POLICY resolutionPolicy, ISyncChangeBatch pSourceChangeManager, Object pUnkDataRetriever, ISyncCallback pCallback, _SYNC_SESSION_STATISTICS& pSyncSessionStatistics)
      at Microsoft.Synchronization.CoreInterop.ISyncSession.Start(CONFLICT_RESOLUTION_POLICY resolutionPolicy, _SYNC_SESSION_STATISTICS& pSyncSessionStatistics)
      at Microsoft.Synchronization.KnowledgeSyncOrchestrator.DoOneWaySyncHelper(SyncIdFormatGroup sourceIdFormats, SyncIdFormatGroup destinationIdFormats, KnowledgeSyncProviderConfiguration destinationConfiguration, SyncCallbacks DestinationCallbacks, ISyncProvider sourceProxy, ISyncProvider destinationProxy, ChangeDataAdapter callbackChangeDataAdapter, SyncDataConverter conflictDataConverter, Int32& changesApplied, Int32& changesFailed)
      at Microsoft.Synchronization.KnowledgeSyncOrchestrator.DoOneWayKnowledgeSync(SyncDataConverter sourceConverter, SyncDataConverter destinationConverter, SyncProvider sourceProvider, SyncProvider destinationProvider, Int32& changesApplied, Int32& changesFailed)
      at Microsoft.Synchronization.KnowledgeSyncOrchestrator.Synchronize()
      at Microsoft.Synchronization.SyncOrchestrator.Synchronize()
      at .DatabaseAccess.DatabaseSyncing.DataSyncManager.SynchronizeProviders(SyncDirectionOrder syncOrder, String scopeName) in C:\Users\admin\TeamProjects\Main\Trunk\Source\DatabaseAccess\Sync\DataSyncManager.cs:line 292
    
    Inner exception: Failed to execute the command 'InsertCommand' for table 'ElementTag'; the transaction was rolled back. Ensure that the command syntax is correct.
    	
    Stack Trace: at Microsoft.Synchronization.Data.ChangeHandlerBase.CheckZombieTransaction(String commandName, String table, Exception ex)
      at Microsoft.Synchronization.Data.DbChangeHandler.ApplyInsert()
      at Microsoft.Synchronization.Data.ChangeHandlerBase.ApplyChange(DbSyncBatchConsumer batchConsumer)
      at Microsoft.Synchronization.Data.RelationalSyncProvider.ApplyChangesInternal(DbSyncScopeMetadata scopeMetadata, DataSet dataSet, ChangeApplicationType applyType)
      at Microsoft.Synchronization.Data.RelationalSyncProvider.ApplyChanges(DbSyncScopeMetadata scopeMetadata, DataSet dataSet, DbSyncSession DbSyncSession, Boolean commitTransaction, String batchFileName, ChangeApplicationAction& action)
      at Microsoft.Synchronization.Data.DbSyncBatchConsumer.ApplyBatches(DbSyncScopeMetadata scopeMetadata, DbSyncSession syncSession, SyncSessionStatistics sessionStatistics)
    	
    
    Inner exception: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."
    
    Stack Trace: at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
      at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
      at System.Data.SqlClient.TdsParserStateObject.ReadSniError(TdsParserStateObject stateObj, UInt32 error)
      at System.Data.SqlClient.TdsParserStateObject.ReadSni(DbAsyncResult asyncResult, TdsParserStateObject stateObj)
      at System.Data.SqlClient.TdsParserStateObject.ReadNetworkPacket()
      at System.Data.SqlClient.TdsParserStateObject.ReadBuffer()
      at System.Data.SqlClient.TdsParserStateObject.ReadByte()
      at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
      at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
      at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
      at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
      at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
      at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
      at Microsoft.Synchronization.Data.DbChangeHandler.ExecuteCommandReturnRowcount(IDbCommand command, Boolean& found)
      at Microsoft.Synchronization.Data.DbChangeHandler.ExecuteChangeApplicationCommandBase(IDbCommand command, SyncRowMetadata rowMetadata, String commandName)

     

     

    Wednesday, July 21, 2010 7:10 PM
  • Hi Ricosol,

    The stack trace points to a timeout error. Unfortunately, the SqlCommand timeout settings are not exposed in the built-in SqlSyncProvider class in the current release, but they will be configurable in the next release.

    One option for making the timeout configurable is to inherit from DbSyncProvider and implement your own class. You can look at the code sample at http://msdn.microsoft.com/en-us/library/dd918709.aspx and see if that helps.

    Also, it would be great if you could point me to a stack trace with the "out of memory" exception; I am particularly interested in seeing that one.

    Thursday, July 22, 2010 5:48 PM
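A sketch of the workaround being suggested: with DbSyncProvider you supply every ADO.NET command yourself, so CommandTimeout can be raised on each of them once the provider has been configured (as in the MSDN sample linked above). The helper name below is mine, not from the sample:

```csharp
using System.Data;
using Microsoft.Synchronization.Data;

static class SyncTimeoutHelper
{
    // Raise the timeout on every per-table command the provider will execute.
    public static void SetCommandTimeouts(DbSyncProvider provider, int seconds)
    {
        foreach (DbSyncAdapter adapter in provider.SyncAdapters)
        {
            IDbCommand[] commands =
            {
                adapter.SelectIncrementalChangesCommand,
                adapter.InsertCommand, adapter.UpdateCommand, adapter.DeleteCommand,
                adapter.InsertMetadataCommand, adapter.UpdateMetadataCommand,
                adapter.DeleteMetadataCommand, adapter.SelectRowCommand
            };
            foreach (IDbCommand cmd in commands)
            {
                if (cmd != null)
                    cmd.CommandTimeout = seconds; // seconds; 0 means no timeout for SqlCommand
            }
        }
    }
}
```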
  • Thanks for the link; I will try some of the methods there. I'll try to get the out-of-memory error to appear again, and when it does I'll post the stack trace for you.
    Friday, July 23, 2010 2:17 PM
  • Okay, here is an out-of-memory stack trace:

    Error Message: An unexpected error occurred when applying batch file C:\Users\admin\AppData\Local\Temp\sync_bbc111bb5208460ea9195b9f5b304eb80f3207c86e5e421186fcf3d1a83f1efb\a2a3b37a-3fb9-4d37-afff-0cc8a7978b1f.batch. See the inner exception for more details.
    
    Stack Trace: at Microsoft.Synchronization.Data.DbSyncBatchConsumer.ApplyBatches(DbSyncScopeMetadata scopeMetadata, DbSyncSession syncSession, SyncSessionStatistics sessionStatistics)
      at Microsoft.Synchronization.Data.RelationalSyncProvider.ProcessChangeBatch(ConflictResolutionPolicy resolutionPolicy, ChangeBatch sourceChanges, Object changeDataRetriever, SyncCallbacks syncCallbacks, SyncSessionStatistics sessionStatistics)
      at Microsoft.Synchronization.KnowledgeProviderProxy.ProcessChangeBatch(CONFLICT_RESOLUTION_POLICY resolutionPolicy, ISyncChangeBatch pSourceChangeManager, Object pUnkDataRetriever, ISyncCallback pCallback, _SYNC_SESSION_STATISTICS& pSyncSessionStatistics)
      at Microsoft.Synchronization.CoreInterop.ISyncSession.Start(CONFLICT_RESOLUTION_POLICY resolutionPolicy, _SYNC_SESSION_STATISTICS& pSyncSessionStatistics)
      at Microsoft.Synchronization.KnowledgeSyncOrchestrator.DoOneWaySyncHelper(SyncIdFormatGroup sourceIdFormats, SyncIdFormatGroup destinationIdFormats, KnowledgeSyncProviderConfiguration destinationConfiguration, SyncCallbacks DestinationCallbacks, ISyncProvider sourceProxy, ISyncProvider destinationProxy, ChangeDataAdapter callbackChangeDataAdapter, SyncDataConverter conflictDataConverter, Int32& changesApplied, Int32& changesFailed)
      at Microsoft.Synchronization.KnowledgeSyncOrchestrator.DoOneWayKnowledgeSync(SyncDataConverter sourceConverter, SyncDataConverter destinationConverter, SyncProvider sourceProvider, SyncProvider destinationProvider, Int32& changesApplied, Int32& changesFailed)
      at Microsoft.Synchronization.KnowledgeSyncOrchestrator.Synchronize()
      at Microsoft.Synchronization.SyncOrchestrator.Synchronize()
      at DatabaseAccess.DatabaseSyncing.DataSyncManager.SynchronizeProviders(SyncDirectionOrder syncOrder, String scopeName, RelationalSyncProvider localSyncProv, RelationalSyncProvider remoteSyncProv) in C:\Users\soler\TeamProjects\Main\Trunk\Source\DatabaseAccess\Sync\DataSyncManager.cs:line 330
    
    
    Inner exception: "Exception of type 'System.OutOfMemoryException' was thrown."
    	
    Inner Stack Trace: at System.Data.Common.Int64Storage.SetCapacity(Int32 capacity)
      at System.Data.RecordManager.set_RecordCapacity(Int32 value)
      at System.Data.RecordManager.GrowRecordCapacity()
      at System.Data.RecordManager.NewRecordBase()
      at System.Data.RecordManager.CopyRecord(DataTable src, Int32 record, Int32 copy)
      at System.Data.DataTable.ImportRow(DataRow row)
      at Microsoft.Synchronization.Data.DbSyncConflict.AddRemoteRow(DataRow row)
      at Microsoft.Synchronization.Data.DbChangeHandler.ApplyInsert()
      at Microsoft.Synchronization.Data.ChangeHandlerBase.ApplyChange(DbSyncBatchConsumer batchConsumer)
      at Microsoft.Synchronization.Data.RelationalSyncProvider.ApplyChangesInternal(DbSyncScopeMetadata scopeMetadata, DataSet dataSet, ChangeApplicationType applyType)
      at Microsoft.Synchronization.Data.RelationalSyncProvider.ApplyChanges(DbSyncScopeMetadata scopeMetadata, DataSet dataSet, DbSyncSession DbSyncSession, Boolean commitTransaction, String batchFileName, ChangeApplicationAction& action)
      at Microsoft.Synchronization.Data.DbSyncBatchConsumer.ApplyBatches(DbSyncScopeMetadata scopeMetadata, DbSyncSession syncSession, SyncSessionStatistics sessionStatistics)
    
    	

    Friday, July 23, 2010 9:11 PM
  • The dev team would like to have a look at your table schema, if possible, to analyze this issue further. As I said, theoretically this should not happen with the batch size you mentioned, even with the additional memory that ADO.NET DataSets consume, which can be up to 3x the amount of data.

    Also, did you have any luck using the DbSyncProvider?

    Tuesday, July 27, 2010 5:58 PM
  • DbSyncProvider worked, in a roundabout way, to get my initial sync down. I had a lot of issues setting it up initially. It seems to work because I chunked my large table into 5 separate, dynamically filtered scopes. This has added some processing time to the initial sync, but it seems to have made it more reliable. I still haven't tested it enough after my initial sync down to say definitively that it solves the problem. Really, all this work has been done to make the initial sync down reliable. What I would really like is some methodology for doing the initial sync more quickly. Is there any way to use a snapshot with the collaboration scenario? I need a quick, reliable way to push the server database down to my local database and then be able to sync back and forth after that. If you have any ideas about this, it would be extremely helpful; I already appreciate the help you have given so far!

    As for the schema, I would like to send it to you, but I would have to get it approved first, as this is a proprietary product. I can tell you that it is a highly normalized database with about 70 tables. The largest one, which I have trouble with, has about 3 million records.

    Wednesday, July 28, 2010 3:40 PM
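For reference, the "chunk one big table into several filtered scopes" approach described above can be sketched with the provisioning API. The table name, filter column, slice count, and connection string below are hypothetical stand-ins:

```csharp
using System.Data.SqlClient;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

// Hypothetical connection string, table, and filter column.
string serverConnString = @"Data Source=SERVER;Initial Catalog=MyDb;Integrated Security=True";

using (var serverConn = new SqlConnection(serverConnString))
{
    // One scope per slice; each initial sync then moves only that slice's rows.
    for (int slice = 0; slice < 5; slice++)
    {
        var scopeDesc = new DbSyncScopeDescription("BigTableScope" + slice);
        scopeDesc.Tables.Add(
            SqlSyncDescriptionBuilder.GetDescriptionForTable("BigTable", serverConn));

        var provisioning = new SqlSyncScopeProvisioning(scopeDesc);
        provisioning.Tables["BigTable"].AddFilterColumn("SliceId");
        provisioning.Tables["BigTable"].FilterClause = "[side].[SliceId] = " + slice;
        provisioning.Apply(serverConn);
    }
}
```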
  • Please have a look at the following discussion on speeding up sync deployments for large databases. The recommended way to deploy a large database is with batching, but you can try the approach discussed there and see if it works for you.

    http://social.microsoft.com/Forums/en-US/syncdevdiscussions/thread/20222567-4dde-4e41-83cd-18a7836d3eb6


    SDE, Sync Framework - http://msdn.com/sync/
    Wednesday, July 28, 2010 6:23 PM