Bulk Sync Problem

  • Question

  • Hello,

           We are trying to build a disconnected client that operates over a low-speed internet connection. Environment details are below.

     

    Client:

    Windows XP SP3

    RAM: 512 MB

    Internet connection: dial-up, with average network latency of 900 to 1,000 ms to the destination server

    Database: SQL CE 3.5 SP1

    Records: 75,000

    Fields: 12

     

     

    Server:

    Windows Server 2003 R2

    RAM: 2 GB

    Database: SQL Server 2005

    Internet connection: 512 Kbps dedicated (1:1)

     

          I am facing performance issues when transferring such a large volume of data using Sync Framework. We are unable to transfer this load to the server, so we cannot sync the table.

       Can anyone tell me what the best practice is for performing this operation using Sync Framework? Any resources? Currently, for new records we use a bulk insert command, which performs the task in 15 seconds. I am also not clear on how many round trips Sync Framework might take. Can anyone suggest best practices for a bulk sync operation?
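
    For reference, the kind of direct bulk load I mean looks roughly like this (a simplified sketch using SqlBulkCopy; the table name is made up and our actual code differs):

        using System.Data;
        using System.Data.SqlClient;

        // Sketch of a direct bulk load (not the Sync Framework path). Assumes a
        // hypothetical dbo.Orders table and a DataTable already filled with the new rows.
        static void BulkLoad(DataTable newRows, string serverConnectionString)
        {
            using (SqlConnection connection = new SqlConnection(serverConnectionString))
            {
                connection.Open();
                using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
                {
                    bulkCopy.DestinationTableName = "dbo.Orders"; // hypothetical table
                    bulkCopy.BatchSize = 5000;      // rows sent per round trip
                    bulkCopy.BulkCopyTimeout = 600; // seconds; generous for a slow link
                    bulkCopy.WriteToServer(newRows);
                }
            }
        }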

     

    Nilkanth Desai

     

    Wednesday, October 29, 2008 3:36 AM


All replies

  • Hi Nilkanth,

     

    Take a look at this article: http://msdn.microsoft.com/en-us/library/bb902828.aspx

    It talks about using batching so that you don't get all the data in one shot and therefore avoid timeouts and out-of-memory errors.
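
    Roughly, the batching setup from that article looks like the following (a sketch only; usp_GetNewBatchAnchor is the stored procedure the article has you create, and the parameter directions follow its sample, so adjust them to match your own procedure):

        using System.Data;
        using System.Data.SqlClient;
        using Microsoft.Synchronization.Data;
        using Microsoft.Synchronization.Data.Server;

        // Sketch: enable batching on the offline (Sync Services for ADO.NET) server
        // provider by capping the batch size and supplying a new-anchor command.
        static DbServerSyncProvider CreateBatchingServerProvider(string connectionString)
        {
            DbServerSyncProvider provider = new DbServerSyncProvider();
            provider.Connection = new SqlConnection(connectionString);
            provider.BatchSize = 500; // upper bound on changes enumerated per batch

            SqlCommand anchorCommand = new SqlCommand();
            anchorCommand.CommandType = CommandType.StoredProcedure;
            anchorCommand.CommandText = "usp_GetNewBatchAnchor"; // created per the article
            anchorCommand.Parameters.Add("@" + SyncSession.SyncLastReceivedAnchor, SqlDbType.Timestamp);
            anchorCommand.Parameters.Add("@" + SyncSession.SyncMaxReceivedAnchor, SqlDbType.Timestamp).Direction = ParameterDirection.InputOutput;
            anchorCommand.Parameters.Add("@" + SyncSession.SyncNewReceivedAnchor, SqlDbType.Timestamp).Direction = ParameterDirection.Output;
            anchorCommand.Parameters.Add("@" + SyncSession.SyncBatchSize, SqlDbType.Int);
            anchorCommand.Parameters.Add("@" + SyncSession.SyncBatchCount, SqlDbType.Int).Direction = ParameterDirection.InputOutput;
            provider.SelectNewAnchorCommand = anchorCommand;

            return provider;
        }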

     

    Friday, October 31, 2008 4:00 AM
  • Batching only applies to changes made since the last time you synced. What about the first table initialization? That's the problem I've run into. We have a 75,000-row table with 30 columns, and the client needs to initialize the table for the first time.
    Tuesday, September 15, 2009 2:32 PM
  • Hi Nilkanth,

    In Sync Framework 2.0, batching is supported for both the initial synchronization session and subsequent sessions by the Sync Framework database providers.

    http://msdn.microsoft.com/en-us/library/dd918908(SQL.105).aspx 

    Understanding Batching

    Batching changes is ideal for this type of scenario because it provides the following capabilities:

    • Enables the developer to control the amount of memory (the memory data cache size) that is used to store changes on the client. This can eliminate out-of-memory errors on the client.

    • Enables Sync Framework to restart a failed synchronization operation from the start of the current batch, rather than the start of the entire set of changes.

    • Can reduce or eliminate the need to re-download changes or re-enumerate changes on the server due to failed operations.

    Batching is simple to configure for 2-tier and n-tier applications, and it can be used for the initial synchronization session and for subsequent sessions.
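
    A minimal configuration sketch (the scope name, connection strings, and batching directory are placeholders, and the scope is assumed to be provisioned already; MemoryDataCacheSize is in KB):

        using System.Data.SqlClient;
        using System.Data.SqlServerCe;
        using Microsoft.Synchronization;
        using Microsoft.Synchronization.Data.SqlServer;
        using Microsoft.Synchronization.Data.SqlServerCe;

        // Sketch: enable batching on the Sync Framework 2.0 database providers.
        static void SyncWithBatching(string serverConn, string clientConn)
        {
            SqlSyncProvider serverProvider =
                new SqlSyncProvider("MyScope", new SqlConnection(serverConn));
            SqlCeSyncProvider clientProvider =
                new SqlCeSyncProvider("MyScope", new SqlCeConnection(clientConn));

            // Spool changes to batch files on disk once they exceed ~500 KB in
            // memory, instead of materializing the whole change set at once.
            serverProvider.MemoryDataCacheSize = 500;             // KB
            serverProvider.BatchingDirectory = @"C:\SyncBatches"; // batch spool folder
            clientProvider.MemoryDataCacheSize = 500;
            clientProvider.BatchingDirectory = @"C:\SyncBatches";

            SyncOrchestrator orchestrator = new SyncOrchestrator();
            orchestrator.LocalProvider = clientProvider;
            orchestrator.RemoteProvider = serverProvider;
            orchestrator.Direction = SyncDirectionOrder.UploadAndDownload;
            orchestrator.Synchronize();
        }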


    Thanks,
    Nina

    Wednesday, September 23, 2009 5:04 PM
    Moderator
  • What I meant by my earlier post is that we have a 75,000-record table. We turn on SQL change tracking for the first time, and then do the initial sync with a batch size of 50. Change tracking starts at version 0 and goes up incrementally from there. So even though the batch size is 50, the first batch would contain all 75,000 initial records, because they are at version 0. It would also contain 49 more changes if the change tracking version for the table was 50 or higher.

    Batch size refers to the number of changes, not necessarily the number of records those changes cover. So the "eliminate out-of-memory errors" point really wouldn't alleviate the problem here.

    Sorry for the confusion.
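
    One way to see this is to compare the server's current change tracking version with the version a new client starts from (a sketch; the connection string is a placeholder, and change tracking requires SQL Server 2008 or later):

        using System;
        using System.Data.SqlClient;

        // Sketch: a client that has never synced effectively compares against
        // version 0, so every existing row qualifies as a "change" on first sync.
        static void ShowTrackingVersion(string serverConn)
        {
            using (SqlConnection connection = new SqlConnection(serverConn))
            {
                connection.Open();
                using (SqlCommand cmd = new SqlCommand(
                    "SELECT CHANGE_TRACKING_CURRENT_VERSION()", connection))
                {
                    Console.WriteLine("Server version: {0}", cmd.ExecuteScalar());
                }
            }
        }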
    Friday, October 2, 2009 10:04 PM