What is the overhead for Sync Services for ADO.NET?

  • Question

  • Hi,

    Are there any statistics on the overhead and actual data transfer of Sync Services for ADO.NET? For example, the data transfer size required to 1) establish a connection, 2) check for differences, and 3) perform the actual data transfer (e.g. 100 transaction records across several tables with ~4000 chars per record) between 2 nodes.

    We are working under the constraint of a satellite internet connection of only 2.4 kbit/sec.

    Our central database is around 3 GB with approximately 800 tables.

    Each client database would be around 120 to 500 MB with a similar number of tables.

    Data transfer and online duration are charged separately and are very costly over this 2.4 kbit/sec link, hence the need to find such statistics.
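
    As a rough back-of-envelope figure for the raw payload alone (assuming one byte per character and ignoring connection setup, change-tracking metadata, and any compression):

        100 records x ~4,000 chars ≈ 400,000 bytes ≈ 3,200,000 bits
        3,200,000 bits / 2,400 bits/sec ≈ 1,333 sec ≈ 22 minutes

    So each batch of 100 records would already need on the order of 20 minutes online before any sync overhead is counted.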

    Any help is greatly appreciated!

    Cheers,
    KennyGOH
    kenny
    • Moved by Max Wang_1983 Wednesday, April 20, 2011 11:25 PM Forum consolidation (From:SyncFx - Technical Discussion [ReadOnly])
    Tuesday, May 12, 2009 8:54 AM

Answers

  • Hi Kenny,

    We don't have that kind of performance data for comparison yet, but it should be easy to measure the overhead for your scenarios. Generally, the overhead areas are:
     1. Trigger-based change tracking will slow down regular DML a little bit. You may not care about it.
     2. Compared to the plain ADO.NET APIs, the Sync provider needs additional time to calculate incremental changes, handle conflicts, and update tracking tables and side tables during change application.

    You can create a simple sync app following the samples in SyncSDK.msi, and compare its performance with the plain ADO.NET DataSet APIs.

    For example, here are the detailed steps to measure change application (a rough code sketch follows the list):

    a. Start with a table of 1000 rows.
    b. Do a sync, and break the sync time into change-selection time and change-application time using the events provided by the providers.
    c. Get the time for applying changes; say X seconds.
    d. Create a new SQL CE database.
    e. Create the same table schema.
    f. Fill a DataSet.
    g. Insert the 1000 rows into the DataSet.
    h. Call adapter.Update(); say this time is Y seconds.
    i. X - Y would roughly be the overhead.
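
    To make that concrete, below is a rough sketch of steps a-i in C#. The sync agent class name (LocalDataCache1SyncAgent) and the Orders table are placeholders only; substitute the agent generated from the SyncSDK sample plus your own schema and connection strings. The Stopwatch around Synchronize() gives the overall sync time; for the selecting-changes vs. applying-changes split in step b, hook the change-selection/application events the providers expose.

        using System;
        using System.Data;
        using System.Data.SqlServerCe;
        using System.Diagnostics;
        using Microsoft.Synchronization.Data;

        class OverheadMeasurement
        {
            static void Main()
            {
                // --- X: time a sync of the 1000-row table (steps a-c) ---
                var syncAgent = new LocalDataCache1SyncAgent();      // placeholder for your generated sync agent
                var syncTimer = Stopwatch.StartNew();
                SyncStatistics stats = syncAgent.Synchronize();
                syncTimer.Stop();
                double x = syncTimer.Elapsed.TotalSeconds;
                Console.WriteLine("Sync: {0:F2}s, {1} changes downloaded",
                                  x, stats.TotalChangesDownloaded);

                // --- Y: time a plain ADO.NET insert of the same rows (steps d-h) ---
                string connStr = "Data Source=baseline.sdf";
                new SqlCeEngine(connStr).CreateDatabase();           // step d: new SQL CE db (fails if the file already exists)

                using (var conn = new SqlCeConnection(connStr))
                {
                    conn.Open();

                    // step e: same table schema (simplified to one table here)
                    using (var create = new SqlCeCommand(
                        "CREATE TABLE Orders (Id int PRIMARY KEY, Payload nvarchar(4000))", conn))
                    {
                        create.ExecuteNonQuery();
                    }

                    // steps f-g: fill a DataSet and add 1000 rows
                    var adapter = new SqlCeDataAdapter("SELECT Id, Payload FROM Orders", conn);
                    var builder = new SqlCeCommandBuilder(adapter);  // supplies the INSERT command for Update()
                    var ds = new DataSet();
                    adapter.Fill(ds, "Orders");
                    for (int i = 0; i < 1000; i++)
                    {
                        ds.Tables["Orders"].Rows.Add(i, new string('x', 4000));
                    }

                    // step h: time adapter.Update()
                    var updateTimer = Stopwatch.StartNew();
                    adapter.Update(ds, "Orders");
                    updateTimer.Stop();
                    double y = updateTimer.Elapsed.TotalSeconds;

                    // step i: X - Y is roughly the sync overhead
                    Console.WriteLine("Plain update: {0:F2}s, approximate overhead: {1:F2}s", y, x - y);
                }
            }
        }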

    Thanks,
    Dong


    This posting is provided AS IS with no warranties, and confers no rights.
    Wednesday, May 27, 2009 9:56 PM
    Moderator

All replies

  • Hi Dong,
    Thanks for your response. We ended up syncing the data manually instead.
    Cheers,
    Kenny
    kenny
    Sunday, July 19, 2009 1:58 AM
  • Has anyone already found good statistics for the Microsoft Sync Framework and the overhead it creates?
    Monday, October 26, 2009 7:18 AM