• Question



    I am synchronizing a SQL CE database on a device running Windows Mobile 6.0 with a SQL Server 2005 database on the server.

    I have around 90 empty tables, on both the server and the device.

    When I try the same scenario between the same SQL Server 2005 database and a CE database located on the server, synchronization takes less than 5 seconds.

    However, when I synchronize the SQL Server 2005 database with the CE database on the device, it takes up to 9 minutes.


    Any ideas?

    Thanks in advance

    • Moved by Max Wang_1983 Friday, April 22, 2011 6:05 PM forum consolidation (From:SyncFx - Microsoft Sync Framework Database Providers [ReadOnly])
    Thursday, August 7, 2008 2:00 AM

All replies

  • Is this the initial sync or a subsequent sync?


    Could you measure the time by hooking up the SelectingChanges and ChangesSelected events on both the client provider and the server provider? If this is the initial sync, could you also measure the time for GetSchema and SchemaCreated?


    Those numbers can help us identify where the time is being spent. There are a couple of perf issues we are already working on, and the timings will also tell us whether this is one of them.




    Thursday, August 7, 2008 4:37 AM
  • Hi

    I did some testing and I found that even when all the tables are empty:


    On the client side, it's taking less than 1 second per table to select and get the selected changes.

    On the server side, it's taking more than 10 seconds per table.



    What I've done is to get the time for each table on the client side from the SelectingChanges and ChangesSelected events.

    On the server side, I'm getting the time from the same events.

    Right now, I'm connecting the device to the server through ActiveSync. Does that affect anything?


    Do you know whether my problem will be solved if I download the new release?


    The new release contains a sync service that can run on devices, right?

    Where can I find the sample application using WCF? Is it published or not yet?



    Thanks for your help


    Monday, August 11, 2008 3:55 AM
  • Hi


    I installed the new Sync Framework package and referenced the newly installed libraries (.dll) in my project, but I'm still having the same issue.


    Thanks in advance


    Monday, August 11, 2008 11:17 PM
  • The new version of the sync service won't make much difference here.


    The previous posts give me conflicting results. In the first post the issue seems to be on the client side, since when everything runs on the desktop the sync takes only 5 seconds; the post above, however, indicates the perf issue is at the server.


    Could you clarify a bit? Are you using the bidirectional sync direction? Can you (by hooking up to the events) get the following:


    T0: sync start

    T1: client selecting changes

    T2: client changes selected

    T3: server applying changes

    T4: server changes applied

    T5: server selecting changes

    T6: server changes selected

    T7: client applying changes

    T8: client changes applied

    T9: sync done

    The above times assume this is a bidirectional sync.
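    A minimal sketch of capturing these timestamps by hooking the provider events, assuming the Sync Services for ADO.NET providers (SqlCeClientSyncProvider on the client, a DbServerSyncProvider-based provider on the server); the helper names and logging format are illustrative:

```csharp
// Sketch: timestamping sync stages via provider events.
// Assumes Sync Services for ADO.NET (Microsoft.Synchronization.Data*).
using System;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServerCe;

static class SyncTimers
{
    static void Stamp(string label)
    {
        Console.WriteLine("{0:HH:mm:ss.fff}  {1}", DateTime.Now, label);
    }

    public static void Hook(SqlCeClientSyncProvider client, DbServerSyncProvider server)
    {
        client.SelectingChanges += (s, e) => Stamp("T1 client selecting changes");
        client.ChangesSelected  += (s, e) => Stamp("T2 client changes selected");
        server.ApplyingChanges  += (s, e) => Stamp("T3 server applying changes");
        server.ChangesApplied   += (s, e) => Stamp("T4 server changes applied");
        server.SelectingChanges += (s, e) => Stamp("T5 server selecting changes");
        server.ChangesSelected  += (s, e) => Stamp("T6 server changes selected");
        client.ApplyingChanges  += (s, e) => Stamp("T7 client applying changes");
        client.ChangesApplied   += (s, e) => Stamp("T8 client changes applied");
    }
}
```

    T0 and T9 can be stamped immediately before and after the SyncAgent.Synchronize() call. Note that if the server provider runs behind a service proxy, its events fire in the service process, so log them there.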




    Tuesday, August 12, 2008 4:48 AM



    Thanks for your reply.


    I am using bidirectional and download-only.

    I am creating a new SyncGroup for each SyncTable.

    I tried my code on 1 bidirectional table, and here is the result:

    T0: 0s    (41:29)

    T1:1s     (41:30)

    T2:7s     (41:37)

    T3:2s     (41:39)

    T4:0s     (41:39)

    T5:2s     (41:41)

    T6:0s     (41:41)

    T7:2s     (41:43)

    T8:6s     (41:49)

    T9:20s   (41:49)



    I tried combining all the bidirectional tables into one SyncGroup and the download-only ones into another, but it is still too slow.


    For all my tables' SyncAdapters, I don't have:




    And for the bidirectional ones, I don't have:


    Does this affect the speed?


    Thanks for your help



    Tuesday, August 12, 2008 5:51 AM
  • So most of the time was spent at the client side, i.e. 7 seconds to select data and 6 seconds to apply data (even though there are no changes).


    Can you share your table schema with us? What kind of device are you using? What version of SQL Compact do you have on the device?




    Tuesday, August 12, 2008 6:48 AM
  • Hi

    For all the tables (90), it's taking 13 minutes to sync: 9 minutes on the client side between selecting and applying the changes.

    I have SQL CE 3.5 installed on the device, running Windows Mobile 6.0.

    On the server side, it's SQL Server 2005.

    For each table, I have the LU field as datetime.

    I am using this field as both the creation date and the last-edit date.

    I'm updating this field with the current date whenever a record is inserted or updated, through triggers.

    I don't have this LU field in the device db, and I'm using it for:


    Thanks a lot













    Tuesday, August 12, 2008 7:13 AM
  • Thanks for the info. What does your table schema look like? E.g., how many columns does it have? What types are the columns, and how many columns make up the primary key?





    Tuesday, August 12, 2008 8:05 AM


    For the table that I tested before: it has 94 columns of various types, such as datetime, varchar, and int.

    It has a single int primary key.

    (This is the biggest one.)

    All the rest have fewer than 20 columns.

    Other tables have more than one primary key column.

    The max is 7 columns in the PK, all int.




    Tuesday, August 12, 2008 10:31 PM
  • We can repro that the change enumeration process is slower than expected; i.e., in my case it took 6 seconds to enumerate an empty change set from the SQL Compact db. We are looking into it to see what optimizations we can make to improve the performance here.


    8/14/08 12:29:09 PM : Microsoft.Synchronization.SyncAgent Sync state changed to : Synchronizing

    8/14/08 12:29:09 PM : sync stage: ReadingMetadata                      1% completed

    8/14/08 12:29:10 PM : sync stage: UploadingChanges                    15% completed

    8/14/08 12:29:14 PM : Microsoft.Synchronization.Data.SqlServerCe.SqlCeClientSyncProvider Enumerated Changes for tables : [Customers];[Orders]

    8/14/08 12:29:15 PM : sync stage: UploadingChanges                    51% completed

    8/14/08 12:29:15 PM : sync stage: DownloadingChanges             55% completed

    8/14/08 12:29:17 PM : Microsoft.Synchronization.Data.SqlServerCe.SqlCeClientSyncProvider Applying Changes for tables : [Customers];[Orders]

    8/14/08 12:29:18 PM : sync stage: DownloadingChanges             91% completed

    8/14/08 12:29:18 PM : sync stage: WritingMetadata                       100% completed

    8/14/08 12:29:18 PM : Microsoft.Synchronization.SyncAgent Sync state changed to : Ready


    Thanks for reporting this issue.



    Thursday, August 14, 2008 11:11 PM
  • Hi

    Thanks for your reply.

    In my case it's taking up to 10 minutes to sync an empty CE db with the SQL Server db.


    My device application synchronizes through a Web Service (which calls the WCF service containing my server sync provider).


    I created a Windows application containing a local .sdf database. Synchronizing this CE db with the server db through the same WS (and WCF) took less than 50 seconds.

    The .sdf used in this Windows application is a copy of the one on my device.


    Another test I did was synchronizing one bidirectional table through the same WS ( and WCF) :


    1- I minimized my CE db on the device by deleting all the tables except one (~200K).

       It took 7 s to sync.


    2- With the original CE db size (less than 2 MB), without deleting any tables, it took 27 s.


    Opening my CE db on the device using Query Analyzer 3.5 takes 1 minute (which I consider a long time, since my db is less than 2 MB).

    How does the sync work? Does it open a connection to the CE db at the beginning of the process and close it at the end, or does it open and close the connection each time a table is synchronized?


    N.B.: My code is based on the sample in the video IntroToSyncServicesADODetNet_HighQuality.wmv.

             I am calling the WCF service through the Web Service.





    Thanks a lot 





    Thursday, August 14, 2008 11:56 PM
  • Hi

    Do you recommend using the Power Toys tools to call the WCF service from the smart device application, or does that give the same result as calling a web service that calls the WCF service?

    Does this affect anything?


    I need to have each table in a separate SyncGroup, and what I found is that the synchronization process takes much, much longer than when they are all in one SyncGroup.


    Can you please give me an idea of when the speed issue related to the GetChanges event will be solved?


    Is there another way or tool you recommend to synchronize the CE db with the server db faster?


    If the speed issue is related to the GetChanges event, why doesn't it affect the Windows application as well, since it calls the same WS (WCF) to sync?


    Thanks for your help



    Friday, August 15, 2008 6:56 AM
  • Are you referring to the Power Toys for .NET CF? If so, I don't think they will provide perf improvements for calling WCF services.


    Separating tables into different groups will have some perf impact, especially for an empty sync as you experienced, since the metadata is committed (written to the db) at the sync-group level.
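    A minimal sketch of putting several tables into a single SyncGroup so that metadata is committed once per group rather than once per table; the table names and creation options here are illustrative:

```csharp
// Sketch: multiple SyncTables sharing one SyncGroup, so the metadata
// commit happens once per group instead of once per table.
// Table names are illustrative.
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data;

static class SyncSetup
{
    public static void ConfigureTables(SyncAgent agent)
    {
        SyncGroup allChanges = new SyncGroup("AllChanges");

        string[] bidirectionalTables = { "Customers", "Orders" };
        foreach (string name in bidirectionalTables)
        {
            SyncTable table = new SyncTable(name);
            table.CreationOption = TableCreationOption.DropExistingOrCreateNewTable;
            table.SyncDirection = SyncDirection.Bidirectional;
            table.SyncGroup = allChanges;   // one group => one metadata commit
            agent.Configuration.SyncTables.Add(table);
        }
    }
}
```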


    Here are some general guidelines I got from the SQL Compact team for improving performance at the db level. And yes, closing/opening the database will have some overhead.


    Here are a few guidelines –


           The first connection made to the SQL Compact database engine has a performance overhead since the storage engine goes through its initialization tasks. However, while this connection is alive, any subsequent connections would experience a better response time. It is advisable to create a connection and keep it alive for the application lifetime.

           SQL Compact supports query plan caching for parameterized queries during the lifetime of a command object. It is advisable to hold on to the command object. Destroying the command object and creating a new one destroys the cached plan, and would require the query to be re-compiled, which will add to the relational engine’s response time.

           Multiple query executions under a transaction boundary yield significant performance improvements against individual query executions. Programmatically, it is advisable to create a transaction object and perform multiple query executions inside the transaction. Destroying the previous transaction object and creating a new one for each execute would require individual queries to be committed.

           SQL Compact supports two flush modes of transaction commits – Immediate flush and Deferred flush mode. The use of deferred flush mode helps avoid the cost of frequent disk writes (which, as per our performance experiments, takes the most time), and in turn improves performance of transaction commits. The Auto-Flush interval, a connection string parameter which SQL Compact supports, can be used to specify the interval between background flushes. The default value is 10 seconds, and can be changed in the connection string.

           The following connection string parameters supported by SQL Compact play an important role in governing the performance of an application.

          Max Buffer Size

           The largest amount of memory, in KB, that SQL Compact storage engine can use before it starts flushing changes to disk.

           Defaults to 4MB on desktops.

           Advisable to set this to around 20% of database file size.

          Autoshrink threshold

           The percent of free space in the database file that is permitted before autoshrink begins.

           Defaults to 60.

           Advisable to set this to a lower value for frequent inserts and deletes.

          Default lock timeout

           The number of milliseconds that a transaction will wait for a lock to be acquired.

           Defaults to 5000 ms.

           Advisable to set this to a higher value for high concurrency scenarios.

          Flush interval

           Specifies the interval time in seconds before all committed transactions are flushed to disk.

           Defaults to 10 seconds.

           Advisable to set this to a higher value for highly concurrent transactions.

           Schema plays a huge role in performance. Specifically, too much Normalization hampers database query performance.

           Choose your indexes carefully.

          Indexing on columns used in the WHERE clause of critical queries frequently improves performance depending on the selectivity of the index (selectivity is the ratio of matching rows to total rows. If this ratio is low, the index is highly selective and more efficient). Ensure that the indexes you create are selective.

          Multiple column indexes improve performance. It is advisable to have the most selective column to be the leftmost in the key.

          Avoid indexing on small tables for better performance. This saves the cost of creating, maintaining, loading and processing index pages.

          For join queries, it is helpful to create indexes on the columns used in the join, to get better performance.
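    A minimal sketch of applying the first guideline together with the connection string parameters above: one connection opened at startup and kept alive for the application's lifetime, with the tuning parameters passed in the connection string. The file path and parameter values are illustrative assumptions, so measure before adopting them:

```csharp
// Sketch: one long-lived SQL Compact connection with tuned
// connection string parameters. Path and values are illustrative.
using System.Data.SqlServerCe;

static class Db
{
    // Keeping this connection open for the app lifetime avoids paying
    // the storage engine's initialization cost on every sync.
    static SqlCeConnection _keepAlive;

    public static void Open()
    {
        string connStr =
            "Data Source=\\Program Files\\MyApp\\MyApp.sdf;" +
            "Max Buffer Size=1024;" +       // KB; ~20% of the db file size
            "Autoshrink Threshold=60;" +    // % free space before autoshrink
            "Default Lock Timeout=10000;" + // ms
            "Flush Interval=20";            // seconds between background flushes

        _keepAlive = new SqlCeConnection(connStr);
        _keepAlive.Open();
    }
}
```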




    Monday, August 18, 2008 6:43 PM
  •  Yunwen Bai [MSFT] wrote:

           SQL Compact supports query plan caching for parameterized queries during the lifetime of a command object. It is advisable to hold on to the command object. Destroying the command object and creating a new one destroys the cached plan, and would require the query to be re-compiled, which will add to the relational engine’s response time.


    Storing a command object and caching query plans produces exceptions, because for some reason queries need to be recompiled after some syncs:


    The table schema has changed since the query was last compiled. Recompile the query.


    And as far as I understand, they can't be recompiled with SQL Server Compact; OPTION (RECOMPILE) is not recognized.

    So how do I maintain the connection and avoid recompiling queries?

    Thursday, August 21, 2008 8:48 AM
  • The above recommendations for better performance are general guidelines. In your case, if the table schema has changed, then you will need to recompile the command, and the CommandText will need to be changed as well. Normally, if you call SqlCeCommand.Compile() and then SqlCeCommand.ExecuteXxx(), the query plan will be cached and subsequent ExecuteXxx() calls will use the cached plan.




    Friday, August 22, 2008 4:39 AM
  • But that's the thing: the schema is not being changed after sync. I've no idea why this is being triggered.

    Friday, August 22, 2008 6:28 AM
  • Sorry, but I think I missed some context here: by "schema is not being changed after sync", did you mean the table schema changed after the initial sync or after a subsequent sync? And was the change made by the sync process?




    Friday, August 22, 2008 8:04 AM
  • The initial client schema (and the db itself) is created by Sync Services. After that, the schema is not changed manually on either the client or the server. But after subsequent synchronizations, Sync Services reports that the schema has changed (been recreated?), so the queries need recompiling.

    Friday, August 22, 2008 8:53 AM
  •  Yunwen Bai [MSFT] wrote:


    What exactly do you mean by this? I don't see any Compile method in the SqlCeCommand class.


    Friday, August 22, 2008 2:26 PM
  • My bad, it should be SqlCeCommand's Prepare() method.


    Back to the reason the table was recreated: can you describe your scenario in a bit more detail? Normally, the table should not be recreated after the initial sync.
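    A minimal sketch of reusing a prepared parameterized command on a long-lived connection so the cached query plan survives across executions; the query text, table, and parameter names are illustrative:

```csharp
// Sketch: prepare a parameterized command once, then reuse it so
// SQL Compact keeps the cached query plan. Query and parameter names
// are illustrative.
using System.Data;
using System.Data.SqlServerCe;

class LookupCache
{
    readonly SqlCeCommand _cmd;

    public LookupCache(SqlCeConnection openConnection)  // connection stays open
    {
        _cmd = openConnection.CreateCommand();
        _cmd.CommandText = "SELECT Name FROM Customers WHERE Id = @id";
        _cmd.Parameters.Add("@id", SqlDbType.Int);
        _cmd.Prepare();               // compiles and caches the plan
    }

    public object Lookup(int id)
    {
        _cmd.Parameters["@id"].Value = id;
        return _cmd.ExecuteScalar();  // reuses the cached plan
    }
}
```

    If the provider reports that the table schema has changed, the command has to be recreated and prepared again; a cached plan cannot outlive a schema change.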





    Saturday, August 23, 2008 4:09 AM