I'm Ayan Mukherjee; my Live ID is email@example.com. We are developing a sync tool for our customer www.barmetrix.com that syncs data from our local SQL Express 2008 R2 database to SQL Azure. We have built a Windows-based application with:
1) Microsoft Sync Framework 2.1
2) .NET Framework 4.0
3) SQL Express 2008 R2
This program runs very fast when syncing between two SQL Express 2008 R2 databases on our local network.
But when we run the same program against the SQL Azure database linked to my account, it takes far too long.
No particular table is consistently slow. For example, syncing table1 may take 1 minute, and then syncing the same table again, with the same number of records, takes 30-35 minutes.
We are attaching a log file generated by our application for your reference.
You can reach us at 0919830130344.
A couple of questions to clarify your scenario:
- How big is the data you're trying to sync (size and number of rows)?
- Does the SQL Azure DB already contain the same data as your SQL Express?
- Did you enable batching in Sync Framework?
To help you troubleshoot further, try enabling Sync Framework tracing.
Offhand, if your database is big, my guess is you're being throttled in SQL Azure.
Try setting ApplicationTransactionSize; see: http://blogs.msdn.com/b/sync/archive/2010/09/24/how-to-sync-large-sql-server-databases-to-sql-azure.aspx
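As a sketch of the tracing suggestion above: Sync Framework writes its trace output through the standard .NET diagnostics switches, so it can be turned on in the application's app.config without code changes. The listener name and the log path below are just examples; the switch name SyncTracer is the one Sync Framework listens to.

```xml
<configuration>
  <system.diagnostics>
    <switches>
      <!-- 0 = off, 1 = error, 2 = warning, 3 = info, 4 = verbose -->
      <add name="SyncTracer" value="3" />
    </switches>
    <trace autoflush="true">
      <listeners>
        <add name="SyncListener"
             type="System.Diagnostics.TextWriterTraceListener"
             initializeData="c:\mysync.log" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>
```

The resulting log shows each enumeration and apply step with timings, which is usually enough to tell whether the time is going into change enumeration, network round trips, or conflict handling.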
Hi June T,
Thanks for your reply. We have already tested the options you mentioned, but the behavior is the same.
Enabling batching actually made performance worse for us.
It happens in both cases: when the SQL Azure DB is empty and we are uploading data from the local DB, and when the SQL Azure DB already contains data.
We also get an exception, "Insert Conflict detected", when the SQL Azure DB and the local SQL client both contain the same data.
At that point it takes, on average, 1 second to upload 3 rows. We will have 500-600 rows in each table, across 10 to 11 tables of roughly the same size.
500-600 rows is not that much.
I'm not sure what your batch size is, but at 600 rows I don't think batching would even kick in.
I suggest you turn on Sync Framework tracing and have a look at the log.
It will be slower if you have existing data in both databases, since, as you already found out, it will fire conflicts.
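To illustrate why batching may not kick in at that row count, here is a sketch of where the batch size comes from with the collaboration providers. The scope name, connection, and the 500 KB value are illustrative; MemoryDataCacheSize and BatchingDirectory are the Sync Framework 2.1 provider properties that control batching.

```csharp
using Microsoft.Synchronization.Data.SqlServer;

var provider = new SqlSyncProvider("myScope", connection);

// Batching starts only when the in-memory change set exceeds this
// cache size (in KB), so a few hundred small rows will typically be
// sent as a single batch regardless of this setting.
provider.MemoryDataCacheSize = 500;

// Where spooled batch files are written when batching does engage.
provider.BatchingDirectory = @"C:\SyncBatches";
```

If the change set never exceeds the cache size, enabling batching only adds overhead, which is consistent with the poor results reported above.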
As Microsoft mentions: "During synchronization, each changed row from the source is compared with the corresponding row from the target (Upload). If both rows have changed, there is a conflict. The merge agent uses the defined conflict resolver to choose a winning row. After finishing all source rows, the same process starts with all changed target rows (Download)." I am not sure why both server and client would have changed data. My best guess is that this can be solved with a conflict resolution policy; the question is how best to do that.
Let me know.
I'm not sure where you got that quote (it describes merge replication, not Sync Framework). Sync Framework never compares rows between source and destination. When it tries to apply a change and fails, it raises a conflict, unless it's really an error. Sync Framework has no idea what change was made to a row; it only knows that the row changed. Since it doesn't track the actual change, it is never in a position to compare changes between rows.
Conflicts occur when the same row is changed in both source and destination. For example, you update row 1 in client 1, and the same row 1 is also updated in server 1. If you upload the change from client 1 to server 1, that fires an update-update conflict. Another example: you update row 1 in client 1 and row 1 is deleted in server 1; that fires an update-delete conflict.
To resolve conflicts, see:
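As a sketch of programmatic conflict resolution with the collaboration providers: the provider raises ApplyChangeFailed for each row it could not apply, and the handler picks a resolution. Here azureProvider is an assumed SqlSyncProvider for the SQL Azure side, and the force-write policy is just one example (local row wins), not a recommendation.

```csharp
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

// azureProvider is assumed to be the SqlSyncProvider for the Azure side.
azureProvider.ApplyChangeFailed += (sender, e) =>
{
    // Genuine errors (timeouts, throttling, etc.) should not be
    // treated as data conflicts; let them surface instead.
    if (e.Conflict.Type == DbConflictType.ErrorsOccurred)
        return;

    // Example policy: let the source (local) row win by forcing the write.
    e.Action = ApplyAction.RetryWithForceWrite;
};
```

Leaving e.Action at its default keeps the destination row instead; the right policy depends on which side should win in your scenario.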