Asked by:
Why are there so many disk reads when SyncFx runs a sync cycle and there are only about 100 changes to be applied?

Question
-
We have a system with two terminals, both running SQL Server 2008 R2 Express, and we use Sync Framework 2.1 to synchronize data between them. The interval between two synchronization cycles is about 90 seconds. Between sync cycles only a few transactions are done on each terminal, amounting to roughly 100 changes (rows added or modified across multiple tables). Even with this little data to synchronize, the two terminals take about 70 seconds, sometimes more, to finish a sync cycle. We tried to determine which part is taking the most time. Using the PSSDiag tool, we found that SyncFx's _SelectChanges query was taking a long time and was doing about 44,000 disk reads.
My question is: why are there so many disk reads? They make the system slow or unresponsive to UI changes even though CPU usage is not high. How do we improve the performance? Can we modify SyncFx's _SelectChanges stored procedure?
Thanks,
Baolin
Sunday, September 16, 2012 1:46 AM
All replies
-
The _selectchanges SP does a join between the table being synced and its corresponding _tracking table.
Note that even before the _selectchanges SP fires, there has to be a query on the scope_info table of the destination to grab the sync knowledge it has about the source.
The _tracking table also has only a very basic index, and that might be a good place to start looking.
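For context, a simplified sketch of the shape of such a query (a sketch only: the Orders/OrderId names are assumptions, and the real generated procedure selects more columns and applies additional scope filters around the @sync_min_timestamp parameter):

-- Simplified illustration of a <table>_selectchanges query; names are hypothetical.
SELECT [side].[OrderId],
       [base].*                                    -- changed row data from the base table
FROM   [dbo].[Orders_tracking] AS [side]           -- change-tracking metadata
LEFT JOIN [dbo].[Orders] AS [base]
       ON [base].[OrderId] = [side].[OrderId]
WHERE  [side].[local_update_peer_timestamp] > @sync_min_timestamp;  -- only rows changed since the last sync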
Monday, September 17, 2012 2:37 AM -
Thanks for the reply!
The main transaction table contains about 310,000 records, and so does the tracking table. When the _selectChanges SP tries to grab the sync knowledge, does it need to read that many records?
Can we change the index on the tracking table? I thought we normally should not, should we?
Monday, September 17, 2012 2:55 AM -
The sync knowledge is just one row, but the column size may vary depending on how many replicas sync against it and how fragmented it is.
You can change the index. At the end of the day, Sync Framework is just like any other database app, so you can apply the same database performance tuning approaches: updating statistics, rebuilding indexes, filtering, etc.
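For example, a routine maintenance pass over a tracking table could look like this (the table name Orders_tracking is an assumption for illustration):

-- Refresh optimizer statistics and rebuild indexes on a (hypothetical) tracking table.
UPDATE STATISTICS [dbo].[Orders_tracking] WITH FULLSCAN;
ALTER INDEX ALL ON [dbo].[Orders_tracking] REBUILD;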
Monday, September 17, 2012 3:26 AM -
After more investigation (with help from Microsoft), we found out that in the SyncFx _selectChanges and other stored procedures there is a bound parameter, @sync_min_timestamp, defined as BIGINT, while the local_update_peer_timestamp column in the tracking table is defined as TIMESTAMP. Since the bound parameter and the table column have different types, SQL Server must perform an implicit conversion. The conversion lands on the table column side, which prevents an index seek and causes the index scan and thus the high number of disk reads.
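For anyone wanting to confirm the same thing, one rough way is to look for CONVERT_IMPLICIT in the cached plans of the _selectchanges procedures. This is only a sketch; run it in the database being synced:

-- Find cached plans of the generated _selectchanges procedures that contain
-- an implicit conversion anywhere in the plan.
SELECT OBJECT_NAME(ps.object_id) AS procedure_name,
       qp.query_plan
FROM   sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE  ps.database_id = DB_ID()
  AND  OBJECT_NAME(ps.object_id) LIKE '%[_]selectchanges'
  AND  CAST(qp.query_plan AS nvarchar(max)) LIKE '%CONVERT[_]IMPLICIT%';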
Any idea how to resolve this problem?
Thanks!
- Edited by bli88 Monday, September 24, 2012 9:33 PM
Monday, September 24, 2012 7:45 PM -
So is your finding showing that it actually scans both the PK and the local_update_peer_timestamp indexes?
Have you tried including local_update_peer_timestamp as part of the PK?
As I have mentioned, you can treat Sync Framework like any other database app in terms of tuning.
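If rebuilding the PK feels too invasive, a lighter alternative worth experimenting with is a nonclustered index on the timestamp column of the tracking table. This is only a sketch, the Orders_tracking/OrderId names are assumptions, and keep in mind any such change may be lost when the scope is deprovisioned and reprovisioned:

-- Hypothetical index to support the range predicate on local_update_peer_timestamp.
CREATE NONCLUSTERED INDEX [IX_Orders_tracking_local_update_peer_timestamp]
ON [dbo].[Orders_tracking] ([local_update_peer_timestamp])
INCLUDE ([OrderId]);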
Tuesday, September 25, 2012 3:02 AM -
Sorry for the late reply.
No, we did not try to include local_update_peer_timestamp as part of the PK because we try to avoid making any changes to the tables and SPs that the Sync Framework generates. If we made changes, we would have to redo them every time the DB is deprovisioned and reprovisioned, or whenever the framework is upgraded.
But in this case we had to do something about the SPs. We cast the parameter to match the table column's type. After making these changes, the disk reads dropped to almost nothing, the execution plan changed from an index scan to an index seek, and the sync speed improved dramatically.
Thursday, October 4, 2012 8:14 PM -
Hi bli88
We are facing a similar issue with the Sync Framework. I changed my PK to include local_update_peer_timestamp and that worked for me. But changing the SP parameter's data type did not work, because sync threw an invalid cast exception, so I had to create a local variable, cast the parameter into it, and then use the variable in the WHERE clause. Can you please let me know how you achieved that?
Thanks in advance.
Regards, Mayur - MCTS Windows, Web & Distributed Applications
Tuesday, December 18, 2012 4:17 PM -
Hi Mayur,
We achieved that by modifying the body of the SP to cast @sync_min_timestamp directly from BIGINT to TIMESTAMP where it is used. As you said, changing the data type in the parameter list does not work; we tried that route initially and had to come up with this approach instead. However, there is a catch with this workaround: every time you provision a database, you need to make the change in every SP that has this problem. If you miss one of them, it can cause problems. Hope that works.
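For illustration, the edit was along these lines inside each affected procedure (a simplified sketch; the real procedures select many more columns and all object names here are assumptions):

-- Inside a generated _selectchanges procedure (simplified).
-- Before: WHERE [side].[local_update_peer_timestamp] > @sync_min_timestamp
-- After:  the cast is applied to the parameter, so the column side stays untouched
--         and the index on the tracking table can be sought.
SELECT [side].[OrderId]
FROM   [dbo].[Orders_tracking] AS [side]
WHERE  [side].[local_update_peer_timestamp] > CAST(@sync_min_timestamp AS timestamp);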
Good luck!
Thursday, January 17, 2013 2:57 PM -
Hi bli88,
Thanks for the reply.
We tried that, but with larger volumes SQL Server still does scans on the base tables. I figured it is because SQL Server cannot sniff the value once it is a local variable rather than a parameter. So we came up with wrapped SPs: we now have two _SelectChanges SPs per table, and the first one calls the second with the correct data type. That seems to be working pretty well.
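In case it helps others, here is a minimal sketch of that wrapper pattern. All names are simplified assumptions, and the real generated procedures take several more parameters:

-- Inner procedure: receives the value as a TIMESTAMP parameter, so the optimizer
-- can sniff it and seek the index on local_update_peer_timestamp.
CREATE PROCEDURE [dbo].[Orders_selectchanges_inner]
    @sync_min_ts timestamp
AS
BEGIN
    SELECT [side].[OrderId], [base].*
    FROM   [dbo].[Orders_tracking] AS [side]
    LEFT JOIN [dbo].[Orders] AS [base]
           ON [base].[OrderId] = [side].[OrderId]
    WHERE  [side].[local_update_peer_timestamp] > @sync_min_ts;
END
GO

-- Outer procedure: keeps the BIGINT signature that Sync Framework calls,
-- converts once, and delegates to the inner procedure.
CREATE PROCEDURE [dbo].[Orders_selectchanges]
    @sync_min_timestamp bigint
AS
BEGIN
    DECLARE @ts timestamp;
    SET @ts = CAST(@sync_min_timestamp AS timestamp);
    EXEC [dbo].[Orders_selectchanges_inner] @sync_min_ts = @ts;
END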
Regards, Mayur - MCTS Windows, Web & Distributed Applications
Thursday, January 17, 2013 3:56 PM