OOM Exception in GetChanges/AcceptChanges method RRS feed

  • Question

  • Hi Sync folks,

    I have been dealing with severe problems for two days (and nights, actually). I would love to hear from others whether they have solved this or have feedback on my thoughts below. I apologize for the long post, but I am really stuck and have to find a solution quickly because my customers are not able to sync their devices at the moment.

    My setup:

    • backend: SQL 2008 (changetracking), Webservice for Sync
    • clients: WinMo 6.1, CF 3.5, SQL CE 3.5 SP1, SyncServices 1.0.1

    As it seems, our overall amount of data manipulations has now reached a critical size. When I last had to re-initialize one of the clients (i.e. create a fresh device DB and let sync fetch all the required tables and rows, filtered for this user), it never succeeded but always ran into OOM exceptions.

    To deal with this I implemented batching according to the common samples. It took me quite some time, but the solution finally seems to work for me in terms of reasonable batch sizes (i.e. not leading to OOM, chosen dynamically per sync group and per the actual change count for this user) versus overall sync time.
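
    For reference, my batching setup follows the common sample roughly like this on the server side (a sketch with placeholder names; the anchor procedure "usp_GetNewBatchAnchor" and the variables are stand-ins, not my actual code):

    ```csharp
    // Server side: a new-anchor command that pages changes via @sync_batch_size.
    SqlCommand anchorCmd = new SqlCommand("usp_GetNewBatchAnchor", serverConn);
    anchorCmd.CommandType = CommandType.StoredProcedure;
    anchorCmd.Parameters.Add("@" + SyncSession.SyncLastReceivedAnchor, SqlDbType.Timestamp);
    anchorCmd.Parameters.Add("@" + SyncSession.SyncMaxReceivedAnchor, SqlDbType.Timestamp).Direction = ParameterDirection.Output;
    anchorCmd.Parameters.Add("@" + SyncSession.SyncNewReceivedAnchor, SqlDbType.Timestamp).Direction = ParameterDirection.Output;
    anchorCmd.Parameters.Add("@" + SyncSession.SyncBatchSize, SqlDbType.Int);
    anchorCmd.Parameters.Add("@" + SyncSession.SyncBatchCount, SqlDbType.Int).Direction = ParameterDirection.InputOutput;

    DbServerSyncProvider serverProvider = new DbServerSyncProvider();
    serverProvider.SelectNewAnchorCommand = anchorCmd;
    serverProvider.BatchSize = 500; // I actually compute this per sync group and change count
    ```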

    The big problem I am running into now is that I use custom insert handling in the ApplyingChanges event (see the thread on performance improvements). To avoid echoing my local changes back to the server, I call AcceptChanges() after the sync has finished. This worked for me, but now that final call to AcceptChanges also runs into an OOM (as does GetChanges).
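
    Concretely, the anti-echo step amounts to something like this (a sketch; provider and table configuration omitted, variable names are mine):

    ```csharp
    // Client side: after Synchronize() returns, clear the local change tracking
    // so my manually applied inserts are not uploaded on the next sync.
    SqlCeClientSyncProvider clientProvider = new SqlCeClientSyncProvider(clientConnString);
    SyncAgent agent = new SyncAgent();
    agent.LocalProvider = clientProvider;
    // ... configure remote provider and sync tables ...
    agent.Synchronize();

    clientProvider.AcceptChanges(); // this is the call that now throws OOM
    ```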

    To overcome this I tried to make this call in the ChangesApplied event (i.e. after each batch was applied) to decrease the number of records, but this does not work as expected. Maybe changes are not yet committed at this point. Committing manually before the AcceptChanges call only led me to errors, because the transaction is needed elsewhere.

    Now I am really stuck. My problem is that I have not come up with any solution other than calling AcceptChanges() to prevent the echoing. As far as I know, and have read in several threads, there is no other way to get around SQL CE change tracking. I found some internal native methods in sqlceme35.dll for enabling and disabling tracking, but that did not lead me any further.

    Another idea I had was to somehow find a way to reset the system columns __sysChangeTxBsn and __sysInsertTxBsn. When an insert is done by Sync Framework, these seem to be left null, indicating that this was not a change that has to be tracked; looking at the internal queries used in GetChanges to enumerate inserts/updates/deletes, such rows are left out. Having lots of "manipulated" tables and rows leads to poor performance in GetChanges calls, and in my case also to out-of-memory woes.
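
    Spelled out, the reset idea would be something like this (purely illustrative, with invented table and key names; I have found no supported way to issue it, and SQL CE presumably refuses direct writes to the __sys* system columns anyway):

    ```csharp
    // Idea only, not working code: clear the tracking stamps on rows I inserted
    // myself, so GetChanges skips them just like provider-applied rows.
    using (SqlCeCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText =
            "UPDATE Orders SET __sysChangeTxBsn = NULL, __sysInsertTxBsn = NULL " +
            "WHERE OrderId = @id";
        cmd.Parameters.AddWithValue("@id", insertedId);
        cmd.ExecuteNonQuery();
    }
    ```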

    So please, any hints, recommendations and ideas appreciated.
    • Moved by Hengzhe Li Friday, April 22, 2011 5:22 AM (From:SyncFx - Microsoft Sync Framework Database Providers [ReadOnly])
    Friday, March 13, 2009 10:43 AM

All replies

  • Hi Forstingera,

    How much data are you downloading from the server to the device in your scenario? There is a known issue in the way the dataset is constructed on the device (when getting data from the IIS server to the device) that can cause an OOM if the data is big, and yes, using batching is one of the ways to work around it.

    As for the problem (data echoing back), frankly I have to say the approach you are currently trying will probably only lead to further issues. If all the tracking configurations are set correctly and the sync application is written to support bidirectional sync, there should not be any data echoed back to the client that made the changes. We should understand the reason before making further code changes (such as the ones you described in this post).

    Could you check whether your application is designed to support bidirectional sync, e.g. the SQL Server side queries and the sync direction settings?

    This posting is provided "AS IS" with no warranties, and confers no rights.
    Sunday, March 15, 2009 4:35 AM
  • Hi Yunwen,

    There were around 170,000 changes in total across ~10 tables. To overcome the OOM when downloading I implemented batching, and this works fine now.

    My problem is that the same (dataset-related?) OOM occurs when there are massive changes in the local DB of the client! And as far as I know there is no batching support for upload!? How is one supposed to solve this?
    I mean, normally, when the changes are not produced by custom insert handling, our users would have to work a lot (and not sync for a while) to produce such an amount of local changes. But hey, who says no app will ever face this problem? My solution meanwhile is that after each batch I call AcceptChanges() in the ChangesApplied event, and then one final time after the sync has finished. This decreases the number of local changes AcceptChanges() has to retrieve, so it no longer leads to OOM. For now...
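
    In code, the pattern that holds up for now looks roughly like this (a sketch; agent and provider setup omitted, names are mine):

    ```csharp
    SqlCeClientSyncProvider clientProvider = new SqlCeClientSyncProvider(clientConnString);

    // After each applied batch, shrink the pending local change set so the
    // final AcceptChanges() never has to enumerate everything at once.
    clientProvider.ChangesApplied += delegate(object sender, ChangesAppliedEventArgs e)
    {
        clientProvider.AcceptChanges();
    };

    agent.Synchronize();
    clientProvider.AcceptChanges(); // one final pass for the last batch
    ```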

    From your comment about my "echoing" problem I assume you got me wrong somewhere; I apologize for not being clear enough on this. My problem is NOT that local changes made on the client side are uploaded to the server and afterwards downloaded (echoed) again. So my bidirectional tables and related setup work quite well, I think.

    What I meant by echoing (perhaps I should call it "the false change issue") is that when you intercept the data manipulation usually done by Sync Framework (as I believe quite a few devs do, e.g. using SqlCeResultSet etc.), you create a "change" in the local change tracking even though the action you performed (e.g. inserting a row via a result set instead of the sync service's internal method) is actually not a change. That is what I meant: when an insert is made via Sync Framework, no stamp is written to the __sysInsertTxBsn column; when you do it manually, a stamp is created.
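
    A minimal illustration of what I mean (a sketch; "Orders" and its column are placeholders):

    ```csharp
    // Inserting directly via SqlCeResultSet in the ApplyingChanges handler is fast,
    // but SQL CE change tracking stamps __sysInsertTxBsn for the new row, so the
    // next GetChanges enumerates it as a local change although it came from the server.
    using (SqlCeCommand cmd = conn.CreateCommand())
    {
        cmd.CommandType = CommandType.TableDirect;
        cmd.CommandText = "Orders";
        using (SqlCeResultSet rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable))
        {
            SqlCeUpdatableRecord rec = rs.CreateRecord();
            rec.SetValue(rec.GetOrdinal("OrderId"), 42);
            rs.Insert(rec); // a "false change" is now tracked locally
        }
    }
    ```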

    Hence my thoughts (others have had them too, judging by some threads) about somehow outsmarting the local change tracking, or explicitly telling it to e.g. "ignore this insert".
    Monday, March 16, 2009 10:30 AM