Help! Sync Services 1.0 Synchronize Call - Memory Leak!

  • Question

  • The architecture is such that I created an SDF on the local client machine and via the designer I selected what tables to sync.  The clients sync constantly to the server.  The data is synced over web services. 

    I have basically two databases.  I have a ticketing database and an inventory database.  The ticketing database has no problems.  However, when I make large changes on the inventory database (the properties for syncing set the tables to "download only") the client attempts to sync and a memory leak rapidly grows and locks up their computer. 

    I have actually looked at a "good" client database and one where this occurs, and there really doesn't appear to be any difference.  I thought maybe the client DB was corrupt and tried repairing it.  The only way I seem to be able to solve the problem is to delete their inventory database and have them pull the data down again, which is a nightmare since it's a 20 meg DB over an aircard for most of these guys. 

    This has been rolled into production and was NOT caught in testing (we are a lean shop so the users were the testers) and I need help FAST finding a fix.  I appreciate ANY help...
    Travich
    Saturday, February 20, 2010 10:47 PM

All replies

  • hi travich,

    when you say you've compared a "good" vs the problematic client database, are they of the same size? same number of rows?

    how many rows are you talking about when you say you're making changes on the inventory database? I'm assuming you're just pulling down incremental changes, right?

    how are you transferring the data using web services? via Datasets?

    Sunday, February 21, 2010 4:21 AM
  • Just an update -- I let my machine run even though it was chewing up memory.  After about a gig of memory it stopped and the service failed with about 37000 failed downloads.  So somehow the failed downloads may be involved in this explosion of memory usage.  When a normal sync occurs with this database you're NORMALLY only looking at 200 meg.  Is there any way I can capture what caused the sync conflicts, and is there any way I can keep this memory blow-up from happening?  I appreciate any input.

    Travich
    Sunday, February 21, 2010 5:06 AM
  • you can put a break on the ApplyChangeFailed event to check what's causing the conflict (check the ApplyChangeFailedEventArgs's conflict type). The conflicts may be causing the extra memory consumption since when a conflict occurs, SyncFx tracks the conflicting rows from the client and the server in the conflicts collection (it's under SyncContext->GroupProgress->TablesProgress->Conflicts)
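    something like this might help as a starting point (just a rough, untested sketch -- "clientSyncProvider" is a placeholder for whatever SqlCeClientSyncProvider instance you already have):

        // rough sketch: hook ApplyChangeFailed and dump the conflict details so you
        // can see exactly what is failing. "clientSyncProvider" is a placeholder for
        // your existing SqlCeClientSyncProvider instance.
        clientSyncProvider.ApplyChangeFailed +=
            delegate(object sender, Microsoft.Synchronization.Data.ApplyChangeFailedEventArgs e)
            {
                Console.WriteLine("Conflict type:  {0}", e.Conflict.ConflictType);
                Console.WriteLine("Error message:  {0}", e.Conflict.ErrorMessage);
                if (e.Conflict.ServerChange != null)
                    Console.WriteLine("Server rows in conflict ({0}): {1}",
                        e.Conflict.ServerChange.TableName, e.Conflict.ServerChange.Rows.Count);
                if (e.Conflict.ClientChange != null)
                    Console.WriteLine("Client rows in conflict ({0}): {1}",
                        e.Conflict.ClientChange.TableName, e.Conflict.ClientChange.Rows.Count);
                // decide on e.Action here once you know what the conflicts actually are
            };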
    Sunday, February 21, 2010 7:48 AM
  • Thanks June for the response -- this may help me figure out the root cause.

    Here's the really weird thing -- I can't pinpoint what causes it, but it seems to only happen to guys when large changes are made.  So the inventory table was 38k rows in size -- I had an SSIS package update the production server on a nightly basis with changes, and the SQL that I had written had a bug and added an additional 8k records one night.  I figured this out and deleted the 8k rows, and that seems to be when their syncing started to blow up.  I don't know if it's the fact that those rows were added and then deleted or what.  The inserts and deletes were typical statements:

    delete from table where xyz
    insert into table
    blah blah blah

    Anyways, let me catch the conflict and I think that's going to help me get to the bottom of this.  Thanks so much I will update here accordingly.

    About the number of rows -- the row counts were correct, but the database sizes were different.  (I didn't keep the database, but the normal size is around 20 meg and this guy's had grown to 30 meg.) 

    Anyways, I'll check back in hopefully later today.
    Travich
    Sunday, February 21, 2010 7:24 PM
  • btw, changes are applied in this order: Deletes, Inserts, Updates.

    Monday, February 22, 2010 1:06 AM
  • Thanks -- interestingly enough, this time I had NO failed changes.  The problem still persisted, with the memory usage going up to a gig.  Is there some way I can throttle this?  I've seen something about batching -- maybe I can turn this on?  I'm kind of running out of ideas.  When the changes were applied this time it said about 100k changes were applied.  It's frustrating because I'm not getting the same behavior that I got the other night.  The one thing that remained the same, whether the changes failed or downloaded fine, was the memory usage exploding to 1 gig.  Continuing to look...


    Travich
    Monday, February 22, 2010 1:21 AM
  • your table is just 38k rows and you got 100k changes applied; can you check the statistics to see how many are inserts, updates and deletes? The SyncContext->GroupProgress would have all those figures.
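    the SyncStatistics returned by Synchronize() also gives a quick breakdown, if that's easier to grab -- a small sketch, assuming your designer-generated agent is called syncAgent:

        // quick sketch: log the statistics returned by Synchronize().
        // "syncAgent" is a placeholder for your designer-generated SyncAgent instance.
        Microsoft.Synchronization.Data.SyncStatistics stats = syncAgent.Synchronize();
        Console.WriteLine("Total changes downloaded: {0}", stats.TotalChangesDownloaded);
        Console.WriteLine("Download changes applied: {0}", stats.DownloadChangesApplied);
        Console.WriteLine("Download changes failed:  {0}", stats.DownloadChangesFailed);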

    yes, you may want to configure batching as well.
    Monday, February 22, 2010 2:03 AM
  • Any chance you have any examples on batching? 

    One other thing I did not tell you -- I have four other tables, but that data hasn't changed. 

    Will check the group progress thanks!
    Travich
    Monday, February 22, 2010 2:40 AM
  • just follow the steps here: http://msdn.microsoft.com/en-us/library/dd918908(SQL.105).aspx

    when you configured the sync using the designer, can you check the options you selected for each table for data to download? is it set to: New and incremental changes after first synchronization? what's the creation option you selected?
    Monday, February 22, 2010 3:04 AM
  • Thanks looking at batching now -- they are set to:
    New and incremental changes
    Creation Option: Drop Existing or Create New Table

    Should those be different?
    Travich
    Monday, February 22, 2010 4:02 AM
  • just checking why you would be getting downloads when no changes have been made. I thought you might have overlooked it or accidentally chosen to download the entire table each time :)

    Monday, February 22, 2010 4:36 AM
  • Man, I have looked at that link for probably an hour.  Do you know of any code samples or other sites that have something a little easier to follow?

    I guess because I'm doing the web service thing, it's a little harder for me to follow. 

    BTW, thanks for all of your help!
    Travich
    Monday, February 22, 2010 6:17 AM
  • my apologies, i forgot you used the Designer tool for your sync. That's the offline scenario.

    anyway, try checking out these samples.

    Sync101 with Remote Change Application over WCF
    https://code.msdn.microsoft.com/Release/ProjectReleases.aspx?ProjectName=sync&ReleaseId=3421

    Database Sync: Peer to Peer over WCF
    https://code.msdn.microsoft.com/Release/ProjectReleases.aspx?ProjectName=sync&ReleaseId=3423

    Database Sync: SQL Server and SQL Server express
    https://code.msdn.microsoft.com/Release/ProjectReleases.aspx?ProjectName=sync&ReleaseId=3762


    the latter two samples are based on peer to peer sync, but you should be able to figure out how to create your services.
    Monday, February 22, 2010 7:21 AM
  • That is completely different from what I'm doing now -- it seems much, much simpler the way I'm doing things.  In fact, this is all there is to my contract right now, for example (taken from another MSDN sample):

        [ServiceContractAttribute()]
        [XmlSerializerFormat()]
        public interface IPowerEFSRLookupSyncContract
        {

            [OperationContract()]
            SyncContext ApplyChanges(SyncGroupMetadata groupMetadata,
                                     DataSet dataSet,
                                     SyncSession syncSession);

            [OperationContract()]
            SyncContext GetChanges(SyncGroupMetadata groupMetadata,
                                   SyncSession syncSession);

            [OperationContract()]
            SyncSchema GetSchema(string[] tableNames,
                                 SyncSession syncSession);

            [OperationContract()]
            SyncServerInfo GetServerInfo(SyncSession syncSession);
        }


    In the last example (I haven't looked at the other two yet), I don't see any sync designer files, but I'm assuming that all remains the same -- the big difference will be that I change my web service and client to implement the example code, correct?  Also, I have SQL Server CE, not SQL Server Express, on the client machines, so I'm assuming this all remains the same? 

    I'll take a look at the other examples and see what I can figure out.  Just trying to understand how things work before I embark on making big code changes...

    Do they pay you to answer all my questions, because I'm asking a lot!  :)
    Travich
    Monday, February 22, 2010 3:40 PM
  • take a look at this instead: http://msdn.microsoft.com/en-us/library/bb902828(SQL.105).aspx, specifically the part on how to configure the get new batch anchor stored procedure. I think that should be the only change you need to make for your batching. Just locate the InitializeNewAnchorCommand in the Designer-generated code and replace it with the new one that uses batching.
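    roughly something along these lines (a sketch adapted from that MSDN topic, not tested against your schema -- the usp_GetNewBatchAnchor name and the batch size are just the walkthrough's examples, and serverSyncProvider stands for your designer-generated server provider):

        // rough sketch, adapted from the MSDN batching walkthrough (not tested here).
        // Assumes: using System.Data; using System.Data.SqlClient; using Microsoft.Synchronization.Data;
        SqlCommand newAnchorCmd = new SqlCommand();
        newAnchorCmd.CommandType = CommandType.StoredProcedure;
        newAnchorCmd.CommandText = "usp_GetNewBatchAnchor";   // example name from the walkthrough

        // the walkthrough uses timestamp anchors; with SQL 2008 change tracking your anchor is a bigint version
        newAnchorCmd.Parameters.Add("@" + SyncSession.SyncLastReceivedAnchor, SqlDbType.Timestamp);
        newAnchorCmd.Parameters.Add("@" + SyncSession.SyncBatchSize, SqlDbType.Int);
        newAnchorCmd.Parameters.Add("@" + SyncSession.SyncMaxReceivedAnchor, SqlDbType.Timestamp).Direction = ParameterDirection.Output;
        newAnchorCmd.Parameters.Add("@" + SyncSession.SyncNewReceivedAnchor, SqlDbType.Timestamp).Direction = ParameterDirection.Output;
        newAnchorCmd.Parameters.Add("@" + SyncSession.SyncBatchCount, SqlDbType.Int).Direction = ParameterDirection.InputOutput;

        // replace the designer's new-anchor command and turn batching on
        serverSyncProvider.SelectNewAnchorCommand = newAnchorCmd;
        serverSyncProvider.BatchSize = 500;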

    i can see you're uploading the changes and downloading them as datasets too. Serializing the dataset is expensive: the dataset that you serialize actually has two copies of each row (diffgram) -- you can enable tracing on the WCF message to see what i mean. So if you upload 1 row, for example, the actual representation in the serialized dataset is 2. If you encounter conflicts, then there's another datatable containing the conflicting rows from both server and client inside the SyncContext, plus another copy of the data, if I am not mistaken, on the GroupProgress as well.

    if you can find the sample code to convert the dataset to a dataset surrogate and pass the dataset surrogate to the wcf call instead, then on the service convert it back to a dataset, you'll save a lot of bandwidth. (you save more if you send a byte array actually, i have a project where i convert the dataset to a dataset surrogate, then convert that to a byte array and send the byte array instead :) )
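    if you just want to try the byte-array idea without hunting down the surrogate sample, here's a small sketch using the DataSet's binary remoting format (plain .NET serialization, not the surrogate class -- usually still bigger than the surrogate approach, but much smaller than the default XML diffgram):

        // sketch: helpers to serialize a DataSet to a byte[] before sending it over WCF
        // and back again on the other side. Assumes: using System.Data; using System.IO;
        // using System.Runtime.Serialization.Formatters.Binary;
        public static class DataSetBlobHelper
        {
            public static byte[] ToBytes(DataSet ds)
            {
                ds.RemotingFormat = SerializationFormat.Binary;   // skip the XML diffgram
                BinaryFormatter formatter = new BinaryFormatter();
                using (MemoryStream stream = new MemoryStream())
                {
                    formatter.Serialize(stream, ds);
                    return stream.ToArray();
                }
            }

            public static DataSet FromBytes(byte[] bytes)
            {
                BinaryFormatter formatter = new BinaryFormatter();
                using (MemoryStream stream = new MemoryStream(bytes))
                {
                    return (DataSet)formatter.Deserialize(stream);
                }
            }
        }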

    and no, I don't get paid :) the beauty of the community :)





    Monday, February 22, 2010 4:05 PM
  • Hopefully through the partial class I can implement this without touching the designer file...

    I had written quite a long post, but it hasn't appeared; not sure why.  I appreciate your help -- this is exactly what should work with my designer files.

    Travich
    Monday, February 22, 2010 5:40 PM
  • Okay kind of spewing here -- found the InitializeNewAnchorCommand on the server side.  Nothing I need to change on the client? that seems too easy...  Let's see what happens.
    Travich
    Monday, February 22, 2010 5:52 PM
  • Okay, two questions -- I see it's calling the stored procedure, but it's referencing UpdateTimestamp and InsertTimestamp columns -- should these be something different for me?  I do not have these columns; I'm assuming they are hidden columns or in another table that SQL Server 2008 manages.  I have change tracking on and everything has worked in the past...

    Example:

              SELECT MIN(UpdateTimestamp) AS TimestampCol FROM Parts
              UNION
              SELECT MIN(InsertTimestamp) AS TimestampCol FROM Parts

    I'm guessing I somehow do the same thing from CHANGETABLE...

    Travich
    Monday, February 22, 2010 6:18 PM
  • I am very disappointed -- I've got batching working and I still have the same memory issues when syncing.  I am not even sure where to begin now.

    I have found the conflict:  A duplicate value cannot be inserted into a unique index. [ Table name = CommonInventory,Constraint name = PK_CommonInventory ]

    I am totally confused why it's trying to do this????  Something configured incorrectly on the server?

    Interestingly, when I just force the overwrite, the memory does not get out of control.  To me, it seems like the problem is these conflicts, like you said -- they grow and grow and eat up all the memory.  I just wonder what's causing them...  Either way, if this is the hack I have to do to get this to work, I am more than happy to do it.

    Travich
    Monday, February 22, 2010 10:22 PM
  • good to hear you got the batching working.

    can you confirm that your client is just downloading incremental changes? you may try to insert or update a row in the server and confirm that you only get exactly that row in the sync.

    Tuesday, February 23, 2010 12:46 AM
  • I quickly read through this thread; may I jump in?  Sorry to hear that you have been trying to nail this down without much progress yet.

    could you confirm a few things?

    1. is this during upload or download when you see the memory jump?
    2. what is the expected data change size? e.g. number of tables, row size, number of rows?
    3. I believe you are using the N-tier config since you mentioned your service layer; what is the proxy and service layer's logic?
    4. how many conflicts do you expect and what type of conflicts?
    5. what conflict resolution logic do you have in your app or service layer?

    there are a couple of known issues regarding possible high memory usage:
    1. the way ADO.NET handles DataRow growth in DataTables: it basically doubles the previously allocated size whenever the size needs to increase.
    2. during conflict resolution, the sync code holds two copies of the DataRow (server and client) for conflict review purposes.

    not sure if you are encountering the two issues above -- my feeling is not, since you mentioned 100k changes over a 38k row size, but please confirm.

    putting some breakpoints and logging code in the service layer (or wherever the memory issue resides) to detect memory usage and its growth pattern would be a way to get more hints on the root cause and the next steps to nail this down.
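    for example, something as simple as this around the GetChanges/ApplyChanges calls in the service layer (Log.Write is just a placeholder for whatever logging you already use):

        // simple sketch: log managed heap size and process working set around the
        // expensive sync calls to see where the growth happens.
        long managedBytes = System.GC.GetTotalMemory(false);
        long workingSetBytes = System.Diagnostics.Process.GetCurrentProcess().WorkingSet64;
        Log.Write(string.Format("Before ApplyChanges: managed = {0:N0} bytes, working set = {1:N0} bytes",
                                managedBytes, workingSetBytes));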

    thanks
    Yunwen
    This posting is provided "AS IS" with no warranties, and confers no rights.
    Tuesday, February 23, 2010 9:03 AM
  • 1.  Downloading
    2.  In this case I deleted 80k records -- that's all that should change, but it appears to be trying to pull all data down.
    3.  Client Provider from Designer:

    //------------------------------------------------------------------------------
    // <auto-generated>
    //     This code was generated by a tool.
    //     Runtime Version:2.0.50727.4927
    //
    //     Changes to this file may cause incorrect behavior and will be lost if
    //     the code is regenerated.
    // </auto-generated>
    //------------------------------------------------------------------------------

    namespace eFSR.Client {
       
       
        public partial class PowerEFSRLookupClientSyncProvider : Microsoft.Synchronization.Data.SqlServerCe.SqlCeClientSyncProvider {
           
            public PowerEFSRLookupClientSyncProvider() {
                this.ConnectionString = global::eFSR.Client.Properties.Settings.Default.ClienteFSRLookup_v6ConnectionString;
                this.ApplyChangeFailed += new System.EventHandler<Microsoft.Synchronization.Data.ApplyChangeFailedEventArgs>(PowerEFSRLookupClientSyncProvider_ApplyChangeFailed);
                this.ChangesApplied += new System.EventHandler<Microsoft.Synchronization.Data.ChangesAppliedEventArgs>(PowerEFSRLookupClientSyncProvider_ChangesApplied);
            }

            void PowerEFSRLookupClientSyncProvider_ApplyChangeFailed(object sender, Microsoft.Synchronization.Data.ApplyChangeFailedEventArgs e)
            {           
               
                e.Action = Microsoft.Synchronization.Data.ApplyAction.RetryApplyingRow;
            }
           
            public PowerEFSRLookupClientSyncProvider(string connectionString) {
                this.ConnectionString = connectionString;
            }
        }
       
        public partial class PowerEFSRLookupSyncAgent : Microsoft.Synchronization.SyncAgent {
           
            private CommonInventorySyncTable _commonInventorySyncTable;
           
            private TipSyncTable _tipSyncTable;
           
            private PartsSyncTable _partsSyncTable;
           
            private UnitSyncTable _unitSyncTable;
           
            private InventoryItemSyncTable _inventoryItemSyncTable;
           
            partial void OnInitialized();
           
            public PowerEFSRLookupSyncAgent() {
                this.InitializeSyncProviders();
                this.InitializeSyncTables();
                this.OnInitialized();
            }
           
            public PowerEFSRLookupSyncAgent(object remoteSyncProviderProxy) {
                this.InitializeSyncProviders();
                this.InitializeSyncTables();
                this.RemoteProvider = new Microsoft.Synchronization.Data.ServerSyncProviderProxy(remoteSyncProviderProxy);
                this.OnInitialized();
            }
           
            [System.Diagnostics.DebuggerNonUserCodeAttribute()]
            public CommonInventorySyncTable CommonInventory {
                get {
                    return this._commonInventorySyncTable;
                }
                set {
                    this.Configuration.SyncTables.Remove(this._commonInventorySyncTable);
                    this._commonInventorySyncTable = value;
                    this.Configuration.SyncTables.Add(this._commonInventorySyncTable);
                }
            }
           
            [System.Diagnostics.DebuggerNonUserCodeAttribute()]
            public TipSyncTable Tip {
                get {
                    return this._tipSyncTable;
                }
                set {
                    this.Configuration.SyncTables.Remove(this._tipSyncTable);
                    this._tipSyncTable = value;
                    this.Configuration.SyncTables.Add(this._tipSyncTable);
                }
            }
           
            [System.Diagnostics.DebuggerNonUserCodeAttribute()]
            public PartsSyncTable Parts {
                get {
                    return this._partsSyncTable;
                }
                set {
                    this.Configuration.SyncTables.Remove(this._partsSyncTable);
                    this._partsSyncTable = value;
                    this.Configuration.SyncTables.Add(this._partsSyncTable);
                }
            }
           
            [System.Diagnostics.DebuggerNonUserCodeAttribute()]
            public UnitSyncTable Unit {
                get {
                    return this._unitSyncTable;
                }
                set {
                    this.Configuration.SyncTables.Remove(this._unitSyncTable);
                    this._unitSyncTable = value;
                    this.Configuration.SyncTables.Add(this._unitSyncTable);
                }
            }
           
            [System.Diagnostics.DebuggerNonUserCodeAttribute()]
            public InventoryItemSyncTable InventoryItem {
                get {
                    return this._inventoryItemSyncTable;
                }
                set {
                    this.Configuration.SyncTables.Remove(this._inventoryItemSyncTable);
                    this._inventoryItemSyncTable = value;
                    this.Configuration.SyncTables.Add(this._inventoryItemSyncTable);
                }
            }
           
            [System.Diagnostics.DebuggerNonUserCodeAttribute()]
            private void InitializeSyncProviders() {
                // Create SyncProviders.
                this.LocalProvider = new PowerEFSRLookupClientSyncProvider();
            }
           
            [System.Diagnostics.DebuggerNonUserCodeAttribute()]
            private void InitializeSyncTables() {
                // Create SyncTables.
                this._commonInventorySyncTable = new CommonInventorySyncTable();
                this._commonInventorySyncTable.SyncGroup = new Microsoft.Synchronization.Data.SyncGroup("CommonInventorySyncTableSyncGroup");
                this.Configuration.SyncTables.Add(this._commonInventorySyncTable);
                this._tipSyncTable = new TipSyncTable();
                this._tipSyncTable.SyncGroup = new Microsoft.Synchronization.Data.SyncGroup("TipSyncTableSyncGroup");
                this.Configuration.SyncTables.Add(this._tipSyncTable);
                this._partsSyncTable = new PartsSyncTable();
                this._partsSyncTable.SyncGroup = new Microsoft.Synchronization.Data.SyncGroup("PartsSyncTableSyncGroup");
                this.Configuration.SyncTables.Add(this._partsSyncTable);
                this._unitSyncTable = new UnitSyncTable();
                this._unitSyncTable.SyncGroup = new Microsoft.Synchronization.Data.SyncGroup("UnitSyncTableSyncGroup");
                this.Configuration.SyncTables.Add(this._unitSyncTable);
                this._inventoryItemSyncTable = new InventoryItemSyncTable();
                this._inventoryItemSyncTable.SyncGroup = new Microsoft.Synchronization.Data.SyncGroup("InventoryItemSyncTableSyncGroup");
                this.Configuration.SyncTables.Add(this._inventoryItemSyncTable);
            }
           
            public partial class CommonInventorySyncTable : Microsoft.Synchronization.Data.SyncTable {
               
                partial void OnInitialized();
               
                public CommonInventorySyncTable() {
                    this.InitializeTableOptions();
                    this.OnInitialized();
                }
               
                [System.Diagnostics.DebuggerNonUserCodeAttribute()]
                private void InitializeTableOptions() {
                    this.TableName = "CommonInventory";
                    this.CreationOption = Microsoft.Synchronization.Data.TableCreationOption.DropExistingOrCreateNewTable;
                }
            }
           
            public partial class TipSyncTable : Microsoft.Synchronization.Data.SyncTable {
               
                partial void OnInitialized();
               
                public TipSyncTable() {
                    this.InitializeTableOptions();
                    this.OnInitialized();
                }
               
                [System.Diagnostics.DebuggerNonUserCodeAttribute()]
                private void InitializeTableOptions() {
                    this.TableName = "Tip";
                    this.CreationOption = Microsoft.Synchronization.Data.TableCreationOption.DropExistingOrCreateNewTable;
                }
            }
           
            public partial class PartsSyncTable : Microsoft.Synchronization.Data.SyncTable {
               
                partial void OnInitialized();
               
                public PartsSyncTable() {
                    this.InitializeTableOptions();
                    this.OnInitialized();
                }
               
                [System.Diagnostics.DebuggerNonUserCodeAttribute()]
                private void InitializeTableOptions() {
                    this.TableName = "Parts";
                    this.CreationOption = Microsoft.Synchronization.Data.TableCreationOption.DropExistingOrCreateNewTable;
                }
            }
           
            public partial class UnitSyncTable : Microsoft.Synchronization.Data.SyncTable {
               
                partial void OnInitialized();
               
                public UnitSyncTable() {
                    this.InitializeTableOptions();
                    this.OnInitialized();
                }
               
                [System.Diagnostics.DebuggerNonUserCodeAttribute()]
                private void InitializeTableOptions() {
                    this.TableName = "Unit";
                    this.CreationOption = Microsoft.Synchronization.Data.TableCreationOption.DropExistingOrCreateNewTable;
                }
            }
           
            public partial class InventoryItemSyncTable : Microsoft.Synchronization.Data.SyncTable {
               
                partial void OnInitialized();
               
                public InventoryItemSyncTable() {
                    this.InitializeTableOptions();
                    this.OnInitialized();
                }
               
                [System.Diagnostics.DebuggerNonUserCodeAttribute()]
                private void InitializeTableOptions() {
                    this.TableName = "InventoryItem";
                    this.CreationOption = Microsoft.Synchronization.Data.TableCreationOption.DropExistingOrCreateNewTable;
                }
            }
        }
    }

    4.  Zero conflicts -- this inventory database should be downloaded ONLY.
    5.  No conflict resolution -- but I did just add a retry when a change failure occurs -- this resolved the memory usage issue, I assume because it clears out the conflict, but it still doesn't explain WHY that much data is pulled down.  In this case, inserts are trying to occur when the data has already been pushed to the inventory database, and that is what's causing the conflicts. 

    I am still testing this morning -- I'll report back whatever I find.

    Travich
    Tuesday, February 23, 2010 3:18 PM
  • I'm also specifying download for each table instead of bidirectional. 

    It appears that even when I create a new database and then call sync after the initial download, everything is pulled down again with a ton of conflicts. 


    Travich
    • Edited by travich Tuesday, February 23, 2010 5:15 PM
    Tuesday, February 23, 2010 4:58 PM
  • another thing you may want to do to check why previously downloaded inserts are downloaded still is to check the last received anchor on the SDF file. this should change every time you do a sync.

    On your SDF file, run this query to verify that the receivedanchor column actually changes in between syncs.

    select * from __sysSyncArticles

    Tuesday, February 23, 2010 5:12 PM
  • Thanks I'll check.  This problem is making me pull my hair out I swear.

    Okay, I checked the received anchor -- it does NOT change even AFTER I've synced changes.  What would cause this:
    before
    0x0001000000FFFFFFFF010000000000000004010000000C53797374656D2E496E74363401000000076D5F76616C75650009B2000000000000000B
    after
    0x0001000000FFFFFFFF010000000000000004010000000C53797374656D2E496E74363401000000076D5F76616C75650009B1000000000000000B

    But I checked in the database that doesn't have any issues -- it doesn't appear to be changing either.  What am I missing?!?!?!

    Here's my guess as to what happened (shot in the dark) -- change tracking retention was only two days on the server by default.  The guy with the memory problems hadn't logged in for several days (past the two days).  When he finally synced, the change tracking information was gone, so his machine decided to pull everything down and apply it.  Since the data already existed, every item gave him a conflict, and his memory blew up since it's a total of 100k changes.

    Going forward, I implemented batching.  This works great; however, the problem is that after the initial download through batching, the very next sync never finishes and continuous conflicts come in (again, primary key issues). 

    So I've turned off batching and gone back to the initial way of doing business, with 30 days of change tracking retention instead of 2 days.  When there's a conflict I tell it to retry (in order to cause it NOT to go to the conflicts list and blow up my data).  This seems to work, but I'd prefer to use batching.  Does anyone have any idea why this might be happening to me AFTER the initial sync?  Please let me know.  :)   
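    For reference, bumping the retention itself was just a one-liner against the server database -- a quick sketch below (the database name is a placeholder, and normally you'd just run the ALTER DATABASE from Management Studio):

        // sketch: raise the change tracking retention to 30 days on the server database.
        // "MyServerDb" and serverConnectionString are placeholders.
        using (System.Data.SqlClient.SqlConnection conn =
                new System.Data.SqlClient.SqlConnection(serverConnectionString))
        {
            conn.Open();
            System.Data.SqlClient.SqlCommand cmd = conn.CreateCommand();
            cmd.CommandText = "ALTER DATABASE MyServerDb SET CHANGE_TRACKING " +
                              "(CHANGE_RETENTION = 30 DAYS, AUTO_CLEANUP = ON)";
            cmd.ExecuteNonQuery();
        }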
    Travich
    Tuesday, February 23, 2010 9:10 PM
  • it changed actually -- look at the end: from 9B200000 to 9B100000. But your after value is, I think, lower than your before value. (or is it just a typo?)

    Wednesday, February 24, 2010 3:25 AM
  • Sounds good. It seems we are on the right track to nail down this issue.

    so our next step is to find out why we have so many "unexpected" downloads (and hence, conflicts), right?

    here are a few things we can do:

    1. trace the sync -- this might be lengthy
    2. hook into the events to see what is selected and applied
    3. check the incremental select queries to see if there are any logic issues.

    those should shed some light.

    thanks
    Yunwen


    This posting is provided "AS IS" with no warranties, and confers no rights.
    Wednesday, February 24, 2010 4:13 AM
  • hi travich,

    just to expound on Yunwen's instructions:

    1. Enabling tracing - http://msdn.microsoft.com/en-us/library/cc807160(SQL.105).aspx
    2. What is selected and applied - this would be the ChangesSelected and ChangesApplied events
    3. Incremental selects - you'll find these under the InitializeCommands() of the designer generated code.

    In addition, you may also run SQL Profiler to trace the actual SQL sent.

    1. Run SQL Profiler (if you can filter the trace to the hostname of the client you're using to test the sync, so much the better -- you'll get fewer SQL statements to review)
    2. Pay particular attention to the last_received_anchor being passed in the incremental selects.


    I've got some idle time (while troubleshooting another SyncFx issue for changes not detected), so I'd be happy to help you if you could send me the scripts for the server db and the designer-generated code.

    cheers,

    junet 
    Wednesday, February 24, 2010 6:00 AM
  • My guess is that I copied and pasted the values in the wrong order, because at first glance I assumed they were the same.

    Also, I can no longer recreate the issue -- this is my fault because I reloaded all the database tables with an SSIS package (after truncating the tables) and then created a new version of the database.  I did this so I could start everything over from scratch, simply out of frustration.  I did make a backup of the production and client databases, so I can look at it again -- however, I'm running out of time on this issue, so I'm going to move forward with this fresh database.

    I will say batching does NOT work after the initial sync -- I get change conflicts every time -- so I removed batching and everything seems to work fine now.  The SSIS package is running nightly; it ran last night and I have no issues.  I'll keep everyone posted when the problem arises again (I'm sure it will).


    Travich
    Wednesday, February 24, 2010 7:21 PM
  • Hey JuneT, let me get back to you -- see my post above.  I REALLY appreciate everyone's suggestions and help.
    Travich
    Wednesday, February 24, 2010 7:22 PM
  • for your batching, you may want to review the part where it's checking if it's the first sync; i suspect the new get batch anchor is setting the last_received_anchor incorrectly.
    Thursday, February 25, 2010 1:50 AM
  • I will be very interested and curious to know about the issue around batching as well; when you have a few cycles, could you please revisit this and share an update with us?

    and as a side note: have you considered moving to SyncFx V2, where Sync Services for ADO.NET V3 shipped? It has robust, memory-size-based batching support, and it is easier to use as well. You can find details on this feature at http://msdn.microsoft.com/en-us/library/bb902828.aspx.

    thanks
    Yunwen
    This posting is provided "AS IS" with no warranties, and confers no rights.
    Thursday, February 25, 2010 6:58 AM
  • I am going to take another look at batching now.  I am having so many problems syncing folks out in the field that I think this is worth looking at again. 

    I would love to upgrade to v2 -- but I'm not sure what would be involved, what I would need to rewrite, and what changes I would need to make on the server.  I will keep you posted on the batching, thanks.
    Travich
    Friday, March 5, 2010 8:36 PM
  • for your batching, you may want to review the part where it's checking if it's the first sync; i suspect the new get batch anchor is setting the last_received_anchor incorrectly.
    Is there an example of what I am looking to do there?  Thanks!

    Travich
    Friday, March 5, 2010 8:37 PM
  • hi travich,

    can you post the get new batch anchor sp that you created?

    if you move to V2 to take advantage of the memory-based batching, there are provisioning APIs to automatically create the required objects in the database to enable change tracking and synchronization. 
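    to give a rough idea of what that looks like in V2 (collaboration providers), something along these lines -- the names below are from memory, so treat it as a sketch and double-check against the V2 docs:

        // rough sketch of V2 provisioning, from memory -- verify against the V2 docs.
        // Assumes: using Microsoft.Synchronization.Data; using Microsoft.Synchronization.Data.SqlServer;
        // "serverConn" is a SqlConnection to your server database.
        DbSyncScopeDescription scopeDesc = new DbSyncScopeDescription("InventoryScope");
        scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("CommonInventory", serverConn));
        scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("Parts", serverConn));

        SqlSyncScopeProvisioning provisioning = new SqlSyncScopeProvisioning(scopeDesc);
        provisioning.SetCreateTableDefault(DbSyncCreationOption.Skip);   // tables already exist
        provisioning.Apply(serverConn);   // creates tracking tables, triggers and stored procedures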

    tnx

    junet
    Saturday, March 6, 2010 1:53 AM
  • I am willing to look at version 2, but I am not sure what impact this would have on the client and server and what needs to be installed.
    Travich
    Saturday, March 6, 2010 5:31 AM
  • I am curious -- in version one of the designer file, is there any way that, when a conflict occurs, I can tell it to DELETE the conflict information? 
    Travich
    Monday, March 8, 2010 3:10 PM
  • for one, there is no Designer or Wizard for the Collaboration scenario in V2, so you'll have to hand-code your provisioning (what tables, columns, and filters to include, but not the SQL). It will also create some change tracking objects in both the client and the server.

    Monday, March 8, 2010 3:10 PM
  • You mentioned before that the exception being thrown was "A duplicate value cannot be inserted into a unique index."  I had/have this issue in a perfectly working 1.0 Sync App.

    I am using a peer to peer architecture.

    The issue surfaced when a new record was created on the client (insert) and then inserted on the server; the server triggers fire and update the creationdate column on the server.  This is then interpreted as an "Update" for the client.  The row already exists in the datatable from the insert, and this then throws an exception.

    I had to work around this; here is what I did on the local provider:

            // Re-attach the handler on every ApplyChanges call so it is only hooked once,
            // then let the base provider apply the changes.
            public override SyncContext ApplyChanges(SyncGroupMetadata groupMetadata, DataSet dataSet, SyncSession syncSession)
            {
                ApplyChangeFailed -= _ApplyChangeFailed;
                ApplyChangeFailed += _ApplyChangeFailed;
                return base.ApplyChanges(groupMetadata, dataSet, syncSession);
            }

            private static void _ApplyChangeFailed(object sender, ApplyChangeFailedEventArgs e)
            {
                // Duplicate-key conflict: drop the conflicting client row and retry with a force write.
                if ((e.Conflict.ErrorMessage ?? "").Contains("A duplicate value cannot be inserted into a unique index."))
                {
                    e.Conflict.ClientChange.Rows.Remove(e.Conflict.ClientChange.Rows[0]);
                    e.Action = ApplyAction.RetryWithForceWrite;
                    return;
                }

                // Anything else: log it to the event log...
                var Message = String.Format("Local Apply Changed for {1} Error at {0}", e.Conflict.SyncStage, e.Conflict.ServerChange != null ? e.Conflict.ServerChange.TableName : e.Conflict.ClientChange.TableName);
                Message = (e.Conflict.ErrorMessage ?? "Unknown Error") + Environment.NewLine + Message;
                Log.LogException(Message, EventLogEntryType.Error);

                // ...and continue, unless the conflict type indicates an error or is unknown.
                if (e.Conflict.ConflictType != ConflictType.ErrorsOccurred && e.Conflict.ConflictType != ConflictType.Unknown)
                    e.Action = ApplyAction.Continue;
            }
    In the case of a duplicate value, I remove the row from the datatable and retry with a force write.  If it is not a primary key violation, I log the error in the event log and continue.

    Hope this helps.

    Edit: the reason for the memory leak, I believe, is the unhandled ApplyChangeFailed events.  This also happened to me.  When you handle the exceptions, the memory leak should disappear.

    Monday, March 8, 2010 3:21 PM
  • WOW, thanks -- also, I was going to try calling the following.  I am going to try it now, as the problem is rearing its ugly head and locking up my users' apps again.  Wow, this sucks!


                e.Conflict.ClientChange.Clear();   
                e.Conflict.ServerChange.Clear();           
     

    Travich

    EDIT:  If this works you are a life saver
    Monday, March 8, 2010 4:23 PM
  • Hey Travich,

    I'm not sure if I would clear the entire table of changes for either the client or the server.  I forget whether the table only contains one record, but if it contains more than one record it might adversely impact the sync.

    I would simply remove the row from the table throwing the exception and then retry.

    Richard
    Monday, March 8, 2010 4:39 PM
  • travich, if you don't want any conflict information at all, you can clear the TablesProgress conflict collection (see http://blogs.msdn.com/mahjayar/archive/2009/01/15/dbsyncprovider-improving-memory-performance-involving-execissive-conflicts.aspx)
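    roughly something like this in your web service, after the server provider applies the changes and before the SyncContext goes back over the wire (just a sketch -- I'm assuming the Conflicts collection exposes a Clear() method, as that post suggests, and _serverSyncProvider stands for the DbServerSyncProvider your service already holds):

        // sketch only: clear the per-table conflict collections out of the SyncContext
        // before it is serialized back to the client. Assumes Conflicts supports Clear().
        public SyncContext ApplyChanges(SyncGroupMetadata groupMetadata, DataSet dataSet, SyncSession syncSession)
        {
            SyncContext context = _serverSyncProvider.ApplyChanges(groupMetadata, dataSet, syncSession);

            foreach (Microsoft.Synchronization.Data.SyncTableProgress tableProgress in context.GroupProgress.TablesProgress)
            {
                tableProgress.Conflicts.Clear();   // drop the conflicting row copies
            }

            return context;
        }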


    Monday, March 8, 2010 11:08 PM