Facilitating a quicker sync deployment with a database with large tables

  • Question

  • I have a database with large tables; the biggest has around 4 million rows.  I'm trying to sync the whole database down to a clean database that has no records.  This database will reside on each of my client machines.  The first time we sync down all the records, it takes about an hour to complete for each client machine, which is to be expected given the size of some of the tables.

    I am looking for a way to speed up this process when deploying our application.  Could someone tell me how the collaboration scenario tracks individual servers?  We are using the SqlSyncProvider with SQL Server 2008.  My proposed solution for deployment is to sync one client locally and then manually copy its database to the other client machines.  This would happen only to seed each local database with all the records from the server; after that, each local machine would be free to sync up and down.

    After provisioning a local database, does the sync framework give each local database an individual ID, or can I reuse a first-time-synced database on other machines?  Is this possible?


    Wednesday, May 26, 2010 7:10 PM


All replies

  • Have a look at SqlCeSyncStoreSnapshotInitialization.GenerateSnapshot.

    Using snapshots, you can provision a SQL CE DB on the server side, then generate a new SQL CE snapshot from that SQL CE DB.

    You then distribute the snapshot to your clients (i.e., provision your new clients using this snapshot).
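
    In outline, the server-side preparation might look like the following; the scope name "AppScope", file names, and connection strings are placeholders, and this sketch assumes the Sync Framework 2.1-style provisioning APIs:

    using System.Data.SqlClient;
    using System.Data.SqlServerCe;
    using Microsoft.Synchronization;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;
    using Microsoft.Synchronization.Data.SqlServerCe;

    // Placeholder connection strings and scope name.
    using (var serverConn = new SqlConnection(@"Data Source=SERVER;Initial Catalog=AppDb;Integrated Security=True"))
    using (var ceConn = new SqlCeConnection(@"Data Source=seed.sdf"))
    {
        // Provision the SQL CE database with the scope already defined on the server.
        DbSyncScopeDescription scopeDesc =
            SqlSyncDescriptionBuilder.GetDescriptionForScope("AppScope", serverConn);
        new SqlCeSyncScopeProvisioning(ceConn, scopeDesc).Apply();

        // Fill the CE copy once from the server.
        var sync = new SyncOrchestrator
        {
            LocalProvider = new SqlCeSyncProvider("AppScope", ceConn),
            RemoteProvider = new SqlSyncProvider("AppScope", serverConn),
            Direction = SyncDirectionOrder.Download
        };
        sync.Synchronize();

        // Generate the distributable snapshot (.sdf), which carries both the
        // data and the sync metadata needed by new clients.
        new SqlCeSyncStoreSnapshotInitialization().GenerateSnapshot(ceConn, @"seed_snapshot.sdf");
    }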

    Wednesday, May 26, 2010 8:11 PM
  • I am syncing SQL Server 2008 to SQL Server Express 2008. What class can I use for snapshots in this environment?
    Wednesday, May 26, 2010 10:05 PM
  • Snapshot only works for SQL CE. Unfortunately, SyncFx has no out-of-the-box support for fast initialization of a server DB for SQL Server and SQL Express.

    Thanks,

     


    Ann Tang
    Thursday, May 27, 2010 12:22 AM
  • hi ricosol,

    you can provision your SQL Express clients using the SQL CE snapshot.

    steps would be (a rough code sketch follows the list):

    1. provision SQL CE from server scope

    2. generate snapshot from the SQL CE copy

    3. distribute snapshot copy

    4. synchronize snapshot copy on client to server

    5. provision SQL Express from snapshot

    6. synchronize SQL Express to server (you may now delete the snapshot)
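
    A rough sketch of steps 4 through 6 on a client, with placeholder names and connection strings, again assuming the Sync Framework 2.1-style APIs. One way to read step 5 is to provision SQL Express from the scope description stored in the snapshot and then seed it with a local sync:

    using System.Data.SqlClient;
    using System.Data.SqlServerCe;
    using Microsoft.Synchronization;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;
    using Microsoft.Synchronization.Data.SqlServerCe;

    using (var ceConn = new SqlCeConnection(@"Data Source=seed_snapshot.sdf"))
    using (var expressConn = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=LocalDb;Integrated Security=True"))
    using (var serverConn = new SqlConnection(@"Data Source=SERVER;Initial Catalog=AppDb;Integrated Security=True"))
    {
        // Step 4: bring the snapshot copy up to date with the server.
        new SyncOrchestrator
        {
            LocalProvider = new SqlCeSyncProvider("AppScope", ceConn),
            RemoteProvider = new SqlSyncProvider("AppScope", serverConn),
            Direction = SyncDirectionOrder.DownloadAndUpload
        }.Synchronize();

        // Step 5: provision SQL Express with the scope described in the
        // snapshot, then pull the snapshot's rows into it with a local sync.
        DbSyncScopeDescription scopeDesc =
            SqlCeSyncDescriptionBuilder.GetDescriptionForScope("AppScope", ceConn);
        new SqlSyncScopeProvisioning(expressConn, scopeDesc).Apply();

        new SyncOrchestrator
        {
            LocalProvider = new SqlSyncProvider("AppScope", expressConn),
            RemoteProvider = new SqlCeSyncProvider("AppScope", ceConn),
            Direction = SyncDirectionOrder.Download
        }.Synchronize();

        // Step 6: from now on, SQL Express syncs directly with the server.
        new SyncOrchestrator
        {
            LocalProvider = new SqlSyncProvider("AppScope", expressConn),
            RemoteProvider = new SqlSyncProvider("AppScope", serverConn),
            Direction = SyncDirectionOrder.DownloadAndUpload
        }.Synchronize();
    }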

    Thursday, May 27, 2010 2:38 AM
  • Is there any other way besides using snapshots to solve this problem, namely the idea I proposed initially of syncing one client to the server and then copying that database to my other clients?  This would depend on how syncing identifies clients, and whether each client gets its own ID.  How does syncing identify the clients?  Thanks.
    Thursday, May 27, 2010 3:03 PM
  • Anyone else have any thoughts on my last post?
    Tuesday, June 1, 2010 10:41 PM
  • Can anyone answer these questions?

    How does syncing identify the client servers in the collaboration scenario?  Does the Sync Framework identify servers by an ID?  Thanks.
    Friday, June 4, 2010 8:51 PM
  • You could try backing up a client SQL Server DB that has been provisioned by SyncFx, then on another client machine restoring the first client's DB and completing the PerformPostRestoreFixup process. The second client should then be ready to sync as well.

    Thanks,


    Ann Tang
    Friday, June 4, 2010 11:13 PM
  • Ann, could you tell me what you mean by the PerformPostRestoreFixup process? Thanks.
    Tuesday, June 8, 2010 4:19 PM
  • That means creating a SqlSyncStoreRestore object and calling its PerformPostRestoreFixup method, as in

    http://msdn.microsoft.com/en-us/library/microsoft.synchronization.data.sqlserver.sqlsyncstorerestore.performpostrestorefixup.aspx
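
    A minimal sketch of that call, assuming the backup has already been restored on the new machine and using a placeholder connection string:

    using System.Data.SqlClient;
    using Microsoft.Synchronization.Data.SqlServer;

    // Run this against the freshly restored copy of the client database.
    using (var conn = new SqlConnection(
        @"Data Source=.\SQLEXPRESS;Initial Catalog=RestoredClientDb;Integrated Security=True"))
    {
        var storeRestore = new SqlSyncStoreRestore(conn);
        // Repairs the sync metadata in the restored database so it can take
        // part in synchronization as its own endpoint.
        storeRestore.PerformPostRestoreFixup();
    }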

    Tuesday, June 8, 2010 5:31 PM
    Answerer
  • Thanks, guys, for your previous feedback.  I tried implementing a SqlSyncStoreRestore object and running PerformPostRestoreFixup on a database that I had just restored, and I got this error:

    The SqlTransaction has completed; it is no longer usable.

    Could someone please tell me why I received this error?  I even tried upping the timeout on the SqlConnection object that is passed in, and I still received the same error.

    Also, could someone explain some of the sync metadata that gets updated when PerformPostRestoreFixup is executed?  Thanks.


    Friday, June 11, 2010 5:05 AM
  • I think you likely hit a bug in PerformPostRestoreFixup in the Sync Framework code. For a large DB, it will time out on the SQL command used by PerformPostRestoreFixup; that timeout is different from the SqlConnection timeout you specified. This will be fixed in the next release.

     

    Friday, June 11, 2010 6:10 PM
    Answerer
  • Is there any workaround for this bug, or can I manually update the metadata?  Any ideas?
    Friday, June 11, 2010 6:22 PM
  • You could try reducing the size of your tables and then inserting the rows back after PerformPostRestoreFixup is done.
    Friday, June 11, 2010 7:59 PM
    Answerer
  • If I were to reduce the size of the tables and then reinsert the rows, won't that flag all those rows as inserts in the change-tracking tables, and then, when I try to sync, won't it try to synchronize all those rows down?

    I guess what I am getting at is: what does PerformPostRestoreFixup give you in regard to the sync metadata?  Does it clear out all the timestamps that say these rows have changed?

    Friday, June 11, 2010 8:22 PM
  • PerformPostRestoreFixup makes sure the endpoint being restored knows about everything in the backup file. If you have succeeded in restoring endpoint A's backup file on endpoint B, endpoint B will know everything from endpoint A. At that point, endpoint A also knows every row's version on endpoint B, so if you do a sync, no rows will be synced over. If you add new rows back after the two endpoints have synced with each other, those new rows will be synced.

    However, I have to say this workaround has not been tested by the Sync Framework team and is not officially supported. You may try it, but it is not guaranteed to work.

    Monday, June 14, 2010 5:27 PM
    Answerer
  • We have done the following for a large table. In this circumstance, the table is only ever synchronized from the server to the client. This strategy may or may not work for you:

    1. Add the large table to the server database.
    2. Provision a scope using SqlSyncScopeProvisioning and specify that no tracking records are to be created for the initial contents of the database. This is in essence a base table, a starting point for future synchronizations.
    3. Distribute the large table to a client as part of the software installation package. This is the very same base table that was initially loaded at the server.
    4. Provision the scope at the client, specifying that no tracking records are to be created for the initial contents of the table.
    5. Synchronize the client with the server. The client will receive any changes since the original install of the table at the server.

    Initial tracking-record creation is controlled by the SqlSyncScopeProvisioning.SetPopulateTrackingTableDefault() method; the default is to create tracking records for any initial contents.
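
    A minimal sketch of that provisioning call, with a placeholder table name, scope name, and connection string (Sync Framework 2.1-style APIs); the same provisioning would run at both the server and the client:

    using System.Data.SqlClient;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;

    using (var conn = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=AppDb;Integrated Security=True"))
    {
        var scopeDesc = new DbSyncScopeDescription("AppScope");
        scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("BigTable", conn));

        var provisioning = new SqlSyncScopeProvisioning(conn, scopeDesc);
        // Skip creating tracking records for the rows already in the table;
        // only changes made after provisioning will be tracked and synced.
        provisioning.SetPopulateTrackingTableDefault(DbSyncCreationOption.Skip);
        provisioning.Apply();
    }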

    Tuesday, July 6, 2010 12:10 AM
  • Speedware, I tried some of your methodology here.  Here are the issues I have with it:

    For the large tables, I do not put tracking records in until the point at which I want a snapshot, using what you have described.  This has been done on both the server and the client. I can sync, and everything seems to be working pretty well, except that I am getting sync failure warnings when applying changes.

    When I look at what the error is, it is a primary key constraint exception, because the sync was trying to add records to the database that are already there.  This happened because I ran a test on the server and changed some existing data that did not have a tracking record.  When I changed it, the server of course started putting in tracking records for the changed data.  When I go to sync, it tries to insert some records that are already there in the destination database.

    Have you encountered this problem, and what was the workaround?  I tried setting e.Action to ApplyAction.Continue to ignore it for the next sync; however, on the next sync it tries to apply the record again and I get a primary key constraint failure again.  Is there any way to clear out the errors so they occur once and are ignored afterwards?  If I can't ignore them, the failure errors will keep stacking up as I do subsequent syncs.

    In my scenario, I would still like to sync both up and down, and in this case, using your method, I may not be able to.

    Any other tips would be helpful.

    Wednesday, August 4, 2010 7:16 AM
  • Ricosol,

    If you have started with the same "snapshot" at both ends, then you should not be getting an insert during a sync on an existing record: if a record exists at either end then, by definition, it exists at the other end, meaning that operations on existing records are either updates or deletes.

    Nevertheless, we have also put together an application that does a similar thing. Basically, we have a "data feeder" database on the client into which records are written, then sync'd to the central server and finally removed from the client database (this is facilitated by removing the delete triggers so that the deletes do not get tracking data; we also remove tracking records in the client for those records that have been sync'd).

    This presented a problem at the server because the logic of the insert SP (as generated by the standard SqlSyncScopeProvisioning class, iirc) at the server is as follows:

    if no tracking record exists then insert the record into the table
    

    So that means that if we already have a tracking record at the server (because the record was once added at the client, sync'd to the server, deleted at the client and then re-added at the client), the resulting insert during the sync gets ignored. We changed the insert logic to be...

    if no tracking record exists then insert the record into the table
    else update the record in the table

    The point of saying all of this is that you can tailor the SPs at the receiving end to solve problems like the one you are seeing. In your case, you could change the insert logic from the standard form above to

    if the record does not exist in the table then insert the record into the table
    else update the record in the table
    

    I know that is inefficient because it reads the record first, but the standard SP reads the tracking record anyway, so it is no worse.
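
    Alternatively, on the Sync Framework side you may be able to get a similar effect without touching the SPs, by forcing the write from an ApplyChangeFailed handler. A rough sketch, assuming the destination is a SqlSyncProvider ("provider" is a placeholder name):

    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;

    void AttachConflictHandler(SqlSyncProvider provider)
    {
        provider.ApplyChangeFailed += (sender, e) =>
        {
            // An incoming insert collided with a row that already exists locally.
            if (e.Conflict.Type == DbConflictType.LocalInsertRemoteInsert)
            {
                // Re-apply the incoming row with force write, so it does not
                // fail again with the same constraint error on every sync.
                e.Action = ApplyAction.RetryWithForceWrite;
            }
        };
    }

    RetryWithForceWrite re-applies the failed row with the force-write flag set, which the generated SPs can use to turn the colliding insert into an update rather than raising the primary key error on each subsequent sync.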

    HTH

    Steve

     

    Wednesday, August 4, 2010 1:40 PM