How do I update a Scope or Table after a schema change?

  • Question

  • Short Version:

    I am working on a sync prototype and have created a table and the necessary Sync Framework components to successfully sync the table.  Now I have added a new column to the table; the table still syncs, but the new column is never populated.  How do I update the scope description to include the new column?

    Longer Version:

    I am running MS Sync Framework 2.0 with SQL Server 2005 Express SP2.  This particular situation is one we have to understand to decide how to proceed.  I have been following the "WebSharingAppDemo-SqlProviderEndToEnd" tutorial.

    One of the things I attempted was to modify four of the stored procedures generated by the Sync Framework to add the new column.  That produced a DbSyncException (which is not surprising; I didn't really expect it to work).  It seems to me like I should be modifying the DbSyncScopeDescription (or perhaps deleting the existing one and adding a new one that has all of the columns?), but I don't see how to do that.

    If someone could point me to an example of what needs to be done, I would appreciate it.

    Wednesday, February 24, 2010 7:41 PM


All replies

  • There was a question, "SqlSyncScopeProvisioning - How to change existing scope when using SqlSyncScopeProvisioning", on this forum at http://social.msdn.microsoft.com/Forums/en-US/syncdevdiscussions/thread/65d1e966-f4fb-44c4-9189-9d5d6c0f69e2.  You might want to check it out.

    An alternative is to continue with your approach, modifying the generated stored procedures and the Insert/Update/Delete triggers to include this particular column.


    Leo Zhou ------ This posting is provided "AS IS" with no warranties, and confers no rights.
    • Marked as answer by Yunwen Bai Thursday, February 25, 2010 6:45 AM
    Wednesday, February 24, 2010 10:04 PM
  • If you're just prototyping, you'll be better off starting from scratch or dropping all of the related SyncFx objects in your DB.

    There is no facility to modify a scope. If you create a new one, you'll have to specify Skip for all the other SQL objects SyncFx creates and specify Create for SetCreateProceduresForAdditionalScopeDefault. This will generate a new SelectChanges SP with your new column. The problem, though, is that your Insert and Update stored procedures are not updated to include the new column, so you will have to modify them yourself. When you do the sync, use the new scope you have just created. But then you'll still have the old scope definition and SelectChanges SP in your database.

    If you want to continue your current approach and you have updated the SPs but are still getting an error, check out the config_data column of the scope_config table (it's an XML column). I haven't tried this myself, but try modifying the config_data column to include the new column and see if it works :)
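    A rough illustration of the config_data edit suggested above: the snippet below appends a new <Col> entry with the next @P_n parameter ordinal. This is a minimal Python sketch; the XML fragment is a simplified, hypothetical stand-in for the real scope_config contents (structure assumed from the example shown later in this thread, namespaces and most Adapter attributes omitted).

```python
import xml.etree.ElementTree as ET

# Hypothetical, abbreviated config_data fragment (real scope_config rows
# carry more attributes and XML namespaces than shown here).
config_data = """<SqlSyncProviderScopeConfiguration>
  <Adapter Name="[Users]">
    <Col name="UserId" type="uniqueidentifier" param="@P_1" pk="true" />
    <Col name="LastName" type="nvarchar" size="50" null="true" param="@P_2" />
  </Adapter>
</SqlSyncProviderScopeConfiguration>"""

def add_column(xml_text, col_name, col_type, size):
    """Append a <Col> entry, assigning the next @P_n parameter ordinal."""
    root = ET.fromstring(xml_text)
    adapter = root.find(".//Adapter")
    next_ordinal = len(adapter.findall("Col")) + 1
    ET.SubElement(adapter, "Col", {
        "name": col_name, "type": col_type, "size": str(size),
        "null": "true", "param": f"@P_{next_ordinal}",
    })
    return ET.tostring(root, encoding="unicode")

updated = add_column(config_data, "FavoriteColor", "nvarchar", 50)
```

    The resulting string would then be written back with an UPDATE against scope_config, as shown elsewhere in this thread; the stored procedures still have to be edited separately to accept the new parameter.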
    Thursday, February 25, 2010 2:00 AM
  • Let me clarify a couple of points first, and then offer a response.  First, when I say I am "prototyping", that does in fact mean that I am writing code that I probably won't use in our actual application, but that does not mean it is not important.  I am prototyping to evaluate the Sync Framework as a replacement for our current sync solution.  Second, the purpose of my prototyping effort is to learn how the Sync Framework can be used in our environment and to ensure that it has all of the capability we need to support our applications.  So even though this is "toy code", it has a direct impact on whether or not we will choose to use this, and one of the things I am prototyping is "what happens when we add a column to an existing table?"

    Right now, I am overwhelmingly unimpressed with this framework.  As far as I can tell, it only "works" under a very limited set of circumstances and it does not adapt well to changes to the database.  Nor does it expose interfaces or describe the interfaces that could be used to do the job correctly. 

    Yesterday I read another article on MSDN that suggested that using SyncAdapters might be a better way to handle schema changes.  That sounds a bit tedious, but I thought I would look into doing that.  Unfortunately, I don't see where the SyncAdapter that is presumably there is exposed anywhere in the interface.  Also, the SyncAdapter class appears to be intended to support a Template pattern, meaning there is either a base class or a controller class that is using its services.  However, I can't look at any of the "terrific" documentation and see what the controller is doing, what information it provides to the SyncAdapter, or what information it expects back from the SyncAdapter.  It seems like everywhere I look, I find "gaps" in either the knowledge base or the capability needed to do what we need to do.

    Maybe I'm just too dumb to get it, so let's try a different approach.

    As of right now, our database consists of 81 tables and occupies about 2.5 GBytes of disk space on the central server.  Since all 81 tables will need to be synched, this means that when I build the initial sync framework, I am going to do the following:

    + Add new scope_info and scope_config tables
    + Add 81 new "_tracking" tables
    + Add several new "_tombstone" tables (not part of the sync framework, but we need these anyway)
    + Drop the existing triggers on 81 tables and generate 3 new triggers per table (243 total)
    + Drop the existing stored procedures we use for synching and generate 8 new stored procedures per table (648 total)
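    The totals in the list above follow directly from the per-table figures; a quick sanity check of the arithmetic (plain Python, nothing Sync Framework specific):

```python
tables = 81

triggers_per_table = 3   # insert / update / delete triggers
procs_per_table = 8      # selectchanges, selectrow, insert, update, delete,
                         # plus the three *metadata procedures

total_triggers = tables * triggers_per_table   # 243
total_procs = tables * procs_per_table         # 648
tracking_tables = tables                       # one _tracking table per table

print(total_triggers, total_procs, tracking_tables)  # 243 648 81
```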

    On the maintenance side, our application is growing and adding new capability all of the time.  Our initial "1.0" version was originally deployed about a year ago.  Since that time, we have modified the database 34 times for bug fixes and new functionality.  A major release may alter 10 or more tables (new columns, etc.) and may add 5 or 6 (or more) new tables.  A minor release may alter just a few tables and possibly add a new table.

    Here is the big question:  Given that environment, how do I maintain the sync framework?  Specifically, what is the "best practice" to use when table schemas are modified?  Do I drop all of the scope information for the table and recreate it?  If so, how do I do that?  Do I "version" my scope information and upgrade it, leaving the original version intact?  What do I do to keep my synch framework up to date?

    So far, I have not seen an answer to this question that I find particularly appealing.

    Earlier, when I tried to "hand code" the stored procedures, I was not suggesting that "hand coding" was an attractive solution; I was just looking for something that worked.  I had already looked at the scope_config table and found that the config_data field was just a big XML document which included a definition of all of the columns in the table, like the following:

    <Col name="LastName" type="nvarchar" size="50" null="true" param="@P_3" />

    Right now, my guess is that when I altered the stored procedures and added the new column, the reason this failed is that the new column was not mentioned in the config_data field for the table.  I could "hand code" that as well, but that would be unreliable (and tedious) in the long run because the sync framework might actually change and grow over time.

    On the good side, from what I have seen thus far, once the framework components have been built, synching the databases is very easy.  The fact that we can use scope descriptors to tailor the sync to work with various subsystems is a real asset.  In fact, this need is the driving force behind this particular effort.  But for this to be useful to us, we have to be able to manage changes to the database and, right now, I don't see how to do that in any realistic, reliable, and cost-effective way.

    Thursday, February 25, 2010 1:46 PM
  • Just an additional FYI.  If you "hand code" the data entry in scope_config.config_data to include your table changes and then modify four stored procedures generated by the framework to include the table changes, you can sync new columns without having to drop and recreate a scope descriptor.

    For example, in my prototype I have a table named "Users" and I added a new column named "FavoriteColor".  To get this new column to sync, I executed the following SQL statement:

    Update scope_config

       Set config_data = '<SqlSyncProviderScopeConfiguration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">

      <Adapter Name="[Users]" GlobalName="[Users]" TrackingTable="[Users_tracking]" SelChngProc="[Users_selectchanges]" SelRowProc="[Users_selectrow]" InsProc="[Users_insert]" UpdProc="[Users_update]" DelProc="[Users_delete]" InsMetaProc="[Users_insertmetadata]" UpdMetaProc="[Users_updatemetadata]" DelMetaProc="[Users_deletemetadata]" InsTrig="[Users_insert_trigger]" UpdTrig="[Users_update_trigger]" DelTrig="[Users_delete_trigger]">

        <Col name="UserId" type="uniqueidentifier" param="@P_1" pk="true" />

        <Col name="FirstName" type="nvarchar" size="50" null="true" param="@P_2" />

        <Col name="LastName" type="nvarchar" size="50" null="true" param="@P_3" />

        <Col name="Login" type="nvarchar" size="128" null="true" param="@P_4" />

        <Col name="Password" type="nvarchar" size="50" null="true" param="@P_5" />

        <Col name="Role" type="nvarchar" size="50" null="true" param="@P_6" />

        <Col name="FavoriteColor" type="nvarchar" size="50" null="true" param="@P_7" />

      </Adapter>

    </SqlSyncProviderScopeConfiguration>'

     Where config_id = 'C7462B13-DAAA-4B4A-9064-D69A32878C9A'

    This added the new column to the scope descriptor.  I was then able to add a new input parameter "@P_7" to the stored procedures Users_insert, Users_selectchanges, Users_selectrow, and Users_update and then assign that value to the "FavoriteColor" column, which I also added by hand.
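    One small safety net when hand-editing config_data like this is to verify that the @P_n ordinals in the edited fragment are still sequential before running the UPDATE. A minimal Python sketch (the abbreviated fragment and the helper function are illustrative, not part of the framework):

```python
import re
import xml.etree.ElementTree as ET

# Abbreviated stand-in for the hand-edited Adapter fragment shown above
# (most attributes and columns omitted for brevity).
adapter_xml = """<Adapter Name="[Users]">
  <Col name="UserId" type="uniqueidentifier" param="@P_1" pk="true" />
  <Col name="FirstName" type="nvarchar" size="50" null="true" param="@P_2" />
  <Col name="FavoriteColor" type="nvarchar" size="50" null="true" param="@P_3" />
</Adapter>"""

def check_params(xml_text):
    """Return True if the @P_n ordinals are exactly 1..N, no gaps or repeats."""
    cols = ET.fromstring(xml_text).findall("Col")
    ordinals = sorted(int(re.match(r"@P_(\d+)", c.get("param")).group(1))
                      for c in cols)
    return ordinals == list(range(1, len(cols) + 1))

print(check_params(adapter_xml))  # a gap or duplicate ordinal would print False
```

    A gap or duplicate here is exactly the kind of mistake that is easy to make when also editing the four stored procedures by hand.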

    Would I recommend this solution going forward?  Absolutely not.  Hand editing code that was originally generated by a framework is always risky business.  However, given the way that we update our application, we could actually use this as an interim solution if we had to.

    But I still don't like it.
    Thursday, February 25, 2010 2:44 PM

  • I made the mistake of implementing this framework before properly testing it.


    Only after implementing it did I realize just how many glaring gaps there are in this framework.

    The old adage "you can solve any problem if you abstract it enough" has been disproven by this framework. They have abstracted and abstracted the abstractions ad absurdum and still haven't been able to cover even the most basic scenarios.

    I, like you and every other developer, need to change my database on a regular basis. Databases are not static; they are constantly changing.

    Any ideas?


    Friday, June 4, 2010 9:26 PM