Deprovisioning / Reprovisioning Issues

Question
-
I'm using SFX 2.1.
Here's the problem. I need to be able to reprovision a database when the data model changes, but I don't want to recreate the whole database from scratch. There is a lot of existing data already downloaded on the client's machine, and it doesn't make sense to delete it and start over; redownloading everything would take hours. I am able to reprovision, but the existing _tracking records are not updated accordingly.
So in order to reprovision I do three things.
1) I use SqlSyncScopeDeprovisioning to deprovision the database
2) I execute a .sql file which alters the data model, updating it to reflect the new model on the main server.
3) I simply provision again with the new updated scope names that reflect the new data model.
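For reference, the three steps above might look roughly like the following (a minimal sketch only, assuming Sync Framework 2.1; the connection string, scope names, table name, and script path are all hypothetical placeholders, and the real code would need error handling):

```csharp
// Sketch of the deprovision / alter / reprovision sequence described above.
using System.Data.SqlClient;
using System.IO;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

using (var clientConn = new SqlConnection(
    @"Data Source=.;Initial Catalog=ClientDb;Integrated Security=True"))
{
    clientConn.Open();

    // 1) Deprovision the old scope (this drops its _tracking rows,
    //    stored procedures and triggers).
    var deprovisioning = new SqlSyncScopeDeprovisioning(clientConn);
    deprovisioning.DeprovisionScope("OrdersScope_v1");

    // 2) Apply the schema-change script that updates the data model.
    using (var cmd = new SqlCommand(File.ReadAllText("alter_model.sql"), clientConn))
        cmd.ExecuteNonQuery();

    // 3) Provision again under a new scope name reflecting the new model.
    var scopeDesc = new DbSyncScopeDescription("OrdersScope_v2");
    scopeDesc.Tables.Add(
        SqlSyncDescriptionBuilder.GetDescriptionForTable("Orders", clientConn));
    new SqlSyncScopeProvisioning(clientConn, scopeDesc).Apply();
}
```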
Here's the problem. When I deprovision, all _tracking records are deleted. When I reprovision, new _tracking records are created, but these records are incorrect. They do not look like the _tracking records I would get if I had provisioned and synced a blank database. So when I try to sync after deprovision/reprovision, the Orchestrator first redownloads all the existing data in the database, and then on upload it tries to upload all these records to the server again as changed records. The sync fails.
Here is some sample data
_tracking record after provisioning and syncing a blank database:
Id: 84E7E427-1AA0-4F32-A883-AA103FE95307
update_scope_local_id: 6
scope_update_peer_key: 1
scope_update_peer_timestamp: 27655
local_update_peer_key: 0
local_update_peer_timestamp: 0x000000000000215E
create_scope_local_id: 6
scope_create_peer_key: 1
scope_create_peer_timestamp: 27655
local_create_peer_key: 0
local_create_peer_timestamp: 8532
sync_row_is_tombstone: 0
restore_timestamp: NULL
last_change_datetime: 2010-10-23 17:05:45.083
_tracking record after deprovisioning and reprovisioning, then syncing down again:
Id: 84E7E427-1AA0-4F32-A883-AA103FE95307
update_scope_local_id: NULL
scope_update_peer_key: 0
scope_update_peer_timestamp: 15834
local_update_peer_key: 0
local_update_peer_timestamp: 0x0000000000004010
create_scope_local_id: NULL
scope_create_peer_key: NULL
scope_create_peer_timestamp: NULL
local_create_peer_key: 0
local_create_peer_timestamp: 13167
sync_row_is_tombstone: 0
restore_timestamp: NULL
last_change_datetime: NULL
Saturday, October 23, 2010 12:41 PM
Answers
-
Hi,
Thanks for the feedback and your interest in the SyncFx database provider. If you can keep your schema changes backward compatible with the original schema, it is OK to share the same server with multiple versions of clients. Since I don't know your detailed schema changes, the safe recommendation is to have another copy of the server.
The Sync Framework database providers are a set of libraries that help developers write their database sync apps. Since the product is still at version 2.1, we have more functionality to add in future versions to support more user scenarios. As I mentioned in my previous reply, the scenario you described is not supported today. Until the database provider has this feature, you have to write your own custom logic on top of the existing database provider library. I'm glad to see that you are also aware that it is not easy to handle your specific schema changes even manually; it means the SyncFx team needs time to define and implement an appropriate, generic solution as well.
Thanks,
Dong
This posting is provided AS IS with no warranties, and confers no rights.
- Proposed as answer by Dong Cao (Microsoft employee), Friday, October 29, 2010 10:50 PM
- Marked as answer by Dong Cao (Microsoft employee), Monday, November 1, 2010 11:42 PM
Thursday, October 28, 2010 5:41 PM
All replies
-
When you provision with existing data already in the tables, Sync Fx will by default populate the _tracking tables from that existing data, and the _tracking records will make it appear that the rows "originally" came from this database, even though these rows were previously downloaded under the previous scope.
When you provision a blank database and initiate the first sync, the _tracking tables are populated from the data that is downloaded, and the _tracking rows will imply that those rows came from another peer. That is why the _tracking records differ.
Monday, October 25, 2010 2:10 AM -
Yes, you are describing part of the problem, but what's the solution? And actually, I don't think your description is entirely accurate. You'll notice that the _scope_local_id is NULL. Shouldn't it have an ID that matches a scope in scope_config regardless of where it originated? I'm still looking for a solution to this. Is there a way to keep SFX from dropping the _tracking tables/records on deprovision? If that were possible, one solution would be to deprovision without deleting the _tracking records, then provision, then update the existing _tracking records with the new _scope_local_id.
Monday, October 25, 2010 3:57 AM
-
The update_scope_local_id and create_scope_local_id are set to NULL when the row originated from the local node. When you deprovision and re-provision, the tracking table is populated from rows in the existing table, and those rows are known to Sync Fx as having "originated" from the local node.
Monday, October 25, 2010 4:54 AM
-
I had a similar problem to what you are describing.
My solution: http://social.microsoft.com/Forums/en-US/syncdevdiscussions/thread/b4709619-4447-45cd-8c63-7065a7a8aad2
I'm using the 2.0 SDK, so if there are changes to how the sync metadata is handled or to its layout, it might not work :-(
You might dig out the solution by looking at my script.
Monday, October 25, 2010 8:08 AM -
AJ8829, sorry to bother you. Can you please share the sample WebSharingAppDemo that you posted here: http://social.microsoft.com/Forums/en-US/syncdevdiscussions/thread/8628f61e-bbc0-4303-86f7-3a9a4cf1ebcb/#5cee0336-7427-4c75-a15a-62478b3f435f
Thanks
lavanya
lavanyar
Monday, October 25, 2010 2:44 PM -
Why do the timestamps get all screwed up? Based on what you're saying, the id being NULL shouldn't matter when it comes to syncing again, right? But because the timestamps get screwed up when I sync after deprovision/reprovision, sync doesn't work. Some records will sync up, others won't. There is no clear pattern to it.
Monday, October 25, 2010 5:40 PM
-
Hi,
May I know if you deprovision and re-provision both the server and client databases? If not, you should deprovision both sides for schema changes. Also, for the failed sync, can you share the exception?
After deprovisioning and re-provisioning the database, the NULL Create_Scope_Local_ID is expected, because all existing records in a newly provisioned table are viewed as "local changes". Create_Scope_Local_ID is only set when the record is a sync insert change.
You should expect a lot of insert-insert conflicts in this situation; with the default conflict action, the sync should still work, although the performance of the initial sync will not be very good.
Thanks,
Dong
This posting is provided AS IS with no warranties, and confers no rights.
Monday, October 25, 2010 7:02 PM -
Ok, I'll answer your questions one by one.
1) May I know if you deprovision and re-provision both server and client databases? If not, you should deprovision both sides for schema changes.
First, in this current example I'm not actually changing the database schema at all. I leave the data model the same and simply try to deprovision/reprovision the client successfully. No real changes are made on either the client or the server.
When I do schema changes for real, the answer to your question is NO. The reason is that I'll have multiple versions of the client floating around. Older clients need to be able to sync up before they upgrade to the new version. For a newer client version with a changed schema, I simply create ADDITIONAL scopes that reflect the new schema.
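Concretely, the additional-scope approach might be provisioned like this (a sketch only, under the assumption that a v1 scope already exists on the same tables; the connection string, scope and table names are hypothetical, and `DbSyncCreationOption.Skip` tells SyncFx to reuse the base and tracking tables the earlier scope created):

```csharp
// Sketch: add a second scope for the upgraded schema while leaving the v1
// scope (and its scope-specific stored procedures) in place for old clients.
using System.Data.SqlClient;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

using (var serverConn = new SqlConnection(
    @"Data Source=.;Initial Catalog=ServerDb;Integrated Security=True"))
{
    serverConn.Open();

    var v2Desc = new DbSyncScopeDescription("OrdersScope_v2");
    v2Desc.Tables.Add(
        SqlSyncDescriptionBuilder.GetDescriptionForTable("Orders", serverConn));

    var provisioning = new SqlSyncScopeProvisioning(serverConn, v2Desc);
    // The base and tracking tables already exist from the v1 scope; skip
    // creating them and only add the new scope's metadata and procedures.
    provisioning.SetCreateTableDefault(DbSyncCreationOption.Skip);
    provisioning.SetCreateTrackingTableDefault(DbSyncCreationOption.Skip);
    provisioning.Apply();
}
```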
2) Also, for the failed sync, can you share the Exception?
Actually, there is no exception; the sync does not fail in that sense. Several things happen after syncing post deprovision/reprovision:
a) When I sync down the contents of the client database are completely redownloaded. This defeats the whole point of doing the deprovision/reprovision in the first place. The whole point of the deprovision/reprovision is to keep the client from having to spend hours redownloading everything. Well, if they have to download it anyway I might as well just delete the MDF and recreate everything from scratch.
b) When I sync up, a large number of records won't upload. This appears random. A number of records that don't even exist on the server (records that originated on the client and haven't been synced up yet) won't upload. The sync process shows these records as having been successfully applied on upload. On the server I will find a _tracking record for the record, but the actual data record was not uploaded. Sometimes the _tracking record doesn't exist either. Note that this does not occur for all "new" records that originated on the client; a lot of them sync up just fine, but a lot don't.
c) Because the timestamps are all wrong, some tables try to re-upload everything. Some scopes will take 5-10 minutes to process, not because of transmission time but because the DB server will be jacked at 80+% CPU usage for 5-10 minutes while it compares all the stuff the client is trying to re-upload. These records already exist on the server, though, and they ultimately fail with a failure message because they already exist; there are no changes to apply. Note that this doesn't happen for all my scopes or tables, just a couple. Also, it doesn't happen only on the initial sync. Every time I sync, it will try to re-upload these same records over and over. No doubt this is because the sync failed: the same timestamp differences that made it want to re-upload the first time still exist, so it's caught in a loop.
Tuesday, October 26, 2010 3:50 AM -
Hi,
Thanks for answering my questions. First, your understanding of deprovisioning in the SyncFx database provider doesn't match the current design. After a deprovision, all sync-related metadata, stored procedures and triggers are dropped. This means that nothing is remembered after a deprovision. When you provision the same database with the same sync scope name again, it is a new database that is just joining this sync community. The sync engine cannot tell whether any records actually came from other databases in previous syncs.
Unfortunately, the current database provider doesn't naturally support the scenarios you described. For multiple client versions due to schema changes, you need to either copy the server database to another one to support the different client versions, or force all old-version clients to finish syncing before the server and client schema update.
If you are willing to directly modify the sync stored procedures and triggers for your schema update, it is possible to achieve your requirements. For example, if you add a new column to a provisioned table, you can add this column to the existing sync stored procedures to avoid a deprovision and provision. To find all sync-related TSQL objects, you can generate a TSQL script with SqlSyncScopeProvisioning.Script() instead of Apply() for your sync scope.
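The Script()-versus-Apply() suggestion might look like the following sketch (connection string, scope and table names are hypothetical placeholders); the generated TSQL can then be inspected and hand-edited to add the new column to the sync stored procedures:

```csharp
// Sketch: emit the provisioning TSQL to a file instead of executing it,
// so the sync stored procedures and triggers can be reviewed and modified.
using System.Data.SqlClient;
using System.IO;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

using (var conn = new SqlConnection(
    @"Data Source=.;Initial Catalog=ServerDb;Integrated Security=True"))
{
    conn.Open();

    var scopeDesc = new DbSyncScopeDescription("OrdersScope");
    scopeDesc.Tables.Add(
        SqlSyncDescriptionBuilder.GetDescriptionForTable("Orders", conn));

    var provisioning = new SqlSyncScopeProvisioning(conn, scopeDesc);

    // Script() returns the TSQL that Apply() would run, without running it.
    File.WriteAllText("provision_OrdersScope.sql", provisioning.Script());
}
```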
Thanks,
Dong
This posting is provided AS IS with no warranties, and confers no rights.
Tuesday, October 26, 2010 7:22 PM -
"After a deprovision, all sync-related metadata, stored procedures and triggers are dropped. This means that nothing is remembered after a deprovision. When you provision the same database with the same sync scope name again, it is a new database that is just joining this sync community."
I guess I was just hoping that there were more options when it came to deprovisioning. What I'm finding out is that deprovisioning isn't really useful for anything beyond cleaning up a database... sort of like uninstalling SFX at the database level. What would be incredibly useful is if there were simply a "skip" option for deprovision similar to provision that skipped removing _tracking records.
Unfortunately, the current database provider doesn't naturally support the scenarios you described. For multiple client versions due to schema changes, you need to either copy the server database to another one to support the different client versions, or force all old-version clients to finish syncing before the server and client schema update.
I don't think I agree with this. Perhaps you're describing something else and I'm not fully understanding you. I already have multiple versions floating around with multiple schemas, and it works just fine. This is NOT the problem I'm having. The only limitation in changing the database schema is that new columns must be nullable and/or have default values. Provided that this is the case, it does not "break" any existing scopes that work with the older database schemas. Scopes operate through stored procedures that are specific to the scope. New scopes for new schemas also mean new stored procedures that reflect the changes in the new schema, while the old stored procedures can still work with the old schema. This is a behavior determined by MS SQL, not SFX.
If you are willing to directly modify the sync stored procedures and triggers for your schema update, it is possible to achieve your requirements. For example, if you add a new column to a provisioned table, you can add this column to the existing sync stored procedures to avoid a deprovision and provision. To find all sync-related TSQL objects, you can generate a TSQL script with SqlSyncScopeProvisioning.Script() instead of Apply() for your sync scope.
Yes, that's what I've been trying to avoid, but I suppose I just have no choice :(. I feel like I'm doing work that SFX ought to be doing, though. What I'll most likely do is develop an engine that compares the existing scopes against the current model and then auto-ALTERs accordingly. So many changes occur that tracking and changing things manually is almost impossible. A recipe for faults.
Thursday, October 28, 2010 11:38 AM