Can I do dynamic row filtering in Sync Framework 2.0?

  • Question

  • I think I have already seen the answer to this question in another forum thread (and the answer is "no"), but I want confirmation that this is not currently possible in 2.0.

    Here is a specific example of what I want to be able to do. I have set up a sync scope that focuses on a single table. When we sync values from this table, each client database only wants a small subset of data from that table. This is the equivalent of

    Select * from Table where someColumn = someValue

    Or

    Select * from Table where someColumn In (some value list)

    The table itself may have 50,000 rows, but each client only uses a small subset of those rows, perhaps 500 to 1,000. This means that pulling all 50,000 rows over to the client and then throwing away the rows it doesn't want is not an attractive option. The other complicating factor is that each client (currently over 300 client databases) is interested in different rows from the table, so we need to be able to set the filtering criteria dynamically (300 static filters per scope would be agony to maintain).

    From a posting in January, 2010, I think the answer to my question is that this is not currently possible in sync 2.0.  I would be willing to write my own SyncAdapters (or other components) to do this, but I do not yet understand how these would fit into the framework.

    Any assistance, insight, or advice would be greatly appreciated.

    Friday, March 12, 2010 6:11 PM

Answers

  • PuzzledBuckeye:

    In your situation, the answer is very likely yes, at least if your first SELECT is the case. Your situation looks like it could be the same as mine.

    If any given record for a table belongs to (a) only one client or (b) to all of them, then you can do this.

    I have the following situation:

    - Sql2005 SP2/Sql2008/Sql2005 Express Sp2/Sql2008 Express at the client
    - Sql2008 at the server (but Sql2005 SP2 would be fine too)
    - Each client is identified by a unique value.
    - Every table in the scope that meets condition (a) above has this unique value in a column. I call this a segmented table.
    - Every table in the scope that meets condition (b) above does not have this unique value in a column. I call this a shared table.

    For 300 clients, you will get 300 scopes, but these are just records in the scope_info and scope_config tables. You will not end up with 300 sets of 8 stored procedures per table. You don't really need to know 300 scope names. I use just one externally, but for any client, append the unique value to that name to get the actual (internal) scope name to use for synchronization. So, externally, you sync scopeA and scope B and internally for clientX, clientY, etc it would be scopeA-clientX, scopeA-clientY, scopeB-clientX and scopeB-clientY.

    I have 5 scopes (externally) and about 25 tables in those scopes, so I have 25*8=200 SPs in total (there are 8 SPs per table after provisioning), no matter the number of clients. For 300 clients, I would still have 200 SPs. The internal scope records are what multiply: with 5 external scopes, each client adds 5 rows to the scope_info and scope_config tables (e.g., 200 clients gives 200*5=1,000 internal scopes).
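    The naming convention is trivial to implement; here is a small illustrative helper (the class and method names are mine, not part of the framework):

```csharp
using System;

static class ScopeNames
{
    // Compose the internal (per-client) scope name from the external
    // scope name and the client's unique value:
    // "scopeA" + "clientX" => "scopeA-clientX"
    public static string Internal(string externalScope, string clientId)
    {
        return externalScope + "-" + clientId;
    }
}
```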

    At a high level, this is what you can do:

    At the client...

    - Create a SqlSyncScopeProvisioning to provision the database.
    - Create a DbSyncScopeDescription and specify the scope name as "scopeName-<unique Id for this client>"
    - For each table that is segmented, you will add a filter column (e.g. "uniqueId"), which is the column that contains the unique value, and a matching filter clause, e.g. "[side].[uniqueId] = '<unique Id for this client>' ".
    - For each table that is shared, do not create any filter information.
    - Provision the database

    So this means that at each client we have static filtering on all of the segmented tables and no filtering on the shared tables.
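    As an illustrative sketch of those provisioning steps (this assumes the Sync Framework 2.0 assemblies are referenced; the table names, connection string, and clientId value are placeholders, not part of any real schema):

```csharp
using System.Data.SqlClient;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

// Placeholder values for this client.
SqlConnection clientConn = new SqlConnection(clientConnectionString);
string clientId = "clientX";

// Describe the scope using the per-client internal scope name.
DbSyncScopeDescription scopeDesc = new DbSyncScopeDescription("scopeA-" + clientId);
scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("SegmentedTable", clientConn));
scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("SharedTable", clientConn));

SqlSyncScopeProvisioning provisioning = new SqlSyncScopeProvisioning(scopeDesc);

// Static filter on the segmented table only; the shared table gets no filter.
provisioning.Tables["SegmentedTable"].AddFilterColumn("uniqueId");
provisioning.Tables["SegmentedTable"].FilterClause = "[side].[uniqueId] = '" + clientId + "'";

provisioning.Apply(clientConn);
```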

    You can use a regular off the shelf SqlSyncProvider to do sync with the clients.

    Now to the server. You will need to create a custom provider. You can base it on DbSyncProvider (both SqlSyncProvider and DbSyncProvider are based off of the abstract class RelationalSyncProvider) so they will be very similar in function and the custom provider is easy to implement. I'll get to that later.

    First, you need to provision the database for static filtering. Do this exactly the same as described above, except don't bother adding the suffix to the scope name (just use scopeA), and pick any value you like for the unique Id in the filter clause.

    Ok. So now we have a provisioned server database. You will have 8 SPs per table. You will need to modify one of them per table for all of the segmented tables (but not for the shared tables). For a table T, you will have a SP named dbo.T_selectchanges (assuming no ObjectSchema and no ObjectPrefix, otherwise [ObjectSchema].ObjectPrefix_T_selectchanges).

    If you look at this SP, you will see that there is a conditional check which is the filter clause. It will appear in the WHERE as "[side].[uniqueId] = 'uniqueId value'". You need to make 2 changes: add a new parameter to the SP, say @uniqueId, and modify the WHERE, replacing 'uniqueId value' with @uniqueId. So now you have a dynamic filter for this SP.
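    For a segmented table T, that change might look roughly like this (an illustrative fragment only; the real generated procedure has the full column list and change-enumeration logic where the ellipses are):

```sql
ALTER PROCEDURE [dbo].[T_selectchanges]
    @sync_min_timestamp BigInt,
    @sync_scope_local_id Int,
    @sync_scope_restore_count Int,
    @uniqueId NVarChar(50)  -- new parameter, replacing the hard-coded value
AS
BEGIN
    SELECT ...  -- column list as generated by provisioning
    FROM [dbo].[T] [base]
    RIGHT JOIN [dbo].[T_tracking] [side] ON ...
    WHERE [side].[uniqueId] = @uniqueId  -- was: [side].[uniqueId] = 'uniqueId value'
      AND ...  -- original change-enumeration conditions
END
```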

    This is where the custom DbSyncProvider comes in. You implement this and define the calls to each of the 8 SPs. You can find examples of this by searching for DbSyncAdapter. You have a separate one of these for each table that the provider handles. Within the DbSyncAdapter, you would have the standard documented SPs, except for the SelectIncrementalChangesCommand. This is the one that is customized for the extra parameter. Here is mine (the adapter creation and the modified command creation):

            private DbSyncAdapter CreateAdapter(MyApplicationTable table)
            {
                // Create the adapter.
                DbSyncAdapter adapter = new DbSyncAdapter(table.Name);
    
                // Get details on this table
                SqlSyncProviderAdapterConfiguration scopeAdapter = this._configuration.GetAdapter(table.Name);
                Collection<DbSyncColumnDescription> primaryKeys = this._configuration.GetPrimaryKeys(scopeAdapter.Columns);
                
                // Add in all primary keys
                foreach (DbSyncColumnDescription primaryKey in primaryKeys)
                {
                    adapter.RowIdColumns.Add(primaryKey.UnquotedName);
                }
    
                // Add stored procedures
                adapter.SelectIncrementalChangesCommand = this.GetSelectIncrementalChangesCommand(table);
                adapter.InsertCommand = this.GetInsertCommand(table, scopeAdapter);
                adapter.UpdateCommand = this.GetUpdateCommand(table, scopeAdapter);
                adapter.DeleteCommand = this.GetDeleteCommand(table, primaryKeys);
                adapter.SelectRowCommand = this.GetSelectRowCommand(table, primaryKeys);
                adapter.InsertMetadataCommand = this.GetInsertMetadataCommand(table, primaryKeys);
                adapter.UpdateMetadataCommand = this.GetUpdateMetadataCommand(table, primaryKeys);
                adapter.DeleteMetadataCommand = this.GetDeleteMetadataCommand(table, primaryKeys);
    
                return adapter;
            }
    
            private SqlCommand GetSelectIncrementalChangesCommand(MyApplicationTable table)
            {
                SqlCommand command = new SqlCommand();
                command.CommandType = CommandType.StoredProcedure;
    
                command.CommandText = "[dbo]." + table.Name + "_selectchanges";
                
                command.Parameters.Add("@" + DbSyncSession.SyncScopeRestoreCount, SqlDbType.Int);
                command.Parameters.Add("@" + DbSyncSession.SyncMinTimestamp, SqlDbType.BigInt);
                command.Parameters.Add("@" + DbSyncSession.SyncScopeLocalId, SqlDbType.Int);
    
                // Do dynamic filtering if this is a filtered table
                if (!string.IsNullOrEmpty(table.FilterColumn))
                    command.Parameters.Add(new SqlParameter("@Application_UniqueClientId", this._uniqueClientId));
    
                return command;
            }
    
    
    

    To make things easier, I deserialized the scope_config data for the scope into SqlSyncProviderScopeConfiguration, which I then derived from and added some helper methods to, such as this._configuration.GetAdapter() in the code above, which finds an adapter in that configuration. This is useful because it identifies primary keys, data types, etc., which lets me avoid hard-coding everything in the commands that take a list of all of the columns in the table (for example, the SP dbo.T_insert).

    Anyway, you'll get the idea.

    So now we have a custom DbSyncProvider which we will use to access the server database. We are not quite ready to go. You need to provision the server database for each client. What I do is check the server provisioning before doing a sync to the server. If it is not provisioned, then I go ahead and provision it before proceeding with the sync.

    To provision the server for the client, you do exactly the same as for the client (you use the scopeA-clientX internal format of the scope name), except you do not create any tables or SPs, just the scope_info and scope_config entries:

    serverConfig.SetCreateProceduresDefault(DbSyncCreationOption.Skip);
    serverConfig.SetCreateTableDefault(DbSyncCreationOption.Skip);
    serverConfig.SetCreateTrackingTableDefault(DbSyncCreationOption.Skip);
    serverConfig.SetCreateTriggersDefault(DbSyncCreationOption.Skip);
    serverConfig.SetCreateProceduresForAdditionalScopeDefault(DbSyncCreationOption.Skip);
    
    serverConfig.Apply(provider.Connection as SqlConnection);
    

    So now we have a provisioned client which will use SqlSyncProvider and we have a provisioned server which will use the derivative of DbSyncProvider.

    Now, I mentioned earlier that if your first SELECT was the case then this can work. You had a second SELECT with a list of values to match. You can also do that if you describe an appropriate dbo.T_selectchanges SP and implement the associated command in the custom provider.

    Also, although I specified that records only belonged to one client or to all, this is not necessary if you have appropriate filter clauses to make this possible. The basic steps above can be used to make this happen.

    BTW, I'd like to give due credit to JuneT, who described the basis for the custom provider, which for me completed the odyssey. So... thanks, JuneT.

    HTH

    Steve

    Saturday, March 13, 2010 3:42 AM

All replies

  • Good to hear it worked for you, Steve.

    hi puzzledbuckeye,

    you can look at a sample here : http://jtabadero.spaces.live.com/blog/cns!BF49A449953D0591!1187.entry

    but Steve's sample above is much cleaner. The blog above simply describes the process.

    Saturday, March 13, 2010 8:05 AM
  • Thanks for the write-up, Steve. I will spend some time pondering this, but it definitely looks like it is pointing me in the direction I need to go. JuneT, I tried to follow the link to your blog, but my Internet Explorer (6.0, SP3) displays the page and then aborts (no error message other than "Cannot open the Internet site ... Operation aborted"). I would like to look at that so I can understand the process better. After thinking about this more over the weekend, I am convinced that I need to understand the SyncAdapter class a lot better, and how it fits in with the DbSyncProvider, so understanding the process would be an asset.

    Monday, March 15, 2010 12:36 PM
  • Not sure about the error in the link as I'm able to access it. Steve's post above more or less sums up the steps.
    Monday, March 15, 2010 1:21 PM
  • PuzzledBuckeye,

    Actually, there really isn't much that you have to understand about SyncAdapter. It really boils down to this:

    For each adapter that you create:
    1. It needs to know the primary key fields.
    2. It needs to know the stored procedure name and parameters for 8 commands. See my prior examples plus the documentation examples.
    3. There is no #3.

    Then for the provider:

    1. It needs to have an adapter for each table in your scope.
    2. It needs to know the stored procedure name and parameters for 4 commands. See the documentation for examples.
    3. There is no #3.

    I conceptualized the adapter creation in an earlier response. Here is the provider setup.

    HTH

    Steve

                // Note that the DbSyncProvider has been subclassed so that
                // I have a class that loads its configuration from the scope_config table.
                // So, below, "this" is the subclassed DbSyncProvider object
    
    
                // First create all of the adapters
                foreach (MyApplicationTable table in this._scope.GetTables())
                {
                    this.SyncAdapters.Add(CreateAdapter(table));
                }
    
                // Now the provider-wide Sql commands
                this.SelectNewTimestampCommand = this.GetSelectNewTimestampCommand();
                this.SelectScopeInfoCommand = this.GetSelectScopeInfoCommand();
                this.UpdateScopeInfoCommand = this.GetUpdateScopeInfoCommand();
                this.SelectTableMaxTimestampsCommand = this.GetSelectTableMaxTimestampsCommand(this._scope);
    
    
    Monday, March 15, 2010 1:59 PM
  • Just a quick update:  I was able to access JuneT's blog page from my home PC (apparently the browser on my work PC doesn't like the site, but I don't control the software installed on it) and I found the article helpful.  After looking at the blog and reading the comments here, I have devised a strategy I plan to try today to see if I can get it to work.  If it works, then I think I have an overall sync solution.

    If it does work, I will post my overall solution strategy in a separate posting since this may help other developers who are facing similar problems.
    Tuesday, March 16, 2010 11:52 AM
    Hi Steve, hope this topic is still active. Firstly, this information and the related information on JuneT's blog (http://jtabadero.spaces.live.com/blog/cns!BF49A449953D0591!1187.entry) is really useful. I'm working on combining this with the n-tier peer-to-peer samples to get something going via WCF. It's generally going well, but I'm having some issues with the config-reading functions (which seemed straightforward at first). Unfortunately, if your adapters are not configured properly, you get a fairly generic error message telling you the same :)

    Anyway, for GetColumnSize, I noticed the Size property (a string, for some reason...) is null unless it's a variable-size column (and SizeSpecified is true). If it is null, I've been using 0 for the SqlParameter (it looks like the size is required because the source column is needed), but I have a feeling this is causing the problem. How did you handle this in your GetColumnSize? I'm hoping not to hard-code this against the type (int,4; uniqueidentifier,16; etc.). Thanks.

    Tuesday, May 18, 2010 3:11 PM
  • Brendon,

    The collection of SqlSyncProviderAdapterConfiguration objects (one per table in a scope) is obtained by reading the scope_config table, deserializing the config_data into SqlSyncProviderScopeConfiguration, and then using its AdapterConfigurations collection.

    The SqlSyncProviderAdapterConfiguration class contains a Columns collection of DbSyncColumnDescription entries, one per column in the table.

    I use the following method for the size:

        /// <summary>
        /// Convert a DbSyncColumnDescription size string to an integer.
        /// </summary>
        /// <param name="column">Column for which the size is to be returned</param>
        /// <returns>The size of the column.</returns>
        private int GetColumnSize(DbSyncColumnDescription column)
        {
          // There are values like "max", null etc which should be returned as 0.
          try
          {
            return Convert.ToInt32(column.Size);
          }
          catch (FormatException)
          {
            return 0;
          }
          catch (OverflowException)
          {
            return 0;
          }
        }
    


    HTH

    Steve

    Monday, July 5, 2010 8:48 PM
  • Hello,

    I'm facing the same problem as author of this post.

    But I don't understand what you have done to be able to run the following lines:

     // Get details on this table
    SqlSyncProviderAdapterConfiguration scopeAdapter = this._configuration.GetAdapter(table.Name);
    Collection<DbSyncColumnDescription> primaryKeys = this._configuration.GetPrimaryKeys(scopeAdapter.Columns);

     // and
    this.GetUpdateCommand(table, scopeAdapter);

    And what type is MyApplicationTable ?

    Can any one help me with this?

    Any code example?

    Regards,
    Daniel Skowroński

    Tuesday, July 6, 2010 10:45 AM
  • Those are custom code by the poster (Steve); what he posted are just snippets of his solution. You might want to ping him privately to see if he's willing to share the rest.
    Tuesday, July 6, 2010 1:38 PM
  • Daniel,

    I cannot post the whole solution as I do not own the code that I have developed here.

    Specifically to your questions, I have a custom class that holds my configuration. Here are the steps to get an adapter configuration:

    1. Read the scope_info table for a specific scope.
    2. Get the scope_config using the scope_config_id from the scope_info (preferably you do both these steps with a join).
    3. Deserialize the contents of the scope_config record's config_data into a SqlSyncProviderScopeConfiguration.
    4. Get the Adapter for the table you want from the AdapterConfigurations collection (which is what my this._configuration.GetAdapter() method does).

    I have a little console program below that demonstrates this end to end...

    You ask about MyApplicationTable. I renamed one of my classes to that generic name. I have encapsulated scope and table configurations into classes. For example, the scope class (let's call it MyApplicationScope) has a version, whether it is active in the application or not, the allowed sync directions, defaults, and so forth. Same sort of thing with MyApplicationTable.

    Anyway, here's the deserialization code.... this just happens to list out the delete trigger names.

    HTH

    Steve

    using System;
    using System.IO;
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Text;
    
    using System.Xml;
    using System.Xml.Serialization;
    
    using Microsoft.Synchronization.Data.SqlServer;
    
    namespace SqlSyncProviderAdapterConfigurationTest
    {
      class Program
      {
        static void Main(string[] args)
        {
          string connection = @"Data Source=.\SQL2008;Initial Catalog=TestDB;Integrated Security=SSPI;";
          SqlConnection sqlConnection = new SqlConnection(connection);
          SqlSyncProviderScopeConfiguration scopeConfiguration = GetScopeConfiguration(sqlConnection, "ScopeName");
          foreach (SqlSyncProviderAdapterConfiguration adapterConfiguration in scopeConfiguration.AdapterConfigurations)
          {
            Console.WriteLine(string.Format("TableName: {0}, DeleteTriggerName: {1}", adapterConfiguration.TableName, adapterConfiguration.DeleteTriggerName));
          }
        }
    
        static private SqlSyncProviderScopeConfiguration GetScopeConfiguration(SqlConnection sqlConnection, string scopeName)
        {
          SqlSyncProviderScopeConfiguration configuration = null;
    
          try
          {
            using (SqlConnection connection = new SqlConnection(sqlConnection.ConnectionString))
            {
              connection.Open();
    
              string commandText = " SELECT [dbo].scope_info.scope_name"
                        + "   , [dbo].scope_config.config_data"
                        + " FROM [dbo].scope_config INNER JOIN"
                        + "    [dbo].scope_info ON"
                        + "    [dbo].scope_config.config_id = [dbo].scope_info.scope_config_id"
                        + " WHERE [dbo].scope_info.scope_name = @scopeName"
                        ;
    
              // Pass the scope name as a parameter rather than concatenating it
              // into the SQL, which avoids quoting problems and SQL injection.
              SqlCommand command = connection.CreateCommand();
              command.CommandText = commandText;
              command.Parameters.AddWithValue("@scopeName", scopeName);
              SqlDataReader reader = command.ExecuteReader();
              if (!reader.Read())
                throw new Exception(string.Format("Missing scope table data: server: {0} database: {1}", sqlConnection.DataSource, sqlConnection.Database));
    
              string xmlConfiguration = reader.GetString(reader.GetOrdinal("config_data"));
              configuration = FromXml(xmlConfiguration, typeof(SqlSyncProviderScopeConfiguration))
                as SqlSyncProviderScopeConfiguration;
            }
          }
          catch (SqlException exception)
          {
            throw new Exception(string.Format("Error loading scopes: server: {0} database: {1}", sqlConnection.DataSource, sqlConnection.Database)
              , exception
              );
          }
    
          return configuration;
        }
        private static object FromXml(string xml, System.Type objType)
        {
    
          try
          {
            XmlSerializer serializer = new XmlSerializer(objType);
            StringReader reader = new StringReader(xml);
            XmlTextReader xmlReader = new XmlTextReader(reader);
            object obj = serializer.Deserialize(xmlReader);
            xmlReader.Close();
            reader.Close();
    
            return obj;
          }
          catch (InvalidOperationException e)
          {
            throw new Exception("Invalid XML for " + objType.ToString() + ": " + xml, e);
          }
          catch (Exception e)
          {
            throw new Exception("XML parsing error on: " + xml, e);
          }
        }
    
      }
    }
    
    Tuesday, July 6, 2010 2:28 PM
  • Daniel,

    I used a different technique to build my sync adapters, and I can share some of it with you.  My basic approach is to build the table adapters for the sync providers dynamically, based on the table's current schema: I query the database for the table's schema information and then construct a table adapter from that information.

    I am going to show you some code examples of what I did, but this technique relies on a couple of important assumptions:

    • We use Guids or SQL Server uniqueidentifiers for all primary keys.
    • Our database schema is always automatically updated before our application runs.

    That last point is really important because I use the same basic technique in both the client and server. If the schema ever gets "out of sync", then this won't work.

    Step 1 - Read the information about the table from the database

    For simplicity, I am omitting most variable declarations and exception handling and just focusing on the statements that do the work. The "ColumnDescriptor" class is one of my classes. You can deduce its structure from the code.

    Public Function ReadTableDescriptor(ByVal tableName As String) As List(Of ColumnDescriptor)
      Dim conn As New SqlConnection()
      Dim cmd As New SqlCommand()
      Dim schemaTable As DataTable
      Dim myReader As SqlDataReader = Nothing
      Dim result As List(Of ColumnDescriptor)
    
      conn.ConnectionString = Me._dbConnStr
      conn.Open()
    
      cmd.CommandText = "Select top 1 * From [" + tableName + "]"
      cmd.Connection = conn
      myReader = cmd.ExecuteReader(CommandBehavior.KeyInfo)
      schemaTable = myReader.GetSchemaTable()
      result = ColumnDescriptor.CreateFromMetadata(schemaTable)
      myReader.Close()
      conn.Close()
    
      Return result
    End Function
    
    Public Shared Function CreateFromMetadata(ByVal metadata As DataTable) As List(Of ColumnDescriptor)
    
      Dim result As New List(Of ColumnDescriptor)()
    
      For Each row As DataRow In metadata.Rows
       Dim desc As New ColumnDescriptor()
       desc.ColumnName = CType(row.Item("ColumnName"), String)
       desc.ColumnOrdinal = CType(row.Item("ColumnOrdinal"), Integer)
       desc.ColumnSize = CType(row.Item("ColumnSize"), Integer)
       desc.VB_DataType = row.Item("DataType").ToString
       desc.SQL_DataType = CType(row.Item("DataTypeName"), String)
       desc.IsKey = CType(row.Item("IsKey"), Boolean)
       desc.AllowDBNull = CType(row.Item("AllowDBNull"), Boolean)
       desc.NumericPrecision = CType(row.Item("NumericPrecision"), Integer)
       desc.NumericScale = CType(row.Item("NumericScale"), Integer)
       result.Add(desc)
      Next
    
      Return result
    
    End Function
    

    Step 2 - Make a table adapter

    I have a class named "SyncAdapterFactory" whose only purpose is to create table adapters. Here is the most interesting function from that class, the one that creates a table adapter where we are going to do row filtering. In our database, row level filtering is nearly always based on a foreign key and the foreign key is always a Guid or uniqueidentifier.

    Public Function CreateAdapter(ByVal tableName As String, ByVal filterColumnName As String, ByVal filterValue As Guid) As DbSyncAdapter
    
      If String.IsNullOrEmpty(tableName) Then Return Nothing
      Dim tableDescriptor As List(Of BusinessObjects.ColumnDescriptor) = Me.ReadTableDescriptor(tableName)
    
      'A SyncAdapter must provide eight different SqlCommands, each of which calls a stored
      'procedure in the database. Typically these commands are Sync.XXXX_action, e.g., 
      'Sync.Users_insertmetadata, etc. We are going to use the column descriptors to
      'dynamically create the calls to these stored procedures.
    
      Dim result As New DbSyncAdapter(tableName)
      Me.AddPrimaryKeys(result, tableDescriptor)
    
      result.DeleteCommand = Me.CreateDeleteCmd(tableName, tableDescriptor)
      result.DeleteMetadataCommand = Me.CreateDeleteMetadataCmd(tableName, tableDescriptor)
    
      result.InsertCommand = Me.CreateInsertCmd(tableName, tableDescriptor)
      result.InsertMetadataCommand = Me.CreateInsertMetadataCmd(tableName, tableDescriptor)
    
      result.SelectIncrementalChangesCommand = Me.CreateIncrementalChangesCmd(tableName, tableDescriptor, filterColumnName, filterValue)
      result.SelectRowCommand = Me.CreateSelectRowCmd(tableName, tableDescriptor)
    
      result.UpdateCommand = Me.CreateUpdateCmd(tableName, tableDescriptor)
      result.UpdateMetadataCommand = Me.CreateUpdateMetadataCmd(tableName, tableDescriptor)
    
      Return result
    
    End Function
    

    My factory class has methods for creating each of the various SqlCommand objects needed to interface with the sync framework. They are all essentially the same, but I will show you the "selectchanges" command since that is where the row level filtering is applied.

    Private Function CreateIncrementalChangesCmd(ByVal tableName As String, ByVal tableDescriptor As List(Of ColumnDescriptor), ByVal filterColumnName As String, ByVal filterValue As Guid) As SqlCommand
    
      Dim cmd As New SqlCommand()
    
      cmd.CommandType = CommandType.StoredProcedure
      cmd.CommandText = "Sync." + tableName + "_selectchanges_by" + filterColumnName
    
      cmd.Parameters.Add("@" + DbSyncSession.SyncMinTimestamp, SqlDbType.BigInt)
      cmd.Parameters.Add("@" + DbSyncSession.SyncScopeLocalId, SqlDbType.Int)
      cmd.Parameters.Add("@" + DbSyncSession.SyncScopeRestoreCount, SqlDbType.Int)
    
      Dim filterParam As New SqlParameter("@" + filterColumnName, SqlDbType.UniqueIdentifier)
      filterParam.SqlValue = filterValue
      cmd.Parameters.Add(filterParam)
        
      Return cmd
    End Function
    

    This call requires a slightly customized stored procedure that takes the row-filtering value as an input parameter.  It is essentially the same stored procedure generated by the sync provisioning classes, but with one or more additional "join" clauses and an additional "where" clause.
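    As a rough sketch, assuming a hypothetical Users table filtered by a CompanyId column (the procedure name, tables, and columns below are placeholders, and the real provisioned procedure selects many more metadata columns), the customized procedure might look something like this:

    ```sql
    -- Hypothetical sketch only: object names are placeholders, and the
    -- provisioned selectchanges procedure is considerably longer.
    CREATE PROCEDURE [Sync].[Users_selectchanges_byCompanyId]
        @sync_min_timestamp BIGINT,
        @sync_scope_local_id INT,
        @sync_scope_restore_count INT,
        @CompanyId UNIQUEIDENTIFIER
    AS
    BEGIN
        SELECT [side].[UserId],
               [base].[Name],
               [side].[sync_row_is_tombstone],
               [side].[local_update_peer_timestamp]
        FROM [dbo].[Users_tracking] [side]
        LEFT OUTER JOIN [dbo].[Users] [base]
               ON [base].[UserId] = [side].[UserId]
        WHERE [base].[CompanyId] = @CompanyId  -- the additional filter clause
          AND [side].[local_update_peer_timestamp] > @sync_min_timestamp
    END
    ```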

    For the stored procedures that directly insert or update field values, the method is different.

    Private Function CreateUpdateCmd(ByVal tableName As String, ByVal tableDescriptor As List(Of ColumnDescriptor)) As SqlCommand
    
      Dim cmd As New SqlCommand()
    
      cmd.CommandType = CommandType.StoredProcedure
      cmd.CommandText = "Sync." + tableName + "_update"
    
      For Each colDesc As ColumnDescriptor In tableDescriptor
       'SqlDataType is assumed here to map the SQL_DataType name to its SqlDbType equivalent
       cmd.Parameters.Add("@" + colDesc.FormatParameterName(), colDesc.SqlDataType, 0, colDesc.ColumnName)
      Next
    
      cmd.Parameters.Add("@" + DbSyncSession.SyncForceWrite, SqlDbType.Int)
      cmd.Parameters.Add("@" + DbSyncSession.SyncMinTimestamp, SqlDbType.BigInt)
      cmd.Parameters.Add(Me.CreateSyncRowCountParameter())
    
      Return cmd
    End Function
    

    I have a second class called "SyncProviderFactory" that builds the sync providers for each sync.  When I create this factory, I give it the database connection string.  When I call its CreateSyncProvider() method, I give it an object that specifies the scope name, the list of tables included in that scope, and (optionally) the name and value of a row-filtering parameter.  The factory creates a table adapter for each table in the scope, the set of SqlCommand objects the provider needs, and a batching directory.

    Private Function CreateSyncProvider(ByVal scopeName As String, ByVal tableNames As List(Of String), ByVal batchDirName As String) As DbSyncProvider
    
      Dim provider As New DbSyncProvider()
    
      provider.ScopeName = scopeName
    
      Dim adapterFactory As New SyncAdapterFactory(Me._dbConnection)
    
      'Unfiltered scopes use this call; filtered scopes pass the filter
      'column name and value to the CreateAdapter overload shown earlier.
      For Each tableName As String In tableNames
       provider.SyncAdapters.Add(adapterFactory.CreateAdapter(tableName))
      Next
    
      provider.SelectNewTimestampCommand = Me.CreateSelectNewTimestampCmd()
      provider.SelectScopeInfoCommand = Me.CreateSelectScopeInfoCmd()
      provider.UpdateScopeInfoCommand = Me.CreateUpdateScopeInfoCmd()
    
      Dim lastTableName As String = tableNames.Item(tableNames.Count - 1)
      provider.SelectTableMaxTimestampsCommand = Me.CreateSelectTableMaxTimestampsCmd(lastTableName)
    
      provider.Connection = New SqlConnection(Me._dbConnection)
      provider.MemoryDataCacheSize = Me._batchSize
      Me.CreateDirectory(batchDirName)
      provider.BatchingDirectory = batchDirName
      
      Return provider
    End Function
    

    My SyncProviderFactory also creates my service proxy class for talking to a remote sync service, but that is a different kind of provider. Both the client application and the sync service use the exact same code to build their version of the sync provider and the table adapters.

    This isn't difficult at all, but it is a little bit tedious.  Then again, that is why we write computer applications in the first place: to do the repetitive, tedious stuff.  Overall, this takes longer to set up, but administering the sync on our project with the static table adapters built by the sync provisioners would have been extremely difficult, and this technique works as long as one adheres to the two assumptions mentioned earlier.

    If this technique appears helpful, I would suggest that you spend some time studying what the sync provisioners do and how the framework fits together. I built my code based on "reverse engineering" what the provisioners did.

    Good luck to you and I hope this helps.

    Wednesday, July 7, 2010 3:12 PM
  • Unfortunately, my manager decided that this framework is not yet "mature" and that we can't use so many workarounds in production.  We have built our own solution based on the Open Data Protocol.

    Thank you all for the feedback.

    Thursday, July 8, 2010 5:05 PM
  • PuzzledBuckeye,

    I have a question on your "dynamic filtering" approach.  It appears to be substantively the same approach taken by June T above in that it relies on both a filter parameter and a SyncMinTimestamp to work.

    The issue/question I have with this approach is that I cannot see how it will work.  Below I'll outline a specific example:

    1) Suppose I want to filter dynamically on a Person table - I only want the "client" application to see specific people. There is an FK relationship to an Address table, and I also want the "client" to sync the Address table data

    2) I sync my databases, all data is synchronized correctly

    3) On the client I decide that I want to sync a Person record that I previously did not sync because it was not part of the filter criteria.  This is (to me) what dynamic filtering is.

    4) Since I last sync'ed, no data has changed on either the client or the server machine.  I run a sync with the new Person_Id (a GUID, perhaps) added to the filter criteria.  I would expect the Person that I added to the filter to be sync'ed to my client.  It will not be, because there have been no data changes to the Person table on the "server" or the "client", and consequently the row will be excluded because of the SyncMinTimestamp.

    5) I could get around this by "touching" the Person row with that Id, which is not ideal, but is doable.  However I also want to sync the Address table (and presumably other FK'ed tables) and pretty soon this approach would become unworkable.
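    For what it's worth, the "touching" in (5) can be a plain no-op update, assuming the standard provisioned change-tracking triggers and hypothetical Person/Person_Id names; the update trigger fires even though no value changes, which bumps the row's tracking timestamp into the next sync window:

    ```sql
    -- Hypothetical sketch: the update trigger fires even for a no-op update,
    -- bumping [local_update_peer_timestamp] in the tracking table so the row
    -- falls inside the next sync's timestamp window.
    UPDATE [dbo].[Person]
    SET [Person_Id] = [Person_Id]   -- no data actually changes
    WHERE [Person_Id] = @NewlyFilteredPersonId;
    ```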

    Am I missing something?


    Nick Caramello
    Tuesday, August 10, 2010 4:36 PM
  • Nick,

    I think the difference lies in your definition of dynamic filtering versus the one addressed in this thread.  I'm not suggesting your definition is unreasonable; it is just different.

    You see, this came about because of the position Microsoft and SyncFx take with regard to filtering.  Let's say you want to get all of the data for a given state (e.g., Michigan).  MS says you must have that statically defined in the scope.  If you then want a filter for MA or CA or SC, you must have a statically defined filter in a different scope.

    What this thread addressed was a way for that single scope to have a different, but fixed, value in that single filter condition (e.g., where state = <some-value>).  I say "different but fixed" because for any given client the value is fixed, but each client's value is (or can be) different.

    In my case, we have loads of clients (say a, b, c, d) and a single central database in which every table has the client id as part of its key, so within the central database each client effectively owns a segment of the whole database.  For client a, the condition always evaluates as "where clientId = 'a'", for b as "where clientId = 'b'", and so forth.

    The timestamp is used in the query so that only changes that occurred after the last sync are examined in the tracking tables.

    So that's the dynamic filtering that is addressed in this thread.

    Now, you could get your definition by modifying the stored procedure that gets the set of changes.  If you examine the SP named objectschema.objectprefix_tableName_selectchanges, you will see that the lengthy condition ends with:

    AND [side].[local_update_peer_timestamp] > @sync_min_timestamp

    This is what causes only the changes since the last sync to be selected.  You could change this condition to eliminate that check.  Or you could set @sync_min_timestamp to 0 inside the SP.  Or you could change the adapter in your provider or provider proxy to pass 0 as the value of that parameter.  I don't think returning all of the data for the "person" this way is very efficient, but it seems to be what you want to do.
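    For example, ignoring the watermark at the top of the selectchanges procedure (a hypothetical one-line change, sketched here) forces a full enumeration for the scope:

    ```sql
    -- Hypothetical fragment for the top of a selectchanges procedure:
    -- zeroing the watermark makes every tracked row pass the
    -- [local_update_peer_timestamp] > @sync_min_timestamp check.
    SET @sync_min_timestamp = 0;
    ```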

    Anyway, something like that might work for you.

    HTH

    Steve

     

    Tuesday, August 10, 2010 5:19 PM
  • Steve

    Thanks for the response.  I appreciate the difference in the definitions of "dynamic filtering".  What I have been looking for in Sync Framework v2.0 is the same type of dynamic filtering that was available in Sync Framework v1.0, which does let you change what you are filtering for at runtime.  That type of filtering is not available in the above examples without getting rid of the sync_min_timestamp, which would, I believe, lead to a very chatty and inefficient sync process.  Given that I am working over a WAN, it's pretty much out of the question for me.

    I am now stuck between a rock and a hard place - I need to accommodate schema changes.  The two approaches I can think of are:

    1) Synchronize the client database, delete it and start fresh (or create a new sync database).  This would work well if I had dynamic filters per my definition, but poorly if I have to sync the entire database over the WAN.

    2) Implement something like the above without the "row level filtering" and which also provisions schema changes to the "clients" (since I don't believe that is handled by the sync service).

     


    Nick Caramello
    Tuesday, August 10, 2010 9:15 PM
  • Hi Steve/PuzzledBuckEye,

    just in case you're still reading this forum, can you shoot me an email at jtabadero AT hotmail.com?

    thanks!

    junet

    Thursday, March 17, 2011 1:46 PM