Problem with dynamic row filtering in Sync Framework 2.0

Question
-
Using the tips provided by JuneT and Steve, I have had a lot of success with my sync prototyping efforts, but now I seem to be stuck on a problem with the dynamic row filtering example. So I am hoping that someone can point me to the error of my ways.
My problem is pretty basic: I am getting an exception from the sync framework about a missing parameter to my "selectchanges" stored procedure. The specific error message is:
Procedure or function "WorkOrders_selectchanges_bySiteId" expects parameter "@siteId", which was not supplied.
Following the examples provided by Steve and JuneT, I am building a SyncAdapter for my "WorkOrders" table and providing a "SiteId" (uniqueidentifier or Guid) to use as a filtering parameter. The VB code to build the IncrementalChanges command is:
Private Function CreateIncrementalChangesCmd(ByVal tableName As String, ByVal tableDescriptor As List(Of ColumnDescriptor), ByVal filterColumnName As String, ByVal filterValue As Guid) As SqlCommand
Dim cmd As New SqlCommand()
cmd.CommandType = CommandType.StoredProcedure
cmd.CommandText = "Sync." + tableName + "_selectchanges_by" + filterColumnName
cmd.Parameters.Add("@" + DbSyncSession.SyncMinTimestamp, SqlDbType.BigInt)
cmd.Parameters.Add("@" + DbSyncSession.SyncScopeLocalId, SqlDbType.Int)
cmd.Parameters.Add("@" + DbSyncSession.SyncScopeRestoreCount, SqlDbType.Int)
Dim filterParam As New SqlParameter("@" + filterColumnName, SqlDbType.UniqueIdentifier)
filterParam.SqlValue = filterValue
cmd.Parameters.Add(filterParam)
'cmd.Parameters.Add(New SqlParameter("@" + filterColumnName, filterValue))
Return cmd
End Function
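For context, a rough sketch of how a command like this typically gets wired into a custom DbSyncAdapter is shown below (the BuildFilteredAdapter name and the "WorkOrderID" key column are assumptions for illustration, not part of my actual setup):

' Sketch only - assumes Imports System.Data, System.Data.SqlClient and Microsoft.Synchronization.Data.
Private Function BuildFilteredAdapter(ByVal tableName As String, _
                                      ByVal tableDescriptor As List(Of ColumnDescriptor), _
                                      ByVal filterColumnName As String, _
                                      ByVal filterValue As Guid) As DbSyncAdapter
    Dim adapter As New DbSyncAdapter(tableName)
    ' Declare the primary key column(s) so the runtime can identify rows ("WorkOrderID" is assumed).
    adapter.RowIdColumns.Add("WorkOrderID")
    ' Attach the filtered selectchanges command built above.
    adapter.SelectIncrementalChangesCommand = _
        CreateIncrementalChangesCmd(tableName, tableDescriptor, filterColumnName, filterValue)
    ' The Insert/Update/Delete commands and their metadata counterparts are assigned the same way (omitted).
    Return adapter
End Function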
I've tried this a couple of different ways and looked at the result in the debugger, and it appears to be correct, i.e., the "filterColumnName" is "SiteId" and my "filterValue" does in fact point to a legitimate uniqueidentifier in both databases.
The WorkOrders_selectchanges_bySiteId stored procedure has the following parameter list:
ALTER PROCEDURE [Sync].[WorkOrders_selectchanges_bySiteId]
@sync_min_timestamp BigInt,
@sync_scope_local_id Int,
@sync_scope_restore_count Int,
@siteId uniqueidentifier
AS
It looks like the call is being set up correctly and being made, but for some reason the "SiteId" parameter value I set is not getting to the stored procedure.
Does anyone see what I am doing wrong?
Wednesday, March 17, 2010 6:24 PM
Answers
-
Sometimes the chance to go home and sleep on a problem is a wonderful thing. This morning I realized what the probable cause of this error was, so I came in and checked it out and, sure enough, that was it.
The problem was not in the SyncAdapter or the call to set up the parameter or any of that stuff. The problem was that I was not sending the filter value to the *client* database. I didn't dig enough into the error to see which sync provider was actually complaining (and I'm not sure I would have been able to figure that out anyway).
What I was doing was allowing the sync framework to build the SyncAdapter for the client database and then building a custom SyncProvider with a custom SyncAdapter for row filtering for the server database. The basic idea is that we want to sync everything in our client databases down to the server, but we only want to sync up a small subset of records from our server to our client (that may sound odd, but it makes perfect sense in our application).
Well, the sync framework was doing what I told it to. It built the SyncAdapter that the XML configuration told it to build, but I never set the sync parameter on the client side so it failed. Since this is prototype code, my temporary solution was to download my work orders data from the client to the server using default, non-filtered sync providers and then upload data from the server to the client using a filtered server sync provider. That worked just fine.
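A rough sketch of that two-pass workaround looks like the following (the scope names, connection variables, and the BuildFilteredServerProvider factory are placeholders I am assuming here, not the actual code; only the overall flow - unfiltered upload first, filtered download second - is what was described above):

' Sketch only - assumes Imports Microsoft.Synchronization and Microsoft.Synchronization.Data.SqlServer,
' plus existing clientConnection/serverConnection SqlConnection objects and a siteId Guid.
' Pass 1: client -> server with the stock, non-filtered providers.
Dim uploadOrchestrator As New SyncOrchestrator()
uploadOrchestrator.LocalProvider = New SqlSyncProvider("WorkOrders", clientConnection)
uploadOrchestrator.RemoteProvider = New SqlSyncProvider("WorkOrders", serverConnection)
uploadOrchestrator.Direction = SyncDirectionOrder.Upload
uploadOrchestrator.Synchronize()

' Pass 2: server -> client using the custom, filtered server-side provider.
' BuildFilteredServerProvider is a hypothetical factory wrapping the custom DbSyncProvider/SyncAdapter.
Dim downloadOrchestrator As New SyncOrchestrator()
downloadOrchestrator.LocalProvider = New SqlSyncProvider("WorkOrders", clientConnection)
downloadOrchestrator.RemoteProvider = BuildFilteredServerProvider(serverConnection, siteId)
downloadOrchestrator.Direction = SyncDirectionOrder.Download
downloadOrchestrator.Synchronize()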
So my problem is solved. Even more important, the question behind this prototyping effort (basically a "proof of concept" or "can we use this to solve our problem?") has been answered, and I think we are going to give this a try. There is still some more evaluation to do, but I think we have identified most of our major concerns and have some idea how we want to manage the overall process.
I really appreciate the advice and assistance.
P.S. In my CreateIncrementalChangesCmd, both methods of setting the filter parameter value work. The one that is not commented out allows me to specify the expected SqlDbType rather than allowing the framework to deduce it (I was trying different things to see if I could get this to work or at least get a different reaction). This isn't necessary, but it still works.
- Marked as answer by Kyle Leckie (Editor) Thursday, March 18, 2010 3:52 PM
Thursday, March 18, 2010 12:35 PM
All replies
-
Which (sync) provider are you using?
Leo Zhou ------ This posting is provided "AS IS" with no warranties, and confers no rights.
Thursday, March 18, 2010 1:07 AM (Answerer) -
can you run SQL Profiler to track the SQL being sent? then see if you can execute the SQL statement from SQL Profiler?
Thursday, March 18, 2010 2:47 AM
-
Pardon me if I am stating the obvious, but you have the correct line commented out.
My (successful) addition of a parameter in this very same method - in C# - looks exactly like your commented-out line:

// Do dynamic filtering if this is a filtered table
if (!string.IsNullOrEmpty(table.FilterColumn))
    command.Parameters.Add(new SqlParameter("@UniqueClientId", this._uniqueClientId));
I haven't looked up the call and maybe your statements do the same thing. However, I do know that the above works in my subclassed DbSyncProvider.
Steve
Thursday, March 18, 2010 5:56 AM -
good to hear it's all working now.
just curious, on the client side, did you manually modify the selectchanges SP to add your parameter? or did you specifically provision it with the filter? did you populate the client scope using the scope description from the server (populatefromscope) or did you use the same code you used for provisioning the server to provision the client and simply change the connection string?
Thursday, March 18, 2010 1:25 PM -
To add to what June wrote, if you used SqlSyncScopeProvisioning to do the initial setup of your server (say with a static filter of [side].[siteId] = 'somevalue'), then you can use that very same provisioning for each of your clients, assuming you have the correct (and different) 'somevalue' for each. Then you can use SqlSyncProvider out of the box on the client side. This way you do nothing special or custom for your client side. Client: out of the box; Server: out of the box plus custom DbSyncProvider/custom selectchanges SP.
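A rough sketch of that kind of static-filter provisioning, assuming the WorkOrders/SiteId names from this thread (the actual provisioning code is not shown here):

' Sketch only - assumes Imports System.Data.SqlClient, Microsoft.Synchronization.Data
' and Microsoft.Synchronization.Data.SqlServer.
Sub ProvisionFilteredScope(ByVal connectionString As String, ByVal siteId As Guid)
    Using conn As New SqlConnection(connectionString)
        ' Describe the scope from the existing WorkOrders table.
        Dim scopeDesc As New DbSyncScopeDescription("WorkOrders_bySiteId")
        scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("WorkOrders", conn))

        Dim provisioning As New SqlSyncScopeProvisioning(scopeDesc)

        ' Static filter: the filter column is copied into the tracking table and the
        ' clause is baked into the generated selectchanges stored procedure.
        provisioning.Tables("WorkOrders").AddFilterColumn("SiteId")
        provisioning.Tables("WorkOrders").FilterClause = "[side].[SiteId] = '" & siteId.ToString() & "'"

        provisioning.Apply(conn)
    End Using
End Sub

' Run against the server (and, per the suggestion above, against each client with that
' client's own SiteId value); the stock SqlSyncProvider can then be used as-is.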
Thursday, March 18, 2010 4:21 PM -
What I did was include two different selectchanges methods for two different scopes. I have a plain old "WorkOrders_selectchanges" method for basic table operations in my "WorkOrders" scope and a "WorkOrders_selectchanges_bySiteId" method for filtering by site Id in my "WorkOrder_bySiteId" scope. I manually modified the WorkOrders_selectchanges stored procedure to add the "siteId" parameter and then added that to the selection statement as a "where" clause, e.g.,
WHERE [side].[local_update_peer_timestamp] > @sync_min_timestamp
AND SiteID = @siteId

That modified version then became my "WorkOrders_selectchanges_bySiteId" stored procedure. In this case, the "WorkOrders" scope is probably unnecessary. One of my goals in doing this is to not have 300 scope_info entries (one per client database) and I think this will solve that particular problem.
Right now, we think our basic maintenance strategy will be to generate our own version of the stored procedures and then update our client databases using our standard database patching procedure. To that end, I've already written "code generators" for all of the framework components (hand-coding these is too painful, even during prototyping). I have also tucked all of the sync framework tables and stored procedures into their own "Sync" schema. That way, they will still be visible but won't clutter up the more "interesting" stored procedures related to business logic, etc. Our initial patch for deploying the new sync capability will be huge, but long-term maintenance will be much easier. Besides, trying to run the provisioning code in our client applications would carry a lot more risk (and therefore a lot more pain). We have a better shot at fixing it using our current system.
As far as the sync providers, the first time I tried this I used the standard SqlSyncProvider obtained from loading the provider from the scope_config data (which of course was the problem, since the scope_config referenced the additional "@siteId" parameter which wasn't being passed in). A little later I decided to try generating a custom client DbSyncProvider object that I generated using the exact same code I used to generate the DbSyncProvider for the server database, but with the client connection string, and that actually works (I wasn't sure it would, but it did). Given that this works, I am quite comfortable with the idea of having factory classes to build custom DbSyncProvider objects with custom SyncAdapters inside to handle situations where we want to do row filtering (in our database of 80+ tables we have two major subsystems consisting of about 10 tables each where we will want to do this) and then using the standard providers loaded from scope_config to handle the remaining, non-filtered tables.
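A rough sketch of such a factory, with every name here being hypothetical (BuildFilteredWorkOrdersAdapter stands in for the custom SyncAdapter construction described earlier in the thread):

' Sketch only - assumes Imports System.Data.SqlClient, Microsoft.Synchronization,
' Microsoft.Synchronization.Data and Microsoft.Synchronization.Data.SqlServer.
Public Function CreateProvider(ByVal scopeName As String, _
                               ByVal connectionString As String, _
                               ByVal filtered As Boolean, _
                               ByVal siteId As Guid) As KnowledgeSyncProvider
    If Not filtered Then
        ' Non-filtered scope: the stock provider picks up its configuration from scope_config.
        Return New SqlSyncProvider(scopeName, New SqlConnection(connectionString))
    End If

    ' Filtered scope: hand-built provider whose adapter carries the @siteId parameter.
    Dim provider As New DbSyncProvider()
    provider.ScopeName = scopeName
    provider.Connection = New SqlConnection(connectionString)
    provider.SyncAdapters.Add(BuildFilteredWorkOrdersAdapter(siteId))

    ' The scope-level commands (SelectNewTimestampCommand, SelectScopeInfoCommand,
    ' UpdateScopeInfoCommand, ...) must also be assigned; omitted here for brevity.
    Return provider
End Function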
Of course, as I am working my way through various scenarios I find other things I have to think about and resolve. My current issue is that when syncing with row filtering, it doesn't recognize deletions. From looking at the sync event data (I have a bunch of event handlers that I added to my DbSyncProviders just to look at all the "chatter"), it looks like the problem is that my WorkOrders_selectchanges_bySiteId method is not recognizing these records as having been deleted. I think that what I need to do is either include the SiteId column in WorkOrders_tracking and use that as part of the selection criteria in WorkOrders_selectchanges_bySiteId, or I might be able to do this by including WorkOrders_tombstone.SiteId as part of the join. That complicates things a bit more, but we have to do row filtering in our sync process. This is just a matter of figuring out the best technique and then adding it to the custom "selectchanges" stored procedure (and then maybe figuring out how to automatically create that stored procedure).

Again, I really appreciate the assistance and insight I have received in trying to understand how to use this framework to meet our design objectives. Database synchronization is a difficult problem to solve in general and I came into this with very little knowledge of this particular framework, so I needed the extra assistance.
Thursday, March 18, 2010 4:49 PM -
if you included the filter column in the provisioning, it will automatically be added to the tracking table as well, including the metadata stored procedures.
but if you're provisioning via scripts, then just make sure you add it in all the other places where the sync provisioning would normally add the filter (selectchanges, tracking table, metadata SPs, etc...)
Thursday, March 18, 2010 4:58 PM -
One of my goals in doing this is to not have 300 scope_info entries (one per client database) and I think this will solve that particular problem.
PuzzledBuckeye,
The scope_info records contain the synchronization state (known as knowledge) between the server and a given client. If you have 300 clients, then you need 300 states.
The first time I figured out the dynamic filtering, I ditched the billions and billions of SPs and then all of the extra scopes. I got some interesting sync results (mainly, only one client worked: after you sync to a certain point, no other client can get any records prior to that, because - according to the state/knowledge - they have already got the records). Then I went back to the one scope per client because of the need for knowledge (er, I mean state).
For me, the SP proliferation was a huge issue. The extra records within the pair of scope tables, not so much. Furthermore, externally, there is only one scope name. Internally, for a client ClientA, I convert the scope name into scopeName_ClientA, which is what is actually used. Thus the detail is hidden from the outside.
So, I think you need to reflect on one scope for dynamic filtering. One SP set? Yes. One scope? No.
Steve
- Proposed as answer by KayWessel Thursday, March 18, 2010 7:22 PM
Thursday, March 18, 2010 6:24 PM -
I figured out the deletion problem and thought I would share the solution. It may prove helpful to someone else.
As I suspected, the problem was caused by the WorkOrders_selectchanges_bySiteId not correctly selecting the SiteId for a deleted WorkOrder. I wanted to come up with a "minimally invasive" change (meaning I only want to fix a single piece of custom code) to fix the problem. My solution was to further modify WorkOrders_selectchanges_bySiteId to look at the tombstone table to get the SiteId for deleted records. If I do this, then basically all I have to do is modify a single stored procedure that I have already modified anyway.
Here is what it looks like (important changes):
ALTER PROCEDURE [Sync].[WorkOrders_selectchanges_bySiteId]
@sync_min_timestamp BigInt,
@sync_scope_local_id Int,
@sync_scope_restore_count Int,
@siteId uniqueidentifier
AS
BEGIN
SELECT
[side].[WorkOrderID],
case when [side].[sync_row_is_tombstone] = 1
then [tomb].SiteID
else [base].SiteID
end as SiteId,
[base].Description,
[base].[Type],
[base].CreatedBy,
[side].[sync_row_is_tombstone],
[side].[local_update_peer_timestamp] as sync_row_timestamp,
-- NOTE: There are four huge "case" statements generated by the provisioner that
-- I am omitting for clarity. These four statements are used without modification.
FROM [WorkOrders] [base]
RIGHT JOIN [Sync].[WorkOrders_tracking] [side] ON [base].[WorkOrderID] = [side].[WorkOrderID]
LEFT OUTER JOIN [Sync].[WorkOrders_tombstone] [tomb] on [side].[WorkOrderID] = [tomb].[WorkOrderID]
WHERE [side].[local_update_peer_timestamp] > @sync_min_timestamp
AND ([base].SiteID = @siteId OR [tomb].SiteID = @siteId)
END
With those changes, the delete functionality will work correctly.
Note that we cannot "hard code" filtering criteria directly into the sync framework. Our application supports several hundred engineers and technicians who do troubleshooting and maintenance work at many different client sites and they need access to different information about different sites as they are doing their jobs. The idea is that the service engineer can go to a client site, download all of the latest maintenance information about that site (and there is a lot of this), perform a series of maintenance actions at that site and record them in their local database, and then sync all of this (including time worked and billing information) back up to the central database.
They can do this now with our current sync system. What they can't do right now (and one of the things we are trying to give them) is sync their own work information and history without worrying about which site they worked at. Currently, the only way they can sync their own work information is to sync every site they've worked at (which will include everyone else's information at that site). That's simply too much stuff to have to carry around.
Right now, I am not inclined to use the Sync framework provisioning code to maintain our framework. The reason for this is that we will be updating our database schema on average at least once per month, and we would have to write "versioning" code inside the application to try to figure out whether or not we need to re-provision the sync interface, etc., and we would still need to write custom code anyway (for example, the provisioning code doesn't seem to build a tombstone table). Some of our client users don't sync nearly as often as they should and this has already caused us maintenance difficulty with the application. Patching the database is easy, automatic, and easily visible; we can look at the current version number and know whether or not the user is up to date.
Instead, we have written some tools to generate all of the necessary scripts (that's the same thing the provisioning framework does), and our tools look at the current database table and then decide what needs to be done (the same as the provisioning framework). However, our tools also generate (or modify) the tombstone table and can figure out what needs to be changed when a table's schema changes. Basically, the tool can look at the current database table and decide which columns were added or removed (or if this is a new table), and then generate the update scripts for that situation.
The goal here is the same in both cases: automate the tedious stuff so it gets done quickly and correctly. In both cases, the end product is the same: a set of scripts that can be used for change tracking. Rather than have our client application try to generate the proper scripts, we are simply going to deliver them as part of the database update. Then all the client application has to worry about is the actual sync itself.
As we get further into development, all of that could change (it wouldn't be the first time that happened). But right now, that's the plan.
Thursday, March 18, 2010 7:17 PM -
Steve,
First I create the initial scope with a static filter of my first clientId. After this I add a parameter to the _selectchanges proc and change the static filter to be the new parameter.
Then, let's say I make 1000 scopes, one for each of my clients, using the clientId as both the filter and the last part of the scope name. On the server, I will skip the creation of all objects, so nothing is created except for 1 record in the scope_info table and 1 record in the scope_config table. Right?
The records in the scope_info table are only updated by the sync process during synchronisation, so there is no performance problem with these records?
After doing this I have one issue. The deleted records on the server are not deleted on the client after a sync has been done. Seems like I need to change the last part of the Where clause also?
I have a lot of tables and there will be some work to create the custom DbSyncProvider. I looked at a snippet of code you provided in another posting where it looks like you managed to create the DbSyncProvider from the scope_config file and a few parameters instead of hand-writing all this code?
Kay
Thursday, March 18, 2010 7:22 PM -
Kay,
Para #1: Yes.
Para #2: Yes.
Para #3: Yes.
Para #4: Yes.
Para #5: Yes.
Regarding #4, you should have changed the WHERE clause in para #1 when you changed the SP. Regarding #5, I cannot post a whole solution, but I have posted enough to get you going. You can definitely code this to be totally generic without my code; in fact, less code than if you hard-code a single scope!
Both of these are addressed in my postings on this thread: http://social.microsoft.com/Forums/en-US/syncdevdiscussions/thread/4af321b6-4678-4620-af46-98c560cc2bc6/#1f3dc838-7de5-4021-bccd-5913f82e948e and also JuneT's blog at http://jtabadero.spaces.live.com/blog/cns!BF49A449953D0591!1187.entry
HTH
Steve
Thursday, March 18, 2010 10:22 PM -
I figured out the deletion problem and thought I would share the solution. It may prove helpful to someone else.
As I suspected, the problem was caused by the WorkOrders_selectchanges_bySiteId not correctly selecting the SiteId for a deleted WorkOrder. I wanted to come up with a "minimally invasive" change (meaning I only want to fix a single piece of custom code) to fix the problem. My solution was to further modify WorkOrders_selectchanges_bySiteId to look at the tombstone table to get the SiteId for deleted records. If I do this, then basically all I have to do is modify a single stored procedure that I have already modified anyway.

have you modified the triggers as well? how did you end up with a separate tombstone table?
Friday, March 19, 2010 2:20 AM -
I generate my own delete trigger which does what the framework trigger does, but includes inserting the deleted record into the tombstone table I created. I'm running on SQL Server 2005 SP2 (we thought about upgrading to SQL Server 2008 but it would be too much pain for very little gain at this point) so I think this means I have to create and maintain my own tombstone table. So my scripting tool generates everything.
I am going to spend some more time looking at the scoping issue Steve raised, i.e., the need for separate scope_info records for each client. I have been prototyping without this using multiple client databases and thus far have seen no ill effects, including simultaneous syncs by multiple clients against the same table (so far, the only noticeable effect I've seen is the need to sync twice to get the full update, and that is understandable). Our current sync framework cannot handle this situation without losing data, so I do need to understand this situation better.
Friday, March 19, 2010 12:19 PM -
i see. it all makes sense to me now why you make the join with the tombstone table. When you provisioned, you didn't specify the filter column and you just added it to the SP, so the column is not in the tracking table, and so you have to do the filtering on the base table instead of on the tracking table (which is the default when provisioning with a filter).
Friday, March 19, 2010 1:59 PM -
I always use primary key columns to filter, and they are always in the tracking table. When I tested I used a column which is not the primary key, and that was probably the reason for my problem. The stored procedure (_selectchanges) created by SQL Azure provisioning expects the filter column to be Null when checking for deleted rows. Changing this to check that the filter is equal to the parameter should solve the issue. I will probably not run into this in my real application.
Kay
Friday, March 19, 2010 2:22 PM -
The scope_info records contain the synchronization state (known as knowledge) between the server and a given client. If you have 300 clients, then you need 300 states.
The first time I figured out the dynamic filtering, I ditched the billions and billions of SPs and then all of the extra scopes. I got some interesting sync results (mainly, only one client worked: after you sync to a certain point, no other client can get any records prior to that, because - according to the state/knowledge - they have already got the records). Then I went back to the one scope per client because of the need for knowledge (er, I mean state).
Steve
Hi Steve,
Care to share more about your findings on the "knowledge"? I've done several rounds of testing with dynamic filtering with multiple clients (max 5) and only one scope entry in the scope_info table and everything seems to be ok. Changes are uploaded and downloaded as expected.
Also, the selectchanges never considers the "knowledge" when it selects changes. Is there a comparison happening after changes have been retrieved to determine whether to apply them to a destination? It seems to me an expensive operation to retrieve the rows using the select changes SP and go over them one by one to see if they have been applied on the destination.
Friday, March 19, 2010 2:41 PM -
Hi Steve,
Care to share more about your findings on the "knowledge"? I've done several rounds of testing with dynamic filtering with multiple clients (max 5) and only one scope entry in the scope_info table and everything seems to be ok. Changes are uploaded and downloaded as expected.
Also, the selectchanges never considers the "knowledge" when it selects changes. Is there a comparison happening after changes have been retrieved to determine whether to apply them to a destination? It seems to me an expensive operation to retrieve the rows using the select changes SP and go over them one by one to see if they have been applied on the destination.
June,
Sorry, I haven't been monitoring this forum for a while (since phase I was going well).
I think there is a problem with what you have suggested, that being that you only need one scope and it can be shared by all of the dynamic clients.
The select changes query has a parameter sync_min_timestamp which controls the selection of change tracking records for a given sync, the WHERE clause ending with...
AND [side].[local_update_peer_timestamp] > @sync_min_timestamp
The left side of the expression is in the tracking table. So basically, this is saying get all of the tracking records for this table that are newer than a certain point in time.
Let's consider your environment where we have 5 clients and one server. Client #1 says give me all of the records that are after timestamp X and the server obliges. No problem. Likewise for client #2. And so forth. I do not think there is a problem here.
Now consider the opposite direction. The server does a sync with client #1 and gets all records starting at the beginning up to now, say that is timestamp X. It stores this timestamp off somewhere. I'd suggest one of two places: (a) in the scope_timestamp of scope_info or (b) in the scope_sync_knowledge of the scope_info table. I suggest it is (b) since we know that there is an exchange of knowledge before record selection occurs (the GetChangeBatch() method of a proxy provider provides knowledge to the remote provider in requesting a set of changes).
Now comes along client #2. If the same scope name is used, then the server is now going to ask for all records after timestamp X meaning that any records at client #2 that are before timestamp X are never going to go to the server since they will not qualify in the select changes query that would be run at the client on behalf of the server.
I think this can easily be demonstrated:
- For each of your 5 clients, create a database with a single table in it.
- Populate your 5 clients with some data in the table, preferably different in each client. Make the record counts in each client the same.
- Provision (SqlSyncScopeProvisioning) a scope containing the table at each client, with the same scope name at each.
- Create the database at the server with a single table.
- Populate the table with some data for each of the clients, same number of records for each client.
- Provision the scope at the server, applying the dynamic filtering that we have defined in prior posts.
- Sync the scope from client to server.
- Sync the scope from server to client.
- Add an equal number of records at each client.
- Add a different equal number of records for each client at the server.
- Sync the scope from client to server.
- Sync the scope from server to client.
I think you will find that the clients contain all of the data from the server. I think you will find that the server does not contain all of the records from the clients.
At least that is what I have found. And this is the reason I believe that you need to have a separate scope for each client. You can have a one to many relationship in a sync (server to client) but you cannot have a many to one relationship (client to server).
My solution was to have a scope SCOPE used externally, but when it came time within the software to actually use the scope, to append the client identifier to it (e.g. SCOPE_CLIENT_A for client A). The provisioning of all of these scopes at the server is done when it is detected that the client is not provisioned. The provisioning disables the creation of all stored procedures, tables, triggers, etc. and essentially just creates the scope_info and scope_config entries. There is of course an initial provisioning of the server (I use the base name SCOPE for this) followed by the customization of all of the selectchanges SPs for dynamic filtering.
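A rough sketch of that per-client provisioning, with names and structure assumed from the description above (not the actual code):

' Sketch only - assumes Imports System.Data.SqlClient, Microsoft.Synchronization.Data
' and Microsoft.Synchronization.Data.SqlServer; "WorkOrders" stands in for the scope's tables.
Public Sub EnsureClientScope(ByVal baseScopeName As String, ByVal clientId As String, ByVal serverConn As SqlConnection)
    ' Externally there is one scope name; internally each client gets its own copy.
    Dim clientScopeName As String = baseScopeName & "_" & clientId

    Dim scopeDesc As New DbSyncScopeDescription(clientScopeName)
    scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("WorkOrders", serverConn))

    Dim provisioning As New SqlSyncScopeProvisioning(scopeDesc)
    If provisioning.ScopeExists(clientScopeName, serverConn) Then Return

    ' Skip creation of the base tables, tracking tables, triggers and stored procedures -
    ' they already exist from the initial provisioning of the base scope.
    provisioning.SetCreateTableDefault(DbSyncCreationOption.Skip)
    provisioning.SetCreateTrackingTableDefault(DbSyncCreationOption.Skip)
    provisioning.SetCreateTriggersDefault(DbSyncCreationOption.Skip)
    provisioning.SetCreateProceduresDefault(DbSyncCreationOption.Skip)
    provisioning.SetPopulateTrackingTableDefault(DbSyncCreationOption.Skip)

    ' Essentially just inserts the scope_info and scope_config rows for this client.
    provisioning.Apply(serverConn)
End Sub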
Your thoughts on this?
Steve
Monday, July 5, 2010 11:48 PM -
hi steve,
good to hear from you again and that your project is going well.
i'll try to do more tests with your scenario when i find sufficient time (am swamped with an upcoming release of a project).
anyways, just to confirm some points above:
1. Sync Fx does execute the change enumeration and compare the results with the sync knowledge sent by the destination to see if the changes are already contained in the sync knowledge.
2. You're right, the knowledge is stored in the scope_sync_knowledge column as it is the only column big enough to track the metadata (it's a varbinary(max)).
3. My understanding is that the @sync_min_timestamp value will be different for each synchronizing peer (replica). I think it's actually making a call to FindMinTickCountForReplica. (I may be wrong on this as I haven't found time to really dig deep into how a tick count is determined.)
will test this further when i find time.
cheers,
junet
Tuesday, July 6, 2010 1:33 PM -
3. My understanding is that the @sync_min_timestamp value will be different for each synchronizing peer (replica). I think it's actually making a call to FindMinTickCountForReplica. (I may be wrong on this as I haven't found time to really dig deep into how a tick count is determined.)
June,
I agree. This must be in the knowledge as there is nowhere else that it can be stored. There is a single timestamp in the scope_info and so this cannot be it, since there must be "n" of them.
Steve
Tuesday, July 6, 2010 2:31 PM