How to handle dynamically calculated FilterClause?

  • Question

  • Hi again,

    While digging deeper into my scenario, I came across the question of how to handle a dynamically calculated FilterClause like this:

     

    SELECT *
    FROM [MyTable]
    WHERE DATEDIFF(dd, [MyTable].[SomeDate], GETDATE()) <= @MyFilter
      AND DATEDIFF(dd, GETDATE(), [MyTable].[SomeDate]) <= @MyFilter
    
    Without having attempted it yet, I can imagine the following SyncScope definition. But is it supposed to work?

    <SyncTable Name="MyTable"
               FilterClause="DATEDIFF(dd, [side].[SomeDate], GETDATE()) &lt;= @MyFilter AND DATEDIFF(dd, GETDATE(), [side].[SomeDate]) &lt;= @MyFilter">
      <FilterColumns>
        <FilterColumn Name="SomeDate" />
      </FilterColumns>
      <FilterParameters>
        <FilterParameter Name="MyFilter" SqlType="datetime" />
      </FilterParameters>
    </SyncTable>

    I am afraid the answer will be that I have to perform the calculation of the date range on the client and pass it in as plain filter parameters in order to make this work?
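
    The client-side fallback described in that paragraph might look like the following sketch. The parameter names @RangeStart and @RangeEnd are made up for illustration (they do not appear in the original scope); the client would compute them as today minus/plus @MyFilter days and pass them as plain datetime values:

```xml
<SyncTable Name="MyTable"
           FilterClause="[side].[SomeDate] BETWEEN @RangeStart AND @RangeEnd">
  <FilterColumns>
    <FilterColumn Name="SomeDate" />
  </FilterColumns>
  <FilterParameters>
    <FilterParameter Name="RangeStart" SqlType="datetime" />
    <FilterParameter Name="RangeEnd" SqlType="datetime" />
  </FilterParameters>
</SyncTable>
```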

    (To describe the scenario more precisely: the @MyFilter value actually lives in a related table, but I've stripped that out of my example here.)

    (I am very sorry for the formatting problems, but that's the fault of this lousy forum software.)

    Monday, August 9, 2010 9:19 AM

Answers

  • Do you need to keep that information on the server? If it's dispensable, then you may just delete it from the server. The right solution would be to delete and clean up the tombstones, but that would require access to the underlying SqlSyncProvider, which we don't expose in the labs bits.


    Maheshwar Jayaraman - http://blogs.msdn.com/mahjayar
    • Marked as answer by mjayaram Thursday, September 23, 2010 3:21 AM
    Friday, August 13, 2010 4:41 PM

All replies

  • Yes, that should work. The filter clause is a mere string, and we don't do any additional parsing on it. Just make sure it's valid T-SQL, that all necessary dynamic values are parameters, and that those parameters are defined.
    Maheshwar Jayaraman - http://blogs.msdn.com/mahjayar
    Monday, August 9, 2010 6:30 PM
  • The reason I was asking is that (based on what I've understood of the Sync Framework's change-tracking mechanism) I doubt it would actually produce the expected results. Here is why:

    @MyFilter would be something like 3 (to sync entities having SomeDate up to three days before or after today). With the initial sync this would return a distinct set of entities; all fine. But with the next sync (on a later day) the parameter would remain 3, yet the subset of entities would of course have to change. Are you sure this kind of "change" would be properly tracked and result in the correct subset of entities being synced? In this scenario nothing might have changed, neither in the database nor in the scope parameters.

    Sorry for asking such theoretical questions (I should just try it out for myself), but it helps to understand this before I spend a lot of time designing a solution that wouldn't work out.

    Tuesday, August 10, 2010 7:36 AM
  • If you expect tombstones for the older days to flow in the next sync, then this won't happen. The SqlSyncProvider does not do partition realignment, so in the next sync you will only get items that were modified, not yet synchronized, and that match the filter parameter of a 3-day period. Change tracking will track all changes, and enumeration will pick them up if they have changes, are not known to the destination, and fit the filter predicate.
    Maheshwar Jayaraman - http://blogs.msdn.com/mahjayar
    Tuesday, August 10, 2010 8:37 PM
  • I am not sure I fully understand. What I need to achieve is that with every sync, all entities matching the filter criteria _at the moment of sync_ get synced, even though neither the filter parameter (@MyFilter = 3) nor the entities themselves have changed. Thank you very much for your patience.
    Wednesday, August 11, 2010 6:54 AM
  • I am saying that is not possible with incremental syncs. For you to get that kind of behavior, it always has to be a first sync (send an empty anchor to the server). For the window to keep moving, those items must have been touched since the last time you synced. The reason is that each time you sync, the server says "I have told you everything up to my current tickcount" and then filters the data based on what the client wanted. When the client comes back, it picks up from that tickcount. So if the items you now need are from before that tickcount, they will not be picked up. This sort of rolling window is not possible in incremental syncs; to get that behavior, you always have to come with an empty anchor.
    Maheshwar Jayaraman - http://blogs.msdn.com/mahjayar
    Wednesday, August 11, 2010 4:41 PM
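
    The anchor mechanics described in the reply above can be sketched with a small, self-contained simulation. This is purely illustrative Python, not Sync Framework code; the row tuples, tickcounts, and function names are invented to mirror the explanation:

```python
from datetime import date

# Toy model of anchor-based change enumeration (illustrative only,
# not Sync Framework code). Server rows: (id, SomeDate, last-change tickcount).
rows = [
    (1, date(2010, 8, 9), 100),
    (2, date(2010, 8, 12), 100),
]

def enumerate_changes(rows, anchor, today, window_days=3):
    """Return rows changed after `anchor` that fall inside the date window."""
    return [(rid, d, tick) for rid, d, tick in rows
            if tick > anchor and abs((d - today).days) <= window_days]

# Initial sync: empty anchor (0). Both rows fall inside the 3-day window.
first = enumerate_changes(rows, anchor=0, today=date(2010, 8, 9))
assert len(first) == 2

# The client stores the server's tickcount (100) as its anchor.
# Two days later nothing was modified: tick > 100 is false for every row,
# so the enumeration is empty even though the date window has moved.
second = enumerate_changes(rows, anchor=100, today=date(2010, 8, 11))
assert second == []
```

    Note how both rows still match the window on the second sync, but neither has a tickcount newer than the client's stored anchor, so incremental enumeration returns nothing.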
  • Oh! Thank you very much for your answer. Obviously I was wrong to expect the filter parameters to be taken into account when determining the entities for sync.

    Now I find myself in quite a tricky situation, but I will figure a way around it. Most probably I will divide my entities into separate scopes, so I can use incremental sync where possible and full sync where required.

    Just in case the dev guys have some spare time: this kind of "moving window" filtering would greatly assist my scenario. I would say it's not too uncommon, is it?

    Thursday, August 12, 2010 7:15 AM
  • Multiple static scopes is the right way to think about this. However, you can get rolling-window behavior even today by sending a null anchor (if you control the client implementation) or by simply clearing the local IsolatedStore cache and syncing again. This works very nicely for download-only reference data. Keep in mind that the server performs a lot of extra enumeration for each such new client, and as the data on the server grows, your enumeration will take that much longer.


    Maheshwar Jayaraman - http://blogs.msdn.com/mahjayar
    Thursday, August 12, 2010 5:31 PM
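
    The null-anchor trick suggested above can be shown with the same kind of toy simulation (again illustrative Python with invented names, not a Sync Framework API):

```python
from datetime import date

# Toy model (illustrative only): resetting the anchor to zero makes every
# sync behave like a first sync, which is what produces the rolling-window
# behavior described above.
rows = [
    (1, date(2010, 8, 9), 100),
    (2, date(2010, 8, 12), 100),
]

def sync(rows, anchor, today, window_days=3):
    return [r for r in rows
            if r[2] > anchor and abs((r[1] - today).days) <= window_days]

# Incremental sync with a kept anchor: unmodified rows never reappear.
incremental = sync(rows, anchor=100, today=date(2010, 8, 11))

# Null-anchor sync: the server re-enumerates everything, so the whole
# current 3-day window comes back -- at the cost of a full scan per client.
full = sync(rows, anchor=0, today=date(2010, 8, 11))
```

    The trade-off is exactly the one the reply warns about: each null-anchor sync re-enumerates every matching row on the server, so the cost grows with the server's data.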
  • Keep in mind that the server performs a lot of extra enumeration for each such new client, and as the data on the server grows, your enumeration will take that much longer.

    What do you suggest to keep the overhead on the server low? Is there a handy way to clear old clients out of the database? Any hint where I should look?

    In my scenario, for one server instance, we have 10 to 100 clients, each of which will have to do such a "moving window" sync once per day. But they will never look back.

    Friday, August 13, 2010 6:42 AM