Can Sync Services for ADO.NET maintain filtered data on a client based on a rolling window?

  • Question

  • Hello!


    Please forgive the uninformed nature of this question. I'm lightly involved in a project that is being built using Sync Services for ADO.NET for mobile devices and a central SQL Server 2008 database, using SQL Server's change tracking feature for synchronization. My knowledge of sync services and the current code is very light, so I may have a hard time providing some technical details.


    For performance reasons, the handhelds need to hold only 14 weeks of data. The current solution (which I'm trying to change) handles this by keeping only 14 weeks of data in the master database. Since change tracking is used, the inserts/updates/deletes are sent to the client via sync services and the two stay in sync. It relies on deletes (issued after a record leaves the 14-week window) to remove the data from the clients.


    I would like to change our solution so that we no longer limit our master database to 14 weeks (we would instead keep 3 years). We would add a filter to the sync query to provide us with the 14-week window. But in this case we can no longer rely on deletes to define our data boundary and thus remove the data from the client. So my question is this: if the sync services data set is defined by a filter, and that filter causes data to pass in and out of the dataset, will sync services maintain that current dataset on the client? In other words, will it remove from the client any records which are no longer in the dataset? Or do we need to write something separate to delete those old records?


    Thank you very much! We're really struggling right now to come up with the proper design for maintaining the 14 weeks of data on the handheld. Your help is much appreciated.



    • Moved by Hengzhe Li Friday, April 22, 2011 3:17 AM (From:SyncFx - Microsoft Sync Framework Database Providers [ReadOnly])
    Thursday, October 16, 2008 7:11 PM



All replies

  • Brian,

    The enumeration query only decides the new data that needs to be downloaded/uploaded to/from the client. What you want to achieve on the client is automatic cleanup of data. Sync services does provide a property called RetentionInDays, which tells the runtime the number of days for which metadata has to be kept. Setting this to 14*7 = 98 days means that after each sync the client runtime will delete all metadata older than that number of days. You can then run a mini query that deletes data older than 14 weeks. For more information refer http://msdn.microsoft.com/en-us/library/microsoft.synchronization.data.sqlserverce.sqlceclientsyncprovider.retentionindays.aspx
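    The "mini query" mentioned above might look like the following sketch, run against the client database after each sync. The column name LastModified is an assumption, not from the original post:

    ```sql
    -- Hypothetical client-side cleanup query; assumes MyTable carries a
    -- LastModified datetime column recording when the row last changed.
    DELETE FROM MyTable
    WHERE DATEDIFF( week, LastModified, GETDATE() ) > 14
    ```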



    Friday, October 17, 2008 3:38 AM
  • Sorry it took me so long to reply, but thanks for the info! However, from my reading, RetentionInDays deletes the metadata, not the data itself. I believe the metadata deleted is that which determines the position of the last sync and thus what data the client receives. Is this correct?

    Monday, October 27, 2008 6:13 PM
  • Yes you are correct.


    Maria del Mar Alvarez - MSFT


    Thursday, November 20, 2008 7:11 PM
  • Hello Brian,

    We're trying to figure out how to manage filters on user data as well (i.e., User A should only be able to get what they have entered or have access to). This cannot be handled by retention days.

    Did you wind up using retention days, or were you able to find a better way to manage this?


    Tuesday, February 10, 2009 9:16 PM
  • I'd very much like to know this too; it must surely be a fairly common requirement to have selective synchronisation. Please?
    Friday, February 13, 2009 4:01 PM
  • Brian,

    The answer that's worked for my team is to 'simulate deletes'. For example, if you want data older than 14 weeks to come through as deletes on the clients, then your server proc for SelectIncrementalDeletes would need to look something like this:

    CREATE PROC dbo.MyTable_SelectIncrementalDeletes
      @sync_initialized bit,
      @sync_last_received_anchor bigint,
      @sync_client_id_binary varbinary(128),
      @sync_new_received_anchor bigint
    AS
    IF @sync_initialized > 0
    BEGIN
      -- real deletes picked up by change tracking
      SELECT CT.ID
      FROM CHANGETABLE( CHANGES dbo.MyTable, @sync_last_received_anchor ) AS CT
      WHERE CT.SYS_CHANGE_OPERATION = 'D'
        AND CT.SYS_CHANGE_VERSION <= @sync_new_received_anchor
        AND ( CT.SYS_CHANGE_CONTEXT IS NULL
          OR CT.SYS_CHANGE_CONTEXT <> @sync_client_id_binary )
      UNION
      -- 'simulated' deletes: rows that have aged out of the 14-week window
      SELECT T.ID
      FROM dbo.MyTable T
      WHERE DATEDIFF( week, T.LastModified, GETDATE() ) > 14
    END
    The issue with the above is that data older than 14 weeks is always re-sent as deletes, so on the client side you have to 'fake' the record count, which means you might miss actual errors.

    A more advanced solution is to catch anything that would cause a row to no longer match the filter (using a TRIGGER or application logic) and write a record to a separate table; then you can just union that other table with normal change tracking, e.g.:

    -- normal change-tracking deletes
    SELECT CT.ID
    FROM CHANGETABLE( CHANGES dbo.MyTable, @sync_last_received_anchor ) AS CT
    WHERE CT.SYS_CHANGE_OPERATION = 'D'
      AND CT.SYS_CHANGE_VERSION <= @sync_new_received_anchor
    UNION
    -- rows that have fallen out of the filter
    SELECT F.ID
    FROM CHANGETABLE( CHANGES dbo.MyTableFilterChange, @sync_last_received_anchor ) AS CT
    JOIN dbo.MyTableFilterChange F
      ON CT.ID = F.ID
    WHERE CT.SYS_CHANGE_VERSION <= @sync_new_received_anchor
      -- logic here depends on how you track filter changes
      AND F.Reason = 'Old'
    For simplicity, my extra table above simply has an ID and a Reason field (with change tracking turned on, of course); then you just need some process that checks for old records, e.g.:

    INSERT INTO dbo.MyTableFilterChange ( ID, Reason )
    SELECT T.ID, 'Old'
    FROM dbo.MyTable T
    WHERE NOT EXISTS ( SELECT 1 FROM dbo.MyTableFilterChange F WHERE F.ID = T.ID )
      AND DATEDIFF( week, T.LastModified, GETDATE() ) > 14
    That adds new records to the FilterChange table as long as there isn't already a record there. As you can imagine, more complex filters can use something similar. In my team's app there was a SiteID, and each remote site filtered to just get its own data. It was a matter of creating a trigger that checked whether the SiteID had changed and, if so, created a Reason = 'Moved' record for both the old SiteID and the new SiteID. The SelectIncrementalDeletes and SelectIncrementalInserts procs then just unioned with the filter table, joining on the old SiteID for deletes and the new SiteID for inserts. SelectIncrementalUpdates was also modified to ignore updates when there was a 'Moved' row with SiteID = NewSiteID in the same anchor window.
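    A trigger along these lines could record the moves. This is only a sketch: it assumes MyTable has an ID primary key and a SiteID column, and that MyTableFilterChange has been given an extra SiteID column; none of those names are from the original post:

    ```sql
    CREATE TRIGGER dbo.MyTable_TrackSiteMove
    ON dbo.MyTable
    AFTER UPDATE
    AS
    BEGIN
      SET NOCOUNT ON;

      -- one 'Moved' row for the old site (drives deletes at that site) ...
      INSERT INTO dbo.MyTableFilterChange ( ID, Reason, SiteID )
      SELECT D.ID, 'Moved', D.SiteID
      FROM deleted D
      JOIN inserted I ON I.ID = D.ID
      WHERE I.SiteID <> D.SiteID;

      -- ... and one for the new site (drives inserts at that site)
      INSERT INTO dbo.MyTableFilterChange ( ID, Reason, SiteID )
      SELECT I.ID, 'Moved', I.SiteID
      FROM inserted I
      JOIN deleted D ON D.ID = I.ID
      WHERE I.SiteID <> D.SiteID;
    END
    ```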

    Hope this helps!
    Thursday, May 14, 2009 2:49 AM