Sync Bandwidth is too high

  • Question

  • We're seeing unacceptable overhead when performing a full or partial sync.  The data, which is under 1 MB on disk in SQL CE 3.5 and could be represented in well under a megabyte, uses 300+ MB of bandwidth during a full sync.  We're seeing this with both CTP1 and CTP2.

    We're using a custom provider, and the transport mechanism is WCF over net.tcp.

    When looking at the network stream, I can see that the bulk of the communication is this (the excerpt below shows the metadata for one data item).  The metadata seems to be represented in an extremely inefficient way.

    sync*http://schemas.microsoft.com/2008/03/sync/^A^A@^MitemOverrides^H*
    http://schemas.microsoft.com/2008/03/sync/@^LitemOverride^E^Dsync^FitemId~X^\AQAAAAC3P0BU1F1Olh0rAbMpKOk=   ^Dsync*http:
    //schemas.microsoft.com/2008/03/sync/A^Dsync^KclockVectorA^Dsync^RclockVectorEle
    ment^E^Dsync
    replicaKey~@^E^Dsync    tickCount~X^D1020^AA^Dsync^RclockVectorElement^E^Dsync
    replicaKey~B^E^Dsync    tickCount~X^E28226^AA^Dsync^RclockVectorElement^E^Dsync
    replicaKey~X^A3^E^Dsync tickCount~X^D4267^AA^Dsync^RclockVectorElement^E^Dsync
    replicaKey~X^A4^E^Dsync tickCount~X^C340^AA^Dsync^RclockVectorElement^E^Dsync
    replicaKey~X^A5^E^Dsync tickCount~X^C696^AA^Dsync^RclockVectorElement^E^Dsync
    replicaKey~X^A6^E^Dsync tickCount~X^C460^AA^Dsync^RclockVectorElement^E^Dsync
    replicaKey~X^A7^E^Dsync tickCount~X^C452^AA^Dsync^RclockVectorElement^E^Dsync
    replicaKey~X^A8^E^Dsync tickCount~X^E13993^AA^Dsync^RclockVectorElement^E^Dsync
    replicaKey~X^A9^E^Dsync tickCount~X^D1276^AA^Dsync^RclockVectorElement^E^Dsync
    replicaKey~X^B10^E^Dsync    tickCount~X^C417^AA^Dsync^RclockVectorElement^E^Dsync
    replicaKey~X^B11^E^Dsync    tickCount~X^D2757^AA^Dsync^RclockVectorElement^E^Dsync
    replicaKey~X^B12^E^Dsync    tickCount~X^C486^AA^Dsync^RclockVectorElement^E^Dsync
    replicaKey~X^B14^E^Dsync    tickCount~X^C523^AA^Dsync^RclockVectorElement^E^Dsync

    Are there any options to serialize the SyncKnowledge in a more compact manner?  A 300:1 bloat is unacceptable for our needs.
    Thursday, August 20, 2009 2:14 PM

All replies

  • Mark, could you use the byte array that you get from SyncKnowledge.Serialize() and move that using WCF? A rough sketch is below.
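
    Something like this untested sketch, where ISyncTransport and its GetChangeBatch operation are placeholder names for your own service contract:

    using System.ServiceModel;
    using Microsoft.Synchronization;

    // Hypothetical contract: the knowledge crosses the wire as an opaque
    // byte array instead of being expanded element by element.
    [ServiceContract]
    public interface ISyncTransport
    {
        [OperationContract]
        byte[] GetChangeBatch(byte[] serializedKnowledge);
    }

    public static class KnowledgeTransfer
    {
        // Sender: SyncKnowledge.Serialize() yields the compact binary form.
        public static byte[] Pack(SyncKnowledge knowledge)
        {
            return knowledge.Serialize();
        }

        // Receiver: rehydrate using the replica's ID format schema.
        public static SyncKnowledge Unpack(SyncIdFormatGroup idFormats, byte[] bytes)
        {
            return SyncKnowledge.Deserialize(idFormats, bytes);
        }
    }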

    sid
    Monday, August 31, 2009 6:15 PM
    Moderator
  • We discovered that our sync knowledge was getting highly fragmented because we did not have a linear sync ID scheme.  We've since written our own custom metadata store that orders the SyncKnowledge and eliminates the "swiss-cheese" effect.

    Our sync performance has gone up 100-fold, and CPU usage has dropped right off.


    Tuesday, November 24, 2009 3:01 PM
  • I see - so you are ordering your changes when enumerating them now?

    When you say you wrote your own custom metadata store, were you using the Sync Framework metadata store before?

    Our documentation also has some guidance on how ordering helps compact the knowledge:

    From http://msdn.microsoft.com/en-us/library/bb902821(SQL.105).aspx:

    Global ID

    An identifier for an item stored in a replica. Because the replica is responsible for generating the global IDs, the replica can allocate global IDs that make enumeration more efficient. For example, a community can define its global ID format to be a GUID that is preceded by an 8-byte prefix. The prefix can then be used to control the sort order of the global IDs. This enables the providers to more easily use ranges to enumerate changes, and because one range can contain a large number of items, the knowledge structure can be more compact when items are represented as ordered groups. For more information on global ID formats, see Flexible IDs.
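
    For illustration, a rough sketch of that prefix scheme (the sequence counter is an assumption; in practice it would come from your metadata store):

    using System;

    static class OrderedIds
    {
        // Ordered global-ID format from the excerpt above: an 8-byte,
        // monotonically increasing prefix followed by a GUID. Sorting by the
        // raw bytes keeps consecutively created items in contiguous ranges,
        // so the knowledge can describe them as a few ranges rather than
        // many individual entries.
        public static byte[] CreateOrderedGlobalId(ulong sequenceNumber)
        {
            var id = new byte[24];

            // Big-endian prefix so byte-wise ordering matches numeric order.
            for (int i = 0; i < 8; i++)
                id[i] = (byte)(sequenceNumber >> (8 * (7 - i)));

            // The GUID part keeps the ID globally unique.
            Buffer.BlockCopy(Guid.NewGuid().ToByteArray(), 0, id, 8, 16);
            return id;
        }
    }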

    Thursday, December 10, 2009 6:13 AM
    Moderator