A major problem with file sync

  • Question

  • I want to synchronize file system storage from a local office to a server destination.

    · The destination file storage is huge: about 1 TB in 24 million files and folders.
    · The source has only a few MB.

    The problem I am facing is that the knowledge file (filesync.metadata) on the destination machine is huge (about 200 MB). As I understand the sync algorithm, this file is sent to the source whenever the source wants to send files to the destination, so I am seeing the entire 200 MB go over the network every time the destination needs to tell the source its knowledge. This solution is not good for me, since there are about 50 sources trying to sync with the server. Yet when I run SyncToy 2.1, which is based on the Sync Framework, it doesn't send this knowledge file and is much, much faster.

    What should I change in order to make this work correctly and fast with a huge destination? If SyncToy 2.1 is capable of doing it, and it is wrapped around the Sync Framework, what do I have to change to use that approach in my large environment? (BTW, I am not using hashes for file comparison.)

    Here are the important code snippets that I am using:


    public void SyncFileSystemReplicasOneWay(string sourceReplicaRootPath, string destinationReplicaRootPath, FileSyncScopeFilter filter, FileSyncOptions options)
    {
        logger.Debug("SyncFileSystemReplicasOneWay - Started");

        FileSyncProvider sourceProvider = null;
        FileSyncProvider destinationProvider = null;

        try
        {
            sourceProvider = new FileSyncProvider(sourceReplicaRootPath, filter, options);
            destinationProvider = new FileSyncProvider(destinationReplicaRootPath, filter, options);

            destinationProvider.AppliedChange += new EventHandler<AppliedChangeEventArgs>(OnAppliedChange);
            destinationProvider.SkippedChange += new EventHandler<SkippedChangeEventArgs>(OnSkippedChange);

            sourceProvider.DetectingChanges += new EventHandler<DetectingChangesEventArgs>(sourceProvider_DetectingChanges);
            sourceProvider.DetectedChanges += new EventHandler<DetectedChangesEventArgs>(sourceProvider_DetectedChanges);
            destinationProvider.DetectingChanges += new EventHandler<DetectingChangesEventArgs>(destinationProvider_DetectingChanges);
            destinationProvider.DetectedChanges += new EventHandler<DetectedChangesEventArgs>(destinationProvider_DetectedChanges);

            // Use SyncCallbacks for conflicting items.
            SyncCallbacks destinationCallbacks = destinationProvider.DestinationCallbacks;
            destinationCallbacks.ItemConflicting += new EventHandler<ItemConflictingEventArgs>(OnItemConflicting);
            destinationCallbacks.ItemConstraint += new EventHandler<ItemConstraintEventArgs>(OnItemConstraint);

            SyncOrchestrator agent = new SyncOrchestrator();
            agent.LocalProvider = sourceProvider;
            agent.RemoteProvider = destinationProvider;
            agent.Direction = SyncDirectionOrder.Upload; // Sync source to destination

            logger.Info("Synchronizing changes to replica: " + destinationProvider.RootDirectoryPath);
            Console.WriteLine("Synchronizing changes to replica: " + destinationProvider.RootDirectoryPath);

            agent.SessionProgress += new EventHandler<SyncStagedProgressEventArgs>(agent_SessionProgress);
            agent.Synchronize();

            Console.WriteLine("Synchronizing ended");
            logger.Info("Synchronizing ended");
        }
        finally
        {
            // Release resources
            if (sourceProvider != null)
                sourceProvider.Dispose();
            if (destinationProvider != null)
                destinationProvider.Dispose();
        }

        logger.Debug("SyncFileSystemReplicasOneWay - Ended");
    }



    The options are: fileSystemSynchronizer.Options = FileSyncOptions.RecycleDeletedFiles | FileSyncOptions.RecyclePreviousFileOnUpdates | FileSyncOptions.RecycleConflictLoserFiles;


    Thanks, Pini

    Monday, November 1, 2010 7:56 AM

All replies

  • Hi,

    I assume that your sync app is running from the local office machine. If you want to avoid the .metadata file being sent across the network on every sync, please use a different FileSyncProvider constructor to explicitly set the destination metadata folder to a folder on the local office machine. When FileSyncProvider detects the local changes for the destination, it needs to access the .metadata file. This file doesn't contain only knowledge; it also contains all the local change-tracking metadata, such as each file/directory's file path, attributes, file size, and file content hash value from the previous change detection. This change-tracking metadata, rather than the knowledge, should account for most of the 200 MB.

    Since you have 50 clients that need to sync with the server, keeping a separate server .metadata file on each client as a different sync pair has an additional cost. If your clients never sync with each other directly, it is fine to set them up this way. It is the same as the SyncToy folder-pair concept.



    This posting is provided AS IS with no warranties, and confers no rights.
    Monday, November 1, 2010 11:42 PM
  • Hi Dong, Thank you very much for this answer!!

    Does this mean that after this change, the main server that contains the huge amount of folders and files will not send its metadata file to each client every time a sync is initiated from the client?

    Is that what you meant?

    I also don't understand what I need to change. My code is currently as follows:

    sourceProvider = new FileSyncProvider(sourceReplicaRootPath, filter, options);

    destinationProvider = new FileSyncProvider(destinationReplicaRootPath, filter, options);



    Tuesday, November 2, 2010 9:46 AM
  • Hi,

    You need to store the server metadata files on the client office machine. Please use this FileSyncProvider constructor for the server endpoint:

    public FileSyncProvider (
        string rootDirectoryPath,
        FileSyncScopeFilter scopeFilter,
        FileSyncOptions fileSyncOptions,
        string metadataDirectoryPath,
        string metadataFileName,
        string tempDirectoryPath,
        string pathToSaveConflictLoserFiles
    )

    You need to set metadataDirectoryPath to a local file path on the client machine. Otherwise, the default metadata file path is the sync root folder of the server.

    I copied a sentence from MSDN below, from this link: http://msdn.microsoft.com/en-us/library/dd936767.aspx


    This form of the constructor initializes the location of the metadata storage file and temporary files to be rootDirectoryPath. It initializes the path to save conflict loser files to a null reference (Nothing in Visual Basic), and names the metadata storage file filesync.metadata. It initializes the filter to a null reference (Nothing in Visual Basic) and the configuration options to None.
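
    As an editorial sketch of how this could be wired up (the paths and the metadata file name below are hypothetical, not from the thread):

        // Hypothetical example: the server share is the sync root, but its
        // metadata and temp files live on the client machine, so the large
        // metadata file never has to cross the network.
        string serverRoot = @"\\server\share";        // hypothetical server path
        string localMetadataDir = @"C:\SyncMetadata"; // hypothetical client-local folder

        destinationProvider = new FileSyncProvider(
            serverRoot,            // rootDirectoryPath
            filter,                // scopeFilter
            options,               // fileSyncOptions
            localMetadataDir,      // metadataDirectoryPath - on the client, not the server
            "server.metadata",     // metadataFileName - keep it unique per sync pair
            localMetadataDir,      // tempDirectoryPath
            null);                 // pathToSaveConflictLoserFiles

    With one such provider per client, each client keeps its own copy of the server's metadata locally, matching the per-pair setup described above.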




    Tuesday, November 2, 2010 4:57 PM
  • Thank you again, Dong.

    This brings me to ask 2 more questions:

    1. Does this mean that the huge metadata file representing the server's knowledge will be created only the first time, for each client (I guess a long-running process)?

    2. In my case, the server gets updated independently of the clients in the offices (we have a main web application, which is not one of the clients, that updates this storage). How do the clients then know about these updates? How does the metadata file that represents the server get updated? Or is it irrelevant?

    Thanks pini

    Wednesday, November 3, 2010 7:10 AM
  • Hi,

    For the first question, the answer is yes, if the server has a lot of files and folders.

    For the second question, FileSyncProvider will implicitly call DetectChanges() for the root folder that it owns when starting a new sync, and the DetectChanges method will scan the hierarchy of the root folder to detect new changes.
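
    For illustration, a minimal sketch of triggering that scan explicitly, using the provider's DetectChanges method (rootPath, filter, and options here are placeholders for your own values):

        // Optional: run the full-enumeration scan explicitly, for example from
        // a scheduled task, so the cost of walking a very large hierarchy is
        // paid before the actual sync session starts. DetectChanges updates
        // the provider's metadata file in place.
        using (FileSyncProvider provider = new FileSyncProvider(rootPath, filter, options))
        {
            provider.DetectChanges(); // scans the whole hierarchy under rootPath
        }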


    Thursday, November 4, 2010 3:17 AM
  • Dong, about the second answer, I am still trying to understand the complete process.

    Is this true:

    1. For each remote client, when it first tries to upload its files (one way only), the metadata will be built by scanning the server folder structure. Since the server has a huge amount of files and folders, this will take some time, but only the first time.

    2. The metadata created will be stored on the remote client side.

    3. When another request to sync is initiated, the client will implicitly call DetectChanges, which will scan the server again for changes? Doesn't this mean that this huge metadata file is again transferred from somewhere to somewhere?

    Thanks, Pini

    Thursday, November 4, 2010 7:27 AM
  • Hi,

    For question 1, the scan of the server folder structure happens on every sync. FileSyncProvider is a full-enumeration change detection provider: it needs to scan the whole folder structure to tell what has changed since the last sync.

    For question 2, the answer is yes. As I mentioned in the previous reply, you need to use the correct FileSyncProvider constructor to set "metadataDirectoryPath". I assume your sync app is running on the client side, not the server side. Why do you call it the remote client side? The server should be the remote side in this case.

    For question 3, the answer is yes too, but the metadata file will not be transferred in this case, because it is on the same machine as the sync app. You may want to try it out.


    Thursday, November 4, 2010 6:12 PM
  • Hi Dong, I will try it out as soon as I can; I simply want to have a full understanding, and I still have some open questions in my mind.

    The remote client in my case is the client that initiates the request and is remote from the server; it can simply be called the client :-)


    1. As you mentioned, "the scan of server folder structure happens every sync" - isn't this a very long process, since in my case the server has a huge amount of files and folders?

    2. Does this mean that during change detection no metadata file will be transferred over the network, since it is stored on the client?

    3. Does this mean that during change detection the server will update its metadata file that is located on the client?


    And thank you so much for your patience :-)



    Sunday, November 7, 2010 10:07 AM