How do I ensure initial sync doesn't take long

  • Question

  • We are using File Synchronization.  We have 4 GB of files (over 6,000 files).  Users' laptops already have the initial set of files, so on the first sync we would like the sync framework to recognize that the two folders (server and client) contain identical files (we are doing download-only to clients) and not take a long time - but currently it takes hours!  Is there a way to avoid this?  On subsequent syncs there may be changes to the folders, so we would like the folders compared.

    Thanks
    H

     

    Tuesday, January 3, 2012 5:58 PM

Answers

  • Of course when you copy the metadata to a new client, that new client must have all the exact files that the source client does (ie it matches the metadata you are copying).

    And June is right, you must test this scenario, but it should be fine.

    -Jesse

    • Marked as answer by Yunwen Bai Tuesday, January 17, 2012 9:58 PM
    Tuesday, January 17, 2012 2:22 AM

All replies

  • Hi,

    There is no good way to avoid this.  Initial sync will always be slow as the framework builds up the metadata for the first time.  Subsequent syncs will be much faster.  Are you using the file hashing option for comparison?  That can slow things down.  Also, all the create-create collisions during the initial sync might slow things down.  From the docs, if the timestamps differ then file copying might be occurring:

    Create–create collision: In the case where two files were independently created on two replicas with the same name, the resolution on the next synchronization operation is to preserve a single file with the same name, but with contents from the side which had the later update timestamp. In case the file timestamps are the same, the content is assumed to be the same and one of the file streams is picked as the “winner” deterministically. If Recycle Bin support is requested from the file synchronization provider, the “losing” file will be placed in the Recycle Bin on the “losing” replica so it can be recovered if the user wishes to do so.

    Tuesday, January 3, 2012 7:14 PM
  • Hi Jesse,

    Many thanks for the quick response.

    How about if I sync one client then copy the .filesync metadata file to the other clients' replicas?

    I am not using the file hashing at all.  I have included the code I am using below in case you can see anything I am doing wrong.

     

            public void SyncFiles(int updateId)
            {
                string serverReplicaRootPath = ConfigurationUtility.ServerDocumentsPath;
                string clientReplicaRootPath = ConfigurationUtility.ClientDocumentsPath;

                OnProgressUpdate("Checking to see if client replica folder is there and creating if not ...");
                if (!System.IO.Directory.Exists(clientReplicaRootPath))
                {
                    System.IO.Directory.CreateDirectory(clientReplicaRootPath);
                }

                try
                {
                    FileSyncOptions options = FileSyncOptions.ExplicitDetectChanges | FileSyncOptions.RecyclePreviousFileOnUpdates;

                    FileSyncScopeFilter filter = new FileSyncScopeFilter();

                    DetectChangesOnFileSystemReplica(serverReplicaRootPath, filter, options);

                    // Sync one way, from server down to client
                    SyncFileSystemReplicasOneWay(serverReplicaRootPath, clientReplicaRootPath, null, options);
                }
                catch (Exception ex)
                {
                    Logger.LogException(ex, "Syncher.SyncFiles");
                    throw;
                }
            }

            public void DetectChangesOnFileSystemReplica(string serverReplicaRootPath, FileSyncScopeFilter filter, FileSyncOptions options)
            {
                FileSyncProvider provider = null;

                try
                {
                    provider = new FileSyncProvider(serverReplicaRootPath, filter, options);

                    provider.DetectChanges();
                }
                catch (Exception)
                {
                    throw;
                }
                finally
                {
                    // Release resources
                    if (provider != null)
                        provider.Dispose();
                }
            }

            public void SyncFileSystemReplicasOneWay(string sourceReplicaRootPath, string destinationReplicaRootPath, FileSyncScopeFilter filter, FileSyncOptions options)
            {
                FileSyncProvider sourceProvider = null;
                FileSyncProvider destinationProvider = null;

                try
                {
                    sourceProvider = new FileSyncProvider(sourceReplicaRootPath, filter, options);
                    destinationProvider = new FileSyncProvider(destinationReplicaRootPath, filter, options);

                    destinationProvider.AppliedChange += new EventHandler<AppliedChangeEventArgs>(destinationProvider_AppliedChange);

                    SyncOrchestrator agent = new SyncOrchestrator();
                    agent.LocalProvider = destinationProvider;
                    agent.RemoteProvider = sourceProvider;
                    agent.Direction = SyncDirectionOrder.Download; // Sync source to destination

                    SyncOperationStatistics syncStatistics = agent.Synchronize();
                }
                catch(Exception ex)
                {
                    Logger.LogException(ex, "Syncher.SyncFileSystemReplicasOneWay");
                    throw;
                }
                finally
                {
                    // Release resources
                    if (sourceProvider != null) sourceProvider.Dispose();
                    if (destinationProvider != null) destinationProvider.Dispose();
                }
            }
      

     

    Tuesday, January 3, 2012 7:49 PM
  • Hi,

    Code looks fine.  Depending on your requirements/application you might be able to get away with running detect changes only once before syncing all clients.  Seems like you are doing this every time.

    As for your idea for copying the .filesync metadata, it might work only because you are doing one way syncs down from the server.  Be careful with this as basically this means every client is the same replica, so if you decide to sync up to the server, things will break.

    -Jesse

    Tuesday, January 3, 2012 8:16 PM
  • Hi Jesse,

    I have no idea why I am running detect changes.  I added it in the hope of reducing the time it takes to sync, but I think it has the opposite effect.  Maybe I am better off removing it.

    For copying the .filesync metadata, our requirements will never need clients syncing up to the server, so I may have a solution there.

     

    Cheers
    H

     

     

    Wednesday, January 4, 2012 1:13 PM
    You do need to run DetectChanges to properly determine the files that change on the source (the server in this case).  You are doing the right thing here in that you call DetectChanges explicitly AND set the sync options flag FileSyncOptions.ExplicitDetectChanges, which tells the provider not to run DetectChanges automatically.  If you do not set the flag, then the provider will automatically run DetectChanges on the source before the first GetChangeBatch call.

    The optimization I was trying to get at was to do what you are doing, but only call DetectChanges once before syncing ALL clients.
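
    As a rough sketch of that optimization (the folder path is a placeholder, and this assumes the Microsoft.Synchronization.Files assembly is referenced), a server-side job could look like:

```csharp
using Microsoft.Synchronization.Files;

string serverReplicaRootPath = @"D:\ServerDocs";  // placeholder path

// Run once on the server before any client syncs.  With the
// ExplicitDetectChanges flag set, client syncs will not re-run
// change detection against the large server folder themselves.
FileSyncOptions options = FileSyncOptions.ExplicitDetectChanges |
                          FileSyncOptions.RecyclePreviousFileOnUpdates;

using (FileSyncProvider serverProvider =
           new FileSyncProvider(serverReplicaRootPath, null, options))
{
    serverProvider.DetectChanges();  // refreshes the server metadata once
}
// Each client then just calls Synchronize() with the same
// ExplicitDetectChanges option and never calls DetectChanges itself.
```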

    -Jesse

    Wednesday, January 4, 2012 6:32 PM
  • Ah, now I get you Jesse, Thank you.

    My detect changes code runs on the client!  Every client, when syncing, calls this code.  The clients can sync when they want.  Some sync 2 or 3 times a day, some once a week, etc.  The server folder has files added daily.  Where should I put the code that does the detect changes, and when should I run it for the best performance gains when syncing?

    Appreciate your input

    H

     

     

    Monday, January 9, 2012 11:08 AM
    Another related issue I am having is that network admins are telling me that even when there is one file (approximately half a megabyte), I am downloading 151 MB of data.  Does this sound reasonable for the framework?  The network admin may be monitoring incorrectly, but I wanted your opinions on this first, please.

    Many thanks
    H

     

    Monday, January 9, 2012 11:11 AM
    150 MB does not sound right at all for one item at about 1 MB.  You might want to investigate that further.

    As for detect changes, the code can run anywhere, but keep in mind you only have to run it on the server files, with the server provider, because you are doing a one-way sync down to the client.  That means you could run a server-side process that detects changes every once in a while.  Of course, you might have the requirement that the client must be completely up to date once it syncs, so you might have to run it before every client sync - it depends on your requirements.

    -Jesse

    Monday, January 9, 2012 6:14 PM
  • Thanks Jesse,

    That explains it well.  So I am using ASP.NET code that writes files to the server folder.  The best time to run detect changes would be at that point, right?  If I go that route, will I ever need to call DetectChanges on the clients?

    Just a follow-up question:  Does DetectChanges (in my scenario, running on the server provider) only update filesync.metadata in the server folder?  If so, when I run sync - specifying explicit DetectChanges but not calling it from the client - does Sync Framework just compare the filesync.metadata file on the client with the one on the server?

    Many thanks for your assistance
    H

     


    Hassan
    Monday, January 9, 2012 6:43 PM
  • Yes, that would work, if all your changes happen at a set time, then you would only have to run detect changes once after those changes finish.  And then you'd never have to run it on the clients.

    Yes, detect changes only updates the metadata on the folder it runs on - and yes during a sync the client and server metadata will be compared.

    Remember, the purpose of detect changes is simply to find the LOCAL file changes and add those to the metadata.  On the client you presumably don't have (or don't care about) local file changes, so there is no need to run detect changes.  The client metadata will consist of the sync changes from the server.

    -Jesse

    Monday, January 9, 2012 7:15 PM
  • Thank you so much Jesse.

    I was struggling before but now understand how things work.  Our changes don't happen at a set time but over a short period.  An admin will upload documents to the server via the web app on a given day - we can ask that they click a button like "Refresh Clients" that fires a console app we put on the server that does the DetectChanges.  Does that sound reasonable to you?  On the clients I will just sync and not call DetectChanges.

    Can I ask, Jesse: is it the DetectChanges on the server's 6,000-file folder that is taking most of the time, or the actual comparison of the filesync.metadata on server and client?  Because even if we have no new files on the server, it can still run 10 to 20 minutes.

    Many thanks for helping with this

    H


    Hassan
    Tuesday, January 10, 2012 12:59 PM
  • Yes, that makes sense.

    And yes, detect changes is likely taking the most time - every file is looked at during this process.

    -Jesse


    Wednesday, January 11, 2012 6:44 PM
    Thanks for the confirmation.  I have been doing tests regarding the 150 MB download claim our network admin complains about whenever we sync a client.  Wireshark shows that the conversation between client and server - even when no new files exist and detect changes is called from the client machine on the server folder - consumes about 113 MB for a metadata file of about 5.5 MB.  Perhaps calling detect changes on the client machine but with a server folder path is wrong.  There is no documentation that advises on this, AFAIK.

    I'll report back my finding on running the detect changes from the server.

    Many thanks for helping me 

    H


    Hassan
    Thursday, January 12, 2012 9:40 AM
  • Hi Jesse,

    When is the client metadata created?  On initial sync?  Before the initial sync I have all the current server files manually copied to the client, so I do not want anything copied down from the server.  How do I get the client metadata created - is it an exact copy of the server metadata?  I am just a bit confused on this one, because I tested a new client with the idea of running detect changes on the server and not on the client - I am specifying the file options as

     

    FileSyncOptions options = FileSyncOptions.ExplicitDetectChanges;

    And I am not calling DetectChanges on the client, just calling Synchronize.  This doesn't seem to get any files from the server at all.  I am not sure I understand the whole process, so please throw some light on it.

    Thanks

    H

     


    Hassan
    Thursday, January 12, 2012 12:11 PM
    My previous statements were incorrect.  Apologies.  It turned out our network went bust and a quiet (albeit logged) error was preventing the sync.

    Initial sync starts creating the metadata file but is taking forever.  It is not downloading files, just comparing metadata, I think.  Is this avoidable, or must we live with it?  If I knew how the client metadata file is created, it would make more sense, I suppose.

    Please respond.  Don't let me talk to myself :-)

    Cheers


    Hassan
    Friday, January 13, 2012 8:20 AM
  • Initial sync of 6000 files will certainly be slow.  After the first sync, once the metadata is built up on the client, the syncs will be fast.  It is slow the first time because there is no metadata on the client and every file must be enumerated.

    -Jesse

    Friday, January 13, 2012 8:32 AM
  • Thanks Jesse,

    Could you comment on these please

    • We can't use the server metadata file (copied to a client) to speed up the initial sync, as the metadata on client and server are different
    • If I synced one client, can I reuse that metadata on other clients for their initial sync to speed things up?  All my clients do a download-only sync from the same replica on the same server, and even their local folders are identical
    • When I tried to run an initial sync on my dev machine from within Visual Studio, I got the error "a storage engine operation failed with error code 25051 (hresult = 0x80004005, source iid = {0c733a7c-2a1c-11ce-ade5-00aa0044773d}, parameters=(0, 0, 0, , , , )).\r\n"  Any idea?

    I would appreciate any assistance.  I have to deliver solution that syncs files as expected by our clients today ... somehow!

    Cheers
    H

     


    Hassan
    Friday, January 13, 2012 8:57 AM
  • AFAIK,

     1 - no, you can't reuse the metadata. the metadata contains a replica id to differentiate one replica from another. when you copy, your replica ids will be the same.

    2 - same as above

    3- can you confirm no other sync is on-going?


    btw, have a look at the file sync section of this link on the placement of the server metadata file: http://social.technet.microsoft.com/wiki/contents/articles/sync-framework-tips-and-troubleshooting.aspx
    • Edited by JuneT Saturday, January 14, 2012 1:29 AM
    Friday, January 13, 2012 10:11 AM
  • Many thanks June.

    I copied the metadata from server to client and started the sync.  I immediately get a "metadata file is in use" error, and no other sync is taking place (no one else syncs currently - just me).  I tried the link you gave me to change the constructor, but either I am using the wrong version or it is out of date, as there is no constructor that takes 6 parameters.  I will just work with the one I have that takes 7 parameters and see.

    Cheers
    H

     

     


    Hassan
    Friday, January 13, 2012 10:51 AM
  • Hi,

    I am still not sure of these (apologies - I am learning as I go, as I can't see any other documentation that goes deep enough, or even a commercial book that covers the details of Sync Framework):

    • Which provider do I set the metadata file path on - client, server, or both?
    • Where do I put the metadata file copied from the server (see June's previous comment)?  With the documents, or somewhere else?
    • Should there be 2 metadata files - one copied from the server and one created when Synchronize is called?
    • My code is:

    OnProgressUpdate("Started Setting providers ...");

    sourceProvider = new FileSyncProvider(sourceReplicaRootPath, filter, options);

    string metadataFolder = Path.Combine(destinationReplicaRootPath, "metadata");
    string temp = Path.Combine(destinationReplicaRootPath, "temp");
    string conflictsFolder = Path.Combine(destinationReplicaRootPath, "conflicts");

    destinationProvider = new FileSyncProvider(destinationReplicaRootPath, filter, options, metadataFolder, "filesync.metadata", temp, conflictsFolder);

    OnProgressUpdate("Completed Setting providers ...");


    Hassan
    Friday, January 13, 2012 2:56 PM
  • Hello Jesse, June,

    May I have your attention please.  I need help desperately on this.  I am nearly there - everything is set up.  I just need to figure out which provider to change the constructor for, per June's link.  If I change it on the server provider, then it syncs quickly but new files are not downloaded.  If I change it on the client provider, then it syncs for a while and comes back with an error when the metadata file is 212 KB (this metadata file is the one I specified in the constructor).  The one I copied from the server is inside the docs folder with the files to be synced.

    Error: = "a storage engine operation failed with error code 25051 (hresult = 0x80004005, source iid = {0c733a7c-2a1c-11ce-ade5-00aa0044773d}, parameters=(0, 0, 0, , , , )).\r\n"  

    I wish Microsoft had more documentation detailing these operations.  I have spent weeks now trying to figure this out.

    Cheers
    H


    Hassan
    Friday, January 13, 2012 5:35 PM
  • The error you are getting is:

    Internal error: Unable to successfully execute disk I/O on the file system.

    According to this: http://msdn.microsoft.com/en-us/library/ms171879(v=sql.100).aspx

    Are you able to write to that disk?  I'm not sure what the confusion is... you shouldn't be touching the metadata file itself once Sync Framework creates it, as long as you point to the same file every time - the server provider to its file, and the client provider to its file.

    -Jesse

    Friday, January 13, 2012 6:33 PM
  • Thank you for the response Jesse.

    I can write to the disk OK.  My confusion stems from the fact that I copied the server's metadata file into the client metadata folder so that the client uses that metadata on the initial sync (to avoid the long wait on the initial sync).  Is this possible?


    Hassan
    Friday, January 13, 2012 6:47 PM
    You certainly can NOT copy the server metadata, as June mentioned.  The client must have its own metadata.

    We discussed earlier that you might be able to get away with copying the client metadata to other clients, but that's not really supported.  To reiterate: server and client must each have their own metadata.

    -Jesse

    Friday, January 13, 2012 7:20 PM
  • my apologies Hassan,

    on second look, this:

     1 - no, you can reuse the metadata

    should have read:

     1 - no, you can't reuse the metadata


    • Edited by JuneT Saturday, January 14, 2012 1:31 AM
    Saturday, January 14, 2012 1:31 AM
    Thanks June.  It is very clear now - no copying of metadata files :-)

    Having read the conversation from top to bottom, it looks like Jesse has been providing all the right information, but I have an unrealistic client who is driving me mad.

    I will now insist on a full sync on all clients to create their own metadata.  Just to cover my back, Jesse: what can go wrong if we copied the metadata from a properly synced client to all the other clients for their initial sync?  I know this is not supported, as you stated, and I can simply state this fact to the client, but it is preferable to give some explanation, even if it is "it will probably work for you, but it is not supported", as I suspect is the case.

    I can't thank you enough for your assistance, both of you.

    Cheers
    H

      


    Hassan
    Monday, January 16, 2012 10:50 AM
  • as i have mentioned earlier each replica in the sync community has a unique id associated with it. by copying metadata files, you will have clients having the same id.

    assuming you have client 1 and it syncs with server 1. you copy client 1's metadata to client 2. when you sync client 2, it carries the replica id of client 1, and so server 1's metadata may think it has already synced with client 2, because the id presented by client 2 is client 1's.

    Monday, January 16, 2012 11:47 AM
  • Cheers June. 

    I have thought about that process and think... what is the harm?  The metadata from client 1 (now used by client 2) is still compared with the server metadata and any file changes applied.  So I am assuming that from then on the 2 client metadata files have the same id but will end up having different content, as the 2 clients sync with the server at different times.  Am I missing something, perhaps?  I am assuming that ids are irrelevant when I am doing a download-only sync from the server?

    H

     


    Hassan
    Monday, January 16, 2012 1:25 PM
  • on second thought, it might work on a download only scenario. as Jesse mentioned, this is an unsupported scenario though. you might want to do extensive testing to see if it works.
    Tuesday, January 17, 2012 1:44 AM
  • Of course when you copy the metadata to a new client, that new client must have all the exact files that the source client does (ie it matches the metadata you are copying).

    And June is right, you must test this scenario, but it should be fine.

    -Jesse

    • Marked as answer by Yunwen Bai Tuesday, January 17, 2012 9:58 PM
    Tuesday, January 17, 2012 2:22 AM
  • Thank you both.  I appreciate your support.
    Hassan
    Tuesday, January 17, 2012 10:21 AM
  • Jesse, June

    I went with a solution that:

    1. Created a scheduled app that runs DetectChanges on the server folder overnight.  This is very fast (it takes less than a minute).
    2. The client sync app specifies ExplicitDetectChanges but does not call DetectChanges.  The first sync takes about 20 minutes, which is great, and subsequent syncs with a few new files take between 2 seconds and 1 minute - again, awesome.

    The client now demands we run the server DetectChanges more often - i.e. hourly or more.  I am worried that if a client syncs while the server DetectChanges is being performed there will be errors - i.e. locked files (data files or the sync metadata file).  What is your opinion on this?  Many thanks in advance

    Hassan

     

     


    Hassan
    Tuesday, January 24, 2012 4:48 PM
  • Cool.

    Running it more often should be ok I think.  The download syncs will be read operations only on the metadata.

    -Jesse

    Tuesday, January 24, 2012 5:38 PM
  • Cheers Jesse

    What about the framework trying to download a (new) data file while the console app is doing a DetectChanges (and checking that same file)?

    It looks like I have to do extensive tests - this sort of thing is not common, so it is not known how it would go.

    H

     

     


    Hassan
    Tuesday, January 24, 2012 5:49 PM
  • Again it should be ok because it's just concurrent read operations.
    Tuesday, January 24, 2012 5:57 PM
  • Cheers Jesse.

    Hassan


    Hassan
    Wednesday, January 25, 2012 5:02 PM
  • Hi Jesse,

    We have a working system, and the client has come to like Sync Framework now :-)   They have just reported an error, though: a storage engine operation failed with error code 25035 (SSCE_M_FILESHAREVIOLATION: "There is a file-sharing violation. A different process might be using the file.").

    The sharing violation was on the metadata file on the server.  The client ran it again and it worked.  Could you shed any light on what might be happening?  I'd appreciate any input.

     

     

     


    Hassan
    Thursday, February 2, 2012 5:29 PM
    that looks like a concurrency issue, with multiple clients syncing at the same time while the remote metadata file is on the server.

     

     


    • Edited by JuneT Thursday, February 2, 2012 10:17 PM
    Thursday, February 2, 2012 10:15 PM
    Could it be that you are running detect changes at the same time as a sync?  The detect changes could be modifying the metadata while the sync process is trying to read it.
    Thursday, February 2, 2012 10:39 PM
    June, I assumed that multiple clients syncing (download only) at the same time do not create concurrency issues.  If only one client can sync at a time, then the framework will not work for most scenarios.  It certainly won't for our current need: we have 100 clients who sync at any time.

    Jesse, that is possible and is the first thing I asked my client about.  If you remember our discussion, I created a tool, ServerMetadataRefresher, which runs DetectChanges on the server folder.  This tool is on the server, and the client may have run it (I am dealing with a network admin who has access to everything).  I asked for this to be scheduled to run once a day, at midnight, to be safe - they may have run it anyway (ignoring my advice).

    What I would like is your input regarding multiple clients doing a download-only sync: will this cause concurrency issues?  There is no detect changes in the client code:

    FileSyncOptions options = FileSyncOptions.ExplicitDetectChanges;

    And DetectChanges is never called from the client.

    Thanks
    Hassan

     


    Hassan
    Friday, February 3, 2012 12:20 PM
  • i just did a quick test of two clients doing concurrent syncs (download only) and i got a sharing violation on the server metadata file.
    Monday, February 6, 2012 6:26 AM
    Thanks for doing that, June.  I really appreciate it.  Is there a workaround that you are aware of?

    If this happens in production, then we have lost this client for good and our name is tarnished.

    Jesse are you able to provide any input regarding this please.

    Cheers

    Hassan


    Hassan

    Wednesday, February 8, 2012 4:47 PM
    I investigated this further, and it appears my previous assumption was incorrect: we do lock the metadata files no matter what the operation is (upload or download).

    So you have 2 options:

    1) Use separate metadata files for each client (you will now have to run detect changes on each set, but you can run those concurrently at least)

    2) Only process one client sync at a time.

    Option one seems reasonable in that you can still run DetectChanges only when you want, but you just run it for each registered client at the same time.

    -Jesse

    Wednesday, February 8, 2012 6:49 PM
  • Thanks Jesse.

    Option 2 will just not go down well with our client.  Could you go through option 1 with me, please?  Currently I run DetectChanges on the server folder (at the server) every night.  How would option 1 work for me?  I can't visualise it.

    I have another more pressing issue - sorry for dropping this in, I hope you can help.

    I am doing both DB sync and file sync for our app.  DB sync has been working fine.  Clients just reported that they have not been getting documents from the server DB for the past week.  I tested it: no errors, but my local SQL Express DB is out of sync.  We have one table that changes often (documents); it is a few days behind.  Where would you start looking to troubleshoot this?

    Thanks


    Hassan

    Wednesday, February 8, 2012 7:04 PM
    So each client-server pair will have its own set of metadata, its own scope.  At night when you run DetectChanges, instead of running it once on the one set of server metadata, you run it on each server metadata file corresponding to a client.  The only cost is more processing, and you can run these concurrently.
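
    A sketch of how that nightly job might look (the client list, paths, and file-naming scheme are hypothetical; this uses the 7-parameter FileSyncProvider constructor mentioned earlier in the thread):

```csharp
using Microsoft.Synchronization.Files;

// Hypothetical registered clients; each gets its own server-side
// metadata file for the same server folder (its own scope).
string serverReplicaRootPath = @"D:\ServerDocs";            // placeholder
string metadataFolder       = @"D:\ServerDocs\metadata";    // placeholder
string tempFolder           = @"D:\ServerDocs\temp";        // placeholder
string conflictsFolder      = @"D:\ServerDocs\conflicts";   // placeholder
string[] clientIds = { "machineA", "machineB" };

foreach (string clientId in clientIds)
{
    // 7-parameter constructor: root, filter, options, metadata folder,
    // metadata file name, temp folder, versioned-delete folder.
    using (FileSyncProvider provider = new FileSyncProvider(
               serverReplicaRootPath, null,
               FileSyncOptions.ExplicitDetectChanges,
               metadataFolder, clientId + ".metadata",
               tempFolder, conflictsFolder))
    {
        provider.DetectChanges();  // these could also run concurrently
    }
}
```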

    As for your DB sync question, please start another thread in this forum, providing more details about which providers you are using, etc.

    -Jesse

     
    Wednesday, February 8, 2012 7:26 PM
  • Cheers Jesse,

    I get the idea that I need to end up with multiple server metadata files (one for each client, obviously named so that they have different names).  What I don't understand is how to create these files and match them with clients.  Currently I am creating only one server metadata file and do not need to know about clients.  Does it mean that I keep a list somewhere of the clients that are installed?  I.e. machineA, machineB, then create machineA.metadata and machineB.metadata (calling DetectChanges), then on sync at the client side pass this file name (machineA.metadata) to the FileSyncProvider constructor?

    Will create another thread for the DB sync question


    Hassan

    Wednesday, February 8, 2012 8:07 PM
  • hi hassan,

    out of curiosity, how long does your detectchanges run, and how long does each sync take on average?

    before going down the path of creating multiple server metadata files, have you considered simply retrying the sync when a file sharing violation error is fired?

    if you choose to create multiple server metadata files, try this:

    1. create one server metadata file, let's call it serverMetadata.

    2. for every client  that wants to sync the first time, copy the serverMetadata file to a client specific metadata file (maybe a client+guid filename?)

    3. have the client save/remember its own server metadata file and use that for subsequent syncs

    4. when you run detectchanges on the server, you can either have the server app loop through all the client-specific server metadata files and run detect changes on each, or run detectchanges once on the first server metadata file (serverMetadata) and then copy this file over the client-specific server metadata files, making sure you keep the same client-specific metadata file names (copy serverMetadata to client1metadatafile, copy serverMetadata to client2metadatafile, and so on).

    Thursday, February 9, 2012 1:50 AM
  • Yes, these are good ideas that should work.  Only run DetectChanges once and copy it to all other metadata files.

    Taking this even further, you could make "on demand copies" where every time any client syncs, you make a copy of the "master" metadata file and you use that for that sync.  That way you never have to keep track of clients.  The only cost is that you have a metadata file copy before every sync, but the file should be pretty small and fast, and you could even do it in advance of a client syncing.
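
    A minimal sketch of that on-demand copy (the folder paths and file names are illustrative, not from the thread; it assumes a master metadata file kept fresh by the nightly DetectChanges job):

```csharp
using System;
using System.IO;
using Microsoft.Synchronization.Files;

string serverRoot     = @"D:\ServerDocs";           // placeholder
string metadataFolder = @"D:\ServerDocs\metadata";  // placeholder
string masterFile     = "serverMetadata.metadata";  // refreshed nightly
string sessionFile    = "client-" + Guid.NewGuid() + ".metadata";

// Clone the master metadata so this one sync session never
// contends for a lock on the shared server metadata file.
File.Copy(Path.Combine(metadataFolder, masterFile),
          Path.Combine(metadataFolder, sessionFile));

// Point the server provider for this session at the copy.
using (FileSyncProvider serverProvider = new FileSyncProvider(
           serverRoot, null,
           FileSyncOptions.ExplicitDetectChanges,
           metadataFolder, sessionFile,
           @"D:\ServerDocs\temp", @"D:\ServerDocs\conflicts"))
{
    // ... hand serverProvider to the SyncOrchestrator as RemoteProvider
}
```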

    Thursday, February 9, 2012 1:57 AM
  • Brilliant ideas June and Jesse,

    June,
    My DetectChanges runs on the server and takes less than 1 minute.  It runs once a day, at night.  The sync is a different matter altogether.  Last week a client synced at home via his broadband connection to the company VPN and it took 7 hours!  The company WAN takes 3 hours, and the company LAN takes 20 minutes.

    Because of the length of time sync takes, a retry is not good, in my opinion.

    Point 2 above, June: to know a client is syncing for the first time, we check for the existence of the client metadata file, right?  And I specify the client's server metadata file via the constructor of the sourceProvider, as I have done for the destinationProvider below?

    sourceProvider = new FileSyncProvider(sourceReplicaRootPath, filter, options);
    destinationProvider = new FileSyncProvider(destinationReplicaRootPath, filter, options, metadataFolder, "filesync.metadata", temp, conflictsFolder);

    Jesse,

    On-demand sounds great.  Simply put: before syncing a client, I copy the server metadata to a unique file (at the server, so it is not downloaded to the client), then use that for syncing that client?  Please confirm whether I got that right.  What happens if several clients try to copy the master at the same time?  I am asking this because I never expected you to lock the server metadata file for download only, so I am double-checking everything!

    Thank you both for helping me


    Hassan

    Thursday, February 9, 2012 9:41 AM
  • yes, specify the location of the client specific server metadata file.

    jesse's approach is better. i don't think file copy operations actually put a lock on the source file, so you should be able to have multiple clients copying the file at the same time.

    Thursday, February 9, 2012 10:03 AM