How does Live Mesh folder synchronization work?

  • Question

  • I'm trying to figure out whether what I'm seeing is expected behaviour before I file a bug report.
    I have a folder with 100 GB of large files (4-5 GB MKV files). I set this folder up as a Live Mesh folder and added another server as a sync partner; both servers are on the same local subnet. The original server has had very high CPU and I/O from MOE.exe for 4-5 hours now; a typical log excerpt is below, after a quick sketch of what I assume is going on. Since I don't know how Live Mesh works internally, I can only assume it stores a hash of each file in the cloud and decides whether a file needs to be replicated based on that. Computing a hash means reading the entire contents of each file and doing some math, which is consistent with what I'm seeing (high CPU and I/O). What doesn't make sense is how long it is taking: this machine can read 100 GB in roughly 1-2 hours, so I would expect it to be finished by now, yet it is still going. So I'm wondering whether this is expected behaviour.
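    To spell out that assumption (this is only my guess at the technique, not anything I know about the actual client), a whole-file hash has to read every byte, and even then simple arithmetic says it should finish well inside 4-5 hours on this hardware:

        import hashlib

        def hash_file(path, chunk_size=1 << 20):
            """Hash a file's full contents - every byte has to be read once."""
            h = hashlib.sha1()
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    h.update(chunk)
            return h.hexdigest()

        # Back-of-the-envelope: 100 GB at a conservative ~30 MB/s sequential read
        # is roughly an hour, so hashing alone should not take 4-5 hours here.
        total_bytes = 100 * 1024**3
        read_rate = 30 * 1024**2   # bytes/second, rough figure for this machine
        print("expected scan time: %.1f hours" % (total_bytes / read_rate / 3600.0))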


    2009-04-03 18:23:09.109,INFO,Moe,100,0,21,Comms: 1 Notifications received from cloud. Acknowledging watermark:2083.12992.0
    2009-04-03 18:24:15.156,INFO,Moe,107,0,11,Comms: Subscription updated for SubscriptionParam 91b7c750-dc2d-4feb-9cc3-457988798d66. Link:CoreObjects/IZF3YFQOEKFEVCIGZMUTL6QIDA/PendingMembers/Subscriptions/EW6JBEKFH75UFKE2FW3SJ6XJOM, MaxAge:16569
    2009-04-03 18:24:16.921,INFO,Moe,100,0,23,Comms: Processing notification Type:ResourceChanged, Watermark:2084.12992.0, ResourceLink:Devices/IU3WQHXUEUJU5ORNNK454FIQTQ/DeviceConnectivityEntries, Reason:
    2009-04-03 18:24:16.921,INFO,Moe,107,0,23,Comms: Raising ResourceChanged event for resource https://accounts.mesh.com/Devices/IU3WQHXUEUJU5ORNNK454FIQTQ/DeviceConnectivityEntries. ResourceChanged received so far: 877
    2009-04-03 18:24:16.921,INFO,Moe,100,0,23,Comms: 1 Notifications received from cloud. Acknowledging watermark:2084.12992.0
    2009-04-03 18:24:17.312,INFO,Moe,100,0,6,Comms: Processing notification Type:ResourceChanged, Watermark:2085.12992.0, ResourceLink:Devices/IU3WQHXUEUJU5ORNNK454FIQTQ/DeviceConnectivityEntries, Reason:
    2009-04-03 18:24:17.312,INFO,Moe,107,0,6,Comms: Raising ResourceChanged event for resource https://accounts.mesh.com/Devices/IU3WQHXUEUJU5ORNNK454FIQTQ/DeviceConnectivityEntries. ResourceChanged received so far: 878
    2009-04-03 18:24:17.312,INFO,Moe,100,0,6,Comms: 1 Notifications received from cloud. Acknowledging watermark:2085.12992.0
    2009-04-03 18:24:19.140,INFO,Moe,44,0,11,EnclosureRetrievalDownloadTask.Run - GETting feed;EnclosureRetrievalFeedDownload;Fetcher;ObjectRelation=FeedInfo;IdentityId=d084cb75-811f-4c55-a3ce-ef639e0bdbb9;CoreObjectId=e15ff85c-e2e3-4201-bad1-63ea7e5abf49;FeedId=6fbbff9e-e7d5-4319-8bd4-b23bb0439556;ObjectRelation=FeedInfo;
    2009-04-03 18:24:19.250,INFO,Moe,44,0,11,EnclosureRetrievalDownloadTask.Run - Feed does not exist locally anymore;EnclosureRetrievalFeedDownload;Fetcher;ObjectRelation=FeedInfo;IdentityId=d084cb75-811f-4c55-a3ce-ef639e0bdbb9;CoreObjectId=e15ff85c-e2e3-4201-bad1-63ea7e5abf49;FeedId=6fbbff9e-e7d5-4319-8bd4-b23bb0439556;ObjectRelation=OldETag;
    2009-04-03 18:24:19.250,INFO,Moe,44,0,11,EnclosureRetrievalFeedDownloadTask.ProcessEnclosureRetrievalFeed - Feed not modified;EnclosureRetrievalFeedDownload;Fetcher;ObjectRelation=FeedInfo;IdentityId=d084cb75-811f-4c55-a3ce-ef639e0bdbb9;CoreObjectId=e15ff85c-e2e3-4201-bad1-63ea7e5abf49;FeedId=6fbbff9e-e7d5-4319-8bd4-b23bb0439556;
    2009-04-03 18:24:19.812,INFO,Moe,100,0,11,Comms: Processing notification Type:ResourceChanged, Watermark:2086.12992.0, ResourceLink:Devices/IU3WQHXUEUJU5ORNNK454FIQTQ/DeviceConnectivityEntries, Reason:
    2009-04-03 18:24:19.812,INFO,Moe,107,0,11,Comms: Raising ResourceChanged event for resource https://accounts.mesh.com/Devices/IU3WQHXUEUJU5ORNNK454FIQTQ/DeviceConnectivityEntries. ResourceChanged received so far: 879
    2009-04-03 18:24:19.812,INFO,Moe,100,0,11,Comms: 1 Notifications received from cloud. Acknowledging watermark:2086.12992.0
    2009-04-03 18:24:31.453,INFO,Moe,100,0,23,Comms: Processing notification Type:ResourceChanged, Watermark:2087.12992.0, ResourceLink:Devices/IU3WQHXUEUJU5ORNNK454FIQTQ/DeviceConnectivityEntries, Reason:
    2009-04-03 18:24:31.453,INFO,Moe,107,0,23,Comms: Raising ResourceChanged event for resource https://accounts.mesh.com/Devices/IU3WQHXUEUJU5ORNNK454FIQTQ/DeviceConnectivityEntries. ResourceChanged received so far: 880
    2009-04-03 18:24:31.453,INFO,Moe,100,0,23,Comms: 1 Notifications received from cloud. Acknowledging watermark:2087.12992.0
    2009-04-03 18:24:33.531,INFO,Moe,100,0,11,Comms: Processing notification Type:ResourceChanged, Watermark:2088.12992.0, ResourceLink:Devices/WH4LZD4HIJYETGICJOYUTINV3I/DeviceConnectivityEntries, Reason:
    2009-04-03 18:24:33.531,INFO,Moe,107,0,11,Comms: Raising ResourceChanged event for resource https://accounts.mesh.com/Devices/WH4LZD4HIJYETGICJOYUTINV3I/DeviceConnectivityEntries. ResourceChanged received so far: 77
    2009-04-03 18:24:33.531,INFO,Moe,100,0,11,Comms: 1 Notifications received from cloud. Acknowledging watermark:2088.12992.0
    2009-04-03 18:25:16.000,INFO,Moe,100,0,11,Comms: Processing notification Type:ResourceChanged, Watermark:2089.12992.0, ResourceLink:Devices/XDSZFZ5DVMVEHBYZVLVDHWAKFQ/DeviceConnectivityEntries, Reason:
    2009-04-03 18:25:16.000,INFO,Moe,107,0,11,Comms: Raising ResourceChanged event for resource https://accounts.mesh.com/Devices/XDSZFZ5DVMVEHBYZVLVDHWAKFQ/DeviceConnectivityEntries. ResourceChanged received so far: 73
    2009-04-03 18:25:16.000,INFO,Moe,100,0,11,Comms: 1 Notifications received from cloud. Acknowledging watermark:2089.12992.0
    2009-04-03 18:25:43.906,INFO,Moe,100,0,11,Comms: Processing notification Type:ResourceChanged, Watermark:2090.12992.0, ResourceLink:Devices/IU3WQHXUEUJU5ORNNK454FIQTQ/DeviceConnectivityEntries, Reason:
    2009-04-03 18:25:43.906,INFO,Moe,107,0,11,Comms: Raising ResourceChanged event for resource https://accounts.mesh.com/Devices/IU3WQHXUEUJU5ORNNK454FIQTQ/DeviceConnectivityEntries. ResourceChanged received so far: 881
    2009-04-03 18:25:43.906,INFO,Moe,100,0,11,Comms: 1 Notifications received from cloud. Acknowledging watermark:2090.12992.0
    2009-04-03 18:25:54.093,INFO,Moe,100,0,11,Comms: Processing notification Type:ResourceChanged, Watermark:2091.12992.0, ResourceLink:Devices/IU3WQHXUEUJU5ORNNK454FIQTQ/DeviceConnectivityEntries, Reason:
    2009-04-03 18:25:54.093,INFO,Moe,107,0,11,Comms: Raising ResourceChanged event for resource https://accounts.mesh.com/Devices/IU3WQHXUEUJU5ORNNK454FIQTQ/DeviceConnectivityEntries. ResourceChanged received so far: 882
    2009-04-03 18:25:54.093,INFO,Moe,100,0,11,Comms: 1 Notifications received from cloud. Acknowledging watermark:2091.12992.0
    2009-04-03 18:26:01.718,INFO,Moe,100,0,6,Comms: Processing notification Type:ResourceChanged, Watermark:2092.12992.0, ResourceLink:Devices/IU3WQHXUEUJU5ORNNK454FIQTQ/DeviceConnectivityEntries, Reason:
    2009-04-03 18:26:01.718,INFO,Moe,107,0,6,Comms: Raising ResourceChanged event for resource https://accounts.mesh.com/Devices/IU3WQHXUEUJU5ORNNK454FIQTQ/DeviceConnectivityEntries. ResourceChanged received so far: 883
    Friday, April 3, 2009 6:32 PM

Answers

  • The length of time needed to sync a large amount of data, even when it is done peer-to-peer, is normal. As long as it is still going, all is well in that regard. Sync is slow, plain and simple, and the Live Mesh team is working to improve this, especially in P2P cases. I do not believe Live Mesh looks at file contents, and it does not do delta sync; it compares file size, last-modified timestamp, and so on. The newer file wins in an update comparison, or the file is thrown onto the conflict list if it was previously synchronized and has since been found to have changed on more than one device/location since the last sync checkpoint.
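    Roughly, and this is only my understanding of the approach rather than the actual Live Mesh code, the per-file decision looks something like this:

        from dataclasses import dataclass

        @dataclass
        class FileState:
            size: int           # bytes
            mtime: float        # last-modified timestamp
            checkpoint: float   # mtime recorded at the last successful sync

        def decide(local: FileState, remote: FileState) -> str:
            """Metadata-only comparison: no hashing, no delta transfer (my assumption)."""
            if (local.size, local.mtime) == (remote.size, remote.mtime):
                return "in sync"
            changed_here = local.mtime > local.checkpoint
            changed_there = remote.mtime > remote.checkpoint
            if changed_here and changed_there:
                return "conflict"          # changed in both places since the last checkpoint
            return "newer file wins"       # copy whichever side has the later timestamp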
    The high CPU usage is the real concern. I would file that as a bug and submit logs so it can be analyzed; it should not be consuming excessive CPU.
     

    How to Submit Bugs and Live Mesh Logs

    Post the link to your bug submission back here, please, so that others can vote/validate the bug.

    -steve


    Microsoft MVP Windows Live / Windows Live OneCare & Live Mesh Forum Moderator
    Friday, April 3, 2009 6:56 PM
    Moderator

All replies

  • But it isn't syncing right now. All I see is high CPU and I/O, with no sync actually taking place. I would actually expect Mesh to hash the files rather than just look at file sizes.
    Friday, April 3, 2009 7:00 PM
  • Check network traffic. The user interface isn't always up to date with what is actually going on. If there is activity in terms of processor, I/O and network, then things are happening.

    Mesh also stores repository information locally, using SQL Everywhere Edition, so it could well be updating that.
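    If you want to see it from outside the UI, something along these lines (a throwaway script using the psutil package, not a Mesh tool) will show whether moe.exe is still churning through data:

        import time
        import psutil

        # Watch MOE.exe's cumulative disk reads and CPU; if the read counter keeps
        # climbing while the network is quiet, the process is scanning files locally.
        moe = next(p for p in psutil.process_iter(["name"])
                   if (p.info["name"] or "").lower() == "moe.exe")

        last = moe.io_counters().read_bytes
        while True:
            time.sleep(5)
            now = moe.io_counters().read_bytes
            print("cpu %5.1f%%   read %8.1f MB/s"
                  % (moe.cpu_percent(), (now - last) / 2.0**20 / 5))
            last = now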
    Friday, April 3, 2009 8:10 PM
  • There is no network traffic, and this happens even when the peer is down. All I see right now is high CPU and I/O with nothing being synced (regardless of whether the peer is up or down). My guess about hashing is probably right: watching which handles the MOE process has open, I can see my 4-5 GB MKV files being opened and closed in alphabetical order, and it takes 15-20 minutes before it moves on to the next file, so it appears to be reading each file's full contents. (4-5 GB every 15-20 minutes works out to only a few MB/s, well below the disk's sequential read speed, which fits CPU-bound per-byte work such as hashing.) All of this is happening while no synchronisation is taking place (no network activity, and the replication partner is down).
    Can somebody from the Live Mesh team comment?
    Friday, April 3, 2009 8:23 PM
  • Filing the bug is the way to go. Your logs will then be reviewed by the Live Mesh team.
    -steve
    Microsoft MVP Windows Live / Windows Live OneCare & Live Mesh Forum Moderator
    Sunday, April 5, 2009 4:59 PM
    Moderator
  • I did, but I was just interested in the technology behind Live Mesh sync.
    Sunday, April 5, 2009 5:03 PM