The right way to move to RAID and a >2TB volume?

  • Question

  • I'd like to do a quick check with the terrific brain trust here before I commit.  The problem I'm solving: I'd like to keep more than 2TB of data on my local machines, backed up to my WHS2011 server, which is in turn backed up to rotating offsite drives.

    Why three places?  My wife and I are enthusiastic amateur photographers with very large photo collections.  We want a local catalog on our desktops for quick retrieval and manipulation; a local backup (preferably full client images); and an offsite backup, because we live in earthquake country.

    I built my own WHS2011 server, and I have plenty of processor power and memory.  Having finally understood that I can't back up a household of clients whose data totals more than 2TB, I'm following Ken's suggestion and moving to RAID.  I've installed a RR2710 card, and I'm just about done initializing a 3x2TB RAID 5 array, which yields 4TB of usable storage.

    My goal, when I'm done, is to have something like the following:

    • 1st disk, partitioned into two volumes: C: for the OS (approx. 60GB) and D: for household shared music and videos (approx. 1.8TB)
    • 2nd/3rd/4th disks (the RAID 5 array): E: for client backups (4TB)

    My two questions:

    (1) When the array is finished initializing, what do I do to make the full 4TB available for client backups?  Ken's earlier response said something about adding the array as a single large disk using a GPT partition table.  I can Google how to do this, but I thought I'd check whether anyone has a link which might help me.  After partitioning and adding the disk, is there anything else I have to do for WHS2011 to see the disk and use it for client backups?

    (2) Suggestions on how to back up the server?  I know that Server Backup won't work with the new 4TB disk.  I'm contemplating an ICY Dock, or some other enclosure I could physically rotate offsite without too much pain.  But it has to hold 4TB, so I assume I'm looking for a RAID 0 box with 2x2TB inside.  Does anyone know of a good solution for this?  Second, since Server Backup won't handle the 4TB RAID disk, I need another way of automatically backing it up to whatever external box I find.  I've been reading posts for a while, and I see some recommendations for SyncToy (a utility I love) and a few statements that there are no good consumer solutions for this.

    Many thanks for any suggestions you might have.

    Friday, November 25, 2011 9:56 PM

All replies

  • I have a similar issue with protecting photos from disaster, and I have two WHS2011 servers (the first as a primary server, the second to back up the first). I do not use RAID, because I want access to my data even if the RAID card is down. Consequently I do a one-to-one mapping between the 1st and 2nd servers; that is, I sync shared folders from the first server to the second, so that one server can go down and I still have full access to all my data. I also keep the primary photo database on my desktop, synced to the 1st and 2nd servers, and thus have the data on three local computers. Offsite storage is done by rotating external drives (limited to 2TB) of raw photo files rather than using Server Backup. A different solution from yours, but it works for me.

    When (if) I start to run out of space, it will be a new HDD for each server and another external drive. That way I do not need to use RAID at all (of which, incidentally, I am not a big fan). The only downside is that I will have multiple drive letters, and potentially more than one shared folder containing photos once my collection exceeds 2TB - a long way to go, even though I shoot in RAW.

    For syncing folders I use a combination of SyncToy and SyncBack (which is similar in principle to SyncToy but has much more flexibility/capability).

    It does not answer your questions but it may be helpful in your thinking process.


    Phil P.S. If you find my comment helpful or if it answers your question, please mark it as such.
    Friday, November 25, 2011 10:38 PM
  • Thanks for your thoughts, Phil.  I appreciate that.  Unfortunately I'm not quite ready to build a second server, take the cost hit, or find more room to store it.  The idea is an excellent one but I don't think it will work for me.  Or maybe I'm just resistant because I don't feel like I should have to do that in order to get value out of this damn thing.

    I've continued to read this forum and TechNet.  A few interesting details for those who may have landed on this thread and are curious to learn more:

    --I'm now reading the Help section on "Server Backup" on my server (by Remote Desktop connection).  It's giving me some useful education on the Backup tool.  General information on Backup and Restore is also posted here, though I didn't find the answer to my questions.  I'm still looking for recommendations on either using wbadmin (not for the novice, though I found this) or a third-party utility which will make this simpler than the command line.
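    For anyone else looking at the wbadmin route, a one-off backup run from an elevated command prompt looks roughly like this.  The drive letters below are placeholders, not my actual layout, and as I understand it wbadmin stores backups in VHD files, which carry their own 2TB ceiling - so this still wouldn't cover the 4TB array:

    ```bat
    rem Sketch of a one-time Windows Server Backup run via wbadmin.
    rem Drive letters are placeholders: C:,D: are source volumes,
    rem F: is the backup target. Run from an elevated prompt.
    wbadmin start backup -backupTarget:F: -include:C:,D: -allCritical -quiet

    rem List the backup versions already stored on the target:
    wbadmin get versions -backupTarget:F:
    ```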

    --Be aware that larger USB external backup disks may not be compatible with WHS2011 because of their physical sector size.  There's a very helpful article on TechNet, with a link to an External Backup Drives Compatibility List.  The TechNet article has a bit of a troll problem, so I suggest scrolling about 75% down the thread to a nice summary by "KarlBystrak".  A bit further down the page is a link to a Western Digital page which gives a work-around for this error when triggered by one of their larger external drives.
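    If you want to check what sector size a drive actually reports before committing to it, fsutil can show this.  X: below is a placeholder for the drive's letter; on 2008 R2-era systems the physical-sector line may show "Not Supported" if the storage driver can't report it:

    ```bat
    rem Show NTFS volume information, including sector sizes.
    rem X: is a placeholder for the drive letter you want to check.
    rem "Bytes Per Sector" / "Bytes Per Physical Sector" are the lines
    rem relevant to the 4K "Advanced Format" compatibility issue.
    fsutil fsinfo ntfsinfo X:
    ```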




    • Edited by cornwell Saturday, November 26, 2011 8:12 AM
    Saturday, November 26, 2011 8:11 AM
  • I would wager that you don't need to keep all your images locally (on clients). I'm a photographer myself, and all I keep is about the last 6 months of images. I have little call for images further back than that, so I store them on my server where they get backed up regularly. I use Lightroom for organizational and editorial purposes, so I have a catalog of recent images (files stored locally) and a few catalogs for older images (files on the server). This works well for me, and keeps the amount of data stored locally to a manageable level (important, as "local" is a notebook).  
    I'm not on the WHS team, I just post a lot. :)
    Saturday, November 26, 2011 2:51 PM
  • No question, Ken--you're right.  There's no need to keep all my images locally.  But I'm operating on a bit of a different model, at least until I find out that mine doesn't work.  My first priority was keeping three copies of every image, distributed across three physical locations.  I've chosen one on the client, one on the server, and one offsite.

    Also: I built a desktop with a lot of storage, memory and speed as an image workstation.  Because I'm undisciplined about developing my digital images, I have hundreds which I haven't worked on yet, despite taking them many months ago.  I won't edit images stored remotely (on the server), and I don't want to shuffle images back and forth.

    At any rate, thanks for letting me know how you set up your workflow.  I'm always interested to hear how others approach this issue.

    I think I've found the answer to my first question.  I used Disk Management (remote desktop into the server, click Start, right-click "Computer", click "Manage", select "Disk Management") to set up the disk with a GPT partition table, create a volume, and format it.  That went smoothly.
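    For anyone who prefers the command line, the same Disk Management steps can be done with diskpart.  The disk number, label, and drive letter below are examples - verify with "list disk" first, because "clean" erases whichever disk is selected:

    ```bat
    rem Rough diskpart equivalent of the Disk Management steps above.
    rem Disk number, label, and letter are examples - check "list disk"
    rem output before running, since "clean" wipes the selected disk.
    diskpart
    list disk
    select disk 1
    clean
    convert gpt
    create partition primary
    format fs=ntfs label="ClientBackups" quick
    assign letter=E
    exit
    ```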

    I'm curious if anyone has suggestions on backing up a server disk which exceeds 2TB, or (more to the point) two server disks which include one over 2TB.  Let's say I've got an external hard drive big enough to back up all data on the server--3 or 4TB.  Do I SyncToy / SyncBack the folders on all of the server volumes to the external hard drive?  Will this also preserve the client drive images?
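    If SyncToy turns out not to scale, one built-in alternative I've seen suggested is robocopy in mirror mode.  The paths below are examples, and /MIR deletes files on the target that no longer exist on the source, so test against a small folder first.  As I understand it, the client backup database is just files under the Client Computer Backups folder, so mirroring that folder should copy the backup data - though whether a restored copy is usable is something I'd want to test, and it would be safest with the backup service idle:

    ```bat
    rem Mirror the server shares to an external drive (paths are examples).
    rem /MIR      - mirror: copies new/changed files, deletes orphans on target
    rem /R:2 /W:5 - retry failed copies twice, waiting 5 seconds between tries
    rem /LOG+:    - append results to a log file
    robocopy "D:\ServerFolders" "X:\Backup\ServerFolders" /MIR /R:2 /W:5 /LOG+:"X:\Backup\sync.log"
    ```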

    Sunday, November 27, 2011 3:12 AM