Drive Extender vs RAID

  • Question

  • I come from a technical background in the server world, where RAID is king. I've seen way too many disasters averted by proper array planning to completely write RAID off without a fight. But instead of buying a typical 4-drive NAS where I could configure RAID 5 and then sleep well at night, I figured I'd give this WHS Drive Extender a shot. So I purchased the MediaSmart EX470 and put two extra 1TB drives in it (1 x 500GB + 2 x 1TB = 3 drives total). This Windows Home Server product sounded way too cool to pass up, and it seemed worth trying to make it work (it's a heck of a lot easier than trying to get Windows Server 2003 to do the same thing). After playing with WHS for a few days, my confidence in its ability to protect my data is not as strong as I would like it to be. I was hoping someone here could help assure me that my data is safe WHEN one of my drives fails.

    Before I put anything important on this thing, I want to put it through some basic tests. First I copied some files to the server from my laptop. Then I downloaded and installed Duplication Info and waited for the data to be duplicated. Then I pulled one of the drives out. The drive of course went "Missing", as expected. I then tried to access the files I had copied over to make sure they were still accessible. What I found was that anything that had been duplicated to that drive was displayed, but inaccessible (file not found error). Anything that was NOT duplicated to that drive worked perfectly fine.

    The way I understand it, Drive Extender is supposed to do a straight copy from one drive to another. That way the file will still be available should the drive fail. In a RAID1 or RAID5 array, a user would never have known that the drive was bad/missing unless they were looking for it. With Drive Extender the file was simply gone until I plugged the drive back in.

    My questions are: is this expected behavior? Shouldn't having a copy of a file on a different hard drive mean that I'll be able to access that file regardless of whether one of the drives fails? Maybe I'm just not understanding how WHS handles drive failures. Can anyone help explain how this is supposed to work?



    I only pretend like I know what I'm doing
    Friday, October 17, 2008 9:55 PM

Answers

  • Hello,
    While the files are still there (you can verify this on one of the remaining disks: toggle Show hidden files and folders on in Explorer's options and go through each disk on the server directly, checking D:\DE\shares for the primary disk and C:\fs\<volumemountpoint>\DE\shares for the other disks), the tombstones presented in Shared Folders may still link to the files on the lost disk.
    These tombstones will be corrected if you remove the missing disk via the WHS console.
    Not the ideal solution in my opinion ...

    Anyway, with Drive Extender, as with RAID, you will still need an external backup to be protected as well as possible against the disasters that RAID also cannot cover (overwriting a file, all disks burning out from an overvoltage, the machine getting stolen ...).
    This can be done with the console, after assigning the backup role to a drive that is not in the storage pool, or with a script, perhaps invoking robocopy.
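    The robocopy route could be scripted along these lines. This is only a minimal sketch: the share and backup paths are hypothetical, and you would point them at your own shares and a drive that is NOT in the storage pool.

```python
# Minimal sketch of a backup script in the spirit of the robocopy suggestion.
# All paths are hypothetical examples; adjust to your own server.

def build_robocopy_command(source, destination):
    """Assemble a robocopy mirror command for one shared folder."""
    return [
        "robocopy",
        source,
        destination,
        "/MIR",              # mirror the tree: copy new/changed files, delete removed ones
        "/R:2",              # retry a failed copy twice
        "/W:5",              # wait 5 seconds between retries
        "/LOG+:backup.log",  # append a log for later inspection
    ]

if __name__ == "__main__":
    cmd = build_robocopy_command(r"D:\shares\Photos", r"E:\backup\Photos")
    # On the server itself you would hand this list to subprocess.run(cmd);
    # robocopy exit codes below 8 indicate success.
    print(" ".join(cmd))
```

    Scheduled nightly via Task Scheduler, something like this keeps the backup drive current without involving the storage pool at all.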

    Best greetings from Germany
    Olaf
    Friday, October 17, 2008 10:34 PM
    Moderator

All replies

  • Thank you for your response Olaf.

    So all I need to do is to "Remove" that drive from the pool, and the tombstones will be updated? That sounds easy enough. I'll give it a try when I get home tonight. I really want this server to work for me, I'm just not used to its methods yet.

    I realize that RAID isn't the end-all solution to never losing a file. And I do plan on adding a backup drive at some point in time. Right now I'm mainly just concerned about leaving all my eggs in one basket. Having a RAID1 or RAID5 array just gives me one chance to lose a drive and be able to recover from it. Without that protection, it's a one shot deal...bad drive --> lost files --> a really bad day. All I'm looking for with this solution is...bad drive --> frantic dash to computer store for a replacement drive --> phew, I'm glad I had that Drive Extender to save my butt.

    I only expect to have the same level of protection with Drive Extender as I do with RAID. I know neither one of them is a "fool-proof" solution, but they're better than just leaving files on my laptop. If I have to click a "Remove" button so I can access my files after a failure with Drive Extender, then that's perfectly fine with me.



    I only pretend like I know what I'm doing
    Saturday, October 18, 2008 12:01 AM
  • You need to simply remove the missing hard drive and your files will be accessible again.  I had also been using RAID for my important files in the past, but I really like the way WHS lets me mix and match any drives I want.  I have a hodgepodge of different drives I've collected, and if I wanted RAID, well, you know the drives have to be the same size, and ideally identical.  Good luck trying to explain to my non-computer-literate wife where to put things, which drive letter, etc.  Now she clicks the icon on her desktop and then Pictures or Documents.  It's simple for her.
    athlon 3400, 2GB RAM, 7 drives totaling about 3.15TB.
    Sunday, October 19, 2008 4:08 AM
  • One thing I never understood about duplication (I don't use it which is probably why) is how it can ensure duplication with odd drive sizes. For instance, let’s say you have one 500GB as the main drive (20GB C:\ & 480GB D:\) and you add a 200GB drive to the pool (680GB total). If you copy 400GB of data to the server with duplication, how can it possibly fit all of the duplicates?

    Another thing I don't like about duplication is that large transfers crawl at a snail’s pace (at least before PP1; I haven't tried it since). Correct me if I'm wrong, but if you're using duplication, you're really copying the file 3 times (Pool -> Destination drive -> Duplicate). Whereas if you use RAID, it's only twice (Pool -> Destination drive) (well, technically 3, but the RAID card handles the third (mirror) copy, which is simultaneous and much faster).

    Regardless, I would like to be able to control when extender kicks in a bit (like every night at midnight, do some house cleaning on D:\) - or at least have it be intuitive enough to know not to kick in during a large transfer unless it's running low on space.

    If you're copying a large amount of data and extender kicks in, things start to chug. If duplication is on as well, things downright crawl (for me at least).

    I run RAID 1 pairs (2x750GB + 2x1TB + 2x1TB + etc) and have a 2nd WHS server (no raid or duplication) that I back up to using SyncToy. So both drives in a pair plus the drive containing that data in the 2nd server have to fail before I lose anything (as far as drive failure is concerned).

    Monday, October 20, 2008 1:42 AM
  • It's all about ease of use.  WHS isn't necessarily targeted at you, nor are its Drive Extender features.  It's about putting something on WHS and knowing that it handles everything. 

    If there is not enough room for duplication, WHS tells you.  Simple as that.  PP1 doesn't use the pool anymore; it copies straight to the destination drive, then every hour it duplicates.  Things are much faster now. 

    athlon 3400, 2GB RAM, 7 drives totaling about 3.15TB.
    Monday, October 20, 2008 2:14 AM
  • Removing the drive did the trick. Once I did that, I was able to access my files again.

    I suppose it's not that big of a deal, but it is kind of a pain that you actually have to click anything at all in order to access your files after a drive fails. Maybe I'm just being overly picky, but I'm used to the RAID controller kicking in and doing its job without any intervention on my part. You'd think that since WHS can scan a data drive and recreate all the tombstones when you re-install the OS, it shouldn't be that big a task for it to update the tombstones when a drive simply goes missing. I can live with a delay while WHS rewrites the tombstones and/or creates new duplicates. But if I don't press the magical "Remove" button, then what? It sits there crippled until a second drive fails?

    This product is aimed at the everyday household: people who don't know the first thing about server administration. I know it's not a lot to ask my grandmother to log into her server and "remove" a hard drive, but it will still cause her all kinds of stress and headache when she has to call me up to tell me that not only is there a red light on the front of her MediaSmart server, but all of her files are inaccessible. How is the average user going to know that "removing" a hard drive will actually SAVE their files (so long as they have duplication turned on) rather than destroy them? It sounds completely counter-intuitive to me, and I actually know something about server administration!

    I think that if you're going to market a new technology to compete with an industry standard, all the while touting how much better and easier the new technology is, then it should be able to do the same job, if not better. I agree that RAID is a pain to deal with, much more of a pain than the average Joe is willing to put up with, but the best thing about it is its "hands-off" nature. You don't have to tell a RAID controller to protect your data; it just does it. Drive Extender leaves itself vulnerable by depending on the user to click "Remove" before it will rewrite the tombstones and re-duplicate the files. What if I'm on vacation? What if I'm running a website off of it? There are a thousand scenarios that could leave a bad hard drive sitting for days/weeks/months, leaving the files on the server completely inaccessible in the meantime.

    Still, I wouldn't say this is a deal breaker for me. You can't even compare WHS to a simple NAS; the additional features WHS offers greatly outweigh the inconvenience of having to manually "remove" a bad hard drive, which, in essence, does the same thing as a RAID 1 array (provided you turned on duplication, of course!). But I do say it wouldn't be that hard to build in some logic to automatically "remove" a missing drive, maybe after an hour or two, just so it doesn't start changing tombstones the moment you unplug a drive to reorganize its cables or something. An hour or two is much more reasonable than the open-ended "whenever I get around to clicking the Remove button" time frame.



    I only pretend like I know what I'm doing
    Monday, October 20, 2008 4:07 PM
  • I would say that, while it is not convenient or elegant, it gives the user a reason to notice a broken disk ASAP and act.
    Although this can also lead to mistakes by users who tend to act before considering the outcome of their action, or without understanding the results
    (i.e. disabling duplication before swapping the disk ...).

    Automated removal of a drive would be considered bad, since WHS also supports external disks as regular storage. A temporarily disconnected USB disk would be auto-removed, which would not be good.

    Best greetings from Germany
    Olaf
    Monday, October 20, 2008 6:07 PM
    Moderator
  • That is a good point. RAID was never designed to work with USB drives. Ok, so you can't have an auto "remove" function.

    However, the system still does have a tombstone that points to two copies of the same file. How about some logic built into Drive Extender that when someone double-clicks a file, Drive Extender says "If the primary shadow is missing (or corrupted), then use the secondary". And then open the secondary copy of that file without interrupting the user with a "file cannot be found" error. That should at least allow access to duplicated files without having to "remove" the drive.
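    As an illustration only (the real DE internals aren't public, so the Tombstone class and its paths here are entirely hypothetical), the fallback being asked for amounts to something like:

```python
# Hypothetical sketch of tombstone fallback: try the primary shadow copy,
# and if its disk is missing, silently open the duplicate instead of
# surfacing a "file not found" error. Not real WHS code.
import os

class Tombstone:
    def __init__(self, primary_path, secondary_path=None):
        self.primary_path = primary_path      # shadow on the first disk
        self.secondary_path = secondary_path  # duplicate on another disk

    def open(self, mode="rb"):
        # Walk the known copies in order and open the first one whose
        # disk is actually reachable.
        for path in (self.primary_path, self.secondary_path):
            if path is not None and os.path.exists(path):
                return open(path, mode)
        raise FileNotFoundError("no reachable copy of this file")
```

    The user-visible effect would be exactly what's described above: a double-click succeeds as long as at least one copy survives, with no "Remove" step required first.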



    I only pretend like I know what I'm doing
    Monday, October 20, 2008 6:50 PM
  • This idea is not a new one, I am sure. Maybe register on Connect and make a suggestion there, or vote for an existing one? That could help bring such features into future versions or upgrades.
    Best greetings from Germany
    Olaf
    Monday, October 20, 2008 6:53 PM
    Moderator
  • bricklayer said:

    One thing I never understood about duplication (I don't use it which is probably why) is how it can ensure duplication with odd drive sizes. For instance, let’s say you have one 500GB as the main drive (20GB C:\ & 480GB D:\) and you add a 200GB drive to the pool (680GB total). If you copy 400GB of data to the server with duplication, how can it possibly fit all of the duplicates?

    Another thing I don't like about duplication is that large transfers crawl at a snail’s pace (at least before PP1; I haven't tried it since). Correct me if I'm wrong, but if you're using duplication, you're really copying the file 3 times (Pool -> Destination drive -> Duplicate). Whereas if you use RAID, it's only twice (Pool -> Destination drive) (well, technically 3, but the RAID card handles the third (mirror) copy, which is simultaneous and much faster).

    Regardless, I would like to be able to control when extender kicks in a bit (like every night at midnight, do some house cleaning on D:\) - or at least have it be intuitive enough to know not to kick in during a large transfer unless it's running low on space.

    If you're copying a large amount of data and extender kicks in, things start to chug. If duplication is on as well, things downright crawl (for me at least).

    I run RAID 1 pairs (2x750GB + 2x1TB + 2x1TB + etc) and have a 2nd WHS server (no raid or duplication) that I back up to using SyncToy. So both drives in a pair plus the drive containing that data in the 2nd server have to fail before I lose anything (as far as drive failure is concerned).



    From my understanding of duplication, the instance you're describing would not be possible. Once a share (or shares) is marked for duplication, your aggregate D:\ storage space is, at most, halved, so your 680GB of space becomes 340GB, and it would not be possible to move 400GB of data into the pool.  It's halved (at most) to make sure that every duplicated cluster is not copied to the same disk.

    I've probably missed a few nuances in the technology, but that's how I understood it.
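    For what it's worth, the two limits at play can be sketched in a few lines. This assumes only that duplication must place a file's two copies on different disks; the exact placement rules DE uses aren't documented here.

```python
# Rough upper bound on how much data fits with duplication on, assuming a
# file's two copies must live on different disks. Two constraints apply:
#   1. every byte is stored twice, so at most half the pool holds data;
#   2. the second copy can never share a disk with the first, so the
#      disks other than the largest must be able to hold a full copy.

def max_duplicated_capacity_gb(disks_gb):
    """Upper bound (GB) on duplicated data for a list of disk sizes."""
    total = sum(disks_gb)
    largest = max(disks_gb)
    return min(total // 2, total - largest)

# The 500GB + 200GB example from the question: 480GB of data space on D:
# plus a 200GB pool disk.
print(max_duplicated_capacity_gb([480, 200]))  # -> 200
```

    So in the example from the question the effective limit is even tighter than the halving rule suggests: the 200GB disk caps duplicated data at 200GB, and the 400GB copy would fail either way.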

    As far as the transfer goes, I believe PP1 removed the DE behavior of writing a file to the first (OS) hard disk before "spreading it out".
    Friday, November 7, 2008 8:40 PM