locked
RAID Advice

  • Question

  • Looking for recommendations on RAID cards or setups.  Feel free to hijack this thread.  :)

     

    I'm looking at this card here at Newegg (Adaptec 3805).  I've used a couple of Adaptec cards in the past and I like them, but I'd be more than willing to look at something else in approximately the same price range.  I'm looking to do a RAID 5 setup.  I don't believe a RAID 6 setup is necessary despite seeing it mentioned in the other thread.  As long as I get a notification or alert that a drive has failed I can shut WHS down for a while until I get a replacement.  According to the tech specs, I can have up to a 512 TB array.  So... on to the questions:

     

    1.  So at a minimum I'm looking at three 1-TB drives, correct?  Or do I need 4 drives with one for the OS and the other three for storage?  If I read that other thread correctly, I would have two partitions for the entire set of drives, one partition for the OS and one for storage.

     

    2.  How much space can I add to this array?  Seems like there's a 2 TB limit somewhere, right?  I can't just start adding 1-TB drives to this all willy-nilly, can I?  What about when the 2 TB drives come along; would that be "bad" for the array to put just one in?  Do I need to put them in pairs or something?

     

    3.  If we truly trust that a reinstall will restore all our tombstone buddies, and I'm really not concerned about the throughput performance of the unit, won't this, in the end, be more trouble than it's worth?  Simply get a 250 GB drive for the OS and tombstones and two 1-TB drives for storage and duplication.

     

    4.  For that matter, wouldn't a simple Ghost (i.e. image) of a bare bones install suffice?  I'm thinking WHS, PerfectDisk, and anti-virus and that should be all I'd need, I suppose.  Or, during the reinstall, is there a part where you answer a question "Hey, this is a reinstall, examine the other disks for files."  Or can you do that after installation and simply rebuild the database?  I suppose that's probably answered in one of the white papers.

     

    I think that's enough for one post.  Thanks for the help folks!

     

     

    -Matt

     

    edit:  OK, after reading this thread I'm concerned.  If my OS drive fails, I thought I could simply drop another one in and reinstall.  Looks like the procedure is more complicated and I would require external storage to complete it correctly???

     

    Monday, February 11, 2008 6:56 AM

Answers

  •  ryan.rogers wrote:

    John, in the future I recommend not being quite so liberal with the editing if you are going to reply so tersely.  Almost all of my statements are presented out of context.  When I read your post I feel like I am watching Bill O'Reilly.



    Ryan,

    I did that quite intentionally.   My intention in this "debate" is to separate fact from opinion.  I have been watching a whole bunch of OPINION floated around as if it were fact.

    John, simple question:

     

    Is it acceptable to partition an MBR-formatted RAID array into multiple 2 TB max partitions and have those partitions managed in the DE pool?


    Clearly based on your post you think it is, but I just wanted to confirm.

     

    Thanks,

    Ryan



    First understand that the RAID ARRAY doesn't have a clue what MBR even is.  MBR is specific to a VOLUME which is contained on an array.  Any given volume may or may not be an MBR VOLUME as YOU decide.

    With that out of the way, my assumption is that you are really asking "is it OK to have multiple VOLUMES on a RAID ARRAY handled by DE".  My answer is "of course it is OK".  Always keep in mind that the ARRAY is a support mechanism for handling redundancy of data so that a drive failure does not cause data loss.  Nothing more, nothing less.

    I need to stress again (as I did in my original response directly to the person posting the question) that we are talking about a technology here.

    A raid array can be thought of as nothing more than a HUGE hard disk.  You take 2 or more literal PHYSICAL hard disks and turn them into a single "ARRAY".  That ARRAY is a SINGLE entity.  What would you do if I handed you a 30 TERABYTE hard disk that just took care of all aspects of data integrity?  Whatever you would do with that is what you would do with a RAID 5 or 6 array, because that is what a RAID 5 or 6 array gives you.  You have taken N physical surfaces and created ONE LOGICAL surface called an ARRAY.

    Now you take the ARRAY and create one or more LOGICAL surfaces called VOLUMES.  VOLUMES are what the OS writes data onto and reads data from.  The OS does not know whether the VOLUME is contained on an ARRAY or on a physical hard drive.

    The nice thing about this HUGE hard disk (ARRAY) is that it automatically handles the issue of dead disk drives for you.  The ARRAY doesn't die when a disk dies.  VOLUMES don't die when a disk dies.

    Imagine for a moment a single hard disk.  I am sure that you know that MOST physical disks of any size have multiple platters and a head on each side of that platter.  Now imagine for a moment that a single head could "die", but that the DISK DRIVE could simply grab an unused platter side and head and use that to "replace" the dead head / platter side.  Furthermore imagine that this Wunderdisk had written your data around on the platters such that if any platter / disk died your data was still all there.

    Think of a RAID ARRAY in those terms and suddenly your perspective changes.  The ARRAY is a single LOGICAL "drive", but much bigger than any individual drive that you own, in fact (more or less) the simple sum of all the physical drives that you give it to manage.  Any single drive can die without affecting the ARRAY or the VOLUME(s) on the ARRAY.

    The advantage that we have with the ARRAY is that WE DECIDE how to carve up the "huge disk" that it presents us.  The ARRAY is RAID 5 but it can contain 1 VOLUME or 30 VOLUMES, it can contain 3 physical disks or 24 physical disks.  I can have 1 disk for "checksum" data and the rest for data, or I can have 1 disk for a hot backup and 1 disk for checksum and the rest for data but in the end, the DETAILS can be completely hidden from me as the administrator of the ARRAY.  I throw a bunch of disks into the array, make a decision about the configuration and forget it.  From that point on it acts like a SINGLE physical DISK to the system.

    You are quite familiar with taking a terabyte drive (a SINGLE DISK) and carving out partitions right?  You might build a 100 gig system partition (which is a VOLUME BTW).  You decide that is enough to hold Windows, Office, Firefox, music players and all the other "stuff" that makes up a "system".  You might create a 300 GIG Data partition for storing all your music files.  You might create a 600 GIG volume to hold Video.  But in the end, while you have THREE VOLUMES, there is only ONE physical disk.  The problem here is that if that ONE physical disk dies, ALL of your storage dies.  I am sure we all agree that is bad.

    So... We have a RAID controller.  It can handle 8 physical drives.  We throw FOUR drives on it.  We make a decision to use three of them for data and one for parity.  We will worry about a hot backup later (or never?)  We now start carving that ARRAY up.  However you want, you create VOLUMES of the sizes that YOU decide are appropriate for whatever YOU need to do with that amount of storage.

    Of course there are decisions to make at the beginning.

    Let's take TWO WHS scenarios.

    SCENARIO ONE:

    I will start with (3) terabyte disks and build an array.  I never expect to have more than two terabytes but if it happens I will simply add more DISKS to the ARRAY and create a new VOLUME.  So... I create a single THREE terabyte ARRAY and tell the controller to make it Raid 5.  The controller takes one of the drives and uses it for parity and presents me 2 terabytes of blank space (no VOLUME yet).  Because I think that I will PROBABLY never need more or at the very least it will be a long time before I do, I create a Single Large Volume using ALL of the 2 terabytes and hand that to WHS.  WHS immediately resizes the volume, carves out 20 gigs for itself and makes the rest the SINGLE data drive.  All is happy in whoville.  Max performance, no DE etc.

    NOTICE that I have an "overhead" of 1 terabyte out of 3 terabytes, or 33%.

    Now I need more space two years down the road.  I could make more decisions (and I might if 5 terabyte drives have come along) but assuming that I am still using 1 terabyte drives... I grab two more and add them to the ARRAY.  The ARRAY now has two terabytes of EMPTY space.  NOTICE... that the ARRAY simply added the two new drives to the "pool" and they can automatically be RAID 5 if I want them to be.  I do.  I now create a brand new 2 terabyte VOLUME and hand it to DE.

    NOTICE that I now have an overhead of 1 terabyte (checksum storage) out of FIVE terabytes, or 20% overhead.  NOTICE that with RAID 5 and a SINGLE ARRAY, I always have a single drive used for overhead.
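For anyone who wants to check that overhead arithmetic, here is a quick Python sketch (the drive counts and sizes are just the numbers from this example, and the function name is purely for illustration):

```python
def raid5_overhead(num_drives: int, drive_tb: float = 1.0):
    """RAID 5 dedicates one drive's worth of capacity to parity.

    Returns (usable_tb, overhead_fraction)."""
    usable = (num_drives - 1) * drive_tb
    total = num_drives * drive_tb
    return usable, (total - usable) / total

# Three 1 TB drives: 2 TB usable, 1/3 (33%) overhead.
print(raid5_overhead(3))
# Grow the array to five drives: 4 TB usable, only 20% overhead.
print(raid5_overhead(5))
```

The overhead fraction shrinks as the array grows, which is exactly the point being made above: RAID 5 always spends one drive on parity no matter how many drives are in the array.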

    For the first time, DE now has to do its thing and does so.  For THREE YEARS I had nirvana in Whoville, but now I have DE.  Hopefully during the intervening three years the WHS team has fixed many of the bugs in DE and I have a much better DE experience, but in ANY EVENT I will have no WORSE an experience than if I just handed DE a couple of disks to manage.  PLUS I have RAID 5 and all that means (data integrity, low overhead etc).

    SCENARIO TWO. 

    I am immediately going for a 20 terabyte solution, right now, out of the box.  This is a very different scenario and I MAY make some different choices.  I have made a decision UP FRONT that I am going to put up with the DE "stuff".  It matters not whether I use RAID or not, DE is going to do its DE thing.  That certainly in no way affects my decision to use RAID however.  A raid array is just a tool to handle the data integrity issue for me.  Nothing more, nothing less.  So I still use RAID.

    I grab 22 one-terabyte disks and a largish 24-port controller and I go to town.  I build an ARRAY with 20 disks for data, one for parity and one for hot spare.  I now have a single HUGE 20 terabyte "hard disk" available to me.  WHS DICTATES to me that it will only accept a maximum size of 2 terabytes in any single volume.  Nothing I can do about that limitation, and it is in no way different from just handing DE a bunch of drives.  No problem.

    Where the decision comes in, however, is in whether to make the "landing zone" a full 2 terabytes or not.  In this case I probably would not.  I would probably make the "landing zone / system disk" an x-hundred-gig volume.  I make a decision based on the types of files I will be putting on the drive, how many at a time, and their size, and then choose... 400 gigs (pulled out of thin air for this example).  WHS takes over, carves 20 gigs out for the system disk and creates a 380 gig "landing zone".  I start creating 2 terabyte volumes and handing them to DE.  I would end up with:

    (1) 20 gig volume for the system disk.
    (1) 380 gig volume for the landing zone.
    (9) 2 terabyte volumes.
    (1) 1.6 terabyte volume which is "room left over" after creating as many 2 terabyte volumes as I can.
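If you want to sanity-check that carve-up, here is a rough Python sketch of the same arithmetic (the 400 gig landing-zone figure and the 2 TB per-volume cap are the numbers from the scenario above; the function name is hypothetical):

```python
def carve_volumes(array_tb: float, landing_gb: int = 400, max_vol_tb: float = 2.0):
    """Carve a big RAID array into WHS-sized volumes (MBR caps each at 2 TB).

    The first volume (landing_gb) is handed to WHS, which takes 20 GB for
    the system partition and leaves the rest as the landing zone."""
    volumes = [("system", 0.02), ("landing zone", round(landing_gb / 1000 - 0.02, 2))]
    remaining = array_tb - landing_gb / 1000
    while remaining >= max_vol_tb:          # full 2 TB data volumes
        volumes.append(("data", max_vol_tb))
        remaining -= max_vol_tb
    if remaining > 0:                       # whatever is left over
        volumes.append(("data (leftover)", round(remaining, 2)))
    return volumes

# The 20 TB usable array from scenario two:
for name, size in carve_volumes(20.0):
    print(f"{size} TB  {name}")
```

Running it reproduces the list above: a 20 gig system volume, a 380 gig landing zone, nine 2 TB data volumes, and a 1.6 TB leftover volume.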

    NOTICE though that ALL of these VOLUMES reside on a single RAID 5 array.  WHY do I make it a single array?  To "share" the one single "parity" disk with the entire array, to MAXIMIZE the amount of disk that is used for STORAGE as opposed to OVERHEAD.  In fact if it were me, I would go RAID 6 for technical reasons.

    In any case, ALL of the volumes are RAID VOLUMES.  ALL the volumes are "redundancy protected".  A drive failure simply has NO effect on the operation of the server.  Should ANY physical disk fail, the "hot spare" raid drive is automatically "swapped in" by the controller, the RAID ARRAY is rebuilt to give me back my "redundancy protection" and life is sweet.  I don't even have to be home while all this happens.  I could be in the Bahamas sitting on the beach, and my RAID controller is busy rebuilding my array for me.

    When I get back from the Bahamas, the controller notifies me that a disk drive died and "oh by the way" I need to order a new disk to replace my hot spare which was pulled into the ARRAY (automatically) to replace the dead disk.  I pull the dead disk out, throw it in the trash, put the new disk in the position of the removed drive, and tell the array controller to use it as the hot spare.

    WAAAAAAAAAaaaaaaaayyyyy easier than jumping on a plane to rush home to rebuild my computer, spending the day trying to rebuild my system disk should that happen to be the disk that failed. 

    NOTICE that (at least with a hardware controller) streaming read speed is N * Individual drive speed (or close to it).  If I have 3 drives in the raid, and each individual drive is capable of STREAMING data off the drive at 30 megs / second, then streaming read is 90 megs / second.  If I have 8 drives, then it is 8 * 30 megs / second (240 megs / second).  So another benefit of using a large ARRAY is that the streaming read speed rises with every drive placed in the array.  OF COURSE a bottleneck will pop up somewhere.  Perhaps the speed of the interface between the RAID controller and the CPU, perhaps the gigabit NIC used to ship the data out to other computers.  However in the end the increase is real and can have significant impacts on total performance.
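The streaming-read arithmetic is just multiplication with a cap, as this small sketch shows (the 30 megs / second per drive and the ~125 MB/s gigabit-NIC ceiling are illustrative figures, not measurements):

```python
def streaming_read_mb_s(data_drives: int, per_drive_mb_s: float = 30.0,
                        bottleneck_mb_s: float = float("inf")) -> float:
    """Ideal streaming read scales with the number of data drives,
    capped by whatever bottleneck (controller bus, NIC) comes first."""
    return min(data_drives * per_drive_mb_s, bottleneck_mb_s)

print(streaming_read_mb_s(3))                        # 3 drives
print(streaming_read_mb_s(8))                        # 8 drives
# A gigabit NIC (~125 MB/s) caps what remote clients actually see:
print(streaming_read_mb_s(8, bottleneck_mb_s=125))
```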

    And of course, all of this is for instructional purposes only.  Exactly how you would build your specific RAID system does absolutely depend on your finances, your motherboard, your intentions, and so forth.  All of which in no way negates the POTENTIAL to have RAID solve the data integrity issue for you.

    RAID is simply a tool, which turns a bunch of individual hard drives into a SINGLE ARRAY and which then manages data redundancy for you such that a physical disk failure does not ruin your Bahama vacation.  Nothing more, but isn't that enough?
    Tuesday, February 12, 2008 3:27 AM

All replies

  • First off, that card will be way overkill for 1TB drives, unless you plan on separating C: and D: (more on that below).

     

    1. (4) 1TB drives would be too many due to the 2TB MBR limit, unless you used one for a hot-spare.  While you can make GPT work on WHS through drive prep outside WHS, you can't have a GPT boot disk on WHS.  So yes, (3) 1TB drives would be it.  1 would be used for parity, and 2 for data, and since C: would be 20gb that leaves you the remainder for D:  One thing to consider is most RAID controller cards allow you to add a hot-spare later.  So you could buy a 4-port card and 3 drives, and add the 4th drive later if money is tight now.  This would be a fine solution for the 3405 4-port card.

     

    2. I don't believe you can really add any space to the array without a hitch, as it will have to come in as a new volume, which means to add it to your usable storage pool, it would bring all of the DE issues along with it.  You would have to add the new array outside of DE, which means exposing it as a network share directly, which isn't really within the spirit of WHS.  But if all you need is file sharing, it's certainly fine.

     

    From my point of view, the whole point of using RAID5 is you need to build your ENTIRE array up front.  If this is limiting, don't use RAID5.  Use DE.   MS is assuming most customers fall into this camp in their target market, and I think they are right.  And if DE worked 100% properly, a lot of these discussions would be moot.  So the 3805 doesn't make sense when looking @ 1TB drives. It does for smaller drives.

     

    3. Yep, but now you're not talking RAID5 either.  That's more of a DE setup.  Again, the C: and D: would go to the first 250GB disk.  Disregarding that 250GB may be a bit small for the landing zone down the road, you have now brought DE file-wrangling into the equation.  The primary benefit of considering RAID is to get the DE issues out of the way.  If you have both, you are only complicating things for yourself.  You pay a LOT for the RAID overhead; you should get its benefits.

     

    4. During and after reinstall it will find your data and rebuild the reparse points / tombstones.  Every now and then people post that this isn't working, but there are workarounds to get it up and running.

     

    It sounds like you still have some requirements to iron out. 

     

    In short:

     

    Just how much storage do you need?

     

    How soon do you need it?

     

    How much do you want to spend now?

     

    The first and most important question is this: is 2TB enough for the managed DE pool of data?

     

    If you need > 2TB now or down the road, then RAID5 (by itself) is not for you due to the MBR 2TB limit.  In this case, you either use DE, or you do something a bit crazy...

     

    I've overcome this limit by using RAID-1 boot drives containing the C: partition only and a RAID5 array (on GPT) for data containing D: only.  This is the only solution that solves all 3 issues:

     

    - resolves lack of redundancy of the system partition and drive

    - avoids various DE "issues" including corruption and performance

    - avoids 2TB limit with a single-volume RAID5 solution

     

    This is how an IT pro would build a typical mid-range file server, database server, etc.  That way the OS and data arrays are managed completely independently and MBR vs. GPT is moot.  Yes, it is far overkill for 99% of all WHS users.  Yes, it wastes a lot of space (although not as much from a GB standpoint, as the RAID-1 drives would typically be 80GB).  Yes, hippies cry at the sound of your hard-drives burning watt after watt. ;-)  But if you need to address all 3 issues stated above, this is how you do it.  It's a tried and true solution in the IT world.  In order to do this on WHS, you will need a 3rd party tool such as DriveImage to move D: off of HD0 and onto your GPT prepped RAID array, and then expand C: to fill HD0.

     

    You also can use the same RAID controller for both arrays.  Let's say you buy that 3805 8-port card.  So that uses 2 ports for the first array, and 6 for the data array.  Only drives in the same array have to match!  So, for example, you can have 80GB RAID1 C: and 6TB RAID5 D: (5TB usable).

     

    If you think 3.5TB to 4TB would be enough, then simply go with 750GB drives instead.  They are cheaper per GB anyway.

     

    Also, another thing to consider is if your motherboard has RAID built in, you can use the on-board RAID1 for the boot drives.  This would allow you to go up to a whopping 8TB, or purchase only a 4-port card assuming 1TB data drives.

     

    So as you can see, it all boils down to the requirements questions I asked above.  How much storage, and how soon, and what budget?  If you are only going to need around 2TB ish, then you can live with the MBR limit and just do RAID-5. 

     

    If you can live with 2TB, you have to consider the following: power consumption vs. read speed vs. drive bays available vs. do I want a hot-spare.  The analysis of these four questions will dictate whether you need a 4-port card with (4) large 1TB drives (2TB usable, hot-spare), or a 4-port card with (4) 750GB drives (2.25TB usable, 250gb wasted, no hot-spare), or an 8-port card with (8) 320GB drives (1.92TB usable, hot-spare), or (6) 500GB drives (2TB usable, hot-spare), or (5) 500GB drives (2TB usable, no hot-spare).
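If you want to run the numbers on those configurations yourself, the usable-capacity arithmetic is simple (a sketch: one drive always goes to parity, and an optional one is held back as a hot spare; the function name is just for illustration):

```python
def raid5_usable_gb(drives: int, drive_gb: int, hot_spare: bool = False) -> int:
    """Usable RAID 5 capacity: one drive for parity,
    optionally one more held out as a hot spare."""
    data_drives = drives - (1 if hot_spare else 0)
    return (data_drives - 1) * drive_gb

# The five ~2TB configurations discussed above:
configs = [(4, 1000, True), (4, 750, False), (8, 320, True),
           (6, 500, True), (5, 500, False)]
for n, size, spare in configs:
    print(f"({n}) {size}GB drives, hot-spare={spare}: "
          f"{raid5_usable_gb(n, size, spare)}GB usable")
```

Each line of output matches the usable figures quoted in the paragraph above.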

     

    As you can see, there are a LOT of ways you can assemble ~2TB, even for about the same cost.  Depending on your requirements, you may use a 4-port or 8-port card.  Each has its advantages and disadvantages.  You simply have to prioritize the 4 issues above to see which is most important.  Logistics have a lot to play here.  For example, if the case you are dead-set on only has room for 4 drives, that eliminates all but the first two choices.  If on the other hand you have a bigger case and don't mind the noise or burning the watts AND read-speed is super important, then the 320's would be a fine approach.


    But, either way, if you want > 2TB storage, then you are looking at your RAID array on D: being physically separate from C:.  This is advanced, but doable.  I've already done it and have a working WHS server with 80GB C: and 1TB D: (GPT).  Getting it working with D: on an array is a simple matter of prepping the volume differently in one step.

     

    Ryan

     

    Monday, February 11, 2008 10:37 AM
  • First, WHS and RAID is an unsupported scenario. You can do it (it's not even difficult, if you're technically proficient), and there are some advantages at this time to doing so, but you should think through the costs and benefits before you start, because if RAID turns out not to be a good solution for you, you will very rapidly paint yourself into a corner with no way out. Do some reading, the benefits and costs have been laid out repeatedly in the forums.

    So, to your questions:
    1. You can go either way (3 or 4 drives). If you put 4 drives in the array, you will create two volumes that you'll present to WHS as disks.
    2. Because WHS uses the MBR style of partition table, you are limited to a maximum of 2 TB in any single volume. As for adding space, that card doesn't support Online Capacity Expansion, that I can see. You would have to add an additional 3 disks and create an additional array to add capacity. OCE would let you add a disk to a functioning array, but will usually still require you to create an additional volume to expose the added storage (not a lot of cards that will grow volumes into the new space...).
    3. Should your system disk fail, WHS will recreate your tombstones when you reinstall. It will take a long time (multiple terabytes of data in the storage pool can easily take days to go through) but if you wait long enough it will work. This question is part of that cost/benefit analysis I mentioned earlier.
    4. There is no supported way to back up the system disk on WHS, due to the way WHS manipulates the file system. When you do a server reinstallation, you specify up front that you're recovering a server, and it does the right thing.
    In general, if you follow the KISS principle, WHS "just works." If you complicate things, trying to build a server out of super-high performance components (not needed, WHS has enough bottlenecks that you'll never see the benefit), or trying to use WHS as a general purpose "do everything" server, you will possibly have problems.
    Monday, February 11, 2008 12:29 PM
    Moderator
  •  Matt Greer wrote:


    I'm looking at this card here at Newegg (Adaptec 3805).



    Matt, that card will be fine.  The Areca 1220 is faster but generally about $70 more, though you can find it on sale occasionally.  That is what I own, but only because I wanted maximum performance for a SQL Server.  For WHS I would not recommend spending more money for more performance than this one provides.


    1.  So at a minimum I'm looking at three 1-TB drives, correct?  Or do I need 4 drives with one for the OS and the other three for storage?  If I read that other thread correctly, I would have two partitions for the entire set of drives, one partition for the OS and one for storage.


    Correct assuming that you want to end up with 2 tbyte of storage.


    2.  How much space can I add to this array?  Seems like there's a 2 TB limit somewhere, right?  I can't just start adding 1-TB drives to this all willy-nilly, can I?  What about when the 2 TB drives come along; would that be "bad" for the array to put just one in?  Do I need to put them in pairs or something?


    When talking about RAID it is important to distinguish between ARRAYS and VOLUMES.  The array is the entire set of DISKS which taken together creates a storage platform.  A VOLUME is a logical disk created on the ARRAY.  The VOLUME is what gets written on by Windows.


    Let's take an example.


    Assume that you bought THREE terabyte DISKS.  You put them on the controller and assign them all to a single ARRAY.  That ARRAY now contains 3 tbytes of total space, however 1 of those disks will be used for "parity", making 2 terabytes available for "data" storage.  You could then create a 400 GIG VOLUME and a 1.6 TByte VOLUME (totals 2 tbytes).  VOLUMES are what the OS writes to.

    You would end up with two VOLUMES on a single ARRAY.  The ARRAY has three physical DISKS.

    Later on you add two more 1 TByte DISKS to the ARRAY.  The ARRAY itself now has FIVE physical disks and (at this point) TWO volumes - 400 gig and 1.6 tbyte.  You tell the controller to create a new VOLUME of 2 tbytes.  Your system now has 3 volumes - 400g, 1.6 tbyte and 2 tbytes.

    Starting at the top then you have DISKS, which make up an ARRAY, which are used to create VOLUMES which are what Windows actually uses to write data to.

    NOTICE that if you have enough ports you can also do something like a RAID 1 ARRAY using let us say a pair of 500 gig disks and a RAID 1 array using a pair of 1 tbyte drives.  Not terribly efficient but you could do it and might choose to do so for economic reasons or if you just happened to have a pair of (matching) disks already hanging around.

    NOTICE!!! that if you are using a RAID 5 ARRAY (with sufficient ports) when you add TWO more drives to the array you get a TWO TByte volume to add to WHS.  If you are NOT using a raid controller, then you hand the two drives to DE and get ONE TByte of additional storage (assuming duplication).  An ARRAY with RAID 5 always wins long term when it comes to how much storage you get from your disks.  I have said it before but the equations are (N/2 * size) for mirror (duplication) and ((N-1) * Size) for RAID. 
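Those two equations are easy to compare in a few lines of Python (a sketch using exactly the formulas quoted above, with N drives of the same size):

```python
def duplication_capacity(n: int, size_tb: float) -> float:
    """WHS folder duplication stores every file twice: N/2 * size."""
    return n / 2 * size_tb

def raid5_capacity(n: int, size_tb: float) -> float:
    """RAID 5 loses one drive to parity: (N - 1) * size."""
    return (n - 1) * size_tb

# With five 1 TB drives, RAID 5 nets you 4 TB vs 2.5 TB with duplication:
print(duplication_capacity(5, 1.0))
print(raid5_capacity(5, 1.0))
```

The gap widens as you add drives, which is why the post argues that RAID 5 "always wins long term" on storage efficiency.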

    Notice also that RAID software will often allow you to start with a simple mirror (two disks).  If you add a disk you can then tell the controller to "convert" the array to RAID 5.  So... you start with two 1 tbyte drives mirrored for a 1 tbyte RAID 1 array.  If you add a third disk, the controller can convert the ARRAY to RAID 5 and you end up with 2 tbytes of storage.

    Note also that I do NOT necessarily recommend any of the above configurations, they were purely to demonstrate the concepts of DISK, ARRAY and VOLUME. 

    I recommend creating a single maximum size volume (2 tbytes) RAID 5 and letting WHS use that as it will.  Using this configuration you will end up with a 20 gig system volume and 1.98 tbytes of data storage.  The reason that I recommend this is simply that you keep DE out of the mix to the maximum extent possible.  A single huge data drive means

    1) No duplication
    2) No "landing zone" per se.
    3) No shuffling off to another disk.


    Later, if you need more storage, add drives to the array and use them to create the largest volumes that WHS supports (and you can support).  So for example add 2 more tbyte drives to the existing RAID 5 ARRAY, and create a 2 tbyte VOLUME which you hand to DE.  WHEN YOU DO SO, you will end up with the DE issues BUT you still have all the advantages that RAID brings to the table.


     

    3.  If we truly trust that a reinstall will restore all our tombstone buddies, and I'm really not concerned about the throughput performance of the unit, won't this, in the end, be more trouble than it's worth?  Simply get a 250 GB drive for the OS and tombstones and two 1-TB drives for storage and duplication.


    This depends on who you are and how much faith you have in your hard disks.  Hard disks die, a LOT.  It is the single highest failure point in any computer by far.  DE thrashes disks a LOT.  Realistically, thrashing and dying go hand in hand.


     

    4.  For that matter, wouldn't a simple Ghost (i.e. image) of a bare bones install suffice?  I'm thinking WHS, PerfectDisk, and anti-virus and that should be all I'd need, I suppose.  Or, during the reinstall, is there a part where you answer a question "Hey, this is a reinstall, examine the other disks for files."  Or can you do that after installation and simply rebuild the database?  I suppose that's probably answered in one of the white papers.


    Maybe.  Interesting idea.  However given all the threads about trying to do a restore...


     

    -Matt

     

    edit:  OK, after reading this thread I'm concerned.  If my OS drive fails, I thought I could simply drop another one in and reinstall.  Looks like the procedure is more complicated and I would require external storage to complete it correctly???



    Not for me.  I use RAID!
    Monday, February 11, 2008 1:35 PM
  •  Ken Warren wrote:
    First, WHS and RAID is an unsupported scenario. You can do it (it's not even difficult, if you're technically proficient), and there are some advantages at this time to doing so, but you should think through the costs and benefits before you start, because if RAID turns out not to be a good solution for you, you will very rapidly paint yourself into a corner with no way out. Do some reading, the benefits and costs have been laid out repeatedly in the forums.


    Good advice


    1. As for adding space, that card doesn't support Online Capacity Expansion, that I can see. You would have to add an additional 3 disks and create an additional array to add capacity. OCE would let you add a disk to a functioning array, but will usually still require you to create an additional volume to expose the added storage (not a lot of cards that will grow volumes into the new space...).


    It absolutely DOES have online capacity expansion.


    In general, if you follow the KISS principle, WHS "just works."


    LOL.  Yea, it "just works" until the system disk dies and then you are spending the day trying to rebuild your server.  My time is worth enough not to do that.


    If you complicate things, trying to build a server out of super-high performance components (not needed, WHS has enough bottlenecks that you'll never see the benefit), or trying to use WHS as a general purpose "do everything" server, you will possibly have problems.


    Ken, why do you make statements like this?  It is just SOOOOOooooooo not true, either statement.

    Let's start with the "possibly have problems".  WHAT does THAT mean?  I'm sorry but that is a very wimpy statement with no explanation of the why.  You ABSOLUTELY 100% WILL have problems without RAID if your system disk dies!  How's THAT for a non-wimpy statement!

    WHS does have bottlenecks.  Hardware RAID in particular MAY completely eliminate some bottlenecks.  With ANY RAID, 1 or 5, you can completely eliminate the file duplication which is the CAUSE of many of the "bottlenecks" caused by DE.

    I can understand that YOU personally do not want RAID but to generalize your feelings to the world is just silly.  I use RAID with WHS and I am THRILLED with the results.  I use high performance components and I DO see the benefits.  I can stream high volumes and write high volumes at the same time, and that is due entirely to a high performance disk subsystem.

    I have stated the benefits in "the other thread" and yet you continue to make statements like those above without ever addressing HOW those benefits are not true.

    Respond to the following questions please.  Note the correct answer is capitalized in case you have any problems with the questions.  If you choose to select the wrong answer, please explain your logic.

    0) RAID is NOT supported.  TRUE or false.  (I had to throw that one in there because that seems to be the ONLY focus of too many people out there.)
    0a) WHO CARES if you are building your own system.  NOT ME.  YOU.  (both answers are correct apparently ;-)

    With that out of the way, let's get down to the test.

    1) With RAID a single drive failure has NO EFFECT on your use of the system.  TRUE or false?
    2) With a hot spare in place ANY single drive failure is completely and automatically corrected without any action on your part.  TRUE or false.
    3) A Single Large Volume system largely takes DE out of the equation.  TRUE or false?
    4) "Largely taking DE out of the equation" immediately speeds up WHS due to the lack of duplication and all that entails.  TRUE or false.
    5) As you add capacity later RAID 5  outperforms file duplication in every aspect (speed and available storage size).  TRUE or false?
    6) With a hardware RAID controller System Writes are AT LEAST as good as DE alone and streaming READS can be many times faster.  TRUE or false.

    Until you can logically explain WHY the statements above are NOT all true I would respectfully request that you cease and desist with the *** about "KISS" and WHS "just working".  If in fact EITHER one of those were true, you wouldn't have this forum, and you wouldn't have soooooo many people interested in RAID.  WHS in fact "just works" until it doesn't.  Not MY kind of solution!!!

    RAID is a TOOL, nothing more or less.  It has its place and you continue to attempt to say that it is a PROBLEM.  It is NOT a problem it is a SOLUTION to a problem.  If you in fact recognize the PROBLEM, then RAID is a solution to that problem.  And WHS has PROBLEMS, which RAID SOLVES very neatly thank you!

    For anyone technically competent, RAID in fact makes ANY SYSTEM (WHS or otherwise) MUCH more reliable and EASIER to maintain!!!  That has absolutely NOTHING to do with WHS.  It is just a plain simple FACT.  ANY SYSTEM, WHS or otherwise is more reliable and easier to maintain if it uses RAID.  Why do large data centers ALWAYS use RAID?  Because... it makes the system easier to maintain and more reliable.
    Monday, February 11, 2008 2:22 PM
  •  ryan.rogers wrote:

    First off, that card will be way overkill for 1TB drives, unless you plan on separating C: and D: (more on that below).


    Ryan, c'mon.


     

    2. I don't believe you can really add any space to the array without a hitch, as it will have to come in as a new volume, which means to add it to your usable storage pool, it would bring all of the DE issues along with it.


    Not ALL of the DE issues.  You still get rid of file duplication and all the performance hits that entails.  And until you need that additional storage you get the best of both worlds: RAID and no DE.



     

    From my point of view, the whole point of using RAID5 is you need to build your ENTIRE array up front.  If this is limiting, don't use RAID5.  Use DE.  


    From MY perspective there are several reasons to use RAID5, even if you plan to add more volumes later.

    1) More storage per disk added
    2) More streaming read performance per disk added.
    3) Elimination of "file duplication" and all the performance issues related to that.
    4) Single Large Volume operation until you need the additional storage later.
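    John's point 1 (more storage per disk added) can be sketched with some simple arithmetic.  This is an illustrative comparison, not anything from the thread's hardware: usable capacity for single-parity RAID 5 versus WHS-style folder duplication, assuming equal-size drives.

    ```python
    # Hypothetical comparison: usable storage for RAID 5 vs. WHS folder
    # duplication, assuming equal-size drives and single-parity RAID 5.

    def usable_raid5(drives: int, size_tb: float = 1.0) -> float:
        """RAID 5 keeps (N - 1) drives' worth of data; one drive's worth is parity."""
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * size_tb

    def usable_duplication(drives: int, size_tb: float = 1.0) -> float:
        """Folder duplication stores every duplicated file twice, halving usable space."""
        return drives * size_tb / 2

    for n in (3, 4, 8):
        print(f"{n} drives: RAID 5 = {usable_raid5(n)} TB, "
              f"duplication = {usable_duplication(n)} TB")
    ```

    With 8 one-terabyte drives the gap is 7 TB usable versus 3.5 TB, which is the "more storage per disk added" argument in numbers.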

     

    The primary benefit with considering RAID is to get the DE issues out of the way. 


    Nonsense.  The primary benefit of RAID is data integrity.



    If you have both, you are only compilcating things for yourself. 


    Nonsense.  RAID is NOT complicated.  It is an OLD technology (as opposed to WHS) that pretty much "just works".



    You pay a LOT for the RAID overhead; you should get its benefits.


    You will in any event.

     

    If you need > 2TB now or down the road, then RAID5 (by itself) is not for you due to the MBR 2TB limit.  In this case, you either use DE, or you do something a bit crazy...


    Nonsense.  Where in the world do you come up with that statement?


     

    I've overcome this limit by using RAID-1 boot drives containing the C: partition only, and a RAID5 data array (GPT) containing D: only.  This is the only solution that solves all 3 issues:

     

    - resolves lack of redundancy of the system partition and drive

    - avoids various DE "issues" including corruption and performance

    - avoids 2TB limit with a single-volume RAID5 solution

     



    Nonsense.  A single RAID 5 ARRAY answers all of these issues, with just one array.


    This is how an IT pro would build a typical mid-range file server, database server, etc. 


    That may be true, but not for the reasons stated.  In general, big iron machines have separate RAID boot disks for a variety of reasons, starting with the fact that they use FASTER and more EXPENSIVE disks for the system disk, slower disks for bulk "who cares" storage, and faster and more expensive disks for things like SQL Servers, etc.


    All of which is totally irrelevant to WHS.


    But if you need to address all 3 issues stated above, this is how you do it. 


    Nonsense.  You are adding complexity for no valid reason IN THIS ENVIRONMENT.  A single RAID 5 array works just fine thank you.



    In order to do this on WHS, you will need a 3rd party tool such as DriveImage to move D: off of HD0 and onto your GPT prepped RAID array, and then expand C: to fill HD0.


    Precisely, so don't do this.  Simply use RAID 5 or 6 if you need more space.  One ARRAY!  This is WHS not an ISP server or a SQL Server.


     

    You also can use the same RAID controller for both arrays.  Let's say you buy that 3805 8-port card.  So that uses 2 ports for the first array, and 6 for the data array.  Only drives in the same array have to match!  So, for example, you can have 80GB RAID1 C: and 6TB RAID5 D: (5TB usable).


    True but why would you do that?  A single RAID 5 array with the same size volumes makes things simpler.


    Also, another thing to consider is if your motherboard has RAID built in, you can use the on-board RAID1 for the boot drives.  This would allow you to go up to a whopping 8TB, or, purchase only a 4-port card assuming 1TB data drives,


    True but you immediately end up with DE in the mix.  If you start with RAID 5 and a single large volume, then until such time as you want more storage you get the best of both worlds.  If you use the "separate" RAID1 boot drive you immediately bring DE into the picture doing the landing zone stuff, EVEN if you only have a single "data drive".



    WHY would you do that?


     

    So as you can see, it all boils down to the requirements questions I asked above.  How much storage, and how soon, and what budget?  If you are only going to need around 2TB ish, then you can live with the MBR limit and just do RAID-5. 


    True.  And if you want more than 2 terabytes, with the rest added later, then RAID 5 works a treat (as our English friends would say)!


     

    If you can live with 2TB, you have to consider the following: power consumption vs. read speed vs. drive bays available vs. do I want a hot spare.  The analysis of these four questions will dictate whether you need a 4-port card with (4) 1TB drives (2TB usable, hot spare), a 4-port card with (4) 750GB drives (2.25TB usable, 250GB wasted, no hot spare), an 8-port card with (8) 320GB drives (1.92TB usable, hot spare), (6) 500GB drives (2TB usable, hot spare), or (5) 500GB drives (2TB usable, no hot spare).
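    The trade-offs above can be reproduced with a small sketch.  This is not anything from the posters' actual setups; it just recomputes each configuration's usable RAID 5 capacity under the 2 TB MBR cap, with an optional hot spare, using the drive counts and sizes from the post.

    ```python
    # Sketch: usable RAID 5 capacity per configuration, capped at the 2 TB
    # MBR limit, minus one drive for parity and (optionally) one hot spare.

    def raid5_usable(drives, size_tb, hot_spare=False, mbr_cap_tb=2.0):
        data_drives = drives - 1 - (1 if hot_spare else 0)  # minus parity, minus spare
        raw = data_drives * size_tb
        return min(raw, mbr_cap_tb), max(raw - mbr_cap_tb, 0.0)  # (usable, wasted)

    configs = [
        (4, 1.0, True),    # (4) 1 TB drives, hot spare
        (4, 0.75, False),  # (4) 750 GB drives, no spare
        (8, 0.32, True),   # (8) 320 GB drives, hot spare
        (6, 0.5, True),    # (6) 500 GB drives, hot spare
        (5, 0.5, False),   # (5) 500 GB drives, no spare
    ]
    for drives, size, spare in configs:
        usable, wasted = raid5_usable(drives, size, spare)
        print(f"{drives} x {size} TB, spare={spare}: "
              f"{usable:.2f} TB usable, {wasted:.2f} TB wasted")
    ```

    Running it reproduces the post's numbers, including the 0.25 TB that a (4) 750GB build wastes above the MBR cap.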

     

    As you can see, there are a LOT of ways you can assemble ~2TB, even for about the same cost.  Depending on your requirements, you may use a 4-port or 8-port card.  Each has its advantages and disadvantages.  You simply have to prioritize the 4 issues above to see which is most important.  Logistics play a large part here.  For example, if the case you are dead-set on only has room for 4 drives, that helps eliminate all but the first two choices.  If on the other hand you have a bigger case and don't mind the noise or burning the watts AND read speed is super important, then the 320's would be a fine approach.


    Yep, yep and yep.


    But, either way, if you want > 2TB storage, then you are looking at your RAID array on D: being physically separate from C:. 


    Nonsense.  I am currently running a large Single Volume RAID array and can add more storage any time I want.  It does bring DE into play, but so does the "solution" Ryan suggests, and his solution brings DE into the mix from the get-go.  A single-array RAID 5 solution truly provides the best of both worlds.  RAID 5 with a hot spare makes the whole thing that added bit more reliable.



    This is advanced, but doable.  I've already done it and have a working WHS server with 80GB C: and 1TB D: (GPT).  Getting it working with D: on an array is a simple matter of prepping the volume differently in one step.

     

    Ryan

     



    Doable yes but totally unnecessary, and as Ryan indicates ADDS (unnecessary) complexity.  A single RAID array with all your volumes on that array works very well.  Why Ryan thinks otherwise is truly a mystery.

    To be completely honest, there is one SINGLE advantage to splitting the system drive out onto a separate array: if the RAID controller dies, the system can still boot.  However you still don't have your data, so what is the point (in this WHS environment)?  It absolutely DOES make a difference if the server has multiple RAID cards serving different functions, as you might have in a company environment.  In that case, if a RAID card dies, only that piece of server functionality goes south; in the case of WHS, though, the functionality is storing your precious files and backups.  If the RAID controller dies, you need to replace the controller. 

    However electronics are waaaaaay reliable.  I have had computers run for 6 or 8 years, until I had to replace them because they were just too slow to be useful.  Disks die, the machine lives on.

    Look, I am not saying that you do not need to weigh your options, but I am saying that much of what is being spread around in these RAID threads would be better used for fertilizer in your garden than as advice for building a nice, fast, reliable, functional WHS. 

    I have a nice, fast, reliable WHS.  I use RAID 5 (6 actually but the difference doesn't matter in this context).  I use a single array.  It works, and it works well, it is dead simple to set up and all the rest of the "stuff" belongs in the garden.

    Just my humble opinion of course.
    Monday, February 11, 2008 4:17 PM
  • I'm not going to take your little 6 part quiz, John. We both know, and agree on, the answers that you're looking for, and I think we both know that I could make a conditional "false" a reasonable reply to half of them.

    As for question 0a, Microsoft cares. "RAID is not a supported scenario." Microsoft doesn't want you to put a RAID array in your server. They specifically forbid OEMs to do so, as a matter of fact.

    Now, regarding OCE for the card the OP linked. Mea culpa, I only read the product description, which didn't mention it. I'll admit that it seemed a little surprising even to me, considering the price...

    Monday, February 11, 2008 6:45 PM
    Moderator
  •  Ken Warren wrote:
    I'm not going to take your little 6 part quiz, John. We both know, and agree on, the answers that you're looking for, and I think we both know that I could make a conditional "false" a reasonable reply to half of them.

    Ken,

    I don't think that at all.  I think it is VERY important to distinguish fact from opinion, and that is exactly why I state things the way I do.

    Notice that I answered the posts above point for point with FACTS, not opinions.  You consistently refuse to do so.

    It is my opinion that the world is flat, and that the US never sent any men to the moon.  That is my opinion (not) but I cannot claim that to be FACT. 

    "In general, if you follow the KISS principle, WHS "just works."  YOUR OPINION!

    Like bloody hell it does!  Look at all of the TENS OF THOUSANDS of posts in these forums!!!!!  Obviously an opinion, which you are certainly entitled to, but certainly NOT a "FACT".  WHS has very real problems, and RAID IN FACT addresses many of these issues.  Not my OPINION, FACT.  While I have stated how these are facts, you "take the high road" and claim that somehow you could prove otherwise if you so desired, but without ever doing so (and you make the rather dubious statement that I would agree with you if you did). 

    "
    If you complicate things, trying to build a server out of super-high performance components (not needed, WHS has enough bottlenecks that you'll never see the benefit)".  YOUR OPINION!

    I can testify that this is most certainly not MY opinion or experience.  The FACT is that I have been using RAID since almost DAY 1 with high end components and it is a marvelous system.


    As for question 0a, Microsoft cares. "RAID is not a supported scenario." Microsoft doesn't want you to put a RAID array in your server. They specifically forbid OEMs to do so, as a matter of fact.


    And I care because... 

    I am not asking Microsoft to support my system.

    Let me tell you something Ken.  In dealing with MS since the late 80s I have NEVER, even ONCE, EVER had any useful support from Microsoft.  Not once.  EVER.  I have in fact called them.  I have in fact PAID for support.  And I have been read to from scripts and I have been shuffled and I have been... 

    What I have NEVER, EVER, (even once) had was a resolution to my problem come from Microsoft "support".  Which is not to say that they can't support the easy stuff, IF it happens to fall in their scripts.

    I had a situation where Microsoft Access was abruptly closing, no warning, no nothing.  I offered to set up my development system to allow remote desktop, etc., and let them come in and troubleshoot LIVE on my development machine.  I was told to "ship them the data and the FE (and the libs)" and they would look at it.  Data that contained CLIENT personal medical records and personal information.  Uh, yea!  I spent AGES (tens of hours) doing their job, locating the error to a specific line of code.  I spent tens of hours futzing with the data to obscure it so that it was no longer personally identifiable.  I shipped them all the stuff.  I got a very polite "we cannot recreate the problem".

    DO I SOUND IMPRESSED WITH MICROSOFT SUPPORT? (Hint, I do not!)

    So... now you are trying to tell me (and all the dear folks reading these forums) that I should NOT do what I have PROOF will make my system more stable and help me prevent wasting MY TIME AND MONEY because "Microsoft won't support me" if I do these things?

    Microsoft does NOT pay me for my time and expenses cleaning up their messes.  I know because I clean up their messes on a DAILY basis!  I am a developer, a professional consultant who has to deal with bugs in their software that have been there for a decade while they push version after version out to make more money instead of fixing the very real bugs that I CONTINUE to deal with a decade later.

    Excuse me Ken, do NOT talk to me about "Microsoft won't support me".  I support myself, and others who Microsoft won't (or can't) support.

    NOW, let's stick to the facts, OK?  I have already conceded long ago that Microsoft won't support RAID, and I for one simply do not care.  It is not your job to MAKE me (or anyone else in here) care.  If I don't need support for whatever reason, then it is perfectly LEGAL for me to use RAID if I so desire.  I simply won't ask to be "supported".

    And finally, I absolutely LOVE WHS.  Admitting that it has real problems does not make anyone a "bad person".  WHS has problems.  I am about solutions, NOT OPINIONS, and NOT about the party line.  If you would come in here and preface everything with "in my opinion" I would have less of a problem with this stuff but you come in as an "expert" and then make statements that are simply not true.  People (I myself included) view you as an expert.  You obviously know a LOT about this stuff.  Which hands you an obligation to be very careful to distinguish between YOUR OPINION and FACTS.

    It is MY opinion that the world is flat, and that WHS is a figment of the government's imagination.  The FACTS do not support those opinions, and IMHO the FACTS do not support your opinions (on this stuff) either.  So state your opinions if you must, but clearly label it as your opinion and expect me to post the FACTS as I see them.  Any time you care to dispute FACTS, please jump right in.  You have already straightened me out on a couple of issues (which I appreciate BTW), just not RAID related facts.
    Monday, February 11, 2008 7:59 PM
  • John, in the future I recommend not being quite so liberal with the editing if you are going to reply so tersely.  Almost all of my statements are presented out of context.  When I read your post I feel like I am watching Bill O'Reilly.

     

    Now, I'm not going to take the time to reply to each of those (now), but I will perform a bit of active listening as I believe there has been a communication breakdown here.


    John, simple question:

     

    Is it acceptable to partition an MBR-formatted RAID array into multiple 2TB max partitions and have those partitions managed in the DE pool?


    Clearly based on your post you think it is, but I just wanted to confirm.

     

    Thanks,

    Ryan

    Tuesday, February 12, 2008 12:55 AM
  •  ryan.rogers wrote:

    John, in the future I recommend not being quite so liberal with the editing if you are going to reply so tersely.  Almost all of my statements are presented out of context.  When I read your post I feel like I am watching Bill O'Reilly.



    Ryan,

    I did that quite intentionally.   My intention in this "debate" is to separate fact from opinion.  I have been watching a whole bunch of OPINION floated around as if it were fact.

    John, simple question:

     

    Is it acceptable to partition an MBR-formatted RAID array into multiple 2TB max partitions and have those partitions managed in the DE pool?


    Clearly based on your post you think it is, but I just wanted to confirm.

     

    Thanks,

    Ryan



    First understand that the RAID ARRAY doesn't have a clue what MBR even is.  MBR applies to the logical disk that the array presents (and the VOLUMES carved from it), not to the array itself.  Any given volume may or may not be an MBR volume, as YOU decide.

    With that out of the way, my assumption is that you are really asking "is it OK to have multiple VOLUMES on a RAID ARRAY handled by DE".  My answer is "of course it is OK".  Always keep in mind that the ARRAY is a support mechanism for handling redundancy of data so that a drive failure does not cause data loss.  Nothing more, nothing less.
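    The "2 TB limit" both posters keep circling is a property of MBR partitioning, not of RAID: MBR records partition sizes as 32-bit sector counts, and with the 512-byte sectors of that era this caps a partition at 2 TiB.  A quick arithmetic check (nothing here is from the thread itself):

    ```python
    # MBR stores partition start and length as 32-bit LBA sector counts.
    # With 512-byte sectors, the largest addressable partition is:
    max_bytes = (2 ** 32) * 512
    print(max_bytes)             # 2199023255552 bytes
    print(max_bytes / 2 ** 40)   # exactly 2.0 TiB (binary terabytes)
    print(max_bytes / 10 ** 12)  # ~2.2 TB in the decimal units drives are sold in
    ```

    GPT, by contrast, uses 64-bit sector addresses, which is why the GPT-based D: approach discussed above sidesteps the limit.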

    I need to stress again (as I did in my original response directly to the person posting the question) that we are talking about a technology here.

    A RAID array can be thought of as nothing more than a HUGE hard disk.  You take 2 or more literal PHYSICAL hard disks and turn them into a single "ARRAY".  That ARRAY is a SINGLE entity.  What would you do if I handed you a 30 TERABYTE hard disk that just took care of all aspects of data integrity?  Whatever you would do with that is what you would do with a RAID 5 or 6 array, because that is what a RAID 5 or 6 array gives you.  You have taken N physical surfaces and created ONE LOGICAL surface called an ARRAY. 

    Now you take the ARRAY and create one or more LOGICAL surfaces called VOLUMES.  VOLUMES are what the OS writes data onto and reads data from.  The OS does not know whether the VOLUME is contained on an ARRAY or on a physical hard drive.

    The nice thing about this HUGE hard disk (ARRAY) is that it automatically handles the issue of dead disk drives for you.  The ARRAY doesn't die when a disk dies.  VOLUMES don't die when a disk dies.

    Imagine for a moment a single hard disk.  I am sure that you know that MOST physical disks of any size have multiple platters and a head on each side of that platter.  Now imagine for a moment that a single head could "die", but that the DISK DRIVE could simply grab an unused platter side and head and use that to "replace" the dead head / platter side.  Furthermore imagine that this Wunderdisk had written your data around on the platters such that if any platter / disk died your data was still all there.

    Think of a RAID ARRAY in those terms and suddenly your perspective changes.  The ARRAY is a single LOGICAL "drive", but much bigger than any individual drive that you own; in fact (more or less) the simple sum of all the physical drives that you give it to manage.  Any single drive can die without affecting the ARRAY or the VOLUME(s) on the ARRAY.

    The advantage that we have with the ARRAY is that WE DECIDE how to carve up the "huge disk" that it presents us.  The ARRAY is RAID 5, but it can contain 1 VOLUME or 30 VOLUMES; it can contain 3 physical disks or 24 physical disks.  I can have 1 disk's worth of space for "parity" data and the rest for data, or I can have 1 disk for a hot spare and 1 for parity and the rest for data, but in the end the DETAILS can be completely hidden from me as the administrator of the ARRAY.  I throw a bunch of disks into the array, make a decision about the configuration and forget it.  From that point on it acts like a SINGLE physical DISK to the system.

    You are quite familiar with taking a terabyte drive (a SINGLE DISK) and carving out partitions right?  You might build a 100 gig system partition (which is a VOLUME BTW).  You decide that is enough to hold Windows, Office, Firefox, music players and all the other "stuff" that makes up a "system".  You might create a 300 GIG Data partition for storing all your music files.  You might create a 600 GIG volume to hold Video.  But in the end, while you have THREE VOLUMES, there is only ONE physical disk.  The problem here is that if that ONE physical disk dies, ALL of your storage dies.  I am sure we all agree that is bad.

    So... We have a RAID controller.  It can handle 8 physical drives.  We throw FOUR drives on it.  We make a decision to use three of them for data and one for parity.  We will worry about a hot spare later (or never?).  We now start carving that ARRAY up however we want, creating VOLUMES of the sizes that YOU decide are appropriate for whatever YOU need to do with that amount of storage.

    Of course there are decisions to make at the beginning.

    Let's take TWO WHS scenarios.

    SCENARIO ONE:

    I will start with (3) terabyte disks and build an array.  I never expect to have more than two terabytes, but if it happens I will simply add more DISKS to the ARRAY and create a new VOLUME.  So... I create a single THREE terabyte ARRAY and tell the controller to make it RAID 5.  The controller takes one drive's worth of space for parity and presents me 2 terabytes of blank space (no VOLUME yet).  Because I think that I will PROBABLY never need more, or at the very least it will be a long time before I do, I create a Single Large Volume using ALL of the 2 terabytes and hand that to WHS.  WHS immediately resizes the volume, carves out 20 gigs for itself and makes the rest the SINGLE data drive.  All is happy in Whoville.  Max performance, no DE, etc.

    NOTICE that I have an "overhead" of 1 terabyte out of 3 terabytes, or 33%.

    Now I need more space two years down the road.  I could make more decisions (and I might, if 5 terabyte drives have come along) but assuming that I am still using 1 terabyte drives... I grab two more and add them to the ARRAY.  The ARRAY now has two terabytes of EMPTY space.  NOTICE... that the ARRAY simply added the two new drives to the "pool" and they can automatically be RAID 5 if I want them to be.  I do.  I now create a brand new 2 terabyte VOLUME and hand it to DE. 

    NOTICE that I now have an overhead of 1 terabyte (parity storage) out of FIVE terabytes, or 20% overhead.  NOTICE that with RAID 5 and a SINGLE ARRAY, I always have a single drive used for overhead.

    For the first time, DE now has to do its thing, and does so.  For TWO YEARS I had nirvana in Whoville, but now I have DE.  Hopefully during the intervening years the WHS team has fixed many of the bugs in DE and I have a much better DE experience, but in ANY EVENT I will have no WORSE an experience than if I just handed DE a couple of disks to manage.  PLUS I have RAID 5 and all that means (data integrity, low overhead, etc.).
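    Scenario one's overhead math reduces to a one-liner: with single-parity RAID 5 on one array, the parity cost stays at one drive no matter how many drives you add, so the overhead fraction falls as the array grows.  A sketch (drive counts from the scenario above):

    ```python
    # One drive's worth of parity out of N drives: the overhead fraction
    # shrinks as the single RAID 5 array grows.

    def parity_overhead(drives: int) -> float:
        return 1 / drives

    print(f"3 x 1 TB: {parity_overhead(3):.0%} overhead")  # day one
    print(f"5 x 1 TB: {parity_overhead(5):.0%} overhead")  # after adding two drives
    ```

    This is the 33%-to-20% drop the post walks through, and it is why keeping everything on one array beats paying a parity (or mirror) drive per separate array.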

    SCENARIO TWO. 

    I am immediately going for a 20 terabyte solution, right now, out of the box.  This is a very different scenario and I MAY make some different choices.  I have made a decision UP FRONT that I am going to put up with the DE "stuff".  It matters not whether I use RAID or not, DE is going to do its DE thing.  That certainly in no way affects my decision to use RAID however.  A raid array is just a tool to handle the data integrity issue for me.  Nothing more, nothing less.  So I still use RAID.

    I grab (22) 1 terabyte disks and a largish 24-port controller and I go to town.  I build an ARRAY with 20 disks for data, one for parity and one for a hot spare.  I now have a single HUGE 20 terabyte "hard disk" available to me.  WHS DICTATES to me that it will only accept a maximum size of 2 terabytes in any single volume.  Nothing I can do about that limitation, and it is in no way different from just handing DE a bunch of drives.  No problem.

    Where the decision comes in, however, is in whether to make the "landing zone" a full 2 terabytes or not.  In this case I probably would not.  I would probably make the "landing zone / system disk" an x-hundred gig volume.  I make a decision based on the types of files I will be putting on the drive, how many at a time, and their size, and then choose... 400 gigs (pulled out of thin air for this example).  WHS takes over, carves 20 gigs out for the system disk and creates a 380 gig "landing zone".  I start creating 2 terabyte volumes and handing them to DE.  I would end up with:

    (1) 20 gig volume for the system disk.
    (1) 380 gig volume for the landing zone.
    (9) 2 terabyte volumes.
    (1) 1.6 terabyte volume which is "room left over" after creating as many 2 terabyte volumes as I can.

    NOTICE though that ALL of these VOLUMES reside on a single RAID 5 array.  WHY do I make it a single array?  To "share" the one single "parity" disk with the entire array, to MAXIMIZE the amount of disk that is used for STORAGE as opposed to OVERHEAD.  In fact, if it were me, I would go RAID 6 for technical reasons.
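    The carve-up above can be sketched as a small helper.  The sizes (20 gig system disk, 380 gig landing zone, 2 terabyte volume cap) come from the scenario; the function itself is illustrative, not any real WHS tooling.

    ```python
    # Sketch: split a big array into a system volume, a landing zone, and
    # as many 2 TB data volumes as fit, plus one leftover volume (sizes in GB).

    def carve(array_gb, system_gb=20, landing_gb=380, max_vol_gb=2000):
        remaining = array_gb - system_gb - landing_gb
        volumes = [system_gb, landing_gb]
        volumes += [max_vol_gb] * (remaining // max_vol_gb)
        if remaining % max_vol_gb:
            volumes.append(remaining % max_vol_gb)  # the "room left over" volume
        return volumes

    layout = carve(20_000)  # the 20 TB array from scenario two
    print(layout)  # system + landing zone, nine 2 TB volumes, 1600 GB remainder
    ```

    The result matches the post's list: one 20 gig system volume, one 380 gig landing zone, nine 2 terabyte volumes, and a 1.6 terabyte remainder.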

    In any case, ALL of the volumes are RAID VOLUMES.  ALL the volumes are "redundancy protected".  A drive failure simply has NO effect on the operation of the server.  Should ANY physical disk fail, the "hot spare" raid drive is automatically "swapped in" by the controller, the RAID ARRAY is rebuilt to give me back my "redundancy protection" and life is sweet.  I don't even have to be home while all this happens.  I could be in the Bahamas sitting on the beach, and my RAID controller is busy rebuilding my array for me.

    When I get back from the Bahamas, the controller notifies me that a disk drive died and "oh by the way" I need to order a new disk to replace my hot spare which was pulled into the ARRAY (automatically) to replace the dead disk.  I pull the dead disk out, throw it in the trash, put the new disk in the position of the removed drive, and tell the array controller to use it as the hot spare.

    WAAAAAAAAAaaaaaaaayyyyy easier than jumping on a plane to rush home to rebuild my computer, spending the day trying to rebuild my system disk should that happen to be the disk that failed. 

    NOTICE that (at least with a hardware controller) streaming read speed is N * individual drive speed (or close to it).  If I have 3 drives in the RAID, and each individual drive is capable of STREAMING data off the drive at 30 megs / second, then streaming read is 90 megs / second.  If I have 8 drives, then it is 8 * 30 megs / second (240 megs / second).  So another benefit of using a large ARRAY is that the streaming read speed rises with every drive placed in the array.  OF COURSE a bottleneck will pop up somewhere.  Perhaps the speed of the interface between the RAID controller and the CPU, perhaps the gigabit NIC used to ship the data out to other computers.  However, in the end the increase is real and can have significant impacts on total performance.
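    The streaming-read claim, with its bottleneck caveat, can be sketched in a few lines.  All figures here are illustrative (the 30 MB/s per drive comes from the post; the bus and NIC caps are assumed round numbers, not measurements): raw array read speed scales with the drive count, but the realized rate is whatever the slowest link allows.

    ```python
    # Sketch: streaming read scales with drive count until a bottleneck
    # (controller bus, gigabit NIC, ...) caps it.  Rates in MB/s.

    def streaming_read(drives, per_drive_mb=30, bus_mb=1000, nic_mb=125):
        raw = drives * per_drive_mb
        return min(raw, bus_mb, nic_mb)  # the slowest link in the chain wins

    print(streaming_read(3))  # 3 drives: 90 MB/s, below every cap
    print(streaming_read(8))  # 8 drives: 240 MB/s raw, capped by a ~125 MB/s gigabit NIC
    ```

    So the per-drive scaling is real, but past a handful of drives the gigabit NIC, not the array, sets the ceiling for serving files to other machines.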

    And of course, all of this is for instructional purposes only.  Exactly how you would build your specific RAID system does absolutely depend on your finances, your motherboard, your intentions, and so forth.  All of which in no way negates the POTENTIAL to have RAID solve the data integrity issue for you.

    RAID is simply a tool, which turns a bunch of individual hard drives into a SINGLE ARRAY and which then manages data redundancy for you such that a physical disk failure does not ruin your Bahama vacation.  Nothing more, but isn't that enough?
    Tuesday, February 12, 2008 3:27 AM
    Woah woah woah...  I didn't want folks to get so excited here.  Suffice to say there is some outstanding advice and information in this thread.  I went ahead and ordered the WHS demo DVD last night.  I will have a spare computer in a few days where I can muck around with the software.  I think at this point in time I need to throw some hardware together and start to experiment with it.  I have 4 old Seagate Ultra2 SCSI drives and an Adaptec card (19160).  I don't believe RAID is built into that card, and if RAID isn't built into WHS I'm probably SOL.  Lots of TLAs there, huh?

     

    Most likely what I'm going to do is set up a server with three drives, set up two 1-TB drives, dump everything to it, and then wipe the OS drive on purpose.  Then reinstall and see how well the function performs.  Should be interesting.

     

    Anyway, once I've experimented a bit and read all the white papers thoroughly I'll come back.  The thread is yours, gentlemen.  Thanks again for taking time out to answer my questions.  Be nice to each other!! 

     

    Oh, to answer Ryan's questions, right now I only need 1 TB of storage with redundancy.  As soon as I start putting all the DV on my computer I'll need more space, probably approaching and passing 1 TB by just a tad, so if I want two copies of all the video (and I do!) I'll need over 2 TB of space.  After that, I'll start ripping DVDs and adding my music collection; I'll finally have space for lossless compression!  Big Smile

     

     

     

    Matt

    Tuesday, February 12, 2008 3:27 AM
    LOL.  I am just a newbie to WHS who happens to love the product and is less than enamored with the shortcomings.  I understand and have used RAID for years, and as such my OPINION is that for technical users (us GEEKS) RAID on WHS is a no-brainer.

    I must admit I am tired of the same old "it is unsupported" stuff that I see over and over (and over and over and OVER AND OVER AND OVER AND OVER) when some poor guy (or gal) asks about RAID.  I get downright irritated when I see hundreds of "tell me about raid" posts and every single one of them starts with "it's unsupported" and "it's a bad idea".  Exactly how many times does THAT have to be said?  The question is never "is RAID supported" or "does Microsoft want me doing this", but the answer is always the same.

    I for one will provide real live information about how it works, what is involved, choices, considerations, whatever might need to be discussed.  If you have a specific question about RAID, ask me and I will try to give you a factual answer to your question.  OTOH if you want to know whether it is unsupported or whether MS wants you doing this... well... there are about a thousand answers to THAT question already out there on this forum.

    As for "it is unsupported"... YEA, I KNOW ALREADY.
     
    And yea, I know that MS does not approve too.


    I assume that you know this too?
    Tuesday, February 12, 2008 3:58 AM
  • John, at this point, approximately ¾ of this thread was written by you. Of that, perhaps 5% was an actual helpful answer to the original poster. The rest was a series of tirades and amazing displays of knowledge about RAID, none of which are helpful.

    Since the purpose of these forums is to be helpful to those who are having issues with WHS, and to answer their questions, and since the OP pretty obviously doesn't find most of this thread useful in any way (something I would agree with, though for different reasons I suspect), I have to ask you to please tone it down a very great deal.

    If you can't post in answer to a question about RAID without providing a 1,000 word lecture on the proper way to select hardware and configure an array (none of which would actually answer the OP's questions) and how certain people here are spreading lies and disinformation about RAID at every opportunity (which is frankly insulting and untrue, and thus against Forum Policy) then I must ask you to refrain from posting about RAID at all.
    Tuesday, February 12, 2008 4:54 AM
    Moderator
  • BTW John, you posted that response above mine just as I hit the submit button.  Woah woah woah should go up one in the order.  Smile

     

    Don't know why (because you're saying the same thing over and over again, I know) but that second-to-last post made the little light bulb go off in my head.  I feel that I understand completely now.  I think what was missing is that in the past I have been using very low end (Adaptec 19160, for example) SCSI controllers and software RAID and didn't understand the ability to create multiple volumes on one array.  It's a fantastic concept and I can't see doing it otherwise.  In doing so, while I will get no support from Microsoft, I will optimize the money I spend on the drives and I will (most likely) never have to really worry about rebuilding the landing zone or reinstalling the OS.

     

    Someday I'll be replacing two 1-TB drives with one 2-TB drive.  Wonder when we'll be seeing those?

     

    I think any computer expert will tell you that there's no way to keep things totally failsafe, unless you go to redundant controllers I suppose.  DV tapes are cheap, so I'll keep those in a dark, cool, dry place.  But it'll be nice to have them all online so I can make awesome family DVDs.  Wink

     

    Thanks again for all the help.  I'm quite excited about WHS.  Can't wait to get my hands on the demo copy.

     

    -Matt

    Tuesday, February 12, 2008 5:02 AM
  •  Ken Warren wrote:
    John, at this point, approximately ¾ of this thread was written by you. Of that, perhaps 5% was an actual helpful answer to the original poster. The rest was a series of tirades and amazing displays of knowledge about RAID, none of which are helpful.

    Since the purpose of these forums is to be helpful to those who are having issues with WHS, and to answer their questions, and since the OP pretty obviously doesn't find most of this thread useful in any way (something I would agree with, though for different reasons I suspect), I have to ask you to please tone it down a very great deal.

    If you can't post in answer to a question about RAID without providing a 1,000 word lecture on the proper way to select hardware and configure an array (none of which would actually answer the OP's questions) and how certain people here are spreading lies and disinformation about RAID at every opportunity (which is frankly insulting and untrue, and thus against Forum Policy) then I must ask you to refrain from posting about RAID at all.


    LOL.  Ken, get a grip.  I never said lies and disinformation; I said opinion presented as fact (and disinformation).  Is that insulting?  A subtle difference, I understand.

    In one post I also explained, point by point, where expressed opinion was nonsense.  If you want to call those tirades be my guest.  I don't see you pointing out how those opinions are nonsense.  The point of this forum is PRECISELY to present INFORMATION, not opinion dressed up as information.

    And yes, I said that SOME opinion being presented in RAID threads was better used in a garden as fertilizer.  That is MY opinion of course but unfortunately also true.

    The OP in that 1000 word case was Ryan who asked me a specific question.  From his question, it appeared to me that Ryan had some misunderstandings about RAID and how it really worked.  He  appeared to be confusing Arrays with Volumes and asking a question that just doesn't make any sense if you do understand RAID.  Thus THAT specific response was to Ryan.  Given his question I felt it entirely appropriate to discuss RAID concepts in depth and provide examples.  The examples were WHS examples.  He asked about RAID with WHS.

    And apparently the thousand word post caused a light bulb to go off in Matt's head so perhaps it was worthwhile for him as well. 

    If you want to call my telling you to knock it off about the "Microsoft support" (in a response to ME) a tirade, well, you have a point.  I am absolutely tired of anything to do with "MS support".  While unfortunately every word of that was true, perhaps an MS forum was not a good place to point out how useless MS Support can be.

    Now, if you as a moderator want to state flat out that no one is to discuss RAID because it is forbidden to do so in this forum, then of course I will respect that.  I would however ask that you as a moderator simply state in every one of your responses to RAID threads that discussing RAID is forbidden.  It certainly seems that if RAID is as illegal as you seem to believe, that would be a policy that would already be in place.  "It is illegal to use RAID in a WHS system and as such discussions of RAID are not allowed in this forum" would pretty much cover it, and shut me right up BTW.

    I have not seen that stated however (and I read the guidelines for posting - before I ever posted in here).  If it is NOT forbidden, then I will continue to present useful information to counter any possibly misleading opinions on the subject.

    PS I would like to point out that I am in no way an expert in RAID, and therefore it would be very difficult for me to present "amazing displays of knowledge about RAID".  I am a consultant and a database analyst / programmer.  I do use RAID and have set up several RAID systems in my SOHO, both hardware and software based.  The only reason that I reply at all in these posts is that no one else offers anything remotely approaching useful information.
    Tuesday, February 12, 2008 1:35 PM
  •  Matt Greer wrote:

    BTW John, you posted that response above mine just as I hit the submit button.  Woah woah woah should go up one in the order. 

     

    Don't know why (because you're saying the same thing over and over again, I know) but that second-to-last post made the little light bulb go off in my head.  I feel that I understand completely now.  I think what was missing is that in the past I have been using very low end (Adaptec 19160, for example) SCSI controllers and software RAID and didn't understand the ability to create multiple volumes on one array.  It's a fantastic concept and I can't see doing it otherwise.  In doing so, while I will get no support from Microsoft, I will optimize the money I spend on the drives and I will (most likely) never have to really worry about rebuilding the landing zone or reinstalling the OS.

     

    Someday I'll be replacing two 1-TB drives with one 2-TB drive.  Wonder when we'll be seeing those?

     

    I think any computer expert will tell you that there's no way to keep things totally failsafe, unless you go to redundant controllers I suppose.  DV tapes are cheap, so I'll keep those in a dark, cool, dry place.  But it'll be nice to have them all online so I can make awesome family DVDs. 

     

    Thanks again for all the help.  I'm quite excited about WHS.  Can't wait to get my hands on the demo copy.

     

    -Matt



    Matt,

    Glad I could help.  WHS is a fantastic product.  I use it and love it, and RAID, while specifically disapproved of, provides the stable foundation that makes the concept work.  DE does have problems, but the design team is working on them and will soon issue a patch for the corruption issues. 

    I think one of the things that is sometimes missed in all these discussions is that DE means DRIVE EXTENDER, not MIRROR RAID.  As a concept, DRIVE EXTENDER is pretty cool, and it in fact performs that function swimmingly.  As a RAID 1 implementation it pretty much... uhh... well, I will just hold my tongue on that one.  Let's just say that there are entire companies that do nothing but RAID, and there is a reason for that.

    Anyway, good luck with your system and if you decide that RAID can help you build a stable WHS system, do not be afraid to try it.
    Tuesday, February 12, 2008 1:50 PM
  • John,

     

    I do have one more question pertaining to RAID controllers (and not using RAID on WHS, so hopefully no one will get upset!)  What features should I look for to ensure that I can have multiple volumes on one array?  (I hope I'm using the correct terminology.)  I was looking at a PDF from Adaptec and they're talking about having multiple arrays per controller, not just multiple volumes per array.  In the 24 TB setup you previously described (btw, that's awesome, lol), that was one array, but we would set up multiple volumes using the RAID controller software/drivers before we installed WHS.  When would someone use multiple arrays?

     

    Just an interesting tidbit, I found a PDF discussing the 2 TB limit.  I haven't read through the whole thing yet but like anything geeky, I like it.  Smile

     

    http://www.adaptec.com/NR/rdonlyres/0877F6FF-458A-4CDE-BA50-179BB693A1FE/0/3759_2TB_WP.pdf

    Tuesday, February 12, 2008 4:23 PM
  •  Matt Greer wrote:

    John,

     

    I do have one more question pertaining to RAID controllers (and not using RAID on WHS, so hopefully no one will get upset!) 


    ROTFL.  But if you ask about something not WHS related then I will get my hand slapped for answering here.



    What features should I look for to ensure that I can have multiple volumes on one array?  (I hope I'm using the correct terminology.) 


    I can't really answer that question.  I have only ever actually used the Areca 1220 controller and the NVidia motherboard arrays.  I used the motherboard solution for RAID 1 on the system drives and was quite pleased with it; however, I also used the motherboard RAID 5 solution and frankly it was useless, with simply horrid transfer rates, especially write rates.


    However I do have a Promise Supertrak ex8350 in-house and am waiting on the UPS truck as we speak to deliver (3) 500 gig drives to put on it.  When I do I will be setting that up in my SQL Server system to be a near-line backup for some huge databases that I need to have a backup of.  Once I do that then I will have another RAID solution that I can speak about.  (I am sure Ken is NOT looking forward to that ;-)


    I suspect that multiple VOLUMES on an ARRAY are just standard operating procedure.  It is not "shown off" as a special feature of the ARECA RAID controllers, as if only they could provide such a capability.  My ARECA controller software very clearly breaks out DISK functions, ARRAY functions and VOLUME functions, in fact they are completely different menu items in the software.  You create an ARRAY, assign DISKS to the ARRAY, then create VOLUMES on the ARRAY.  All of this is done inside of the controller software. 


    BTW when you are done you have "blank disks" to the OS which of course match the size of the VOLUMES that you created in the RAID controller software.  You then have to go create partitions and format those partitions.  This demonstrates again that to the OS a RAID VOLUME "appears to be" a hard drive.
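    The DISK → ARRAY → VOLUME layering described above can be sketched as a toy model (illustration only; the class and method names below are made up, not any vendor's actual API):

    ```python
    # Toy model of the hierarchy described above: physical disks are grouped
    # into an array, volumes are carved out of the array, and the OS sees each
    # volume as a blank "disk" to be partitioned and formatted.
    # All names here are hypothetical, for illustration only.

    class Array:
        def __init__(self, disks_gb, raid_level=5):
            self.disks_gb = disks_gb      # sizes of the member disks, in GB
            self.raid_level = raid_level
            self.volumes = []             # volume sizes carved from the array

        def capacity_gb(self):
            # RAID 5 usable space: (n - 1) x the smallest member disk
            return (len(self.disks_gb) - 1) * min(self.disks_gb)

        def create_volume(self, size_gb):
            if sum(self.volumes) + size_gb > self.capacity_gb():
                raise ValueError("not enough free space in the array")
            self.volumes.append(size_gb)
            return size_gb  # the OS sees this as a blank disk of this size

    array = Array([1000, 1000, 1000])      # three 1 TB disks -> 2000 GB usable
    system_disk = array.create_volume(20)  # small volume for the OS
    data_disk = array.create_volume(1980)  # the rest as one data volume
    ```

    The point, as above, is that the volumes are created entirely in the controller, before the OS is ever involved.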



    I was looking at a PDF from Adaptec and they're talking about having multiple arrays per controller, not just multiple volumes per array. <snip>  When would someone use multiple arrays?


    Yes, this is also possible.  I suppose that there are scenarios where you might want to be able to take a subset of the drives and drop them off on another controller in a different (or even the same) system, turn it on and have that ARRAY up and functioning.  In that case temporarily setting up multiple ARRAYS on the same controller might very well make sense. 


    For all I know there also might be (probably are) performance issues where multiple arrays are faster than a single array.  I do know that modern computers / disks can use NCQ to queue disk commands.  So if you were to have two SQL Server databases (for instance), you could have each database on a different physical array even though it was on the same controller.  Now you could perform data reads and writes to the different databases, and since they each had their own array their NCQ information would work for the betterment of each database.  Likewise the cache on the hard drives would be used to maintain information for only one database, as opposed to being continually flushed to read / write a completely different database.


    This is rapidly digressing into a non-WHS area though and I have been warned...



    In the previous setup described with your 24 TB setup (btw, that's awesome, lol) that was one array, but we would set up multiple volumes using the RAID controller software/drivers before we installed WHS. 


    I want to hurriedly note that I do not OWN a 20 terabyte array, though I would love to be able to say that I did.  I was just speaking to the dreamers among us.  There was actually a gentleman discussing how he wished he could do a 14 TB (or some such) WHS, but since he couldn't he was giving up on WHS.  Such an ARRAY would perhaps fill his needs?

    BTW, one issue I have already run up against is that these RAID cards tend to use PCI Express, and motherboards often have a limited number of slots suitable for these RAID cards.  As an example, I have an Asus M2N32-SLI Deluxe motherboard which only has TWO PCI Express connectors.  I use one for the video and the other for an 8 port RAID controller.  There is simply no slot to drop in another RAID controller card.  When I purchased my last MB I carefully selected one with 4 PCI Express slots of at least 8x each.  It narrowed my choices down considerably!

    Enough about that though.  I don't want to get kicked for flagrantly ignoring a moderator.
    Tuesday, February 12, 2008 5:08 PM
  • Matt and John

     

    Can I suggest if you want to talk RAID that you exchange email addresses rather than posting questions and answers on the WHS forum.

     

    This is not me kicking you, this is me suggesting that non-WHS topics and questions should not be asked here.


    Andrew

     

    Tuesday, February 12, 2008 5:18 PM
    Moderator
  • Andrew,

    I would be amenable to responding to people offline; however, I do not see a way to give Matt my email without posting it publicly in this thread, which isn't going to happen.  Is there a way to send an email to a member privately?
    Tuesday, February 12, 2008 5:45 PM
  • Hi John

     

    No; as these are public forums, there is no way that I am aware of to send private messages.

     

    If you don't want to publish your email address, and I can understand that, then you really will need to keep to topic (i.e. WHS issues), otherwise Ken will be on your back (and to be fair, rightly so).

     

    Andrew

    Tuesday, February 12, 2008 5:52 PM
    Moderator
  • My hotmail account.  Replace "nonsense" with "matt".  Smile  Either way I'm sure my questions are done for the moment, heh.  It would be nice if this forum offered PMs.

     

    greer_nonsense at hotmail.com

     

    My apologies for this thread getting so heated (not my fault, I swear!) but I do think that it will go a long way towards allowing hobbyists to experiment with WHS and different types of hardware.

     

     

     

    -Matt

    Tuesday, February 12, 2008 6:11 PM
  • No, the heat was all mine.  People ask about raid, aren't told "no you can't discuss this" but receive no support.  Well, this is an MS forum so what should I expect.  At least the info is out there now whether I am allowed to add to it or not.  I will probably just plop in a message and insert links to these two threads to anyone asking about RAID and leave it at that.
    Tuesday, February 12, 2008 7:22 PM
  •  John W. Colby wrote:
    No, the heat was all mine.  People ask about raid, aren't told "no you can't discuss this" but receive no support.  Well, this is an MS forum so what should I expect.  At least the info is out there now whether I am allowed to add to it or not.  I will probably just plop in a message and insert links to these two threads to anyone asking about RAID and leave it at that.

     

    Actually John, Ken's (and my) point was that it is an MS Windows Home Server forum and as such the topics should be related to Windows Home Server.  Once the topic of conversation goes off that subject it should move elsewhere.

     

    It has nothing to do with it being an MS forum, or an anti-RAID forum.

     

    Both Ken and I would be saying the same thing if people were talking about any other topic that was not WHS related.


    Andrew

    Tuesday, February 12, 2008 7:27 PM
    Moderator


  • I understand.

    How do I report another possible corruption issue? 

    I have no idea whether it has been reported or not, but it appears that WHS may also corrupt the licenses of downloaded music.  I downloaded about 100 titles from Walmart a few years back.  I placed them on WHS and then contacted Walmart to get the licenses renewed.  The files played just fine until two days ago, when suddenly it started telling me that the licenses were not valid.  ALL the licenses, not just one or two.  I have probably been playing these for three or four weeks now. 

    Of course I am not certain that WHS is the culprit, but I thought I'd pass it along so that the team could look at whether the license files get actively modified in any way during play.  If so, then they would be another candidate for the corruption bug.  In fact I cannot even confirm whether it is the license file or the music file itself, though if I try to just open the file directly from Explorer it says that the license file is invalid, thus my assumption that the file was in some way corrupted.

    I play(ed) them through Media Player 10.0.0.3998, directly on the WHS machine which sits in my office.  It is that machine that the new licenses are "attached" to.  All of my non-licensed music (ripped CDs) still play just fine.

    Tuesday, February 12, 2008 8:12 PM
  • Hi John

     

    Send an email to whsforum@microsoft.com and include as much information as you possibly can, then someone will get back to you.


    Andrew

     

    Tuesday, February 12, 2008 9:00 PM
    Moderator
  • After reading through all the RAID posts, I decided to buy a RAID controller off Ebay and move my WHS to a RAID also.  I like the stability of my Vista system on RAID.  If this thread moves elsewhere, I would like to follow it too. My email is THoffman15@hotmail.com.nospam

     

    Wednesday, February 13, 2008 12:19 AM
  • What kind of controller did you purchase?  Is there a driver for your card for Windows Home Server?  Or would we just use the Windows 2003 Server driver?  I started reading information from Adaptec and even called them.  There is no specific driver for Windows Home Server, but there is for Vista, Vista x64, and so forth.

     

    During the call they did inform me that you would set up the different volumes on the array and then load the OS.  Of course, you'd have to do the old F6 trick to load a driver, I assume.  Thing is, if the driver is not written for WHS, what Microsoft OS is "closest" to WHS?  Does WHS have a "compatibility mode" that would allow running a config utility or driver for another OS?  Is WHS based off of Windows Server 2003, or is it a completely new OS?

     

    (To the moderators, I believe this to be still on topic and related to hardware compatibility with WHS.  Hopefully you've seen that I have behaved well here and am simply wanting to experiment with this new technology.  )

     

    Thanks,

     

    -Matt

    Thursday, February 14, 2008 1:10 AM
  • WHS is a shell over the top of Windows 2003 so you would use drivers for that.  If they don't have drivers for that, try Windows XP, sometimes those will work for Windows 2003.

    And yes you would set up the volumes from inside of the RAID BIOS software.  Usually (for hardware raid) that is available immediately AFTER the normal BIOS stuff.  You will need to watch closely and then tap some key, perhaps tab, F6 or something similar.  The BIOS will tell you what to tap. 

    That will take you into the RAID BIOS, where you assign disks to array(s), then create volume(s) on the array.  Once that is done, the volume will appear to Windows 2003 (WHS) to be a disk drive when you get to the point of installing the OS.  After setting up the array and volumes you have to exit the RAID BIOS.  Sometimes it will continue; sometimes it will restart the BIOS boot sequence.  In any event, the second time through do NOT enter the RAID BIOS unless you specifically need to make changes.  Just keep your hands off the keyboard and WHS will take over and start the load process - UNLESS it asks you to tap any key to load from CD, in which case you will need to do that.

    You WILL need the floppy with the driver on it.  Just put it in the floppy drive and leave it there.  WHS will ask for it during the initial phase.  Then, when it gets to the "blue screen" install phase (NOT the BSOD), look carefully down at the bottom, and when it asks you to hit F6 if you need special drivers, start tapping F6 until it stops asking.  At some point it will stop and present you with a menu of available drivers (from the floppy), usually but not always only one. 

    Make your choice from the menu and you are off with the rest of the normal install.  Don't worry, it is easy.

    BTW all of this is exactly the same thing you would do to set up RAID for any windows OS.  The only difference is that WHS runs stuff AHEAD of the normal windows 2003 install stuff.

    Moderators, if I am not allowed to answer these kinds of questions please say so and I will cease. 
    Thursday, February 14, 2008 1:52 AM
  • Thanks for the answer, John.

     

    I'll tell you what I'm going to do, since Adaptec offered no other advice.  I'm going to get a 3405 card and test it out.  I'll eat the restocking fee and exchange it for the 8 port card if I can get the 4 port card to work (or actually, either way I'm going to send it back).  The cards are the same tech (SAS/SATA), so I assume the drivers will be very similar.  This is actually what Adaptec told me to do, believe it or not: "Buy the card from somewhere that will allow returns.  If it doesn't work, send it back."

     

    I've got my media server all set up here on NewEgg (I'll post the hardware list in another thread for feedback.)  I have two RAID cards in there just as a placeholder.  I am going to add BeyondTV to it so I can go the PVR route.  I just need to add memory, floppy, PS and case and I'll be good to go.

     

    I'll need some advice on testing the rig out.  For example, adding a fourth drive, replacing a dead drive, adding another volume, and so forth.  If you have something in mind, John, please email it to me and I will make sure to put the rig through its paces.

     

    Hopefully about a month from now I'll be able to post the definitive treatise on RAID with WHS.  Just imagine, WHS on a RAID 5 where you never have to worry about losing backups!  Pretty frikkin awesome.

     

    (For info on losing backups, dear readers, check out the white paper on page 23).

     

     

    -Matt

    Thursday, February 14, 2008 2:19 AM
    I bought a Promise SX6000 off eBay.  It will support my IDE drives and RAID 5 up to six channels.  It also supports JBOD if I later want to remove the RAID.  I have had good success with my Promise TX4000 with Vista and XP, so I figure I will give the SX6000 a chance.  I got it for $54 shipped, and it supports Windows Server 2003. :-) 

     

    I am really looking forward to getting WHS on a RAID.  I figure I will set up my three 400 GB drives + my 500 GB drive.  I will lose 100 GB on the 500 GB drive, but figure I can always swap it out later with another 400 GB. 

    Thursday, February 14, 2008 2:19 AM
  • Well said everyone.

     

    To wrap things up I think it's important to understand there are 2 types of men in this world.

     

    The first thanks master for the apple juice in his cup.

    The other makes sure to get the S.O.B who pissed in his.

     

    Regards

    Thursday, February 14, 2008 6:13 AM
  •  Matt Greer wrote:

    Thanks for the answer, John.

     

    I've got my media server all set up here on NewEgg (I'll post the hardware list in another thread for feedback.)  I have two RAID cards in there just as a placeholder.  I am going to add BeyondTV to it so I can go the PVR route.  I just need to add memory, floppy, PS and case and I'll be good to go.

     

    (For info on losing backups, dear readers, check out the white paper on page 23).

     
     

    -Matt


    Matt, you probably can't use that motherboard.  You need one PCI Express connector for the video card and one for the RAID controller.  Neither will work in the X1 slot.  To be honest I haven't even found any useful cards that use the X1 slot; they are just wasted space to me.  When I spec'ed my last MB I went for four PCI Express slots.  It cost more, but I currently have my video and TWO RAID controllers in there.
    Thursday, February 14, 2008 1:04 PM
  • John,

     

    Thanks for looking over my list so closely.  I picked that motherboard for a couple reasons, and I can't remember all of them right now.  Smile  One, there are a lot of SATA connections; two, it has onboard video so I won't need an expansion card; and three, I believe it's compatible with future quad-core procs.  I'll double check all that.  If I cannot get the RAID card to work, then I'll fall back on all those SATA ports.  In the end, the WHS disk drive "scheme" is good, in my opinion, and I wouldn't be so bad off if I must use it.

     

    -Matt

    Thursday, February 14, 2008 1:51 PM
    You could get 50% of the benefit of RAID by going RAID 1 on the system drive and then just adding single drives for the rest.  Using your current list of drives, that would give you a 1 TB RAID 1 for the system disk and landing zone, plus a 1 TB drive for DE to use, making the total close to 2 TB of data.  The problem of course is that a) DE is in the mix immediately "doing the DE shuffle" and b) you only get 1 TB of total storage, because you pretty much have to use file duplication now, since the second drive is not RAID. 

    If you were to decide to do that, I personally would go for two 250 GB drives for the RAID 1 system disk and then use (2) 1 TB drives in RAID 1 as well, and leave DE file duplication off.  That lowers your total cost, because two 250 GB drives are way less than one 1 TB drive.

    That maximizes the advantages of RAID and leaves DE out of the mix for file duplication, minimizing the shuffling of files by DE as it tries to balance the shares.  DE would of course still use a landing zone, but would then (I presume) just move everything off the landing zone onto the data drive and be done.

    That would of course use up all of your onboard SATA connectors.
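    Back-of-the-envelope numbers for the two layouts above (sizes in GB; a sketch only, assuming the usual RAID 1 rule that usable space equals one member drive):

    ```python
    # Rough usable-space comparison of the two layouts discussed above.
    # RAID 1 usable space equals one member drive (the other is a mirror).

    def raid1_usable_gb(drive_gb):
        return drive_gb

    # Layout A: RAID 1 pair of 1 TB drives for system disk / landing zone,
    # plus one lone 1 TB data drive (unprotected unless duplication is on).
    layout_a = raid1_usable_gb(1000) + 1000   # 2000 GB pool, half unprotected

    # Layout B: RAID 1 pair of 250 GB drives for the system disk,
    # plus a RAID 1 pair of 1 TB drives for data, DE duplication off.
    layout_b = raid1_usable_gb(250) + raid1_usable_gb(1000)  # all protected

    print(layout_a, layout_b)  # -> 2000 1250
    ```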
    Thursday, February 14, 2008 2:54 PM
    You know what I just realized: no matter what, you need to swap out a drive when you start to get close to filling that last drive.  For a standard WHS setup, let's say one 500 GB drive for the OS and two 1-TB drives for data.  If you get more than 1 TB of data, for example, you need to add one more drive.  Let's say you have a motherboard with 3 SATA ports.  One would be used for the OS and the other two for the data drives.  As soon as you got to 99% of that first data drive, you'd better find some room to expand, because you'll never be able to remove a drive to install a bigger one.

     

    I wonder if most folks realize this?  But I guess that's the benefit of having the ability to add USB external drives and so forth, so you can shuffle data to that drive while you swap out the drive you're going to need to remove.  You could add a 500 GB USB drive for a short while, during which you get that new Seagate 2-TB drive.  Smile  (No, there is no Seagate 2 TB drive, but there will be someday!)

     

    I still think, for the test setup, going with three (or four, actually) 1-TB drives would allow me to test out the functionality without losing anything precious.  I think what would need to be tested is:

     

    1. Setup a 3-drive RAID5
    2. Partition C: as 20 GB and the rest (now the D: drive, right?) as one big volume (that would be less than 2 TB, so I'd be OK)
    3. Move a bunch of data to the drives, and then yank one drive to see how it performs.
    4. Put the drive back in and rebuild it.
    5. Add a 1-TB drive to the RAID card and add it to the set
    6. Get back into WHS and add the new volume to the OS, I imagine now volume E (discounting the DVD drive, of course)
    7. Repeat #3 and #4
    8. What else should be done?

    I guess it does not matter how many volumes you add to WHS, because WHS doesn't care; it just adds them to the "pool" and it is seamless to the end-user.

     

    Come to think of it, better get a PATA DVD drive so I don't waste a valuable SATA port!  Smile

     

    -Matt

    Thursday, February 14, 2008 4:27 PM
  • Matt,

    Unfortunately it is not quite that simple.  The issue is that most folks want file duplication (or so I would guess, anyway) for the "safety" of not losing files if a drive dies.  DE file duplication is a poor implementation of RAID 1 where you don't add ONE drive, you add TWO drives.

    Furthermore it really depends on how you configure your system to start.  If you are going to use File Duplication using DE you had better use the SAME SIZE drives every time.  Why?

    Build a system with DRIVE1 = 500 GB and DRIVE2 = a 1 TB drive.  Turn on file duplication.  The immediate thing that happens is that the landing zone starts filling up with duplicate files.  It HAS to use the landing zone because it HAS to duplicate the files, and if there are only two drives then...  So for 1.5 TB of drive space you have 500 GB of USABLE duplicate file storage.  Not a wonderful ratio (33% usable).  But it doesn't get much better.

    Additionally, the system drive DRIVE1 is a different size from DRIVE2, so at some point well short of 500 GB WHS is going to inform you that you can no longer duplicate files because DRIVE1 is too small to hold more.  Oops.

    So you go out and buy a 1 TB DRIVE3.  WHS goes to work for the next two weeks shuffling all of the duplicate files off of DRIVE1 onto DRIVE3.  Eventually the landing zone empties and you have an additional 500 GB to store more duplicated files.  Notice that for 2.5 TB of disk space you have 1 TB of USABLE duplicated file storage.  40% usable space.  Well... that is better than 33%.

    Now you fill up the 1 TB and go add another terabyte.  Hmm... you are back to the first scenario: by adding a terabyte drive you added only 500 GB of USABLE space.  PLUS you are throwing money into a "pool" where you will NEVER exceed 50% actual usage no matter what you do.

    So what does that tell you about adding "whatever drive you have available" to your storage pool?  It will create a mess where the USABLE space never equals what you added, AND DE spends most of its life doing the "DE file duplication shuffle", trying madly to keep your duplication system working.

    YUK!

    Can you say "RAID 5"?

    All you can say about the system is that DE works exactly as advertised.  It extends the total storage size exactly as it is supposed to do.  Throw file duplication into DE and the whole system becomes a mess.  RAID 5 straightens out that mess because data integrity is handled by a system designed to do data integrity, and DE gets to do DRIVE EXTENSION, which it does so well.  ADDITIONALLY, instead of capping out below 50% ACTUAL data storage with file duplication, you approach 100% ACTUAL data storage with RAID 5 as you add drives. 

    The best of both worlds.


    Thursday, February 14, 2008 5:55 PM
  • John,

     

    When I set up my RAID, would you recommend setting up the RAID 5 as one large disk?  I am going to use 3 400 GB + 1 500 GB.  I figure I will have 1200 GB of usable space.  My understanding is that by doing this I would no longer need to use DE or have to worry about the corruption bug until it is fixed.  I am also hoping it will speed up the file transfers.

     

    Please correct me if I am wrong.

     

    Thanks,

     

    Thursday, February 14, 2008 11:42 PM
  • From what I understand you would not need to use DE.  You would lose a few GBs from that 500 GB disk drive, but no biggie.

     

    Is that correct, not using DE resolves the corruption issues?

     

    -Matt

     

    Thursday, February 14, 2008 11:48 PM
    I had read that the corruption issue does not occur on single drive systems.  So if a RAID is set up where the system is seen as one drive, I am assuming it would no longer be a problem.  I am not sure if it has to do with DE, but you cannot use DE on a single drive system anyway.  This is the other reason I like the RAID idea.

     

    Thursday, February 14, 2008 11:52 PM
  •  Todd Hoffman wrote:

    John,

     

    When I set up my RAID, would you recommend setting up the RAID 5 as one large disk?  I am going to use 3 400 GB + 1 500 GB.  I figure I will have 1200 GB of usable space.  My understanding is that by doing this I would no longer need to use DE or have to worry about the corruption bug until it is fixed.  I am also hoping it will speed up the file transfers.

     

    Please correct me if I am wrong.

     

    Thanks,

     



    I cannot say for certain whether it will allow you to use the 500 gig in an array with the rest of the drives being 400 gig, but I would guess that it will.  Try it and see.

    And yes, I would go with one big drive until the whole DE thing shakes out.
    Friday, February 15, 2008 12:34 AM
  • It will let me use the 500gb but will only use 400gb of it.
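    The capacity arithmetic behind this is simple to sketch: RAID 5 truncates every member to the smallest drive and spends one drive's worth of space on parity.  (Illustrative helper, not any particular controller's behavior.)

```python
def raid5_capacity(drives_gb):
    """Usable RAID 5 capacity: every member is truncated to the
    smallest drive, and one drive's worth of space holds parity."""
    if len(drives_gb) < 3:
        raise ValueError("RAID 5 needs at least three drives")
    return min(drives_gb) * (len(drives_gb) - 1)

# Todd's array: three 400 GB drives plus one 500 GB drive.
print(raid5_capacity([400, 400, 400, 500]))  # → 1200 (GB); 100 GB of the 500 goes unused
```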

    Friday, February 15, 2008 12:41 AM
  • Todd, what motherboard/CPU combination are you using?  I'm getting somewhat frustrated trying to spec out a home-built unit having to make sure everything has drivers for Windows 2003 (until I hear otherwise, that is.)

     

     

    -Matt

    Friday, February 15, 2008 1:42 AM
    I am using an old motherboard.  It is an ASUS P4P800-E Deluxe.  I do not recommend it because the board had some issues.  It has died on lots of people, including me.  I was able to get a replacement from ASUS under warranty, so it works fine now.  The CPU is an Intel P4 3 GHz.  I am running 1.5 GB of RAM.  The system is nothing special.  It was an extra computer that was not getting much use.  The Promise SX6000 hardware RAID controller I bought from eBay for $54 supports Windows 2003 per the drivers on the company web site.  I just hope it works well for me.  I read in the forums about some people that had some issues with this RAID controller, but not enough to concern me.

     

    My system is currently running two 400gb drives and the 500gb under the regular WHS install.  I am expecting to get the RAID controller in the mail any day now.  Hopefully I can get it installed and running this weekend.  I always like to try new things and figure I have nothing to lose. System will be backed up. :-)

     

    Friday, February 15, 2008 1:55 AM
    I received the Promise SX6000 RAID card in the mail yesterday and installed it.  If anyone else decides to use it, make sure you have enough room in the case.  It is a full length PCI card.

     

    I always hate learning the hard way.  I had installed the drivers off a USB flash drive and thought everything was fine, except that when you press F6 for the drivers the second time, you must have a floppy drive installed.  This is the only option WHS gives you, and I had removed the floppy drive from the computer.  To make a long story short, it is much easier if both driver installs are done from the floppy disk.  Since the RAID was the only controller, it flagged me both times for drivers; I did not have to press F6 because it knew it needed the drivers for the controller.

     

    I also had one of my Seagate 400 GB drives die, which was unrelated to the RAID or WHS.  Luckily it was under warranty.  For now I installed the two 400 GB and one 500 GB, which gives me around 800 GB total. :-(  Going to reinstall again once I get the drive back from Seagate.  Wanted to make sure the RAID install went well.

     

    Hope this helps the next person that wants to install a RAID card. Happy with it so far. :-)

     

    Saturday, February 16, 2008 3:23 PM
    I've been saying since I got here... RAID makes WHS work well.  You can ignore file duplication and not worry about any disks dying.
    Sunday, February 17, 2008 12:52 AM
    All I can say is, WOW, flashbacks from the days I used to administer SQL clusters and do systems administration.

     

    From my years of experience, I will be approaching this with a RAID 5 configuration.  Reason being: there are too many reasons to be cautious.

     

    Suppose my only question is: what is the recommended PCI SATA RAID controller with internal connections?  Well, maybe not recommended so much as what will work with WHS?

     

    Microsoft is like my wife, love/hate relationship, but you continue to move forward...

    ~JH

     

     

     

     

    Sunday, February 17, 2008 10:21 PM
  • The recommended RAID controller for WHS is, well, none. But you knew that. Smile

    The one to get is the least expensive one you can find which supports the features you need and is listed on the Windows Server Catalog. Don't worry about hardware acceleration, hot swapping, OCE, ORLM or any of the "nice to haves" because they run the price up and you won't see much benefit anyway, in terms of performance. Buy the biggest disks you can afford, though, because you'll be sort of locked in to the array size you start out with.
    Sunday, February 17, 2008 11:42 PM
    Moderator
  • Hardware acceleration is probably the single BIGGEST thing that you can do in terms of increased performance under WHS.  Motherboard RAID 5 has never been much to shout about in the attempts I have made to use it, with write performance absolutely abysmal and read performance nothing to write home about.  A hardware controller OTOH will give you write performance roughly equal to the raw disk underneath and read performance that is nothing short of awesome. 

    As for the array size, that of course depends, but it is quite common to be able to add disks to the array ("grow" the array).  The disk(s) added will most often have to be made into one or more new volumes, and those then fed off to DE.  But since the RAID handles data integrity issues, file duplication is never used, and so DE does nothing more than copy the file off of the landing zone and it is done.
    Monday, February 18, 2008 3:26 AM
  • Thanks for the insight.

     

    I was doing additional research last night and began leaning toward the HighPoint RocketRAID 2210.  For the price and options it seems to be a fairly good deal compared to other manufacturers' cards.  It also has the Server 2003 supported drivers.  The main difference here is that the lower end cards do not have a processor onboard, thus utilizing the main PC's CPU.  Nevertheless, WHS doesn't take a lot of cycles to run, so this is not a concern, and I don't foresee expanding over 2 terabytes of data.  I have the cash to spend but am keeping costs down where I can.

     

    I figure, RAID5

    4 1024GB drives

    1 hot standby, 1 parity, 2 usable

     

     

    Monday, February 18, 2008 1:13 PM
  • I'm curious, do you actually have a PCI-X slot to plug that controller into? If not, but you have a PCI-E slot, I would consider the Highpoint RocketRAID 2300 instead.

    Also, if you're committed to a 2 TB RAID array, consider using 500 GB drives instead of 1TB drives. You might save some money since they cost less than ½ as much, even considering that you'll need twice as many drives, plus a case and RAID card that support more drives.

    Finally, don't underestimate your storage requirements. If you see yourself needing over a terabyte immediately, then increments of 100 GB (or two Blu-Ray disks) per month, in less than a year you've run out of storage. At that point, you can only add drives to your array through OCE, then expose them to WHS as additional volumes, and you'll be stuck with the significantly slower write performance of DE. You shouldn't assume that Drive Extender performance is going to improve significantly before V2 of WHS, and Microsoft hasn't discussed a timeline for that. (You shouldn't really assume it will improve then...)
    Monday, February 18, 2008 4:23 PM
    Moderator
  • The 750 gig drives actually seem to be the sweet spot right now (or darned close).  The problem with a 4 SATA socket controller is that you are then pretty much stuck.  There are motherboards with more PCI Express slots but they are not cheap.  And I am not finding 8 SATA socket controllers without the coprocessor, likely because the software RAID 5 performance really sux with too many drives involved.

    From that perspective, if you are going with a 4 socket no coprocessor board, then you want to go with the biggest drives you can afford.


    The Areca 1220 is on sale right now for $430, a pretty good price for an 8 socket, coprocessor RAID card and the fastest card in town.  I bought two of these for an average price of around $500 apiece.

    Having the extra sockets allows you to opt for the 750 g drives and expand the array later.  Only having a single drive as RAID overhead really helps keep the cost down.

    I keep pointing out that if you put eight 500 gig drives into a WHS system at $100 apiece (no RAID at all) you get 2 TB of USABLE space (assuming that all files are duplicated, of course) and the cost is $800.  If you spend $430 for that RAID card and buy four 500 gig drives you have spent $830 and have 1.5 terabytes of RAID 5, WITH the coprocessor, AND 4 extra SATA sockets for expansion.  NO duplication gives you around half of the "DE file shuffle" when you have more than one volume.  And all the rest of the benefits of RAID as well.
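    The cost comparison above reduces to dollars per usable terabyte (prices and capacities as quoted in the post; a back-of-the-envelope sketch only):

```python
# $100 per 500 GB drive, $430 for the 8-port RAID card, as quoted above.
def dollars_per_usable_tb(total_cost, usable_tb):
    return total_cost / usable_tb

whs_dup = dollars_per_usable_tb(8 * 100, (8 * 0.5) / 2)        # 8 drives, everything duplicated
raid5   = dollars_per_usable_tb(430 + 4 * 100, (4 - 1) * 0.5)  # 4 drives + card, one drive of parity
print(f"WHS duplication: ${whs_dup:.0f}/TB, RAID 5: ${raid5:.0f}/TB")
```

    Per usable terabyte the duplication setup is actually cheaper here ($400 vs. roughly $553); the post's argument is that the extra spend buys the coprocessor, the spare ports, and freedom from the DE shuffle.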

    Or do the same with two of the 750 gig drives RAID 1.  Then as you add more drives convert that RAID 1 ARRAY to RAID 5 and you have enough spare SATA ports to get you pretty well up there in terms of expansion.


    Yeah, I know the coprocessor boards cost a lot more money, but they provide a lot for that money, both in terms of system speed as well as data integrity, system expansion and convenience.
    Monday, February 18, 2008 5:34 PM
  •  John W. Colby wrote:
    NO duplication gives you no file shuffle.
    Not true, John, at least not in the way that I believe you mean. Turning off duplication will have only a small effect on WHS performance with multiple volumes in the storage pool. As soon as you have even one secondary drive, Drive Extender starts moving files around (the system drive is used as primary storage only when there's only the system drive, or when there's no room at all on other drives), and performance drops significantly. Which is why I keep recommending maxing out the system volume up front if you're going to use RAID.
    Monday, February 18, 2008 6:31 PM
    Moderator
  • Yes Ken, I understand that and as you have seen from my previous posts, I also have recommended a "Single Large Volume".  I am going to go edit my post to reflect that. 

    However you have to admit that the DE shuffle pretty much HAS to drop by about 50% for multiple volumes since DE is no longer shuffling files around trying to keep duplicated files off on a different drive from the master shadow copy. 

    ALL of the stuff that I pasted in below from the PDF has to be run for the "alternate shadow".  All the "keep it up to date" stuff goes away.  All of the "keep it (the alternate shadow) on the same disk" stuff goes away.  In general, ALL of the tasks involved in handling the alternate shadow file goes away and that pretty much has to be 50% of all of the "DE Shuffle".

    Whatever it does for the primary shadow has to be done all over again for the alternate shadow, plus a whole lot more when a file is updated in any way.  ALL of that secondary shadow work just "goes away", and it "goes away" only IF there is a second volume that DE has to manage.  Until a second volume is brought into play there is no DE shuffle.

    From Windows_Home_Server_Drive_Extender.pdf, page 14:

    If a file is in a shared folder that is configured for duplication, the Migrator service selects a second hard drive to store an alternate shadow.  If there is only one secondary data partition (such as a home server that has only two hard drives), or if all the secondary data partitions are full, the Migrator service will store an alternate shadow on the primary data partition. This means that the benefits of duplication are available even on systems with only two hard drives.

    Page 15:

    If a file already has the correct number of shadow files, the Migrator service will keep the alternate shadow files up-to-date and copy the contents of the master shadow to the alternate shadow(s) if the contents of the file change. When the filter intercepts a change to a duplicated file, it only writes the data to the master shadow. This simplifies the filter, and also offers a performance advantage by allowing the change to complete without waiting for multiple disks to finish writing. The Migrator service detects the change to the file and attempts to duplicate the file. The Migrator service will not duplicate a file while it is open. If an application tries to open a file while the Migrator service is duplicating it, the Migrator service will immediately release its handle to the file, and the Open request from the application will succeed.

    Having an extra copy of a file enables users to seamlessly access it when the main copy is unavailable. When opening a migrated file, the Windows Home Server Drive Extender filter attempts to open every shadow file. If the master shadow cannot be opened, but an alternate shadow can, the filter will promote the alternate shadow to be the new master. Promotion includes updating the reparse point to reflect which shadow is now the master shadow. The previous master shadow remains in the list of shadows in the reparse point, because it may be only temporarily unavailable (for example, because a USB cable was accidentally unplugged).

    Page 17:

    Balancing Storage
    The Windows Home Server Drive Extender filter and Migrator service offer choices for which hard drive to migrate a file to. One goal of the secondary selection algorithm is to keep related files together on the same hard drive. Copying music from a CD to a hard drive illustrates why this is important. If a single secondary hard drive failed, it is more convenient to lose all the music from a few CDs and then re-copy those CDs than to insert hundreds of CDs to re-create one track from each. One way to achieve this is to ensure that a set of files created around the same time are stored on the same secondary hard drive.

    An obvious method for choosing the secondary hard drive would be to use the one with the most space free, but that would result in sometimes alternating among secondary hard drives. Consider the CD scenario again. A moment ago, the second hard drive had the most space available, so Track 1 of this CD was saved there. Now that the second hard drive has this new file on it, the third hard drive has the most room to hold Track 2, but we would really prefer to store it on the second hard drive with Track 1. Choosing the hard drive with the least available space is a good choice because the hard drive with the least free space tends to remain the hard drive with the least free space for a long time, and the same secondary hard drive will be chosen.

    Another goal in selecting the secondary hard drive for a file is to ensure that migrated files have room to grow. If a migrated file is later opened and data is added to it, you need enough free space on the secondary hard drive to hold that new data. This suggests that there should be a buffer of free space remaining on the lowest-space secondary hard drive before placing a shadow file on it.
If all of the secondary hard drives are so full that there may not be room for existing shadow files to grow, the Migrator service may move some shadow files to the primary data partition or alert the user that the home server is running low on storage. The Migrator service makes the appropriate updates to the tombstones whenever it moves a shadow from one secondary data partition to another or from a secondary data partition to the primary data partition.
    If there are duplicated files that have a shadow file stored on the primary data partition, the Migrator service will attempt to move those shadow files to a newly added hard drive. If the new secondary data partition is the second hard drive added to the home server,
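    The selection rule quoted above (pick the secondary drive with the least free space, provided a headroom buffer remains) can be sketched like this; the names and the buffer size are illustrative assumptions, not the actual WHS implementation:

```python
BUFFER_GB = 10  # assumed headroom, not a figure from the PDF

def pick_secondary(free_gb, file_gb):
    """free_gb: {drive: free space in GB}. Returns the chosen
    secondary drive, or None if no drive leaves enough headroom."""
    fits = [(free, name) for name, free in free_gb.items()
            if free - file_gb >= BUFFER_GB]
    # Least free space first, so related files keep landing together.
    return min(fits)[1] if fits else None

print(pick_secondary({"disk2": 120, "disk3": 300}, 5))  # → disk2
```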





    Monday, February 18, 2008 9:50 PM
  • Hello All,

    I am one of those people that tried to put together Ken Warren's WHS from ExtremeTech.  I wanted a bit more of an option when installing.

    Unfortunately, when I purchased, the mobo that Ken recommended was not available, so I picked up one that did not have a RAID 5 option.  I purchased 3 of the Western Digital Green Drives (750 GB) since the 500s were also discontinued on Newegg as well.

    Now, I am faced with either getting a new motherboard with built in Raid capability (all of them seem to be the same HDMI ones on NewEgg) or get a Raid Controller.

    Listed above is the Highpoint RocketRAID 2300, which is an x1 slot card, and it looks like it has pretty good reviews.  My question is, since this is software-RAID driven (OK with me since I have an E4500 sitting around doing nothing), is it MUCH better than the RAID built into the MoBo?

    Are the RR2310 or RR2210 better choices for my Gigabyte GA-G31M-SL2?  Or should I go with a new Mobo like Gigabyte GA-73PVM-S2H?

    The purpose of this box is mostly for backup, but it will need to do some light media streaming.  Also, I would like to go with RAID 5 as these boxes will support OneNote and I am just not sure how well DE can handle multiple people working together on .one files.

    Thanks,
    Tuesday, February 19, 2008 1:39 AM
    As far as which motherboard to go with, it's hard to say.  WHS is based on Windows Server 2003, but it seems that sometimes you don't need Windows Server 2003 compatible hardware, and sometimes you might.  It's a gamble.  What's nice is that Ken W. put together a relatively cheap server arrangement, so if you had to swap out the mobo or some other part you wouldn't be out too much money.

     

    What are you planning on doing with the server?  Just WHS stuff, or are you going to encode/decode video and audio streams, like running a PVR or something?

     

    I'm a newbie myself, just planning out the hardware before I install the thing.

     

    -Matt

    Tuesday, February 19, 2008 3:58 AM
    I have never used that RAID controller, but I am tempted to buy one just to be able to test it.  I would like a non-coprocessor solution that has at least reasonable write speed, and this one sounds like it might be that solution.  And of course finding a use for those x1 connectors is always a plus.

    I don't have enough experience with WHS to know how much balancing moves things around on a constant basis.  If it would write files out there and leave them in place then a moderate write speed with a fast read speed would be sufficient.  If it moves files around even after it finds a suitable home for them then it might be a bottleneck.
    Tuesday, February 19, 2008 5:22 AM
  •  Roger Huston wrote:
    Hello All,

    I am one of those people that tried to put together Ken Warren's WHS from ExtremeTech.  I wanted a bit more of an option when installing.

    Unfortunately, when I purchased, the mobo that Ken recommended was not available, so I picked up one that did not have a RAID 5 option.  I purchased 3 of the Western Digital Green Drives (750 GB) since the 500s were also discontinued on Newegg as well.

    Now, I am faced with either getting a new motherboard with built in Raid capability (all of them seem to be the same HDMI ones on NewEgg) or get a Raid Controller.

    Listed above is the Highpoint RocketRAID 2300, which is an x1 slot card, and it looks like it has pretty good reviews.  My question is, since this is software-RAID driven (OK with me since I have an E4500 sitting around doing nothing), is it MUCH better than the RAID built into the MoBo?

    Are the RR2310 or RR2210 better choices for my Gigabyte GA-G31M-SL2?  Or should I go with a new Mobo like Gigabyte GA-73PVM-S2H?

    The purpose of this box is mostly for backup, but it will need to do some light media streaming.  Also, I would like to go with RAID 5 as these boxes will support OneNote and I am just not sure how well DE can handle multiple people working together on .one files.

    Thanks,


    I don't see the RocketRAID 2300 being any better than a motherboard raid.  Here is a review that compares the two:

    http://www.tweaktown.com/reviews/1288

    It compares the 2300 against Intel's onboard RAID solution (ICH9R).  That's not to say it's bad, it's just not much faster than motherboard raid solutions.


    Tuesday, February 19, 2008 7:53 AM
  • Roger, I don't write for Extremetech, so I'm not at all sure what you're talking about when you say "Ken Warren's WHS from ExtremeTech."

    I don't recommend running WHS on RAID. It's an "unsupported scenario" per Microsoft (who don't permit OEMs like HP to equip their servers with RAID), and I just don't think it's the best use of the average homeowner's money to make the investment in RAID. When someone asks about RAID, I advise against it.

    But when someone is determined to use RAID anyway, I offer up options that don't require investing $1000 up front in the storage subsystem in your server. That means software RAID and hard drives that are in the "sweet spot" for price/GB (that's still the 500 GB drive, BTW). Software RAID is slow, yes, especially on writes, but it still delivers better write performance than WHS with Drive Extender does with multiple disks in the storage pool.
    Tuesday, February 19, 2008 12:20 PM
    Moderator
  • John, it would be nice to think that half the effort Drive Extender makes, and therefore half the performance hit, is from managing the second "shadow file". But it isn't. The duplication of files happens "after the fact" not while the file is being written. Turn off duplication entirely on a server with two disks and copy a large file to a share, and it still averages 2-5 MB/s writes.
    Tuesday, February 19, 2008 12:35 PM
    Moderator
  • Liam,

    That tweaktown test is not useful in this discussion, though, because they didn't test RAID 5.  RAID 5 performs some very math-intensive operations that in my past experience drag down the write speed.  In my case I was getting 5 MB/second writes to a RAID 5 array.

    Now I will say that this was several years ago.  I have a couple of servers in my office that I use for SQL Server data mining on 100 to 200 gig databases and so I went to hardware assisted raid cards and have never looked back.

    The coprocessors on RAID cards are specialized for RAID math; IOW they have been designed specifically to perform those XOR operations quickly and efficiently, plus they all have a cache memory which generally starts at about 256 MB.  So they take on all of the work of distributing the data out across all of the drives.  They are real Intel computers, with a BIOS that contains all the program code required to perform all the RAID operations, and can do so with zero assistance from the host system.  As a result these babies are FAST.

    Now I am trying to find something that would work for WHS.  According to the reviews at Newegg, the RAID 5 write speed for the mentioned RAID card is up there in the "reasonable" range.  I define reasonable as anything over 20 MB/sec.

    More is better but I don't know how much you can expect from any card that doesn't have a coprocessor on it.  This specific card apparently has some kind of XOR hardware but it is not a coprocessor.
    Tuesday, February 19, 2008 1:21 PM
  • John, you may be interested to know that more recent Intel RAID chipsets are turning in the sorts of numbers that you would apparently consider "acceptable"... You might find George Ou's report on the ICH8R particularly interesting.
    Tuesday, February 19, 2008 5:23 PM
    Moderator
  •  Ken Warren wrote:
    John, you may be interested to know that more recent Intel RAID chipsets are turning in the sorts of numbers that you would apparently consider "acceptable"... You might find George Ou's report on the ICH8R particularly interesting.

    WOW.

    I have been living in the AMD world, and there are NO AMD motherboards (on Newegg anyway) that support ICH8R in Raid 5.

    Those are AMAZING numbers!!! (for motherboard RAID).  IMHO, if the RAID can get the CPU cycles it needs to perform at these speeds then that RAID implementation would rock for WHS.

    Sadly it would cost me an entire new system, almost from the ground up to take a look.  Sigh.
    Tuesday, February 19, 2008 6:29 PM

  •  John W. Colby wrote:
     Ken Warren wrote:
    John, you may be interested to know that more recent Intel RAID chipsets are turning in the sorts of numbers that you would apparently consider "acceptable"... You might find George Ou's report on the ICH8R particularly interesting.

    WOW.

    I have been living in the AMD world, and there are NO AMD motherboards (on Newegg anyway) that support ICH8R in Raid 5.

    Those are AMAZING numbers!!! (for motherboard RAID).  IMHO, if the RAID can get the CPU cycles it needs to perform at these speeds then that RAID implementation would rock for WHS.

    Sadly it would cost me an entire new system, almost from the ground up to take a look.  Sigh.


    Well, it would possibly still be less than a controller with dedicated XOR engine. Smile

    John, the tweaktown review does not show RAID 5, but I would be very surprised if a $100 controller would be any faster than the onboard MB's RAID 5 support.  Perhaps I should have stated my point - I was unimpressed with the RAID 0/1 numbers and didn't think the RAID 5 numbers would break the trend.
    Tuesday, February 19, 2008 8:02 PM
  •  Lliam wrote:



    Well, it would possibly still be less than a controller with dedicated XOR engine.

    John, the tweaktown review does not show RAID 5, but I would be very surprised if a $100 controller would be any faster than the onboard MB's RAID 5 support.  Perhaps I should have stated my point - I was unimpressed with the RAID 0,1 numbers and didn't think the RAID 5 #'s would break the trend.


    Liam,

    In THIS case the RAID 5 numbers just might break the trend because they claim to have an XOR assist function.

    Onboard (software) RAID 0 and 1 has always been decent, and mostly has always worked.  RAID 1 is just a mirror, where the commands to write something to one disk are sent to another disk as well.  In essence there is nothing much special for the controller to do in the case of RAID 1.  Performance has always been quite normal, usually about the raw write speed of the hard disk.  You cannot get faster than that simply because there is no way to cram data into the raw disk any faster.  So RAID 1 really does not stress ANY RAID controller or the host system.  It is unrealistic to expect a RAID controller card to somehow magically make RAID 1 FASTER than a motherboard RAID solution.  What you are looking for in a test is to make sure that it doesn't slow down the RAID 1 implementation.

    RAID 5 or 6 is a whole other ball of wax, however.  In that case there are more disks to handle, AND the data is not simply written to the disk; each disk receives a "strip" of the whole, PLUS that strip is mathematically manipulated to add in extra information such that if any drive is lost, the extra information in each strip on the remaining disks can be manipulated to extract and reproduce the data that was on the missing disk.  As a result there is just a TON of extra work going on for RAID 5 / 6.
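    The parity math being described for RAID 5 is plain XOR: the parity strip is the XOR of the data strips, so any single lost strip can be rebuilt by XOR-ing everything that remains.  A toy illustration (four-byte strips, three data drives; real controllers rotate parity across drives):

```python
from functools import reduce

def xor_strips(strips):
    """XOR equal-length byte strips together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

data = [b"AAAA", b"BBBB", b"CCCC"]  # strips on three data drives
parity = xor_strips(data)           # strip on the parity drive

# Drive holding data[1] dies: rebuild its strip from the survivors.
rebuilt = xor_strips([data[0], data[2], parity])
print(rebuilt)  # → b'BBBB'
```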

    Historically, in the case of SOFTWARE RAID 5 / 6 the WRITE throughput has dropped waaaaay down.  If a raw disk write could be done at 40 MB/second (just as an example), the same write to a SOFTWARE RAID array would be perhaps 5 MB/second.  Even this speed would be determined by the number of drives in the array; the higher the number of disk drives, the LOWER the write speed, simply because the amount of work on the software RAID controller increases for each drive.

    In fact the same write to a HARDWARE RAID array would typically be about the same as the raw disk, so in the example above it might be 40 mbytes / second, and that would be considered "good" simply because the software RAID write was so slow.  However in general a HARDWARE RAID solution WRITE rate would generally be pretty much constant simply because there is a dedicated coprocessor handling all of that work and it was able to keep up, at least to a point.

    RAID READS, however, tend to be much faster, both HARDWARE and SOFTWARE.  So while a SOFTWARE RAID 5 write might be 5 MB/second, the reads might be 60 MB/second or even higher.  Again, the more drives in the ARRAY, the faster the data can be read and so the higher the READ speed.  Hardware RAID 5 read speed tends to be (drive speed * number of drives).  So if the drive read speed is 40 MB/second then a 3 drive RAID 5 array would be 120 MB/second.  A 4 drive array would be 160 MB/second.  The reason is simply that the data is striped across N drives, so "blocks" of data are read off of N drives at once.  That is one reason that a RAID array provides such blistering read speeds, even for WHS.  LOTS of data coming off of LOTS of disks all at the same time.
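    The read-throughput rule of thumb stated here (per-drive speed times the number of drives, using the post's illustrative 40 MB/s figure) works out like this:

```python
def raid5_read_mbps(per_drive_mbps, n_drives):
    # The post's rule of thumb: large reads stripe across every drive.
    return per_drive_mbps * n_drives

print(raid5_read_mbps(40, 3), raid5_read_mbps(40, 4))  # → 120 160
```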

    Of course there are a lot of other things going on, including individual disk drive cache, cache on the hardware RAID board, the type of coprocessor chip, the data path to the RAID controller and so forth, but you get the idea.  In the case of WHS you also have DE dragging things down, but DE tends to drag down WRITE speeds (well, PERCEIVED write speeds, more or less), so your READ speeds for a RAID WHS solution will still be much higher than in a non-RAID WHS solution.

    Anyway, to me the fact that the dedicated RAID board (no coprocessor) does not "outgun" the motherboard solution in RAID 0 or 1 doesn't really mean anything; it is not expected to, and in fact if it did I would view the results with a great deal of suspicion.  RAID 5 is where any "outgunning" MIGHT occur because of the XOR assistance.  However, they don't publish those numbers, so we just can't tell.
    Tuesday, February 19, 2008 8:56 PM

  • John,

     

    I've been away from the forums for about a week or so and haven't been able to moderate very well.  I'm sorry I am late to this thread, because it should have been locked with your first post.  The user asked a very specific question which was not directly answered by any of your posts.  This thread contains a debate about RAID vs DE.  I'm not here to join that debate.  I am here to explain, one time only, that you need to be here in this community to help users with Windows Home Server and avoid creating off topic debates in threads.

     

    There are technologies that exist that may be good or bad for Windows Home Server.  If a technology is not supported, we explain that to forum users so they do not end up having to pay for support.  There are problems that can happen if you do not keep your home server in a supported scenario.

     

    If you have further questions or concerns, please e-mail me at whsforum@microsoft. com  (no spaces)

     

     

     

    Tuesday, February 19, 2008 9:35 PM
    Moderator