locked
Maximum data drive size?

  • Question

  • Hi there,

     

We've been trying to add a large hardware RAID5 disk to the WHS drive pool. The drive is approx. 5 TB. Everything seemed OK; the "Windows Home Server Console" correctly reported that there was now a bit more than 5 TB available.

     

Going to the "Disk Management" console, on the other hand, showed that the drive had one partition of 2047 GB and about 2800 GB unallocated. Curious, we filled the system with data, and sure enough we couldn't store more than the 2047 GB plus the primary data partition.

     

    Trying to extend the volume using diskpart didn't help.

     

So my question is: is there any way to use disks larger than 2 TB in WHS?

     

    Thanks,

    Brian

    Friday, August 17, 2007 7:19 PM

Answers

  • Umm, I think the maximum size of an MBR disk is 2 TB. So you may have to convert the array to a GPT disk. I would remove the array from the storage pool, convert it, and then add it back to the pool.

    You may find that WHS chokes if you do that, though (though I kind of doubt it). I'll be interested to hear how it turns out. Smile
    Friday, August 17, 2007 8:43 PM
    Moderator
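A quick sanity check of that 2 TB figure (just a sketch in Python, nothing WHS-specific): an MBR partition entry stores its size as a 32-bit sector count, and with the classic 512-byte sectors that caps out right around the 2047 GB partition Disk Management shows.

```python
# MBR partition tables store start/length as 32-bit LBA sector counts.
# With 512-byte sectors, the largest addressable size is 2^32 * 512 bytes.
SECTOR_SIZE = 512          # bytes, the classic sector size
MAX_SECTORS = 2 ** 32      # 32-bit sector count in an MBR partition entry

max_mbr_bytes = MAX_SECTORS * SECTOR_SIZE
print(max_mbr_bytes)              # 2199023255552 bytes
print(max_mbr_bytes / 2 ** 40)    # 2.0 (exactly 2 TiB)
print(max_mbr_bytes // 2 ** 30)   # 2048 GiB, matching the ~2047 GB partition
```

GPT, by contrast, uses 64-bit sector counts, which is why converting the array is the way around the limit.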
First I'll mention that "RAID is not a supported scenario" for WHS. Microsoft doesn't test for it, doesn't recommend it, and their first advice if you have problems with it is likely to be "break the array, reinstall the OS, and call us back if you continue to have problems." WHS has some features which give basically the same data protection as RAID, but which are much easier for the non-technical admin to deal with. Your very best bet is to stick with the design Microsoft intends.

    If you decide to go ahead with RAID anyway, I think what Ahmad is trying to do is give a generic tutorial on the use of diskpart (which isn't very helpful in this case).

    I believe you've tried this, but just in case you haven't:
    • Remove the array from the storage pool
    • You may have to reinstall WHS at this point because things are too badly hosed up to continue with this installation; that's not completely clear to me. If you do, reinstall without the driver for the array controller. (Don't use the array as your system drive for this exercise.)
    • Install the driver after WHS is re-installed.
    At this point, you have WHS "seeing" the array (or maybe individual disks, if you break the array). But it's not in the storage pool yet. Now you can:
    • Take your array and clean/convert it to GPT.
    • Extend the partition over the remaining space on the array.
    • Add the partition to the storage pool.
    If that doesn't result in a partition with the full space available, I'm afraid you're not going to be able to do what you want. In that case, two arrays instead of one is probably your best bet (after dropping back to JBOD and letting WHS take care of it). You'll sacrifice an extra disk to parity that way, but will be able to build multiple arrays that are each less than 2 TB in size, so they'll work as MBR drives.
    Monday, August 20, 2007 12:33 AM
    Moderator
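Ken's fallback suggestion (several smaller arrays instead of one big one) trades capacity for MBR compatibility. A small sketch of that trade-off, with placeholder drive counts and sizes:

```python
def raid5_usable_gb(n_drives: int, drive_gb: int) -> int:
    """Usable capacity of a RAID5 array: one drive's worth goes to parity."""
    return (n_drives - 1) * drive_gb

# One big array of 8 x 500 GB drives: 3500 GB usable, but over the 2 TB MBR cap.
print(raid5_usable_gb(8, 500))        # 3500
# Two 4-drive arrays: each 1500 GB, each fits under the MBR ceiling,
# at the cost of a second parity drive.
print(2 * raid5_usable_gb(4, 500))    # 3000
```

So splitting one 8-drive array into two sacrifices a drive's worth of space to the extra parity, but each resulting volume works as a plain MBR disk.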
  • Ken,

You are right about this issue; in this topic someone confirmed your steps and found a way to do it. Brian should look at that.

Here is a quote of it:

     rwicks61 wrote:

     

    Hi,

     

It is possible to add a RAID array of more than 2TB.  This has to be done BEFORE any data is placed on the array.  Do this at your OWN risk.  You will need a separate system disk to install WHS.

     

Install WHS to your designated system disk ( in my case a 160GB WD ).

Install the RAID controller ( Highpoint-tech 2340 with 5x750GB attached ).

Create the array from the BIOS of the RAID controller.

Log on to WHS locally.

Add the RAID controller driver.

Install the RAID controller app. Make sure the array is initialized, so that WHS can see one disk.

     

    Run the home server console and add the new disk to the storage pool.

It will now format the array to a 2TB MBR basic partition.

     

Now open Explorer, go to C:\fs and copy the C folder to your desktop. Do not copy the System Volume folder.

Close the Explorer window.

Go to Services and stop the Drive Extender and Virtual Disk services (this will stop other WHS services).

Open Disk Management and delete the partition that was created on the array disk (this will take a bit of time; select Yes when it says that other processes/services may be using the partition).

You will now see two unallocated regions of the disk.

On the left side, right-click on the disk and click convert to GPT disk.

Right-click in the same place and click convert to dynamic disk.

Now you see the whole disk as one unallocated volume ( in my case I saw 2.97TB ).

Right-click on the unallocated space and click new partition/volume.

The volume label needs to be set to DATA.

Check quick format (if you are sure the disks are ok).

The volume needs to be mounted to a folder; select "c:\fs\c" as the folder.

     

    Now copy the contents of the c folder that is on your desktop back to the c:\fs\c folder.

    Restart the system.

     

    Logon locally again.

If you launch the Home Server Console, you will now see that the total amount of space on the array is available to the storage pool.

If you get the message that the backup service is not running:

     

Open a command prompt.

Change to D:\folders\{00008086-058D-4C89-AB57-A7F909A47AB4} and run the following command:
fsutil reparsepoint delete Commit.dat

Delete everything in D:\folders\{00008086-058D-4C89-AB57-A7F909A47AB4}.

Restart the system.

     

    You should now be running.  Voila!!!

     

     

    Thanks

     

    Roger

     

     




    My best.


    Tuesday, August 21, 2007 3:09 PM

All replies

  • Hi,

RDP to your server, and from a command prompt use this:

    DiskPart.exe

    And then, use the extend command, as this:

    Syntax
    extend [size=N] [disk=N] [noerr]

    Parameters
    size=N
    The amount of space in megabytes (MB) to add to the current partition. If no size is given,
    the disk is extended to take up all of the next contiguous unallocated space.

    disk=N
    The dynamic disk on which the volume is extended. An amount of space equal to size=N is allocated on the disk.
    If no disk is specified, the volume is extended on the current disk.

    noerr
    For scripting only. When an error is encountered, DiskPart continues to process commands as if the error did not occur.
    Without the noerr parameter, an error causes DiskPart to exit with an error code.

DiskPart has other commands as well.

    My best.

    Friday, August 17, 2007 8:18 PM
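As a concrete example of the syntax above (the volume number here is only a placeholder; run list volume first to find your own), a session might look like this:

```
C:\> diskpart

DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend
DISKPART> exit
```

With no size given, extend grabs all of the contiguous unallocated space. Note, though, that extend cannot take a basic MBR disk past the 2 TB addressing limit, which is the underlying problem in this thread.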
  • I have 4 WD 750gb HD's in mine. It should read 3 TB - the OS which is 20 gig ?

     

    But what I see is only 2.7 TB

    Friday, August 17, 2007 11:15 PM
  •  Konflict wrote:

    I have 4 WD 750gb HD's in mine. It should read 3 TB - the OS which is 20 gig ?

     

    But what I see is only 2.7 TB

2.7TB is correct. 4 x 750 would be 3TB (3000 GB), but an advertised 750GB drive gives just under that, about 680GB of usable space, just like any other HDD.

    Friday, August 17, 2007 11:38 PM
  •  Konflict wrote:

    I have 4 WD 750gb HD's in mine. It should read 3 TB - the OS which is 20 gig ?

     

    But what I see is only 2.7 TB

     

Well, don't forget that on a 750 GB HD you will actually only see maybe 690 GB of usable space.  If you multiply that by 4, you get 2760 GB, or around 2.7 TB... I don't have a 750 GB drive, so I'm guessing here... but none of my 320 GB drives actually show 320 GB... they all show around 298 GB of available space (total, used and unused).

     

    Nothing WHS can do about that...

     

    Hope that helped...

    Friday, August 17, 2007 11:40 PM
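The arithmetic in the two replies above checks out: drive vendors count 1 GB as 10^9 bytes, while Windows reports binary gigabytes (2^30 bytes), and the last few GB of difference are filesystem overhead. A quick Python sketch:

```python
def reported_gb(marketed_gb: float) -> float:
    """Convert a drive's decimal 'marketing' capacity (1 GB = 10**9 bytes)
    into the binary GB (2**30 bytes) that Windows reports."""
    return marketed_gb * 10 ** 9 / 2 ** 30

print(round(reported_gb(320), 1))             # 298.0 -> the "around 298 GB" seen
print(round(reported_gb(750), 1))             # 698.5 -> before filesystem overhead
print(round(4 * reported_gb(750) / 1024, 2))  # 2.73  -> the "2.7 TB" observed
```

So the missing space is accounting, not WHS.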
Ah, you guys are right, I totally forgot about that. Thanks Smile

     

    Saturday, August 18, 2007 1:18 PM
Yes, you are right - WHS creates an MBR disk when it adds a new disk. This happens no matter what format the disk was originally (e.g. GPT or dynamic).

     

One problem I encountered was that if I convert the disk to GPT, I suddenly see TWO identical disks under Server Storage (WHSC) - same size, name, etc. Adding one of these results in the initial problem, i.e. the disk is reconfigured to MBR, the first partition (2047GB) is used in the "Storage Hard Drives", and WHS incorrectly states that there are 5TB free.

     

    I then tried to add the second ("twin") disk to see what happened (maybe it would use another partition on the original disk). Unfortunately this was not the case - the disk was added but the status was "Missing". WHS then thinks that there is nearly 10TB of storage in the server. And I don't know how to remove the "Missing" disk - when I try WHS states that the disk is not connected - pressing "Next" anyway gives a warning and clicking "Finish" here results in "Files not Moved" and nothing happens.

     

    So unfortunately - there seems to be no way to add a disk larger than 2TB :-(

     

One more result of all this is that we're now getting the infamous "Invalid file handle" error - trying all the suggestions around here (except reinstalling WHS) has not helped. (We're running the WHS Evaluation version, build 3790.)

     

    Thanks,

    -Brian

    Saturday, August 18, 2007 8:41 PM
  • Hi,

Well, my suggestion is based on my setup here: I have one server with 6 TB, and another with more than that.

Normally I apply the above command, then format the array within Disk Management, and then you can try to add it to the WHS pool.

To tell you the truth, I didn't test it with the WHS pool, but if Disk Management sees it as the full 5 TB, there's no reason the WHS pool shouldn't, unless something about WHS is still unclear to me.

    My best.
    Saturday, August 18, 2007 9:17 PM
  • Hi abobader,

     

Well, either there is another way of adding disks to the WHS pool besides using the "Windows Home Server Console", or I'm not getting what you are saying.

     

    No matter what I do to the disk to be added, WHS totally reconfigures the disk before adding it to the disk pool. It seems that it does something like:

     

select disk x
clean
convert basic
convert mbr
create partition primary
assign mount=C:\fs\xyz

(all run inside diskpart.exe)

     

So no matter what you do to the disk before adding it to the pool, all data, disk setup, and partition info is lost!

     

    Please tell me I'm wrong or that there is some way to add the disk without having WHS convert the disk to basic/mbr.

     

    -Brian

    Sunday, August 19, 2007 9:20 AM
  • Hi Brian,

You only need the diskpart extend command; here is how I do it normally:

1 - I extend the array within the RAID controller, in your case to 5 TB.
2 - RDP to the server, open a command prompt, and run diskpart.exe.
3 - extend disk=1 (as an example)

*I do not specify a size, since I want it extended over all the available space.

Then exit from diskpart, and I am done.

    My best.
    Sunday, August 19, 2007 11:13 AM
  • Just a note to say that this method worked very well for me (so far!)

     

    I have a 5x750 RAID-5 array on an Areca 1120 PCI-X card.  I don't want to give up the improved read perf of a RAID array, and the inherent protection on _all_ data stored on it.

     

The only thing I did differently from "rwicks61" was that I didn't convert the GPT volume to a dynamic volume... I left it GPT.  WHS seems fine with that (as it should be, since all of its architecture sits above this layer).

     

    One additional note... my controller card offers SCSIPort and StorPort driver models.  For some reason, the StorPort drivers would not allow me to keep it as a GPT drive... I had to convert the GPT to dynamic.

     

    I really hope Microsoft adds native support for GPT volumes in the coming months... it won't be long before the 2TB single drive barrier is broken... probably by this time next year is my guess.

     

    -shoek

     

    Monday, November 5, 2007 6:53 PM
  • Hi Ken,

     

    thanks for your tips & tricks around WHS & RAID.

     

You always advise using a non-RAID drive as the system disk. So let's say I run WHS on a single 160GB system disk and have a RAID-5 array attached to it to store my data on.

     

    Questions:

    1.)

What happens if my system disk fails and I have to reinstall WHS (after replacing the drive)? Is it possible to re-attach the RAID-5 protected array? Do I lose all the data on it when adding it to the storage pool? Since it was already formatted, will WHS be able to handle this without reformatting the drive?

     

    2.)

If I have a RAID-5 protected volume in the disk pool and run out of space, could I add another disk to the pool (let's say another RAID-5 protected volume) and use Drive Extender to have one large pool?

     

    3.)

If I have volumes larger than 2TB (let's say I managed to overcome the WHS limitations using GPT), will Drive Extender still work?

     

    Regards,

    Alex

    Friday, March 21, 2008 10:29 AM
  • Hi,

     

    Just another idea:

     

If you use a RAID controller, why is it necessary to build one single large volume? Why have all these workarounds that try to break the barriers of the WHS limitations?

     

Would it not make more sense to create multiple smaller logical volumes (max. 2TB) inside your RAID configuration and let WHS do the work of presenting them as one large disk space? I would in any case not recommend creating file systems larger than 2TB under any Windows operating system, because if your system crashes or your file system gets corrupted, a file system check could run for days or even weeks...

     

I will use HW-RAID (RAID-5) with a standardized volume size. I own an external eSATA storage box which holds up to 10 hard drives. I will start with a 5-drive configuration based on 1TB disks. In my case, I will standardize on a 1TB volume size, so this will give me four 1TB volumes. I think this gives me the best flexibility. In the future I might be able to use the other 5 slots in my storage box with 2TB or even larger drives, but I will simply add them inside WHS as 1TB volumes. So any disk migration will be very easy to do... For example: I could migrate a 1TB volume from my first RAID set to a newer one using larger disk drives.

     

Also, I recommend not having too many disk drives in a single RAID-5 configuration. So in my example, I go with 5 drives max.

     

    What do you think?

     

    Regards,

    Alex

     

     

     

    Saturday, March 22, 2008 9:31 AM
  • Alex, there are good reasons for going with a single large volume on your RAID array, rather than multiple volumes. First, WHS will perform much better with a single "disk" due to Drive Extender overhead. Second, until Microsoft releases a patch, there is a file corruption issue that only affects WHS PCs with multiple disks. There's also a bug with previous versions that only affects multi-disk setups, but that's less serious, IMO, as it doesn't corrupt files, just blocks the ability to retrieve old versions of them.
    Saturday, March 22, 2008 1:59 PM
    Moderator
  • Ok, thanks for your answer.

     

What do you mean by "much" better performance? I guess that my Ethernet network connection (even on GigE) will be the limiting factor (max. 70 MB/s). Or will Drive Extender have such a big negative impact that the performance of my RAID array will be totally killed? I can live with lower performance (30%) as long as it will not totally block my system. But good to know; I will run some tests using the different configurations.

     

Regarding the file corruption issue: well, I hope that Microsoft will work on this, because they promote multiple-disk solutions and don't support RAID. So if they don't fix it, WHS will be dead anyway...  ;-)

     

    Ah, by the way: I had another question (on the first page of this thread) regarding the single system disk configuration. Can you help me there please?

     

    //Alex

     

     

     

    Monday, March 24, 2008 10:35 AM
  • Drive Extender imposes overhead similar to software RAID, i.e. what you can set up through the Disk Management MMC snap-in. You won't see any performance benefit from a RAID array unless it's the only "disk" in your WHS PC.

    Your other questions are all answered elsewhere in the forum. Do some searches.
    Monday, March 24, 2008 12:07 PM
    Moderator
  • I was really excited with the WHS in the beginning - it has a lot of nice features, but I'm rather disappointed by the storage implementation (Drive Extender).

     

It's been a long time since I actually used WHS, but as far as I remember the biggest performance hit is that any write to the system goes to the primary disk (drive C) and afterwards the system load-balances the new data to the "data disks" - this means there is a VERY high impact on the primary disk, and no matter how fast your data disks are, they are really not going to be used during writes. [Please correct me if I'm wrong, but this is my understanding of the workings of WHS.] I don't really know about reads - I hope they are done directly from the data disks, but I'm not sure!

     

All in all, I've been using Windows Server 2008 as a file server since my original post, as speed has been my primary concern. I really miss the web interface with the plug-in features, but comparing speed on the same hardware, W2K8 gave 2-10 times the disk performance of WHS. Read speeds on W2K8 are also much higher than on WHS.

     

Furthermore, as stated in another post here, using RAID5 (or higher) protects all data - not just specific folders :-)

     

    Best,

    -Brian

     

    Tuesday, March 25, 2008 9:38 AM
  • Brian,

     

I agree that this architecture is really a pain for writes, and if you need high write performance, WHS is maybe not the best choice.

     

As far as I understand, the primary disk drive is used to store the tombstones (a kind of shortcut). Whenever you write a file to WHS, it is first written to the primary disk and later moved to the secondary disk drives. However, WHS keeps a shortcut to the original file on the primary disk. That's also the reason why the primary disk drive should be big enough to support writing large files (it should have enough free disk space for the largest possible file transfer).

     

For reads, I think there will be an access to the shortcut for every file, but the data blocks themselves should be delivered by the disks where the file is physically stored. So the read performance should not be that bad... Am I right??? Hm?

     

I think that if reads work as described above, I can live with that, because I don't require high write performance. And for ripping my CDs or DVDs, the performance of a single disk is good enough. My focus will mainly be on read performance, as I will use WHS as central storage for my media center. And 30-70 MB/s is good enough...

     

    Cheers,

    Alex

     

     

    Tuesday, March 25, 2008 10:53 PM
  • Alex, writes on multi-disk systems are significantly impacted by Drive Extender. You will generally see write speeds under 5 MB/s on a gigabit network. Reads are more in line with a single-disk read over a network.
    Wednesday, March 26, 2008 1:34 AM
    Moderator
  • Hi abobader,
    I tried to follow the procedure stated here but am confused:
    "Now open Explorer, goto C:\fs copy the C folder to your desktop. Do not copy the System Volume Folder" - c:\fs on my install dis not have C:\fs c.  It had C:\fs H.  I proceeded using H instead of C.  However, you said not to copy the system volume. - There really was not anything in this folder that I was able to copy.

After I proceeded, converted to GPT and then to dynamic, and formatted, I remounted the dynamic volume to C:\fs\H. Restarted; the console saw one drive missing and the other not yet in the storage pool.

I tried the remainder of your procedure and ran fsutil reparsepoint, but it said Commit.dat was not accessible. I proceeded to delete everything from the folder and am attempting a restart now.

    Suggestions?

    Thanks in advance
    Tuesday, December 22, 2009 3:42 PM
Dynamic disks are not supported in WHS - even if they seem to work now, a server reinstall will usually not succeed. Also, with this kind of manual intervention in the mechanics of WHS, you increase the risk of losing data in the future.
    Best greetings from Germany
    Olaf
    Thursday, December 24, 2009 7:29 AM
    Moderator