What's new in Drive Extender for "Vail"

    General discussion

  • Drive Extender is a storage technology introduced in the first release of Windows Home Server. The first generation of the technology was file based and worked on top of "vanilla" NTFS volumes using reparse points. To address the customer feedback we have received and improve the system's resiliency to partial drive failures (seen many times by our support team), the Drive Extender technology was updated to use block-based storage below the file system, similar to software RAID systems.

    The following isn't an exhaustive list, but does try to enumerate the major new features as well as features which are no longer supported in the “Vail” version of Drive Extender:

     

    Features carried over from the previous release:

    ·         Duplication can be turned on/off per folder.

    ·         Duplicated folders can survive a single hard drive failure.

    ·         Storage pool can be easily expanded using different drive types and various sizes.

    ·         Graphical representation of storage usage (AKA the pie chart) - isn't present in the beta, but is planned for the next milestone.

     

    New/Improved features:

    ·         For duplicated folders, data is duplicated in real time to two separate drives - there is no hourly migration pass.

    ·         File system level encryption (EFS) and compression are now supported for Drive Extender folders.

    ·         File conflicts are gone; duplication now works as intended for files in use, since it is performed at the block level.

    ·         The remaining amount of data to synchronize/duplicate is reported per storage pool.

    ·         All storage operations are executed in the background without blocking other server operations. Specifically, drive removal can be issued without impacting the online state of shares.

    ·         Drives in a storage pool can be named with a custom description to enable physical identification of the drive in the server.

    ·         Drive serial number and exact connection type is reported for each drive.

    ·         Drives which are bigger than 2 TB can be added to a storage pool.

    ·         iSCSI storage devices can be added to a storage pool.

    ·         The system drive can be excluded from the storage pool.

    ·         A new low-level storage check and repair diagnostic operation was added.

    ·         All storage operations are performed with very low I/O priority to ensure they don't interfere with media streaming.

    ·         A new "folder repair" operation is available which runs chkdsk on the folder's volume.

    ·         To protect against silent storage errors (bit flips, misdirected writes, torn writes), additional information is appended to each 512-byte sector stored on a drive. In particular, each sector is protected by a CRC checksum, which enables Drive Extender to detect data read errors, perform real-time error correction and self-healing (up to 2 bit errors per sector if duplication is disabled, and any number of bit errors if duplication is enabled), and report the errors back to the user and application. The overhead for this additional data is roughly 12% of drive space.

    ·         Data drives in storage pools can be migrated between servers, and appear as a non-default pool.  A non-default pool can be promoted to a default pool if no default pool exists.
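    As a rough illustration of the per-sector checksumming idea above (a sketch only: the real on-disk metadata is richer than a bare CRC32, which is why the overhead is ~12% rather than under 1%), detecting a silent error boils down to comparing a stored checksum against a recomputed one:

```python
import zlib

SECTOR = 512

def protect(data: bytes) -> list[tuple[bytes, int]]:
    """Split data into 512-byte sectors and attach a CRC32 to each."""
    sectors = [data[i:i + SECTOR] for i in range(0, len(data), SECTOR)]
    return [(s, zlib.crc32(s)) for s in sectors]

def verify(stored: list[tuple[bytes, int]]) -> list[int]:
    """Return indices of sectors whose checksum no longer matches (silent errors)."""
    return [i for i, (s, crc) in enumerate(stored) if zlib.crc32(s) != crc]

disk = protect(b"A" * 2048)                  # four clean sectors
disk[2] = (b"A" * 511 + b"B", disk[2][1])    # simulate a silent bit flip in sector 2
print(verify(disk))                          # -> [2]
```

    A plain CRC like this only detects errors; the correction described above would come from the duplicate copy or from additional error-correction data, which this sketch does not model.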

    Deprecated features:

    ·          A data drive from a storage pool cannot be read on a machine not running the “Vail” server software.

    ·          Data isn't rebalanced across drives to ensure even distribution; instead, data allocation attempts to keep drives evenly used. A periodic rebalance operation is being considered for the next version.

    Known interop/support issues:

     

    ·         As with other software RAID solutions, Drive Extender isn't supported with BitLocker.

    ·         Drive Extender cannot share the same drive with other software-based RAID systems (such as Microsoft Dynamic Disks).

    ·         Running low-level software storage tools—for example, defragmentation, full drive encryption, or volume imaging—on server folders may cause issues. These tools have not been fully tested in this release. Please avoid running these tools on the server.

    ·         Internally, the “Vail” software has been tested with up to 16 hard drives and with up to 16 TB of total storage capacity. We’re aware of a number of bugs that occur beyond these limits, so please keep your beta installations under 16 drives and 16 TB total drive space.


    Program Manager, Windows Home and Small Business Server Team
    Tuesday, April 27, 2010 7:13 PM

All replies

  • Mark,

    Thanks for the feature list.  This sounds really good.

    One quick question - can one exclude the system drive during the installation process? Or is this done by installing with only one drive in the machine?  I did not see this mentioned in the release notes.

    Thanks,

    Chad Wagner

    Tuesday, April 27, 2010 9:13 PM
  • Could you address the following concerns:
    - In a block-based structure a single file can be spread across multiple drives, potentially across all drives, correct?
    - That means that if duplication is turned off, a single drive failure can result in losing more data than the failed drive contained. Potentially, a single drive failure can render the whole data store useless if all files were spread across all drives.
    - If duplication is on, the simultaneous death of two drives can theoretically result in the loss of all storage.

    This is a deterioration of reliability compared to WHS v1. With version 1 I know that I lose data only on the dead drives; all other files will be intact.

    - Suppose I have a motherboard with 8 SATA ports and all ports are connected to Drive Extender drives. If the motherboard dies, I will not be able to access my data until I find a new motherboard with the same number of ports? Meaning that in order to read my data on another Vail machine I have to connect all the drives from the old machine, and if I don't have suitable hardware (motherboard or SATA controllers), my data is not accessible. Correct?
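    To put rough numbers on the first concern, here is a back-of-the-envelope sketch. It assumes fixed-size chunks placed uniformly at random across the pool, which is my guess at the behavior, not Vail's documented placement policy:

```python
import math

def survival_probability(file_gb: float, n_drives: int, chunk_gb: float = 1.0) -> float:
    """Chance an unduplicated file survives the failure of one of n_drives,
    assuming its chunks are placed uniformly at random across the pool."""
    chunks = max(1, math.ceil(file_gb / chunk_gb))
    return ((n_drives - 1) / n_drives) ** chunks

# A 10 GB file on an 8-drive pool: every one of its ~10 chunks must miss
# the dead drive, so the file survives only about a quarter of the time.
print(round(survival_probability(10, 8), 3))   # -> 0.263
```

    Under this assumption, the larger the file and the bigger the pool's spread, the more likely a single drive failure takes the file with it.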
    Tuesday, April 27, 2010 9:17 PM
  • Hi there,

    You can exclude the system hard drive during installation with the following steps:

    1. Install the server, wait for the OS install to complete.

    2. Close the Initial configuration window with Alt-F4 (the registry keys below need to be added before Initial configuration runs).

    3. Create the following registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Server\Storage

    4. Create the following two DWORD values set to 1: FormatAndAddAllNonUSB1394ToStorage, ExcludeSystemDiskFromStorage

    Please note that with these values, all non-USB/1394 drives will be automatically formatted and added to the default storage pool (no warning about data loss), and the system hard drive will be excluded. Unfortunately, there is no way to trigger this behavior with an unattended installation file.

    thanks,

    Mark Vayman

     


    Mark Vayman, Program Manager, Windows Home and Small Business Server Team
    Tuesday, April 27, 2010 10:11 PM
  • "To protect against silent storage errors (bit flips, misdirected writes, torn writes), additional information is appended to each 512-byte sector stored on drive. "

    What about the newer 4k sector drives?

    Tuesday, April 27, 2010 10:17 PM
  • Hmm... I'm not entirely sure how I feel about this. One of the things that initially drew me to Drive Extender over RAID was that even after a catastrophic server failure it was still really easy for me to grab the files off the disks, and I have used this capability before. I'm rather disappointed that it is being removed. Could we perhaps at least get some sort of drive explorer tool to use on a non-Vail machine?

    In my mind, you are removing one of the main positive differentiating factors in comparison to RAID. Why?

    That being said, I'm not familiar enough with what you've gained by moving to block based storage to judge the decision too harshly, but am still concerned.

    Tuesday, April 27, 2010 10:19 PM
  • I do like the sound of all the new resiliency features, though. I recently went through months of angst trying to figure out what was wrong with my server, only to finally identify a faulty Seagate drive that needed a firmware update.

    Tuesday, April 27, 2010 10:24 PM
  • The new Drive Extender design is compatible with advanced format (AKA 4K sector) hard drives. While these drives use a 4K sector size internally, they report a 512-byte sector size to the upper storage layers and the OS.

    Thanks,


    Mark Vayman, Program Manager, Windows Home and Small Business Server Team
    Tuesday, April 27, 2010 10:33 PM
  • All consumer-grade 4K drives on sale today emulate 512-byte logical sectors, so DE works the same way on them.

    We're well aware of this trend and are working closely with drive manufacturers on testing Vail on various flavours of 4K sector drives.


    Bulat Shelepov, Test Lead (Drive Extender), Windows Home and Small Business Server Team
    Tuesday, April 27, 2010 10:36 PM
  • cavediver, gmurray

    Since we have moved from a file-based to a block-based approach, your concerns above are valid. The decision to change our approach did not come easily. In addition to the reasons Bulat mentioned in other threads (application compatibility issues, and the inability to duplicate files that are in use or to provide real-time duplication), there were major architectural changes in the OS between Server 2003 (which was used by WHS v1) and Server 2008 R2 (which is used by Vail). These changes made it almost impossible to reuse the same file-based approach for Drive Extender in Vail. Further, when we took a look at the feedback and the bug reports, we discovered that even after the v1 data corruption bug was fixed, silent hardware errors still caused several data integrity issues. Silent errors are especially common in the commodity hard drives that most of our customer base is using, and it was very important to us to have built-in detection and correction of such errors. The only way we saw to make this possible was to move to the block level.

    thanks,


    Mark Vayman, Program Manager, Windows Home and Small Business Server Team
    Tuesday, April 27, 2010 10:40 PM
  • Those do sound like valid drivers behind the transition, even though I didn't really experience the negative effects myself, but I still feel that the ability to recover files from the hard drives without needing a special OS or hardware was a major selling point (at least for me) for Drive Extender. When I was considering different options for networked home storage I came very close to going with a Linux-based RAID solution, until I started reading about Drive Extender. I had been concerned by the fact that it was often hard to recover from a failed RAID controller unless you could reproduce the exact hardware configuration, and that even software RAID solutions could be tricky to recover.

    Given these factors DE seemed very attractive to me. I think there is some elegance in a simple file based approach that appeals to technical people that couldn't be bothered becoming RAID proficient, or to people that have been burned by RAID in the past. I guess it just saddens me that the simple solution has flaws ;-)

    I'd like to learn more about the new solution (any whitepapers yet?) and I would be interested if there is anything you can do to keep the system transparent to users. There's nothing like being able to yank a drive and look at the files. Moving away from that simplicity obfuscates things a bit for those of us that are reassured by such things ;-) 

    Unrelated, but another thing that really sold me on WHS and DE was the software HP built on top for MediaSmart server line, so I'm glad to see you pushing the product in the direction their value adds have been pointing.

    Wednesday, April 28, 2010 12:23 AM
  • LOL... "..please keep your beta installations under 16 drives and 16 TB total drive space."

    I'll try to keep my massive 'one person, 3 computer' home network within those parameters.  :)

    Thanks for the compact outline of new and dropped features etc. :)

     

    Wednesday, April 28, 2010 1:40 AM

    gmurray,

    We will make a technical brief available at some point (I cannot really commit to dates, as I need to stay focused on fixing the bugs you find). We are considering providing the ability to view the files on the disk in a read-only mode at some point in the future, but probably after the product RTMs. We did our best to keep the simplicity of usage, to enable novice users to expand storage and, yes, even to handle drive failures. That is one of the reasons we chose to continue with a mirrored approach as opposed to parity (which involves lengthy computations in the rebuild process).
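    To illustrate the mirrored-versus-parity trade-off in a toy model (not the actual Drive Extender implementation): rebuilding a lost drive under XOR parity requires reading and combining every surviving drive, while a mirror rebuild is a plain copy from the one surviving replica.

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Three data "drives" plus one XOR parity drive (RAID-4 style toy model).
drives = [bytes([i] * 4) for i in (1, 2, 3)]
parity = reduce(xor, drives)

# Parity rebuild of a lost drive: read *every* surviving drive and XOR them all.
survivors = drives[1:] + [parity]
rebuilt = reduce(xor, survivors)
assert rebuilt == drives[0]

# Mirror rebuild: a single read from the surviving replica, no computation.
mirror_a = mirror_b = bytes([9] * 4)
recovered = mirror_a
assert recovered == mirror_b
```

    Parity saves raw space, but every rebuilt byte touches all surviving drives; mirroring pays double the space up front and keeps recovery trivial.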

    Thanks,


    Mark Vayman, Program Manager, Windows Home and Small Business Server Team
    Wednesday, April 28, 2010 2:37 AM
  • @bulatS: I think the biggest problem I have is that the option to turn off duplication for a folder has far more impact than in v1, to the point that I'm surprised MS is providing the option at all.

    Personally, I'm far too married to WHS now and will buck up and buy a couple more 1.5-2.0 TB drives and be done with it, but providing some additional protection for unduplicated folders, or removing the option entirely seems like the safest approach.


    rtk
    Wednesday, April 28, 2010 2:39 AM
  • @rtk -- removing the option entirely is exactly what I *personally* think is the best idea. I'm very paranoid about my data, and my server at home has duplication persistently enabled on all shares, plus I have an identical offsite copy as well. However, a) I know a lot about storage and computers in general and b) I am willing to pay an extra $500 for hardware if this makes my data bulletproof. The same cannot be said about the overwhelming majority of our customers, though. Sad as it is, I'm building this product for a user very different from myself first, and for myself (and probably most of the other guys who have been active on this forum these past days) second.


    Bulat Shelepov, Test Lead (Drive Extender), Windows Home and Small Business Server Team
    Wednesday, April 28, 2010 3:03 AM
  • Agreed, of course, but one of the best "features" of v1 was the ability to balance the risk. I've got 4.5 TB on 6.5; less than 1 TB is critical, irreplaceable, duplicated, and backed up beyond WHS. The rest just feeds my hoarding/collection of media that, if lost, will be rebuilt on an ongoing basis. It's that data that is currently unduplicated. On v1, with my drives balanced to percentages, I'm only risking 700-800 GB of back episodes and movies; on v2 the entire 3.0 TB+ will be at risk.

    I'm sure I'm preaching to the choir, I'll stop. ;-)


    rtk
    Wednesday, April 28, 2010 3:46 AM
  • Mark,

    Using a separate system disk seems like the way to go. I was thinking of putting a 7200 RPM 160 GB notebook drive in my server as the system disk. This would give me a 7200 RPM system drive and also seems like it would make it easier to recover from a failed system disk. Is this correct thinking? If the system disk failed I would just need to replace the system drive and restore from the daily server "system" backup. I would then be ready to roll without any effect on the storage pool, right?

    Chad

    Wednesday, April 28, 2010 4:22 AM
  • > on v2 the entire 3.0 TB+ will be at risk.

    This is not entirely true. If a drive dies, and a share is unduplicated, you will lose only the files that entirely or partially reside on the dead drive. All other files will remain intact.


    Bulat Shelepov, Test Lead (Drive Extender), Windows Home and Small Business Server Team
    Wednesday, April 28, 2010 4:24 AM
  • I agree with Bulat here; I think duplication should be forced down end users' throats. Particularly with the new Drive Extender, which (as has been beaten to death) can lose everything in a single drive failure if duplication isn't turned on for all shares.
    I'm not on the WHS team, I just post a lot. :)
    Wednesday, April 28, 2010 4:31 AM
  • And we do try to encourage duplication whenever possible:

    ·          For the default folders, duplication will kick in automatically when a second drive is added.

    ·          The default duplication policy when adding a folder is duplication on.

     

    Thanks,


    Mark Vayman, Program Manager, Windows Home and Small Business Server Team
    Wednesday, April 28, 2010 4:58 AM
  • Chances are, the majority of files larger than 1 GB would have a part on the dead drive. If so, all of them will be lost.

    As for duplication for all data - it is just an unreasonable requirement. There are reasons RAID types other than RAID 1 exist. I don't think it's realistic to expect people to double their number of hard drives just to be able to use Vail.
    Besides, duplication now means more than doubling: Drive Extender itself has ~12% overhead, so a duplicated file consumes roughly an extra 124% of raw space. That price is more than half of your drive space just to protect against a single drive failure.
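    The arithmetic behind that figure, taking the ~12% overhead literally (an approximation; the real on-disk accounting may differ):

```python
OVERHEAD = 0.12  # ~12% per-sector metadata, per the announcement (an approximation)

def raw_bytes_needed(file_bytes: int, duplicated: bool) -> float:
    """Raw pool space consumed by a file, counting mirror copies and metadata."""
    copies = 2 if duplicated else 1
    return file_bytes * copies * (1 + OVERHEAD)

one_tb = 10 ** 12
# Duplication plus ~12% overhead: each stored terabyte costs ~2.24 TB of raw
# disk, i.e. the "extra 124%" figure above.
print(round(raw_bytes_needed(one_tb, duplicated=True) / one_tb, 2))   # -> 2.24
```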

    Your target, the average Joe, may not realize the price he is paying until he finds out the hard way: either by his HDD space shrinking too fast with duplication on, or by losing all his data without it.

    I myself will be staying with WHS v1 plus FlexRaid.

    By the way, regarding this:

    "major architectural changes in the OS between Server 2003 (which was used by WHS v1) and Server 2008 R2 (which is used by Vail). These changes made it almost impossible to re-use the same file based approach for Drive Extender in Vail."
    FlexRaid's author also made FlexRaid View, which is essentially the same thing as Drive Extender: it combines data from different drives into a set of volumes at the file level, with each volume seamlessly spread across multiple HDDs. It works fine on Server 2008 R2.

    Wednesday, April 28, 2010 5:27 AM
  • Is DE only aware of blocks, or does it have some knowledge of the files above? If the new DE knows about files, will it try to keep blocks from the same file on the same HDD(s)? The ability to read the data on the disks from other OSes like XP, Vista, and 7 should be there on release date, or at least as a preview, for me to consider the new WHS; otherwise I might as well build something around OpenSolaris ZFS.

    The NTFS approach in v1 has limits, but being able to just grab the disks in case of a failure and pull all the surviving data off on any NTFS-aware computer is a really nice feature for people with or without IT skills. I hope it will be as easy with v2...

    Wednesday, April 28, 2010 8:02 AM
    Can you clarify how the splitting of files into 1 GB chunks will work? There seems to be confusion about this, as it seems to indicate that complete files are now scattered across multiple drives. If duplication is enabled for a folder, then regardless of where the 1 GB chunks reside, each chunk exists on two physical disks; is this correct?

    Granted it's not supported, but this new design seems like it would prevent a user from taking a single drive and trying to recover data off it on a non-Vail system, since the drive may not hold a complete file if the files are very large.

    I would really like the option to disable this 1 GB chunking.

    If chunking were disabled and complete files were kept together on each disk, that would open the door for MS to create a "Vail live CD" that the user could boot from any PC. This shell copy of the Vail OS would load the driver needed to read the disk format. It would be a read-only environment that lets the user read the data off the drive and copy it to a network share.

    Just thinking out loud here.

    Additionally, protecting a user's data via duplication effectively cuts storage size by 50% (this is fine). Adding a further 12% overhead on top of that seems like a pretty hefty penalty!

     

    Wednesday, April 28, 2010 12:48 PM
  • Chad,

    I couldn't agree with you more. I have 5 TB on my current WHS and shudder to think of my primary drive failing. At this point a WHS reinstall would take forever to rebuild itself (IMHO, based on when I had to do it when my WHS was only 1 TB in size).

    Ted

    Wednesday, April 28, 2010 1:51 PM
  • hmm... yeah. I find all of this a bit disturbing. If you turn duplication off for a folder, the system should at least try to ensure that an individual file is fully contained on one physical volume. This is part of the semantics that make DE appealing to me. RAID is too complicated, too expensive, and too opaque to someone who doesn't understand the lower-level details of how parity and striping work, and you sure wouldn't want to force it on the layperson who just wants to add some network storage to their household (the presumed target market for the device?)

    DE was very appealing because the concepts, at least, were extremely easy to grasp. Your file is on one disk or another, and if its folder has duplication enabled then it must be on two separate physical volumes. That's very transparent, and it didn't matter how much scurrying the system needed to do under the covers to maintain it. To the user, there were very easy-to-understand rules about what would happen to their files. I think you'll be taking a step back if you stray from this user-facing transparency. It should always be completely clear what will happen when a drive fails, and what my recovery options will be. The user shouldn't have to understand the underlying technology to appreciate the impact of a drive failure. DE 1 had simple enough semantics, I believe, that even a layperson could anticipate what would happen on drive failure.

    Some people seem to be crying for RAID support in WHS, but I think this is strange. Why use WHS if you really just want a standard server with a raid controller? I think the entire benefit of using WHS is that DE 1 was BETTER than raid in several important categories:

    1. Transparency
    2. Hardware/Software agnostic recovery options.
    3. Simplicity in understanding (even for non-technical users).
    4. No expensive hardware required.
    5. Asymmetric disk size/branding.
    6. probably forgetting other metrics....
    Not only are these categories important, but I would argue these are the ONLY important categories to the target market of WHS. A hardcore IT guy might geek out about all of the options with RAID and know how many hard-drives he/she needs in order to survive a multiple drive failure, and could go on about the performance benefits of RAID, but this is completely irrelevant to the target user-base of WHS (I think).

    In my mind, one of the main reasons WHS is good is because it isn't like a standard server with RAID. It uses much more accessible semantics and is easily discoverable and maintainable (across the board, not just confined to DE). So the solution to making DE better is not to make it more like RAID. Parity and striping are not important compared to keeping the user facade simple and transparent.

    It has been pointed out that a block-based approach fixes other issues that were looming in the technical details of the product, and I agree that the benefits of the new approach sound very appealing. But the fact that the changes to the user-facing side of the technology are detrimental to most of the above DE benefits is pretty concerning.

    Couldn't the system try to maintain the same user-facing semantics, so that files aren't split between physical volumes and the drives are examinable without being attached to a Vail server? I think these things are pretty important, and they are part of why a lot of us went with DE over a RAID-based product.

    To those clamoring for RAID in WHS: why? What am I missing? Do you just want a cheaper server product, or do you like the software facade of WHS?

    Wednesday, April 28, 2010 3:41 PM
  • Are there any performance metrics comparing the original drive extender tech to the new one? My primary complaint previously was really slow access times compared to the same setup not going through WHS. I'm hoping to see a perf improvement (not just for media streaming, but for all file I/O, because most of what I have is VIDEO_TS DVD rips that don't go through DLNA streaming).
    Wednesday, April 28, 2010 3:59 PM
  • I find performance is somewhat better for internal drives or drives on high-speed external buses. For USB 2.0 drives, performance is still not great.
    I'm not on the WHS team, I just post a lot. :)
    Wednesday, April 28, 2010 4:10 PM
  • This is a very concerning thread.  I cannot feasibly turn on duplication for some of my folders.  If losing a single drive now means losing the 8-10TB of data residing in my one folder, I would never "upgrade" to Vail.
    Wednesday, April 28, 2010 5:01 PM
  • Question to whoever reads this from the WHS team (or any WinServer-knowledgeable MVP): why do you guys seem to avoid sharing work with the Windows Server team? You sort of develop your own drive extenders and other low-level things instead of using WinServer solutions or co-developing them with the WinServer team. Don't you think you might save money and effort, and provide better solutions to both WHS and WinServer users, if you shared your work with each other?
    Wednesday, April 28, 2010 5:10 PM
  • Hi there,

    You can exclude the system hard drive during installation with the following steps:

    1. Install the server, wait for the OS install to complete.

    2. Close the Initial Configuration window with Alt-F4 (the registry keys below need to be added before Initial Configuration runs).

    3. Create the following registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Server\Storage

    4. Create the following two DWORD values set to 1: FormatAndAddAllNonUSB1394ToStorage, ExcludeSystemDiskFromStorage

    Please note that with these values, all non-USB/1394 drives will be automatically formatted and added to the default storage pool (no warning about data loss), and the system hard drive will be excluded. Unfortunately, there is no way to trigger this behavior with an unattended installation file.
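For reference, steps 3 and 4 can be sketched as `reg add` commands, using the same key and value names given in the steps above (this is a reader's restatement, not an official script; run in an elevated command prompt on the server before Initial Configuration):

```shell
:: Sketch of steps 3-4 above. Same key/value names as in the posted steps.
reg add "HKLM\SOFTWARE\Microsoft\Windows Server\Storage" /f
reg add "HKLM\SOFTWARE\Microsoft\Windows Server\Storage" /v FormatAndAddAllNonUSB1394ToStorage /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows Server\Storage" /v ExcludeSystemDiskFromStorage /t REG_DWORD /d 1 /f
```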

    thanks,

    Mark Vayman

     


    Mark Vayman, Program Manager, Windows Home and Small Business Server Team

    Mark,

    Using a separate system disk seems like the way to go.  I was thinking of putting a 7200 RPM 160 GB notebook drive in my server for the system disk.  This would give me a 7200 RPM system drive and also seems like it would make it easier to recover from a failed system disk.  Is this correct thinking?  If the system disk failed I would just need to replace the system drive and restore from the daily server "system" backup.  I would then be ready to roll without any effect on the storage pool, right?

    Chad

    Has Chad's question been answered?  I am curious about this as well.
    Wednesday, April 28, 2010 5:22 PM
  • https://connect.microsoft.com/WindowsHomeServer/feedback/details/554996/allow-disabling-of-file-chunking-striping-for-non-duplicated-folders

     

    I have added feedback for disabling striping for non-duplicated folders.  Please vote it up.

    Wednesday, April 28, 2010 6:06 PM
  • Chad's question hasn't been answered by Microsoft, no. Personally I wouldn't try for an unattended install; I'd size the system partition as large as I could in setup, then after installation concluded I'd exclude the system drive from the storage pool.
    I'm not on the WHS team, I just post a lot. :)
    Wednesday, April 28, 2010 6:07 PM
  • Ken,

    This can be done after installation completes as well?  This would work for me - I just wanted to make sure it was excluded and didn't know if it was possible after installation.  I am trying this out now in Virtualbox...

    Thanks,

    Chad

    Wednesday, April 28, 2010 6:17 PM
  • Chad,

    Yes, this can be done: simply select the system hard drive, right-click, and choose Exclude.

    thanks,


    Mark Vayman, Program Manager, Windows Home and Small Business Server Team
    Wednesday, April 28, 2010 6:30 PM
  • So, since this is now block level, will programs like Carbonite work now (it seemed to have troubles with the virtual D: drive)?  Have you tested with it, by any chance?  Currently, I need to use SyncToy to a non-pooled drive and point Carbonite to it.

    Keith

     

    Wednesday, April 28, 2010 8:45 PM
  • Keith,

    We have not tested with them, but they should work, as we don't use reparse points any more. It would be great if you could test and report back.

    thanks,


    This post is provided AS IS and confers no rights. Mark Vayman, Program Manager, Windows Home and Small Business Server Team
    Wednesday, April 28, 2010 8:52 PM
  • I have a couple of questions regarding recovery from a failed drive. Hypothetical Vail setup: system drive + 2 storage pool drives with all data duplicated. One storage pool drive fails.

    1) Will all data be immediately available in the shares?

    2) Will the data automatically re-duplicate when a replacement drive is added back to the storage pool?

    Wednesday, April 28, 2010 10:56 PM
  • Yes and yes. I believe this is stated above somewhere...

    For your second question, I think data will automatically reduplicate as soon as the failed drive is removed (assuming sufficient disk space), without waiting for the replacement to be added.


    I'm not on the WHS team, I just post a lot. :)
    Wednesday, April 28, 2010 11:10 PM
  • Yes and yes. I believe this is stated above somewhere...

    For your second question, I think data will automatically reduplicate as soon as the failed drive is removed (assuming sufficient disk space), without waiting for the replacement to be added.


    I'm not on the WHS team, I just post a lot. :)

    So we will still have to go through the drive removal process for the data to be re-duplicated? I'm asking because I had a problem with one of my current WHS machines when one drive failed completely and showed as missing. The Drive Extender Migrator service stopped working. I couldn't remove the missing drive or add new drives to the pool. I reported a Connect bug back in November and just last week got a suggestion from MS to remove the missing drive's entries from the registry, which finally solved the problem. I don't want to have to go through something like that again.
    Wednesday, April 28, 2010 11:22 PM
  • I have a couple of questions regarding recovery from a failed drive. Hypothetical Vail setup: system drive + 2 storage pool drives with all data duplicated. One storage pool drive fails.

    1) Will all data be immediately available in the shares?

    2) Will the data automatically re-duplicate when a replacement drive is added back to the storage pool?


    For #1 - yes it will. Since duplication is done in real time in this version, DE will pick the other copy, which will be available for read and write.

    For #2 - new data written to the disk will be automatically duplicated if other disks exist. Existing data will be duplicated once the missing drive is removed, and other drives are available.

    thanks,


    This post is provided AS IS and confers no rights. Mark Vayman, Program Manager, Windows Home and Small Business Server Team
    Wednesday, April 28, 2010 11:23 PM
  • The usual answer to a failed drive that you can't remove is to shut down, physically remove it, and restart. Then the console will usually let you remove it from the pool (possibly with warnings about duplication and/or loss of backups). Did you not try that?
    I'm not on the WHS team, I just post a lot. :)
    Wednesday, April 28, 2010 11:36 PM
  • The usual answer to a failed drive that you can't remove is to shut down, physically remove it, and restart. Then the console will usually let you remove it from the pool (possibly with warnings about duplication and/or loss of backups). Did you not try that?
    I'm not on the WHS team, I just post a lot. :)

    Yes. I tried everything. The only thing that fixed the problem was editing the registry.  I just want to assure myself that the same scenario can't happen with Vail, since it rendered my HP MSS useless for almost 6 months.
    Wednesday, April 28, 2010 11:54 PM
  • The usual answer to a failed drive that you can't remove is to shut down, physically remove it, and restart. Then the console will usually let you remove it from the pool (possibly with warnings about duplication and/or loss of backups). Did you not try that?
    I'm not on the WHS team, I just post a lot. :)

    Yes. I tried everything. The only thing that fixed the problem was editing the registry.  I just want to assure myself that the same scenario can't happen with Vail, since it rendered my HP MSS useless for almost 6 months.

    This should not be a problem in Vail, I encourage you to try it out.

    thanks,


    This post is provided AS IS and confers no rights. Mark Vayman, Program Manager, Windows Home and Small Business Server Team
    Thursday, April 29, 2010 12:18 AM
  • Chad's question hasn't been answered by Microsoft, no. Personally I wouldn't try for an unattended install; I'd size the system partition as large as I could in setup, then after installation concluded I'd exclude the system drive from the storage pool.

    Is it possible to do a DISKPART > EXTEND to resize the system partition?

     

    Thursday, April 29, 2010 4:29 AM
  • I don't know, and won't be able to find out for a few more days; why not give it a try in the meantime and see? This is, after all, a beta, intended for testing, finding bugs and pain points, etc.
    I'm not on the WHS team, I just post a lot. :)
    Thursday, April 29, 2010 2:46 PM
  • You can extend the system partition after the system drive has been excluded from the storage pool.
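As a sketch of the DISKPART > EXTEND approach asked about above (the volume number here is hypothetical; use LIST VOLUME to identify the actual system volume, and note this assumes unallocated space adjacent to the partition):

```shell
rem extend-system.txt - run as: diskpart /s extend-system.txt
rem Volume number below is hypothetical; check LIST VOLUME output first.
list volume
select volume 1
extend
```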

    thanks,


    This post is provided AS IS and confers no rights. Mark Vayman, Program Manager, Windows Home and Small Business Server Team
    Thursday, April 29, 2010 3:05 PM
  • Thanks Ken, I will do that when I actually have a test platform.  I am still cobbling together a test rig and looking for more drives to give DE a proper showing.

    And thanks Mark, that is good news.  I will definitely be doing that with my system partition.

     

    Friday, April 30, 2010 2:45 AM
  • Let us know if you hit any issues,

    thanks,


    This post is provided AS IS and confers no rights. Mark Vayman, Program Manager, Windows Home and Small Business Server Team
    Friday, April 30, 2010 3:28 AM
  • I quite like the new Drive Extender in Vail, for a simple reason: the demigrator no longer kicks in periodically and uses up all resources for a few minutes. The latter is a problem in WHS v1, even on a reasonably powerful machine (HP EX 470). I often use the server as a source for movie files and watch them on a client PC. Streaming doesn't work, because so far I couldn't find a streamer or a streaming client that can work with either BluRay ISO images or high-res MKV files. Therefore I use SMB/shares, which are very susceptible to the demigrator kicking in.

    Yesterday I tried to watch a BluRay ISO file mounted into a virtual drive on the client PC. With Vail, I didn't get a single stutter, even though the Vail test server is the "old" and less powerful Scaleo Homeserver. Today I was back on my WHS v1 machine and had to take a 5 minute break due to demigrator messing up the playback.

    As for striping: I seem to have a different usage pattern than most, because it doesn't affect me at all. Like many, I have two categories of files: Those I cannot get back (such as pictures or videos taken myself), and those that are a lot of work to get back (e.g. the music collection).

    I use duplication on all files, because "a lot of work" is going to cost me more than additional drives. For the more critical files, I additionally have a permanent backup disk attached to WHS, and yet another disk offsite that periodically gets refreshed.

    My biggest scare is that I somehow get silent data corruption and over time kill all three storage locations. I actually lost one video file this way. So, I am very happy about the additional CRC and corruption checks of Vail.

    Regards, Martin

    Friday, April 30, 2010 8:29 PM
  • ...
    reasonably powerful machine (HP EX 470)
    ...

    I can't believe you used the phrases "reasonably powerful" and "HP Ex470" in the same sentence. :) It meets the minimum requirements for V1, yes. But "reasonably powerful"?

    I'm not on the WHS team, I just post a lot. :)
    Friday, April 30, 2010 8:53 PM
  • I often use the server as a source for movie files and watch them on a client PC. Streaming doesn't work, because so far I couldn't find a streamer or a streaming client that can work with either BluRay ISO images or high res MKV files. Therefore I use SMB/Shares which are very suspectible to demigrator kicking in.

    OT: XBMC on a lowly Atom Ion, fanless and silent can output 1080p and 5.1 without breaking a sweat, and the only stutter I get is when the server's doing heavy processing such as extracting large files.

    Never even noticed the demigrator running, but granted this is on a homebuilt WHS with older but higher end hardware (x2 4600+ with 4 GB ram and all sata drives).

    Like many, I have two categories of files: Those I cannot get back (such as pictures or videos taken myself), and those that are a lot of work to get back (e.g. the music collection).

    I'd suggest that most people have a third category, a large collection of recorded tv and movies. This is a "nice to have" collection and if 10-20%+ of it disappeared in a drive failure, it'd cause little more than an "oh well". The reaction to losing all of it would be a little more dramatic.

    Maybe I'm wrong, but I don't see how a home user gets to 10 TB plus without a touch of the media hoarding disease. ;-)


    rtk
    Saturday, May 01, 2010 12:59 AM
  • I can't believe you used the phrases "reasonably powerful" and "HP Ex470" in the same sentence. :)

    Oopsie, I meant 490... The 470 would be more like my old Scaleo Home Server that I use now to test Vail (except for the 2GB RAM that I added).

    Sorry for the confusion! Regards, Martin

    Saturday, May 01, 2010 7:53 AM
  • I'd suggest that most people have a third category, a large collection of recorded tv and movies. This is a "nice to have" collection and if 10-20%+ of it disappeared in a drive failure, it'd cause little more than an "oh well". The reaction to losing all of it would be a little more dramatic.

    Agreed, one share of mine is not duplicated and contains such stuff. But I wouldn't even think of pulling out the drives and searching for still-recoverable elements of this share in case a hard disk goes bust.

    Something else crossed my mind: Two disk failure being "unlikely". I wonder if this is really true. After all, once a disk breaks, I have to either get it repaired / exchanged, or buy a new one. This takes a few days. Then I plug it in and let WHS rebuild the duplications, which takes almost a day to complete (for large disks, say 2TB ones).

    So a "two disk failure" would be defined as "two disks breaking in a timeframe of one week". With "home-grade" disks, such a scenario might be more likely than we'd wish.

    Regards, Martin

    Saturday, May 01, 2010 8:03 AM
  • Something else crossed my mind: Two disk failure being "unlikely". I wonder if this is really true. After all, once a disk breaks, I have to either get it repaired / exchanged, or buy a new one. This takes a few days. Then I plug it in and let WHS rebuild the duplications, which takes almost a day to complete (for large disks, say 2TB ones).

    So a "two disk failure" would be defined as "two disks breaking in a timeframe of one week". With "home-grade" disks, such a scenario might be more likely than we'd wish.

    Regards, Martin

    Secondary disk failure risk can be mitigated by prompt remedial action, either on the part of the storage controller (not there yet in WHS) or the administrator.  Do not wait for an RMA or a new drive from Newegg, since you will be exposed to this risk for (IMO) an unacceptable length of time.

    In v1, it has been a big problem for me, because I hate having my shares down during a removal, but that's what it takes to get the protection back.  In v2, the shares don't go down (hurray!) so the best practice would be an immediate drive removal once the drive is confirmed to be dead.  Do note that this would require enough unused space on the pool to take the loss of the dead drive, so a little forward planning is needed.  I always maintain enough space to lose my biggest drive (2TB) and add 500GB for good measure.  There are other benefits to running below capacity as well, so this is a good practice.

    So in summary, the length of time can be reduced from your estimated one week to roughly 9 hours, by my estimate, based on 2TB at 60MB/s.  This will reduce the risk by a factor of roughly 20.  The cost for this safety is that I need to overprovision by 2TB, but I am quite comfortable with that.

     

    Saturday, May 01, 2010 10:44 AM
  • Further,

    In V2, we count the number of times we had seen ECC errors on a drive (both corrected and un-corrected), and when that number is bigger than 1 (1 was chosen for beta), we raise an alert, recommending to replace the drive. It has been our experience that with commodity hard drives, once errors start occurring, the drive cannot really be trusted to keep data.

    thanks,

     


    This post is provided AS IS and confers no rights. Mark Vayman, Program Manager, Windows Home and Small Business Server Team
    Saturday, May 01, 2010 4:23 PM
  • In V2, we count the number of times we had seen ECC errors on a drive (both corrected and un-corrected), and when that number is bigger than 1 (1 was chosen for beta), we raise an alert, recommending to replace the drive. It has been our experience that with commodity hard drives, once errors start occurring, the drive cannot really be trusted to keep data.

    Mark, thanks for the information.  Although I agree in principle on the dangers of running a drive which has errors, it is next to impossible to RMA such a drive through consumer channels.  This leads us to the question of what to do with a bunch of slightly bad drives, although I know this is no fault of WHS itself.

    Anyway, my usual practice is to run the drives into the ground and trust DE to protect my data.  So far I have lost 3 drives and no data in v1, so that's a real world testimonial for you. :D

     

    Sunday, May 02, 2010 5:47 AM
  • Hello Mark,

    I really don't understand the reason for adding the high overhead CRC at the block level. Now, with duplication, we are at 225% of data storage, and worse, have all the computation overhead of software RAID cutting into throughput. If (I'm guessing here) you are adding this because you cannot correct by using the non-error duplicate block, then you have taken a giant step backwards from RAID 1.

    I have loved the concept of DE to replace RAID, but not at the expense of an additional level of software error correction now being added. If you really think this is needed, then what about the system partition? This isn't a feature any other server file system (read MS Server) seems to need, including your own SBS.

    Please don't take this as a negative input on one of your major decisions, but the impact is great, and I don't see the benefit with duplication active.

    Sunday, May 02, 2010 5:33 PM
  • Mark,  Thank you for the detailed description. I was hoping for more details on how EFS can be implemented? I would like to have everything encrypted, but don't see any options for that in the preview.
    Tuesday, May 04, 2010 1:08 AM
  • Hello Mark,

    I really don't understand the reason for adding the high overhead CRC at the block level. Now, with duplication, we are at 225% of data storage, and worse, have all the computation overhead of software RAID cutting into throughput. If (I'm guessing here) you are adding this because you cannot correct by using the non-error duplicate block, then you have taken a giant step backwards from RAID 1.

    I have loved the concept of DE to replace RAID, but not at the expense of an additional level of software error correction now being added. If you really think this is needed, then what about the system partition? This isn't a feature any other server file system (read MS Server) seems to need, including your own SBS.

    Please don't take this as a negative input on one of your major decisions, but the impact is great, and I don't see the benefit with duplication active.

    According to the feature list, the CRC is to detect silent failures.  Remember that unless a drive is clearly demonstrating signs of failure, there is no way for the computer to know which of two different copies of a file has an error.  I believe this situation probably existed in v1 as well, but it just was not brought to light.  I do have some experience with silent corruption issues and they can be quite nasty.

     

    Wednesday, May 05, 2010 9:28 AM
  • Hello Roddy,

    Well, thanks for the reply (though I'd like to hear from Mark, to whom I addressed the question.) It's pretty easy to simply put out a term like "silent failure" but that term is meaningless in the world of hardware and software engineering. If a bit is wrong, it got that way due to a memory failure, a bus failure, or a sector on the disk (I suppose there are additional places.) Disks do have extensive error correction and will reliably report a sector failure. The CRC mechanism can certainly protect that 512 bytes of data, but it's not clear against what. Note that if the OS partition isn't also protected, there isn't much trust that the CRC calculator isn't doing as much harm as good. If a bit is getting flipped in memory or on the bus, then it could just as well be in the 64 bytes of the CRC as in the 512 bytes of the data block. This leads to a correct data block that is considered bad by the new FS. Not a good situation. I could go on, but I'm waiting for some details from Mark and his team.

    Thursday, May 06, 2010 2:49 PM
  • Silent failures describe a class of conditions where what is supposed to be stored on a disk is not actually stored.  The actual nature and scope of the failure depends on the exact case, but all of them behave exactly like a successful write and trigger no error conditions.  The error may be discovered when the data is read back, or may continue to go unnoticed if subtle enough.  For a rough idea: http://partners.netapp.com/go/techontap/matl/sample/0206tot_resiliency.html
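The detection side of this can be illustrated with a small sketch: record a checksum when data is written, and a later verification pass catches a byte that changed even though no I/O error was ever reported. (Illustrative only, with hypothetical file names; DE v2 works per block with CRCs, not per file with SHA-256.)

```shell
#!/bin/sh
# Illustrative sketch: detect a "silent" single-byte corruption using a
# checksum recorded at write time. No step below reports an I/O error.
set -e
workdir=$(mktemp -d)
printf 'important payload data' > "$workdir/block.dat"

# Record a checksum when the data is written.
sha256sum "$workdir/block.dat" > "$workdir/block.sha256"

# Simulate a silent failure: one byte changes, the write "succeeds".
printf 'X' | dd of="$workdir/block.dat" bs=1 seek=3 count=1 conv=notrunc 2>/dev/null

# A later scrub flags the block by re-verifying the stored checksum.
if sha256sum -c "$workdir/block.sha256" >/dev/null 2>&1; then
  echo "block OK"
else
  echo "silent corruption detected"   # this branch is taken
fi
```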

    HTH.

     

    Thursday, May 06, 2010 5:52 PM
  • So let me make sure I understand this clearly.

    DEv2 now uses software striping for standard, non-duplicated data. Since there is no mention of a proper parity mechanism, other than sector level bit error correction, we can assume this is a RAID-0 type config, meaning the more drives one has, the higher the possibility of catastrophic failure. Unless I am misreading this, the total failure of a single drive means I've just lost most of my non-duplicated data? Ugh!

    Duplicated data is either 1) mirrored on two separate drives (as before) or 2) has blocks that are mirrored on 2 separate drives. Either way, this doubles the amount of space needed for basic protection.

    So, now I can safely say, hardware RAID is superior in almost every way. Why?

    - My data is now striped, cloud style, across my drives. Any single drive failure of non-protected data results in losing ALL such data. Bad...very bad. This is nothing more than software RAID-0.

    - Data is different. Some data sets are small, like photos and small files. I'm happy to mirror/duplicate that data. Other data sets are large, like AVCHD camera files, ISO images, large (10GB+) recorded TV (OCUR CableCARD shows), and BR backups for my media center. Do I want to waste 50% of my disk space protecting this? Heck no! RAID 5/6 is far superior here. At least before with DEv1, if I lost a single drive, I wouldn't lose my entire video collection!

    Software RAID (what you're doing here) is inherently unreliable. What happens when you take a power hit on your WHS server during a write operation? Can DEv2 handle that? Worse yet, what happens when you get an OS BSOD during that write operation...there goes your filesystem!

    Possible ways to address this

    1) Let the experienced user pick WHICH drive to store individual shares. Specifically, I want the ability to tie my "videos" directory to a single "drive" (or in my case, RAID6 array). Should be easy to implement

    2) Offer the ability to disable block-level RAID-0 striping. Or, add parity and make it software RAID-5/6. Modern CPUs are certainly powerful enough to handle the overhead; moreover, I'd bet that most WHS workloads are read >> write. At least make this an option for certain shares. If I lose recorded TV, it's not the end of the world, but it's still inconvenient. If I lose the family photos, I'm a dead man! We need an option in between RAID 0 (negative redundancy: a single drive failure results in massive data loss) and RAID 1 (mirrored data, with a 2x storage requirement).

    3) If you're not already, with ANY software striping or RAID solution, disable OS write back cache...

    Ways in which I get around this with WHS V1 currently?

    1st WHS - Small 56 drive FC SAN, broken up into multiple 2TB LUNs. This was a bit nutty, but a great test of DEv1

    Current WHS - 12TB RAID6 server running Hyper-V with WHS running on a single vdisk. WHS is used purely for backup, and access to smaller critical files, while the base Server 2008 R2 OS is used for video and music serving. Yeah, its a kludge. So..can I switch to WHSv2 native?

    Yes, it's properly x64, so I can leverage more than 4GB of RAM and multiple cores for my transcoding needs... but I have to use the hack listed here to not include the OS drive in DEv2, and I have to essentially disable DEv2 entirely by using a single large 12TB LUN (now possible, thanks to GPT support). Is this my only option? Why would I want to chop down from 10TB usable (RAID 6) to 6TB (using the DEv2 method)? I'm trading hardware RAID for unreliable software RAID, and losing 40% of my disk space.

    Bottom line is, if you expect most WHS users to stick to the piddly 4-drive OEM stuff that's out there, well and good. But if you want to build a robust solution that will scale to the demands of a power user, or a connected home (as in the high-end Media Center space), then you're going to need to address these issues.


    Friday, May 07, 2010 5:22 AM
  • So let me make sure I understand this clearly.

    DEv2 now uses software striping for standard, non-duplicated data. Since there is no mention of a proper parity mechanism, other than sector level bit error correction, we can assume this is a RAID-0 type config, meaning the more drives one has, the higher the possibility of catastrophic failure. Unless I am misreading this, the total failure of a single drive means I've just lost most of my non-duplicated data? Ugh!

    No. Drive Extender V2 stores data in 1 GB blocks, but it doesn't "stripe" those blocks for performance reasons the way RAID 0 does. Microsoft sees duplication as the answer to losing files, and Previous Versions (Now supported! Yay!) as protection from accidental change or deletion, not the ECC code. That's to protect against certain types of silent failures (listed elsewhere) that would otherwise leave data in an inconsistent state. These silent failures can (and do) happen on Windows Home Server V1 computers today, and they are extremely difficult to sort out.

    The loss of a drive does, however, mean the loss of all blocks that were stored on that drive. So you should turn off duplication only for those shares whose contents you are willing to lose. This is in no way different from V1, which can also lose files from multiple unduplicated shares if a drive fails.


    I'm not on the WHS team, I just post a lot. :)
    Friday, May 07, 2010 9:35 PM
  • Now that striping has been introduced anyway, I wonder if it wouldn't be possible to add parity-based "duplication" as an option.

    Basically we would have to choose per share:

    1. No duplication; Disk failure usually means losing those shares; No overhead
    2. Parity-based "duplication"; Disk failure usually means no loss, unless another disk fails during the (lengthy) rebuild; Overhead depends on the number of disks (probably around 30% in a 4 disk system).
    3. Full duplication; Lowest risk of data loss; Overhead 1xx%.

    I would expect to keep the WHS unique selling point: That I can add any number/size disk. I understand that in this case a rebuild is even more complex than with RAID 5, because file by file the stripes and parity have to be found and recalculated, but for all those currently non-duplicated large media files even parity would be a very welcome added safety.

    Best regards, Martin

    Saturday, May 08, 2010 9:36 AM
  • roddy_o - thanks for the input. Yes, I’m quite familiar with Sundaram’s work at Network Appliance (more with Garth Goodson). Probably one of the most detailed studies of real-world errors was research supported by NetApp and presented as Analysis of Data Corruption http://www.usenix.org/event/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html and you should note that everything about corrections assumed mechanisms above a RAID system.

    I’m certainly very familiar with all the work and workings of Sun’s ZFS also.

    Terminology aside, there really are no “silent failures.” There are “noisy” failures, but I don’t think we are discussing the old true head crashes. Everything else is simply incorrect bits being transferred and correct ones being lost. A simple example is the block protection of a 64-byte ECC correction for each 512-byte block. What protection exists if the silent error is in the correction block, not the data block. And what is protecting the pointer to that block. And what is protecting the OS file system and partition directory structures that define what the whole data structure is (and even if it exists.) Again, a clever sound-bite isn’t the answer, but a complete paper describing all bit flows and all check and recovery mechanisms (and the cost of them.)

    Saturday, May 08, 2010 11:36 AM
  • roddy_o - thanks for the input. Yes, I’m quite familiar with Sundaram’s work at Network Appliance (more with Garth Goodson). Probably one of the most detailed studies of real-world errors was research supported by NetApps and presented as Analysis of Data Corruption http://www.usenix.org/event/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html and you should note that everything about corrections assumed mechanisms above a RAID system

    I’m certainly very familiar with all the work and workings of Sun’s ZFS also.

    Terminology aside, there really are no “silent failures.” There are “noisy” failures, but I don’t think we are discussing the old true head crashes. Everything else is simply incorrect bits being transferred and correct ones being lost. A simple example is the block protection of a 64-byte ECC correction for each 512-byte block. What protection exists if the silent error is in the correction block, not the data block. And what is protecting the pointer to that block. And what is protecting the OS file system and partition directory structures that define what the whole data structure is (and even if it exists.) Again, a clever sound-bite isn’t the answer, but a complete paper describing all bit flows and all check and recovery mechanisms (and the cost of them.)

    "Silent failure" is simply a term used for classification, and I am using it for convenience, since Mark has already defined it in the header, but you can substitute any term you prefer.

    An error in the checksum itself would also be detected and corrected by the CRC.  As Mark stated in the original post, DEv2 is able to correct up to 2 bits per block, and this will be regardless of where those errors are.
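    Microsoft hasn't published DEv2's actual checksum format or correction algorithm, so treat the following as an illustrative sketch only: with any per-block checksum (CRC-32 here), a mismatch can be healed by searching for the single bit flip that makes the checksum verify again (the two-bit case works the same way, just over pairs of bit positions).

```python
import zlib

BLOCK = bytes(range(256)) * 2          # a 512-byte stand-in for one sector

def flip_bit(block, i):
    """Return a copy of block with bit i inverted."""
    b = bytearray(block)
    b[i // 8] ^= 1 << (i % 8)
    return bytes(b)

def verify_and_heal(block, crc):
    """Detect a mismatch, then try to heal a single flipped bit by brute force."""
    if zlib.crc32(block) == crc:
        return block, "clean"
    for i in range(len(block) * 8):    # flip each bit back and re-check
        candidate = flip_bit(block, i)
        if zlib.crc32(candidate) == crc:
            return candidate, "healed"
    return block, "uncorrectable"      # here DE would fall back to the duplicate

crc = zlib.crc32(BLOCK)
corrupt = flip_bit(BLOCK, 1000)        # simulate a silent single-bit flip
healed, status = verify_and_heal(corrupt, crc)
assert status == "healed" and healed == BLOCK
```

    An exhaustive search over a 512-byte block is only 4,096 recomputations in the single-bit case; a real implementation would presumably use a code designed for direct correction rather than a search.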

    I get the impression that your opinion is that it is pointless to prevent any error unless all errors can be prevented.  Please do remember that WHS is a consumer product, not enterprise.  Currently there are no protections of this type in any consumer-class product, and even enterprise products have this at the top end only.  I applaud Mark for bringing this to us, and I certainly recognize his depth of experience in storage systems to even be aware of this problem.  AFAIK, only NetApp's WAFL is able to recover from a lost write (albeit at a much higher storage efficiency with RAID-DP), but here we have a consumer product which (theoretically) can do the job.  Kudos to Mark and the team.

     

    Saturday, May 08, 2010 12:12 PM
  • I guess I'm curious about these so-called "silent failures," too. I've been a programmer for many years and am having a hard time coming up with a real-world scenario.

    This type of error correction can only fix bit errors that occur "under" the DE (e.g., weak bits on the drive that read incorrectly, or drive firmware errors that cause the wrong data to be written, which would almost certainly affect more than a couple of bits). Any higher-level error won't be caught. For example, if a software error causes the wrong data to be written to the drive, the ECC data would be wrong too. Likewise, if the software wrote to the wrong sector by mistake, the ECC would not show any problem (and if that sector were duplicated, you could detect the "collision," but not determine which copy is "correct").

    Regarding roddy_o's question about it being pointless to prevent any error unless all errors can be prevented, I'll make a couple of comments: 1) If only a tiny fraction of errors can be corrected, and the cost (in extra storage, processing time, etc.) is not insignificant, then it's probably not worth it. 2) Without some clear test cases, and examples of real-world errors that can and can't be corrected, it is very difficult to judge the value of this feature.

    I guess, from reading the comments here and elsewhere, that many people share my opinion - I'm not too worried about theoretical 1- or 2-bit errors, but I am *very* worried about a single-drive failure taking out more (unduplicated) data than is contained on that single drive. Microsoft should be focusing their effort on dealing with this very real (and relatively common) situation, rather than on extremely rare bit errors.

    With this new version of DE, the primary advantage of WHS over a Drobo has been eliminated, and suddenly the Drobo's other advantages put it squarely in the lead (parity vs. duplication, recovery from 2-drive failures, etc.).

    Like some of the other users here, I have a very large video library that is not practical to duplicate (I'm already running 7 drives, mostly 2 TB). I am willing to accept the risk of a drive failure if I know that losing a drive will only cost me the files on that drive (although copying 100-200 DVDs again would really suck). On one occasion, my WHS dropped one of the drives from the storage pool (not sure what happened), but I was able to simply copy the files off using another machine, then re-add the drive to the pool (prompting a reformat). Not a single file was lost -- with DEv2, I doubt that would be possible.

    Saturday, May 08, 2010 2:04 PM
  • Here's a paper on silent data corruption. There are others, but I think this one gives a decent explanation that most technical people will be able to work through. It doesn't really go into bit flips, though. A bit flip happens when a 0 or 1 should be written to disk, but the opposite is actually written (or read). SATA disks are, at heart, still ATA, and have little error correction that would detect/correct such errors. The CRC that DE uses will detect a single or double bit flip in any sector and heal it automatically.

    As for data loss scenarios in Vail, they're not really changed from V1, except in that an extremely large file (>1 GB) may be split among multiple disks and, if not duplicated, may be lost if any of those disks fails. I don't see any indication that there's an attempt made to keep all segments of such files on the same disk, and the block approach DE V2 is using is specifically designed to allow a file to be split this way. I suggest someone post a suggestion on Connect to the effect that the blocks that make up such large files should be kept "together" on the same disk(s) where possible even if that results in unbalanced disk usage. Personally, I don't have a lot of such files, and I back everything on my server up anyway, so I'm unconcerned.
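    To put a rough number on that exposure, here's a back-of-envelope sketch. It assumes (hypothetically, since the real DE V2 allocator is undocumented) that a file's 1 GB blocks land on distinct disks; the chance that a single-disk failure takes out an unduplicated file then grows with the number of disks the file touches.

```python
from fractions import Fraction

def loss_risk(file_gb, disks, block_gb=1):
    """Chance a single-disk failure destroys an unduplicated file, assuming its
    blocks are spread across distinct disks (a worst-case sketch, not the
    documented DE V2 placement policy)."""
    blocks = -(-file_gb // block_gb)   # ceiling division: number of DE blocks
    spread = min(blocks, disks)        # distinct disks the file can touch
    return Fraction(spread, disks)

# a 4 GB rip on a 7-disk pool: any of 4 disks failing loses the file
assert loss_risk(4, 7) == Fraction(4, 7)
# the same file kept together on one disk: only a 1-in-7 exposure
assert loss_risk(4, 7, block_gb=4) == Fraction(1, 7)
```

    This is exactly why keeping a large file's blocks "together" on one disk, as suggested above, reduces the exposure even without duplication.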

    I'm not on the WHS team, I just post a lot. :)
    Saturday, May 08, 2010 2:38 PM
  • I guess I would like to see some discussion of how DEv2 addresses these silent data corruption examples. Let's look at the ones listed in the quoted paper:

    • Misdirected writes - if this happens "above" DEv2, I don't see how this can be corrected. You can detect the problem if the misdirected location is already in use by another file, and if you detect it before the write occurs, you can report an error and fail the write (I'm not sure what the user can do at this point). If you don't detect it until the read, the overwritten file is already corrupted. Even if the overwritten file were duplicated, the best you can do is to replace the overwritten data with another copy in a new location. If a misdirected write occurs "below" DEv2, there would not even be a record of where the write occurred, so the file that was being written would be "missing" a chunk of data. Either file could potentially be repaired if they were duplicated.
    • Torn writes - this is very similar, except it only affects the file being written. As before, if this happened "above" DEv2 (e.g., the app/filesystem only partially copied the data to the sector buffer), you could not even detect the error, as the file system is writing what it was told to write (and the ECC data would be generated to match). If it happened "below" DEv2, you would be able to repair the file if it was duplicated, although these types of errors are often the result of power failures/electrical glitches, which could result in damage to the "raw" drive data that can't be corrected without a reformat.
    • Data path corruption - this is similar to the "torn writes" case in that incorrect data is written to the drive. If it's just a 1- or 2-bit error, it should be correctable. More likely, bit errors are caused by a hardware problem (poor connections, signal interference, power ripples). In this case, you could have scattered bit errors all over, and trying to "cover" a hardware problem with software recovery will only get you so far. If the data path corruption is due to a software bug, you could end up with arbitrarily corrupted data or a misdirected write (see above). Keep in mind that DEv2 is adding another potential failure point in the data path (DEv2 is just as likely to have bugs as the drive firmware).
    • Parity pollution - this should be fairly easily handled by DEv2, although Microsoft hasn't really documented how this works.

    It would be very helpful if Microsoft would produce a document (they may already have one internally) that discusses the various scenarios and how DEv2 handles them.

     

    Saturday, May 08, 2010 3:30 PM
  • Regarding the splitting of files across disks, the key problem is that the files that get split (> 1 GB) are the ones most likely NOT to be duplicated (every movie in my DVD library is > 1 GB). So this just compounds the problem. For large, non-duplicated files, the "standard NTFS" filesystem, with files residing on a single physical drive, was the only "safety net" for recovering files (for corrupted drives) or limiting damage (for failed drives). With DEv2 and split files, the recoverability of large, non-duplicated files is GREATLY reduced. Adding another 5-6 drives (in my situation, approximately 10 TB are not duplicated) is an unreasonable solution, as that would require the addition of multi-port SATA cards, a very large case, a large power supply, etc. This could easily cost $2000. It would also exceed the current recommended limits for Vail.

    And this doesn't even touch the problem I would have trying to migrate to Vail somehow (I already have a very modern motherboard/CPU, and don't want/need to build a new system).

    Once Vail becomes the "current" version, I'll have to re-evaluate whether WHS, multiple Drobos, or a very large RAID system better fits my needs. WHS no longer looks like the cheapest, easiest, and most secure solution.

    Saturday, May 08, 2010 3:44 PM
  • ...
    It would be very helpful if Microsoft would produce a document (they may already have one internally) that discusses the various scenarios and how DEv2 handles them.

    If they release one, I would expect it to be after Vail reaches GA.

    As for large files: if you feel that all blocks that contain segments of a file should be kept together on a single disk if possible, you should submit a product suggestion on Connect. For myself, I consider relying on only one storage location (i.e. a single Windows Home Server) completely inadequate for protecting data, so I back my server up and take that off site regularly. If you aren't doing that today, you're more trusting than I am. So large files are a complete non-issue to me, so much so that I won't create that product suggestion for you, or vote on it if you (or someone else) creates it. :)


    I'm not on the WHS team, I just post a lot. :)
    Saturday, May 08, 2010 4:26 PM
  • Ken,

    For my large files, I'm not counting on WHS to protect my video files at all (although inconvenient, I can simply re-rip the DVDs). I use WHS for video files primarily as a means of easily accessing all of my movies from a single shared drive. That said, I don't want WHS to *reduce* my file protection below that of just using a bunch of separate drives. With DEv1, I get the advantage of a large virtual drive, with the same recoverability/safety of a bunch of separate NTFS drives. With DEv2, the recoverability/safety is actually LESS than a bunch of separate drives. That's the problem.

    BTW, I *do* rely on WHS to protect/duplicate a lot of other files, although their total size is very small (two 1TB drives set up for mirroring would be more than enough).

    --Doug

    Saturday, May 08, 2010 4:38 PM
  • I understand it's hard for people who have not experienced these problems to appreciate that they even exist, let alone that they can have a significant impact.  I for one have been on the receiving end before and I am very glad indeed to have such protection available to me.  It is my sincere hope that this feature is not torpedoed by others who do not recognize its value, as there is no competing product for me to choose.

    Incidentally, please remember that the availability of Vail does not mean the death of v1.  I certainly don't mind continuing to run v1 if v2 somehow falls through.  I chose it for a very good reason: it was superior to anything else available at the time.  For those of you who feel your current v1 is better than v2, your upgrade path is the simplest of all.  Relax and do nothing.  :D

     

    Saturday, May 08, 2010 5:29 PM
  • I guess I would like to see some discussion of how DEv2 addresses these silent data corruption examples. Let's look at the ones listed in the quoted paper:

    --8<--

    From just my analysis of the problem and DE's specified capabilities, very few of those problems can actually be solved by CRC alone.  Add duplication to the mix however, and you have a very different picture.  Now I begin to understand why the WHS team thinks the option to disable duplication should be removed, as many of these improvements to reliability would not be possible otherwise.  It quite literally makes no sense to have CRC if duplication is off.

    This does not make any difference to me as I use full duplication, but for those of you who don't, it is looking very much like v1 is the much preferred choice.  However, it is still a very early beta, so don't give up hope.  Do log your suggestions and vote them up, and maybe DEv1 will find its way into the final product.
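    The interplay described above can be sketched in a few lines: with a duplicate available, a checksum failure on one copy is healed by reading the other; without duplication, the checksum can only report the damage. (This is an illustration using CRC-32, not DE V2's actual on-disk logic.)

```python
import zlib

def read_block(copies, crc):
    """Return the first stored copy whose checksum verifies, or None if all fail."""
    for data in copies:
        if zlib.crc32(data) == crc:
            return data
    return None

good = b"movie-block-data" * 32
crc = zlib.crc32(good)
bad = b"X" + good[1:]                          # one mirror silently corrupted

assert read_block([bad, good], crc) == good    # duplication on: read heals itself
assert read_block([bad], crc) is None          # duplication off: damage only detected
```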

     

    Saturday, May 08, 2010 6:16 PM
  • Here is an example of a "silent failure" that happened to me. My SATA controller broke. It would access disks but would randomly set some bits to garbage during writes. Only one drive was affected, so I think it was a bad contact somewhere on the controller board.
    I detected it because I use FlexRaid on WHS v1. FlexRaid has a "verify" command that checks data integrity against the parity it created, and I run it on a regular basis.
    I replaced the controller, ran verify again to establish which files were corrupted, and restored those files.

    Now, it is important that I was able to do that ahead of time, before I actually needed that data. I don't see any way to periodically check data integrity in the new DE feature list. This leads me to assume that in the case of an unrecoverable error like this, the user will find out about data loss only when trying to read the data. Please correct me if I'm wrong.
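    For what it's worth, a FlexRaid-style verify pass is easy to approximate on top of any share with a checksum manifest; a minimal sketch (the manifest name and layout are made up for illustration):

```python
import hashlib
import json
import os

def snapshot(root, manifest):
    """Record a checksum for every file so a later pass can catch silent corruption."""
    sums = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                sums[path] = hashlib.sha256(f.read()).hexdigest()
    with open(manifest, "w") as f:
        json.dump(sums, f)

def scrub(manifest):
    """Re-read every recorded file and report any whose contents no longer match."""
    with open(manifest) as f:
        sums = json.load(f)
    bad = []
    for path, digest in sums.items():
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != digest:
                bad.append(path)
    return bad
```

    Run `snapshot` after writing data and `scrub` on a schedule; any path it returns was altered without the application knowing, which is exactly the failure mode described above.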
    Saturday, May 08, 2010 7:09 PM
  • So, since this is now block level, will programs like Carbonite work now (it seemed to have troubles with the virtual D: drive)?  Have you tested with it, by any chance?  Currently, I need to use SyncToy to a non-pooled drive and point Carbonite to it.

    Keith

     


    I've done a quick test of Carbonite with a free account (since I didn't want to mess with my active server) and it appears to work normally.  I copied a folder of pictures to the Y: drive and set it to back up the Pictures folder underneath it.

    As a recap, with WHS 1 PP3, it would run into a "permission" problem with the pseudo D: drive and would fail to back up any of those files.

    Keith

    Saturday, May 08, 2010 8:59 PM
  • Now that striping has been introduced anyway,
    ...

    Martin, please see the post you replied to, where I say "but it doesn't 'stripe' those blocks for performance reasons". Striping has not "been introduced" in the way you mean it. And you should not expect parity calculations, with their (significant) overhead, to be added to Windows Home Server; Mark Vayman has explained above in more detail why this will not happen.

    Feel free to vote suggestions up on Connect , however.
    I'm not on the WHS team, I just post a lot. :)
    Saturday, May 08, 2010 9:25 PM
  • ...
    Do log your suggestions and vote them up, and maybe DEv1 will find its way into the final product.

    I hope, very profoundly, that this doesn't come to pass. Drive Extender V1 is, for just one example, incompatible with Previous Versions. There is an ugly and not completely reliable workaround to gain access to previous versions; I don't bother with it as it's too much trouble (I haven't logged in to the desktop on my V1 server in probably a couple of months, except to answer a question that requires it). You also run into file locking problems; V1 is unable to duplicate a file that's in use.


    I'm not on the WHS team, I just post a lot. :)
    Saturday, May 08, 2010 9:30 PM
  • ...
    Do log your suggestions and vote them up, and maybe DEv1 will find its way into the final product.

    I hope, very profoundly, that this doesn't come to pass. Drive Extender V1 is, for just one example, incompatible with Previous Versions. There is an ugly and not completely reliable workaround to gain access to previous versions; I don't bother with it as it's too much trouble (I haven't logged in to the desktop on my V1 server in probably a couple of months, except to answer a question that requires it). You also run into file locking problems; V1 is unable to duplicate a file that's in use.


    I'm not on the WHS team, I just post a lot. :)

    The chances of this happening are probably low to none, considering the scope of the change.  I'm just trying not to be negative :)

     

    Sunday, May 09, 2010 2:17 AM
  • I've done a quick test of Carbonite with a free account (since I didn't want to mess with my active server) and it appears to work normally.  I copied a folder of pictures to the Y: drive and set it to back up the Pictures folder underneath it.

    As a recap, with WHS 1 PP3, it would run into a "permission" problem with the pseudo D: drive and would fail to back up any of those files.

    Keith

    I am doing similar things with my D: drive without issue.  Are you running your application under the administrator account?

     

    Sunday, May 09, 2010 2:23 AM
  • I've done a quick test of Carbonite with a free account (since I didn't want to mess with my active server) and it appears to work normally.  I copied a folder of pictures to the Y: drive and set it to back up the Pictures folder underneath it.

    As a recap, with WHS 1 PP3, it would run into a "permission" problem with the pseudo D: drive and would fail to back up any of those files.

    Keith

    I am doing similar things with my D: drive without issue.  Are you running your application under the administrator account?

     

    On WHS v1 or Vail?  In either case, yes, I am running under Admin.  For WHS (Classic???) it is a known issue in the Googleverse, and the only workaround was using something like SyncToy to move the files to a real drive.  In effect, anything would then end up with 2 or even 3 copies depending on duplication settings.  I like the whole cloud thing, since it takes personal hardware failure out of the loop (and fires, floods, theft, etc.).

     

    Keith

    Sunday, May 09, 2010 3:04 AM
  •  

    For #2 - new data written to the disk will be automatically duplicated if other disks exist. Existing data will be duplicated once the missing drive is removed, and other drives are available.

    Forgive me if I am going over already stated discussion but I have read and am trying to understand.

    1. How is a faulty drive detected, what parameters are used, and how does it deal with a drive that is mostly OK but slowly dying?

    2. If a drive is slowly dying is there a risk of both copies of folders and contents becoming corrupted?

    3. Are you saying that even if you have enough drives, duplication won't kick in for existing files/folders (only new data) until the faulty drive is removed? (I assume removed means from the console, and not necessarily physically at the time.) I would have thought a dead drive should be auto-removed, provided there is enough space and enough drives available.

    4. How reliable have people found this in a failure, is this part of Vail very robust now or are we testing like we did for the WHS1 Data corruption issue? How much have Microsoft learnt and tested based on the DE issues in WHS1?

    That's about it for now :-)

    Monday, May 10, 2010 11:43 PM
  • Sorry, but the whole discussion about duplication that "should be forced down end users' throats" is really disappointing!

    The discussion about DE v1/v2 and duplication is losing focus on what is really important for home users. I'm not a technical administrator, but a salesman.

    I like the idea of WHS very much, but it has to compete with much cheaper solutions that consume much less power. They are already on the market and offer all the functionality that WHS v1 does, and more (at least from the point of view of an inexperienced home user)! To be a success, Vail has to offer some new and very useful features, and not destroy the advantages that WHS already has.

    RAID 1/duplication is necessary when important data changes constantly in real time (a web server or email server, for example). But for home users? At what price? For 1 TB of data you need a minimum of 3 TB of HDD space (data + overhead + duplication + backup). Duplication is NOT a substitute for a backup: it does not protect you from viruses, or from users who mistakenly delete a folder. In my opinion, complete duplication is a waste of storage space; a daily backup is sufficient for most data. If anything, home users need duplication only for a small fraction of their overall data.
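    The arithmetic behind that "1 TB needs 3 TB" claim, as a quick sketch (the overhead term is a placeholder, since DE v2's real per-block overhead isn't published):

```python
def capacity_needed(data_tb, duplicated=True, backup=True, overhead=0.0):
    """Rough drive-space budget for storing data on the server.

    overhead is a hypothetical fraction for filesystem/DE metadata."""
    total = data_tb * (1 + overhead)   # the data itself plus metadata
    if duplicated:
        total += data_tb               # second copy on a separate drive
    if backup:
        total += data_tb               # a server backup is still required
    return total

assert capacity_needed(1) == 3                      # data + duplicate + backup
assert capacity_needed(1, duplicated=False) == 2    # daily backup only
```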

    But there are some more problems with DE v2:

    When people use a Home Server, the most important data will move to the Home Server. This has a lot of consequences:

    - Backup of the server is the most important thing (not backup of the clients).
    - The way this important data is stored must be completely transparent and easily understandable; otherwise careful people will not trust WHS and will not buy it.
    - Accessibility of the data (even when the server has a hardware failure) is a must-have, without any discussion!!! I must have the possibility to take out any hard disk and access the data from any Windows PC in case of a hardware defect in the server. No normal customer will buy two WHS machines just to keep one in reserve in case the first one fails.
    - The whole server must be protectable by a combination of BitLocker and EFS, and encryption must be transparent and easy to use. What good is a BitLocker-protected notebook when any thief can access all its data in the notebook's backup on the home server?

    BitLocker and EFS are features that most other NAS devices do not offer. This is a real "buy WHS instead of something else" reason.

    Drive Extender is a nice and comfortable feature. But if I must choose between Drive Extender on one side, and a drive letter for every hard disk (BitLocker-protected, and with the possibility to take out any hard disk and access it from other Windows PCs) on the other, I would choose the second.

    Drive Extender v2, as you plan it now, is a knockout for the whole of WHS!!!

    If there is no BitLocker, and a single hard drive failure can wipe out the whole movie collection, and there is no way to access the data in case of a hardware failure of the server...

    ...only very careless contemporaries (or really stupid people) would buy such a thing.

    For a simple music and video streaming server, nobody needs duplication, and nobody needs a WHS. There are much cheaper solutions with power consumption under 10 watts, and they are already on the market. Nobody has to wait for Microsoft... after years of development... maybe at the end of 2010... or 2011... or never.

    I hope you understand what I mean. It's the same as with Windows Phone 7: you are already late, really late. I like MS and WHS, and I really hope that you will do the right things, and in time, to make WHS a big success.

    In my opinion this means:
    - Duplication is just an option.
    - Non-duplicated data is stored on the same hard drive (as far as possible).
    - There is an overview in a log file where users can see which data is stored on which HDD, so we know which HDD to pull out when the server can't be powered on.

    New features like:

    - Secure functionality with BitLocker and EFS
    - Integrated cloud backup
    - An integrated download manager
    - A proxy-based two-stage security solution out of the box
    - Exchange and a web server for family and small workgroups

    To all: If you like these features, please use the links and vote them up

     

    Tuesday, May 11, 2010 1:43 PM
  • None of those suggestions are (from the titles, since the links are broken and I can't be bothered to fix them for you or translate them) related to Drive Extender. Please start separate threads of discussion for any that don't already have active discussion going on (i.e. do a little searching/reading first).

    That said, in no particular order:

    • Exchange, or anything like it, will not happen, because Microsoft's market research shows that most people are quite satisfied with what they have now.
    • Cloud backup is something that would make a good add-in. Unfortunately most vendors who entered that Windows Home Server market segment have exited it, I suspect because of high costs/low returns.
    • Integrated security as I think you want it will only work if the server is on the edge of your network, and there are a host of reasons not to put a device containing data on the edge of your network. It is, in a word, foolish. If, instead, you're asking for parental controls, I'm a strong proponent of actually paying attention to what your children are doing online, rather than relying on automated solutions that they can defeat.
    • Bitlocker, EFS, etc. will all present significant challenges for headless systems, and there's no market demand for them. Enthusiasts are not enough to drive this into Windows Home Server.
    • A download manager is a client tool. I personally don't use torrent software, which is what I'm quite certain you want, but my opinion is that it belongs on your client computers. However, it wouldn't be an extremely difficult task for a professional programmer to create a torrent client that would integrate with Windows Home Server and function as an add-in. The closest anyone has come is an add-in that was created some 3 years ago which is no longer supported, and which required a specific torrent client to be installed separately (so it wasn't really in keeping with the add-in model).

     

    As for duplication... As I've said previously, I don't think it should be optional.


    I'm not on the WHS team, I just post a lot. :)
    Tuesday, May 11, 2010 2:32 PM
  • I'm going to put in my two cents.  As a long-time WHS user (from the beta), I am running Vail in VMware, solely for my laptop so far.  I like the new web interface and the x64 OS; it seems faster, etc.  But I also can not believe that the new Drive Extender is so limited.  There were a few times during my WHS use when I had to put a drive into another PC due to failures or testing.  This was a huge advantage.  Now you can't?  My production WHS v1 server is running on older hardware, and it will be the one I use if I decide to put Vail to work.  What happens if a stick of RAM or the video card goes out?  Processor or motherboard?  All my precious data is stuck until I either find replacements for the "old" parts, which might be difficult, or build another Vail server.  This is unacceptable, as I store my homework, taxes, and very important stuff that I need access to at any time.  That's why I have WHS with duplication turned on.  Not to mention that I very rarely do backups to a USB drive.

     

    Unless I can access the drives from a different pc in case of failure, all the other features don't persuade me enough to upgrade.


    athlon 3400, 2gb ram, 7 drives totaling about 6.5 tbs.
    Tuesday, May 11, 2010 5:45 PM
  • To be honest, I'm equally concerned with performance and reliability. If it performs better than the existing drive extender but still has at least equal reliability, I'm probably going to be fine with it.

    That said, there seems to be a controversy over whether duplication should be required/forced or not. For me, I can't have duplication forced because about 70% of what's on my home server now is several hundred unduplicated DVD rips (because I can always re-rip if I know what got lost in a catastrophe). That's something like 6TB of data. Besides the cost involved in getting enough drives to duplicate that, there aren't a lot of home servers out there (for purchase, not counting DIY) that have the hardware to support that many drives. As it stands, I'm already having some reliability issues getting my HP EX475 to work nicely with a full eSATA port replicator, and that's just eight drives total I've got (internal + external). If I need to duplicate everything, that means I'm going to end up managing 16 drives. I'd love someone to point me to an affordable solution for that.

    If duplication ends up getting forced, I'll be looking to just move to a NAS device with RAID 5 or something. I guess I should probably be doing that anyway, or running some sort of file server with RAID for my DVDs instead of home server. From a performance standpoint, given drive extender v1, I'm already looking at something like that.

    Tuesday, May 11, 2010 5:59 PM
  • There's a mechanism in Vail (documented in the release notes, I think) which allows you to mount a foreign storage pool on a Vail server and promote it to being "the" storage pool. This primarily exists to allow access to your data after a server issue which takes out your OS partition or system drive, but it will also serve after a more serious failure. All you need to do is reinstall (or replace), remove all drives from server storage (leaving you with no storage pool), connect the old pool drives, and promote the old pool. It all sounds more complex than it is.

    For those who feel strongly that access to your data from any computer (not just a new Vail server) in the event of server failure is essential, this product suggestion (the #2 most "up voted" suggestion that's active in the Windows Home Server Connect program) was created. Please go vote. :)


    I'm not on the WHS team, I just post a lot. :)
    Tuesday, May 11, 2010 6:11 PM
  • I can spout hypothetical situations all day, but I can think of more than one situation where this new DE "feature" will fail and put my data more at risk than it is currently.

    So I purchase an HP server or a server from a different company that comes pre-installed.  This costs me around 500 bucks, similar to the current EX### machines.  These have a one-year warranty from the mfg.  So I have a single-drive or multiple-drive system, and I load it up with data that I feel is safe.  A fan or other failure happens to my server and I have to send it in for repairs or replacement.  I have no way to get my data!  I also know that it's going to take at least a week for the company to send me back another server.  Also, they tend to erase all the data anyway and ask you to back up your stuff before sending it to them for repairs.  So this expensive server just toasted my data because of a simple fan failure or other small problem?

    I have voted for access to the data from any computer, but if I'm understanding this correctly, a file can be split onto different hard drives?  This is also crazy!  This new DE scares me a lot.  Finally, forced duplication?  I have almost 5 TB worth of data and only 960 GB of that is duplicated.  I don't want to have to invest in drives to duplicate the other 4 TB; the data just isn't important enough.  So if I don't duplicate my files, they could be at risk...

    The whole point of the forum is to get feedback from users so don't take anything personally but this new DE is not making me want to invest in the upgrade..


    athlon 3400, 2gb ram, 7 drives totaling about 6.5 tbs.
    Tuesday, May 11, 2010 6:32 PM
  • Backups of server shares: there's a mechanism provided to allow regular backups of server shares, on a scheduled basis, built into Vail. The scheduled nature is already better than V1. You can choose not to back your server up to external storage. If you make that choice, you're vulnerable to something frying your server, no doubt. You can choose not to take your backup off-site. If you make that choice, you're vulnerable to disasters that affect your entire home.

    Splitting of files: files larger than 1 GB will be split into multiple DE "blocks". Files smaller than 1 GB will presumably be packed in as tightly as possible. The result: files larger than 1 GB, if in a share that's not duplicated, are at some risk of being lost. I couldn't say how much risk, as I don't know the exact storage allocation algorithm DE V2 uses. This is analogous to the backup database in V1: it's not duplicated, and it's split up into multiple 4 GB blocks. The loss of a single disk in the storage pool can destroy the backup database. The difference is that the database is really pretty nearly an "atomic" entity; because there's no duplication of data or control structures anywhere, the loss of anything can destroy much or all of the data in the database.

    ...
    So if I don't duplicate my files they could be at risk..
    ...

    Which describes perfectly the situation if you lose a drive in your V1 server without duplication. You pay your money (or not) and you take your chances. In this case, you make my argument, not yours. I think Microsoft shouldn't offer the option of turning off duplication for exactly the reason you've just stated, and the subsequent complaints from users who didn't understand the consequences of their choice. And I've heard those complaints repeatedly.

     


    I'm not on the WHS team, I just post a lot. :)
    Tuesday, May 11, 2010 7:46 PM
  • Ken, I respect your knowledge and your experience, but I really don't like the way you dismiss every opinion and suggestion that doesn't agree with your own.

    A lot of people have well-argued concerns about Drive Extender and are trying to improve it. Microsoft asked us for feedback, ideas and suggestions - not the other way around. And we spend our time helping for nothing. Sorry, but I would like to hear some official statements from the WHS team about our concerns. A moderator who dismisses every other opinion is not helpful.

    Tuesday, May 11, 2010 10:57 PM
  • Ken, I respect your knowledge and your experience, but I really don't like the way you dismiss every opinion and suggestion that doesn't agree with your own.

    A lot of people have well-argued concerns about Drive Extender and are trying to improve it. Microsoft asked us for feedback, ideas and suggestions - not the other way around. And we spend our time helping for nothing. Sorry, but I would like to hear some official statements from the WHS team about our concerns. A moderator who dismisses every other opinion is not helpful.

    Ken does not work for Microsoft, so his position with respect to the development of Vail is no different from yours or mine. If you have a right to voice your opinion, so does Ken or anyone else who has an interest in WHS. As another poster said, don't take it personally, and here I will add: keep it constructive. Other than that, feel free to say what you think about Vail.

     

    Wednesday, May 12, 2010 12:13 AM
  • ...
    So if I don't duplicate my files they could be at risk..
    ...

    Which describes perfectly the situation if you lose a drive in your V1 server without duplication. You pay your money (or not) and you take your chances. In this case, you make my argument, not yours. I think Microsoft shouldn't offer the option of turning off duplication for exactly the reason you've just stated, and the subsequent complaints from users who didn't understand the consequences of their choice. And I've heard those complaints repeatedly.

    Well, no, if you lose a drive in the current version of WHS, you only POTENTIALLY lose the data on that drive. The data on the rest of your drives is safe, and you can easily remove the bad drive, take it to another system, and use tools to potentially recover data. More importantly, the relatively open format of the current version allows the use of 3rd party tools to protect your data with parity, so that you don't have to double the size of your storage pool to have redundancy. There are multiple solutions to the (very real and fairly common) scenario of losing a drive in the current format. On the surface, it sounds like v2 is not a practical solution for large amounts of data that you care about.

    There seems to be a disconnect between MS's perception of their target market and what many people want or need a home server for. IMO, the most compelling use for a home server, for the average user, is as a central media server. There are so many simpler solutions than a separate server if you don't have a lot of data. But ironically, DE v2 seems to make safeguarding large amounts (15+TB) of data impractical. It will be interesting to see how popular WHS v2 becomes compared to v1. It sounds like it won't meet my needs, so I'll likely continue to use v1 until I find something better.

    Thursday, May 13, 2010 1:45 PM
  • Backups of server shares: there's a mechanism provided to allow regular backups of server shares, on a scheduled basis, built into Vail. The scheduled nature is already better than V1. You can choose not to back your server up to external storage. If you make that choice, you're vulnerable to something frying your server, no doubt. You can choose not to take your backup off-site. If you make that choice, you're vulnerable to disasters that affect your entire home.

    Splitting of files: Files larger than 1 GB will be split into multiple DE "blocks". Files smaller than 1 GB will presumably be packed in as tightly as possible. Result: files larger than 1 GB, if in a share that's not duplicated, are at some risk of being lost. I couldn't say how much risk, as I don't know the exact storage allocation algorithm DE V2 uses. This is analogous to the backup database in V1: it's not duplicated, and it's split up into multiple 4 GB blocks. The loss of a single disk in the storage pool can destroy the backup database. The difference is that the database is very nearly an "atomic" entity; because there's no duplication of data or control structures anywhere, the loss of anything can destroy much or all of the data in the database.

    ...
    So if I don't duplicate my files they could be at risk..
    ...

    Which describes perfectly the situation if you lose a drive in your V1 server without duplication. You pay your money (or not) and you take your chances. In this case, you make my argument, not yours. I think Microsoft shouldn't offer the option of turning off duplication for exactly the reason you've just stated, and the subsequent complaints from users who didn't understand the consequences of their choice. And I've heard those complaints repeatedly.

     


    I'm not on the WHS team, I just post a lot. :)


    Ken, I have seen you make references to taking regular offsite server backups, and while this is definitely a great best practice, it simply isn't scalable for a lot of the WHS installs out there.

    I have duplication enabled on all of my shares, and my videos share is OVER 4TB. There is really no practical way that I am aware of to back up this share to removable storage with the default backup tools in WHS, when the largest hard drive is 2TB and formats to around 1800GB.

    I would think there is a far greater probability of hard drive failure, or of this new Vail architecture resulting in data loss, than of my house burning to the ground and taking my entire WHS install with it.

    With a UPS/surge protector and duplication enabled, I feel pretty secure about my data in V1. Some of these Vail changes are just a bit frightening.

    I am hoping that over time more is revealed that will put my concerns at ease, as I would love to build out a brand new box for Vail, but I am just not sold on these changes!

    Flad

     

     

    Thursday, May 13, 2010 5:29 PM
  • Ken, I have seen you make references to taking regular offsite server backups, and while this is definitely a great best practice, it simply isn't scalable for a lot of the WHS installs out there.
    ...
    Simply put, if you fail to take backups offsite regularly, you are vulnerable to force majeure events: fire, flood, tornado, hurricane, theft, etc. If you're comfortable with that, good for you. If you're not (and I'm not, having had a house burn to the ground due to an electrician's incompetence), it's trivially easy for most people to drop a hard drive in the office or at a friend/family member's house and swap it every few weeks.

    I'm not on the WHS team, I just post a lot. :)
    Thursday, May 13, 2010 5:43 PM
  • To Ken's point, I use MozyHome (in an admittedly somewhat convoluted setup ) to back up my valuable WHS data offsite and it works reasonably. From a "set it and forget it" standpoint, you can't really beat it.
    Thursday, May 13, 2010 6:16 PM
  • With the new Drive Extender technology, can SpinRite run on the disks? (SpinRite is a hard drive recovery/maintenance utility from www.grc.com.)

    With DE v1, the disk reported that it was the size of the complete storage pool, and SpinRite detected that the disk was reporting its size as different from the actual size and would not proceed. Will this happen with DE v2?

    My thought is to occasionally shut down my server, boot it into SpinRite, perform a check of all of my drives, and then reboot into WHS.

    Previously, with DE v1, I had a hard drive failure, and then found out that SpinRite couldn't repair the drive because of this issue.

    Friday, May 14, 2010 2:18 PM
  • Why not try it and see? Though to be honest, I doubt that SpinRite will be able to repair a DE pseudo-volume.
    I'm not on the WHS team, I just post a lot. :)
    Friday, May 14, 2010 2:23 PM
  • Ken, you really don't get it. You keep saying the chances of loss are the same as in v1. This is ridiculously far from the truth. The loss of one drive can now cause 100% data loss if all you have is large, non-duplicated files.

    Duplicating everything is simply not feasible. I currently have 10TB of data, most of which is very large video files. If everything were duplicated, that would require 26.3TB of storage, which means fourteen 2TB drives. Show me how to design that system without resorting to a rackmount server case (the most a standard desktop case supports is around 10 drives), show me the cost, and show me the power usage. If you think it will be anywhere near the current system I have, with only six 2TB drives, then I will give you the keys to my house.

    With my current setup of six 2TB drives on v1, I can have 10TB of storage, and if a drive fails I lose less than 2TB of data. On v2 I would have 10TB of storage, and if a drive fails I could lose all 10TB. If I turned on duplication and used v2 with my current hardware, I would have 3TB of storage and could survive a drive failure with zero loss, but I would have to throw out 70% of my data beforehand.
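    A rough sanity check on figures like these takes only a few lines of arithmetic. Note that the formatted-capacity factor and the DE overhead figure below are assumptions drawn from estimates in this thread, not published numbers, so treat the output as ballpark only:

```python
def usable_tb(num_drives, drive_tb=2.0, fmt=0.90, duplicated=False, overhead=0.12):
    """Ballpark usable capacity of a pool.  fmt approximates the formatted
    capacity of a nominal drive (a '2 TB' drive formats to ~1.8 TB);
    overhead is the ~12% DE allocation overhead cited in this thread.
    Both are assumptions, not published figures."""
    capacity = num_drives * drive_tb * fmt
    if duplicated:
        capacity /= 2          # every byte is stored twice
    return capacity * (1 - overhead)

print(round(usable_tb(6), 1))                   # ~9.5 TB without duplication
print(round(usable_tb(6, duplicated=True), 1))  # ~4.8 TB with duplication
```

    The exact usable figure depends on overheads Microsoft hasn't published, which is part of why estimates in this thread vary.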

    You seem completely immovable in your point of view, and it's obviously because you use your WHS differently than the majority of people. I wish you had more of an open mind, because you are ruining this product for the majority of us.

    Friday, May 14, 2010 9:31 PM
  • You can theorize all you like, but I doubt you've done any testing on this at all. I have, and have not found that I lose more than I would with DE V1 and duplication turned off. So as far as I'm concerned, the people who are crying about how the sky is falling are talking through their hats.

    Show me a real world scenario (I'm uninterested in anything that could start out with "Well, in theory...") that repeatably results in significantly higher data loss than DE V1 would in the equivalent situation and I'll be extremely interested. Until you show me that scenario, I'm not buying the doom and gloom.


    I'm not on the WHS team, I just post a lot. :)
    Friday, May 14, 2010 11:20 PM
  • Duplicating everything is simply not feasible. I currently have 10TB of data, most of which is very large video files. If everything were duplicated, that would require 26.3TB of storage, which means fourteen 2TB drives. Show me how to design that system without resorting to a rackmount server case (the most a standard desktop case supports is around 10 drives), show me the cost, and show me the power usage. If you think it will be anywhere near the current system I have, with only six 2TB drives, then I will give you the keys to my house.

    With my current setup of six 2TB drives on v1, I can have 10TB of storage, and if a drive fails I lose less than 2TB of data. On v2 I would have 10TB of storage, and if a drive fails I could lose all 10TB. If I turned on duplication and used v2 with my current hardware, I would have 3TB of storage and could survive a drive failure with zero loss, but I would have to throw out 70% of my data beforehand.

    You seem completely immovable in your point of view, and it's obviously because you use your WHS differently than the majority of people. I wish you had more of an open mind, because you are ruining this product for the majority of us.

    An open mind is probably best defined as one that sees all sides of an argument and appreciates the concerns of all parties. I am afraid you are no better than Ken in that regard. Everyone has their ideal OS in mind, but there is only one product, and it has to do everything. To be fair to Vail, it can do what you and everyone else ask; it only asks that you enable duplication, which you are not willing to do for reasons of cost. I do not believe that is a failing in Vail itself, but a problem of elevated expectations on the part of the user. It costs money to store data, and more data costs more money. Storing data easily and safely increases that cost again. Conceptually I can find nothing wrong with this premise.

    If you have not done so, I encourage you to look objectively at what Vail offers and evaluate the alternatives (FreeNAS, v1, bare disks). Then, instead of expecting a solution with all the best parts of everything, accept that there are some compromises in designing a storage system. That will be one step towards an open mind.

    Lastly, remember that you do not absolutely need to run this OS.  There are many alternatives (or combinations of them for different purposes) and some may work better than others if you keep an open mind.

     

    Saturday, May 15, 2010 4:43 AM
  • You can theorize all you like, but I doubt you've done any testing on this at all. I have, and have not found that I lose more than I would with DE V1 and duplication turned off. So as far as I'm concerned, the people who are crying about how the sky is falling are talking through their hats.

    Show me a real world scenario (I'm uninterested in anything that could start out with "Well, in theory...") that repeatably results in significantly higher data loss than DE V1 would in the equivalent situation and I'll be extremely interested. Until you show me that scenario, I'm not buying the doom and gloom.


    I'm not on the WHS team, I just post a lot. :)


    This doesn't make any sense.  Here's a simple example, no theory needed.

    4 drives, lots of 4GB files, every file is on all 4 drives, duplication is turned off.  One drive dies and you lose everything.  There is no arguing that and no testing is needed.  It looks like I will be staying with v1.

    Saturday, May 15, 2010 6:37 AM
  • Duplicating everything is simply not feasible. I currently have 10TB of data, most of which is very large video files. If everything were duplicated, that would require 26.3TB of storage, which means fourteen 2TB drives. Show me how to design that system without resorting to a rackmount server case (the most a standard desktop case supports is around 10 drives), show me the cost, and show me the power usage. If you think it will be anywhere near the current system I have, with only six 2TB drives, then I will give you the keys to my house.

    With my current setup of six 2TB drives on v1, I can have 10TB of storage, and if a drive fails I lose less than 2TB of data. On v2 I would have 10TB of storage, and if a drive fails I could lose all 10TB. If I turned on duplication and used v2 with my current hardware, I would have 3TB of storage and could survive a drive failure with zero loss, but I would have to throw out 70% of my data beforehand.

    You seem completely immovable in your point of view, and it's obviously because you use your WHS differently than the majority of people. I wish you had more of an open mind, because you are ruining this product for the majority of us.

    An open mind is probably best defined as one that sees all sides of an argument and appreciates the concerns of all parties. I am afraid you are no better than Ken in that regard. Everyone has their ideal OS in mind, but there is only one product, and it has to do everything. To be fair to Vail, it can do what you and everyone else ask; it only asks that you enable duplication, which you are not willing to do for reasons of cost. I do not believe that is a failing in Vail itself, but a problem of elevated expectations on the part of the user. It costs money to store data, and more data costs more money. Storing data easily and safely increases that cost again. Conceptually I can find nothing wrong with this premise.

    If you have not done so, I encourage you to look objectively at what Vail offers and evaluate the alternatives (FreeNAS, v1, bare disks). Then, instead of expecting a solution with all the best parts of everything, accept that there are some compromises in designing a storage system. That will be one step towards an open mind.

    Lastly, remember that you do not absolutely need to run this OS.  There are many alternatives (or combinations of them for different purposes) and some may work better than others if you keep an open mind.

     


    v1 is better in virtually every way, by any logic, except for people with a couple of MP3s and Word docs.
    Saturday, May 15, 2010 6:38 AM
  • v1 is better in virtually every way, by any logic, except for people with a couple of MP3s and Word docs.

    I disagree.  v1 is better in some ways and v2 is better in others.  Of course you do not need to see others' viewpoints if all that matters to you is your own.  I understand that and I expect Microsoft does too.

     

    Saturday, May 15, 2010 6:54 AM
  • To be fair to Vail, it can do what you and everyone else ask; it only asks that you enable duplication, which you are not willing to do for reasons of cost. I do not believe that is a failing in Vail itself, but a problem of elevated expectations on the part of the user. It costs money to store data, and more data costs more money. Storing data easily and safely increases that cost again. Conceptually I can find nothing wrong with this premise.
    While there is nothing wrong with this in general, it all comes down to the cost - the price that you have to pay compared to the alternatives.

    Now, let's consider the cost of switching from WHS v1 to v2 for a guy like me or MikeCousins, with about 10TB of data, while keeping the same level of data safety and recoverability.
    I will add 3rd party parity solutions like FlexRaid and disParity to the equation, because they have proved to work: people have been using them for at least a year and a half, and they really do protect against a single drive failure. I have personally tested FlexRaid several times and I'm fully convinced that if a HDD dies, it will restore the data.
    And these tools don't seem to be able to work with WHS v2, because they operate at the file level. The proprietary 1GB WHS v2 block format can't be deemed reliable unless we have a clear indication of when DE v2 updates the blocks, which we don't.

    10TB of data would require:
    - seven 2TB drives on WHS v1, with protection against a single drive failure: six 2TB data drives and one 2TB parity drive;
    - seven 2TB drives on WHS v2, without protection against a single drive failure, and with the possibility of losing more data than was stored on the failed drive (12% DE overhead, plus overhead for each folder on each drive, ~13-14% total);
    - fourteen 2TB drives on WHS v2, with protection against a single drive failure: duplication is the only method.

    So here you go: to move to WHS v2 you need to buy seven more 2TB hard drives. At current prices (~$140 for 2TB) that is about 1000 dollars.
    Add to this a new case (an 8-HDD case and a 16-HDD case are completely different prices), new SATA controllers (good ones, like SuperMicro, $140 with cables), a PSU, cooling and other things, and you add another few hundred.

    Is it worth paying about $1500 (fifteen hundred dollars) for the benefit of owning WHS v2 over WHS v1? I would say a big NO.
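    The arithmetic above, as a quick back-of-the-envelope script. The per-drive price matches the ~$140 figure quoted; the 'extras' allowance for case, controllers, PSU and cooling is a placeholder, not a quote:

```python
def upgrade_cost(drives_now, drives_needed, price_per_drive=140, extras=500):
    """Rough cost of moving from a parity-protected v1 pool to a
    duplication-protected v2 pool: extra drives plus an assumed
    allowance for a bigger case, controllers, PSU and cooling."""
    return (drives_needed - drives_now) * price_per_drive + extras

# seven drives (v1 + parity) -> fourteen drives (v2 + duplication):
print(upgrade_cost(7, 14))  # 1480, i.e. roughly $1500
```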
    I am mostly concerned about data safety and recoverability.
    While FlexRaid on WHS v1 gives me the ability to verify data integrity in advance, WHS v2 does not seem to have this feature. I asked about it above a few days ago - no response.
    While WHS v1 gives me the ability to read my data if the server doesn't boot, WHS v2 does not.

    I'm not even mentioning possible restrictions on drive count or data size. Currently it's unstable beyond 16TB? That could make it impossible to move to Vail if this limitation stays.

    And for losing all that, I am supposed to pay $1500? Are you serious?
    Saturday, May 15, 2010 10:33 AM
  • You can theorize all you like, but I doubt you've done any testing on this at all. I have, and have not found that I lose more than I would with DE V1 and duplication turned off. So as far as I'm concerned, the people who are crying about how the sky is falling are talking through their hats.

    Show me a real world scenario (I'm uninterested in anything that could start out with "Well, in theory...") that repeatably results in significantly higher data loss than DE V1 would in the equivalent situation and I'll be extremely interested. Until you show me that scenario, I'm not buying the doom and gloom.


    I'm not on the WHS team, I just post a lot. :)


    Ken, may I ask how you tested? You've said that you don't store files larger than 1GB on WHS; in that case you will never see a difference between WHS v1 and v2.

    And I disagree that we have to find a "real world scenario ... that repeatably results in significantly higher data loss" in order to prove that this solution is risky.
    I'm sure that when WHS v1 was released, nobody had found a "real world scenario that repeatably results in data corruption"; however, the data corruption bug was there, and it was not fixed for many months. We are here to identify the risks.
    It should be the other way around now: it should be up to Microsoft to convince us that the new model is safe. So far I don't see it.

    The thing is, if the design makes it possible to split files between drives, it will happen at some point. And according to Murphy's law, it will happen when we don't expect it. And then we lose more than we are prepared to, compared to WHS v1.

    Microsoft hasn't released the algorithm for splitting files, so we can assume the worst even if we think there are no bugs in it.
    I would assume a real world scenario would be to fill WHS v2 with 40GB Blu-ray ISOs, then delete the first ones, then fill it again. WHS tends to fill one drive to the end, so it would probably split files once one drive is close to full. This may be a real-world scenario for a movie viewer: once a film becomes boring, it is replaced with a new one.
    It is just a guess; there may be other scenarios.
    I have not tested WHS v2, mostly because of the drawbacks that I can see without installing it. I would have tried it already if it didn't have such obvious flaws. Maybe I will install it in a few weeks, depending on free time.


    Now, here is the problem with this forum.
    Ken obviously represents a perfect target for WHS. He stores only a few small files on his WHS, and he doesn't mind data duplication or triplication (Vail's overhead for duplication is somewhere between the two).
    Ken is such an ideal MS user that he is even happy when Microsoft takes something away from him, as when MS took out folder size in the backup configuration.
    Apparently, there are people with different needs: people who have a single box acting as both a WHS and a multi-terabyte, relatively safe storage system. Vail may make this impossible.

    The problem: the average Joes who buy pre-built WHS boxes in quantity probably don't visit this forum. Enthusiasts do, and there is a conflict of interests. While the average Joes are the ones who fund this project, the enthusiasts also have some influence, as they are asked for advice.
    Let's see how it goes.
    I personally would not recommend WHS if it doesn't provide NTFS. There are better non-MS alternatives, and PC backup can be handled separately relatively easily.

    Saturday, May 15, 2010 10:42 AM
  • To be fair to Vail, it can do what you and everyone else ask; it only asks that you enable duplication, which you are not willing to do for reasons of cost. I do not believe that is a failing in Vail itself, but a problem of elevated expectations on the part of the user. It costs money to store data, and more data costs more money. Storing data easily and safely increases that cost again. Conceptually I can find nothing wrong with this premise.
    While there is nothing wrong with this in general, it all comes down to the cost - the price that you have to pay compared to the alternatives.

    What you are comparing is not WHS v1 vs v2.  You have pretty much built a custom solution with WHS v1 as its base and you want to do the same thing with v2.  Remember that even v1 was designed to be run with duplication turned on for protection.  If you managed to circumvent that requirement, then good for you, but do you then expect Microsoft to maintain compatibility with your modification?

    Anyway, let's assume for the sake of argument that your comparison is valid.  As a user, you do have a clear case for not upgrading to v2.  But you are also clearly not representative of WHS's target market, so your comparison is probably valid only for a very select group who do the same types of modifications.  WHS is being designed for a wider market who know nothing of modifying storage systems, and for them it may make much more sense to upgrade.

     

    Saturday, May 15, 2010 12:00 PM
  • 4 drives, lots of 4GB files, every file is on all 4 drives, duplication is turned off. One drive dies and you lose everything. There is no arguing that and no testing is needed. It looks like I will be staying with v1.

    As stated, your example is hypothetical. Now set that up and let us know what really happens when you disconnect a drive without removing it (simulating sudden drive failure) and then remove it using the Dashboard. I have done that in a small way, with a mixed load of files on a server with two drives and no duplication. I lost the expected 50%, plus or minus a reasonable margin of error. I didn't lose all of my larger files, just some - about the same 50%.

    We need a "real world scenario" because Microsoft hasn't given us the exact algorithms Drive Extender uses for data distribution, so, for example, we don't know whether it's sensitive to file integrity. We have to try to reverse engineer DE from actual behavior. And yes, there was a "real world scenario" for the V1 data corruption issue. It was reported by exactly one person, and in its original state was basically impossible to reproduce, so it got closed as "no repro". Once Windows Home Server was in the market, though, more people started to experience file corruption, and eventually an easier repro case was found.

    Regarding being closed-minded: I do appreciate your concerns. They're based on speculation, though ("based on what Microsoft has said" when Microsoft hasn't delivered complete information, "if this is true then in the worst case that is probably also true", and various weasel words and phrases), and as far as I'm concerned experimentation trumps speculation every time. The fact that I remain unconvinced by others' speculation in the face of my own experimentation doesn't mean I have a closed mind. It means I have evidence that my opinion on the matter is correct, and nobody else has brought anything to the table. It seems to me that those who are unwilling to construct and execute a relatively simple experiment are the closed-minded ones. So rather than being armchair quarterbacks and "assuming the worst", how about y'all go act like beta testers and test something, eh? If you're passionate about this product (as I rather obviously am), you should be willing to invest some of your own time in the scenarios that interest you, to make sure they meet your needs.

    Here's a test scenario for you folks to play with: one server, two small drives (very small - the Vail minimum is best, as you don't want your test dataset to fit on a single drive), duplication off for everything. It's probably best if one drive is internal and the other external, because you want to be able to disconnect one easily, but whatever. Three different file loads: all small files (<100MB, for the sake of discussion) across multiple shares; all large files across multiple shares; and a mixture of small and large files (I went with 30% of disk space small, 70% large). Load your server up and wait - I'd recommend overnight (Microsoft has said that DE V2 implements a leveling algorithm to even out disk usage, and I would assume it's not done in real time). Disconnect a drive without warning, then remove the "missing" drive in the Dashboard. See what you've lost. I've executed the mixed file load test (twice), results as above. Maybe I was lucky? Prove me wrong, and I'll change my tune. Then I'll lean on Microsoft to change their tune. :)
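    As a point of comparison, here's roughly what this test should show if DE scattered blocks completely at random - an assumption, not the published algorithm. Under that model, a multi-block file would almost always touch both drives and be lost; my observed ~50% loss on large files suggests placement is less scattered than that.

```python
import random

def two_drive_test(num_small, num_large, blocks_per_large=10, trials=500):
    """Model of the two-drive, no-duplication test described above.
    Small files live wholly on one drive; large files are split into
    1 GB blocks that are assumed (the real algorithm is unpublished)
    to land on either drive at random.  Returns (small, large) loss
    fractions when one drive dies."""
    small_lost = large_lost = 0
    for _ in range(trials):
        failed = random.randrange(2)
        small_lost += sum(1 for _ in range(num_small)
                          if random.randrange(2) == failed)
        for _ in range(num_large):
            blocks = {random.randrange(2) for _ in range(blocks_per_large)}
            if failed in blocks:
                large_lost += 1
    return small_lost / (trials * num_small), large_lost / (trials * num_large)

small, large = two_drive_test(200, 50)
print(round(small, 2))  # ~0.5: small files track the per-drive odds
print(round(large, 2))  # ~1.0: a 10-block file almost always spans both drives
```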

    One final note: it's not that I don't store any large files at all on my server. I have an assortment of stuff, including .isos from MSDN and Technet, old virtual machines that I still need every now and then, etc. (Those .vhds make great "large file" candidates, since a clean Windows 7 Home Premium .vhd is around 9.5 GB. :) ) I just don't store terabytes of DVD and BluRay .iso files.


    I'm not on the WHS team, I just post a lot. :)
    Saturday, May 15, 2010 1:43 PM

  • What you are comparing is not WHS v1 vs v2.  You have pretty much built a custom solution with WHS v1 as its base and you want to do the same thing with v2.  Remember that even v1 was designed to be run with duplication turned on for protection.  If you managed to circumvent that requirement, then good for you, but do you then expect Microsoft to maintain compatibility with your modification?

    Anyway, let's assume for the sake of argument that your comparison is valid.  As a user, you do have a clear case for not upgrading to v2.  But you are also clearly not representative of WHS's target market, so your comparison is probably valid only for a very select group who do the same types of modifications.  WHS is being designed for a wider market who know nothing of modifying storage systems, and for them it may make much more sense to upgrade.

    You may call it circumvention, but I prefer to call it an improvement to the original product - an improvement that could have been polished and made available to the mainstream, since it has existed since 2008. But that never happened.
    I don't expect Microsoft to maintain compatibility with a circumvention, but I might have expected Microsoft to be more innovative and come up with a more efficient solution.

    I realize that I'm not a typical WHS target, but I'm not alone. If you check the HTPC section on avsforum.com, you will find many people building media storage servers on WHS v1 plus parity protection, or unRAID (a $120 product), or something else. If Microsoft has decided to ignore that market, fine.

    The message I'm getting essentially says: "Go away, this product is not for you; it is for the average Joe who doesn't know any better."
    That statement only holds until the average Joe either finds something better or gives up because of the price. And he will find something better if another company with a better product and a good marketing budget shows up (like Apple).
    I don't want Microsoft to be in such a vulnerable position. Vail's life-cycle hasn't even started yet, but better DIY products already exist.

    People's computer literacy improves, and storage needs grow - very fast. More and more people buy HD video downloads, record HD broadcasts, and shoot their own HD video. AVCHD video can take more than 10GB per hour. Storing several terabytes of data may soon become common even for the average Joe.
    And in this economy, you don't expect people to worry about the price they have to pay to store their data safely? Maybe because a couple of terabytes will be enough for everyone, forever?

    I agree that right now, for the average Joe who doesn't know any better and had no choice other than the previous WHS v1 or data loss, it is a step forward. But is it a big enough step, considering the future years of the product's life-cycle?
    Sunday, May 16, 2010 8:25 AM
  • It may be inefficient to you, but to me it's simple and unlikely to cause a problem under a typical high-load rebuild cycle, which occurs during failure. Once again it's about your priorities and risk profile, and if I remember right, the data you store is not critical, so losing it is an inconvenience rather than a disaster. Moreover, it is not so inconvenient that you will give up an extra ~60% of efficiency to protect it, but you will give up 20% and accept some level of risk.
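    The capacity trade-off being argued here (full duplication vs. single parity) can be sanity-checked with a quick back-of-envelope sketch. The function names below are invented for this illustration and are not part of WHS or DE:

```python
# Rough capacity arithmetic: mirroring (WHS folder duplication) stores
# every block twice, while RAID-5-style single parity sacrifices one
# drive's worth of capacity per pool of equal-size drives.

def usable_fraction_mirror() -> float:
    """Mirroring: only 50% of raw capacity is usable."""
    return 0.5

def usable_fraction_parity(num_drives: int) -> float:
    """Single parity: (n - 1)/n of raw capacity is usable."""
    if num_drives < 2:
        raise ValueError("parity needs at least 2 drives")
    return (num_drives - 1) / num_drives

for n in (3, 4, 5, 8):
    print(f"{n} drives: mirroring {usable_fraction_mirror():.0%} usable, "
          f"parity {usable_fraction_parity(n):.0%} usable")
```

    On a five-drive pool, single parity leaves 80% of raw capacity usable versus 50% for duplicating everything, which is roughly the gap between the "20%" and "~60%" figures being traded in this exchange.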

    I hope you realize this is a very specialized risk profile. You may have the technical knowledge to analyze your own risk profile and subsequently customize a storage system to that risk profile, but this is not something that consumers will be able to do. Even giving them the option can result in confusion or misconfiguration and ultimately affect reliability.

    If I may take a moment, this discussion is unlikely to have any meaningful resolution because the product is in the unfortunate position of having to fill too many roles with conflicting requirements. It would be helpful to bear in mind that you are not going to be the only user or group with your requirements who needs to have a working solution. I expect this to change in the future, but for now we can debate till the cows come home and it will probably not change anything.

    This thread is about DE, a subsystem that the target user will never see or know anything about, but will fundamentally affect the safety of his data. We are not the target user by a long shot, but we need to think in terms of the target user for our feedback to mean anything. Making suggestions based on our own intended usage, while valid, is very likely to be irrelevant to the product.

    On the bright side, I think Mark reads all this, so if there is ever a WHS pro version, I'm sure he will be putting your feedback to good use.

    Sunday, May 16, 2010 11:23 AM
  • Our task in this forum is to test and comment on DE and to make suggestions to improve it, isn't it?

    And if we do this well, we focus on home users' needs first and on our own needs second. Why? Only if this product is a commercial success can we expect future development. But let's not forget: one of the major reasons for buying a WHS (in comparison to other cheap solutions already on the market) is its flexibility.

    Maybe the solution Micksh is using is special - the reason why he does it is not. And the reason why he does it is highly relevant to average home users.

    The points/reasons I took from Micksh are:

    • There are several kinds of data (like TV recordings, Blu-ray movies, downloads of software test versions, etc.) for which you need neither duplication nor backup, or can't afford them (because of price or practicality) - I completely agree with that, and average home users do as well. If you think that's not true, please explain why. => Duplication should be an option, turned on by default.
    • The "overhead" for the kinds of data that are duplicated should be as small as possible. And yes, of course, what could be wrong with comparing/benchmarking DE against third-party solutions? Let's challenge Microsoft, not just accept everything. I'm quite sure everyone would prefer a better Drive Extender out of the box (rather than paying extra for and implementing third-party solutions).
    • In the case of a single HDD failure with non-duplicated data: it's acceptable to lose the data on that single hard drive, but not more. I completely agree with that! The average home user (like me) is used to a PC with several HDDs and knows which data are on which drive. When a single hard drive is gone, he knows exactly which data are lost (and have to be restored from the backup).

    To Ken: thanks for testing that third point. I'm astonished by the results. But I fear that if rebalancing is introduced, the results will change.

    To Mark/Microsoft: please let us know whether DE tries to keep non-duplicated data on the same HDD (as far as possible). That would help settle things down a lot.

    Sunday, May 16, 2010 1:05 PM
  • Load leveling is, I understand, already there. Perhaps a better test would be to set up a server with a single drive, load it with data, and then add a second drive and wait (overnight or so) for leveling to distribute the data. Then disconnect a drive and see what happens. I don't recall for sure, but I don't think that was the scenario I tested.

    I'm not on the WHS team, I just post a lot. :)
    Sunday, May 16, 2010 2:36 PM
  • For discussion

    I really tried hard to understand the necessity of duplication (RAID 1), but I can't get it (given a backup at least once a day). An automated daily backup is more than most home users have today. Maybe I missed something, so please help me "open my mind" with good arguments... ;-)))

    I made a table for discussion, with the different kinds of data and the needs for duplication/backup that I see for most of us:

    Document type          | Share of total storage | Change/data load to WHS                  | Recoverable (without duplication and backup) | Needs duplication (if backed up at least daily) | Needs backup (independent of duplication)      | Needs encryption
    Music                  | low to medium          | periodically                             | yes, but a lot of work                       | no                                              | yes                                            | no
    Photos                 | low to medium          | periodically                             | no                                           | no                                              | yes                                            | yes (personal privacy), but most people can live without
    Videos (self-produced) | high                   | periodically                             | no                                           | no                                              | yes                                            | yes (personal privacy), but most people can live without
    Documents              | low                    | maybe in real time (e.g. writing a book) | no                                           | yes, in certain cases                           | yes                                            | yes
    TV recordings          | high                   | daily                                    | yes, but may take a long time                | no                                              | depends on user preference                     | no
    Movies                 | high                   | periodically                             | yes, but a lot of work                       | no                                              | yes, but can't be done (price or practicality) | no
    Software, drivers      | medium                 | periodically                             | yes, but a lot of work                       | no                                              | yes                                            | no
    Backups                | high                   | periodically                             | yes                                          | not necessary for most people                   | not necessary for most people                  | yes, for some people

    So Documents are the only category where I really see the necessity of duplication in addition to daily backup (when an author writes a book, for example, a gap of some hours between backup cycles can mean catastrophic data loss). But this is only a small amount of the overall data storage.

    The last column in the table covers the need for encryption. DE supports EFS, but not BitLocker (WS 2008 R2 could). For me - and I think for a lot more people - this would be an important feature in WHS, to keep the data stored on it really safe (not only against failure, but also against abuse in case of theft).

     

    Allow me to give some examples:

    • I would like to back up my company's BitLocker-protected notebook to WHS. The data are then unprotected, which could really get me into trouble if the WHS is stolen. Abuse of the data could land me in jail, or at least cost a lot of money.
    • Journalists, authors, lawyers and software developers who take documents home from work...
    • The family father who wants to be sure that he doesn't have to search the internet for photos of his children/family when the WHS is stolen...

    So my questions on this point:

    • How important is this aspect for you, and for the target market of the average home user, in your opinion?
    • If it is important and could be a compelling reason for buying a WHS over another NAS: how can we bring BitLocker and DE together?
    Sunday, May 16, 2010 3:19 PM
  • Thanks for the quick answer. I was referring to the second point from Mark ("...A periodic rebalance operation is considered for the next version...").

    I can't do the test myself at the moment; I'm running it in a virtual machine and don't have sufficient disk space, but I will go for another 2 TB drive tomorrow...

    Sunday, May 16, 2010 3:30 PM
  • Bitlocker interfaces with the TPM in your laptop (usually; you can use it without a TPM but I don't entirely get the point then) and operates at a level below VSS, which it must do in order for VSS to work correctly on client computers. Since Windows Home Server client backup uses VSS, you're stuck. I'm not saying it's completely impossible, and I'll vote up a suggestion if you link one, but don't hold your breath.

    As for your table, I personally have relatively little video (and no recorded TV at all) and a huge load of music and photos. I also have a fairly large load of documents, etc. Also, I don't consider the server backup to be a substitute for duplication. If duplication is on, then losing a drive means I have to replace it. That's all. My files are still on my server, they're still accessible (though some are perhaps not protected until I remove the failed drive and replace it), backups still happen, changes propagate immediately to the duplicate files (so I don't lose work through drive failure), etc. The server backup will take quite a long time to restore if there's a lot of data, so to me it's primarily of use in a force majeure disaster recovery scenario.


    I'm not on the WHS team, I just post a lot. :)
    Sunday, May 16, 2010 3:56 PM
  • I could swear Mark said elsewhere that load leveling is already in place...

    As for testing, you can easily do a small test in a VM. Create two minimum size disks that you add to server storage, make sure they're the only storage, copy some large files, and see what happens when you "remove" one disk, i.e. remove it as a vhd for the VM.


    I'm not on the WHS team, I just post a lot. :)
    Sunday, May 16, 2010 3:59 PM
  • BitLocker: I already made a suggestion for BitLocker (or another encrypted safe functionality). Here is the link:

    https://connect.microsoft.com/WindowsHomeServer/feedback/details/558338/safe-functionality-with-bitlocker-and-efs

    I hope it works; please tell me how to do it if not.

    The need for a TPM with BitLocker: I think this depends on the attack scenario. If the notebook is shut down, the data are safe even without a TPM, unless the attacker repeatedly has access to it without your knowledge.

    My table: yes, I agree, the share of data storage is very individual. What I meant is: a video file is much bigger than an MP3 or a DOC. So if you use all these kinds of data on your WHS, the relation between the different types is probably as I tried to express in the table.

    Duplication vs. backup: thanks, I understand your point of view. So far I have never had a drive failure at home (only with company notebooks). I use a 3-drive architecture in every client PC: one for the system, one for data, and one for backups of system and data. Data recovery with Acronis would take at most one hour (some data, like VMs for software testing, would be lost). I would use duplication for all data that I need to access without losing time, but I can wait one or two hours until I can access music, photos and videos again.

    OK, to be honest: it would be a disaster if a drive failure happened in the evening and we couldn't watch a movie... but on the other hand... my wife and I would go to bed early, not the worst thing... ;-)))

    I share your view that duplication of all data is the most comfortable way. My point is: when you have limited resources (and a lot of movies), backup will be sufficient for the biggest part of your data, and duplication is a nice and important feature for the rest. But the definition of this (in my opinion smaller) part of the data is highly individual.

    Sunday, May 16, 2010 5:02 PM
  • Very interesting, and educational discussion. IMO, for me... if it requires
    backing up using WHS, then in most cases it requires duplication, with one
    exception in my case, which is backups of system backups produced by other
    means (Windows Backup images, other 3rd party BU images, etc) – I see no
    reason for duplication to be enabled for those.
     
    Art (artfudd) Folden
    ~~~~~~~~~~~~~~~~
    Sunday, May 16, 2010 6:04 PM
  • Although v1 was flexible in this respect, in my opinion that was a side effect rather than a design consideration.  Unlike Vail, v1 was not built from the ground up, but rather was built upon Server 2003 and the demigrator added to handle redundancy.  NTFS was used as it was able to handle the additional requirements with only minimal modification.  The end result was what you know as a highly flexible storage solution, but I doubt that was the intention of MS.  So to answer you, I doubt that flexibility is any priority of MS at all.  DEv2 is pretty much confirmation of that.

    I agree that most average users would use WHS to store movies and other large content, as that is the direction MS has chosen to take it for consumer appeal.  However, I do not agree such content necessarily does not need duplication, as that is a user preference.  The problem is that average users are not competent to assess their preferences until exposed to the consequences at least once.  My guess is that they won't mind losing content they have already watched, but they will raise ____ if unwatched content goes missing.  If there is a way to make such distinctions automatically, that would be a winner, but if not, the safer way would be to protect all content.

    Making duplication optional is bad because it opens the door for abuse without sufficient due consideration, a common failing among average users.  Yes the argument could be that they are responsible for their choices, but I feel it is inappropriate to give them dangerous tools which require a level of knowledge that they cannot be expected to possess.  In short, duplication should be required.  At most, require a reg hack to disable it, making it very inaccessible to average users.

    As for efficiency complaints, I have already gone over tradeoffs in that area and I don't think it's necessary to do so again.  Of course I would appreciate any improvements as much as the next person, but I don't think MS needs to be told that space efficiency is important.  I am choosing to believe that they have already explored the limits and this is the best that can be done at this time.  Certainly it is not something that can be changed this late in the implementation.  Further improvements would almost certainly only be possible with a redesign.

    For the issue of what is "acceptable" loss in single drive failures, Ken has already tested this and it seems to be no different from v1.  I don't think any more need be said about that as it seems to be a non issue.  I will be doing my own tests once I find a different test platform (with more than two drive ports) and I will post my findings on that.

    I would like to highlight an important caveat that unlike your bare disk scenario, you have no control over what gets lost , even if it is only the data on the failed disk, because of course you have no control over what goes where.  Would it be less work to restore a single episode of every tv series you own compared to the entire series?  I am speculating of course, but since your scenario was also speculation, it's ok.
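    That caveat lends itself to a toy simulation. The random allocator below is a stand-in invented for this sketch, not DE's actual placement logic, but it shows how spreading chunks across drives means a single failure can touch a little of many files:

```python
# Toy model: each file is split into fixed-size chunks placed on random
# drives of a pool. When one drive fails, a file is "affected" if even
# one of its chunks was on that drive.

import random

def simulate(num_drives: int, num_files: int, chunks_per_file: int, seed: int = 0) -> int:
    """Return how many files lose at least one chunk when drive 0 fails."""
    rng = random.Random(seed)
    affected = 0
    for _ in range(num_files):
        drives = [rng.randrange(num_drives) for _ in range(chunks_per_file)]
        if 0 in drives:  # at least one chunk sat on the failed drive
            affected += 1
    return affected

print(simulate(num_drives=4, num_files=100, chunks_per_file=8))
```

    With four drives and eight-chunk files, only a quarter of the raw capacity is gone, yet the large majority of files are typically damaged, because each file needs only one chunk on the failed drive to be incomplete. That is the "single episode of every series" scenario in numbers.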

    I guess overall, I agree with you that we are here to improve the product, but we disagree on how to do so.  I think at this stage we should be focusing on incorrect behaviour or minor enhancements rather than limitations in the implementation.  Also, I feel that as beta testers, we are here to work with the developers and not to criticize them.  There is an awful lot of that, and I feel it is mostly undeserved and frequently uninformed.

    Yes we want to improve Vail, but should we first try and understand its purpose?  If we accept that it is a consumer storage solution, designed to be simple and safe, then that is what we should be trying to help Vail to become.  That is my purpose for being here and doing this.

     

    Sunday, May 16, 2010 6:27 PM
  • I don't hope to influence anyone with arguments, as I feel everyone needs to think for themselves what makes sense to them.  I can only provide my own usage case as a possible alternative.

    I see duplication not as an alternative or complement to backup, as it does not protect from catastrophe.  For that reason, my truly irreplaceable personal files are directly backed up to multiple cloud storage systems.  Other data which is recoverable or regenerable, I do not bother to back up.  In a worst-case scenario, I could recover my data from the cloud and also regenerate or restore less important data from their original sources, at varying levels of hassle.

    I use duplication purely as a means to avoid having to do all that under normal day to day running.  Events such as drive failures are surely going to happen, since drives do fail eventually.  The advantage of using this strategy is that under normal operation, I do not have to do anything heroic to maintain or restore integrity.  With a little training or coaching, it should even be possible for my wife to manage such events in my absence.  There is a definite advantage in peace of mind with such a setup.

    I will not say this is the only way or even the best way to store data.  Much depends on the individual's priorities, how much cash they have to spend, how much their time is worth, or how risk averse.  This is merely my own solution to the universal storage problem, which I came up with after much pain and suffering and sleepless nights, so if it helps you, please be my guest.

    Data security is not really my thing, so I will leave that for others more competent than myself.

    HTH.

     

    Sunday, May 16, 2010 8:09 PM
  • roddy_0, you have a very reasonable point here. I agree that the discussion needs to be constructive or it will lead nowhere.
    I don't know anything about MS product development strategy, but they have probably discussed the implementation with partners who build WHS boxes, like HP, Acer and others. So they know what they are doing, and this implementation is set in stone already.

    At this point I can only ask Microsoft for relatively small changes that would open a back door for DIY improvements, without any support or commitment from Microsoft. Just provide the ability to hook up 3rd party parity tools, and knowledgeable people will figure out the rest without imposing any responsibility on Microsoft.

    For that I would request:

    1. The ability to read the DE NTFS system (i.e. the 1 GB file chunks) within Vail - the ability to read the full HDD content.
    2. Knowledge of when these 1 GB chunks are updated. Or, better, the possibility of disabling updates to the chunks when the data in the storage pool has not changed. This is the most important request, because parity would become invalid if DE silently updated the chunks with its internal auxiliary information.
    The thing that made 3rd party parity tools work on WHS v1 was that WHS never altered NTFS files when the user didn't update them. It never moved files across the drives either. Shuffling files between the drives can be dealt with, but it adds difficulties.

    These would theoretically allow parity tools to be hooked up. It would not be as flexible as WHS v1, because with WHS v1 and FlexRAID I can exclude duplicated folders from the parity calculation. Finding out how reliable this would be would still require a lot of testing, but if all file changes can be controlled by the user, it should be feasible to make it work.
    Again, no official MS support is requested.

    3. A tool that tells what the 1 GB chunks contain would be appreciated. I would like to pick a file and at least see which physical HDD it is located on.

    4. The inability to read Vail disks on other PCs is still a showstopper. But given the popularity of this request, I hope it will be addressed some day.

    Would those be fair requests?
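    The block-parity idea behind these requests can be illustrated with a minimal XOR sketch. This is a generic illustration of the technique (written for this discussion), not Vail's chunk format or FlexRAID's implementation:

```python
# Single-parity over equal-size chunks: the parity chunk is the byte-wise
# XOR of all data chunks, and any ONE lost chunk can be rebuilt by XORing
# the parity chunk with the surviving chunks.

def xor_parity(chunks: list[bytes]) -> bytes:
    """Compute the byte-wise XOR of equal-length chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover the single missing chunk from parity plus survivors."""
    return xor_parity(surviving + [parity])

chunks = [b"chunk-on-disk-1!", b"chunk-on-disk-2!", b"chunk-on-disk-3!"]
parity = xor_parity(chunks)
restored = rebuild([chunks[0], chunks[2]], parity)  # "disk 2" failed
assert restored == chunks[1]
```

    This is why request 2 matters: if the storage layer silently rewrites chunks, every such rewrite invalidates the corresponding parity chunk unless the parity tool is told about it.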

    Sunday, May 16, 2010 11:55 PM
  • Number 4 is at the top of my list.

    --
    ______________
    BullDawg
    Associate Expert
    In God We Trust
    ______________
     
    "micksh" <=?utf-8?B?bWlja3No?=> wrote in message news:a13c5dc9-44f6-46c1-a1da-c6bf6e442177...

    roddy_0, you have very reasonable point here. I agree that discussion needs to be constuctive or it will lead to nowhere.
    I don't know anything about MS product development strategy but they probably have discussed implementation with partners who build WHS boxes like HP, Acer and others. So they know what they are doing and this implementation is set in stone already.

    At this point I can only ask Microsoft about relatively small changes as to open a back door for DIY improvements. Without any support or commitment from Microsoft. Just give an ability to hook up 3rd party parity tools and knowledgeable people will figure out the rest without imposing any responsibility to Microsoft.

    For that I would request:

    1. The ability to read the DE NTFS system (i.e. the 1 GB file chunks) within Vail - the ability to read full HDD content.
    2. Knowledge of when these 1 GB chunks are updated. Or, better, the possibility of disabling updates to the chunks when data in the storage pool has not changed. This is the most important request, because parity would become invalid if DE silently updated the chunks with its internal auxiliary information.
    The thing that made 3rd party parity tools work on WHS v1 was that WHS never altered NTFS files when the user didn't update them. It never moved files across the drives either. Shuffling files between the drives can be dealt with, but it will add difficulties.

    These would theoretically allow hooking up parity tools. It would not be as flexible as WHS v1, where I can exclude duplicated folders from the parity calculation in FlexRaid. Establishing the reliability of this would still require a lot of testing, but if all file changes can be controlled by the user, it should be feasible to make it work.
    Again, no official MS support is requested.

    3. A tool that tells what the 1 GB chunks contain would be appreciated. At minimum, I would like to pick a file and see which physical HDD it is located on.

    4. The inability to read Vail disks on other PCs is still a showstopper. But given the popularity of this request, I hope it will be addressed some day.

    Would those be fair requests?


    BullDawg
    Monday, May 17, 2010 12:17 AM
  • ...
    1. The ability to read the DE NTFS system (i.e. the 1 GB file chunks) within Vail - the ability to read full HDD content.
    ...

    Mick, have you got Vail installed anywhere for testing? If so, you know that you have full access to the NTFS "volumes" that Drive Extender V2 creates. If I remember how FlexRAID works, you don't need anything beyond that, because FlexRAID allows you to limit your protection to a single file if you want, so it must lie on top of the file system. DE V2 is beneath the file system, so there's no problem.

    If you don't have Vail installed, why not?


    I'm not on the WHS team, I just post a lot. :)
    Monday, May 17, 2010 2:16 AM
  • roddy_0, you have a very reasonable point here. I agree that the discussion needs to be constructive or it will lead nowhere.
    I don't know anything about MS product development strategy, but they have probably discussed the implementation with partners who build WHS boxes, like HP, Acer and others. So they know what they are doing, and this implementation is set in stone already.

    At this point I can only ask Microsoft for relatively small changes that open a back door for DIY improvements, without any support or commitment from Microsoft. Just give us the ability to hook up 3rd party parity tools, and knowledgeable people will figure out the rest without imposing any responsibility on Microsoft.

    For that I would request:

    1. The ability to read the DE NTFS system (i.e. the 1 GB file chunks) within Vail - the ability to read full HDD content.
    2. Knowledge of when these 1 GB chunks are updated. Or, better, the possibility of disabling updates to the chunks when data in the storage pool has not changed. This is the most important request, because parity would become invalid if DE silently updated the chunks with its internal auxiliary information.
    The thing that made 3rd party parity tools work on WHS v1 was that WHS never altered NTFS files when the user didn't update them. It never moved files across the drives either. Shuffling files between the drives can be dealt with, but it will add difficulties.

    These would theoretically allow hooking up parity tools. It would not be as flexible as WHS v1, where I can exclude duplicated folders from the parity calculation in FlexRaid. Establishing the reliability of this would still require a lot of testing, but if all file changes can be controlled by the user, it should be feasible to make it work.
    Again, no official MS support is requested.

    3. A tool that tells what the 1 GB chunks contain would be appreciated. At minimum, I would like to pick a file and see which physical HDD it is located on.

    4. The inability to read Vail disks on other PCs is still a showstopper. But given the popularity of this request, I hope it will be addressed some day.

    Would those be fair requests?

    1. I don't understand your first suggestion.  You want to be able to log in to Vail and open files which are stored on the shares?  If so, this is already available, just like in v1, except that instead of D:\shares there are now different mapped drives for each share.

    2. If I am interpreting you correctly, you want to be able to either prevent DE from moving data between disks with its leveling logic, or less preferably be able to detect when it does so?

    3. This would actually be quite useful for me too.  I would vote on this.

    4. Yes I have already voted on this one.  I hope never to have to do this, but you can never tell when the option will come in handy.

    By all means log your suggestions and get people to vote on them.  With enough support, you might be able to get them implemented.

     

    Monday, May 17, 2010 3:24 AM
  • 1. The ability to read the DE NTFS system (i.e. the 1 GB file chunks) within Vail - the ability to read full HDD content.
    ...

    Mick, have you got Vail installed anywhere for testing? If so, you know that you have full access to the NTFS "volumes" that Drive Extender V2 creates. If I remember how FlexRAID works, you don't need anything beyond that, because FlexRAID allows you to limit your protection to a single file if you want, so it must lie on top of the file system. DE V2 is beneath the file system, so there's no problem.

    If you don't have Vail installed, why not?

    I don't have Vail installed partly because of the other showstopper - the inability to read data disks on other PCs. I just don't feel any urgency to invest my time in Vail testing because of that. I don't expect this to be fixed any time soon, so it can wait. I will install it later, though, when time allows.

    I am glad to know that item #1 from my list is not an issue. But, as I stated, the most important thing for FlexRaid to work is item #2.
    The key thing for FlexRaid to work is that the OS doesn't change files that are protected. WHS v1 can't possibly do that. WHS v2 can, because it has a proprietary layer on top of the file system. If there is a way for FlexRaid to work on Vail, the new DE 1 GB chunks would have to be protected by parity. There is more to it than that, but this is the basis: the 1 GB chunks must be untouched once parity is created. If Vail changes a few bytes in them - say, if it writes a last access time or something - the whole parity will become invalid and parity protection will not make any sense.

    The way FlexRaid works on WHS v1 now is that it protects folders like c:\fs\T\DE\shares\Videos, c:\fs\K\DE\shares\Videos, etc.
    The only way FlexRaid might work on Vail is if it protects files like [Physical disk 1]\DE_1GB_Chunk1, [Physical disk 1]\DE_1GB_Chunk2, ..., [Physical disk 2]\DE_1GB_Chunk1, etc.

    I don't know what the 1 GB chunks look like when read from Vail, but I hope you understand what I'm trying to say even if my terms are inaccurate.
    In WHS v1 it was simple: I know when I change my data, and I update parity after that. In WHS v2 I have no control over these chunks.

    Now, to me it doesn't make much sense to install Vail and test whether these chunks are arbitrarily changed or not. Even if I found after months of testing that Vail doesn't change them, I couldn't be sure that they won't be altered when some event happens - an event that wasn't covered by testing. Such an event could ruin the whole parity scheme and give me a false perception that my data is protected.

    The testing would not mean much.

    At this point all I need is information disclosure from Microsoft - how and when Vail updates the 1 GB chunks. Preferably also the possibility of disabling those updates.
    Only after I have this information can I move forward.

    I don't think I am being arrogant in refusing to install Vail (although it may look that way to other people). For me, it really needs to be more deterministic and controllable.

    I'm not holding my breath waiting for this information though. I can understand if MS has other priorities.

    Monday, May 17, 2010 3:41 AM
  • The way FlexRaid works on WHS v1 now is that it protects folders like c:\fs\T\DE\shares\Videos, c:\fs\K\DE\shares\Videos, etc.
    The only way FlexRaid might work on Vail is if it protects files like [Physical disk 1]\DE_1GB_Chunk1, [Physical disk 1]\DE_1GB_Chunk2, ..., [Physical disk 2]\DE_1GB_Chunk1, etc.

    I don't know what the 1 GB chunks look like when read from Vail, but I hope you understand what I'm trying to say even if my terms are inaccurate.
    In WHS v1 it was simple: I know when I change my data, and I update parity after that. In WHS v2 I have no control over these chunks.

    If I understand how this works in v1, it is protecting individual files, correct?  And I presume it is possible to totally regenerate a file purely from the parity information when the containing drive fails and the only copy goes missing.  By that logic, doesn't that mean you can simply apply the same protection at the file level in DEv2 without regard to the blocks?  The one additional concern would be that the parity now needs to be protected from being lost together with the original file, but I think you could just use duplication to protect that.

    This actually sounds like it should already work.  Why not do a test on this?

     

    Monday, May 17, 2010 3:53 AM
  • roddy_0, that is only partially correct.
    FlexRaid protects individual files, yes. It can regenerate a file based on parity - correct too.
    The same logic can't be applied to Vail, though.

    The units that FlexRaid (or disParity, or similar tools) protect are physical drives. Hard drives. If an HDD fails, it regenerates the files - but only if it knows on which hard drive the files were located.
    I will try to illustrate it. FlexRaid reads files like this:
    [My WD Green 2TB disk1]\some_file1
    [My WD Green 2TB disk1]\some_file2
    [My Samsung SpinPoint 1.5TB disk2]\some_file3
    [My Samsung SpinPoint 1.5TB disk2]\some_file4
    [My Seagate disk3]\some_file5
    [Parity drive(s)]

    Then if [My Seagate disk3] fails, it will regenerate all the files on it based on the other drives and parity. But in order to do that, it must know which files were stored on which physical drives.
    I listed directories like these in a previous post: c:\fs\T\DE\shares\Videos, c:\fs\K\DE\shares\Videos
    c:\fs\T is one physical drive, c:\fs\K is another physical drive. c:\fs\T represents [My WD Green 2TB disk1] and c:\fs\K is a nickname for [My Samsung SpinPoint 1.5TB disk2].
    The paths are NTFS mount points for these drives. This is how it works in WHS v1. Can I do the same thing in Vail? No.

    A path like \\WHS\Videos\some_file will not work because it doesn't provide information about the physical drive that contains some_file.

    The only way FlexRaid might work is if it works with files like [My WD Green 2TB disk1]\DE_1GB_Chunk1, [My WD Green 2TB disk1]\DE_1GB_Chunk2 (not sure what they are called in Vail).
    But then my concern from the previous post applies.
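The per-drive parity model described in this post can be sketched in a few lines. This is an illustration of the principle only, not FlexRaid's actual implementation; the drive contents are stand-in byte strings.

```python
def xor_bytes(blocks):
    """XOR byte strings together, zero-padding shorter ones to the longest."""
    size = max(len(b) for b in blocks)
    out = bytearray(size)
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# One "unit of protection" per physical drive, as in the post above.
disk1 = b"files on the WD Green 2TB"
disk2 = b"files on the SpinPoint 1.5TB"
disk3 = b"files on the Seagate"

parity = xor_bytes([disk1, disk2, disk3])

# If disk3 fails, rebuild its contents from the survivors plus parity.
rebuilt = xor_bytes([disk1, disk2, parity])
assert rebuilt[:len(disk3)] == disk3
```

The catch discussed in this thread follows directly: if any surviving drive's bytes change after parity was computed, the same rebuild silently produces garbage.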
    Monday, May 17, 2010 4:24 AM
  • Mick, please go back and read what I wrote, instead of repeating your preconceived notions about Drive Extender V2. DE V2 exposes standard NTFS volumes, which contain your files.

    To be blunt, if you don't want to install Vail, you have nothing to contribute to a discussion of Vail functionality except in a theoretical sense. And since your theories are based on incorrect information, they're worthless. Please go gain some actual, practical, first hand knowledge of what you're talking about, so you can contribute.

     


    I'm not on the WHS team, I just post a lot. :)
    Monday, May 17, 2010 4:35 AM
  • ...
    Can I do the same thing in Vail? No.

    The path like \\WHS\Videos\some_file will not work because it doesn't provide information about physical drive that contain the some_file.
    ...

    Oddly enough, several people have told you repeatedly that you can.

    And the FlexRAID documentation is extremely clear on the question of parity on network shares: you can do that if you want. (Not that it matters; see the previous statements on the subject.) You need to learn more about your own tools...


    I'm not on the WHS team, I just post a lot. :)
    Monday, May 17, 2010 5:33 AM
  • roddy_0, that is only partially correct.
    FlexRaid protects individual files, yes. It can regenerate a file based on parity - correct too.
    The same logic can't be applied to Vail, though.

    The units that FlexRaid (or disParity, or similar tools) protect are physical drives. Hard drives. If an HDD fails, it regenerates the files - but only if it knows on which hard drive the files were located.
    I will try to illustrate it. FlexRaid reads files like this:
    [My WD Green 2TB disk1]\some_file1
    [My WD Green 2TB disk1]\some_file2
    [My Samsung SpinPoint 1.5TB disk2]\some_file3
    [My Samsung SpinPoint 1.5TB disk2]\some_file4
    [My Seagate disk3]\some_file5
    [Parity drive(s)]

    Then if [My Seagate disk3] fails, it will regenerate all the files on it based on the other drives and parity. But in order to do that, it must know which files were stored on which physical drives.
    I listed directories like these in a previous post: c:\fs\T\DE\shares\Videos, c:\fs\K\DE\shares\Videos
    c:\fs\T is one physical drive, c:\fs\K is another physical drive. c:\fs\T represents [My WD Green 2TB disk1] and c:\fs\K is a nickname for [My Samsung SpinPoint 1.5TB disk2].
    The paths are NTFS mount points for these drives. This is how it works in WHS v1. Can I do the same thing in Vail? No.

    A path like \\WHS\Videos\some_file will not work because it doesn't provide information about the physical drive that contains some_file.

    The only way FlexRaid might work is if it works with files like [My WD Green 2TB disk1]\DE_1GB_Chunk1, [My WD Green 2TB disk1]\DE_1GB_Chunk2 (not sure what they are called in Vail).
    But then my concern from the previous post applies.

    OK, I think I understand what's going on now.  It isn't protecting files in isolation, but rather generating its parity from a set of files stored on different drives.  The regeneration would happen from the parity plus the combined set of files, minus the missing file.

    The best I can say is that Vail doesn't have this yet.  There is no way to address a block at the moment, since that is filesystem-level functionality and DE is below the filesystem.  What you are essentially asking for is a filesystem driver which allows you to read the DE blocks directly.  I'll be straight with you: I don't see this happening in v2.  The right way to do this, in my opinion, is to add API functions to the block storage manager to allow addressing and reading of the blocks.  Unfortunately, APIs have to go through rigorous testing to ensure there are no negative effects on the system, and that's a tall order at this stage.

    I could be wrong, of course; the only way to be sure is to give it a try.  If you really want to see this work, invest some time and energy in it.  There is no better person to do it than someone with a vested interest.

     

    Monday, May 17, 2010 5:40 AM
  • The right way to do this in my opinion is add API functions to the block storage manager to allow addressing and reading of the blocks.  Unfortunately, APIs have to go through rigorous testing to ensure there are no negative effects on the system, and that's a tall order at this stage.
    I think we are on the same page, if by block you mean what I call a 1 GB chunk (not sure what the proper term is).
    There should be no need to add an API. Microsoft certainly has all the APIs already (otherwise how would they manage all this?), and they should have passed all testing already. The request should be about disclosing part of this API.

    Then my request #3 (that you promised to vote for), "A tool that tells what 1GB chunks contain", could easily be built on top of this.
    And I think this is an important complement to the request for Vail disks being readable on other computers - you would need to know which disk to read when you want to find a certain file. I actually posted a comment to this request https://connect.microsoft.com/WindowsHomeServer/feedback/details/554324/windows-home-server-code-name-vail-drives-should-be-readable-on-non-vail-computers?wa=wsignin1.0
    saying that we need to know where files are physically located. These are all parts of the same equation.
    Monday, May 17, 2010 6:46 AM
  • ...
    Can I do the same thing in Vail? No.

    The path like \\WHS\Videos\some_file will not work because it doesn't provide information about physical drive that contain the some_file.
    ...

    Oddly enough, several people have told you repeatedly that you can.

    And the FlexRAID documentation is extremely clear on the question of parity on network shares: you can do that if you want. (Not that it matters; see the previous statements on the subject.) You need to learn more about your own tools...


    I'm not on the WHS team, I just post a lot. :)

    Hi Ken, I think the point isn't that he can access the files, but that it makes little sense to do it that way, since unlike in v1 it is not possible to distinguish files on different drives, which is what FlexRAID needs to be able to function reliably.  In this scenario, if parity were generated across two files on the same physical disk, the loss of that disk would cause the parity system to fail because of multiple source failures.

    That's if I am understanding this system correctly, which I think I do...

     

    Monday, May 17, 2010 9:41 AM
  • Hi Ken, I think the point isn't that he can access the files, but that it makes little sense to do it that way, since unlike in v1 it is not possible to distinguish files on different drives, which is what FlexRAID needs to be able to function reliably.  In this scenario, if parity were generated across two files on the same physical disk, the loss of that disk would cause the parity system to fail because of multiple source failures.

    That's if I am understanding this system correctly, which I think I do...

    The FlexRAID home page seems quite clear that you can use it to protect files on network shares. Standard SMB shares are, from a remote client, each considered to be their own file system, even if they reside (on the server) on the same drive. That's why moving from one share on your server to another takes so long: a file system to file system move is really a "move it locally, then move it back" operation. Working directly on a disk, all you need to do is update a pointer and the file moves.

    So, given that it works on network shares, whatever FlexRAID actually does, it's not locked to physical disks. Also, per one of the FlexRAID FAQs asking about single vs. multiple partitions/volumes: "FlexRAID protects data paths". Pretty clearly it's a layer that sits above (in this case) NTFS and watches operations on files.


    I'm not on the WHS team, I just post a lot. :)
    Monday, May 17, 2010 12:29 PM
  • ...
    Can I do the same thing in Vail? No.

    The path like \\WHS\Videos\some_file will not work because it doesn't provide information about physical drive that contain the some_file.
    ...

    Oddly enough, several people have told you repeatedly that you can.

    And the FlexRAID documentation is extremely clear on the question of parity on network shares: you can do that if you want. (Not that it matters; see the previous statements on the subject.) You need to learn more about your own tools...


    I'm not on the WHS team, I just post a lot. :)

    Hi Ken, I think the point isn't that he can access the files, but that it makes little sense to do it that way since unlike in v1, it is not possible to distinguish files on different drives, which is what flexraid needs to be able to function reliably.  In this scenario, if parity were generated across two files on the same physical disk, the loss of that disk would cause the parity system to fail because of multiple source failure.

    That's if I am understanding this system correctly, which I think I do...

    You do. The parity needs to be as large as the largest unit of protection. The physical drive is also the unit of protection because that guarantees we don't need parity larger than 2 TB if the largest disk is 2 TB.
    Protecting the \\WHS\Videos\ folder does not make sense, because if the folder grows to 10 GB we would need 10 GB of parity. This is not how it works.
    Making \\WHS\Videos\ one protected unit and \\WHS\Music\ another protected unit will not work, because a single drive failure can result in losing files in both folders. Parity protects against only one unit failing.
    Again, this is not how it works, and it can work only as I described.
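The "one unit of protection" argument above can be demonstrated with small integers standing in for unit contents. The folder names are just labels; the point is that single XOR parity recovers one missing unit, but only the XOR of two.

```python
# Single parity = XOR of all protected units. It can recover ONE missing
# unit; two units lost together (e.g. two folders on one failed physical
# disk) leave only their combined XOR, from which neither can be derived.
units = {"Videos": 0b1010, "Music": 0b0110, "Photos": 0b0011}

parity = 0
for value in units.values():
    parity ^= value

# One unit lost: XOR the parity with the survivors to get it back.
recovered = parity ^ units["Music"] ^ units["Photos"]
assert recovered == units["Videos"]

# Two units lost at once: only Videos ^ Music is recoverable, not either one.
combined = parity ^ units["Photos"]
assert combined == units["Videos"] ^ units["Music"]
```

This is why the unit of protection must map to a physical drive: one drive failure must never take out more than one unit.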
    Monday, May 17, 2010 2:12 PM
  • Ok, I think there is a simple solution for those

    1. who have concerns with DE or would like to use hardware or software RAID instead,
    2. who would like to have their sensitive data BitLocker protected.

    When you install Vail to a single disk, there are 2 partitions: one for the system (NTFS) and a second (shown as "DE FAT32") => you can use BitLocker for the system partition (but not for the DE partition - I tried it, and at first glance it seems possible, but afterwards you get problems, as expected).

    In the next step you can add as many HDDs as you want to your Vail machine; just DO NOT add them to the server pool. You can use them as single drives or configure a RAID, format them with NTFS, use BitLocker, etc. In Windows Explorer you can configure normal network shares, and the users configured in the Dashboard are available for that. On the LAN this works fine; everybody has access to his folders - independent of whether they are stored in the server storage pool or outside of it.

    The only disadvantage at the moment could be healed by Microsoft (very easily, I think/hope):
    you cannot see those folders directly in the Dashboard (only the folders in the DE storage pool). But on the right side under "Tasks" there is an option "open folders". This opens Windows Explorer and also works over the internet (as a remote app), but you have to know the admin password to access the Dashboard.

    Media streaming (over the internet) already works directly - just add the folders on non-storage-pool drives to the appropriate library.

    No problem with Server Backup: when you add a drive to the server backup storage, it will automatically be formatted with NTFS => you can use BitLocker if you want. In the file selection window every folder on the server is available (including those on non-storage-pool drives).

    => So the only limitation of this simple solution concerns internet access, and it can be solved through Dashboard access.

    I couldn't test everything yet, but it seems to work. So I will make a suggestion to Microsoft to show every network folder in the Dashboard (even those that are not in the DE storage pool). With this little help, WHS would play its best trump card: its flexibility. If you remember my table, everybody can make his own decision about which kinds of data to store in the DE storage pool, and for which kinds he prefers another solution.

    Please vote it up! 

    Monday, May 17, 2010 3:21 PM
  • Okay, I see why you're fixated on the physical disk now. FlexRAID will handle network shares, but you don't want to manage the parity volume you'd need. Fair enough (though creating a single 10 TB "volume" isn't hard these days).

    Frankly, it's too late in the development process for a new DE storage engine to be developed for this version of the product, and the v1 DE engine has a lot of limitations that are everyday irritations to a lot of users, so it won't come back (if it could, which I rather doubt). So you're going to be limited to some very minor tweaks, like larger blocks (possibly as an option at install time; it's what I would prefer).

    What you're not going to see is the "volumes" DE is putting those blocks into exposed at the OS level, either directly or through APIs (and note that an API isn't just some code that manipulates something, it's a published specification for how third parties can also manipulate that thing).

    My recommendation: install Vail, learn about DE V2, and see what you can do. It seems likely to me that FlexRAID won't meet your needs on V2, so perhaps it's time to give some other tool a try. Or live with a long delay for restoring your gigantic Videos share and build a custom backup strategy for it using wbadmin and some scripting. :)

    Right now though, you're coming across as "I care about this situation, but not very much because I can't be bothered to actually install and investigate this beta to see what the limits actually are."

     


    I'm not on the WHS team, I just post a lot. :)
    Monday, May 17, 2010 4:07 PM
  • I'm not trying to beat a dead horse, but I'm not sure why Microsoft is still using file duplication in DEv2. In DEv1, there was a trade-off: you have to duplicate files, but it uses standard NTFS drives (using standard NTFS-formatted drives basically required duplication to add protection). Now that they are essentially replacing the filesystem with an NTFS API-compatible block-level driver, there is no longer any reason to duplicate files, but they kept the requirement to duplicate files for protection - they could have simply used parity like every other system.

    As far as I'm concerned, these two things (standard NTFS drives and file duplication) are tied together and balance each other out (one positive, one negative). Now they have changed the positive to a negative (a non-standard filesystem), and kept the other negative (duplication vs. parity). If they had also changed to parity at the same time, most of my arguments against DEv2 would go away.

    As it stands now, I will seriously look at the Drobo FS - it has the primary benefits of DEv2 (ad hoc upgrading of mixed drives, a single large virtual drive), without the new DEv2 negative (Drobo uses parity for protection against the failure of either 1 or 2 drives). The only thing the Drobo doesn't have is the automatic backup of client machines (there are any number of client backup solutions that aren't any harder to install than the WHS Connector software). A Drobo is a bit more expensive than the WHS server I built, but is still cheaper than adding the "required" additional drives to enable duplication on all my WHS files.
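The storage-efficiency gap being argued in this post is simple arithmetic. Here is a back-of-envelope sketch comparing usable capacity under mirroring (what DE duplication does) against single parity (RAID-5 style), ignoring metadata overhead and assuming equal-size drives - real DE pools can mix sizes, which complicates the parity case.

```python
def usable_mirror(total_tb):
    """Duplication stores every byte twice, so half the pool is usable."""
    return total_tb / 2

def usable_parity(total_tb, n_drives):
    """Single parity reserves one drive's worth of capacity for parity
    (simplifying assumption: all drives are the same size)."""
    return total_tb * (n_drives - 1) / n_drives

pool = 8.0  # e.g. 4 x 2 TB drives
print(usable_mirror(pool))      # 4.0 TB usable under duplication
print(usable_parity(pool, 4))   # 6.0 TB usable under single parity
```

The gap widens with drive count: at 8 drives, single parity yields 7/8 of the pool while mirroring stays at half.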

    Monday, May 17, 2010 7:58 PM
  • Integrated security as I think you want it will only work if the server is on the edge of your network, and there are a host of reasons not to put a device containing data on the edge of your network. It is, in a word, foolish. If, instead, you're asking for parental controls, I'm a strong proponent of actually paying attention to what your children are doing online, rather than relying on automated solutions that they can defeat.
    Ken: I don't know what you meant by that... A proxy and a router are different things... I did NOT recommend using WHS as a router.
    Tuesday, May 18, 2010 10:17 AM
  • "Proxy-based 2 stage security solution" to me means you want a proxy server and firewall. A proxy server is not useful in the home (too little repeat traffic to make a caching proxy useful, and I guarantee your kids will circumvent "proxy server as net nanny" in a day if/when they want to), which leaves firewall, which doesn't belong on your server except to protect your server itself. But as I said above, kindly start a separate thread of discussion, because this is out of place in a discussion of Drive Extender.
    I'm not on the WHS team, I just post a lot. :)
    Tuesday, May 18, 2010 1:48 PM
  • I'm not trying to beat a dead horse, but I'm not sure why Microsoft is still using file duplication in DEv2. In DEv1, there was a trade-off: you have to duplicate files, but it uses standard NTFS drives (using standard NTFS-formatted drives basically required duplication to add protection). Now that they are essentially replacing the filesystem with an NTFS API-compatible block-level driver, there is no longer any reason to duplicate files, but they kept the requirement to duplicate files for protection - they could have simply used parity like every other system.

    As far as I'm concerned, these two things (standard NTFS drives and file duplication) are tied together and balance each other out (one positive, one negative). Now they have changed the positive to a negative (a non-standard filesystem), and kept the other negative (duplication vs. parity). If they had also changed to parity at the same time, most of my arguments against DEv2 would go away.

    As it stands now, I will seriously look at the Drobo FS - it has the primary benefits of DEv2 (ad hoc upgrading of mixed drives, a single large virtual drive), without the new DEv2 negative (Drobo uses parity for protection against the failure of either 1 or 2 drives). The only thing the Drobo doesn't have is the automatic backup of client machines (there are any number of client backup solutions that aren't any harder to install than the WHS Connector software). A Drobo is a bit more expensive than the WHS server I built, but is still cheaper than adding the "required" additional drives to enable duplication on all my WHS files.

    Doug, I have looked into the Drobo devices.   The Drobo FS has 5 bays, which I think may be too small to suit my needs long term, and the 8-bay Elite is a bit more expensive.

    The Drobo units protect against 1 or 2 drive failures, protect against silent storage errors, support Advanced Format drives, and will work with the 3 TB drives coming out by the end of the year.

    The value proposition for Drobo vs WHS is still up in the air but it is a compelling alternative.

    Wednesday, May 19, 2010 1:50 PM
  • Show me a real world scenario (I'm uninterested in anything that could start out with "Well, in theory...") that repeatably results in significantly higher data loss than DE V1 would in the equivalent situation and I'll be extremely interested. Until you show me that scenario, I'm not buying the doom and gloom.
    I'm not on the WHS team, I just post a lot. :)

    Since I don't have 4 HDDs freely available, I can only recommend that someone else test this scenario out...

    * 4 HDDs (let's say 500GB each), all empty and in 1 pool

    * 100 4.7 GB DVD .iso files get added to a non-duplicated folder (after each 1 GB block, Vail should select another disk to put the next fragment on, to even things out - if it didn't, then the 1 GB fragments wouldn't make much sense anyway!) - resulting in 500 fragments, theoretically spread out evenly over the 4 HDDs.

    * Pull the cable on one HDD (=failure) and check file availability.

    On v1 it would be ~75% (if the files were nicely balanced); on v2, ~0% (if the fragments were nicely balanced). Results varying from this would have to be attributed to other mechanisms (like allowing x% of HDD capacity imbalance) and settings; ideally, though, this should be the outcome. As I intend to store my DVD collection on a WHS (either v1 or v2), this is of quite some concern to me!

    Duplication can also be affected by this!

    * Same scenario, this time adding 100 4.7 GB files to those 4 drives with duplication turned on.

    * Result: 1000 fragments, also theoretically evenly distributed (to be tested, obviously)

    * This time, 2 drives fail (= unplugged).

    On v1, you can't really predict how many files would fail. With an even distribution, after 1 HDD failure ~50% of the files would be in danger (as you have 50 file copies on each HDD), and after the second HDD failure ~25% of the files would be unavailable and ~25% in danger. On v2, for files that have more fragments than there are HDDs to spread them over, 2 drives failing could/would/should be nearly a disaster to recover from! (With 250 fragments on each drive, if 125 are lost after the second failure, that's more fragments lost than there were files initially!) I also strongly advise some beta testers to run this test (or to ask for clarification), and to make duplication users aware that if any 2 of their (potentially quite many) drives fail, it's very likely that much more data than those 2 drives contained will be unusable and lost! As soon as 1 drive fails, any additional failure can potentially kill most of your remaining data - so there should be a possibility to use hot or cold spares (as it takes quite some time for home users to get additional HDDs or to RMA the faulty ones, and until then your data is again at a much higher risk than under the v1 model).

    My view of this issue is: don't chop up data if there's a bigger chance that you can't reassemble it from the pieces than if you had kept it in one piece. Either add parity calculations (RAID-5...) as a 3rd option (I don't mind the CPU overhead, and to reassemble a file you currently have to hook up the whole pool to a different machine anyway), or build in a switch - "Distribute my files more evenly across disks: DECREASED SECURITY" vs. "Don't fragment large files: DECREASED STORAGE EFFICIENCY" - so you can select for less important files to be split if needed, and for other files to have all fragments kept together on 1 HDD whenever possible.

    I really loved the idea of having a server into which I could insert any HDD of any size and have it just get added to my storage pool, choosing whether I want data duplicated or not; but with the current model it seems to me that I risk much more when adding another HDD to the pool than with v1 (even though the new features like CRC etc. are really nice!)

    Sunday, June 06, 2010 6:31 PM
  • >A fan or other failure happens to my server and I have to send it in for repairs or replacement.  I have no way to get my data!

     

    Why wouldn't you get your data from your backup?

     

    Drive duplication in WHS is not a substitute for good, solid backups!

    Monday, June 21, 2010 5:08 PM
  •  Here's a simple example, no theory needed.

    4 drives, lots of 4GB files, every file is on all 4 drives, duplication is turned off.  One drive dies and you lose everything.  There is no arguing that and no testing is needed.

    I'm curious as to how you know this to be true.

     

    Everything I have seen indicates files larger than one gigabyte may be on more than one drive, but not necessarily.  If you have something that states otherwise I wish you would share it as that would be very interesting indeed.

    Monday, June 21, 2010 5:14 PM
  • Ken,

    To make this real simple. DE mirroring is simply unacceptably inefficient for the large number of people who have lots of large video files. My camcorder does about 30GB/hour AVCHD. Blu-ray backups are 30GB+. Having to give up 60%+ to DE is utterly ridiculous, especially when competing solutions don't require this.

    But wait, it gets worse. With WHS v1, I was able to take the acceptable risk of not duplicating. If a drive failed, I'd have a 1 in 12 chance (12-drive system) of losing the data on that drive.

    With WHS v2, DE has now dropped, to all intents and purposes, to RAID-0 levels. If ANY single one of my 12 drives fails, I lose ALL my data, at least for the aforementioned larger files. This scenario has already been confirmed by a few people on the board here.

    Bottom line: this is a lousy decision... and hopefully one that can be reversed. If not, at the very least give OEMs the option, per the suggestion mentioned above. If you think a top-tier WHS vendor like Niveus is going to trust customers' large video libraries to a system like this.....

    WHS needs to grow up. The wimpy little 2/4-drive systems are only going to be useful to the most basic users. Perhaps a pro/advanced version is needed. I'd certainly be willing to pay more for one.

    Saturday, July 10, 2010 2:28 PM
  • All,

     

    I’ve been following this post for a while now and would like to point out some issues, as I see them.

     

    WHS is aimed at the “home user”, by which I refer to the average family with, say, a couple of laptops or stand-alone PCs, and maybe a gaming machine or similar.

     

    I’ve been using it since it first came on the market and it’s worked “almost” flawlessly since the initial bugs were ironed out.

     

    I’ve lived with hard drive failures for a long time, indeed since their inception as a storage medium, and just as with the old floppy disks of bygone years, their controllers would happily write garbage to them on occasion, or write the correct data - or so they thought - but in fact write nothing, as the media itself was corrupt. In truth there is no 100% sure-fire way of ever detecting this; just like cosmic rays flipping the odd bit in a memory module, it gets passed along as correct to the end of the street.

     

    Modern hard drives employ translation techniques to write to their own media and, when the data is re-read, run the writing technique backwards; this is not visible to the operating system at all unless special (and very expensive) drives are used. The firmware on the drive itself controls this, which for consumer drives is a black hole… I have to say that most modern drives are very reliable, but if they do fail it’s usually in a catastrophic manner; very rarely in my experience do they fail in any sort of detectable, step-by-step, “easy to spot” manner that the average home user or indeed operating system will notice in a timely fashion.

     

    If you’re storing upwards of 30 terabytes on a WHS box, you’re using the wrong tools; if you’re worried about flipped bits then don’t use computers; and if you have some critical data that you can’t afford to lose at any cost, never store it on a computer and then back it up in the hope that it’s safe. If there’s one thing I’ve learned over the years, it’s that if you want to keep something safe and secure, then keep it well away from any computer system or systems and their respective backup processes. Sooner or later these all fail, for a myriad of reasons.

     

    For the rest of us, WHS1 with its “just a bunch of disks” works just fine (most of the time, which is all we can hope for, in fact). However, and as outlined in this thread, WHS2 does not, as any disk that’s failed under this system can’t be read by any other readily available system that’s “to hand” for the average home user. To me this is a fatal design flaw.

     

    This breaks the system at the ground level, no matter what the logic used to justify it may indicate, as having to utilise other tools to recover data merely places “on line” another reason to fail.

     

    Simple is always better, especially with computers.


    been around to long
    Sunday, July 11, 2010 8:36 PM
    I agree with dholbon in that if you are trying to store 10+ TB of data on your WHS, that's fine, but you are not the target user.  I think the typical user is a family with 1-3 PCs which need to be backed up, and with <2 TB of media data to store in a central location.

    I pretty much fit that bill, and WHS has been great.  I've been able to do a couple bare metal restores (which is really the greatest feature in my opinion) after hard drive failures, and I've got my media in a central spot which is also duplicated to protect against hard drive failures.  

    The 3 main things that WHS v1 does not do, which nearly made me find other solutions are:

    1. Duplicate the PC backups. - I don't understand at all how this was not a feature in v1.  Sure, if you have a hard drive failure on your WHS causing loss of your backups, you can just re-backup, but then you lose all the wonderful advantages of having a backup history.  I've had at least one occasion where I accidentally deleted an important file and didn't realize it right away, but was able to go into last week's backup and get it.

    2. Protect against silent failures. - In v1 there is an even more insidious problem than a silent failure in some media file.  You can get silent failures in your PC backups, which, if they hit just the right sectors, can make every single one of your backups impossible to restore, rendering them useless.  This is because the backup database stores each unique cluster only once, so if a system file present on all your PCs gets silently corrupted in your backup, it breaks all of the backups.  This happened to me, and it was terrifying.  With Vail, from what I've read, this should not be an issue, because the backups will be duplicated and protected against silent failures.

    3. Allow for simple cloud backup. - All the online backup tools want real drives, and with DE the migrator can run into file conflict issues.  (That said, I did get CrashPlan working, with some jiggery-pokery, and I super highly recommend it.)

     

    From my standpoint, Vail will be a godsend, as from what I understand it solves 1 and 2, and at least makes 3 a little better in that you don't have to worry about fighting with the demigrator.

    Tuesday, July 20, 2010 8:55 PM
    I am using WHS because of Drive Extender; it's a unique solution that allows me to gather up all the junk that used to be spread all over my home network on different shares and box it all in one easily accessible location.   Now it looks to me like Microsoft is ignoring their current unique usefulness and going over to mainstream backup and safe data storage.

    This is all good, but it's also something that you can get from 100 different vendors, and from what I'm reading it is far from even a competitive solution they are supplying the user with.

     

    For me, I only need backups of maybe 5% of my data; the rest is media and audio files that can be easily replaced, and I do not want to spend resources on backing them up or mirroring them. If one of my hard drives fails I can accept losing a certain % of my data, but losing all of it, or spending hours or days recovering it, is unacceptable (then I may as well set up RAID).

     

    I suggest that next time Microsoft upgrades a product they may want to ask themselves "why are people using this product?"

    Microsoft might as well rebrand this product, because this is not a solution for the current WHS fans; this is now just an all-in-one solution that has nothing unique or special.

     

     

    Tuesday, August 17, 2010 4:04 PM
  • As it stands now, I will seriously look at Drobo FS - it has the primary benefits of DEv2 (ad hoc upgrading of mixed drives, single large virtual drive), without the new DEv2 negative (Drobo uses parity for protection against the failure of either 1 or 2 drives). The only thing the Drobo doesn't have is the automatic backup of client machines (there are any number of client backup solutions that aren't any harder to install then the WHS connector software). A drobo is a bit more expensive than the WHS server I built, but is still cheaper than adding the "required" additional drives to enable duplication on all my WHS files.

    WHS is still more flexible than just a Drobo, or even their DroboShare product.  I love my Drobo (v2 w/ FireWire, 4 bays), but where I'm going with video I may need a DroboPro or Drobo Elite.  The premium for the Elite is interesting because with Vail I can hook up both my WHS and my computer to the Elite, whereas with the Pro I have to pick one or the other.

     

    I would love for someone like HP to introduce a WHS with Drobo-like functionality over and above Drive Extender, because for large media files duplication is just too wasteful, but I do agree that standard RAID is way too complicated.

     

    The problem I have with Drobo Share (integrated or add on) is for the money you just don't get that much functionality.  Hmm... A DroboShare that's based on WHS - that's the ticket!  Best of all worlds.

    Sunday, August 22, 2010 10:31 PM
  • I am using WHS because of the Drive Extender, it a unique solution

    Those Drobo things are overpriced, as are those NAS boxes - their vendors are just too greedy.
    WHS used to offer a unique solution to combine disks of different sizes into a single big volume, but now it is not actually unique.
    MHDDFS (google it) is included in Ubuntu Linux now, and it is essentially the same as Drive Extender; or there are other LVM implementations. It sits on the ext3 filesystem, which can be made readable in Windows - so no real benefits to WHS.

    Vail is now useless in terms of providing data protection at reasonable cost.

    WHS Drive Extender is not unique. Just build a Linux box with a lot of HDDs combined with LVM and protect the data with a single parity drive using FlexRAID (or disParity). You will be able to combine disks of different capacities into a single volume - dozens of disks with a single parity HDD.
    The only thing missing is that PC backups are not centralized. Still, huge savings in storage usage compared to Vail's mandatory duplication.
    Monday, August 23, 2010 4:36 AM
    If you want to trust your data to a home-brew file system, good luck. I think you're very brave.
    Tuesday, August 24, 2010 12:50 PM
    If you want to trust your data to a home-brew file system, good luck. I think you're very brave.

     

    For the same reason I don't trust DEv2! ;-)

    MHDDFS seems to be more like DEv1: more or less a program that stores files on different HDDs using a widespread, well-known filesystem while presenting those HDDs to the system as a single one.

    DEv2, on the other hand, uses technology that is, as far as I know, present in no other Windows product, seems less reliable (as it uses data striping, which decreases reliability), and adds quite heavy overhead. More than that, it's currently quite undocumented and the code cannot be reviewed.

    Wednesday, August 25, 2010 4:13 PM
    There seems to be an awful lot of "excitement" here over whether the DE2 chunking functionality may or may not be a problem, with people looking for "real world proof" that it is or it isn't. If anyone is looking for proof, then we can take that proof from DE1, as it does (in effect) a similar thing.

    Let's take a real-world example. I use a "hard disc based music player" made by the company I work for at home. It is able to rip CDs to its internal storage or to an external NAS. Typically a ripped album will consist of the following files:

    808 State \ Ex-El (CD-1) \ 01 - San Francisco.WAV

    808 State \ Ex-El (CD-1) \ 02 - Spanish Heart.WAV

    808 State \ Ex-El (CD-1) \ 03 - Leo Leo.WAV

    808 State \ Ex-El (CD-1) \ 04 - Qmart.WAV

    808 State \ Ex-El (CD-1) \ 05 - Naphatiti.WAV

    808 State \ Ex-El (CD-1) \ 06 - Lift.WAV

    808 State \ Ex-El (CD-1) \ 07 - Ooops.WAV

    808 State \ Ex-El (CD-1) \ 08 - Empire.WAV

    808 State \ Ex-El (CD-1) \ 09 - In Yer Face.WAV

    808 State \ Ex-El (CD-1) \ 10 - Cubik.WAV

    808 State \ Ex-El (CD-1) \ 11 - Lambrusco Cowboy.WAV

    808 State \ Ex-El (CD-1) \ 12 - Techno Bell.WAV

    808 State \ Ex-El (CD-1) \ 13 - Olympic.WAV

    808 State \ Ex-El (CD-1) \ amginfo.xml

    808 State \ Ex-El (CD-1) \ AMGReport.log

    808 State \ Ex-El (CD-1) \ cddbinfo.txt

    808 State \ Ex-El (CD-1) \ folder.jpg

    ...if those files are ripped to the unit's internal storage then they are all stored together on the same physical drive. There are two drives in the unit, and the second keeps a snapshot of the music on the first, so should anything happen to the first drive we can recover the customer's music from the second.

    If those files are ripped to a WHS share that is not duplicated then those files can (and *DO*) end up spread across multiple drives; the failure of any single one of those drives then renders the album incomplete (corrupted). I've had this happen with a drive that was in a backplane, where the backplane started to make intermittent contact with the caddy. This is one reason why I went from storing my DVDs as VIDEO_TS folders to ISOs: the ISO bundles all the various files together, so that if I lost a drive then at least I'd lose a number of complete movies rather than a greater number of partial (but still ultimately unusable) ones.

    I know (from "real world" experience) that when that drive went offline I suddenly got a whole raft of File Conflict errors across numerous unduplicated shares, which was a bit of a sphincter-tightening moment - almost random tracks just "vanishing" from albums here, there and everywhere!

    If MS have implemented a chunking algorithm on files of >1GB where there is any possibility (no matter how slight) that chunks may be spread across multiple drives (and given that this seems to be the purpose of it, it's likely they will be), then for unduplicated data, no matter how you look at it, this makes those files *MORE* vulnerable. This is a fact that cannot be disputed.

     

    For me the *ONLY* advantage of Windows Home Server that made me buy it in the first place was the ability to add drives to a common storage pool - I was a bit scared when I first realised that it made no attempt to keep files from the same folder structure together on the same drive, but I've come to terms with that now. I do worry that in making duplication really the only way to go (you can go unduplicated, but you're taking an even bigger risk by doing so), MS are looking at Vail as a much smaller-scale solution than it once was. It's been bandied about that for duplication to work you are looking at a 2.3x basic storage multiplier, which means I have to take my 14TB of data (2TB of which is duplicated) and increase that to 32TB - an incredible amount of storage to hold what is actually only 14TB... :-(

    Phil 

    Thursday, August 26, 2010 2:09 PM
  • I haven't tried out Vail yet, so I'm not best placed to judge, but it would seem to me to be fairly simple to keep everyone happy...

    1) Allow the user to create multiple drive pools. Each network share would be assigned to one of the pools.

    2) A drive pool could be one of three types

    a) JBOD - standard dynamic disks that have been spanned, not striped or mirrored. Windows sees one volume; a file is only ever on one disk, and that disk could be a virtual drive created by a RAID controller, but normally it would just be a normal disk. No requirement for each disk to be the same capacity, and no redundancy unless you use RAID. Disks would still be readable on other versions of Windows. Power users not worried about silent data corruption would probably use this with RAID 5 or 6.

    b) JBOD with ECC - as above, but lose 12% or whatever it is to checksums. Protects against silent data corruption but not disk failure. Most people with RAID 5 or 6 would probably go for this option, and it would be great to see in all versions of Windows.

    c) Duplication with ECC - what the new DE does - you are protected against both disk failures and silent data corruption, but at the cost of more capacity being lost. This is what most non-power users would go for for their important data.

    I think this would give users enough choice without being too complex to understand.

    Thursday, October 14, 2010 7:54 PM
  • This seems a bit odd, but I have been testing with small drives to see how this thing functions for me performance wise/etc...  It's not been bad, I get 70-80MB/sec writing to old 80GB drives.  That said, I did the following:

    1. removed the system drive from the pool, so it shows 43.3GB free of 60GB, and there is an empty chunk of unallocated space on the disk.
    2. put two 80GB drives in it (for testing purposes so I could fill them up and not spend days doing so)
    3. turned off duplication and shadow copies on the videos share
    4. deleted all volume shadow copies sitting out there vssadmin delete shadows /all
    5. copied version after version of the install dvd iso to the videos folder
    6. it filled up at 67GB in use!

    So, from two 80GB drives, without duplication on, I get 67GB of usable space?  WTF?

    The Server folders tab (though labeled in readme as not completely accurate if I remember right) shows:

    Client Computer Backups = .1GB

    Documents = .1GB

    Music = .1GB

    Pictures = .1GB

    Recorded TV = .1GB

    Shadow Copies = .1GB

    Videos = 65GB

    It does actually have 65GB, and Windows says I'm 1GB short when I try to copy 4GB over again.

    Then I noticed it lets me copy one 4GB file to each of the other shares, which I don't fully understand.  This must be due to the 1GB chunking?  I couldn't get more than 83GB copied to 156GB (formatted size) of disk, though, with duplication off, and 63GB copied with duplication on.  I hope this is a ratio that only happens when you're using small disks - and it really isn't a real-world test - but it's still sketchy?  Has anyone actually looked at how much it actually lets you copy vs. what it says you have available?  I am going to test again with 2TB drives, as losing 47% of the space without duplication seems crazy?

    Friday, October 15, 2010 2:40 AM
  • Droesch76, can you please submit a bug report on Connect?
    I'm not on the WHS team, I just post a lot. :)
    Friday, October 15, 2010 12:58 PM
  • Indeed, I am going to validate my test with larger drives this weekend... since in the real world, it would be silly to use 80GB drives.
    Friday, October 15, 2010 4:41 PM
  • I would submit one now.

    An "80 GB" drive will only hold ~74 GB as Windows measures space (a bit less when you factor in OS overheads). Then Vail's ECC will take another 12% off the top, for around 65-66 GB of usable space per 80 GB drive. But with duplication turned off for everything, and previous versions also turned off, I would expect that would mean you could fit as much as 120 GB of data, more or less. So something seems wrong.
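    For what it's worth, the arithmetic behind those figures works out like this (the 12% ECC overhead is the rough number discussed in this thread, not an official specification):

```python
ADVERTISED_GB = 80
ECC_OVERHEAD = 0.12                          # assumed checksum overhead from this thread

raw_bytes = ADVERTISED_GB * 10**9            # drive makers count decimal gigabytes
windows_gb = raw_bytes / 2**30               # Windows reports binary GiB: ~74.5
usable_gb = windows_gb * (1 - ECC_OVERHEAD)  # ~65.6 GB usable per drive

print(f"{windows_gb:.1f} GB as Windows sees it, ~{usable_gb:.1f} GB after ECC")
```

    That lands on the 65-66 GB per 80 GB drive quoted above, so two such drives without duplication should indeed hold roughly 130 GB.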


    I'm not on the WHS team, I just post a lot. :)
    Friday, October 15, 2010 5:23 PM
  • I finally made the time to do this over again with the two drives I'm actually planning to leave in the drive pool (two WD Green EARS 2TB drives).  I copied 3,245GB over which is exactly 87% of the total available space (folder has duplication and shadow copies turned off).  This matches the numbers given, so it must be skewed with small drives?

    Monday, October 25, 2010 2:30 AM
    I did the test one more time with duplication on.  1669GB copied - slightly over half of what I copied without duplication - which seems a bit strange.
    Tuesday, October 26, 2010 10:32 PM
  • This may be answered prior, but I missed it.


    Under the new system, what happens if the system drive fails? (It's my understanding that under the old system, v1, if your system drive failed it would be difficult at best to recover the data from the individual drives, and that under the new v2 system it could be worse.)

     

    And is there an easy way to prevent "bad" things from happening with v2 if your system drive fails?

     

     

    Thank you.

    Wednesday, November 17, 2010 6:26 PM
  • An "80 GB" drive will only hold ~74 GB as Windows measures space (a bit less when you factor in OS overheads). Then Vail's ECC will take another 12% off the top, for around 65-66 GB of usable space per 80 GB drive. But with duplication turned off for everything, and previous versions also turned off, I would expect that would mean you could fit as much as 120 GB of data, more or less. So something seems wrong.

    Aren't we forgetting Vail uses a pool of 1GB chunks for storage? A chunk that large (compared to the total storage capacity of the drive) will have a relatively large effect on the storage overhead. This means that one drive could hold about 64 chunks (74GB, taking into account the roughly 12% overhead). With two drives the pool would have 128 chunks.

    The following 'explanation' of what droesch sees on his Vail server is pure theory, based on what I think I know about Vail and in no way based on any "fact", MS specification or even a practical test (I will probably do some :-)

    According to "droesch" he can store 65GB of Vail .iso files (4248MB each) - that would be about 15 files in the Videos share?
    As each file is just a little over 4GB, it needs 5 chunks from the pool to store that single file.
    That makes a total of 15x5=75 chunks for storing all 15 files (1GB short for just another file, so fewer than 5 chunks left unused).

    The other 6 shares could hold at least 6 more files. My assumption is that Vail pre-allocates an unknown number of chunks per share. Logic tells us that the number of pre-allocated chunks would be at least 5 (to hold the one file it allowed him to copy) and fewer than 10 (or it would allow two files). Something tells me the 'unknown' number could very well be 8 :-)
    Assumption: the 6 shares are pre-allocating 6x8=48 chunks.

    Now add up the chunks so far: 75 + 48 = 123 chunks, with some chunks left over, where 'some' is fewer than 5.
    Didn't we start with 128 chunks available? I just can't explain where I lost at least 1 chunk. Must be a bug :-)
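    Theo's accounting, written out as a sketch (all the inputs - 64 usable chunks per drive, 8 pre-allocated chunks per share - are his guesses from this thread, not anything documented):

```python
import math

pool_chunks = 2 * 64                     # two 80 GB drives, ~64 usable 1 GB chunks each
chunks_per_iso = math.ceil(4248 / 1024)  # a ~4248 MB .iso needs 5 whole 1 GB chunks
video_chunks = 15 * chunks_per_iso       # 15 ISOs in the Videos share -> 75 chunks
prealloc = 6 * 8                         # guess: 8 chunks pre-allocated per empty share
used = video_chunks + prealloc           # 123 chunks accounted for

print(f"{used} of {pool_chunks} chunks used, {pool_chunks - used} left over")
```

    The model leaves 5 chunks free - exactly enough for one more ISO - yet droesch couldn't fit another file, so at least one chunk is unaccounted for somewhere, which is the discrepancy noted above.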

    Does this make any sense?
    - Theo.


    No home server like Home Server
    Wednesday, November 17, 2010 10:07 PM
  • Yeah, that does make sense... good write up.
    Wednesday, November 17, 2010 11:47 PM

  • Under the new system, what happens if the system drive fails (as it's my understanding under the old system, v1, that if your system drive failed it would be difficult at best to recover the data from the individual drives and under the new v2 system it could be worse).


    With WHS v1, a system drive failure shouldn't be a problem for the storage drives, as the format is plain NTFS.

    Vail is a different creature: you're going to need another Vail server to access the storage, or to re-install your Vail machine and then add the earlier storage drives back to the pool.


    One WHS v1 machine in the basement with a mixed setup of harddrives in and outside the storage pool. And now, next to it, a Vail Refresh brother for beta duties.
    Thursday, November 18, 2010 12:15 PM
    With WHS v1, a system drive failure shouldn't be a problem for the storage drives, as the format is plain NTFS.

    Vail is a different creature: you're going to need another Vail server to access the storage, or to re-install your Vail machine and then add the earlier storage drives back to the pool.

    Well.... VAIL definitely is - as you say - quite a different creature compared to WHS:

    Vail/Aurora has a built-in server backup feature which is entirely different from the backup function available for WHS v1.
    Functionally it works 'just like' PC backup, in the sense that you can make automatic (image-based) backups of your server (OS system drive and data) to one or more additional backup drives. For this it uses the Backup and Recovery functionality available in Windows Server 2008.
    In case anything 'bad' happens to the server storage (or the server itself), all that is needed is a thumb drive to boot the server and the "Server Restore CD". This enables you to restore the server to a previous state.
    But mind that the Vail preview is a beta creature and there are problems with the backup functionality. We will see what the final RTM version brings. So don't count on the current Vail beta server backup as your only means of disaster recovery!

    One other way (by design) that I know of to get your original data back after reinstalling the Vail/Aurora OS is by "attaching" the old drives as an external storage pool and then promoting them to the primary storage pool. Again, this functionality has problems in Vail, and the only time I tried it, it failed miserably due to beta problems.

    And then there is the 'unsupported' way: attach the drives to any Vail server. It will be able to read the disks, and you will find a number of virtual disk files (.vhd) you can attach as virtual drives to copy off the data. Never fully tried this, though...

    Please remember: Vail is beta!
    So do not trust your data to it - unless you have a valid backup or are prepared to lose it all.
    I do have all my data on my Vail test server, but I synchronise it (using robocopy) with my primary server running WHS v1 - which in turn is backed up to at least one external (off-site) backup.

    - Theo.


    No home server like Home Server
    Friday, November 19, 2010 11:24 PM
    So, killed off at last - but what will be in its place?  The whole point of WHS was that it was 'better' than RAID.
    -- Free AV for WHS : http://whsclamav.sourceforge.net/
    Tuesday, November 23, 2010 5:26 PM
    So killed off at last but what will be in its place? The whole point of WHS was that it was 'better' than RAID.
    Not for me.
     

    David Wilkinson | Visual C++ MVP
    Tuesday, November 23, 2010 5:47 PM
  • "What will be in it's place?"

    Nothing, I suspect. Perhaps some little tool, like SBS has, to let you move shares from drive to drive. If an OEM wants, they can build on a RAID platform. You can use software RAID, with the same level of actual support Microsoft has always given that technology. Yeah, like that.

    The folks who've been whining about their huge storage pools should be happy, though they're getting what they asked for at the expense of the consumers the product is actually intended for. :P


    I'm not on the WHS team, I just post a lot. :)
    Tuesday, November 23, 2010 5:54 PM
  • > at the expense of the consumers the product is actually intended for.
     
    My idea exactly (based on the little real info we just got so far).. :-(.
     

    "What will be in it's place?"

    Nothing, I suspect. Some little tool like SBS has to let you move shares from drive to drive. If an OEM wants, they can build on a RAID platform. You can use software RAID, with the same level of actual support Microsoft has always given for that technology. Yeah, like that.

    The folks who've been whining about their huge storage pools should be happy, though they're getting what they asked for at the expense of the consumers the product is actually intended for. :P


    I'm not on the WHS team, I just post a lot. :)

    Have a nice day!
    Tuesday, November 23, 2010 6:47 PM
  • So what does Windows Storage Server do for data protection?  Does it also use DE?
    Tuesday, November 23, 2010 7:37 PM
  • "What will be in it's place?"

    Nothing, I suspect. Some little tool like SBS has to let you move shares from drive to drive. If an OEM wants, they can build on a RAID platform. You can use software RAID, with the same level of actual support Microsoft has always given for that technology. Yeah, like that.

    The folks who've been whining about their huge storage pools should be happy, though they're getting what they asked for at the expense of the consumers the product is actually intended for. :P


    I'm not on the WHS team, I just post a lot. :)


    Nice Ken.

    So people giving feedback you don't want to hear are whining.  Very objective.  Maybe you should take a step back and ask why they gave that feedback: maybe they liked the concept of WHS but just wanted to use it for something slightly different than you do.  That doesn't make it whining, just feedback from a different perspective.  Maybe people like myself who were "whining" just wanted MS to make some changes to get the best of both worlds... I certainly did.

    I've still never seen you or anyone else define who the product is intended for - and having large storage pools is not something MS have ever said WHS shouldn't be used for.

    Tuesday, November 23, 2010 7:38 PM
  • ... Very objective. ...

    Nobody pays me for what I do here. I do it because I like to help people solve problems, and because I like Windows Home Server. So I don't have to be objective. I said whining, and I meant whining. See the second definition here; all the folks who said "if DE V2 is in the release version, I'm switching to linux" were whining by that definition, because it's the adult equivalent of "let me do it my way or I'll pick up my toys and go home".

    I know the reasons for the feedback regarding DE V2, and I understand them. But I know several dozen people who own and use Windows Home Server, and of them, two have more than a terabyte of data in their shares. I'm one of the two. The people with relatively modest storage needs are the vast majority, and the minority has contributed to a decision that messes up the product for the majority.

    So yeah: whining.


    I'm not on the WHS team, I just post a lot. :)
    Tuesday, November 23, 2010 7:54 PM
  • eagle63, the version known by the code name Breckenridge (a business-oriented product, as the entire "Storage Server" line is) does, at the moment. It will lose DE as a result of this.


    I'm not on the WHS team, I just post a lot. :)
    Tuesday, November 23, 2010 7:55 PM
  • eagle63, the version known by the code name Breckenridge (a business-oriented product, as the entire "Storage Server" line is) does, at the moment. It will lose DE as a result of this.


    I'm not on the WHS team, I just post a lot. :)

    Thanks Ken, but isn't Breckenridge already out?  I thought it was a finished product that was released a month or so ago.  (http://blogs.technet.com/b/storageserver/archive/2010/09/22/windows-storage-server-2008-r2-is-now-available.aspx) If yes, does that mean DE actually works?  My impression/hunch (rightly or wrongly) is that there must be some technical reasons for why DE is getting canned from Vail.  

    On a related note, given that I have a TechNet subscription and could essentially get Windows Storage Server for free, is that now a much more attractive choice than waiting for Vail and its potentially non-existent data protection? Thanks!

    Tuesday, November 23, 2010 8:04 PM
  • That's fair enough Ken.  Thanks for the explanation.

    I'm sure you're pretty upset with the decision, so am I if that is any consolation.  I've never been one to take my bat and ball home about the new DE.  Simply to say it doesn't meet my needs and offer my explanation in the hope MS would make changes to how it worked whilst we still had the opportunity in beta...certainly not to ditch it.  I fully expected DE2 to make it to release with few changes and my needs to be dismissed - pretty much what you had said numerous times - sadly that was not to be the case.

    Good luck.

    Tuesday, November 23, 2010 8:07 PM
  • Ahh.....
    It sucks that it got removed.
    Because I never use RAID, for several reasons.
    Hope there's a replacement for DE.

    Wednesday, November 24, 2010 7:48 AM
  • Bringing back DE is now the most voted up Vail suggestion on connect. After 24 hours of being posted.

    I think that says it all.

    Wednesday, November 24, 2010 10:11 AM
  • Bringing back DE is now the most voted up Vail suggestion on connect. After 24 hours of being posted.

    I think that says it all.


    I think that's a lot more than the connect suggestion opened by people with large pools complaining about DE2.

    However, what is wrong with just sorting out DE v1, making it work with previous versions, and leaving it at that?
    -- Free AV for WHS : http://whsclamav.sourceforge.net/
    Wednesday, November 24, 2010 11:44 AM
  • ... Very objective. ...

    Nobody pays me for what I do here. I do it because I like to help people solve problems, and because I like Windows Home Server. So I don't have to be objective. I said whining, and I meant whining. See the second definition here; all the folks who said "if DE V2 is in the release version, I'm switching to linux" were whining by that definition, because it's the adult equivalent of "let me do it my way or I'll pick up my toys and go home".

    I know the reasons for the feedback regarding DE V2, and I understand them. But I know several dozen people who own and use Windows Home Server, and of them, two have more than a terabyte of data in their shares. I'm one of the two. The people with relatively modest storage needs are the vast majority, and the minority has contributed to a decision that messes up the product for the majority.

    So yeah: whining.


    I'm not on the WHS team, I just post a lot. :)

    Hmm...  Using your own definition, you're whining.
    Wednesday, November 24, 2010 12:54 PM
  • Bringing back DE is now the most voted up Vail suggestion on connect. After 24 hours of being posted.

    I think that says it all.


    I think that's a lot more than the connect suggestion opened by people with large pools complaining about DE2.

    However, what is wrong with just sorting out DE v1, making it work with previous versions, and leaving it at that?
    -- Free AV for WHS : http://whsclamav.sourceforge.net/

    Agreed.  There was absolutely nothing wrong with DE v1 after the corruption problems were sorted out.  They could have made minor tweaks to it, but otherwise left it alone.  Whoever made the decision to go with DE v2 needs a kick in the backside.
    Wednesday, November 24, 2010 1:08 PM
  • I'm really, really unhappy with the decision to remove DE from Vail. I'm a happy Mediasmart EX490 user, and I've been itching to buy a new WHS with all the new features of Vail... but now I can't see myself buying a new WHS, or sticking with this technology. The DE tech is the best part of the platform- I never have to worry about losing a drive, and I can always just add a new SATA drive to increase my storage pool. Seriously, this is a bad decision on Microsoft's part. Please reconsider. 
    Thursday, November 25, 2010 2:55 AM
  • I don't really see the point of an add-in.  How long would it take to develop an addin that does something this complex and does it right?  Look at how long it took to get WHS v1 right.  Who's going to buy Vail (which is now basically pointless for a lot of people) and wait for a year for an addin that may or may not work?  Not I.  If functionality like this were that easy it would have been done already for other OS's.

    I know that's not very constructive, but personally unless this DE decision changes I consider WHS a dead product.  Fortunately v1 works pretty well for me, but I have not even the slightest interest in buying Vail.

    On the plus side I guess I can finish my new build and just install WHS v1 on it instead of waiting for Vail (which I thought was due EOY '10).

    Thursday, November 25, 2010 8:09 AM
  • While Mr. Softy flounders over WinHS v2, look around at the various linux-based home server and NAS products. I (easily) installed OpenFiler on an Athlon64 system, as a back up for WinHS v1, and like it. Amahi looks even better; in my situation, it could probably replace WinHS, and, apparently, my router as well.

    Thursday, November 25, 2010 1:32 PM
  • Now that they removed drive extender, what's really left that makes Home Server a unique product?  Poor planning? Go back to DE v1.
    Thursday, November 25, 2010 5:22 PM
    Hey Ken, sounds like you're whining. And you're right, no one pays you. You can go anytime you want without quitting. Lose the tone, or go home.
    Sparky
    Thursday, November 25, 2010 11:08 PM
  • Count me as someone who's decidedly unhappy about the loss of Drive Extender. For me the issue is not setting up WHS in the first place, but extending the reliable storage capacity once the box is up and running. Yes, 2 TB drives may be cheap, but at some point that 2 TB is going to run out (HD home movies, anyone?) or fail, and then what is the home user going to do? (And FWIW Vail doesn't support my 3ware 9500S RAID card, even using the latest drivers).

    Right now, I can set up a cheap WHS box for someone with an empty set of removable drive bays - or with one filled. When they want additional storage, they simply plug in a drive and configure it in the console. Easy. No worrying about rebalancing or moving shares or any of the other stuff IT professionals find trivial. Amahi has been mentioned above. I don't see your typical WHS owner going through the rigmarole detailed at http://wiki.amahi.org/index.php/Adding_a_second_hard_drive_to_your_HDA to add a new HDD. 

    Unless Vail has an easy way of replicating the ease and functionality of DE, I ask the WHS developers to have a long hard think about the wants and abilities of their target market and reinstate DE.


    qts
    Friday, November 26, 2010 1:15 AM
  • Yep, I'm whining a bit.

    I knew someone would call me on it when I wrote that. Sorry if you don't like it; it's pretty annoying, isn't it?


    I'm not on the WHS team, I just post a lot. :)
    Friday, November 26, 2010 2:39 AM
    Seems like the usual big corp. has lost touch with the base. DE is a great feature. No, it isn't all that WHS is, but it makes it complete. I have already formatted over my beta Vail at this point and will start anew, back to square one, or snag a cheap version of WHS V1 for my media server. I will follow along just in case MS wakes up and brings it back to life. By the way, Ken, I don't consider it whining, just being vocal, and isn't that what forums are all about, getting vocal and sharing ideas?
    Monday, November 29, 2010 4:15 AM
  • Yep, I'm whining a bit.

    I knew someone would call me on it when I wrote that. Sorry if you don't like it; it's pretty annoying, isn't it?


    I'm not on the WHS team, I just post a lot. :)

    Everyone has the right to complain. I feel like whining as well, so you're in good company.

    It seems if they are listening to small business users for these requirements, why call it Windows "Home" Server? Adding additional seamless storage was just about the coolest thing WHS had going for it, aside from automatic PC backup and folder duplication. Take away two out of the three and we have just a plain old run-of-the-mill server. No reason to move to it. I unplugged my Vail box the other day and put the drives back in my WHS V1 machine, where it continues to be a simple task adding capacity to the storage pool. Too bad. I did really like the re-vamping of the Web UI. Not so crazy about the new Dashboard though.

    Monday, November 29, 2010 6:38 AM
    MS has been doing so much better lately (or so it seemed) in delivering high-quality, consumer-oriented, integrated products (Win7, WP7, Kinect, etc.). And the backstory to all of it is that MS claims to want to provide the products that consumers use to run their lives ("save us from our phones," the "digital living room," "to the cloud!"). WHS is a platform that really could lock MS into our homes in a seamless, efficient, and beautiful way.

    But stripping out features that people could really find a use for is so counterintuitive to this goal that it boggles the mind! I wish they'd read these forums and get a sense of what their customers REALLY think about their products and decisions:

    • restore DE to Vail in the most useful and stable way possible; customers' storage requirements fluctuate greatly and more frequently than system performance requirements, especially in a home server (is anyone actually deleting pictures, music, movies, or documents en masse?)
    • introduce full MCE capabilities and/or other Xbox 360 integration (beyond simple HomeGroup features and DLNA)
    • deliver multi-platform mobile apps for downloading and STREAMING media (WP7, iPhone/iPad, Android, etc.)
    • enable Remote Access to Win7 HP (I know it doesn't natively support RDC, but if Mesh can give you remote access, why not work that capability into Launchpad?)

    Whether we realize it or not, we're all looking for something to tie our respective digital worlds together, and the cloud just isn't there yet.

    Friday, December 03, 2010 5:00 PM
    Well, it looks like the new WHS should provide a choice at install time: the old DE, updated so you can see which single files span more than one disk, with the ability to move these, including duplicated ones, guaranteeing (ahem…) recovery; or the new extender.

     

    This would keep everyone happy, no?


    been around to long
    Saturday, December 04, 2010 6:36 PM
    We need a new solution and a bit more communication from Microsoft regarding this fiasco. iSCSI support needs to be there, as well as the ability to mount a network share as storage for WHS. Since Microsoft is looking to rid themselves of the job of data protection, they have to allow us to use other solutions in their place.

    such as this proposal: http://picasaweb.google.com/lh/photo/cjtgO4dIqP0g9pTPKQTj8w?feat=directlink

     

    Monday, December 06, 2010 10:13 PM
  • So I'm confused, everything I've read says Drive extender is gone from VAIL.

    Will VAIL still allow me to add drives at will to expand storage or not?

    Tuesday, December 28, 2010 4:31 PM
  • On Tue, 28 Dec 2010 16:31:16 +0000, Dynamic By Design wrote:

    So I'm confused, everything I've read says Drive extender is gone from VAIL.

    It will be gone in the next release.


    Will VAIL still allow me to add drives at will to expand storage or not?

    No one knows for sure what is coming in the next release.


    Paul Adare
    MVP - Identity Lifecycle Manager
    http://www.identit.ca
    Want custom ringtones on your Windows Phone 7 device?
    Conversational mode: Describes the typical office the day after a major
    sporting event.

    Tuesday, December 28, 2010 5:20 PM
  • Yes, Vail will still let you add a drive and extend storage. What that will look like I can't say.
    I'm not on the WHS team, I just post a lot. :)
    Tuesday, December 28, 2010 6:03 PM
    The top feature I liked was that at any time I could extend the drive space by adding another drive. Why do all the articles I've read show people very angry about this being removed if it in fact has not been removed?

     

    Also, is there any word that VAIL will have Media Center PC built in?

     

    Right now I'm looking at:

    XBox360 -> Media Center PC -> WHS -> NAS (if WHS can't extend anymore). If WHS had Media Center PC built in I could eliminate the need for one machine.

    Tuesday, December 28, 2010 6:30 PM
  • On Tue, 28 Dec 2010 18:30:25 +0000, Dynamic By Design wrote:

    The top feature I liked was that at any time I could extend the drive space by adding another drive. Why do all the articles I've read show people very angry about this being removed if it in fact has not been removed?

    It is being removed and will be gone from the next release. The death of
    Drive Extender was only recently announced and there haven't been any
    updates released since the announcement.


    Paul Adare
    MVP - Identity Lifecycle Manager
    http://www.identit.ca
    Want custom ringtones on your Windows Phone 7 device?
    I smell a wumpus.

    Tuesday, December 28, 2010 6:33 PM
  • On Tue, 28 Dec 2010 18:30:25 +0000, Dynamic By Design wrote:

    The top feature I liked was that at any time I could extend the drive space by adding another drive. Why do all the articles I've read show people very angry about this being removed if it in fact has not been removed?

    It is being removed and will be gone from the next release. The death of
    Drive Extender was only recently announced and there haven't been any
    updates released since the announcement.


    Paul Adare
    MVP - Identity Lifecycle Manager
    http://www.identit.ca
    Want custom ringtones on your Windows Phone 7 device?
    I smell a wumpus.


    So then you won't be able to add drives and extend storage, although Ken Warren indicated you can.
    Tuesday, December 28, 2010 6:42 PM
  • On Tue, 28 Dec 2010 18:42:47 +0000, Dynamic By Design wrote:

    So then you won't be able to add drives and extend storage although Ken Warren indicated you can.

    No, you will be able to, it just won't use the DE technology and what the
    replacement is going to be no one is exactly sure of yet. It could well be
    some form of RAID for example.
    The problem, and what has people upset is that DE, unlike RAID, allows you
    to add any old sized disk and it will be automatically added to the storage
    pool. RAID has much stricter requirements on the size of disks. There are a
    number of things that DE provides that any other solution is not likely to
    be able to provide.


    Paul Adare
    MVP - Identity Lifecycle Manager
    http://www.identit.ca
    Want custom ringtones on your Windows Phone 7 device?
    CPU: A juvenile way of telling your dog he missed the paper.

    Tuesday, December 28, 2010 6:56 PM
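The disk-size flexibility Paul describes can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses made-up drive sizes and deliberately simplified formulas (not actual WHS or RAID implementations) to show why a duplicating pool wastes far less of a mixed bag of drives than RAID 5 does:

```python
# Back-of-the-envelope comparison of usable space from mixed-size
# drives. Sizes are example values; formulas are simplified models,
# not real WHS or RAID internals.
drives = [2000, 1500, 1000, 500]  # drive sizes in GB

total = sum(drives)

# RAID 5: every member is treated as if it were the smallest disk,
# and one disk's worth of capacity goes to parity.
raid5_usable = (len(drives) - 1) * min(drives)

# DE-style duplication: each file is mirrored onto two different
# drives, so usable space is capped by half the total, and also by
# how much the smaller drives can pair against the largest one.
de_usable = min(total // 2, total - max(drives))

print(f"RAID 5 usable:         {raid5_usable} GB")   # 1500 GB
print(f"DE duplication usable: {de_usable} GB")      # 2500 GB
```

With these example sizes, RAID 5 wastes 3.5 TB of the 5 TB bought, while duplication delivers 2.5 TB of protected storage from the same drives.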
  • On Tue, 28 Dec 2010 18:42:47 +0000, Dynamic By Design wrote:

    So then you won't be able to add drives and extend storage although Ken Warren indicated you can.

    No, you will be able to, it just won't use the DE technology and what the
    replacement is going to be no one is exactly sure of yet. It could well be
    some form of RAID for example.
    The problem, and what has people upset is that DE, unlike RAID, allows you
    to add any old sized disk and it will be automatically added to the storage
    pool. RAID has much stricter requirements on the size of disks. There are a
    number of things that DE provides that any other solution is not likely to
    be able to provide.


    Paul Adare
    MVP - Identity Lifecycle Manager
    http://www.identit.ca
    Want custom ringtones on your Windows Phone 7 device?
    CPU: A juvenile way of telling your dog he missed the paper.

    Ok, thanks Paul, that's actually what I was trying to say. The ability to add any size/type of drive and add its space to the pool is what I really liked about WHS. Doing a strict RAID 5 with no ability to expand the pool easily really sucks.

    I'm now looking at doing something like:

    XBox360 -> Windows Media PC (Win 7 Ultra) -> uRaid Server

    Then I can run Backup For Workgroups on the win 7 ultra and backup all of the computers (4) to the uRaid Server.

    Wish Microsoft hadn't removed that feature; it changes my whole purchasing plan now.

    Tuesday, December 28, 2010 7:04 PM
    The top feature I liked was that at any time I could extend the drive space by adding another drive. Why do all the articles I've read show people very angry about this being removed if it in fact has not been removed?

     

    Also, is there any word that VAIL will have Media Center PC built in?

     

    Right now I'm looking at:

    XBox360 -> Media Center PC -> WHS -> NAS (if WHS can't extend anymore)  if WHS had Media Center PC built in I could eliminate a need for a machine.

    Drive Extender includes:

    • file duplication for high availability of your data and local protection
    • seamless expansion of server storage (i.e. plug in a drive, add to storage, and the server takes care of everything else)
    • some error detection/correction (in V2) which would help prevent certain corruption scenarios.

    You will still be able to plug in a drive and extend server storage. It probably won't be automatic; you'll have to take some action to make it happen.

    Media Center: never, most likely. Microsoft doesn't see the server as a media powerhouse, just a media storage location. Think media hub, not HTPC. Or to put it a different way: I doubt that Microsoft is interested in reducing your running machine count, just in selling copies of Windows.


    I'm not on the WHS team, I just post a lot. :)
    Tuesday, December 28, 2010 7:19 PM
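The "real-time duplication plus seamless expansion" behavior in that feature list can be sketched as a greedy placement policy: each duplicated file gets two copies on two distinct drives, and adding a drive simply gives the allocator another candidate. This is a toy illustration under assumed rules, not WHS's actual allocator:

```python
# Toy model of duplicated-file placement: greedily pick the two
# drives with the most free space. A simplification for illustration,
# not WHS's real algorithm.
def place_duplicated_file(free_gb, size_gb):
    """Place both copies of a file; return the two chosen drives."""
    ranked = sorted(free_gb, key=free_gb.get, reverse=True)
    picks = [d for d in ranked if free_gb[d] >= size_gb][:2]
    if len(picks) < 2:
        raise RuntimeError("need two drives with room to duplicate")
    for d in picks:
        free_gb[d] -= size_gb  # each copy consumes space on its drive
    return tuple(picks)

pool = {"disk0": 120, "disk1": 80, "disk2": 40}
print(place_duplicated_file(pool, 30))  # ('disk0', 'disk1')
print(pool)  # {'disk0': 90, 'disk1': 50, 'disk2': 40}
```

Note how "seamless expansion" falls out of the model: adding a new drive is just another key in the dictionary, and future placements start favoring it automatically.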
  • I smell a wumpus.
    Hunt the wumpus!

    I'm not on the WHS team, I just post a lot. :)
    Tuesday, December 28, 2010 7:22 PM
    There are 3 key things that set WHS apart as being truly useful for my (bloody complex) home network and those of the people I support:

    1. Automated client backup
    2. Document and media storage and access
    3. Ability to seamlessly extend storage based on acquired drives from other, retired machines, via Drive Extender
    Losing 1/3rd of the feature set is going to take an awful lot of other enhancements to make up for and I'm just not seeing the value proposition for upgrading at this point. I was seriously considering a full new hardware purchase in conjunction with the Vail launch, but if Vail is going to offer little more and substantially less it's going to be hard to get past the boss.

    I understand the stated rationale for removing DE, however that's a technical and strategy consideration, not an end-user orientated decision, which is dangerous in a consumer facing product.

    So the challenge I'd like addressed is: what makes Vail worth upgrading to, and how is that equivalent to the loss of such a serious feature?

    Thursday, December 30, 2010 12:06 AM

  • Ability to seamlessly extend storage based on acquired drives from other, retired machines, via Drive Extender
    Those drives are old, they're tired, and (because all disk drives are mechanical in nature) they're wearing out and will probably fail soon. The ability to add a drive without worrying about what's going to go on it, how big it is (assuming it's "big enough"), etc. is great. The ability to use an old drive in server storage sucks precisely because it tempts you to do exactly what you said above. :)
    I'm not on the WHS team, I just post a lot. :)
    Thursday, December 30, 2010 2:00 AM
    There are 3 key things that set WHS apart as being truly useful for my (bloody complex) home network and those of the people I support:

    1. Automated client backup
    2. Document and media storage and access
    3. Ability to seamlessly extend storage based on acquired drives from other, retired machines, via Drive Extender
    Losing 1/3rd of the feature set is going to take an awful lot of other enhancements to make up for and I'm just not seeing the value proposition for upgrading at this point. I was seriously considering a full new hardware purchase in conjunction with the Vail launch, but if Vail is going to offer little more and substantially less it's going to be hard to get past the boss.

    So the challenge I'd like addressed is: what makes Vail worth upgrading to, and how is that equivalent to the loss of such a serious feature?
    I don't see how #2, "Document and media storage and access," sets WHS apart.
    Even XP Pro and many other OSes can, after initial setup, be run headless and used as a storage server. Decent motherboards have RAID 1 built in for safety. Yes, WHS has a console, and with other OSes RDP or VNC needs to be used, but I don't think it's a big deal. Some use a NAS for that; they have decent UIs.

    So, essentially Vail lost 1/2 of its functionality, and that half was the most important one. There is a decent internal backup in Windows 7, so all you do with WHS is save a bit of space on duplicated files.

    Nothing makes it worth upgrading right now. For media streaming there is Windows Media Player, TVersity, etc. HP servers had it built in.

    People put advanced format drives in WHS v1. There are even guides on how to add GPT partitions larger than 2.2 TB to the WHS v1 pool. So it may be possible to use 3 TB and larger drives in WHS v1 (of course Microsoft will never support it).

    It's possible that WHS v1 won't be able to back up Windows 8, but there is a long time until that is relevant.

    Thursday, December 30, 2010 2:08 AM
  • I agree that your three items are key advantages to Windows Home Server, but I’ll add a few more:
     
    1. Small form-factor hardware with large storage ability. At the moment I have 3.5 terabytes in an HP EX485 that sits quite nicely on the corner of my desk without taking up a lot of space or tons of external wires dangling all over the place. The EX495 is even more attractive given it’s the same size but has a more powerful dual-core processor and can be upgraded to 4GB RAM. I seriously considered that option and almost jumped on the Amazon deal last week (I think it was $599 at the time), but I have no idea if it will support Vail because I have no idea what Vail requires hardware-wise at this time.
     
    2. I can easily expand to 8 terabytes or more just by hot-swapping drives. Theoretically I can do this with RAID, but it requires hot-swapping all drives in the array one at time (with a potentially 24-hour RAID rebuild after each swap) and storage expansion happens after the last drive is swapped – assuming the RAID array supports auto expansion. If the RAID array is standard RAID 5, it may require backing up all data, replacing all drives, rebuilding the RAID array manually, then restoring the files.
     
    3. Windows Home Server integrates much better into a Windows network than Linux-based NAS devices. The ones I tried invariably had issues keeping timestamps that would throw off file sync utilities or didn’t handle Windows ACLs quite right.
     
    4. Although it’s not supported, I can install a few extra servers on Windows Home Server that I can’t run on a Linux-based device.
     
    5. Plug-it-in, turn-it-on, touch one web page, DONE. I have to say Vail is a lot easier to get up and running with the /connect web page but even the EX485 wasn’t that difficult.
     
    6. Perhaps not as important, but I offloaded all of my media to Windows Home Server. I didn’t see the need to keep all my media on both my desktop and on the server since Windows Media Player will play it just as easily when it’s on the server. This also keeps Media Player from seeing duplicate copies of all my music.
     
    "Simon J Hudson" wrote in message news:c80224e2-a167-4c6a-b02d-1cf2e02efaff...

    There are 3 key things that set WHS apart as being truly useful for my (bloody complex) home network and those of the people I support:

    1. Automated client backup
    2. Document and media storage and access
    3. Ability to seamlessly extend storage based on acquired drives from other, retired machines, via Drive Extender
    Losing 1/3rd of the feature set is going to take an awful lot of other enhancements to make up for and I'm just not seeing the value proposition for upgrading at this point. I was seriously considering a full new hardware purchase in conjunction with the Vail launch, but if Vail is going to offer little more and substantially less it's going to be hard to get past the boss.
     
    I understand the stated rationale for removing DE, however that's a technical and strategy consideration, not an end-user orientated decision, which is dangerous in a consumer facing product.
     
    So the challenge I'd like addressed is: what makes Vail worth upgrading to, and how is that equivalent to the loss of such a serious feature?
    Friday, December 31, 2010 10:04 PM
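The rebuild cost in point 2 above is easy to put rough numbers on. The figures below are assumptions chosen for illustration (a 2 TB replacement drive and a conservative 30 MB/s sustained rebuild rate), not measurements of any particular array:

```python
# Rough estimate of the drive-at-a-time RAID 5 capacity upgrade
# described above. All numbers are illustrative assumptions.
drive_size_gb = 2000          # size of each replacement drive (assumed)
rebuild_rate_mb_s = 30        # assumed sustained rebuild throughput
n_drives = 4                  # drives in the array (assumed)

# one full rebuild per swapped drive: GB -> MB, divide by rate, to hours
hours_per_rebuild = drive_size_gb * 1000 / rebuild_rate_mb_s / 3600
total_hours = hours_per_rebuild * n_drives

print(f"~{hours_per_rebuild:.1f} h per swap, ~{total_hours:.0f} h total")
```

Even with these generous assumptions the array spends roughly three days degraded and rebuilding, which is exactly the window in which a second drive failure loses the whole pool; DE's per-file duplication has no equivalent array-wide rebuild.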
  • So the home server team figured out that either

    1. They cannot actually make the DE code changes and have it safely work (no data corruption).  Possibly due to the coming changes in hard drive technology.

    2. They cannot make enough money with the product to keep supporting the product.  If WHS is 1% of the income and 30% of the headache...

    2010 was a tough year - end of WHS and Net Neutrality. 

    Good year for carbonite

     

     

    Saturday, January 08, 2011 1:34 PM
  • Deprecated features:

    ·         A data drive from a storage pool cannot be read on machine not running the “Vail” server software.

    I had heard that drives from the Vail storage pool were able to be mounted, in some instances, as VHDs.  This was discussed at a user group meeting in relation to one of the earlier betas. Does this hold true?

     

    C!

    Tuesday, May 03, 2011 6:15 AM
  • On Tue, 3 May 2011 06:15:16 +0000, Cameron Harris wrote:

    I had heard that drives from the Vail storage pool were able to be mounted, in some instances, as VHDs. This was discussed at a user group meeting in relation to one of the earlier betas. Does this hold true?

    Either you misheard, misremembered, or the speaker misspoke. Server backups
    in WHS 2011 are stored using VHD files and can be mounted.


    Paul Adare
    MVP - Identity Lifecycle Manager
    http://www.identit.ca
    fortune:  No such file or directory

    Tuesday, May 03, 2011 2:39 PM
    Good job, Microsoft! Removing DE from WHS 2011 will keep me from upgrading. The only reason I went with WHS is because of DE, and I no longer wanted to deal with a RAID solution.

    You never really seem to get what consumers want, do you?

     

    Wednesday, May 04, 2011 12:57 AM
  • On Wed, 4 May 2011 00:57:37 +0000, smoked ham wrote:

    Good job, Microsoft! Removing DE from WHS 2011 will keep me from upgrading. The only reason I went with WHS is because of DE, and I no longer wanted to deal with a RAID solution.

    You never really seem to get what consumers want, do you?

    Since you followed up to my post, just a reminder that MVPs are volunteers.
    We do not work for Microsoft, nor do we help the product teams make
    decisions.


    Paul Adare
    MVP - Identity Lifecycle Manager
    http://www.identit.ca
    The generation of random numbers is too important to be left to chance.

    Wednesday, May 04, 2011 9:33 AM