Friday, August 13, 2010 07:43
Our application runs on HPC and processes large volumes of data. The current NTFS/RAID setup cannot meet our application's speed and reliability requirements as the data size scales up.
Our application could achieve better I/O performance if a parallel file system were adopted. A parallel file system spreads file data across multiple storage machines and delivers higher performance, scalability, and fault tolerance by allowing multiple systems to access the data directly and in parallel. One example is IBM GPFS: http://www-03.ibm.com/systems/software/gpfs/
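The core idea described above — striping file data across multiple storage targets so that I/O can proceed in parallel — can be sketched in a few lines. This is a toy illustration only, not how GPFS or any real parallel file system is implemented; the in-memory "servers", chunk size, and function names are all assumptions for demonstration.

```python
# Toy sketch of file striping, the mechanism behind parallel file systems:
# data is split into fixed-size chunks and assigned round-robin across
# several storage targets. Here the "servers" are just in-memory lists.

CHUNK_SIZE = 4  # bytes per stripe unit (tiny, for demonstration only)

def stripe(data: bytes, num_servers: int):
    """Split data into chunks; chunk i goes to server i % num_servers."""
    servers = [[] for _ in range(num_servers)]
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        servers[(i // CHUNK_SIZE) % num_servers].append(chunk)
    return servers

def reassemble(servers):
    """Interleave chunks back in round-robin order to rebuild the file."""
    out = []
    rounds = max(len(s) for s in servers)
    for r in range(rounds):
        for s in servers:
            if r < len(s):
                out.append(s[r])
    return b"".join(out)

data = b"The quick brown fox jumps over the lazy dog"
servers = stripe(data, 3)
assert reassemble(servers) == data
```

In a real parallel file system each "server" is a separate storage node reachable over the network, so the per-chunk reads and writes overlap in time; that overlap is where the single-file bandwidth gain comes from.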
It seems that the current HPC cluster does not come with an associated high-performance parallel file system, which makes storage a weak point.
Our questions are as follows:
1. Is there any parallel file system produced or being developed by Microsoft? Because we are Microsoft employees, we may only be able to use Microsoft products in our cluster.
2. If that cannot be confirmed, whom should we contact to find out? Perhaps a PM or technical support contact on the HPC team...
Thank you for your help!
All replies
Wednesday, October 6, 2010 23:32 - Moderator
I've forwarded this question to a coworker who knows all about parallel file systems.
But since you work at MS you may want to email the HPC Discussion Group
- Marked as answer by Don Pattee (Moderator), Wednesday, October 6, 2010 23:32
Wednesday, October 6, 2010 23:32
Hi - I'm happy to help. The closest Microsoft-only solution is DFS-N, which enables many servers to serve up data to compute nodes. But that will not help single-file performance.
GPFS runs in a Windows environment today, and a good alternative is StorNext from Quantum. I can arrange contacts if you need them.
Let me know if you have more questions.