Can HPC "burst" to AWS without requiring a trusted relationship between domains?

  • Question

  • From what I understand, the HPC head node can be in domain 'X' and use Azure compute nodes that are not in that domain.

    Is there any way to do this but run the compute nodes in AWS instead of Azure? i.e. connect from the head node to the compute nodes over port 443, without requiring that the compute nodes are in domain X?

    The only way I can think of doing it is to create another domain 'Y' in AWS for the compute nodes, and create a trust relationship between domains X and Y. Any other options?

    Basically I don't want to create a trust relationship between 'X' and 'Y', and I have to use AWS.

    The other option is to run the head node in AWS too, but then how do HPC clients connect from inside domain 'X' to a head node in 'Y'?

    Thursday, January 21, 2016 4:36 PM

Answers

  • Hi,

      We don't have a solution to burst to AWS. There are instructions on how to set up an entire HPC Pack cluster in AWS, and your client can submit jobs to that cluster from anywhere, even if the client is not part of the domain.

      You can check this article (Submit jobs to an HPC cluster in Azure): https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-hpcpack-cluster-submit-jobs/ Or you can call the REST API directly if you submit jobs from a Linux client.
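      For a client that is not domain-joined, the REST route comes down to a few HTTPS calls against the head node (create job, add task, submit). Below is a minimal Python sketch of how those requests are shaped; the endpoint path, the api-version value, and the property-XML namespace are assumptions taken from the REST API documentation and may differ on your HPC Pack version, so verify them against your installation.

```python
# Sketch of the request shapes for HPC Pack's REST job-submission API.
# ASSUMPTIONS: the /WindowsHPC/{cluster}/Jobs path, the api-version value,
# and the ArrayOfProperty XML namespace follow the REST API article; check
# them against your HPC Pack installation before use.

def job_url(head_node, cluster_name, job_id=None, action=None):
    """Build the URL for the Jobs endpoints (create, add tasks, submit)."""
    url = f"https://{head_node}:443/WindowsHPC/{cluster_name}/Jobs"
    if job_id is not None:
        url += f"/{job_id}"
    if action is not None:
        url += f"/{action}"
    return url + "?api-version=2012-11-01.4.0"

def property_xml(properties):
    """Serialize name/value pairs into the <ArrayOfProperty> request body."""
    ns = 'xmlns="http://schemas.microsoft.com/HPCS2008R2/common"'
    items = "".join(
        f"<Property><Name>{name}</Name><Value>{value}</Value></Property>"
        for name, value in properties.items()
    )
    return f"<ArrayOfProperty {ns}>{items}</ArrayOfProperty>"
```

      The flow is then: POST `job_url(head, cluster)` with `property_xml({"Name": "MyJob"})` to create the job, POST the task properties under the job's Tasks endpoint, and POST to its Submit endpoint, authenticating with a cluster user's credentials on every call.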

      By customizing AzureAutoGrowShrink.ps1 you can auto-grow/shrink the AWS compute nodes based on the job queue; please refer to https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-hpcpack-cluster-node-autogrowshrink/
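      The policy that AzureAutoGrowShrink.ps1 applies can be summarized as: grow when the queue needs more nodes than are currently idle, shrink when nodes sit idle with nothing queued. Here is a hypothetical Python sketch of that decision; the real script is PowerShell and queries the HPC scheduler, so the function name and the tasks-per-node threshold below are made up for illustration only.

```python
# Hypothetical sketch of the grow/shrink policy that AzureAutoGrowShrink.ps1
# automates. The real script is PowerShell and reads the live job queue;
# plan_scaling and tasks_per_node are invented names for illustration.

def plan_scaling(queued_tasks, busy_nodes, total_nodes, tasks_per_node=4):
    """Return ('grow', n), ('shrink', n), or ('hold', 0) for the node pool."""
    idle = total_nodes - busy_nodes
    needed = -(-queued_tasks // tasks_per_node)  # ceil: nodes the queue requires
    if needed > idle:
        return ("grow", needed - idle)   # start extra compute nodes (EC2 in AWS)
    if queued_tasks == 0 and idle > 0:
        return ("shrink", idle)          # stop nodes that have nothing to run
    return ("hold", 0)
```

      Run on a timer, this is the shape of the loop you would port to AWS by swapping the Azure node start/stop calls for their EC2 equivalents.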

      In our next release, we plan to support non-domain-joined compute nodes, which may help you create a burst-to-AWS solution.


    Qiufang Shi

    Friday, January 22, 2016 1:53 AM

All replies

  • Thanks Qiufang Shi - very helpful answer! I can confirm that I can run a head node in AWS and submit from within a different domain using the instructions from your "Submit jobs to HPC cluster in azure" link.

    When is the next release planned?

    Thursday, January 28, 2016 9:42 AM
  • Hi Qiufang, do you know if it is possible to connect to the head node by IP address? The article says:

    Use the full DNS name of the head node, not the IP address, in the scheduler URL. If you specify the IP address, you’ll see an error similar to "The server certificate needs to either have a valid chain of trust or to be placed in the trusted root store".

    I tried creating another certificate on the head node with the IP in the certificate's subject alternative name, and using this. But I still get "The server certificate needs to either have a valid chain of trust or to be placed in the trusted root store". (I confirmed this new certificate worked by configuring WinRM to use it, and I could connect with WinRM over SSL without setting "-SkipCNCheck".)

    Is there any other way of connecting to the head node by IP address over HTTPS?

    Wednesday, February 3, 2016 1:10 PM
  • If you need to know the details of our next release, please contact hpcpack@microsoft.com.
    Thursday, February 4, 2016 1:31 AM
  • We suppose it should work with the IP as the name. Have you restarted the service?

    And you can submit jobs through the REST API and skip the CN check, so that you can use the IP address directly. Here is an example: http://social.technet.microsoft.com/wiki/contents/articles/7737.creating-and-submitting-jobs-by-using-the-rest-api-in-microsoft-hpc-pack-windows-hpc-server.aspx
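    In TLS terms, skipping the CN check means validating the certificate chain while skipping the hostname match, which is exactly the check that fails when you connect by IP to a certificate issued for a DNS name. A general-purpose Python illustration of that distinction (not HPC Pack code, just the underlying TLS setting):

```python
import ssl

def ip_friendly_context(ca_file=None):
    """TLS context that still verifies the certificate chain but skips
    the hostname/CN comparison, so connecting by raw IP can succeed."""
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.check_hostname = False           # skip CN/SAN-vs-hostname check
    ctx.verify_mode = ssl.CERT_REQUIRED  # still require a trusted chain
    return ctx
```

    Such a context can be passed to e.g. `http.client.HTTPSConnection(ip, context=ctx)`. Alternatively, issuing a certificate whose subject alternative name carries an IP Address entry (rather than a DNS entry that happens to contain the IP) lets strict clients pass the hostname check without skipping anything.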

    • Proposed as answer by Qiufang Thursday, February 4, 2016 1:44 AM
    Thursday, February 4, 2016 1:44 AM
  • I can confirm that it is possible to connect to the head node by IP address over HTTPS. I must have forgotten to restart HPCScheduler as you suggested, as it worked when I tried again.
    Friday, February 12, 2016 3:42 PM
  • Hi,

    Any update on HPC Pack's support for burst mode in AWS? Will it be supported in HPC Pack 2016?

    Currently, I can use Cluster Manager to create an 'Azure Template' and auto-grow/burst "worker roles" in Azure. Is there anything equivalent for AWS?

    Thanks

    Thursday, October 20, 2016 3:24 PM