Microsoft HPC 2016 Update 1 Preview is available

    General discussion

  • The HPC Pack 2016 Update 1 Preview is now publicly available for customers to download and try out. Everyone here is also welcome to use it and provide feedback.

    Meanwhile, we are working on the quality and reliability of the product before we reach GA. Below is the information about the preview release:

     

    Download Link: https://www.microsoft.com/en-us/download/details.aspx?id=56112

    ARM template: Preview deployment templates

    What’s New in HPC Pack 2016 Update 1 Preview

     

    • Removed Dependency on Service Fabric for Single Head Node Installation

    Service Fabric is no longer required for a single-head-node cluster installation, but it is still required for a three-head-node cluster setup.

    • Azure AD (AAD) Integration for SOA Job

    In HPC Pack 2016 RTM we added AAD support, but only for HPC batch jobs. We have now added support for HPC SOA jobs as well. Please refer to the doc step by step using AAD for SOA job.

    • Burst to Azure Batch Improvement

    In this version, we improved bursting to Azure Batch pools, including “Low Priority” VM support and Linux IaaS VM support. For more details, please check the doc Burst to Azure Batch with Microsoft HPC Pack 2016 Update 1.

    • Use Docker in HPC Pack

    HPC Pack is now integrated with Docker. Cluster users can submit a job requesting a Docker image, and the HPC Job Scheduler will start the task within Docker. NVIDIA Docker GPU jobs and cross-Docker MPI jobs are both supported. Please check the doc Using docker in HPC Pack for more details.
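
    As a rough sketch of what this looks like through the .NET SDK (the CCP_DOCKER_IMAGE environment variable is the mechanism the doc describes, but treat the exact variable name, the image tag, and the head node name "headnode" here as placeholders):

    using Microsoft.Hpc.Scheduler;

    class DockerJobExample
    {
        static void Main()
        {
            // Connect to the cluster head node ("headnode" is a placeholder).
            IScheduler scheduler = new Scheduler();
            scheduler.Connect("headnode");

            // Create a job with one task that runs inside a Docker image.
            ISchedulerJob job = scheduler.CreateJob();
            ISchedulerTask task = job.CreateTask();
            task.CommandLine = "hostname";
            // Setting CCP_DOCKER_IMAGE asks the scheduler to start the task
            // within the given image (per the Using docker in HPC Pack doc).
            task.SetEnvironmentVariable("CCP_DOCKER_IMAGE", "ubuntu:16.04");
            job.AddTask(task);

            // Submit as the current user.
            scheduler.SubmitJob(job, null, null);
        }
    }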

    • Manage Azure Resource Manager (RM) VMs in HPC Pack

    One of the advantages of HPC Pack is that it makes it much easier to manage Azure compute resources for cluster administration through node templates. In addition to the Azure Windows PaaS node template and the Azure Batch pool template, we now introduce a new type of template: the Azure RM VM template. It enables you to add, deploy, remove, and manage RM VMs from one central place. For more details, please check the doc Burst to Azure IaaS nodes in HPC Pack.

    • Support Excel 2016

    If you’re using Excel Workbook offloading in HPC Pack, the good news is that Excel 2016 is supported by default. The way you use Excel Workbook offloading hasn’t changed.

    And if you’re using O365, you need to manually activate Excel on all the compute nodes.

    • Improved auto grow-shrink operation log

    Originally you had to dig into the management logs to check what was happening with the auto grow-shrink operations, which was not very convenient. Now you’re able to read the logs within the HPC Cluster Manager GUI (Resource Management → Operations → AzureOperations). If you have auto grow-shrink enabled, you will see one active “Auto grow shrink report” operation every 30 minutes. This operation log is never archived; instead, it is purged after 48 hours, which is different behavior from other operations.

    Please note that this feature already exists in HPC Pack 2012 R2 Update 3 with QFE4032368.

    • Peek output for a running task

    Before HPC Pack 2016 Update 1, you could only see the last 5 KB of output from a task if the task hadn’t specified output redirection, and you couldn’t see the output of a running task at all unless the task redirected its output to a file you could open directly. The situation was worse if your job and task were running on Azure nodes, as your client cannot access the Azure nodes directly.

    Now, if you open the job view dialog in the Job Manager GUI, select a running task, and click the “Peek Output” button, we show you the latest 4 KB of output or standard error.

    You can also use the command line “task view <jobId>.<TaskId> /peekoutput” to get the latest 4 KB of output and standard error from the running task.

    Please note that:

    1. This feature does not work if your task is running on an Azure Batch pool node, nor does it work for SOA jobs;
    2. Peek output may fail if your task redirected the output to a share that the compute node account cannot access.
    • Backward Compatibility

    In HPC Pack 2016 Update 1 we added compatibility with previous versions of HPC Pack. Mainly, we support the scenarios below:

    1. With the new SDK we release with HPC Pack 2016 Update 1, you’re able to connect to clusters running previous versions of HPC Pack (HPC Pack 2012 and HPC Pack 2012 R2) and submit/manage your batch jobs.
    2. You’re able to connect to an HPC Pack 2016 Update 1 cluster from previous versions (HPC Pack 2012 and HPC Pack 2012 R2) of the HPC Pack SDK to submit and manage batch jobs. Please note that this only works if your HPC Pack 2016 Update 1 cluster nodes are domain joined.

    The Preview SDK is available here. Please note that the SDK from Update 1 will not work with HPC Pack 2016 RTM. In the new SDK, we add a new method to IScheduler, shown below, that allows you to choose the endpoint you want to connect to: WCF or .NET Remoting. With the original Connect method, we first try to connect to the WCF endpoint (HPC Pack 2016 and later) and, if that fails, fall back to the .NET Remoting endpoint (versions before HPC Pack 2016):

    public void Connect(string cluster, ConnectMethod method)
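
    For example, a minimal sketch of forcing a particular endpoint (the ConnectMethod value name WCF is an assumption here, and "headnode" is a placeholder):

    IScheduler scheduler = new Scheduler();
    // Connect directly to the WCF endpoint (HPC Pack 2016 and later)
    // instead of letting Connect probe WCF and then fall back to
    // .NET Remoting. The enum value name is assumed for illustration.
    scheduler.Connect("headnode", ConnectMethod.WCF);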

    • SqlConnectionStringProvider plugin

    HPC Pack 2016 Update 1 supports a plugin that provides a customized SQL connection string. This is mainly for managing the SQL connection strings in a security system separate from HPC Pack 2016 Update 1, for example when the connection strings change frequently, or when they contain secrets that should not be saved in the Windows registry or the Service Fabric cluster property store.

    Please refer to the doc HPCSqlConnectionStringPlugin for how to develop and deploy the plugin.
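
    Conceptually, such a plugin hands the HPC services a connection string on demand instead of having them read it from the registry. The sketch below is purely illustrative; the class name and method signature are hypothetical, and the real contract is defined in the HPCSqlConnectionStringPlugin doc:

    // Hypothetical provider shape; the actual interface is defined in
    // the HPCSqlConnectionStringPlugin doc.
    public class VaultConnectionStringProvider
    {
        // Called whenever a SQL connection is needed, so rotated
        // credentials are picked up without touching the registry.
        public string GetConnectionString(string databaseName)
        {
            // Placeholder for a call into your own secret store
            // (for example, a key vault).
            throw new System.NotImplementedException();
        }
    }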

    • New Set of HPC Pack Scheduler REST API

    While we keep the original REST API for backward compatibility, we introduced a new set of REST APIs with Azure AD integration and JSON format support. Please refer to the doc New Set of HPC Pack Scheduler REST API for more details.
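
    As a minimal sketch of calling the new API and requesting JSON (the head node name "headnode" and the /hpc/Jobs resource path are assumptions here; this authenticates with the caller's Windows credentials):

    using System;
    using System.Net.Http;

    class RestExample
    {
        static void Main()
        {
            // Authenticate with the caller's Windows credentials.
            var handler = new HttpClientHandler { UseDefaultCredentials = true };
            using (var client = new HttpClient(handler))
            {
                // Ask for JSON instead of the default XML.
                client.DefaultRequestHeaders.Add("Accept", "application/json");
                var body = client.GetStringAsync("https://headnode/hpc/Jobs").Result;
                Console.WriteLine(body);
            }
        }
    }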


    Qiufang Shi

    Sunday, October 15, 2017 8:32 AM

All replies

  • I had significant problems getting the HpcPortal to work with this version - after installing IIS it was missing the required ASP.NET modules as well as Windows Authentication. By mucking about, manually installing things and some trial-and-error configuration I did get it to work in the end.

    One thing I have observed in this version is that I can't access the /hpc endpoint from the browser - I just get an authorization error:

    <Error>
    <Message>Authorization has been denied for this request.</Message>
    </Error>


    When accessing the /WindowsHPC endpoint the browser pops up a dialog asking for my Windows username and password after which it then works.

    When accessing the /hpc endpoint via python and using SSPI I do get the correct results so it just seems like an issue whereby the browser doesn't know to ask for credentials when accessing the /hpc endpoint.

    Maybe it's a config issue? Is there any documentation for configuring the REST api for this particular version? Can I make a feature request that the portal and REST features be configurable from within the cluster manager - I think that would be much more user-friendly.

    Thanks,

    Dave


    Wednesday, October 25, 2017 1:28 AM
  • Hi dhirschfeld,

    Thank you for your reply. For your first issue, you need to enable the HPC Portal by entering the following PowerShell command after HPC Pack 2016 Update 1 is installed:

    Set-HPCWebPortal.ps1 -enable
    It will install and configure IIS for you.

    For your second issue, may I ask in what scenario you need to access the /hpc REST endpoint in a web browser? The more we understand your scenario, the better we can help.

    Thanks,
    Zihao

    Wednesday, October 25, 2017 2:27 AM
  • I did run the `Set-HPCWebPortal.ps1 -enable` command. I didn't notice any errors in the output, but it seemed not to complete correctly, leaving the portal in an unusable state. After installing the required IIS modules manually and fiddling with the config file to get Windows Authentication working, the portal did then work.

    I may have missed some error in the output but don't have it to hand. All I can say is that I didn't notice anything untoward at the time.

    Being able to access the /hpc endpoint in the browser is useful for quickly debugging/checking the API - e.g. I can browse to /hpc/Jobs and see the result without having to write any code. I can also then copy and paste the result into a unit test. It's just a convenience thing, and as mentioned it does work for the /WindowsHPC endpoint.


    Wednesday, October 25, 2017 2:37 AM
  • Hi dhirschfeld,

    Thank you for your feedback. We'll look into the issues you reported and keep you informed.

    Thanks,
    Zihao

    Wednesday, October 25, 2017 4:24 AM