Microsoft HPC Pack 2016 Update 1 now available!

  • General discussion

  • HPC Pack 2016 Update 1 is now publicly available for customers to download.

     

    Download Link: https://www.microsoft.com/en-us/download/details.aspx?id=56360

    ARM template: Deployment templates

    SDK Link: https://www.nuget.org/packages/Microsoft.HPC.SDK/5.1.6086

    Sample Code: https://github.com/MicrosoftHPC/HPC2016.SampleCode

     

    What’s New in HPC Pack 2016 Update 1

    1. Removed Dependency on Service Fabric for Single Head Node Installation

    Service Fabric is no longer required for a single head node cluster installation. It is still required for a three head node cluster setup.

     

    2. Azure AD (AAD) Integration for SOA Jobs

    In HPC Pack 2016 RTM we added AAD support, but only for HPC batch jobs. We have now added support for HPC SOA jobs as well. Please refer to the doc Step by step using AAD for SOA job.

     

    3. Burst to Azure Batch Improvements

    In this version, we improved bursting to Azure Batch pools, including “Low Priority” VM support and Linux IaaS VM support. For more details, please check the doc Burst to Azure Batch with Microsoft HPC Pack 2016 Update 1.

     

    4. Use Docker in HPC Pack

    HPC Pack is now integrated with Docker. Cluster users can submit a job requesting a Docker image, and the HPC Job Scheduler will start the task within a Docker container. Please check the doc Using Docker in HPC Pack for more details. NVIDIA Docker GPU jobs and cross-Docker MPI jobs are both supported.

     

    5. Manage Azure Resource Manager (RM) VMs in HPC Pack

    One of the advantages of HPC Pack is that node templates make it much easier to manage Azure compute resources for cluster administration. In addition to the Azure Windows PaaS node template and the Azure Batch pool template, we now introduce a new type of template: the Azure RM VM template. It enables you to add, deploy, remove, and manage RM VMs from one central place. For more details, please check the doc Burst to Azure IaaS nodes in HPC Pack.

     

    6. Support for Excel 2016

    If you’re using Excel workbook offloading in HPC Pack, the good news is that Excel 2016 is now supported by default. The way you use Excel workbook offloading hasn’t changed.

    And if you’re using Office 365, you need to manually activate Excel on all compute nodes.

     

    7. Improved auto grow-shrink operation log

    Previously you had to dig into the management logs to check what was happening with the auto grow-shrink operations, which was not very convenient. Now you can read the logs within the HPC Cluster Manager GUI (Resource Management → Operations → AzureOperations). If you have auto grow-shrink enabled, you will see one active “Auto grow shrink report” operation every 30 minutes. This operation log is never archived; instead, it is purged after 48 hours, which differs from other operations.

    Please note that this feature already exists in HPC Pack 2012 R2 Update 3 with QFE4032368.
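    The 48-hour purge behavior described above can be sketched in a few lines. This is a Python illustration only; the scheduler does this internally, and the (timestamp, name) representation of an operation is hypothetical:

```python
from datetime import datetime, timedelta

def purge_old_reports(operations, now, max_age=timedelta(hours=48)):
    """Keep only operations newer than max_age.

    Illustrative sketch of the purge behavior described above;
    `operations` is a hypothetical list of (timestamp, name) tuples."""
    return [(ts, name) for ts, name in operations if now - ts <= max_age]

now = datetime(2018, 1, 1, 12, 0)
ops = [
    (now - timedelta(minutes=30), "Auto grow shrink report"),  # recent, kept
    (now - timedelta(hours=49), "Auto grow shrink report"),    # older than 48h, purged
]
kept = purge_old_reports(ops, now)
```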

     

    8. Peek output for a running task

    Before HPC Pack 2016 Update 1, you could only see the last 5 KB of output from a task, and only if the task hadn’t specified output redirection. If the task redirected its output to a file, you couldn’t see the output of a running task unless you opened that file directly. This was even worse when your job and task were running on Azure nodes, as your client cannot access the Azure nodes directly.

    Now, in the job view dialog of the Job Manager GUI, you can select a running task and click the “Peek Output” button to see the latest 4 KB of output or standard error.

    You can also use the command line tool “task view <jobId>.<TaskId> /peekoutput” to get the latest 4 KB of output and standard error from the running task.

    Please note that:

    1. This feature does not work if your task is running on an Azure Batch pool node, and it does not work for SOA jobs;
    2. Peek output may fail if your task redirected its output to a share that the compute node account cannot access.
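    The “latest 4K” idea amounts to tailing the last few kilobytes of the task’s output file. A minimal Python sketch for illustration only (the real feature fetches the output from the compute node; this just shows the tail logic):

```python
import os
import tempfile

PEEK_BYTES = 4 * 1024  # the GUI/CLI show roughly the latest 4 KB

def peek_output(path, limit=PEEK_BYTES):
    """Return the last `limit` bytes of a (possibly still-growing) file.

    Illustrative sketch only, not how HPC Pack implements peek output."""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        f.seek(max(0, size - limit))       # start at most `limit` bytes from the end
        return f.read().decode(errors="replace")

# Simulate a running task's stdout file and peek at it.
with tempfile.NamedTemporaryFile("w", suffix=".out", delete=False) as f:
    f.write("x" * 10000)
tail = peek_output(f.name)
```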

     

    9. Backward Compatibility

    In HPC Pack 2016 Update 1 we added compatibility with previous versions of HPC Pack. We mainly support the scenarios below:

    1. With the new SDK released with HPC Pack 2016 Update 1, you can connect to clusters running previous versions of HPC Pack (HPC Pack 2012 and HPC Pack 2012 R2) and submit and manage your batch jobs.
    2. You can connect to an HPC Pack 2016 Update 1 cluster from previous versions (HPC Pack 2012 and HPC Pack 2012 R2) of the HPC Pack SDK to submit and manage batch jobs. Please note that this only works if your HPC Pack 2016 Update 1 cluster nodes are domain joined.

    The SDK is available here. Please note that the Update 1 SDK will not work with HPC Pack 2016 RTM. In the new SDK, we added a new method to IScheduler, shown below, that lets you choose the endpoint you want to connect to: WCF or .NET remoting. With the original Connect method, we first try to connect to the WCF endpoint (versions later than HPC Pack 2016 RTM) and, if that fails, fall back to the .NET remoting endpoint (versions before HPC Pack 2016):

    public void Connect(string cluster, ConnectMethod method)
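    The try-WCF-then-fall-back behavior of the original Connect method can be sketched as below. This is a Python illustration only; the endpoint names and callables are hypothetical stand-ins for the real WCF and .NET remoting transports in the .NET SDK:

```python
def connect(cluster, endpoints):
    """Try the WCF endpoint first (newer head nodes), then fall back to
    .NET remoting (pre-2016 head nodes). `endpoints` maps an endpoint
    name to a callable stand-in for the real transport (hypothetical)."""
    last_error = None
    for name in ("WCF", ".NET remoting"):
        try:
            return name, endpoints[name](cluster)
        except ConnectionError as e:
            last_error = e          # remember the failure and try the next endpoint
    raise ConnectionError(f"could not reach {cluster} on any endpoint") from last_error

def wcf(cluster):
    raise ConnectionError("no WCF endpoint")  # e.g. a pre-2016 head node

def remoting(cluster):
    return f"session to {cluster}"

endpoint_used, session = connect("HEADNODE", {"WCF": wcf, ".NET remoting": remoting})
```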

     

    10. SqlConnectionStringProvider plugin

    HPC Pack 2016 Update 1 supports a plugin that provides a customized SQL connection string. This is mainly for managing the SQL connection strings in a security system separate from HPC Pack 2016 Update 1: for example, when the connection strings change frequently, or when they contain secret information that should not be saved in the Windows registry or the Service Fabric cluster property store.

    Please refer to the doc HPCSqlConnectionStringPlugin for how to develop and deploy the plugin.

     

    11. New Set of HPC Pack Scheduler REST APIs

    While we keep the original REST API for backward compatibility, we have introduced a new set of REST APIs for Azure AD integration and added JSON format support. Please refer to the doc New Set of HPC Pack Scheduler REST API for more details.
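    For illustration, a client might build a JSON job-submission request with an AAD bearer token as in this Python sketch. The /hpc/jobs route and the payload shape are hypothetical; see the linked doc for the actual routes:

```python
import json
import urllib.request

def build_submit_request(base_url, token, job):
    """Build (but do not send) a JSON request for the new REST API.

    Hypothetical route and payload; shown only to illustrate the JSON
    body and the AAD bearer-token header."""
    return urllib.request.Request(
        url=f"{base_url}/hpc/jobs",              # hypothetical route
        data=json.dumps(job).encode(),
        method="POST",
        headers={
            "Content-Type": "application/json",  # JSON format now supported
            "Authorization": f"Bearer {token}",  # AAD access token
        },
    )

req = build_submit_request("https://headnode", "TOKEN", {"Name": "demo"})
```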

     

    12. Improved Linux mutual trust configuration for cluster users’ cross-node MPI jobs

    Previously, when a cluster user submitted a cross-node MPI job, they needed to provide a key pair XML file through hpccred.exe setcreds /extendeddata. This is no longer required, as we now generate a key pair for the user automatically.

     

    13. SOA Performance Improvement

    HPC SOA performance has been improved in this release.

    Note: the documents linked above still point to the preview version while we publish the GA version, but they should still apply.

     

    What will be next?

    • We will provide guidelines on how to migrate from your HPC Pack 2016 RTM cluster;
    • We will update our existing online documents to reflect the new capabilities;
    • We want to listen to your and our customers’ feedback and prepare for our next release.

    Qiufang Shi

    Monday, December 18, 2017 6:24 AM

All replies

  • Hi, I am getting a NullReferenceException trying to use the new SDK 5.1.6086 from nuget.org to connect to a headnode running HPC Pack 2012 R2 4.5.5079.0.

    I am connecting to the headnode using Windows authentication (not HTTPS), using the FQDN of the headnode:

    System.NullReferenceException: Object reference not set to an instance of an object.
       at Microsoft.Hpc.WindowsRegistryBase.GetValueAsync[T](String key, String name, CancellationToken token, T defaultValue)
       at Microsoft.Hpc.RegistryExtension.<GetCertificateValidationTypeAsync>d__27.MoveNext()
    --- End of stack trace from previous location where exception was thrown ---
       at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
       at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
       at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
       at Microsoft.Hpc.RestServiceUtil.GetCertificateValidationCallback(IHpcContext context)
       at Microsoft.Hpc.RestServiceUtil.IgnoreCertNameMismatchValidation(IHpcContext context)
       at Microsoft.Hpc.HpcContext.GetOrAdd(EndpointsConnectionString connectionString, CancellationToken token, Boolean isHpcService)
       at Microsoft.Hpc.HpcContext.GetOrAdd(String connectionString, CancellationToken token, Boolean isHpcService)
       at Microsoft.Hpc.Scheduler.Store.StoreConnectionContext..ctor(String oldMultiFormatName, CancellationToken token)
       at Microsoft.Hpc.Scheduler.SchedulerConnectionContext..ctor(String oldMultiFormatString, CancellationToken token)
       at Microsoft.Hpc.Scheduler.Scheduler.Connect(String cluster, ConnectMethod method)
       at Microsoft.Hpc.Scheduler.Scheduler.Connect(String cluster)


    Monday, December 18, 2017 10:24 AM
  • Hi TimJRoberts1,

    Thank you for your feedback. We have identified the root cause and will publish a new version of SDK after fixing it.
    For now, you can work around this issue by creating one registry key:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\HPC

    We'll inform you once the new SDK is available.

    Thanks,
    Zihao


    Tuesday, December 19, 2017 3:46 AM
  • Thanks for the reply. The workaround works in the meantime.

    One more request. I am trying to install the headnode remotely and am getting an error on the SQL Server Express 2016 install:

    Message: 
    Showing a modal dialog box or form when the application is not running in UserInteractive mode is not a valid operation. Specify the ServiceNotification or DefaultDesktopOnly style to display a notification from a service application.

    I noticed the command line called by the installer for SQLExpress begins with:
    SQLEXPR_x64_ENU.exe" /qs /action=install /features=sqlengine /instancename=COMPUTECLUSTER .. etc

    If I run this individual command remotely with "/qs" changed to "/q" (from quiet simple to quiet), it succeeds. Then if I rerun the headnode installer, it successfully installs.
    SQLEXPR_x64_ENU.exe" /q /action=install /features=sqlengine /instancename=COMPUTECLUSTER .. etc

    Would it be possible to change the HPC headnode installer to call the SQLExpress installer with "/q" instead of "/qs"?

    Thanks

    Tim

    Tuesday, December 19, 2017 10:22 AM
    •  We will provide guidelines on how to migrate from your HPC Pack 2016 RTM cluster;

    When can I expect the 'migration' document to be made available? Running setup from Update 1 seems to insist that I uninstall RTM before installing Update 1.

    I suppose that's not what you folks intend us to do?

    Wednesday, December 27, 2017 9:23 PM
  • Hi Tim,

    A new version of the HPC Pack SDK, 5.1.6088, has been published. Please have a try.

    Thanks,
    Zihao

    Tuesday, January 2, 2018 1:39 AM
  • Hi, the documents haven't been published yet. Please wait a few days; I'll update this thread when they are available.


    Qiufang Shi

    Tuesday, January 2, 2018 8:06 AM
  • Is the following command line switch still good for installing a workstation node?

    setup.exe -unattend -workstationnode:<head_node>

    If so, can we use this command to package 2016 Update 1 into our SCCM tool, so that it can be installed on new workstation nodes that belong to, say, an AD group?

    Thursday, January 18, 2018 4:54 PM
  • Since HPC Pack 2016, a certificate is required for secured communication back to the headnode, so you need to specify a certificate.

    Now you shall try: setup.exe -unattend -workstationnode:<MY_HN_NAME> -SSLPfxFilePath:"\\<MY_HN_NAME>\REMINST\test.pfx" -SSLPfxFilePassword:"<my_password>"


    Qiufang Shi


    Friday, January 19, 2018 7:57 AM
    •  We will provide guidelines on how to migrate from your HPC Pack 2016 RTM cluster;


    Still waiting on the instructions to update an existing 2016 RTM installation to Update 1.


    Thanks!

    Wednesday, January 24, 2018 3:43 AM
  • Hi, 

      Sriram, the doc is blocked from publishing by the doc reviewer. We can give you an offline version; just reach us through hpcpack@microsoft.com


    Qiufang Shi

    Wednesday, January 24, 2018 3:50 PM