Windows HPC 2016 with High Availability Setup

  • Question

  • Hi,

    We are moving our HPC infrastructure from HPC 2008 (all bare metal) to HPC 2016 (all VMs).

    Microsoft recommends a 3-node setup for high availability. Does this apply only to bare metal (physical servers), or to VMs as well?

    By design, a VM can fail over to a different VM in case of a failure on the OS or host side.

    Looking forward to hearing back

    • Edited by juliakir Saturday, February 23, 2019 9:55 AM
    Saturday, February 23, 2019 9:53 AM

All replies

  • Hi juliakir,

    A 3-head-node highly available setup can also handle failures within a VM, e.g. if one VM is down because its disk is full.

    You may also choose an HA or non-HA setup according to your production availability requirements. If production requires no or minimal downtime, you may consider the 3-head-node HA setup.


    Yutong Sun

    Monday, February 25, 2019 1:57 AM
  • Thank you for the reply. As I understand it, a VM can fail over to a different VM on the same host (OS failure), or to a different host in case of host failure (the disk-full scenario). Would I still need the 3-node setup? Thank you
    Monday, February 25, 2019 2:06 AM
  • If you use three nodes - and Service Fabric requires a minimum of three nodes - how do you then patch the servers?


    Jacob Hertz

    Tuesday, February 26, 2019 3:52 PM
  • Not sure I understand the question. As I understand it, all 3 nodes are active and can be patched. I think one node is the primary.
    Wednesday, February 27, 2019 12:13 PM
  • The 3-head-node HA setup can also handle failures WITHIN the VMs, e.g. services failing to start because a disk IN the VM is full, or a VM system crash.

    Hyper-V may not handle such failures inside the VMs.


    Yutong Sun

    Thursday, February 28, 2019 2:48 AM
  • Hi Jacob Hertz,

    The downtime from patching one of the three head nodes would not cause availability issues for the HPC services on Service Fabric, so you may patch the head nodes one by one to ensure service continuity.


    Yutong Sun

    Thursday, February 28, 2019 2:56 AM
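The one-node-at-a-time patching reasoning above can be sketched in a small simulation (a hypothetical illustration only, not HPC Pack or Service Fabric code; the node names `HN1`-`HN3` and the helper functions are made up): with three head nodes, taking any single node offline still leaves a two-node majority, so quorum is never lost.

```python
# Hypothetical simulation of rolling patching across 3 HA head nodes.
# Assumption (from the thread): Service Fabric needs a majority (quorum)
# of head nodes online for HPC services to stay available.

HEAD_NODES = ["HN1", "HN2", "HN3"]  # hypothetical head node names

def has_quorum(up_nodes):
    """Quorum holds while a strict majority of head nodes is online."""
    return len(up_nodes) > len(HEAD_NODES) // 2

def rolling_patch(nodes):
    """Patch nodes one by one, checking quorum before each patch."""
    up = set(nodes)
    patched = []
    for node in nodes:
        up.remove(node)  # take one head node offline to patch it
        assert has_quorum(up), "quorum lost - never patch two nodes at once"
        patched.append(node)  # the actual OS patching would happen here
        up.add(node)  # bring the node back before touching the next one
    return patched

print(rolling_patch(HEAD_NODES))  # → ['HN1', 'HN2', 'HN3']
```

With two nodes offline simultaneously, `has_quorum` would fail (1 of 3 is not a majority), which is why the replies stress patching the head nodes one by one.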