MPI Performance Tests fail on HPC 2008 Cluster Manager

  • Question

  • I set up an HPC cluster with a head node, a compute node, and 2 WS nodes.

    Everything seems to work well: the nodes are correctly displayed in Cluster Manager, and all of the network tests in the Diagnostics section pass.

    I need to use this cluster with Ansys, so I need MPI, but all 3 of the MPI performance tests fail with this error:

    MPI Ping-Pong: Latency
    Test Result: Failure
    Message: Unknown option: -c
    abort: Unable to connect to 'COMPUTENODE_NAME:8678',
    sock error: Error = -1

    abort: Unable to connect to 'WSNODE1_NAME:8678',
    sock error: Error = -1

    Failed nodes list (4)



    Does anyone know what the problem is?

    The MPI service is running on the nodes, and the Windows Firewall seems to be configured properly. Any clue?
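The "Unable to connect to '...:8678', sock error" lines suggest a plain TCP reachability problem between the head node and the other nodes. As a quick sanity check (a minimal sketch, not an official diagnostic: the node names and port 8678 are taken from the error output above, and `can_connect` is a hypothetical helper), you could probe the port from the head node:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers DNS failures, refused connections, and timeouts.
        return False

# Placeholder node names from the failing test output above.
for node in ("COMPUTENODE_NAME", "WSNODE1_NAME"):
    status = "reachable" if can_connect(node, 8678) else "unreachable"
    print(f"{node}:8678 is {status}")
```

If the port is unreachable only from the head node, that points at a firewall rule or service problem on the head node rather than on the compute/WS nodes.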

    • Edited by MarcAngelo82 Wednesday, February 12, 2014 2:07 PM
    Wednesday, February 12, 2014 11:30 AM

All replies

  • Update: after some trials, I found that the tests fail only when the head node is involved.

    If I run MPI Ping-Pong between the other nodes, the test succeeds.

    Wednesday, February 12, 2014 2:17 PM
  • It was a problem with the installation of the HPC Pack on the head node.

    Completely uninstalling and reinstalling the HPC Pack on the server (and re-adding the old nodes) solved the issue.


    Wednesday, February 12, 2014 3:38 PM