Network interface used by HPC for sending MPI messages

  • Question

  • Please tell me: how can I find out which network interface HPC uses for sending MPI messages when a parallel program is started, if the cluster has several network interfaces?
    Monday, March 30, 2009 5:31 AM

Answers

  • Hi Valia

    The MPI network is controlled on a cluster-wide basis by the CCP_MPI_NETMASK environment variable. You can view the current setting by running "cluscfg listenvs" from an administrator command prompt, and set it with a command like "cluscfg setenvs CCP_MPI_NETMASK=192.168.0.0/255.255.0.0".

    You can override the setting on a per-run basis by passing MPICH_NETMASK to mpiexec.exe (for example, "mpiexec -env MPICH_NETMASK 10.1.0.0/255.255.0.0").
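
    As a minimal sketch of both approaches, assuming the application network is 10.1.0.0/255.255.0.0 and a hypothetical executable MyApp.exe (substitute your own network and program):

        rem Cluster-wide: inspect the current setting, then change the MPI netmask
        cluscfg listenvs
        cluscfg setenvs CCP_MPI_NETMASK=10.1.0.0/255.255.0.0

        rem Per run: override the netmask for a single mpiexec invocation only
        mpiexec -env MPICH_NETMASK 10.1.0.0/255.255.0.0 MyApp.exe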

    Friday, April 3, 2009 6:11 AM

All replies

  • Thank you very much indeed. But what if there are two network interfaces matching the same subnet mask (I understand that this is an infrequent occurrence)? What should be done in that situation, and which network interface will be used for sending MPI messages?
    Saturday, April 4, 2009 10:31 AM