Could you please tell me how to find out which network interface HPC uses for sending MPI messages when we start a parallel program, if the cluster has several network interfaces?
The MPI network is controlled on a cluster-wide basis by the CCP_MPI_NETMASK environment variable. You can view the current setting by running "cluscfg listenvs" from an admin command prompt, and set it with a command like "cluscfg setenvs CCP_MPI_NETMASK=192.168.0.0/255.255.0.0".
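For example, a minimal sequence from an admin command prompt might look like the following (the 10.1.0.0/255.255.0.0 network here is a placeholder; substitute your cluster's own private MPI network):

  rem Show the current cluster-wide environment variables, including CCP_MPI_NETMASK
  cluscfg listenvs
  rem Point MPI traffic at the 10.1.0.0/16 network
  cluscfg setenvs CCP_MPI_NETMASK=10.1.0.0/255.255.0.0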
You can override the setting on a per-run basis by passing MPICH_NETMASK to mpiexec.exe (e.g. mpiexec -env MPICH_NETMASK 10.1.0.0/255.255.0.0).
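As a sketch of a full invocation (the application name and process count below are hypothetical):

  rem Override the cluster-wide netmask for this run only
  mpiexec -env MPICH_NETMASK 10.1.0.0/255.255.0.0 -n 16 myapp.exe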
Thank you very much indeed. But what if there are two network interfaces on the same subnet (I understand this is an infrequent occurrence)? What should be done in that situation, and which network interface will be used for sending MPI messages?