MSMPI and Windows Server 2016

  • Question

  • Hello community,

    I'm trying to use an MS-MPI-based Fortran executable on a Windows Server 2016 machine with 4 processors of 18 cores each (144 threads). When I run "mpiexec -n 144 code.exe", only 2 processors are activated, handling the 144 threads, while the 2 other processors do nothing. Among the tests I have done, running 144 instances of code.exe simultaneously activates all processors, which means the OS can handle multi-threading on all processors correctly. I suspect MS-MPI is obsolete for this type of system, memory or processor.

    This is the machine configuration:

    1 PowerEdge R830 motherboard

    4 processors Intel Xeon E5-4667 v4 (2.2 GHz, 18C/36T, 46 MB cache, 9.6 GT/s QPI, 135 W, Turbo, HT), max memory speed 2400 MHz

    This is the F90 code I'm using:

    ****************

     program mpisimple
    !
          implicit none
    !
          integer ierr,my_rank
    !
          include 'mpif.h'
    !
          call mpi_init(ierr)
          call mpi_comm_rank(MPI_COMM_WORLD,my_rank,ierr)
    !
    !
    !     busy loop: each rank repeatedly writes its rank number to unit
    !     7+my_rank (files fort.7, fort.8, ...) so that the CPU load of
    !     every rank can be observed
    !
          do while(.true.)
             write(7+my_rank,*) 'Hello World, my rank :',my_rank
          enddo
    !
    !     not reached because of the infinite loop above
          call mpi_finalize(ierr)
    !
          end

    ****************

    I'm compiling with Intel(R) Visual Fortran Compiler 18.0.0.124 (x64)

    The MSMPI library is Microsoft MPI 8.0.12438.0

    Any advice or update from anybody?

    Have a good day!


    • Edited by Gomez Felix Friday, December 22, 2017 9:51 AM
    Friday, December 22, 2017 9:48 AM

All replies

  • Hello Gomez,


    Support for multiple processor groups was recently added in MS-MPI v9.0, so when you run from the console, mpiexec now launches smpd
    in a way that makes it possible to launch MS-MPI processes across multiple processor groups as well. A beta version of MS-MPI v9.0 is available for download from the Microsoft Download Center.
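    A quick way to verify this after installing the v9.0 beta (reusing code.exe, the test program from the first post) is to rerun the original command and watch the per-group CPU usage, e.g. in Task Manager:

    mpiexec -n 144 code.exe

    If the multi-group support is working, all four processor groups should then show activity.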

    Hope it helps!

    Wednesday, January 3, 2018 8:21 PM
  • Hello community, 

    We still have the problem that the MPI ranks do not properly use the available processors.
    We are using Intel Xeon CPU E7-8890 v4, 2.2 GHz, 4 sockets, 96 cores, 192 logical processors.
    Microsoft MPI Startup Program [Version 10.0.12498.5]
    There are 4 Processor Groups with 48 Cores per Group.

    When starting "C:\Program Files\Microsoft MPI\Bin\mpiexec.exe" -n 179 ampiProcess.exe,

    the ampiProcess.exe instances are pinned mainly to two processor groups, for example:
    Group 0: 6 processes
    Group 1: 82 processes
    Group 2: 7 processes
    Group 3: 84 processes
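    (6 + 82 + 7 + 84 = 179, so all ranks are started, but a balanced placement over the 4 groups would be roughly 179 / 4 ≈ 45 ranks per group.)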

    It would be nice if someone could give us a hint!

    Thanks in advance

    Monday, December 3, 2018 4:18 PM
  • Hi Muts,

      Currently you need HPC Pack 2016 Update 2 + MS-MPI 10.0; then it will support multiple processor groups.

    HPC Pack 2016 Update 2: http://www.microsoft.com/downloads/details.aspx?FamilyID=6e1d2f89-02a1-422e-883c-5ee9b5062b88 (it currently ships with MS-MPI 9.0; we will update it to the latest version in an upcoming QFE)

    MSMPI 10: https://www.microsoft.com/en-us/download/details.aspx?id=57467

    Then install HPC Pack, upgrade MS-MPI to 10.0 on all compute nodes, and try again.
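    To confirm which MS-MPI version is active on a node, you can rely on the startup banner that mpiexec prints (the "Microsoft MPI Startup Program [Version ...]" line quoted above); running it without a job, for example

    "C:\Program Files\Microsoft MPI\Bin\mpiexec.exe"

    should print that banner with the installed version number, so the upgrade to 10.0 can be checked on every compute node.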


    Qiufang Shi

    Thursday, December 6, 2018 7:47 AM
  • Hi Qiufang,

    thanks for your advice regarding HPC.

    In cases where you don't have HPC Pack and need to use MPI locally on a node with more than 64 logical CPUs and more than one processor group, we figured out with the help of the MS-MPI team (thanks!) that you have to use MS-MPI version 10 or later. In the mpiexec call you either set the affinity flag, or, if you want to pin your MPI processes to specific logical CPUs, you call mpiexec with the hosts option and give each process an affinity bit mask in which the bit at the position of the needed CPU is set (mask 1 = CPU 0, 2 = CPU 1, 4 = CPU 2, 8 = CPU 3, ...), as in the command below:

    "C:\Program Files\Microsoft MPI\Bin\mpiexec.exe" -hosts 80 localhost 1,1:0 localhost 1,2:0 localhost 1,4:0 localhost 1,8:0 localhost 1,10:0 localhost 1,20:0 localhost 1,40:0 localhost 1,80:0 localhost 1,100:0 localhost 1,200:0 localhost 1,400:0 localhost 1,800:0 localhost 1,1000:0 localhost 1,2000:0 localhost 1,4000:0 localhost 1,8000:0 localhost 1,10000:0 localhost 1,20000:0 localhost 1,40000:0 localhost 1,80000:0 localhost 1,1:1 localhost 1,2:1 localhost 1,4:1 localhost 1,8:1 localhost 1,10:1 localhost 1,20:1 localhost 1,40:1 localhost 1,80:1 localhost 1,100:1 localhost 1,200:1 localhost 1,400:1 localhost 1,800:1 localhost 1,1000:1 localhost 1,2000:1 localhost 1,4000:1 localhost 1,8000:1 localhost 1,10000:1 localhost 1,20000:1 localhost 1,40000:1 localhost 1,80000:1 localhost 1,1:2 localhost 1,2:2 localhost 1,4:2 localhost 1,8:2 localhost 1,10:2 localhost 1,20:2 localhost 1,40:2 localhost 1,80:2 localhost 1,100:2 localhost 1,200:2 localhost 1,400:2 localhost 1,800:2 localhost 1,1000:2 localhost 1,2000:2 localhost 1,4000:2 localhost 1,8000:2 localhost 1,10000:2 localhost 1,20000:2 localhost 1,40000:2 localhost 1,80000:2 localhost 1,1:3 localhost 1,2:3 localhost 1,4:3 localhost 1,8:3 localhost 1,10:3 localhost 1,20:3 localhost 1,40:3 localhost 1,80:3 localhost 1,100:3 localhost 1,200:3 localhost 1,400:3 localhost 1,800:3 localhost 1,1000:3 localhost 1,2000:3 localhost 1,4000:3 localhost 1,8000:3 localhost 1,10000:3 localhost 1,20000:3 localhost 1,40000:3 localhost 1,80000:3 your_prcess.exe

    Friday, January 18, 2019 8:29 AM