January 7, 2011 8:48 PM
I have a hybrid MPI/OpenMP code that I would like to run on Windows Server 2008 with HPC Pack 2008 SDK SP2. I submit the job in the following manner:
>> call job submit /numprocessors:%cores% mpiexec -np %mpiranks% program.exe
My script provides %cores%, which is the total number of OpenMP threads I would like to use, and %mpiranks%, which is the number of MPI processes to spawn. I have an 8-core cluster (head node + compute node) with 4 cores on each node.
I would like to run any configuration between 8 MPI x 1 OpenMP and 1 MPI x 8 OpenMP. The current job submit command works well for only a few cases, like 1 MPI x 4 OpenMP, where I can see that the scheduler spawns 1 MPI process on the head node and shows 100% usage of all 4 CPUs by 4 OpenMP threads.
I'm not able to achieve this kind of result with other configurations. For example, when I run a simulation with 2 MPI processes x 4 OpenMP threads each, I would expect the scheduler to spawn 1 MPI process on each node, showing 100% usage of all 4 CPUs by 4 OpenMP threads (on each node). Instead I see 2 MPI processes spawned on the compute node, showing 100% usage of its 4 CPUs, and no activity on the head node.
Is there any way of getting the desired result from the job submit command to the scheduler? Can you please help me out with this inconsistency?
- Moved by Don Pattee (Moderator), January 12, 2011 2:46 AM (From: Windows HPC Server Job Submission and Scheduling)
January 27, 2011 10:28 PM
First, a clarification of the /numprocessors option of the job command:
/numprocessors - The number of processors required by this job;
the job will run on at least the Minimum and no
more than the Maximum [deprecated]
So /numprocessors means how many cores will be allocated for the job, not the number of OpenMP threads. Also note that the -np option for mpiexec takes precedence over /numprocessors. So if you run job submit /numprocessors:4 mpiexec -np 1 program.exe, it will run with just 1 MPI process.
You can achieve your design in one of the following ways:
1. Set the number of OpenMP threads via the environment variable OMP_NUM_THREADS.
2. Run with the /numnodes option of the job command and the -c option of mpiexec. For example, if you want to run 1 MPI process on each node of your cluster, and 4 threads on each node:
job submit /numnodes:2 mpiexec -c 1 -env OMP_NUM_THREADS 4 program.exe
If you want to run 4 MPI x 2 OpenMP (2 MPI processes on each node, each with 2 OpenMP threads):
job submit /numnodes:2 mpiexec -c 2 -env OMP_NUM_THREADS 2 program.exe
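Putting the two options together, the full range of configurations from the original question can be sketched as follows. This is a hedged sketch assuming the two-node cluster (4 cores per node) described above; the flags follow the pattern from this answer (/numnodes, -c, and OMP_NUM_THREADS).

```shell
:: 8 MPI x 1 OpenMP: 4 ranks per node, 1 thread each
job submit /numnodes:2 mpiexec -c 4 -env OMP_NUM_THREADS 1 program.exe

:: 4 MPI x 2 OpenMP: 2 ranks per node, 2 threads each
job submit /numnodes:2 mpiexec -c 2 -env OMP_NUM_THREADS 2 program.exe

:: 2 MPI x 4 OpenMP: 1 rank per node, 4 threads each
job submit /numnodes:2 mpiexec -c 1 -env OMP_NUM_THREADS 4 program.exe
```

Note that a 1 MPI x 8 OpenMP configuration cannot be realized on this cluster: OpenMP threads must share a single node's memory, and each node has only 4 cores.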
Please let me know whether this works for you.
- Marked as answer by Don Pattee (Moderator), March 25, 2011 10:44 PM