Job Template - Node Ordering - Even distribution of tasks across nodes

  • Question

  • Hello,

    I can't find a solution to this issue. I'm running HPC Pack 2012 R2 with 7 compute nodes, each with 16 virtual cores and 32 GB RAM. When I submit a job with, say, 50 tasks, they fill each node completely before moving on to the next, leaving some of the nodes unused. To me this is not efficient. How can I get the tasks to spread evenly across the nodes? Why is something this trivial not straightforward?

    I looked at Node Ordering, and I can't tell whether it refers to the amount of available memory or the total memory, and likewise whether it means available cores or total cores. If it is the total, it won't make any difference, since the total memory and cores are the same across all the nodes.

    • Memory Size (Ascending). The job will be assigned first to nodes that have the smallest amount of memory.
    • Memory Size (Descending). The job will be assigned first to nodes that have the largest amount of memory.
    • Processor Number (Ascending). The job will be assigned first to nodes that have the smallest number of cores.
    • Processor Number (Descending). The job will be assigned first to nodes that have the largest number of cores.
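    MJ's point about identical nodes can be sketched as follows. This is a toy illustration with hypothetical node data, not an HPC Pack API: if the ordering sorts on *total* memory, sorting seven identical nodes is a no-op, so the setting alone cannot change placement.

    ```python
    # Toy illustration (hypothetical data, not an HPC Pack API): how
    # "Memory Size (Descending)" node ordering would sort candidate nodes,
    # and why it changes nothing when every node has the same total memory.

    nodes = [
        {"name": f"NODE{i:02d}", "total_mem_gb": 32, "total_cores": 16}
        for i in range(1, 8)  # 7 identical compute nodes, as in the question
    ]

    # Memory Size (Descending): nodes with the largest total memory come first.
    ordered = sorted(nodes, key=lambda n: n["total_mem_gb"], reverse=True)

    # Python's sort is stable, so identical keys keep their original order:
    # the node list is unchanged, and so is the resulting placement.
    print([n["name"] for n in ordered])
    ```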

    Thanks,

    MJ


    ManiMJ

    Thursday, June 29, 2017 2:46 PM

All replies

  • Hi,

      This is the designed behavior. We fill nodes one by one instead of spreading tasks across different nodes so that:

    1. When a new job comes in, it can get "whole" nodes, rather than scattered cores on different nodes

    2. Your job/tasks are isolated from other jobs
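    The contrast between this pack-first placement and the even spread MJ is asking for can be sketched with a toy simulation (illustration only, not HPC Pack code) for 50 single-core tasks on 7 nodes with 16 cores each:

    ```python
    # Toy simulation (not HPC Pack code): pack-first placement, which fills
    # each node's cores before moving on, versus a round-robin spread.

    NODES, CORES, TASKS = 7, 16, 50

    def pack_first(tasks, nodes, cores):
        load = [0] * nodes
        for _ in range(tasks):
            # Place on the first node that still has a free core.
            i = next(i for i, used in enumerate(load) if used < cores)
            load[i] += 1
        return load

    def round_robin(tasks, nodes, cores):
        load = [0] * nodes
        i = 0
        for _ in range(tasks):
            # Skip full nodes; place on the next node in rotation.
            while load[i % nodes] >= cores:
                i += 1
            load[i % nodes] += 1
            i += 1
        return load

    print(pack_first(TASKS, NODES, CORES))   # [16, 16, 16, 2, 0, 0, 0]
    print(round_robin(TASKS, NODES, CORES))  # [8, 7, 7, 7, 7, 7, 7]
    ```

    Pack-first leaves three nodes idle but keeps them free for the next job; round-robin balances load at the cost of fragmenting free cores across all nodes, which is the trade-off this reply describes.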


    Qiufang Shi

    Monday, July 3, 2017 7:56 AM