Dynamic memory balancing

  • Question

  • Hi,

    I'm a newbie to this Windows HPC world.

    I've got four worker node machines in my setup. However, sometimes I have to run a very memory-intensive process on one of them outside of HPC, just from the command line. The problem is that if this is running when I kick off an HPC job, the HPC job will ultimately fail because the tasks on that node have too little memory.

    I'm just looking for general advice on this:

    1. The "Estimated Memory" feature for Jobs; I'm assuming is more about how much physical memory is available and this this is in no now way dynamic? The documentation is not explicitly if this is static or dynamic. Assuming this is static, it would therefore this is useless to help me here. Correct?

    2. If the offending command-line process were somehow submitted as a formal Job with an appropriate 'Estimated Memory' (something like the sketch below)... would the next Job to be submitted take into account that this will consume memory and therefore skip this node for task allocation? But again, if this memory constraint is static, surely this would fail for the same reason too.
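
    For concreteness, I was imagining something like this (I'm only guessing at the option names and units here; MemoryHog.exe, NODE01 and the 40 GB figure are placeholders for my real process):

        job submit /jobname:MemoryHogRun /estimatedprocessmemory:40960 /requestednodes:NODE01 MemoryHog.exe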

    This feels like a problem others have come across; however, I am not seeing a solution based on my understanding of Windows HPC.

    I would be thankful for any help.

    Wednesday, June 1, 2016 3:46 PM

All replies

  • Hi,

      The EstimatedMemory only impacts the decision when the scheduler allocates node resources to the job, to ensure that there is enough memory for the job/task to run; the node's available memory is reduced by that amount even if the process won't actually use that much.

      In your case, you may need to pre-set the EstimatedMemory for all your jobs, so that your normal HPC jobs won't be scheduled onto that node and collide with the high-memory job, since by default /estimatedprocessmemory is 0.
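
      For example, something along these lines (a rough sketch; I believe the value is in MB, the job names and executables are just placeholders, and you should check 'job submit /?' on your head node for the exact syntax):

          job submit /jobname:BigMemRun /estimatedprocessmemory:40960 MemoryHog.exe
          job submit /jobname:NormalRun /estimatedprocessmemory:4096 MyParallelTask.exe

      With both submissions declaring an estimate, the scheduler subtracts the big job's estimate from that node's available memory, so the normal job is placed on another node instead of failing there.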


    Qiufang Shi

    Thursday, June 2, 2016 3:09 AM