Restricting number of tasks per node

  • Question

  • Hi everyone,

    I am running a parametric sweep job containing about 400 tasks across a handful of local workstation nodes. Each task takes about 5 hours and runs on a single core using about 3 GB of memory. I have configured the job with an estimated memory usage of 3 GB and to require a single core per task. One of my workstations has 8 cores and 32 GB of memory and was allocated 8 tasks to run at the same time. This is obviously not ideal, because each task gets interrupted by OS work, and it took me an age to log in to the box. I cannot find any setting in the node or job templates to restrict the number of tasks on a machine; ideally I would like the limit to be the number of cores minus 1. Is there a way to restrict this? It seems like something we should be able to configure.


    Thursday, April 3, 2014 12:28 PM
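
A rough sketch of the packing arithmetic behind the question (illustrative Python, not HPC Pack code; the helper names are made up). With the numbers quoted above, both the core and the memory constraints permit 8 concurrent tasks on the 8-core / 32 GB box, which is why the scheduler filled it, while the poster would prefer a cap of cores minus 1:

```python
# Illustrative model of how many tasks fit on one node given the job's
# declared requirements (1 core and ~3 GB per task, from the question).
def tasks_allocated(node_cores, node_mem_gb, task_cores=1, task_mem_gb=3):
    """Tasks a scheduler could pack on one node within core and memory limits."""
    by_cores = node_cores // task_cores   # core-constrained count
    by_mem = node_mem_gb // task_mem_gb   # memory-constrained count
    return min(by_cores, by_mem)

def desired_cap(node_cores):
    """The poster's preferred per-node cap: leave one core free for the OS."""
    return node_cores - 1

print(tasks_allocated(8, 32))  # 8 -- both constraints allow 8 concurrent tasks
print(desired_cap(8))          # 7
```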

All replies

  • Does the Maximum Cores setting of the job apply to your case (only one job running at a time)?

    BR, Yizhong

    Friday, April 4, 2014 6:28 AM
    Unfortunately not; that setting applies to the entire job, so if I set it to 6, for example (hoping to leave 2 cores free on my 8-core machines), it restricts the entire job to 6 cores across my whole server farm. As a workaround I could try setting it to (entire server farm core count minus number of servers), but I bring all of the machines in my office online overnight to help out, so that wouldn't work.
    • Edited by cjbhaines Friday, April 4, 2014 9:35 AM
    Friday, April 4, 2014 9:14 AM
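
The workaround described in this reply can be sketched as follows (illustrative Python; the function name is made up). It also shows why the workaround breaks down: the job-wide cap is fixed at submission time, so it goes stale when extra office machines join the farm overnight:

```python
# Job-wide workaround: total cores minus one core per node, so one core
# per machine stays free. "Maximum Cores" applies to the whole job, not
# per node, so this is only an approximation.
def job_wide_core_cap(node_core_counts):
    """Total cores across the farm minus one per node."""
    return sum(node_core_counts) - len(node_core_counts)

daytime = [8, 8, 4]
print(job_wide_core_cap(daytime))    # 17

# Overnight, more office machines come online and the submitted cap is stale:
overnight = daytime + [4, 4, 8]
print(job_wide_core_cap(overnight))  # 30, but the job is still capped at 17
```

Note that even a correctly computed job-wide cap does not guarantee an even spread: 17 cores could still be allocated as 8 + 8 + 1, fully loading two machines.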
  • What's your HPC version?

    If you are using HPC 2012 or HPC 2012 R2, you may use the "Maximum Cores Per Node" property in the Job Template. It's a job-wide setting: if you set it to 6, every node allocated to the job will use at most 6 cores.

    If you are using HPC 2008 R2 or an earlier version, I don't think there is a supported way to do this.

    BR, Yizhong

    Tuesday, April 8, 2014 7:24 AM
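
The difference between the two settings discussed in this thread can be sketched as follows (illustrative Python model, not HPC Pack code): "Maximum Cores" caps the whole job, while "Maximum Cores Per Node" (HPC 2012 / 2012 R2) caps each node independently, which is what the original question asked for:

```python
# Per-node cap: each node contributes at most max_cores_per_node cores
# to the job, regardless of how many cores the node has.
def per_node_allocation(node_cores, max_cores_per_node):
    """Cores the job may use on one node under a per-node cap."""
    return min(node_cores, max_cores_per_node)

nodes = [8, 8, 4]  # an 8-core, an 8-core, and a 4-core workstation
print([per_node_allocation(c, 6) for c in nodes])  # [6, 6, 4]
```

With a per-node cap of 6, the 8-core machines each keep 2 cores free while the 4-core machine is unaffected; a job-wide "Maximum Cores" of 6 would instead have limited the whole farm to 6 cores in total.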