Hi,
I am noticing that HPC jobs keep all of their allocated resources reserved until every task in the job has completed. For example, if I have 10 single-core compute nodes and a job with 11 tasks of equal duration, the cluster will run
10 tasks in parallel, then run the final task on its own while keeping all the other compute nodes allocated, even though 9 of them are not doing anything.
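To quantify the waste in that scenario, here is a toy model (my own sketch, not HPC Pack code or its actual scheduler logic) of the node-time that sits reserved but idle when a job holds its full allocation until its last task finishes:

```python
import math

def idle_node_time(nodes: int, tasks: int, task_duration: float = 1.0) -> float:
    """Node-time reserved but unused when a job keeps its whole
    allocation until every one of its tasks has completed."""
    waves = math.ceil(tasks / nodes)      # scheduling rounds needed
    makespan = waves * task_duration      # how long the job holds its nodes
    reserved = nodes * makespan           # total node-time held by the job
    busy = tasks * task_duration          # node-time spent on real work
    return reserved - busy

# 10 single-core nodes, 11 equal tasks: the final wave runs 1 task
# while 9 nodes sit allocated but idle for one full task duration.
print(idle_node_time(10, 11))  # 9.0
```

So in this example, 9 node-units of compute are wasted that the next queued job could have used.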
In this situation I would like the next job in the queue to start running on the idle compute nodes. Is it possible to configure a preemption policy that allows this?
I do not want to use Immediate preemption, as that would cancel running tasks. I am using HPC Pack 2016 Update 1.
Regards,
Peter Ryan