Thursday, April 12, 2012 07:46
Here is some feedback and a few questions about the Resource Pools feature in SP2. I'm evaluating it now and it doesn't work as I expect. We have a cluster with thousands of jobs; each job is a single-core job with one task. Say we have 2 resource pools that split all cores 50/50, so with 200 cores each pool gets 100 cores.
1. The documentation says that if one group underutilizes its resources, other groups can use them, but that is not happening on our cluster. If one group submits thousands of jobs, they all still run on 100 cores even when the other 100 cores are idle. I understand it may take the system some time to detect that part of the cores is underutilized, but I waited for hours and it never happened.
2. If the scheduling mode is set to Queued and there are 500 jobs of one group in the queue followed by 50 jobs of the other group, they are still started strictly in queue order. I would expect the second group's jobs to start as soon as cores in their pool are released. To achieve that I had to switch to the Balanced scheduling mode, but even then the second group's jobs sometimes sit in the queue while there is free space in their pool. To fix that I had to raise the priority of the second group's jobs.
3. When all cluster cores are busy, the pending message for a job is "Not enough available resources". But when only this job's resource pool is full while other cores are available, the message is the same. I suggest distinguishing these two situations and, in the latter case, writing something like "Resource pool *** is full", so it is clearer why the job is pending.
It seems to me that Resource Pools either don't work well or aren't intended to work when each computational instance is a separate job. I'm actually considering creating cumulative jobs with multiple computational instances as tasks, to move the computations from the 'job level' to the 'task level'. Would I benefit from this change?
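For reference, here is a toy Python model of how I would expect per-pool Queued scheduling to behave for 1-core jobs. This is only an illustrative sketch with made-up pool names and quotas, not the actual HPC Pack scheduler: it starts jobs in queue order but skips jobs whose pool quota is exhausted, so the second group's jobs are not blocked behind the first group's backlog.

```python
# Toy model: Queued-mode scheduling with two fixed resource pools.
# Hypothetical simplification: each pool has a hard core quota, every
# job needs 1 core, and jobs whose pool is full are skipped, not blocking.

from collections import deque

POOL_CORES = {"A": 100, "B": 100}   # assumed 50/50 split of 200 cores

def schedule(queue, used):
    """Start every queued 1-core job whose pool still has a free core."""
    started = []
    pending = deque()
    for job_id, pool in queue:
        if used[pool] < POOL_CORES[pool]:
            used[pool] += 1
            started.append(job_id)
        else:
            pending.append((job_id, pool))
    return started, pending

# 150 jobs for pool A queued first, then 50 jobs for pool B.
queue = deque([(f"A{i}", "A") for i in range(150)] +
              [(f"B{i}", "B") for i in range(50)])
used = {"A": 0, "B": 0}
started, pending = schedule(queue, used)

# Pool A fills at 100 cores; the remaining 50 A jobs wait, but all
# 50 B jobs start immediately because their own pool has free cores.
print(used)          # {'A': 100, 'B': 50}
print(len(pending))  # 50 (all from pool A)
```

In this model the B jobs never wait behind the A backlog, which is the behavior I expected from Queued mode with pools.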
All replies
Tuesday, April 24, 2012 09:43
I have used this feature for a while and it works as expected for me.
I use the Queued scheduling mode.
Did you divide the cluster into two parts by specifying node groups, and then specify in each job template which node groups its jobs are allowed to use?
Or do you have two different job templates that specify which compute nodes jobs may be submitted to?
Friday, April 27, 2012 04:14
Yes, I divided the cluster into several resource pools (actually 5 plus the default one) and made a job template for each pool.
Let's say I have pools A and B; jobs for pool A have priority Above Normal and jobs for pool B have priority Lowest. What I see in Queued mode is that the higher-priority A jobs are started on pool B's resources even though there are B jobs in the queue. So the A jobs occupy their own pool A plus pool B, and the B jobs sit in the queue without resources.
This is despite the documentation stating that job priority matters only inside a pool.
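The behavior described above looks as if priority were applied globally across the cluster rather than per pool. A toy model of that hypothesis (illustrative only, with assumed priority values, not the real HPC Pack scheduler) shows how a global priority sort over a shared 200-core cluster starves the low-priority pool:

```python
# Toy model: cross-pool borrowing combined with a GLOBAL priority sort.
# Hypothetical numeric priorities: 3 = Above Normal, 0 = Lowest.

TOTAL_CORES = 200

def schedule_global_priority(queue):
    """Start 1-core jobs in descending priority until the cluster is full."""
    running = []
    for job_id, pool, prio in sorted(queue, key=lambda j: -j[2]):
        if len(running) < TOTAL_CORES:
            running.append(job_id)
    return running

queue = ([(f"A{i}", "A", 3) for i in range(250)] +   # pool A, Above Normal
         [(f"B{i}", "B", 0) for i in range(50)])     # pool B, Lowest

running = schedule_global_priority(queue)
print(sum(j.startswith("A") for j in running))  # 200 -> A jobs fill both pools
print(sum(j.startswith("B") for j in running))  # 0   -> B jobs starve
```

If priority were scoped per pool as the documentation states, the 50 B jobs would run inside pool B's 100-core quota regardless of the A jobs' priority.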
Tuesday, May 15, 2012 10:47
I found one more issue with Resource Pools, although I'm not sure whether it is intended behavior. Let's say you have 2 resource pools, each allocating 100 cores. A user submits Job 1 to Pool 1 with requested resources "1 – Auto cores", and the job has around 10,000 parametric-sweep tasks. At submittal time all cores on the cluster are idle, so the job allocates all 200 cores. Then another user submits Job 2 to Pool 2 and expects it to allocate 100 cores. What actually happens is that no cores are shrunk from Job 1 for Job 2: Job 1 keeps all 200 cores even as its tasks finish, because new tasks start on the freed cores immediately (I suppose the scheduler decides the cores are still required, given how many tasks remain), and Job 2 sits in the queue without resources. This happens even if Job 2 has a higher priority than Job 1, because the jobs are in different resource pools, so priority doesn't apply.
Can you advise what settings I should use to avoid this situation? I have tried pretty much everything (Queued and Balanced modes, with and without preemption) and the behavior is the same.
- Edited by Nikita Tropin Tuesday, May 15, 2012 11:09
Thursday, May 24, 2012 05:32
I think I found the root cause of my issues. It appears that if you create a new job template and don't specify the Preemptable property in the Job Template Editor, its default value is False. So my jobs in Pool 1 were all non-preemptable, which is why they didn't release their cores. It was confusing because I couldn't find the value of the Preemptable property anywhere in the Job Manager GUI, and I can't set it to True for new jobs other than through the HPC API, so I didn't even realize the property existed until I read the documentation thoroughly.
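The effect of that flag can be sketched as follows. This is a toy model of shrink-on-demand (hypothetical field names, not the HPC API): the scheduler reclaims cores from a running job only when the job is preemptable.

```python
# Toy model: a hypothetical scheduler reclaims cores from a running job
# only when its Preemptable flag is True.

def reclaim(job, requested):
    """Return how many cores the scheduler can take back from `job`."""
    if not job["preemptable"]:
        return 0                         # non-preemptable jobs keep all cores
    surplus = job["cores"] - job["min_cores"]
    freed = min(surplus, requested)
    job["cores"] -= freed
    return freed

# Job 1 holds all 200 cores; Job 2 asks for its pool's 100 cores back.
job1 = {"cores": 200, "min_cores": 1, "preemptable": False}
print(reclaim(job1, 100))   # 0   -> Job 2 stays queued

job1["preemptable"] = True
print(reclaim(job1, 100))   # 100 -> Pool 2's cores are released
```

With the template default of Preemptable = False, the first branch always applies, matching the stuck-in-queue behavior I described in the previous post.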
- Marked as answer by Nikita Tropin Thursday, May 24, 2012 05:32