Limiting node load in HPC R2

Question
-
In this discussion (http://social.microsoft.com/forums/en-us/windowshpcsched/thread/59DD7FC4-78F0-401E-9A47-867A78B5ECF1) there was a suggestion that in R2, a new feature might be added to limit the number of cores used per node.
Was this feature added?
Our interest in this is that our tasks can potentially use a lot of memory. By limiting the number of cores, we can limit the number of tasks that run on the node, and hopefully prevent tasks from using up all of the memory on that node.
Thanks!
Thursday, October 28, 2010 5:43 PM
Answers
-
Hi Derek,
You can limit the number of cores visible to the OS via:
1) Command line: bcdedit /set numproc <your_num_cores>
2) MSConfig.exe utility: Boot -> Advanced Options -> Number of Processors
Both methods require a reboot of the node.
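As a concrete sketch of the first method (run from an elevated command prompt; the value 4 is only an illustration, substitute your own core count):

```shell
:: Limit the OS to 4 logical processors; takes effect after a reboot.
bcdedit /set numproc 4

:: To undo the limit later, delete the entry and reboot again.
bcdedit /deletevalue numproc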
Regards,
Łukasz
Marked as answer by Derek Kivi Thursday, January 20, 2011 6:14 PM
Thursday, January 20, 2011 6:12 PM
All replies
-
Does anyone have an answer to this?
As an example, say our compute nodes have 8 cores and 32GB of RAM, giving 4GB per core.
Our service instances can commonly use up to 8GB of RAM per instance, so we would really like to be able to specify that we only want to run 4 service instances on these compute nodes.
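The sizing above can be written out as a quick calculation (a minimal POSIX-shell sketch; the node and instance figures are the ones from this post):

```shell
# Worked numbers from this post: 8 cores and 32 GB RAM per node,
# with each service instance needing up to 8 GB.
node_cores=8
node_mem_gb=32
task_mem_gb=8

mem_per_core=$(( node_mem_gb / node_cores ))                           # 4 GB per core
cores_per_task=$(( (task_mem_gb + mem_per_core - 1) / mem_per_core ))  # round up
tasks_per_node=$(( node_cores / cores_per_task ))

echo "request ${cores_per_task} cores per instance -> ${tasks_per_node} instances per node"
```

So requesting 2 cores per instance would cap these nodes at 4 concurrent instances, giving each one an effective 8 GB share of RAM.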
It doesn't appear that HPC 2008 makes it easy to achieve this. Does HPC R2 have any features that might make this possible?
Thanks.
Thursday, November 18, 2010 9:46 PM -
That is a concern to me as well.
I know that you can set the number of cores per instance and more or less achieve it that way, but that isn't very practical when running a mix of jobs:
say 2 instances/node of a job that needs lots of RAM but only 1 core, and then later someone else wants to start a job that needs CPU but little RAM.
Thursday, December 2, 2010 6:18 AM -
There is currently no built-in support for this, but there are a few ways to force it to work.
1) Use the Exclusive property on a task. This forces only one task to run on a node at a time:
job new
job add <id> /exclusive:true myapp.exe
2) Specify the numcores property on each task. For instance, if your cluster has 4 cores/node and you want only 2 tasks to run on a node at a time:
job new
job add <id> /numcores:2 myapp.exe
job add <id> /numcores:2 myapp.exe
job add <id> /numcores:2 myapp.exe
...
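Applied to the 8-core/32 GB nodes from the question, the same approach would request 2 cores per task so that at most 4 instances land on a node (a sketch using the HPC Pack job CLI; the job ID 42 and myapp.exe are placeholders):

```shell
:: Create a job; the command prints the new job's ID (42 is a placeholder here).
job new
:: Each task requests 2 cores, so at most 4 tasks fit on an 8-core node,
:: leaving each instance an effective 8 GB of the node's 32 GB of RAM.
job add 42 /numcores:2 myapp.exe
job add 42 /numcores:2 myapp.exe
job add 42 /numcores:2 myapp.exe
job add 42 /numcores:2 myapp.exe
:: Submit the job to the scheduler.
job submit /id:42
```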
3) The third way is pretty draconian: you can force the OS to report fewer cores per node on the compute nodes. This would be a good solution if all of the jobs you run on the cluster need to be limited, or if you could do this to some fraction of the cluster and put those nodes in a node group so that particular jobs could use them. This is probably the least desirable solution.
- Proposed as answer by Don PatteeModerator Wednesday, January 12, 2011 2:45 AM
Friday, January 7, 2011 7:51 PM -
Hi Steve,
I am interested in your 3rd point. In general we try to recommend sufficient RAM to our customers for the size of jobs that they want to run, but sometimes this is difficult, depending on the specific jobs they (or we) need to run.
Do you have a reference for how we can get Windows to report fewer cores per node to HPC? In some circumstances we might be interested in this technique.
Thanks for your reply.
Derek
Wednesday, January 12, 2011 11:12 PM -
Thank you Lukasz and Steve for this information! Much appreciated.
Derek
Thursday, January 20, 2011 6:15 PM