Multi-core tasks that will run on a single node (not MPI)

  • Question

  • We have tasks that currently are mostly single-threaded and thus request a single core from the scheduler. We would like some of them to be able to expand to more cores per task, but require that those cores all be on the same node.

    My investigation of the MinimumNumberOfCores/MaximumNumberOfCores properties on the task object shows that the scheduler assumes the job is MPI-based and allocates cores across multiple compute nodes.

    Is there a way to allocate more than one core but require that all requested cores be on the same compute node? We have no MPI-based processes, so even a cluster-wide behavior change is an option.

    Wednesday, May 11, 2011 5:46 PM

All replies

  • I don't know if the new functionality in the SP2 Beta will be of use, but there are now some properties that can be set through PowerShell:



    • SubscribedCores – the total number of cores on the node that the scheduler should use to schedule jobs. It can be larger or smaller than the current Cores value. This value must be set such that there are the same number of cores on each subscribed socket; or, if SubscribedSockets is not specified, the same number of cores on each actual socket.



    • SubscribedSockets – the total number of sockets on the node that the scheduler should use to schedule jobs. It can be larger or smaller than the current Sockets value. The value of this field should be set to the same as the Sockets property when a node is added to the administration service.



    • Affinity – a nullable Boolean value indicating whether the node manager should affinitize tasks to cores when running tasks on the node. If the value is set to false, affinity will not be set and the operating system will manage placement of tasks on physical cores. If true, the node manager will assign tasks to specific cores. If set to null, the cluster-wide default for affinity is used. When a node is added to the cluster, the default value should be set to $null.



    Set-HpcNode [-Name] <String[]> [-Chassis <String>] [-DataCenter <String>] [-Description <String>] [-Location <String>] [-ManagementIpAddress <String>] [-ProductKey <String>] [-Rack <String>] [-Role <NodeCommand.NodeRole[]>] [-Scheduler <String>] [-SubscribedCores <integer>|$null] [-SubscribedSockets <integer>|$null] [-Affinity <Boolean>|$null] [<CommonParameters>]


    Set-HpcNode [-Chassis <String>] [-DataCenter <String>] [-Description <String>] [-Location <String>] [-ManagementIpAddress <String>] [-ProductKey <String>] [-Rack <String>] [-Role <NodeCommand.NodeRole[]>] [-Scheduler <String>] -Node <HpcNode[]> [-SubscribedCores <integer>|$null] [-SubscribedSockets <integer>|$null] [-Affinity <Boolean>|$null] [<CommonParameters>]
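    As an illustration (not from the original post — the node name is hypothetical and the parameters are taken from the syntax above), under-subscribing a node might look like:

    ```powershell
    # Hypothetical example: tell the scheduler to treat an 8-core node as
    # having only 2 schedulable cores, and disable core affinity so each
    # task's extra threads can spread across the physical cores.
    Set-HpcNode -Name "COMPUTENODE01" -SubscribedCores 2 -Affinity $false

    # Clearing the overrides restores the node's actual core count and
    # the cluster-wide affinity default.
    Set-HpcNode -Name "COMPUTENODE01" -SubscribedCores $null -Affinity $null
    ```

    With two subscribed cores, the scheduler places at most two single-core tasks on the node, so each effectively has four physical cores to itself.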

    Wednesday, May 11, 2011 10:41 PM
  • The easiest thing you can try is per-socket or per-node allocation. Is this enough for you?


    Also, which version of HPC are you using? We are releasing SP2 Beta this week and it comes with under-subscription capability. E.g., you can manually change the # of cores to 2 when you actually have 8 cores on a node. The scheduler will then only schedule 2 tasks to that node. Each task can take 4 cores.


    This comes down to how smart your task is. If it can scale to 16 or 32 threads and detect the number of cores automatically, per-node allocation seems the easiest way.
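    A minimal sketch of per-node allocation with the HPC PowerShell cmdlets (the job name and executable are hypothetical; assumes the standard New-HpcJob/Add-HpcTask/Submit-HpcJob cmdlets):

    ```powershell
    # Hypothetical sketch: request whole nodes rather than cores. The task
    # then owns every core on the node it lands on, and the application can
    # size its thread pool from [Environment]::ProcessorCount at runtime.
    $job = New-HpcJob -Name "PerNodeJob" -NumNodes "1-1"
    Add-HpcTask -Job $job -NumNodes "1-1" -CommandLine "myapp.exe"
    Submit-HpcJob -Job $job
    ```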

    Thursday, May 12, 2011 8:31 AM
  • We already use the processor affinity setting to keep the processes from taking over each other's resources. Thanks for the info, Mark.

    Setting a task to request a socket is overkill for a single task of ours. We need 2, possibly 3, cores, and on some of our machines we have sockets with 4 cores plus hyper-threading, so giving a task a whole socket could mean it gets 8 logical cores. A process that can efficiently use 3 cores would still waste up to 5 cores while it ran if it took over a whole socket.

    The idea is to better utilize the cores we have to speed up the overall throughput of the tasks (shorter individual runtime).  I don't want to under-subscribe a machine as that will idle cores that could be in use by other tasks. 


    What I need is a way to ask for more than one core and guarantee that all requested cores are on the same node. As I said above, we are not using MPI and don't plan to.


    Currently we are running HPC 2008 SP1, but we have already tested our software with HPC 2008 R2 SP1 to show the transition is transparent to our software. I have not seen the capability I need in previous releases; I will have to look at the 2008 R2 SP2 beta to see if there is any new functionality that will do what I need.


    Thursday, May 12, 2011 5:16 PM
  • I believe the most sure-fire way is to use "requestednodes" in your job submit calls. In that case your job will only run on the specified nodes and will not spill over onto another node.

    It won't prevent other jobs from joining yours on a node if there is room, however, so you would need to do your own scheduling to ensure no single node gets backed up with jobs.

    Monday, May 16, 2011 6:31 PM
  • James: That still does not solve the resource-allocation problem I am facing. If a job has 3 compute nodes in the 'requested nodes' list and a single task requests 2-4 cores, the allocated resources can be spread across all three nodes. What I need is for all cores requested for a single task to end up allocated on a single compute node.


    Is there a mechanism to make a feature request for the next release?

    Tuesday, May 24, 2011 6:54 PM
  • Ah, instead of putting requestednodes on the job, how about putting requirednodes on the task (I think that is what I meant to type, but my mind went off on a tangent)? So if you have 3 tasks, and each task has one (different) requirednode, I think you can roughly get what you're looking for.

    I understand that this is non-ideal. There is an email thread going on now about whether we can get this mechanism in for the next release.
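    A possible sketch of that workaround in PowerShell (node names and the executable are hypothetical; assumes Add-HpcTask accepts a RequiredNodes parameter, as in the standard HPC cmdlets):

    ```powershell
    # Hypothetical sketch: pin each multi-core task to one specific node
    # so all of its 2-4 cores are allocated on that node.
    $job = New-HpcJob -Name "SameNodeTasks"
    foreach ($node in "NODE01", "NODE02", "NODE03") {
        Add-HpcTask -Job $job -CommandLine "myapp.exe" `
            -NumCores "2-4" -RequiredNodes $node
    }
    Submit-HpcJob -Job $job
    ```

    The obvious cost is that the caller is now doing the node selection by hand, which is exactly the kind of manual scheduling mentioned earlier in the thread.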

    Tuesday, May 24, 2011 9:23 PM
  • James,


    Feel free to contact me directly if you would like more clarification about our situation. We are based in Seattle and have worked with the HPC team before.




    Friday, June 3, 2011 3:47 PM
  • If I understand your problem correctly, this thread is my attempt to solve a very similar problem. It works for us on R2 SP2, but it is a somewhat eccentric solution and very open to comments/improvements! I just haven't found any other way.


    Hope that helps,


    Tuesday, July 24, 2012 3:07 PM