I have a small compute cluster with a headnode and seven computenodes, all eight of which are 2-socket Intel X5675 (12-core) systems with 96 GB RAM. The headnode is dedicated as such (not a computenode), but it also serves as an RDS server and runs
Flow3D SMP simulations on an ad-hoc basis for certain models. On the headnode, a typical simulation configured to use all 12 cores runs for about 42 minutes at roughly 95% CPU utilization. On one of the computenodes, the same
simulation under the same version of Flow3D takes 22 minutes and pegs the CPU at 100% for the entire run.
At the end of the run, the simulation output includes elapsed time and cpu time counts (in seconds). Per the vendor, the cpu time is the total number of seconds for all threads in the process, and the elapsed time is the wall clock run time.
For a 12-core (thus 12-thread) run, the cpu time should therefore be about 12 times the elapsed time, so dividing the cpu time by the elapsed time should yield (roughly) the number of cores kept busy. For the computenode run, this calculation
yields 11.97 cores; for the headnode, it yields only 10.74 cores.
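For reference, the calculation is just the ratio of the two counters. The cpu-time values below are illustrative (back-solved from the stated ratios and run lengths, since the question doesn't include the raw counts):

```python
def effective_cores(cpu_seconds: float, elapsed_seconds: float) -> float:
    """Ratio of total per-thread CPU time to wall-clock time,
    i.e. the average number of cores kept busy during the run."""
    return cpu_seconds / elapsed_seconds

# Computenode: ~22-minute run (1320 s), near-full 12-core utilization
print(round(effective_cores(15800, 1320), 2))   # ~11.97

# Headnode: ~42-minute run (2520 s)
print(round(effective_cores(27065, 2520), 2))   # ~10.74
```

The gap between 10.74 and 12 is roughly the ~5% CPU headroom I'm asking about below.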
I am wondering whether some sort of resource manager on the headnode reserves ~5% of the CPU to keep the headnode responsive for its cluster-management duties while it is also running these simulations. If so, where can I find this documented?