My cluster consists of two nodes: a head node and a compute node. Each server has 2 CPUs. The HPC Server version is 2008 R2 SP2.
I have installed the EchoService service from the HPC2008R2.SampleCode.zip samples bundle and modified the Echo service method so that it loads the current CPU to 100% (only one CPU is used per job). Then I ran several instances of HelloWorldR2 on the workstation. I can see in Cluster Manager that HPC creates one job per HelloWorldR2 instance.
For 2 instances -- 2 jobs are running. Head node -- 1st CPU is 100%, 2nd is 2%. Compute node -- 1st 100%, 2nd is 2%.
For 4 instances -- 2 jobs are running, 2 jobs are queued. Head node -- 1st CPU is 100%, 2nd is 2%. Compute node -- 1st 100%, 2nd is 2%.
The SessionStartInfo parameters are Core, 1, 1 (SessionResourceUnitType, MinimumUnits, MaximumUnits). I tried changing them, but it didn't help.
My question is: why doesn't HPC use the 2nd CPU on the servers, and how can I make it work?
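For reference, here is a minimal sketch of how those three parameters are set, assuming the standard Microsoft.Hpc.Scheduler.Session client API from the HPC Pack 2008 R2 SDK (the head node and service names are placeholders):

    // Sketch only -- "MyHeadNode" and "EchoService" are placeholder names.
    SessionStartInfo info = new SessionStartInfo("MyHeadNode", "EchoService");
    info.SessionResourceUnitType = SessionUnitType.Core; // allocate resources per core
    info.MinimumUnits = 1; // session needs at least 1 core to start
    info.MaximumUnits = 1; // session is capped at 1 core and can never grow

Note that with MaximumUnits = 1 the scheduler will never grant the session a second core, which by itself would explain a single busy CPU.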
PS: Service configuration:
Regarding "SessionStartInfo parameters is Core, 1, 1 (SessionResourceUnitType, MinimumUnits, MaximumUnits). I tried to change it, but it hasn't helped." -- could you let me know which values you changed?
Also, how many requests are you sending? One request will only consume one CPU if SessionResourceUnitType is Core.
I tried the following and it works fine -- all 4 CPUs at 100%:
SessionStartInfo info = new SessionStartInfo(headnode, serviceName);
info.MinimumUnits = 4;
for (int i = 0; i < 8; i++)
{
    client.SendRequest<EchoDelayRequest>(new EchoDelayRequest(50000));
}
Here is my sample bundle: http://rghost.net/16943671
The number of requests is 8, and the delay value is set to 10000.
SessionResourceUnitType, MinimumUnits, MaximumUnits
One HelloWorldR2.exe instance:
Core, 1, 1 -- the 1st Head Node CPU is used at 100%.
Core, 1, 2 -- the 1st Head Node CPU is used at 100%, with two 100% peaks on the 1st Compute Node CPU (two requests are processed on the Compute Node).
Core, 2, 2 -- same as Core, 1, 2.
Core, 3, 3 -- application fails: Failed to submit job: The number of cores in the cluster (2) is less than the minimum number of cores required (3). Specify a lower minimum number of cores and try again.
This looks strange: why is 2 the maximum number of cores? In Cluster Manager's Node Management view I can see that every node has 2 cores (Cores column). But while HelloWorldR2 is running, the Cores in Use column shows '1' for both nodes.
Maybe the problem is that I have a trial version of the software?
I think this may be because your head node does not have the compute node role enabled. To check or enable the node role, take the head node offline and choose to change the node role in the right pane of Cluster Manager. If you do not have any other broker node (which is required to run SOA), you need to set the node role to "both broker node and compute node".
Please see if it helps.
Sorry, but this doesn't work.
The Compute Node role is already enabled for the Head Node. If it is not (i.e. the Head Node is only in the WCFBrokerNodes and HeadNodes groups), the service runs only on the 1st CPU of the Compute Node, and the 2nd CPU is still not used. :(
Maybe there is an option like 'Use all CPUs' somewhere in the Job Scheduler Configuration dialog? :)
BTW, when the Head Node is only a broker (its Compute Node role is disabled) and I run the application with session parameters Core, 2, 2, I get this error:
Creating a session for EchoService... Failed to submit job: The number of cores in the cluster (1) is less than the minimum number of cores required (2). Specify a lower minimum number of cores and try again.
Why? I have 2 cores on the compute node!
I have changed the scheduling mode from Balanced to Queued and now it works as it should! The 1st and 2nd CPUs on the Head Node are used at 100%, and some tasks run on the 1st CPU of the Compute Node.
Session parameters are Node, 1, 2.
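The working configuration can be sketched as follows (same assumption as before about the HPC Pack SOA client API; the node and service names are placeholders):

    // Sketch only -- placeholder names; requires Queued scheduling mode on the cluster.
    SessionStartInfo info = new SessionStartInfo("MyHeadNode", "EchoService");
    info.SessionResourceUnitType = SessionUnitType.Node; // allocate whole nodes, not cores
    info.MinimumUnits = 1; // at least 1 node
    info.MaximumUnits = 2; // up to 2 nodes

With the Node unit type, the scheduler allocates entire nodes to the session rather than individual cores, so all cores of an allocated node are available to the service host.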
I set the number of requests to 32 and get:
Node          Cores   Cores in Use   CPU Usage
Head Node     2       2.00           100.00
Compute Node  2       1.00           50.01
I can't understand why the 2nd CPU of the compute node is not used!