Hi, I am new to HPC clusters and running MPI jobs. I am trying to understand whether I am missing something while running the Intel MPI Benchmark (IMB) over Network Direct on a Windows HPC cluster.
I compiled IMB (source downloaded from the Intel site) against msmpi.lib (from the MS Compute Cluster Pack) and tried running it through mpiexec and job submit, but it failed to start. Below is the error I got:
"Error (14001) The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log for more detail."
Later I found an IMB source package intended for Windows Server 2008 and compiled it against the impi.lib that came with that package. I am able to start that job only with the mpiexec that came with the same package, and I did get some performance numbers. Now I am confused about whether the results I got are actually over Network Direct, since this IMB-MPI1 was not compiled with msmpi.lib and the job was not run with the mpiexec command from MS HPC Pack. How can I confirm this?
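Is there a way to check at run time which MPI library the benchmark actually loaded? A small probe like the sketch below is what I had in mind (just my guess: it assumes the binary links msmpi.dll or impi.dll dynamically, and those DLL names are my assumption for the two runtimes):

/* Minimal probe: after MPI_Init, report which MPI DLL is loaded in the
 * process.  Assumes dynamic linking against msmpi.dll (MS-MPI) or
 * impi.dll (Intel MPI); adjust the names if your install uses others. */
#include <windows.h>
#include <stdio.h>
#include <mpi.h>

static void report(const char *dll)
{
    char path[MAX_PATH];
    HMODULE h = GetModuleHandleA(dll);
    if (h && GetModuleFileNameA(h, path, MAX_PATH))
        printf("loaded: %s\n", path);
}

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {            /* only one rank needs to print */
        report("msmpi.dll");    /* MS-MPI from HPC Pack */
        report("impi.dll");     /* Intel MPI runtime */
    }
    MPI_Finalize();
    return 0;
}

My thinking is that if it reports msmpi.dll the numbers came from MS-MPI (which can use Network Direct when launched through the HPC Pack mpiexec), and if it reports impi.dll they came from Intel MPI. Does that sound right?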
a "side-by-side" error is because of missing dll's in the C:\Windows\winsxs file. i had the same problem on the cluster that im using with the intel tool kit, from what i can work out you need to install the intel mpi libarys on all the nodes as well, althought that creates its own problems as well
J
Marked as answer by Don Pattee, Wednesday, January 12, 2011 2:49 AM
I assume that your first run failed because your C runtime was not configured correctly. Windows HPC Server 2008 is able to run programs that were compiled with Windows CCP.
Your second try seems to be using Intel MPI, which does not use Network Direct.
I would recommend that you take your first version, run it again, and look at the Windows application event log to find out what is not configured correctly (I would assume the installed CRT is the wrong version).
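As a rough cross-check (not a substitute for IMB itself), you could build a tiny two-rank ping-pong once against msmpi.lib and once against the Intel impi.lib and compare the small-message latency between two nodes; a sockets-only path usually shows noticeably higher latency than a Network Direct path on the same fabric. A minimal sketch, assuming a standard MPI C build on either stack:

/* Two-rank ping-pong: average small-message round-trip latency.
 * Build once per MPI stack and run with exactly two ranks placed on
 * different nodes so the traffic actually crosses the interconnect. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, i, iters = 10000;
    char buf[8] = {0};
    MPI_Status st;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) printf("run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg round-trip latency: %.2f us\n",
               (t1 - t0) * 1e6 / iters);

    MPI_Finalize();
    return 0;
}

The absolute numbers are not meaningful on their own, but a large gap between the two builds on the same pair of nodes is a good hint about which transport each run is really using.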
a "side-by-side" error is because of missing dll's in the C:\Windows\winsxs file. i had the same problem on the cluster that im using with the intel tool kit, from what i can work out you need to install the intel mpi libarys on all the nodes as well, althought that creates its own problems as well
J
Marked as answer byDon PatteeWednesday, January 12, 2011 2:49 AM