Hi, we're fairly new to the HPC realm and have just completed a rather large project. In our after-action review we're investigating why our jobs appear to be limited to 2 GB of physical memory on our compute nodes. We are on Windows HPC Server 2008 with the latest service pack. Could this be because we launch our executables via a cmd batch file? The executables being called are 64-bit, so I'm wondering if there is something obvious here that more experienced HPC people know of.
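Roughly speaking, each task is a small wrapper along these lines (the paths and names below are placeholders, not our actual setup):

    @echo off
    REM run_task.cmd -- illustrative stand-in for our real wrapper
    REM The scheduler task invokes this via cmd.exe, which then starts the 64-bit exe.
    setlocal
    set INPUT=%1
    "C:\apps\oursolver\solver64.exe" "%INPUT%"
    endlocal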
Right now we are running a test with a single job and 8 tasks; each task uses the same cmd batch method described above to call our executables. I searched through the forums and could not find any similar posts. I did read on MSDN about the IMAGE_FILE_LARGE_ADDRESS_AWARE linker option, but that is C++/developer territory. Still, it sounds a lot like what we are experiencing, and I am wondering whether the real culprit is instead the method by which we invoke our tasks, that being cmd.exe (the Windows Command Processor). My colleagues and I are unsure.
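One thing we plan to try, on the assumption that the linker flag could be the issue: dumpbin (from the Visual Studio tools) should show whether a given executable is actually marked large-address-aware, for example:

    dumpbin /headers solver64.exe | findstr /i "large"

If the flag is set, that should print "Application can handle large (>2GB) addresses". Though as I understand it, native 64-bit images get that flag by default, which is part of why we suspect the cmd.exe invocation instead.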
I have run Lizard on the HPC farm and it performed great btw.
Many thanks,
Tim