Memory allocation - utilization

  • Question

  • Hi, we're fairly new to the HPC realm and have just completed a rather large project. In our after-action review, we're investigating why our jobs are somehow limited to 2 GB of physical memory on our compute nodes. We are on Windows HPC Server 2008 with the latest service pack. Could this be because we launch our executables via a cmd batch file? The actual executables being called are 64-bit. So I'm wondering if there is something obvious that more experienced HPC people would know.

    Right now we are running a test with a single job and 8 tasks. Each task uses the same cmd batch method to call our executables as described above. I searched through the forums and could not find any similar posts. I did read on MSDN about the IMAGE_FILE_LARGE_ADDRESS_AWARE linker option, but that is aimed at C++ development work; still, the 2 GB ceiling it describes is close to what we are experiencing. So I'm wondering whether the problem is instead the way we invoke our tasks, i.e. through cmd.exe (the Windows Command Processor), but my colleagues and I are unsure. One way to check the linker-flag theory is sketched below.
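
    For what it's worth, here is roughly how one can check whether a binary was actually linked large-address-aware, assuming the Visual Studio tools are available on a node (ourapp.exe is just a placeholder for the real executable):

        rem Inspect the PE header; if the flag is set, dumpbin prints the line
        rem "Application can handle large (>2GB) addresses" under FILE HEADER VALUES.
        dumpbin /headers ourapp.exe | findstr /i /c:"large"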

    I have run Lizard on the HPC farm, by the way, and it performed great.

    Much Thanks,

    Tim 

    Thursday, July 22, 2010 6:24 AM

Answers

  • We are about to kick off some additional tests with the HPC Server 2008 R2 RC1 release (since that's the direction we are moving in), but we believe this to be a limitation of the executable we are running rather than of how it is launched. Once we get this hammered out, maybe we will follow up with a doc or something. The sketch below shows the linker-flag fix we are planning to try.
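
    Assuming the culprit really is a binary linked without the large-address-aware flag (our working theory, not yet confirmed), relinking with the MSVC /LARGEADDRESSAWARE option, or patching an already-built binary with editbin, should lift the 2 GB cap; ourapp.exe below is again just a placeholder:

        rem Set the flag on an existing binary in place (keep a backup copy first).
        editbin /LARGEADDRESSAWARE ourapp.exe

        rem Or set it at link time as part of the build:
        rem link /LARGEADDRESSAWARE ... other objects and options ...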

    Mucho Mahalo,

    Tim

    • Marked as answer by Tim Takata Tuesday, July 27, 2010 7:20 AM
    Tuesday, July 27, 2010 7:20 AM