Windows 2008 HPC and applications

  • Question

  • Hello All,

    We have a modelling application (INRO EMME/2) which analyses the flow of people around transport systems when events occur, such as shutting down a station, road works, etc. The application is very processor- and I/O-intensive and is also single-threaded. We would like to run it on Windows HPC Server 2008 on commodity hardware, which should enable us to support our users (around 60) and scale out if more capacity is needed. One thing I can't seem to find in my research is the requirements for the node machines.

    Would we have to install the modelling software on each of the node machines so that it can be scheduled by the HPC server? When scheduling jobs, would we create the job on the HPC server, which would then run the command-line script on the next available node?

    Thanks for your help

    Regards

    Tony
    Wednesday, April 1, 2009 12:44 PM


All replies

  • Tony,
    The compute nodes in the cluster must be running a supported x64 edition of Windows Server 2008 (HPC, Standard, Enterprise, or Datacenter); the hardware requirements for these nodes are simply whatever the OS and your application require.

    How an application is deployed will depend on the app.  Some applications may need to be installed on each node in the cluster.  We provide some tools to make scripting this installation a bit easier (Clusrun allows remote execution of a command across multiple nodes, or you can customize your node deployment using Node Templates).  Often an application can simply be run off a share on a file server, which makes it much easier to manage versioning and patching of the application and reduces the complexity of the compute nodes themselves.
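    For a rough illustration (the share path and the /quiet installer switch here are placeholders, not EMME's actual setup options), pushing an install out from the head node with Clusrun looks something like this:

        rem install on every compute node, then spot-check two named nodes
        clusrun /all \\fileserver\installs\emme_setup.exe /quiet
        clusrun /nodes:NODE01,NODE02 dir "C:\Program Files\EMME"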

    The HPC scheduler will allocate resources to your jobs, one at a time, and then launch your command line on the remote compute nodes.
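    As a sketch (run_emme.bat and the share are stand-ins for whatever wraps your EMME run), a single submission from the head node could look like this:

        rem one single-core task; output goes to a log file in the working directory
        job submit /numcores:1 /workdir:\\fileserver\emme\runs /stdout:scenario1.log run_emme.bat scenario1

    Since EMME is single-threaded, requesting one core per job lets the scheduler pack several independent runs onto each node; "job submit /?" lists the full set of switches on your version.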

    Thanks,
    Josh
    Thursday, April 2, 2009 10:15 PM
    Moderator
  • Hello Josh, thanks for the reply...

    Does that mean you can deploy normal standalone applications (which EMME is) onto the cluster, and the HPC scheduler will run the commands against the nodes and return the results?

    The reason I ask is that, from trawling the documentation, it seems you can only run applications that are HPC-aware. Is this the case?

    Thanks for your help
    Friday, April 3, 2009 11:20 AM
  • Hi Tony. The answer depends on what you are trying to do.

    One way of thinking about this is that, at a high level, you can have a parallel application, where the processes running on each compute node communicate with each other. This type of application has to be written specifically to run that way across the cluster.

    Another use of a cluster is for a scale-out application, sometimes called an embarrassingly parallel application. One example is a parametric sweep, where you want to evaluate a range of input values against a function, model, or application. Each instance is independent, so you can use the cluster to run as many copies as you have resources for.
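    As a rough example (the input file names are invented, and it is worth confirming the exact parametric syntax with "job submit /?" on your head node), a sweep of 60 independent model runs could be submitted as a single job, with the asterisk replaced by the sweep index in each task:

        rem creates one task per value in 1-60; * is substituted with the index
        job submit /parametric:1-60 /workdir:\\fileserver\emme\runs run_emme.bat scenario*.in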

    There are of course hybrid models, and the application may or may not do anything with the results that come back from the nodes. You can think of one model as batch--submit inputs and get outputs--and another model as interactive. We have designed our WCF-based "SOA cluster" model for more interactive applications, where a client establishes a session with the cluster to get resources and then communicates directly with compute nodes via web services to run calculations.

    How do you see your application scaling--one application instance per compute node (Server) per user? The scheduler could be used to share a pool of compute resources across your set of users.
    Wednesday, April 8, 2009 8:53 PM