Using HPC for deploying/provisioning test nodes and scheduling tests in a test farm environment? Is this doable?

  • Question

  • After reading about HPC it looks like it could solve a problem my company has been facing.

    We create implantable medical devices. System testing involves a physician's programmer, an implant, a USB communications "wand" that talks to the implant, and a heart simulator that connects via RS232. All these components are automated by a .NET automation framework. The actual system is more complex than I am describing, but I think this is good enough for this discussion. There is a single installation that installs all the software (if necessary, the .NET Framework 2.0 and NUnit), drivers, COM components (many), assemblies, etc.

    Note that at times we replace all the hardware with software simulations; these runs are more processor intensive. Each test is totally independent, so this workload qualifies as embarrassingly parallel. Each NUnit "job" consists of one or more tests in a test suite. When all of the tests are done, a report is created and stored, and the "job" is finished, ready for the next NUnit "job" to start.

    Problem #1 is deployment. We manually do the installs on each test machine every time we have a new release (new physician's programmer, new implant RAM files, new automation tools, etc.). Say we have 40 nodes: it would be much nicer to deploy a consistent image, install the latest build on request, and have it happen fast. There is a single massive installation, which helps, but it still has to be performed manually on each test node. All nodes are basically desktop machines running Windows XP Professional.

    Problem #2 is scheduling the tests. The NUnit test DLL handles all the automation, including spawning all necessary executables, initializing hardware, etc. All reporting and test report generation are also handled. The NUnit tests are completely self-contained and designed for unattended automation. We currently use TestDirector, but it has a problem in that it can't queue up more than one test per node. If we had 40 nodes and hundreds of tests, it would be nice to queue them all up and have HPC just manage it. Most of our current test machines are dual core, but some aren't. For the most part the tests are core agnostic.

    Problem #3 isn't really a problem, but it would be nice if production desktop machines could be "co-opted" to run tests at night and then be used for normal test development during the day.

    While I understand that Windows HPC Server 2008 isn't really designed for this, it looks like it could solve our problems. $475 per node would be quite cheap if we could pull this off.

    Does any of this sound reasonable?

    Friday, October 31, 2008 12:11 AM


  • Yes, it does sound reasonable.

    Windows HPC has a built-in mechanism to help with parametric sweep jobs (which is what you are describing): it can queue hundreds of parametric tasks against a set of cores and will start the next task as soon as one completes.
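    The queue-and-dispatch behavior works roughly like the sketch below (a minimal Python illustration of the scheduling idea, not the HPC scheduler itself; `run_suite` and the suite names are hypothetical stand-ins for your NUnit console runs):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def run_suite(suite: str) -> str:
        # Hypothetical stand-in for launching one self-contained NUnit suite.
        # On the cluster, each call would be one parametric task on a node core.
        return f"{suite}: PASS"

    # Hundreds of independent tests, far more than the available nodes.
    suites = [f"suite_{i:03d}" for i in range(200)]

    # 40 workers model 40 cores; as soon as one suite finishes,
    # the next queued suite starts, with no manual intervention.
    with ThreadPoolExecutor(max_workers=40) as pool:
        reports = list(pool.map(run_suite, suites))

    print(len(reports))  # 200
    ```

    The point is that because each test is independent, the queue depth can vastly exceed the node count and the scheduler keeps every core busy until the queue drains.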

    As for deployment: you can use the automated image deployment, but this is slow. If you have an MSI package or just binaries to copy, you can use clusrun, a cluster tool that executes a command on all nodes at the same time. I often use this as a simple deployment mechanism, just copying the necessary files over.
    Beyond that, Windows HPC Server 2008 does have good mechanisms to help you deploy your bits.
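    A clusrun-style push (the same build output copied to every node at once) can be approximated from the head node with a simple parallel copy. This is only a sketch of the idea; the node share layout (`\\NODE01\deploy`, etc.) is hypothetical, and local temp directories stand in for the shares here:

    ```python
    import shutil
    import tempfile
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    def deploy(build_dir: Path, node_share: Path) -> Path:
        # Copy the build output into one node's staging share,
        # replacing any stale copy of the same build.
        dest = node_share / build_dir.name
        if dest.exists():
            shutil.rmtree(dest)
        shutil.copytree(build_dir, dest)
        return dest

    # Fake build output (in practice: your MSI plus binaries).
    build = Path(tempfile.mkdtemp()) / "build_1_2_3"
    build.mkdir()
    (build / "installer.msi").write_text("payload")

    # Local stand-ins for \\NODE01\deploy, \\NODE02\deploy, ...
    nodes = [Path(tempfile.mkdtemp()) for _ in range(3)]

    # Push to all nodes concurrently, clusrun-style.
    with ThreadPoolExecutor() as pool:
        deployed = list(pool.map(lambda n: deploy(build, n), nodes))

    print(all((d / "installer.msi").exists() for d in deployed))  # True
    ```

    After the copy, a second clusrun invocation could launch the MSI silently on each node, which is what makes the single-installer approach pay off.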

    As for opting in desktop machines: we found that it usually works less than optimally, especially when you need to access resources and you need balanced computation power for your jobs to complete on time.

    Hope this helps,
    • Marked as answer by Don Pattee Friday, May 22, 2009 8:53 PM
    Friday, October 31, 2008 11:12 PM