HPC dependent tasks run after failure?

  • Question

  • Hello HPC forum

    Another newbie question: why do tasks run if their dependencies have failed? For example:

    job new /numnodes:1

    @rem "A" is going to fail.

    job add <id> /name:"A" cmd.exe /c exit /b 1

    @rem "B" depends "A", should it run?

    job add <id> /name:"B" /depend:"A" cmd.exe /c date /t

    job submit /id:<id>

    My naïve expectation was that, since the dependency has failed, task "B" shouldn't be run. I would expect that if the user wanted "B" to run irrespective of whether "A" failed, they simply wouldn't express a dependency. But it runs quite happily. Is there another option to say "only run this step if the task(s) it depends on succeeded", without connecting back to the scheduler?

    Thanks

    Monday, February 25, 2013 10:51 AM

All replies

  • I think the dependency means task "B" runs only after task "A" is finished (either with success or failure). If you replace "cmd.exe" with "calc.exe", then task "B" is not executed until "calc.exe" is terminated, as in the sketch below.

    Daniel Drypczewski

    Tuesday, February 26, 2013 1:59 AM
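
    For reference, a minimal variant of the original example that makes this "finished, not necessarily succeeded" behavior easy to observe (same <id> placeholder as above): "B" does not start until calc.exe exits, however it exits.

    job new /numnodes:1

    @rem "A" runs calc.exe and keeps running until the process is terminated.
    job add <id> /name:"A" calc.exe

    @rem "B" only starts once "A" has finished, whether or not it succeeded.
    job add <id> /name:"B" /depend:"A" cmd.exe /c date /t

    job submit /id:<id>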
  • Thanks Daniel, yes, this is definitely the case. The current unconditional /depend tasks are useful for clean-up, but not for multi-stage compute. Conditional /depend behavior can be achieved by having the dependent task query the scheduler itself (a rough sketch follows below), but this is awkward and clumsy to script, especially with legacy scripts.

    Tuesday, February 26, 2013 3:25 PM
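
    A rough sketch of that "query the scheduler" approach, for anyone stuck on an older version: wrap task "B" in a small batch file that checks task "A" before doing the real work. The details below are assumptions about your setup: that "A" has task ID 1 within the job, that the task view command is usable from the compute node, and that its output contains a "State ... Finished" line. Check task view /? on your installation.

    @echo off
    @rem run_b.cmd: hypothetical wrapper for task "B" (names and IDs are assumed).
    @rem CCP_JOBID is set by the scheduler for each task; task "A" is assumed to be task 1.
    task view %CCP_JOBID%.1 | findstr /r /c:"State.*Finished" >nul
    if errorlevel 1 (
        echo Dependency "A" did not finish successfully, skipping "B".
        exit /b 1
    )
    @rem Original payload of task "B":
    date /t

    Task "B" would then be added as: job add <id> /name:"B" /depend:"A" run_b.cmd, keeping the /depend so the ordering is still enforced.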
  • The latest versions of the HPC stack have a job property called "FailDependentTasks" which can be used to change this behavior. A sketch of how the original example might use it follows below.
    Tuesday, March 26, 2013 9:47 PM
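
    For completeness, a sketch of how the original example might look with that property set. The /faildependenttasks flag name is an assumption about how the CLI exposes the property on newer HPC Pack versions; check job new /? (or set the property through the job XML or the scheduler API) on your installation.

    @rem Assumes the CLI exposes FailDependentTasks as shown; verify with job new /?
    job new /numnodes:1 /faildependenttasks:true

    job add <id> /name:"A" cmd.exe /c exit /b 1

    @rem With FailDependentTasks set, "B" should not run once "A" fails.
    job add <id> /name:"B" /depend:"A" cmd.exe /c date /t

    job submit /id:<id>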