Hi,
I've got a Windows head node running Windows Server 2012 R2 / HPC Pack 2012 R2 Update 3, with 6 Ubuntu 16.04.1 LTS Linux nodes attached to it. It all basically works: I can submit very simple jobs (e.g. pwd, whoami, ls) and I get the right answers back in STDOUT for those jobs, which is a great start.
The problem is: how do users mount network shares for their cluster jobs, so they can do real work with real data, without exposing their password on the job-submission command line?
I've set up all the cluster nodes with pam_mount, so that when users log into a node over SSH, all their shares (Windows and otherwise) are mounted automatically, and this works nicely (a cut-down excerpt of the config is below). But when it launches jobs on the Linux nodes, the cluster manager doesn't seem to go through SSH; for user wes it appears to run something like this:
sudo -E -u wes env "PATH=$PATH" /bin/bash /tmp/nodemanager_task_109_0.dKLE3e/run_dir_in_out.sh
and that doesn't cause any of the shares to get mounted.
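The difference is easy to see by hand on a node (the node name here is a placeholder for one of mine):

# logging in over SSH triggers pam_mount, so the CIFS shares show up:
ssh wes@linuxnode01 'mount -t cifs'
# but a sudo-style invocation like the node manager's mounts nothing:
sudo -E -u wes /bin/bash -c 'mount -t cifs'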
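For reference, the pam_mount setup on each node is roughly along these lines, in /etc/security/pam_mount.conf.xml (the server, share and mount point names here are placeholders, not my real ones):

<!-- cut-down volume definition; %(USER) is expanded by pam_mount at login -->
<volume fstype="cifs"
        server="fileserver.example.com"
        path="userdata"
        mountpoint="/home/%(USER)/data"
        options="nosuid,nodev" />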
Any ideas? Many thanks,
Wes