```
for i in $(ls inputs)
do
    sbcast ./inputs/$i /tmp/$SLURM_JOB_ID/$i
done

... get the job done ...

# if convenient:
rm -rf /tmp/$SLURM_JOB_ID
```

A few points deserve our attention:

* we put the data in a job-specific directory, to avoid clashing with other jobs sharing the node

* **sbcast** deals with files, not directories

* it is possible to tell **sbcast** to overwrite or preserve an existing file; both can be convenient depending on your situation

* deleting the data at the end is convenient if your job allocates the whole node, less so if it is a job array with multiple tasks sharing a node

* **sbcast** has a verbosity option which can be used to debug weird behaviors and track failures

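Putting the overwrite and verbosity points together, the broadcast loop could be written with explicit flags, as in the sketch below (`--force` and `--verbose` are the spellings from the sbcast man page; this fragment only runs inside a Slurm allocation, where `SLURM_JOB_ID` is set):

```
mkdir -p /tmp/$SLURM_JOB_ID
for i in $(ls inputs)
do
    # --force replaces a file left behind by a previous run,
    # --verbose reports what sbcast is doing, handy for debugging
    sbcast --force --verbose ./inputs/$i /tmp/$SLURM_JOB_ID/$i
done
```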
## Running the demo

```
[alainm@castor local_data]$ sbatch -p fdr ./quad_mean.slurm && squeue -u alainm
Submitted batch job 15469562
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
15469562 fdr quad_mea alainm PD 0:00 1 (None)
[alainm@castor local_data]$ more quad_mean.15469562.out
This job was launch from castor.cluster in /beegfs/SCRATCH/alainm/view/HPC/SLURM/local_data
processing /tmp/15469562/data_0:
Quad. mean of 56.2455 .....
is: 48.2701
...
processing /tmp/15469562/data_99:
Quad. mean of 25.8348
is: 47.3709
```