## Problem

When submitting arrays of jobs, you'll probably want to "pack" them: that is, have a node run as many of your jobs as possible, while avoiding sharing that node with another user.

Note that this only makes sense on the sequential partitions, and that only those partitions support this feature.
## How to

If a single job requires 4 cores, you can specify that in your script with:

```
#!/usr/bin/env bash
#SBATCH --cpus-per-task=4
#SBATCH --partition=seq-short
#SBATCH --time=1:30:00
...
<myexe> data_$SLURM_ARRAY_TASK_ID
```
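Each array task reads the `SLURM_ARRAY_TASK_ID` environment variable, which SLURM sets per task, to select its own input, as `<myexe> data_$SLURM_ARRAY_TASK_ID` does above. A minimal sketch of the pattern (outside a real job the variable is unset, so it is given a value by hand here; the `data_` prefix is just this page's naming convention):

```shell
#!/usr/bin/env bash
# Inside an array job, SLURM exports SLURM_ARRAY_TASK_ID for each task.
# Outside SLURM it is unset, so we default it here purely for illustration.
SLURM_ARRAY_TASK_ID="${SLURM_ARRAY_TASK_ID:-3}"

# Build this task's input file name, e.g. data_3 for task 3.
input="data_${SLURM_ARRAY_TASK_ID}"
echo "processing ${input}"
```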
Then, to launch 20 tasks (that's 5 jobs per node on a 20-core node):

```
$ sbatch --array=1-20 --exclusive=user ./cpu-per-task.slurm
```

The option `--exclusive=user` indicates that a single node can run more than one job, as long as those jobs all belong to the same user.
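The arithmetic behind "5 jobs per node" is simply the node's core count divided by `--cpus-per-task`. A sketch of that calculation (the 20-core node is this page's example figure, not a general value):

```shell
# Jobs that fit on one node = cores per node / cpus per task.
cores_per_node=20   # example figure from this page
cpus_per_task=4     # matches #SBATCH --cpus-per-task=4 above

jobs_per_node=$(( cores_per_node / cpus_per_task ))
echo "jobs per node: ${jobs_per_node}"

# Nodes needed for the 20-task array, rounded up.
ntasks=20
nodes_needed=$(( (ntasks + jobs_per_node - 1) / jobs_per_node ))
echo "nodes needed: ${nodes_needed}"
```

So the 20-task array above occupies 4 whole nodes, with no cores left for another user's jobs.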
### Full example

Running the script [packed_array.slurm](https://gitlab.oca.eu/DSI/HPC/-/blob/master/SLURM/sequential/packed_array.slurm) will illustrate the feature:
```
$ sbatch -p seq-short --array=0-25 packed_array.slurm
Submitted batch job 18985606
$ grep task slurm-18985606_*.out
slurm-18985606_0.out:Doing task 00000 on node p081.cluster.local
slurm-18985606_10.out:Doing task 00010 on node p081.cluster.local
slurm-18985606_11.out:Doing task 00011 on node p081.cluster.local
slurm-18985606_12.out:Doing task 00012 on node p081.cluster.local
slurm-18985606_13.out:Doing task 00013 on node p081.cluster.local
slurm-18985606_14.out:Doing task 00014 on node p081.cluster.local
slurm-18985606_15.out:Doing task 00015 on node p081.cluster.local
slurm-18985606_16.out:Doing task 00016 on node p081.cluster.local
slurm-18985606_17.out:Doing task 00017 on node p081.cluster.local
slurm-18985606_18.out:Doing task 00018 on node p081.cluster.local
slurm-18985606_19.out:Doing task 00019 on node p081.cluster.local
slurm-18985606_1.out:Doing task 00001 on node p081.cluster.local
slurm-18985606_20.out:Doing task 00020 on node p086.cluster.local
slurm-18985606_21.out:Doing task 00021 on node p086.cluster.local
slurm-18985606_22.out:Doing task 00022 on node p086.cluster.local
slurm-18985606_23.out:Doing task 00023 on node p086.cluster.local
slurm-18985606_24.out:Doing task 00024 on node p086.cluster.local
slurm-18985606_25.out:Doing task 00025 on node p086.cluster.local
slurm-18985606_2.out:Doing task 00002 on node p081.cluster.local
slurm-18985606_3.out:Doing task 00003 on node p081.cluster.local
slurm-18985606_4.out:Doing task 00004 on node p081.cluster.local
slurm-18985606_5.out:Doing task 00005 on node p081.cluster.local
slurm-18985606_6.out:Doing task 00006 on node p081.cluster.local
slurm-18985606_7.out:Doing task 00007 on node p081.cluster.local
slurm-18985606_8.out:Doing task 00008 on node p081.cluster.local
slurm-18985606_9.out:Doing task 00009 on node p081.cluster.local
16:47:56 [alainm@castor sequential]#
```

The first 20 tasks (0 through 19) were packed onto a single node, p081, and the remaining 6 tasks (20 through 25) spilled over to p086.
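One quick way to check how the tasks were packed is to count how many landed on each node, using the last field of the same `grep` output. A sketch, with a few sample lines standing in for the real `slurm-*.out` files:

```shell
# Sample lines in the same format as the grep output above.
sample='slurm-18985606_0.out:Doing task 00000 on node p081.cluster.local
slurm-18985606_1.out:Doing task 00001 on node p081.cluster.local
slurm-18985606_20.out:Doing task 00020 on node p086.cluster.local'

# The node name is the last whitespace-separated field; count tasks per node.
printf '%s\n' "$sample" | awk '{print $NF}' | sort | uniq -c
```

Run against the real output above, this would report 20 tasks on p081 and 6 on p086.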