This is 3, I got 4, I'm so talented
```

## Script explanation
We now describe the Slurm script [run.slurm](https://gitlab.oca.eu/DSI/HPC/-/blob/master/SLURM/MPI/build.sh). Note that we run two tasks per node to illustrate that the job spans more than one node, although this toy example would be perfectly happy with all four tasks on a single node, one per core.
```
#!/bin/bash

#SBATCH --job-name=Dice            # Job name
#SBATCH --ntasks=4                 # Total number of MPI processes, could be deduced
#SBATCH --ntasks-per-node=2        # Number of MPI processes per node
#SBATCH --time=00:10:00            # Maximum compute time (HH:MM:SS)
#SBATCH --output=log-dice-%j.out   # Output file (%j is the job's id)
#SBATCH --error=log-dice-%j.err    # Error file

# Go to the submit dir
cd ${SLURM_SUBMIT_DIR}

# Clean up the environment that could be inherited
module purge

# Load the needed modules, depends on your application
module load intel-gnu8-runtime/19.1.2.254
module load intelpython3 impi
module load boost
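
# Optionally record the loaded modules in the job log, as a sanity check
# that the purge/load sequence left exactly what you expect
# (uncomment to enable; not part of the original script):
# module list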

# Be verbose
set -x

# Run the code, one could also use mpiexec or mpirun
srun --mpi=pmi2 ./dice
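# For reference, the equivalent launch through Intel MPI's own launcher
# would look like this (a sketch, not part of the original script;
# SLURM_NTASKS is set by Slurm from the --ntasks directive):
#   mpirun -np ${SLURM_NTASKS} ./dice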
```
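
To try it out, submit the script with `sbatch` and watch the queue. Below is a minimal usage sketch, assuming the script is saved as `run.slurm`; `<jobid>` stands for the job id that Slurm assigns at submission:

```
sbatch run.slurm           # Slurm answers "Submitted batch job <jobid>"
squeue -u $USER            # while running, the NODES column should show 2
cat log-dice-<jobid>.out   # after completion, inspect the four dice rolls
```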
[^1]: Which has an impact in terms of Slurm integration.