[**M**essage **P**assing **I**nterface](https://www.mpi-forum.org/) (MPI) can be used to distribute a parallel application across more than one node. We support the Intel implementation.[^1]
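
A quick way to check that an Intel MPI environment is available before going further (the module name below is an assumption; adapt it to what `module avail` reports on your cluster):

```
# Environment Modules prints "module avail" on stderr, hence the 2>&1
module avail 2>&1 | grep -i mpi

# Load the Intel MPI module (name assumed) and confirm the runtime
module load intel-mpi
mpirun --version
```
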
[[_TOC_]]
## Get the example

```
14:56:53 [alainm@pollux view]# git clone https://gitlab.oca.eu/DSI/HPC.git
...
15:08:54 [alainm@pollux MPI]#
```
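
After the clone, the example used below lives in the `SLURM/MPI` sub-directory of the repository (the path is the one used by the links on this page; the intermediate `cd` is elided in the prompt above):

```
cd HPC/SLURM/MPI
ls
```
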
## Script explanation

We are now describing the Slurm script [run.slurm](https://gitlab.oca.eu/DSI/HPC/-/blob/master/SLURM/MPI/build.sh). Note that we will run 2 tasks per node to illustrate the fact that the job will use more than one node, although this toy example would be happy with one task per core.
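
The script itself is in the repository linked above. As a rough sketch of what such a script typically looks like (the module name, executable name and exact resource limits are assumptions, not taken from the repository):

```
#!/bin/bash
#SBATCH --job-name=mpi-example        # name shown by squeue
#SBATCH --nodes=2                     # spread the job over more than one node
#SBATCH --ntasks-per-node=2           # only 2 MPI tasks per node, as explained above
#SBATCH --time=00:10:00               # short wall time for a toy example
#SBATCH --output=mpi-example-%j.out   # one log file per job id

# Load the Intel MPI environment (module name is an assumption,
# check "module avail" on the cluster for the exact name).
module load intel-mpi

# Launch one MPI process per Slurm task; srun inherits the task
# layout (2 nodes x 2 tasks per node) from the directives above.
srun ./mpi_example
```

Such a script would be submitted with `sbatch run.slurm`, and Slurm would then place the four tasks on two different nodes.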