# Running a task on the cluster

[[_TOC_]]

# Running a job on the cluster

Jobs can be run on the cluster through the [srun](https://slurm.schedmd.com/srun.html) or [sbatch](https://slurm.schedmd.com/sbatch.html) commands. Both commands support common options, the main differences being that

* `srun` is blocking, prints to the standard output, and can run arbitrary commands, including sbatch scripts. Please refer to the [srun manual](https://slurm.schedmd.com/srun.html) for a full description.
Here are the currently available partitions on Licallo:
* **x40**: parallel jobs requiring a fat node (typically hybrid jobs) should go in this partition. The nodes in this partition have 2 sockets/processors of 20 cores each and 192 GB of RAM.

* **1to**: the big-memory partition; it contains only one node, with 1 TB of RAM and 4 sockets of 8 cores each.
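
The partitions actually configured, and their current state, can be inspected directly with Slurm's standard `sinfo` command (output omitted here, as it depends on the cluster's state at the time you run it):

```
$ sinfo -s
```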
#### Lagrange

TBD

### Running interactively

Any command can be dispatched to a compute node through `srun`:

```
$ hostname
pollux.cluster
$ srun --partition x40 --time 0:1:0 hostname
x033.cluster
$
```

Note that you need to select a *partition* (see the partitions described above).

* The `--partition <partition name>` option is required to select the partition.

* The `--time <hh:mm:ss>` option sets the expected upper bound on the runtime. It is not mandatory, but the default time limit is very short.

## sbatch script
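
An sbatch script wraps the options shown above as `#SBATCH` directives in the script header. A minimal sketch, assuming the `x40` partition from the list above (the 10-minute time limit and the file name `job.sbatch` are illustrative):

```shell
# Write a minimal batch script (sketch; partition and time limit are
# example values, adapt them to your job).
cat > job.sbatch <<'EOF'
#!/bin/bash
#SBATCH --partition x40
#SBATCH --time 0:10:0

hostname
EOF

# Show the resulting script.
cat job.sbatch
```

The script would then be submitted with `sbatch job.sbatch`; see the [sbatch manual](https://slurm.schedmd.com/sbatch.html) for the full set of directives.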