SLURM script examples
This section contains some script examples that you can adapt to run your jobs. Always refer to the manual for a complete overview of the subject.
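As a quick reminder, each example below is saved as a plain script file and handed to SLURM with sbatch. The sketch below shows the basic submit-and-monitor workflow; the filename job.sh is just a placeholder, and the cluster commands are shown as comments.

```shell
# Write a minimal job script (job.sh is a placeholder name)
cat > job.sh <<'EOF'
#!/usr/bin/env bash
#SBATCH -N 1
#SBATCH --time 0-00:10:00
echo "hello from $(hostname)"
EOF

# On the cluster you would then run:
#   sbatch job.sh        # submit the script; prints the assigned job ID
#   squeue -u "$USER"    # check the state of your jobs in the queue
#   scancel <JOBID>      # cancel a job if needed

# Every line starting with '#SBATCH' is a directive read by the scheduler:
grep -c '^#SBATCH' job.sh
```

Note that #SBATCH directives must appear before the first non-comment command in the script, or SLURM will ignore them.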
EXAMPLE 1
The following example submits a parallel MPI job to the scheduler, requesting 1 compute node (-N 1), 48 MPI tasks (--ntasks-per-node=48), no shared-memory parallelism (--cpus-per-task=1), and no hyperthreading (--threads-per-core=1).
#!/usr/bin/env bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=48
#SBATCH --cpus-per-task=1
#SBATCH --threads-per-core=1
#SBATCH --time 0-00:30:00
#SBATCH --partition=cpu_sapphire
#SBATCH --mail-type=ALL
#SBATCH --mail-user=name.surname@polito.it
module purge
module load openmpi/5.0.7_gcc12
module load {my_module} ## replace {my_module} with the actual module name
mpirun -np 48 my_module_executable [OTHER PARAMETERS]
EXAMPLE 2
The following example submits a parallel MPI job to the scheduler, requesting 2 compute nodes (-N 2), 48 MPI tasks per node (--ntasks-per-node=48), no shared-memory parallelism (--cpus-per-task=1), and no hyperthreading (--threads-per-core=1). It explicitly requests specific compute nodes for the job (--nodelist=compute-X-Y). The executable is then run with a total of 96 MPI tasks.
#!/usr/bin/env bash
#SBATCH -N 2
#SBATCH --ntasks-per-node=48
#SBATCH --cpus-per-task=1
#SBATCH --threads-per-core=1
#SBATCH --time 0-00:30:00
#SBATCH --partition=cpu_sapphire
#SBATCH --mail-type=ALL
#SBATCH --mail-user=name.surname@polito.it
#SBATCH --nodelist=compute-X-Y,compute-K-Z
module purge
module load openmpi/5.0.7_gcc12
module load {my_module} ## replace {my_module} with the actual module name
mpirun -np 96 my_module_executable [OTHER PARAMETERS]
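Rather than hardcoding 96, the total task count can be derived from the variables SLURM exports inside every job, so the mpirun line stays correct if -N changes. A small sketch of the arithmetic, with the SLURM variables simulated so the snippet runs standalone:

```shell
# Simulated values; inside a real job SLURM exports these automatically
SLURM_NNODES=2            # from '#SBATCH -N 2'
SLURM_NTASKS_PER_NODE=48  # from '#SBATCH --ntasks-per-node=48'

# Total MPI ranks = nodes * tasks per node (what 'mpirun -np 96' hardcodes);
# inside a job you could equivalently use $SLURM_NTASKS directly.
TOTAL_TASKS=$(( SLURM_NNODES * SLURM_NTASKS_PER_NODE ))
echo "$TOTAL_TASKS"
```

With this, the launch line becomes mpirun -np "$SLURM_NTASKS" and never goes out of sync with the #SBATCH directives.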
EXAMPLE 3
The following example submits a parallel MPI job to the scheduler, requesting 2 compute nodes (-N 2), 48 MPI tasks per node (--ntasks-per-node=48), no shared-memory parallelism (--cpus-per-task=1), and no hyperthreading (--threads-per-core=1). It explicitly excludes specific compute nodes from the job (--exclude=compute-X-Y). The executable is then run with a total of 96 MPI tasks.
#!/usr/bin/env bash
#SBATCH -N 2
#SBATCH --ntasks-per-node=48
#SBATCH --cpus-per-task=1
#SBATCH --threads-per-core=1
#SBATCH --time 0-00:30:00
#SBATCH --partition=cpu_sapphire
#SBATCH --mail-type=ALL
#SBATCH --mail-user=name.surname@polito.it
#SBATCH --exclude=compute-X-Y,compute-K-Z
module purge
module load openmpi/5.0.7_gcc12
module load {my_module} ## replace {my_module} with the actual module name
mpirun -np 96 my_module_executable [OTHER PARAMETERS]
EXAMPLE 4
The following example submits a hybrid MPI-OpenMP job to the scheduler (see the manual for further details), requesting 1 compute node (-N 1), 12 MPI tasks per node (--ntasks-per-node=12), 4 OpenMP threads per task (--cpus-per-task=4), and no hyperthreading (--threads-per-core=1).
#!/usr/bin/env bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=12
#SBATCH --cpus-per-task=4
#SBATCH --threads-per-core=1
#SBATCH --time 0-00:30:00
#SBATCH --partition=cpu_sapphire
#SBATCH --mail-type=ALL
#SBATCH --mail-user=name.surname@polito.it
module purge
module load openmpi/5.0.7_gcc12
module load {my_module} ## replace {my_module} with the actual module name
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK ## one OpenMP thread per CPU requested with --cpus-per-task
mpirun -np 12 my_module_executable [OTHER PARAMETERS]
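The core accounting behind this hybrid request: MPI tasks per node times OpenMP threads per task should not exceed the physical cores of the node. A quick sketch, assuming 48-core nodes as in the pure-MPI examples above:

```shell
NTASKS_PER_NODE=12  # MPI ranks per node (--ntasks-per-node=12)
CPUS_PER_TASK=4     # OpenMP threads per rank (--cpus-per-task=4)

# Cores used per node: 12 * 4 = 48, i.e. the whole node
CORES_USED=$(( NTASKS_PER_NODE * CPUS_PER_TASK ))
echo "$CORES_USED"
```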
EXAMPLE 5
The following example submits an MPI job with GPUs to the scheduler, requesting 1 compute node (-N 1), 4 MPI tasks (--ntasks-per-node=4) on 4 GPUs (--gres=gpu:4), with no shared-memory parallelism (--cpus-per-task=1) and no hyperthreading (--threads-per-core=1).
#!/usr/bin/env bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=1
#SBATCH --threads-per-core=1
#SBATCH --time 0-00:30:00
#SBATCH --gres=gpu:4
#SBATCH --partition=gpu_a40
#SBATCH --mail-type=ALL
#SBATCH --mail-user=name.surname@polito.it
module purge
module load openmpi/5.0.7_gcc12
module load {my_module} ## replace {my_module} with the actual module name
mpirun -np 4 my_module_executable [OTHER PARAMETERS]
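One common pattern for one-rank-per-GPU runs (a general sketch, not cluster-specific advice) is to pin each MPI rank to its own GPU in a small wrapper script launched by mpirun, using the local-rank environment variable that Open MPI sets for every rank:

```shell
# Per-rank wrapper sketch: each MPI rank sees exactly one GPU.
# OMPI_COMM_WORLD_LOCAL_RANK is set by Open MPI's mpirun for each rank;
# it is simulated here so the snippet runs standalone.
OMPI_COMM_WORLD_LOCAL_RANK=2  # simulated; ranks 0-3 map to GPUs 0-3

export CUDA_VISIBLE_DEVICES="$OMPI_COMM_WORLD_LOCAL_RANK"
echo "$CUDA_VISIBLE_DEVICES"

# Real usage: mpirun -np 4 ./wrapper.sh my_module_executable [OTHER PARAMETERS]
```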
EXAMPLE 6
The following example shows one way to use the BeeGFS high-performance filesystem. It copies the simulation data to $SCRATCH, runs the parallel simulation there, and then transfers the data back to $HOME.
#!/usr/bin/env bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=48
#SBATCH --cpus-per-task=1
#SBATCH --threads-per-core=1
#SBATCH --time 0-00:30:00
#SBATCH --partition=cpu_sapphire
#SBATCH --mail-type=ALL
#SBATCH --mail-user=name.surname@polito.it
module purge
module load openmpi/5.0.7_gcc12
module load {my_module} ## replace {my_module} with the actual module name
mkdir -p $SCRATCH/path-to-simulation-dir
cp $HOME/path-to-working-dir/* $SCRATCH/path-to-simulation-dir
cd $SCRATCH/path-to-simulation-dir || exit 1
mpirun -np 48 my_module_executable [OTHER PARAMETERS] ## task count matches --ntasks-per-node=48
rsync -a $SCRATCH/path-to-simulation-dir/* $HOME/path-to-working-dir/
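A defensive variant of the copy-back step (a sketch, not taken from the manual): check the simulation's exit status and only sync results to $HOME when the run succeeded, so a crashed run does not overwrite good data. Here 'true' stands in for the real mpirun command line:

```shell
# 'true' is a stand-in for: mpirun -np 48 my_module_executable ...
if true; then
    # Real script would run:
    # rsync -a $SCRATCH/path-to-simulation-dir/* $HOME/path-to-working-dir/
    echo "simulation ok: results synced back"
else
    echo "simulation failed: data left on scratch for inspection" >&2
    exit 1
fi
```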