Batch system

JURECA is accessed through a dedicated set of login nodes used to write and compile applications as well as to perform pre- and post-processing of simulation data. Access to the compute nodes in the system is controlled by the workload manager.

On JURECA the Slurm (Simple Linux Utility for Resource Management) Workload Manager, a free open-source resource manager and batch system, is employed. Slurm is a modern, extensible batch system that is widely deployed around the world on clusters of various sizes.

A Slurm installation consists of several programs and daemons. The slurmctld daemon is the central brain of the batch system, responsible for monitoring the available resources and scheduling batch jobs. It runs on an administrative node with a special setup to ensure availability in the case of hardware failures. Most user programs such as srun, sbatch, salloc and scontrol interact with slurmctld. For job accounting, slurmctld communicates with the slurmdbd database daemon; information from the accounting database can be queried using the sacct command. Slurm combines the functionality of a batch system and a resource manager. For this purpose Slurm provides the slurmd daemon, which runs on the compute nodes and interacts with slurmctld. For the execution of user processes, slurmstepd instances are spawned by slurmd to shepherd the user processes.

On JURECA no slurmd is running on the compute nodes. Instead, process management is performed by psid, the management daemon of the ParaStation Cluster Suite, together with its psslurm plugin, which replaces slurmd on the compute nodes. Therefore only one daemon is required on each compute node for resource management, which minimizes jitter that could affect large-scale applications.

Slurm Partitions

In Slurm, multiple nodes can be grouped into partitions, which are sets of nodes with associated limits (for wall-clock time, job size, etc.). In practice, partitions are used, for example, to select resources with certain hardware characteristics (standard, large memory, accelerated, etc.) or resources dedicated to specific workloads (large production jobs, small debugging jobs, visualization, etc.).
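
The partitions configured on the system and their limits can be inspected with standard Slurm commands, for example:

$ sinfo --summarize
$ scontrol show partition dc-cpu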

Hardware Overview

Login partition

Type          Quantity   Description
Login nodes   12         128 cores, 1 TiB, 2× Quadro RTX8000

JURECA DC module

Type                    Quantity   Description
Standard / Slim nodes   480        128 cores, 512 GiB
Large memory nodes      96         128 cores, 1 TiB
Accelerated nodes       192        128 cores, 512 GiB, 4× A100 GPUs

Available Partitions

Compute nodes are used exclusively by jobs of a single user; no node sharing between jobs takes place. The smallest allocation unit is one node (128 cores). Users are charged for the number of allocated compute nodes multiplied by the used wall-clock time. On each node, a share of the available memory is reserved for the system and is not available to applications.

The dc-cpu, dc-gpu, dc-cpu-bigmem partitions are intended for production jobs. To support development and code optimization, additional devel partitions are available.

The dc-cpu partition is the default partition used when no other partition is specified. It encompasses CPU-only compute nodes in the JURECA DC module with 512 GiB and 1024 GiB main memory. The dc-gpu partition provides access to JURECA compute nodes with A100 GPUs. The dc-cpu-bigmem partition contains nodes with 1 TiB main memory each.

A limit regarding the maximum number of running jobs per user is enforced. The precise values are adjusted to optimize system utilization. In general, the limit for the number of running jobs is lower for nocont projects.

In addition to the above mentioned partitions the dc-cpu-large and dc-gpu-large partitions are available for large and full-module jobs. The partitions are open for submission but jobs will only run in selected timeslots. The use of these partitions needs to be coordinated with the user support.

In order to request nodes with particular resources (e.g. mem1024, gpu), the corresponding generic resources need to be requested at job submission.
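
For example (a sketch using the generic resources and partitions described below; the job script is a placeholder):

$ sbatch --partition=dc-gpu --gres=gpu:4 <jobscript>
$ sbatch --partition=dc-cpu --gres=mem1024 <jobscript>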

JURECA DC module partitions

Partition       Resource                      Value
dc-cpu          max. wallclock time           24 h / 6 h
                default wallclock time        1 h
                min. / max. number of nodes   1 / 128
                node types / gres             mem512 (512 GiB) and mem1024 (1024 GiB)
                features                      nodesubset@jrc0[710-719]: largedata
dc-gpu          max. wallclock time           24 h / 6 h
                default wallclock time        1 h
                min. / max. number of nodes   1 / 24
                node types / gres             mem512, gpu:[1-4] (512 GiB, 4× A100 per node)
dc-cpu-bigmem   max. wallclock time           24 h / 6 h
                default wallclock time        1 h
                min. / max. number of nodes   1 / 48
                node types / gres             mem1024 (1024 GiB)
                features                      bigmem (1024 GiB)
dc-cpu-devel    max. wallclock time           2 h
                default wallclock time        30 min
                min. / max. number of nodes   1 / 4
                node types / gres             mem512 (512 GiB)
dc-gpu-devel    max. wallclock time           2 h
                default wallclock time        30 min
                min. / max. number of nodes   1 / 4
                node types / gres             mem512, gpu:[1-4] (512 GiB, 4× A100 per node)

Allocations, Jobs and Job Steps

In Slurm, a job is an allocation of selected resources for a specific amount of time. A job allocation can be requested using sbatch or salloc. Within a job, multiple job steps can be executed using srun; each job step uses all or a subset of the allocated compute nodes. Job steps may execute concurrently if the resource allocation permits it.

Writing a Batch Script

Users submit batch applications (usually bash scripts) using the sbatch command. The script is executed on the first compute node in the allocation. To execute parallel MPI tasks users call srun within their script.

Note

mpiexec is not supported on JURECA and has to be replaced by srun.

The minimal template to be filled in is:

#!/bin/bash -x
#SBATCH --account=<budget account>
# budget account where contingent is taken from
#SBATCH --nodes=<no of nodes>
#SBATCH --ntasks=<no of tasks (MPI processes)>
# can be omitted if --nodes and --ntasks-per-node
# are given
#SBATCH --ntasks-per-node=<no of tasks per node>
# if keyword omitted: Max. 256 tasks per node
# (SMT enabled, see comment below)
#SBATCH --cpus-per-task=<no of threads per task>
# for OpenMP/hybrid jobs only
#SBATCH --output=<path of output file>
# if keyword omitted: Default is slurm-%j.out in
# the submission directory (%j is replaced by
# the job ID).
#SBATCH --error=<path of error file>
# if keyword omitted: Default is slurm-%j.out in
# the submission directory.
#SBATCH --time=<walltime>
#SBATCH --partition=<dc-cpu, ...>

# *** start of job script ***
# Note: The current working directory at this point is
# the directory where sbatch was executed.

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun <executable>

Multiple srun calls can be placed in a single batch script. Options such as --account, --nodes, --ntasks and --ntasks-per-node are by default taken from the sbatch arguments but can be overridden for each srun invocation.
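
As a sketch, two consecutive job steps with different resource requirements could be combined in one script as follows (program names are placeholders):

#!/bin/bash -x
#SBATCH --account=<budget>
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=128
#SBATCH --time=00:30:00
#SBATCH --partition=dc-cpu

# first job step uses the full allocation (4 x 128 = 512 tasks)
srun ./prog-a
# second job step overrides the defaults and runs 128 tasks on a single node
srun --nodes=1 --ntasks=128 ./prog-b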

The default partition on JURECA, which is used if --partition is omitted, is the dc-cpu partition.

Note

If --ntasks-per-node is omitted or set to a value higher than 128, SMT (simultaneous multithreading) will be enabled. Each compute node in the DC module features 128 physical cores and 256 logical cores.

Job Script Examples

Example 1: MPI application starting 512 tasks on 4 nodes using 128 CPUs per node (no SMT) running for max. 15 minutes:

#!/bin/bash -x
#SBATCH --account=<budget>
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=128
#SBATCH --output=mpi-out.%j
#SBATCH --error=mpi-err.%j
#SBATCH --time=00:15:00
#SBATCH --partition=dc-cpu

srun ./mpi-prog

Example 2: MPI application starting 4096 tasks on 16 nodes using 256 logical CPUs (hardware threads) per node (SMT enabled) running for max. 20 minutes:

#!/bin/bash -x
#SBATCH --account=<budget>
#SBATCH --nodes=16
#SBATCH --ntasks-per-node=256
#SBATCH --output=mpi-out.%j
#SBATCH --error=mpi-err.%j
#SBATCH --time=00:20:00
#SBATCH --partition=dc-cpu

srun ./mpi-prog

Example 3: Hybrid application starting 8 tasks per node on 4 allocated nodes and starting 16 threads per task (no SMT):

#!/bin/bash -x
#SBATCH --account=<budget>
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=16
#SBATCH --output=mpi-out.%j
#SBATCH --error=mpi-err.%j
#SBATCH --time=00:20:00
#SBATCH --partition=dc-cpu

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun ./hybrid-prog

Example 4: Hybrid application starting 8 tasks per node on 3 allocated nodes and starting 32 threads per task (SMT enabled):

#!/bin/bash -x
#SBATCH --account=<budget>
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=32
#SBATCH --output=mpi-out.%j
#SBATCH --error=mpi-err.%j
#SBATCH --time=00:20:00
#SBATCH --partition=dc-cpu

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun ./hybrid-prog

The job script is submitted using:

$ sbatch <jobscript>

On success, sbatch writes the job ID to standard out.
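
If the job ID is needed in a script, for example to build dependency chains, it can be captured using the standard --parsable option, which prints only the job ID (plus the cluster name, if configured):

$ JOBID=$(sbatch --parsable <jobscript>)
$ echo ${JOBID}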

Note

One can also define sbatch options on the command line, e.g.:

$ sbatch --nodes=4 --account=<budget> --time=01:00:00 <jobscript>

Requesting Generic Resources and Features

In order to request resources with special features (additional main memory, GPU devices, largedata), the --gres option to sbatch can be used. For mem1024 nodes, which are accessible via specific partitions, the --gres option can be omitted. Since the GPU and visualization nodes feature multiple user-visible GPU devices, an additional quantity can be specified as shown in the following examples.

Option                            Requested hardware features
--partition=dc-cpu-bigmem         1 TiB main memory
--gres=gpu:2 --partition=dc-gpu   Cluster node, 2 GPUs per node
--gres=gpu:4 --partition=dc-gpu   Cluster node, 4 GPUs per node
--constraint=largedata            XCST storage (largedata, largedata2)
--constraint=bigmem               1 TiB main memory

If no specific memory size is requested the default --gres=mem512 is automatically added to the submission. Please note that jobs requesting 512 GiB may also run on nodes with 1024 GiB if no other free resources are available.

If no gpu GRES is given then --gres=gpu:4 is automatically added by Slurm’s submission filter for all partitions with GPU nodes. Please note that GPU applications can request GPU devices per node via --gres=gpu:n where n can be 1, 2, 3 or 4 on GPU compute nodes. Please refer to the JURECA GPU computing page for examples.

Note

The charged computing time is independent of the number of specified GPUs. Production workloads must use all available GPU resources per node.
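
A minimal sketch of a GPU batch job requesting all four A100 devices per node, with one task per GPU as one possible layout (the executable name is a placeholder):

#!/bin/bash -x
#SBATCH --account=<budget>
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=32
#SBATCH --gres=gpu:4
#SBATCH --time=00:30:00
#SBATCH --partition=dc-gpu

srun ./gpu-prog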

The XCST storage resource is available on all login nodes as well as on 10 JURECA DC compute nodes inside the usual default batch partition dc-cpu. For an example of how to use it, please refer to: How to access largedata on a limited number of computes within your jobs?

Please see GPU Computing for more details.

Job Steps

The example below shows a job script in which two different job steps are initiated within one job. In total 256 cores are allocated on two nodes; the option -n 128 causes each job step to use 128 cores on one of the compute nodes. Additionally, the option --exclusive is passed to srun to ensure that distinct cores are allocated to each job step:

#!/bin/bash -x
#SBATCH --account=<budget>
#SBATCH --nodes=2
#SBATCH --ntasks=256
#SBATCH --ntasks-per-node=128
#SBATCH --output=mpi-out.%j
#SBATCH --error=mpi-err.%j
#SBATCH --time=00:20:00

srun --exclusive -n 128 ./mpi-prog1 &
srun --exclusive -n 128 ./mpi-prog2 &

wait

Dependency Chains

Slurm supports dependency chains, i.e., collections of batch jobs with defined dependencies. Dependencies can be defined using the --dependency argument to sbatch:

sbatch --dependency=afterany:<jobid> <jobscript>

Slurm will guarantee that the new batch job (whose job ID is returned by sbatch) does not start before <jobid> terminates (successfully or not). It is possible to specify other types of dependencies, such as afterok which ensures that the new job will only start if <jobid> finished successfully.

Below, an example script for the handling of job chains is provided. The script submits a chain of ${NO_OF_JOBS} jobs, where each job only starts after successful completion of its predecessor. Please note that a job which exceeds its time limit is not marked as successful:

#!/bin/bash -x
# submit a chain of jobs with dependency
# number of jobs to submit
NO_OF_JOBS=<no of jobs>
# define jobscript
JOB_SCRIPT=<jobscript>
echo "sbatch ${JOB_SCRIPT}"
JOBID=$(sbatch ${JOB_SCRIPT} 2>&1 | awk '{print $(NF)}')
I=1
while [ ${I} -lt ${NO_OF_JOBS} ]; do
  echo "sbatch --dependency=afterok:${JOBID} ${JOB_SCRIPT}"
  JOBID=$(sbatch --dependency=afterok:${JOBID} ${JOB_SCRIPT} 2>&1 | awk '{print $(NF)}')
  let I=${I}+1
done

Interactive Sessions

Interactive sessions can be allocated using the salloc command:

$ salloc --partition=<dc-cpu-devel|dc-gpu-devel|...> --nodes=2 --account=<budget> --time=00:30:00

Once an allocation has been made salloc will start a shell on the login node (submission host). One can then execute srun from within the shell, e.g.:

$ srun --ntasks=4 --ntasks-per-node=2 --cpus-per-task=7 ./hybrid-prog

The interactive session is terminated by exiting the shell. In order to obtain a shell on the first allocated compute node, one can start a remote shell from within the salloc session and connect it to a pseudo terminal using:

$ srun --cpu-bind=none --nodes=2 --pty /bin/bash -i

The option --cpu-bind=none is used to disable CPU binding for the spawned shell. In order to execute MPI applications, one uses srun again from within the remote shell. To support X11 forwarding, the --forward-x option to srun is available. X11 forwarding is required for users who want to use applications or tools that provide a GUI.
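
For example, an interactive shell with X11 forwarding on one of the allocated nodes could be obtained as follows (a sketch; xterm stands in for any GUI application):

$ srun --forward-x --nodes=1 --pty /bin/bash -i
$ xterm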

Below, a transcript of an exemplary interactive session is shown. srun can be executed within the allocation without delay (note that the very first srun invocation in the session may take slightly longer due to the node health checking performed at that point).

[user1@jrlogin04 ~]$ hostname
jrlogin04.jureca
[user1@jrlogin04 ~]$ salloc -n 2 --nodes=2 --account=<budget>
salloc: Pending job allocation 2622
salloc: job 2622 queued and waiting for resources
salloc: job 2622 has been allocated resources
salloc: Granted job allocation 2622
salloc: Waiting for resource configuration
salloc: Nodes jrc0690 are ready for job
[user1@jrlogin04 ~]$ srun --ntasks 2 --ntasks-per-node=2 hostname
jrc0690.jureca
jrc0690.jureca
[user1@jrlogin04 ~]$ srun --cpu-bind=none --nodes=1 --pty /bin/bash -i
[user1@jrc0690 ~]$ hostname
jrc0690.jureca
[user1@jrc0690 ~]$ logout
[user1@jrlogin04 ~]$ hostname
jrlogin04.jureca
[user1@jrlogin04 ~]$ exit
exit
salloc: Relinquishing job allocation 2622
[user1@jrlogin04 ~]$ hostname
jrlogin04.jureca

Note

Your account will be charged per allocation whether the compute nodes are used or not. Batch submission is the preferred way to execute jobs.

Hold and Release Batch Jobs

Jobs that are in pending state (i.e., not yet running) can be put on hold using:

scontrol hold <jobid>

Jobs that are on hold are still reported as pending (PD) by squeue, but the Reason shown by squeue or scontrol show job changes to JobHeldUser:

[user1@jrlogin08 ~]$ scontrol show job <jobid>
JobId=<jobid> JobName=jobscript.sh
UserId=XXX(nnnn) GroupId=XXX(nnnn) MCS_label=N/A
Priority=0 Nice=0 Account=XXX QOS=normal
JobState=PENDING Reason=JobHeldUser Dependency=(null)
Requeue=0 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0
RunTime=00:00:00 TimeLimit=01:00:00 TimeMin=N/A
SubmitTime=2020-11-28T11:44:26 EligibleTime=2020-11-28T11:44:26
AccrueTime=Unknown
StartTime=Unknown EndTime=Unknown Deadline=N/A
SuspendTime=None SecsPreSuspend=0 LastSchedEval=2020-11-28T11:44:26
Partition=dc-cpu AllocNode:Sid=jrlogin04:19969
ReqNodeList=(null) ExcNodeList=(null)
NodeList=(null)
NumNodes=2-2 NumCPUs=2 NumTasks=2 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
TRES=cpu=2,node=2,billing=2
Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0
Features=(null) DelayBoot=00:00:00
Reservation=(null)
OverSubscribe=NO Contiguous=0 Licenses=home@just,project@just,scratch@just Network=(null)
Command=/XXX/jobscript.sh
WorkDir=/XXX
StdErr=/XXX/mpi-err.<jobid>
StdIn=/dev/null
StdOut=/XXX/mpi-out.<jobid>
Power=
TresPerNode=mem512

The job can be released using:

$ scontrol release <jobid>

Slurm commands

Below a list of the most important Slurm user commands available on JURECA is given.

sbatch

is used to submit a batch script (which can be a bash, Perl or Python script)

The script will be executed on the first node of the allocation chosen by the scheduler. The working directory coincides with the working directory of the sbatch invocation. Within the script, one or multiple srun commands can be used to create job steps and execute (MPI) parallel applications.

Note

mpiexec is not supported on JURECA. srun is the only supported method to spawn MPI applications.

salloc

is used to request an allocation

When the job is started, a shell (or other program specified on the command line) is started on the submission host (login node). From the shell srun can be used to interactively spawn parallel applications. The allocation is released when the user exits the shell.

srun

is mainly used to create a job step within a job

srun can be executed without additional arguments (apart from the program to run), in which case it uses the full allocation, or with options that restrict the job step to a subset of the allocated processors.

squeue

is used to query the list of pending and running jobs

By default it reports the list of pending jobs sorted by priority and, separately, the list of running jobs, also sorted by priority.
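
A common invocation restricts the output to one's own jobs, e.g.:

$ squeue -u <username>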

scancel

is used to cancel pending or running jobs or to send signals to processes in running jobs or job steps

Example: scancel <jobid>
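
To send a signal instead of cancelling a job, the --signal option can be used, for example (SIGUSR1 is only illustrative):

$ scancel --signal=USR1 <jobid>
$ scancel --signal=USR1 --batch <jobid>

With --batch the signal is delivered only to the batch shell, not to the job steps.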

scontrol

can be used to query information about compute nodes and running or recently completed jobs

Examples:

  • scontrol show job <jobid> to show detailed information about pending, running or recently completed jobs

  • scontrol update jobid=<jobid> <option>=<value> to update a pending job (see the sketch below)
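
As a sketch, the time limit or the partition of a pending job could be changed as follows (values are illustrative):

$ scontrol update jobid=<jobid> TimeLimit=00:30:00
$ scontrol update jobid=<jobid> Partition=dc-cpu-devel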

Note

For old jobs scontrol show job <jobid> will not work and sacct -j <jobid> should be used instead.

sacct

is used to retrieve accounting information for jobs and job steps

For older jobs sacct queries the accounting database.

Example: sacct -j <jobid>
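
A more detailed query can select specific fields via --format, e.g.:

$ sacct -j <jobid> --format=JobID,JobName,Partition,NNodes,Elapsed,State,ExitCode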

sinfo

is used to retrieve information about the partitions and node states

sprio

can be used to query job priorities

smap

graphically shows the state of the partitions and nodes using a curses interface

We recommend LLview as an alternative, which is supported on all JSC machines.

sattach

allows attaching to the standard input, output or error of a running job

sstat

allows querying information about a running job

Summary of sbatch and srun Options

The following table summarizes important sbatch and srun command options:

--account

Budget account where contingent is taken from.

--nodes

Number of compute nodes used by the job. Can be omitted if --ntasks and --ntasks-per-node are given.

--ntasks

Number of tasks (MPI processes). Can be omitted if --nodes and --ntasks-per-node are given. [1]

--ntasks-per-node

Number of tasks per compute node.

--cpus-per-task

Number of logical CPUs (hardware threads) per task. This option is only relevant for hybrid/OpenMP jobs.

--job-name

A name for the job

--output

Path to the job’s standard output. Slurm supports format strings containing replacement symbols such as %j (job ID). [2]

--error

Path to the job’s standard error. Slurm supports format strings containing replacement symbols such as %j (job ID).

--time

Maximal wall-clock time of the job.

--partition

Partition to be used, e.g. dc-cpu or dc-gpu. If omitted, dc-cpu is the default.

--mail-user

Define the mail address that receives mail notifications.

--mail-type

Define when to send mail notifications. [3]

--pty (srun only)

Execute the first task in pseudo terminal mode.

--forward-x (srun only)

Enable X11 forwarding on the first allocated node.

--disable-turbomode (sbatch)

Disable turbo mode of all CPUs of the allocated nodes.

[1] If --ntasks is omitted, the number of nodes can be specified as a range --nodes=<min no. of nodes>-<max no. of nodes>, allowing the scheduler to start the job with fewer nodes than the maximum requested if this reduces the wait time.

[2] stdout and stderr can be combined by specifying the same file for the --output and --error options.

[3] Valid types: BEGIN, END, FAIL, REQUEUE, ALL, TIME_LIMIT, TIME_LIMIT_90, TIME_LIMIT_80, TIME_LIMIT_50, ARRAY_TASKS. Multiple type values may be specified in a comma-separated list.

More information is available on the man pages of sbatch, srun and salloc which can be retrieved on the login nodes with the commands man sbatch, man srun and man salloc, respectively, or in the Slurm documentation.

Frequency Scaling and Performance Reliability

CPU frequency sets the pace at which instructions are executed by the CPU. A higher frequency results in:
  • Higher power usage

  • Possibly higher performance

Each CPU has a base frequency, which is the frequency that the CPU is guaranteed to work at.

Turbo mode means that the CPU increases its frequency above the base frequency if the temperature allows it. A higher frequency results in more heat dissipation and a higher temperature. If the temperature passes the designed threshold, the CPU lowers the frequency to control the temperature, which might affect performance.

Therefore, the base frequency is more reliable since application performance does not depend on the current temperature of the allocated CPUs.

As a result, for repeatable performance measurements it is recommended to use --disable-turbomode to run at the base frequency with turbo mode disabled; see the Summary of sbatch and srun Options above.
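
A sketch of a benchmark job running at the base frequency (the executable name is a placeholder):

#!/bin/bash -x
#SBATCH --account=<budget>
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128
#SBATCH --time=00:30:00
#SBATCH --partition=dc-cpu
#SBATCH --disable-turbomode

srun ./benchmark-prog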