Batch system
JEDI is accessed through a dedicated set of login nodes used to write and compile applications as well as to perform pre- and post-processing of simulation data. Access to the compute nodes in the system is controlled by the workload manager.
On JEDI the Slurm (Simple Linux Utility for Resource Management) Workload Manager, a free open-source resource manager and batch system, is employed. Slurm is a modern, extensible batch system that is widely deployed around the world on clusters of various sizes.
A Slurm installation consists of several programs and daemons. The slurmctld daemon is the central brain of the batch system, responsible for monitoring the available resources and scheduling batch jobs. It runs on an administrative node with a special setup to ensure availability in the case of hardware failures. Most user programs, such as srun, sbatch, salloc and scontrol, interact with slurmctld. For the purpose of job accounting, slurmctld communicates with the slurmdbd database daemon. Information from the accounting database can be queried using the sacct command.
Slurm combines the functionality of a batch system and resource management. For this purpose Slurm provides the slurmd daemon, which runs on the compute nodes and interacts with slurmctld. For the execution of user processes, slurmstepd instances are spawned by slurmd to shepherd the user processes.
Slurm Partitions
In Slurm multiple nodes can be grouped into partitions which are sets of nodes with associated limits (for wall-clock time, job size, etc.). In practice these partitions can be used for example to signal need for resources that have certain hardware characteristics (normal, large memory, accelerated, etc.) or that are dedicated to specific workloads (large production jobs, small debugging jobs, visualization, etc.).
Hardware Overview
| Type | Quantity | Description |
|---|---|---|
| Standard | 48 | 288 cores, 480 GiB |
| Login | 1 | 72 cores, 574 GiB |
Available Partitions
Compute nodes are used exclusively by jobs of a single user; no node sharing between jobs is done. The smallest allocation unit is one node (288 cores). Users will be charged for the number of compute nodes multiplied with the wall-clock time used. On each node, a share of the available memory is reserved and not available for application usage.
The system has only one partition, called all.
A limit regarding the maximum number of running jobs per user is enforced. The precise values are adjusted to optimize system utilization. In general, the limit for the number of running jobs is lower for nocont projects.
JEDI partitions

| Partition | Resource | Value |
|---|---|---|
| all | max. wallclock time (normal / nocont) | 6 h / 6 h |
|  | default wallclock time | 1 h |
|  | min. / max. number of nodes | 1 / 48 |
|  | node types | Standard |
Internet Access
Due to security measures, we do not allow internet access on the compute nodes - please take this into consideration when running your jobs.
Internet access is, however, allowed on the login nodes and on the devel partitions to facilitate development activities.
If your production jobs require files available on the internet, consider downloading them first on the login nodes and using them from the jobs (some frameworks allow runs in “offline mode”).
If this is not sufficient, and you need to run a “scraping” style workflow, please contact your Project Mentor or contact SC-Support at sc@fz-juelich.de for assistance.
Note
Although internet access is allowed, only a few ports (such as the usual HTTP/S ports 80 and 443) are open; many other ports are blocked.
Allocations, Jobs and Job Steps
In Slurm a job is an allocation of selected resources for a specific amount of time. A job allocation can be requested using sbatch and salloc. Within a job, multiple job steps can be executed using srun, using all or a subset of the allocated compute nodes. Job steps may execute at the same time if the resource allocation permits it.
Writing a Batch Script
Users submit batch applications (usually bash scripts) using the sbatch command. The script is executed on the first compute node in the allocation. To execute parallel MPI tasks, users call srun within their script.
Note
mpiexec is not supported on JEDI and has to be replaced by srun.
The minimal template to be filled is
#!/bin/bash -x
#SBATCH --account=<budget account>
# budget account where contingent is taken from
#SBATCH --nodes=<no of nodes>
#SBATCH --ntasks=<no of tasks (MPI processes)>
# can be omitted if --nodes and --ntasks-per-node
# are given
#SBATCH --ntasks-per-node=<no of tasks per node>
# if keyword omitted: Max. 288 tasks per node
# (SMT enabled, see comment below)
#SBATCH --cpus-per-task=<no of threads per task>
# for OpenMP/hybrid jobs only
#SBATCH --output=<path of output file>
# if keyword omitted: Default is slurm-%j.out in
# the submission directory (%j is replaced by
# the job ID).
#SBATCH --error=<path of error file>
# if keyword omitted: Default is slurm-%j.out in
# the submission directory.
#SBATCH --time=<walltime>
#SBATCH --partition=all
# *** start of job script ***
# Note: The current working directory at this point is
# the directory where sbatch was executed.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun <executable>
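Note that Slurm only exports SLURM_CPUS_PER_TASK into the job environment when --cpus-per-task is actually given, so the OMP_NUM_THREADS line in the template would otherwise expand to an empty value. A defensive variant (a sketch, not part of the official template) falls back to a single thread:

```shell
# SLURM_CPUS_PER_TASK is only set by Slurm when --cpus-per-task is used;
# fall back to a single OpenMP thread otherwise.
unset SLURM_CPUS_PER_TASK   # simulate a job submitted without --cpus-per-task
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
echo "${OMP_NUM_THREADS}"
```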
Multiple srun calls can be placed in a single batch script. Options such as --account, --nodes, --ntasks and --ntasks-per-node are by default taken from the sbatch arguments but can be overwritten for each srun invocation.
The default partition on JEDI, which is used if --partition is omitted, is the all partition.
Job Script Examples
Note
For more information about the use of --cpus-per-task, SRUN_CPUS_PER_TASK and SBATCH_CPUS_PER_TASK after the update to Slurm version 23.02, please refer to the affinity documentation found here: https://apps.fz-juelich.de/jsc/hps/jureca/affinity.html
Example 1: MPI application starting 1152 tasks on 4 nodes using 288 CPUs per node (no SMT) running for max. 15 minutes:
#!/bin/bash -x
#SBATCH --account=<budget>
#SBATCH --nodes=4
#SBATCH --ntasks=1152
#SBATCH --output=mpi-out.%j
#SBATCH --error=mpi-err.%j
#SBATCH --time=00:15:00
#SBATCH --partition=all
srun ./mpi-prog
The job script is submitted using:
$ sbatch <jobscript>
On success, sbatch writes the job ID to standard out.
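The job ID printed by sbatch (a line of the form "Submitted batch job <id>") can be captured in a shell variable for later use, for example with --dependency or scancel. A minimal sketch, using a fixed stand-in string instead of a real sbatch call:

```shell
# Stand-in for: output=$(sbatch jobscript.sh)
output="Submitted batch job 123456"
jobid=${output##* }   # keep the last whitespace-separated field (the job ID)
echo "${jobid}"
```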
Note
One can also define sbatch options on the command line, e.g.:
$ sbatch --nodes=4 --account=<budget> --time=01:00:00 <jobscript>
Generic Resources, Features and Topology-aware Allocations
All nodes on JEDI are the same, so there is no need to differentiate via Slurm's generic resources or features mechanisms.
Note
The charged computing time is independent of the number of specified GPUs. Production workloads must use all available GPU resources per node.
Job Steps
The example below shows a job script in which two different job steps are initiated within one job. In total 576 cores are allocated on two nodes; -n 288 causes each job step to use 288 cores on one of the compute nodes. Additionally, in this example the option --exclusive is passed to srun to ensure that distinct cores are allocated to each job step:
#!/bin/bash -x
#SBATCH --account=<budget>
#SBATCH --nodes=2
#SBATCH --ntasks=576
#SBATCH --ntasks-per-node=288
#SBATCH --output=mpi-out.%j
#SBATCH --error=mpi-err.%j
#SBATCH --time=00:20:00
srun --exclusive -n 288 ./mpi-prog1 &
srun --exclusive -n 288 ./mpi-prog2 &
wait
Dependency Chains
Slurm supports dependency chains, i.e., collections of batch jobs with defined dependencies. Dependencies can be defined using the --dependency argument to sbatch:
sbatch --dependency=afterany:<jobid> <jobscript>
Slurm will guarantee that the new batch job (whose job ID is returned by sbatch) does not start before <jobid> terminates (successfully or not). It is possible to specify other types of dependencies, such as afterok, which ensures that the new job will only start if <jobid> finished successfully.
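A dependency specification can also name several predecessor jobs by appending further job IDs with colons; separating multiple specifications with commas requires all of them to be satisfied. The following sketch only assembles the command line (the job IDs are made up for illustration):

```shell
# Two hypothetical predecessor jobs that must both finish successfully:
jobid1=1001
jobid2=1002
dep="afterok:${jobid1}:${jobid2}"
echo "sbatch --dependency=${dep} <jobscript>"
```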
Below an example script for the handling of job chains is provided. The script submits a chain of ${NO_OF_JOBS} jobs. A job will only start after successful completion of its predecessor. Please note that a job which exceeds its time limit is not marked successful:
#!/bin/bash -x
# submit a chain of jobs with dependency
# number of jobs to submit
NO_OF_JOBS=<no of jobs>
# define jobscript
JOB_SCRIPT=<jobscript>
echo "sbatch ${JOB_SCRIPT}"
JOBID=$(sbatch ${JOB_SCRIPT} 2>&1 | awk '{print $(NF)}')
I=1
while [ ${I} -lt ${NO_OF_JOBS} ]; do
echo "sbatch --dependency=afterok:${JOBID} ${JOB_SCRIPT}"
JOBID=$(sbatch --dependency=afterok:${JOBID} ${JOB_SCRIPT} 2>&1 | awk '{print $(NF)}')
let I=${I}+1
done
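The submission logic of the chain can be exercised locally by replacing sbatch with a stub that mimics its "Submitted batch job <id>" output. This is purely an illustrative sketch (the stub, job IDs and counts are made up); it submits exactly NO_OF_JOBS jobs in total:

```shell
# Stub standing in for sbatch; prints output in the same format sbatch uses.
next_id=100
sbatch_stub() { echo "Submitted batch job ${next_id}"; }

NO_OF_JOBS=3
# first job of the chain
JOBID=$(sbatch_stub | awk '{print $(NF)}')
count=1
I=1
while [ ${I} -lt ${NO_OF_JOBS} ]; do
    next_id=$((next_id + 1))
    # real script would run: sbatch --dependency=afterok:${JOBID} ${JOB_SCRIPT}
    JOBID=$(sbatch_stub | awk '{print $(NF)}')
    count=$((count + 1))
    I=$((I + 1))
done
echo "submitted ${count} jobs, last ID ${JOBID}"
```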
Interactive Sessions
Interactive sessions can be allocated using the salloc command:
$ salloc --partition=all --nodes=2 --account=<budget> --time=00:30:00
Once an allocation has been made, salloc will start a shell on the login node (submission host). One can then execute srun from within the shell, e.g.:
$ srun --ntasks=4 --ntasks-per-node=2 --cpus-per-task=7 ./hybrid-prog
The interactive session is terminated by exiting the shell. In order to obtain a shell on the first allocated compute node, one can start a remote shell from within the salloc session and connect it to a pseudo terminal using:
$ srun --cpu-bind=none --nodes=2 --pty /bin/bash -i
The option --cpu-bind=none is used to disable CPU binding for the spawned shell. In order to execute MPI applications, one uses srun again from the remote shell. To support X11 forwarding, the --forward-x option to srun is available. X11 forwarding is required for users who want to use applications or tools that provide a GUI.
Below a transcript of an exemplary interactive session is shown. srun can be run within the allocation without delay (note that the first srun execution may take slightly longer due to the node health checking performed upon the invocation of the very first srun command within the session).
[user1@jpblt-s01-01 ~]$ hostname
jpblt-s01-01.jupiter.internal
[user1@jpblt-s01-01 ~]$ salloc --nodes=2 --account=<budget>
salloc: Granted job allocation 72
salloc: Waiting for resource configuration
salloc: Nodes jpbot-001-[17-18] are ready for job
[user1@jpblt-s01-01 ~]$ hostname
jpblt-s01-01.jupiter.internal
[user1@jpblt-s01-01 ~]$ srun --ntasks 2 hostname
jpbot-001-18.jupiter.internal
jpbot-001-17.jupiter.internal
[user1@jpblt-s01-01 ~]$ srun --cpu-bind=none --nodes=1 --pty /bin/bash -i
[user1@jpbot-001-17 ~]$ hostname
jpbot-001-17.jupiter.internal
[user1@jpbot-001-17 ~]$ exit
[user1@jpblt-s01-01 ~]$ hostname
jpblt-s01-01.jupiter.internal
[user1@jpblt-s01-01 ~]$ exit
exit
salloc: Relinquishing job allocation 72
salloc: Job allocation 72 has been revoked.
[user1@jpblt-s01-01 ~]$ hostname
jpblt-s01-01.jupiter.internal
Note
Your account will be charged per allocation whether the compute nodes are used or not. Batch submission is the preferred way to execute jobs.
Hold and Release Batch Jobs
Jobs that are in pending state (i.e., not yet running) can be put on hold using:
scontrol hold <jobid>
Jobs on hold are still reported as pending (PD) by squeue, but the Reason shown by squeue or scontrol show job is changed to JobHeldUser.
The job can be released using:
$ scontrol release <jobid>
Slurm commands
Below a list of the most important Slurm user commands available on JEDI is given.
- sbatch is used to submit a batch script (which can be a bash, Perl or Python script). The script will be executed on the first node in the allocation chosen by the scheduler. The working directory coincides with the working directory of the sbatch program. Within the script, one or multiple srun commands can be used to create job steps and execute (MPI) parallel applications.

Note

mpiexec is not supported on JEDI. srun is the only supported method to spawn MPI applications.

- salloc is used to request an allocation. When the job is started, a shell (or other program specified on the command line) is started on the submission host (login node). From the shell, srun can be used to interactively spawn parallel applications. The allocation is released when the user exits the shell.
- srun is mainly used to create a job step within a job. srun can be executed without further arguments (except the program) to use the full allocation, or with additional arguments to restrict the job step resources to a subset of the allocated processors.
- squeue allows to query the list of pending and running jobs. By default it reports the list of pending jobs sorted by priority and the list of running jobs sorted separately according to the job priority.
- scancel is used to cancel pending or running jobs or to send signals to processes in running jobs or job steps. Example: scancel <jobid>
- scontrol can be used to query information about compute nodes and running or recently completed jobs. Examples: scontrol show job <jobid> to show detailed information about pending, running or recently completed jobs; scontrol update job <jobid> set ... to update a pending job.

Note

For old jobs scontrol show job <jobid> will not work and sacct -j <jobid> should be used instead.

- sacct is used to retrieve accounting information for jobs and job steps. For older jobs sacct queries the accounting database. Example: sacct -j <jobid>
- sinfo is used to retrieve information about the partitions and node states.
- sprio can be used to query job priorities.
- smap graphically shows the state of the partitions and nodes using a curses interface. We recommend llview as an alternative, which is supported on all JSC machines.
- sattach allows to attach to the standard input, output or error of a running job.
- sstat allows to query information about a running job.
Summary of sbatch and srun Options
The following table summarizes important sbatch and srun command options:

| Option | Description |
|---|---|
| --account | Budget account where contingent is taken from. |
| --nodes | Number of compute nodes used by the job. Can be omitted if --ntasks and --ntasks-per-node are given. 1 |
| --ntasks | Number of tasks (MPI processes). Can be omitted if --nodes and --ntasks-per-node are given. |
| --ntasks-per-node | Number of tasks per compute node. |
| --cpus-per-task | Number of logical CPUs (hardware threads) per task. This option is only relevant for hybrid/OpenMP jobs. |
| --job-name | A name for the job. |
| --output | Path to the job’s standard output. Slurm supports format strings containing replacement symbols such as %j (job ID). 2 |
| --error | Path to the job’s standard error. Slurm supports format strings containing replacement symbols such as %j (job ID). 2 |
| --time | Maximal wall-clock time of the job. |
| --partition | Partition to be used, e.g. all. |
| --mail-user | Define the mail address to receive mail notifications. |
| --mail-type | Define when to send mail notifications. 3 |
| --pty (srun only) | Execute the first task in pseudo terminal mode. |
| --forward-x (srun only) | Enable X11 forwarding on the first allocated node. |
|  | Disable turbo mode of all CPUs of the allocated nodes. |
1. If --ntasks is omitted, the number of nodes can be specified as a range --nodes=<min no. of nodes>-<max no. of nodes>, allowing the scheduler to start the job with fewer nodes than the maximum requested if this reduces the wait time.
2. stdout and stderr can be combined by specifying the same file for the --output and --error options.
3. Valid types: BEGIN, END, FAIL, REQUEUE, ALL, TIME_LIMIT, TIME_LIMIT_90, TIME_LIMIT_80, TIME_LIMIT_50, ARRAY_TASKS to receive emails when these events occur. Multiple type values may be specified in a comma-separated list.
More information is available on the man pages of sbatch, srun and salloc, which can be retrieved on the login nodes with the commands man sbatch, man srun and man salloc, respectively, or in the Slurm documentation.
CPU Limiting Options
Each JEDI node includes 4 Grace Hopper Superchips, as covered in the configuration details for JEDI. Each Superchip is comprised of a CPU and GPU, and each Superchip receives a fixed total power budget of 680 W.
By default, the CPU of each Superchip is limited to a power budget of 100 W to maximise performance of the GPU, which for many applications will deliver the bulk of the compute performance. However, in some cases, where work is split between CPU and GPU, it may be advantageous to rebalance the power budget between CPU and GPU.
JSC deploys a custom Slurm plugin which provides the option --grace-power-cap=<cap-in-watts> (e.g. to set it to 200 Watts, use --grace-power-cap=200). This option is available for the sbatch, srun and salloc commands and can be set to values between 100 and 300, changing the CPU power limit to the corresponding number of Watts. The option operates on a per-node level, i.e. it limits all Grace CPUs on that node to the given value at once (including CPUs with job steps that may already be running).
Warning
Currently, setting this value to a number higher than 300 will silently fail and the CPU power limit will be set to the default of 100 W.
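Since out-of-range values are not rejected loudly, a job script may want to clamp a requested cap to the documented 100 to 300 W range before passing it on. This is a defensive sketch; the variable names and the example value are illustrative:

```shell
requested=350   # hypothetical requested CPU power cap in Watts
cap=${requested}
# Clamp to the supported range; values above 300 would otherwise
# silently fall back to the 100 W default.
if [ "${cap}" -gt 300 ]; then cap=300; fi
if [ "${cap}" -lt 100 ]; then cap=100; fi
echo "srun --grace-power-cap=${cap} ./app"
```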
Note
If this option is used in srun commands, the value will not be reset at the end of that srun command, i.e. if used in a batch script or salloc session with multiple job steps, subsequent job steps will use the same custom power limit.
It is important to understand that raising power limits on the CPU can diminish the amount of power available to the GPU, and there is not a 1-to-1 relationship between the power available, clock speeds and performance. If using this option, it is important to benchmark your specific use-case to understand what benefits (if any) you can extract from it, and in which situations this unacceptably degrades GPU performance.