Batch system

JUPITER is accessed through a dedicated set of login nodes used to write and compile applications as well as to perform pre- and post-processing of simulation data. Access to the compute nodes in the system is controlled by the workload manager.

On JUPITER the Slurm (Simple Linux Utility for Resource Management) Workload Manager, a free open-source resource manager and batch system, is employed. Slurm is a modern, extensible batch system that is widely deployed around the world on clusters of various sizes.

A Slurm installation consists of several programs and daemons. The slurmctld daemon is the central brain of the batch system, responsible for monitoring the available resources and scheduling batch jobs. It runs on an administrative node with a special setup to ensure availability in the case of hardware failures. Most user programs such as srun, sbatch, salloc and scontrol interact with slurmctld. For job accounting, slurmctld communicates with the slurmdbd database daemon; information from the accounting database can be queried using the sacct command. Slurm combines the functionality of a batch system and a resource manager. For this purpose Slurm provides the slurmd daemon, which runs on the compute nodes and interacts with slurmctld. For the execution of user processes, slurmd spawns slurmstepd instances that shepherd the user processes.

On JUPITER no slurmd is running on the compute nodes. Instead, process management is performed by psid, the management daemon of the ParaStation Cluster Suite. A psid plugin, psslurm, replaces slurmd on the compute nodes of JUPITER. Therefore only one daemon is required on the compute nodes for resource management, which minimizes the jitter that could affect large-scale applications.

Slurm Partitions

In Slurm multiple nodes can be grouped into partitions, which are sets of nodes with associated limits (for wall-clock time, job size, etc.). In practice these partitions can be used, for example, to request resources with certain hardware characteristics (normal, large memory, accelerated, etc.) or resources dedicated to specific workloads (large production jobs, small debugging jobs, visualization, etc.).
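
The partitions configured on the system, together with their availability, time limits and node counts, can be listed with the sinfo command, for example in its summarized form:

$ sinfo -s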

Hardware Overview

Type       Quantity   Description
Standard   48         288 cores, 480 GiB
Login      1          72 cores, 574 GiB

Available Partitions

Compute nodes are used exclusively by jobs of a single user; no node sharing between jobs takes place. The smallest allocation unit is one node (288 cores). Users are charged for the number of compute nodes multiplied by the wall-clock time used. On each node, a share of the available memory is reserved and not available for application usage.
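
For example, a job occupying 16 nodes for 2 hours of wall-clock time is charged 32 node-hours, independent of how many cores or GPUs per node it actually uses.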

In the current Early Access phase two partitions are available: jureap, limited to 64 nodes, and jureap_scale, which gives access to the full Booster.

A limit regarding the maximum number of running jobs per user is enforced. The precise values are adjusted to optimize system utilization. In general, the limit for the number of running jobs is lower for nocont projects.

JUPITER partitions - under construction

Partition   Resource                                  Value
jureap      max. wallclock time (normal / nocont)     24 h / 6 h
            default wallclock time                    1 h
            min. / max. number of nodes               1 / Size of system
            node types                                mem480 (480 GiB)

Internet Access

Due to security measures, we do not allow internet access on the compute nodes - please take this into consideration when running your jobs. Internet access is, however, allowed on the login nodes and on the devel partitions to facilitate development activities. If your production jobs require files available on the internet, consider downloading them first on the login nodes and using them from within the jobs (some frameworks allow running in “offline mode”). If this is not sufficient, and you need to run a “scraping” style workflow, please contact your Project Mentor or SC-Support at sc@fz-juelich.de for assistance.

Note

Although internet access is allowed, only a few ports (such as the usual HTTP/S ports 80/443) are open, while many others are blocked.
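
As a minimal sketch of this pattern (the URL and paths below are placeholders), data can be fetched on a login node and the local copy referenced from the batch job:

# on a login node (internet access available)
wget -P /path/to/project/data https://example.com/dataset.tar.gz

# in the job script (compute nodes have no internet access)
srun ./my-prog --input /path/to/project/data/dataset.tar.gz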

Allocations, Jobs and Job Steps

In Slurm a job is an allocation of selected resources for a specific amount of time. A job allocation can be requested using sbatch or salloc. Within a job, multiple job steps can be executed using srun, each using all or a subset of the allocated compute nodes. Job steps may execute at the same time if the resource allocation permits it.

Writing a Batch Script

Users submit batch applications (usually bash scripts) using the sbatch command. The script is executed on the first compute node in the allocation. To execute parallel MPI tasks users call srun within their script.

Note

mpiexec is not supported on JUPITER and has to be replaced by srun.

The minimal template to be filled is

#!/bin/bash -x
#SBATCH --account=<budget account>
# budget account where contingent is taken from
#SBATCH --nodes=<no of nodes>
#SBATCH --ntasks=<no of tasks (MPI processes)>
# can be omitted if --nodes and --ntasks-per-node
# are given
#SBATCH --ntasks-per-node=<no of tasks per node>
# if keyword omitted: Max. 288 tasks per node
#SBATCH --cpus-per-task=<no of threads per task>
# for OpenMP/hybrid jobs only
#SBATCH --output=<path of output file>
# if keyword omitted: Default is slurm-%j.out in
# the submission directory (%j is replaced by
# the job ID).
#SBATCH --error=<path of error file>
# if keyword omitted: Default is slurm-%j.out in
# the submission directory.
#SBATCH --time=<walltime>
#SBATCH --partition=all

# *** start of job script ***
# Note: The current working directory at this point is
# the directory where sbatch was executed.

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun <executable>

Multiple srun calls can be placed in a single batch script. Options such as --account, --nodes, --ntasks and --ntasks-per-node are by default taken from the sbatch arguments but can be overwritten for each srun invocation.
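
As an illustration (the executables are placeholders), a batch script could run a small pre-processing step on a subset of the allocation before launching the full-size run:

# job step restricted to a single task on one node
srun --nodes=1 --ntasks=1 ./pre-prog
# job step using the full allocation requested via sbatch
srun ./mpi-prog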

The default partition on JUPITER, which is used if --partition is omitted, is the jureap partition.

Job Script Examples

Note

For more information about the use of --cpus-per-task, SRUN_CPUS_PER_TASK and SBATCH_CPUS_PER_TASK after the update to Slurm version 23.02, please refer to the affinity documentation found here: https://apps.fz-juelich.de/jsc/hps/jureca/affinity.html

Example 1: MPI application starting 1152 tasks on 4 nodes using 288 CPUs per node (no SMT), running for max. 15 minutes:

#!/bin/bash -x
#SBATCH --account=<budget>
#SBATCH --nodes=4
#SBATCH --ntasks=1152
#SBATCH --output=mpi-out.%j
#SBATCH --error=mpi-err.%j
#SBATCH --time=00:15:00
#SBATCH --partition=all

srun ./mpi-prog

Example 2: A simple way to get good affinity between CPU sockets and GPUs

#!/bin/bash -x
#SBATCH --account=<budget>
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=72
#SBATCH --gpus-per-task=1
#SBATCH --output=mpi-out.%j
#SBATCH --error=mpi-err.%j
#SBATCH --time=00:15:00
#SBATCH --partition=all

srun ./mpi-prog

The job script is submitted using:

$ sbatch <jobscript>

On success, sbatch writes the job ID to standard out.

Note

One can also define sbatch options on the command line, e.g.:

$ sbatch --nodes=4 --account=<budget> --time=01:00:00 <jobscript>

Generic Resources, Features and Topology-aware Allocations

All nodes on JUPITER Booster are identical, so there is no need to differentiate between them via Slurm's generic resources or features mechanisms. However, --gres=gpu:4 is automatically applied to all jobs where gres is not explicitly set. This can be changed to a smaller number if you do not wish to use all GPUs on a node for testing, but production workloads are expected to use all available GPUs per node.
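
For example, a test job that only wants to use two of the four GPUs on its node can request this explicitly:

#SBATCH --gres=gpu:2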

Note

The charged computing time is independent of the number of specified GPUs. Production workloads must use all available GPU resources per node.

Slurm is configured with knowledge of the topology of the system, so there should rarely be an issue with the default job placement. If you believe you need manual control, it is possible to use the --switches option to limit the number of L1 switches used for the job; each L1 switch is connected to 16 nodes. However, this network localisation is likely to be limited to a Dragonfly Plus group, or up to 48 nodes, or 3 switches. Beyond this, users who need finer control would have to use the --nodelist option of Slurm. This requires requesting a specific set of nodes, so it is likely to need coordination with the JSC teams (e.g. ensuring a sensible node list, coordinating a reservation). It is an advanced option, and many other approaches are likely to be more useful to try first. If you believe you need to run on a specific network configuration, please contact sc@fz-juelich.de with details of your jobs.

Node names are defined in the format jpbo-X-Y, where X is the rack number, and Y is the node number within each rack. Dragonfly Plus groups are composed of 5 racks each (so racks 1-5 are the first group, 6-10 the second, etc.). There are 48 nodes to each rack, 16 connected to each of 3 switches.
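
As an illustration, the following submission asks Slurm to place the job on nodes connected to at most one L1 switch, waiting up to twelve hours (an arbitrarily chosen value) for such a placement before the constraint is dropped:

$ sbatch --switches=1@12:00:00 <jobscript>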

Job Steps

The example below shows a job script in which two different job steps are initiated within one job. In total 576 tasks are allocated on two nodes, and -n 288 causes each job step to use 288 cores, i.e. one of the two compute nodes. Additionally, in this example the option --exclusive is passed to srun to ensure that distinct cores are allocated to each job step:

#!/bin/bash -x
#SBATCH --account=<budget>
#SBATCH --nodes=2
#SBATCH --ntasks=576
#SBATCH --ntasks-per-node=288
#SBATCH --output=mpi-out.%j
#SBATCH --error=mpi-err.%j
#SBATCH --time=00:20:00

srun --exclusive -n 288 ./mpi-prog1 &
srun --exclusive -n 288 ./mpi-prog2 &

wait

Dependency Chains

Slurm supports dependency chains, i.e., collections of batch jobs with defined dependencies. Dependencies can be defined using the --dependency argument to sbatch:

sbatch --dependency=afterany:<jobid> <jobscript>

Slurm will guarantee that the new batch job (whose job ID is returned by sbatch) does not start before <jobid> terminates (successfully or not). It is possible to specify other types of dependencies, such as afterok which ensures that the new job will only start if <jobid> finished successfully.
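
The job ID needed for a dependency can be captured conveniently with the --parsable option of sbatch, which prints only the job ID:

JOBID=$(sbatch --parsable <jobscript>)
sbatch --dependency=afterok:${JOBID} <jobscript>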

Below an example script for the handling of job chains is provided. The script submits a chain of ${NO_OF_JOBS} jobs, where each job will only start after successful completion of its predecessor. Please note that a job which exceeds its time limit is not marked successful:

#!/bin/bash -x
# submit a chain of jobs with dependencies
# number of jobs to submit
NO_OF_JOBS=<no of jobs>
# define jobscript
JOB_SCRIPT=<jobscript>
# submit the first job and extract its job ID from the sbatch output
echo "sbatch ${JOB_SCRIPT}"
JOBID=$(sbatch ${JOB_SCRIPT} 2>&1 | awk '{print $(NF)}')
# submit the remaining NO_OF_JOBS-1 jobs, each depending on its predecessor
I=1
while [ ${I} -lt ${NO_OF_JOBS} ]; do
    echo "sbatch --dependency=afterok:${JOBID} ${JOB_SCRIPT}"
    JOBID=$(sbatch --dependency=afterok:${JOBID} ${JOB_SCRIPT} 2>&1 | awk '{print $(NF)}')
    let I=${I}+1
done

Interactive Sessions

Interactive sessions can be allocated using the salloc command:

$ salloc --partition=<partition> --nodes=2 --account=<budget> --time=00:30:00

Once an allocation has been made salloc will start a shell on the login node (submission host). One can then execute srun from within the shell, e.g.:

$ srun --ntasks=4 --ntasks-per-node=2 --cpus-per-task=7 ./hybrid-prog

The interactive session is terminated by exiting the shell. In order to obtain a shell on the first allocated compute nodes one can start a remote shell from within the salloc session and connect it to a pseudo terminal using:

$ srun --cpu-bind=none --nodes=2 --pty /bin/bash -i

The option --cpu-bind=none is used to disable CPU binding for the spawned shell. In order to execute MPI applications, one uses srun again from within the remote shell. To support X11 forwarding, the --forward-x option to srun is available. X11 forwarding is required for users who want to use applications or tools that provide a GUI.

Below a transcript of an exemplary interactive session is shown. srun can be run within the allocation without delay (note that the very first srun execution within a session may take slightly longer due to the node health checking performed at that point).

[user1@jpbl-s01-02 ~]$ hostname
jpbl-s01-02
[user1@jpbl-s01-02 ~]$ salloc -n 2 --nodes=2 --account=<budget> --time=00:05:00
salloc: Pending job allocation 56168
salloc: job 56168 queued and waiting for resources
salloc: job 56168 has been allocated resources
salloc: Granted job allocation 56168
salloc: Waiting for resource configuration
salloc: Nodes jpbo-003-[02,12] are ready for job
[user1@jpbl-s01-02 ~]$ hostname
jpbl-s01-02
[user1@jpbl-s01-02 ~]$ srun --ntasks 2 --ntasks-per-node=2 hostname
jpbo-003-02.jupiter.internal
jpbo-003-12.jupiter.internal
[user1@jpbl-s01-02 ~]$  srun --cpu-bind=none --nodes=1 --pty /bin/bash -i
[user1@jpbo-003-02 ~]$ hostname
jpbo-003-02.jupiter.internal
[user1@jpbo-003-02 ~]$ exit
exit
srun: error: jpbo-003-02: task 0: Exited with exit code 1
[user1@jpbl-s01-02 ~]$ hostname
jpbl-s01-02
[user1@jpbl-s01-02 ~]$ exit
exit
salloc: Relinquishing job allocation 56168
[user1@jpbl-s01-02 ~]$ hostname
jpbl-s01-02

Note

Your account will be charged per allocation whether the compute nodes are used or not. Batch submission is the preferred way to execute jobs.

Hold and Release Batch Jobs

Jobs that are in pending state (i.e., not yet running) can be put on hold using:

scontrol hold <jobid>

Jobs that are on hold are still reported as pending (PD) by squeue, but the Reason shown by squeue or scontrol show job changes to JobHeldUser:

[user1@jpbl-s02-04 ~]$ scontrol show job <jobid>
   JobId=<jobid> JobName=jobscript.sh
   UserId=XXX(nnnn) GroupId=XXX(nnnn) MCS_label=N/A
   Priority=0 Nice=0 Account=XXX QOS=normal
   JobState=PENDING Reason=JobHeldUser Dependency=(null)
   Requeue=0 Restarts=0 BatchFlag=0 Reboot=0 ExitCode=0:0
   RunTime=00:00:00 TimeLimit=01:00:00 TimeMin=N/A
   SubmitTime=2025-06-17T16:19:01 EligibleTime=Unknown
   AccrueTime=Unknown
   StartTime=Unknown EndTime=Unknown Deadline=N/A
   SuspendTime=None SecsPreSuspend=0 LastSchedEval=2025-06-17T16:19:01 Scheduler=Main
   Partition=XXX AllocNode:Sid=jpbl-s02-04-interconnect-1.jupiter.internal:79762
   ReqNodeList=(null) ExcNodeList=(null)
   NodeList=
   NumNodes=1-1 NumCPUs=1 NumTasks=1 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
   ReqTRES=cpu=1,mem=878562M,node=1,billing=1
   AllocTRES=(null)
   Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
   MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0
   Features=(null) DelayBoot=00:00:00
   OverSubscribe=NO Contiguous=0 Licenses=(null) Network=(null)
   Command=hostname
   WorkDir=/XXX

The job can be released using:

$ scontrol release <jobid>

Slurm commands

Below a list of the most important Slurm user commands available on JUPITER is given.

sbatch

is used to submit a batch script (which can be a bash, Perl or Python script)

The script will be executed on the first node in the allocation chosen by the scheduler. The working directory coincides with the working directory of the sbatch program. Within the script one or multiple srun commands can be used to create job steps and execute (MPI) parallel applications.

Note

mpiexec is not supported on JUPITER. srun is the only supported method to spawn MPI applications.

salloc

is used to request an allocation

When the job is started, a shell (or other program specified on the command line) is started on the submission host (login node). From the shell srun can be used to interactively spawn parallel applications. The allocation is released when the user exits the shell.

srun

is mainly used to create a job step within a job

srun can be executed without further arguments (apart from the program to run), in which case it uses the full allocation, or with additional arguments that restrict the job step to a subset of the allocated processors.

squeue

is used to query the list of pending and running jobs

By default it reports the list of pending jobs sorted by priority and the list of running jobs sorted separately according to the job priority.
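
Example: squeue -u $USER to list only your own jobs.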

scancel

is used to cancel pending or running jobs or to send signals to processes in running jobs or job steps

Example: scancel <jobid>

scontrol

can be used to query information about compute nodes and running or recently completed jobs

Examples:

  • scontrol show job <jobid> to show detailed information about pending, running or recently completed jobs

  • scontrol update jobid=<jobid> ... to update a pending job

Note

For old jobs scontrol show job <jobid> will not work and sacct -j <jobid> should be used instead.

sacct

is used to retrieve accounting information for jobs and job steps

For older jobs sacct queries the accounting database.

Example: sacct -j <jobid>

sinfo

is used to retrieve information about the partitions and node states

sprio

can be used to query job priorities

smap

graphically shows the state of the partitions and nodes using a curses interface

We recommend LLview as an alternative, which is supported on all JSC machines.

sattach

is used to attach to the standard input, output or error of a running job

sstat

is used to query information about a running job

Summary of sbatch and srun Options

The following table summarizes important sbatch and srun command options:

--account

Budget account where contingent is taken from.

--nodes

Number of compute nodes used by the job. Can be omitted if --ntasks and --ntasks-per-node are given.

--ntasks

Number of tasks (MPI processes). Can be omitted if --nodes and --ntasks-per-node are given. [1]

--ntasks-per-node

Number of tasks per compute node.

--cpus-per-task

Number of logical CPUs (hardware threads) per task. This option is only relevant for hybrid/OpenMP jobs.

--job-name

A name for the job

--output

Path to the job’s standard output. Slurm supports format strings containing replacement symbols such as %j (job ID). [2]

--error

Path to the job’s standard error. Slurm supports format strings containing replacement symbols such as %j (job ID).

--time

Maximum wall-clock time of the job.

--partition

Partition to be used, e.g. jureap or jureap_scale. If omitted, jureap is the default.

--mail-user

Define the mail address to receive mail notifications.

--mail-type

Define when to send mail notifications. [3]

--pty (srun only)

Execute the first task in pseudo terminal mode.

--forward-x (srun only)

Enable X11 forwarding on the first allocated node.

--disable-turbomode (sbatch)

Disable turbo mode of all CPUs of the allocated nodes.

More information is available on the man pages of sbatch, srun and salloc which can be retrieved on the login nodes with the commands man sbatch, man srun and man salloc, respectively, or in the Slurm documentation.

CPU Limiting Options

CPU frequency sets the pace at which instructions are executed by the CPU. A higher frequency results in:

  • Higher power usage

  • Possibly higher performance

Each CPU has a base frequency, which is the frequency that the CPU is operating at by default.

Turbo mode means that the CPU increases the frequency above the base frequency, if conditions (such as temperature) allow. Higher frequency results in more heat dissipation and a higher temperature. If the temperature passes the designed threshold, the CPU will tend to control the temperature by lowering the frequency, and this might affect the performance.

Therefore, the base frequency is more reproducible since application performance does not depend on the current temperature of the allocated CPUs.

As a result, for repeatable performance measurements it is recommended to use --disable-turbomode, which disables turbo mode and lets the CPUs run at their base frequency; see the Summary of sbatch and srun Options above.
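
To use it, add the flag to the job script header alongside the usual options, for example:

#SBATCH --disable-turbomode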