Known Issues on JUSUF
This page collects known issues affecting JUSUF’s system and application software.
Note
The following list of known issues is intended as a quick reference for users experiencing problems on JUSUF. We strongly encourage all users to report any problems they encounter, whether listed below or not, to the user support.
Open Issues
Slurm: wrong default task pinning with odd number of tasks/node
Added: 2022-06-20
Affects: All systems at JSC
Description: With the default CPU binding ("--cpu-bind=threads") the task pinning is not the expected one when there is an odd number of tasks per node and each task uses at most half of the cores of the node.
With an even number of tasks/node, only physical cores are used by the tasks. With an odd number of tasks/node, SMT is enabled and different tasks share the hardware threads of the same cores (this shouldn't happen). Below are a few examples from JUWELS-CLUSTER.
With 1 task/node and 48 cpus/task it uses SMT:
$ srun -N1 -n1 -c48 --cpu-bind=verbose exec
cpu_bind=THREADS - jwc00n001, task 0 0 [7321]: mask 0xffffff000000ffffff set
With 2 tasks/node and 24 cpus/task it uses only physical cores:
$ srun -N1 -n2 -c24 --cpu-bind=verbose exec
cpu_bind=THREADS - jwc00n001, task 0 0 [7340]: mask 0xffffff set
cpu_bind=THREADS - jwc00n001, task 1 1 [7341]: mask 0xffffff000000 set
With 3 tasks/node and 16 cpus/task it uses SMT (tasks 0 and 1 are on physical cores but task 2 uses SMT):
$ srun -N1 -n3 -c16 --cpu-bind=verbose exec
cpu_bind=THREADS - jwc00n001, task 0 0 [7362]: mask 0xffff set
cpu_bind=THREADS - jwc00n001, task 1 1 [7363]: mask 0xffff000000 set
cpu_bind=THREADS - jwc00n001, task 2 2 [7364]: mask 0xff000000ff0000 set
With 4 tasks/node and 12 cpus/task it uses only physical cores:
$ srun -N1 -n4 -c12 --cpu-bind=verbose exec
cpu_bind=THREADS - jwc00n001, task 0 0 [7387]: mask 0xfff set
cpu_bind=THREADS - jwc00n001, task 2 2 [7389]: mask 0xfff000 set
cpu_bind=THREADS - jwc00n001, task 1 1 [7388]: mask 0xfff000000 set
cpu_bind=THREADS - jwc00n001, task 3 3 [7390]: mask 0xfff000000000 set
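A quick way to double-check the pinning from inside a job step is to print each task's affinity list, for example (a sketch; taskset is part of util-linux and assumed to be available on the compute nodes):

# Each task inherits its CPU binding; taskset -cp prints it as a CPU list
$ srun -N1 -n3 -c16 bash -c 'taskset -cp $$'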
Status: Open.
Workaround/Suggested Action: To work around this behavior you have to disable SMT with the srun option "--hint=nomultithread". You can compare the CPU masks in the following examples:
$ srun -N1 -n3 -c16 --cpu-bind=verbose exec
cpu_bind=THREADS - jwc00n004, task 0 0 [17629]: mask 0x0000000000ffff set
cpu_bind=THREADS - jwc00n004, task 1 1 [17630]: mask 0x0000ffff000000 set
cpu_bind=THREADS - jwc00n004, task 2 2 [17631]: mask 0xff000000ff0000 set
$ srun -N1 -n3 -c16 --cpu-bind=verbose --hint=nomultithread exec
cpu_bind=THREADS - jwc00n004, task 0 0 [17652]: mask 0x00000000ffff set
cpu_bind=THREADS - jwc00n004, task 1 1 [17653]: mask 0x00ffff000000 set
cpu_bind=THREADS - jwc00n004, task 2 2 [17654]: mask 0xff0000ff0000 set
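In a batch script the workaround looks like this (a minimal sketch; ./myapp is a placeholder for your executable, and account/partition directives are omitted):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=3
#SBATCH --cpus-per-task=16

# Disable SMT so that no two tasks share the hardware threads of a core
srun --hint=nomultithread ./myapp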
Slurm: srun options --exact and --exclusive change default pinning
Added: 2022-06-09
Affects: All systems at JSC
Description: In Slurm 21.08 the srun options "--exact" and "--exclusive" change the default pinning. For example, on JURECA:
$ srun -N1 --ntasks-per-node=1 -c32 --cpu-bind=verbose exec
cpu_bind=THREADS - jrc0731, task 0 0 [3027]: mask 0xffff0000000000000000000000000000ffff000000000000 set
...
$ srun -N1 --ntasks-per-node=1 -c32 --cpu-bind=verbose --exact exec
cpu_bind=THREADS - jrc0731, task 0 0 [3068]: mask 0x3000300030003000300030003000300030003000300030003000300030003 set
...
$ srun -N1 --ntasks-per-node=1 -c32 --cpu-bind=verbose --exclusive exec
cpu_bind=THREADS - jrc0731, task 0 0 [3068]: mask 0x3000300030003000300030003000300030003000300030003000300030003 set
...
As you can see, with the default pinning only physical cores are used, but with "--exact" or "--exclusive" Slurm pins the tasks to SMT cores (hardware threads). In effect, the task distribution changes to "cyclic".
Status: Open.
Workaround/Suggested Action: To work around this behavior you have to request a block distribution of the tasks with the option "-m", like this:
$ srun -N1 --ntasks-per-node=1 -c32 --cpu-bind=verbose --exact -m *:block exec
cpu_bind=THREADS - jrc0731, task 0 0 [3027]: mask 0xffff0000000000000000000000000000ffff000000000000 set
...
$ srun -N1 --ntasks-per-node=1 -c32 --cpu-bind=verbose --exclusive -m *:block exec
cpu_bind=THREADS - jrc0731, task 0 0 [3027]: mask 0xffff0000000000000000000000000000ffff000000000000 set
...
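The same workaround in a batch script (a sketch; ./myapp is a placeholder, and the distribution argument is quoted so the shell does not expand the asterisk as a glob):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=32

# Block distribution restores the default pinning (physical cores first)
srun --exact -m "*:block" ./myapp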
ParaStationMPI: Cannot allocate memory
Added: 2021-10-06
Affects: All systems at JSC
Description: Using ParaStationMPI, the following error might occur:
ERROR mlx5dv_devx_obj_create(QP) failed, syndrome 0: Cannot allocate memory
Status: Open.
Workaround/Suggested Action: Use mpi-settings/[CUDA-low-latency-UD,CUDA-UD,UCX-UD] (Stage < 2022) or UCX-settings/[UD,UD-CUDA] (Stage >= 2022) to reduce the memory footprint. Which of these modules is appropriate depends on your requirements.
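For example, on a Stage >= 2022 software stack, selecting the UD transport before launching might look like this (a sketch; ./myapp is a placeholder and your module environment may differ):

# Switch UCX to the UD transport to reduce the per-connection memory footprint
$ module load UCX-settings/UD
$ srun ./myapp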
Job requeueing failures due to slurmctld prologue bug
Added: 2021-05-18
Affects: All systems at JSC
Description: There is a bug in slurmctld that currently breaks the prologue mechanism and job requeueing. Normally, before a job allocates any nodes, the prologue runs; if it finds unhealthy nodes, it drains them and requeues the job. Because of the bug, slurmctld will cancel jobs that were requeued at least once but finally landed on healthy nodes. We have reported this bug to SchedMD and they are working on it.
Status: Open.
Cannot connect using old OpenSSH clients
Added: 2020-06-15
Affects: All systems at JSC
Description: In response to the recent security incident, the SSH server on JUSUF has been configured to only use modern cryptography algorithms. As a side effect, it is no longer possible to connect to JUSUF with older SSH clients. For OpenSSH, at least version 6.7, released in 2014, is required. Some operating systems with very long-term support ship with older versions, e.g. RHEL 6 ships with OpenSSH 5.3.
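You can check the version of your local client with ssh -V (OpenSSH prints its version string to stderr):

# e.g. "OpenSSH_8.0p1, ..."; anything older than 6.7 cannot connect
$ ssh -V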
Status: Open.
Workaround/Suggested Action:
Use a more recent SSH client with support for the newer cryptography algorithms.
If you cannot update the OpenSSH client (e.g. because you are not the administrator of the system you are trying to connect from), you can install your own version of OpenSSH from https://www.openssh.com.
Logging in from a different system with a newer SSH client is another option.
If you have to transfer data from a system with an old SSH client to JUSUF (e.g. using scp), you may have to transfer the data to a third system with a newer SSH client first (scp's command line option -3 can be used to automate this, see the sketch below).
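A sketch of the -3 variant, run on the third system with a newer client (hostnames and paths are placeholders):

# The data is relayed through the local (third) system, so only its
# newer SSH client needs to support the required algorithms
$ scp -3 user@oldhost:/path/to/data user@jusuf.fz-juelich.de:/p/scratch/<project>/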