Known Issues on JEDI

This page collects known issues affecting JEDI’s system and application software.

Note

The following list of known issues is intended to provide a quick reference for users experiencing problems on JEDI. We strongly encourage all users to report any problems they encounter, whether listed below or not, to user support.

Open Issues

NVHPC+MPI does not broadcast static variables located on device

Added: 2025-01-07

Affects: All systems at JSC

Description: With NVHPC/24.9-CUDA-12 and OpenMPI/5.0.5 (the defaults in the current Stages/2025), broadcasting a non-allocatable variable located on the device/GPU with MPI_Bcast may not work as expected. The program appears to run normally, but the variable is not broadcast, which can silently produce incorrect results. When the variable is a scalar, the compiler emits a warning:

NVFORTRAN-W-0189-Argument number 1 to mpi_bcast: association of scalar actual argument to array dummy argument (test.f90: 48)
0 inform, 1 warnings, 0 severes, 0 fatal for main

When the variable is an array, the code compiles without warnings and runs without crashing, but may produce incorrect results. If this applies to you, please check your results carefully and consider the suggested workaround below.

With ParaStationMPI/5.10.0-1 the code compiles without issues, but crashes in the MPI_Bcast call with:

[jrc0352:10037:0:10037] Caught signal 11 (Segmentation fault: invalid permissions for mapped object at address 0x146f23200100)

Status: Open. Reported to Nvidia; a fix is in progress.

Workaround/Suggested Action: Declare the variable to be broadcast as allocatable, i.e. use an allocatable device array (allocated before the MPI_Bcast call) instead of a fixed-size device variable.

Problems with commercial software like ANSYS using IntelMPI under Slurm 23.11

Added: 2024-12-11

Affects: All systems at JSC

Description: Job execution fails for commercial software such as ANSYS that ships with its own Intel MPI. The EasyBuild-provided MPI modules cannot be used, because a separate Intel MPI version is bundled with the ANSYS software package.

Status: Open.

Workaround/Suggested Action: Add the following settings to your job script to allow multi-node jobs spawned by IntelMPI to run:

export I_MPI_HYDRA_BOOTSTRAP=ssh
unset I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
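
A minimal sketch of a job script using these settings; the resource requests and the ANSYS launch line are placeholders and will differ for your installation:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00

# Let the bundled Intel MPI start its remote processes via ssh instead of the
# Slurm integration that fails under Slurm 23.11.
export I_MPI_HYDRA_BOOTSTRAP=ssh
unset I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS

# Placeholder: replace with the actual ANSYS solver invocation.
<ansys-solver-command>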

Conda Disabled

Added: 2024-11-02

Affects: All HPC systems

Description: Use of the Conda default channel may not be permitted; access to the channel is therefore blocked on the systems.

Status: Closed.

Workaround/Suggested Action: Use an alternative channel (conda-forge) or even an alternative, faster client (mamba). See the dedicated description.
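
As a hedged example, a personal Conda configuration could be pointed at conda-forge with standard conda commands like these (adjust to your setup):

conda config --add channels conda-forge
conda config --set channel_priority strict
conda config --remove channels defaults   # may report an error if 'defaults' was never set explicitly
conda install mamba                       # optional: faster drop-in client, available from conda-forge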

Fortran 2008 MPI bindings rewrite array bounds

Added: 2023-08-17

Affects: All systems at JSC

Description: Due to a bug in versions of the gfortran compiler installed in software stages earlier than 2024, the Fortran 2008 bindings (use mpi_f08) of MPICH-based MPI libraries (e.g. ParaStationMPI) erroneously modify the bounds of arrays passed into MPI routines as buffers.

Status: Open.

Workaround/Suggested Action: The issue can be avoided by using any of the following (a module-load sketch follows this list):

  • gfortran version 12 or later (available in software stage 2024) or

  • a Fortran compiler other than gfortran (e.g. the Intel Fortran compiler) or

  • an MPI library that is not based on MPICH (e.g. OpenMPI).
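
For example, the module environment might be adjusted as sketched below; the exact module names and versions depend on the software stage and toolchain you use:

module load Stages/2024 GCC ParaStationMPI   # gfortran 12+ from the 2024 stage
# or keep gfortran but avoid an MPICH-based MPI:
module load GCC OpenMPI
# or switch to a different Fortran compiler:
module load Intel ParaStationMPI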

ParaStationMPI: Cannot allocate memory

Added: 2021-10-06

Affects: All systems at JSC

Description: Using ParaStationMPI, the following error might occur:

ERROR mlx5dv_devx_obj_create(QP) failed, syndrome 0: Cannot allocate memory

Status: Won’t fix

Workaround/Suggested Action: Use mpi-settings/[CUDA-low-latency-UD,CUDA-UD,UCX-UD] (Stage < 2022) or UCX-settings/[UD,UD-CUDA] (Stage >= 2022) to reduce the memory footprint. Which module to choose depends on your requirements.
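
For example, in a 2022 or later stage the UD transport could be selected as follows (use the CUDA variant only if you need GPU-aware communication):

module load UCX-settings/UD        # CPU-only jobs
module load UCX-settings/UD-CUDA   # CUDA-aware jobs (load one or the other)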

Cannot connect using old OpenSSH clients

Added: 2020-06-15

Affects: All systems at JSC

Description: In response to the recent security incident, the SSH server on JEDI has been configured to only use modern cryptography algorithms. As a side effect, it is no longer possible to connect to JEDI using older SSH clients. For OpenSSH, at least version 6.7 released in 2014 is required. Some operating systems with very long term support ship with older versions, e.g. RHEL 6 ships with OpenSSH 5.3.

Status: Won’t fix.

Workaround/Suggested Action: Use a more recent SSH client with support for the newer cryptography algorithms. If you cannot update the OpenSSH client (e.g. because you are not the administrator of the system you are trying to connect from) you can install your own version of OpenSSH from https://www.openssh.com. Logging in from a different system with a newer SSH client is another option. If you have to transfer data from a system with an old SSH client to JEDI (e.g. using scp) you may have to transfer the data to a third system with a newer SSH client first (scp’s command line option -3 can be used to automate this).
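
A hedged example of such a three-party transfer, started from a host with a sufficiently new OpenSSH client; user, host, and path names are placeholders:

# Copy data from a system with an old SSH client to JEDI via the local host.
scp -3 user@<old-system>:/path/to/data user@<jedi-login-node>:/path/on/jedi/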

Recently Resolved and Closed Issues

Apptainer sandbox containers disabled on login nodes

Added: 2024-11-02

Affects: All HPC systems

Description: We recently discovered a flaw that allows users to crash the Linux kernel when using Apptainer sandbox containers with IBM Storage Scale (formerly GPFS) as the backing file system. Login nodes on both JURECA and JUSUF fell victim to this issue, resulting in unexpected reboots. To prevent users from losing work, we temporarily disabled sandbox containers on the login nodes while waiting for a fix for Storage Scale. After upgrading Apptainer to 1.3.6 on all systems, the problem is solved.

Status: Resolved. Fixed in Apptainer 1.3.6.

Workaround/Suggested Action: If sandbox containers are essential to your workflow, we suggest you use a compute node where the feature is still enabled. However, make sure to run the container from a local tmpfs such as /tmp or /dev/shm.
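
A sketch of building and running a sandbox container from a local tmpfs on a compute node; the image source and directory names are placeholders:

mkdir -p /tmp/$USER
apptainer build --sandbox /tmp/$USER/mybox docker://ubuntu:22.04   # placeholder image
apptainer exec /tmp/$USER/mybox cat /etc/os-release

Note that /tmp and /dev/shm are typically node-local and their contents do not survive the end of the job.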

Process affinity

Added: 2023-08-03

Affects: All systems at JSC

Description: After an update of Slurm to version 22.05, the process affinity has changed, which results in unexpected pinning in certain cases. This can have a major impact on application performance.

Status: Closed.

Workaround/Suggested Action: Further information can be found in the warning section of Processor Affinity.

ParaStationMPI: GPFS backend for ROMIO (MPI I/O)

Added: 2023-04-03

Update: 2023-06-12

Affects: All systems at JSC

Description: The GPFS backend for ROMIO (MPI I/O) in ParaStationMPI was enabled in the 2023 stage after a bug was fixed. However, occasional segmentation faults were observed when ParaStationMPI is used with the GPFS backend enabled, resulting in job failures. With the GPFS backend disabled, the issue is no longer reproducible and jobs complete successfully.

Status: Resolved.

Workaround/Suggested Action: Versions 5.7.1-1 and 5.8.1-1 include a patch addressing this issue and have been installed. If you are affected by this issue, please explicitly load one of these versions.
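
For example (the exact module hierarchy depends on your compiler toolchain):

module load GCC ParaStationMPI/5.8.1-1   # or ParaStationMPI/5.7.1-1, depending on the stage/toolchain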

JUST: GPFS hanging waiters lead to stuck I/O

Added: 2023-04-12

Update: As of 2023-05-26, all systems have been updated to a GPFS version that fixes the issue.

Affects: All systems at JSC

Description: Since 15 March 2023 we have been aware that some users' jobs cause hanging waiters on JUST, which leaves these jobs stuck seemingly indefinitely on I/O. The issue has been observed for a specific set of jobs and occurred more frequently on JURECA than on other systems. IBM has identified a possible cause and is now developing a fix.

Status: Resolved.

Workaround/Suggested Action: There are no known workarounds. Once IBM releases the fix, we will shortly schedule a maintenance window and install the patch.

Slurm: wrong default task pinning with odd number of tasks/node

Added: 2022-06-20

Affects: All systems at JSC

Description: With the default CPU binding ('--cpu-bind=threads'), the task pinning is not the expected one when there is an odd number of tasks per node and each task uses at most half of the cores of a node.

With an even number of tasks per node, only physical cores are used by the tasks. With an odd number of tasks per node, SMT is enabled and different tasks share the hardware threads of the same cores, which should not happen. Below are a few examples from JUWELS-CLUSTER.

With 1 task/node and 48 cpus/task it uses SMT:

$ srun -N1 -n1 -c48 --cpu-bind=verbose exec
cpu_bind=THREADS - jwc00n001, task  0  0 [7321]: mask 0xffffff000000ffffff set

With 2 tasks/node and 24 cpus/task it uses only physical cores:

$ srun -N1 -n2 -c24 --cpu-bind=verbose exec
cpu_bind=THREADS - jwc00n001, task  0  0 [7340]: mask 0xffffff set
cpu_bind=THREADS - jwc00n001, task  1  1 [7341]: mask 0xffffff000000 set

With 3 tasks/node and 16 cpus/task it uses SMT (tasks 0 and 1 are on physical cores but task 2 uses SMT):

$ srun -N1 -n3 -c16 --cpu-bind=verbose exec
cpu_bind=THREADS - jwc00n001, task  0  0 [7362]: mask 0xffff set
cpu_bind=THREADS - jwc00n001, task  1  1 [7363]: mask 0xffff000000 set
cpu_bind=THREADS - jwc00n001, task  2  2 [7364]: mask 0xff000000ff0000 set

With 4 tasks/node and 12 cpus/task it uses only physical cores:

$ srun -N1 -n4 -c12 --cpu-bind=verbose exec
cpu_bind=THREADS - jwc00n001, task  0  0 [7387]: mask 0xfff set
cpu_bind=THREADS - jwc00n001, task  2  2 [7389]: mask 0xfff000 set
cpu_bind=THREADS - jwc00n001, task  1  1 [7388]: mask 0xfff000000 set
cpu_bind=THREADS - jwc00n001, task  3  3 [7390]: mask 0xfff000000000 set

Status: Resolved.

Workaround/Suggested Action: To work around this behavior, disable SMT with the srun option '--hint=nomultithread'. You can compare the CPU masks in the following examples:

$ srun -N1 -n3 -c16 --cpu-bind=verbose exec
cpu_bind=THREADS - jwc00n004, task  0  0 [17629]: mask 0x0000000000ffff set
cpu_bind=THREADS - jwc00n004, task  1  1 [17630]: mask 0x0000ffff000000 set
cpu_bind=THREADS - jwc00n004, task  2  2 [17631]: mask 0xff000000ff0000 set


$ srun -N1 -n3 -c16 --cpu-bind=verbose --hint=nomultithread exec
cpu_bind=THREADS - jwc00n004, task  0  0 [17652]: mask 0x00000000ffff set
cpu_bind=THREADS - jwc00n004, task  1  1 [17653]: mask 0x00ffff000000 set
cpu_bind=THREADS - jwc00n004, task  2  2 [17654]: mask 0xff0000ff0000 set

Slurm: srun options --exact and --exclusive change default pinning

Added: 2022-06-09

Affects: All systems at JSC

Description: In Slurm 21.08 the srun options '--exact' and '--exclusive' change the default pinning. For example, on JURECA:

$ srun -N1 --ntasks-per-node=1 -c32 --cpu-bind=verbose exec
cpu_bind=THREADS - jrc0731, task  0  0 [3027]: mask 0xffff0000000000000000000000000000ffff000000000000 set
...
$ srun -N1 --ntasks-per-node=1 -c32 --cpu-bind=verbose --exact exec
cpu_bind=THREADS - jrc0731, task  0  0 [3068]: mask 0x3000300030003000300030003000300030003000300030003000300030003 set
...
$ srun -N1 --ntasks-per-node=1 -c32 --cpu-bind=verbose --exclusive exec
cpu_bind=THREADS - jrc0731, task  0  0 [3068]: mask 0x3000300030003000300030003000300030003000300030003000300030003 set
...

As you can see, with the default pinning only physical cores are used, but with '--exact' or '--exclusive' Slurm pins the tasks to SMT cores (hardware threads). In effect, this means the task distribution changes to 'cyclic'.

Status: Closed.

Workaround/Suggested Action: To work around this behavior, request block distribution of the tasks with the option '-m', like this:

$ srun -N1 --ntasks-per-node=1 -c32 --cpu-bind=verbose --exact -m *:block exec
cpu_bind=THREADS - jrc0731, task  0  0 [3027]: mask 0xffff0000000000000000000000000000ffff000000000000 set
...
$ srun -N1 --ntasks-per-node=1 -c32 --cpu-bind=verbose --exclusive -m *:block exec
cpu_bind=THREADS - jrc0731, task  0  0 [3027]: mask 0xffff0000000000000000000000000000ffff000000000000 set
...

Job requeueing failures due to slurmctld prologue bug

Added: 2021-05-18

Affects: All systems at JSC

Description: There is a bug in slurmctld that currently breaks the prologue mechanism and job requeueing. Normally, before a job allocates any nodes, the prologue runs; if it finds unhealthy nodes, it drains them and requeues the job. Because of this bug, slurmctld now cancels jobs that were requeued at least once, even though they finally landed on healthy nodes. We have reported the bug to SchedMD and they are working on it.

Status: Resolved.