JUWELS Rocky 9 Migration

On 2024-10-29 we will migrate all nodes to Rocky 9. Please refer to the following sections to familiarize yourself with the changes.

Prepare in advance for the migration

To prepare for the migration, we have enabled a few login nodes ahead of time, so users can test their environment before the migration takes place. These are the login nodes that have been migrated so far:

  • JUWELS Cluster: juwels[07-09].fz-juelich.de

  • Visualization nodes for JUWELS Cluster: juwelsvis03.fz-juelich.de

  • JUWELS Booster: juwels24.fz-juelich.de

These nodes have a Rocky 9 OS installed, and they are the login nodes from which you should submit jobs to the reservation created for testing Rocky 9 on the compute nodes. Please note that mixing environments (submitting normally to compute nodes from Rocky 9 login nodes, or submitting to Rocky 9 test compute nodes from normal login nodes) will result in compatibility problems in the environment.

To submit to the test compute nodes, use the --reservation rocky9 flag in your sbatch or salloc commands. Please note that the number of nodes reserved for these tests is very small, so as not to reduce node availability too much for normal operation. This means you should run only short tests lasting a few minutes.
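As a minimal sketch of such a test submission (the account name "myproject" is a placeholder for your own compute budget, and the script name is made up for the example):

```shell
# Create a short batch script targeting the Rocky 9 test reservation.
# "myproject" is a placeholder: substitute your own compute project.
cat > rocky9-test.sh <<'EOF'
#!/bin/bash
#SBATCH --reservation=rocky9   # use the Rocky 9 test compute nodes
#SBATCH --account=myproject    # placeholder: your compute project
#SBATCH --nodes=1
#SBATCH --time=00:10:00        # keep test jobs to a few minutes
cat /etc/os-release            # confirm the node actually runs Rocky 9
EOF

# Submit it from one of the migrated login nodes:
#   sbatch rocky9-test.sh
# Or request an interactive allocation instead:
#   salloc --reservation rocky9 --account myproject --nodes 1 --time 00:10:00
```

Remember to run this from one of the Rocky 9 login nodes listed above, to avoid the mixed-environment problems described earlier.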

Known Issues with the migration

The migration brings new versions of many different components. Where possible, we try to keep the migration backwards compatible. However, that is not always possible. Notably:

  • The Intel compiler in Stages/2022 no longer works due to incompatibilities with the new glibc. Note that this stage is deprecated anyway; please use a newer stage.

  • The CUDA compiler in Stages/2022 suffers from the same problem as the Intel compiler. However, a transparent update from 11.5.0 to 11.5.2 solves the issue, and this will be performed on the day of the migration.

  • Some MPI runtimes introduced a hard link-time dependency on IME. The IME MPI plugin, used for optimized access to the HPST, is no longer usable since the hardware has been decommissioned. However, the IME libraries were still installed, and remained necessary because of this link dependency. To avoid problems with the Linux loader for software compiled with older stages, we have provided a wrapper library. Note that this library will be removed in the future; it is advisable to recompile your code as soon as Stages/2025 is available, to remove this dependency.

  • The following packages showed various problems, and will be recompiled during the maintenance. Until then, testing them will likely fail: h5py, mpi4py and IPython.

  • Some vim plugins, and possibly other smaller utilities for other editors or your terminal might need to be reinstalled, in particular those that depend on the python version that ships with the operating system.

  • RHEL 9, and therefore Rocky 9, uses glibc 2.34. With it, various libraries like libpthread have been incorporated into libc. An important side effect is that /usr/lib64/libpthread.so no longer exists. Some build systems mistakenly try to find and link against this library, instead of the more appropriate libpthread.so.0 (already the correct choice on RHEL 8). In these cases, users will need to fix the build system and recompile.