Software Modules

This article describes how to find and use pre-installed software on JURECA.

Basic module usage

Loading a module sets environment variables that give you access to a specific set of software and its dependencies. We use a hierarchical organization of modules to ensure that you get a consistent software stack, e.g., all built with the same compiler version or all relying on the same implementation of MPI. Please note that the organization of software on JURECA, JUWELS and JUSUF is identical, but the installed software packages may differ.

What this means on JURECA is that there are multiple compilers and MPI runtimes available. As a JURECA user, your first task is to load the desired compiler. The available compilers, as well as some other compiler-independent tools, can be listed with the module avail command:

[user@system ~]$ module avail

----------------------------- Core packages ------------------------------
   AMD-uProf/3.4.502
   AMD-uProf/3.5.671                           (D)
   ARMForge/21.1.2

   [...]

   zarr/2.10.1
   zsh/5.8
   zstd/1.5.0

---------------------------------- Compilers ----------------------------------
   GCC/11.2.0               Intel/2021.4.0 (L)    NVHPC/22.3 (g,D)
   GCCcore/.11.2.0 (H,L)    NVHPC/22.1     (g)

--------------------------- Production Stages ---------------------------
   Stages/2020 (S)    Stages/2022 (S,L,D)

-------------------- User-based install configuration --------------------
   UserInstallations/easybuild

  Where:
   g:        built for GPU
   L:        Module is loaded
   Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2" will load foo/1.2.3
   D:        Default Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".

Once you have chosen a compiler, you can load it with module load <compiler>:

[user@system ~]$ module load Intel

You can verify which modules you have loaded with module list:

[user@system ~]$ module list

Currently Loaded Modules:
  1) Stages/2022     (S)   4) binutils/.2.37 (H)
  2) GCCcore/.11.2.0 (H)   5) StdEnv/2022
  3) zlib/.1.2.11    (H)   6) Intel/2021.4.0

  Where:
   H:  Hidden Module

Note that the module environment loads the dependencies that are needed, even if they are hidden. Loading the Intel compiler gives you access to a set of software compatible with your selection, which again can be listed with module avail:

[user@system ~]$ module avail

----------- System MPI runtimes available for Intel compilers ------------
   IntelMPI/2021.4.0          ParaStationMPI/5.5.0-1-mt (g)
   OpenMPI/4.1.1     (g)      ParaStationMPI/5.5.0-1    (g,D)
   OpenMPI/4.1.2     (g,D)

------------- System packages compiled with Intel compilers --------------
   METIS/5.1.0 (D)    libxc/5.1.7    libxsmm/1.16.3


----------------------------- Core packages ------------------------------
   AMD-uProf/3.4.502
   AMD-uProf/3.5.671                           (D)
   ARMForge/21.1.2

   [...]

   zarr/2.10.1
   zsh/5.8
   zstd/1.5.0

------------------------------- Compilers -------------------------------
   GCC/11.2.0               Intel/2021.4.0 (L)    NVHPC/22.3 (g,D)
   GCCcore/.11.2.0 (H,L)    NVHPC/22.1     (g)

--------------------------- Production Stages ---------------------------
   Stages/2020 (S)    Stages/2022 (S,L,D)

-------------------- User-based install configuration --------------------
   UserInstallations/easybuild

  Where:
   g:        built for GPU
   L:        Module is loaded
   Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2" will load foo/1.2.3
   D:        Default Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".

Among these newly available modules, the most important ones are the MPI runtimes (which appear at the top of the available software). Loading an MPI runtime will again give you access to software built on top of that runtime. Please note that when multiple versions of a module are available, the default version is the one marked with a (D).

[user@system ~]$ module load ParaStationMPI
[user@system ~]$ module avail

------------------------ ParaStationMPI settings -------------------------
   mpi-settings/CUDA    mpi-settings/plain    mpi-settings/UCX (L,D)

---- System packages compiled with ParaStationMPI and Intel compilers ----
   ABINIT/9.6.2                     SUNDIALS/6.1.0
   ADIOS2/2.7.1                     Scalasca/2.6
   ARPACK-NG/3.8.0                  Score-P/7.1

   [...]

   SCOTCH/6.1.2                     netcdf4-python/1.5.7  (D)
   SIONfwd/1.0.1                    sprng/1
   SIONlib/1.7.7                    sprng/5-10062021      (D)

--------------------- Settings for software packages ---------------------
   UCX-settings/RC-CUDA          UCX-settings/DC         UCX-settings/UD
   UCX-settings/RC      (L,D)    UCX-settings/plain
   UCX-settings/DC-CUDA          UCX-settings/UD-CUDA

----------- System MPI runtimes available for Intel compilers ------------
   IntelMPI/2021.4.0          ParaStationMPI/5.5.0-1-mt (g)
   OpenMPI/4.1.1     (g)      ParaStationMPI/5.5.0-1    (g,L,D)
   OpenMPI/4.1.2     (g,D)

------------- System packages compiled with Intel compilers --------------
   METIS/5.1.0 (D)    libxc/5.1.7    libxsmm/1.16.3

----------------------------- Core packages ------------------------------
   AMD-uProf/3.4.502
   AMD-uProf/3.5.671                           (D)
   ARMForge/21.1.2

   [...]

   zarr/2.10.1
   zsh/5.8
   zstd/1.5.0

------------------------------- Compilers -------------------------------
   GCC/11.2.0               Intel/2021.4.0 (L)    NVHPC/22.3 (g,D)
   GCCcore/.11.2.0 (H,L)    NVHPC/22.1     (g)

--------------------------- Production Stages ---------------------------
   Stages/2020 (S)    Stages/2022 (S,L,D)

-------------------- User-based install configuration --------------------
   UserInstallations/easybuild

  Where:
   g:        built for GPU
   L:        Module is loaded
   Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2" will load foo/1.2.3
   D:        Default Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".

Sometimes, as a user, you simply want to find out which modules you have to load to make a particular software package or application loadable. module spider can help you with that task. It searches the whole hierarchy and reports the specific module combinations that enable loading that package:

[user@system ~]$ module spider gromacs

----------------------------------------------------------------------------
  GROMACS:
----------------------------------------------------------------------------
    Description:
      GROMACS is a versatile package to perform molecular dynamics, i.e.
      simulate the Newtonian equations of motion for systems with hundreds
      to millions of particles. It is primarily designed for biochemical
      molecules like proteins and lipids that have a lot of complicated
      bonded interactions, but since GROMACS is extremely fast at
      calculating the non-bonded interactions (that usually dominate
      simulations) many groups are also using it for research on
      non-biological systems, e.g. polymers.

     Versions:
        GROMACS/2019.1
        GROMACS/2019.3
        GROMACS/2020.4-plumed
        GROMACS/2020.4
        GROMACS/2021.2
        GROMACS/2021.4-plumed
        GROMACS/2021.4

----------------------------------------------------------------------------
  For detailed information about a specific "GROMACS" package (including how to load the modules)
  use the module's full name.
  Note that names that have a trailing (E) are extensions provided by other modules.
  For example:

     $ module spider GROMACS/2021.4
----------------------------------------------------------------------------

Currently there are more than 700 packages installed per Stage (see Stages). To keep a clean and uncluttered view, a significant number of these packages (mostly helper libraries) are hidden. If you want to see them, use module --show-hidden avail:

[user@system ~]$ module --show-hidden avail

------------------------ ParaStationMPI settings -------------------------
   mpi-settings/CUDA    mpi-settings/plain    mpi-settings/UCX (L,D)

---- System packages compiled with ParaStationMPI and Intel compilers ----
   ABINIT/9.6.2                     SUNDIALS/6.1.0
   ADIOS2/2.7.1                     Scalasca/2.6
   ARPACK-NG/3.8.0                  Score-P/7.1

   [...]

   SCOTCH/6.1.2                     netcdf4-python/1.5.7  (D)
   SIONfwd/1.0.1                    sprng/1
   SIONlib/1.7.7                    sprng/5-10062021      (D)

--------------------- Settings for software packages ---------------------
   UCX-settings/RC-CUDA          UCX-settings/DC         UCX-settings/UD
   UCX-settings/RC      (L,D)    UCX-settings/plain
   UCX-settings/DC-CUDA          UCX-settings/UD-CUDA

----------- System MPI runtimes available for Intel compilers ------------
   IntelMPI/2021.4.0          ParaStationMPI/5.5.0-1-mt (g)
   OpenMPI/4.1.1     (g)      ParaStationMPI/5.5.0-1    (g,L,D)
   OpenMPI/4.1.2     (g,D)

------------- System packages compiled with Intel compilers --------------
   METIS/5.1.0 (D)    libxc/5.1.7    libxsmm/1.16.3

------------------------- System side compilers -------------------------
   AOCC/3.1.0    AOCC/3.2.0 (D)    Clang/13.0.1

----------------------------- Core packages -----------------------------
   ACTC/.1.1                                   (H)
   AMD-uProf/3.4.502
   AMD-uProf/3.5.671                           (D)
   ANTLR/.2.7.7                                (H)

   [...]

   zfp/.0.5.5                                  (H)
   zlib/.1.2.11                                (H)
   zlib/.1.2.11                                (H,L)
   zsh/5.8
   zstd/1.5.0

------------------------------- Compilers -------------------------------
   GCC/11.2.0               Intel/2021.4.0 (L)    NVHPC/22.3 (g,D)
   GCCcore/.11.2.0 (H,L)    NVHPC/22.1     (g)

--------------------------- Production Stages ---------------------------
   Stages/2020 (S)    Stages/2022 (S,L,D)

----------------- User-based install configuration -----------------
   UserInstallations/easybuild

  Where:
   g:        built for GPU
   L:        Module is loaded
   Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2" will load foo/1.2.3
   D:        Default Module
   H:        Hidden Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".

Available compilers

JURECA has three major compilers available: GCC, Intel and NVHPC. With these compilers we build full toolchains (MPIs, math libraries, applications, etc.). Additionally, AOCC and Clang are also available.
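
If you are only interested in one of these compilers, you can pass its name to module avail to narrow the listing. A short sketch (output abridged; the versions shown depend on the active Stage):

[user@system ~]$ module avail NVHPC

------------------------------- Compilers -------------------------------
   NVHPC/22.1 (g)    NVHPC/22.3 (g,D)

   [...]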

The table below shows the compiler, MPI and basic mathematical library (BLAS, LAPACK, FFTW, ScaLAPACK) combinations that have been made available on JURECA. Note that at the moment Intel MKL is the primary math library, but the deployment of BLIS and LibFLAME is planned for the near future.

Compiler   MPI                   Math library
--------   -------------------   ------------
GCC        BullMPI               Intel MKL
GCC        OpenMPI               Intel MKL
GCC        ParaStationMPI        Intel MKL
Intel      IntelMPI              Intel MKL
Intel      OpenMPI               Intel MKL
Intel      ParaStationMPI        Intel MKL
Intel      ParaStationMPI-mt *   Intel MKL
NVHPC      ParaStationMPI        Intel MKL
NVHPC      OpenMPI               Intel MKL

* The -mt suffix marks a ParaStationMPI build that allows calling the MPI runtime from multiple threads at the same time (MPI_THREAD_MULTIPLE).
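
As a sketch, loading one of the combinations from the table could look as follows. The module name imkl for Intel MKL is an assumption here; use module spider to confirm the exact names and versions available in your Stage.

[user@system ~]$ module load Intel ParaStationMPI imkl
[user@system ~]$ module list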

MPI runtimes

JURECA has two major MPI runtimes available, ParaStationMPI and OpenMPI. Both are CUDA-aware (i.e., they can directly communicate buffers placed in GPU memory).

Additionally, IntelMPI and BullMPI (with CUDA-awareness) are available on JURECA-DC.

All MPI runtimes load a default MPI-settings module (mpi-settings in older stages) that configures the runtime for most users. JSC provides a set of these modules that accommodate a few different use cases. Note that MPI runtimes are highly configurable, and the configuration modules provided are non-exhaustive.

These are the modules available per runtime; an example of switching between them follows the list. Note that older stages can have slightly different options.

  • For ParaStationMPI there are 2 possibilities:

    • MPI-settings/UCX: This module configures the runtime to run on JURECA. It implies using UCX as the communication library. This is the default. If you wish to use the GPUs, please load the next module.

    • MPI-settings/CUDA: This module is the same as MPI-settings/UCX, but it makes sure that the UCX module is loaded with the CUDA transports enabled and disables the shared memory plugin in pscom (in favour of the equivalent in UCX).

  • For OpenMPI there are 4 possibilities:

    • MPI-settings/UCX: This module configures the runtime to run on JURECA. It implies using UCX as the communication library. Extra options can be examined with ompi_info -a. This is the default module. If you wish to use the GPUs, please load the next module.

    • MPI-settings/CUDA: This module is the same as MPI-settings/UCX, but it makes sure that the UCX module is loaded with the CUDA transports enabled, effectively enabling CUDA-awareness.

    • MPI-settings/UCX-UCC: This module is equivalent to MPI-settings/UCX, with the difference that UCC is prioritized and used by default.

    • MPI-settings/CUDA-UCC: This module is equivalent to MPI-settings/CUDA, with the difference that UCC is prioritized and used by default.

  • For BullMPI the situation is the same as for OpenMPI, with the exception that UCC is not available:

    • MPI-settings/UCX: This module configures the runtime to run on JURECA. It implies using UCX as the communication library. Extra options can be examined with ompi_info -a. This is the default module. If you wish to use the GPUs, please load the next module.

    • MPI-settings/CUDA: This module is the same as MPI-settings/UCX, but it makes sure that the UCX module is loaded with the CUDA transports enabled, effectively enabling CUDA-awareness.

  • For IntelMPI there is one possibility:

    • MPI-settings/UCX: This is the default; it sets the libfabric provider to mlx, i.e., communication will happen via the UCX library.
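
For example, with ParaStationMPI you can switch from the default settings to the CUDA-aware ones by loading the corresponding module; only one MPI-settings variant can be loaded at a time, so loading another one replaces the current one. A minimal sketch (in older stages the module is called mpi-settings instead):

[user@system ~]$ module load Intel ParaStationMPI
[user@system ~]$ module load MPI-settings/CUDA
[user@system ~]$ module list   # MPI-settings/CUDA now replaces the default MPI-settings/UCX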

UCX is by far the most used communication library in all our MPIs. UCX is also highly configurable. That is why we also provide modules for configuring UCX. These modules are likewise non-exhaustive. The recommended default is loaded automatically for each MPI. The list of modules is:

  • UCX-settings/RC-CUDA enables the accelerated RC (Reliable Connected) and CUDA transports. RC is the default transport and the one recommended for most cases.

  • UCX-settings/RC enables the accelerated RC (Reliable Connected) transport. Select this if you see warnings about initializing CUDA transports on nodes without GPUs. RC is the default transport and the one recommended for most cases.

  • UCX-settings/UD-CUDA enables the accelerated UD (Unreliable Datagram) and CUDA transports. UD has a lower memory footprint than RC and could be recommended for medium-sized simulations.

  • UCX-settings/UD enables the accelerated UD (Unreliable Datagram) transport. UD has a lower memory footprint than RC and could be recommended for medium-sized simulations.

  • UCX-settings/DC-CUDA enables the DC (Dynamically Connected) and CUDA transports. DC is a relatively new transport and has not been tested exhaustively on our systems. Its memory footprint is low, so it could be recommended for very large simulations.

  • UCX-settings/DC enables the DC (Dynamically Connected) transport. DC is a relatively new transport and has not been tested exhaustively on our systems. Its memory footprint is low, so it could be recommended for very large simulations.

  • UCX-settings/plain removes the restriction on transports, so UCX decides on its own which transport should be used. Use this if you would like to rely on UCX’s heuristics. This is equivalent to unloading the UCX-settings module.

To see which options are enabled for each MPI runtime or for UCX, you can type ml show MPI-settings or ml show UCX-settings. To see a full list of UCX options, you can type ucx_info -c -f.
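
For instance, to inspect the active UCX configuration and switch to the UD transport, something like the following should work (a sketch; it assumes the settings modules steer transport selection via UCX_TLS, which ml show UCX-settings will confirm):

[user@system ~]$ ml show UCX-settings         # display the UCX variables set by the loaded settings module
[user@system ~]$ module load UCX-settings/UD  # switch from the default RC transport to UD
[user@system ~]$ ucx_info -c | grep UCX_TLS   # check the resulting transport selection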

Warning

Since 2023-08-30 the UCX-settings/*CUDA modules also set UCX_RNDV_FRAG_MEM_TYPE=cuda. This enables the GPU to initiate transfers of CUDA managed buffers. It can yield a large speed-up when Unified Memory (cudaMallocManaged()) is used, as staging of data is avoided. Our testing indicates performance gains, especially when communicating between the GPUs in a node, since NVLink is used effectively. We don’t expect any performance degradation for other codes; if you do notice worse performance, please contact sc@fz-juelich.de.
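
If you suspect this setting is involved in a performance regression, a quick check is to override it in your job environment before reporting. A minimal sketch, assuming host was the previous default value and using a hypothetical application name:

[user@system ~]$ export UCX_RNDV_FRAG_MEM_TYPE=host   # stage fragments in host memory again
[user@system ~]$ srun ./my_application                # hypothetical binary, launched as usual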

GPUs and modules

Software packages with specific GPU support are marked with a (g) when listing modules.

They can be reached by loading the compilers listed in the table of the previous section.
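
For example, starting from a fresh session, you could load a GPU-capable compiler and an MPI runtime and then list what becomes available; NVHPC is used here purely as an illustration:

[user@system ~]$ module load NVHPC ParaStationMPI
[user@system ~]$ module avail   # packages marked with (g) are built with GPU support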

Finding software packages

There are three commands that serve as the main tools to locate software on JURECA:

  • module avail

  • module spider <software>

  • module key <keyword or software>

Normally, the first two are enough. Occasionally, module key is necessary to look for keywords or for packages bundled into a single module. An example is numpy, which is included in the SciPy-Stack module. In the example below, the module environment looks for all occurrences of numpy in the descriptions of the modules. That helps to locate SciPy-Stack.

[user@system ~]$ module key numpy
----------------------------------------------------------------------------
  The following modules match your search criteria: "numpy"
----------------------------------------------------------------------------

  SciPy-Stack: SciPy-Stack/2020-Python-3.8.5, SciPy-Stack/2021-Python-3.8.5
    SciPy Stack is a collection of open source software for scientific computing
    in Python.

  SciPy-bundle: SciPy-bundle/2021.10
    Bundle of Python packages for scientific software

[...]

Additionally, the complete list of software installed can be checked online in the JURECA module browser.

Stages

JURECA goes through a major scientific software update once a year in November, at the same time that new projects start their allocation period. We call these updates Stages. During these stage switches, the available software is updated to the latest stable releases. Typically this requires that user applications be recompiled. In such cases, there are two possible solutions:

  • Load the new versions of the required dependency modules and recompile.

  • Load the old Stage.

To load the old Stage, users should use these commands:

[user@system ~]$ module use $OTHERSTAGES
[user@system ~]$ module load Stages/2019a

The old software view then becomes available again, as it was before the stage switch. In the example above the desired Stage was 2019a, but as new stage transitions happen, more choices will become available.
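
To check which Stages you can actually load, you can list them after adding the extra module path; a short sketch (the set of Stages depends on the system’s history):

[user@system ~]$ module use $OTHERSTAGES
[user@system ~]$ module avail Stages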

Stages Changelog

The changes of the stages are documented in the changelog of stages.

Scientific software at JSC

JSC provides a significant amount of software installed on its systems. In Scientific Application Software you can find an overview of what is supported and how to use it.

Requesting new software

It is possible to request new software to be installed on JURECA. To do that, please send an email to sc@fz-juelich.de describing which software and version you need. Please note that this will be done on a “best effort” basis and might have limited support.

Installing your own software with EasyBuild

Starting with the 2023 stage, it is possible to install software with EasyBuild (https://easybuild.io/) in a directory of your choice, leveraging the software that is already available on the system. For that, there are two steps you have to follow:

1. First you have to specify an installation directory. There are two possibilities: a) exporting USERINSTALLATIONS, or b) creating a symbolic link at ~/easybuild (a sketch of option b follows this list). Depending on your needs, you might want that to be a private directory, or a directory shared with other users or members of a project. Please take into account that the link should not point to a directory in $HOME, as the quota in $HOME is small and the filesystem is not intended to be used this way. Well suited are, for example, the filesystems $PROJECT or $USERSOFTWARE (you need to apply for a data project in order to use the latter).

2. Load the UserInstallations module. That will load EasyBuild and configure it in a way that allows it to reuse the stack already in place. You can use this to install versions of software that you need but that are not available in the general stack.
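
A minimal sketch of option b), using a symbolic link; the target path is purely illustrative, pick a directory on $PROJECT or $USERSOFTWARE that suits your project:

[user@system ~]$ mkdir -p /p/project/yourproject/${USER}/easybuild
[user@system ~]$ ln -s /p/project/yourproject/${USER}/easybuild ~/easybuild
[user@system ~]$ ml UserInstallations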

If you would like to use this feature before the 2023 stage is made the default, you can do so simply by typing ml use /p/software/$SYSTEMNAME/userinstallations.

If you want to follow the same approach on other systems, you can find a copy of our EasyBuild setup at https://github.com/easybuilders/JSC. After cloning it to a location of your choice, you have to configure EasyBuild appropriately to leverage it. A good starting point is to take a look at ml show UserInstallations and adapt variables as required.
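
A rough sketch of that approach, assuming EasyBuild is already installed on the other system; eb --show-config is the standard way to verify which configuration EasyBuild picks up after you have adapted the relevant variables:

[user@othersystem ~]$ git clone https://github.com/easybuilders/JSC
[user@othersystem ~]$ eb --show-config   # verify that your configuration points at the cloned repository

Back on JURECA, a typical session using USERINSTALLATIONS looks like this: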

[user@system ~]$ export USERINSTALLATIONS=/p/project/yourproject/${USER}
[user@system ~]$ ml UserInstallations
  - Loading default (typically latest) EasyBuild module
  - Enabling bash tab completion for EasyBuild and EasyConfigs
    (You can enable easyconfig autocomplete from robot
     search path in addition by setting JSC_EASYCONFIG_AUTOCOMPLETE)

Found $USERINSTALLATIONS in the environment, using that for installation. To override this and do
a personal installation set the environment variable PREFER_USER=1 and reload this module.

** LOADING USERSPACE DEVELOPER CONFIGURATION **

Preparing the environment for software installation via EasyBuild into userspace leveraging stage 2022

  - Adding our license servers to LM_LICENSE_FILE
  - Giving priority to JSC custom Toolchains (EASYBUILD_INCLUDE_TOOLCHAINS)
  - Giving priority to JSC custom EasyBlocks (EASYBUILD_INCLUDE_EASYBLOCKS)
  - Giving priority to JSC custom easyconfigs (EASYBUILD_ROBOT)
  - Allowing searching of distribution easyconfigs (EASYBUILD_SEARCH_PATHS)
  - To keep module view clean, hiding some dependencies (EASYBUILD_HIDE_DEPS)
  - Using JSC EasyBuild hooks (EASYBUILD_HOOKS)
  - Setting module classes to include side compilers (EASYBUILD_MODULECLASSES)

  - Setting EASYBUILD_PARALLEL to 8
  - Setting EASYBUILD_OPTARCH to GCCcore:march=haswell -mtune=haswell
  - Setting EASYBUILD_CUDA_COMPUTE_CAPABILITIES to 7.0,6.0
  - Using shared group installation, therefore expanding dependency searching
  - To allow collaboration in the development process, all installations are
    group-writable! (EASYBUILD_GROUP_WRITABLE_INSTALLDIR)

Note: If you wish to submit software builds to slurm with the '--job' flag you will need to set environment
variables to configure job submission, see
https://slurm.schedmd.com/sbatch.html#lbAJ
for details.

To use a module that you or a colleague has installed in a group-readable directory, you need to either export USERINSTALLATIONS or create a symbolic link in ~/easybuild pointing at the correct directory. Module path expansion when loading a module will then allow you to load the modules installed with UserInstallations. If you have problems loading modules compiled with UserInstallations, please try to reload the modules already loaded in your environment [1]. This can for example be done with ml update.
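
A minimal sketch of that recovery step; the project path is the same illustrative one used above:

[user@system ~]$ export USERINSTALLATIONS=/p/project/yourproject/${USER}
[user@system ~]$ ml update   # reload the currently loaded modules so the expanded module path is picked up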

In case of questions or for more advanced use cases, please do not hesitate to reach out to sc@fz-juelich.de.

[1] This can happen if you set USERINSTALLATIONS after you have loaded the modules that expand the path. The modules that expand the path are, for example, compiler modules (like GCC or GCCcore) or MPI modules (like OpenMPI or ParaStationMPI), as they form the base level of our EasyBuild installation (the toolchain).