Software Modules

This article describes how to find and use pre-installed software on JUSUF.

Basic module usage

Loading a module sets environment variables that give you access to a specific set of software and its dependencies. We use a hierarchical organization of modules to ensure that you get a consistent software stack, e.g., one where everything is built with the same compiler version or relies on the same MPI implementation. Please note that the organization of software on JURECA, JUWELS and JUSUF is identical, but the installed software packages may differ.

What this means on JUSUF is that multiple compilers and MPI runtimes are available. As a JUSUF user, your first task is to load the desired compiler. The available compilers, as well as some compiler-independent tools, can be listed with the module avail command:

[user@system ~]$ module avail

----------------------------- Core packages ------------------------------
   AMD-uProf/3.4.502
   AMD-uProf/3.5.671                           (D)
   ARMForge/21.1.2

   [...]

   zarr/2.10.1
   zsh/5.8
   zstd/1.5.0

---------------------------------- Compilers ----------------------------------
   GCC/11.2.0               Intel/2021.4.0 (L)    NVHPC/22.3 (g,D)
   GCCcore/.11.2.0 (H,L)    NVHPC/22.1     (g)

--------------------------- Production Stages ---------------------------
   Stages/2020 (S)    Stages/2022 (S,L,D)

-------------------- User-based install configuration --------------------
   UserInstallations/easybuild

  Where:
   g:        built for GPU
   L:        Module is loaded
   Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2" will load foo/1.2.3
   D:        Default Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".

Once you have chosen a compiler, you can load it with module load <compiler>:

[user@system ~]$ module load Intel

You can verify which modules you have loaded with module list:

[user@system ~]$ module list

Currently Loaded Modules:
  1) Stages/2022     (S)   4) binutils/.2.37 (H)
  2) GCCcore/.11.2.0 (H)   5) StdEnv/2022
  3) zlib/.1.2.11    (H)   6) Intel/2021.4.0

  Where:
   H:  Hidden Module

Note that the module environment loads the needed dependencies, even if they are hidden. Loading the Intel compiler gives you access to a set of software compatible with your selection, which again can be listed with module avail:

[user@system ~]$ module avail

----------- System MPI runtimes available for Intel compilers ------------
   IntelMPI/2021.4.0          ParaStationMPI/5.5.0-1-mt (g)
   OpenMPI/4.1.1     (g)      ParaStationMPI/5.5.0-1    (g,D)
   OpenMPI/4.1.2     (g,D)

------------- System packages compiled with Intel compilers --------------
   METIS/5.1.0 (D)    libxc/5.1.7    libxsmm/1.16.3


----------------------------- Core packages ------------------------------
   AMD-uProf/3.4.502
   AMD-uProf/3.5.671                           (D)
   ARMForge/21.1.2

   [...]

   zarr/2.10.1
   zsh/5.8
   zstd/1.5.0

------------------------------- Compilers -------------------------------
   GCC/11.2.0               Intel/2021.4.0 (L)    NVHPC/22.3 (g,D)
   GCCcore/.11.2.0 (H,L)    NVHPC/22.1     (g)

--------------------------- Production Stages ---------------------------
   Stages/2020 (S)    Stages/2022 (S,L,D)

-------------------- User-based install configuration --------------------
   UserInstallations/easybuild

  Where:
   g:        built for GPU
   L:        Module is loaded
   Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2" will load foo/1.2.3
   D:        Default Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".

Among these newly available modules, the most important ones are the MPI runtimes (which appear at the top of the available software). Loading an MPI runtime will in turn give you access to software built on top of that runtime. Please note that when multiple versions of a module are available, the default version is the one marked with a (D) at its side.

[user@system ~]$ module load ParaStationMPI
[user@system ~]$ module avail

------------------------ ParaStationMPI settings -------------------------
   mpi-settings/CUDA    mpi-settings/plain    mpi-settings/UCX (L,D)

---- System packages compiled with ParaStationMPI and Intel compilers ----
   ABINIT/9.6.2                     SUNDIALS/6.1.0
   ADIOS2/2.7.1                     Scalasca/2.6
   ARPACK-NG/3.8.0                  Score-P/7.1

   [...]

   SCOTCH/6.1.2                     netcdf4-python/1.5.7  (D)
   SIONfwd/1.0.1                    sprng/1
   SIONlib/1.7.7                    sprng/5-10062021      (D)

--------------------- Settings for software packages ---------------------
   UCX-settings/RC-CUDA          UCX-settings/DC         UCX-settings/UD
   UCX-settings/RC      (L,D)    UCX-settings/plain
   UCX-settings/DC-CUDA          UCX-settings/UD-CUDA

----------- System MPI runtimes available for Intel compilers ------------
   IntelMPI/2021.4.0          ParaStationMPI/5.5.0-1-mt (g)
   OpenMPI/4.1.1     (g)      ParaStationMPI/5.5.0-1    (g,L,D)
   OpenMPI/4.1.2     (g,D)

------------- System packages compiled with Intel compilers --------------
   METIS/5.1.0 (D)    libxc/5.1.7    libxsmm/1.16.3

----------------------------- Core packages ------------------------------
   AMD-uProf/3.4.502
   AMD-uProf/3.5.671                           (D)
   ARMForge/21.1.2

   [...]

   zarr/2.10.1
   zsh/5.8
   zstd/1.5.0

------------------------------- Compilers -------------------------------
   GCC/11.2.0               Intel/2021.4.0 (L)    NVHPC/22.3 (g,D)
   GCCcore/.11.2.0 (H,L)    NVHPC/22.1     (g)

--------------------------- Production Stages ---------------------------
   Stages/2020 (S)    Stages/2022 (S,L,D)

-------------------- User-based install configuration --------------------
   UserInstallations/easybuild

  Where:
   g:        built for GPU
   L:        Module is loaded
   Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2" will load foo/1.2.3
   D:        Default Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".

Sometimes you simply want to find out which modules you have to load to make a particular software package or application loadable. module spider helps with that task: it searches the whole hierarchy and reports the specific module combinations that make the package loadable:

[user@system ~]$ module spider gromacs

----------------------------------------------------------------------------
  GROMACS:
----------------------------------------------------------------------------
    Description:
      GROMACS is a versatile package to perform molecular dynamics, i.e.
      simulate the Newtonian equations of motion for systems with hundreds
      to millions of particles. It is primarily designed for biochemical
      molecules like proteins and lipids that have a lot of complicated
      bonded interactions, but since GROMACS is extremely fast at
      calculating the non-bonded interactions (that usually dominate
      simulations) many groups are also using it for research on
      non-biological systems, e.g. polymers.

     Versions:
        GROMACS/2019.1
        GROMACS/2019.3
        GROMACS/2020.4-plumed
        GROMACS/2020.4
        GROMACS/2021.2
        GROMACS/2021.4-plumed
        GROMACS/2021.4

----------------------------------------------------------------------------
  For detailed information about a specific "GROMACS" package (including how to load the modules)
  use the module's full name.
  Note that names that have a trailing (E) are extensions provided by other modules.
  For example:

     $ module spider GROMACS/2021.4
----------------------------------------------------------------------------
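
Following the hint at the end of this output, querying a specific version reports exactly which modules have to be loaded first. A hypothetical session (the prerequisite modules depend on the Stage, so check the actual spider output rather than copying this verbatim):

[user@system ~]$ module spider GROMACS/2021.4
# spider’s output names the modules to load first; assuming it reports Intel and ParaStationMPI:
[user@system ~]$ module load Intel ParaStationMPI GROMACS/2021.4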

Currently there are more than 700 packages installed per Stage (see Stages). To keep the view clean and uncluttered, a significant number of these packages (mostly helper libraries) are hidden. If you want to see them, use module --show-hidden avail:

[user@system ~]$ module --show-hidden avail

------------------------ ParaStationMPI settings -------------------------
   mpi-settings/CUDA    mpi-settings/plain    mpi-settings/UCX (L,D)

---- System packages compiled with ParaStationMPI and Intel compilers ----
   ABINIT/9.6.2                     SUNDIALS/6.1.0
   ADIOS2/2.7.1                     Scalasca/2.6
   ARPACK-NG/3.8.0                  Score-P/7.1

   [...]

   SCOTCH/6.1.2                     netcdf4-python/1.5.7  (D)
   SIONfwd/1.0.1                    sprng/1
   SIONlib/1.7.7                    sprng/5-10062021      (D)

--------------------- Settings for software packages ---------------------
   UCX-settings/RC-CUDA          UCX-settings/DC         UCX-settings/UD
   UCX-settings/RC      (L,D)    UCX-settings/plain
   UCX-settings/DC-CUDA          UCX-settings/UD-CUDA

----------- System MPI runtimes available for Intel compilers ------------
   IntelMPI/2021.4.0          ParaStationMPI/5.5.0-1-mt (g)
   OpenMPI/4.1.1     (g)      ParaStationMPI/5.5.0-1    (g,L,D)
   OpenMPI/4.1.2     (g,D)

------------- System packages compiled with Intel compilers --------------
   METIS/5.1.0 (D)    libxc/5.1.7    libxsmm/1.16.3

------------------------- System side compilers -------------------------
   AOCC/3.1.0    AOCC/3.2.0 (D)    Clang/13.0.1

----------------------------- Core packages -----------------------------
   ACTC/.1.1                                   (H)
   AMD-uProf/3.4.502
   AMD-uProf/3.5.671                           (D)
   ANTLR/.2.7.7                                (H)

   [...]

   zfp/.0.5.5                                  (H)
   zlib/.1.2.11                                (H)
   zlib/.1.2.11                                (H,L)
   zsh/5.8
   zstd/1.5.0

------------------------------- Compilers -------------------------------
   GCC/11.2.0               Intel/2021.4.0 (L)    NVHPC/22.3 (g,D)
   GCCcore/.11.2.0 (H,L)    NVHPC/22.1     (g)

--------------------------- Production Stages ---------------------------
   Stages/2020 (S)    Stages/2022 (S,L,D)

----------------- User-based install configuration -----------------
   UserInstallations/easybuild

  Where:
   g:        built for GPU
   L:        Module is loaded
   Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2" will load foo/1.2.3
   D:        Default Module
   H:        Hidden Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".

Available compilers

JUSUF has three major compilers available: GCC, Intel and NVHPC. With these compilers we build full toolchains (MPI runtimes, math libraries, applications, etc.). Additionally, AOCC and Clang are also available.

The table below shows the compiler, MPI and basic mathematical library (BLAS, LAPACK, FFTW, ScaLAPACK) combinations that have been made available on JUSUF. Note that at the moment Intel MKL is the primary math library; the deployment of BLIS and LibFLAME is planned for the near future.

Compiler   MPI                    Math library
--------   -------------------    ------------
GCC        OpenMPI                Intel MKL
GCC        ParaStationMPI         Intel MKL
Intel      OpenMPI                Intel MKL
Intel      ParaStationMPI         Intel MKL
Intel      ParaStationMPI-mt [1]  Intel MKL
NVHPC      ParaStationMPI         Intel MKL
NVHPC      OpenMPI                Intel MKL

[1] ParaStationMPI with the -mt suffix allows the MPI runtime to be called from multiple threads at the same time (MPI_THREAD_MULTIPLE).
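
To use one of these combinations, load the compiler, then the MPI runtime, and finally the math library. A minimal sketch, assuming the Intel MKL module carries its usual EasyBuild name imkl (verify the exact name with module avail after loading the MPI runtime):

[user@system ~]$ module load GCC OpenMPI imkl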

MPI runtimes

JUSUF has two major MPI runtimes available, ParaStationMPI and OpenMPI. Both are CUDA-aware (i.e., they can directly communicate buffers placed in GPU memory).
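
To verify that an MPI build is CUDA-aware, OpenMPI can report it directly; a common check (an OpenMPI idiom rather than anything JUSUF-specific):

[user@system ~]$ ompi_info --parsable --all | grep mpi_built_with_cuda_support:value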

All MPI runtimes load a default mpi-settings module that configures the runtime for most users. JSC provides a set of these modules that accommodate a few different use cases. Note that MPI runtimes are highly configurable, and the configuration modules provided are non-exhaustive.

Starting with the 2022 Stage, these are the modules available per runtime (a usage sketch for switching between them follows the list):

  • For ParaStationMPI there are 3 possibilities:

    • mpi-settings/UCX: This is the default, which enables communication via the UCX library.

    • mpi-settings/plain: This module lets the runtime decide which communication library should be used, which can change with the version of the runtime. Currently, this implies the direct use of libverbs, which is no longer recommended.

    • mpi-settings/CUDA: This module configures the runtime to enable CUDA-awareness via UCX. This implies disabling the shared memory plugin in pscom (in favour of the equivalent in UCX).

  • For OpenMPI there is a single possibility:

    • mpi-settings/plain: This module configures the runtime to run on JUSUF. It implies using UCX as the communication library. CUDA-awareness is enabled by default. Extra options can be examined with ompi_info -a. Please note that the default transports for UCX do not include CUDA transports; you have to load the appropriate UCX-settings module if you wish to enable them.
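
Only one mpi-settings module is active at a time, so loading a different one replaces the current choice. A minimal sketch for enabling CUDA-awareness in ParaStationMPI (this assumes the settings modules form a single Lmod family, as the automatic defaults suggest):

[user@system ~]$ module load Intel ParaStationMPI   # mpi-settings/UCX is loaded by default
[user@system ~]$ module load mpi-settings/CUDA      # replaces mpi-settings/UCX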

UCX is by far the most widely used communication library in all our MPIs, and it is also highly configurable. For that reason we also provide modules for configuring UCX; like the MPI settings modules, they are non-exhaustive. The recommended default is loaded automatically for each MPI. The list of modules is (a usage sketch follows the list):

  • UCX-settings/RC-CUDA enables the accelerated RC (Reliable Connected) and CUDA transports. RC is the default transport and the recommended choice for most cases.

  • UCX-settings/RC enables the accelerated RC (Reliable Connected) transport. Select this if you see warnings about initializing CUDA transports on nodes without GPUs. RC is the default transport and the recommended choice for most cases.

  • UCX-settings/UD-CUDA enables the accelerated UD (Unreliable Datagram) and CUDA transports. UD has a lower memory footprint than RC and can be recommended for medium-sized simulations.

  • UCX-settings/UD enables the accelerated UD (Unreliable Datagram) transport. UD has a lower memory footprint than RC and can be recommended for medium-sized simulations.

  • UCX-settings/DC-CUDA enables the DC (Dynamically Connected) and CUDA transports. DC is a relatively new transport and has not been tested exhaustively on our systems. Its memory footprint is low, so it can be recommended for very large simulations.

  • UCX-settings/DC enables the DC (Dynamically Connected) transport. DC is a relatively new transport and has not been tested exhaustively on our systems. Its memory footprint is low, so it can be recommended for very large simulations.

  • UCX-settings/plain disables the restriction of transports, so UCX decides on its own which transport should be used. Use this if you would like to rely on UCX’s heuristics. It is equivalent to unloading the UCX-settings module.
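
Switching transports works the same way: loading another UCX-settings module replaces the active one. A sketch (ml show reveals the environment variables, such as UCX_TLS, that each module sets):

[user@system ~]$ module load UCX-settings/UD   # swap the default RC transport for UD
[user@system ~]$ ml show UCX-settings/UD       # inspect the UCX variables it sets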

To see which options are enabled in each MPI runtime or in UCX, type ml show mpi-settings or ml show UCX-settings. To see a full list of UCX options, type ucx_info -c -f.

GPUs and modules

Software with specific GPU support is marked with a (g) at its side when listing modules. It can be reached by loading the compilers listed in the table in the previous section.

Finding software packages

There are three commands that are the main tools for locating software on JUSUF:

  • module avail

  • module spider <software>

  • module key <keyword or software>

Normally, the first two are enough. Occasionally, module key is necessary to search for keywords or for packages bundled inside a single module. An example is numpy, which is included in the SciPy-Stack module. In the example below, the module environment looks for all occurrences of numpy in the module descriptions, which helps to locate SciPy-Stack.

[user@system ~]$ module key numpy
----------------------------------------------------------------------------

The following modules match your search criteria: "numpy"
----------------------------------------------------------------------------

  SciPy-Stack: SciPy-Stack/2020-Python-3.8.5, SciPy-Stack/2021-Python-3.8.5
    SciPy Stack is a collection of open source software for scientific computing
    in Python.

  SciPy-bundle: SciPy-bundle/2021.10
    Bundle of Python packages for scientific software

[...]
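
Once module key has identified the module that bundles the package, module spider shows how to load it, as described above:

[user@system ~]$ module spider SciPy-Stack/2021-Python-3.8.5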

Stages

JUSUF goes through major scientific software updates once a year, in November, at the same time that new projects start their allocation time. We call these updates Stages. During these stage switches, the available software is updated to the latest stable releases. Typically this requires user applications to be recompiled. In such cases, there are two possible solutions:

  • Load the new versions of the required dependency modules and recompile.

  • Load the old Stage.

To load the old Stage, users should use these commands:

[user@system ~]$ module use $OTHERSTAGES
[user@system ~]$ module load Stages/2019a

The old software view then becomes available again, as it was before the stage switch. In the example above the desired Stage was 2019a; as new stage transitions happen, more Stages will become available.
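
To return to the default Stage later in the same session, you can load the current Stages module again, which swaps the software view back (starting a fresh shell achieves the same even more cleanly). A sketch, assuming 2022 is the current default:

[user@system ~]$ module load Stages/2022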

Stages Changelog

The changes between Stages are documented in the changelog of stages.

Scientific software at JSC

JSC provides a significant amount of software installed on its systems. In Scientific Application Software you can find an overview of what is supported and how to use it.

Requesting new software

It is possible to request new software to be installed on JUSUF. To do so, please send an email to sc@fz-juelich.de describing which software and version you need. Please note that installations are done on a “best effort” basis and might have limited support.