Software Modules

This article describes how to find and use pre-installed software on JUSUF.

Basic module usage

Loading a module sets environment variables to give you access to a specific set of software and its dependencies. We use a hierarchical organization of modules to ensure that you get a consistent software stack, e.g., all built with the same compiler version or all relying on the same implementation of MPI. Please note that the module organization on JURECA, JUWELS, and JUSUF is identical, but the installed software packages may differ.

What this means on JUSUF is that multiple compilers and MPI runtimes are available. As a JUSUF user, your first task is to load the desired compiler. The available compilers, as well as some other compiler-independent tools, can be listed with the module avail command:

[user@system ~]$ module avail

-------------------------- Core packages ---------------------------
   Advisor/2020_update3
   Autotools/20200321
   Autotools/20200321                                 (D)

   [...]

   unzip/6.0
   xpra/4.0.4-Python-3.8.5
   zsh/5.8

---------------------------- Compilers -----------------------------
   GCC/9.3.0                         NVHPC/20.9-GCC-9.3.0  (g)
   Intel/2020.2.254-GCC-9.3.0        NVHPC/20.11-GCC-9.3.0 (g,D)
   NVHPC/20.7-GCC-9.3.0       (g)

----------------- User-based install configuration -----------------
   UserInstallations/easybuild

  Where:
   g:        built for GPU
   L:        Module is loaded
   Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2" will load foo/1.2.3
   D:        Default Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".

Once you have chosen a compiler, you can load it with module load <compiler>:

[user@system ~]$ module load Intel

You can verify which modules you have loaded with module list:

[user@system ~]$ module list

Currently Loaded Modules:
  1) GCCcore/.9.3.0 (H)   4) StdEnv/2020
  2) zlib/.1.2.11   (H)   5) Intel/2020.2.254-GCC-9.3.0
  3) binutils/.2.34 (H)

  Where:
   H:  Hidden Module

Note that the module environment loads the required dependencies, even if they are hidden. Loading the Intel compiler gives you access to a set of software compatible with your selection, which again can be listed with module avail:

[user@system ~]$ module avail

------------ MPI runtimes available for Intel compilers ------------
   IntelMPI/2019.8.254        ParaStationMPI/5.4.7-1-mt (g)
   OpenMPI/4.1.0rc1    (g)    ParaStationMPI/5.4.7-1    (g,D)

-------------- Packages compiled with Intel compilers --------------
   GSL/2.6       (D)    METIS/5.1.0-RTW64-IDX32        libxc/3.0.1
   HDF/4.2.15           METIS/5.1.0             (D)    libxc/4.3.4    (D)
   HDFView/3.1.1        kim-api/2.1.3                  libxsmm/1.16.1

-------------------------- Core packages ---------------------------
   Advisor/2020_update3
   Autotools/20200321
   Autotools/20200321                                 (D)

   [...]

   unzip/6.0
   xpra/4.0.4-Python-3.8.5
   zsh/5.8

---------------------------- Compilers -----------------------------
   GCC/9.3.0                         NVHPC/20.9-GCC-9.3.0  (g)
   Intel/2020.2.254-GCC-9.3.0 (L)    NVHPC/20.11-GCC-9.3.0 (g,D)
   NVHPC/20.7-GCC-9.3.0       (g)

----------------- User-based install configuration -----------------
   UserInstallations/easybuild

  Where:
   g:        built for GPU
   L:        Module is loaded
   Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2" will load foo/1.2.3
   D:        Default Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".

Among these newly available modules, the most important ones are the MPI runtimes, which appear at the top of the listing. Loading an MPI runtime will in turn give you access to software built on top of that runtime. Please note that when multiple versions of a module are available, module load picks the default version, the one marked with a (D).

[user@system ~]$ module load ParaStationMPI
[user@system ~]$ module avail

--------------------- ParaStationMPI settings ----------------------
   mpi-settings/CUDA-low-latency    mpi-settings/plain     (L,D)
   mpi-settings/CUDA                mpi-settings/UCX-plain

---- Packages compiled with ParaStationMPI and Intel compilers -----
   ABINIT/8.10.3                    ParMETIS/4.0.3-double
   ARPACK-NG/3.7.0                  ParMETIS/4.0.3                       (D)
   ASE/3.19.2-Python-3.8.5          QuantumESPRESSO/6.6

   [...]

   PETSc/3.14-int8                  sprng/5-14042019                     (D)
   PETSc/3.14                (D)    sundials/5.4.0
   PLUMED/2.6.1                     yaff/1.6.0-Python-3.8.5

------------ MPI runtimes available for Intel compilers ------------
   IntelMPI/2019.8.254        ParaStationMPI/5.4.7-1-mt (g)
   OpenMPI/4.1.0rc1    (g)    ParaStationMPI/5.4.7-1    (g,L,D)

-------------- Packages compiled with Intel compilers --------------
   GSL/2.6       (D)    METIS/5.1.0-RTW64-IDX32        libxc/3.0.1
   HDF/4.2.15           METIS/5.1.0             (D)    libxc/4.3.4    (D)
   HDFView/3.1.1        kim-api/2.1.3                  libxsmm/1.16.1

-------------------------- Core packages ---------------------------
   Advisor/2020_update3
   Autotools/20200321
   Autotools/20200321                                 (D)

   [...]

   unzip/6.0
   xpra/4.0.4-Python-3.8.5
   zsh/5.8

---------------------------- Compilers -----------------------------
   GCC/9.3.0                         NVHPC/20.9-GCC-9.3.0  (g)
   Intel/2020.2.254-GCC-9.3.0 (L)    NVHPC/20.11-GCC-9.3.0 (g,D)
   NVHPC/20.7-GCC-9.3.0       (g)

----------------- User-based install configuration -----------------
   UserInstallations/easybuild

  Where:
   g:        built for GPU
   L:        Module is loaded
   Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2" will load foo/1.2.3
   D:        Default Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".
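
To select a version other than the (D) default, give the full name as shown by module avail. A minimal sketch, using the two libxc versions from the listing above:

[user@system ~]$ module load libxc/3.0.1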

Sometimes, as a user, you simply want to find out which modules you have to load in order to load a particular software package or application. module spider helps with that task: it searches the whole hierarchy and reports the specific module combinations that enable loading of that package:

[user@system ~]$ module spider gromacs

----------------------------------------------------------------------------
  GROMACS:
----------------------------------------------------------------------------
    Description:
      GROMACS is a versatile package to perform molecular dynamics, i.e.
      simulate the Newtonian equations of motion for systems with hundreds
      to millions of particles. It is primarily designed for biochemical
      molecules like proteins and lipids that have a lot of complicated
      bonded interactions, but since GROMACS is extremely fast at
      calculating the non-bonded interactions (that usually dominate
      simulations) many groups are also using it for research on
      non-biological systems, e.g. polymers. Documentation =============
      Use `gmx` to execute GROMACS commands on a single node, for example,
      to prepare your run. Use `gmx_mpi mdrun` in your job scripts with
      `srun`. The user documentation of GROMACS can be found at
      http://manual.gromacs.org/documentation/current/user-guide/index.html.


     Versions:
        GROMACS/2020.4-plumed
        GROMACS/2020.4

----------------------------------------------------------------------------
  For detailed information about a specific "GROMACS" package (including how to load the modules) use the module's full name.
  Note that names that have a trailing (E) are extensions provided by other modules.
  For example:

     $ module spider GROMACS/2020.4
----------------------------------------------------------------------------
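
Following the hint at the end of the spider output with the full module name reveals which parent modules are required; once those are loaded, the package itself can be loaded. A sketch, assuming spider reports Intel and ParaStationMPI as the prerequisites:

[user@system ~]$ module spider GROMACS/2020.4
[user@system ~]$ module load Intel ParaStationMPI GROMACS/2020.4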

Currently, more than 700 packages are installed per Stage (see Stages). To keep the view clean and uncluttered, a significant number of these packages (mostly helper libraries) are hidden. If you want to see them, use module --show-hidden avail:

[user@system ~]$ module --show-hidden avail

--------------------- ParaStationMPI settings ----------------------
   mpi-settings/CUDA-low-latency    mpi-settings/plain     (L,D)
   mpi-settings/CUDA                mpi-settings/UCX-plain

---- Packages compiled with ParaStationMPI and Intel compilers -----
   ABINIT/8.10.3
   ARPACK-NG/3.7.0                      (D)
   ASE/3.19.2-Python-3.8.5

   [...]

   sprng/5-14042019                     (D)
   sundials/5.4.0
   yaff/1.6.0-Python-3.8.5

------------ MPI runtimes available for Intel compilers ------------
   IntelMPI/2019.8.254        ParaStationMPI/5.4.7-1-mt (g)
   OpenMPI/4.1.0rc1    (g)    ParaStationMPI/5.4.7-1    (g,L,D)

-------------- Packages compiled with Intel compilers --------------
   GSL/2.6                 (D)    METIS/5.1.0     (D)    libxc/3.0.1
   HDF/4.2.15                     UDUNITS/.2.2.26 (H)    libxc/4.3.4    (D)
   HDFView/3.1.1                  kim-api/2.1.3          libxsmm/1.16.1
   METIS/5.1.0-RTW64-IDX32        libctl/.4.5.0   (H)

-------------------------- Core packages ---------------------------
   ACTC/.1.1                                          (H)
   ANTLR/.2.7.7-Python-3.8.5                          (H)
   APR-util/.1.6.1                                    (H)

   [...]

   zlib/.1.2.11                                       (H)
   zlib/.1.2.11                                       (H,L)
   zsh/5.8

---------------------------- Compilers -----------------------------
   GCC/9.3.0                           NVHPC/20.7-GCC-9.3.0  (g)
   GCCcore/.9.3.0             (H,L)    NVHPC/20.9-GCC-9.3.0  (g)
   Intel/2020.2.254-GCC-9.3.0 (L)      NVHPC/20.11-GCC-9.3.0 (g,D)

----------------- User-based install configuration -----------------
   UserInstallations/easybuild

  Where:
   g:        built for GPU
   L:        Module is loaded
   Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2" will load foo/1.2.3
   D:        Default Module
   H:        Hidden Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".

Available compilers

JUSUF has three major compilers available: GCC, Intel, and NVHPC. With these compilers we build full toolchains (MPI runtimes, math libraries, applications, etc.). Additionally, AOCC and Clang are also available.

The table below shows the compiler, MPI, and basic mathematical library (BLAS, LAPACK, FFTW, ScaLAPACK) combinations that have been made available on JUSUF. Note that at the moment Intel MKL is the primary math library, but the deployment of BLIS and LibFLAME is planned for the near future.

Compiler   MPI                     Math library
--------   ---------------------   ------------
GCC        OpenMPI                 Intel MKL
GCC        ParaStationMPI          Intel MKL
Intel      IntelMPI                Intel MKL
Intel      OpenMPI                 Intel MKL
Intel      ParaStationMPI          Intel MKL
Intel      ParaStationMPI-mt [1]   Intel MKL
NVHPC      ParaStationMPI          Intel MKL
NVHPC      OpenMPI                 Intel MKL

[1] ParaStationMPI with the -mt suffix allows the MPI runtime to be called from multiple threads at the same time (MPI_THREAD_MULTIPLE).
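
Assembling one of these toolchains means loading its components in hierarchy order. A minimal sketch for the Intel + ParaStationMPI + Intel MKL row (imkl as the MKL module name is an assumption based on the EasyBuild naming convention used at JSC):

[user@system ~]$ module load Intel ParaStationMPI imkl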

MPI runtimes

JUSUF has three major MPI runtimes available, as shown in the table above. Of these, ParaStationMPI and OpenMPI are CUDA-aware (i.e., they can directly communicate buffers placed in GPU memory).

All MPI runtimes load a default mpi-settings module, which configures the runtime suitably for most users. JSC provides a set of these modules that accommodate a few different use cases. Use the following command for an overview of the currently available mpi-settings modules:

[user@system ~]$ module spider mpi-settings

A prominent example is the mpi-settings/CUDA module, which configures ParaStationMPI or OpenMPI for CUDA-awareness by setting the corresponding environment variables. It is loaded automatically when an MPI runtime is loaded after CUDA.
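
A sketch of that load order, assuming a CUDA module is available in the current Stage:

[user@system ~]$ module load GCC CUDA
[user@system ~]$ module load ParaStationMPI
[user@system ~]$ module list   # mpi-settings/CUDA now appears among the loaded modules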

GPUs and modules

Software packages with specific GPU support are marked with a (g) next to their name when listing modules. They can be reached by loading the compilers listed in the table in the previous section.
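
For instance, the NVHPC compiler carries the (g) marker in the listings above; loading it exposes the GPU-enabled part of the stack:

[user@system ~]$ module load NVHPC
[user@system ~]$ module avail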

Finding software packages

Three commands are the main tools for locating software on JUSUF:

  • module avail

  • module spider <software>

  • module key <keyword or software>

Normally, the first two are enough. Occasionally, module key is needed to search for keywords or for packages bundled into a single module. An example is numpy, which is included in the SciPy-Stack module. In the example below, the module environment looks for all occurrences of numpy in the module descriptions, which helps to locate SciPy-Stack.

[user@system ~]$ module key numpy
----------------------------------------------------------------------------

The following modules match your search criteria: "numpy"
----------------------------------------------------------------------------

  Jupyter: Jupyter/2018a-Python-3.6.5-damian, ...
    Project Jupyter exists to develop open-source software, open-standards,
    and services for interactive computing across dozens of programming
    languages.

  SciPy-Stack: SciPy-Stack/2018a-Python-2.7.14, ...
    SciPy Stack is a collection of open source software for scientific
    computing in Python.

[...]

Stages

JUSUF goes through a major scientific software update once a year in November, at the same time that new projects start their allocation period. We call these updates Stages. During a stage switch, the available software is updated to the latest stable releases. Typically this requires that user applications be recompiled. In such cases, there are two possible solutions:

  • Load the new versions of the required dependency modules and recompile.

  • Load the old Stage.

To load the old Stage, use the following commands:

[user@system ~]$ module use $OTHERSTAGES
[user@system ~]$ module load Stages/2019a

The old software view then becomes available again, as it was before the stage switch. In the example above the desired Stage was 2019a, but as new stage transitions happen, more options will become available.
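
To return to the current default Stage in the same session, loading the corresponding Stages module swaps it back, since Lmod keeps only one module of a given name loaded. A sketch, assuming 2020 is the current default Stage:

[user@system ~]$ module load Stages/2020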

Scientific software at JSC

A significant amount of software is installed on JSC systems. In Scientific Application Software you can find an overview of what is supported and how to use it.

Requesting new software

It is possible to request new software to be installed on JUSUF. To do so, please send an email to sc@fz-juelich.de describing which software and version you need. Please note that this will be done on a “best effort” basis and might have limited support.