Configuration
Hardware Configuration of the JURECA DC Module (Phase 2: as of May 2021)
- 480 standard compute nodes
2× AMD EPYC 7742, 2× 64 cores, 2.25 GHz
512 (16× 32) GB DDR4, 3200 MHz
InfiniBand HDR100 (NVIDIA Mellanox ConnectX-6)
diskless
- 96 large-memory compute nodes
2× AMD EPYC 7742, 2× 64 cores, 2.25 GHz
1024 (16× 64) GB DDR4, 3200 MHz
InfiniBand HDR100 (NVIDIA Mellanox ConnectX-6)
diskless
- 192 accelerated compute nodes
2× AMD EPYC 7742, 2× 64 cores, 2.25 GHz
512 (16× 32) GB DDR4, 3200 MHz
4× NVIDIA A100 GPU, 4× 40 GB HBM2e
2× InfiniBand HDR (NVIDIA Mellanox ConnectX-6)
diskless
- 12 login nodes
2× AMD EPYC 7742, 2× 64 cores, 2.25 GHz
1024 (16× 64) GB DDR4, 3200 MHz
2× NVIDIA Quadro RTX 8000
InfiniBand HDR100 (NVIDIA Mellanox ConnectX-6)
100 Gigabit Ethernet external connection
3.54 (CPU) + 14.98 (GPU) Petaflop per second peak performance (see the sketch after this list)
98,304 CPU cores, 768 GPUs
- Mellanox InfiniBand HDR (HDR100/HDR) DragonFly+ network
Ca. 15 Tb/s connection to Booster via gateway nodes
350 GB/s network connection to JUST for storage access
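The aggregate figures above follow directly from the per-node counts. The short C sketch below reproduces them; the per-core and per-GPU peak rates it uses (16 double-precision FLOP per cycle per Zen 2 core, 19.5 TFLOP/s FP64 Tensor Core peak per A100) are illustrative assumptions, not values stated in the list above.

#include <stdio.h>

int main(void) {
    /* Node counts taken from the Phase 2 list above */
    const long std_nodes = 480, large_mem_nodes = 96, gpu_nodes = 192;
    const long cores_per_node = 2 * 64;   /* 2x AMD EPYC 7742 */
    const long gpus_per_node  = 4;        /* 4x NVIDIA A100 per accelerated node */

    long nodes = std_nodes + large_mem_nodes + gpu_nodes;   /* 768 nodes */
    long cores = nodes * cores_per_node;                    /* 98,304    */
    long gpus  = gpu_nodes * gpus_per_node;                 /* 768       */

    /* Assumed peak rates: 16 DP FLOP/cycle per Zen 2 core at 2.25 GHz,
       19.5 TFLOP/s FP64 Tensor Core peak per A100. */
    double cpu_peak_pflops = cores * 2.25e9 * 16.0 / 1e15;  /* ~3.54  */
    double gpu_peak_pflops = gpus * 19.5e12 / 1e15;         /* ~14.98 */

    printf("%ld CPU cores, %ld GPUs\n", cores, gpus);
    printf("~%.2f (CPU) + ~%.2f (GPU) Petaflop per second peak\n",
           cpu_peak_pflops, gpu_peak_pflops);
    return 0;
}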
Software Overview (since Dec. 2018)
Rocky Linux 8 distribution (as of Dec. 2021)
ParaStation Modulo
Slurm batch system with ParaStation resource management
- DC module software stack
- Intel Professional Fortran, C/C++ Compiler
Support for the OpenMP programming model for intra-node parallelization (see the hybrid MPI/OpenMP sketch after this overview)
ParTec ParaStation MPI (Message Passing Interface) implementation
Intel MPI
Open MPI
IBM Spectrum Scale (GPFS) parallel file system
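As a rough illustration of the hybrid programming model this stack targets (MPI between nodes or sockets, OpenMP across the 128 cores of a DC node), a minimal C sketch follows. The build and launch lines in the leading comment show conventional MPI-wrapper and Slurm usage and are illustrative assumptions, not site-specific commands taken from this overview.

/* Minimal hybrid MPI + OpenMP sketch for a DC node (2x 64-core EPYC 7742).
 * Typical build (assumed, conventional wrapper name):
 *   mpicc -fopenmp -O2 hello_hybrid.c -o hello_hybrid
 * Typical launch under Slurm, e.g. one rank per socket, 64 threads each:
 *   srun --ntasks-per-node=2 --cpus-per-task=64 ./hello_hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, nranks;

    /* Request a threading level that allows OpenMP regions inside each rank */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    #pragma omp parallel
    {
        /* Each OpenMP thread reports its placement within its MPI rank */
        #pragma omp critical
        printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}

Because ParaStation MPI, Intel MPI and Open MPI all implement the same MPI standard, the same source is expected to build with any of them; only the compiler wrapper behind mpicc differs between environments.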
Old Configuration Information
Hardware Configuration of the JURECA DC Module (Phase 1: 2020 - Apr. 2021)
- 288 standard compute nodes
2× AMD EPYC 7742, 2× 64 cores, 2.25 GHz
512 (16× 32) GB DDR4, 3200 MHz
InfiniBand HDR100 (ConnectX-6)
diskless
- 96 large-memory compute nodes
2× AMD EPYC 7742, 2× 64 cores, 2.25 GHz
1024 (16× 64) GB DDR4, 3200 MHz
InfiniBand HDR100 (ConnectX-6)
diskless
- 48 accelerated compute nodes
2× AMD EPYC 7742, 2× 64 cores, 2.25 GHz
512 (16× 32) GB DDR4, 3200 MHz
4× NVIDIA A100 GPU, 4× 40 GB HBM2e
2× InfiniBand HDR (ConnectX-6)
diskless
- 12 login nodes
2× AMD EPYC 7742, 2× 64 cores, 2.25 GHz
1024 (16× 64) GB DDR4, 3200 MHz
2× NVIDIA Quadro RTX 8000
InfiniBand HDR100 (ConnectX-6)
100 Gigabit Ethernet external connection
55,296 CPU cores, 192 GPUs
- Mellanox InfiniBand HDR (HDR100/HDR) DragonFly+ network
Ca. 5 Tb/s connection to Booster via gateway nodes
350 GB/s network connection to JUST for storage access
Hardware Configuration of the Cluster Module (2015 - 2020)
- 1872 compute nodes
- Two Intel Xeon E5-2680 v3 Haswell CPUs per node
2× 12 cores, 2.5 GHz
Intel Hyperthreading Technology (Simultaneous Multithreading)
AVX 2.0 ISA extension
- 75 compute nodes equipped with two NVIDIA K80 GPUs (four visible devices per node)
2× 4992 CUDA cores
2× 24 GiB GDDR5 memory
- DDR4 memory technology (2133 MHz)
1605 compute nodes with 128 GiB memory
128 compute nodes with 256 GiB memory
64 compute nodes with 512 GiB memory
- 12 visualization nodes
Two Intel Xeon E5-2680 v3 Haswell CPUs per node
- Two NVIDIA K40 GPUs per node
2× 12 GiB GDDR5 memory
10 nodes with 512 GiB memory
2 nodes with 1024 GiB memory
Login nodes with 256 GiB memory per node
45,216 CPU cores
1.8 (CPU) + 0.44 (GPU) Petaflop per second peak performance
Based on the T-Platforms V-class server architecture
Mellanox EDR InfiniBand high-speed network with non-blocking fat tree topology
100 GiB per second network connection to JUST for storage access
Hardware Configuration of the JURECA Booster Module (2017 - 09/2022)
- 1640 compute nodes
1× Intel Xeon Phi 7250-F Knights Landing, 68 cores, 1.4 GHz
Intel Hyperthreading Technology (Simultaneous Multithreading)
AVX-512 ISA extension
96 GiB memory plus 16 GiB MCDRAM high-bandwidth memory
Shared login infrastructure with the JURECA-DC module
111,520 CPU cores
5 Petaflop per second peak performance
Intel Omni-Path Architecture high-speed network with non-blocking fat tree topology
100+ GiB per second network connection to JUST for storage access
Software Overview (until Dec. 2018)
CentOS 7 Linux distribution
ParaStation Cluster Management
Slurm batch system with ParaStation resource management
- Intel Professional Fortran, C/C++ Compiler
Support for OpenMP programming model for intra-node parallelization
Intel Math Kernel Library
ParTec ParaStation MPI (Message Passing Interface) implementation
Intel MPI (Message Passing Interface) implementation
IBM Spectrum Scale (GPFS) parallel filesystem
Booster software stack (Dec. 2018 - Sep. 2022)
Intel Professional Fortran, C/C++ Compiler
Intel Math Kernel Library
ParTec ParaStation MPI
Intel MPI