Access/Environment

Dispatch

A prerequisite for using the JURECA system is to successfully pass through the application process. All details can be found on the website for the JURECA cluster (Dispatch).

Login Nodes

The JURECA system is accessible via ssh through 12 login nodes:

Generic Name: jureca
Specific Names: jureca01, jureca02, jureca03, jureca04, jureca05, jureca06, jureca07, jureca08, jureca09, jureca10, jureca11, jureca12
Domain: fz-juelich.de

Users outside the FZ campus have to use fully qualified names including the domain name. When using the generic name, a connection is established to one of the login nodes in the JURECA pool. Initiating two logins in sequence may therefore lead to sessions on different nodes. To force the session to be started on the same node, use the specific node names instead.

Example

$ ssh <userid>@jureca.fz-juelich.de
$ ssh <userid>@jureca01.fz-juelich.de

Login Procedure

Users cannot log in by supplying username/password credentials. Instead, a password-free login based on SSH key exchange is required.

The public/private ssh key pair has to be generated on the workstation you are using for accessing JURECA and needs to be uploaded to JURECA afterwards. For details on ssh key generation and upload, see the FAQ: How to generate ssh keys.
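For example, a key pair can be generated on the local workstation along these lines (a sketch; the ed25519 key type and file name are common choices, the linked FAQ is the authoritative reference):

$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519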

Note

Please make sure that your home directory on JURECA is not open for write access for other users or user groups, otherwise ssh access will be blocked.
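If group or world write permissions are set on the home directory, they can be removed with chmod (a standard fix; go-w removes write access for group and others):

$ chmod go-w $HOME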

After the ssh key has been generated and uploaded, a typical login looks like:

$ ssh <userid>@jureca.fz-juelich.de

Here, jureca is the generic name which establishes a connection to one of the login nodes from the set jureca[01-12].

Note

Too many accesses (ssh or scp) within a short amount of time will be interpreted as an intrusion and will lead to the originating system being automatically blocked at the FZJ firewall. To transfer multiple files in a single scp session, the -r option can be used, which allows transferring a whole directory.
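For example, a whole directory can be transferred in one session (the local directory and remote target path are placeholders):

$ scp -r <localdir> <userid>@jureca.fz-juelich.de:<targetdir>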

If X11-based graphical tools are to be used on JURECA, it may be necessary to enable X11 forwarding in /etc/ssh/ssh_config or $HOME/.ssh/config on your workstation:

PubkeyAuthentication yes
ForwardAgent yes
ForwardX11 yes

or to use the -X ssh option (possibly combined with -A for authentication agent forwarding):

$ ssh -AX <userid>@jureca.fz-juelich.de
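To avoid enabling agent and X11 forwarding for every host, the options can also be scoped to the JURECA login nodes in $HOME/.ssh/config (a sketch; the Host patterns are assumptions derived from the node names above):

Host jureca jureca.fz-juelich.de jureca??.fz-juelich.de
    PubkeyAuthentication yes
    ForwardAgent yes
    ForwardX11 yes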

Login Environment

Shell

The login shell on all servers in the JURECA cluster is /bin/bash. It is not possible to change the login shell, but users may switch to a personal shell within the login process. Users will find a template within the initial FZJ $HOME/.bashrc. However, please note that only bash is fully supported on JURECA and usage of alternative shells may degrade the user experience on the system.
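Such a switch within $HOME/.bashrc could look like the following (a minimal sketch of the mechanism, not the exact FZJ template; zsh merely serves as an example of a personal shell):

# Switch to zsh for interactive sessions only, if it is available.
if [ -n "$PS1" ] && command -v zsh >/dev/null 2>&1; then
    exec zsh
fi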

Available filesystems

The following table gives an overview over the available filesystems:

Variable   Storage Location  Accessibility    Description
$HOME      GPFS filesystem   Login + Compute  Storage of user-specific data (e.g. ssh keys)
$PROJECT   GPFS filesystem   Login + Compute  Storage of project-related source code, binaries, etc.
$SCRATCH   GPFS filesystem   Login + Compute  Scratch filesystem for temporary data
$FASTDATA  GPFS filesystem   Login + Compute  Storage location for large data (JUSTDSS)
$DATA      GPFS filesystem   Login            Storage location for large data (XCST)
$ARCHIVE   GPFS filesystem   Login            Storage location for archiving on tape

It is highly recommended to always access files through these variables. $HOME is set automatically during login. The other variables are set by the jutil env init command. The user can set the project-specific variables ($PROJECT, $ARCHIVE, ...) by using:

jutil env activate -p <project>

For more information, see the jutil command usage.
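A typical session could then look like this (the project name is a placeholder; the comments describe the expected effect):

$ jutil env activate -p <project>   # exports $PROJECT, $SCRATCH, ... for this project
$ echo "$PROJECT"                   # full path of the activated project directory
$ cd "$SCRATCH"                     # change to the project's scratch space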

Starting with the new usage model in November 2018, filesystems are bound to compute or data projects. The following description is just an overview of how to use these filesystems.

For further information, please see: What filesystem to use for different data?

Filesystems for compute projects

Each compute project has access to the following filesystems.

Home directory ($HOME)

Home directories reside in the GPFS filesystem. In order to hide the details of the home filesystem layout the full path to the home directory of each user is stored in the shell environment variable $HOME. References to files in the home directory should always be made through the $HOME environment variable. The initialization of $HOME will be performed during the login process.

The home directory is limited in space and should only be used for storing small user-specific data items (e.g. ssh keys, configuration files).

Project directory ($PROJECT)

Project directories reside in the GPFS filesystem, too. In order to hide the details of the project filesystem layout the full path to these directories is stored in shell environment variables.

As an account can be bound to several projects, the variables are named accordingly: $PROJECT_c<project>. The data migrated at the transition to the new usage model was moved to the user-owned subdirectory $PROJECT_c<project>/account. Please note that the project directory itself is writable for the project members, i.e. different organization schemes within the project (for example to enable easier sharing of data) are possible and entirely in the hands of the project PI and members.

To activate a certain project for the current session or to switch between projects, one can use the tool jutil.

During activation of a project, environment variables will be exported and the environment variable $PROJECT is set to $PROJECT_c<project>.

This tool can also be used to perform tasks like querying project information as well as CPU and data quotas.

Working directory ($SCRATCH)

Scratch directories are temporary storage locations residing in the GPFS filesystem. They are used for applications with large size and I/O demands. Data in these directories is only for temporary use; it is deleted automatically (files after 90 days based on modification and access date, empty directories after 3 days). The structure of the scratch directory and the corresponding environment variables are similar to the project directory.

Filesystems for data projects

Filesystems for data projects are used to store large data. The structure and environment variables are similar to $PROJECT and $SCRATCH. Data projects have to be explicitly applied for and are independent of compute projects.

Data directory ($FASTDATA)

Fastdata directories are used for applications with large data and I/O demands, similar to the scratch filesystem.

Contrary to $SCRATCH, data in $FASTDATA is permanent and backed up by TSM.

Data directory ($DATA)

Data directories are used to store huge amounts of data on disk-based storage. The bandwidth is lower than that of $FASTDATA. Access to these directories is available from login nodes only.

Archive directory ($ARCHIVE)

Archive directories are used to store all files not in use for a longer time; data are migrated to tape storage by TSM-HSM.

Programming environment, modules and libraries

The persistent settings of the shell environment are governed by the content of .bashrc, .profile or scripts sourced from within these files. Please use these files for storing your personal settings.

The common programming environment is maintained with the concept of software modules in the directory /usr/local/software. The framework provides a set of installed libraries and applications (including multiple-version support) and an easy-to-use interface (the module command) to set up the correct shell environment.
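Typical module operations look like this (a sketch; the module name is only an example, actual names on JURECA may differ):

$ module avail            # list the available software modules
$ module load GCC         # load a module into the current environment
$ module list             # show the currently loaded modules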

Machine identification file

To simplify the handling of the shared $HOME filesystem on the different supercomputers for users, JSC provides a machine identification file /etc/FZJ/systemname on all systems. /etc/FZJ/systemname stores the system name (such as juwels, jureca, ...) and can be used to perform system-specific actions without the need to parse the hostname of the login or compute nodes.

Below is an example for handling different machines, e.g. in .profile or .bashrc:

MACHINE=$(cat /etc/FZJ/systemname)
if test "${MACHINE}" = "juwels"; then
    :  # perform JUWELS specific actions here
elif test "${MACHINE}" = "jureca"; then
    :  # perform JURECA specific actions here
fi

The machine name can also be read within a Makefile using:

$(shell cat /etc/FZJ/systemname)
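This allows, for example, system-specific build settings (a minimal sketch assuming GNU Make; the flag is a placeholder):

MACHINE := $(shell cat /etc/FZJ/systemname)

ifeq ($(MACHINE),jureca)
CFLAGS += -O2   # placeholder for JURECA specific flags
endif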