Environment

Shell

The login shell on all servers in the JUDAC cluster is /bin/bash. It is not possible to change the login shell, but users may switch to a personal shell within the login process. Users will find a template for this in the initial FZJ $HOME/.bashrc. However, please note that only bash is fully supported on JUDAC and that using an alternative shell may degrade the user experience on the system.
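
A minimal sketch of such a switch in $HOME/.bashrc, assuming zsh is installed under /usr/bin/zsh (the provided template may look different):

case $- in
*i*)
    # assumption: personal shell is zsh; replace bash for interactive sessions only
    [ -x /usr/bin/zsh ] && exec /usr/bin/zsh -l
    ;;
esac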

Available file systems

The following table gives an overview of the available file systems:

Variable    Storage Location       Accessibility   Description
$HOME       parallel file system   Access node     Storage of user-specific data (e.g. ssh keys)
$PROJECT    parallel file system   Access node     Storage of project-related source code, binaries, etc.
$SCRATCH    parallel file system   Access node     Scratch file system for temporary data
$FASTDATA   parallel file system   Access node     Storage location for large data (JUSTDSS)
$DATA       parallel file system   Access node     Storage location for large data (XCST)
$ARCHIVE    parallel file system   Access node     Storage location for archiving on tape

It is highly recommended to always access files with the help of these variables. $HOME is set automatically during login. The other variables are set by the jutil env init command. The user can point these core variables ($PROJECT, $ARCHIVE, …) at the right project by using:

jutil env activate -p <project>

For more information, see the jutil command usage.
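
As a hedged example, activating a project and inspecting the result might look like this (the project name cjsc and the printed paths are illustrative assumptions):

jutil env activate -p cjsc
echo $PROJECT    # e.g. /p/project/cjsc (the actual path layout may differ)
echo $SCRATCH    # e.g. /p/scratch/cjsc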

Within the current usage model, file systems are bound to compute or data projects. The following description is just an overview of how to use these file systems.

For further information, please see: What file system to use for different data?

File systems for compute projects

Each compute project has access to the following file systems.

Home directory ($HOME)

Home directories reside in the parallel file system. In order to hide the details of the home file system layout, the full path to the home directory of each user is stored in the shell environment variable $HOME. References to files in the home directory should always be made through the $HOME environment variable. The initialization of $HOME is performed during the login process.

The home directory is limited in space and should only be used for storing small user-specific data items (e.g. ssh keys, configuration files).

Project directory ($PROJECT)

Project directories also reside in the parallel file system. In order to hide the details of the project file system layout, the full path to these directories is stored in shell environment variables.

As an account can be bound to several projects, the variables are named accordingly: $PROJECT_c<project>. The data migrated at the transition to the new usage model was moved to the user-owned subdirectory $PROJECT_c<project>/account. Please note that the project directory itself is writable for the project members, i.e. different organization schemes within the project (for example, to enable easier sharing of data) are possible and entirely in the hands of the project PI and members.

To activate a certain project for a current session or switch between projects one can use the tool jutil.

During activation of a project, environment variables will be exported and the environment variable $PROJECT is set to $PROJECT_c<project>.
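
A quick way to see which project variables are defined in the current session (a sketch; the names depend on your project memberships):

env | grep '^PROJECT_c'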

The jutil tool can also be used to perform tasks like querying project information as well as CPU and data quotas.
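
For example, such queries might look as follows (the exact subcommands are assumptions; consult the jutil documentation for authoritative usage):

jutil user projects                    # list the projects of the current account
jutil project dataquota -p <project>   # show the data quota of a project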

Working directory ($SCRATCH)

Scratch directories are temporary storage locations residing in the parallel file system. They are used by applications with large storage and I/O demands. Data in these directories are for temporary use only; they are deleted automatically (files 90 days after their last modification or access, empty directories after 3 days). The structure of the scratch directory and the corresponding environment variables are similar to the project directory.
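
To spot files that are approaching the 90-day limit, a sketch like the following can help (the threshold is illustrative):

# list files in $SCRATCH neither modified nor accessed for more than 80 days
find "$SCRATCH" -type f -mtime +80 -atime +80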

File systems for data projects

File systems for data projects are used to store large data. The structure and environment variables are similar to $PROJECT and $SCRATCH. Data projects have to be applied for explicitly and are independent of compute projects.

Data directory ($FASTDATA)

Fastdata directories are used for applications with large data and I/O demands, similar to the scratch file system.

In contrast to $SCRATCH, data in $FASTDATA is permanent and protected by snapshots.

Data directory ($DATA)

Data directories are used to store huge amounts of data on disk-based storage. The bandwidth is lower than that of $FASTDATA. Access to these directories is available from login nodes only.

Archive directory ($ARCHIVE)

Archive directories are used to store files that are not in use for a longer period of time; the data are migrated to tape storage by TSM-HSM.

Programming environment, modules and libraries

The persistent settings of the shell environment are governed by the content of .bashrc, .profile or scripts sourced from within these files. Please use these files for storing your personal settings.

Only software required for data transfer and data management tasks is available on JUDAC. Currently, no module interface is provided.

Machine identification file

To simplify the handling of the shared $HOME file system across the different supercomputers, JSC provides a machine identification file /etc/FZJ/systemname on all systems. /etc/FZJ/systemname stores the system name (such as juwels, jureca, jusuf, …) and can be used to perform system-specific actions without the need to parse the hostname of the login or compute nodes.

Below, an example for handling different machines, e.g. in .profile or .bashrc, is provided:

MACHINE=$(cat /etc/FZJ/systemname)
if test "${MACHINE}" = "juwels"; then
    : # perform JUWELS specific actions (':' is a no-op placeholder; bash requires a non-comment body)
elif test "${MACHINE}" = "jureca"; then
    : # perform JURECA specific actions
fi

The machine name can also be read within a Makefile using:

$(shell cat /etc/FZJ/systemname)
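
A hedged Makefile fragment building on this (the variable name and the flag are illustrative placeholders):

MACHINE := $(shell cat /etc/FZJ/systemname)

ifeq ($(MACHINE),juwels)
    # hypothetical: options that should only apply on JUWELS
    CPPFLAGS += -DJUWELS
endif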

Transferring files with scp, rsync, etc.

Since outgoing SSH connections are not allowed, file transfers to and from JUDAC that use SSH as the underlying transport have to be initiated from the other system. So instead of

judac$ scp my_file local:

you have to initiate the copy from the local system:

local$ scp judac.fz-juelich.de:my_file .
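
The same rule applies to rsync; for example (the options shown are illustrative):

local$ rsync -av judac.fz-juelich.de:my_file .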

In some cases, it might not be possible to directly transfer files between another system and JUDAC. This might be because the other system also disallows outgoing SSH connections, or because the SSH client on the other system is too old and does not support the modern cryptographic algorithms required by JSC policy. As a workaround, the files have to be transferred through a third system that can make connections to both JUDAC and the other system. This can be automated with scp and its command line argument -3:

local$ scp -3 other.hpc.example.com:my_file judac.fz-juelich.de:

Note

An internet connection that provides fast speeds in both the upload and download direction is recommended for this approach, since all data passes through the system initiating the transfer.