Quickstart guide to HPC
This document provides information relevant to users migrating from other HPC systems to supercomputers and clusters at the MPCDF.
Software environment (modules)
A new module system with a hierarchical structure has been introduced. Starting with Raven, no defaults are defined for the compiler and MPI library modules. Users need to explicitly specify the full versions of the compiler and MPI modules during compilation and in batch scripts to ensure that a consistent MPI library is used.
Due to the hierarchical module environment, many libraries and applications only appear after loading a compiler, and subsequently an MPI module (the order is important here: first load the compiler, then the MPI module). These modules provide libraries and software consistently compiled with/for the user-selected combination of compiler and MPI library.
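A minimal sketch of loading the hierarchy in the required order (the module names are real, but the version strings below are hypothetical placeholders; use `module avail` to see what is actually installed):

```shell
# Load a compiler first, then the matching MPI module (versions are placeholders)
module purge
module load intel/21.5.0
module load impi/2021.5
# Only now do MPI-dependent libraries such as fftw-mpi become visible
module avail fftw-mpi
```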
To search the full hierarchy, the find-module command can be used; all fftw-mpi modules, for example, can be found this way.
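A short sketch of such a search, assuming the find-module command is on the PATH:

```shell
# Search the entire module hierarchy for fftw-mpi variants,
# regardless of which compiler/MPI modules are currently loaded
find-module fftw-mpi
```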
MPI parallel HPC applications
By default, the Intel compilers for Fortran/C/C++ and the Intel MPI library are recommended on Raven.
The MPI wrapper executables are mpiifort, mpiicc, and mpiicpc (the standard Intel MPI wrappers for the Intel Fortran, C, and C++ compilers). These wrappers pass include and link information for MPI, together with compiler-specific command line flags, down to the Intel compiler.
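A hedged compile example using these wrappers (the source file names are placeholders):

```shell
# Compile an MPI Fortran program and an MPI C program with the Intel MPI wrappers
mpiifort -O2 -o hello_f hello.f90
mpiicc   -O2 -o hello_c hello.c
```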
More information and links to the reference guides for the Intel compilers are provided on the following pages.
In addition, the GNU Compiler Collection, GCC (C, C++, Fortran), is supported for use with Intel MPI.
Multithreaded (OpenMP) or hybrid (MPI/OpenMP) HPC applications
To compile and link OpenMP applications, pass the flag -qopenmp to the Intel compiler. Note that recent Intel compilers no longer support the legacy -openmp option.
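For example, a hybrid MPI/OpenMP code can be built by combining the MPI wrapper with this flag (the source file name is a placeholder):

```shell
# Build a hybrid MPI/OpenMP executable with the Intel wrapper and -qopenmp
mpiifort -O2 -qopenmp -o hybrid hybrid.f90
```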
In some cases it is necessary to increase the private stack size of the threads at runtime, e.g. when threaded applications exit with segmentation faults. On Cobra, the thread-private stack size is set via the environment variable OMP_STACKSIZE. On Raven, the intel module presets a value of OMP_STACKSIZE=256MB (relaxing the very small default of 4 megabytes). For example, to request a stack size of 512 megabytes on Cobra, set OMP_STACKSIZE=512m.
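The Cobra example above can be sketched as follows (the thread count and executable name are hypothetical placeholders):

```shell
# Request a 512 MB thread-private stack before launching the application
export OMP_STACKSIZE=512m
export OMP_NUM_THREADS=8   # example thread count
srun ./hybrid              # placeholder executable name
```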
For information on compiling applications which use pthreads please consult the Intel C/C++ compiler documentation.
Intel Math Kernel Library (MKL) overview
The Intel Math Kernel Library (MKL) is provided as the standard high-performance mathematical library. MKL provides highly optimized implementations of, among others,
- BLAS, LAPACK, and ScaLAPACK routines,
- fast Fourier transforms,
- direct and iterative solvers,
- vector math functions and random number generators.
Parts of the library support threaded or distributed-memory parallelism.
Extensive information on the features and the usage of MKL is provided by the official Intel MKL documentation.
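Linking against MKL with the Intel compilers is typically done via a single flag; a hedged sketch, assuming a recent Intel compiler that accepts -qmkl (older releases use -mkl instead; the source file name is a placeholder):

```shell
# Link against the sequential MKL layer
ifort -O2 -qmkl=sequential -o solver solver.f90
# Link against the threaded MKL layer (combine with -qopenmp in OpenMP codes)
ifort -O2 -qopenmp -qmkl=parallel -o solver_omp solver.f90
```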
Execution of (parallel) programs via Slurm
Parallel programs on the HPC systems are started with srun (see also the man page of srun). For production runs it is necessary to run the MPI program as a batch job. Please refer to the sections on the Slurm batch system and to the sample batch job scripts contained in the user guides for Raven and Cobra.
Be reminded that the compiler and MPI modules that were loaded for compilation also need to be loaded in the batch script, in exactly the same versions.
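The points above can be sketched in a minimal Slurm batch script (the job name, node counts, module versions, and executable name are hypothetical placeholders; consult the Raven and Cobra user guides for the recommended headers):

```shell
#!/bin/bash -l
#SBATCH -J myjob              # job name (placeholder)
#SBATCH --nodes=2             # example node count
#SBATCH --ntasks-per-node=72  # example tasks per node
#SBATCH --time=01:00:00

# Load exactly the same compiler and MPI modules used for compilation
module purge
module load intel/21.5.0      # placeholder version
module load impi/2021.5       # placeholder version

srun ./myprog                 # placeholder executable
```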