Quickstart guide to HPC
This document provides information relevant to users migrating from other HPC systems to supercomputers and clusters at the MPCDF.
Software environment (modules)
A new module system with a hierarchical structure has been introduced. Starting with Raven, no defaults are defined for the compiler and MPI library modules. Users need to explicitly specify the full version for compiler and MPI modules during compilation and in batch scripts to ensure compatibility of the MPI library.
Due to the hierarchical module environment, many libraries and applications only become visible after a compiler module and, subsequently, an MPI module have been loaded (the order is important here: first load the compiler, then the MPI module). These modules provide libraries and software consistently compiled with and for the user-selected combination of compiler and MPI library.
To search the full hierarchy, the find-module command can be used. All fftw-mpi modules, for example, can be found using find-module fftw-mpi.
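A minimal sketch of this workflow (the module versions shown are placeholders; check the currently installed versions with find-module or module avail):

# search the full hierarchy for all fftw-mpi builds
find-module fftw-mpi

# load a compiler first, then a matching MPI module (versions are placeholders)
module load intel/2021.2.0
module load impi/2021.2

# MPI-dependent libraries such as fftw-mpi are now visible and can be loaded
module load fftw-mpi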
MPI parallel HPC applications
By default, the Intel compilers for Fortran/C/C++ and the Intel MPI library are recommended on Raven.
The MPI wrapper executables are mpiifort, mpiicc and mpiicpc. These wrappers pass include and link information for MPI, together with compiler-specific command line flags, down to the Intel compiler.
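As an illustrative sketch (source file names and optimization flags are placeholders):

# compile and link MPI programs with the Intel wrappers
mpiifort -O2 -o prog_f prog.f90
mpiicc   -O2 -o prog_c prog.c
mpiicpc  -O2 -o prog_cxx prog.cpp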
More information and links to the reference guides for the Intel compilers are provided in the Intel compiler documentation.
In addition, the GNU Compiler Collection, GCC (C, C++, Fortran), is supported for use with Intel MPI.
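A hedged sketch for this combination, assuming that with a gcc and an impi module loaded the generic Intel MPI wrappers mpicc, mpicxx and mpif90 pick up the GNU compilers (module versions are placeholders):

# load GCC and Intel MPI instead of the Intel compilers
module load gcc/11 impi/2021.2

# build with the generic wrappers, which then drive GCC
mpicc  -O2 -o prog_c prog.c
mpif90 -O2 -o prog_f prog.f90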
Multithreaded (OpenMP) or hybrid (MPI/OpenMP) HPC applications
To compile and link OpenMP applications, pass the flag -qopenmp to the Intel compiler. Note that recent Intel compilers no longer support the legacy -openmp option.
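For example (source file names are placeholders):

# pure OpenMP program with the Intel Fortran compiler
ifort -O2 -qopenmp -o omp_prog omp_prog.f90

# hybrid MPI/OpenMP program: MPI wrapper plus the OpenMP flag
mpiifort -O2 -qopenmp -o hybrid_prog hybrid_prog.f90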
In some cases it is necessary to increase the private stack size of the threads at runtime, e.g. when threaded applications exit with segmentation faults. On the systems at the MPCDF, a value of OMP_STACKSIZE=256MB is set (in order to relax the very small default value of 4 megabytes). For example, to request a larger stack size of 512 megabytes, set the environment variable OMP_STACKSIZE=512m in the Slurm job script.
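For instance, in the Slurm job script (the OMP_NUM_THREADS line is an additional, commonly used setting and assumes that --cpus-per-task is specified):

# raise the per-thread stack from the preset 256 MB to 512 MB
export OMP_STACKSIZE=512m
# one OpenMP thread per allocated core (assumption: --cpus-per-task is set in the job script)
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}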
For information on compiling applications which use pthreads please consult the Intel C/C++ compiler documentation.
Intel Math Kernel Library (MKL) overview
The Intel Math Kernel Library (MKL) is provided as the standard high-performance mathematical library. MKL provides highly optimized implementations of (among others)
LAPACK/BLAS routines,
direct and iterative solvers,
FFT routines,
ScaLAPACK.
Parts of the library support thread or distributed-memory parallelism.
Extensive information on the features and the usage of MKL is provided by the official Intel MKL documentation.
Linking programs with MKL
To use MKL, first load the environment module mkl. The module sets the environment variables MKL_HOME and MKLROOT to the installation directory of MKL. These variables can then be used in makefiles and scripts.
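A quick sketch of this:

# load the MKL module and inspect the installation directory
module load mkl
echo ${MKLROOT}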
The Intel MKL Link Line Advisor is often useful to obtain information on how to link programs with MKL. For example, to link statically with the threaded version of MKL on Raven (Linux, Intel64) using standard 32-bit integers, pass the following command line arguments to the Intel compiler:
-Wl,--start-group
${MKLROOT}/lib/intel64/libmkl_intel_lp64.a
${MKLROOT}/lib/intel64/libmkl_intel_thread.a
${MKLROOT}/lib/intel64/libmkl_core.a
-Wl,--end-group
-liomp5 -lpthread -lm -ldl
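Put together into a single, illustrative compile-and-link command (source file name and optimization flag are placeholders):

ifort -O2 myprog.f90 -o myprog \
  -Wl,--start-group \
  ${MKLROOT}/lib/intel64/libmkl_intel_lp64.a \
  ${MKLROOT}/lib/intel64/libmkl_intel_thread.a \
  ${MKLROOT}/lib/intel64/libmkl_core.a \
  -Wl,--end-group \
  -liomp5 -lpthread -lm -ldl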
Execution of (parallel) programs via Slurm
Parallel programs on the HPC systems are started with srun (see also the man page of srun), not with mpirun or mpiexec.
For production runs it is necessary to run the MPI program as a batch job. Please refer to the sections on the Slurm batch system and to the sample batch job scripts contained in the user guides for Raven and Viper for further information.
Keep in mind that the compiler and MPI modules that were loaded for compilation also need to be loaded in the batch script, in exactly the same versions.
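A minimal sketch of such a batch script (module versions, resource requests and the executable name ./myprog are placeholders):

#!/bin/bash -l
#SBATCH -J myjob                  # job name (placeholder)
#SBATCH --nodes=2                 # illustrative resource request
#SBATCH --ntasks-per-node=72
#SBATCH --time=01:00:00

module purge
# load exactly the same compiler and MPI versions that were used for compilation (placeholders)
module load intel/2021.2.0 impi/2021.2

# start the MPI program with srun, not mpirun or mpiexec
srun ./myprog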