# Libraries

## Intel Math Kernel Library

The Intel Math Kernel Library (MKL) is installed on the Linux systems and can be loaded using the modules environment. Just enter

`module load mkl`

to get access to the Math Kernel Library.

The MKL library is composed of highly optimized mathematical routines. It contains, among others, LAPACK, the Basic Linear Algebra Subprograms (BLAS) and the extended BLAS. For parallel computing it provides ScaLAPACK, the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS).

Detailed information can be found on the Intel Math Kernel Library website.

The link line for binding MKL to your application depends on the MKL functions used and on whether parallel or sequential execution is required. You can find detailed information with the Intel MKL Link Line Advisor at https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor, or contact the application support team at MPCDF.
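As an illustration, a link line for the sequential (non-threaded) MKL with the Intel compilers on Linux x86_64 might look roughly as follows. This is a hedged sketch only; the exact library names depend on the MKL version and interface layer, so please verify your link line with the Link Line Advisor:

```
# Illustrative only -- verify with the Intel MKL Link Line Advisor.
# Sequential MKL, Intel compiler, LP64 interface on Linux x86_64:
icc -o myprog myprog.o -L${MKLROOT}/lib/intel64 \
    -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -ldl
```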

## ELPA

ELPA is a library of scalable, high-performance direct solvers for symmetric/Hermitian eigenvalue problems.

On MPCDF computing platforms, ELPA is provided via the environment module system.

## ScaLAPACK

ScaLAPACK includes routines for:

- the solution of dense, band, and tridiagonal linear systems of equations, with condition estimation and iterative refinement, for LU and Cholesky factorization
- matrix inversion
- full-rank linear least squares problems
- orthogonal and generalized orthogonal factorizations, and orthogonal transformation routines
- reductions to upper Hessenberg, bidiagonal, and tridiagonal form
- reduction of a symmetric-definite/Hermitian-definite generalized eigenproblem to standard form
- the symmetric/Hermitian, generalized symmetric/Hermitian, and nonsymmetric eigenproblems
- the singular value decomposition

See https://www.netlib.org/scalapack/ for more information.

On the MPCDF computing platforms, ScaLAPACK is provided as part of the Intel MKL, a library of optimized mathematical routines.

MKL has to be loaded via the environment module system. It is strongly recommended to use the Intel MKL Link Line Advisor for generating correct link lines.
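For example, an MPI program using ScaLAPACK from the sequential MKL might be linked roughly as follows. This is an illustrative sketch only; the BLACS library in particular depends on the MPI flavour, and the names vary between MKL versions, so please double-check with the Link Line Advisor:

```
# Illustrative only -- verify with the Intel MKL Link Line Advisor.
# ScaLAPACK from the sequential MKL with Intel MPI, LP64 interface:
mpiifort -o myprog myprog.o -L${MKLROOT}/lib/intel64 \
    -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core \
    -lmkl_blacs_intelmpi_lp64 -lpthread -lm -ldl
```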

## PETSc and SLEPc

PETSc, the Portable, Extensible Toolkit for Scientific Computation developed at Argonne National Laboratory (ANL), is a suite of data structures and routines for the uniprocessor and parallel solution of large-scale scientific application problems modeled by partial differential equations.

SLEPc is a software package for the solution of large sparse eigenproblems on parallel computers. It is built on top of PETSc.

Both libraries are available on MPCDF machines via environment modules.

## Intel Integrated Performance Primitives

The Intel Integrated Performance Primitives (IPP) API, containing highly optimized primitive operations used for **digital filtering, audio and image processing**, is installed on the Linux systems, and the current version can be made available by invoking

`module load ipp`

In-depth information can be found in the Intel IPP documentation.

## Intel Threading Building Blocks

The Intel Threading Building Blocks (TBB) library enables the C++ programmer to integrate (shared-memory) parallel capability into the code. It is installed on the Linux systems, and the current version can be made available by invoking

`module load tbb`

Then compiling is done in the following way:

```
icpc -c -o main.o -I$TBB_HOME/include/tbb main.cpp
```

Linking is only possible against shared libraries, for example:

```
icpc -o main main.o -L$TBB_HOME/intel64/gcc4.8 -ltbb
```

For detailed information please refer to the Intel Threading Building Blocks documentation.

## GPU-enabled numerical libraries

Applications that rely on expensive library calls to standard routines from BLAS, LAPACK, FFTW, or similar may benefit from GPU-enabled libraries that can easily be used as drop-in replacements for the respective CPU libraries. Moreover, many special-purpose GPU libraries exist that may provide the functionality required by a user code.

### GPU-enabled libraries bundled with NVIDIA CUDA

The NVIDIA CUDA Toolkit provides optimized numerical libraries that can be called and linked from user code in order to perform computations on the GPUs, for example

- cuBLAS
- cuFFT
- cuSPARSE
- cuSOLVER

Please consult the NVIDIA CUDA Toolkit documentation for a comprehensive list and more detailed information. CUDA is available via the environment module `cuda`.
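As a hedged sketch, after `module load cuda` a CUDA source file calling cuBLAS and cuFFT could be compiled and linked with `nvcc` along these lines (the file name `myprog.cu` is a placeholder):

```
# Illustrative sketch: link against cuBLAS and cuFFT from the CUDA Toolkit
nvcc -o myprog myprog.cu -lcublas -lcufft
```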

### MAGMA

The MAGMA library provides implementations of BLAS and LAPACK routines optimized for multi-core CPU and multi-GPU systems. MAGMA is available via the environment module `magma`.

## NAG

The NAG library is a collection of mathematical and statistical routines developed by The **N**umerical **A**lgorithms **G**roup Ltd., Wilkinson House, Jordan Hill Road, Oxford, OX2 8DR, UK.

### Availability

The NAG library is supported for Fortran and C compilers. The Fortran library is F77 style with additional modules or interfaces for usage from F90/95/2003.

At the MPCDF a campus license (i.e. for arbitrary architectures and compilers and an arbitrary number of users) is available for the serial NAG Fortran and C libraries.

The libraries are provided for Unix architectures (Linux, Solaris).

### Usage

The NAG libraries are made available via the environment module system. The output of the command `module avail nag` provides an up-to-date list of available versions for the different compilers supported on that system. Use `module help <naglib>` for documentation and detailed usage instructions.

For some NAG routines the license file is also required at run time. Therefore, execute the command `module load <naglib>` before starting your program.

### Documentation by NAG

- A detailed description (Fortran / C) for the major release and previous releases (e.g. numerical and statistical facilities, differences to former releases)
- On-line documentation (Fortran / C) for the major release and previous releases (e.g. detailed description of all routines sorted by topics)
- Installer’s Notes (Fortran / C) with details on the implementation of different releases (e.g. system and compiler options used for generating)
- User’s Notes (Fortran / C) with detailed description and useful hints on usage (e.g. compiler and linking options, precision, sample programs, partly comparison with equivalent routines from other numerical libraries)

## How to link libraries on Linux systems

### Introduction

Assume that there’s a program you would like to build which depends on a (fictitious) library called “libfoo”. As usual on MPCDF systems, the library is provided via an environment module:

```
$ module load libfoo # LIBFOO_HOME is set to the full installation path
$ ls $LIBFOO_HOME/lib # list the contents of the "lib" subdirectory
libfoo.so libfoo.a
```

A file ending with “.so” (e.g. libfoo.so) is called a *shared library*
or *shared object*, whereas a file ending with “.a” (e.g. libfoo.a) is
called a *static library* or *archive*. Both types can be utilized
(linked) from user code. In the following we assume that there is a C
program named “main.c” which calls functions from the library “libfoo”.

### Linking static libraries

To compile the program “main.c” and link it statically with “libfoo.a”, issue the following commands:

```
$ icc -c -I$LIBFOO_HOME/include main.c # compile step
$ icc -o main main.o $LIBFOO_HOME/lib/libfoo.a # link step
```

Note that the full path to “libfoo.a” is provided during the link step, without any additional command line flags (such as “-L” or “-l”). Finally, an executable “main” is created. In particular during the link step, all the functions in “libfoo” which are required by our program are actually copied from “libfoo.a” into the executable “main”. Hence, the executable is independent of the file “libfoo.a” afterwards.

### Linking dynamic libraries, setting the RPATH

To compile the program “main.c” and link it with the dynamic library “libfoo.so”, issue the following commands:

```
$ icc -c -I$LIBFOO_HOME/include main.c
$ icc -o main main.o -L$LIBFOO_HOME/lib -lfoo -Wl,-rpath,$LIBFOO_HOME/lib
```

As a result, an executable “main” is created. Now, with dynamic linking, the functions are not copied from the library to the executable during the link step. They are only referenced from the executable. Hence, the executable depends on the file “libfoo.so” afterwards. It is smaller in size than the statically linked executable.

Note that three command line arguments are involved in the link step:

`-L`

: path where the compiler looks for libraries at link time

`-l`

: name of the library to be linked (i.e. the file name without the “lib” prefix and the “.so” suffix)

`-Wl,-rpath,`

: path where the executable looks for shared libraries at run time

Without the `-Wl,-rpath,$LIBFOO_HOME/lib` command line argument, the program would be linked correctly, but at runtime the executable would not be able to find the shared library because of the lacking path information. This is why that information, called the RPATH, must be added. Note that there’s no whitespace within the argument. Note, in addition, that the environment variable LD_LIBRARY_PATH overrides the RPATH; setting LD_LIBRARY_PATH should therefore be avoided, if possible.

You can verify that the library “libfoo” is referenced correctly in the executable “main” by running `ldd main`. This will print a list of all dynamic libraries linked in your executable. Specifically, the output should contain a line for “libfoo.so”:

```
libfoo.so => /some/path/to/the/library/libfoo.so
```

Note that if “not found” is displayed for any library, this means that the RPATH is not set correctly and the executable will not be able to run.

### Compilers

The information provided here is valid for Intel and GNU compilers (C/C++/Fortran) on Linux systems.

## Watson Sparse Matrix Package

Note that licensing of WSMP has been discontinued in the course of 2020!