GitLab Runners for CI/CD
Introduction
If a GitLab repository contains a continuous integration pipeline, its jobs are executed by a GitLab Runner. A GitLab Runner is a daemon running on another server, waiting to be contacted by the central GitLab server to execute CI pipelines.
You can find more detailed information about GitLab Runners in the GitLab Documentation.
There are two different ways of using a GitLab runner:
individual runners, installed on local machines, remote clusters, or cloud-based systems by individual users (without support from MPCDF)
shared runners, which are offered by MPCDF
If you don’t want to install and configure an individual GitLab runner, you can execute your CI pipelines on the shared runners offered by MPCDF. If you don’t add one or more tags to your CI file .gitlab-ci.yml, your pipeline will automatically be executed by a shared runner.
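A minimal pipeline that runs on a shared runner can therefore omit the tags section entirely; a sketch (the job name and script are hypothetical):

```yaml
# .gitlab-ci.yml -- minimal sketch; without a "tags:" section the job
# is picked up by an MPCDF shared runner automatically
build:
  script:
    - echo "running on a shared runner"
```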
Docker images for CI with MPCDF environment modules
To provide the developers of HPC applications with a familiar and comprehensive software environment also within GitLab-based continuous integration (CI) pipelines, the MPCDF is offering special Docker images. These images use environment modules to make software accessible, in a very similar way to how software is managed on the HPC systems. Hence, e.g. build scripts will work on both the HPC systems and the CI cloud runners in a consistent way.
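For instance, a build script along the following lines could run unchanged both on an HPC login node and inside a CI module image (a sketch; the module names and CMake layout are assumptions):

```shell
#!/bin/bash
set -e
# load the toolchain via environment modules, exactly as on the HPC systems
module purge
module load gcc cmake
# configure and build out of source
cmake -S . -B build
cmake --build build
```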
Image tagging-and-purging strategy
Tagging using latest and the calendar year
To limit the individual growth of these Docker images over time, we put the following tagging-and-purging strategy in place:
Essentially, all images are tagged using latest and/or the calendar year. In
the course of a year, say 2024, the images tagged with latest and the year
(2024) are identical and receive regular updates and additions of software.
With the beginning of the new year, all images tagged with the previous year
stay unchanged (frozen). The newly created images for 2025, say, will start out
in early January again in a slim state and will be tagged latest. Users can
then choose to migrate to the more recent images (tagged 2025 and latest in
our example) or stick with the older (but static!) images (tagged 2024) for a
while.
Users who opt for the tag latest should be aware that the software
environment will likely change at the beginning of each year.
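The choice between a frozen and a rolling environment is made via the image tag in .gitlab-ci.yml; a sketch (the intel image is taken from the case study below, the latest variant of it is illustrative):

```yaml
# frozen environment, stays unchanged after the year ends:
image: gitlab-registry.mpcdf.mpg.de/mpcdf/ci-module-image/intel_2023_1_0_x:2024

# rolling environment, receives updates during the year:
# image: gitlab-registry.mpcdf.mpg.de/mpcdf/ci-module-image/intel_2023_1_0_x:latest
```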
Non-versioned images tagged latest
Moreover, we provide special images without an explicit version number that
point to the respective most recent compiler and MPI in the MPCDF software
stack. For the compilers, for example, we offer the images gcc:latest and
intel:latest; similar images exist for the dependent images containing Intel
MPI or OpenMPI.
See the page on the latest image tag for an overview.
A typical use case for these images is a user’s code that should be built and
tested with the newest available compiler. Only the tag latest exists for these
non-versioned images. In order to seamlessly receive updates, the module load
command should also omit the compiler’s and MPI’s version number in this case
(for example, use module load gcc instead of module load gcc/13).
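A CI job that always builds with the newest GCC could thus look as follows (a sketch; the job name is hypothetical, the gcc:latest image and the mpcdf-shared tag are taken from this document):

```yaml
# sketch: always build and test with the newest available GCC
build_latest_gcc:
  image: gitlab-registry.mpcdf.mpg.de/mpcdf/ci-module-image/gcc:latest
  tags:
    - mpcdf-shared
  script:
    - module load gcc    # no version number, follows the image
    - gcc --version
```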
Case Study: Set up a CI job using a recent Intel C++ compiler
This section shows how to set up a CI job to compile and test some C++ software using the Intel compiler. The required steps are:
Identify the Docker image that provides the required software by checking the CI Docker image website. Copy the image tag; in our example we’re using gitlab-registry.mpcdf.mpg.de/mpcdf/ci-module-image/intel_2023_1_0_x:2024.
Edit your .gitlab-ci.yml file and paste the image tag after image:.
Select proper tags for the shared runners you’re intending to use.
The resulting .gitlab-ci.yml file might look as follows:
# .gitlab-ci.yml
build_intel:
  image: gitlab-registry.mpcdf.mpg.de/mpcdf/ci-module-image/intel_2023_1_0_x:2024
  tags:
    - mpcdf-shared
  script:
    - module load intel/2023.1.0.x
    - module load gcc/13
    - module load cmake
    - icpx --version
    # compile C++ code and perform tests ...
Case Study: Interactive debugging of a CI job that uses an MPCDF CI module image
Advanced users might be interested in having interactive access to the identical software environment that is used by their CI jobs, e.g. to speed up debugging. While it is not possible to get interactive access to the actual CI runners, users can create a local Apptainer-based copy of an MPCDF CI module image on a login node of an HPC cluster and do the debugging there. The necessary steps are outlined in the following.
Before you can interact with our container registry at all, you will need to
generate an access token with at least the scope read_registry.
Then you can use Apptainer to authenticate with the registry (needs to be done
only once per cluster) and build a local copy of the ci-module-image of your
choice, e.g. gcc_15:2026:
$ module load apptainer/1.4.3
$ apptainer registry login --username "$USER" docker://gitlab-registry.mpcdf.mpg.de
Password / Token:
$ apptainer build gcc_15.sif docker://gitlab-registry.mpcdf.mpg.de/mpcdf/ci-module-image/gcc_15:2026
INFO: Starting build...
Building the image will take a while. You can safely ignore the messages about “harmless EPERM on setxattr” that will be printed. After that you can get an interactive shell in the container using
apptainer shell gcc_15.sif
By default this mounts devices and a number of host directories in the
container. Importantly, environment variables are also forwarded into the
container. This can lead to adverse interactions with the module system inside
the container. For a clean environment use the -e/--cleanenv or
-c/--contain flags. If you want a completely isolated environment, use the
-C/--containall flag. See the Apptainer help for more details.
# Increasing level of isolation
apptainer shell --cleanenv gcc_15.sif
apptainer shell --contain gcc_15.sif
apptainer shell --containall gcc_15.sif
Before you can run the steps of the job interactively inside the container environment, you need to make the environment modules available. For that, source the module initialization file whose path is normally stored in the environment variable $BASH_ENV.
$ echo "$BASH_ENV"
/mpcdf/soft/SLE_15/packages/x86_64/Modules/current/etc/profile.d/modules.sh
$ . "$BASH_ENV"
$ module load gcc
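The interactive steps above can also be combined into a single non-interactive command, e.g. to quickly check the compiler version (a sketch; this assumes $BASH_ENV is defined in the container image and therefore survives --cleanenv):

```shell
# run one command in a clean container environment; sourcing $BASH_ENV
# makes the module command available inside the container
apptainer exec --cleanenv gcc_15.sif bash -c '. "$BASH_ENV" && module load gcc && gcc --version'
```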
Note that the CI module images are not designed to be used in batch jobs; use the native environment modules instead. Moreover, as the CI module images are updated regularly, delete the Apptainer-based local copy after the debugging is finished to avoid using an outdated software environment.
Migrating from the legacy module-image to the new CI module images
Users of the previous module-image are encouraged to migrate to the new CI
images now, report potential issues and request additional software modules via
the helpdesk, if necessary. As shown in the previous section, the
module-image can simply be replaced in the user’s .gitlab-ci.yml file with
one of the new images that provides the desired software stack for the
respective CI job.
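In practice the migration is a one-line change in the user’s .gitlab-ci.yml; a sketch (the legacy image path is hypothetical, the new image is the one from the case study above):

```yaml
# before (legacy image, path illustrative):
# image: gitlab-registry.mpcdf.mpg.de/mpcdf/module-image

# after (new CI module image):
image: gitlab-registry.mpcdf.mpg.de/mpcdf/ci-module-image/intel_2023_1_0_x:2024
```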
The new CI module images were first introduced in the December 2023 edition of the Bits and Bytes.
Build a Docker image
With the addition of Podman runners, it is now possible to build your own Docker image on a shared runner.
Here’s a CI pipeline template to build an image on a Podman runner:
build_image:
  tags:
    - image-builder
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
    BUILDAH_FORMAT: docker
    BUILDAH_ISOLATION: chroot
  image: quay.io/buildah/stable
  before_script:
    - buildah login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
  script:
    - buildah build -t "$IMAGE_TAG" .
    - buildah push "$IMAGE_TAG"
Please do not use kaniko, as it has been archived and is no longer maintained.
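The buildah build step expects a Dockerfile in the repository root; a minimal hypothetical example (base image and installed package are illustrative):

```dockerfile
# Dockerfile -- minimal example for the build_image pipeline
FROM debian:stable-slim
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
CMD ["bash"]
```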