Name of the cluster:

Max Planck Institute for Physics


  • at[01-48] : 1536 CPU cores (3072 with HT enabled)

  • bt[01-28] : 896 CPU cores (1792 with HT enabled)

  • ft[01-08] : 192 CPU cores (384 with HT enabled)

  • zt01 : 192 CPU cores (384 with HT enabled)

  • zt02 : 36 CPU cores (72 with HT enabled)


For security reasons, direct login to the cluster is allowed only from within the MPG networks. Users from other locations must first log in to one of the gateway systems.
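One convenient way to route through a gateway is a ProxyJump entry in your SSH client configuration. The sketch below uses placeholders; substitute the actual gateway and login node host names:

```
# ~/.ssh/config — sketch only; replace <gateway> and <login-node>
# with the actual gateway and cluster login node names.
Host mpp-login
    HostName <login-node>
    User YOUR_USERNAME
    ProxyJump YOUR_USERNAME@<gateway>
```

With such an entry, ssh mpp-login first connects to the gateway and then hops to the login node in one step.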

If you are connecting for the very first time, you will get a warning that the host’s authenticity cannot be verified. Provided that you can find the presented “fingerprint” in the list below, it is safe to answer with “yes” to continue connecting:


    • RSA Fingerprint: SHA256:8AsKgxrXv3WlGU//zf1lqZ2Aa8+oPNcuqoyJPG3um7o

    • ED25519 Fingerprint: SHA256:zr0QBCxzlGP4pD6HDqYjEDJfWbR+AiTbIbivPv76Oik


    • RSA Fingerprint: SHA256:Zigxk1P121KWOoP58wKJcQ8hk1GJOSGXAPjad76aPQU

    • ED25519 Fingerprint: SHA256:ISQuJtNMUZUreMOlGp1kSEvxNZkx50bDviMLeUcGEp8


    • RSA Fingerprint: SHA256:jSWh5TUeOMGzIDtm+c+wsQ+xtomwZa/gFVIMWmXChhQ

    • ED25519 Fingerprint: SHA256:xYs0IFUrOPpTG55L7tqzKmF5yA3gjmtHUyjLZ3LC9ag
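If you want to verify a fingerprint manually, ssh-keygen can print it. The snippet below is a sketch: the commented ssh-keyscan line shows the idea against a live host (hostname is a placeholder), and the rest demonstrates the fingerprint format offline with a throwaway key.

```shell
# Against a live host (placeholder hostname):
#   ssh-keyscan -t ed25519 <login-node> 2>/dev/null | ssh-keygen -lf -
#
# Offline demonstration: generate a throwaway key and fingerprint it.
ssh-keygen -t ed25519 -N '' -f ./demo_key -q
FINGERPRINT=$(ssh-keygen -lf ./demo_key.pub)
echo "$FINGERPRINT"   # e.g. "256 SHA256:... (ED25519)"
rm -f ./demo_key ./demo_key.pub
```

Compare the SHA256 value printed for the real host against the list above before answering "yes".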

Hardware Configuration

login nodes

  • mppui[1-2] : Intel(R) Xeon(R) CPU E5-2650L @ 1.80GHz; [2x8] 16 cores per node; no hyper-threading; 64 GB RAM

  • mppui4 : Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz; [2x16] 32 cores per node; hyper-threading; 1 TB RAM

compute nodes

  • at[01-48] : Intel(R) Xeon(R) E5-2683 v4 @ 2.10GHz; [2x16] 32 cores per node; hyper-threading; 128 GB RAM

  • bt[01-28] : Intel(R) Xeon(R) Gold 6130 @ 2.10GHz; [2x16] 32 cores per node; hyper-threading; 187 GB RAM

  • ft[01-08] : Intel(R) Xeon(R) E5-2680 v3 @ 2.80GHz; [2x12] 24 cores per node; hyper-threading; 125 GB RAM

  • zt01 : Intel(R) Xeon(R) Platinum 8160 @ 2.10GHz; [8x24] 192 cores per node; hyper-threading; 6 TB RAM

  • zt02 : Intel(R) Xeon(R) Gold 6140M @ 2.30GHz; [2x18] 36 cores per node; hyper-threading; 3 TB RAM


  • node interconnect is based on 10 Gb/s Ethernet.

Batch system based on Slurm

  • sbatch, srun, squeue, sinfo, scancel, scontrol, s* (see man pages for more info)

  • currently, several partitions exist with varying constraints on walltime and memory. The commands sinfo and scontrol show partition can be used to gain insight into the current status of the partitions and their constraints.

  • some sample batch scripts can be found on the Cobra home page (they must be modified for MPPMU)
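Since the Cobra samples need adaptation anyway, a minimal Slurm batch script might look like the following. The partition name, walltime, and resource values are placeholders, not cluster defaults; check sinfo and scontrol show partition for the values valid here:

```bash
#!/bin/bash -l
# Minimal Slurm job script — a sketch; the values below are placeholders.
#SBATCH --job-name=my_job
#SBATCH --output=my_job.%j.out     # %j expands to the job ID
#SBATCH --partition=<partition>    # pick one listed by 'sinfo'
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32       # e.g. one task per core on at/bt nodes
#SBATCH --time=01:00:00            # walltime limit

srun ./my_program
```

Submit with sbatch myscript.sh and monitor with squeue -u $USER.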


  • save-password: each time your password is changed, you need to run the save-password command on one of the login nodes. This ensures that your batch jobs can access AFS.

  • archiving user data: users can archive their data using the archive server.

  • /tmp : use /tmp or $TMPDIR only for small scratch data, and remove it at the end of your job submission script. For large scratch data use /ptmp, which is accessible from all cluster nodes.
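Inside a job script, the cleanup that the /tmp note asks for can be automated with a trap, so the scratch directory is removed even if the job exits early. A sketch, assuming $TMPDIR is set by the batch system (falling back to /tmp):

```shell
# Create a private scratch directory under $TMPDIR (or /tmp) and make
# sure it is deleted when the script exits, even on error.
SCRATCH=$(mktemp -d "${TMPDIR:-/tmp}/job_scratch.XXXXXX")
trap 'rm -rf "$SCRATCH"' EXIT

echo "using scratch dir: $SCRATCH"
# ... run the computation, writing temporary files into "$SCRATCH" ...
```

The EXIT trap fires on normal completion as well as on errors, so no separate cleanup line is needed at the end of the script.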


For support please create a trouble ticket at the MPCDF helpdesk.