Connecting
How can I log into the machines at the MPCDF?
The compute resources at the MPCDF run Linux, and login is only possible via SSH in combination with two-factor authentication (see below). To access a system at the MPCDF from the public Internet it is necessary to log into one of the gateway machines first, and then log in from the gateway system to the target system.
The most commonly used command-line client is OpenSSH, available for Linux, macOS, and Windows. Connections are created with the ssh command, while SSH key pairs are created and managed with commands such as ssh-keygen and ssh-add.
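As a brief illustration of these key-management tools (note that SSH keys cannot be used for logging into MPCDF systems, see below), the following sketch generates an ed25519 key pair at a hypothetical path and prints its fingerprint:

```shell
# Generate an ed25519 key pair at a demonstration path (no passphrase here;
# for real use, protect the key with a passphrase).
ssh-keygen -t ed25519 -f /tmp/demo_key -N '' -C 'demo key'

# Show the fingerprint of the newly created public key.
ssh-keygen -l -f /tmp/demo_key.pub
```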
A useful feature is the configuration file located in $HOME/.ssh/config, which is an alternative to passing flags to the ssh command. It allows you to define an alias for each host, along with host-specific connection parameters.
See this page for further details about the configuration files.
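For instance, a minimal configuration entry (host name and user name are placeholders) could look like this:

```
# $HOME/.ssh/config -- sketch of an alias entry with placeholder names
Host myalias
    Hostname some.host.example.org
    User MY_USER_NAME
    Compression yes
```

With such an entry in place, running ssh myalias is equivalent to ssh MY_USER_NAME@some.host.example.org with compression enabled.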
What are the gateway machines for SSH login?
The following gateway machines are available and accept SSH connections from the Internet:
gate1.mpcdf.mpg.de
gate2.mpcdf.mpg.de
Both machines provide a very small local home directory only. From these machines direct logins to HPC systems and clusters are allowed.
Both gateway systems are rebooted weekly - gate1 each Tuesday, 3:45am, and gate2 each Saturday, 3:45am, German local time. User sessions are not persistent across reboots.
More details, including host key fingerprints, are available in the documentation of the gateway machines.
How can I tunnel through the gateway machines?
SSH tunnels through the gateway hosts can be configured using the ‘ProxyJump’ keyword (OpenSSH 7.3 and newer); older SSH versions use ‘ProxyCommand’.
Here’s an example for a connection to Viper via a gate machine:
ssh -J <user>@gate1.mpcdf.mpg.de <user>@viper.mpcdf.mpg.de
In the configuration file, the same setup reads:
Host gate
    Hostname gate1.mpcdf.mpg.de
    User MPCDF_USER_NAME

Host viper
    Hostname viper.mpcdf.mpg.de
    User MPCDF_USER_NAME
    ProxyJump gate
These lines in the config file allow you to establish a connection to Viper simply by running ssh viper on the command line.
How can I avoid having to type my password repeatedly?
An SSH ControlMaster setup establishes a connection once and reuses that connection for subsequent logins from the same machine. ControlMaster and ProxyJump/ProxyCommand can be combined as shown below.
Please note that by default a maximum of 10 sessions is supported within a single ControlMaster instance.
The following code snippet demonstrates a ControlMaster configuration for an OpenSSH client to log into ‘gate’. In addition it sets ‘gate’ as a jump host to log into the Raven HPC system. Alternatively, ‘gate2’ can be used.
# SSH configuration taken from https://docs.mpcdf.mpg.de/faq/
Host gate
    Hostname gate.mpcdf.mpg.de
    User MPCDF_USER_NAME
    ServerAliveInterval 120
    ControlMaster auto
    ControlPersist 12h
    ControlPath ~/.ssh/master-%C

Host raven
    Hostname raven.mpcdf.mpg.de
    User MPCDF_USER_NAME
    ControlMaster auto
    ControlPersist 12h
    ControlPath ~/.ssh/master-%C
    # OpenSSH >=7.3
    ProxyJump gate
    # OpenSSH <=7.2
    #ProxyCommand ssh -W %h:%p gate
You can now connect to the Raven HPC system directly by executing the command:
ssh raven
You only have to type your password, OTP, and password again once; afterwards you can reuse this connection for the rest of the day with any HPC system.
Note that scp and rsync use ssh under the hood and benefit from this configuration in the same way.
This example works on Linux and Mac but not on Windows, due to limitations of its ssh client. On Windows, the PuTTY SSH client supports a feature similar to ControlMaster: tick the box “Share SSH connections if possible” before connecting.
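If needed, the state of a shared master connection can be inspected or terminated explicitly via the -O option of ssh. The following session sketch uses the ‘raven’ alias from the configuration above (these commands only make sense once a connection has been established):

```
# Check whether a master connection for the 'raven' alias is running.
ssh -O check raven

# Cleanly close the shared master connection (e.g., to force a fresh login).
ssh -O exit raven
```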
How can I access the clusters using a Windows machine?
The previously mentioned setup is compatible with the Remote SSH extension of the Visual Studio Code (VSCode) editor and with OpenSSH in PowerShell, except for the ControlMaster settings.
Please follow the instructions in our step-by-step guide to PuTTY and WinSCP.
Login to the gateway machines fails with “Corrupted MAC on input.”
Effective January 2025, stricter rules for various encryption algorithms are enforced on the MPCDF gateway systems. Native OpenSSH clients on Windows systems unfortunately tend to fail to connect, with errors like “Corrupted MAC on input”. Enforcing a more recent MAC (message authentication code) on the client side mitigates this issue:
ssh -m hmac-sha2-256-etm@openssh.com <user>@gate1.mpcdf.mpg.de
In a configuration file:
Host gate
    Hostname gate1.mpcdf.mpg.de
    User MPCDF_USER_NAME
    MACs hmac-sha2-256-etm@openssh.com
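You can verify that your local OpenSSH client supports this MAC by querying the list of available algorithms:

```shell
# List the message authentication codes supported by the local ssh client;
# the required MAC must appear in this list for the connection to succeed.
ssh -Q mac | grep hmac-sha2-256-etm@openssh.com
```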
Two-factor authentication (2FA)
Two-factor authentication (2FA) is mandatory. Please find the 2FA-related part of the FAQ here.
Are SSH keys supported?
Logins with SSH keys are not supported, neither on the gateway machines nor on the HPC systems or clusters.
Can ssh/scp/sftp performance be improved?
In order to improve single-stream ssh/scp/sftp performance, it is recommended to pass the option -c aes128-ctr to your ssh/scp/sftp session.
In a configuration file:
Host gate
    Hostname gate1.mpcdf.mpg.de
    User MPCDF_USER_NAME
    Ciphers aes128-ctr
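Before adding this option, you can check that your local client actually offers the cipher:

```shell
# List the ciphers supported by the local ssh client and check for aes128-ctr.
ssh -Q cipher | grep aes128-ctr
```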
I get an SSH security warning about a host key change when trying to log in. What does this mean and what should I do?
The current host keys for the gateway machines are documented above. In case of host key changes for existing systems, which may occur after maintenance work, the new host keys are communicated via email or via this documentation page. If unsure, submit a ticket to the MPCDF helpdesk.
How can I run applications with graphical user interfaces on MPCDF systems?
Applications with X11-GUIs can be run on the login nodes of the HPC clusters or on dedicated resources. Depending on the requirements the following options are available.
X11 forwarding
Forwarding of X11 displays to a local computer is possible using SSH. From your local system, connect to the MPCDF system with
ssh -C -Y hostname
or with a configuration file:
Host gate
    Hostname gate1.mpcdf.mpg.de
    User MPCDF_USER_NAME
    Compression yes
    ForwardX11 yes
    ForwardX11Trusted yes
The flag -C enables compression, which typically improves the performance of GUI applications.
While Linux clients usually have a local X server running, Mac and Windows clients need to install one, e.g., XQuartz, Xming, or Cygwin/X.
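A quick way to verify that X11 forwarding works is to inspect the DISPLAY variable on the remote side and, if available there, start a simple X client (session sketch using the ‘gate’ alias from the configuration above):

```
# If forwarding works, DISPLAY is set to something like 'localhost:10.0'.
ssh -C -Y gate 'echo $DISPLAY'

# Start a trivial X application, if installed on the remote system;
# a small clock window should appear on the local screen.
ssh -C -Y gate xclock
```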
For complex GUIs, the performance of X forwarding is often not sufficient, and VNC is recommended instead.
VNC
Users can manually start a VNC server on the login nodes of the HPC systems and clusters to run GUI applications without the need for X11 forwarding. That VNC server is persistent until the next reboot of the machine. More conveniently, the web-based remote visualization service allows you to launch and use VNC sessions on dedicated resources, albeit with a time limitation (available on a subset of the HPC systems).
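The manual approach can be sketched as follows; command names, display number, and port are illustrative, and the exact VNC server available on a given login node may differ:

```
# On the login node: start a VNC server, e.g. on display :1 (TCP port 5901).
vncserver :1 -geometry 1920x1080

# On your local machine: forward the VNC port through SSH ...
ssh -N -L 5901:localhost:5901 raven

# ... and point a VNC viewer at the forwarded port.
vncviewer localhost:5901
```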
Remote Visualization Service
To enable hardware-accelerated OpenGL rendering, the web-based remote visualization service allows you to use the GPUs of some nodes of a subset of the HPC systems for remote visualization.