  • Frequently Asked Questions
    • Account registration
    • Connecting to MPCDF Systems
      • How do I log in to MPCDF machines?
        • What are the SSH gateway machines?
      • How can I tunnel through the gateway machines?
      • How can I avoid repeatedly typing my password?
      • How can I connect to HPC systems with Visual Studio Code (VSCode)?
      • How do I connect from a Windows machine?
        • What if my connection fails with “Corrupted MAC on input”?
        • Is two-factor authentication (2FA) required?
        • Are SSH keys supported for login?
        • How can I improve ssh/scp/sftp performance?
        • What should I do if I see an SSH host key warning?
      • How can I run GUI applications on MPCDF systems?
        • X11 Forwarding
        • VNC
        • Remote Visualization Service
    • Two-factor authentication (2FA)
      • General information about 2FA
        • Do I need to enable 2FA?
        • How can I check if 2FA is activated on my account?
        • Why is 2FA enforced?
        • What are tokens, OTPs, and seeds?
        • What kinds of tokens are available?
        • When will I be asked for an OTP?
      • Activation of 2FA
        • How do I enable 2FA?
        • How do I enroll and use an app token?
        • Which authenticator app should I use?
        • How do I register an existing hardware token?
        • How do I enroll and use a secondary (backup) token?
        • Why can’t I have an app token and a hardware token simultaneously?
      • 2FA Tips and Tricks
        • Do I have to type in an OTP every time I access the secured systems?
      • 2FA Troubleshooting
        • What should I do if I need to factory-reset my phone?
        • How do I transfer my token to a new phone?
        • What if I can’t validate my token (“Wrong OTP” error)?
        • I can’t log in to the SelfService anymore
        • What if I can’t log in to a gate machine via SSH?
        • Why can’t I access HPC clusters via VNC?
      • Hardware and Client Support
        • How can I use GUI tools (sshfs, rsync, scp, sftp) with 2FA?
        • How can I use FileZilla with 2FA?
        • How do I use 2FA on a phone with a custom time shift?
        • Do you support FIDO2, U2F, or YubiKeys?
      • Security
        • How are token seeds secured on the server?
        • Who provides the hardware tokens, and do they know the seeds?
    • HPC Software and Applications
      • General Questions
        • How can I install my own software?
      • Environment modules
        • How do I use environment modules interactively?
        • How do I use environment modules in scripts?
        • How can I avoid using absolute paths in my scripts?
        • Examples
        • How do hierarchical environment modules work?
        • How do I quickly find a module?
        • How can I disable the “MPCDF specific note” for module avail?
        • Why are there no BLAS/LAPACK modules?
      • Compiled Languages
        • CMake
        • C/C++ and Fortran
        • Debugging C/C++ and Fortran Codes
      • Interpreted Languages
        • Python
        • R
        • Julia
        • Jupyter Notebooks
        • MATLAB
      • Message Passing Interface (MPI)
        • Which MPI implementations are supported?
        • How do I compile and link an MPI application?
        • What if CMake cannot find MPI?
        • Why can’t I use mpirun to launch my MPI code?
      • Visualization
        • How do I create a movie from a sequence of images?
        • How do I install additional TeX/LaTeX packages?
      • GUI Applications
        • Why am I having trouble with VSCode on older HPC clusters?
        • Why are some GUI applications not working on the login nodes?
    • HPC Systems and Services
      • Raven
        • What are the recommended compiler flags for Raven?
      • Viper
        • What are the recommended compiler flags for Viper?
      • Slurm Batch System
        • How do I submit a job to Slurm?
        • Can I submit jobs that run longer than 24 hours?
        • How do I launch an MPI application?
        • What is the correct order of commands in a Slurm script?
        • Can I run an interactive job for debugging?
        • How can I check the estimated start time of my job?
        • How do I get detailed information about a running job?
        • How do I get information about a finished job?
        • What happens if a hardware failure occurs during my job?
        • How do I handle CPU pinning?
      • Parallel File Systems (GPFS)
        • Which file systems are available and how should I use them?
        • How can I improve my I/O performance?
        • How do I share files with other users?
      • How do I transfer files to and from the HPC systems?
      • Performance Monitoring
        • How can I check the performance of my jobs?
        • How do I disable the performance monitoring for my job?
      • GPU Computing
        • How do I use the NVIDIA Multi-Process Service (MPS)?
        • How do I profile my GPU code?
        • Are there dedicated resources for interactive GPU development?
      • Containers
        • Can I run Docker containers?
      • Remote Visualization
        • How do I run GUI applications that use OpenGL?
        • How do I access the remote visualization service?
    • Tips and Tricks
      • How do I change my default shell?
      • How do I use ssh-agent to avoid typing my SSH key password?
    • Getting Help, Support, and Training
      • How can I get help and support?
      • Do you offer training on how to use MPCDF resources?
      • How should I acknowledge MPCDF in my publications?
  • Documentation
    • Computing
      • Introduction
        • Overview
        • Compute Facilities
      • Application Support for HPC, AI and HPDA
      • Gateway machines
        • Login
        • GSSAPI-based logins to MPCDF hosts
        • Tunneled access to MPCDF services
      • Dais User Guide
        • How to get Access permissions
        • Access
        • Hardware Configuration
        • Filesystems
        • Compilers and Libraries
        • Batch system based on Slurm
        • Current Slurm configuration on DAIS
        • Useful tips
        • Slurm example batch scripts
        • Support
      • Viper-CPU User Guide
        • System overview
        • Access
        • Hardware configuration
        • File systems
        • Software
        • Slurm batch system
        • Slurm example batch scripts
        • Migration guide for users coming from Intel-based HPC systems
      • Viper-GPU User Guide
        • System overview
        • Access
        • Hardware configuration
        • File systems
        • Software
        • Slurm batch system
        • Slurm example batch scripts
        • Migration guide for users coming from Intel- and NVIDIA-based HPC systems
      • Raven User Guide
        • System Overview
        • Access
        • Hardware configuration
        • File systems
        • Software
        • Slurm batch system
        • Slurm example batch scripts
      • Dedicated clusters
        • Astronomy
        • Astrophysics
        • Biochemistry
        • Biological Cybernetics
        • Biological Intelligence
        • Biophysics
        • Brain Research
        • Chemical Physics of Solids
        • Extraterrestrial Physics
        • Geoanthropology
        • Gravitational Physics
        • Gravitational Physics - ACR
        • Gravitational Physics - CRA
        • MPSD / PKS
        • Physics
        • Plasma Physics
        • Polymer Research
        • Psychiatry
        • Quantum Optics
        • Radioastronomy
        • Science of Light
        • Sustainable Materials
      • Software
        • Environment Modules
        • HPC Application Packages
        • AI software
        • Compilers and languages
        • Libraries
        • Debugging tools
        • Performance tools
        • Mathematical tools
        • Bioinformatics
        • Containers
        • VNC
      • Quickstart guide to HPC
        • Software environment (modules)
        • MPI parallel HPC applications
        • Multithreaded (OpenMP) or hybrid (MPI/OpenMP) HPC applications
        • Execution of (parallel) programs via Slurm
      • Performance Monitoring
        • Introduction
        • PDF Performance Reports for Users
        • Suspending the Performance Monitoring System for Specific Jobs
        • Technical Background
        • Overhead
        • Further information
      • Training
        • Courses and workshops arranged by or in collaboration with the MPCDF
        • Training programmes of other institutions
    • Data
      • DataShare: Sync and Share Service
        • DataShare: An Introduction
        • DataShare: Migration to Nextcloud
        • Switching from the ownCloud to the Nextcloud client
        • Desktop and Mobile clients
        • FAQ
      • Globus Online: Large-Scale Data Transfer and Sharing
        • MPCDF DataHub and Globus Online
        • Staging Files to HPC systems via Globus Online
        • Staging data via Flows
        • GO-Nexus
        • Globus Demo Videos - Demonstrating Globus Functionality for end users
      • Nexus-S3: Object Storage for Data Transfer and Sharing
        • Nexus-S3
        • Publishing Data for public access via S3
      • Small to Medium Scale Data Transfers
        • Data Transfer: Tools & Tips
        • MPCDF DataHub and Globus Online
        • Sharing Large Files with DataShare
      • GitLab: Software Development
        • The MPCDF GitLab Instance: an introduction
        • Poetry and GitLab: Devops for Python developers
        • GitLab Runners for CI/CD
      • Data Publication and Metadata Management
        • Service: Data Repositories
        • The MPCDF Metadata Tools: User Documentation
        • The MPCDF Metadata Tools: Developer Documentation
        • MetaStore User Documentation
      • Backup and Archive
        • Backup & Archive Systems
        • How to archive data
        • Backup HPC
        • Backup Linux Clusters
        • Backup AFS
        • Backup Desktops
      • Deprecated: The Andrew File System (AFS)
        • Store (AFS)
        • Introduction
        • Specific-Technical
        • Glossary
        • Troubleshooting
    • HPC-Cloud
      • Technical and User Documentation
        • Quick Start
        • Compute
        • Storage
        • Network
        • Command Line Interface and Scripting
        • Recipes
      • Rental Model
        • Introduction
        • Cost components
        • Setup procedure and billing procedure
        • Available Resources
        • Compute resources
        • Storage resources
      • Terms of Use
        • General
        • Backup and Recovery
        • Data Privacy and Sensitive Data
        • Proprietary Software
        • Service Interventions and Scheduled Downtimes
        • Performance
        • Support
        • Potential Sanctions
    • Visualization
      • Support for the Visualization of Scientific Data
      • Remote Visualization and Jupyter Notebook Services
        • Web Interface
        • Command line interface
        • Technical details
        • Troubleshooting
      • Robin
    • Campus
      • Software
      • Wi-Fi
        • Guest networks
        • Eduroam
        • Installation (MPCDF/MPQ staff only)
  • Bits and Bytes
    • No.220, December 2025
      • High-performance Computing
        • Viper News
      • Software News
        • AMD software on Viper-GPU
        • New version of JAX on Viper-GPU and Raven
        • Major NumPy version update introduced with python-waterboa/2025.06
        • Intel software stack
        • Provisioning of AI Software
        • New ELPA module version scheme
      • Introducing the MPCDF LLM Inference Service
      • GitLab CI
        • Building Docker images with Podman-Runners
        • New tags for MPCDF GitLab runners
        • New naming scheme for CI module images containing CUDA and ROCm
      • HPC-Cloud Software Updates
        • Looking for alternatives
        • Preparing for the migration
        • Conclusions
      • Globus Migration to SelfService/MPCDF SSO
      • News & Events
        • Multifactor authentication: deactivation of E-mail tokens
        • International HPC Summer School 2026
        • AMD workshop
        • Meet MPCDF
        • Introduction to MPCDF Services
    • No.219, August 2025
      • High-performance Computing
        • HPC system Viper
        • Using Visual Studio Code (VSCode) with the Remote-SSH extension
      • Software News
        • New ELPA version 2025.06.001
        • Compilers
        • Water Boa Python 2025.06
        • Accelerated prediction of protein structures and complexes with ColabFold
        • AlphaFold2 available on Viper-GPU
        • DataShare command-line client (ds/pocli) modernized
      • Using Access Tokens in GitLab
        • Using access tokens on the command line
      • News/Announcements
        • DataShare: Migration to Nextcloud
        • A farewell to the AFS cell “ipp-garching.mpg.de”
        • Multifactor authentication: deactivation of E-mail tokens
        • Account re-applications
        • Export control and updated Terms of Use
      • Events
        • WAMTA 2026
        • MPG-DV-Treffen
        • MPG-NFDI and Research Data Management Workshops
        • Introduction to MPCDF services
        • Meet MPCDF
        • Python for HPC
    • No.218, April 2025
      • High-performance Computing
        • Viper-GPU operational
        • HPC performance monitoring on Viper-GPU
      • Software News
        • Activation of python-waterboa as the new Python default
        • New ELPA version 2025.01.001
      • GitLab shared CI runner with AMD GPU available
      • HowTo: External Usage of GitLab Wikis
        • Note-taking Apps
      • Enhanced SSH Configuration
      • News
        • MPCDF Status Page
        • Mandatory 2FA for GitLab
      • Events
        • AMD workshop
        • Introduction to MPCDF services
        • Meet MPCDF
        • MPG-DV-Treffen
        • Research Data Alliance - Deutschland-Tagung 2025
    • No.217, December 2024
      • High-performance Computing
        • AlphaFold3 available on Raven
        • Resource limits on the HPC machines
        • HPC monitoring on Viper
        • Routine transition to a new set of CI module images in 2025
        • Checks for uninitialized variables disabled in latest Intel Fortran compiler
      • A Shared AI System for Eleven MPIs
      • Events
        • International HPC Summer School 2025
        • Introduction to MPCDF services
        • Meet MPCDF
    • No.216, August 2024
      • High-performance Computing
        • New HPC system Viper (Phase-1: CPU)
      • HPC Software News
        • MPCDF GitLab module-image to be discontinued on October 31
        • New module images available in MPCDF GitLab
        • Nvidia HPC SDK version 24.3 available on Raven
      • Major Change in the Python Infrastructure on the HPC Clusters
      • Introducing containerized Applications as Part of the MPCDF Software Stack
      • Nexus-S3: Object Storage in the HPC-Cloud and beyond
        • Example use case
        • Opt-in via SelfService
        • Object storage for larger projects
      • The MetaStore Research Data Publication Platform
        • Managing data sets and resources in MetaStore
        • Who can use it
      • News
        • Password policy
        • 2FA for DataShare and GitLab
      • Events
        • AMD GPU workshop & hackathon (November 5-7)
        • Introduction to MPCDF services (October 24)
        • Meet MPCDF
        • MPCDF at Garching Campus Open Doors (October 3)
        • HPC-Cloud workshop (September 10-12)
        • Course on “Python for HPC” (November 26-28)
    • No.215, April 2024
      • High-performance computing
        • Licensed software in Slurm (Comsol)
      • HPC Software News
        • Improved workflow for multimer predictions with AlphaFold2 on Raven
        • New version of Intel oneAPI with deprecation of ifort compiler
        • CUDA modules on Raven
        • New AMD-GPU ELPA release
      • Kubernetes in the HPC-Cloud
        • Usage
        • Function
        • Existing applications
      • GitLab: Graphs & Diagrams
        • Alternatives
      • LLMs meet MPCDF
      • News & Events
        • Introduction to MPCDF services
        • Meet MPCDF
    • No.214, December 2023
      • HPC Software News
        • CUDA-aware OpenMPI on Raven
        • GPU-accelerated VASP
        • Intel oneAPI: transition from ifort to ifx
      • Module Software Stacks for Continuous Integration Pipelines on MPCDF GitLab Shared Cloud Runners
        • Introducing a novel module-enabled Docker image infrastructure
        • Announcing legacy status and later discontinuation of the module-image
      • Compressed Portable Conda Environments for HPC Systems
        • Introduction and Motivation
        • Move Conda environments into compressed image files
        • Basic usage examples
        • Limitations
        • Availability
      • New Features in the HPC-Cloud
        • Expanded menu of flavors and images
        • SSD-based block volumes
        • Automated domain name service
        • Shared filesystem service
        • The Robin cluster
      • News & Events
        • AMD-GPU development workshop
        • Meet MPCDF
        • RDA Deutschland Tagung 2024
    • No.213, August 2023
      • High-performance Computing
        • New GPU development partition on Raven
        • Memory profiling with heaptrack
        • New compilers and libraries: Intel oneAPI 2023.1
      • Using linters to improve and maintain code quality
      • HPC-Cloud Object Storage
      • JADE - Automated Slurm deployments in the HPC-Cloud
      • GitLab: Tips & Tricks
        • Online editing of source code revisited
        • Custom badges
        • Security warning
      • New IBM tape library and tape drives installed at MPCDF
      • News & Events
        • Open positions at MPCDF
        • Meet MPCDF
        • AMD-GPU development workshop
    • No.212, April 2023
      • High-performance Computing
        • New Supercomputer of the MPG - Cobra successor
        • Documentation of HPC hardware characteristics
        • CMake Recipes Repository
      • MPCDF HPC Cloud
        • Introduction
        • Hardware Resources
        • Project Support
        • Summary
      • News
        • MPCDF SelfService
        • Pushing Fusion-Plasma Simulations Towards Exascale
        • Base4NFDI: Creating NFDI-wide basic services in a world of specific domains
      • Events
        • Meet MPCDF
        • Introduction to MPCDF services
        • AI Training Course
        • Course on “Python for HPC”
        • RDA-Deutschland Conference
    • No.211, December 2022
      • High-performance Computing
        • Anaconda Python modules
        • Hints for architecture-specific and optimized CUDA compilation
        • New Intel C/C++ compilers and associated MPCDF software stack
        • Turbomole license for MPG renewed
        • Apptainer on HPC clusters, the Linux Foundation successor to Singularity
      • GitLab Tips & Tricks: Use of Docker Images in GitLab CI
      • GO-Nexus
      • News & Events
        • Discontinuation of General VPN
        • Meet MPCDF
        • Python for HPC
        • Advanced HPC Workshop
    • No.210, August 2022
      • High-performance Computing
        • Cobra successor procurement
        • CO2 footprint of MPCDF
        • Software news
      • GitLab CI Distributed Cache
        • Introduction
        • Adding a cache to your CI configuration
        • CI of software that depends on third-party packages
        • CI of a complex HPC code that requires CPU and partly GPU resources
        • Concluding remarks
      • Globus Flows
        • Introduction
        • Flows in detail
        • Example
        • Summary
      • News & Events
        • Introduction to MPCDF services
        • Meet MPCDF
        • Advanced HPC workshop
        • Python for HPC
    • No.209, April 2022
      • High-performance Computing
        • AlphaFold2 on the HPC system Raven
      • GitLab CI
        • GitLab shared runners on GPUs
        • Continuous integration testing for HPC codes on MPCDF GitLab
      • Globus Online
        • DataHub access via the Globus Online Portal
        • Enhanced functionality
        • More information
      • New SelfService Features and Improvements
        • Redesign of the login process
        • Viewing accounting data
        • Additional improvements
      • Access to AFS restricted for local Access only
      • News & Events
        • AI bootcamp
        • International HPC Summer School 2022
        • Workshop “Introduction to MPCDF services (online)”
        • “Meet MPCDF”: New online forum and lectures for MPCDF users
        • RDA-Deutschland-Tagung 2022
    • No.208, December 2021
      • High-performance Computing
        • Termination of general user operation for Draco login nodes
        • Announcement of CUDA no-defaults on Cobra and Raven
        • Usage of /tmp and /dev/shm on Cobra and Raven
        • Eigensolver library ELPA
        • Using Python-based hybrid-parallel codes on HPC systems
      • Python bindings for C++ using pybind11 and scikit-build
        • Interfacing Python/NumPy with C++ using pybind11
        • Build with scikit-build
      • The GitLab Package Registry
        • Example: Publishing Python packages
      • Using Application Tokens instead of Passwords
        • DataShare
        • GitLab
      • Software Publishing
        • Do it yourself
        • Publish via a data archiving site
        • Software Heritage
        • Final remarks
      • News & Events
        • International HPC Summer School 2022
        • Advanced HPC workshop 2021
        • 60 Years Max Planck Computing Centre in Garching
    • No.207, August 2021
      • High-performance Computing
        • HPC system Raven fully operational
        • GPU Computing on Raven
        • HPC system Cobra - Module system to be aligned with Raven
        • Decommissioning of Draco
      • HPC Cloud
      • Poetry: Packaging and Dependency Management for Python
        • Introduction
        • A Poetry project and initial configuration
        • Dependency Management
        • Running your code
        • Package creation and publishing
        • Conclusion
      • More functionality for the SelfService - Call to action for 2FA users
        • Migration of MyMPCDF functionality
        • Updated password policy
        • Two-Factor Authentication (2FA) - Call to action
      • News & Events
        • Brochure “High-Performance Computing and Data Science in the MPG”
        • GPU bootcamp
        • Introductory course for new users of MPCDF
        • Advanced HPC workshop
        • Python for HPC
    • No.206, April 2021
      • High-performance Computing
        • HPC System Raven: deployment of the final system
        • Charliecloud and Singularity containers supported on Cobra and Raven
        • Control and verification of the CPU affinity of processes and threads
      • High-performance data analytics and AI software stack at MPCDF
      • Decommissioning of AFS
      • Relaunch of MPCDF website and new technical documentation platform
      • Events
        • New online introductory course for new users of MPCDF
        • Advanced HPC workshop: save the date
    • Previous Editions
Technical Documentation

Computing

This chapter documents how to access and use the HPC and cluster compute resources at the MPCDF.

  • Introduction
  • Application Support for HPC, AI and HPDA
  • Gateway machines
  • Dais User Guide
  • Viper-CPU User Guide
  • Viper-GPU User Guide
  • Raven User Guide
  • Dedicated clusters
  • Software
  • Quickstart guide to HPC
  • Performance Monitoring
  • Training
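
The user guides and the Quickstart guide above all follow the same basic workflow: log in through a gateway, set up the software environment with environment modules, and submit work through the Slurm batch system. The following is a minimal sketch of that workflow only; the host name, module names, and Slurm settings below are illustrative placeholders, not values taken from this page — consult the system-specific user guides (Raven, Viper, Dais) for the actual ones.

```shell
# Log in via a gateway machine (see "Gateway machines"); the host name
# here is a placeholder -- the user guides list the real login hosts.
ssh user@gateway.example.mpcdf.mpg.de

# On the login node, set up the software environment with environment
# modules (see "Software environment (modules)" in the Quickstart guide).
module purge
module load gcc openmpi        # module names and versions differ per system

# Minimal Slurm batch script (see "Slurm example batch scripts");
# partition, task counts, and time limits are assumptions.
cat > job.sh <<'EOF'
#!/bin/bash -l
#SBATCH --job-name=example
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:10:00
srun ./my_mpi_program          # launch MPI ranks via srun, not mpirun
EOF

sbatch job.sh                  # submit the job; monitor it with 'squeue --me'
```

The Slurm sections of each user guide give the partitions, node counts, and runtime limits that are actually valid on the respective system.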

Last updated on Aug 12, 2025.

© Copyright Max Planck Computing and Data Facility
