Biophysics CHEM


Name of the cluster:

CHEM

Institution:

Max Planck Institute of Biophysics

Login nodes:

  • chem11.bc.mpcdf.mpg.de

  • chem12.bc.mpcdf.mpg.de

Hardware Configuration:

  • 2 login nodes chem[11-12]
    2 x AMD EPYC 9554 64-Core Processor @ 3.10 GHz
    128 cores per node
    hyper-threading enabled - 2 threads per core
    377 GB RAM
    4 x NVIDIA L40S GPUs per node

  • 56 execution nodes chemg[101-156]
    2 x AMD EPYC 9554 64-Core Processor @ 3.10 GHz
    128 cores per node
    hyper-threading enabled - 2 threads per core
    377 GB RAM
    4 x NVIDIA L40S GPUs per node

  • 2 execution nodes chemg[201-202]
    2 x AMD EPYC 9554 64-Core Processor @ 3.10 GHz
    128 cores per node
    hyper-threading disabled - 1 thread per core
    1.5 TB RAM
    4 x NVIDIA H200 GPUs per node

  • node interconnect
    based on a Mellanox Technologies InfiniBand fabric (speed: 400 Gb/s)

Filesystems:

GPFS-based, with a total size of 1.7 PB and independent inode space for the following filesets:

/u

shared home filesystem; GPFS-based; user quotas (100 GB data, 1M files) enforced; quotas can be checked with ‘/usr/lpp/mmfs/bin/mmlsquota’. NO BACKUPS YET
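
A minimal check from a login node (without arguments the GPFS command reports your usage and limits on all filesystems for which quotas are enabled):

    /usr/lpp/mmfs/bin/mmlsquota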

/chem2/scratch

shared scratch filesystem; GPFS-based; no quotas enforced. NO BACKUPS!

/phys

see the PHYS cluster documentation for its specs; its performance is limited on CHEM, so please use it with care.

Compilers and Libraries:

Hierarchical environment modules are used at MPCDF to provide software packages and to enable switching between different software versions. Users have to specify the needed modules with explicit versions at login and during the startup of a batch job. Not all software modules are displayed immediately by the module avail command; for some, a compiler and/or MPI module has to be loaded first. You can search the full hierarchy of the installed software modules with the find-module command.
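
A typical session might look like the following sketch; the package names and versions are only placeholders, so check what is actually installed with find-module:

    module avail                # list modules visible at the current level of the hierarchy
    find-module gromacs         # search the full hierarchy for a package
    module load gcc/13          # load a compiler first ...
    module load openmpi/4       # ... which makes the matching MPI-dependent modules visible
    module load gromacs/2024    # then load the application built with this toolchain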

Batch system based on Slurm

The batch system on CHEM is the Slurm Workload Manager. A brief introduction to the basic commands (srun, sbatch, squeue, scancel, …) can be found on the Raven home page; for more detailed information, see the Slurm handbook. See also the sample batch scripts, which must be adapted for the CHEM cluster.
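
The day-to-day workflow with these commands, assuming a batch script named job.sh:

    sbatch job.sh        # submit the batch script and print the job ID
    squeue -u $USER      # list your pending and running jobs
    scancel <jobid>      # cancel a job by its ID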

Current Slurm configuration on CHEM:

  • default turnaround time: 2 hours

  • current max. turnaround time (wallclock): 24 hours

  • the p.chem partition includes all batch nodes in exclusive usage and is the default (a sample batch script is sketched after this list)

  • the s.chem partition can be used for serial jobs; its nodes can be shared between jobs

  • the l.chem shared partition is for long-running (up to 5 days) serial jobs (<=10 cores per job; <=512 cores in total)

  • the alpha QOS is for AlphaFold jobs only (up to 5 days)
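
A minimal sketch of a batch script for the default p.chem partition; the job name, module names and executable are placeholders and have to be adapted:

    #!/bin/bash -l
    #SBATCH -J myjob                  # job name (placeholder)
    #SBATCH --partition=p.chem        # default, exclusive partition
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=128     # adjust to the number of MPI ranks per node your code needs
    #SBATCH --time=12:00:00           # must not exceed the 24-hour limit

    module purge
    module load gcc openmpi           # placeholder toolchain, load your actual modules

    srun ./my_program                 # placeholder executable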

Useful tips:

By default, the run time limit for jobs that don’t specify a value is 2 hours. Use the --time option of sbatch/srun to set a limit on the total run time of the job allocation, but not longer than 24 hours.
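
The limit can also be set on the command line at submission time, for example for an 8-hour job:

    sbatch --time=08:00:00 job.sh    # hh:mm:ss, must not exceed 24:00:00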

The default memory per node in the shared partition is 47000 MB. To grant the job access to all of the memory on each node, use the --mem=0 option for sbatch/srun.
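
For example, in a job script for the shared s.chem partition:

    #SBATCH --partition=s.chem
    #SBATCH --mem=0                  # request all of the memory available on the node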

OpenMP codes require the variable OMP_NUM_THREADS to be set. Its value can be obtained from the Slurm environment variable $SLURM_CPUS_PER_TASK, which is set when --cpus-per-task is specified in an sbatch script (an example can be found on the help information page).
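
A minimal sketch of the relevant part of such a batch script; the thread count and the executable name are placeholders:

    #SBATCH --cpus-per-task=16                     # OpenMP threads per task (placeholder value)

    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK    # hand the Slurm allocation to the OpenMP runtime
    srun ./my_openmp_program                       # placeholder executable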

To use GPUs, add the --gres option to your Slurm scripts and choose how many GPUs and/or which GPU model to request: #SBATCH --gres=gpu:l40s:X, where X is the number of GPUs (1, 2, 3 or 4).
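
For example, to request two of the L40S GPUs on a node (the executable name is a placeholder):

    #SBATCH --gres=gpu:l40s:2        # 2 of the 4 L40S GPUs on the node

    srun ./my_gpu_program            # placeholder executable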

GPU cards are in default compute mode.

Support:

For support, please create a trouble ticket at the MPCDF helpdesk.