Iron Research
- Name of the cluster: CMMC-CMTI-CMMG
- Institution: Max Planck Institute for Iron Research
Login nodes:
- cmti[001-002]
Hardware Configuration:

| | Login nodes | Compute nodes (14,320 CPU cores) | Compute nodes (24,576 CPU cores) |
|---|---|---|---|
| Nodes | cmti[001-002] | cmti[003-360] | cmmg[001-096] |
| CPU | Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz | Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz | AMD EPYC 9754 128-Core Processor |
| CPU(s) | 40 | 40 | 512 |
| Thread(s) per core | 1 | 1 | 2 |
| Core(s) per socket | 20 | 20 | 128 |
| Socket(s) | 2 | 2 | 2 |
| RAM | 192 GB | 192 GB | 768 GB |
cmmg[001-096] nodes: interconnect is based on Mellanox Technologies InfiniBand NDR200 fabric (speed: 200 Gb/s)
cmti[001-360] nodes: interconnect is based on Intel Omni-Path fabric (speed: 100 Gb/s)
Filesystems:
GPFS-based, with a total size of 420 TB and independent inode space for the following filesets:
- /u
shared home filesystem; GPFS-based; user quotas (currently 1 TB by default) are enforced; quotas can be checked with ‘/usr/lpp/mmfs/bin/mmlsquota’.
- /cmmc/ptmp
shared scratch filesystem; GPFS-based; no quotas enforced; NO BACKUPS!
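As a minimal sketch of how the two filesets are typically used (the job directory and input file names below are hypothetical):

```
# Check the user quota on the shared home fileset /u
/usr/lpp/mmfs/bin/mmlsquota

# Stage large runs in the scratch fileset: no quotas, but also NO BACKUPS
mkdir -p /cmmc/ptmp/$USER/my_job               # "my_job" is a placeholder
cp ~/inputs/run.inp /cmmc/ptmp/$USER/my_job/   # hypothetical input file
cd /cmmc/ptmp/$USER/my_job
```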
Compilers and Libraries:
The “module” subsystem is implemented on CMMC. Please use ‘module avail’ to see all available modules. A short compile example follows this list.
Intel compilers (-> ‘module load intel’): icc, icpc, ifort
GNU compilers (-> ‘module load gcc’): gcc, g++, gfortran
Intel MKL (-> ‘module load mkl’): $MKL_HOME is defined; libraries are found in $MKL_HOME/lib/intel64
Intel MPI (-> ‘module load impi’): mpicc, mpigcc, mpiicc, mpiifort, mpiexec, … This module becomes visible and loadable only after a compiler module (Intel or GCC) has been loaded.
Python (-> ‘module load anaconda’): python
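As a minimal sketch of a typical compile step using the modules listed above (the source file hello_mpi.c is a placeholder; the MKL link line is the standard sequential LP64 variant and may need adjusting for your code):

```
# A compiler module must be loaded before Intel MPI becomes visible
module load intel
module load impi
module load mkl

# Compile a hypothetical MPI program and link against MKL
mpiicc -O2 hello_mpi.c -o hello_mpi \
    -L$MKL_HOME/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm
```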
Batch system based on Slurm:
The batch system on CMMC is the Slurm Workload Manager. A brief introduction to the basic commands (srun, sbatch, squeue, scancel, …) can be found on the Cobra home page. For more detailed information, see the Slurm handbook. See also the sample batch scripts, which must be modified for the CMMC-CMTI-CMMG cluster.
Current Slurm configuration on CMMC-CMTI-CMMG:
default turnaround time: 24 hours
current max. turnaround time (wallclock): 96 hours
two partitions: p.cmfe and s.cmfe (default)
p.cmfe partition: for parallel MPI or hybrid MPI/OpenMP jobs. Resources are allocated exclusively per node. Max. nodes per job is 44 (see the sample script after this list)
s.cmfe partition: for serial or OpenMP jobs. Nodes are shared. Jobs are limited to CPUs on a single node. Default RAM per CPU is 2 GB
cmti node features (for use with the Slurm --constraint option): cmti, cascadelake, mem192G
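A minimal sketch of a batch script for a parallel MPI job in the p.cmfe partition; the job name, node/task counts, and executable are placeholders, and the official MPCDF sample scripts should be consulted for site-specific details:

```
#!/bin/bash -l
#SBATCH --job-name=my_mpi_job       # placeholder job name
#SBATCH --partition=p.cmfe          # exclusive-node partition for MPI jobs
#SBATCH --nodes=2                   # max. 44 nodes per job
#SBATCH --ntasks-per-node=40        # 40 CPUs per cmti compute node
#SBATCH --constraint=cmti           # optional: request cmti nodes via their feature tag
#SBATCH --time=24:00:00             # wallclock limit, max. 96:00:00

module purge
module load intel impi

srun ./hello_mpi                    # hypothetical executable
```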
Useful tips:
The default run time limit for jobs that do not specify a value is 24 hours. Use the --time option of sbatch/srun to set a limit on the total run time of the job allocation, up to a maximum of 96 hours.
By default, jobs in the p.cmfe partition use all of the memory on the allocated nodes.
OpenMP codes require the variable OMP_NUM_THREADS to be set. Its value can be obtained from the Slurm environment variable $SLURM_CPUS_PER_TASK, which is set when --cpus-per-task is specified in an sbatch script (an example is on the help information page; see also the sketch below).
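A minimal sketch of an OpenMP batch script for the shared s.cmfe partition, showing how OMP_NUM_THREADS is derived from $SLURM_CPUS_PER_TASK (job name, thread count, and executable are placeholders):

```
#!/bin/bash -l
#SBATCH --job-name=my_omp_job       # placeholder job name
#SBATCH --partition=s.cmfe          # shared partition, single-node jobs only
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8           # placeholder thread count
#SBATCH --mem-per-cpu=2G            # default is 2 GB per CPU; raise if needed
#SBATCH --time=12:00:00             # wallclock limit, max. 96:00:00

# Propagate the Slurm CPU allocation to OpenMP
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

srun ./hello_omp                    # hypothetical executable
```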
Support:
For support please create a trouble ticket at the MPCDF helpdesk.