Overview of the cluster#
The Esrum cluster is managed by the Data Analytics Platform (formerly the Phenomics Platform) at CBMR. Hosting and technical support are handled by UCPH-IT.
In addition to the documentation provided here, UCPH-IT provides documentation for the UCPH computing/HPC systems on KUnet.
Architecture#
The cluster consists of a head node, 12 compute nodes, 1 GPU / high-memory node, 2 GPU nodes, 2 RStudio web servers, and 1 server for running containers. A Shiny server managed by UCPH-IT is also available.
Users connect to the "head" node, from which jobs can be submitted to the individual compute nodes using the Slurm Workload Manager:
| # | Node | RAM | CPUs | GPUs | Name |
|---|------|-----|------|------|------|
| 1 | Head | 2 TB | 2x24 core AMD EPYC 7413 | | esrumhead01fl |
| 12 | Compute | 2 TB | 2x32 core AMD EPYC 7543 | | esrumcmpn*fl |
| 1 | GPU / high-memory | 4 TB | 2x32 core AMD EPYC 75F3 | 2x NVIDIA A100 80GB | esrumgpun01fl |
| 2 | GPU | 2 TB | 2x32 core AMD EPYC 9354 | 2x NVIDIA H100 80GB | esrumgpun0[3-4]fl |
| 2 | RStudio | 2 TB | 2x32 core AMD EPYC 7543 | | esrumweb*fl |
| 1 | Container | 2 TB | 2x32 core AMD EPYC 7543 | | esrumcont01fl |
Software#
The nodes all run Red Hat Enterprise Linux 8, and a range of scientific and other software is made available using environment modules. Missing software can be requested via UCPH-IT.
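A typical workflow combines the two mechanisms above: load the software you need via environment modules inside a batch script, then submit it to the compute nodes with Slurm from the head node. The sketch below is a minimal example; the module name/version and the resource values are placeholders, not a recommendation — run `module avail` on Esrum to see which modules are actually installed.

```shell
#!/bin/bash
#SBATCH --job-name=example-job
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=01:00:00

# Load software via environment modules; the module name here is a
# placeholder. Use `module avail` to list the software on Esrum.
module load samtools

# Replace with your actual workload.
samtools --version
```

Saved as e.g. `example-job.sh`, the script is submitted from the head node with `sbatch example-job.sh`, and `squeue -u $USER` shows its status in the queue.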
Backup policies and quotas#
Your /home folder and the apps, data, and people folders in projects are
automatically backed up. The scratch folders are NOT backed up. The
specific frequency and duration of backups differ for each type of
folder and may also differ for individual projects.
As a rule, folders for projects involving GDPR-protected data (indicated
by the project name ending with -AUDIT) are subject to more frequent
backups. However, on-site backups are kept for a shorter time to prevent
the unauthorized recovery of intentionally deleted data.
See Data storage on Esrum for more information.
Additional resources#
Official UCPH computing/HPC Systems documentation on KUnet.