Hardware available through MetaCentrum
The computing equipment falls into several categories:
- High density (HD) clusters have two CPUs (i.e., 8-40 cores) in shared memory (96-512 GB), offering the best computing power-to-price ratio. They cover the needs of applications with limited or no internal parallelism that can instead run many simultaneous instances (high-throughput computing). Another important class is applications that use commercial software with per-core licensing. Some nodes are equipped with GPU accelerators.
- Symmetric Multiprocessing (SMP) clusters have more CPUs (40-80 cores) in shared memory (up to 1250 GB) and are oriented towards memory-demanding applications and applications requiring larger numbers of CPUs communicating via shared memory. SMP nodes are suitable for applications with finer-grained parallelization across a higher number of cores.
- Special SMP machines (SGI UV2) with extremely large shared memory (6 TB) and a reasonably high number of CPU cores (up to 384). These machines are available for extremely demanding applications, either in terms of parallelism or memory. Typical examples of such applications are quantum chemical calculations, fluid dynamics (including climate simulations) or bioinformatics codes.
- Cluster with Xeon Phi many-core processors. Xeon Phi is a massively parallel architecture consisting of a large number of x86 cores. Unlike the old generation, the new Xeon Phi (based on the Knights Landing architecture) is a self-booting system (no conventional host CPU is needed) that is fully compatible with the x86 architecture.
Cluster nodes and data storage are mutually interconnected with 1 Gbit/s and 10 Gbit/s Ethernet as well as a local InfiniBand network with higher bandwidth (40 Gbit/s) and lower latency.
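The practical difference between the interconnects shows up most clearly in latency-sensitive message passing. The following is a minimal, illustrative MPI ping-pong sketch in C, not MetaCentrum-specific code; it assumes an MPI implementation is available (typically via an environment module) and measures the latency an application would observe over whichever network the MPI library is configured to use:

```c
/* Minimal MPI ping-pong latency sketch (illustrative only). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int reps = 1000;
    char byte = 0;
    MPI_Barrier(MPI_COMM_WORLD);          /* start both ranks together */
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {                  /* rank 0 sends, then waits for the echo */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {           /* rank 1 echoes every message back */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("average one-way latency: %.2f us\n",
               (t1 - t0) / reps / 2 * 1e6);
    MPI_Finalize();
    return 0;
}
```

Run with two ranks placed on different nodes (e.g. mpirun -np 2 ./pingpong); over InfiniBand the reported latency is typically far lower than over Ethernet, which is why tightly coupled parallel jobs are routed over it.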
Special machines
Two special SMP machines (SGI UV2) with extremely large shared memory (6 TB each) and a reasonably high number of CPU cores (currently 384 and 288) are available for extremely demanding applications, either in terms of parallelism or memory, and form a bridge towards a "classical" supercomputing environment. Naturally, applications must be tuned specifically to run efficiently on such machines; in particular, they must be aware of non-uniform memory access times (NUMA architecture; see the sketch after the machine listings below). Typical examples of such applications are quantum chemical calculations, fluid dynamics (including climate simulations) or bioinformatics codes.
- urga.cerit-sc.cz (384 CPU, 1 node, January 2015)
- 48x 8-core Intel Xeon E5-4627 v2, 3.30 GHz
- 6 TB RAM
- 72 TB scratch
- 6x InfiniBand 40 Gbit/s, 2x Ethernet 10 Gbit/s
- The performance of each node (as measured during acceptance tests of the cluster) reaches 13198 points in the SPECfp2006 base rate benchmark (34.37 per core)
- ungu.cerit-sc.cz (288 CPU, 1 node, December 2013)
- 48x 6-core Intel Xeon E5-4617, 2.90 GHz
- 6 TB RAM
- 72 TB scratch
- 6x InfiniBand 40 Gbit/s, 2x Ethernet 10 Gbit/s
- The performance of each node (as measured during acceptance tests of the cluster) reaches 8830 points in the SPECfp2006 base rate benchmark (30.66 per core)
A smaller machine of the same class:
- uruk.cerit-sc.cz (HPE Superdome Flex, 144 CPU, 1 node, from February 2019)
- 8x 18-core Intel Xeon Gold 6140, 3.70 GHz
- 3.4 TB RAM
- 2x 1.82 TiB NVMe SSD, 2x 447.13 GiB 7.2k HDD, 4x 3.64 TiB 7.2k HDD (scratch)
- 1x InfiniBand 8 Gbit/s, 4x Ethernet 10 Gbit/s
- The performance of each node (as measured during acceptance tests of the cluster) reaches 700 points in the SPECfp2017 base rate benchmark (4.86 per core)
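As a concrete illustration of the NUMA tuning mentioned above, the following C/OpenMP sketch (illustrative only, not code from these machines) uses the common first-touch technique: the operating system places each memory page on the NUMA node of the thread that first writes it, so initializing and processing data with the same static thread layout keeps most accesses socket-local:

```c
/* First-touch NUMA initialization sketch (illustrative only;
 * compile with -fopenmp). */
#include <stdlib.h>

int main(void) {
    const size_t n = 1UL << 28;              /* 2 GiB of doubles */
    double *a = malloc(n * sizeof *a);
    if (!a) return 1;

    /* First touch in parallel: each thread faults in "its" pages,
     * so they are allocated on that thread's own NUMA node. */
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < n; i++)
        a[i] = 0.0;

    /* Compute phases reuse the same static schedule, so each thread
     * mostly reads and writes memory local to its own socket. */
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < n; i++)
        a[i] = a[i] * 2.0 + 1.0;

    free(a);
    return 0;
}
```

Placement can also be forced externally with standard Linux tools such as numactl (for example, numactl --cpunodebind=0 --membind=0 ./app pins both threads and memory to a single socket).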
A cluster with six of the newest Intel Xeon Phi 7210 many-core processors. Xeon Phi is a massively parallel architecture consisting of a large number of x86 cores. Unlike the old generation, the new Xeon Phi (based on the Knights Landing architecture) is a self-booting system (no conventional host CPU is needed) that is fully compatible with the x86 architecture. Thus, you can submit jobs to Xeon Phi nodes in the same way as to CPU-based nodes, using the same applications. No recompilation or algorithm redesign is needed, although either may be beneficial (see the sketch after the hardware listing below).
- phi.cerit-sc.cz (384 CPU, 6 nodes, December 2016), each of the nodes has the following hardware specification:
- 1x 64-core Intel Xeon Phi 7210, 1.30 GHz
- 208 GB RAM
- 1x 800 GB SSD, 2x 3 TB in RAID-0, giving 6.8 TB in every node
- Ethernet 1 Gbit/s
- The performance of each node (as measured during acceptance tests of the cluster) reaches 748 points in the SPECfp2006 base rate benchmark (11.7 per core)
- The Xeon Phi 7210 has 256 virtual cores (64 physical) running at 1.3 GHz, with an overall performance of 2.66 TFLOPS in double precision and 5.32 TFLOPS in single precision.
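To illustrate why recompiling may still be beneficial: each Knights Landing core has two 512-bit vector units, so simple, vectorizable loops benefit the most from a KNL-targeted build. A minimal C sketch (illustrative only; -march=knl is the GCC flag for this target, other compilers have equivalent switches):

```c
/* Vectorization candidate for Knights Landing (illustrative only).
 * Generic build:  gcc -O3 -c saxpy.c
 * KNL build:      gcc -O3 -march=knl -c saxpy.c
 * Both produce correct x86 code; the KNL build allows the compiler to
 * emit AVX-512 instructions processing 16 floats per operation. */
#include <stddef.h>

void saxpy(size_t n, float a, const float *restrict x, float *restrict y) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];    /* trivially vectorizable loop */
}
```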
SMP clusters
Symmetric Multiprocessing (SMP) clusters have more CPUs (40-80 cores) in shared memory (up to 1250 GB) and are oriented towards memory-demanding applications and applications requiring larger numbers of CPUs communicating via shared memory. On the other hand, due to technical restrictions, only CPUs with slightly lower per-core performance can be used in such systems. SMP nodes are therefore suitable for applications with finer-grained parallelization across a higher number of cores.
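A minimal C/OpenMP sketch of the shared-memory model these nodes target (illustrative only, not site-specific code): all cores of a single node cooperate on one data set directly through shared memory, with no message passing involved, as in this parallel reduction:

```c
/* Shared-memory parallel reduction sketch (compile with -fopenmp). */
#include <stdio.h>
#include <omp.h>

int main(void) {
    const int n = 100000000;
    double sum = 0.0;

    /* The loop is split across all cores of the node; OpenMP combines
     * the per-thread partial sums into one result at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += 1.0 / ((double)i + 1.0);

    printf("threads used: %d, sum: %f\n", omp_get_max_threads(), sum);
    return 0;
}
```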
- zelda.cerit-sc.cz (720 CPU, 10 nodes, January 2018), each of the nodes has the following hardware specification:
- 4x Intel Xeon Gold 6140, 2.30 GHz
- 768 GB RAM
- 4x 4 TB 7.2k HDD + 2x 1.8 TB SSD
- 1x InfiniBand 56 Gbit/s, 1x 10 Gbit/s Ethernet
- The performance of each node (as measured during acceptance tests of the cluster) reaches 2700 points in the SPECfp2006 base rate benchmark (37.5 per core)
- zenon.cerit-sc.cz (1920 CPU, 60 nodes), each of the nodes has the following hardware specification:
- 2x AMD EPYC 7351 (2x 16 Core) 2.40 GHz
- 503.68 GiB RAM
- 1x 1.82 TiB NVMe SSD
- 1x InfiniBand 56 Gbit/s, 2x Ethernet 10 Gbit/s
- The performance of each node (as measured during acceptance tests of the cluster) reaches 1320 points in the SPECfp2006 base rate benchmark (41.25 per core)
- decommissioned – zorg.priv.cerit-sc.cz (40 CPU, 1 node, January 2017 – February 2018)
- 4x 10-core Intel Xeon E7-8891 v2 3.20 GHz
- 512 GB RAM
- 4x 1.2 TB 10k in RAID-5
- 1x InfiniBand, 1x 10 Gbit/s Ethernet
- The performance of each node (as measured during acceptance tests of the cluster) reaches 1400 points in the SPECfp2006 base rate benchmark (35 per core)
- zefron.priv.cerit-sc.cz (320 CPU, 8 nodes, January 2016), each of the nodes has the following hardware specification:
- 4x 10-core Intel Xeon E5-4627 v3, 2.60 GHz
- 1 TiB RAM
- 4x 1 TB 10k, 2x 480 GB SSD
- 1x InfiniBand 40 Gbit/s, 1x Ethernet 10 Gbit/s, 1x Ethernet 1 Gbit/s
- The performance of each node (as measured during acceptance tests of the cluster) reaches 1370 points in the SPECfp2006 base rate benchmark (34 per core)
- Node zefron8 contains one NVIDIA Tesla K40 GPU card.
- zewura.cerit-sc.cz (560 CPU, 8 nodes, from December 2011; 6 nodes decommissioned in 2018-2020)
- 8x Intel Xeon E7-4860 processors (10 CPU cores, 2.26 GHz)
- 512 GB RAM
- 20x 900 GB hard drives to store temporary data (/scratch), configured in RAID-10, thus having 8 TB capacity.
- InfiniBand: two 4xQDR interfaces per node, 10 Gbit/s Ethernet
- The performance of each node (as measured during acceptance tests of the cluster) reaches 1250 points in the SPECfp2006 base rate benchmark (15.62 per core)
- decommissioned – zebra.priv.cerit-sc.cz (960 CPU, 24 nodes, from June 2012)
- 8x Intel Xeon E7-4860 processors (10 CPU cores, 2.26 GHz)
- 512 GB RAM
- 12x 900 GB hard drives to store temporary data (/scratch), configured in RAID-5, thus having 9.9 TB capacity.
- InfiniBand: two 4xQDR interfaces per node, 10 Gbit/s Ethernet
- The performance of each node (as measured during acceptance tests of the cluster) reaches 1220 points in the SPECfp2006 base rate benchmark (15.25 per core).
HD clusters
High density (HD) clusters have two CPUs (i.e., 8-40 cores) in shared memory (128-512 GB), offering the best computing power-to-price ratio. They cover the needs of applications with limited or no internal parallelism that can instead run many simultaneous instances (high-throughput computing), as sketched below. Another important class is applications that use commercial software with per-core licensing.
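A minimal C sketch of the high-throughput pattern (illustrative only; it assumes PBS Professional, which exposes a subjob's index in the PBS_ARRAY_INDEX environment variable, and a hypothetical input_NNNN.dat naming scheme): the same program is submitted many times as an array job, and each instance independently picks its own work unit, with no communication between instances:

```c
/* High-throughput array-job worker sketch (illustrative only). */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* PBS Professional sets PBS_ARRAY_INDEX for each subjob of an
     * array job; other schedulers use a different variable name. */
    const char *idx = getenv("PBS_ARRAY_INDEX");
    int task = idx ? atoi(idx) : 0;    /* fall back to 0 when run by hand */

    /* Hypothetical naming scheme: one input file per instance. */
    char path[64];
    snprintf(path, sizeof path, "input_%04d.dat", task);
    printf("instance %d processing %s\n", task, path);
    return 0;
}
```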
- zia.cerit-sc.cz (640 CPU, 5 nodes with 4x NVIDIA A100 GPUs each, from January 2021)
- 2x AMD EPYC 7662 (2x 64 Core) 2.00 GHz
- 1 TB RAM
- 3x 1.46 TiB NVMe SSD
- 1x InfiniBand 8 Gbit/s, 2x Ethernet 10 Gbit/s
- The performance of each node (as measured during acceptance tests of the cluster) reaches 527 points in the SPECfp2017 base rate benchmark (4.1 per core)
- 4x NVIDIA A100
- glados.cerit-sc.cz (720 CPU, 17 nodes, from April 2018), each of the nodes has the following hardware specification:
- 2x Intel Xeon Gold 6138 (2x 20 Core)
- 384 GB RAM
- 2x 2 TB NVMe SSD
- 2x Ethernet 10 Gbit/s
- The performance of each node (as measured during acceptance tests of the cluster) reaches 1320 points in the SPECfp2006 base rate benchmark (41.25 per core)
- 1 node with an NVIDIA Titan V GPU, 8 nodes with NVIDIA GeForce GTX 1080 Ti GPUs
- gita.cerit-sc.cz (896 CPU, 28 nodes), each of the nodes has the following hardware specification:
- 2x 16-core AMD EPYC
- 472 GB RAM
- The performance of each node (as measured during acceptance tests of the cluster) reaches 332 points in the SPECfp2017 base rate benchmark (10.4 per core)
- 2x NVIDIA 2080Ti in 14 nodes
- black1.cerit-sc.cz (24 CPU, 1 node, from July 2018):
- 2x Intel Xeon Gold 6138 (2x 12 Core) 2.0 GHz
- 512 GB RAM
- 2x 1.8 TB NVMe SSD
- 1x Ethernet 10 Gbit/s
- The performance of each node (as measured during acceptance tests of the cluster) reaches 806 points in the SPECfp2006 base rate benchmark (33.58 per core)
- 4x NVIDIA Tesla P100 GPUs
- hde.cerit-sc.cz (1920 CPU, 60 nodes, zenon, from September 2018), each of the nodes has the following hardware specification:
- 2x AMD EPYC 7351 (2x 16 Core) 2.40 GHz
- 524 GB RAM
- 1x 1.82 TiB NVMe SSD
- 1x InfiniBand 56 Gbit/s, 2x Ethernet 10 Gbit/s
- The performance of each node (as measured during acceptance tests of the cluster) reaches 1320 points in the SPECfp2006 base rate benchmark (41.25 per core)
- decommissioned – hdc.cerit-sc.cz (1744 CPU, 109 nodes, zapat, January 2013 – September 2018), each of the nodes has the following hardware specification:
- 2x 8-core Intel Xeon E5-2670, 2.6 GHz
- 128 GB RAM
- 2x 600 GB 15k
- location Jihlava
- 1x InfiniBand 40 Gbit/s, 2x Ethernet 1 Gbit/s
- The performance of each node (as measured during acceptance tests of the cluster) reaches 471 points in the SPECfp2006 base rate benchmark (29 per core)
- decommissioned – hdb.cerit-sc.cz (248 CPU, 32 nodes, zigur, from January 2013), each of the nodes has the following hardware specification:
- 2x 4-core Intel Xeon E5-2643, 3.3 GHz
- 128 GB RAM
- 2x 600 GB 15k
- location Jihlava
- 1x InfiniBand 40 Gbit/s, 2x Ethernet 1 Gbit/s
- The performance of each node (as measured during acceptance tests of the cluster) reaches 327 points in the SPECfp2006 base rate benchmark (41 per core)
- decommissioned – hda.cerit-sc.cz (576 CPU, 48 nodes, zegox, July 2012 – September 2018), each of the nodes has the following hardware specification:
- 2x 6-core Intel Xeon E5-2620
- 96 GB RAM
- 2x 600 GB hard drives to store temporary data (/scratch)
- InfiniBand: 1x per node; 10 Gbit/s Ethernet: 8 nodes only; 1 Gbit/s Ethernet: two interfaces per node.
- The performance of each node (as measured during acceptance tests of the cluster) reaches 250 points in the SPECfp2006 base rate benchmark (20.83 per core).
The CERIT-SC centre is an integral part and the most powerful node of the National Grid Infrastructure MetaCentrum operated by CESNET.
As an experimental facility, the Centre is an indispensable part of the national e-infrastructure, a complex system of interconnected computing and storage capacities and related services for the research community of the Czech Republic, complementary to the CESNET association and the IT4Innovations supercomputing centre.