Institute Cluster I
The SCC operated a computer system called InstitutsCluster I, which was funded by the DFG (http://www.dfg.de) and jointly procured with several KIT institutes.
Configuration of Institute Cluster I
The Institute Cluster I comprised:
- 2 login nodes, each with 8 cores, a theoretical peak performance of 85.3 GFLOPS and 32 GB of main memory per node,
- 200 computing nodes, each with 8 cores, a theoretical peak performance of 85.3 GFLOPS and 16 GB of main memory per node,
- and an InfiniBand 4X DDR interconnect with ConnectX Dual Port DDR HCAs.
The Institute Cluster I was a hybrid massively parallel computer with a total of 206 nodes. All nodes (except for the service nodes) ran at a clock frequency of 2.667 GHz, and every node had its own local memory, local disks and network adapters. A single compute node had a theoretical peak performance of 85.3 GFLOPS, giving a theoretical peak performance of 17.57 TFLOPS for the entire system. The main memory across all computing nodes amounted to 3.3 TB. All nodes were connected to one another via an InfiniBand 4X DDR interconnect.
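These peak figures follow directly from the core count and clock rate: each core of this processor generation is usually credited with 4 double-precision floating-point operations per cycle, so an 8-core node at 2.667 GHz reaches about 85.3 GFLOPS. A minimal sketch of that arithmetic (the 4 FLOPs/cycle figure is an assumption, not stated in the original text):

```python
# Peak-performance arithmetic for InstitutsCluster I (sketch).
FLOPS_PER_CYCLE = 4     # assumed: typical value for this Xeon generation
CORES_PER_NODE = 8      # 2 x quad-core Xeon X5355
CLOCK_GHZ = 2.667
NODES = 206             # total node count quoted above

node_peak_gflops = CORES_PER_NODE * CLOCK_GHZ * FLOPS_PER_CYCLE
system_peak_tflops = node_peak_gflops * NODES / 1000

print(f"per-node peak : {node_peak_gflops:.1f} GFLOPS")   # ~85.3 GFLOPS
print(f"system peak   : {system_peak_tflops:.2f} TFLOPS") # ~17.6 TFLOPS, i.e. the 17.57 TFLOPS quoted above up to rounding
```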
The base operating system on each node was SUSE Linux Enterprise Server (SLES) 11 SP1. KITE, an open environment for the operation of heterogeneous computing clusters, served as the cluster management software.
The scalable, parallel Lustre file system was attached as the global file system via a separate InfiniBand network. By using several Lustre Object Storage Target (OST) servers and metadata servers (MDS), both high scalability and redundancy in the event of individual server failures were achieved. Approximately 380 TB of disk space was available, which could also be accessed from other computers. In addition, each node of the cluster was equipped with 4 local disks for temporary data.
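The intended usage pattern follows from this layout: temporary files that only one node needs belong on the node-local disks, while shared input and results live on the global Lustre file system. The following is a minimal sketch of such a staging step; the directory names are hypothetical placeholders, not taken from the original text:

```python
# Sketch: keep node-local temporary data off the shared Lustre file system.
# LUSTRE_WORK and LOCAL_SCRATCH are placeholder paths for illustration only.
import os
import shutil
import tempfile

LUSTRE_WORK = "/lustre/work/my_project"   # assumed global Lustre directory
LOCAL_SCRATCH = "/tmp"                    # assumed node-local disk

def run_job(input_file: str, result_name: str) -> str:
    """Copy input to local scratch, work there, copy the result back to Lustre."""
    with tempfile.TemporaryDirectory(dir=LOCAL_SCRATCH) as scratch:
        local_input = shutil.copy(os.path.join(LUSTRE_WORK, input_file), scratch)
        local_result = os.path.join(scratch, result_name)

        # ... compute on local_input, writing to local_result ...
        with open(local_input) as src, open(local_result, "w") as dst:
            dst.write(src.read())         # stand-in for the actual computation

        return shutil.copy(local_result, LUSTRE_WORK)
```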
Brief description of the nodes and the interconnect:
- 6 eight-way login nodes, each with 2 quad-core Intel Xeon X5355 processors clocked at 2.667 GHz, 32 GB of main memory and 4x250 GB of local disk space;
- 200 eight-way computing nodes, each with 2 quad-core Intel Xeon X5355 processors clocked at 2.667 GHz, 16 GB of main memory and 4x250 GB of local disk space;
- 5 eight-way service nodes, each with 2 quad-core Intel Xeon E5345 processors clocked at 2.3 GHz, 8 GB of main memory and 4x250 GB of local disk space.
Each quad-core processor had 2 x 4 MB of cache and operated the system bus at 1333 MHz and the front-side bus (FSB) at 1066 MHz.
The interconnect was an InfiniBand 4X DDR switch (288 ports) from Flextronics (F-XR430095) with a total throughput of 288 x 40 Gb/s = 11.5 Tb/s. ConnectX InfiniBand HCAs (Dual Port DDR, PCIe 2.0 x8 at 2.5 GT/s) were used as adapters.
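The aggregate figure is simply the per-port rate multiplied by the port count: a 4X DDR link carries 4 lanes at 5 Gb/s each per direction, i.e. 20 Gb/s per direction or 40 Gb/s counted over both directions. A short check of that arithmetic (the lane-level breakdown is general InfiniBand DDR knowledge, not stated in the original text):

```python
# Sketch: aggregate switch throughput from per-port InfiniBand 4X DDR rates.
LANES_PER_PORT = 4        # "4X" link width
GBPS_PER_LANE = 5.0       # assumed DDR signalling rate per lane, per direction
DIRECTIONS = 2            # full duplex: both directions counted
PORTS = 288

port_gbps = LANES_PER_PORT * GBPS_PER_LANE * DIRECTIONS    # 40 Gb/s per port
total_tbps = PORTS * port_gbps / 1000                      # ~11.5 Tb/s

print(f"per port : {port_gbps:.0f} Gb/s")
print(f"switch   : {total_tbps:.2f} Tb/s")
```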