
bwUniCluster



On January 27, 2014, the SCC put a parallel computer into operation as a state service under the name "bwUniCluster", as part of the Baden-Württemberg implementation concept for high-performance computing, bwHPC. On May 2, 2017, an extension of the cluster was put into operation; this extension continues to operate as part of bwUniCluster 2.0.

The extended HPC system consisted of more than 860 SMP nodes with 64-bit Intel Xeon processors. The parallel computer provided the universities of the state of Baden-Württemberg with basic computing power and could be used free of charge by employees of all universities in Baden-Württemberg.

The older part of the HPC system was decommissioned on March 10, 2020.



Each state university regulated access authorization to this system for its employees itself. KIT members received access authorization through a separate activation of their KIT account, which was requested from the Service Desk using a corresponding form that could be downloaded from the Service Desk website.

Configuration of the bwUniCluster

The bwUniCluster included:

  • 2 login nodes, each with 16 cores in "Sandy Bridge" architecture with a theoretical peak performance of 332.8 GFLOPS and 64 GB main memory per node,
  • 2 login nodes, each with 28 cores in "Broadwell" architecture and 128 GB main memory per node,
  • 512 "thin" computing nodes (Sandy Bridge), each with 16 cores with a theoretical peak performance of 332.8 GFLOPS and 64 GB main memory per node,
  • 352 computing nodes (Broadwell), each with 28 cores with a theoretical peak performance of approx. 770 GFLOPS and 128 GB of main memory per node,
  • 8 "fat" computing nodes (Sandy Bridge), each with 32 cores with a theoretical peak performance of 614.4 GFLOPS and 1 TB of main memory per node
  • and an InfiniBand 4X FDR Interconnect as the connection network.



The bwUniCluster was a massively parallel computer with a total of 886 nodes, 10 of which were service nodes; the file server nodes are not included in this count. All "Sandy Bridge" nodes except the "fat" nodes had a clock frequency of 2.6 GHz; the "fat" nodes ran at 2.4 GHz. All nodes had local memory, local disks and network adapters. A single Sandy Bridge node had a theoretical peak performance of 332.8 GFLOPS ("thin" nodes) or 614.4 GFLOPS ("fat" nodes), and a single Broadwell node one of approximately 770 GFLOPS, resulting in a theoretical peak performance of approximately 444 TFLOPS for the entire system. The main memory across all computing nodes amounted to approx. 86 TB. All nodes were interconnected by an InfiniBand 4X FDR interconnect.
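The per-node figures can be reproduced from cores × clock × floating-point operations per cycle, assuming 8 double-precision FLOPs per cycle for the AVX units of the Sandy Bridge generation; a minimal sketch of the arithmetic (the Broadwell per-node value is taken from the text rather than derived here):

```python
# Theoretical peak = cores per node * clock (GHz) * DP FLOPs per cycle.
# Assumption: 8 DP FLOPs/cycle on Sandy Bridge (AVX); the Broadwell
# per-node figure (~770 GFLOPS) is quoted from the text above.

def peak_gflops(cores, clock_ghz, flops_per_cycle=8):
    """Theoretical peak performance of one node in GFLOPS."""
    return cores * clock_ghz * flops_per_cycle

thin = peak_gflops(16, 2.6)   # 332.8 GFLOPS ("thin" Sandy Bridge node)
fat = peak_gflops(32, 2.4)    # 614.4 GFLOPS ("fat" Sandy Bridge node)
broadwell = 770.0             # approx. per-node value quoted in the text

# Aggregate over all compute nodes (512 thin + 352 Broadwell + 8 fat):
total_tflops = (512 * thin + 352 * broadwell + 8 * fat) / 1000
print(thin, fat, round(total_tflops))  # ~446 TFLOPS
```

The sum comes to roughly 446 TFLOPS, consistent with the approx. 444 TFLOPS stated for the entire system.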

The base operating system on each node was Red Hat Enterprise Linux (RHEL) 7.x. KITE, an open environment for the operation of heterogeneous computing clusters, served as the cluster management software.

The scalable, parallel Lustre file system was connected as the global file system via a separate InfiniBand network. The use of several Lustre Object Storage Target (OST) servers and Meta Data Servers (MDS) ensured both high scalability and redundancy in the event of individual server failures. 425 TiB of disk space was available to KIT employees in the HOME directory, with the same amount of disk space made available as a HOME directory to users from elsewhere in Baden-Württemberg. Approx. 850 TiB of disk space was available in the WORK directory; a further 850 TiB was available for workspaces. In addition, each node of the cluster was equipped with local disks for temporary data.

Detailed description of the nodes:

  • 2 16-way (login) nodes, each with 2 octa-core Intel Xeon E5-2670 processors (Sandy Bridge) with a clock frequency of 2.6 GHz, 64 GB of main memory and 4 TB of local disk space,
  • 2 20-way (login) nodes, each with 2 10-core Intel Xeon E5-2630 v4 processors (Broadwell) with a standard clock frequency of 2.2 GHz, 128 GB main memory and 480 GB local SSD,
  • 512 16-way (computing) nodes, each with 2 octa-core Intel Xeon E5-2670 processors (Sandy Bridge) with a clock frequency of 2.6 GHz, 64 GB main memory and 2 TB local disk space,
  • 352 28-way (computing) nodes, each with 2 14-core Intel Xeon E5-2660 v4 processors (Broadwell) with a standard clock frequency of 2.0 GHz, 128 GB main memory and 480 GB local SSD,
  • 8 32-way (compute) nodes, each with 4 octa-core Intel Xeon E5-4640 processors (Sandy Bridge) with a clock frequency of 2.4 GHz, 1 TB of main memory and 7 TB of local disk space and
  • 10 16-way service nodes, each with 2 octa-core Intel Xeon E5-2670 processors with a clock frequency of 2.6 GHz and 64 GB of main memory.

A single octa-core processor (Sandy Bridge) had 20 MB L3 cache and operated the system bus at 1600 MHz, with each individual core of the Sandy Bridge processor having 64 KB L1 cache and 256 KB L2 cache.
A single 14-core processor (Broadwell) had 35 MB of L3 cache and operated the system bus at 2400 MHz, with each individual core of the Broadwell processor having 64 KB of L1 cache and 256 KB of L2 cache.


Access to the bwUniCluster for KIT employees

Only secure procedures such as Secure Shell (ssh) and the associated secure copy (scp) were permitted when logging in or copying data to and from the bwUniCluster; telnet, rsh and other r-commands were disabled for security reasons. To log in to the bwUniCluster (uc1 or uc1e), one of the following commands was used:

ssh kit-account@bwunicluster.scc.kit.edu            (or ssh kit-account@uc1.scc.kit.edu)    # "Sandy Bridge" architecture
ssh kit-account@bwunicluster-broadwell.scc.kit.edu  (or ssh kit-account@uc1e.scc.kit.edu)   # "Broadwell" architecture
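Data transfer with scp used the same host names; a hypothetical sketch (the file names and the `kit-account` placeholder are illustrative, not from the original text):

```shell
# Copy a local archive to the HOME directory on the cluster
# (file name and account are placeholders):
scp input-data.tar.gz kit-account@bwunicluster.scc.kit.edu:~/

# Fetch results from the cluster back to the current local directory:
scp kit-account@bwunicluster.scc.kit.edu:~/results.tar.gz .
```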











