• High Performance Computing (HPC) and Cluster Computing

  • SCC gives KIT users access to High Performance Computing (HPC) and Data Intensive Computing (DIC) infrastructures. SCC itself operates several computer systems (bwUniCluster 2.0, HoreKa) for different user groups. In addition, KIT scientists can use the bwForClusters within the framework of the bwHPC federation. The SCC also advises on applications for computing time on supra-regional high-performance computers (Tier-2), such as HoreKa, or on the national supercomputers (Tier-1) of the Gauss Centre for Supercomputing (JSC, HLRS, LRZ).

HoreKa Supercomputer

The HoreKa supercomputer is a Tier-2 HPC system positioned directly below the national Tier-1 systems. It has nearly 60,000 CPU cores, 668 GPUs, more than 220 terabytes of main memory and can deliver up to 17 PetaFLOPS.



bwUniCluster 2.0

bwUniCluster 2.0, the Baden-Württemberg university (or universal) and GFP (Großforschungsbereich, i.e. large-scale research sector) cluster, is a parallel computer that provides Baden-Württemberg universities with a comprehensive basic supply of high-performance computing capacity. SCC operates this Tier-3 cluster as part of "bwHPC", the Baden-Württemberg state concept for high performance computing.


The four bwForClusters

In addition to the bwUniCluster basic supply system, four high-performance computing clusters for research purposes (bwForClusters, for short) operate at HPC performance level 3 (Tier-3) in Baden-Württemberg, supplying different scientific areas with computing time.


Future Technologies Partition

In addition to the supercomputer HoreKa, NHR@KIT has established a second operating environment, the so-called "Future Technologies Partition".



HAICORE

The Helmholtz AI COmpute REssources (HAICORE) infrastructure project was launched in early 2020 as part of the Helmholtz Incubator "Information & Data Science" to provide high-performance computing resources for artificial intelligence (AI) researchers in the Helmholtz Association.



An essential task for the SCC is to support users with their technological and scientific applications, which need not be limited to the HPC area. In addition, support and consulting are provided for software development components such as compilers, debuggers, analysis tools, and MPI, as well as for open-source codes and numerical libraries. Research-related and research-accompanying support is provided by the SimLabs (Simulation Laboratories), which currently cover four research areas: Earth (Climate) and Environment, NanoMicro, Energy, and Astroparticle Physics. For more information on the SimLabs, please visit the Scientific Computing and Simulation Department.

In addition to supporting research and development with hardware, software and many years of know-how in these areas, teaching in the HPC field and its environment is of equal importance. The page "Teaching, Training and Further Education" provides information about offers in the field of teaching.

The cost of this service is calculated according to the budgeting rules applied to your organization.

Storage for scientific computing

The SCC operates storage systems for different purposes:

Storage systems at the clusters

So-called parallel file systems are directly connected to the clusters. They are characterized by very high throughput and very good scalability. Since the beginning of 2005, Lustre has been used as the file system for the clusters at the SCC; it currently serves the research high-performance computer HoreKa and the state computer bwUniCluster. At present there are a total of 10 Lustre file systems on these HPC systems, with a storage capacity of 9573 TiB, 61 servers and 2600 clients.

Further information:

  • User Guides of the various HPC systems.
  • Slides and presentations on production Lustre installations on the Lustre page.
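Much of Lustre's throughput comes from striping a file across several storage targets. As a rough sketch (directory names are invented, and stripe counts and sizes depend on the specific system; consult the user guide of your cluster), striping can be inspected and tuned with the `lfs` utility on a Lustre client:

```shell
# Show capacity and usage of the mounted Lustre file systems.
lfs df -h

# Stripe new files in this directory across 8 storage targets,
# with a stripe size of 4 MiB (useful for large sequential I/O).
lfs setstripe -c 8 -S 4M $HOME/lustre-workdir

# Inspect the striping of an existing file.
lfs getstripe $HOME/lustre-workdir/output.dat
```

Striping settings are inherited from the parent directory, so setting them once on a job's output directory is usually sufficient.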

On-demand file systems
Future applications will place higher demands on the storage subsystems of HPC systems. To meet these demands, the SCC provides on-demand file systems on current and future HPC systems: such a file system is created exclusively for a single HPC job and is available only on the allocated compute nodes for the duration of the job. More information can be found in the corresponding documentation of the HPC systems.
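In batch environments this is typically requested through the job scheduler. The following is a hypothetical SLURM job script sketch: the constraint name `BEEOND` and the mount point `/mnt/odfs/$SLURM_JOB_ID` are assumptions for illustration, not confirmed values; check the documentation of your HPC system for the actual names.

```shell
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --time=02:00:00
#SBATCH --constraint=BEEOND   # assumed flag: ask for an on-demand file system

# Assumed per-job mount point of the on-demand file system.
ODFS=/mnt/odfs/$SLURM_JOB_ID

# Stage input data into the fast, job-local file system.
cp -r "$HOME/input" "$ODFS/"

# Run the simulation with its working directory on the on-demand file system.
srun ./my_simulation --workdir="$ODFS"

# Copy results back before the job ends: the on-demand file system
# is destroyed together with the job.
cp -r "$ODFS/results" "$HOME/"
```

Because the file system exists only for the lifetime of the job, staging data in at the start and copying results out at the end is the user's responsibility.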

Mass storage for scientific data (LSDF Online Storage)
The service "LSDF Online Storage" provides KIT users with access to data storage designed especially for scientific measurement data and simulation results from data-intensive scientific disciplines. The LSDF Online Storage is operated by the Steinbuch Centre for Computing. Access is provided via standard protocols.

The backup and protection of the data is carried out according to the current state of the art. The service is not suitable for storing personal data.
More information: Mass storage for scientific data (LSDF Online Storage).

Archive storage (bwDataArchive)
The state service bwDataArchive offers a technical infrastructure for long-term archiving of scientific data.

bwDataArchive is available especially for members of universities and public research institutions in Baden-Württemberg. Data archiving is carried out at KIT and includes reliable storage of even large data sets for a period of ten years or more. The service enables a qualified implementation of the recommendations of the German Research Foundation (DFG) on good scientific practice (recommendation 7 on the securing and storage of research data).
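Long-term archiving benefits from data that can be verified after retrieval, even years later. As an illustration only (paths and file names are invented, and the actual upload mechanism is described in the bwDataArchive documentation), a result directory can be packed and fingerprinted before archiving:

```shell
set -e

# Invented sample data standing in for real simulation results.
mkdir -p /tmp/archdemo/data
echo "simulation result 42" > /tmp/archdemo/data/run1.txt

# Pack the directory into one archive file; a single large object
# transfers and verifies more reliably than many small files.
tar -czf /tmp/archdemo/run1.tar.gz -C /tmp/archdemo data

# Record a SHA-256 checksum so integrity can be checked after retrieval.
(cd /tmp/archdemo && sha256sum run1.tar.gz > run1.tar.gz.sha256)

# Verify locally before uploading to the archive.
(cd /tmp/archdemo && sha256sum -c run1.tar.gz.sha256)
```

Storing the checksum file alongside the archive lets anyone repeat the `sha256sum -c` verification after the data is retrieved from the archive.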

Online storage for sharing data (bwSync&Share)
The state service bwSync&Share is an online storage service for employees and students of universities and colleges in Baden-Württemberg. It has been operated at KIT since January 1, 2014 and enables users to synchronize or exchange their data between different computers, mobile devices, and users. The bwSync&Share portal can be reached at bwsyncandshare.kit.edu.