bwGRiD

As part of the BMBF-funded D-Grid initiative, the Scientific Computing Center (SCC) at the Karlsruhe Institute of Technology (KIT) offered virtual organizations the opportunity to use computing time on various cluster systems and storage space. The SCC operated a cluster and the central storage system specifically for bwGRiD.

The compute clusters in bwGRiD comprised a total of 101 IBM Blade Centers, located at seven of the nine universities; 10 of them stood in Karlsruhe. The clusters at the other locations (Freiburg, Mannheim/Heidelberg, Stuttgart, Tübingen, Ulm/Constance) could be used in the same way. Each Blade Center held 14 HS21 XM blades, each of which was equipped with:

  • 2 quad-core Intel Xeon processors at 2.8 GHz,

  • 16 GB main memory,

  • Gigabit Ethernet,

  • an InfiniBand adapter.

The Esslingen site contributed a further 180 compute nodes with 8 cores each. The InfiniBand networking at the individual sites was built with Voltaire Grid Director ISR 2012 switch fabrics, running InfiniBand at 4x DDR.
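
Taken together, these figures give a rough picture of the aggregate capacity of the blade installation. The short Python sketch below simply multiplies out the numbers quoted above; it is an illustration of the arithmetic, not an official inventory.

    blade_centers   = 101      # IBM Blade Centers across the participating sites
    blades_per_bc   = 14       # HS21 XM blades per Blade Center
    cores_per_blade = 2 * 4    # two quad-core Intel Xeon processors
    ram_per_blade   = 16       # GB of main memory per blade

    blades = blade_centers * blades_per_bc
    print(f"Blades:            {blades}")                    # 1414
    print(f"Cores (blades):    {blades * cores_per_blade}")  # 11312
    print(f"RAM (blades):      {blades * ram_per_blade} GB") # 22624 GB

    # Esslingen contributed a further 180 nodes with 8 cores each
    print(f"Cores (Esslingen): {180 * 8}")                   # 1440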

The following directories were available for data storage:

  • The home directory was visible on all nodes and was used for permanent data storage. The data was backed up to tape using Tivoli Storage Manager (TSM).

  • Scratch directories could be created with the ws_allocate mechanism, as at the other sites, and were visible on all nodes (see the sketch after this list).

  • Each computing node had its own /tmp directory on the internal hard disk with 120 GB of storage space, which could be used for temporary data during the job runtime.
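
As a rough illustration of the ws_allocate mechanism mentioned above, the following Python sketch wraps the command-line call. The workspace name and the 30-day lifetime are illustrative assumptions; the exact options and permitted lifetimes depended on site policy.

    import subprocess

    def allocate_workspace(name: str, days: int) -> str:
        """Call ws_allocate <name> <days> and return the path it prints."""
        result = subprocess.run(
            ["ws_allocate", name, str(days)],
            check=True, capture_output=True, text=True,
        )
        # ws_allocate reports the path of the new scratch directory on stdout
        return result.stdout.strip()

    if __name__ == "__main__":
        scratch = allocate_workspace("myrun", 30)   # hypothetical name and lifetime
        print("Scratch directory:", scratch)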

The home and scratch directories were located in a Lustre file system supplied by HP. The storage capacity was 32 TB, with a maximum throughput of 1.5 GB/s.

Scientific Linux 5.5 was used as the operating system. In addition to the basic Linux distribution, a large number of other software packages were pre-installed on the systems.

Access to the Karlsruhe cluster was possible with the following middleware systems:

  • Globus Toolkit 4.0.x: bwgrid-cluster.scc.kit.edu

  • Unicore 5: bwgrid-unicore.scc.kit.edu (VSite: KIT-bwGRiD-Cluster)

  • GSI-SSH interactive node: bwgrid-cluster.scc.kit.edu (port 2222); see the login sketch below
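
For interactive work, the GSI-SSH node was the most direct entry point. The sketch below shows the corresponding login call via the gsissh client from GSI-OpenSSH; it assumes a valid grid proxy (e.g. created with grid-proxy-init) already exists in the user's environment.

    import subprocess

    HOST = "bwgrid-cluster.scc.kit.edu"
    PORT = 2222  # GSI-SSH port of the interactive node

    # Opens an interactive session; authentication happens via the grid proxy.
    subprocess.run(["gsissh", "-p", str(PORT), HOST], check=True)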


Accessing the resources via grid middleware required a grid certificate and membership in the virtual organization (VO) bwGRiD or another D-Grid VO.


In addition to the compute resources, bwGRiD provided a central storage system, which likewise used Lustre. The hardware consisted of the following components:

  • Hard disks: 768 × 1 TB SATA and 48 × 146 GB SAS

  • Storage systems: 18 HP MSA2212fc and 50 MSA2000 JBODs

  • Storage networks: 8 × 16-port Fibre Channel switches

  • Server systems: 12 HP DL380G5 with Fibre Channel and InfiniBand adapters

  • Communication network: 1 × 24-port InfiniBand DDR switch

  • Middleware/client systems: 6 HP DL380G5 with InfiniBand and 10 GbE adapters

This resource was available exclusively to users of the VO bwgrid. The storage was distributed across two file systems (see the sketch after this list):

  • Backup (/bwfs/backup): 128 TB (with backup)

  • Work (/bwfs/work): 256 TB (without backup)
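
As a quick consistency check, the raw disk capacity from the hardware list above can be compared with the usable capacity of the two file systems; the small sketch below carries out the arithmetic. Attributing the difference to RAID redundancy and file system overhead is an assumption, not something stated in the text.

    sata_raw_tb = 768 * 1.0     # 768 disks x 1 TB SATA
    sas_raw_tb  = 48 * 0.146    # 48 disks x 146 GB SAS, converted to TB
    raw_tb      = sata_raw_tb + sas_raw_tb

    usable_tb = 128 + 256       # /bwfs/backup + /bwfs/work

    print(f"Raw capacity:    {raw_tb:.0f} TB")   # ~775 TB
    print(f"Usable capacity: {usable_tb} TB")    # 384 TB
    # The gap presumably reflects RAID redundancy and file system overhead
    # (an assumption; the exact RAID layout is not given above).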


Access was possible with the following middleware systems:

  • GridFTP: bwgrid-se.scc.kit.edu (see the transfer sketch below)

  • GSI-SSH: bwgrid-se.scc.kit.edu
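
A typical use of the GridFTP endpoint was staging data to and from the central storage. The sketch below invokes the globus-url-copy client from the Globus Toolkit for a single upload; a valid grid proxy is assumed, and the local file name and the remote target path under /bwfs/work are purely illustrative.

    import subprocess

    LOCAL_URL  = "file:///home/user/results.tar"   # hypothetical local file
    REMOTE_URL = "gsiftp://bwgrid-se.scc.kit.edu/bwfs/work/user/results.tar"

    # Single client-to-server transfer; globus-url-copy authenticates via the grid proxy.
    subprocess.run(["globus-url-copy", LOCAL_URL, REMOTE_URL], check=True)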