
Parallelization

Several different programming models can be used to create parallel programs.

Programs for parallel computers with distributed memory are nowadays usually parallelized with MPI (Message Passing Interface). MPI is standardized and supports explicit communication between independent processes, each typically running on a different processor with its own local memory. All manufacturers of distributed-memory parallel computers provide efficient implementations of MPI. However, the use of MPI is not limited to distributed-memory systems: most vendors (IBM, HP, Sun, Cray, NEC, ...) also offer optimized MPI implementations on their shared-memory machines. At SCC, MPI is supported on all parallel computers.
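As an illustration, the following minimal C sketch shows this kind of explicit message passing between two processes: process 0 sends an integer that is received into the separate local memory of process 1. The compile and launch commands named in the comment are typical examples and depend on the MPI implementation installed on the machine.

/* Minimal sketch of explicit message passing with MPI.
   Compile with an MPI wrapper compiler, e.g. "mpicc hello.c",
   and launch two processes, e.g. "mpirun -np 2 ./a.out"
   (exact commands depend on the installed MPI implementation). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;   /* this data lives in rank 0's local memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* the message is copied into rank 1's own local memory */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}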
OpenMP is a portable, directive-based parallelization method for shared-memory systems. The OpenMP language extension is available for the Fortran, C, and C++ programming languages and is today the de facto standard in the scientific community for programming shared-memory systems. OpenMP is likewise supported at SCC on all parallel computers.
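The following minimal C sketch illustrates the directive-based approach: a single pragma asks the compiler to distribute the loop iterations across threads that all access the same shared array. The compiler flag named in the comment is only an example; the exact flag varies by compiler.

/* Minimal sketch of directive-based shared-memory parallelization
   with OpenMP. Compile with the compiler's OpenMP flag, e.g.
   "gcc -fopenmp" (the flag name depends on the compiler). */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    double a[1000];

    /* the directive instructs the compiler to split the loop
       iterations among the threads of a parallel team */
    #pragma omp parallel for
    for (int i = 0; i < 1000; i++)
        a[i] = 2.0 * i;

    printf("computed %d elements with up to %d threads\n",
           1000, omp_get_max_threads());
    return 0;
}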

Parallel processing relies on communication and synchronization between many individual processes and thus introduces new sources of errors and performance bottlenecks. Various tools for the development and optimization of parallel programs are available; they are listed on the software development web page.
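As an illustration of such a new error source, the following minimal OpenMP sketch shows a data race, one of the defects such tools help to detect. Without the reduction clause, all threads would update the shared variable sum concurrently and the result would be unpredictable; the numbers used are illustrative only.

/* Minimal sketch of a typical parallel error source: a data race.
   If the reduction clause below is omitted, all threads update
   "sum" concurrently and the result varies from run to run; the
   clause gives each thread a private partial sum and combines
   them safely at the end of the loop. */
#include <stdio.h>

int main(void)
{
    long long sum = 0;

    #pragma omp parallel for reduction(+:sum)
    for (long long i = 1; i <= 1000000; i++)
        sum += i;

    /* expected result: 1000000 * 1000001 / 2 = 500000500000 */
    printf("sum = %lld\n", sum);
    return 0;
}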