MPI

The Message Passing Interface (MPI) is a vendor-independent programming interface for developing parallel applications on parallel computers or on workstation clusters that are interconnected to form a parallel computer with distributed memory. It serves as a means of communication between independent processes, each with its own local memory. MPI is implemented by many parallel computer manufacturers, and several public-domain implementations also exist.
An example program in Fortran might look something like this:


      ...
      include 'mpif.h'
      integer ierr, myid, numprocs
      ...
      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      if ( myid .eq. 0 ) then
         call MPI_SEND(.....)
      else
         call MPI_RECV(.....)
      endif
      ...
      call MPI_FINALIZE( ierr )
      ...




This program is started with

mpirun -np x <executable>

and then runs as x processes on the machine on which it is launched; only the variable myid is used to distinguish whether a process is the master or a slave process.

At the SCC, MPI can be used on all parallel computers. The implementations OpenMPI and Intel MPI are available.


MPI on the HPC systems of the SCC

Various programming concepts are used to write parallel applications and are consequently supported on the HPC systems. These include concepts for programming distributed-memory computers as well as shared-memory computers. On distributed-memory computers, the "message passing" programming model is used most often, i.e. the programmer must insert calls to a communication library into the program to transfer data from one task to another. In recent years, the Message Passing Interface (MPI) has become the de facto standard. On the HPC systems, MPI is part of the parallel environment. You will find information on the following topics:


Compiling and linking MPI programs

There are special compiler scripts to compile and link MPI programs. These scripts start with the prefix mpi:

mpicc: compile and link C programs

mpiCC: compile and link C++ programs

mpif77 or mpif90: compile and link Fortran programs
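As an illustration, invoking these wrapper scripts might look like the following sketch; the source file names (hello.c, hello.cpp, hello.f90) are hypothetical placeholders:

```shell
# Compile and link an MPI C program with the mpicc wrapper script
mpicc -o hello hello.c

# The same pattern applies to C++ and Fortran sources
mpiCC  -o hello_cxx hello.cpp
mpif90 -o hello_f90 hello.f90
```

The wrapper scripts add the MPI include paths and libraries automatically, so no MPI-specific compiler or linker flags need to be given by hand.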


Further information about MPI can be found in the respective User Guide or in the manufacturer-specific information on the "Online Manuals" website.


Execution of parallel programs

Parallel programs can be started interactively or under the control of a batch system. When starting programs interactively, it is not possible to use a node other than the one on which you are logged in.

The syntax to start parallel applications is

mpirun [ mpirun_options ] program

or

mpirun [ mpirun_options ] -f appfile (when using OpenMPI)

or

mpirun [ mpirun_options ] exe1:exe2:... (when using Intel MPI)

both for interactive calls and for calls in shell scripts to run batch jobs. The mpirun_options are different for OpenMPI and Intel MPI.

To run a parallel application as a batch job, the shell script usually passed to the sbatch command must contain the mpirun command that starts the application.

Important for understanding: the -n # option is required when mpirun is called interactively, but it is ignored when mpirun is called in batch jobs (the number of processors used in batch jobs is controlled by an option of the sbatch command). There is no option to specify the number of nodes!
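To illustrate, a minimal batch script could look like the following sketch. The option values (task count, time limit, partition name) are placeholders, and the exact sbatch options depend on the local Slurm configuration:

```shell
#!/bin/bash
# Minimal batch job script (a sketch; option values are placeholders)
#SBATCH --ntasks=4          # number of MPI processes, set here rather than via mpirun
#SBATCH --time=00:10:00     # wall-clock time limit
#SBATCH --partition=dev     # hypothetical partition name

# In a batch job, mpirun takes the process count from the batch system,
# so no -n option is given here.
mpirun ./my_mpi_program
```

The script would then be submitted with sbatch, and the batch system decides where and with how many processes the application runs.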


Courses on MPI


Documentation on MPI


