
How is MPI used?

MPI is a set of subroutines that are used to communicate explicitly between processes. As such, MPI programs are truly "multi-processing". Parallelisation cannot be done automatically or semi-automatically as in "multi-threaded" programs; instead, function and subroutine calls have to be inserted into the code and form an integral part of the program. Often it is also beneficial to alter the algorithm itself with respect to the serial version.
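
As a minimal sketch of what such explicit communication looks like, the C program below passes a single integer from process 0 to process 1. The program is illustrative only: it assumes an MPI implementation such as Open MPI or MPICH is installed, and it must be run with at least two processes.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;

        MPI_Init(&argc, &argv);               /* start the MPI runtime    */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* my process number (rank) */

        if (rank == 0) {
            value = 42;
            /* explicit communication: send one integer to process 1 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* the matching receive from process 0 */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Process 1 received %d from process 0\n", value);
        }

        MPI_Finalize();                       /* shut down the MPI runtime */
        return 0;
    }

Note how the data exchange is spelled out call by call: nothing is communicated unless the programmer requests it.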

The need to include the parallelism explicitly in the program is both a curse and a blessing: while it means more work and requires more planning than multi-threading, it often leads to more reliable and scalable code, since its behaviour is entirely in the hands of the programmer. Well-written MPI codes can be made to scale to thousands of CPUs.

To create an MPI program, you need to:

  1. Include appropriate header files for the definitions of variables and data structures. These are called mpif.h, mpi.h, and mpi++.h for Fortran, C, and C++, respectively.
  2. Program the communication between processes in the form of calls to the MPI communication routines. These are commonly of the form MPI_* for Fortran and C, and MPI::* for C++.
  3. Link against the proper libraries at the linking stage of program compilation. This is usually done with the -lmpi option of the compiler/linker (an example is given after this list).
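
For instance, the example program above might be compiled and linked in one of the following ways; the exact compiler and wrapper names vary between systems and MPI implementations, and hello.c and hello are placeholder names here:

    cc -o hello hello.c -lmpi     (linking the MPI library directly)
    mpicc -o hello hello.c        (using a compiler wrapper, where one is provided)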

MPI programs also usually need a special runtime environment to execute properly. This is commonly supplied by the machine vendor and is machine-specific.
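
In practice this means starting the program through a launcher such as mpirun or mpiexec (the latter is the name recommended by the MPI-2 standard), specifying how many processes to start. For the two-process sketch above, this might look like:

    mpiexec -n 2 ./hello

The launcher starts the requested number of copies of the program and sets up the communication channels between them.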