Please note: The FAQ pages at the HPCVL website are continuously being revised. Some pages might pertain to an older configuration of the system. Please let us know if you encounter problems or inaccuracies, and we will correct the entries.
Sometimes it is necessary to rewrite code in a parallel fashion, so that it can be executed on several separate processors, or indeed separate machines. For this, the processes must be able to communicate with each other, which is usually done by some form of message passing. A platform-independent standard for this is the MPI (Message Passing Interface) standard, a set of almost 300 routines available in Fortran, C, and C++. Using these routines requires some rethinking of the code structure, but is reasonably simple and effective in many cases.
MPI works best if your code can keep many processors busy independently, with none sitting idle. It is also advantageous if relatively little communication is needed between the processes. Examples are numerical integration (where independent evaluations of the integrand can be done separately), Monte Carlo methods, and finite-difference and finite-element methods (if the problem can be divided into blocks of roughly equal size with minimal communication). MPI requires some serious re-coding in certain cases, but with a relatively small number of routines, excellent scaling can be achieved.