How is MPI implemented on the HPCVL machines?

Please note: The FAQ pages at the HPCVL website are continuously being revised. Some pages might pertain to an older configuration of the system. Please let us know if you encounter problems or inaccuracies, and we will correct the entries.

While MPI itself is a portable, platform-independent standard, much like a programming language, any actual implementation is necessarily platform-dependent, since it has to take into account the architecture of the machine or cluster in question.

HPCVL machines are mostly shared-memory machines that form a cluster. The Sun MPI implementation makes use of this structure by exploiting the rapid access to a common memory space for MPI communication. This is done by means of a so-called "shared-memory layer". As a result, communication between CPUs on the same shared-memory node is orders of magnitude faster and more reliable than communication between nodes. Our cluster is therefore configured to preferentially schedule the processes of a job within a single node.
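Under the Open MPI based ClusterTools, the transport used for communication can also be requested explicitly through MCA parameters when a job is launched. The following sketch assumes the Open MPI "mpirun" launcher; the program name is a placeholder:

```shell
# "sm" is Open MPI's shared-memory byte-transfer layer, used between
# ranks on the same node; "self" handles a rank sending to itself, and
# "tcp" covers communication between different nodes.
# ./my_mpi_prog is a hypothetical program name.
mpirun --mca btl self,sm,tcp -np 8 ./my_mpi_prog
```

Normally this is not needed, since Open MPI selects the shared-memory transport automatically for ranks that share a node.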

The MPI libraries, header files, etc. reside in the /opt/SUNWhpc directory subtree. It is usually not necessary to add these directories to your PATH variable, as this is done by default by the system. The /opt/SUNWhpc directory contains several versions (6, 7, and 8) of the Sun ClusterTools (CT) environment, which enables compiling and running multi-process programs.
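To check which ClusterTools versions are installed and which one your shell picks up by default, a quick session such as the following can help. This is a sketch; the exact subdirectory names under /opt/SUNWhpc vary between CT releases, and "-showme" is specific to the Open MPI compiler wrappers:

```shell
ls /opt/SUNWhpc      # lists the installed ClusterTools subtrees
which mpicc          # should resolve to the default CT version's wrapper
mpicc -showme        # Open MPI wrappers only: print the underlying
                     # compiler command, including header/library paths
```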

The current default version of ClusterTools is 8.1, which uses Open MPI as its MPI implementation. For compatibility, we keep older versions of ClusterTools, such as the Sun MPI based CT6; these will not be discontinued in the near future.

If possible, run MPI jobs using the default version of ClusterTools. This version is based on Open MPI, an open-source implementation of the MPI standard, and therefore offers a greater degree of compatibility with other platforms.
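A complete compile-and-run cycle with the default (Open MPI based) ClusterTools might look as follows. This is a minimal sketch: the file name, program contents, and process count are placeholders, and on the production cluster jobs would normally go through the scheduler rather than being launched interactively:

```shell
# Write a minimal MPI "hello world" program to compile.
cat > hello_mpi.c <<'EOF'
#include <stdio.h>
#include <mpi.h>
int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut down cleanly     */
    return 0;
}
EOF

mpicc hello_mpi.c -o hello_mpi   # wrapper supplies MPI headers and libraries
mpirun -np 4 ./hello_mpi         # launch four ranks
```

Because the wrapper compiler (mpicc) knows the include and library paths of the installation it belongs to, no explicit -I or -L flags pointing into /opt/SUNWhpc are needed.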