Please note: The FAQ pages at the HPCVL website are continuously being revised. Some pages might pertain to an older configuration of the system. Please let us know if you encounter problems or inaccuracies, and we will correct the entries.
To use MPI on our clusters, you will have to do the following things:
Include the MPI header in your source files:

#include <mpi.h>

This is needed for the definitions of the variables and constants used by the MPI system.
Compile and link with the flags

-I/opt/SUNWhpc/include -L/opt/SUNWhpc/lib -R/opt/SUNWhpc/lib -lmpi

These tell the compiler, linker, and runtime environment where to look for include files, static libraries, and runtime dynamic libraries, respectively. The -lmpi flag links in the MPI library.
Alternatively, use the tmf90, tmcc, or tmCC wrapper commands for Fortran, C, and C++, respectively, instead of the standard compilers/linkers. These wrappers supply the correct flags automatically, including -lmpi.
Run the program with

mpirun [options]

where the options specify the parameters of the run.
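Putting these pieces together, a minimal MPI program in C might look like the following sketch. The file name test_par.c and the printed message are illustrative only; the MPI calls themselves (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Finalize) are standard.

```c
/* test_par.c - minimal illustrative MPI program */
#include <stdio.h>
#include <mpi.h>   /* MPI variables, constants, and function prototypes */

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI environment      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* number of this process         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes      */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI environment  */
    return 0;
}
```

On our cluster this could be compiled with, e.g., tmcc -o test_par test_par.c (or with cc and the -I/-L/-R/-lmpi flags listed above) and then launched with mpirun as described below.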
The mpirun command is part of the ClusterTools programming environment; it is needed to run MPI programs and to allocate the separate processes across the multi-processor system. The setup for ClusterTools is part of the default environment on our cluster. The /opt/SUNWhpc/bin directory must be in your PATH (which it is for the default environment).
mpirun lets you specify the number of processes, e.g.
mpirun -np 4 test_par
runs the MPI program test_par on 4 processes. There are myriad other options for this command, many of which concern details of process allocation that are handled automatically by the system on HPCVL clusters and therefore need not concern the user.
For help on ClusterTools, consult Sun's Documentation Site and search for HPC Cluster Tools User's Guide.