GPC MPI Versions

From oldwiki.scinet.utoronto.ca
Revision as of 12:58, 24 September 2009 by Northrup (talk | contribs)

You can only use ONE MPI version at a time: most versions use the same names for mpirun and the compiler wrappers, so be careful which modules you load in your ~/.bashrc.

OpenMPI

To use OpenMPI compiled with the Intel compilers, load the modules

module load openmpi intel  

or for the gcc version use

module load openmpi/1.3.3-gcc-v4.4.0-ofed gcc 

The MPI library wrappers for compiling are mpicc/mpicxx/mpif90/mpif77.

OpenMPI has been built to support various communication methods and automatically uses the best method for how and where it is run. To specify the method explicitly, use the following --mca flags for ethernet

mpirun --mca btl self,sm,tcp -np 16 -hostfile $PBS_NODEFILE ./a.out

and the following for InfiniBand

mpirun --mca btl self,sm,openib -np 16 -hostfile $PBS_NODEFILE ./a.out

For more information on available flags, see the OpenMPI FAQ.

MVAPICH2

MVAPICH2 is an MPICH2 derivative primarily designed for MPI communication over InfiniBand. To use MVAPICH2 compiled with the Intel compilers for InfiniBand, load the modules

module load mvapich2 intel

or for the ethernet version, use

module load mvapich2/1.4rc1-3378_intel-v11.0-tcpip intel

The MPI library wrappers for compiling are mpicc/mpicxx/mpif90/mpif77.

MVAPICH2 requires a .mpd.conf file containing the variable "MPD_SECRETWORD=..." in your $HOME directory. To create this file (replacing ABC123 with a secret word of your own), use

echo "MPD_SECRETWORD=ABC123" > ~/.mpd.conf
chmod 600  ~/.mpd.conf
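Rather than typing a fixed secret word like ABC123, you can generate a random one; a minimal sketch, assuming openssl is available on the login node:

```shell
# Generate a random 32-character hex secret and write it to ~/.mpd.conf
secret=$(openssl rand -hex 16)
echo "MPD_SECRETWORD=$secret" > "$HOME/.mpd.conf"
# The file must not be readable by other users
chmod 600 "$HOME/.mpd.conf"
```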

The easiest way to run is to use mpirun_rsh as follows

mpirun_rsh -np 16 -hostfile $PBS_NODEFILE ./a.out
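In practice this command usually goes inside a PBS submission script; a minimal sketch (the resource request nodes=2:ppn=8, the job name, and the executable ./a.out are placeholders for your own job):

```
#!/bin/bash
#PBS -l nodes=2:ppn=8,walltime=1:00:00
#PBS -N mvapich2-job
module load mvapich2 intel
cd $PBS_O_WORKDIR
mpirun_rsh -np 16 -hostfile $PBS_NODEFILE ./a.out
```

Submit the script with qsub; $PBS_NODEFILE and $PBS_O_WORKDIR are set by the batch system when the job starts.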

IntelMPI

IntelMPI is also an MPICH2 derivative, customized by Intel. To use IntelMPI compiled with the Intel compilers, load the modules

module load intelmpi intel

The MPI library wrappers for compiling are mpicc/mpicxx/mpif90/mpif77.

IntelMPI also requires a .mpd.conf file which is described in the MVAPICH2 section.

IntelMPI, like OpenMPI, has been built to support various communication methods and automatically uses the best method for how and where it is run.

mpirun -r ssh -np 16 ./a.out

To specify the method explicitly, use the following flags for ethernet (tcp) and shared memory

mpirun -r ssh  -n 2  -env I_MPI_DEVICE ssm  ./a.out

or the following flags for InfiniBand (rdma, udapl) and shared memory

mpirun -r ssh  -n 2  -env I_MPI_DEVICE rdssm  ./a.out

For more information on these and other flags, see Intel's documentation page, especially the "Getting Started (Linux)" guide.