GPC MPI Versions
You can only use ONE MPI version at a time, as most versions use the same names for mpirun and the compiler wrappers, so be careful which modules you load in your ~/.bashrc.
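A quick way to check and reset your environment before switching is the following sketch, assuming the standard module commands:
module list                      # show which modules are currently loaded
module purge                     # unload all currently loaded modules
module load intel openmpi        # load the combination you want, e.g. Intel-compiled OpenMPI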
OpenMPI
To use OpenMPI compiled with the Intel compilers, load the modules
module load intel openmpi
or for the gcc version use
module load openmpi/1.3.3-gcc-v4.4.0-ofed gcc
The MPI library wrappers for compiling are mpicc/mpicxx/mpif90/mpif77.
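For example, to compile a hypothetical source file (the file names hello.c/hello.cpp/hello.f90 are purely illustrative):
mpicc  -O2 -o hello hello.c      # C
mpicxx -O2 -o hello hello.cpp    # C++
mpif90 -O2 -o hello hello.f90    # Fortran 90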
OpenMPI has been built to support various communication methods and automatically uses the best method depending on how and where it is run. To specify the method explicitly, you can use the following --mca flags on ethernet
mpirun --mca btl self,sm,tcp -np 16 -hostfile $PBS_NODEFILE ./a.out
and the following for infiniband
mpirun --mca btl self,sm,openib -np 16 -hostfile $PBS_NODEFILE ./a.out
For mixed OpenMP/MPI applications, set OMP_NUM_THREADS to the number of threads per process and add '--bynode' to the mpirun command, e.g.,
export OMP_NUM_THREADS=4
mpirun -np 6 --bynode -hostfile $PBS_NODEFILE ./a.out
would start 6 MPI processes distributed across the nodes, each with 4 OpenMP threads. If your script requests 3 nodes, each node gets 2 MPI processes.
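As a sketch, a complete submission script for this 3-node example could look like the one below; the ppn count, walltime and job name are illustrative assumptions, not prescribed values:
#!/bin/bash
#PBS -l nodes=3:ppn=8,walltime=1:00:00
#PBS -N hybrid-openmpi
cd $PBS_O_WORKDIR
module load intel openmpi
export OMP_NUM_THREADS=4                                 # 4 OpenMP threads per MPI process
mpirun -np 6 --bynode -hostfile $PBS_NODEFILE ./a.out    # 6 MPI processes, spread 2 per node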
For more information on available flags, see the OpenMPI FAQ.
IntelMPI
IntelMPI is also an MPICH2 derivative, customized by Intel. To use IntelMPI compiled with the Intel compilers, load the modules
module load intelmpi intel
The MPI library wrappers for compiling are mpicc/mpicxx/mpif90/mpif77.
IntelMPI requires a .mpd.conf file containing the variable "MPD_SECRETWORD=..." in your $HOME directory (choose your own secret word). To create this file, use
echo "MPD_SECRETWORD=ABC123" > ~/.mpd.conf
chmod 600 ~/.mpd.conf
IntelMPI, like OpenMPI, has been built to support various communication methods and automatically uses the best method depending on how and where it is run, so it can usually be started without specifying a device:
mpirun -r ssh -np 16 ./a.out
To specify the method explicitly, use the following flags for ethernet (tcp) and shared memory
mpirun -r ssh -np 2 -env I_MPI_DEVICE ssm ./a.out
or the following flags for infiniband (rdma udapl) and shared memory
mpirun -r ssh -np 2 -env I_MPI_DEVICE rdssm ./a.out
or the following flags using Intel's "DET" over ethernet (EXPERIMENTAL!!!) and shared memory
mpirun -r ssh -np 8 -env I_MPI_DEVICE rdssm:Det-eth0 -env I_MPI_USE_DYNAMIC_CONNECTIONS 0 ./a.out
For hybrid OpenMP/MPI runs, set OMP_NUM_THREADS to the desired number of OpenMP threads per MPI process and specify the number of MPI processes per node on the mpirun command line with -ppn <num>. E.g.
export OMP_NUM_THREADS=4
mpirun -r ssh -ppn 2 -np 6 -env I_MPI_DEVICE ssm ./a.out
would start a total of 6 MPI processes, each with 4 threads, with each node running 2 MPI processes. Your script should request 3 nodes in this case.
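A corresponding submission script sketch (ppn, walltime and job name are again illustrative assumptions):
#!/bin/bash
#PBS -l nodes=3:ppn=8,walltime=1:00:00
#PBS -N hybrid-intelmpi
cd $PBS_O_WORKDIR
module load intelmpi intel
export OMP_NUM_THREADS=4                                   # 4 OpenMP threads per MPI process
mpirun -r ssh -ppn 2 -np 6 -env I_MPI_DEVICE ssm ./a.out   # 2 MPI processes per node, 6 in total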
For more information on these and other flags, see Intel's Documentation page, especially the "Getting Started (Linux)" Guide.
EXPERIMENTAL: MPICH2 with Hydra (Ethernet only)
MPICH2 1.3a1 is a preview release of MPICH2 that uses the Hydra process manager. Note that this release is not recommended for production use at this time. To use MPICH2 1.3a1 compiled with the Intel compilers, load the modules
module load intel use.experimental mpich2/mpich2-1.3a1-intel
To run MPICH2 applications:
mpiexec -rmk pbs -n 16 ./a.out
Mixed OpenMP/MPI runs with MPICH2 are a bit clumsy. You have to set OMP_NUM_THREADS to the desired number of OpenMP threads per MPI process and specify the number of MPI processes per node in a 'machine file'. The machine file should contain the names of the nodes, each followed by a colon and the number of MPI processes for that node. Since the nodes on which you run are not known beforehand, you have to generate this file at run time. One way is as follows:
uniq $PBS_NODEFILE | awk '{print $1":2"}' > $PBS_JOBID.mf
export OMP_NUM_THREADS=4
mpiexec -machinefile $PBS_JOBID.mf -rmk pbs -n 6 ./a.out
This launches 2 MPI processes per node, each with 4 threads, for a total of 6 MPI processes. The job script should therefore request 3 nodes in this case.
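Put together in a submission script, this could look as follows (ppn, walltime and job name are illustrative assumptions):
#!/bin/bash
#PBS -l nodes=3:ppn=8,walltime=1:00:00
#PBS -N hybrid-mpich2
cd $PBS_O_WORKDIR
module load intel use.experimental mpich2/mpich2-1.3a1-intel
uniq $PBS_NODEFILE | awk '{print $1":2"}' > $PBS_JOBID.mf   # 2 MPI processes per node
export OMP_NUM_THREADS=4                                    # 4 OpenMP threads per MPI process
mpiexec -machinefile $PBS_JOBID.mf -rmk pbs -n 6 ./a.out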
NOTE: This version of MPI is for ethernet usage only.
For more information on these and other flags, see the MPICH2 User's Guide.