GPC MPI Versions
You can only use ONE MPI version at a time, as most versions use the same names for mpirun and the compiler wrappers, so be careful which modules you have loaded in your ~/.bashrc.
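To see what is currently loaded and to switch cleanly from one MPI to another, something along the following lines should work (the module names here are simply those used elsewhere on this page):
module list
module unload openmpi
module load intelmpi intel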
OpenMPI
To use OpenMPI compiled with the Intel compilers, load the modules
module load openmpi intel
or for the gcc version use
module load openmpi/1.3.3-gcc-v4.4.0-ofed gcc
The MPI library wrappers for compiling are mpicc/mpicxx/mpif90/mpif77.
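As an illustration, a C or Fortran 90 source file could be built with these wrappers roughly as follows (the file names are placeholders):
mpicc -O2 -o my_mpi_prog my_mpi_prog.c
mpif90 -O2 -o my_mpi_prog my_mpi_prog.f90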
OpenMPI has been built to support various communication methods and automatically uses the best one depending on how and where it is run. To explicitly specify the method, you can use the following --mca flags on Ethernet
mpirun --mca btl self,sm,tcp -np 16 -hostfile $PBS_NODEFILE ./a.out
and the following for InfiniBand
mpirun --mca btl self,sm,openib -np 16 -hostfile $PBS_NODEFILE ./a.out
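Either of these commands would normally be issued from inside a batch job; a minimal submission script might look roughly like the following sketch (the resource request is illustrative, assuming 8 cores per GPC node):
#!/bin/bash
#PBS -l nodes=2:ppn=8,walltime=1:00:00
cd $PBS_O_WORKDIR
module load openmpi intel
mpirun -np 16 -hostfile $PBS_NODEFILE ./a.out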
For more information on available flags, see the OpenMPI FAQ.
IntelMPI
IntelMPI is also an MPICH2 derivative, customized by Intel. To use IntelMPI compiled with the Intel compilers, load the modules
module load intelmpi intel
The MPI library wrappers for compiling are mpicc/mpicxx/mpif90/mpif77.
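Since the IntelMPI and OpenMPI wrappers share these names, it is worth confirming which set you are actually picking up before compiling, for example with
which mpicc
mpicc -show
(-show prints the underlying compiler command for MPICH2-style wrappers such as IntelMPI's; OpenMPI's equivalent option is --showme.)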
IntelMPI also requires a ~/.mpd.conf file, which is described in the MVAPICH2 section.
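If you have not already created one for MVAPICH2, a minimal ~/.mpd.conf can be set up roughly as follows (the secret word is an arbitrary placeholder of your choosing; see Intel's Getting Started guide for the details):
echo "secretword=<your-secret-word>" > ~/.mpd.conf
chmod 600 ~/.mpd.conf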
IntelMPI, like OpenMPI, has been built to support various communication methods and automatically uses the best one depending on how and where it is run, for example:
mpirun -r ssh -np 16 ./a.out
To explicitly specify the method, you can use the following flags for Ethernet (tcp) and shared memory
mpirun -r ssh -np 2 -env I_MPI_DEVICE ssm ./a.out
or the following flags for InfiniBand (RDMA uDAPL) and shared memory
mpirun -r ssh -np 2 -env I_MPI_DEVICE rdssm ./a.out
or the following flags to use Intel's "DET" over Ethernet (EXPERIMENTAL) and shared memory
mpirun -r ssh -np 8 -env I_MPI_DEVICE rdssm:Det-eth0 -env I_MPI_USE_DYNAMIC_CONNECTIONS 0 ./a.out
For more information on these and other flags, see Intel's Documentation page, especially the "Getting Started (Linux)" Guide.