GPC MPI Versions

You can only use ONE MPI version at a time, as most versions use the same names for mpirun and the compiler wrappers, so be careful which modules you have loaded in your ~/.bashrc.

OpenMPI

To use OpenMPI compiled with the Intel compilers, load the modules

module load intel openmpi

or for the gcc version use

module load openmpi/1.3.3-gcc-v4.4.0-ofed gcc 

The MPI library wrappers for compiling are mpicc/mpicxx/mpif90/mpif77.
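
For example, to compile with the wrappers (the source and output file names here are just placeholders):

    mpicc -O2 hello_mpi.c -o hello_mpi
    mpif90 -O2 hello_mpi.f90 -o hello_mpi_f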

OpenMPI has been built to support various communication methods and automatically uses the best one depending on how and where it is run. To explicitly specify the method, you can use the following --mca flags for ethernet

mpirun --mca btl self,sm,tcp -np 16 -hostfile $PBS_NODEFILE ./a.out

and the following for infiniband

mpirun --mca btl self,sm,openib -np 16 -hostfile $PBS_NODEFILE ./a.out
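
In a batch job, the mpirun line simply goes into your submission script after the module load. A minimal sketch, assuming the GPC's 8-core nodes under Torque/Moab; the job name, walltime, and program name are placeholders:

    #!/bin/bash
    # Sketch: 16-process OpenMPI job on 2 nodes; the --mca flags above can be
    # added to the mpirun line to force a particular transport
    #PBS -l nodes=2:ppn=8,walltime=1:00:00
    #PBS -N ompi_test
    cd $PBS_O_WORKDIR
    module load intel openmpi
    mpirun -np 16 -hostfile $PBS_NODEFILE ./a.out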

For mixed OpenMP/MPI applications, set OMP_NUM_THREADS to the number of threads per process and add '--bynode' to the mpirun command, e.g.,

export OMP_NUM_THREADS=4
mpirun -np 6 --bynode -hostfile $PBS_NODEFILE ./a.out

would start 6 MPI processes spread round-robin across the nodes, each with 4 OpenMP threads. If your script requests 3 nodes, each node gets 2 MPI processes.
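
Put together in a submission script (again a sketch, assuming 8-core nodes so that 2 processes with 4 threads each fill a node; resource limits and program name are illustrative):

    #!/bin/bash
    # Sketch: hybrid OpenMP/MPI job, 3 nodes, 2 MPI processes per node, 4 threads each
    #PBS -l nodes=3:ppn=8,walltime=1:00:00
    cd $PBS_O_WORKDIR
    module load intel openmpi
    export OMP_NUM_THREADS=4
    mpirun -np 6 --bynode -hostfile $PBS_NODEFILE ./a.out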

For more information on available flags, see the OpenMPI FAQ (http://www.open-mpi.org/faq/).


IntelMPI

IntelMPI is also an MPICH2 derivative, customized by Intel. To use IntelMPI compiled with the Intel compilers, load the modules

module load intelmpi intel

The MPI library wrappers for compiling are mpicc/mpicxx/mpif90/mpif77.

IntelMPI requires a .mpd.conf file containing the variable "MPD_SECRETWORD=..." in your $HOME directory. To create this file, use

echo "MPD_SECRETWORD=ABC123" > ~/.mpd.conf
chmod 600  ~/.mpd.conf

IntelMPI, like OpenMPI, has been built to support various communication methods and automatically uses the best one depending on how and where it is run, so in most cases you can simply run

mpirun -r ssh -np 16 ./a.out

To explicitly specify the method, you can use the following flags for ethernet (tcp) and shared memory

mpirun -r ssh -np 2  -env I_MPI_DEVICE ssm  ./a.out

or the following flags for infiniband (RDMA/uDAPL) and shared memory

mpirun -r ssh -np 2  -env I_MPI_DEVICE rdssm  ./a.out

or the following flags to use Intel's "DET" over ethernet (EXPERIMENTAL) and shared memory

mpirun -r ssh -np 8 -env I_MPI_DEVICE rdssm:Det-eth0 -env I_MPI_USE_DYNAMIC_CONNECTIONS 0 ./a.out
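
In a submission script this looks much like the OpenMPI case, apart from the modules and mpirun options. A minimal sketch for an InfiniBand (rdssm) run; the resource request and program name are placeholders, and the .mpd.conf file described above must already exist:

    #!/bin/bash
    # Sketch: 16-process IntelMPI job on 2 nodes over InfiniBand
    #PBS -l nodes=2:ppn=8,walltime=1:00:00
    cd $PBS_O_WORKDIR
    module load intelmpi intel
    mpirun -r ssh -np 16 -env I_MPI_DEVICE rdssm ./a.out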

For hybrid OpenMP/MPI runs, set OMP_NUM_THREADS to the desired number of OpenMP threads per MPI process and specify the number of MPI processes per node on the mpirun command line with -ppn <num>, e.g.,

    export OMP_NUM_THREADS=4
    mpirun -r ssh -np 6 -ppn 2 -env I_HYBRID_DEVICE ssm ./a.out

would start a total of 6 MPI processes, each with 4 threads, with each node running 2 MPI processes. Your script should request 3 nodes in this case.
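
As a submission-script sketch (assuming 8-core nodes so that 2 processes with 4 threads each fill a node; resource limits and program name are illustrative):

    #!/bin/bash
    # Sketch: hybrid IntelMPI job, 3 nodes, 2 MPI processes per node, 4 threads each
    #PBS -l nodes=3:ppn=8,walltime=1:00:00
    cd $PBS_O_WORKDIR
    module load intelmpi intel
    export OMP_NUM_THREADS=4
    mpirun -r ssh -np 6 -ppn 2 -env I_HYBRID_DEVICE ssm ./a.out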

For more information on these and other flags, see Intel's Documentation page (http://software.intel.com/en-us/articles/intel-mpi-library-documentation/), especially the "Getting Started (Linux)" Guide.