GPC Quickstart


General Purpose Cluster (GPC)
Installed: June 2009
Operating System: Linux
Interconnect: 1/4 on Infiniband, rest on GigE
RAM/Node: 16 GB
Cores/Node: 8
Login/Devel Node: gpc-login1 (142.150.188.51)
Vendor Compilers: icc (C), icpc (C++), ifort (Fortran)
Queue Submission: LoadLeveler

The General Purpose Cluster is an extremely large cluster (ranked Nth in the world, and fastest in Canada) and is where most simulations are done at SciNet. It is an IBM iDataPlex cluster based on Intel's Nehalem architecture (one of the first in the world to make use of the new chips). The GPC will consist of 3,780 nodes with a total of 30,240 2.5GHz cores, with 16GB RAM per node (2GB per core). One quarter of the cluster will be interconnected with non-blocking 4x-DDR Infiniband while the rest of the nodes are connected with gigabit ethernet.

Log In

The login node for the GPC cluster is gpc-login1.
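For example, a typical login over SSH might look like the following (USERNAME is a placeholder for your SciNet account name; the -Y flag is only needed if you want X11 forwarding):

# Log in to the GPC login/development node listed in the table above
ssh -Y USERNAME@142.150.188.51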

Compile/Devel Nodes

Compilers

The Intel compilers are icc/icpc/ifort for C/C++/Fortran. For MPI jobs, the scripts mpicc/mpicpc/mpifort are wrappers around these compilers which ensure that the MPI header files and libraries are correctly included and linked.
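For illustration (the source file names below are placeholders), a serial build and an MPI build might look like:

# Serial builds with the Intel compilers
icc   -O3 -o hello_c hello.c
ifort -O3 -o hello_f hello.f90

# MPI build via the wrapper, which adds the MPI include and link flags
mpicc -O3 -o hello_mpi hello_mpi.c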

Submission Script

#
# LoadLeveler submission script for SciNet GPC
#
# don't know what goes here yet

# Submit the job
#
#@ queue
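Until the site-specific script is documented, the following is a minimal generic LoadLeveler sketch. The keywords are standard LoadLeveler directives, but the job name, node counts, and time limit shown are placeholder assumptions, not SciNet settings.

#!/bin/bash
#
# Generic LoadLeveler sketch (placeholder values, not SciNet settings)
#
#@ job_name         = test_job
#@ job_type         = parallel
#@ output           = $(job_name).$(jobid).out
#@ error            = $(job_name).$(jobid).err
#@ node             = 2
#@ tasks_per_node   = 8
#@ wall_clock_limit = 1:00:00
#@ environment      = COPY_ALL
#
# Submit the job
#
#@ queue

# Shell commands after the queue directive run as the job
./a.out

A file of this form would normally be submitted with llsubmit and monitored with llq.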

MPI over Infiniband

To use the Infiniband interconnect for MPI communications, the MVAPICH2 implementation has been installed and tested for both the Intel V11 and GCC v4.1 compilers.

Currently the only way to compile, link, and test an MPI code using MVAPICH2 is from an interactive queue session using the OS image "centos53develibA", requested as follows from a GPC login node.

qsub -l nodes=2:ib:ppn=8,walltime=12:00:00,os=centos53develibA -I

Once in the interactive session you will need to source one of the following to set up the appropriate environment variables, depending on whether you want to compile with the Intel or GCC compilers.

INTEL

source /scinet/gpc/compilers/intel/Compiler/11.0/081/bin/iccvars.sh  intel64 &> /dev/null
source /scinet/gpc/compilers/intel/Compiler/11.0/081/bin/ifortvars.sh intel64 &> /dev/null 
source /scinet/gpc/mpi_ib/mvapich2_icc/bin/mvapichvars.sh 

GCC

source /scinet/gpc/mpi_ib/mvapich2_gcc/bin/mvapichvars.sh

MVAPICH2 uses the wrappers mpicc/mpicxx/mpif90/mpif77 for the compilers.
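For example (the source file names here are placeholders), a C and a Fortran 90 MPI code could be compiled against MVAPICH2 with:

# Compile MPI codes with the MVAPICH2 wrappers
mpicc  -O3 -o mpi_code   mpi_code.c
mpif90 -O3 -o mpi_code_f mpi_code.f90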

Once you have compiled your MPI code and would like to test it, use the following command, where $PROCS is the number of processors to run on and a.out is your executable.

mpirun_rsh -np $PROCS -hostfile $PBS_NODEFILE ./a.out

To run your MPI-Infiniband job in the non-interactive queue, you can use a queue submission script such as the following:

#!/bin/bash
#PBS -l nodes=2:ib:ppn=8,walltime=1:00:00,os=centos53computeibA
#PBS -N testib

# INTEL & MVAPICH2 ENVIRONMENT VARIABLES
source /scinet/gpc/compilers/intel/Compiler/11.0/081/bin/iccvars.sh  intel64 &> /dev/null
source /scinet/gpc/compilers/intel/Compiler/11.0/081/bin/ifortvars.sh intel64 &> /dev/null 
source /scinet/gpc/mpi_ib/mvapich2_icc/bin/mvapichvars.sh

# GO TO DIRECTORY SUBMITTED FROM
cd $PBS_O_WORKDIR

# MPIRUN COMMAND 
mpirun_rsh -np 16 -hostfile $PBS_NODEFILE ./a.out


Performance Tools