GPC Quickstart

Revision as of 11:34, 18 June 2009

General Purpose Cluster (GPC)
Installed: June 2009
Operating System: Linux
Interconnect: 1/4 on Infiniband, rest on GigE
RAM/Node: 16 GB
Cores/Node: 8
Login/Devel Node: gpc-login1 (142.150.188.51)
Vendor Compilers: icc (C), ifort (Fortran), icpc (C++)
Queue Submission: LoadLeveler

The General Purpose Cluster is an extremely large cluster (ranked Nth in the world, and fastest in Canada) and is where most simulations are done at SciNet. It is an IBM iDataPlex cluster based on Intel's Nehalem architecture (one of the first in the world to make use of the new chips). The GPC will consist of 3,780 nodes with a total of 30,240 2.5GHz cores, with 16GB RAM per node (2GB per core). One quarter of the cluster will be interconnected with non-blocking 4x-DDR Infiniband while the rest of the nodes are connected with gigabit ethernet.

Log In

The login node for the GPC cluster is gpc-login1.

Compile/Devel Nodes

From a login node you can ssh to gpc-f101n001 and gpc-f101n002.
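A typical way to reach a development node, sketched below ("user" is a placeholder for your own SciNet account name):

```shell
# From your own machine, log in to the GPC login node:
ssh user@gpc-login1
# ...then hop from the login node to a development node:
ssh gpc-f101n001
```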

Compilers

The Intel compilers are icc/icpc/ifort for C/C++/Fortran. For MPI jobs, the scripts mpicc/mpicpc/mpifort are wrappers around these compilers which ensure the MPI header files and libraries are correctly included and linked.
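As a sketch (source file names and optimization flags here are placeholders, not prescribed by SciNet), a serial and an MPI compile might look like:

```shell
# Serial C code with the Intel compiler:
icc -O2 hello.c -o hello
# MPI C code via the wrapper, which adds the MPI headers and libraries:
mpicc -O2 hello_mpi.c -o hello_mpi
```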

Submission Script

#!/bin/bash
#
# MOAB/Torque submission script for SciNet GPC
#
#PBS -l nodes=2:ppn=8,walltime=1:00:00,os=centos53computeA
#PBS -N test

# SOURCE YOUR ENVIRONMENT VARIABLES
source /scratch/user/.bashrc

# GO TO DIRECTORY SUBMITTED FROM
cd $PBS_O_WORKDIR

# MPIRUN COMMAND 
mpirun -np 16 -hostfile $PBS_NODEFILE ./a.out

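Assuming the script above is saved as, for example, myjob.sh, it would be submitted from a login or development node with:

```shell
# Submit the job; qsub prints the job ID on success
qsub myjob.sh
# Check the status of your queued and running jobs
qstat -u $USER
```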

MPI over Infiniband

To use the Infiniband interconnect for MPI communications, the MVAPICH2 implementation has been installed and tested for both the Intel V11 and GCC v4.1 compilers.

You will need to source one of the following to set up the appropriate environment variables, depending on whether you want to compile with the Intel or GCC compilers.

INTEL

source /scinet/gpc/compilers/intel/Compiler/11.0/081/bin/iccvars.sh  intel64 &> /dev/null
source /scinet/gpc/compilers/intel/Compiler/11.0/081/bin/ifortvars.sh intel64 &> /dev/null 
source /scinet/gpc/mpi_ib/mvapich2_icc/bin/mvapichvars.sh 

GCC

source /scinet/gpc/mpi_ib/mvapich2_gcc/bin/mvapichvars.sh

MVAPICH2 uses the wrappers mpicc/mpicxx/mpif90/mpif77 for the compilers.

Currently you can compile and link your MPI code on the development nodes gpc-f101n001 and gpc-f101n002; however, you will not be able to test interactively, as these nodes are not connected with Infiniband. Alternatively, you can compile, link, and test an MPI code in an interactive queue session, using the OS image "centos53develibA" as follows.

qsub -l nodes=2:ib:ppn=8,walltime=12:00:00,os=centos53develibA -I

Once you have compiled your MPI code and would like to test it, use the following command, with $PROCS being the number of MPI processes to run and a.out being your executable.

mpirun_rsh -np $PROCS -hostfile $PBS_NODEFILE ./a.out

To run your MPI-Infiniband job in a non-interactive queue you can use a submission script as follows, remembering to source the appropriate environment variables.

#!/bin/bash
#PBS -l nodes=2:ib:ppn=8,walltime=1:00:00,os=centos53computeibA
#PBS -N testib

# INTEL & MVAPICH2 ENVIRONMENT VARIABLES
source /scinet/gpc/compilers/intel/Compiler/11.0/081/bin/iccvars.sh intel64 &> /dev/null
source /scinet/gpc/compilers/intel/Compiler/11.0/081/bin/ifortvars.sh intel64 &> /dev/null 
source /scinet/gpc/mpi_ib/mvapich2_icc/bin/mvapichvars.sh

# GO TO DIRECTORY SUBMITTED FROM
cd $PBS_O_WORKDIR

# MPIRUN COMMAND 
mpirun_rsh -np 16 -hostfile $PBS_NODEFILE ./a.out


Performance Tools