GPC Quickstart
| General Purpose Cluster (GPC) | |
|---|---|
| Installed | June 2009 |
| Operating System | Linux |
| Interconnect | 1/4 on Infiniband, rest on GigE |
| RAM/Node | 16 GB |
| Cores/Node | 8 |
| Login/Devel Node | gpc-login1 (142.150.188.51) |
| Vendor Compilers | icc (C), icpc (C++), ifort (Fortran) |
| Queue Submission | Moab/Torque |
The General Purpose Cluster is an extremely large cluster (ranked 16th in the world at its inception, and the fastest in Canada) and is where most simulations at SciNet are to be done. It is an IBM iDataPlex cluster based on Intel's Nehalem architecture (one of the first in the world to make use of the new chips). The GPC consists of 3,780 nodes with a total of 30,240 2.5GHz cores and 16GB of RAM per node (2GB per core). One quarter of the cluster is interconnected with non-blocking 4x-DDR Infiniband, while the rest of the nodes are connected with gigabit ethernet.
Log In
The login node for the GPC cluster is gpc-login1 (142.150.188.51).
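For example, assuming your SciNet username is USER (substitute your own), you can connect with ssh:
ssh USER@142.150.188.51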
Compile/Devel Nodes
From a login node you can ssh to gpc-f101n001 or gpc-f101n002; these have exactly the same hardware as the compute nodes (8-core Nehalem with 16GB RAM).
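For instance, once logged in to the login node, you can reach a development node with:
ssh gpc-f101n001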
Environment Variables
A modules system is used to handle the environment variables associated with different compilers, MPI versions, libraries, etc. To see all the available options, type
module avail
To load a module
module load intel
These commands should go in your .bashrc file and/or in your submission scripts to make sure you are using the correct packages.
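As a rough sketch, the module lines near the top of a submission script (or in your .bashrc) might look like the following, using modules mentioned elsewhere on this page; check module avail for the exact names available to you.
# load compiler and MPI modules before compiling or running
module load intel
module load mvapich2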
Compilers
The Intel compilers are icc/icpc/ifort for C/C++/Fortran. For MPI jobs, the scripts mpicc/mpicpc/mpifort are wrappers around these compilers which ensure that the MPI header files and libraries are correctly included and linked.
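For example, a serial C program and an MPI C program could be built as follows (hello.c and hello_mpi.c are placeholder file names):
# serial build with the Intel C compiler
icc -O2 -o hello hello.c
# MPI build via the compiler wrapper, which adds the MPI headers and libraries
mpicc -O2 -o hello_mpi hello_mpi.c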
Submission Script
The GPC uses Moab/Torque as its queue manager. A sample submission script is shown below, with the #PBS directives at the top; the rest of the script is what will be executed on the compute nodes.
#!/bin/bash
# MOAB/Torque submission script for SciNet GPC
#
#PBS -l nodes=2:ppn=8,walltime=1:00:00,os=centos53computeA
#PBS -N test
# SOURCE YOUR ENVIRONMENT VARIABLES
source /scratch/user/.bashrc
# GO TO DIRECTORY SUBMITTED FROM
cd $PBS_O_WORKDIR
# MPIRUN COMMAND
mpirun -np 16 -hostfile $PBS_NODEFILE ./a.out
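Assuming the script above is saved as test.sh (a placeholder name), it can be submitted and monitored along these lines:
# submit the job to the queue
qsub test.sh
# check your jobs with Torque
qstat -u $USER
# or with Moab
showq -u $USER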
MPI over Infiniband
To use the Infiniband interconnect for MPI communications, the MVAPICH2 implementation has been installed and tested for both the Intel V11 and GCC v4.1 compilers.
You will need to load one of the following modules to set up the appropriate environment variables, depending on whether you want to compile with the Intel or GCC compilers.
INTEL
module load mvapich2 intel
GCC
module load mvapich2_gcc
MVAPICH2 provides the compiler wrappers mpicc/mpicxx/mpif90/mpif77.
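For example, with the mvapich2 and intel modules loaded, an MPI code could be compiled with the wrappers (file names are placeholders):
mpicc -O2 -o mpi_test mpi_test.c
mpif90 -O2 -o mpi_test_f mpi_test.f90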
Currently you can compile and link your MPI code on the development nodes gpc-f101n001 and gpc-f101n002; however, you will not be able to test it interactively there, as these nodes are not connected with Infiniband. Alternatively, you can compile, link, and test an MPI code in an interactive queue session, using the OS image "centos53develibA", as follows.
qsub -l nodes=2:ib:ppn=8,walltime=12:00:00,os=centos53develibA -I
Once you have compiled your MPI code and would like to test it, use the following command, where $PROCS is the number of processors to run on and a.out is your executable.
mpirun_rsh -np $PROCS -hostfile $PBS_NODEFILE ./a.out
To run your MPI-Infiniband job in a non-interactive queue you can use a submission script like the following, remembering to load the appropriate modules.
#!/bin/bash
#PBS -l nodes=2:ib:ppn=8,walltime=1:00:00,os=centos53computeibA
#PBS -N testib
# INTEL & MVAPICH2 ENVIRONMENT VARIABLES
module load intel mvapich2
# GO TO DIRECTORY SUBMITTED FROM
cd $PBS_O_WORKDIR
# MPIRUN COMMAND
mpirun_rsh -np 16 -hostfile $PBS_NODEFILE ./a.out