GPC Quickstart

From oldwiki.scinet.utoronto.ca

Revision as of 11:26, 16 June 2009

{| border="1" cellpadding="5"
|+ General Purpose Cluster (GPC)
|-
| Installed || June 2009
|-
| Operating System || Linux
|-
| Interconnect || 1/4 on Infiniband, rest on GigE
|-
| RAM/Node || 16 GB
|-
| Cores/Node || 8
|-
| Login/Devel Node || gpc-login1 (142.150.188.51)
|-
| Vendor Compilers || icc (C), icpc (C++), ifort (Fortran)
|-
| Queue Submission || LoadLeveler
|}

The General Purpose Cluster is an extremely large cluster (ranked Nth in the world, and fastest in Canada) and is where most simulations are done at SciNet. It is an IBM iDataPlex cluster based on Intel's Nehalem architecture (one of the first in the world to make use of the new chips). The GPC will consist of 3,780 nodes with a total of 30,240 2.5GHz cores, with 16GB RAM per node (2GB per core). One quarter of the cluster will be interconnected with non-blocking 4x-DDR Infiniband while the rest of the nodes are connected with gigabit ethernet.

===Log In===

The login node for the GPC cluster is gpc-login1.

===Compile/Devel Nodes===

===Compilers===

The Intel compilers are icc/icpc/ifort for C/C++/Fortran. For MPI jobs, the wrapper scripts mpicc/mpicpc/mpifort invoke these compilers while ensuring that the MPI header files and libraries are correctly included and linked.
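As a quick check that the wrapper scripts work, a minimal MPI program can be compiled and run. This is an illustrative sketch, not GPC-specific: the file name hello_mpi.c is hypothetical, and it assumes only a working MPI installation.

```c
/* hello_mpi.c -- minimal MPI test program (file name is hypothetical).
 * Compile with an MPI wrapper script, e.g.:  mpicc -O2 -o hello_mpi hello_mpi.c
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of MPI ranks    */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime    */
    return 0;
}
```

On the cluster such a program would normally be launched through the batch system rather than run directly on the login node.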

===Submission Script===

An illustrative skeleton of a LoadLeveler submission script is below. The resource directives shown are common LoadLeveler keywords with placeholder values; the correct class and resource settings for the GPC have not yet been determined.

<pre>
#
# LoadLeveler submission script for SciNet GPC
#
#@ job_name         = example
#@ output           = $(job_name).$(jobid).out
#@ error            = $(job_name).$(jobid).err
#@ job_type         = parallel
#@ node             = 2
#@ tasks_per_node   = 8
#@ wall_clock_limit = 1:00:00
#
# Submit the job
#
#@ queue
</pre>

===MPI over Infiniband===

To use the Infiniband interconnect for MPI communications, the mvapich2 implementation has been installed and tested with both the Intel v11 and GCC v4.1 compilers.

===Performance Tools===