P7 Linux Cluster

P7 Cluster (P7)
Installed: May 2011
Operating System: Linux (RHEL 6.0)
Interconnect: InfiniBand
RAM/Node: 128 GB
Cores/Node: 32 (128 threads)
Login/Devel Node: p7n01 (from login.scinet)
Vendor Compilers: xlc/xlf
Queue Submission: Torque

Specifications

The P7 Cluster consists of 5 IBM Power 755 servers, each with 4x 8-core 3.3 GHz POWER7 CPUs and 128 GB of RAM.

Login

First log in via ssh with your SciNet account at login.scinet.utoronto.ca, and from there you can proceed to p7n01, which is currently the gateway/devel node for this cluster.
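A typical login sequence looks something like the following (USERNAME stands in for your own SciNet account name):

$ ssh USERNAME@login.scinet.utoronto.ca
$ ssh p7n01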

Compiler/Devel Node

Two compiler suites are available. The GNU compilers gcc/g++/gfortran (version 4.4.4) ship with RHEL 6.0 and are available by default.

To use the IBM Power-specific compilers xlc/xlc++/xlf you need to load the following modules:

$ module load vacpp xlf

NOTE: Be sure to pass the "-q64" flag when using the IBM compilers.
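As a minimal sketch, compiling a C and a Fortran program in 64-bit mode with the IBM compilers might look like this (hello.c and hello.f90 are placeholder source files):

$ module load vacpp xlf
$ xlc -q64 -O3 -o hello_c hello.c
$ xlf90 -q64 -O3 -o hello_f hello.f90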

MPI

OpenMPI is available for both compilers; load one of the following modules:

$ module load openmpi/1.5.3-gcc-v4.4.4
$ module load openmpi/1.5.3-ibm-11.1+13.1


IBM's POE is installed, but due to current problems with LoadLeveler/LAPI/POE it is not recommended for use.
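As a rough sketch, compiling an MPI program against the IBM build might look like the following (mpi_hello.c is a placeholder source file); the resulting executable is then run through the batch system as described below:

$ module load vacpp xlf
$ module load openmpi/1.5.3-ibm-11.1+13.1
$ mpicc -q64 -O3 -o mpi_hello mpi_hello.c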

Submit a Job

Currently a very basic Torque queueing system has been set up, so jobs can be submitted in the regular Torque way, as discussed for the GPC.

Create a submission script (saved here as example.sh):

#!/bin/bash 
# P7 script
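# ppn=128 requests all 128 hardware threads (32 cores with 4-way SMT) on one node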
#PBS -l nodes=1:ppn=128,walltime=1:00:00
#PBS -N P7test

cd $PBS_O_WORKDIR

mpirun -np 128 ./a.out

Then submit

qsub example.sh

Interactive sessions can also be requested with

qsub -I -l nodes=1:ppn=128,walltime=1:00:00
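Once the interactive job starts you are placed on a compute node; as a rough sketch, you might then load an MPI module and launch the executable built earlier (./a.out here, as in the script example):

$ module load openmpi/1.5.3-ibm-11.1+13.1
$ cd $PBS_O_WORKDIR
$ mpirun -np 128 ./a.out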

To see running jobs use

qstat 

and to cancel a running or queued job

qdel JOBID