P7 Linux Cluster

P7 Cluster (P7)
Installed: May 2011
Operating System: Linux (RHEL 6.0)
Number of Nodes: 5
Interconnect: InfiniBand (2 DDR/node)
RAM/Node: 128 GB
Cores/Node: 32 (128 threads)
Login/Devel Node: p701 (from login.scinet)
Vendor Compilers: xlc/xlf
Queue Submission: LoadLeveler

Specifications

The P7 Cluster consists of 5 IBM Power 755 servers, each with four 8-core 3.3 GHz POWER7 CPUs and 128 GB of RAM. Like the POWER6, the POWER7 uses Simultaneous Multithreading (SMT), but extends the design from 2 threads per core to 4. This allows the 32 physical cores of each node to support up to 128 threads, which in many cases can lead to significant speedups.
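
As a quick check, counting the processor entries in /proc/cpuinfo on a P7 node should report all 128 hardware threads (4 SMT threads on each of the 32 physical cores):

$ grep -c '^processor' /proc/cpuinfo
128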

Login

First log in via ssh with your SciNet account at login.scinet.utoronto.ca; from there you can proceed to p701, which is currently the gateway/devel node for this cluster. It is recommended that you modify your .bashrc to distinguish between the TCS, P7, and GPC to avoid module confusion; an example configuration is given here.
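
For example, a typical login sequence looks like the following (USERNAME is a placeholder for your own SciNet account):

$ ssh USERNAME@login.scinet.utoronto.ca
$ ssh p701

A minimal sketch of such a .bashrc guard is shown below; the hostname patterns are assumptions and should be checked against the actual node names:

# Minimal .bashrc sketch: only run system-specific setup on the matching machine
# (the hostname patterns below are assumptions -- verify with `hostname`)
case $(hostname) in
    p7*)   # P7 cluster: load P7-specific modules here, e.g.
           # module load vacpp xlf pe
           ;;
    tcs*)  # TCS: load TCS-specific modules here
           ;;
    gpc*)  # GPC: load GPC-specific modules here
           ;;
esac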

Compiler/Devel Node

From p701 you can compile, do short tests, and submit your jobs to the queue.

Software

GNU Compilers

The GNU compilers gcc/g++/gfortran version 4.4.4 ship with RHEL 6.0 and are available without loading any modules.
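
For example, a simple 64-bit build with the system GCC might look like the following (hello.c is just a placeholder source file):

$ gcc -m64 -O2 -o hello hello.c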

IBM Compilers

To use the IBM POWER-specific compilers xlc/xlc++/xlf, you need to load the following modules:

$ module load vacpp xlf

NOTE: Be sure to use "-q64" when using the IBM compilers.
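
For example, a serial build with the IBM compilers might look like the following (the source file names are placeholders):

$ module load vacpp xlf
$ xlc -q64 -O3 -o my_c_code my_c_code.c
$ xlf -q64 -O3 -o my_f_code my_f_code.f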

MPI

IBM's POE (Parallel Operating Environment) is available and works with both the IBM and GNU compilers. To use it, load the pe module:

$ module load pe
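
An MPI build might then look like the sketch below; the compiler wrapper name (mpcc) is an assumption about this POE installation, so verify it on p701 (e.g. with "module show pe") before relying on it:

$ module load vacpp xlf pe
$ mpcc -q64 -O3 -o my_mpi_code my_mpi_code.c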


Submit a Job

The current scheduler is IBM's LoadLeveler, similar to the implementation on the TCS; however, be sure to include the @ environment settings shown in the sample script below, as they differ from those on the TCS and are necessary to get full performance.

#!/bin/bash
#===================================
# P7 Load Leveler Submission Script
#===================================
#
# Don't change these parameters unless you really know what you are doing
#
#@ environment = MP_INFOLEVEL=0; MP_USE_BULK_XFER=yes; MP_BULK_MIN_MSG_SIZE=64K; \
#                MP_EAGER_LIMIT=64K; LAPI_DEBUG_ENABLE_AFFINITY=no
#
#===================================
# Avoid core dumps
# @ core_limit   = 0
#===================================
# Job specific
#===================================
#
# @ job_name = myjob
# @ job_type = parallel
# @ class = verylong
# @ output = $(jobid).out
# @ error = $(jobid).err
# @ wall_clock_limit = 2:00:00
#
# @ node = 2
# @ tasks_per_node = 128
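#   (128 tasks per node uses all 4 SMT hardware threads on each of the
#    32 physical cores; set this to 32 to run one task per physical core)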
#
# @ queue
#
#===================================

./my_code 

To submit a job:

$ llsubmit myjob.ll

To show running jobs, use:

$ llq
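
To restrict the listing to your own jobs, you can pass your user name to llq's -u option:

$ llq -u $USER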

To cancel a job, use:

$ llcancel JOBID