Using the TCS

 
=About=
 
The Tightly-coupled Capability System (TCS) is a cluster of IBM Power6 nodes intended for jobs that scale well to at least 32 processes and require high bandwidth and large memory. It was installed at SciNet in late 2008 and is operating in "friendly-user" mode during winter 2009.

==Node Names==

Nodes are named tcs-fxxnyy, where xx is the frame/rack number and yy is the number of the node within that frame. For example:

* node tcs-f02n01 is node #1 in frame/rack #2
* the entire list of 104 nodes can be seen with llstatus (see the example below)
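A minimal sketch of how to do this from a login node follows; llstatus is the standard LoadLeveler status command, and the filtering shown is plain shell usage rather than anything SciNet-specific:

<pre>
# List all nodes known to LoadLeveler, with their scheduling state.
llstatus

# Print just the node names, e.g. to check which nodes sit in frame 2.
llstatus | awk '/^tcs-/ {print $1}'
</pre>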
 
==Node Specs==
 
There are 102 compute nodes, each with:

* 32 Power6 cores (4.7 GHz); each core is 2-way multi-threaded using SMT (simultaneous multithreading), so a node presents 64 hardware threads (see the check below)
* 128 GB of RAM, except for tcs-f11n03 and tcs-f11n04, which have 256 GB
* 4 InfiniBand interfaces, which should automatically be used by the MPI libraries; GigE interfaces are used for rsh access and GPFS token traffic
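To confirm the SMT mode and installed memory once you are logged in to a node, the commands below are a quick check. This assumes the nodes run AIX; smtctl and lsattr are standard AIX tools, not SciNet-specific ones:

<pre>
# Report the current SMT setting; with 2-way SMT the 32 cores
# appear as 64 logical processors.
smtctl

# Report the installed physical memory (value is in kilobytes).
lsattr -El sys0 -a realmem
</pre>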
 
=User Access=
 