Revision as of 11:01, 8 April 2011


Accelerator Research Cluster (ARC)
Installed: June 2010
Operating System: Linux (RHEL 5.4)
Interconnect: Infiniband, GigE
Login/Devel Node: arc01 (from login.scinet)

The Accelerator Research Cluster (ARC) is a technology evaluation cluster combining 14 IBM PowerXCell 8i "Cell" nodes and 8 Intel x86_64 "Nehalem" nodes containing a total of 16 NVIDIA M2070 GPUs. The QS22s each have two 3.2 GHz IBM PowerXCell 8i CPUs, where each CPU has 1 Power Processing Unit (PPU) and 8 Synergistic Processing Units (SPUs), and 32 GB of RAM per node. The Intel nodes have two 2.53 GHz quad-core Xeon X5550 CPUs with 48 GB of RAM per node, and 3 of them also contain NVIDIA 9800GT video cards.

Please note that this cluster is not a production cluster and is only accessible to selected users.

Login

First, log in via ssh with your SciNet account at login.scinet.utoronto.ca; from there you can proceed to arc01, which is currently the gateway/devel node for this cluster.
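The two-hop login above can be done with plain ssh; a minimal session sketch, assuming your SciNet username is myuser (substitute your own):

```shell
# Hop 1: the general SciNet login node
ssh myuser@login.scinet.utoronto.ca

# Hop 2: from there, the ARC gateway/devel node
ssh arc01
```

Since arc01 is only reachable from the login node, running the second ssh inside the first session is the expected workflow.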

Compile/Devel/Compute Nodes

Cell

You can log into any of the 12 nodes blade[03-14] directly to compile, test, and run Cell-specific or OpenCL codes.
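Cell code is typically built with the IBM Cell SDK's split PPU/SPU toolchain; a hedged sketch, with hypothetical file names, assuming spu-gcc, ppu-gcc, and libspe2 are installed on the blades (the Cell Devel Info page has the authoritative toolchain details):

```shell
# Compile the SPU-side kernel with the SPU cross-compiler
spu-gcc -O2 -c spu_kernel.c -o spu_kernel.o

# Compile the PPU-side main program, linking libspe2 for SPU context management
ppu-gcc -O2 main_ppu.c -o cell_app -lspe2
```

The PPU binary loads and runs the SPU kernel at runtime through the libspe2 API; embedding the SPU object into the PPU executable is an additional SDK-specific link step not shown here.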

See the Cell Devel Info page for Cell specific details.

Nehalem (x86_64) & GPU

You can log into the devel node arc01 directly to compile and interactively test, and from there submit jobs to the other 7 x86_64/GPU nodes.

arc01 - login node & NFS server, GigE connected
arc[02-08] - 2x NVIDIA M2070 GPUs each, Infiniband connected
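The batch system on the GPU nodes is not named here; assuming a Torque/PBS-style scheduler (the GPU Devel Info page is authoritative), a minimal job script might look like the following, with the resource requests and application name purely illustrative:

```shell
#!/bin/bash
# Hypothetical job script -- node/ppn values and the binary name are assumptions,
# not taken from this page; check the GPU Devel Info page for the real syntax.
#PBS -l nodes=1:ppn=8,walltime=1:00:00
#PBS -N gpu_test

# Torque starts jobs in $HOME; change to the submission directory first
cd $PBS_O_WORKDIR
./my_gpu_app
```

Such a script would be submitted from arc01 with qsub (e.g. qsub jobscript.pbs) and would run on one of the arc[02-08] GPU nodes.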

See the GPU Devel Info page for GPU specific details.