Accelerator Research Cluster

Revision as of 11:00, 8 April 2011


Accelerator Research Cluster (ARC)
Installed: June 2010
Operating System: Linux (RHEL 5.4)
Interconnect: Infiniband, GigE
Login/Devel Node: arc01 (from login.scinet)

The Accelerator Research Cluster (ARC) is a technology evaluation cluster combining 14 IBM PowerXCell 8i "Cell" nodes with 8 Intel x86_64 "Nehalem" nodes containing 16 NVIDIA M2070 GPUs. The Cell nodes are IBM QS22 blades; each has two 3.2GHz IBM PowerXCell 8i CPUs, where each CPU has 1 Power Processing Unit (PPU) and 8 Synergistic Processing Units (SPUs), and 32GB of RAM per node. The Intel nodes have two 2.53GHz quad-core Xeon X5550 CPUs and 48GB of RAM per node.

Please note that this cluster is not a production cluster and is only accessible to selected users.

Login

First log in via ssh with your SciNet account at login.scinet.utoronto.ca, and from there proceed to arc01, which is currently the gateway machine.
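For example (USERNAME is a placeholder for your own SciNet account; the hostnames are as above):

  ssh USERNAME@login.scinet.utoronto.ca
  ssh arc01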

Compile/Devel/Compute Nodes

Cell

You can log into any of the 12 nodes blade[03-14] directly to compile, test, and run Cell-specific or OpenCL codes.
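As a quick sanity check that an OpenCL runtime is visible on a blade, a small device-listing program along the following lines should work. This is a minimal sketch, not taken from the ARC documentation: it assumes an OpenCL implementation (such as IBM's OpenCL runtime for the Cell) is installed and that linking with -lOpenCL suffices; the file name cldevices.c is arbitrary.

  /* List the OpenCL devices visible on this node.
     Build (assumption): gcc -o cldevices cldevices.c -lOpenCL */
  #include <stdio.h>
  #include <CL/cl.h>

  int main(void)
  {
      cl_platform_id platform;
      cl_uint nplatforms = 0, ndevices = 0, i;
      cl_device_id devices[8];
      char name[256];

      /* Take the first platform the runtime reports. */
      if (clGetPlatformIDs(1, &platform, &nplatforms) != CL_SUCCESS
          || nplatforms == 0) {
          fprintf(stderr, "no OpenCL platform found\n");
          return 1;
      }

      /* Enumerate up to 8 devices of any type and print their names. */
      clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devices, &ndevices);
      for (i = 0; i < ndevices; i++) {
          clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
          printf("device %u: %s\n", i, name);
      }
      return 0;
  }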

See the Cell Devel Info page for Cell-specific details.

Nehalem (x86_64) & GPU

You can log into the head node arc01 directly to compile and test interactively, and from there submit jobs to the other 7 nodes.

  • arc01 - login node & NFS server, GigE connected
  • arc[02-08] - 2x NVIDIA M2070 GPUs each, Infiniband connected
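To confirm which GPUs a job actually sees on one of these nodes, a minimal CUDA device query along these lines can be used. This is a sketch, not taken from the ARC documentation: the file name gpuquery.cu is arbitrary, and it assumes the CUDA toolkit's nvcc is on your path.

  /* List the CUDA-capable GPUs on this node, e.g. the two M2070s
     on an arc node.  Build (assumption): nvcc -o gpuquery gpuquery.cu */
  #include <stdio.h>
  #include <cuda_runtime.h>

  int main(void)
  {
      int i, n = 0;
      if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
          fprintf(stderr, "no CUDA-capable device found\n");
          return 1;
      }
      for (i = 0; i < n; i++) {
          struct cudaDeviceProp prop;
          cudaGetDeviceProperties(&prop, i);
          printf("GPU %d: %s, %.1f GB global memory\n",
                 i, prop.name, prop.totalGlobalMem / 1073741824.0);
      }
      return 0;
  }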

See the GPU Devel Info page for GPU-specific details.