| Accelerator Research Cluster (ARC) | |
|---|---|
| Installed | June 2010 |
| Operating System | Linux (RHEL 5.4) |
| Interconnect | Infiniband, GigE |
| Login/Devel Node | cell-srv01 (from login.scinet) |
The Accelerator Research Cluster is a technology evaluation cluster with a combination of 14 IBM PowerXCell 8i "Cell" nodes and 8 Intel x86_64 "Nehalem" nodes containing NVIDIA GPUs. The QS22 Cell blades each have two 3.2GHz IBM PowerXCell 8i CPUs, where each CPU has 1 Power Processing Unit (PPU) and 8 Synergistic Processing Units (SPUs), and 32GB of RAM per node. The Intel nodes have two 2.53GHz 4-core Xeon X5550 CPUs with 48GB of RAM per node, and 3 of them contain NVIDIA 9800GT video cards.
Login
First log in via ssh with your SciNet account to login.scinet.utoronto.ca; from there you can proceed to cell-srv01, which is currently the gateway machine for this cluster.
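A minimal sketch of the two-step login (USERNAME is a placeholder for your SciNet account name):

```
# From your own machine, log in to the SciNet gateway first
ssh USERNAME@login.scinet.utoronto.ca

# From login.scinet, hop to the ARC gateway node
ssh cell-srv01

# From cell-srv01 the other ARC nodes are reached the same way, e.g.
ssh blade03        # a Cell blade
ssh cell-srv06     # an x86_64/GPU node
```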
Compile/Devel/Compute Nodes
Cell
You can log into any of the 12 nodes blade[03-14] directly to compile/test/run Cell-specific or OpenCL codes.
See the Cell Devel Info page for Cell-specific details.
Nehalem (x86_64) & GPU
You can log into any of the 8 nodes cell-srv[01-08] directly; however, the nodes have differing configurations (a quick check for GPU presence is sketched after the list below).
- cell-srv01 - login node & NFS server, GigE connected
- cell-srv[02-05] - no GPU, GigE connected
- cell-srv[06-07] - 1x NVIDIA 9800GT GPU, Infiniband connected
- cell-srv08 - 2x NVIDIA 9800GT GPU, GigE connected
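Since only some of these nodes contain GPUs, a quick generic way to check the node you are logged into is sketched below; the availability of lspci and the NVIDIA driver tools on ARC is an assumption, not something documented here.

```
# List any NVIDIA devices visible on this node
lspci | grep -i nvidia

# If the NVIDIA driver is installed, nvidia-smi reports the GPUs it sees
nvidia-smi
```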
See the GPU Devel Info page for GPU-specific details.
Local Disk
This test cluster currently cannot see the global /home and /scratch space, so you will have to copy (scp, etc.) your code to a separate local /home dedicated to this cluster. Initially you will probably also want to copy/modify your SciNet .bashrc, .bash_profile, and .ssh directory onto this system.
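A minimal sketch of staging code and dotfiles onto the cluster, run from a SciNet login node where the global /home is visible (mycode/ is a placeholder directory name):

```
# Copy a code directory to the ARC local /home
scp -r ~/mycode cell-srv01:

# Copy shell startup files
scp ~/.bashrc ~/.bash_profile cell-srv01:

# Copy the contents of your .ssh directory (rsync avoids nesting if ~/.ssh already exists there)
rsync -a ~/.ssh/ cell-srv01:.ssh/
```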