Accelerator Research Cluster
| Accelerator Research Cluster (ARC) | |
| --- | --- |
| Installed | June 2010 |
| Operating System | Linux (RHEL 5.4) |
| Interconnect | Infiniband, GigE |
| Login/Devel Node | cell-srv01 (from login.scinet) |
The Accelerator Research Cluster is a technology evaluation cluster combining 14 IBM PowerXCell 8i "Cell" nodes and 8 Intel x86_64 "Nehalem" nodes containing NVIDIA GPUs. The QS22s each have two 3.2GHz IBM PowerXCell 8i CPUs, where each CPU has 1 Power Processing Unit (PPU) and 8 Synergistic Processing Units (SPUs), and 32GB of RAM per node. The Intel nodes each have two 2.53GHz quad-core Xeon X5550 CPUs with 48GB of RAM per node, and 3 of them contain NVIDIA 9800GT video cards.
Login
First log in via ssh with your SciNet account at login.scinet.utoronto.ca; from there you can proceed to cell-srv01, which is currently the gateway machine.
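For example, the two-hop login might look like this (a sketch; replace myuser with your own SciNet account name):

```bash
# Step 1: connect to the SciNet gateway.
ssh myuser@login.scinet.utoronto.ca

# Step 2: from there, hop to the ARC login/devel node.
ssh cell-srv01
```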
Compile/Devel/Compute Nodes
Cell
You can log into any of the 12 nodes blade[03-14] directly to compile/test/run Cell-specific or OpenCL codes.
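As a quick smoke test, a minimal PPU-side build might look like the following sketch; it assumes the IBM Cell SDK's ppu-gcc compiler is installed on the blades, which this page does not state:

```bash
# Hypothetical sanity check on a Cell blade (run from cell-srv01).
ssh blade03

# Build and run a trivial program for the Power Processing Unit.
echo 'int main(void){return 0;}' > hello.c
ppu-gcc hello.c -o hello
./hello && echo "PPU build OK"
```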
See the Cell Devel Info page for Cell-specific details.
Nehalem (x86_64)
You can log into any of the 8 nodes cell-srv[01-08] directly; however, the nodes have differing configurations (a quick way to check a node's GPUs is sketched after this list):
- cell-srv01 - login node & NFS server, GigE connected
- cell-srv[02-05] - no GPU, GigE connected
- cell-srv[06-07] - 1x NVIDIA 9800GT GPU, Infiniband connected
- cell-srv08 - 2x NVIDIA 9800GT GPU, GigE connected
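To confirm a node's configuration, you can query its GPUs directly; this sketch assumes the NVIDIA driver and its nvidia-smi utility are installed on the GPU nodes:

```bash
# Example: cell-srv08 should show two 9800GT cards.
ssh cell-srv08

# List NVIDIA devices on the PCI bus (works even without the driver loaded).
lspci | grep -i nvidia

# Driver-level view of the installed GPUs.
nvidia-smi
```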
See the GPU Devel Info page for GPU-specific details.
Local Disk
This test cluster currently cannot see the global /home and /scratch space, so you will have to copy (scp, etc.) your code to a separate local /home dedicated to this cluster. Initially you will probably also want to copy/modify your SciNet .bashrc, .bash_profile, and .ssh directory onto this system.
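A minimal sketch of the initial copy, run from login.scinet.utoronto.ca (myuser and the source paths are placeholders):

```bash
# Copy your code from the regular SciNet filesystem to the ARC's local /home.
scp -r ~/mycode myuser@cell-srv01:~/

# Bring over your shell setup and ssh keys as a starting point.
scp ~/.bashrc ~/.bash_profile myuser@cell-srv01:~/
scp -r ~/.ssh myuser@cell-srv01:~/
```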