{{Infobox Computer
|image=[[Image:300px-Cell_Broadband_Engine_Processor.jpg|center|300px|thumb]]
|name=Accelerator Research Cluster (ARC)
|installed=June 2010
|operatingsystem= Linux (RHEL 5.4)
|loginnode= cell-srv01 (from <tt>login.scinet</tt>)
|numberofnodes=8+14
|rampernode=32 GB (Cell) / 48 GB (Nehalem)
|corespernode=2 PPU + 16 SPU (Cell) / 8 (Nehalem)
|interconnect=Infiniband,GigE
|vendorcompilers=ppu-gcc, spu-gcc
}}

The Accelerator Research Cluster is a technology evaluation cluster with a combination of 14 IBM PowerXCell 8i "Cell" nodes and 8 Intel x86_64 "Nehalem" nodes containing NVIDIA GPUs.
The QS22 Cell blades each have two 3.2GHz IBM PowerXCell 8i CPUs, where each CPU has 1 Power Processing Unit (PPU) and 8 Synergistic Processing Units (SPUs), with 32GB of RAM per node. The Intel nodes have two 2.53GHz quad-core Xeon X5550 CPUs with 48GB of RAM per node, and 3 of them contain NVIDIA 9800GT video cards.

===Login===

First log in via ssh with your SciNet account at <tt>login.scinet.utoronto.ca</tt>, and from there you can proceed to <tt>cell-srv01</tt>, which is currently the gateway machine.
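
A typical session looks like the following (a minimal sketch; <tt>USER</tt> is a placeholder for your own SciNet username):

<pre>
# from your workstation: log in to the SciNet gateway first
ssh USER@login.scinet.utoronto.ca

# from login.scinet: hop to the ARC gateway node
ssh cell-srv01
</pre>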

==Compile/Devel/Compute Nodes==

=== Cell ===
 
You can log into any of the 12 nodes '''<tt>blade[03-14]</tt>''' directly to compile/test/run Cell-specific or OpenCL codes.

See the [[ Cell_Devel_Nodes | Cell Devel Info ]] page for Cell-specific details.
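
As a rough sketch of how the vendor compilers are invoked (assuming <tt>ppu-gcc</tt> and <tt>spu-gcc</tt> are already in your path; <tt>hello_ppu.c</tt> and <tt>hello_spu.c</tt> are hypothetical sources, and combining PPU and SPU objects into one program is covered on the Cell Devel Info page):

<pre>
# compile a plain C program for the Power Processing Unit (PPU)
ppu-gcc -O2 -o hello_ppu hello_ppu.c

# compile a standalone program for the Synergistic Processing Units (SPUs)
spu-gcc -O2 -o hello_spu hello_spu.c
</pre>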
  
 
=== Nehalem (x86_64) ===
 
You can log into any of the 8 nodes '''<tt>cell-srv[01-08]</tt>''' directly.

See the [[ GPU_Devel_Nodes | GPU Devel Info ]] page for GPU-specific details.
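
Only 3 of the 8 Nehalem nodes contain a 9800GT card, so a quick (hedged) way to check whether the node you are on is one of them is to look for the card on the PCI bus:

<pre>
# list PCI devices and keep only NVIDIA entries; no output means no GPU in this node
lspci | grep -i nvidia
</pre>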
  
 
==Local Disk==
 
This test cluster currently cannot see the global <tt>/home</tt> and <tt>/scratch</tt> space, so you will have to copy (scp, etc.) your code to a separate local <tt>/home</tt> dedicated to this cluster. Initially you will probably also want to copy/modify your SciNet .bashrc, .bash_profile, and .ssh directory onto this system.
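
For example, run from a SciNet login node (a hedged sketch; <tt>USER</tt> and <tt>~/mycode</tt> are placeholders for your username and source directory):

<pre>
# copy your source tree from the SciNet /home to the ARC-local /home
scp -r ~/mycode USER@cell-srv01:~/

# copy your shell setup and ssh keys as well
scp -r ~/.bashrc ~/.bash_profile ~/.ssh USER@cell-srv01:~/
</pre>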
