{| style="border-spacing: 8px; width:100%"
| valign="top" style="cellpadding:1em; padding:1em; border:2px solid; background-color:#f6f674; border-radius:5px"|
'''WARNING: SciNet is in the process of replacing this wiki with a new documentation site. For current information, please go to [https://docs.scinet.utoronto.ca https://docs.scinet.utoronto.ca]'''
|}

{{Infobox Computer
|image=[[Image:Tesla S2070 3qtr.gif|center|200px|thumb]]
|name=Accelerator Research Cluster (ARC)
|installed=June 2010, April 2011
|operatingsystem= Linux (CentOS 6.2)
|loginnode= arc01 (from <tt>login.scinet</tt>)
|nnodes=8 (x86) + 4x4 (GPU) + 14 (Cell)
|interconnect=DDR Infiniband
}}

The Accelerator Research Cluster (ARC) is a technology evaluation cluster with a combination of 14 IBM PowerXCell 8i "Cell" nodes and 8 Intel x86_64 "Nehalem" nodes containing a total of 16 NVIDIA M2070 GPUs.
The QS22 Cell blades each have two 3.2GHz IBM PowerXCell 8i CPUs, where each CPU has 1 Power Processing Unit (PPU) and 8 Synergistic Processing Units (SPUs), and 32GB of RAM per node. The Intel nodes have two 2.53GHz quad-core Xeon X5550 CPUs and 48GB of RAM per node, and each node contains two NVIDIA M2070 (Fermi) GPUs with 6GB of RAM apiece.

Please note that this cluster is not a production cluster and is only accessible to selected users.

===Login===

First login via ssh with your SciNet account at '''<tt>login.scinet.utoronto.ca</tt>''', and from there you can proceed to '''<tt>arc01</tt>''', which is currently the gateway/devel node for this cluster.

==Compile/Devel/Compute Nodes==

=== [[ Cell_Devel_Nodes | Cell ]] ===

You can log into any of the 12 nodes '''<tt>blade[03-14]</tt>''' directly to compile/test/run Cell-specific or OpenCL codes.
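
As a quick sanity check before porting anything larger, the short C program below simply enumerates the OpenCL platforms and devices that the runtime on a blade reports. This is only an illustrative sketch and assumes an OpenCL SDK (the <tt>CL/cl.h</tt> header and <tt>libOpenCL</tt>) is installed on the blades; the file name <tt>listcl.c</tt> is arbitrary, and the actual toolchain and paths are documented on the Cell Devel Info page.

<pre>
/* listcl.c -- list the OpenCL platforms and devices visible on a node (illustrative sketch) */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint nplat = 0, p, d;

    if (clGetPlatformIDs(8, platforms, &nplat) != CL_SUCCESS || nplat == 0) {
        fprintf(stderr, "No OpenCL platforms found -- is the OpenCL runtime installed?\n");
        return 1;
    }

    for (p = 0; p < nplat; p++) {
        char pname[256];
        cl_device_id devices[16];
        cl_uint ndev = 0;

        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);
        printf("Platform %u: %s\n", (unsigned)p, pname);

        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &ndev) != CL_SUCCESS)
            continue;

        for (d = 0; d < ndev; d++) {
            char dname[256];
            cl_uint cu = 0;
            cl_ulong mem = 0;

            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(cu), &cu, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(mem), &mem, NULL);
            printf("  Device %u: %s (%u compute units, %lu MB global memory)\n",
                   (unsigned)d, dname, (unsigned)cu, (unsigned long)(mem / (1024 * 1024)));
        }
    }
    return 0;
}
</pre>

Compile it with something like <tt>gcc listcl.c -lOpenCL -o listcl</tt>, adding <tt>-I</tt>/<tt>-L</tt> paths for wherever the SDK lives on the blades.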

See the [[ Cell_Devel_Nodes | Cell Devel Info ]] page for Cell-specific details.

=== [[ GPU_Devel_Nodes | GPU ]] ===

You can log into the devel node '''<tt>arc01</tt>''' directly to compile and interactively test, and from there submit jobs to the other 7 x86_64/GPU nodes.
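
For a first interactive test on <tt>arc01</tt>, a small device query like the C program below can confirm what the CUDA runtime sees; on these nodes it should report M2070s with roughly 6GB of memory and compute capability 2.0. This is only a sketch, not site documentation: it assumes the CUDA toolkit and <tt>nvcc</tt> are available (check the GPU Devel Info page for the modules actually provided), and the file name <tt>gpuquery.c</tt> is just an example.

<pre>
/* gpuquery.c -- print the CUDA devices visible on a node (illustrative sketch) */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int ndev = 0, i;
    cudaError_t err = cudaGetDeviceCount(&ndev);

    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Found %d CUDA device(s)\n", ndev);

    for (i = 0; i < ndev; i++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* An M2070 should show roughly 6GB of memory and compute capability 2.0 */
        printf("Device %d: %s, %lu MB, compute capability %d.%d, %d multiprocessors\n",
               i, prop.name, (unsigned long)(prop.totalGlobalMem / (1024 * 1024)),
               prop.major, prop.minor, prop.multiProcessorCount);
    }
    return 0;
}
</pre>

Compile and run it interactively with, e.g., <tt>nvcc gpuquery.c -o gpuquery && ./gpuquery</tt>; anything longer-running should be submitted as a job to the GPU nodes as described on the GPU Devel Info page.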
 
 
See the [[ GPU_Devel_Nodes | GPU Devel Info ]] page for GPU-specific details.
 
==Local Disk==
 
 
This test cluster currently cannot see the global <tt>/home</tt> and <tt>/scratch</tt> space, so you will have to copy (scp, etc.) your code to a separate local <tt>/home</tt> dedicated to this cluster. Initially you will probably also want to copy/modify your SciNet .bashrc, .bash_profile, and .ssh directory onto this system.
 
