GPU Devel Nodes

Cell Development Cluster
Installed: June 2010
Operating System: Linux
Interconnect: Infiniband, GigE
RAM/Node: 48 GB
Cores/Node: 8
Login/Devel Node: cell-srv01 (from login.scinet)
Vendor Compilers: gcc, nvcc

Each Intel node has two 2.53 GHz quad-core Xeon X5550 CPUs and 48 GB of RAM; three of the nodes contain NVIDIA 9800GT GPUs.

Login

First log in via ssh with your SciNet account at login.scinet.utoronto.ca; from there you can proceed to cell-srv01, which is currently the gateway machine.
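For example (a sketch; substitute your own SciNet user name for USER):

  ssh USER@login.scinet.utoronto.ca
  ssh cell-srv01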

Compile/Devel/Compute Nodes

Nehalem (x86_64)

You can log into any of the 8 nodes, cell-srv[01-08], directly; however, the nodes have differing configurations:

  • cell-srv01 - login node & nfs server, GigE connected
  • cell-srv[02-05] - no GPU, GigE connected
  • cell-srv[06-07] - 1x NVIDIA 9800GT GPU, Infiniband connected
  • cell-srv08 - 2x NVIDIA 9800GT GPU, GigE connected

Local Disk

This test cluster currently cannot see the global /home and /scratch space, so you will have to copy (scp, etc.) your code to a separate local /home dedicated to this cluster. Initially you will probably also want to copy/modify your SciNet .bashrc, .bash_profile, and .ssh directory onto this system.
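For example, from a SciNet login node (a sketch; it assumes cell-srv01 accepts scp and that your code lives in ~/mycode):

  scp -r ~/mycode USER@cell-srv01:~/
  scp ~/.bashrc ~/.bash_profile USER@cell-srv01:~/
  scp -r ~/.ssh USER@cell-srv01:~/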

Programming Frameworks

Currently there are two programming frameworks to use: NVIDIA's CUDA framework or OpenCL.

CUDA

The CUDA toolkit is installed in /usr/local/cuda/.
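To use the toolkit you will typically need it on your paths. A minimal sketch, assuming the standard toolkit layout under /usr/local/cuda/ (the lib64 directory is an assumption; 32-bit installs use lib instead):

  export PATH=/usr/local/cuda/bin:$PATH
  export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH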

OpenCL

Demos, examples, and build details available in

Compilers

  • gcc -- GNU compiler, used for host (CPU) code
  • nvcc -- NVIDIA's CUDA compiler driver; it compiles device code and passes host code to gcc (see the example below)
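As an illustration, here is a minimal CUDA program and the nvcc line to build it. This is a sketch only; the file saxpy.cu and its contents are not part of the cluster software, and error checking is omitted for brevity:

  // saxpy.cu -- computes y = a*x + y on the GPU (illustrative sketch)
  #include <cstdio>
  #include <cstdlib>
  #include <cuda_runtime.h>

  // One thread per element; the guard handles n not dividing the grid evenly.
  __global__ void saxpy(int n, float a, const float *x, float *y)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) y[i] = a * x[i] + y[i];
  }

  int main()
  {
      const int n = 1 << 20;
      const size_t bytes = n * sizeof(float);

      // Host buffers
      float *hx = (float *)malloc(bytes);
      float *hy = (float *)malloc(bytes);
      for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

      // Device buffers
      float *dx, *dy;
      cudaMalloc((void **)&dx, bytes);
      cudaMalloc((void **)&dy, bytes);
      cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
      cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

      // Launch with 256-thread blocks, then copy the result back
      saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
      cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

      printf("y[0] = %f (expected 4.0)\n", hy[0]);

      cudaFree(dx); cudaFree(dy);
      free(hx); free(hy);
      return 0;
  }

Build and run it with:

  nvcc -o saxpy saxpy.cu
  ./saxpy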

MPI

Still a work in progress.


Documentation

  • CUDA
    • http...
  • OpenCL
    • http...

Further Info

User Codes

Please post any relevant information, problems, or best practices you have encountered when using or developing for CUDA and/or OpenCL on this cluster.