GPU Devel Nodes
| GPU Development Cluster | |
|---|---|
| Installed | June 2010 |
| Operating System | Linux |
| Interconnect | InfiniBand, GigE |
| RAM/Node | 48 GB |
| Cores/Node | 8 |
| Login/Devel Node | cell-srv01 (from login.scinet) |
| Vendor Compilers | gcc, nvcc |
Each node has two 2.53 GHz quad-core Intel Xeon X5550 CPUs and 48 GB of RAM; three of the nodes contain NVIDIA 9800GT GPUs.
Login
First log in via ssh with your SciNet account at login.scinet.utoronto.ca; from there you can proceed to cell-srv01, which is currently the gateway machine for this cluster.
Compile/Devel/Compute Nodes
Nehalem (x86_64)
You can log into any of the 8 nodes, cell-srv[01-08], directly; however, the nodes have differing configurations, as follows (a device-query sketch for checking GPU visibility follows the list):
- cell-srv01 - login node & nfs server, GigE connected
- cell-srv[02-05] - no GPU, GigE connected
- cell-srv[06-07] - 1x NVIDIA 9800GT GPU, InfiniBand connected
- cell-srv08 - 2x NVIDIA 9800GT GPU, GigE connected
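To check which GPUs are actually visible from the node you are logged into, a short CUDA device query such as the one below can help. This is only a sketch: the file name gpu_query.cu is arbitrary, and it assumes the CUDA toolkit described below (/usr/local/cuda/) with nvcc on your PATH.

```
// gpu_query.cu -- minimal sketch for listing the GPUs visible on this node.
// Compile with:  nvcc gpu_query.cu -o gpu_query
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        // Expected on the nodes without GPUs (cell-srv[01-05]).
        std::printf("no CUDA devices found (%s)\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("%d CUDA device(s) found\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("  device %d: %s, %.2f GB global memory, compute capability %d.%d\n",
                    i, prop.name, prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    prop.major, prop.minor);
    }
    return 0;
}
```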
Local Disk
This test cluster cannot currently see the global /home and /scratch filesystems, so you will have to copy your code (with scp, etc.) to the separate local /home dedicated to this cluster. When you first log in, you will probably also want to copy (and, if needed, modify) your SciNet .bashrc, .bash_profile, and .ssh directory onto this system.
Programming Frameworks
Currently there are two programming frameworks available: NVIDIA's CUDA framework and OpenCL.
CUDA
The CUDA toolkit is installed in /usr/local/cuda/.
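As a first test that the toolkit works on one of the GPU nodes, a minimal vector-addition program along the following lines can be compiled with nvcc. This is a sketch rather than a supported example; the file name vec_add.cu and the problem size are arbitrary.

```
// vec_add.cu -- minimal CUDA sketch: add two vectors on the GPU.
// Compile with:  nvcc vec_add.cu -o vec_add
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one pair of elements.
__global__ void vec_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vec_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    std::printf("c[0] = %f (expect 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```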
OpenCL
- OpenCL is installed, but its exact location is not yet documented here (the device-enumeration sketch below can be used to verify that it works).
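As a sketch for verifying OpenCL on a given node, the following host-only program enumerates the OpenCL platforms and GPU devices it can see. The file name cl_query.cpp is arbitrary; the CL/cl.h header normally ships with the CUDA toolkit and libOpenCL.so with the NVIDIA driver, but the exact paths on this cluster are not documented here, so adjust the include and library flags as needed.

```
// cl_query.cpp -- minimal sketch: list OpenCL platforms and GPU devices.
// Example compile (adjust paths to wherever OpenCL lives on this system):
//   g++ cl_query.cpp -I/usr/local/cuda/include -lOpenCL -o cl_query
#include <cstdio>
#include <CL/cl.h>

int main() {
    cl_uint nplat = 0;
    if (clGetPlatformIDs(0, NULL, &nplat) != CL_SUCCESS || nplat == 0) {
        std::printf("no OpenCL platforms found\n");
        return 1;
    }
    cl_platform_id plats[8];
    clGetPlatformIDs(nplat < 8 ? nplat : 8, plats, NULL);

    for (cl_uint p = 0; p < nplat && p < 8; ++p) {
        char name[256];
        clGetPlatformInfo(plats[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        std::printf("platform %u: %s\n", p, name);

        cl_uint ndev = 0;
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 0, NULL, &ndev) != CL_SUCCESS || ndev == 0) {
            std::printf("  no GPU devices\n");  // expected on the nodes without GPUs
            continue;
        }
        cl_device_id devs[8];
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, ndev < 8 ? ndev : 8, devs, NULL);
        for (cl_uint d = 0; d < ndev && d < 8; ++d) {
            char dname[256];
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            std::printf("  GPU device %u: %s\n", d, dname);
        }
    }
    return 0;
}
```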
Compilers
- nvcc -- the NVIDIA CUDA compiler driver (uses gcc as the host compiler)
MPI
Still a work in progress.
Documentation
- CUDA -- see NVIDIA's CUDA documentation (e.g. the CUDA Programming Guide), available from NVIDIA's developer site
- OpenCL -- see the Khronos Group's OpenCL specification and reference pages
Further Info
User Codes
Please post any relevant information, problems, or best practices you have encountered when using or developing for CUDA and/or OpenCL.