GPU Devel Nodes
| GPU Development Cluster | |
|---|---|
| Installed | June 2010 |
| Operating System | Linux |
| Interconnect | Infiniband, GigE |
| RAM/Node | 48 GB |
| Cores/Node | 8 |
| Login/Devel Node | cell-srv01 (from login.scinet) |
| Vendor Compilers | gcc, nvcc |
Each node has two quad-core 2.53 GHz Intel Xeon X5550 CPUs and 48 GB of RAM; three of the nodes contain NVIDIA 9800GT GPUs.
Login
First log in via ssh with your SciNet account to login.scinet.utoronto.ca; from there you can proceed to cell-srv01, which is currently the gateway machine.
Access to these machines is currently controlled. Please email support@scinet.utoronto.ca for access.
Compile/Devel/Compute Nodes
Nehalem (x86_64)
You can log into any of the 8 nodes, cell-srv[01-08], directly; however, the nodes have differing configurations (a short device-query sketch for checking what a given node exposes follows the list):
- cell-srv01 - login node & nfs server, GigE connected
- cell-srv[02-05] - no GPU, GigE connected
- cell-srv[06-07] - 1x NVIDIA 9800GT GPU, Infiniband connected
- cell-srv08 - 2x NVIDIA 9800GT GPU, GigE connected
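Since the GPU configuration differs from node to node, a quick way to check what the node you are logged into actually exposes is to query the CUDA runtime. The following is a minimal sketch (the file name deviceinfo.cu is arbitrary); it assumes the cuda module is loaded so that nvcc is on your path.

```
// deviceinfo.cu -- list the CUDA-capable GPUs visible on this node.
// Compile (after `module load cuda`):  nvcc deviceinfo.cu -o deviceinfo
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Found %d CUDA device(s)\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  Device %d: %s, %d multiprocessors, %.0f MB global memory\n",
               i, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}
```

On cell-srv[02-05] this should report no devices, while cell-srv[06-07] and cell-srv08 should report one and two 9800GT devices respectively.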
Software
The same software installed on the GPC is available on ARC using the same modules framework; see the GPC documentation for full details.
Programming Frameworks
Currently there are two programming frameworks to use: NVIDIA's CUDA framework or OpenCL.
CUDA
The CUDA Toolkit versions currently in use are 3.0 (default) and 3.1. To use one, load the corresponding module:
module load cuda
or
module load cuda/cuda-3.1
The CUDA driver is installed locally on each node; the CUDA Toolkits themselves are installed in:
- /project/scinet/arc/cuda-3.0/
- /project/scinet/arc/cuda-3.1/
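A minimal sketch to confirm that the toolkit and the local driver work together (the file and kernel names here are arbitrary, not part of any installed example):

```
// saxpy.cu -- minimal CUDA example: y = a*x + y on the GPU.
// Compile (after `module load cuda`):  nvcc saxpy.cu -o saxpy
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    float *x = (float *)malloc(n * sizeof(float));
    float *y = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Allocate device buffers and copy the input arrays over.
    float *d_x, *d_y;
    cudaMalloc((void **)&d_x, n * sizeof(float));
    cudaMalloc((void **)&d_y, n * sizeof(float));
    cudaMemcpy(d_x, x, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch one thread per element, 256 threads per block.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, d_x, d_y);

    cudaMemcpy(y, d_y, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expect 5.0)\n", y[0]);

    cudaFree(d_x); cudaFree(d_y);
    free(x); free(y);
    return 0;
}
```

Remember that only cell-srv[06-08] have GPUs, so the program will only find a device on those nodes.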
OpenCL
As of version 3.0, OpenCL is included in the CUDA Toolkit, so loading the CUDA module is all that is required.
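A minimal host-only sketch that lists the OpenCL platforms and devices exposed by the NVIDIA driver. The compile line is an assumption: the OpenCL header ships with the CUDA installation under the path listed above, and libOpenCL comes with the locally installed driver, so adjust the -I/-l flags as needed.

```
/* clinfo.c -- list OpenCL platforms and devices.
 * One possible build (paths are assumptions):
 *   gcc clinfo.c -o clinfo -I/project/scinet/arc/cuda-3.0/include -lOpenCL
 */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[4];
    cl_uint nplat = 0;
    if (clGetPlatformIDs(4, platforms, &nplat) != CL_SUCCESS) {
        printf("clGetPlatformIDs failed\n");
        return 1;
    }
    for (cl_uint p = 0; p < nplat; ++p) {
        char name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        printf("Platform %u: %s\n", p, name);

        cl_device_id devices[8];
        cl_uint ndev = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &ndev);
        for (cl_uint d = 0; d < ndev; ++d) {
            char dname[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            printf("  Device %u: %s\n", d, dname);
        }
    }
    return 0;
}
```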
Compilers
- nvcc -- NVIDIA's CUDA compiler
MPI
The GPC MPI packages can be used on this system. See the GPC section on MPI for more details.
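A small sketch of combining a GPC MPI package with CUDA: each MPI rank reports the GPUs visible on its node. The mpicxx wrapper name and the exact compile/link flags depend on which MPI module you load, so treat the build comment as an assumption and check the GPC MPI documentation.

```
// mpi_gpus.cu -- each MPI rank reports the CUDA devices visible on its node.
// One possible build (wrapper name and paths are assumptions):
//   nvcc -c mpi_gpus.cu          (with the MPI include path added)
//   mpicxx mpi_gpus.o -o mpi_gpus -lcudart   (with the CUDA library path added)
#include <cstdio>
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char host[MPI_MAX_PROCESSOR_NAME];
    int len = 0;
    MPI_Get_processor_name(host, &len);

    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    printf("rank %d on %s sees %d CUDA device(s)\n", rank, host, ndev);

    MPI_Finalize();
    return 0;
}
```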
Documentation
- CUDA -- see NVIDIA's CUDA documentation (a web search for "CUDA" will find it)
- OpenCL -- see the OpenCL section above; the documentation is included with the CUDA Toolkit
Further Info
User Codes
Please discuss and post any relevant information, problems, or best practices you have encountered when using or developing with CUDA and/or OpenCL.