GPU Devel Nodes
| GPU Development Cluster | |
|---|---|
| Installed | April 2011 |
| Operating System | Linux RHEL 5.4 |
| Interconnect | InfiniBand |
| RAM/Node | 48 GB |
| Cores/Node | 8 (with 2 GPUs) |
| Login/Devel Node | arc01 (from login.scinet) |
| Vendor Compilers | nvcc (gcc, icc) |
| Queue Submission | Torque |
There are 8 Intel nodes, each with two quad-core Xeon X5550 CPUs at 2.67 GHz and 48 GB of RAM per node. Each node also has two NVIDIA Tesla M2070 GPUs (Fermi, CUDA Compute Capability 2.0), each with 448 CUDA cores at 1.15 GHz and 6 GB of RAM.
Nodes
Login
First log in via ssh with your SciNet account to login.scinet.utoronto.ca, and from there you can proceed to arc01, which is the GPU development node.
Access to these machines is currently controlled. Please email support@scinet.utoronto.ca for access.
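For example, a typical login sequence looks like the following (USER stands in for your SciNet user name):
<source lang="bash">
# Log in to the SciNet gateway first
ssh USER@login.scinet.utoronto.ca
# then continue on to the ARC development node
ssh arc01
</source>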
Devel
As mentioned, arc01 is the head/development node for interactive use. This node is for compiling, short tests, and submitting batch jobs to the compute nodes. It is a shared resource, so treat it accordingly and use the queue and compute nodes for long or large computations.
Compute
To access the other 7 compute nodes with GPUs you need to use the queue, similar to the standard GPC compute nodes. Currently the nodes are scheduled by complete node (8 cores and 2 GPUs), with a limit of 2 nodes per job and a maximum walltime of 48 hours.
For an interactive job use
qsub -l nodes=1:ppn=8:gpus=2,walltime=48:00:00 -I
or for a batch job use
qsub script.sh
where script.sh is
<source lang="bash">
#!/bin/bash
# Torque submission script for SciNet ARC
#PBS -l nodes=2:ppn=8:gpus=2,walltime=1:00:00
#PBS -N GPUtest

cd $PBS_O_WORKDIR

# EXECUTION COMMAND; -np = nodes*ppn
mpirun -np 16 ./a.out
</source>
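Once a batch job is submitted, it can be monitored with the standard Torque commands, for example
qstat -u $USER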
Software
The same software installed on the GPC is available on ARC using the same modules framework. See the GPC software section for full details.
Programming Frameworks
Currently there are two programming frameworks available: NVIDIA's CUDA framework and OpenCL.
CUDA
The currently installed CUDA Toolkits are 3.0, 3.1, 3.2 (default), and 4.0RC2. To use 3.2, load the corresponding module:
module load cuda/3.2
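To see which toolkit versions are available through the modules framework, run
module avail cuda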
Note that to use the full 6 GB of memory per GPU, CUDA 3.2 or newer must be used.
The CUDA driver is installed locally; however, the CUDA Toolkits are installed in
/project/scinet/arc/cuda-$VERSION/
The environment variable $SCINET_CUDA_INSTALL is set when a cuda module is loaded and points to the install location. This is useful when setting up makefiles; if you use the NVIDIA_SDK build environment, modify the NVIDIA_SDK/C/common/common.mk file accordingly:
CUDA_INSTALL_PATH = $SCINET_CUDA_INSTALL
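A CUDA program can also be compiled directly against the loaded toolkit from the command line. The following is a minimal sketch, where kernel.cu is a placeholder for your own source file; -arch=sm_20 targets the Fermi (compute capability 2.0) cards in these nodes:
<source lang="bash">
# Minimal sketch: compile a CUDA source file against the loaded toolkit.
# kernel.cu is a placeholder for your own source file.
module load cuda/3.2
nvcc -arch=sm_20 -I$SCINET_CUDA_INSTALL/include -L$SCINET_CUDA_INSTALL/lib64 kernel.cu -o kernel
</source>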
OpenCL
As of version 3.0, OpenCL is included in the CUDA Toolkit, so loading the CUDA module is all that is required.
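As a rough sketch, an OpenCL host program can be built against the headers shipped with the toolkit (host.c is a placeholder for your own source; libOpenCL itself is provided by the NVIDIA driver, so no extra module is needed for it):
<source lang="bash">
# Sketch: compile an OpenCL host program using the toolkit headers.
# host.c is a placeholder; libOpenCL is provided by the NVIDIA driver.
module load cuda/3.2
gcc host.c -I$SCINET_CUDA_INSTALL/include -lOpenCL -o host
</source>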
Compilers
- nvcc -- NVIDIA's CUDA compiler (uses gcc or icc as the underlying host compiler)
MPI
The GPC MPI packages can be used on this system. See the GPC section on MPI for more details.
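As a sketch of a typical MPI+CUDA workflow (the module names below are assumptions; check module avail for the exact names on this system):
<source lang="bash">
# Sketch: load an MPI stack alongside CUDA, then run as in the script above.
# Module names are assumptions; check `module avail` for the exact ones.
module load intel openmpi cuda/3.2
mpirun -np 16 ./a.out
</source>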
Driver Version
The current NVIDIA driver version installed is 270.40.
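You can confirm the driver version and the status of the GPUs on a node with
nvidia-smi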
Documentation
- CUDA -- NVIDIA's CUDA documentation (Programming Guide and Toolkit reference)
- OpenCL -- NVIDIA's OpenCL documentation and the Khronos OpenCL specification
Further Info
User Codes
Please discuss and record here any relevant information, problems, or best practices you have encountered when using or developing for CUDA and/or OpenCL.