Gravity
| Gravity | |
|---|---|
| Installed | December 2012 |
| Operating System | Linux CentOS 6.4 |
| Number of Nodes | 49 (588 CPU cores, 50,176 GPU cores) |
| Interconnect | QDR InfiniBand |
| RAM/Node | 32 GB |
| Cores/Node | 12, with 2x M2090 GPUs |
| Login/Devel Node | gravity01 (from login.scinet) |
| Vendor Compilers | nvcc, pgcc, icc, gcc |
| Queue Submission | Torque |
The Gravity cluster consists of 49 x86_64 nodes, each with two hex-core Intel Xeon (Sandy Bridge) E5-2620 2.0 GHz CPUs and 32 GB of RAM per node. Each node also has two NVIDIA Tesla M2090 GPUs with CUDA Compute Capability 2.0 (Fermi), each with 512 CUDA cores and 6 GB of RAM. The nodes are interconnected with 3:1 blocking QDR InfiniBand for MPI communications and disk I/O to the SciNet GPFS filesystems. In total, the cluster contains 588 x86_64 cores with 1,568 GB of system RAM and 98 GPUs with 588 GB of GPU RAM.
NB - Gravity is a user-contributed system acquired through a CFI LOF awarded to a specific PI. Policies regarding use by other groups are under development and subject to change at any time.
Note that SciNet has a mailing list for people interested in GPGPU computing. To receive information on courses, workshops, and other GPGPU-related events, sign up at https://support.scinet.utoronto.ca/mailman/listinfo/scinet-gpgpu.
Nodes
Login
First log in via ssh with your SciNet account at login.scinet.utoronto.ca, and from there you can proceed to gravity01, which is the GPU development node.
Access to these machines is currently controlled. Please email support@scinet.utoronto.ca for access.
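Once you have access, the two-hop login looks schematically like this (USER stands in for your SciNet user name):
<source lang="bash">
# From your own machine: log in to the SciNet gateway first
ssh USER@login.scinet.utoronto.ca

# Then hop to the Gravity development node
ssh gravity01
</source>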
Devel
As mentioned, gravity01 is the head/development node for interactive use. This node is for compiling, short testing, and submitting batch jobs to the compute nodes. It is a shared resource, so treat it accordingly and use the queue and compute nodes for long or large computations.
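For example, a CUDA source file could be compiled on gravity01 roughly as follows (a sketch only: saxpy.cu is a hypothetical file name, and the exact module name/version is an assumption; -arch=sm_20 matches the M2090's compute capability 2.0):
<source lang="bash">
# Load a CUDA module first (check `module avail cuda` for the versions actually installed)
module load cuda

# Compile for the M2090 (Fermi, compute capability 2.0)
nvcc -arch=sm_20 -O2 -o saxpy saxpy.cu
</source>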
ARC Experimental (ARCX) Xeon Phi / Tesla K20
A separate devel node, arcX, with a single Intel Xeon Phi and an NVIDIA Tesla K20, is also available for testing these newer technologies. For full details see the Xeon Phi / Tesla K20 wiki page.
Compute
To access the other 48 compute nodes with GPUs you need to use the queue, similar to the standard GPC compute nodes. Currently the nodes are scheduled by complete node (12 cores and 2 GPUs per node), with a maximum walltime of 12 hours.
For an interactive job use
qsub -l nodes=1:ppn=12:gpus=2,walltime=12:00:00 -q gravity -I
or for a batch job use
qsub script.sh
where script.sh is <source lang="bash">
#!/bin/bash
# Torque submission script for Gravity
#PBS -l nodes=2:ppn=12:gpus=2,walltime=1:00:00
#PBS -N GPUtest
#PBS -q gravity
cd $PBS_O_WORKDIR

# EXECUTION COMMAND; -np = nodes*ppn
mpirun -np 24 ./a.out
</source>
To check running jobs on the GPU nodes only, use
showq -w class=gravity
Important note:
A bug in the Torque scheduler currently sets the environment variable CUDA_VISIBLE_DEVICES to an incorrect value. Loading any one of the cuda modules will correct this, so be sure to do this in your job script or in your interactive jobs.
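A minimal sketch of the workaround, assuming a module named cuda is installed (run module avail cuda to see the versions actually available):
<source lang="bash">
# Inside a job script or an interactive session on a Gravity compute node:
module load cuda              # loading any cuda module resets CUDA_VISIBLE_DEVICES correctly

# Optional sanity check of the corrected value
echo $CUDA_VISIBLE_DEVICES
</source>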
Software
The same software installed on the GPC is available on Gravity using the same modules framework. See here for full details.
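As a quick sketch of the modules workflow (the module names shown are examples only; run module avail to see what is actually installed):
<source lang="bash">
module avail                  # list all available software modules
module load gcc cuda          # load a compiler and a CUDA toolkit (example module names)
module list                   # show what is currently loaded
module purge                  # unload everything and start clean
</source>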
Programming Frameworks
See the SciNet ARC page for details on the GPU-specific software environment.