Sandy

[IBM iDataPlex dx360 M4 node image]
Installed: February 2013
Operating System: Linux CentOS 6.2
Number of Nodes: 76
Interconnect: QDR InfiniBand
RAM/Node: 64 GB
Cores/Node: 16
Login/Devel Node: gravity01 (from login.scinet)
Vendor Compilers: icc, gcc
Queue Submission: Torque

The Sandy Bridge (Sandy) cluster consists of 76 x86_64 nodes, each with two eight-core Intel Xeon E5-2650 (Sandy Bridge) CPUs running at 2.0 GHz and 64 GB of RAM. The nodes are interconnected with 2.6:1 blocking QDR InfiniBand for MPI communications and for disk I/O to the SciNet GPFS filesystems. In total the cluster contains 1,216 x86_64 cores and 4,864 GB of system RAM.

NB - Sandy is a user-contributed system, acquired through a CFI LOF award to a specific PI. Policies regarding its use by other groups are under development and subject to change at any time.

Nodes

Login

First log in via ssh with your SciNet account at login.scinet.utoronto.ca, and from there you can proceed to gravity01, which is the GPU development node.
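For example (a minimal sketch; replace USERNAME with your own SciNet user name):

<source lang="bash">
# log in to the SciNet gateway first, then hop to the devel node
ssh USERNAME@login.scinet.utoronto.ca
ssh gravity01
</source>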

Access to these machines is currently controlled. Please email support@scinet.utoronto.ca for access.

Devel

As mentioned, gravity01 is the head/development node for interactive use. This node is for compiling, short testing, and submitting batch jobs to the compute nodes. It is a shared resource, so treat it accordingly and use the queue and the compute nodes for long or large computations.
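A typical development-node session might look like the following (a minimal sketch; the module names and the source file hello.c are illustrative assumptions, not part of this page):

<source lang="bash">
# load a compiler and MPI stack through the modules framework (module names are assumptions)
module load intel openmpi
# compile a small MPI test program (hello.c is a hypothetical example file)
mpicc -O2 -o hello hello.c
</source>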

Compute

To access the other 48 compute nodes with GPUs, you need to use the queue, similar to the standard GPC compute nodes. Currently the nodes are scheduled by complete node (12 cores and 2 GPUs) with a maximum walltime of 12 hours.

For an interactive job use

qsub -l nodes=1:ppn=12:gpus=2,walltime=12:00:00 -q gravity -I

or for a batch job use

qsub script.sh 

where script.sh is:

<source lang="bash">
#!/bin/bash
# Torque submission script for Gravity
#PBS -l nodes=2:ppn=12:gpus=2,walltime=1:00:00
#PBS -N GPUtest
#PBS -q gravity

cd $PBS_O_WORKDIR

# EXECUTION COMMAND; -np = nodes*ppn
mpirun -np 24 ./a.out
</source>

To check running jobs on the GPU nodes only, use

showq -w class=gravity
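
To list just your own jobs, the standard Torque query should also work, for example

qstat -u $USER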

Software

The same software installed on the GPC is available on Gravity using the same modules framework. See here for full details.
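For example, available software is listed and loaded with the usual module commands (a minimal sketch; the gcc module shown is only an illustrative assumption):

<source lang="bash">
# list the software available through the modules framework
module avail
# load one package into the current shell session (module name is an assumption)
module load gcc
</source>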

Programming Frameworks

See the SciNet ARC page for details on the GPU-specific software environment.
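
As an illustration only (a minimal sketch, assuming a CUDA toolkit module is provided through the same modules framework and that saxpy.cu is your own source file):

<source lang="bash">
# load a CUDA toolkit module (module name/version is an assumption)
module load cuda
# compile a CUDA source file with nvcc
nvcc -O2 -o saxpy saxpy.cu
</source>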