Sandy
| Sandy | |
| --- | --- |
| Installed | February 2013 |
| Operating System | Linux CentOS 6.2 |
| Number of Nodes | 76 |
| Interconnect | QDR InfiniBand |
| RAM/Node | 64 GB |
| Cores/Node | 16 |
| Login/Devel Node | gpc0[1-4] (from login.scinet) |
| Vendor Compilers | icc, gcc |
| Queue Submission | Torque |
The Sandybridge (Sandy) cluster consists of 76 x86_64 nodes, each with two eight-core Intel Xeon (Sandybridge) E5-2650 2.0 GHz CPUs and 64 GB of RAM per node. The nodes are interconnected with 2.6:1 blocking QDR InfiniBand for MPI communications and disk I/O to the SciNet GPFS filesystems. In total this cluster contains 1,216 x86_64 cores and 4,864 GB of RAM.
NB - Sandy is a user-contributed system acquired through a CFI LOF to a specific PI. Policies regarding use by other groups are under development and subject to change at any time.
Nodes
Login
First log in via ssh with your SciNet account at login.scinet.utoronto.ca; from there you can proceed to the normal GPC devel nodes gpc0[1-4].
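For example, a typical login sequence looks like this (replace USER with your own SciNet account name):
<source lang="bash">
# Log in to the SciNet gateway
ssh USER@login.scinet.utoronto.ca

# From the gateway, continue to one of the GPC devel nodes, e.g. gpc01
ssh gpc01
</source>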
Compute
To access the Sandybridge compute nodes you need to go through the queueing system, as with the standard GPC compute nodes. Currently the nodes are scheduled by complete node (16 cores per node), with a maximum walltime of 48 hours.
For an interactive job use
qsub -l nodes=1:ppn=16,walltime=12:00:00 -q sandy -I
or for a batch job use
qsub script.sh
where script.sh is:
<source lang="bash">
#!/bin/bash
# Torque submission script for Sandy
#PBS -l nodes=2:ppn=16,walltime=1:00:00
#PBS -N sandytest
#PBS -q sandy
cd $PBS_O_WORKDIR
# EXECUTION COMMAND; -np = nodes*ppn
mpirun -np 32 ./a.out
</source>
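As a sketch only (adjust the walltime and executable name to your own job), the single-node equivalent would request nodes=1:ppn=16 and launch 16 MPI processes:
<source lang="bash">
#!/bin/bash
#PBS -l nodes=1:ppn=16,walltime=1:00:00
#PBS -N sandytest
#PBS -q sandy
cd $PBS_O_WORKDIR
# -np = nodes*ppn = 1*16
mpirun -np 16 ./a.out
</source>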
To check running jobs on the Sandy nodes only, use
showq -w class=sandy
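To list only your own jobs instead (not specific to Sandy), the standard Torque command can be used:
qstat -u $USER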
Software
The same software installed on the GPC is available on Sandy through the same modules framework; see the GPC documentation for full details.
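For example, to see and load software before compiling or running (the module names below are illustrative only; check module avail for what is actually installed):
<source lang="bash">
# Show available modules, then load a compiler and MPI pair (example names only)
module avail
module load intel openmpi
module list
</source>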