Sandy
[Image: IBM iDataPlex dx360 M4]
Installed:           February 2013
Operating System:    Linux CentOS 6.2
Number of Nodes:     76
Interconnect:        QDR Infiniband
RAM/Node:            64 GB
Cores/Node:          16
Login/Devel Node:    gpc0[1-4] (from login.scinet)
Vendor Compilers:    icc, gcc
Queue Submission:    Torque

The Sandybridge (Sandy) cluster consists of 76 x86_64 nodes, each with two 8-core Intel Xeon (Sandybridge) E5-2650 2.0GHz CPUs and 64GB of RAM per node. The nodes are interconnected with 2.6:1 blocking QDR Infiniband for MPI communications and disk I/O to the SciNet GPFS filesystems. In total this cluster contains 1,216 x86_64 cores and 4,864 GB of RAM.

NB - Sandy is a user-contributed system acquired through a CFI LOF to a specific PI. Policies regarding use by other groups are under development and subject to change at any time.

Nodes

Login

First log in via ssh with your SciNet account to login.scinet.utoronto.ca; from there you can proceed to the normal GPC devel nodes gpc0[1-4].
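For example, a typical login sequence might look like this (replace "username" with your own SciNet account; gpc04 is just one of the four devel nodes):

$ ssh username@login.scinet.utoronto.ca
$ ssh gpc04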

Compilers

The Sandy nodes are fully compatible with all software/modules built for the standard GPC nodes (see GPC Quickstart Compilers); however, as they are a newer architecture, they also have additional CPU instructions that your program may benefit from. To ensure that you are using these Sandy-specific optimizations, use the following Intel compiler flags with the latest Intel compiler when you compile specifically for the Sandy nodes.

$ module load intel/14.0.1

Optimize your code for the Sandy nodes using the following compiler flags (a sample compile command is shown after the notes below):

   -O3 -march=core-avx-i
  • More questions about compiling? See the FAQ.
  • NOTE: Code compiled using these options will not be backwards compatible with the regular GPC nodes.
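As a minimal sketch, compiling a single C source file with these flags on a devel node could look as follows (the file names mycode.c and mycode are placeholders):

$ module load intel/14.0.1
$ icc -O3 -march=core-avx-i -o mycode mycode.c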

Compute

To access the Sandybridge compute nodes you need to use the queue, similar to the standard GPC compute nodes. Currently the nodes are scheduled by complete node (16 cores per job) with a maximum walltime of 48 hours.

For an interactive job use

qsub -l nodes=1:ppn=16,walltime=12:00:00 -q sandy -I

or for a batch job use

qsub script.sh 

where script.sh is:

#!/bin/bash
# Torque submission script for Sandy
#PBS -l nodes=2:ppn=16,walltime=1:00:00
#PBS -N sandytest
#PBS -q sandy

cd $PBS_O_WORKDIR

# EXECUTION COMMAND; -np = nodes*ppn
mpirun -np 32 ./a.out
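Note that the -np value passed to mpirun should equal nodes*ppn from the #PBS resource request; for the 2-node, 16-cores-per-node request above that is 2*16 = 32. If you requested nodes=4:ppn=16 instead, the launch line would become

mpirun -np 64 ./a.out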

To check running jobs on the Sandy nodes only, use

showq -w class=sandy
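To narrow the listing to your own jobs, you can also filter by user with showq's -u option, for example

showq -u $USER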

Software

The same software installed on the GPC is available on Sandy using the same modules framework. See here for full details.
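As a quick illustration, the module commands work the same way on the Sandy devel nodes as on the GPC; for example, to list the available modules and load the Intel compiler module mentioned above:

$ module avail
$ module load intel/14.0.1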