Revision as of 14:27, 25 February 2013
| Sandy | |
|---|---|
| Installed | February 2013 |
| Operating System | Linux CentOS 6.2 |
| Number of Nodes | 76 |
| Interconnect | QDR Infiniband |
| RAM/Node | 64 GB |
| Cores/Node | 16 |
| Login/Devel Node | gpc0[1-4] (from login.scinet) |
| Vendor Compilers | icc, gcc |
| Queue Submission | Torque |
The Sandybridge (Sandy) cluster consists of 76 x86_64 nodes, each with two octa-core Intel Xeon (Sandybridge) E5-2650 2.0 GHz CPUs and 64 GB of RAM per node. The nodes are interconnected with 2.6:1 blocking QDR Infiniband for MPI communications and disk I/O to the SciNet GPFS filesystems. In total this cluster contains 1,216 x86_64 cores and 4,864 GB of system RAM.
NB - Sandy is a user-contributed system acquired through a CFI LOF to a specific PI. Policies regarding use by other groups are under development and subject to change at any time.
Nodes
Login
First log in via ssh with your SciNet account at login.scinet.utoronto.ca, and from there you can proceed to the normal GPC devel nodes gpc0[1-4].
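The two-hop login above can be sketched as follows (the username `user` is a placeholder; gpc01 through gpc04 are the devel nodes):

```shell
# First hop, from your own machine to the SciNet login nodes
ssh user@login.scinet.utoronto.ca

# Second hop, from the login node to one of the GPC devel nodes
ssh gpc01
```

Any of gpc01, gpc02, gpc03, or gpc04 will do; pick a lightly loaded one.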
Compute
To access the Sandybridge compute nodes you need to use the queue, similar to the standard GPC compute nodes. Currently the nodes are scheduled by complete node (16 cores per node), with a maximum walltime of 48 hours.
For an interactive job use
qsub -l nodes=1:ppn=16,walltime=12:00:00 -q sandy -I
or for a batch job use
qsub script.sh
where script.sh is
<source lang="bash">
#!/bin/bash
# Torque submission script for Sandy
#PBS -l nodes=2:ppn=16,walltime=1:00:00
#PBS -N sandytest
#PBS -q sandy
cd $PBS_O_WORKDIR
# EXECUTION COMMAND; -np = nodes*ppn
mpirun -np 32 ./a.out
</source>
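The `-np` value passed to mpirun is simply the product of the resource request, nodes × ppn. A quick shell sanity check, using values that mirror the example request (`nodes=2:ppn=16`):

```shell
#!/bin/sh
# nodes and ppn mirror the "#PBS -l nodes=2:ppn=16" request (illustrative values)
nodes=2
ppn=16
# mpirun's -np should equal nodes * ppn
echo $((nodes * ppn))
```

This prints 32, matching the `mpirun -np 32` line in the script; if you change the `#PBS -l` request, recompute `-np` accordingly.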
To check running jobs on the sandy nodes only use
showq -w class=sandy
Software
The same software installed on the GPC is available on Sandy using the same modules framework. See here for full details.
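Under the modules framework, software becomes available per-session via the `module` command. A typical session might look like this (the `intel` module name is illustrative; run `module avail` on the cluster to see what is actually installed):

```shell
module avail        # list all software modules available on the system
module load intel   # load a module, e.g. the Intel compilers (name illustrative)
module list         # show the modules currently loaded in this session
```

Module load commands can also be placed in your submission script before the execution command, so batch jobs see the same environment as your devel-node session.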