GPC Quickstart
| General Purpose Cluster (GPC) | |
|---|---|
| Installed | June 2009 |
| Operating System | Linux |
| Interconnect | 1/4 on InfiniBand, rest on GigE |
| RAM/Node | 16 GB |
| Cores/Node | 8 |
| Login/Devel Node | gpc01..gpc04 (from login.scinet) |
| Vendor Compilers | icc (C), ifort (Fortran), icpc (C++) |
| Queue Submission | Moab/Torque |
The General Purpose Cluster is an extremely large cluster (ranked 16th in the world at its inception, and fastest in Canada) and is where most simulations are to be done at SciNet. It is an IBM iDataPlex cluster based on Intel's Nehalem architecture (one of the first in the world to make use of the new chips). The GPC consists of 3,780 nodes with a total of 30,240 2.5GHz cores, with 16GB RAM per node (2GB per core). Approximately one quarter of the cluster is interconnected with non-blocking 4x-DDR InfiniBand while the rest of the nodes are connected with gigabit ethernet.
Login
First log in via ssh with your SciNet account at login.scinet.utoronto.ca; from there you can proceed to the development nodes to compile and test your code.
Compile/Devel Nodes
From a SciNet login node you can ssh to gpc01..gpc04. These nodes have the same hardware configuration as most of the compute nodes -- 8 Nehalem processing cores with 16GB of RAM and gigabit ethernet. You can compile and test your codes on these nodes. To test interactively on more than 8 processors, or to test your code over an InfiniBand connection, submit an interactive job request (see Submitting an Interactive Job below).
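For example, assuming your SciNet username is USER (substitute your own):
<source lang="bash">
ssh USER@login.scinet.utoronto.ca   # log in to the SciNet login nodes
ssh gpc01                           # then hop to one of the GPC development nodes
</source>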
Your home directory is /home/USER; you have 10GB there, which is backed up. This directory cannot be written to by the compute nodes! To run jobs, therefore, you will use the /scratch/USER directory. There is a large amount of disk space there, but it is not backed up. It thus makes sense to keep your code in /home, compile it there, and run it from the /scratch directory.
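A minimal sketch of that workflow, with hypothetical directory and program names:
<source lang="bash">
cd /home/USER/myproject            # keep and build your code in /home (backed up)
make                               # or your usual build command
mkdir -p /scratch/USER/run01       # run from /scratch, which the compute nodes can write to
cp myprogram /scratch/USER/run01/
cd /scratch/USER/run01
</source>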
Modules and Environment Variables
To use most packages on the SciNet machines - including any of the compilers - you will have to use the module command. The command module load some-package will set your environment variables (PATH, LD_LIBRARY_PATH, etc.) to include the default version of that package; module load some-package/specific-version will load a specific version. This makes it easy for different users to use different versions of compilers, MPI implementations, libraries, etc.
Note that to use even the gcc compilers you will have to do
module load gcc
but in fact you should probably use the Intel compilers installed on this system, as they usually produce faster code (and sometimes much faster).
A list of the installed software is available in Software & Libraries and can be seen on the system by typing
module avail
To load a module (for example, the default version of the intel compilers)
module load intel
To unload a module
module unload intel
To unload all modules
module purge
These commands should go in your .bashrc file and/or in your submission scripts to make sure you are using the correct packages.
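For example, the following lines in ~/.bashrc would load the default Intel compilers and OpenMPI (the combination recommended below) on every login:
<source lang="bash">
# in ~/.bashrc
module load intel       # default Intel compilers
module load openmpi     # OpenMPI library (see the MPI section below)
</source>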
Compilers
The Intel compilers are icc/icpc/ifort for C/C++/Fortran, and are available with the default module "intel". The Intel compilers are recommended over the GNU compilers. Documentation about icpc is available at http://software.intel.com/en-us/articles/intel-software-technical-documentation/. The Intel compilers accept many of the options that the GNU compilers accept, but tend to produce faster programs on our system. If, for some reason, you really need the GNU compilers, the latest version of the GNU compiler collection (currently 4.4.0) is available by loading the "gcc" module, with gcc/g++/gfortran for C/C++/Fortran. Note that f77/g77 is not supported.
To ensure that the intel compilers are in your PATH and their libraries are in your LD_LIBRARY_PATH, use the command
module load intel
This should likely go in your .bashrc file so that it will automatically be loaded.
Optimize your code for the GPC machine using at least the following compiler flags:
-O3 -xHost
(or -O3 -march=native for the GNU compilers).
- If your program uses OpenMP, add -openmp (-fopenmp for the GNU compilers).
- If you get the warning 'feupdateenv is not implemented', add -limf to the link line.
- If you need to link in the MKL libraries, you are well advised to use the Intel(R) Math Kernel Library Link Line Advisor (http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/) for help in devising the list of libraries to link with your code.
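For example, a typical compile line with these flags might look as follows (the source and program names are hypothetical):
<source lang="bash">
module load intel
icc   -O3 -xHost         -o mycode     mycode.c        # plain C code
ifort -O3 -xHost -openmp -o mycode_omp mycode_omp.f90  # Fortran code using OpenMP
</source>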
MPI
SciNet currently provides two MPI libraries for the GPC: OpenMPI and IntelMPI. We recommend OpenMPI as the default, as it quite reliably demonstrates good performance on both the InfiniBand and ethernet networks. For full details and options see the complete MPI section.
The MPI libraries are compiled with both the GNU and the Intel compiler suites. To use (for instance) the Intel-compiled OpenMPI libraries, which we recommend as the default (and use for most of our examples here), put
module load openmpi intel
in your .bashrc. Other combinations behave similarly.
The MPI libraries provide mpicc/mpicxx/mpif90/mpif77 as wrappers around the appropriate compilers; these ensure that the appropriate include and library directories are used in the compilation and linking steps.
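For instance, with the intel and openmpi modules loaded, an MPI code could be compiled roughly as follows (the source file names are hypothetical):
<source lang="bash">
module load intel openmpi
mpicc  -O3 -xHost -o my_mpi_code   my_mpi_code.c     # C
mpif90 -O3 -xHost -o my_mpi_code_f my_mpi_code.f90   # Fortran 90
</source>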
We currently recommend the Intel + OpenMPI combination. However, if you require the GNU compilers as well as MPI, you would want to find the most recent openmpi module available with `gcc' in the version name. This will enable development and runtime with gcc/g++/gfortran and OpenMPI. You can make this your default by putting the module load line in your ~/.bashrc file.
For mixed OpenMP/MPI code using Intel MPI, add the compilation flag -mt_mpi for full thread-safety.
Submitting A Batch Job
The SciNet machines are shared systems, and jobs that run on them must be submitted to a queue; the scheduler then orders the jobs to make the best use of the machine and launches them when resources become available. This intervention by the scheduler means that jobs are not necessarily run in first-in, first-out order.
The maximum wallclock time for a job in the queue is 48 hours; computations that will take longer than this must be broken into 48-hour chunks and run as several jobs. The usual way to do this is with checkpoints, writing out the complete state of the computation every so often in such a way that a job can be restarted from this state information and continue on from where it left off. Generating checkpoints is a good idea anyway, as in the unlikely event of a hardware failure during your run, it allows you to restart without having lost much work.
There are limits to how many jobs you can submit. If your group has a default account, you may queue up to 32 nodes at a time, for up to 48 hours per job, on the GPC cluster. This is a total (nodes x time) limit, so you could, for example, request 64 nodes for 24 hours instead. Jobs of users with an LRAC or NRAC allocation run at a higher priority than others while their resources last. Because the allocation is per group, your jobs may not run if your colleagues have already exhausted your group's limits.
Note that scheduling big jobs greatly affects the queue and other users, so you must talk to us before running massively parallel jobs (> 2048 cores). We will help make sure that your jobs start and run efficiently.
If your job will run in less than 48 hours, specify that in your script; your job will start sooner, since it is easier for the scheduler to fit in a short job than a long one. On the downside, the job will be killed automatically by the queue manager at the end of the specified wallclock time, so if you guess wrong you might lose some work. The standard procedure is therefore to estimate how long your job will take and add 10% or so.
You interact with the queuing system through the queue/resource manager, Moab and Torque. To see all the jobs in the queue use
showq
To submit your own job, you must write a script which describes the job and how it is to be run (a sample script follows) and submit it to the queue, using the command
qsub SCRIPT-FILE-NAME
where you will replace SCRIPT-FILE-NAME with the file containing the submission script. This will return a job ID, for example 31415, which is used to identify the job. Information about a queued job can be found using
checkjob JOB-ID
and jobs can be canceled with the command
canceljob JOB-ID
Again, these commands have many options, which can be read about on their man pages.
Much more information on the queueing system is available on our queue page.
Batch Submission Script: MPI
A sample submission script for an MPI job using ethernet is shown below, with the #PBS directives at the top and the rest being what will be executed on the compute node.
<source lang="bash">
#!/bin/bash
# MOAB/Torque submission script for SciNet GPC (ethernet)
#
#PBS -l nodes=2:ppn=8,walltime=1:00:00
#PBS -N test

# DIRECTORY TO RUN - $PBS_O_WORKDIR is directory job was submitted from
cd $PBS_O_WORKDIR

# EXECUTION COMMAND; -np = nodes*ppn
mpirun -np 16 -hostfile $PBS_NODEFILE ./a.out
</source>
The lines that begin #PBS are commands that are parsed and interpreted by qsub at submission time, and control administrative things about your job. In this example, the script above requests two nodes, using 8 processors per node, for a wallclock time of one hour. (The resources required by the job are listed on the #PBS -l line.) Other options can be given in other #PBS lines, such as #PBS -N, which sets the name of the job.
The rest of the script is run as a bash script at run time. A bash shell on the first of the two requested nodes executes these commands as a normal bash script, just as if you had run it as a shell script from the terminal. The only difference is that PBS sets certain environment variables that you can use in the script: $PBS_O_WORKDIR is set to the directory from which the job was submitted - e.g., /scratch/USER/SOMEDIRECTORY - and $PBS_NODEFILE is the name of a file listing all the nodes on which the programs should execute. Using these environment variables, the script then uses the mpirun command to launch the job. It is assumed here that the user has a line like
module load openmpi intel
in their .bashrc.
Note: the different versions of MPI require different commands to launch a run, and thus different scripts. The above script is specific to the openmpi module. For the intelmpi module, the last line of the script should instead read
mpirun -r ssh -np 16 -env I_MPI_DEVICE ssm ./a.out
Submitting Collections of Serial Jobs
A SciNet-approved method for running collections of serial jobs is outlined in the FAQ.
Batch Submission Script: OpenMP
For OpenMP jobs, the procedure is similar to that for MPI jobs:
<source lang="bash">
#!/bin/bash
# MOAB/Torque submission script for SciNet GPC (OpenMP)
#
#PBS -l nodes=1:ppn=8,walltime=1:00:00
#PBS -N test

# DIRECTORY TO RUN - $PBS_O_WORKDIR is directory job was submitted from
cd $PBS_O_WORKDIR

export OMP_NUM_THREADS=8
./a.out
</source>
Note that in some circumstances it can be more efficient to run (say) two jobs each running on four threads than one job running on eight threads. In that case you can use the same `ampersand-and-wait' technique outlined for serial jobs in the FAQ for less-than-eight-core OpenMP jobs.
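A minimal sketch of this approach, running two four-thread jobs side by side on one 8-core node (directory names are hypothetical):
<source lang="bash">
export OMP_NUM_THREADS=4
(cd /scratch/USER/run1 && ./a.out) &
(cd /scratch/USER/run2 && ./a.out) &
wait   # wait for both background runs to finish before the script (and the job) ends
</source>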
Hybrid MPI/OpenMP jobs
Using Intel MPI
Here is how to run hybrid codes using intelmpi:
http://software.intel.com/en-us/articles/hybrid-applications-intelmpi-openmp/
Make sure you compile with the -mt_mpi compiler option so that the thread-safe libraries are used, and set the environment variable I_MPI_PIN_DOMAIN:
$ export I_MPI_PIN_DOMAIN=omp
This sets the process-pinning domain size equal to OMP_NUM_THREADS (which you should set to the desired number of threads per MPI process), so that each MPI process can create OMP_NUM_THREADS child threads running within its domain. If OMP_NUM_THREADS is not set, each node is treated as a separate domain, allowing as many threads per MPI process as there are cores.
In addition, when invoking mpirun, you should add the argument "-ppn X", where X is the number of MPI processes per node. For example:
mpirun -r ssh -ppn 2 -np 8 ....
would start 2 MPI processes per node for a total of 8 processes, so mpirun will run the MPI processes on 4 nodes (OMP_NUM_THREADS is then probably best set to 4). Your job script should still request these 4 nodes with the line
#PBS -l nodes=4:ppn=8,walltime=....
(ppn=8 is not a mistake here; the ppn parameter has a different meaning for PBS and for mpirun)
The -ppn parameter to mpirun is very important! Without it, all eight MPI processes would get bunched onto the first node in this example, leaving 3 nodes unused.
NOTE: In order to pin OpenMP threads inside the domain, use the corresponding OpenMP feature by setting the KMP_AFFINITY environment variable; see the Compiler User and Reference Guide.
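For example (the right setting depends on your code; "compact" is shown here only as one common illustrative choice):
export KMP_AFFINITY=compact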
The IntelMPI manual is referenced on the front page of our wiki:
http://software.intel.com/sites/products/documentation/hpc/mpi/linux/reference_manual.pdf
For the above example of a total of 8 processes on 4 nodes, you could use the following script:
<source lang="bash">
#!/bin/bash
# MOAB/Torque submission script for SciNet GPC (hybrid job)
#
#PBS -l nodes=4:ppn=8,walltime=1:00:00
#PBS -N test

# DIRECTORY TO RUN - $PBS_O_WORKDIR is directory job was submitted from
cd $PBS_O_WORKDIR

# SET THE NUMBER OF THREADS PER PROCESS:
export OMP_NUM_THREADS=4

# PIN THE MPI DOMAINS ACCORDING TO OMP
export I_MPI_PIN_DOMAIN=omp

# EXECUTION COMMAND; -np = nodes x (MPI processes per node)
mpirun -r ssh -ppn 2 -np 8 ./a.out
</source>
Using Open MPI
For mixed MPI/OpenMP jobs using OpenMPI, which is the default for many users, the procedure is similar, but details differ.
- Request the number of nodes in the PBS script.
- Set OMP_NUM_THREADS to the number of threads per MPI process.
- In addition to the -np parameter for mpirun, add the argument --bynode, so that the mpi processes are not bunched up.
So for example, to start a total of 8 processes on 4 nodes, you could use the following script
<source lang="bash">
#!/bin/bash
# MOAB/Torque submission script for SciNet GPC (hybrid job)
#
#PBS -l nodes=4:ppn=8,walltime=1:00:00
#PBS -N test

# DIRECTORY TO RUN - $PBS_O_WORKDIR is directory job was submitted from
cd $PBS_O_WORKDIR

# SET THE NUMBER OF THREADS PER PROCESS:
export OMP_NUM_THREADS=4

# EXECUTION COMMAND; -np = nodes x (MPI processes per node); --bynode forces a round robin over the nodes.
mpirun -np 8 --bynode -hostfile $PBS_NODEFILE ./a.out
</source>
Submitting an Interactive Job
It is sometimes convenient to run a job interactively; this can be very handy for debugging purposes. In this case, you type a qsub command which submits an interactive job to the queue; when the scheduler selects this job to run, then it starts a shell running on the first node of the job, which connects to your terminal. You can then type any series of commands (for instance, the same commands listed as in the batch submission script above) to run a job interactively.
For example, to start the same sort of job as in the batch submission script above, but interactively, one would type
$ qsub -I -l nodes=2:ppn=8,walltime=1:00:00
This is exactly the #PBS -l line from the batch script above (which requests all 8 processors on each of 2 nodes for one hour), prepended with -I for `interactive'. When this job begins, your terminal will show you as logged in to one of the compute nodes, and you can type any shell command, run mpirun, etc. When you exit the shell, the job ends. Interactive jobs can be used with any of the GPC queues; however, there is a short, high-turnover queue called debug which can be especially useful when the system is busy.
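For example (assuming the debug queue is selected with the -q flag, in the same way as the largemem queue below):
$ qsub -I -q debug -l nodes=2:ppn=8,walltime=1:00:00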
Ethernet vs. Infiniband
About 1/4 of the GPC (862 nodes, or 6,896 cores) is connected with a high-bandwidth, low-latency fabric called InfiniBand. Jobs which require tight coupling to scale well benefit greatly from this interconnect; other types of jobs, with relatively modest communication needs, do not require it and run fine on gigabit ethernet.
Jobs which require the InfiniBand for good performance can request the nodes that have the `ib' feature in the #PBS -l line,
#PBS -l nodes=2:ib:ppn=8,walltime=1:00:00
Because there are a limited number of these nodes, your job will likely start sooner if you do not request them (e.g. if you use the scripts as shown above), as this increases the number of nodes available to run your job. In fact, the InfiniBand nodes are to be used only for jobs that are known to scale well and to benefit from this type of interconnect. The MPI libraries provided by SciNet automatically use either the InfiniBand or the ethernet interconnect correctly, depending on which nodes your job runs on.
Memory Configuration
16G
There are 3,756 nodes with 16GB of memory; this is the primary configuration in the GPC, and these nodes are used by default.
18G
There are 24 InfiniBand nodes with 18GB of memory. These nodes have a fully populated memory configuration that maximizes memory bandwidth. To request these nodes, use:
qsub -l nodes=2:ib:m18g:ppn=8,walltime=1:00:00
32G
There are 84 InfiniBand nodes with 32GB of memory. To request these nodes, use:
qsub -l nodes=2:ib:m32g:ppn=8,walltime=1:00:00
128G
There are two stand-alone large-memory (128GB) nodes, gpc-lrgmem01 and gpc-lrgmem02, which are primarily intended for data analysis of runs. They have 16 cores each and are Intel machines running Linux, but they are not of the same architecture (Nehalem) as the GPC compute nodes, so codes may have to be compiled separately for these machines. They can be accessed using the dedicated largemem queue:
qsub -l nodes=2:ppn=8,walltime=1:00:00 -q largemem -I
Ram Disk
On the GPC nodes there is a `ram disk' available: up to half of the memory on the node may be used as a temporary file system. This is particularly useful in the early stages of migrating desktop-computing codes to a High Performance Computing platform such as the GPC. It is much faster than real disk and does not generate network traffic; however, each node sees only its own ramdisk and cannot see the files on other nodes' ramdisks. This is a very easy way to cache writes (by writing to the fast ram disk instead of slow `real' disk); one then periodically copies the files to /scratch or /project so that they are available after the job has completed.
To use the ramdisk, create, write, and read files in /dev/shm/ just as you would in (e.g.) /scratch/USER/. Only the amount of RAM needed to store the files is taken up by the temporary file system; thus, if you have 8 serial jobs each requiring 1GB of RAM, and 1GB is taken up by various OS services, you would still have approximately 7GB available to use as ramdisk on a 16GB node. However, if you were to write 8GB of data to the ram disk, this would exceed the available memory and your job would likely crash.
NOTE: it is very important to delete your files from ram disk at the end of your job. If you do not do this, the next user to use that node will have less RAM available than they might expect, and this might kill their jobs.
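A minimal sketch of this pattern, with hypothetical program, file, and directory names (note the cleanup at the end):
<source lang="bash">
mkdir -p /dev/shm/$USER/work              # temporary work space in RAM on this node
cd /dev/shm/$USER/work
/home/$USER/myproject/myprogram           # hypothetical program writing its output here
cp output.dat /scratch/$USER/run01/       # copy results to /scratch so they survive the job
rm -rf /dev/shm/$USER                     # always clean the ram disk before the job ends
</source>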
Managing jobs on the Queuing system
Information on checking available resources, and on starting, viewing, managing, and canceling jobs with Moab/Torque, can be found on our queue page.