FAQ
The Basics
Who do I contact for support?
Who do I contact if I have problems or questions about how to use the SciNet systems?
Answer:
E-mail <support@scinet.utoronto.ca>
In your email, please include the following information:
- your username on SciNet
- the cluster that your question pertains to (GPC or TCS; SciNet is not a cluster!),
- any relevant error messages
- the commands you typed before the errors occurred
- the path to your code (if applicable)
- the location of the job scripts (if applicable)
- the directory from which it was submitted (if applicable)
- a description of what it is supposed to do (if applicable)
- if your problem is about connecting to SciNet, the type of computer you are connecting from.
Note that your password should never, never, never be sent to us, even if your question is about your account.
Try to avoid sending email only to specific individuals at SciNet. Your chances of a quick reply increase significantly if you email our team!
What does code scaling mean?
Answer:
Please see A Performance Primer
What do you mean by throughput?
Answer:
Please see A Performance Primer.
Here is a simple example:
Suppose you need to do 10 computations. Say each of these runs for 1 day on 8 cores, but they take "only" 18 hours on 16 cores. What is the fastest way to get all 10 computations done - as 8-core jobs or as 16-core jobs? Let us assume you have 2 nodes (8 cores each) at your disposal. As 8-core jobs, you can run two at a time, so the 10 jobs finish in 5 rounds of 1 day each, i.e. 5 days. As 16-core jobs, only one can run at a time, so the 10 jobs take 10 x 18 hours = 180 hours, i.e. 7.5 days. Draw your own conclusions...
I changed my .bashrc/.bash_profile and now nothing works
The default startup scripts provided by SciNet, and guidelines for them, can be found here. Certain things - like sourcing /etc/profile and /etc/bashrc are required for various SciNet routines to work!
If the situation is so bad that you cannot even log in, please e-mail <support@scinet.utoronto.ca>.
Could I have my login shell changed to (t)csh?
The login shell used on our systems is bash. While tcsh is available on the GPC and the TCS, we do not support it as the default login shell at present. So "chsh" will not work, but you can always run tcsh interactively. Also, csh scripts will be executed correctly provided that they have the correct "shebang" #!/bin/tcsh at the top.
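For instance (a minimal sketch; the script name is just a placeholder), you can start tcsh by hand or rely on the shebang line of an existing csh script:
<source lang="bash">
# start an interactive tcsh session from your bash login shell
tcsh

# or run an existing csh/tcsh script directly; the "#!/bin/tcsh" shebang
# at the top of the script makes it execute under tcsh
./my_script.csh
</source>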
How can I run Matlab / IDL / Gaussian / my favourite commercial software at SciNet?
Answer:
Because SciNet serves such a disparate group of user communities, there is just no way we can buy licenses for everyone's commercial package. The only commercial software we have purchased is that which in principle can benefit everyone -- fast compilers and math libraries (Intel's on GPC, and IBM's on TCS).
If your research group requires a commercial package that you already have or are willing to buy licenses for, contact us at <support@scinet.utoronto.ca> and we can work together to find out whether it is feasible to implement the package's licensing arrangement on the SciNet clusters, and if so, what the best way to do it is.
Note that it is important that you contact us before installing commercially licensed software on SciNet machines, even if you have a way to do it in your own directory without requiring sysadmin intervention. It puts us in a very awkward position if someone is found to be running unlicensed or invalidly licensed software on our systems, so we need to be aware of what is being installed where.
Do you have a recommended ssh program that will allow SciNet access from Windows machines?
Answer:
The SSH programs we recommend for Windows users are:
- PuTTY - this is a terminal for Windows that connects via ssh. It is a quick install and will get you up and running quickly. To set up your passphrase-protected ssh key with PuTTY, see here.
- Cygwin - this is a whole Linux-like environment for Windows, which also includes an X window server so that you can display remote windows on your desktop. Make sure you include openssh and the X window system in the installation for full functionality. This is recommended if you will be doing a lot of work on Linux machines, as it makes a very similar environment available on your own computer. To set up your ssh keys, follow the Linux instructions on the Ssh keys page.
- MobaXterm - a tabbed ssh client with some Cygwin tools, including ssh and X, all wrapped up into one executable. To set up your ssh keys, follow the Linux instructions on the Ssh keys page.
My ssh key does not work! WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
Answer:
- If your key doesn't work, you should still be able to log in using your password and investigate the problem. For example, if during a login session you get a message similar to the one below, just follow the instructions and delete the offending key on line 3 of your known_hosts file (you can use vi to jump to that line with ESC plus : plus 3, or use the commands sketched after this list). It only means that you have logged in from your home computer to SciNet in the past, and that stored key is now obsolete.
$ ssh USERNAME@login.scinet.utoronto.ca
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!        @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
53:f9:60:71:a8:0b:5d:74:83:52:fe:ea:1a:9e:cc:d3.
Please contact your system administrator.
Add correct host key in /home/<user>/.ssh/known_hosts to get rid of this message.
Offending key in /home/<user>/.ssh/known_hosts:3
RSA host key for login.scinet.utoronto.ca has changed and you have requested strict checking.
- If you get the message below, you may need to log out of your gnome session and log back in, since the ssh-agent needs to be restarted with the new passphrase-protected ssh key.
$ ssh USERNAME@login.scinet.utoronto.ca
Agent admitted failure to sign using the key.
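As a sketch, here are two ways to remove a stale host key (adjust the line number to whatever the warning actually reports):
<source lang="bash">
# delete the offending line from known_hosts (line 3 in the example warning above)
sed -i '3d' ~/.ssh/known_hosts

# or remove every stored key for that host by name
ssh-keygen -R login.scinet.utoronto.ca
</source>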
Can't forward X: "Warning: No xauth data; using fake authentication data", or "X11 connection rejected because of wrong authentication."
I used to be able to forward X11 windows from SciNet to my home machine, but now I'm getting these messages; what's wrong?
Answer:
This very likely means that ssh/xauth can't update your ${HOME}/.Xauthority file.
The simplest possible reason for this is that you've filled your 10GB /home quota and so can't write anything to your home directory. Use
$ module load extras
$ diskUsage

to check how close you are to your quota on ${HOME}.
Alternatively, this could mean your .Xauthority file has become broken/corrupted/confused somehow, in which case you can delete that file; when you next log in you'll get a similar warning message about creating .Xauthority, but things should work.
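For example (assuming your quota is fine and the file is simply corrupted):
<source lang="bash">
# remove the corrupted file; a fresh .Xauthority will be created on your next login
rm ~/.Xauthority
</source>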
How come I cannot log in to the TCS?
Answer:
A SciNet account doesn't automatically entitle you to TCS access. At a minimum, TCS jobs need to run on at least 32 cores (64 preferred because of Simultaneous Multi Threading - SMT - on these nodes) and need the large memory (4GB/core) and bandwidth on the system. Essentially you need to be able to explain why the work can't be done on the GPC.
How can I reset the password for my Compute Canada account?
Answer:
You can reset your password for your Compute Canada account here:
https://ccdb.computecanada.ca/security/forgot
How can I change or reset the password for my SciNet account?
Answer:
To reset your password at SciNet please e-mail <support@scinet.utoronto.ca>
If you know your old password and want to change it, that can be done here:
https://portal.scinet.utoronto.ca/
Why am I getting the error "Permission denied (publickey,gssapi-with-mic,password)"?
This error can pop up in a variety of situations: when trying to log in, or after a job has finished, when the error and output files fail to be copied back (there are other possible reasons for this failure as well -- see My GPC job died, telling me: Copy Stageout Files Failed). In most cases, the "Permission denied" error is caused by incorrect permissions on the (hidden) .ssh directory. Ssh is used for logging in as well as for copying the standard error and output files after a job.
For security reasons, the .ssh directory must be readable and writable only by you; if it has read permission for everybody, ssh refuses to use it and the copy fails. You can fix this with
chmod 700 ~/.ssh
And to be sure, also do
chmod 600 ~/.ssh/id_rsa ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
"ERROR:102: Tcl command execution failed" when loading modules
Modules sometimes require other modules to be loaded first. The module command will let you know if you didn't load them. For example:
$ module purge
$ module load python
python/2.6.2(11):ERROR:151: Module 'python/2.6.2' depends on one of the module(s) 'gcc/4.4.0'
python/2.6.2(11):ERROR:102: Tcl command execution failed: prereq gcc/4.4.0
$ module load gcc python
$
Compiling your Code
How can I get g77 to work?
The Fortran 77 compilers on the GPC are ifort and gfortran. We have dropped support for g77. This has been a conscious decision. g77 (and the associated library libg2c) were completely replaced six years ago (Apr 2005) by the gcc 4.x branch, and haven't undergone any updates at all, not even bug fixes, for over five years. If we were to install g77 and libg2c, we would have to deal with the inevitable confusion caused when users accidentally link against the old, broken, wrong versions of the gcc libraries instead of the correct current versions.
If your code for some reason specifically requires five-plus-year-old libraries, availability, compatibility, and unfixed-known-bug problems are only going to get worse for you over time, and this might be as good an opportunity as any to address those issues.
A note on porting to gfortran or ifort:
While gfortran and ifort are rather compatible with g77, one important difference is that by default gfortran does not preserve local variables between function calls, while g77 does. Preserved local variables are, for instance, often used in implementations of quasi-random number generators. Proper Fortran requires such variables to be declared with the SAVE attribute, but not all old code does this. Luckily, you can change gfortran's default behaviour with the flag -fno-automatic. For ifort, the corresponding flag is -noautomatic.
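For example, hypothetical compile lines for such legacy code (the file and program names are placeholders):
<source lang="bash">
# gfortran: keep local variables static between calls, as g77 did
gfortran -fno-automatic -O2 -o mycode mycode.f

# ifort: the equivalent option
ifort -noautomatic -O2 -o mycode mycode.f
</source>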
Where is libg2c.so?
libg2c.so is part of the g77 compiler, for which we dropped support. See #How can I get g77 to work on the GPC? for our reasons.
Autoparallelization does not work!
I compiled my code with the -qsmp=omp,auto option, and then I specified that it should be run with 64 threads - with
export OMP_NUM_THREADS=64
However, when I check the load using llq1 -n, it shows a load on the node of 1.37. Why?
Answer:
Using the autoparallelization will only get you so far. In fact, it usually does not do too much. What is helpful is to run the compiler with the -qreport option, and then read the output listing carefully to see where the compiler thought it could parallelize, where it could not, and the reasons for this. Then you can go back to your code and carefully try to address each of the issues brought up by the compiler. We emphasize that this is just a rough first guide, and that the compilers are still not magical! For more sophisticated approaches to parallelizing your code, email us at <support@scinet.utoronto.ca> to set up an appointment with one of our technical analysts.
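As a sketch (the exact compiler invocation and extra flags for your code are assumptions), compiling with the report enabled on the TCS might look like:
<source lang="bash">
# compile with auto-parallelization and request a listing report (.lst file)
# explaining which loops were (not) parallelized and why
xlf90_r -qsmp=omp,auto -qreport -O3 -c mycode.f90
</source>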
How do I link against the Intel Math Kernel Library?
If you need to link in the Intel Math Kernel Library (MKL) libraries, you are well advised to use the Intel(R) Math Kernel Library Link Line Advisor: http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/ for help in devising the list of libraries to link with your code.
Note that this gives the link line for the command line. When using it in Makefiles, replace $MKLPATH by ${MKLPATH}.
Note too that, unless the integer arguments you will be passing to the MKL libraries are actually 64-bit integers, rather than the normal int or INTEGER types, you want to specify 32-bit integers (lp64) .
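For illustration only, here is one possible sequential, lp64 link line for the Intel compilers (always check the Link Line Advisor for your compiler and MKL version; the object and program names are placeholders):
<source lang="bash">
# possible link line with sequential MKL and 32-bit integers (lp64)
icpc mycode.o -L${MKLPATH} -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -o mycode
</source>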
Can the compilers on the login nodes be disabled to prevent accidentally using them?
Answer:
You can accomplish this by modifying your .bashrc to not load the compiler modules. See Important .bashrc guidelines.
"relocation truncated to fit: R_X86_64_PC32": Huh?
What does this mean, and why can't I compile this code?
Answer:
Welcome to the joys of the x86 architecture! You're probably having trouble building arrays larger than 2GB, individually or together. Generally, you have to try to use the medium or large x86 `memory model'. For the intel compilers, this is specified with the compile options
-mcmodel=medium -shared-intel
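For example (a sketch; the file and program names are placeholders):
<source lang="bash">
# recompile a code with more than 2GB of static data using the medium memory model
ifort -mcmodel=medium -shared-intel -O2 -o bigarrays bigarrays.f90
</source>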
"feupdateenv is not implemented and will always fail"
How do I get rid of this and what does it mean?
Answer:
First note that, as ominous as it sounds, this is really just a warning, and has to do with the Intel math library. You can ignore it (unless you really are trying to manually change the exception handlers for floating point exceptions such as divide-by-zero), or take the safe road and get rid of it by linking with the Intel math functions library:
-limf
See also #How do I link against the Intel Math Kernel Library?
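For example, a minimal sketch of adding -limf at link time (the object and program names are placeholders):
<source lang="bash">
# add the Intel math library at the end of the link line
icc mycode.o -o mycode -limf
</source>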
Cannot find rdmacm library when compiling on GPC
I get the following error building my code on GPC: "ld: cannot find -lrdmacm". Where can I find this library?
Answer:
This library is part of the MPI libraries; if your compiler is having problems picking it up, it probably means you are mistakenly trying to compile on the login nodes (scinet01..scinet04). The login nodes aren't part of the GPC; they are for logging into the data centre only. From there you must go to the GPC or TCS development nodes to do any real work.
Why do I get this error when I try to compile: "icpc: error #10001: could not find directory in which /usr/bin/g++41 resides" ?
You are trying to compile on the login nodes. As described in the wiki ( https://support.scinet.utoronto.ca/wiki/index.php/GPC_Quickstart#Login ), or in the user's guide you received with your account, SciNet supports two main clusters with very different architectures. Compilation must be done on the development nodes of the appropriate cluster (in this case, gpc01-04). Thus, log into gpc01, gpc02, gpc03, or gpc04, and compile from there.
Testing your Code
Can I run something for a short time on the development nodes?
I am in the process of playing around with the mpi calls in my code to get it to work. I do a lot of tests and each of them takes a couple of seconds only. Can I do this on the development nodes?
Answer:
Yes, as long as it's very brief (a few minutes). Other people use the development nodes for their work, and you don't want to bog the nodes down for them; testing a real code can chew up a lot more resources than compiling. The procedures differ depending on which machine you're using.
TCS
On the TCS you can run small MPI jobs on the tcs02 node, which is meant for development use. But even for this test run on one node, you'll need a host file -- a list of hosts (in this case, all tcs-f11n06, which is the `real' name of tcs02) that the job will run on. Create a file called `hostfile' containing the following:
tcs-f11n06
tcs-f11n06
tcs-f11n06
tcs-f11n06
for a 4-task run. When you invoke "poe" or "mpirun", there are runtime arguments that you specify pointing to this file. You can also specify it in an environment variable MP_HOSTFILE, so, if your file is in your /scratch directory, say ${SCRATCH}/hostfile, then you would do
export MP_HOSTFILE=${SCRATCH}/hostfile
in your shell. You will also need to create a .rhosts file in your home directory, again listing tcs-f11n06, so that poe can start jobs. After that you can simply run your program. You can use mpiexec:
mpiexec -n 4 my_test_program
adding -hostfile /path/to/my/hostfile if you did not set the environment variable above. Alternatively, you can run it with the poe command (do a "man poe" for details), or even by just directly running it. In this case the number of MPI processes will by default be the number of entries in your hostfile.
GPC
On the GPC one can run short test jobs on the GPC development nodes gpc01..gpc04; if they are single-node jobs (which they should be) they don't need a hostfile. Even better, though, is to request an interactive job and run the tests either in the regular batch queue or in the short, high-availability debug queue that is reserved for this purpose.
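A possible way to request such an interactive session in the debug queue (adjust the walltime to your needs) is:
<source lang="bash">
# interactive job on one node in the debug queue for one hour
qsub -I -q debug -l nodes=1:ppn=8,walltime=1:00:00
</source>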
How do I run a longer (but still shorter than an hour) test job quickly?
Answer:
On the GPC there is a high turnover short queue called debug that is designed for this purpose. You can use it by adding
#PBS -q debug
to your submission script.
Running your jobs
My job can't write to /home
My code works fine when I test on the development nodes, but when I submit a job, or even run interactively in the development queue on GPC, it fails. What's wrong?
Answer:
As discussed elsewhere, /home is mounted read-only on the compute nodes; you can only write to /home from the login nodes and devel nodes. (The largemem nodes on the GPC, in this respect, are more like devel nodes than compute nodes.) In general, to run jobs you can read from /home but you'll have to write to /scratch (or, if you were allocated space through the LRAC/NRAC process, to /project). More information on SciNet filesystems can be found on our Data Management page.
OpenMP on the TCS
How do I run an OpenMP job on the TCS?
Answer:
Please look at the TCS Quickstart page.
Can I use hybrid codes consisting of MPI and OpenMP on the GPC?
Answer:
Yes. Please look at the GPC Quickstart page.
How do I run serial jobs on GPC?
Answer:
So it should be said first that SciNet is a parallel computing resource, and our priority will always be parallel jobs. Having said that, if you can make efficient use of the resources using serial jobs and get good science done, that's good too, and we're happy to help you.
The GPC nodes each have 8 processing cores, and making efficient use of these nodes means using all eight cores. As a result, we'd like to have the users take up whole nodes (eg, run multiples of 8 jobs) at a time.
The best strategy depends on the nature of your jobs. Several approaches are presented on the serial run wiki page.
Why can't I request only a single cpu for my job on GPC?
Answer:
On the GPC, resources are allocated by the node - that is, in chunks of 8 processor cores. If you want to run jobs that each require only one processor, you need to bundle them into groups of 8, so as not to waste the other 7 cores for up to 48 hours. A minimal example is sketched below; see the serial run wiki page for more.
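A minimal sketch of such a bundle (the program and file names are placeholders; the serial run wiki page describes more robust approaches):
<source lang="bash">
#!/bin/bash
#PBS -l nodes=1:ppn=8,walltime=12:00:00
#PBS -N serial_bundle
cd $PBS_O_WORKDIR

# launch 8 serial runs in the background, one per core
for i in $(seq 1 8); do
    ./run_my_code input.$i > output.$i 2>&1 &
done
wait    # do not let the job exit until all 8 runs have finished
</source>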
How do I run serial jobs on TCS?
Answer: You don't.
But in the queue I found a user who is running jobs on GPC, each of which is using only one processor, so why can't I?
Answer:
The pradat* and atlaspt* jobs, amongst others, are jobs of the ATLAS high energy physics project. That they are reported as single cpu jobs is an artifact of the moab scheduler. They are in fact being automatically bundled into 8-job bundles but have to run individually to be compatible with their international grid-based systems.
How do I use the ramdisk on GPC?
To use the ramdisk, create, write to, and read from files in /dev/shm/.. just as you would in (e.g.) ${SCRATCH}. Only the amount of RAM needed to store the files will be taken up by the temporary file system; thus if you have 8 serial jobs each requiring 1 GB of RAM, and 1GB is taken up by various OS services, you would still have approximately 7GB available to use as ramdisk on a 16GB node. However, if you were to write 8 GB of data to the RAM disk, this would exceed the available memory and your job would likely crash.
It is very important to delete your files from ram disk at the end of your job. If you do not do this, the next user to use that node will have less RAM available than they might expect, and this might kill their jobs.
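A sketch of this pattern in a job script (all file and program names are placeholders):
<source lang="bash">
# stage input into the ramdisk, run, copy results back, and clean up
mkdir -p /dev/shm/$USER
cp ${SCRATCH}/input.dat /dev/shm/$USER/
./run_my_code /dev/shm/$USER/input.dat /dev/shm/$USER/output.dat
cp /dev/shm/$USER/output.dat ${SCRATCH}/
rm -rf /dev/shm/$USER     # always remove your ramdisk files at the end of the job
</source>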
More details on how to set up your script to use the ramdisk can be found on the Ramdisk wiki page.
How can I automatically resubmit a job?
Commonly you may have a job that you know will take longer to run than what is permissible in the queue. As long as your program contains checkpoint or restart capability, you can have one job automatically submit the next. In the following example it is assumed that the program finishes before the 48 hour limit and then resubmits itself by logging into one of the development nodes.
<source lang="bash">
#!/bin/bash
# MOAB/Torque example submission script for auto resubmission
# SciNet GPC
#
#PBS -l nodes=1:ppn=8,walltime=48:00:00
#PBS -N my_job

# DIRECTORY TO RUN - $PBS_O_WORKDIR is directory job was submitted from
cd $PBS_O_WORKDIR

# YOUR CODE HERE
./run_my_code

# RESUBMIT 10 TIMES HERE
num=$NUM
if [ $num -lt 10 ]; then
    num=$(($num+1))
    ssh gpc01 "cd $PBS_O_WORKDIR; qsub ./script_name.sh -v NUM=$num"
fi
</source>
The job is submitted the first time with the resubmission counter initialized:
qsub script_name.sh -v NUM=0
You can alternatively use Job dependencies through the queuing system which will not start one job until another job has completed.
If your job can't be made to automatically stop before the 48 hour queue window, but it does write out checkpoints, you can use the timeout command to stop the program while you still have time to resubmit; for instance
<source lang="bash">
timeout 2850m ./run_my_code argument1 argument2
</source>
will run the program for 47.5 hours (2850 minutes), and then send it SIGTERM to exit the program.
How can I pass in arguments to my submission script?
If you wish to make your scripts more generic, you can use qsub's ability to pass environment variables into your script as arguments. The following example shows a case where an input and an output file are passed in on the qsub line. Multiple variables can be passed in using qsub's "-v" option, comma-delimited.
<source lang="bash">
#!/bin/bash
# MOAB/Torque example of passing in arguments
# SciNet GPC
#
#PBS -l nodes=1:ppn=8,walltime=48:00:00
#PBS -N my_job

# DIRECTORY TO RUN - $PBS_O_WORKDIR is directory job was submitted from
cd $PBS_O_WORKDIR

# YOUR CODE HERE
./run_my_code -f $INFILE -o $OUTFILE
</source>
qsub script_name.sh -v INFILE=input.txt,OUTFILE=outfile.txt
How can I run a job longer than 48 hours?
Answer:
The SciNet queues have a limit of 48 hours per job. This is pretty typical for systems of this size in Canada and elsewhere, and larger systems commonly have shorter limits. The limits are there to ensure that every user gets a fair share of the system (so that no one user ties up lots of nodes for a long time), and for safety (so that if one memory board in one node fails in the middle of a very long job, you haven't lost a month's worth of work).
Since many of us have simulations that require more than that much time, most widely-used scientific applications have "checkpoint-restart" functionality, where every so often the complete state of the calculation is stored as a checkpoint file, and one can restart a simulation from one of these. In fact, these restart files tend to be quite useful for a number of purposes.
If your job will take longer, you will have to submit your job in multiple parts, restarting from a checkpoint each time. In this way, one can run a simulation much longer than the queue limit. In fact, one can even write job scripts which automatically re-submit themselves until a run is completed, using automatic resubmission.
Why did showstart say it would take 3 hours for my job to start before, and now it says my job will start in 10 hours?
Answer:
Please look at the How do priorities work/why did that job jump ahead of mine in the queue? page.
How do priorities work/why did that job jump ahead of mine in the queue?
Answer:
The queueing system used on SciNet machines is a Priority Queue. Jobs enter the queue at the back of the queue, and slowly make their way to the front as those ahead of them are run; but a job that enters the queue with a higher priority can `cut in line'.
The main factor which determines priority is whether or not the user (or their PI) has an LRAC or NRAC allocation. These are competitively allocated grants of computer time; there is a call for proposals towards the end of every calendar year. Users with an allocation have high priorities in an attempt to make sure that they can use the amount of computer time the committees granted them. Their priority decreases as they approach their allotted usage over the current window of time; by the time that they have exhausted that allotted usage, their priority is the same as users with no allocation (unallocated, or `default' users). Unallocated users have a fixed, low, priority.
This priority system is called `fairshare'; the scheduler attempts to make sure everyone has their fair share of the machines, where the share that's fair has been determined by the allocation committee. The fairshare window is a rolling window of two weeks; that is, any time you have a job in the queue, the fairshare calculation of its priority is given by how much of your allocation of the machine has been used in the last 14 days.
A particular allocation might have some fraction of the GPC - say 4% of the machine (if the PI had been allocated 10 million CPU hours on the GPC). The allocations have labels (called `Resource Allocation Proposal Identifiers', or RAPIs); they look something like
abc-123-ab
where abc-123 is the PI's CCRI, and the suffix specifies which of the allocations granted to the PI is to be used. These can be specified on a job-by-job basis. On the GPC, one adds the line
#PBS -A RAPI
to your script; on TCS, one uses
# @ account_no = RAPI
If the allocation to charge isn't specified, a default is used; each user has such a default, which can be changed at the same portal where one changes one's password,
https://portal.scinet.utoronto.ca/
A job's priority is determined primarily by the fairshare priority of the allocation it is being charged to; the previous 14 days' worth of use under that allocation is calculated and compared to the allocated fraction (here, 4%) of the machine over that window (here, 14 days). The fairshare priority is a decreasing function of the allocation left; if there is no allocation left (e.g., jobs running under that allocation have already used 379,038 CPU hours in the past 14 days), the priority is the same as that of a user with no granted allocation. (This last part has been the topic of some debate; as the machine becomes more heavily utilized, it will probably be the case that we allow RAC users who have greatly overused their quota to have their priorities drop below that of unallocated users, to give the unallocated users some chance to run on our increasingly crowded system; this would have no undue effect on our allocated users, as they would still be able to use the amount of resources they had been allocated by the committees.) Note that all jobs charging the same allocation get the same fairshare priority.
There are other factors that go into calculating priority, but fairshare is the most significant. Other factors include
- amount of time waiting in queue (measured in units of the requested runtime). A job that requests 1 hour in the queue and has been waiting 2 days will get a bump in its priority larger than a job that requests 2 days and has been waiting the same time.
- User adjustment of priorities ( See below ).
The major effect of these subdominant terms is to shuffle the order of jobs running under the same allocation.
How do we manage job priorities within our research group?
Answer:
Obviously, managing shared resources within a large group - whether it is conference funding or CPU time - takes some doing.
It's important to note that the fairshare periods are intentionally kept quite short - just two weeks long. (These exact numbers are subject to change as the year goes on and we better understand usage patterns, but they're unlikely to change radically.) So, for example, let us say that your resource allocation gives you about 10% of the machine. Then for someone to use up the whole two-week amount of time in 2 days, they'd have to use 70% of the machine in those two days - which is unlikely to happen by accident. If that does happen, those using the same allocation as the person who used 70% of the machine over those two days will suffer by having much lower priority for their jobs, but only for the next 12 days - and even then, if there are idle cpus they'll still be able to compute.
There will be online tools for seeing how the allocation is being used, and those people who are in charge in your group will be able to use that information to manage the users, telling them to dial it down or up. We know that managing a large research group is hard, and we want to make sure we provide you the information you need to do your job effectively.
One way for users within a group to manage their priorities within the group is with user-adjusted priorities; this is described in more detail on the Scheduling System page.
How do I charge jobs to my NRAC/LRAC allocation?
Answer:
Please see the accounting section of Moab page.
How does one check the amount of used CPU-hours in a project, and how does one get statistics for each user in the project?
Answer:
This information is available on the SciNet portal, https://portal.scinet.utoronto.ca. See also SciNet Usage Reports.
How does the Infiniband Upgrade affect my 2012 NRAC allocation?
The NRAC allocations for the current (2012) year that were based on ethernet and infiniband will carry over; however, the allocation will now be on the full GPC, not just on a subsection. So if you were allocated 500 hours on Infiniband, your fairshare allocation will still be 500 hours, just 500 out of 30,000 instead of 500 out of 7,000. If you received two allocations, one on gigE and one on IB, they will simply be combined. This should benefit all users, as the desegregation of the GPC provides a greater pool of nodes, increasing the probability that your job will run.
Monitoring jobs in the queue
Why hasn't my job started?
Answer:
Use the moab command
checkjob -v jobid
and the last couple of lines should explain why a job hasn't started.
Please see Job Scheduling System (Moab) for more detailed information
How do I figure out when my job will run?
Answer:
Please see Job Scheduling System (Moab)
I submit my GPC job, and I get an email saying it was rejected
Answer:
This happens because the job you've submitted breaks one of the rules of the queues and is rejected. An email is sent with the JOBID, JOBNAME, and the reason it was rejected. The following is an example where a job requested more than 48 hours and was rejected.
PBS Job Id: 3462493.gpc-sched
Job Name:   STDIN
job deleted
Job deleted at request of root@gpc-sched
MOAB_INFO:  job was rejected - job violates class configuration 'wclimit too high for class 'batch_ib' (345600 > 172800)'
Jobs on the TCS or GPC may only run for 48 hours at a time; this restriction greatly increases responsiveness of the queue and queue throughput for all our users. If your computation requires longer than that, as many do, you will have to checkpoint your job and restart it after each 48-hour queue window. You can manually re-submit jobs, or if you can have your job cleanly exit before the 48 hour window, there are ways to automatically resubmit jobs .
Other rejections return a more cryptic error saying "job violates class configuration" such as follows:
PBS Job Id: 3462409.gpc-sched
Job Name:   STDIN
job deleted
Job deleted at request of root@gpc-sched
MOAB_INFO:  job was rejected - job violates class configuration 'user required by class 'batch''
The most common problems that result in this error are:
- Incorrect number of processors per node: Jobs on the GPC are scheduled per-node not per-core and since each node has 8 processor cores (ppn=8) the smallest job allowed is one node with 8 cores (nodes=1:ppn=8). For serial jobs users must bundle or batch them together in groups of 8. See How do I run serial jobs on GPC?
- No number of nodes specified: Jobs submitted to the main queue must request a specific number of nodes, either in the submission script (with a line like #PBS -l nodes=2:ppn=8) or on the command line (eg, qsub -l nodes=2:ppn=8,walltime=5:00:00 script.pbs). Note that for the debug queue, you can get away without specifying a number of nodes and a default of one will be assigned; for both technical and policy reasons, we do not enforce such a default for the main ("batch") queue.
- There is a 15-minute walltime minimum on all queues except debug; if you request less walltime than this, your job will be rejected.
How can I monitor my running jobs on TCS?
How can I monitor the load of TCS jobs?
Answer:
You can get more information with the command
/xcat/tools/tcs-scripts/LL/jobState.sh
which I alias as:
alias llq1='/xcat/tools/tcs-scripts/LL/jobState.sh'
If you run "llq1 -n" you will see a listing of jobs together with a lot of information, including the load.
Errors in running jobs
On GPC, `Job cannot be executed'
I get error messages like this trying to run on GPC:
PBS Job Id: 30414.gpc-sched
Job Name:   namd
Exec host:  gpc-f120n011/7+gpc-f120n011/6+gpc-f120n011/5+gpc-f120n011/4+gpc-f120n011/3+gpc-f120n011/2+gpc-f120n011/1+gpc-f120n011/0
Aborted by PBS Server
Job cannot be executed
See Administrator for help

PBS Job Id: 30414.gpc-sched
Job Name:   namd
Exec host:  gpc-f120n011/7+gpc-f120n011/6+gpc-f120n011/5+gpc-f120n011/4+gpc-f120n011/3+gpc-f120n011/2+gpc-f120n011/1+gpc-f120n011/0
An error has occurred processing your job, see below.
request to copy stageout files failed on node 'gpc-f120n011/7+gpc-f120n011/6+gpc-f120n011/5+gpc-f120n011/4+gpc-f120n011/3+gpc-f120n011/2+gpc-f120n011/1+gpc-f120n011/0' for job 30414.gpc-sched
Unable to copy file 30414.gpc-sched.OU to USER@gpc-f101n084.scinet.local:/scratch/G/GROUP/USER/projects/sim-performance-test/runtime/l/namd/8/namd.o30414
*** error from copy
30414.gpc-sched.OU: No such file or directory
*** end error output
Try doing the following:
mkdir ${SCRATCH}/.pbs_spool
ln -s ${SCRATCH}/.pbs_spool ~/.pbs_spool

This is how all new accounts are set up on SciNet.
/home on GPC for compute jobs is mounted as a read-only file system. PBS by default tries to spool its output files to ${HOME}/.pbs_spool which fails as it tries to write to a read-only file system. New accounts at SciNet get around this by having ${HOME}/.pbs_spool point to somewhere appropriate on /scratch, but if you've deleted that link or directory, or had an old account, you will see errors like the above.
On Feb 24 2011, the input/output mechanism was reconfigured to use a local ramdisk as the temporary location, which means that .pbs_spool is no longer needed and this error should not occur anymore.
I couldn't find the .o output file in the .pbs_spool directory as I used to
On Feb 24 2011, the temporary location of standard input and output files was moved from the shared file system ${SCRATCH}/.pbs_spool to the node-local directory /var/spool/torque/spool (which resides in ram). The final location after a job has finished is unchanged, but to check the output/error of running jobs, users will now have to ssh into the (first) node assigned to the job and look in /var/spool/torque/spool.
This alleviates access contention to the temporary directory, especially for those users that are running a lot of jobs, and reduces the burden on the file system in general.
Note that it is good practice to redirect output to a file rather than to count on the scheduler to do this for you.
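For example, inside your job script (program and file names are placeholders):
<source lang="bash">
# capture standard output and error directly in your run directory,
# rather than relying on the scheduler's stageout
./run_my_code > run_my_code.log 2>&1
</source>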
My GPC job died, telling me `Copy Stageout Files Failed'
Answer:
When a job runs on the GPC, the script's standard output and error are redirected to $PBS_JOBID.gpc-sched.OU and $PBS_JOBID.gpc-sched.ER in /var/spool/torque/spool on the (first) node on which your job is running. At the end of the job, those .OU and .ER files are copied to where the batch script tells them to be copied, by default $PBS_JOBNAME.o$PBS_JOBID and $PBS_JOBNAME.e$PBS_JOBID. (You can set those filenames to be something clearer with the -e and -o options in your PBS script.)
When you get errors like this:
An error has occurred processing your job, see below.
request to copy stageout files failed on node
it means that the copying-back process has failed in some way. There could be a few reasons for this. The first thing to check is that your .bashrc does not produce any output, as the output stageout is performed by bash and extra output can cause it to fail. But it also could have just been a random filesystem error, or it could be that your job failed spectacularly enough to short-circuit the normal job-termination process, so that those files never got copied.
Write to <support@scinet.utoronto.ca> if your input/output files got lost, as we will probably be able to retrieve them for you (please supply at least the jobid, and any other information that may be relevant).
Note that it is good practice to redirect output to a file rather than depending on the job scheduler to do this for you.
IB Memory Errors, eg reg_mr Cannot allocate memory
Infiniband requires more memory than ethernet; it can use RDMA (remote direct memory access) transport for which it sets aside registered memory to transfer data.
In our current network configuration, it requires a _lot_ more memory, particularly as you go to larger process counts; unfortunately, that means you can't get around the "I need more memory" problem the usual way, by running on more nodes. Machines with different memory or network configurations may exhibit this problem at higher or lower MPI task counts.
Right now, the best workaround is to reduce the number and size of the OpenIB queues using XRC. With OpenMPI, add the following options to your mpirun command:
-mca btl_openib_receive_queues X,128,256,192,128:X,2048,256,128,32:X,12288,256,128,32 -mca btl_openib_max_send_size 12288
With Intel MPI, you should be able to do
module load intelmpi/4.0.3.008
mpirun -genv I_MPI_FABRICS=shm:ofa -genv I_MPI_OFA_USE_XRC=1 -genv I_MPI_OFA_DYNAMIC_QPS=1 -genv I_MPI_DEBUG=5 -np XX ./mycode
to the same end.
For more information see GPC MPI Versions.
My job cannot find gnuplot / matplotlib, even though it works on the devel nodes
Answer:
To maximize the amount of memory available for compute jobs, the compute nodes have a less complete system image than the development nodes. In particular, since graphics tools like matplotlib and gnuplot are usually used interactively, the libraries they need are included in the devel nodes' image but not in the compute nodes'.
Many of these extra libraries are, however, available in the "extras" module. So adding a "module load extras" to your job submission script - or, for overkill, to your .bashrc - should enable these scripts to run on the compute nodes.
Data on SciNet disks
When will the 2011 NRAC disk space allocation be ready?
Answer:
We're still working on expanding our storage capacity to meet the 2011 NRAC requirements. It may take a few more months, but when it becomes available we'll make an announcement.
How do I find out my disk usage?
Answer:
The standard unix/linux utilities for finding the amount of disk space used by a directory are very slow, and notoriously inefficient on the GPFS filesystems that we run on the SciNet systems. There are utilities that very quickly report your disk usage:
The /scinet/gpc/bin/diskUsage command, available on the login nodes, datamovers and the GPC devel nodes, provides information in a number of ways on the home, scratch, and project file systems. For instance, how much disk space is being used by yourself and your group (with the -a option), or how much your usage has changed over a certain period ("delta information") or you may generate plots of your usage over time. This information is only updated hourly!
More information about these filesystems is available on the Data Management page.
How do I transfer data to/from SciNet?
Answer:
All incoming connections to SciNet go through relatively low-speed connections to the login.scinet gateways, so using scp to copy files the same way you ssh in is not an effective way to move lots of data. Better tools are described in our page on Data Transfer.
My group works with data files of size 1-2 GB. Is this too large to transfer by scp to login.scinet.utoronto.ca?
Answer:
Generally, occasional transfers of data of less than 10GB are perfectly acceptable to do through the login nodes. See Data Transfer.
How can I check if I have files in /scratch that are scheduled for automatic deletion?
Answer:
Please see Storage At SciNet
How do I allow my supervisor to manage files for me using ACL-based commands?
Answer:
Please see File/Ownership Management
Keep 'em Coming!
Next question, please
Send your question to <support@scinet.utoronto.ca>; we'll answer it asap!