Essentials

From oldwiki.scinet.utoronto.ca
Revision as of 11:45, 27 August 2009 by Cloken (talk | contribs)

Access to the SciNet systems

Access to the SciNet systems is via ssh only. To use the GPC or TCS, ssh to login.scinet.utoronto.ca:

 ssh -l USER login.scinet.utoronto.ca

From here you can view your directories, see the GPC queue using showq, log into one of the four GPC development nodes (gpc01..gpc04), or log into either of the TCS development nodes (tcs01 or tcs02).
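A typical session from the login node might look like the following sketch (the node names are the ones listed above; the commands are run interactively):

```shell
# From login.scinet.utoronto.ca, check the GPC queue:
showq

# Then hop to a development node to work interactively, e.g. one of
# gpc01..gpc04 for the GPC, or tcs01/tcs02 for the TCS:
ssh gpc01
```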

Users can also transfer files into or out of the datacentre via the login nodes, using scp or rsync over ssh. However, because these machines are shared by everyone who uses the SciNet systems, be considerate: do not run scripts or programs that take more than a few minutes or more than a few MB of memory on these nodes.
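As a sketch of such a transfer (USER and all local and remote paths here are placeholders; substitute your own username and directories):

```shell
# Copy a single file into the datacentre with scp:
scp results.tar.gz USER@login.scinet.utoronto.ca:/scratch/USER/

# Sync a whole directory with rsync over ssh: -a preserves permissions
# and timestamps, -z compresses in transit, and re-running the same
# command only transfers files that have changed since the last run.
rsync -az -e ssh data/ USER@login.scinet.utoronto.ca:/scratch/USER/data/
```

For large or frequently repeated transfers, rsync's incremental behaviour makes it the gentler choice on the shared login nodes.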

Please talk to us at <support@scinet.utoronto.ca> if you need to do very large file transfers.

Note that the login machines are not the same architecture as either the GPC or TCS nodes; do not compile programs on the login machines if you intend to run them on the GPC or TCS clusters.

SciNet Firewall

Important note about logging in: The SciNet firewall monitors connection attempts, and will shut down all access (including previously working connections) from your IP address if more than four connection attempts, successful or not, are made within the space of a few minutes. If that happens, you will be locked out of the system for an hour, so be patient between login attempts!

Availability of the SciNet systems

The SciNet systems are still very new, and will require significant maintenance and reconfiguration in these early days. You could reasonably expect that one of the TCS, the GPC, or their shared filesystems may need to be taken offline for maintenance one day per week. We appreciate your patience -- we're working hard to make the SciNet machines as fast and useful as possible!

Default Limits on the SciNet systems

The default allocations on the SciNet machines allow up to 16 jobs in the queue, running on a total of 32 nodes at a time, with a 48-hour wallclock limit per job on the GPC cluster. For those who have also applied to use the more specialized TCS resource, the defaults are up to 8 jobs in the queue, running on a total of 2 nodes at a time, again with a 48-hour wallclock limit per job. Users who need more than this must apply for it through the account allocation (LRAC/NRAC) process.

Usage Policy

All users agreed to various conditions when they requested an account; e.g. accounts must not be shared, and computing resources are to be used efficiently and only for research. Please see the SciNet Usage Policy for full details.

Contact information

Any questions and problem reports should be addressed to <support@scinet.utoronto.ca>.