Essentials

From oldwiki.scinet.utoronto.ca
Revision as of 11:47, 20 August 2010 by Ljdursi (talk | contribs) (Point to ssh page)

Access to the SciNet systems

Access to the SciNet systems is via SSH only. To use the GPC or TCS, first ssh to the data centre through login.scinet.utoronto.ca:

 ssh -l USER login.scinet.utoronto.ca

From here you can view your directories, see the queue on the GPC using showq, and log into one of four GPC development nodes, gpc01..gpc04, or either of the TCS development nodes, tcs01 or tcs02.
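A typical first session therefore looks like the following sketch. The hostname, the showq command, and the development-node names are taken from this page; USER stands for your own SciNet username.

```shell
# Log in to the SciNet data centre (replace USER with your username)
ssh -l USER login.scinet.utoronto.ca

# From the login node: inspect the GPC queue...
showq

# ...then hop to a development node for real work
ssh gpc03        # any of gpc01..gpc04; use tcs01 or tcs02 for the TCS
```
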

However, because the login nodes are used by everyone who needs to use the SciNet systems, be considerate; do not run scripts or programs that take more than a few minutes to run or use more than a few MB of memory on these systems.

Users can transfer small files (up to roughly 10GB) into or out of the data centre via the login nodes, using scp or rsync over ssh. Large data transfers, however, should be done via the datamover1 node. This node can initiate both incoming and outgoing transfers, and since it is on a 10 Gbps link to the University of Toronto, it is the fastest, and recommended, way to transfer data. Note that datamover1 is not accessible from outside the data centre, so you must first log in to login.scinet.utoronto.ca and then ssh to datamover1. See the data transfer section of the Data Management page for more details.
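For small transfers through a login node, the usual commands look like the sketch below, run from your own machine. The destination path /scratch/USER is only an illustration; use whichever of your SciNet directories you actually want.

```shell
# Copy a single small file into the data centre via a login node
# (/scratch/USER is a placeholder path, not necessarily yours)
scp results.tar.gz USER@login.scinet.utoronto.ca:/scratch/USER/

# rsync over ssh skips files that are already up to date and can
# resume interrupted transfers (-P), which scp cannot
rsync -avP results/ USER@login.scinet.utoronto.ca:/scratch/USER/results/
```

For the large transfers described above, run the same sort of command from datamover1 itself (after logging in through login.scinet.utoronto.ca), since datamover1 must initiate the connection.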

Please contact us at <support@scinet.utoronto.ca> if you need to do very large file transfers. Note also that the login machines are not the same architecture as either the GPC or TCS nodes; you should not compile programs on the login machines that you expect to run on the GPC or TCS clusters.

Note also that access to the TCS is not enabled by default. We ask that people justify the need for this highly specialized machine. Contact us explaining the nature of your work if you want access to the TCS. In particular, applications should scale well to 64 processes/threads to run on this system.

SciNet Firewall

Important note about logging in: The SciNet firewall monitors for too many attempted connections, and will shut down all access (including previously working connections) from your IP address if more than four connection attempts (successful or not) are made within a few minutes. If that happens, you will be locked out of the system for an hour, so be patient when retrying logins.
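One way to stay well under the connection-attempt limit is to let the ssh client multiplex several sessions over a single authenticated connection. This is a sketch of a client-side configuration, not a SciNet-supplied one, and it assumes your local OpenSSH client supports connection sharing (ControlPersist in particular requires a recent OpenSSH release).

```
# ~/.ssh/config on your own machine (assumed setup, adjust to taste)
Host scinet
    HostName login.scinet.utoronto.ca
    User USER
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

With this in place, `ssh scinet` opens the master connection once, and further `ssh`/`scp` invocations to the same host reuse it instead of counting as new connection attempts against the firewall.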

Default Limits on the SciNet systems

The default allocation on the SciNet GPC cluster allows a research group to use a maximum of 32 nodes at a time, with a 48-hour wallclock limit per job and no more than 32 individual jobs at a time. Groups that have also applied to use the more specialized TCS resource may additionally run up to 8 jobs in the queue on a total of 2 nodes at a time, again with a 48-hour wallclock limit per job. Users who need more than this amount of resources must apply for it through the account allocation / LRAC/NRAC process.

Usage Policy

All users agreed to various conditions when they requested an account: for example, accounts must not be shared, and computing resources are to be used efficiently and only for research. Please see the SciNet Usage Policy for full details.

Suggested Further Reading

Contact information

Any questions and problem reports should be addressed to <support@scinet.utoronto.ca>. Please provide as much relevant information as possible in your email.

9 July 2010, 13:25 (UTC)