Essentials

Access to the SciNet systems

Access to the SciNet systems is via ssh only. To use the GPC or TCS, ssh to login.scinet.utoronto.ca:

 ssh -l USER login.scinet.utoronto.ca

From here you can view your directories, see the GPC queue using showq, and log in to one of the four GPC development nodes (gpc01..gpc04) or either of the TCS development nodes (tcs01, tcs02).
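For example, a typical sequence from the login node might look like this (the choice of development node is arbitrary):

$ showq        # show the current GPC queue
$ ssh gpc01    # log in to a GPC development node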

However, because these machines are shared by everyone who uses the SciNet systems, be considerate: do not run scripts or programs that take more than a few minutes or a few MB of memory on these nodes.

Users can transfer small files (at most about 10GB) into or out of the datacentre via the login nodes, using scp or rsync over ssh. Large data transfers, however, should be done via the datamover1 node. This node can initiate both incoming and outgoing transfers, and since it is on a 10 Gbps link to the University of Toronto, it is the fastest - and recommended - way to transfer data. Note that datamover1 is not accessible from the outside, so you must first log in to login.scinet.utoronto.ca and then ssh to datamover1.
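For example, a small transfer from your own machine can go through the login nodes directly (the file names and paths here are placeholders):

$ scp mydata.tar.gz USER@login.scinet.utoronto.ca:/scratch/USER/

For a large transfer, log in first and initiate the copy from datamover1 (the remote host name is a placeholder):

$ ssh USER@login.scinet.utoronto.ca
$ ssh datamover1
$ scp /scratch/USER/bigdata.tar.gz USER@myhost.example.edu:/data/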

Please talk to us at <support@scinet.utoronto.ca> if you need to do very large file transfers.

Note that the login machines are not the same architecture as either the GPC or TCS nodes; you should not compile programs on the login machines that you expect to use on the GPC or TCS clusters.

Note also that access to the TCS is not enabled by default. We ask that people justify the need for this highly specialized machine. Contact us explaining the nature of your work if you want access to the TCS. In particular, applications should scale well to 64 processes/threads to run on this system.

SciNet Firewall

Important note about logging in: The SciNet firewall monitors connection attempts, and will shut down all access (including previously working connections) from your IP address if more than four connection attempts (successful or not) are made within a few minutes. In that case, you will be locked out of the system for an hour. Be patient in attempting new logins!
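One way to avoid tripping this limit is to reuse a single network connection for several sessions with OpenSSH connection multiplexing. A minimal sketch of an entry for your ~/.ssh/config (the host alias is arbitrary):

Host scinet
    HostName login.scinet.utoronto.ca
    User USER
    ControlMaster auto
    ControlPath ~/.ssh/ctl-%r@%h-%p

With this in place, subsequent ssh or scp invocations to the scinet alias share the first connection instead of opening new ones.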

Availability of the SciNet systems

The SciNet systems are still very new, and will require significant maintenance and reconfiguration in these early days. You could reasonably expect that one of the TCS, the GPC, or their shared filesystems may need to be taken offline for maintenance one day per week. We appreciate your patience -- we're working hard to make the SciNet machines as fast and useful as possible!

Default Limits on the SciNet systems

The default allocation on the SciNet GPC cluster allows a research group to use a maximum of 32 nodes at a time, with no more than 32 individual jobs at a time and a 48-hour wallclock limit per job. Those who have also applied to use the more specialized TCS resource may have up to 8 jobs in the queue, running on a total of 2 nodes at a time, again with a 48-hour wallclock limit per job. Users who need more than this amount of resources must apply for it through the account allocation / LRAC/NRAC process.
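For reference, a GPC batch script header that respects these default limits might look like the following sketch (this assumes the Torque/Moab resource manager implied by showq; the ppn value, job name, and program are placeholders):

#!/bin/bash
#PBS -l nodes=32:ppn=8        # at most 32 nodes per job by default
#PBS -l walltime=48:00:00     # 48-hour wallclock limit per job
#PBS -N myjob
cd $PBS_O_WORKDIR
./mycode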

Usage Policy

All users agreed to various conditions when they requested an account: e.g., accounts must not be shared, and computing resources are to be used efficiently and only for research. Please see the SciNet Usage Policy for full details.

File/Ownership Management

  • By default at SciNet, users within the same group have read (but not write) permission to each other's files.
  • You may use access control lists (ACLs) to allow your supervisor (or another user within your group) to manage files for you (i.e., create, move, rename, and delete them), while you still retain access as the original owner of the files/directories.
  • For example, to allow <supervisor> to manage files in /project/group/<owner>, issue the following commands from a shell as the <owner> account:
$ setfacl -d -m user:<supervisor>:rwx /project/group/<owner>
(from now on, every *new* file/directory created inside <owner> will inherit a default ACL entry granting <supervisor> rwx access)

$ setfacl -d -m user:<owner>:rwx /project/group/<owner>
(new files/directories will also carry an rwx entry for <owner>, i.e., both users retain access by default)

$ setfacl -Rm user:<supervisor>:rwx /project/group/<owner>
(recursively modify all *existing* files/directories inside <owner> so that <supervisor> also has rwx access to them)
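You can verify the resulting entries at any time with getfacl:

$ getfacl /project/group/<owner>
(the output lists the owner, group, named-user entries such as user:<supervisor>:rwx, and any default entries)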

Suggested Further Reading

Contact information

Any questions and problem reports should be addressed to <support@scinet.utoronto.ca>. Please provide as much relevant information as possible in your email.