System Alerts
System Status: UP
GPC Upgrade to InfiniBand - what you need to know
The GPC network has been upgraded to a low-latency, high-bandwidth InfiniBand network throughout the cluster. Several significant benefits over the old mixed Ethernet/InfiniBand setup are expected, including:
- better I/O performance for all jobs,
- better performance for jobs that previously ran over multi-node Ethernet (they will now use InfiniBand), and
- for users who were already using InfiniBand, improved queue throughput (there are now 4x as many InfiniBand nodes available) and the ability to run larger IB jobs.
NOTE 1: Our wiki is NOT completely up-to-date after this recent change. For the time being, please check this page and the temporary Infiniband Upgrade page first for anything related to networking and queueing.
NOTE 2: The temporary mpirun settings that were recommended for multi-node Ethernet runs are no longer in effect, as all MPI traffic now goes over InfiniBand.
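In practice, this means a plain mpirun invocation should now suffice. The lines below are a minimal sketch, assuming OpenMPI; the commented-out flag is only an illustration of the kind of network-selection setting that is no longer needed, not the exact temporary recommendation that was in effect:

 # Old temporary workaround for multi-node Ethernet runs (illustrative only):
 #   mpirun --mca btl self,sm,tcp -np 16 ./my_app
 # Now, with InfiniBand throughout the cluster, no network-selection flags are needed:
 mpirun -np 16 ./my_app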
NOTE 3: Though we have been testing the new system since last night, a change of this magnitude is likely to result in some teething problems, so please bear with us over the next few days. Please report any issues that are not explained or resolved after reading this page or our Infiniband Upgrade page to support@scinet.utoronto.ca.
Thu 19 Apr 2012 19:43:46 EDT