Oldwiki.scinet.utoronto.ca: System Alerts
([[Previous_messages:|Previous messages]])
Revision as of 19:52, 19 April 2012
System Status: UP
Thu 19 Apr 2012 19:43:46 EDT: System Status
The GPC interconnect has now been upgraded so that there is low-latency, high-bandwidth InfiniBand (IB) networking throughout the cluster. This is expected to bring several significant benefits for users, including: better I/O performance for all jobs; better job performance for any multi-node ethernet jobs (they can now make use of IB); and, for IB users, improved queue throughput (there are now 4x as many IB nodes) as well as the ability to run larger IB jobs.
Though we have been testing the new system since last night, a change of this magnitude is likely to cause some teething problems, so please bear with us over the next few days. Please report any issues that are not explained or resolved by this page or our temporary IB upgrade page <https://support.scinet.utoronto.ca/wiki/index.php/Infiniband_Upgrade> to support@scinet.utoronto.ca.
NOTE that our online documentation is NOT completely up to date after this recent change. For the time being, check this page and <https://support.scinet.utoronto.ca/wiki/index.php/Infiniband_Upgrade> first for anything related to networks and queueing.
NOTE: The temporary mpirun settings that were recommended are no longer in effect, as all MPI traffic is now going over InfiniBand.
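As a sketch of what that last note means in practice: an MPI job script no longer needs any temporary network-selection settings on the mpirun line. The scheduler directives, node counts, and application name below are illustrative assumptions, not taken from SciNet documentation.

```shell
#!/bin/bash
# Hypothetical GPC job script (directive names, counts, and the
# application name are illustrative assumptions).
#PBS -l nodes=2:ppn=8,walltime=1:00:00
#PBS -N mpi_job

cd $PBS_O_WORKDIR

# With IB now available cluster-wide, no special mpirun settings are
# needed to steer MPI traffic: a plain invocation uses the InfiniBand
# fabric by default.
mpirun -np 16 ./my_mpi_app
```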