Oldwiki.scinet.utoronto.ca:System Alerts
Revision as of 20:29, 19 April 2012
System Status: UP
GPC Upgrade to InfiniBand - what you need to know
The GPC network has been upgraded to a low-latency, high-bandwidth InfiniBand fabric throughout the cluster. Several significant benefits over the old mixed Ethernet/InfiniBand setup are expected, including:
- better I/O performance for all jobs;
- better performance for jobs that previously ran over multi-node Ethernet, as they will now use InfiniBand;
- for users who were already using InfiniBand, improved queue throughput (there are now 4x as many InfiniBand nodes available) and the ability to run larger IB jobs.
NOTE 1: Our wiki is NOT completely up to date after this recent change. For the time being, please check this page and the temporary Infiniband Upgrade page first for anything related to networks and queueing.
NOTE 2: The temporary mpirun settings that were recommended for multi-node Ethernet runs are no longer in effect, as all MPI traffic now goes over InfiniBand.
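For illustration, here is a minimal sketch of what a multi-node job script might look like under the new setup. It assumes a PBS-style batch system and Open MPI; the module name, node/core counts, and program name are placeholders rather than values taken from this announcement. The key point is that no Ethernet-specific transport-selection options need to be passed to mpirun anymore, since MPI traffic is carried over InfiniBand by default.

 #!/bin/bash
 # Hypothetical multi-node job script (PBS-style directives assumed).
 #PBS -l nodes=2:ppn=8,walltime=1:00:00
 #PBS -N ib_test
 
 # Run from the directory the job was submitted from.
 cd $PBS_O_WORKDIR
 
 # Load an MPI module; the exact module name here is a placeholder.
 module load openmpi
 
 # Plain mpirun, with no transport-selection flags: any temporary
 # Ethernet-specific options from the old recommendation should be
 # removed, as traffic now goes over InfiniBand.
 mpirun -np 16 ./my_mpi_program

In short, if a code was previously launched with the temporary Ethernet options, simply drop them and let the MPI library choose the interconnect.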
NOTE 3: Though we have been testing the new system since last night, a change of this magnitude is likely to cause some teething problems, so please bear with us over the next few days. Please report any issues or problems that are not explained or resolved after reading this page or our Infiniband Upgrade page to support@scinet.utoronto.ca.
Thu 19 Apr 2012 19:43:46 EDT