Difference between revisions of "Oldwiki.scinet.utoronto.ca:System Alerts"

From oldwiki.scinet.utoronto.ca

Revision as of 20:29, 19 April 2012

System Status: UP

GPC Upgrade to Infiniband - what you need to know

The GPC network has been upgraded to a low-latency, high-bandwidth InfiniBand network throughout the cluster. Several significant benefits are expected over the old mixed Ethernet/InfiniBand setup, including:

  • better I/O performance for all jobs
  • better performance for jobs that previously ran across multiple nodes over Ethernet (they will now use InfiniBand)
  • for users who were already using InfiniBand, improved queue throughput (there are now 4x as many available nodes) and the ability to run larger IB jobs.

NOTE 1: Our wiki is NOT completely up-to-date after this recent change. For the time being, please check this page and the temporary Infiniband Upgrade page (https://support.scinet.utoronto.ca/wiki/index.php/Infiniband_Upgrade) first for anything related to networking and queueing.

NOTE 2: The temporary mpirun settings that were recommended for multi-node Ethernet runs are no longer in effect, as all MPI traffic now goes over InfiniBand.
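
For illustration only (this example is not part of the original announcement): with every node now on the InfiniBand fabric, a multi-node job can be launched with a plain mpirun line and no transport-selection settings. The small MPI check below is a generic sketch, assuming an MPI compiler wrapper (mpicc) and mpirun are available in your environment; the file and program name mpi_hosts are arbitrary.

  /*
   * mpi_hosts.c -- minimal sketch: report which node each MPI rank runs on.
   * Compile:  mpicc -o mpi_hosts mpi_hosts.c
   * Launch:   mpirun -np 16 ./mpi_hosts
   * (plain mpirun; no network-selection settings are assumed or required)
   */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size, name_len;
      char host[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);                   /* start MPI */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's rank */
      MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of ranks */
      MPI_Get_processor_name(host, &name_len);  /* node hosting this rank */

      printf("rank %d of %d running on %s\n", rank, size, host);

      MPI_Finalize();
      return 0;
  }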

NOTE 3: Although we have been testing the new system since last night, a change of this magnitude is likely to cause some teething problems, so please bear with us over the next few days. Please report any issues that are not explained or resolved by this page or the Infiniband Upgrade page above to support@scinet.utoronto.ca.

Thu 19 Apr 2012 19:43:46 EDT

(Previous messages)