Difference between revisions of "Oldwiki.scinet.utoronto.ca:System Alerts"

From oldwiki.scinet.utoronto.ca
 
== System Status: <span style="color:#00bb11">'''UP'''</span>==
 
  
<div style="background-color:#000000; color:#ffffff; padding: 1em">
The Apr 19 upgrade of the GPC to a low-latency, high-bandwidth Infiniband network throughout the cluster is now reflected in (most of) the wiki.  The appropriate way to request nodes in job scripts for the new setup (which will coincide with the old way for many users) is described on the [[GPC_Quickstart#QDR_vs._DDR_Infiniband|GPC_Quickstart]] page.
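For illustration only, a minimal Torque/Moab job script of the kind described there might request Infiniband nodes as follows; the node property name (":qdr") and the script details are assumptions, so consult the [[GPC_Quickstart#QDR_vs._DDR_Infiniband|GPC_Quickstart]] page for the authoritative syntax:

<pre>
#!/bin/bash
# Hypothetical sketch -- see GPC_Quickstart for the authoritative syntax.
# Request 2 nodes with 8 cores each; the ":qdr" node property (an assumed
# name) would select nodes on the QDR Infiniband fabric.
#PBS -l nodes=2:ppn=8:qdr
#PBS -l walltime=1:00:00
cd $PBS_O_WORKDIR
mpirun -np 16 ./my_app
</pre>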
Wed 24 Apr 2012 12:47:46 EDT

<h2 style="color:#ffffff">GPC Upgrade to Infiniband - what you need to know</h2>

The GPC network has been upgraded to a low-latency, high-bandwidth Infiniband network throughout the cluster.  Several significant benefits over the old mixed ethernet/Infiniband setup are expected, including:
 
*better I/O performance for all jobs
*better job performance for what used to be multi-node ethernet jobs (as they will now make use of Infiniband)
*for users who were already using Infiniband, improved queue throughput (there are now 4x as many available nodes), and the ability to run larger IB jobs
 
 
 
NOTE 1: Our wiki is NOT completely up-to-date after this recent change. For the time being, you should first check this current page and the temporary [https://support.scinet.utoronto.ca/wiki/index.php/Infiniband_Upgrade Infiniband Upgrade] page for anything related to networks and queueing.
 
 
 
NOTE 2: The temporary mpirun settings that were recommended for multi-node ethernet runs are no longer in effect, as all MPI traffic now goes over Infiniband.
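As an illustration (assuming Open MPI; the exact temporary settings are not recorded on this page), an explicit transport override of the kind multi-node ethernet runs needed, versus the default now that Infiniband is cluster-wide, would look like:

<pre>
# Old multi-node-ethernet style: force the TCP transport explicitly
# (hypothetical example of a temporary override, assuming Open MPI).
mpirun --mca btl tcp,self -np 16 ./my_app

# New setup: no override needed; the Infiniband (openib) transport is
# selected automatically.
mpirun -np 16 ./my_app
</pre>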
 
 
 
NOTE 3: Though we have been testing the new system since last night, a change of this magnitude (3,000 adapter cards installed, 5 km of copper cable, 35 km of fibre optic cable) is likely to result in some teething problems, so please bear with us over the next few days.  Please report any issues/problems that are not explained/resolved after reading this current page or our [https://support.scinet.utoronto.ca/wiki/index.php/Infiniband_Upgrade Infiniband Upgrade] page to support@scinet.utoronto.ca.
 
</div>
 
 
 
Thu 19 Apr 2012 19:43:46 EDT
 
  
 
([[Previous_messages:|Previous messages]])
 

Revision as of 12:48, 25 April 2012
