Oldwiki.scinet.utoronto.ca:System Alerts

Revision as of 13:32, 19 January 2012

System Status: UP

Filesystems are back. Please resubmit your jobs.

Thu Jan 19 12:31:34 EST 2012


The system is still apparently unstable, with consequent loss of /scratch. Jobs may have died. /scratch is being mounted again.

Thu Jan 19 12:10:15 EST 2012


All systems are back up. Please resubmit your jobs.

Thu Jan 19 10:50:49 EST 2012


System Temporary Change:

Due to some changes we are making to the GPC GigE nodes, if you run multinode ethernet MPI jobs (IB multinode jobs are fine), you will need to explicitly request the ethernet interface in your mpirun:

For Open MPI -> mpirun --mca btl self,sm,tcp

For Intel MPI -> mpirun -env I_MPI_FABRICS shm:tcp

There is no need to do this if you run on IB, or if you run single node mpi jobs on the ethernet (GigE) nodes. Please check GPC_MPI_Versions for more details.
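
For illustration, a complete command line for a multinode GigE job might look like the examples below; the process count (-np 16) and the executable name (./my_app) are placeholders, not part of this notice, so substitute your own job's values:

  # Open MPI: restrict MPI to the shared-memory and TCP (ethernet) transports
  mpirun --mca btl self,sm,tcp -np 16 ./my_app

  # Intel MPI: the same restriction via the I_MPI_FABRICS setting
  mpirun -env I_MPI_FABRICS shm:tcp -np 16 ./my_app

Both variants do the same thing: they limit MPI to shared-memory communication within a node and TCP (ethernet) communication between nodes.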

Thu Jan 19 11:12:55 EST 2012



Chiller failure; all systems have automatically shut down. We'll keep you informed in this space.

Thu 19 Jan 2012 07:54:17 EST


(Previous messages)