Previous System News


The current month's changes can be found on the wiki front page.

Updated in 2017

  • Dec 4, 2017 : scratchtcs decommissioned.
  • Dec 1, 2017, 00:00 to 06:00: The connection to the data center will be down for scheduled network maintenance. Jobs will continue to run, but login sessions will be terminated at midnight.
  • Nov 28, 2017, 12:00 noon: The GPC will be reduced from 30,912 to 16,800 cores to make room for the installation of Niagara.
  • Sept 29, 2017: The TCS was decommissioned.
  • Mar 3: GPC: Version 7.0 of Allinea Forge (DDT Debugger, MAP, Performance Reports) installed as a module.
  • Jan 26: New larger (1.8PB) $SCRATCH storage brought online.
  • Oct 24: P8: 2 new Power 8 Development Nodes, P8, with 4x Nvidia P100 (Pascal) GPUs, available for users.
  • Sept 19: KNL: Intel Knights Landing Development Nodes, KNL, available for users.
  • Sept 13: GPC: Version 6.1 of Allinea Forge (DDT Debugger, MAP, Performance Reports) installed as a module.
  • Sept 13: GPC: Version 17.0.0 of the Intel Compiler and Tools are installed as modules.
  • Aug 20: P8: Power 8 Development Nodes, P8, with 2x Nvidia K80 GPUs, available for users.


Updated in 2016

  • May 3: GPC: Versions 15.0.6 and 16.0.3 of the Intel Compilers are installed as modules.

Updated in Sept. 2016

  • Feb 12: GPC: Version 6.0 of Allinea Forge (DDT Debugger, MAP, Performance Reports) installed as a module.
  • Jan 11: The 2016 Resource Allocations for compute cycles are now in effect.
  • Nov 23: The quota for home directories has been increased from 10 GB to 50 GB.
  • Nov 23, GPC: Two Visualization Nodes, viz01 and viz02, are being set up. They are 8-core Nehalem nodes with 2 graphics cards each, 64 GB of memory, and about 60GB of local hard disk. For now, you can directly log into viz01 to try it out (see the sketch after this list). We would value users' feedback and requests for suitable software, help with visualization projects, etc.
  • Nov 16: The ARC is being decommissioned. During a transition period, the ARC head node and two compute nodes will be kept up. Users are encouraged to start using Gravity instead.
  • Nov 12, GPC: The number of GPC devel nodes has been doubled from 4 to 8, and the new ones can be accessed using gpc0[5-8].
  • Sept 7, GPC: The number of nodes with 32 GB of RAM has been increased from 84 to 205.
  • July 24, GPC: GCC 5.2.0 with Coarray Fortran support, installed as a module.
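
For the visualization nodes mentioned in the Nov 23 entry above, a minimal way to try them out is to log in with X11 forwarding so that graphical applications can display back on your desktop. This is a sketch only; it assumes you connect from a SciNet login node and that your local machine runs an X server:

    ssh -X viz01     # -X enables X11 forwarding for graphical tools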

Updated in June 2015:

  • Jun 29, BGQ: ddt/5.0.1 installed as a module.
  • Jun 30, BGQ: NetCDF 4.3.3.1, using HDF5 1.8.14, installed
  • Jun 29, GPC: NetCDF 4.3.3.1, using HDF5 1.8.14, installed

Updated in April 2015:

  • Apr 21, GPC: intel/15.0.2 installed as a module.
  • Apr 21, GPC: intelmpi/5.0.3.048 installed as a module.
  • Apr 14, GPC: Haskell compiler installed as a module.
  • Apr 14, GPC: git-annex/5.20150219 installed as a module.
  • Apr 13, GPC: Midnight Commander installed as a module.
  • Apr 7, GPC: Stacks/1.29 installed as a module.
  • Apr 6, GPC: Gromacs 4.6.7 installed as a module.
  • Apr 6, GPC: Gromacs 5.0.4 installed as an experimental module

Updated in March 2015:

  • Mar 18, BGQ: module hdf5/1812-v18-mpich2-gcc is deprecated; please use hdf5/1814-v18-mpich2-gcc instead.
  • Mar 13, BGQ: module fftw/3.3.3-gcc4.8.1 is deprecated; please use fftw/3.3.4-gcc4.8.1 instead.
  • Mar 12, GPC & BGQ: Namd 2.10 installed as modules.
  • Mar 05, BGQ: modules bgqgcc/4.8.1 and mpich2/gcc-4.8.1 now fixed for V1R2M2.
  • Mar 04, GPC: OpenBLAS 0.2.13 installed as modules.

Updated in February 2015:

  • Feb 26, BGQ software stack upgraded to version V1R2M2.
  • Feb 24, GPC: Allinea Performance Reports available in module ddt/5.0.
  • Feb 20, GPC: Allinea Forge (DDT & MAP) available as module ddt/5.0.
  • Feb 6, GPC: intel/15.0 module is deprecated -- use 15.0.1 instead.
  • Feb 5, GPC, BGQ: git/1.9.5 is available as a module.

Updated in January 2015:

  • Jan 22, BGQ now a single 4-rack system. bgqdev-fen1 is the single login/devel/submission node.
  • Jan 19, GPC: emacs 24.4 available as a module.
  • Jan 14, GPC: kernel upgraded to 2.6.32-504.3.3.el6.x86_64. Its base OS remains unchanged.
  • Jan 10, GPC: Cmake 3.1.0 available as a module.
  • Jan 7, GPC: ROOT 6.0.02 installed as a module.
  • Jan 6, GPC: Ruby 1.9.3 installed as a module.

Updated in December 2014:

  • Dec 12, BGQ: devel system upgraded to 2 full racks (32,768 cores).

Updated in November 2014:

  • Nov 7: Archive jobs are on hold because the system is nearing capacity. They will run only once they are reviewed and released by SciNet staff (HPSS)
  • Nov 6: Python 2.7.8, a popular scripting environment, installed as module (GPC)

Updated in October 2014:

  • Oct 30: BGQ devel system upgraded from half a rack (8,192 cores) to 1 full rack (16,384 cores)
  • Oct 17: R 3.1.1, a statistical package, installed as a module (GPC)
  • Oct 7: Bedtools, a powerful toolset for genome arithmetic, installed as a module (GPC)
  • Oct 2: Parallel Debugger DDT upgraded to version 4.2.1 (TCS/P7)

Updated in September 2014:

  • Sept 25: Two new "Haswell" test nodes, hasw01 and hasw02, are available; each has 2x E5-2660 v3 @ 2.60GHz (20 cores total) and 128GB RAM.
  • Sept 10: Job arrays re-enabled (GPC/ARC/GRAVITY/SANDY)
  • Sept 10: Email notifications by the scheduler re-enabled (GPC/ARC/GRAVITY/SANDY).
  • Sept 9: Scheduler upgraded (GPC/ARC/GRAVITY/SANDY).
  • Sept 2: Intel Compilers 15.0 and IntelMPI 5.0 installed (GPC)

Updated in July 2014:

  • Jul 25: qsub now checks your submission scripts (GPC/ARC/GRAVITY/SANDY).
  • Jul 17: Paraview 4.1 server installed (GPC)
  • Jul 16: VNC now works on arc01 as well as gpc01,2,3,4.

Updated in May 2014:

  • May 26: gcc 4.9.0 installed as an experimental module (module load use.experimental gcc/4.9.0).
  • May 23: CUDA 6.0 installed as a module and Nvidia driver updated to 331.67 on ARC and GRAVITY
  • May 21: Allinea DDT/MAP updated on the GPC to version 4.2.1.
  • May 15: 4 nodes have been upgraded with more memory (2 with 128GB, 2 with 256GB), see GPC Memory Configuration for usage details.

Updated in Mar 2014:

  • Mar 12: Petsc 3.4.4, the Portable, Extensible Toolkit for Scientific Computation, installed (GPC)
  • Mar 11: HPN-SSH, a high-performance-enabled ssh, installed (BGQ)
  • Mar 7: Vapor 2.3.0, the Visualization and Analysis Platform for Ocean, Atmosphere, and Solar Researchers, installed (P7)
  • Mar 4: Python 3.3.4 installed (GPC)
  • Mar 3: Ffmpeg v2.1.3, an audio and video software solution, installed (GPC)

Updated in Feb 2014:

  • Feb 28: Cuda 5.5 module installed (ARC/Gravity)
  • Feb 28: New versions of the valgrind modules installed (valgrind/3.9.0_intelmpi and valgrind/3.9.0_openmpi) (GPC); see the usage sketch after this list.
  • Feb 25: New version of CP2K (latest trunk) installed as a module (GPC).
  • Feb 19: Ray 2.3.1 installed as a module (GPC)
  • Feb 19: Ghostscript added to the Xlibraries module (GPC)
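
For the MPI-aware valgrind modules in the Feb 28 entry above, a minimal usage sketch (the executable name my_mpi_app is a placeholder): load the module that matches your MPI flavour and place valgrind between mpirun and the executable, so that every rank runs under the memory checker.

    module load valgrind/3.9.0_openmpi
    mpirun -np 2 valgrind ./my_mpi_app    # each MPI rank runs under valgrind's default memcheck tool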

Updated in Jan 2014:

  • Jan 24: As a precaution, emails by the Moab/Torque scheduler have been disabled because of a potential security vulnerability (GPC).

Updated in Dec 2013:

  • discovar, a genome assembler, installed as a module on GPC.
  • allpaths-lg, a short read genome assembler, installed as a module on GPC.
  • gamess (version of May 1, 2013) installed as a module on GPC.
  • HDF4 file format library version 4.2.6 installed on TCS.
  • Newest IBM compilers (xlf 14.1 and xlc 12.1) now the default on TCS.
  • zlib and slib compression libraries installed as module 'compression' on TCS.
  • cmake version 2.8.12.1 installed as a module on BGQ.
  • Serial and parallel HDF5 v1.8.12 libraries installed as modules on BGQ.

Updated in Nov 2013:

Updated in Oct 2013:

  • Oct 30, Parallel netCDF 1.3.1 modules for intelmpi and openmpi installed on GPC, TCS, P7 and BGQ
  • Oct 30, GDAL 1.9.2 installed as a module on GPC
  • Oct 24, MemP 1.0.3, a memory profiling tool, installed on BGQ
  • Oct 22, Gcc 4.8.1 installed on P7
  • Oct 15, Rsync 3.1.0 installed as a module on GPC
  • Oct 15, User-space MySQL module installed on GPC
  • Oct 7, Python 2.7.5 module installed on P7

Updated in Sep 2013:

  • Sep 13: FFTW 3.3.3 with openmpi support installed as a module on GPC
  • Sep 13: IntelMPI 4.1.1.036 installed as a module on GPC
  • Sep 12: Paraview server 2.14.1 installed as a module on GPC
  • Sep 11: Intel compiler 14.0.0 available as a module on GPC.
  • Sep 11: Armadillo 3.910.0, a C++ linear algebra library, available as a module on GPC
  • Sep 10: git-annex 1.8.4, a tool to manage files using git, available as a module on GPC
  • Sep 10: cmake 2.8.8 module installed on TCS
  • Sep 4: Storage offloading from BGQ to HPSS enabled

Updated in Aug 2013:

  • Aug 26: CP2K, a molecular simulations package, installed as a module on GPC and BGQ
  • Aug 15, Vim editor v7.4.5 module on GPC
  • Aug 6, GROMACS v4.6.3 module on GPC

Updated in Jul 2013:

  • July 22, Latest version of Quantum ESPRESSO, an ab initio electronic-structure package, available as module espresso/trunk on GPC and BGQ
  • July 18, MAP 4.1 available on the GPC as part of the ddt/4.1 module. This version can also do IO profiling.
  • July 18, DDT 4.1 installed (BGQ, GPC, P7, TCS).
  • July 12, Ray de-novo assembler v2.2.0 module (GPC)
  • July 4, GDB 7.6 available as a module (GPC).

Updated in Jun 2013:

Updated in May 2013:

  • May 31, GCC 4.8.1 available on GPC as a module.
  • May 28, New version of GNU Parallel, 20130422, available as module on the GPC.
  • May 16, BGQ maximum job length reduced to 12 hours (bgq devel) and 24 hours (bgq production)
  • May 2, PetSc 3.3 installed as a module. Uses intel 13 and openmpi 1.6.4
  • May 1, OpenMPI 1.6.4 installed as module openmpi/intel/1.6.4.

Updated in Apr 2013:

  • Apr 11, GPC operating system upgraded from CentOS 6.3 to 6.4.
  • Apr 4, Allinea DDT 4.0 now available (and the default version) for all our systems, with a 128-task license.
  • Apr 4, Allinea MAP is now available on the GPC (part of the ddt module).

Updated in Mar 2013:

  • Mar 21, BGQ systems are running RHEL6.3 and V1R2M0 driver.
  • Mar 19, P7 xlf and vacpp compilers patched to latest versions, and the default modules refer to these latest versions.
  • Mar 18, Ray de-novo assembler v2.1.0 available on GPC
  • Mar 7, PGI compilers v13.2 available on ARC and GPC
  • Mar 7, Gnuplot 4.6.1 available on GPC
  • Mar 5, P7 Linux Cluster expanded by 3 nodes.

Updated in Feb 2013:

  • Feb 25, PetSc 3.2 available on TCS
  • Feb 21, FFTW 3.3.3 for intelmpi available on GPC
  • Feb 15, Armadillo 3.6.2 installed on GPC
  • Feb 7, Intel MPI 4.1 available on GPC
  • Feb 6, 2013: Intel compilers 13.1 available on GPC

Updated in Jan 2013:

  • Jan 8, 2013: GCC 4.7.2 available on GPC

Updated in Nov 2012:

  • Nov 13, 2012: GNU Parallel 20121022 installed on the GPC.
  • Nov 9, 2012: DDT upgraded to 3.2.1 (GPC,ARC,TCS,P7)
  • Nov 8, 2012: Gnuplot 4.6.1 installed on the P7.

Updated in Oct 2012:

  • Oct 19, 2012: Cuda 5.0 installed on ARC.
  • Oct 2, 2012: Perl-CPAN installed on the GPC.

Updated in Sep 2012:

  • Sep 19, 2012: Users now get an email alert when they reach 90% and 95% of their allowed disk usage, or of their allowed number of files.
  • Sep 5, 2012: GPC/HPSS: A parallel implementation of gzip called 'pigz' has been installed as part of the "extras" module.
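
A minimal sketch of using pigz from the "extras" module (the file name is a placeholder):

    module load extras
    pigz -p 8 results.tar       # compress with 8 threads, producing results.tar.gz
    unpigz results.tar.gz       # decompress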

Updated in Aug 2012:

  • Aug 15, 2012, ARC: PGI compilers for OpenACC and Cuda Fortran upgraded to 12.6 as module pgi/12.6, which is the new default
  • Aug 7, 2012, P7: new version of the IBM Fortran (14.1) and C/C++ compiler (12.1) available in non-default modules

Updates in Jul 2012:

  • Jul 13, 2012, TCS: new version of the IBM Fortran (14.1) and C/C++ compiler (12.1) available in non-default modules
  • Jul 11, 2012, TCS: new module gmake/3.82
  • Jul 10, 2012, GPC: new version of GNU parallel in module gnu-parallel/20120622
  • Jul 7, 2012, GPC: all queues except debug now have a minimum walltime of 15 minutes
  • Jul 5, 2012, ARC: cluster now integrated into GPC scheduler
  • Jul 4, 2012, ARC: PGI compilers for OpenACC and Cuda Fortran installed

Updates in Jun 2012:

  • GPC: A new version of the Intel compilers is available as intel/12.1.5. Version 12.1.3 is still the default.
  • Scratch purging: the allowed time is still three months, but now files that were modified in the last three months will not get purged, even if they were never read in that period.
  • ARC: cuda/4.1 is now the default CUDA module. The module cuda/4.2 is available as well, and will work with the newer gcc 4.6.1 compiler.

Updates in May 2012:

  • GPC: a newer git version 1.7.10 is now available as a module (the default is still 1.7.1).
  • GPC: silo is installed as a module
  • GPC: gcc 4.7.0 available as module (version 4.6.1 is still the default)
  • HPSS: Jobs will now run automatically.
  • ARC: cuda 4.1 and 4.2 are available as modules (Note: 4.2 is not supported by the ddt debugger).
  • P7: ncl available as a module
  • P7: scons available as a module

Updates in Apr 2012:

  • GPC: The GPC has been upgraded to a low-latency, high-bandwidth InfiniBand network throughout the cluster. The temporary mpirun settings that were previously recommended for multinode ethernet runs are no longer needed, as all MPI traffic now goes over InfiniBand. In most cases, mpirun -np X [executable] will work.
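
A minimal sketch of the simplified launch described above, with a placeholder executable and core count:

    # all MPI traffic now goes over InfiniBand; no extra interface options are needed
    mpirun -np 16 ./my_mpi_app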

Updates in Mar 2012:

  • New Blue Gene/Q system announced.

Updates in Feb 2012:

  • GPC: A new version of the Intel compiler suite has become the default module. The C/C++ and fortran compilers in this suite are at version 12.1.3, while the MKL library is at version 10.3.9.
  • GPC: New versions of parallel-netcdf and mpb have been installed.

Updates in Jan 2012:

  • The new Resource Allocations will take effect on January 9, for groups who were awarded an allocation.
  • On January 30th, CentOS 5 was phased out.
  • The "diskUsage" command has been improved and its output has been simplified.
  • GPC: Due to some changes we are making to the GigE nodes, if you run multinode ethernet MPI jobs, you will need to explicitly request the ethernet interface in your mpirun. For OpenMPI: mpirun --mca btl self,sm,tcp; for IntelMPI: mpirun -env I_MPI_FABRICS shm:tcp. There is no need to do this if you run on IB, or if you run single-node MPI jobs on the ethernet (GigE) nodes.
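
Spelled out for a placeholder executable, the two invocations from the note above might look like this:

    # OpenMPI, multinode job on the ethernet (GigE) nodes
    mpirun --mca btl self,sm,tcp -np 16 ./my_mpi_app
    # IntelMPI, multinode job on the ethernet (GigE) nodes
    mpirun -env I_MPI_FABRICS shm:tcp -np 16 ./my_mpi_app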

Updates in December 2011:

  • GPC transition from CentOS 5 to CentOS 6 completed. A few nodes still have the old CentOS 5 for validation purposes.

Updates in Nov 2011:

  • Disks added to the scratch file system, and scratch now spans both of our DDN controllers. The performance of /scratch should improve as a result of more spindles and the use of a second controller, while the available space has increased by about 40%.
  • The home, scratch, project and hpss file systems have been restructured (note: not all users have access to the latter two). As a consequence, users' files reside in different locations than before. The home and scratch file system are now group-based, and groups are furthermore clustered by the initial letter of the group name. For instance, the current home directory of user 'resu' in group 'puorg' is now /home/p/puorg/resu. The predefined variables $HOME, $SCRATCH, $PROJECT and $ARCHIVE point to the new directories.
  • The High-Performance Storage System (HPSS) goes into full production with a concurrent change in /project policies. Users with storage allocations greater than 5 TB will find all their former /project files will now reside in HPSS and their /project quotas will be reduced to 5 TB.

Updates in Oct 2011:

  • GPC: An OS update from CentOS 5.6 to CentOS 6 is being prepared, which will include updates to other programs (perl, gcc, python) as well. The ARC already uses the newer OS, and a few of the GPC nodes are already running it as a test while we port all the modules to the new OS.

Updates in Sep 2011:

  • File system: In the near future, the home, scratch, project and hpss file systems will be restructured (note: not all users have access to the latter two). To facilitate the transition, we ask for users' cooperation in making sure all their scripts and applications use only relative paths or the predefined variables $HOME, $SCRATCH and $PROJECT.
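
As a sketch of what this means in practice, a job script would refer to storage through the predefined variables rather than hard-coded absolute paths (directory and file names below are placeholders):

    cd $SCRATCH/my_run                      # instead of a hard-coded /scratch/... path
    ./my_app > $SCRATCH/my_run/output.log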

Updates in Aug 2011:

  • GPC: An OS update from CentOS 5.6 to CentOS 6 is being prepared, which will include updates to other programs (perl, gcc, python) as well. A few nodes are already running it as a test, and we are in the process of porting all the modules to the new OS. We encourage users who are willing to try out the new environment to contact us. Note that the ARC already uses the newer OS.
  • GPC: "Climate Data Operator" versions 1.4.6 and 1.5.1 are available as modules cdo/1.4.6 and cdo/1.5.1, respectively.
  • GPC: The "Climate Model Output Rewriter" is installed as module cmor/2.7.1.
  • GPC: a newer version of R can now be used by explicitly loading the module R/2.13.1, while R/2.11.1 remains the default.
  • GPC: ffmpeg has been added to the ImageMagick module.

Updates in Jul 2011:

  • Extensive updates and tightening of security measures were performed. Users were required to change their passwords and to regenerate (pass-phrase protected) ssh keys if they used them. We also updated the operating system on the GPC to close the security hole.
  • GPC: nedit installed as a module.
  • P7: Any user who has access to the Power 6 cluster (TCS) can now give the Power 7 cluster (P7) a try.

Updates in Jun 2011:

  • HPSS, the new tape-backed storage system that expands the current storage capacity of SciNet, has entered its pilot phase. This means that the installation is complete, and select users are trying out the system. HPSS will be one of the ways in which storage allocation will be implemented.
  • New IBM Power-7 cluster: The P7 cluster currently consists of 5 IBM Power 755 servers (at least 3 more servers to be added later this year). Each has four 8-core 3.3GHz Power7 CPUs and 128GB RAM, and features 4-way Simultaneous MultiThreading, giving 128 threads per server. Linux is the operating system. Both the GCC and IBM compilers are installed, as well as POE and OpenMPI. LoadLeveler is used as the scheduler. Instructions on usage are on the wiki, but you will first have to ask us if you want access (support@scinet.utoronto.ca).
  • GPC: The Berkeley compiler for Unified Parallel C (UPC) has been installed as the module upc. The compiler command is 'upcc'; see the sketch after this list.
  • GPC: Bugs in the gnuplot module were fixed.
  • GPC: qhull support was added to octave.
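
For the UPC entry above, a compile-and-run sketch; the source file name is a placeholder, and the upcrun launcher is assumed to be the one shipped with the Berkeley UPC distribution:

    module load upc
    upcc -o hello hello.upc     # compile a UPC source file
    upcrun -n 8 ./hello         # launch with 8 UPC threads (launcher assumed)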

Updates in May 2011:

  • DDT, a parallel debugging program from Allinea, has been installed on the GPC, TCS, and ARC. DDT stands for "Distributed Debugging Tool" and is available as the module "ddt". It supports debugging OpenMP, MPI and CUDA applications in a graphical environment.

Updates in Apr 2011:

  • GPC: Two versions of Goto Blas were installed, a single and multi-threaded one. They can be loaded as modules gotoblas/1.13-singlethreaded and gotoblas/1.13-multithreaded, respectively.
  • Accelerator Research Cluster (ARC): An 8-node GPU test cluster has been set up with a total of 64 Nehalem CPUs and 16 GPUs (NVIDIA, Cuda capability 2.0).

Updates in Mar 2011:

  • TCS: The bug in the showstart command was fixed, and showstart may be used again to estimate the start time of your job.
  • GPC: Issues regarding simultaneously loading the gcc/4.4.0 and the Intel compiler modules were resolved.
  • GPC: A newer version of the gcc compiler suite, v4.6.0, has been installed. The default version is still 4.4.0.
  • GPC: Octave version 3.2.4 has been installed on the GPC. You should for now consider this an experimental module.

Updates in Feb 2011:

  • GPC: The temporary location of the standard error/output file for GPC jobs has changed.
  • TCS: The showstart command has been disabled as it appears to contain a bug that puts jobs in a 'hold' state.

Updates in Jan 2011:

  • Users can now request Network Switch Affinity for GPC ethernet jobs at runtime.
  • For groups who were allocated compute time in this RAC allocation round, the new RAPs took effect on Jan 17th.
  • File system servers were reconfigured to improve performance and stability. File access should be better, especially for writing.

Updates in Dec 2010:

  • Addition of ImageMagick software packages on GPC.
  • GPC: EncFS, an encrypted filesystem in user space, was installed. It works only on gpc01..04.
  • GPC: Version 12 of the Intel compilers has been installed as module 'intel/intel-v12.0.0.084'.
  • GPC: The corresponding code analysis tools for these compilers are available as the module 'inteltools'.

Updates in Nov 2010:

  • A number of module names have been changed on the GPC.
  • GPC: A module for R was installed.
  • GPC: padb was installed as a module.
  • GPC: GNU parallel was installed as module 'gnu-parallel'.
  • TCS: CDO (Climate Data Operators) was installed as module 'cdo/1.4.6'
  • TCS: Compilers xlf 13.1 and xlc 11.1 are available as modules xlf/13.1 and vacpp/11.1, respectively.

Updates in Oct 2010:

  • Further enhancements to diskUsage. You may also generate plots of your usage over time (with the -plot option); see the sketch after this list.
  • CPMD 3.13.2 installed on the GPC
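
A sketch of checking your usage with diskUsage (the full path is as given in the Sept 2010 notes below):

    /scinet/gpc/bin/diskUsage          # report current usage and quotas
    /scinet/gpc/bin/diskUsage -plot    # also generate plots of your usage over time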

Updates in Sept 2010:

  • The diskUsage command has been enhanced, and now you may get information on how much your usage has changed over a certain period with the -de option.
  • IntelMPI 3.x has been deprecated.
  • GPFS file system was upgraded to 3.3.0.6; Stricter /scratch quotas of 10TB were implemented; check yours with /scinet/gpc/bin/diskUsage.
  • GPC: The quantum chemistry software package NWChem 5.1.1 installed.
  • GPC: CPMD, a Car-Parrinello molecular dynamics package, was installed.
  • GPC: Gromacs 4.5.1 (single precision), a molecular simulation package, was installed.

Updates in Aug 2010:

  • GPC: A number of versions of PetSc 3.1 were installed.
  • GPC: OpenSpeedShop v1.9.3.4 was installed.

Updates in Jul 2010:

  • Started the pilot project on Hierarchical Storage Management (HSM)
  • GPC: The intel module no longer automatically loads the gcc module. Users who use both should have "module load gcc intel" in their .bashrc (see the sketch after this list).
  • GPC: The default intel compiler v11.1 was changed to Update 6 (module intel/11.1.0.72).
  • TCS: OpenDX, a visualisation software package, was installed as a module.
  • GPC: MEEP, a finite difference simulation software for electromagnetic systems with mpi support, was installed.
  • GPC & TCS: A number of old modules have been deprecated.
  • Recurring file system issues were mitigated as much as possible.
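
For the intel/gcc module change in the Jul entry above, a minimal sketch of the suggested .bashrc addition:

    # in ~/.bashrc: load gcc explicitly, since the intel module no longer pulls it in
    module load gcc intel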

Updates in Jun 2010:

  • Hyper-threading was enabled on GPC.