P8
P8 | |
---|---|
Installed | June 2016 |
Operating System | Linux (RHEL 7.2) |
Number of Nodes | 2 Power8 with 2x NVIDIA K80 |
Interconnect | InfiniBand (EDR) |
RAM/Node | 512 GB |
Cores/Node | 16 (128 threads) |
Login/Devel Node | p8t01, p8t02 (from login.scinet) |
Vendor Compilers | xlc/xlf |
Specifications
The P8 Test System consists of two IBM Power 822LC servers, each with two 8-core 3.25 GHz Power8 CPUs and 512 GB of RAM. Like the Power7, the Power8 uses Simultaneous Multithreading (SMT), but extends the design to 8 threads per core. This allows the 16 physical cores in each node to support up to 128 threads, which in many cases leads to significant speedups. Each node also has two NVIDIA Tesla K80 GPUs (CUDA Capability 3.7, Kepler), each consisting of two GK210 GPUs with 12 GB of RAM, so each node effectively has 4 GPUs.
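Once on a node, a quick way to confirm the SMT mode and the visible GPU devices is shown below; this assumes the standard ppc64_cpu utility (from powerpc-utils) and the NVIDIA driver tools are installed.
$ ppc64_cpu --smt      # report the current SMT mode (up to SMT-8)
$ nvidia-smi -L        # list GPU devices; each K80 shows up as two GK210s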
Compile/Devel/Test
First, log in via ssh with your SciNet account to login.scinet.utoronto.ca; from there you can proceed to p8t01 or p8t02.
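For example (replace USER with your SciNet username):
$ ssh USER@login.scinet.utoronto.ca
$ ssh p8t01      # or p8t02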
Software
GNU Compilers
gcc/g++/gfortran version 4.4.4 is the system default, and gcc 4.6.1 is available as a separate module. However, it is recommended to use the IBM compilers (see below).
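A minimal sketch of both options, assuming the newer gcc is provided through a module simply called gcc (the exact module name may differ):
$ gcc -O2 -o hello hello.c      # system default gcc
$ module load gcc               # newer gcc from a module (module name assumed)
$ gcc -O2 -o hello hello.c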
IBM Compilers
To use the IBM Power-specific compilers xlc/xlc++/xlf, you need to load the following modules:
$ module load vacpp xlf
NOTE: Be sure to use "-q64" when using the IBM compilers.
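A typical serial build might then look like the sketch below; -q64 is required as noted above, while -qarch=pwr8/-qtune=pwr8 are just common Power8 tuning flags, not a site requirement.
$ module load vacpp xlf
$ xlc   -q64 -O3 -qarch=pwr8 -qtune=pwr8 -o hello_c hello.c
$ xlf90 -q64 -O3 -qarch=pwr8 -qtune=pwr8 -o hello_f hello.f90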
MPI
IBM's POE is available and will work with both the IBM and GNU compilers.
$ module load pe
The MPI wrappers for C, C++, and Fortran 77/90 are mpicc, mpicxx, and mpif77/mpif90, respectively (mpcc, mpCC, and mpfort should also work).
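As a sketch, compiling and launching a small MPI program interactively could look like this; -procs is the standard POE flag for the number of MPI tasks (equivalent to setting MP_PROCS).
$ module load pe
$ mpicc -q64 -O3 -o hello_mpi hello_mpi.c
$ poe ./hello_mpi -procs 4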
Note: To use the full C++ bindings of MPI (those in the MPI namespace) in C++ code, you need to add -cpp to the compilation command, and you need to add -Wl,--allow-multiple-definition to the link command if you are linking several object files that use the MPI C++ bindings.
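For the C++-bindings case just described, a sketch of separate compile and link steps (the file names are placeholders):
$ mpicxx -cpp -q64 -O2 -c main.cpp
$ mpicxx -cpp -q64 -O2 -c solver.cpp
$ mpicxx -q64 -Wl,--allow-multiple-definition -o solver main.o solver.o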