SOSCIP GPU
SOSCIP GPU | |
---|---|
Installed | September 2017 |
Operating System | Ubuntu 16.04 LE |
Number of Nodes | 14x Power 8 with 4x NVIDIA P100 |
Interconnect | InfiniBand EDR |
RAM/Node | 512 GB |
Cores/Node | 2 x 10-core (20 physical, 160 SMT) |
Login/Devel Node | sgc01 |
Vendor Compilers | xlc/xlf, nvcc |
SOSCIP
The SOSCIP GPU Cluster is a Southern Ontario Smart Computing Innovation Platform (SOSCIP) resource located at the University of Toronto's SciNet HPC facility. The SOSCIP multi-university/industry consortium is funded by the Ontario Government and the Federal Economic Development Agency for Southern Ontario [1].
Support Email
Please use <soscip-support@scinet.utoronto.ca> for SOSCIP GPU specific inquiries.
Specifications
The SOSCIP GPU Cluster consists of 14 IBM Power 822LC "Minsky" servers, each with 2x 10-core 3.25 GHz POWER8 CPUs and 512 GB of RAM. Like the POWER7, the POWER8 uses Simultaneous Multithreading (SMT), but extends the design to 8 threads per core, allowing the 20 physical cores to support up to 160 threads. Each node has 4x NVIDIA Tesla P100 GPUs, each with 16 GB of RAM and CUDA Capability 6.0 (Pascal), connected using NVLink.
Access and Login
To obtain access to the system, you must request access to the SOSCIP GPU Platform. Instructions were sent to your sponsoring faculty member by email at the beginning of your SOSCIP project.
Access to the SOSCIP GPU Platform is provided through the BGQ login node, bgqdev.scinet.utoronto.ca, via ssh; from there you can proceed to the GPU development node sgc01-ib0, also via ssh. Your user name and password are the same as for other SciNet systems.
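For example, a typical login sequence looks like this (replace <username> with your SciNet user name):
ssh <username>@bgqdev.scinet.utoronto.ca   # BGQ login node
# then, from bgqdev:
ssh sgc01-ib0                              # SOSCIP GPU development node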
Filesystem
The filesystem is shared with the BGQ system. See here for details.
Job Submission
The SOSCIP GPU cluster uses SLURM as its job scheduler, and jobs are scheduled by node, i.e., 20 cores and 4 GPUs each. Jobs are submitted from the development node sgc01. The maximum walltime per job is 12 hours (except in the 'long' queue, see below), with up to 8 nodes per job.
$ sbatch myjob.script
Where myjob.script is
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=20       # MPI tasks (needed for srun)
#SBATCH --time=00:10:00   # H:M:S
#SBATCH --gres=gpu:4      # Ask for 4 GPUs per node

cd $SLURM_SUBMIT_DIR

hostname
nvidia-smi
More information about the sbatch command is found here.
You can query job information using
squeue
To see only your own jobs, run
squeue -u <userid>
Once your job is running, SLURM creates a file, usually named slurm-<jobid>.out, in the directory from which you issued the sbatch command. It contains the console output from your job, which you can monitor with the tail -f <file> command.
To cancel a job use
scancel $JOBID
Longer jobs
If your job takes more than 12 hours, the sbatch command will not let you submit it. There is, however, a way to run jobs of up to 24 hours by specifying "-p long" as an option (i.e., add #SBATCH -p long to your job script). The priority of such jobs may be throttled in the future if the 'long' queue turns out to have a negative effect on turnover time in the queue.
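For example, the header of a job script for the 'long' queue might look like this (the 24-hour walltime is just an illustration of the maximum allowed):
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=20
#SBATCH --gres=gpu:4      # Ask for 4 GPUs per node
#SBATCH --time=24:00:00   # up to 24 hours in the 'long' queue
#SBATCH -p long           # submit to the 'long' queue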
Interactive
For an interactive session use
salloc --gres=gpu:4
After executing this command, you may have to wait in the queue until a system is available.
More information about the salloc command is here.
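A minimal sketch of an interactive session (this assumes the allocation drops you into a shell from which the GPUs are visible; if it does not, prefix the commands with srun):
salloc --gres=gpu:4
nvidia-smi   # confirm that the four P100 GPUs are visible
exit         # release the allocation when you are done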
Automatic Re-submission and Job Dependencies
You may often have a job that you know will take longer to run than the queue permits. As long as your program has checkpoint/restart capability, you can have one job automatically submit the next. The following example assumes that the program finishes before the requested time limit and then resubmits itself by logging into the development node. Job dependencies and a maximum number of job re-submissions are used to ensure sequential operation.
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=20       # MPI tasks (needed for srun)
#SBATCH --time=00:10:00   # H:M:S
#SBATCH --gres=gpu:4      # Ask for 4 GPUs per node

cd $SLURM_SUBMIT_DIR

: ${job_number:="1"}      # set job_number to 1 if it is undefined
job_number_max=3

echo "hi from ${SLURM_JOB_ID}"

# RUN JOB HERE

# SUBMIT NEXT JOB
if [[ ${job_number} -lt ${job_number_max} ]]
then
  (( job_number++ ))
  # note: \$4 is escaped so that awk, not this script, extracts the job ID from sbatch's output
  next_jobid=$(ssh sgc01-ib0 "cd $SLURM_SUBMIT_DIR; /opt/slurm/bin/sbatch --export=job_number=${job_number} -d afterok:${SLURM_JOB_ID} thisscript.sh | awk '{print \$4}'")
  echo "submitted ${next_jobid}"
fi

sleep 15
echo "${SLURM_JOB_ID} done"
Software Installed
IBM PowerAI
The PowerAI platform contains popular open machine learning frameworks such as Caffe, TensorFlow, and Torch. Run the module avail command for a complete listing. More information is available at this link: https://developer.ibm.com/linuxonpower/deep-learning-powerai/releases/. Release 4.0 is currently installed.
GNU Compilers
More recent versions of the GNU Compiler Collection (C/C++/Fortran) are provided by the IBM Advance Toolchain, with enhancements for the POWER8 CPU. To load a newer Advance Toolchain version, use one of the following:
Advance Toolchain V10.0
module load gcc/6.3.1
Advance Toolchain V11.0
module load gcc/7.2.1
More information about the IBM Advance Toolchain can be found here: https://developer.ibm.com/linuxonpower/advance-toolchain/
IBM XL Compilers
To load the native IBM xlc/xlc++ and xlf (Fortran) compilers, run
module load xlc/13.1.5
module load xlf/15.1.5
IBM XL Compilers are enabled for use with NVIDIA GPUs, including support for OpenMP 4.5 GPU offloading and integration with NVIDIA's nvcc command to compile host-side code for the POWER8 CPU.
Further information about the IBM XL compilers is available in IBM's XL C/C++ and XL Fortran documentation.
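As an illustrative sketch (the source file names and flags below are examples, not taken from this page), the XL compilers can be used both as nvcc's host compiler and for OpenMP 4.5 GPU offloading:
module load xlc/13.1.5 cuda/8.0
nvcc -ccbin xlc++ -O2 -o saxpy saxpy.cu              # XL C++ as the host-side compiler for nvcc
xlc_r -qsmp=omp -qoffload -O2 -o omp_gpu omp_gpu.c   # OpenMP 4.5 target offload to the GPU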
NVIDIA GPU Driver Version
The current NVIDIA driver version is 384.66.
CUDA
The currently installed CUDA Toolkits are versions 8.0 and 9.0. Load the one you need with
module load cuda/8.0
or
module load cuda/9.0
The CUDA driver is installed locally; the CUDA Toolkits are installed in:
/usr/local/cuda-8.0
/usr/local/cuda-9.0
Note that the /usr/local/cuda directory is linked to the /usr/local/cuda-9.0 directory.
Documentation and API reference information for the CUDA Toolkit can be found here: http://docs.nvidia.com/cuda/index.html
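As a minimal sketch (the source file name is hypothetical), a CUDA program can be compiled for the P100's Pascal architecture like so:
module load cuda/9.0
nvcc -arch=sm_60 -o my_program my_program.cu   # sm_60 matches the P100's compute capability 6.0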
OpenMPI
OpenMPI is currently set up on the 14 nodes, which are connected over EDR InfiniBand. Load one of the following modules:
$ module load openmpi/2.1.1-gcc-5.4.0
$ module load openmpi/2.1.1-XL-13_15.1.5
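For example (hello_mpi.c is a hypothetical source file), an MPI program can be built with the GCC variant and launched with srun inside a job script that requests --ntasks=20:
module load openmpi/2.1.1-gcc-5.4.0
mpicc -O2 -o hello_mpi hello_mpi.c   # build with the OpenMPI compiler wrapper
srun ./hello_mpi                     # launch the MPI ranks inside the SLURM allocation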
Other Software
Other software packages can be installed onto the SOSCIP GPU Platform. It is best to install new software in your own home directory, which gives you full control over it (exact version, configuration, sub-packages, etc.).
In the following subsections are instructions for installing several common software packages.
Anaconda (Python)
Anaconda is a popular distribution of the Python programming language. It contains several common Python libraries such as SciPy and NumPy as pre-built packages, which eases installation.
Anaconda can be downloaded from here: https://www.anaconda.com/download/#linux
NOTE: Be sure to download the Power8 installer.
TIP: If you plan to use Tensorflow within Anaconda, download the Python 2.7 version of Anaconda
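A minimal installation sketch, assuming you have downloaded the POWER8 (ppc64le) installer from the link above (<version> is a placeholder for the actual installer version):
bash Anaconda2-<version>-Linux-ppc64le.sh -b -p $HOME/anaconda2   # silent install into your home directory
export PATH=$HOME/anaconda2/bin:$PATH                             # use the Anaconda Python for this session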
Keras
Keras (https://keras.io/) is a popular high-level deep learning software development framework. It runs on top of other deep-learning frameworks such as TensorFlow.
The easiest way to install Keras is to install Anaconda first, then install Keras with the pip command.
Keras uses TensorFlow underneath to run neural network models. Before running code using Keras, be sure to load the PowerAI TensorFlow module and the cuda module.
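A hedged setup sketch (the PowerAI TensorFlow module name is not listed on this page; run module avail to find it, and the CUDA version is assumed to match the PowerAI release):
module load cuda/8.0
module load <powerai-tensorflow-module>               # placeholder name; check `module avail`
pip install keras                                     # inside your Anaconda environment
python -c "import keras; print(keras.__version__)"    # verify that Keras loads its TensorFlow backend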
Numpy/Scipy
PyTorch
PyTorch is the Python implementation of the Torch framework for deep learning.
It is suggested that you use PyTorch within Anaconda.
There is currently no build of PyTorch for POWER8-based systems. You will need to compile it from source.
Obtain the source code from here: http://pytorch.org/
Before building PyTorch, make sure to load cuda by running
module load cuda/8.0
NOTE: Do not have any gcc module loaded when building PyTorch. Use the default version of gcc (currently v5.4.0) included with the operating system; the build will fail with later versions of gcc.
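A rough build sketch under those assumptions (the clone URL is the standard PyTorch GitHub repository; exact build steps may change between releases):
module purge             # ensure no gcc module is loaded; the system gcc 5.4.0 will be used
module load cuda/8.0
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
python setup.py install  # installs into the active (e.g. Anaconda) Python environment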
TensorFlow (new versions and python3)
The TensorFlow included in PowerAI may not be the most recent version. Newer versions of TensorFlow are provided as prebuilt Python wheels that users can install with pip into their own space. The custom wheels are stored in /scinet/sgc/Applications/TensorFlow_wheels. It is highly recommended to install these custom TensorFlow wheels into a Python virtual environment.
Installing with Python2.7:
- Create a virtual environment tensorflow-1.8-py2 that also has access to the system-wide site packages:
virtualenv --python=python2.7 --system-site-packages tensorflow-1.8-py2
- Activate virtual environment:
source tensorflow-1.8-py2/bin/activate
- Install TensorFlow into the virtual environment (a custom NumPy built against an optimized OpenBLAS library can also be installed):
pip install --upgrade --force-reinstall /scinet/sgc/Libraries/numpy/numpy-1.14.3-cp27-cp27mu-linux_ppc64le.whl
pip install /scinet/sgc/Applications/TensorFlow_wheels/tensorflow-1.8.0-cp27-cp27mu-linux_ppc64le.whl
Installing with Python3.5:
- Create a virtual environment tensorflow-1.8-py3 that also has access to the system-wide site packages:
virtualenv --python=python3.5 --system-site-packages tensorflow-1.8-py3
- Activate virtual environment:
source tensorflow-1.8-py3/bin/activate
- Install TensorFlow into the virtual environment (a custom NumPy built against an optimized OpenBLAS library can also be installed):
pip3 install --upgrade --force-reinstall /scinet/sgc/Libraries/numpy/numpy-1.14.3-cp35-cp35m-linux_ppc64le.whl
pip3 install /scinet/sgc/Applications/TensorFlow_wheels/tensorflow-1.8.0-cp35-cp35m-linux_ppc64le.whl
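A quick sanity check after installation, run inside the activated virtual environment:
python -c "import tensorflow as tf; print(tf.__version__)"   # should report 1.8.0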
Submitting jobs
The myjob.script file shown above needs to be modified to run the custom TensorFlow: the cuda/9.0 and cudnn/cuda9.0/7.0.5 modules must be loaded and the virtual environment must be activated.
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=20       # MPI tasks (needed for srun)
#SBATCH --time=00:10:00   # H:M:S
#SBATCH --gres=gpu:4      # Ask for 4 GPUs per node

module purge
module load cuda/9.0 cudnn/cuda9.0/7.0.5
source tensorflow-1.8-py2/bin/activate   # change this to the location where your virtual environment was created

cd $SLURM_SUBMIT_DIR
python code.py
Links
- Documentation: GPU Cluster Introduction: SOSCIP GPU Platform