__NOTOC__

{| style="border-spacing: 8px; width:100%"
| valign="top" style="cellpadding:1em; padding:1em; border:2px solid; background-color:#f6f674; border-radius:5px"|
'''WARNING: SciNet is in the process of replacing this wiki with a new documentation site. For current information, please go to [https://docs.scinet.utoronto.ca https://docs.scinet.utoronto.ca]'''
|}

{{Infobox Computer
|image=[[Image:S882lc.png|center|300px|thumb]]
|name=SOSCIP GPU
|installed=September 2017
|operatingsystem= Ubuntu 16.04 le
|loginnode= sgc01
|nnodes= 14x Power 8 with 4x NVIDIA P100
|rampernode=512 GB
|corespernode= 2 x 10core (20 physical, 160 SMT)
|interconnect=Infiniband EDR
|vendorcompilers=xlc/xlf, nvcc
}}

== New Documentation Site ==

Please visit the new documentation site: [https://docs.scinet.utoronto.ca/index.php/SOSCIP_GPU https://docs.scinet.utoronto.ca/index.php/SOSCIP_GPU] for updated information.

== SOSCIP ==

The SOSCIP GPU Cluster is a Southern Ontario Smart Computing Innovation Platform ([http://soscip.org/ SOSCIP]) resource located at the University of Toronto's SciNet HPC facility. The SOSCIP multi-university/industry consortium is funded by the Ontario Government and the Federal Economic Development Agency for Southern Ontario [http://www.research.utoronto.ca/about/our-research-partners/soscip/].

== Support Email ==

Please use [mailto:soscip-support@scinet.utoronto.ca <soscip-support@scinet.utoronto.ca>] for SOSCIP GPU-specific inquiries.

<!--
 
== Specifications ==

The SOSCIP GPU Cluster consists of 14 IBM Power 822LC "Minsky" servers, each with 2x 10-core 3.25 GHz POWER8 CPUs and 512 GB of RAM. Like the POWER7, the POWER8 uses Simultaneous Multithreading (SMT), but extends the design to 8 threads per core, allowing the 20 physical cores to support up to 160 threads. Each node has 4x NVIDIA Tesla P100 GPUs, each with 16 GB of RAM and CUDA Compute Capability 6.0 (Pascal), connected using NVLink.
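
A quick way to confirm this layout from the development node or inside a job is to query the CPU and GPU configuration directly (a minimal sketch; the exact output depends on the node):

<pre>
# logical CPUs: 20 physical cores x SMT-8 = 160
nproc

# sockets, cores per socket and threads per core
lscpu | grep -E 'Socket|Core|Thread'

# the four P100 GPUs, driver version and NVLink topology
nvidia-smi
nvidia-smi topo -m
</pre>
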
== Access and Login ==

To obtain access to the system, you must request access to the SOSCIP GPU Platform. Instructions will have been sent to your sponsoring faculty member by e-mail at the beginning of your SOSCIP project.

Access to the SOSCIP GPU Platform is provided through the BGQ login node, '''<tt>bgqdev.scinet.utoronto.ca</tt>''', using ssh, and from there you can proceed to the GPU development node '''<tt>sgc01-ib0</tt>''' via ssh. Your user name and password are the same as for other SciNet systems.
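
For example (replace <tt>USER</tt> with your SciNet user name):

<pre>
# from your own machine
ssh USER@bgqdev.scinet.utoronto.ca

# from bgqdev, hop to the SOSCIP GPU development node
ssh sgc01-ib0
</pre>
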
== Filesystem ==

The filesystem is shared with the BGQ system. See [https://wiki.scinet.utoronto.ca/wiki/index.php/BGQ#Filesystem here] for details.

== Job Submission ==

The SOSCIP GPU cluster uses [https://slurm.schedmd.com/ SLURM] as its job scheduler, and jobs are scheduled by node, i.e. 20 cores and 4 GPUs at a time. Jobs are submitted from the development node '''<tt>sgc01</tt>'''. The maximum walltime per job is 12 hours (except in the 'long' queue, see below), with up to 8 nodes per job.

<pre>
$ sbatch myjob.script
</pre>

Where <tt>myjob.script</tt> is:

<pre>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=20  # MPI tasks (needed for srun)
#SBATCH --time=00:10:00  # H:M:S
#SBATCH --gres=gpu:4    # Ask for 4 GPUs per node

cd $SLURM_SUBMIT_DIR

hostname
nvidia-smi
</pre>
  
More information about the <tt>sbatch</tt> command can be found [https://slurm.schedmd.com/sbatch.html here].
  
You can query job information using

<pre>
squeue
</pre>

To see only your own jobs, run

<pre>
squeue -u <userid>
</pre>
  
Once your job is running, SLURM creates a file, usually named <tt>slurm-<jobid>.out</tt>, in the directory from which you issued the <tt>sbatch</tt> command. It contains the console output from your job. You can monitor the output of your job using the <tt>tail -f <file></tt> command.

To cancel a job, use

<pre>
scancel $JOBID
</pre>
  
=== Longer jobs ===

If your job takes more than 12 hours, the <tt>sbatch</tt> command will not let you submit it. There is, however, a way to run jobs of up to 24 hours, by specifying "-p long" as an option (i.e., add <tt>#SBATCH -p long</tt> to your job script). The priority of such jobs may be throttled in the future if we see that the 'long' queue is having a negative effect on turnover time in the queue.
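
For example, the header of a long-queue job might look like this (a sketch based on the script above; adjust the resources to your needs):

<pre>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=20
#SBATCH --time=24:00:00  # up to 24 hours is allowed in the 'long' queue
#SBATCH --gres=gpu:4
#SBATCH -p long          # submit to the 'long' partition
</pre>
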
=== Interactive ===

For an interactive session, use

<pre>
salloc --gres=gpu:4
</pre>

After executing this command, you may have to wait in the queue until a system is available.

More information about the <tt>salloc</tt> command can be found [https://slurm.schedmd.com/salloc.html here].

=== Automatic Re-submission and Job Dependencies ===

Commonly you may have a job that you know will take longer to run than what is permissible in the queue. As long as your program has checkpoint/restart capability, you can have one job automatically submit the next. In the following example it is assumed that the program finishes before the requested time limit and then resubmits itself by logging into the development node. Job dependencies and a maximum number of job re-submissions are used to ensure sequential operation.

<pre>
#!/bin/bash

#SBATCH --nodes=1
#SBATCH --ntasks=20  # MPI tasks (needed for srun)
#SBATCH --time=00:10:00  # H:M:S
#SBATCH --gres=gpu:4    # Ask for 4 GPUs per node

cd $SLURM_SUBMIT_DIR

: ${job_number:="1"}          # set job_number to 1 if it is undefined
job_number_max=3

echo "hi from ${SLURM_JOB_ID}"

# RUN JOB HERE

# SUBMIT NEXT JOB
if [[ ${job_number} -lt ${job_number_max} ]]
then
  (( job_number++ ))
  next_jobid=$(ssh sgc01-ib0 "cd $SLURM_SUBMIT_DIR; /opt/slurm/bin/sbatch --export=job_number=${job_number} -d afterok:${SLURM_JOB_ID} thisscript.sh | awk '{print \$4}'")
  echo "submitted ${next_jobid}"
fi

sleep 15

echo "${SLURM_JOB_ID} done"
</pre>
=== Packing single-GPU jobs within one SLURM job submission ===

Jobs are scheduled by node (4 GPUs) on the SOSCIP GPU cluster. If your code cannot utilize all 4 GPUs, you can use the GNU Parallel tool to pack 4 or more single-GPU jobs into one SLURM job. Below is an example of submitting 4 single-GPU Python codes within one job. (When using GNU Parallel for a publication, please cite it as per '''''parallel --citation'''''.)

<pre>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=20  # MPI tasks (needed for srun)
#SBATCH --time=00:10:00  # H:M:S
#SBATCH --gres=gpu:4    # Ask for 4 GPUs per node

module load gnu-parallel/20180422
cd $SLURM_SUBMIT_DIR

parallel -a jobname-params.input --colsep ' ' -j 4 'CUDA_VISIBLE_DEVICES=$(( {%} - 1 )) numactl -N $(( ({%} - 1) / 2 )) python {1} {2} {3} &> jobname-{#}.out'
</pre>

The <tt>jobname-params.input</tt> file contains:

<pre>
code-1.py --param1=a --param2=b
code-2.py --param1=c --param2=d
code-3.py --param1=e --param2=f
code-4.py --param1=g --param2=h
</pre>

* In the above example, GNU Parallel reads the '''jobname-params.input''' file and separates the parameters. Each row in the input file has to contain exactly 3 parameters to '''python'''; code-N.py is also counted as a parameter. You can change the number of parameters in the '''parallel''' command ({1} {2} {3} ...).
* The '''"-j 4"''' flag limits the maximum number of simultaneous jobs to 4. You can have more rows in the input file, but GNU Parallel will only execute a maximum of 4 at the same time.
* '''"CUDA_VISIBLE_DEVICES=$(( {%} - 1 ))"''' assigns one GPU to each job. '''"numactl -N $(( ({%} - 1) / 2 ))"''' binds 2 jobs to CPU socket 0 and the other 2 jobs to socket 1. {%} is the job slot, which translates to 1, 2, 3 or 4 in this case.
* Outputs will be jobname-1.out, jobname-2.out, jobname-3.out, jobname-4.out, ... {#} is the job number, which translates to the row number in the input file.
 
== Software Installed ==

=== IBM PowerAI ===

The PowerAI platform contains popular open-source machine learning frameworks such as '''Caffe, TensorFlow, and Torch'''. Run the <tt>module avail</tt> command for a complete listing. More information is available at this link: https://developer.ibm.com/linuxonpower/deep-learning-powerai/releases/. Release 4.0 is currently installed.
 
=== GNU Compilers ===

The system default compiler is GCC 5.4.0. More recent versions of the GNU Compiler Collection (C/C++/Fortran) are provided by the IBM Advance Toolchain, with enhancements for the POWER8 CPU. To load a newer Advance Toolchain version, use:

Advance Toolchain V10.0
<pre>
module load gcc/6.4.1
</pre>

Advance Toolchain V11.0
<pre>
module load gcc/7.3.1
</pre>

More information about the IBM Advance Toolchain can be found here: [https://developer.ibm.com/linuxonpower/advance-toolchain/ https://developer.ibm.com/linuxonpower/advance-toolchain/]

=== IBM XL Compilers ===

To load the native IBM xlc/xlc++ and xlf (Fortran) compilers, run

<pre>
module load xlc/13.1.5
module load xlf/15.1.5
</pre>

The IBM XL Compilers are enabled for use with NVIDIA GPUs, including support for OpenMP 4.5 GPU offloading and integration with NVIDIA's nvcc command to compile host-side code for the POWER8 CPU.
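
For instance, an OpenMP 4.5 program that offloads a <tt>#pragma omp target</tt> region to the GPU would typically be compiled with the thread-safe XL invocations and the offload flag (a sketch; the source file names are placeholders):

<pre>
# C/C++ source containing '#pragma omp target' regions
xlc_r -qsmp=omp -qoffload mycode.c -o mycode

# the same flags apply to Fortran with xlf_r
xlf_r -qsmp=omp -qoffload mycode.f90 -o mycode_f
</pre>
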
  
Information about the IBM XL Compilers can be found at the following links:

[https://www.ibm.com/support/knowledgecenter/SSXVZZ_13.1.5/com.ibm.compilers.linux.doc/welcome.html IBM XL C/C++]

[https://www.ibm.com/support/knowledgecenter/SSAT4T_15.1.5/com.ibm.compilers.linux.doc/welcome.html IBM XL Fortran]

=== NVIDIA GPU Driver ===

The current NVIDIA driver version is 396.26.

=== CUDA ===

The currently installed CUDA Toolkit versions are 8.0, 9.0, 9.1 and 9.2.
  
 
<pre>
module load cuda/8.0
or
module load cuda/9.0
or
module load cuda/9.1
or
module load cuda/9.2
</pre>

The CUDA driver is installed locally; the CUDA Toolkits, however, are installed in:

<pre>
/usr/local/cuda-8.0
/usr/local/cuda-9.0
/usr/local/cuda-9.1
/usr/local/cuda-9.2
</pre>

Note that the <tt>/usr/local/cuda</tt> directory is linked to the <tt>/usr/local/cuda-9.2</tt> directory.

Documentation and API reference information for the CUDA Toolkit can be found here: [http://docs.nvidia.com/cuda/index.html http://docs.nvidia.com/cuda/index.html]
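
To compile CUDA code for the P100s (Compute Capability 6.0), a typical invocation looks like the following (a sketch; the source file name is a placeholder):

<pre>
module load cuda/9.2

# target the Pascal P100 (compute capability 6.0)
nvcc -arch=sm_60 -O3 mykernel.cu -o mykernel
</pre>
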
=== OpenMPI ===

Currently, OpenMPI has been set up on the 14 nodes connected over EDR InfiniBand.

<pre>
$ module load openmpi/2.1.1-gcc-5.4.0
$ module load openmpi/2.1.1-XL-13_15.1.5
</pre>
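
A sketch of typical usage, assuming the GCC build of OpenMPI (the source file name and launch command are illustrative; inside a batch job, <tt>srun</tt> picks up the task count from the <tt>#SBATCH --ntasks</tt> setting):

<pre>
module load openmpi/2.1.1-gcc-5.4.0

# compile with the MPI wrapper compiler
mpicc -O2 hello_mpi.c -o hello_mpi

# launch inside a SLURM job or salloc session
srun ./hello_mpi
</pre>
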
== Other Software ==

Other software packages can be installed on the SOSCIP GPU Platform. It is best to install new software in your own home directory, which gives you control of the software (e.g. the exact version, configuration, installed sub-packages, etc.).

The following subsections give instructions for installing several common software packages.

=== Anaconda (Python) ===

Anaconda is a popular distribution of the Python programming language. It contains several common Python libraries such as SciPy and NumPy as pre-built packages, which eases installation.

Anaconda can be downloaded from here: [https://www.anaconda.com/download/#linux https://www.anaconda.com/download/#linux]

NOTE: Be sure to download the '''Power8''' installer.

TIP: If you plan to use TensorFlow within Anaconda, download the Python 2.7 version of Anaconda.
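
Installation then amounts to running the downloaded installer from your home directory; a sketch (the installer file name is a placeholder and depends on the release you downloaded):

<pre>
# run the Power8 (ppc64le) installer; the exact file name will differ
bash Anaconda2-<version>-Linux-ppc64le.sh

# log out and back in (or source ~/.bashrc) so the new python/conda are on your PATH
</pre>
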
=== cuDNN ===

The NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN accelerates widely used deep learning frameworks, including Caffe2, MATLAB, Microsoft Cognitive Toolkit, TensorFlow, Theano, and PyTorch. If a specific version of cuDNN is needed, you can download it from https://developer.nvidia.com/cudnn and choose '''"cuDNN [VERSION] Library for Linux (Power8/Power9)"'''.

The default cuDNN installed on the system is version 6 with CUDA 8 from IBM PowerAI. More recent cuDNN versions are installed as modules:
<pre>
cudnn/cuda9.0/7.0.5
</pre>
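
To use the newer cuDNN, load it together with the matching CUDA toolkit (this is the same pairing used in the TensorFlow job script below):

<pre>
module load cuda/9.0 cudnn/cuda9.0/7.0.5
</pre>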
  
=== Keras ===

Keras ([https://keras.io/ https://keras.io/]) is a popular high-level deep learning software development framework. It runs on top of other deep-learning frameworks such as TensorFlow.

*The easiest way to install Keras is to install Anaconda first and then install Keras using the pip command. Keras uses TensorFlow underneath to run neural network models. Before running code that uses Keras, be sure to load the PowerAI TensorFlow module and the cuda module.

*Keras can also be installed into a Python virtual environment using '''pip'''. You can install the optimized SciPy (built with OpenBLAS) before installing Keras.

In a virtual environment (Python 2.7 as an example):

<pre>
pip install /scinet/sgc/Libraries/scipy/scipy-1.1.0-cp27-cp27mu-linux_ppc64le.whl
pip install keras
</pre>
  
=== NumPy/SciPy (built with OpenBLAS) ===

Optimized NumPy and SciPy builds are provided as Python wheels located in '''/scinet/sgc/Libraries/numpy''' and '''/scinet/sgc/Libraries/scipy''' and can be installed with '''pip'''. Please uninstall the old numpy/scipy before installing the new ones.
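
For example, inside an activated Python 2.7 virtual environment (the wheel file names below are the Python 2.7 builds referenced elsewhere on this page):

<pre>
pip uninstall -y numpy scipy
pip install /scinet/sgc/Libraries/numpy/numpy-1.14.3-cp27-cp27mu-linux_ppc64le.whl
pip install /scinet/sgc/Libraries/scipy/scipy-1.1.0-cp27-cp27mu-linux_ppc64le.whl
</pre>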
 
=== PyTorch ===

PyTorch is the Python implementation of the Torch framework for deep learning.

It is suggested that you use PyTorch within Anaconda.

There is currently no prebuilt PyTorch package for POWER8-based systems, so you will need to compile it from source.

Obtain the source code from here: [http://pytorch.org/ http://pytorch.org/]

Before building PyTorch, make sure to load CUDA by running

<pre>
module load cuda/8.0
</pre>

NOTE: Do not have the gcc modules loaded when building PyTorch. Use the default version of gcc (currently v5.4.0) included with the operating system. The build will fail with later versions of gcc.
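
A typical from-source build then follows PyTorch's generic instructions; a minimal sketch (the repository URL and steps are the upstream defaults, not SOSCIP-specific):

<pre>
# fetch the sources with all submodules
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch

# build and install into the currently active (Anaconda) Python
python setup.py install
</pre>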
  
=== TensorFlow (new versions and Python 3) ===

The TensorFlow included in PowerAI may not be the most recent version. Newer versions of TensorFlow are provided as prebuilt Python wheels that users can install with '''pip''' in user space. The custom Python wheels are stored in '''/scinet/sgc/Applications/TensorFlow_wheels'''. It is highly recommended to install custom TensorFlow wheels into a Python virtual environment.

==== Installing with Python 2.7 ====
<div class="toccolours mw-collapsible mw-collapsed" style="overflow:auto;">
* Create a virtual environment '''tensorflow-1.8-py2''' with access to the system-wide packages:
<pre>
virtualenv --python=python2.7 --system-site-packages tensorflow-1.8-py2
</pre>
* Activate the virtual environment:
<pre>
source tensorflow-1.8-py2/bin/activate
</pre>
* Install TensorFlow into the virtual environment (a custom NumPy built with the OpenBLAS library can be installed first):
<pre>
pip install --upgrade --force-reinstall /scinet/sgc/Libraries/numpy/numpy-1.14.3-cp27-cp27mu-linux_ppc64le.whl
pip install /scinet/sgc/Applications/TensorFlow_wheels/tensorflow-1.8.0-cp27-cp27mu-linux_ppc64le.whl
</pre>
</div>

==== Installing with Python 3.5 ====
<div class="toccolours mw-collapsible mw-collapsed" style="overflow:auto;">
* Create a virtual environment '''tensorflow-1.8-py3''' with access to the system-wide packages:
<pre>
virtualenv --python=python3.5 --system-site-packages tensorflow-1.8-py3
</pre>
* Activate the virtual environment:
<pre>
source tensorflow-1.8-py3/bin/activate
</pre>
* Install TensorFlow into the virtual environment (a custom NumPy built with the OpenBLAS library can be installed first):
<pre>
pip3 install --upgrade --force-reinstall /scinet/sgc/Libraries/numpy/numpy-1.14.3-cp35-cp35m-linux_ppc64le.whl
pip3 install /scinet/sgc/Applications/TensorFlow_wheels/tensorflow-1.8.0-cp35-cp35m-linux_ppc64le.whl
</pre>
</div>
  
==== Submitting jobs ====
<div class="toccolours mw-collapsible mw-collapsed" style="overflow:auto;">
The <tt>myjob.script</tt> file shown above needs to be modified to run the custom TensorFlow: the '''cuda/9.0''' and '''cudnn/cuda9.0/7.0.5''' modules need to be loaded, and the virtual environment needs to be activated.

<pre>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=20  # MPI tasks (needed for srun)
#SBATCH --time=00:10:00  # H:M:S
#SBATCH --gres=gpu:4    # Ask for 4 GPUs per node

module purge
module load cuda/9.0 cudnn/cuda9.0/7.0.5
source tensorflow-1.8-py2/bin/activate  # change this to the location where the virtual environment was created

cd $SLURM_SUBMIT_DIR
python code.py
</pre>
</div>

== LINKS ==

[https://www.olcf.ornl.gov/kb_articles/summitdev-quickstart/#System_Overview Summit Dev System at ORNL]

== DOCUMENTATION ==

# GPU Cluster Introduction: [[Media:GPU_Training_01.pdf‎|SOSCIP GPU Platform]]

-->
