Phi

From oldwiki.scinet.utoronto.ca
Revision as of 11:26, 2 May 2013

Intel Xeon Phi / NVIDIA Tesla K20
[Image: Tesla S2070]
Installed: April 2013
Operating System: Linux CentOS 6.4
Number of Nodes: 1
Interconnect: DDR InfiniBand
RAM/Node: 32 GB
Cores/Node: 8, with Xeon Phi & K20
Login/Devel Node: arc09 (from arc01)
Vendor Compilers: nvcc, pgcc, icc, gcc
Queue Submission: none

This is a single test/devel node, part of the Accelerator Research Cluster, for investigating new accelerator technologies. It consists of a single x86_64 node with one 8-core Intel Sandy Bridge Xeon E5-2650 2.0 GHz CPU and 32 GB of RAM. It has a single NVIDIA Tesla K20 GPU (Kepler, CUDA Compute Capability 3.5) with 2496 CUDA cores and 5 GB of RAM, as well as a single Intel Xeon Phi 3120A with 57 cores at 1.1 GHz and 6 GB of RAM. The node is connected to the rest of the cluster with DDR InfiniBand and mounts the regular SciNet GPFS filesystems.

Login

First log in via ssh with your SciNet account at login.scinet.utoronto.ca; from there proceed to arc01, the GPU development node, and then to arc09.
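The hop sequence above can be sketched as follows (`myuser` is a hypothetical placeholder for your SciNet account name):

```shell
# Hop to the SciNet login node, then the GPU devel node, then the Phi/K20 node.
# "myuser" is a placeholder for your SciNet account name.
ssh myuser@login.scinet.utoronto.ca
ssh arc01   # GPU development node
ssh arc09   # Xeon Phi / K20 test node
```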

Access to this machine is not enabled by default, so please email support@scinet.utoronto.ca for access.

Devel/Compute

As this is a single node there is no queue, and users are expected to use it in a "friendly" manner. This system is not set up for production usage; it is primarily for investigating new technologies, so please keep your run times short.

Software

The same software installed on the GPC is available on ARC using the same modules framework. See here for full details.

NVIDIA K20

See the ARC wiki page for details of the available CUDA and OpenCL compilers and modules. To use all of the K20's features, a minimum of CUDA 5.0 is required.

module load cuda/5.0
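As a sketch, a CUDA source file could then be compiled for the K20's Kepler architecture like this (the file name `hello.cu` is a hypothetical placeholder):

```shell
module load cuda/5.0
# Target the K20 (Kepler, sm_35) explicitly; hello.cu is a placeholder file name.
nvcc -arch=sm_35 -O2 -o hello hello.cu
```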

Driver Version

The current NVIDIA driver version for the K20 is 310.44.

Xeon Phi

Compilers

The Xeon Phi uses the standard Intel compilers; however, it requires at least version 13.0:

module load intel/13.1.1


Tools

The Intel Cluster Tools, such as VTune Amplifier and Inspector, are available for the Xeon Phi by loading the following module.

module load inteltools
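For example, VTune Amplifier's command-line collector could be used to profile a host-side binary. This is a sketch only: `./myapp` is a hypothetical binary, and the available collectors may differ by version (check `amplxe-cl -help`).

```shell
module load intel/13.1.1 inteltools
# Collect a basic hotspots profile; ./myapp is a placeholder binary.
amplxe-cl -collect hotspots -result-dir r000hs -- ./myapp
# Summarize the collected result on the command line.
amplxe-cl -report summary -result-dir r000hs
```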


Direct Access

The Xeon Phi can be accessed directly from the host node by

ssh mic0

Shared Filesystem

The host node arc09 mounts the standard SciNet filesystems, i.e. $HOME and $SCRATCH; however, to share files between the host and the Xeon Phi, use /localscratch/$HOME.
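Putting the pieces together, a native Xeon Phi build could be staged through the shared directory and run over ssh. This is a sketch under assumptions: `hello.c` is a hypothetical source file, `icc -mmic` builds a binary for the coprocessor itself, and /localscratch/$HOME is assumed to be visible at the same path on the card.

```shell
module load intel/13.1.1
# Build a native Xeon Phi (MIC) binary; hello.c is a placeholder source file.
icc -mmic -O2 -o hello.mic hello.c
# Stage the binary where both the host and the coprocessor can see it ...
cp hello.mic /localscratch/$HOME/
# ... and run it directly on the card.
ssh mic0 /localscratch/$HOME/hello.mic
```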