Knights Landing


WARNING: SciNet is in the process of replacing this wiki with a new documentation site. For current information, please go to https://docs.scinet.utoronto.ca

Intel Xeon Phi (Knights Landing)
[Image: KNL-DAP-Adams-Pass.jpg]
Installed:          August 2016
Operating System:   Linux CentOS 7.2
Number of Nodes:    4
Interconnect:       QDR InfiniBand
RAM/Node:           96GB DDR4 + 16GB MCDRAM
Cores/Node:         64
Login/Devel Node:   knl01
Vendor Compilers:   icc, ifort
Queue Submission:   none

This is a development/test system of four x86_64 self-hosted 2nd Generation Intel Xeon Phi (Knights Landing, KNL) nodes, also known as an Intel "Ninja" platform. Each node has one 64-core Intel(R) Xeon Phi(TM) CPU 7210 @ 1.30GHz with 4 hardware threads per core. These systems are not add-on accelerators, but act as full-fledged processors running a regular Linux operating system. They are configured with 96GB of DDR4 system RAM along with 16GB of very fast MCDRAM; see here for details. The nodes are connected to the rest of the clusters with QDR InfiniBand and share the regular SciNet GPFS filesystems.
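As a quick sanity check of the core and thread layout on a node, lscpu (part of the standard CentOS 7 install) reports the 64 cores and 256 hardware threads; the grep pattern below is only illustrative:

user@knl01$ lscpu | grep -E 'CPU\(s\)|Thread|Core|Model name'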

Login

First log in via ssh with your SciNet account at login.scinet.utoronto.ca; from there you can proceed to knl01, knl02, knl03, or knl04.
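A typical login sequence therefore looks like this (replace USER with your SciNet user name):

user@mybox$ ssh USER@login.scinet.utoronto.ca
user@login$ ssh knl01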

KNL Operational Modes

The four nodes all have identical hardware; however, there are multiple options that control how the MCDRAM High Bandwidth Memory (HBM) is accessed. Mode changes are not dynamic and require the node to be rebooted to take effect.

Clustering

Currently all KNL nodes have the Cluster Mode configured to "Quadrant". See this article for more details about the clustering options that control how memory is accessed on the KNL.

Memory

Two nodes, knl01 and knl02, have the MCDRAM configured in "Cache" mode, and the other two, knl03 and knl04, are configured in the "Flat" memory mode. See this article for more details about the MCDRAM memory modes.

When you first compile and port your code, use the Cache-mode nodes. If you wish to optimize memory performance by allocating directly from the HBM with the memkind library or with numactl, use the Flat-mode nodes, for example:

user@knl03$ numactl --membind 1 ./mycode
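On the Flat-mode nodes the 16GB of MCDRAM normally appears as a separate NUMA node (node 1 in the example above); numactl --hardware lists the nodes and their sizes, and --preferred binds allocations to the MCDRAM while falling back to DDR4 if it fills up. A minimal sketch, assuming the default NUMA numbering:

user@knl03$ numactl --hardware
user@knl03$ numactl --preferred 1 ./mycode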

Queue

Currently there is no queue; please be considerate of other users.

Software

Software is available using the standard modules framework used on other SciNet systems; however, it is separate from the GPC modules, as the KNL nodes run a newer CentOS 7 based operating system.
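To see which modules are installed on the KNL nodes, use the standard modules command:

user@knl01$ module avail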

Compilers

The Xeon Phi uses the standard Intel compilers.

module load intel/16.0.3
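To generate code for the KNL's AVX-512 instruction set, compile with the -xMIC-AVX512 flag; a minimal sketch, where mycode.c is just a placeholder source file:

user@knl01$ icc -O3 -xMIC-AVX512 -qopenmp mycode.c -o mycode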

MPI

Intel MPI is currently the default MPI.

module load intelmpi/5.1.3.219
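A sketch of compiling and running an MPI code on a single KNL node with the Intel MPI wrappers (mympicode.c is a placeholder source file; mpiicc and mpirun are provided by the intelmpi module):

user@knl01$ mpiicc -O3 -xMIC-AVX512 mympicode.c -o mympicode
user@knl01$ mpirun -np 64 ./mympicode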

References