From oldwiki.scinet.utoronto.ca
Revision as of 15:21, 21 August 2012

Blue Gene/Q (BGQ)

Installed:          August 2012
Operating System:   RH 6.3, CNK (Linux)
Number of Nodes:    2048 (32,768 cores), 512 (8,192 cores)
Interconnect:       5D Torus (jobs), QDR InfiniBand (I/O)
RAM/Node:           16 GB
Cores/Node:         16 (64 threads)
Login/Devel Nodes:  bgq01, bgq02
Vendor Compilers:   bgxlc, bgxlf
Queue Submission:   LoadLeveler

Specifications

BGQ is an extremely powerful and energy-efficient third-generation IBM supercomputer built around a system-on-a-chip compute node with a 16-core 1.6 GHz POWER-based CPU and 16 GB of RAM, running a very lightweight OS called CNK (Compute Node Kernel). The nodes are bundled in groups of 32, and 16 of these groups make up a midplane, with two midplanes per rack. The compute nodes are all connected together using a custom 5D interconnect. Each midplane has 8 POWER7 I/O nodes that run a full Red Hat Linux OS, manage the compute nodes, and mount the GPFS filesystem.

Jobs

BGQ job size is typically determined in midplanes (512 nodes, or 8,192 cores); however, sub-blocks can be used to further subdivide midplanes, with a minimum of one I/O node per block. In SciNet's configuration (8 I/O nodes per midplane), this makes 64 nodes (1,024 cores) the smallest job size.
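The arithmetic behind the smallest job size can be sketched in plain POSIX shell (the variable names are illustrative; the numbers come from the configuration described above):

```shell
# Block-size arithmetic for SciNet's BGQ configuration.
CORES_PER_NODE=16
NODES_PER_MIDPLANE=512
IO_NODES_PER_MIDPLANE=8

# A block needs at least one I/O node, so the smallest block is one
# I/O node's share of a midplane:
MIN_NODES=$((NODES_PER_MIDPLANE / IO_NODES_PER_MIDPLANE))
MIN_CORES=$((MIN_NODES * CORES_PER_NODE))
echo "smallest block: $MIN_NODES nodes, $MIN_CORES cores"
```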


Compile

The MPI compiler wrappers live in the BGQ driver tree:

/bgsys/drivers/V1R1M1/ppc64/comm/xl/bin/mpich2version
/bgsys/drivers/V1R1M1/ppc64/comm/xl/bin/mpixlc
/bgsys/drivers/V1R1M1/ppc64/comm/xl/bin/mpixf90
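A typical compile with these wrappers might look like the following sketch (hello.c and hello.f90 are assumed example source files, not part of this page; the wrapper paths are those listed above):

```shell
# Hypothetical compile of an MPI C program with the XL C wrapper:
/bgsys/drivers/V1R1M1/ppc64/comm/xl/bin/mpixlc -O3 -o hello hello.c

# Fortran analogue with the Fortran 90 wrapper:
/bgsys/drivers/V1R1M1/ppc64/comm/xl/bin/mpixf90 -O3 -o hello_f hello.f90
```

These commands only work on the BGQ development nodes, where the driver tree is mounted.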

Run a Job

Interactive runs use runjob, specifying the block, the number of ranks per node, the working directory, and the executable:

runjob --block R00-M0-N03-32 --ranks-per-node=16 --cwd=/gpfs/DDNgpfs3/xsnorthrup/osu_bgq --exe=/gpfs/DDNgpfs3/xsnorthrup/osu_bgq/osu_mbw_mr
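Production jobs go through LoadLeveler rather than invoking runjob by hand. A minimal script might look like the following sketch; the directive names and values shown (job_type, bg_size, the time limit) are assumptions for illustration, so check the local LoadLeveler documentation before use:

```shell
#!/bin/sh
# Hypothetical LoadLeveler script for a 64-node BGQ sub-block job.
# All "#@" values below are illustrative assumptions.
# @ job_name         = osu_mbw_mr
# @ job_type         = bluegene
# @ bg_size          = 64
# @ wall_clock_limit = 0:30:00
# @ output           = $(job_name).$(jobid).out
# @ error            = $(job_name).$(jobid).err
# @ queue

# LoadLeveler boots the block; runjob then launches the executable on it.
runjob --ranks-per-node=16 --cwd=$PWD --exe=$PWD/osu_mbw_mr
```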


Setup blocks

Blocks are created and managed from the Blue Gene console. In the sequence below, gen_small_block takes the new block's name, its midplane, the number of nodes, and the starting node board:

bg_console

gen_small_block  R00-M0-N03-32 R00-M0 32 N03

allocate R00-M0-N03-32

select_block R00-M0-N03
free_block


I/O

GPFS, mounted on the compute nodes via the POWER7 I/O nodes.



Documentation

BGQ System Administration Guide

BGQ Application Development