Co-array Fortran on the GPC

Version 12 of the Intel Fortran compiler supports co-arrays, and is installed on the [[GPC Quickstart | GPC]]. This page will briefly sketch how to compile and run Co-array Fortran programs.

==Loading necessary modules==

First, you need to load the module for version 12 of the Intel compilers, as well as Intel MPI.

   module load intel/intel-v12.0.0.084 intelmpi

(You may put this in your <tt>.bashrc</tt> file.)

Note: For multiple-node usage, it makes sense to load the Intel MPI module, since Intel's implementation of Co-array Fortran uses MPI. However, the Intel MPI module is needed even for single-node usage, simply in order to link successfully.

Compiling, linking, and running Co-array Fortran programs differs depending on whether you will run the program only on a single node (with 8 cores) or on several nodes.
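In the examples below, <tt>[sourcefile]</tt> and <tt>[executable]</tt> are placeholders for your own file names. As a minimal test case (a sketch for illustration, not part of the original instructions), a Co-array Fortran source file could look like this:
<source lang="fortran">
! hello.f90 (hypothetical file name): every image prints its index.
program hello
  implicit none
  ! this_image() and num_images() are Fortran 2008 intrinsics.
  write(*,*) 'Hello from image', this_image(), 'of', num_images()
end program hello
</source>
Compiled and run as described below, each image produces one line of output.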

==Single node usage==

===Compilation===

<source lang="bash"> ifort -O3 -xHost -coarray=shared -c [sourcefile] -o [objectfile] </source>

===Linking===

<source lang="bash"> ifort -coarray=shared [objectfile] -o [executable] </source>

===Running===

To run this co-array program on one node with 16 images (an image is Co-array Fortran's analogue of what OpenMP calls a thread and MPI calls a process), you simply put
<source lang="bash">
./[executable]
</source>
in your job submission script. The reason that this gives 16 images is that [[GPC_Quickstart#HyperThreading | HyperThreading]] is enabled on the GPC nodes, which makes it appear to the system as if there are 16 computing units on a node, even though physically there are only 8.
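Images communicate only through co-array variables, declared with a co-dimension in square brackets. As a hedged illustration (again a sketch, not from the original page), image 1 could collect a value from every other image like this:
<source lang="fortran">
program coarray_sum
  implicit none
  integer :: x[*]   ! a co-array: one copy of x exists on every image
  integer :: i, total

  x = this_image()  ! each image stores its own index in its copy of x
  sync all          ! barrier: ensure all images have written x

  if (this_image() == 1) then
    total = 0
    do i = 1, num_images()
      total = total + x[i]   ! image 1 reads x from image i
    end do
    write(*,*) 'Sum of image indices:', total
  end if
end program coarray_sum
</source>
The <tt>sync all</tt> barrier is needed so that image 1 does not read the other images' copies of <tt>x</tt> before they have been written.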

To control the number of images, you can change the <tt>FOR_COARRAY_NUM_IMAGES</tt> environment variable:
<source lang="bash">
export FOR_COARRAY_NUM_IMAGES=2
./[executable]
</source>
This can be useful for testing.

An example submission script would look as follows:

<source lang="bash">

  1. !/bin/bash
  2. MOAB/Torque submission script for SciNet GPC (OpenMP)
  3. PBS -l nodes=1:ppn=8,walltime=1:00:00
  4. PBS -N test
  1. DIRECTORY TO RUN - $PBS_O_WORKDIR is directory job was submitted from

cd $PBS_O_WORKDIR

export FOR_COARRAY_NUM_IMAGES=16 ./[executable] </source>

==Multiple node usage==

===Compilation===

<source lang="bash"> ifort -O3 -xHost -coarray=distributed -c [sourcefile] -o [objectfile] </source>

===Linking===

<source lang="bash"> ifort -coarray=distributed [objectfile] -o [executable] </source>

===Running===

Because distributed Co-array Fortran is based on MPI, we need to launch the MPI processes on the different nodes. The easiest way is to set the number of images using <tt>FOR_COARRAY_NUM_IMAGES</tt> and then to use mpirun without an <tt>-np</tt> parameter:
<source lang="bash">
export FOR_COARRAY_NUM_IMAGES=32
mpirun ./[executable]
</source>
Note that the total number of images is set explicitly, and should not be given to mpirun. You can still pass other parameters to mpirun, though, such as <tt>-env I_MPI_FABRICS shm:tcp</tt> if you're running on ethernet and want to suppress the warning messages saying that [[FAQ#Another_transport_will_be_used_instead | another transport will be used instead]].

An example submission script would look as follows:

<source lang="bash">

  1. !/bin/bash
  2. MOAB/Torque submission script for SciNet GPC (ethernet)
  3. PBS -l nodes=4:ppn=8,walltime=1:00:00
  4. PBS -N test
  1. DIRECTORY TO RUN - $PBS_O_WORKDIR is directory job was submitted from

cd $PBS_O_WORKDIR

  1. EXECUTION COMMAND; FOR_COARRAY_NUM_IMAGES = nodes*ppn

export FOR_COARRAY_NUM_IMAGES=32 mpirun -env I_MPI_FABRICS shm:tcp ./[executable] </source>

On Infiniband, in the last line you should replace <tt>tcp</tt> by <tt>dapl</tt> (see [[GPC MPI Versions]]).