Co-array Fortran on the GPC
Version 12 of the Intel Fortran compiler supports co-arrays, and is installed on the GPC. This page will briefly sketch how to compile and run Co-array Fortran programs.
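As a running example for the commands below, consider a minimal Co-array Fortran program (the file name hello.f90 is just an assumption for illustration):
<source lang="fortran">
! hello.f90 - minimal Co-array Fortran example (hypothetical file)
program hello
  implicit none
  write(*,'(a,i0,a,i0)') 'Hello from image ', this_image(), &
                         ' of ', num_images()
end program hello
</source>
Each image runs its own copy of the program; this_image() and num_images() are Co-array Fortran intrinsics.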
Loading necessary modules
First, you need to load the module for version 12 of the Intel compilers, as well as Intel MPI.
<source lang="bash"> module load intel/intel-v12.0.0.084 intelmpi </source>
(You may put this in your .bashrc file.)
Note: For multiple-node usage, it makes sense that the Intel MPI module has to be loaded, since Intel's implementation of Co-array Fortran is built on MPI. However, the Intel MPI module is needed even for single-node usage, simply in order to link successfully.
The way you compile, link and run a Co-array Fortran program differs depending on whether you will run the program only on a single node (with 8 cores) or on several nodes.
Single node usage
Compilation
<source lang="bash"> ifort -O3 -xHost -coarray=shared -c [sourcefile] -o [objectfile] </source>
Linking
<source lang="bash"> ifort -coarray=shared [objectfile] -o [executable] </source>
Running
To run this co-array program on one node with 16 images (an "image" is Co-array Fortran's counterpart of what OpenMP calls a thread and MPI calls a process), you simply put <source lang="bash"> ./[executable] </source> in your job submission script. This gives 16 images because HyperThreading is enabled on the GPC nodes, which makes it appear as if there are 16 computing units on a node, even though physically there are only 8.
To control the number of images, you can set the FOR_COARRAY_NUM_IMAGES environment variable before running the executable:
<source lang="bash">
export FOR_COARRAY_NUM_IMAGES=2
./[executable]
</source>
This can be useful for testing.
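For instance, the hypothetical hello program above, run with two images, would print something like the following (the order of the lines may vary between runs):
<source lang="bash">
export FOR_COARRAY_NUM_IMAGES=2
./hello
# Hello from image 1 of 2
# Hello from image 2 of 2
</source>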
An example submission script would look as follows:
<source lang="bash">
- !/bin/bash
- MOAB/Torque submission script for SciNet GPC (OpenMP)
- PBS -l nodes=1:ppn=8,walltime=1:00:00
- PBS -N test
- DIRECTORY TO RUN - $PBS_O_WORKDIR is directory job was submitted from
cd $PBS_O_WORKDIR
export FOR_COARRAY_NUM_IMAGES=16 ./[executable] </source>
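The script is then submitted with qsub in the usual way (the file name coarray_job.sh is just an example):
<source lang="bash">
qsub coarray_job.sh
</source>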
Multiple-node usage
Compilation
<source lang="bash"> ifort -O3 -xHost -coarray=distributed -c [sourcefile] -o [objectfile] </source>
Linking
<source lang="bash"> ifort -coarray=distributed [objectfile] -o [executable] </source>
Running
Because distributed Co-array Fortran is based on MPI, the images have to be launched as MPI processes on the different nodes. The easiest way is to set the number of images using FOR_COARRAY_NUM_IMAGES and then to use mpirun without an -np parameter:
<source lang="bash">
export FOR_COARRAY_NUM_IMAGES=32
mpirun ./[executable]
</source>
Note that the total number of images is set explicitly, and should not be given to mpirun. You can still pass other parameters to mpirun, though, such as -env I_MPI_FABRICS shm:tcp if you're running on ethernet and want to suppress the warning messages saying that no InfiniBand interconnect was found.
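Between nodes, data moves whenever one image references another image's co-array. As a minimal sketch of such cross-image communication (a hypothetical example, not specific to the GPC), each image could store a value that image 1 then sums up:
<source lang="fortran">
! gather.f90 - hypothetical sketch of cross-image communication
program gather
  implicit none
  integer :: val[*]          ! co-array: one copy per image
  integer :: i, total
  val = this_image()         ! each image stores its own index
  sync all                   ! ensure all images have written val
  if (this_image() == 1) then
     total = 0
     do i = 1, num_images()
        total = total + val[i]   ! remote read from image i
     end do
     write(*,'(a,i0)') 'sum of image indices: ', total
  end if
end program gather
</source>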
An example submission script would look as follows:
<source lang="bash">
- !/bin/bash
- MOAB/Torque submission script for SciNet GPC (ethernet)
- PBS -l nodes=4:ppn=8,walltime=1:00:00
- PBS -N test
- DIRECTORY TO RUN - $PBS_O_WORKDIR is directory job was submitted from
cd $PBS_O_WORKDIR
- EXECUTION COMMAND; FOR_COARRAY_NUM_IMAGES = nodes*ppn
export FOR_COARRAY_NUM_IMAGES=32 mpirun -env I_MPI_FABRICS shm:tcp ./[executable] </source>
On InfiniBand, you should replace tcp by dapl in the last line (see MPI).
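That is, the execution line of the script above would become:
<source lang="bash">
mpirun -env I_MPI_FABRICS shm:dapl ./[executable]
</source>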