Compiling Gromacs

==Available Compilations==

Chris Neale has compiled gromacs on GPC, with assistance from Scott Northrup, and on the power6 cluster with assistance from Ching-Hsing Yu. Users are welcome to use these binary executables, but only at their own peril: compiling and testing your own executable is safer and more reliable.

Gromacs executables:

'''GPC:''' /scratch/cneale/exe/intel/gromacs-4.0.5/exec/bin

'''TCS:''' /scratch/cneale/exe/gromacs-4.0.4_aix/exec/bin
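
If you just want to try the prebuilt GPC binaries, something like the following should work (a minimal sketch; the module load matches the compile scripts below, and mdrun -h is only a smoke test):

<source lang="sh">
# put the prebuilt GROMACS 4.0.5 binaries on your PATH (GPC)
module load intel
export PATH=/scratch/cneale/exe/intel/gromacs-4.0.5/exec/bin:$PATH
mdrun -h    # prints the mdrun help text if the executable runs at all
</source>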
  
==Compiling your own GROMACS executables==

Below you will find scripts for the different compilations that you can follow to make your own binaries.

'''NOTE:''' ''the steps are not listed in order! You must compile fftw before compiling gromacs, and if you are going to use mvapich2-1.4rc1 then you must also compile it before compiling parallel gromacs.''
 
# Compiling serial single precision gromacs on GPC
# Compiling openmpi parallel gromacs on GPC
# Compiling serial gromacs on the power6 (submitted to the queue)
# Compiling parallel gromacs on the power6 (submitted to the queue)
# fftw single precision compilation
# Change to get mvapich2-1.4rc1 to compile gromacs
# Compiling mvapich2-1.4rc1
# Compiling gromacs on GPC using mvapich2-1.4rc1
# Submitting an IB GPC job using openmpi
# Submitting an IB GPC job using mvapich2-1.4rc1
# Submitting a non-IB GPC job using openmpi
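
Since the list above is topical rather than chronological, here is the dependency order restated as a comment block (a summary of the note above, no new steps):

<source lang="sh">
# Build order that satisfies the NOTE above:
#   1. fftw (single precision)        -- required by every gromacs build
#   2. mvapich2-1.4rc1 (optional)     -- only if you want the mvapich2 mdrun;
#                                        apply the one-line source change first
#   3. gromacs                        -- serial, openmpi and/or mvapich2 flavours
</source>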
 
-- [[User:Cneale|cneale]] 18 August 2009
=====Submitting an IB GPC job using openmpi=====
 
 
<source lang="sh">
 
#!/bin/bash
 
#PBS -l nodes=10:ib:ppn=8,walltime=40:00:00,os=centos53computeA
 
#PBS -N 1
 
if [ "$PBS_ENVIRONMENT" != "PBS_INTERACTIVE" ]; then
 
  if [ -n "$PBS_O_WORKDIR" ]; then
 
    cd $PBS_O_WORKDIR
 
  fi
 
fi
 
/scinet/gpc/mpi/openmpi/1.3.2-intel-v11.0-ofed/bin/mpirun -np $(wc -l
 
$PBS_NODEFILE | gawk '{print $1}') -machinefile $PBS_NODEFILE
 
/scratch/cneale/exe/intel/gromacs-4.0.5/exec/bin/mdrun_openmpi -deffnm
 
pagp -nosum -dlb yes -npme 24 -cpt 120
 
## To submit type: qsub this.sh
 
</source>
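
The wc -l | gawk pipeline simply extracts the line count of $PBS_NODEFILE; an equivalent, slightly simpler spelling (a sketch, not the form used in the scripts on this page) is:

<source lang="sh">
# reading the nodefile from stdin makes wc print the count alone
mpirun -np $(wc -l < $PBS_NODEFILE) -machinefile $PBS_NODEFILE mdrun_openmpi -deffnm pagp -nosum -dlb yes -npme 24 -cpt 120
</source>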
 
 
-- [[User:Cneale|cneale]] 18 August 2009
 
=====Submitting an IB GPC job using mvapich2-1.4rc1=====
 
 
Note that mvapich2-1.4rc1 is not configured to fall back to ethernet, so this will not work on the non-IB nodes, even for 8 cores.
 
 
<source lang="sh">
 
#!/bin/bash
 
#PBS -l nodes=4:ib:ppn=8,walltime=30:00:00,os=centos53computeA
 
#PBS -N 1
 
if [ "$PBS_ENVIRONMENT" != "PBS_INTERACTIVE" ]; then
 
  if [ -n "$PBS_O_WORKDIR" ]; then
 
    cd $PBS_O_WORKDIR
 
  fi
 
fi
 
module purge
 
module load mvapich2 intel
 
/scratch/cneale/exe/mvapich2-1.4rc1/bin/mpirun_rsh -np $(wc -l
 
$PBS_NODEFILE | gawk '{print $1}') -hostfile $PBS_NODEFILE
 
/scratch/cneale/exe/intel/gromacs-4.0.5/exec/bin/mdrun_mvapich2
 
-deffnm pagp -nosum -dlb yes -npme 12 -cpt 120
 
## To submit type: qsub this.sh
 
</source>
 
 
-- [[User:Cneale|cneale]] 18 August 2009
 
=====Submitting a non-IB GPC job using openmpi=====
 
 
<source lang="sh">
 
#!/bin/bash
 
#PBS -l nodes=1:compute-eth:ppn=8,walltime=40:00:00,os=centos53computeA
 
#PBS -N 1
 
if [ "$PBS_ENVIRONMENT" != "PBS_INTERACTIVE" ]; then
 
  if [ -n "$PBS_O_WORKDIR" ]; then
 
    cd $PBS_O_WORKDIR
 
  fi
 
fi
 
/scinet/gpc/mpi/openmpi/1.3.2-intel-v11.0-ofed/bin/mpirun
 
-mca btl_sm_num_fifos 7 -np $(wc -l $PBS_NODEFILE | gawk '{print $1}')
 
-mca btl self,sm -machinefile $PBS_NODEFILE
 
/scratch/cneale/exe/intel/gromacs-4.0.5/exec/bin/mdrun_openmpi -deffnm
 
pagp -nosum -dlb yes -npme 24 -cpt 120
 
## To submit type: qsub this.sh
 
</source>
 
 
This is '''VERY IMPORTANT!''' Please read the [https://support.scinet.utoronto.ca/wiki/index.php/User_Tips#Running_single_node_MPI_jobs relevant user tips section] for information that is essential for your single node (up to 8 core) MPI GROMACS jobs.
 
 
-- [[User:Cneale|cneale]] 14 September 2009
 


=====Compiling serial single precision gromacs on GPC=====

<source lang="sh"> cd /scratch/cneale/exe/intel/gromacs-4.0.5 mkdir exec module purge module load intel export FFTW_LOCATION=/scratch/cneale/exe/intel/fftw-3.1.2/exec export GROMACS_LOCATION=/scratch/cneale/exe/intel/gromacs-4.0.5/exec export CPPFLAGS=-I$FFTW_LOCATION/include export LDFLAGS=-L$FFTW_LOCATION/lib ./configure --prefix=$GROMACS_LOCATION --without-motif-includes --without-motif-libraries --without-x --without-xml >output.configure 2>&1 make >output.make 2>&1 make install >output.make_install 2>&1 make distclean </source>

-- [[User:Cneale|cneale]] 18 August 2009

=====Compiling openmpi parallel gromacs on GPC=====

<source lang="sh"> cd /scratch/cneale/exe/intel/gromacs-4.0.5 mkdir exec module purge module load openmpi intel export FFTW_LOCATION=/scratch/cneale/exe/intel/fftw-3.1.2/exec export GROMACS_LOCATION=/scratch/cneale/exe/intel/gromacs-4.0.5/exec export CPPFLAGS="-I$FFTW_LOCATION/include -I/scinet/gpc/mpi/openmpi/1.3.2-intel-v11.0-ofed/include -I/scinet/gpc/mpi/openmpi/1.3.2-intel-v11.0-ofed/lib" export LDFLAGS=-L$FFTW_LOCATION/lib /gpc/mpi/openmpi/1.3.2-intel-v11.0-ofed/lib/openmpi -I/scinet/gpc/x1/intel/Compiler/11.0/081/lib/intel64 -I/scinet/gpc/x1/intel/Compiler/11.0/081/mkl/lib/em64t/" ./configure --prefix=$GROMACS_LOCATION --without-motif-includes --without-motif-libraries --without-x --without-xml --enable-mpi --program-suffix="_openmpi" >output.configure.mpi 2>&1 make >output.make.mpi 2>&1 make install-mdrun >output.make_install.mpi 2>&1 make distclean </source>

-- [[User:Cneale|cneale]] 18 August 2009

=====Compiling serial gromacs on the power6 (submitted to the queue)=====

Note that the -O5 flag makes the power6 compilation take about 20 hours. You can drop it if you want, but it does buy a few percent more performance.
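
If compile time matters more than that last bit of speed, a lower optimization level is the obvious knob (a sketch; -O3 is a standard xl compiler level, but the exact performance difference is an assumption, not something benchmarked here):

<source lang="sh">
# drop -O5 to -O3 for a much faster build at some cost in mdrun speed
export FFLAGS="-O3 -qarch=pwr6 -qtune=pwr6"
export CFLAGS="-O3 -qarch=pwr6 -qtune=pwr6"
export CXXFLAGS="-O3 -qarch=pwr6 -qtune=pwr6"
</source>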

<source lang="sh">

  1. ======================================================================
  2. Specifies the name of the shell to use for the job
  3. @ shell = /usr/bin/ksh
  4. @ job_type = serial
  5. @ class = verylong
    1. # @ node = 1
    2. # @ tasks_per_node = 1
  6. @ output = $(jobid).out
  7. @ error = $(jobid).err
  8. @ wall_clock_limit = 40:00:00
  9. =====================================
    1. this is necessary in order to avoid core dumps for batch files
    2. which can cause the system to be overloaded
  10. ulimits
  11. @ core_limit = 0
  12. =====================================
    1. necessary to force use of infiniband network for MPI traffic
      1. TURN IT OFF # @ network.MPI = csss,not_shared,US,HIGH
  13. =====================================
  14. @ environment=COPY_ALL
  15. @ queue

export PATH=/usr/lpp/ppe.hpct/bin:/usr/vacpp/bin:.:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java14/jre/bin:/usr/java14/bin:/usr/lpp/LoadL/full/bin:/usr/local/bin export F77=xlf_r export CC=xlc_r export CXX=xlc++_r export FFLAGS="-O5 -qarch=pwr6 -qtune=pwr6" export CFLAGS="-O5 -qarch=pwr6 -qtune=pwr6" export CXXFLAGS="-O5 -qarch=pwr6 -qtune=pwr6" export FFTW_LOCATION=/scratch/cneale/exe/fftw-3.1.2_aix/exec export GROMACS_LOCATION=/scratch/cneale/exe/gromacs-4.0.4_aix/exec export CPPFLAGS=-I$FFTW_LOCATION/include export LDFLAGS=-L$FFTW_LOCATION/lib cd /scratch/cneale/exe/gromacs-4.0.4_aix mkdir exec ./configure --prefix=$GROMACS_LOCATION --without-motif-includes --without-motif-libraries --without-x --without-xml >output.configure 2>&1 make >output.make 2>&1 make install >output.make_install 2>&1 make distclean </source>

-- [[User:Cneale|cneale]] 18 August 2009

=====Compiling parallel gromacs on the power6 (submitted to the queue)=====

<source lang="sh">

  1. ===============================================================================
  2. Specifies the name of the shell to use for the job
  3. @ shell = /usr/bin/ksh
          1. @ job_type = serial
  4. @ job_type = parallel
  5. @ class = verylong
  6. @ node = 1
  7. @ tasks_per_node = 1
  8. @ output = $(jobid).out
  9. @ error = $(jobid).err
  10. @ wall_clock_limit = 40:00:00
  11. =====================================
    1. this is necessary in order to avoid core dumps for batch files
    2. which can cause the system to be overloaded
  12. ulimits
  13. @ core_limit = 0
  14. =====================================
    1. necessary to force use of infiniband network for MPI traffic
      1. TURN IT OFF # @ network.MPI = csss,not_shared,US,HIGH
  15. =====================================
  16. @ environment=COPY_ALL
  17. @ queue

export F77=xlf_r export CC=xlc_r export CXX=xlc++_r export FFLAGS="-O5 -qarch=pwr6 -qtune=pwr6" export CFLAGS="-O5 -qarch=pwr6 -qtune=pwr6" export CXXFLAGS="-O5 -qarch=pwr6 -qtune=pwr6" export FFTW_LOCATION=/scratch/cneale/exe/fftw-3.1.2_aix/exec export GROMACS_LOCATION=/scratch/cneale/exe/gromacs-4.0.4_aix/exec export CPPFLAGS=-I$FFTW_LOCATION/include export LDFLAGS=-L$FFTW_LOCATION/lib cd /scratch/cneale/exe/gromacs-4.0.4_aix echo "cn-r0-10" > ~/.rhosts echo localhost > ~/host.list for((i=2;i<=16;i++)); do

 echo localhost >> ~/host.list

done export MP_HOSTFILE=~/host.list ./configure --prefix=$GROMACS_LOCATION --without-motif-includes --without-motif-libraries --without-x --without-xml --enable-mpi --disable-nice --program-suffix="_mpi" CC=mpcc_r F77=mpxlf_r > output.configure_mpi 2>&1 make mdrun > output.make_mpi 2>&1 make install-mdrun > output.make_install_mpi 2>&1 make distclean </source>

-- [[User:Cneale|cneale]] 18 August 2009

=====fftw single precision compilation=====

FFTW is required by GROMACS. This compilation must be completed before compiling GROMACS.

<source lang="sh"> mkdir exec export FFTW_LOCATION=/scratch/cneale/exe/intel/fftw-3.1.2/exec module purge module load openmpi intel ./configure --enable-float --enable-threads --prefix=${FFTW_LOCATION} make make install make distclean </source>

-- [[User:Cneale|cneale]] 18 August 2009

=====Change to get mvapich2-1.4rc1 to compile gromacs=====

This change to the mvapich2-1.4rc1 source code is required in order to compile GROMACS with it.

<source lang="sh"> src/mpid/ch3/channels/mrail/src/gen2/ibv_channel_manager.c line 503 unsigned long debug = 0; to static unsigned long debug = 0; </source>

-- [[User:Cneale|cneale]] 18 August 2009

=====Compiling mvapich2-1.4rc1=====

<source lang="sh"> cd /scratch/cneale/exe/mvapich2-1.4rc1 mkdir exec module purge module load intel ./configure --prefix=/scratch/cneale/exe/mvapich2-1.4rc1/exec CC=icc CXX=icpc F90=ifort F77=ifort >output.configure 2>&1 make >output.make 2>&1 make install >output.make_install 2>&1 make distclean </source>

-- [[User:Cneale|cneale]] 18 August 2009

=====Compiling gromacs on GPC using mvapich2-1.4rc1=====

<source lang="sh">

  1. !/bin/bash

cd /scratch/cneale/exe/intel/gromacs-4.0.5 mkdir exec PATH=/usr/lib64/qt-3.3/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/xcat/bin:/opt/xcat/sbin:/root/bin:/opt/torque/bin:/opt/xcat/bin:/opt/xcat/sbin:/usr/lpp/mmfs/bin:/scratch/cneale/exe/mvapich2-1.4rc1/exec/bin/:/scinet/gpc/x1/intel/Compiler/11.0/081/bin/intel64 LD_LIBRARY_PATH=/scratch/cneale/exe/mvapich2-1.4rc1/exec/lib/:/scinet/gpc/x1/intel/Compiler/11.0/081/lib/intel64:/scinet/gpc/x1/intel/Compiler/11.0/081/mkl/lib/em64t/ export FFTW_LOCATION=/scratch/cneale/exe/intel/fftw-3.1.2/exec export GROMACS_LOCATION=/scratch/cneale/exe/intel/gromacs-4.0.5/exec export CPPFLAGS="-I$FFTW_LOCATION/include -I/scratch/cneale/exe/mvapich2-1.4rc1/exec/include -I/scratch/cneale/exe/mvapich2-1.4rc1/exec/lib" export LDFLAGS=-L$FFTW_LOCATION/lib ./configure --prefix=$GROMACS_LOCATION --without-motif-includes --without-motif-libraries --without-x --without-xml --enable-mpi --program-suffix="_mvapich2" >output.configure.mpi.mvapich2 2>&1 make >output.make.mpi.mvapich2 2>&1 make install-mdrun >output.make_install.mpi.mvapich2 2>&1 make distclean </source>

-- [[User:Cneale|cneale]] 18 August 2009