
From oldwiki.scinet.utoronto.ca
Revision as of 14:35, 17 June 2013 by Northrup (talk | contribs)

CPMD

The CPMD version 3.13.2 package was built with the Intel v11.1 compilers and the OpenMPI v1.4.1 library.

The source was patched with the latest available patch. In the SOURCE directory:

  patch -p2 < ../../cpmd-3.13.2_01.patch


Basic configuration used:

<source lang="bash">

  #INFO#
  #INFO# Configuration to build a parallel cpmd executable for a linux machine
  #INFO# with an AMD64/EM64T cpu (Opteron/AthlonFX/Athlon64/Xeon-EM64T) using
  #INFO# the Intel Fortran Compiler with EM64T extensions.
  #INFO#
  #INFO# For optimal performance you should use a specifically tuned BLAS/LAPACK
  #INFO# library. This example uses the Intel MKL library.
  #INFO#
  #INFO# see http://www.theochem.ruhr-uni-bochum.de/~axel.kohlmeyer/cpmd-linux.html
  #INFO# for more information on compiling and running CPMD on linux machines.
  #INFO#
  #INFO# NOTE: CPMD cannot be compiled with the GNU Fortran compiler.
  #INFO#
    IRAT=2
    CFLAGS='-O2 -Wall -m64'
    CPP='/lib/cpp -P -C -traditional'
    CPPFLAGS='-D__Linux -D__PGI -DFFT_DEFAULT -DPOINTER8 -DLINUX_IFC \
       -DPARALLEL -DMYRINET'
    FFLAGS='-pc64 -O2 -unroll'
    LFLAGS=' -L. -L${MKLPATH} ${MKLPATH}/libmkl_solver_lp64_sequential.a -Wl,--start-group \
       -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread'
    FFLAGS_GROMOS='-Dgood_luck $(FFLAGS)'
    if [ $debug ]; then
      FC='mpif77 -c -g'
      CC='mpicc -g -Wall -m64'
      LD='mpif77 -g'
    else
      FC='mpif77 -c '
      CC='mpicc'
      LD='mpif77 -static-intel '
    fi

</source>


In the SOURCE directory do:

  ./config.sh LINUX_INTEL64_INTEL_MPI > Makefile
  make >& make.out
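A quick check of the build log catches failures early. A minimal sketch follows; the `make.out` here is fabricated purely for illustration, while on the cluster you would run the check in the SOURCE directory against the real log produced by the build:

```shell
# Fabricated build log for illustration only; on the cluster this file
# is produced by running "make >& make.out" in the SOURCE directory.
printf 'compiling modules ...\nLinking cpmd.x\n' > make.out

# Flag the build as broken if the log mentions any error.
if grep -qi 'error' make.out; then
  STATUS=failed
else
  STATUS=ok
fi
echo "build status: $STATUS"
rm -f make.out
```

If the status is ok, the `cpmd.x` executable should be present in the build directory.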

The cpmd/3.13.2 module file was created:

<source lang="tcl">

#%Module -*- tcl -*-
# CPMD 3.13.2

proc ModulesHelp { } {

 puts stderr "\tThis module adds CPMD environment variables"

}

module-whatis "adds CPMD environment variables"

# CPMD was compiled with Intel compilers and OpenMPI

prereq intel
prereq openmpi

setenv SCINET_CPMD_HOME /scinet/gpc/Applications/cpmd/3.13.2
setenv SCINET_CPMD_BIN /scinet/gpc/Applications/cpmd/3.13.2/bin
append-path PATH /scinet/gpc/Applications/cpmd/3.13.2/bin
setenv CPMD_PP_LIBRARY_PATH /scinet/gpc/Applications/cpmd/3.13.2/PPLIBNEW
</source>


Running CPMD

- Load the necessary modules: intel, openmpi, cpmd (best done in your .bashrc)

  module load intel openmpi cpmd

- The CPMD executable is $SCINET_CPMD_BIN/cpmd.x

- For multinode runs, use the sample InfiniBand script further below

- Create a Torque script to run CPMD. Here is an example for a calculation on a single 8-core node:

- The standard CPMD pseudopotentials are in the directory $CPMD_PP_LIBRARY_PATH. The following sample scripts use a user-defined pseudopotential.


<source lang="bash">

#!/bin/bash
# MOAB/Torque submission script for SciNet GPC (ethernet)
#PBS -l nodes=1:ppn=8,walltime=00:30:00
#PBS -N cpmdjob
# load the cpmd and other required modules if not in .bashrc already

module load intel openmpi cpmd

# If not an interactive job (i.e. -I), then cd into the directory where
# I typed qsub.

if [ "$PBS_ENVIRONMENT" != "PBS_INTERACTIVE" ]; then

  if [ -n "$PBS_O_WORKDIR" ]; then
    cd $PBS_O_WORKDIR
  fi

fi

mpirun -np 8 -hostfile $PBS_NODEFILE $SCINET_CPMD_BIN/cpmd.x inp1 /home/mgalib/uspp/uspp-736/Pot > out1
</source>
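The `-np` value must match the total core count requested from Torque (nodes × ppn). Rather than hardcoding it, the rank count can be derived from `$PBS_NODEFILE`, which contains one line per allocated core. A small sketch follows; the nodefile and the node name `gpc-node001` are faked here for illustration, since inside a real job Torque sets `$PBS_NODEFILE` for you:

```shell
# Fake a Torque nodefile for illustration; in a real job $PBS_NODEFILE
# is set by Torque and contains one line per allocated core.
PBS_NODEFILE=$(mktemp)
for i in 1 2 3 4 5 6 7 8; do echo gpc-node001; done > "$PBS_NODEFILE"

# One MPI rank per line in the nodefile (8 here, matching ppn=8).
NP=$(wc -l < "$PBS_NODEFILE")
echo "would run: mpirun -np $NP -hostfile $PBS_NODEFILE ..."
rm -f "$PBS_NODEFILE"
```

The same computed `$NP` works unchanged for the two-node InfiniBand script below, where the nodefile would contain 16 lines.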

Here is a similar script, but this one uses 2 InfiniBand-connected nodes:

<source lang="bash">

#!/bin/bash
#PBS -l nodes=2:ib:ppn=8,walltime=48:00:00
#PBS -N cpmdjob
# To submit type: qsub nwc.sh (where nwc.sh is the name of this script)
# If not an interactive job (i.e. -I), then cd into the directory where
# I typed qsub.

if [ "$PBS_ENVIRONMENT" != "PBS_INTERACTIVE" ]; then

  if [ -n "$PBS_O_WORKDIR" ]; then
    cd $PBS_O_WORKDIR
  fi

fi

mpirun -np 16 -hostfile $PBS_NODEFILE $SCINET_CPMD_BIN/cpmd.x inp1 /home/mgalib/uspp/uspp-736/Pot > out1
</source>


-- dgruner 3 September 2010