CPMD
The CPMD version 3.13.2 package was built using the Intel v11.1 compilers and the OpenMPI v1.4.1 library.
The source was patched with the latest available patch. In the SOURCE directory:
patch -p2 < ../../cpmd-3.13.2_01.patch
Basic configuration used:
<source lang="bash">
#INFO#
#INFO# Configuration to build a parallel cpmd executable for a linux machine
#INFO# with an AMD64/EM64T cpu (Opteron/AthlonFX/Athlon64/Xeon-EM64T) using
#INFO# the Intel Fortran Compiler with EM64T extensions.
#INFO#
#INFO# For optimal performance you should use a specifically tuned BLAS/LAPACK
#INFO# library. This example uses the Intel MKL library.
#INFO#
#INFO# see http://www.theochem.ruhr-uni-bochum.de/~axel.kohlmeyer/cpmd-linux.html
#INFO# for more information on compiling and running CPMD on linux machines.
#INFO#
#INFO# NOTE: CPMD cannot be compiled with the GNU Fortran compiler.
#INFO#
IRAT=2
CFLAGS='-O2 -Wall -m64'
CPP='/lib/cpp -P -C -traditional'
CPPFLAGS='-D__Linux -D__PGI -DFFT_DEFAULT -DPOINTER8 -DLINUX_IFC \
          -DPARALLEL -DMYRINET'
FFLAGS='-pc64 -O2 -unroll'
LFLAGS=' -L. -L${MKLPATH} ${MKLPATH}/libmkl_solver_lp64_sequential.a -Wl,--start-group \
         -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread'
FFLAGS_GROMOS='-Dgood_luck $(FFLAGS)'
if [ $debug ]; then
  FC='mpif77 -c -g'
  CC='mpicc -g -Wall -m64'
  LD='mpif77 -g'
else
  FC='mpif77 -c '
  CC='mpicc'
  LD='mpif77 -static-intel '
fi
</source>
In the SOURCE directory do:
<source lang="bash">
./config.sh LINUX_INTEL64_INTEL_MPI > Makefile
make >& make.out
</source>
Created cpmd/3.13.2 module:
<source lang="tcl">
#%Module -*- tcl -*-
# CPMD 3.13.2
proc ModulesHelp { } {
puts stderr "\tThis module adds CPMD environment variables"
}
module-whatis "adds CPMD environment variables"
# CPMD was compiled with Intel compilers and OpenMPI
prereq intel
prereq openmpi
setenv SCINET_CPMD_HOME /scinet/gpc/Applications/cpmd/3.13.2
setenv SCINET_CPMD_BIN /scinet/gpc/Applications/cpmd/3.13.2/bin
append-path PATH /scinet/gpc/Applications/cpmd/3.13.2/bin
setenv CPMD_PP_LIBRARY_PATH /scinet/gpc/Applications/cpmd/3.13.2/PPLIBNEW
</source>
Running CPMD
- Load the necessary modules: intel, openmpi, cpmd (best done in your .bashrc)
# module load intel openmpi cpmd
- The CPMD executable is in $SCINET_CPMD_BIN/cpmd.x
- For multinode runs, use the sample InfiniBand (IB) script further below
- Create a Torque script to run CPMD. Here is an example for a calculation on a single 8-core node:
<source lang="bash">
#!/bin/bash
# MOAB/Torque submission script for SciNet GPC (ethernet)
#PBS -l nodes=1:ppn=8,walltime=00:30:00
#PBS -N cpmdjob

# load the cpmd and other required modules if not in .bashrc already
module load intel openmpi cpmd

# If not an interactive job (i.e. -I), then cd into the directory where
# I typed qsub.
if [ "$PBS_ENVIRONMENT" != "PBS_INTERACTIVE" ]; then
  if [ -n "$PBS_O_WORKDIR" ]; then cd $PBS_O_WORKDIR; fi
fi

mpirun -np 8 -hostfile $PBS_NODEFILE $SCINET_CPMD_BIN/cpmd.x inp1 /home/mgalib/uspp/uspp-736/Pot >out1
</source>
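The PBS_ENVIRONMENT/PBS_O_WORKDIR guard in the script above can be exercised outside of Torque by setting the two variables by hand. This is only a sketch to illustrate the behaviour; the variable names are the standard Torque ones, and the directories are stand-ins:

<source lang="bash">
#!/bin/bash
# Wrap the guard from the submission script in a function so both branches
# can be demonstrated.
cd_to_workdir() {
  if [ "$PBS_ENVIRONMENT" != "PBS_INTERACTIVE" ]; then
    if [ -n "$PBS_O_WORKDIR" ]; then cd "$PBS_O_WORKDIR"; fi
  fi
}

# Simulate a batch job submitted from /tmp: the guard changes directory.
PBS_ENVIRONMENT=PBS_BATCH
PBS_O_WORKDIR=/tmp
cd_to_workdir
BATCH_DIR=$(pwd)    # now /tmp: batch jobs run in the submission directory

# Simulate an interactive (qsub -I) job: the guard leaves the directory alone.
cd /
PBS_ENVIRONMENT=PBS_INTERACTIVE
cd_to_workdir
FINAL_DIR=$(pwd)    # still /: interactive jobs stay where they started

echo "batch: $BATCH_DIR  interactive: $FINAL_DIR"
</source>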
Here is a similar script, but this one uses 2 InfiniBand-connected nodes:
<source lang="bash">
#!/bin/bash
#PBS -l nodes=2:ib:ppn=8,walltime=48:00:00,os=centos53computeA
#PBS -N cpmdjob

# To submit, type: qsub cpmdjob.sh (where cpmdjob.sh is the name of this script)

# load the cpmd and other required modules if not in .bashrc already
module load intel openmpi cpmd

# If not an interactive job (i.e. -I), then cd into the directory where
# I typed qsub.
if [ "$PBS_ENVIRONMENT" != "PBS_INTERACTIVE" ]; then
  if [ -n "$PBS_O_WORKDIR" ]; then cd $PBS_O_WORKDIR; fi
fi

mpirun -np 16 -hostfile $PBS_NODEFILE $SCINET_CPMD_BIN/cpmd.x inp1 /home/mgalib/uspp/uspp-736/Pot >out1
</source>
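Rather than hard-coding the process count (-np 8 or -np 16), it can be derived from the hostfile: Torque writes one line per requested core slot to $PBS_NODEFILE, so the line count always matches the nodes/ppn request. A sketch, where the stand-in hostfile mimicking nodes=2:ppn=8 is only for illustration outside a Torque job:

<source lang="bash">
#!/bin/bash
# Use $PBS_NODEFILE when running under Torque; otherwise build a stand-in
# hostfile with 16 slots (2 nodes x 8 cores) for illustration.
NODEFILE=${PBS_NODEFILE:-/tmp/fake_nodefile}
if [ ! -f "$NODEFILE" ]; then
  : > "$NODEFILE"
  for node in node1 node2; do
    for slot in 1 2 3 4 5 6 7 8; do echo "$node" >> "$NODEFILE"; done
  done
fi

# One hostfile line per core slot, so the line count is the MPI process count.
NP=$(wc -l < "$NODEFILE")
echo "mpirun -np $NP -hostfile $NODEFILE \$SCINET_CPMD_BIN/cpmd.x ..."
</source>

Under Torque the stand-in branch never triggers, so the same lines work unchanged in both the single-node and the two-node scripts above.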
-- dgruner 3 September 2010