NWChem
The NWChem version 6.0 package was built on CentOS 6 using the Intel v12.1 compilers and the IntelMPI v4 library.
The following environment was used:
<source lang="bash">
# environment variables needed to build NWChem for the GPC
# must make sure to load the modules gcc, intel, intelmpi
module load gcc intel intelmpi

export LARGE_FILES=TRUE
export NWCHEM_TOP=/scinet/gpc/src/nwchem-6.0
export TCGRSH=/usr/bin/ssh
export NWCHEM_TARGET=LINUX64

# use the BLAS in MKL
export HAS_BLAS=yes
export BLASOPT="-L$MKLROOT/lib/intel64 -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread"
# export LIB_DEFINES='-DDFLT_TOT_MEM=16777216'

export USE_MPI=y
export USE_MPIF=y
export IB_HOME=/usr
export IB_INCLUDE=$IB_HOME/include
export IB_LIB=$IB_HOME/lib64
export IB_LIB_NAME="-libumad -libverbs -lpthread"
export ARMCI_NETWORK=OPENIB
export MPI_LOC=$MPI_HOME
export MPI_LIB=$MPI_LOC/lib64
export MPI_INCLUDE=$MPI_LOC/include64
export LIBMPI='-lmpigf -lmpigi -lmpi_ilp64 -lmpi'

# from the Argonne wiki https://wiki.alcf.anl.gov/index.php/NWChem
export NWCHEM_MODULES=all
export FOPTIMIZE="-O3 -xSSE2,SSE3,SSSE3,SSE4.1,SSE4.2 -no-prec-div -funroll-loops -unroll-aggressive"
export COPTIMIZE="-O3 -xSSE2,SSE3,SSSE3,SSE4.1,SSE4.2 -no-prec-div -funroll-loops"
</source>
The above script is /scinet/gpc/Applications/NWChem-6.0/utils/env.sh.
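To build with this environment, one would typically source the script and then follow the usual NWChem build procedure. The following is only a sketch (the make targets are the standard NWChem ones; check the build documentation for your version):
<source lang="bash">
# set up the build environment
source /scinet/gpc/Applications/NWChem-6.0/utils/env.sh

# standard NWChem build: configure the module list, then compile with the Intel compiler
cd $NWCHEM_TOP/src
make nwchem_config
make FC=ifort
</source>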
Important note:
NWChem must be run in "direct" mode, i.e. recalculating integrals on demand rather than saving them to disk. This is especially important when running on a large cluster with a shared filesystem. Moreover, direct mode has been observed to be at least 2-3 times faster than using temporary files on disk. For example, when running an MP2 computation, the following option MUST be set:
TASK DIRECT_MP2 optimize
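For instance, a complete input file for a direct-mode MP2 geometry optimization might look as follows (the water geometry and the cc-pvdz basis are purely illustrative); here it is written from a shell heredoc so it can be pasted straight into a terminal:
<source lang="bash">
# write an illustrative input file; the molecule and basis set are examples only
cat > nwchemjob.nw << 'EOF'
title "water MP2 geometry optimization, direct mode"
geometry units angstrom
  O   0.000   0.000   0.000
  H   0.757   0.586   0.000
  H  -0.757   0.586   0.000
end
basis
  * library cc-pvdz
end
task direct_mp2 optimize
EOF
</source>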
Running NWChem
- Load the necessary modules: gcc, intel, intelmpi, nwchem (best done in your .bashrc)
# module load gcc intel intelmpi nwchem
- Make sure you've made a link in your home directory to the default.nwchemrc file in the installation:
# ln -s $SCINET_NWCHEM_HOME/data/default.nwchemrc ~/.nwchemrc
If you used a previous version of NWChem, remove the existing link and recreate it, or remove it and copy the default.nwchemrc file to your home directory, as shown below.
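Either of the following refreshes the configuration (paths as above):
<source lang="bash">
# option 1: recreate the symbolic link
rm ~/.nwchemrc
ln -s $SCINET_NWCHEM_HOME/data/default.nwchemrc ~/.nwchemrc

# option 2: keep a private copy instead of a link
rm ~/.nwchemrc
cp $SCINET_NWCHEM_HOME/data/default.nwchemrc ~/.nwchemrc
</source>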
- The NWChem executable is in $SCINET_NWCHEM_HOME/bin/nwchem
- For multinode runs, use the sample InfiniBand script further below.
- Create a Torque script to run NWChem. Here is an example for a calculation on a single 8-core node:
<source lang="bash">
#!/bin/bash
#PBS -l nodes=1:ppn=8,walltime=48:00:00
#PBS -N nwchemjob
# To submit type: qsub nwc.sh (where nwc.sh is the name of this script)

# If this is not an interactive job (i.e. -I), then cd into the directory
# from which qsub was typed.
if [ "$PBS_ENVIRONMENT" != "PBS_INTERACTIVE" ]; then
    if [ -n "$PBS_O_WORKDIR" ]; then cd $PBS_O_WORKDIR; fi
fi

# the input file is typically named something like "nwchemjob.nw",
# so the calculation will be run like "mpirun nwchem nwchemjob.nw > nwchemjob.out"

# load the nwchem and other required modules if not in .bashrc already
module load gcc intel intelmpi nwchem

# run the program
mpirun -r ssh -env I_MPI_DEVICE ssm -np 8 $SCINET_NWCHEM_HOME/bin/nwchem nwchemjob.nw >& nwchemjob.out
</source>
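Once the script is saved (here as nwc.sh), it is submitted and monitored with the usual Torque commands:
<source lang="bash">
qsub nwc.sh       # submit the job
qstat -u $USER    # check its status in the queue
</source>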
Here is a similar script, but this one uses 2 InfiniBand-connected nodes; note that the Intel MPI device changes from ssm (sockets plus shared memory) to rdssm (RDMA over InfiniBand plus shared memory):
<source lang="bash">
#!/bin/bash
#PBS -l nodes=2:ib:ppn=8,walltime=48:00:00
#PBS -N nwchemjob
# To submit type: qsub nwc.sh (where nwc.sh is the name of this script)

# If this is not an interactive job (i.e. -I), then cd into the directory
# from which qsub was typed.
if [ "$PBS_ENVIRONMENT" != "PBS_INTERACTIVE" ]; then
    if [ -n "$PBS_O_WORKDIR" ]; then cd $PBS_O_WORKDIR; fi
fi

# the input file is typically named something like "nwchemjob.nw",
# so the calculation will be run like "mpirun nwchem nwchemjob.nw > nwchemjob.out"

# load the nwchem and other required modules if not in .bashrc already
module load gcc intel intelmpi nwchem

# run the program
mpirun -r ssh -env I_MPI_DEVICE rdssm -np 16 $SCINET_NWCHEM_HOME/bin/nwchem nwchemjob.nw >& nwchemjob.out
</source>
Notes on the older NWChem 5.1.1, built on CentOS 5
The NWChem version 5.1.1 package was built using the Intel v11.1 compilers and the IntelMPI v4 library.
The following environment was used:
<source lang="bash">
# environment variables needed to build NWChem for the GPC
# must make sure to load the modules gcc, intel, intelmpi
module load gcc intel intelmpi

export LARGE_FILES=TRUE
export NWCHEM_TOP=/scinet/gpc/src/nwchem-5.1.1
export TCGRSH=/usr/bin/ssh
export NWCHEM_TARGET=LINUX64

# use the BLAS in MKL - actually, this was NOT used, as there is a problem with 8-byte integers
# export HAS_BLAS=yes
# export BLASOPT="-L$MKLPATH $MKLPATH/libmkl_solver_ilp64_sequential.a -Wl,--start-group -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread"
# export LIB_DEFINES='-DDFLT_TOT_MEM=16777216'

export USE_MPI=y
export USE_MPIF=y
export IB_HOME=/usr
export IB_INCLUDE=$IB_HOME/include
export IB_LIB=$IB_HOME/lib64
export IB_LIB_NAME="-libumad -libverbs -lpthread"
export ARMCI_NETWORK=OPENIB
export MPI_LOC=$MPI_HOME
export MPI_LIB=$MPI_LOC/lib64
export MPI_INCLUDE=$MPI_LOC/include64
export LIBMPI='-lmpigf -lmpigi -lmpi_ilp64 -lmpi'

# from the Argonne wiki https://wiki.alcf.anl.gov/index.php/NWChem
export NWCHEM_MODULES=all
# export CCSDTQ=y
# export CCSDTLR=y
export FOPTIMIZE="-O3 -xSSE2,SSE3,SSSE3,SSE4.1,SSE4.2 -no-prec-div -funroll-loops -unroll-aggressive"
# export FOPTIMIZE="-O3 -xSSE2,SSE3,SSSE3,SSE4.1,SSE4.2 -no-prec-div -funroll-loops -multiple-processes=8 -unroll-aggressive"
export COPTIMIZE="-O3 -xSSE2,SSE3,SSSE3,SSE4.1,SSE4.2 -no-prec-div -funroll-loops"
</source>
The above script is /scinet/gpc/Applications/NWChem-5.1.1/utils/env.sh.
-- dgruner 3 September 2010