Nwchem

NWChem

The NWChem version 6.0 package was built using the Intel v12.1 compilers and the IntelMPI v4 library, on CentOS 6.


The following environment was used:

<source lang="bash">

# environment variables needed to build NWChem for the GPC
# must make sure to load modules gcc, intel, intelmpi

module load gcc intel intelmpi

export LARGE_FILES=TRUE
export NWCHEM_TOP=/scinet/gpc/src/nwchem-6.0
export TCGRSH=/usr/bin/ssh
export NWCHEM_TARGET=LINUX64

# use the BLAS in MKL

export HAS_BLAS=yes
export BLASOPT="-L$MKLROOT/lib/intel64 -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread"

# export LIB_DEFINES='-DDFLT_TOT_MEM=16777216'

export USE_MPI=y
export USE_MPIF=y
export IB_HOME=/usr
export IB_INCLUDE=$IB_HOME/include
export IB_LIB=$IB_HOME/lib64
export IB_LIB_NAME="-libumad -libverbs -lpthread"
export ARMCI_NETWORK=OPENIB
export MPI_LOC=$MPI_HOME
export MPI_LIB=$MPI_LOC/lib64
export MPI_INCLUDE=$MPI_LOC/include64
export LIBMPI='-lmpigf -lmpigi -lmpi_ilp64 -lmpi'

# from the Argonne wiki https://wiki.alcf.anl.gov/index.php/NWChem

export NWCHEM_MODULES=all
export FOPTIMIZE="-O3 -xSSE2,SSE3,SSSE3,SSE4.1,SSE4.2 -no-prec-div -funroll-loops -unroll-aggressive"
export COPTIMIZE="-O3 -xSSE2,SSE3,SSSE3,SSE4.1,SSE4.2 -no-prec-div -funroll-loops"

</source>

The above script is /scinet/gpc/Applications/NWChem-6.0/utils/env.sh.
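
With this environment in place, the build itself follows the usual NWChem procedure: configure the module list, then run the top-level make. The sketch below is illustrative only; it assumes env.sh has been sourced so that $NWCHEM_TOP and NWCHEM_MODULES are set as shown above, and uses the standard nwchem_config and make targets from the NWChem distribution.

<source lang="bash">
# minimal build sketch, assuming the env.sh environment above is in effect
source /scinet/gpc/Applications/NWChem-6.0/utils/env.sh

cd $NWCHEM_TOP/src

# generate the build configuration for the selected modules (NWCHEM_MODULES=all above)
make nwchem_config

# compile with the Intel compilers; the binary ends up in
# $NWCHEM_TOP/bin/$NWCHEM_TARGET/nwchem
make FC=ifort CC=icc
</source>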


Running NWChem

- Load the necessary modules: gcc, intel, intelmpi, nwchem (best done in your .bashrc)

  # module load gcc intel intelmpi nwchem

- Make sure you've made a link in your home directory to the default.nwchemrc file in the installation:

  # ln -s $SCINET_NWCHEM_HOME/data/default.nwchemrc ~/.nwchemrc

If you have used a previous version of NWChem, remove the link and recreate it (or remove it and copy the default .nwchemrc into your home directory), as sketched below.
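
Refreshing the link might look like the following sketch; it assumes the nwchem module for the current version is loaded, so that $SCINET_NWCHEM_HOME points at the new installation:

<source lang="bash">
# remove a stale link left over from a previous NWChem version
rm -f ~/.nwchemrc

# recreate it against the currently loaded installation
ln -s $SCINET_NWCHEM_HOME/data/default.nwchemrc ~/.nwchemrc
</source>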

- The NWChem executable is in $SCINET_NWCHEM_HOME/bin/nwchem

- For multi-node runs, use the sample InfiniBand (IB) script further below

- Create a torque script to run NWChem. Here is an example for a calculation on a single 8-core node:


<source lang="bash">

#!/bin/bash
#PBS -l nodes=1:ppn=8,walltime=48:00:00,os=centos53computeA
#PBS -N nwchemjob

# To submit, type: qsub nwc.sh (where nwc.sh is the name of this script)

# If this is not an interactive job (i.e. -I), then cd into the directory
# from which qsub was invoked.

if [ "$PBS_ENVIRONMENT" != "PBS_INTERACTIVE" ]; then

  if [ -n "$PBS_O_WORKDIR" ]; then
    cd "$PBS_O_WORKDIR"
  fi

fi

# the input file is typically named something like "nwchemjob.nw",
# so the calculation will be run like "mpirun nwchem nwchemjob.nw > nwchemjob.out"

# load the nwchem and other required modules if not in .bashrc already

module load gcc intel intelmpi nwchem

# run the program

mpirun -r ssh -env I_MPI_DEVICE ssm -np 8 $SCINET_NWCHEM_HOME/bin/nwchem nwchemjob.nw >& nwchemjob.out
</source>
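
Submitting and monitoring the job follows standard Torque usage on the GPC; the commands below are a generic sketch (the script name nwc.sh matches the comment in the script above):

<source lang="bash">
# submit the script and note the job id it returns
qsub nwc.sh

# check the job's state in the queue
qstat -u $USER

# once it is running, follow the output file
tail -f nwchemjob.out
</source>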

Here is a similar script, but this one uses 2 InfiniBand-connected nodes:

<source lang="bash">

#!/bin/bash
#PBS -l nodes=2:ib:ppn=8,walltime=48:00:00,os=centos53computeA
#PBS -N nwchemjob

# To submit, type: qsub nwc.sh (where nwc.sh is the name of this script)

# If this is not an interactive job (i.e. -I), then cd into the directory
# from which qsub was invoked.

if [ "$PBS_ENVIRONMENT" != "PBS_INTERACTIVE" ]; then

  if [ -n "$PBS_O_WORKDIR" ]; then
    cd "$PBS_O_WORKDIR"
  fi

fi

# the input file is typically named something like "nwchemjob.nw",
# so the calculation will be run like "mpirun nwchem nwchemjob.nw > nwchemjob.out"

# load the nwchem and other required modules if not in .bashrc already

module load gcc intel intelmpi nwchem

# run the program

mpirun -r ssh -env I_MPI_DEVICE rdssm -np 16 $SCINET_NWCHEM_HOME/bin/nwchem nwchemjob.nw >& nwchemjob.out
</source>
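
For reference, an input file such as nwchemjob.nw is a plain-text NWChem input deck. The example below is purely illustrative (a small SCF single-point calculation); the molecule, basis set, and task are not specific to this installation.

<source lang="text">
start nwchemjob
title "Water SCF/6-31G* single-point energy (illustrative example)"

geometry units angstroms
  O   0.000   0.000   0.000
  H   0.757   0.586   0.000
  H  -0.757   0.586   0.000
end

basis
  * library 6-31G*
end

task scf energy
</source>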


Notes on the older NWChem 5.1.1, built on CentOS 5

The NWChem version 5.1.1 package was built using the Intel v11.1 compilers and the IntelMPI v4 library.

The following environment was used:

<source lang="bash">

# environment variables needed to build NWChem for the GPC
# must make sure to load modules gcc, intel, intelmpi

module load gcc intel intelmpi

export LARGE_FILES=TRUE
export NWCHEM_TOP=/scinet/gpc/src/nwchem-5.1.1
export TCGRSH=/usr/bin/ssh
export NWCHEM_TARGET=LINUX64

# use the BLAS in MKL - Actually, this was NOT used, as there is a problem with 8-byte integers
# export HAS_BLAS=yes
# export BLASOPT="-L$MKLPATH $MKLPATH/libmkl_solver_ilp64_sequential.a -Wl,--start-group -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread"
# export LIB_DEFINES='-DDFLT_TOT_MEM=16777216'

export USE_MPI=y
export USE_MPIF=y
export IB_HOME=/usr
export IB_INCLUDE=$IB_HOME/include
export IB_LIB=$IB_HOME/lib64
export IB_LIB_NAME="-libumad -libverbs -lpthread"
export ARMCI_NETWORK=OPENIB
export MPI_LOC=$MPI_HOME
export MPI_LIB=$MPI_LOC/lib64
export MPI_INCLUDE=$MPI_LOC/include64
export LIBMPI='-lmpigf -lmpigi -lmpi_ilp64 -lmpi'

# from the Argonne wiki https://wiki.alcf.anl.gov/index.php/NWChem

export NWCHEM_MODULES=all

# export CCSDTQ=y
# export CCSDTLR=y

export FOPTIMIZE="-O3 -xSSE2,SSE3,SSSE3,SSE4.1,SSE4.2 -no-prec-div -funroll-loops -unroll-aggressive"

# export FOPTIMIZE="-O3 -xSSE2,SSE3,SSSE3,SSE4.1,SSE4.2 -no-prec-div -funroll-loops -multiple-processes=8 -unroll-aggressive"

export COPTIMIZE="-O3 -xSSE2,SSE3,SSSE3,SSE4.1,SSE4.2 -no-prec-div -funroll-loops"
</source>


The above script is /scinet/gpc/Applications/NWChem-5.1.1/utils/env.sh.



-- dgruner 3 September 2010