<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-GB">
	<id>https://oldwiki.scinet.utoronto.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Guido</id>
	<title>oldwiki.scinet.utoronto.ca - User contributions [en-gb]</title>
	<link rel="self" type="application/atom+xml" href="https://oldwiki.scinet.utoronto.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Guido"/>
	<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php/Special:Contributions/Guido"/>
	<updated>2026-05-10T04:14:36Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.12</generator>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=3828</id>
		<title>Running CCSM4</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=3828"/>
		<updated>2011-07-27T17:09:00Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''It is important to point out that all updates to the model system will only occur with CESM1.0 updates, not with CCSM4.0. It is also important to note that CCSM4 is a subset of CESM1. Although CESM1 supersedes CCSM4, users can run all CCSM4 experiments from the CESM1 code base.&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
'''Note:''' To run interactively in the GPC interactive queue, or on TCS using a small subset of interactive processors, see the instructions below. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The scientifically validated CESM1 configurations, together with a complete list of the model resolutions, are given in the output of:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/cesm1_current/scripts/create_newcase --list&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
                             CESM1.0 README&lt;br /&gt;
  &lt;br /&gt;
 For both a quick start as well as a detailed summary of creating and running &lt;br /&gt;
 a CESM model case, see the CESM1.0 User's Guide at&lt;br /&gt;
 http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
  &lt;br /&gt;
 IMPORTANT INFORMATION ABOUT SCIENTIFIC VALIDATION&lt;br /&gt;
  &lt;br /&gt;
   CESM1.0 has the flexibility to configure cases with many different &lt;br /&gt;
   combinations of component models, grids, and model settings, but this &lt;br /&gt;
   version of CESM has only been validated scientifically for the following &lt;br /&gt;
   fully active configurations:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_RAMPCO2_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CAM5&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_RAMPCO2_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BDRD&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BDRD&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN_CHEM &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN_CHEM&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_WACCM_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_WACCM_CN&lt;br /&gt;
   &lt;br /&gt;
      T31_gx3v7       B_1850_CN&lt;br /&gt;
 &lt;br /&gt;
   If the user is interested in running a &amp;quot;stand-alone&amp;quot; component configuration, &lt;br /&gt;
   the following model configurations have been validated scientifically and &lt;br /&gt;
   have associated diagnostic output as part of the release:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_1.9x2.5    F_2000_WACCM&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CAM5&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CN&lt;br /&gt;
      0.9x1.25_0.9x1.25  F_AMIP_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  I_2000&lt;br /&gt;
      0.9x1.25_gx1v6  I_2000_CN&lt;br /&gt;
 &lt;br /&gt;
      T62_gx1v6       C_NORMAL_YEAR&lt;br /&gt;
 &lt;br /&gt;
   For more information regarding alternative component configurations, &lt;br /&gt;
   please refer to the individual component web pages at&lt;br /&gt;
   http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
  CESM1 RESOLUTIONS:  name (shortname) &lt;br /&gt;
    pt1_pt1 (pt1)  &lt;br /&gt;
    0.23x0.31_0.23x0.31 (f02_f02)  &lt;br /&gt;
    0.23x0.31_gx1v6 (f02_g16)  &lt;br /&gt;
    0.23x0.31_tx0.1v2 (f02_t12)  &lt;br /&gt;
    0.47x0.63_0.47x0.63 (f05_f05)  &lt;br /&gt;
    0.47x0.63_gx1v6 (f05_g16)  &lt;br /&gt;
    0.47x0.63_tx0.1v2 (f05_t12)  &lt;br /&gt;
    0.9x1.25_0.9x1.25 (f09_f09)  &lt;br /&gt;
    0.9x1.25_gx1v6 (f09_g16)  &lt;br /&gt;
    1.9x2.5_1.9x2.5 (f19_f19)  &lt;br /&gt;
    1.9x2.5_gx1v6 (f19_g16)  &lt;br /&gt;
    4x5_4x5 (f45_f45)  &lt;br /&gt;
    4x5_gx3v7 (f45_g37)  &lt;br /&gt;
    T62_gx3v7 (T62_g37)  &lt;br /&gt;
    T62_tx0.1v2 (T62_t12)  &lt;br /&gt;
    T62_gx1v6 (T62_g16)  &lt;br /&gt;
    T31_T31 (T31_T31)  &lt;br /&gt;
    T31_gx3v7 (T31_g37)  &lt;br /&gt;
    T42_T42 (T42_T42)  &lt;br /&gt;
    10x15_10x15 (f10_f10)  &lt;br /&gt;
    ne30np4_1.9x2.5_gx1v6 (ne30_f19_g16)  &lt;br /&gt;
    ne240np4_0.23x0.31_gx1v6 (ne240_f02_g16)  &lt;br /&gt;
    T85_T85 (T85_T85)  &lt;br /&gt;
  &lt;br /&gt;
  COMPSETS:  name (shortname): description &lt;br /&gt;
    A_PRESENT_DAY (A) &lt;br /&gt;
         Description: All data model  &lt;br /&gt;
    A_GLC (AG) &lt;br /&gt;
         Description: All data model plus glc (glacier model)  &lt;br /&gt;
    B_2000 (B) &lt;br /&gt;
         Description: All active components, present day  &lt;br /&gt;
    B_2000_CN (BCN) &lt;br /&gt;
         Description: all active components, present day, with CN (Carbon Nitrogen) in clm  &lt;br /&gt;
    B_1850_CAM5 (B1850C5) &lt;br /&gt;
         Description: All active components, pre-industrial, cam5 physics  &lt;br /&gt;
    B_1850 (B1850) &lt;br /&gt;
         Description: All active components, pre-industrial  &lt;br /&gt;
    B_1850_CN (B1850CN) &lt;br /&gt;
         Description: all active components, pre-industrial, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_2000_CN_CHEM (B2000CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_CN_CHEM (B1850CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_RAMPCO2_CN (B1850RMCN) &lt;br /&gt;
          Description: All active components, pre-industrial with co2 ramp, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000 (B20TR) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient  &lt;br /&gt;
    B_1850-2000_CN (B20TRCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000_CN_CHEM (B20TRCNCHM) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850-2000_CAM5 (B20TRC5) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, cam5 physics  &lt;br /&gt;
    B_2000_GLC (BG) &lt;br /&gt;
         Description: all active components, with active glc  &lt;br /&gt;
    B_2000_TROP_MOZART (BMOZ) &lt;br /&gt;
         Description: All active components, with trop_mozart  &lt;br /&gt;
    B_1850_WACCM (B1850W) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm  &lt;br /&gt;
    B_1850_WACCM_CN (B1850WCN) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm and CN  &lt;br /&gt;
    B_1850-2000_WACCM_CN (B20TRWCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, WACCM with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1955-2005_WACCM_CN (B55TRWCN) &lt;br /&gt;
          Description: All active components, 1955 to 2005 transient, WACCM with daily solar data and SPEs, CLM with CN  &lt;br /&gt;
    B_1850_BGC-BPRP (B1850BPRP) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850_BGC-BDRD (B1850BDRD) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    B_1850-2000_BGC-BPRP (B20TRBPRP) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850-2000_BGC-BDRD (B20TRBDRD) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    C_NORMAL_YEAR_ECOSYS (CECO) &lt;br /&gt;
         Description: Active ocean model with ecosys and with COREv2 normal year forcing  &lt;br /&gt;
    C_NORMAL_YEAR (C) &lt;br /&gt;
         Description: Active ocean model with COREv2 normal year forcing  &lt;br /&gt;
    D_NORMAL_YEAR (D) &lt;br /&gt;
         Description: Active ice model with COREv2 normal year forcing  &lt;br /&gt;
    E_2000 (E) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean, present day  &lt;br /&gt;
    E_2000_GLC (EG) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean and glc, present day  &lt;br /&gt;
    E_1850_CN (E1850CN) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, with CN  &lt;br /&gt;
    E_1850_CAM5 (E1850C5) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, cam5 physics   &lt;br /&gt;
    F_AMIP_CN (FAMIPCN) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol - valid only for 1 degree cam/clm/pres-cice  &lt;br /&gt;
    F_AMIP_CAM5 (FAMIPC5) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol with cam5  &lt;br /&gt;
    F_1850 (F1850) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_1850_CAM5 (F1850C5) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn, cam5 physics  &lt;br /&gt;
    F_2000 (F) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice  &lt;br /&gt;
    F_2000_CAM5 (FC5) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, cam5 physics  &lt;br /&gt;
    F_2000_CN (FCN) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice with CN  &lt;br /&gt;
    F_1850-2000_CN (F20TRCN) &lt;br /&gt;
         Description: 20th Century transient stand-alone cam default, prescribed ocn/ice, with CN  &lt;br /&gt;
    F_2000_GLC (FG) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, glc (glacier model)  &lt;br /&gt;
    F_1850_CN_CHEM (F1850CNCHM) &lt;br /&gt;
         Description: stand-alone cam/clm, pre-industrial, with CN in CLM, super_fast_llnl chem in cam  &lt;br /&gt;
    F_1850_WACCM (F1850W) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_2000_WACCM (FW) &lt;br /&gt;
         Description: present-day cam/clm with prescribed ice/ocn  &lt;br /&gt;
    G_1850_ECOSYS (G1850ECO) &lt;br /&gt;
         Description: 1850 control for pop-ecosystem/cice/datm7/dlnd-rx1  &lt;br /&gt;
    G_NORMAL_YEAR (G) &lt;br /&gt;
         Description: Coupled ocean ice with COREv2 normal year forcing  &lt;br /&gt;
    H_PRESENT_DAY (H) &lt;br /&gt;
         Description: Coupled ocean ice slnd  &lt;br /&gt;
    I_2000 (I) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850 (I1850) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and Satellite phenology (SP), CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_2000_GLC (IG) &lt;br /&gt;
         Description: Active glacier model and active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1948-2004 (I4804) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000 (I8520) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and transient Satellite phenology (SP), and Aerosol deposition from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    I_2000_CN (ICN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850_CN (I1850CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_1948-2004_CN (I4804CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000_CN (I8520CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and transient CN, Aerosol dep from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    S_PRESENT_DAY (S) &lt;br /&gt;
         Description: All stub models plus xatm  &lt;br /&gt;
    X_PRESENT_DAY (X) &lt;br /&gt;
         Description: All dead model  &lt;br /&gt;
    XG_PRESENT_DAY (XG) &lt;br /&gt;
         Description: All dead model and cism  &lt;br /&gt;
  &lt;br /&gt;
  MACHINES:  name (description)&lt;br /&gt;
    tcs (U of T IBM p6, os is AIX, 32 pes/node, batch system is Moab/LoadLeveler) &lt;br /&gt;
    gpc (U of T iDataPlex intel cluster, os is linux, 8 pes/node, batch system is Moab/Torque) &lt;br /&gt;
    bluefire (NCAR IBM p6, os is AIX, 32 pes/node, batch system is LSF) &lt;br /&gt;
    brutus_po (Brutus Linux Cluster ETH (pgi/9.0-1 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_pm (Brutus Linux Cluster ETH (pgi/9.0-1 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_io (Brutus Linux Cluster ETH (intel/10.1.018 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_im (Brutus Linux Cluster ETH (intel/10.1.018 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    edinburgh_lahey (NCAR CGD Linux Cluster (lahey), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_pgi (NCAR CGD Linux Cluster (pgi), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_intel (NCAR CGD Linux Cluster (intel), 8 pes/node, batch system is PBS) &lt;br /&gt;
    franklin (NERSC XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    hadley (UCB Linux Cluster, os is Linux (ia64), batch system is PBS) &lt;br /&gt;
    hopper (NERSC XT5, os is CNL, 8 pes/node, batch system is PBS) &lt;br /&gt;
    intrepid (ANL IBM BG/P, os is BGP, 4 pes/node, batch system is cobalt) &lt;br /&gt;
    jaguar (ORNL XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    jaguarpf (ORNL XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    kraken (NICS/UT/teragrid XT5, os is CNL, 12 pes/node) &lt;br /&gt;
    lynx_pgi (NCAR XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    midnight (ARSC Sun Cluster, os is Linux (pgi), batch system is PBS) &lt;br /&gt;
    pleiades (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 3.0 GHz Harpertown processors, 8 pes/node and 8 GB of memory, batch system is PBS) &lt;br /&gt;
    pleiades_wes (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.93 GHz Westmere processors, 12 pes/node and 24 GB of memory, batch system is PBS) &lt;br /&gt;
    prototype_atlas (LLNL Linux Cluster, Linux (pgi), 8 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_hera (LLNL Linux Cluster, Linux (pgi), 16 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_columbia (NASA Ames Linux Cluster, Linux (ia64), 2 pes/node, batch system is PBS) &lt;br /&gt;
    prototype_frost (NCAR IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_nyblue (SUNY IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_ranger (TACC Linux Cluster, Linux (pgi), 1 pes/node, batch system is SGE) &lt;br /&gt;
    prototype_ubgl (LLNL IBM BG/L, os is BGL, 2 pes/node, batch system is Moab) &lt;br /&gt;
    generic_ibm (generic ibm power system, os is AIX, batch system is LoadLeveler, user-defined) &lt;br /&gt;
    generic_xt (generic CRAY XT, os is CNL, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pgi (generic linux (pgi), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_lahey (generic linux (lahey), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_intel (generic linux (intel), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pathscale (generic linux (pathscale), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Initializing the Model Setup:''&lt;br /&gt;
&lt;br /&gt;
The initial setup of the model on TCS is simplified with the short script below&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/$USER&lt;br /&gt;
 export MACH=tcs&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f19_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES -scratchroot $SCRATCH &lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' $CCSMROOT should point to the model code version in ''/project/ccsm'' with the &amp;quot;_current&amp;quot; suffix; the same applies to CESM1.&lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' $CASEROOT, which specifies your model run naming convention, will not archive properly (to the short-term archiving directory, i.e. /scratch) if the name is too long. It is best to keep it short. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This script creates an 1850 control run with all components of the model fully active and carbon-nitrogen cycling in the land component. The resolution is 1.9x2.5 in the atmosphere and the nominal one-degree (gx1v6) grid in the ocean. The case is created in the ~/runs directory.&lt;br /&gt;
&lt;br /&gt;
For valid component sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/a2967.html &lt;br /&gt;
For information on resolution sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/x42.html#ccsm_grids &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Load Balancing:''&lt;br /&gt;
&lt;br /&gt;
For NCAR bluefire load-balancing tables for a select set of simulations, see:&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/timing/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/timing/&lt;br /&gt;
&lt;br /&gt;
 cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
&lt;br /&gt;
edit env_mach_pes.xml&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ATM&amp;quot;   value=&amp;quot;448&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ATM&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_LND&amp;quot;   value=&amp;quot;320&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_LND&amp;quot;   value=&amp;quot;160&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ICE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ICE&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_OCN&amp;quot;   value=&amp;quot;256&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_OCN&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_CPL&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_CPL&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_GLC&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once this file has been modified, you can configure the case:&lt;br /&gt;
&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 &lt;br /&gt;
You will notice that configure changes the file that you just edited, and you can see the total number of processors used by the simulation (704, or 11 nodes, in this case):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;TOTALPES&amp;quot;   value=&amp;quot;704&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_LEVEL&amp;quot;   value=&amp;quot;1r&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_PCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_TCOST&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_ESTCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
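The node count quoted above follows from dividing the total task count by the number of tasks per node and rounding up; a quick shell sanity check (a sketch, not a CESM tool, using the values from env_mach_pes.xml above):&lt;br /&gt;

```shell
# Values taken from the env_mach_pes.xml entries shown above.
TOTALPES=704
PES_PER_NODE=64
# Integer ceiling division: nodes needed to hold all tasks.
NODES=$(( (TOTALPES + PES_PER_NODE - 1) / PES_PER_NODE ))
echo "$NODES nodes"   # prints "11 nodes"
```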
'''Note:''' Rather than modifying the load balancing manually, NCAR provides the ''xmlchange'' script, which resides in your $CASEROOT directory and lets you change the individual component CPU allocations without editing the env_mach_pes.xml file directly:&lt;br /&gt;
&lt;br /&gt;
For example, we might want 8 CPUs running the OCN component continuously, with ATM on the remaining 24 CPUs and LND, ICE and CPL on 8 of those each. To set this up, enter:&lt;br /&gt;
&lt;br /&gt;
 configure -cleanmach&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 24&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 8 &lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_CPL -val 16&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_OCN -val 24&lt;br /&gt;
 configure -case&lt;br /&gt;
&lt;br /&gt;
Then rebuild and resubmit.&lt;br /&gt;
&lt;br /&gt;
The task geometry used by LoadLeveler on TCS is located in the file: ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Ensure that the proper modules are loaded:&lt;br /&gt;
&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) ncl/5.1.1               3) netcdf/4.0.1_nc3        5) xlf/13.1&lt;br /&gt;
  2) nco/3.9.6               4) parallel-netcdf/1.1.1   6) vacpp/11.1&lt;br /&gt;
&lt;br /&gt;
Now compile the model with:&lt;br /&gt;
&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
One of the pre-processing steps in this build sequence is to fetch input data sets (initial and boundary conditions) from the NCAR SVN server. If there is a large amount of initial-condition data to transfer from the NCAR repository, you may want to do this yourself on the ''datamover1'' node before you build, since ''datamover1'' has a high-bandwidth connection to the outside.&lt;br /&gt;
'''Note:''' Most of the input data is already on /project/ccsm, so this step is not required for the more common configurations.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; ssh datamover1&lt;br /&gt;
 Last login: Wed Jul  7 16:38:14 2010 from tcs-f11n06-gpfs&lt;br /&gt;
 user@gpc-logindm01:~&amp;gt;cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;./check_input_data -inputdata /project/ccsm/inputdata -export&lt;br /&gt;
 Input Data List Files Found:&lt;br /&gt;
 ./Buildconf/cam.input_data_list&lt;br /&gt;
 ./Buildconf/clm.input_data_list&lt;br /&gt;
 ./Buildconf/cice.input_data_list&lt;br /&gt;
 ./Buildconf/pop2.input_data_list&lt;br /&gt;
 ./Buildconf/cpl.input_data_list&lt;br /&gt;
 export https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc /project/ccsm/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc ..... success&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Setting the Simulation Length:''&lt;br /&gt;
&lt;br /&gt;
The length of the model run can be set by editing ''env_run.xml'' at any time in the setup sequence:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--&amp;quot;if RESUBMIT is greater than 0, then case will automatically resubmit (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;RESUBMIT&amp;quot;   value=&amp;quot;10&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjuction with STOP_N and STOP_DATE, valid values: none,never,nsteps,nstep,nseconds,nsecond,nminutes,nminute,nhours,nhour,ndays,nday,nmonths,nmonth,nyears,nyear,date,ifdays0,end (char) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_OPTION&amp;quot;   value=&amp;quot;nmonths&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjuction with STOP_OPTION and STOP_DATE (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_N&amp;quot;   value=&amp;quot;12&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
These settings tell the model to stop and write a checkpoint after each model year (12 months); with RESUBMIT set to 10, the job then resubmits itself automatically for the following segments.&lt;br /&gt;
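The same values can also be set with the ''xmlchange'' script described above, rather than editing ''env_run.xml'' by hand; a sketch, run from the case directory (the ids match the entries shown above):&lt;br /&gt;

```shell
# Sketch only: xmlchange is the CESM utility in the case directory.
xmlchange -file env_run.xml -id STOP_OPTION -val nmonths
xmlchange -file env_run.xml -id STOP_N -val 12
xmlchange -file env_run.xml -id RESUBMIT -val 10
```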
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on the Distributed System (TCS):''&lt;br /&gt;
&lt;br /&gt;
The model is now ready to be submitted to the TCS batch queue&lt;br /&gt;
&lt;br /&gt;
 llsubmit ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Once the model has run through a checkpoint, timing information on the simulation can be found in:&lt;br /&gt;
 ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16/timing&lt;br /&gt;
&lt;br /&gt;
Standard output from the model can be followed during runtime by going to:&lt;br /&gt;
 /scratch/guido/ccsm4_comp-B_1850_CN_res-f19_g16/run&lt;br /&gt;
and running&lt;br /&gt;
 tail -f &amp;lt;component_log_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The model will archive the NetCDF output in:&lt;br /&gt;
 /scratch/$USER/archive&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Cloning Simulations''&lt;br /&gt;
&lt;br /&gt;
A useful command for quickly setting up multiple runs is ''create_clone'', which clones an existing case so there is no need to run the setup script above every time:&lt;br /&gt;
 cd ~/runs&lt;br /&gt;
 /project/ccsm/ccsm4_0_current/scripts/create_clone -clone ccsm4_comp-B_1850_CN_res-f09_g16 -case ccsm4_comp-B_1850_CN_res-f09_g16_clone -v&lt;br /&gt;
&lt;br /&gt;
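Because cloning is cheap, it can be scripted; a hedged sketch (the case names here are illustrative) that creates several ensemble members from one configured base case:&lt;br /&gt;

```shell
# Illustrative only: clone one configured base case into three ensemble members.
CCSMROOT=/project/ccsm/ccsm4_0_current
BASE=ccsm4_comp-B_1850_CN_res-f09_g16
cd ~/runs
for n in 01 02 03; do
  $CCSMROOT/scripts/create_clone -clone $BASE -case ${BASE}_ens$n
done
```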
To change the load balancing (env_mach_pes.xml) or other parameters in an existing simulation setup, do a clean build to make sure the model is rebuilt properly:&lt;br /&gt;
 ./configure -cleanmach&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.clean_build&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on GPC''&lt;br /&gt;
&lt;br /&gt;
The setup script is almost identical:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/guido&lt;br /&gt;
 export MACH=gpc&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f09_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4gpc_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
To load balance and run the model, follow the steps above.&lt;br /&gt;
The env_mach_pes.xml configuration file needs to be modified as follows, since GPC nodes have 8 cores each:&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
Use qsub to submit the model to the GPC cluster:&lt;br /&gt;
 qsub ccsm4gpc_comp-B_1850_CN_res-f09_g16.gpc.run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 interactively on TCS''&lt;br /&gt;
&lt;br /&gt;
In the run script you need to set and unset the following environment variables. You also need a hostfile that lists the node processors you will be using.&lt;br /&gt;
&lt;br /&gt;
 unsetenv MP_EUILIB&lt;br /&gt;
 &lt;br /&gt;
 setenv MP_PROCS 16 &lt;br /&gt;
 &lt;br /&gt;
 setenv MP_NODES 1&lt;br /&gt;
   &lt;br /&gt;
 &lt;br /&gt;
 /usr/bin/poe /project/ccsm/bin/ccsm_launch ./ccsm.exe -hfile /project/&amp;lt;user&amp;gt;/runs/&amp;lt;casename&amp;gt;/hostfile&lt;br /&gt;
&lt;br /&gt;
and the hostfile looks like this (here, 16 tasks on a single TCS node):&lt;br /&gt;
&lt;br /&gt;
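A file like this, with one line per MPI task, can be generated with a short loop; a sketch, with the node name and task count as examples to match to your interactive session:&lt;br /&gt;

```shell
# Write 16 copies of the node name, one per MPI task, to ./hostfile.
# Node name and count are examples; adjust them to your session.
for i in $(seq 16); do
  echo tcs-f11n06
done > hostfile
```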
 --&amp;gt; more hostfile&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;br /&gt;
 tcs-f11n06&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=3241</id>
		<title>Running CCSM4</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=3241"/>
		<updated>2011-05-30T17:45:59Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''It is important to point out that all updates to the model system will only occur with CESM1.0 updates, not with CCSM4.0. It is also important to note that CCSM4 is a subset of CESM1. Although CESM1 supersedes CCSM4, users can run all CCSM4 experiments from the CESM1 code base.&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
'''Note:''' To run interactively in the GPC interactive queue, or on TCS on a small subset of interactive processors, see the instructions below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The scientifically validated CESM1 configurations, along with a complete list of the model resolutions, can be listed with:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/cesm1_current/scripts/create_newcase --list&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
                             CESM1.0 README&lt;br /&gt;
  &lt;br /&gt;
 For both a quick start as well as a detailed summary of creating and running &lt;br /&gt;
 a CESM model case, see the CESM1.0 User's Guide at&lt;br /&gt;
 http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
  &lt;br /&gt;
 IMPORTANT INFORMATION ABOUT SCIENTIFIC VALIDATION&lt;br /&gt;
  &lt;br /&gt;
   CESM1.0 has the flexibility to configure cases with many different &lt;br /&gt;
   combinations of component models, grids, and model settings, but this &lt;br /&gt;
   version of CESM has only been validated scientifically for the following &lt;br /&gt;
   fully active configurations:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_RAMPCO2_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CAM5&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_RAMPCO2_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BDRD&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BDRD&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN_CHEM &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN_CHEM&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_WACCM_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_WACCM_CN&lt;br /&gt;
   &lt;br /&gt;
      T31_gx3v7       B_1850_CN&lt;br /&gt;
 &lt;br /&gt;
   If the user is interested in running a &amp;quot;stand-alone&amp;quot; component configuration, &lt;br /&gt;
   the following model configurations have been validated scientifically and &lt;br /&gt;
   have associated diagnostic output as part of the release:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_1.9x2.5    F_2000_WACCM&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CAM5&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CN&lt;br /&gt;
      0.9x1.25_0.9x1.25  F_AMIP_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  I_2000&lt;br /&gt;
      0.9x1.25_gx1v6  I_2000_CN&lt;br /&gt;
 &lt;br /&gt;
      T62_gx1v6       C_NORMAL_YEAR&lt;br /&gt;
 &lt;br /&gt;
   For more information regarding alternative component configurations, &lt;br /&gt;
   please refer to the individual component web pages at&lt;br /&gt;
   http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
  CESM1 RESOLUTIONS:  name (shortname) &lt;br /&gt;
    pt1_pt1 (pt1)  &lt;br /&gt;
    0.23x0.31_0.23x0.31 (f02_f02)  &lt;br /&gt;
    0.23x0.31_gx1v6 (f02_g16)  &lt;br /&gt;
    0.23x0.31_tx0.1v2 (f02_t12)  &lt;br /&gt;
    0.47x0.63_0.47x0.63 (f05_f05)  &lt;br /&gt;
    0.47x0.63_gx1v6 (f05_g16)  &lt;br /&gt;
    0.47x0.63_tx0.1v2 (f05_t12)  &lt;br /&gt;
    0.9x1.25_0.9x1.25 (f09_f09)  &lt;br /&gt;
    0.9x1.25_gx1v6 (f09_g16)  &lt;br /&gt;
    1.9x2.5_1.9x2.5 (f19_f19)  &lt;br /&gt;
    1.9x2.5_gx1v6 (f19_g16)  &lt;br /&gt;
    4x5_4x5 (f45_f45)  &lt;br /&gt;
    4x5_gx3v7 (f45_g37)  &lt;br /&gt;
    T62_gx3v7 (T62_g37)  &lt;br /&gt;
    T62_tx0.1v2 (T62_t12)  &lt;br /&gt;
    T62_gx1v6 (T62_g16)  &lt;br /&gt;
    T31_T31 (T31_T31)  &lt;br /&gt;
    T31_gx3v7 (T31_g37)  &lt;br /&gt;
    T42_T42 (T42_T42)  &lt;br /&gt;
    10x15_10x15 (f10_f10)  &lt;br /&gt;
    ne30np4_1.9x2.5_gx1v6 (ne30_f19_g16)  &lt;br /&gt;
    ne240np4_0.23x0.31_gx1v6 (ne240_f02_g16)  &lt;br /&gt;
    T85_T85 (T85_T85)  &lt;br /&gt;
  &lt;br /&gt;
  COMPSETS:  name (shortname): description &lt;br /&gt;
    A_PRESENT_DAY (A) &lt;br /&gt;
         Description: All data model  &lt;br /&gt;
    A_GLC (AG) &lt;br /&gt;
         Description: All data model plus glc (glacier model)  &lt;br /&gt;
    B_2000 (B) &lt;br /&gt;
         Description: All active components, present day  &lt;br /&gt;
    B_2000_CN (BCN) &lt;br /&gt;
         Description: all active components, present day, with CN (Carbon Nitrogen) in clm  &lt;br /&gt;
    B_1850_CAM5 (B1850C5) &lt;br /&gt;
         Description: All active components, pre-industrial, cam5 physics  &lt;br /&gt;
    B_1850 (B1850) &lt;br /&gt;
         Description: All active components, pre-industrial  &lt;br /&gt;
    B_1850_CN (B1850CN) &lt;br /&gt;
         Description: all active components, pre-industrial, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_2000_CN_CHEM (B2000CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_CN_CHEM (B1850CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_RAMPCO2_CN (B1850RMCN) &lt;br /&gt;
         Description: All active components, pre-industrial with co2 ramp, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000 (B20TR) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient  &lt;br /&gt;
    B_1850-2000_CN (B20TRCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000_CN_CHEM (B20TRCNCHM) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850-2000_CAM5 (B20TRC5) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, cam5 physics  &lt;br /&gt;
    B_2000_GLC (BG) &lt;br /&gt;
         Description: all active components, with active glc  &lt;br /&gt;
    B_2000_TROP_MOZART (BMOZ) &lt;br /&gt;
         Description: All active components, with trop_mozart  &lt;br /&gt;
    B_1850_WACCM (B1850W) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm  &lt;br /&gt;
    B_1850_WACCM_CN (B1850WCN) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm and CN  &lt;br /&gt;
    B_1850-2000_WACCM_CN (B20TRWCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, WACCM with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1955-2005_WACCM_CN (B55TRWCN) &lt;br /&gt;
         Description: All active components, 1955 to 2000 transient, WACCM with  daily solar data and SPEs, CLM with CN  &lt;br /&gt;
    B_1850_BGC-BPRP (B1850BPRP) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850_BGC-BDRD (B1850BDRD) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    B_1850-2000_BGC-BPRP (B20TRBPRP) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850-2000_BGC-BDRD (B20TRBDRD) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    C_NORMAL_YEAR_ECOSYS (CECO) &lt;br /&gt;
         Description: Active ocean model with ecosys and with COREv2 normal year forcing  &lt;br /&gt;
    C_NORMAL_YEAR (C) &lt;br /&gt;
         Description: Active ocean model with COREv2 normal year forcing  &lt;br /&gt;
    D_NORMAL_YEAR (D) &lt;br /&gt;
         Description: Active ice model with COREv2 normal year forcing  &lt;br /&gt;
    E_2000 (E) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean, present day  &lt;br /&gt;
    E_2000_GLC (EG) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean and glc, present day  &lt;br /&gt;
    E_1850_CN (E1850CN) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, with CN  &lt;br /&gt;
    E_1850_CAM5 (E1850C5) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, cam5 physics   &lt;br /&gt;
    F_AMIP_CN (FAMIPCN) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol - valid only for 1 degree cam/clm/pres-cice  &lt;br /&gt;
    F_AMIP_CAM5 (FAMIPC5) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol with cam5  &lt;br /&gt;
    F_1850 (F1850) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_1850_CAM5 (F1850C5) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn, cam5 physics  &lt;br /&gt;
    F_2000 (F) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice  &lt;br /&gt;
    F_2000_CAM5 (FC5) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, cam5 physics  &lt;br /&gt;
    F_2000_CN (FCN) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice with CN  &lt;br /&gt;
    F_1850-2000_CN (F20TRCN) &lt;br /&gt;
         Description: 20th Century transient stand-alone cam default, prescribed ocn/ice, with CN  &lt;br /&gt;
    F_2000_GLC (FG) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, glc (glacier model)  &lt;br /&gt;
    F_1850_CN_CHEM (F1850CNCHM) &lt;br /&gt;
         Description: stand-alone cam/clm, pre-industrial, with CN in CLM, super_fast_llnl chem in cam  &lt;br /&gt;
    F_1850_WACCM (F1850W) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_2000_WACCM (FW) &lt;br /&gt;
         Description: present-day cam/clm with prescribed ice/ocn  &lt;br /&gt;
    G_1850_ECOSYS (G1850ECO) &lt;br /&gt;
         Description: 1850 control for pop-ecosystem/cice/datm7/dlnd-rx1  &lt;br /&gt;
    G_NORMAL_YEAR (G) &lt;br /&gt;
         Description: Coupled ocean ice with COREv2 normal year forcing  &lt;br /&gt;
    H_PRESENT_DAY (H) &lt;br /&gt;
         Description: Coupled ocean ice slnd  &lt;br /&gt;
    I_2000 (I) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850 (I1850) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and Satellite phenology (SP), CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_2000_GLC (IG) &lt;br /&gt;
         Description: Active glacier model and active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1948-2004 (I4804) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000 (I8520) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and transient Satellite phenology (SP), and Aerosol deposition from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    I_2000_CN (ICN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850_CN (I1850CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_1948-2004_CN (I4804CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000_CN (I8520CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and transient CN, Aerosol dep from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    S_PRESENT_DAY (S) &lt;br /&gt;
         Description: All stub models plus xatm  &lt;br /&gt;
    X_PRESENT_DAY (X) &lt;br /&gt;
         Description: All dead model  &lt;br /&gt;
    XG_PRESENT_DAY (XG) &lt;br /&gt;
         Description: All dead model and cism  &lt;br /&gt;
  &lt;br /&gt;
  MACHINES:  name (description)&lt;br /&gt;
    tcs (U of T IBM p6, os is AIX, 32 pes/node, batch system is Moab/LoadLeveler) &lt;br /&gt;
    gpc (U of T iDataPlex intel cluster, os is linux, 8 pes/node, batch system is Moab/Torque) &lt;br /&gt;
    bluefire (NCAR IBM p6, os is AIX, 32 pes/node, batch system is LSF) &lt;br /&gt;
    brutus_po (Brutus Linux Cluster ETH (pgi/9.0-1 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_pm (Brutus Linux Cluster ETH (pgi/9.0-1 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_io (Brutus Linux Cluster ETH (intel/10.1.018 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_im (Brutus Linux Cluster ETH (intel/10.1.018 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    edinburgh_lahey (NCAR CGD Linux Cluster (lahey), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_pgi (NCAR CGD Linux Cluster (pgi), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_intel (NCAR CGD Linux Cluster (intel), 8 pes/node, batch system is PBS) &lt;br /&gt;
    franklin (NERSC XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    hadley (UCB Linux Cluster, os is Linux (ia64), batch system is PBS) &lt;br /&gt;
    hopper (NERSC XT5, os is CNL, 8 pes/node, batch system is PBS) &lt;br /&gt;
    intrepid (ANL IBM BG/P, os is BGP, 4 pes/node, batch system is cobalt) &lt;br /&gt;
    jaguar (ORNL XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    jaguarpf (ORNL XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    kraken (NICS/UT/teragrid XT5, os is CNL, 12 pes/node) &lt;br /&gt;
    lynx_pgi (NCAR XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    midnight (ARSC Sun Cluster, os is Linux (pgi), batch system is PBS) &lt;br /&gt;
    pleiades (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 3.0 GHz Harpertown processors, 8 pes/node and 8 GB of memory, batch system is PBS) &lt;br /&gt;
    pleiades_wes (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.93 GHz Westmere processors, 12 pes/node and 24 GB of memory, batch system is PBS) &lt;br /&gt;
    prototype_atlas (LLNL Linux Cluster, Linux (pgi), 8 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_hera (LLNL Linux Cluster, Linux (pgi), 16 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_columbia (NASA Ames Linux Cluster, Linux (ia64), 2 pes/node, batch system is PBS) &lt;br /&gt;
    prototype_frost (NCAR IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_nyblue (SUNY IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_ranger (TACC Linux Cluster, Linux (pgi), 1 pes/node, batch system is SGE) &lt;br /&gt;
    prototype_ubgl (LLNL IBM BG/L, os is BGL, 2 pes/node, batch system is Moab) &lt;br /&gt;
    generic_ibm (generic ibm power system, os is AIX, batch system is LoadLeveler, user-defined) &lt;br /&gt;
    generic_xt (generic CRAY XT, os is CNL, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pgi (generic linux (pgi), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_lahey (generic linux (lahey), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_intel (generic linux (intel), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pathscale (generic linux (pathscale), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Initializing the Model Setup:''&lt;br /&gt;
&lt;br /&gt;
The initial setup of the model on TCS is simplified with the short script below&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/$USER&lt;br /&gt;
 export MACH=tcs&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f19_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES -scratchroot $SCRATCH &lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' $CCSMROOT should point to the model code version in ''/project/ccsm'' with the &amp;quot;_current&amp;quot; suffix. The same applies to CESM1.&lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' $CASEROOT, which specifies your model run naming convention, will not archive properly (to the short-term archiving directory, i.e. /scratch) if the CASEROOT name is too long. It is best to keep it short.&lt;br /&gt;
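If it helps, here is a minimal shell sketch for checking the case name length before creating the case; the 32-character threshold is an illustrative assumption, not a documented limit:&lt;br /&gt;

```shell
# Hedged sketch: warn when a case name is getting long, since long
# CASEROOT names can break short-term archiving. The 32-character
# threshold is an illustrative assumption, not a documented limit.
CASE=ccsm4_comp-B_1850_CN_res-f19_g16
LEN=${#CASE}
if [ "$LEN" -gt 32 ]; then
  echo "case name is $LEN chars; consider shortening it"
else
  echo "case name is $LEN chars"
fi
```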
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This script creates an 1850 control with all model components fully active and carbon-nitrogen cycling in the land component. The resolution is 1.9x2.5 in the atmosphere and x1 in the ocean. The case is created in the ~/runs directory.&lt;br /&gt;
&lt;br /&gt;
For valid component sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/a2967.html &lt;br /&gt;
For information on resolution sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/x42.html#ccsm_grids &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Load Balancing:''&lt;br /&gt;
&lt;br /&gt;
For the NCAR bluefire load-balancing tables for a select set of simulations, see:&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/timing/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/timing/&lt;br /&gt;
&lt;br /&gt;
 cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
&lt;br /&gt;
edit env_mach_pes.xml&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ATM&amp;quot;   value=&amp;quot;448&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ATM&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_LND&amp;quot;   value=&amp;quot;320&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_LND&amp;quot;   value=&amp;quot;160&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ICE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ICE&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_OCN&amp;quot;   value=&amp;quot;256&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_OCN&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_CPL&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_CPL&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_GLC&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once this file is modified you can configure the case&lt;br /&gt;
&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 &lt;br /&gt;
You will notice that configure changes the file that you just edited; you can now see the total number of processors used by the simulation (704, or 11 nodes, in this case):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;TOTALPES&amp;quot;   value=&amp;quot;704&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_LEVEL&amp;quot;   value=&amp;quot;1r&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_PCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_TCOST&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_ESTCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
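As a quick sanity check, the node count reported above follows from TOTALPES and PES_PER_NODE by ceiling division; a minimal shell sketch using the values from this case:&lt;br /&gt;

```shell
# Ceiling division of total PEs by PEs per node gives the node request.
# Values taken from the configure output above (704 PEs, 64 per node).
TOTALPES=704
PES_PER_NODE=64
NODES=$(( (TOTALPES + PES_PER_NODE - 1) / PES_PER_NODE ))
echo "$NODES nodes"   # 11 nodes for this layout
```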
'''Note:''' Rather than modifying the load balancing manually, you can use the xmlchange script that NCAR provides in your $CASE directory to change each component's CPU allocation without editing the env_mach_pes.xml file directly:&lt;br /&gt;
&lt;br /&gt;
To try a different configuration, we might want 8 CPUs running the OCN component continuously, with ATM on the remaining 24 CPUs and LND, ICE and CPL on 8 CPUs each. To set this up, enter:&lt;br /&gt;
&lt;br /&gt;
 configure -cleanmach&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 24&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 8 &lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_CPL -val 16&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_OCN -val 24&lt;br /&gt;
 configure -case&lt;br /&gt;
&lt;br /&gt;
Then build and resubmit&lt;br /&gt;
&lt;br /&gt;
The task geometry used by loadleveler on TCS is located in the file: ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Ensure that the proper modules are loaded:&lt;br /&gt;
&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) ncl/5.1.1               3) netcdf/4.0.1_nc3        5) xlf/13.1&lt;br /&gt;
  2) nco/3.9.6               4) parallel-netcdf/1.1.1   6) vacpp/11.1&lt;br /&gt;
&lt;br /&gt;
Now compile the model with:&lt;br /&gt;
&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
One of the pre-processing steps in this build sequence is to fetch input data sets (initial and boundary conditions) from the NCAR SVN server. If there is a large amount of initial-condition data to transfer from the NCAR repository, you may want to do this yourself on the ''datamover1'' node before you build, since ''datamover1'' has a high-bandwidth connection to the outside.&lt;br /&gt;
Note: We have most of the input data on /project/ccsm already, so this step will not be required for the more common configurations.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; ssh datamover1&lt;br /&gt;
 Last login: Wed Jul  7 16:38:14 2010 from tcs-f11n06-gpfs&lt;br /&gt;
 user@gpc-logindm01:~&amp;gt;cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;./check_input_data -inputdata /project/ccsm/inputdata -export&lt;br /&gt;
 Input Data List Files Found:&lt;br /&gt;
 ./Buildconf/cam.input_data_list&lt;br /&gt;
 ./Buildconf/clm.input_data_list&lt;br /&gt;
 ./Buildconf/cice.input_data_list&lt;br /&gt;
 ./Buildconf/pop2.input_data_list&lt;br /&gt;
 ./Buildconf/cpl.input_data_list&lt;br /&gt;
 export https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc /project/ccsm/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc ..... success&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Setting the Simulation Length:''&lt;br /&gt;
&lt;br /&gt;
The length of the model run can be set by editing ''env_run.xml'' at any time in the setup sequence:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--&amp;quot;if RESUBMIT is greater than 0, then case will automatically resubmit (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;RESUBMIT&amp;quot;   value=&amp;quot;10&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjunction with STOP_N and STOP_DATE, valid values: none,never,nsteps,nstep,nseconds,nsecond,nminutes,nminute,nhours,nhour,ndays,nday,nmonths,nmonth,nyears,nyear,date,ifdays0,end (char) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_OPTION&amp;quot;   value=&amp;quot;nmonths&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjunction with STOP_OPTION and STOP_DATE (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_N&amp;quot;   value=&amp;quot;12&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
These settings tell the model to checkpoint after each model year (12 months) and to resubmit itself automatically, giving a 10-year run (10 checkpoints).&lt;br /&gt;
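The same run-length settings can also be applied with the xmlchange script used earlier for env_mach_pes.xml; that it accepts env_run.xml in the same way is an assumption carried over from that usage, so verify against your scripts directory:&lt;br /&gt;

```shell
# Assumed usage, mirroring the env_mach_pes.xml examples above:
xmlchange -file env_run.xml -id STOP_OPTION -val nmonths
xmlchange -file env_run.xml -id STOP_N -val 12
xmlchange -file env_run.xml -id RESUBMIT -val 10
```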
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on the Distributed System (TCS):''&lt;br /&gt;
&lt;br /&gt;
The model is now ready to be submitted to the TCS batch queue&lt;br /&gt;
&lt;br /&gt;
 llsubmit ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Once the model has run through a checkpoint, timing information on the simulation will be found in:&lt;br /&gt;
 ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16/timing&lt;br /&gt;
&lt;br /&gt;
Standard output from the model can be followed during runtime by going to:&lt;br /&gt;
 /scratch/guido/ccsm4_comp-B_1850_CN_res-f19_g16/run&lt;br /&gt;
and running&lt;br /&gt;
 tail -f &amp;lt;component_log_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The model will archive the NetCDF output in:&lt;br /&gt;
 /scratch/$USER/archive&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Cloning Simulations''&lt;br /&gt;
&lt;br /&gt;
A useful command for setting up multiple runs quickly is create_clone, which clones an existing case so there is no need to run the setup script above every time:&lt;br /&gt;
 cd ~/runs&lt;br /&gt;
 /project/ccsm/ccsm4_0_current/scripts/create_clone -clone ccsm4_comp-B_1850_CN_res-f09_g16 -case ccsm4_comp-B_1850_CN_res-f09_g16_clone -v&lt;br /&gt;
&lt;br /&gt;
To change the load balancing (env_mach_pes.xml) or other parameters in an existing simulation setup, do a clean build to make sure the model is rebuilt properly:&lt;br /&gt;
 ./configure -cleanmach&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.clean_build&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on GPC''&lt;br /&gt;
&lt;br /&gt;
The setup script is almost identical:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/guido&lt;br /&gt;
 export MACH=gpc&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f09_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4gpc_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
To load balance and run the model, follow the steps above.&lt;br /&gt;
The env_mach_pes.xml configuration file needs to be modified as follows:&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
Use qsub to submit the model to the GPC cluster:&lt;br /&gt;
 qsub ccsm4gpc_comp-B_1850_CN_res-f09_g16.gpc.run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 interactively on TCS''&lt;br /&gt;
&lt;br /&gt;
In the run script you need to set and unset the following environment variables. You also need a hostfile describing the node processors that you will be using:&lt;br /&gt;
&lt;br /&gt;
 unsetenv MP_EUILIB&lt;br /&gt;
 &lt;br /&gt;
 setenv MP_PROCS 16 &lt;br /&gt;
 &lt;br /&gt;
 setenv MP_NODES 1&lt;br /&gt;
   &lt;br /&gt;
 &lt;br /&gt;
 /usr/bin/poe /project/ccsm/bin/ccsm_launch ./ccsm.exe -hfile /project/&amp;lt;user&amp;gt;/runs/&amp;lt;casename&amp;gt;/hostfile&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=3240</id>
		<title>Running CCSM4</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=3240"/>
		<updated>2011-05-30T17:45:19Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''It is important to point out that all updates to the model system will only occur with CESM1.0 updates, not with CCSM4.0. It is also important to note that CCSM4 is a subset of CESM1. Although CESM1 supersedes CCSM4, users can run all CCSM4 experiments from the CESM1 code base.&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
'''Note:''' To run interactively in the GPC interactive queue, or on TCS on a small subset of interactive processors, see the instructions below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The scientifically validated CESM1 configurations, along with a complete list of the model resolutions, can be listed with:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/cesm1_current/scripts/create_newcase --list&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
                             CESM1.0 README&lt;br /&gt;
  &lt;br /&gt;
 For both a quick start as well as a detailed summary of creating and running &lt;br /&gt;
 a CESM model case, see the CESM1.0 User's Guide at&lt;br /&gt;
 http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
  &lt;br /&gt;
 IMPORTANT INFORMATION ABOUT SCIENTIFIC VALIDATION&lt;br /&gt;
  &lt;br /&gt;
   CESM1.0 has the flexibility to configure cases with many different &lt;br /&gt;
   combinations of component models, grids, and model settings, but this &lt;br /&gt;
   version of CESM has only been validated scientifically for the following &lt;br /&gt;
   fully active configurations:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_RAMPCO2_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CAM5&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_RAMPCO2_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BDRD&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BDRD&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN_CHEM &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN_CHEM&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_WACCM_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_WACCM_CN&lt;br /&gt;
   &lt;br /&gt;
      T31_gx3v7       B_1850_CN&lt;br /&gt;
 &lt;br /&gt;
   If the user is interested in running a &amp;quot;stand-alone&amp;quot; component configuration, &lt;br /&gt;
   the following model configurations have been validated scientifically and &lt;br /&gt;
   have associated diagnostic output as part of the release:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_1.9x2.5    F_2000_WACCM&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CAM5&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CN&lt;br /&gt;
      0.9x1.25_0.9x1.25  F_AMIP_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  I_2000&lt;br /&gt;
      0.9x1.25_gx1v6  I_2000_CN&lt;br /&gt;
 &lt;br /&gt;
      T62_gx1v6       C_NORMAL_YEAR&lt;br /&gt;
 &lt;br /&gt;
   For more information regarding alternative component configurations, &lt;br /&gt;
   please refer to the individual component web pages at&lt;br /&gt;
   http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
  CESM1 RESOLUTIONS:  name (shortname) &lt;br /&gt;
    pt1_pt1 (pt1)  &lt;br /&gt;
    0.23x0.31_0.23x0.31 (f02_f02)  &lt;br /&gt;
    0.23x0.31_gx1v6 (f02_g16)  &lt;br /&gt;
    0.23x0.31_tx0.1v2 (f02_t12)  &lt;br /&gt;
    0.47x0.63_0.47x0.63 (f05_f05)  &lt;br /&gt;
    0.47x0.63_gx1v6 (f05_g16)  &lt;br /&gt;
    0.47x0.63_tx0.1v2 (f05_t12)  &lt;br /&gt;
    0.9x1.25_0.9x1.25 (f09_f09)  &lt;br /&gt;
    0.9x1.25_gx1v6 (f09_g16)  &lt;br /&gt;
    1.9x2.5_1.9x2.5 (f19_f19)  &lt;br /&gt;
    1.9x2.5_gx1v6 (f19_g16)  &lt;br /&gt;
    4x5_4x5 (f45_f45)  &lt;br /&gt;
    4x5_gx3v7 (f45_g37)  &lt;br /&gt;
    T62_gx3v7 (T62_g37)  &lt;br /&gt;
    T62_tx0.1v2 (T62_t12)  &lt;br /&gt;
    T62_gx1v6 (T62_g16)  &lt;br /&gt;
    T31_T31 (T31_T31)  &lt;br /&gt;
    T31_gx3v7 (T31_g37)  &lt;br /&gt;
    T42_T42 (T42_T42)  &lt;br /&gt;
    10x15_10x15 (f10_f10)  &lt;br /&gt;
    ne30np4_1.9x2.5_gx1v6 (ne30_f19_g16)  &lt;br /&gt;
    ne240np4_0.23x0.31_gx1v6 (ne240_f02_g16)  &lt;br /&gt;
    T85_T85 (T85_T85)  &lt;br /&gt;
  &lt;br /&gt;
  COMPSETS:  name (shortname): description &lt;br /&gt;
    A_PRESENT_DAY (A) &lt;br /&gt;
         Description: All data model  &lt;br /&gt;
    A_GLC (AG) &lt;br /&gt;
         Description: All data model plus glc (glacier model)  &lt;br /&gt;
    B_2000 (B) &lt;br /&gt;
         Description: All active components, present day  &lt;br /&gt;
    B_2000_CN (BCN) &lt;br /&gt;
         Description: all active components, present day, with CN (Carbon Nitrogen) in clm  &lt;br /&gt;
    B_1850_CAM5 (B1850C5) &lt;br /&gt;
         Description: All active components, pre-industrial, cam5 physics  &lt;br /&gt;
    B_1850 (B1850) &lt;br /&gt;
         Description: All active components, pre-industrial  &lt;br /&gt;
    B_1850_CN (B1850CN) &lt;br /&gt;
         Description: all active components, pre-industrial, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_2000_CN_CHEM (B2000CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_CN_CHEM (B1850CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_RAMPCO2_CN (B1850RMCN) &lt;br /&gt;
         Description: All active components, pre-industrial with co2 ramp, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000 (B20TR) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient  &lt;br /&gt;
    B_1850-2000_CN (B20TRCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000_CN_CHEM (B20TRCNCHM) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850-2000_CAM5 (B20TRC5) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, cam5 physics  &lt;br /&gt;
    B_2000_GLC (BG) &lt;br /&gt;
         Description: all active components, with active glc  &lt;br /&gt;
    B_2000_TROP_MOZART (BMOZ) &lt;br /&gt;
         Description: All active components, with trop_mozart  &lt;br /&gt;
    B_1850_WACCM (B1850W) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm  &lt;br /&gt;
    B_1850_WACCM_CN (B1850WCN) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm and CN  &lt;br /&gt;
    B_1850-2000_WACCM_CN (B20TRWCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, WACCM with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1955-2005_WACCM_CN (B55TRWCN) &lt;br /&gt;
         Description: All active components, 1955 to 2005 transient, WACCM with daily solar data and SPEs, CLM with CN  &lt;br /&gt;
    B_1850_BGC-BPRP (B1850BPRP) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850_BGC-BDRD (B1850BDRD) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    B_1850-2000_BGC-BPRP (B20TRBPRP) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850-2000_BGC-BDRD (B20TRBDRD) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    C_NORMAL_YEAR_ECOSYS (CECO) &lt;br /&gt;
         Description: Active ocean model with ecosys and with COREv2 normal year forcing  &lt;br /&gt;
    C_NORMAL_YEAR (C) &lt;br /&gt;
         Description: Active ocean model with COREv2 normal year forcing  &lt;br /&gt;
    D_NORMAL_YEAR (D) &lt;br /&gt;
         Description: Active ice model with COREv2 normal year forcing  &lt;br /&gt;
    E_2000 (E) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean, present day  &lt;br /&gt;
    E_2000_GLC (EG) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean and glc, present day  &lt;br /&gt;
    E_1850_CN (E1850CN) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, with CN  &lt;br /&gt;
    E_1850_CAM5 (E1850C5) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, cam5 physics   &lt;br /&gt;
    F_AMIP_CN (FAMIPCN) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol - valid only for 1 degree cam/clm/pres-cice  &lt;br /&gt;
    F_AMIP_CAM5 (FAMIPC5) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol with cam5  &lt;br /&gt;
    F_1850 (F1850) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_1850_CAM5 (F1850C5) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn, cam5 physics  &lt;br /&gt;
    F_2000 (F) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice  &lt;br /&gt;
    F_2000_CAM5 (FC5) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, cam5 physics  &lt;br /&gt;
    F_2000_CN (FCN) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice with CN  &lt;br /&gt;
    F_1850-2000_CN (F20TRCN) &lt;br /&gt;
         Description: 20th Century transient stand-alone cam default, prescribed ocn/ice, with CN  &lt;br /&gt;
    F_2000_GLC (FG) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, glc (glacier model)  &lt;br /&gt;
    F_1850_CN_CHEM (F1850CNCHM) &lt;br /&gt;
         Description: stand-alone cam/clm, pre-industrial, with CN in CLM, super_fast_llnl chem in cam  &lt;br /&gt;
    F_1850_WACCM (F1850W) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_2000_WACCM (FW) &lt;br /&gt;
         Description: present-day cam/clm with prescribed ice/ocn  &lt;br /&gt;
    G_1850_ECOSYS (G1850ECO) &lt;br /&gt;
         Description: 1850 control for pop-ecosystem/cice/datm7/dlnd-rx1  &lt;br /&gt;
    G_NORMAL_YEAR (G) &lt;br /&gt;
         Description: Coupled ocean ice with COREv2 normal year forcing  &lt;br /&gt;
    H_PRESENT_DAY (H) &lt;br /&gt;
         Description: Coupled ocean ice slnd  &lt;br /&gt;
    I_2000 (I) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850 (I1850) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and Satellite phenology (SP), CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_2000_GLC (IG) &lt;br /&gt;
         Description: Active glacier model and active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1948-2004 (I4804) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000 (I8520) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and transient Satellite phenology (SP), and Aerosol deposition from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    I_2000_CN (ICN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850_CN (I1850CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_1948-2004_CN (I4804CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000_CN (I8520CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and transient CN, Aerosol dep from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    S_PRESENT_DAY (S) &lt;br /&gt;
         Description: All stub models plus xatm  &lt;br /&gt;
    X_PRESENT_DAY (X) &lt;br /&gt;
         Description: All dead model  &lt;br /&gt;
    XG_PRESENT_DAY (XG) &lt;br /&gt;
         Description: All dead model and cism  &lt;br /&gt;
  &lt;br /&gt;
  MACHINES:  name (description)&lt;br /&gt;
    tcs (U of T IBM p6, os is AIX, 32 pes/node, batch system is Moab/LoadLeveler) &lt;br /&gt;
    gpc (U of T iDataPlex intel cluster, os is linux, 8 pes/node, batch system is Moab/Torque) &lt;br /&gt;
    bluefire (NCAR IBM p6, os is AIX, 32 pes/node, batch system is LSF) &lt;br /&gt;
    brutus_po (Brutus Linux Cluster ETH (pgi/9.0-1 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_pm (Brutus Linux Cluster ETH (pgi/9.0-1 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_io (Brutus Linux Cluster ETH (intel/10.1.018 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_im (Brutus Linux Cluster ETH (intel/10.1.018 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    edinburgh_lahey (NCAR CGD Linux Cluster (lahey), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_pgi (NCAR CGD Linux Cluster (pgi), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_intel (NCAR CGD Linux Cluster (intel), 8 pes/node, batch system is PBS) &lt;br /&gt;
    franklin (NERSC XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    hadley (UCB Linux Cluster, os is Linux (ia64), batch system is PBS) &lt;br /&gt;
    hopper (NERSC XT5, os is CNL, 8 pes/node, batch system is PBS) &lt;br /&gt;
    intrepid (ANL IBM BG/P, os is BGP, 4 pes/node, batch system is cobalt) &lt;br /&gt;
    jaguar (ORNL XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    jaguarpf (ORNL XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    kraken (NICS/UT/teragrid XT5, os is CNL, 12 pes/node) &lt;br /&gt;
    lynx_pgi (NCAR XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    midnight (ARSC Sun Cluster, os is Linux (pgi), batch system is PBS) &lt;br /&gt;
    pleiades (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 3.0 GHz Harpertown processors, 8 pes/node and 8 GB of memory, batch system is PBS) &lt;br /&gt;
    pleiades_wes (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.93 GHz Westmere processors, 12 pes/node and 24 GB of memory, batch system is PBS) &lt;br /&gt;
    prototype_atlas (LLNL Linux Cluster, Linux (pgi), 8 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_hera (LLNL Linux Cluster, Linux (pgi), 16 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_columbia (NASA Ames Linux Cluster, Linux (ia64), 2 pes/node, batch system is PBS) &lt;br /&gt;
    prototype_frost (NCAR IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_nyblue (SUNY IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_ranger (TACC Linux Cluster, Linux (pgi), 1 pes/node, batch system is SGE) &lt;br /&gt;
    prototype_ubgl (LLNL IBM BG/L, os is BGL, 2 pes/node, batch system is Moab) &lt;br /&gt;
    generic_ibm (generic ibm power system, os is AIX, batch system is LoadLeveler, user-defined) &lt;br /&gt;
    generic_xt (generic CRAY XT, os is CNL, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pgi (generic linux (pgi), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_lahey (generic linux (lahey), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_intel (generic linux (intel), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pathscale (generic linux (pathscale), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Initializing the Model Setup:''&lt;br /&gt;
&lt;br /&gt;
The initial setup of the model on TCS is simplified with the short script below&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/$USER&lt;br /&gt;
 export MACH=tcs&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f19_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES -scratchroot $SCRATCH &lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' $CCSMROOT should point to the model code version in ''/project/ccsm'' whose name ends in &amp;quot;_current&amp;quot;. The same applies to CESM1.&lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' $CASEROOT specifies your model run's naming convention. Output will not archive properly (to the short-term archiving directory, i.e. /scratch) if your CASEROOT name is too long, so it is best to keep it short.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This script creates an 1850 control with all components of the model fully active and carbon-nitrogen cycling in the land component. The resolution is 1.9x2.5 in the atmosphere and nominal one degree (gx1v6) in the ocean. The case is created in the ~/runs directory:&lt;br /&gt;
&lt;br /&gt;
For valid component sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/a2967.html &lt;br /&gt;
For information on resolution sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/x42.html#ccsm_grids &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Load Balancing:''&lt;br /&gt;
&lt;br /&gt;
For the NCAR bluefire load-balancing tables for a select set of simulations, see:&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/timing/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/timing/&lt;br /&gt;
&lt;br /&gt;
 cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
&lt;br /&gt;
Edit env_mach_pes.xml:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ATM&amp;quot;   value=&amp;quot;448&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ATM&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_LND&amp;quot;   value=&amp;quot;320&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_LND&amp;quot;   value=&amp;quot;160&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ICE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ICE&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_OCN&amp;quot;   value=&amp;quot;256&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_OCN&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_CPL&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_CPL&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_GLC&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once this file is modified, you can configure the case:&lt;br /&gt;
&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 &lt;br /&gt;
You will notice that configure changes the file you just edited, and you can see the total number of processors used by the simulation (704, or 11 nodes, in this case):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;TOTALPES&amp;quot;   value=&amp;quot;704&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_LEVEL&amp;quot;   value=&amp;quot;1r&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_PCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_TCOST&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_ESTCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
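The node count quoted above can be checked with a quick shell sketch (values copied from the entries above; 64 tasks per node is the TCS value shown):&lt;br /&gt;

```shell
# Compute the number of nodes implied by TOTALPES, rounding up.
TOTALPES=704
PES_PER_NODE=64
NODES=$(( (TOTALPES + PES_PER_NODE - 1) / PES_PER_NODE ))
echo "$NODES"   # 11
```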
&lt;br /&gt;
'''Note:''' Rather than modifying the load balancing manually, you can use the xmlchange script, which resides in your $CASE directory, to modify the individual component CPU allocations without editing the env_mach_pes.xml file by hand:&lt;br /&gt;
&lt;br /&gt;
To try a different configuration, we might want 8 CPUs running the OCN component continuously, with ATM on the remaining 24 CPUs and LND, ICE and CPL each on 8 of those. To set this up, enter:&lt;br /&gt;
&lt;br /&gt;
 configure -cleanmach&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 24&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 8 &lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_CPL -val 16&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_OCN -val 24&lt;br /&gt;
 configure -case&lt;br /&gt;
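As a quick sanity check on the layout just entered (a sketch, using the values from the xmlchange calls above): the component with the highest root PE, plus its task count, gives the total PEs the case will request.&lt;br /&gt;

```shell
# OCN sits on the highest-numbered PEs in this layout, so its root PE
# plus its task count equals the total PE count for the case.
ROOTPE_OCN=24
NTASKS_OCN=8
TOTAL=$(( ROOTPE_OCN + NTASKS_OCN ))
echo "$TOTAL"   # 32
```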
&lt;br /&gt;
Then build and resubmit&lt;br /&gt;
&lt;br /&gt;
The task geometry used by loadleveler on TCS is located in the file: ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Ensure that the proper modules are loaded:&lt;br /&gt;
&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) ncl/5.1.1               3) netcdf/4.0.1_nc3        5) xlf/13.1&lt;br /&gt;
  2) nco/3.9.6               4) parallel-netcdf/1.1.1   6) vacpp/11.1&lt;br /&gt;
&lt;br /&gt;
Now compile the model with:&lt;br /&gt;
&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
One of the pre-processing steps in this build sequence is to fetch input data sets (initial and boundary conditions) from the NCAR SVN server. If there is a large amount of initial-condition data to transfer from the NCAR repository, you may want to do this yourself on the ''datamover1'' node before you build, since ''datamover1'' has a high-bandwidth connection to the outside.&lt;br /&gt;
Note: We have most of the input data on /project/ccsm already, so this step will not be required for the more common configurations.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; ssh datamover1&lt;br /&gt;
 Last login: Wed Jul  7 16:38:14 2010 from tcs-f11n06-gpfs&lt;br /&gt;
 user@gpc-logindm01:~&amp;gt;cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;./check_input_data -inputdata /project/ccsm/inputdata -export&lt;br /&gt;
 Input Data List Files Found:&lt;br /&gt;
 ./Buildconf/cam.input_data_list&lt;br /&gt;
 ./Buildconf/clm.input_data_list&lt;br /&gt;
 ./Buildconf/cice.input_data_list&lt;br /&gt;
 ./Buildconf/pop2.input_data_list&lt;br /&gt;
 ./Buildconf/cpl.input_data_list&lt;br /&gt;
 export https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc /project/ccsm/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc ..... success&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Setting the Simulation Length:''&lt;br /&gt;
&lt;br /&gt;
The length of time that you would like to run the model can be set by editing ''env_run.xml'' at any time in the setup sequence:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--&amp;quot;if RESUBMIT is greater than 0, then case will automatically resubmit (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;RESUBMIT&amp;quot;   value=&amp;quot;10&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjuction with STOP_N and STOP_DATE, valid values: none,never,nsteps,nstep,nseconds,nsecond,nminutes,nminute,nhours,nhour,ndays,nday,nmonths,nmonth,nyears,nyear,date,ifdays0,end (char) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_OPTION&amp;quot;   value=&amp;quot;nmonths&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjuction with STOP_OPTION and STOP_DATE (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_N&amp;quot;   value=&amp;quot;12&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
These settings tell the model to checkpoint after each model year (12 months) and to run for a total of 10 years (10 checkpoints).&lt;br /&gt;
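The arithmetic behind that statement, under this page's reading that the case runs as 10 yearly segments of STOP_N months each (a sketch, not CESM code):&lt;br /&gt;

```shell
# 10 segments of STOP_N=12 months each gives the total in model years.
SEGMENTS=10
STOP_N=12
YEARS=$(( SEGMENTS * STOP_N / 12 ))
echo "$YEARS"   # 10
```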
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on the Distributed System (TCS):''&lt;br /&gt;
&lt;br /&gt;
The model is now ready to be submitted to the TCS batch queue&lt;br /&gt;
&lt;br /&gt;
 llsubmit ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Once the model has run through a checkpoint, timing information on the simulation will be found in:&lt;br /&gt;
 ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16/timing&lt;br /&gt;
&lt;br /&gt;
Standard output from the model can be followed during runtime by going to:&lt;br /&gt;
 /scratch/guido/ccsm4_comp-B_1850_CN_res-f19_g16/run&lt;br /&gt;
and running&lt;br /&gt;
 tail -f &amp;lt;component_log_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The model will archive the NetCDF output in:&lt;br /&gt;
 /scratch/$USER/archive&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Cloning Simulations''&lt;br /&gt;
&lt;br /&gt;
A useful command for setting up multiple runs quickly is create_clone, which clones an existing case so there is no need to run the setup script above every time:&lt;br /&gt;
 cd ~/runs&lt;br /&gt;
 /project/ccsm/ccsm4_0_current/scripts/create_clone -clone ccsm4_comp-B_1850_CN_res-f09_g16 -case ccsm4_comp-B_1850_CN_res-f09_g16_clone -v&lt;br /&gt;
&lt;br /&gt;
To change the load balancing (env_mach_pes.xml) or other parameters in an existing simulation setup, do a clean build to make sure the model is rebuilt properly:&lt;br /&gt;
 ./configure -cleanmach&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.clean_build&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on GPC''&lt;br /&gt;
&lt;br /&gt;
The setup script is almost identical:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/guido&lt;br /&gt;
 export MACH=gpc&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f09_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4gpc_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
To load balance and run the model, follow the steps above.&lt;br /&gt;
The env_mach_pes.xml configuration file needs to be modified as follows:&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
Use qsub to submit the model to the GPC cluster:&lt;br /&gt;
 qsub ccsm4gpc_comp-B_1850_CN_res-f09_g16.gpc.run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 interactively on TCS''&lt;br /&gt;
&lt;br /&gt;
In the run script, you need to set and unset the following environment variables. You also need a hostfile that describes the node processors you will be using.&lt;br /&gt;
&lt;br /&gt;
 unsetenv MP_EUILIB&lt;br /&gt;
 setenv MP_PROCS 16&lt;br /&gt;
 setenv MP_NODES 1&lt;br /&gt;
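The hostfile itself is just one hostname per MPI task. A minimal sketch for the MP_PROCS=16 case above, assuming a hypothetical node name tcs-f01n01 (substitute the node you were actually allocated):&lt;br /&gt;

```shell
# Write one hostname line per MPI task (16 tasks on a single node here).
# "tcs-f01n01" is a placeholder node name, not a real TCS host.
NODE=tcs-f01n01
: > hostfile
for i in $(seq 1 16); do
  echo "$NODE" >> hostfile
done
wc -l hostfile
```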
&lt;br /&gt;
&lt;br /&gt;
 /usr/bin/poe /project/ccsm/bin/ccsm_launch ./ccsm.exe -hfile /project/&amp;lt;user&amp;gt;/runs/&amp;lt;casename&amp;gt;/hostfile&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2886</id>
		<title>CCSM4/CESM1 TCS Simulation List</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2886"/>
		<updated>2011-04-19T16:14:15Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page will be used to record information about CESM1/CCSM4 simulations that are being conducted on the TCS system. This will help to prevent the duplication of simulations (e.g. control runs).&lt;br /&gt;
&lt;br /&gt;
'''List of Simulations:'''&lt;br /&gt;
&lt;br /&gt;
 Simulation: cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
 Current Output Data Location: /scratch/guido/archive/cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
 Length: approx 700 years&lt;br /&gt;
 Description: This is a CESM1 1850 control run using component set B (Fully Coupled atm/lnd/ocn/ice).&lt;br /&gt;
  This is a low resolution simulation (T31 gx3v7, approximately 4 deg atm, 3 deg ocn).&lt;br /&gt;
  The simulation has the interactive carbon cycle turned on.&lt;br /&gt;
 Node Usage: 1&lt;br /&gt;
 User: --[[User:Guido|Guido]] 16:23, 3 January 2011 (EST)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
simulation: ccsm4_comp-B_1850_res-T31_g37&lt;br /&gt;
&lt;br /&gt;
 Current Output Data Location: /scratch/jyang/ccsm4/exe/ccsm4control/run&lt;br /&gt;
 Length: a very short run, 10 years&lt;br /&gt;
 Description: This is a CCSM4 1850 control run using component set B (Fully Coupled atm/lnd/ocn/ice).&lt;br /&gt;
  This is a low resolution simulation (T31 gx3v7, approximately 4 deg atm, 3 deg ocn).&lt;br /&gt;
  No interactive carbon cycle.&lt;br /&gt;
 Node Usage: 1&lt;br /&gt;
 User: --[[User:jyang|jyang]] 14 January 2011 (EST)&lt;br /&gt;
&lt;br /&gt;
simulation: ccsm4_comp-F_1850_WACCM_res-f45f45&lt;br /&gt;
&lt;br /&gt;
 Current Output Data Location: /scratch/jyang/ccsm4/archive/GHG_ccsm4_WACCM_f45f45&lt;br /&gt;
 Length: a short run, 8 years&lt;br /&gt;
 Description: This is a CCSM4 1850 control run using component set F&lt;br /&gt;
  WACCM: the whole atmosphere community climate model, version 4 &lt;br /&gt;
  Atmosphere: including troposphere, stratosphere, and mesosphere and lower thermosphere (0-146 km)&lt;br /&gt;
  chemistry: specified greenhouse gases, no MOZART&lt;br /&gt;
  active land and atmosphere; specified sea surface temperature and ice cover; stub ice sheet.&lt;br /&gt;
  This is a low resolution simulation (f45-f45, approximately 4x5 deg atm, 4x5 deg ocn).&lt;br /&gt;
  No interactive carbon cycle.&lt;br /&gt;
 Node Usage: 1&lt;br /&gt;
 User: --[[User:jyang|jyang]] 14 January 2011 (EST)&lt;br /&gt;
&lt;br /&gt;
'''List of Simulations:'''&lt;br /&gt;
&lt;br /&gt;
 Simulation: cesm1_comp-B_1850_CN_res-0.9x1.25_gx1v6&lt;br /&gt;
 Current Output Data Location: /scratch/guido/archive/cesm1_comp-B_1850_CN_res-0.9x1.25_gx1v6&lt;br /&gt;
 Length: approx 40 years&lt;br /&gt;
 Description: This is a CESM1 1850 control run using component set B (Fully Coupled atm/lnd/ocn/ice).&lt;br /&gt;
  This is a high resolution simulation (approximately 1 deg atm, 1 deg ocn).&lt;br /&gt;
  The simulation has the interactive carbon cycle turned on.&lt;br /&gt;
 Node Usage: 4&lt;br /&gt;
 User: --[[User:Guido|Guido]] 19 April 2011 (EST)&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2885</id>
		<title>CCSM4/CESM1 TCS Simulation List</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2885"/>
		<updated>2011-04-19T15:15:36Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page will be used to record information about CESM1/CCSM4 simulations that are being conducted on the TCS system. This will help to prevent the duplication of simulations (e.g. control runs).&lt;br /&gt;
&lt;br /&gt;
'''List of Simulations:'''&lt;br /&gt;
&lt;br /&gt;
 Simulation: cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
 Current Output Data Location: /scratch/guido/archive/cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
 Length: approx 700 years&lt;br /&gt;
 Description: This is a CESM1 1850 control run using component set B (Fully Coupled atm/lnd/ocn/ice).&lt;br /&gt;
  This is a low resolution simulation (T31 gx3v7, approximately 4 deg atm, 3 deg ocn).&lt;br /&gt;
  The simulation has the interactive carbon cycle turned on.&lt;br /&gt;
 Node Usage: 1&lt;br /&gt;
 User: --[[User:Guido|Guido]] 16:23, 3 January 2011 (EST)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Simulation: ccsm4_comp-B_1850_res-T31_g37&lt;br /&gt;
&lt;br /&gt;
 Current Output Data Location: /scratch/jyang/ccsm4/exe/ccsm4control/run&lt;br /&gt;
 Length: a very short run, 10 years&lt;br /&gt;
 Description: This is a CCSM4 1850 control run using component set B (Fully Coupled atm/lnd/ocn/ice).&lt;br /&gt;
  This is a low resolution simulation (T31 gx3v7, approximately 4 deg atm, 3 deg ocn).&lt;br /&gt;
  No interactive carbon cycle.&lt;br /&gt;
 Node Usage: 1&lt;br /&gt;
 User: --[[User:jyang|jyang]] 14 January 2011 (EST)&lt;br /&gt;
&lt;br /&gt;
Simulation: ccsm4_comp-F_1850_WACCM_res-f45f45&lt;br /&gt;
&lt;br /&gt;
 Current Output Data Location: /scratch/jyang/ccsm4/archive/GHG_ccsm4_WACCM_f45f45&lt;br /&gt;
 Length: a short run, 8 years&lt;br /&gt;
 Description: This is a CCSM4 1850 control run using component set F.&lt;br /&gt;
  WACCM: the Whole Atmosphere Community Climate Model, version 4&lt;br /&gt;
  Atmosphere: includes the troposphere, stratosphere, mesosphere, and lower thermosphere (0-146 km)&lt;br /&gt;
  Chemistry: specified greenhouse gases, no MOZART&lt;br /&gt;
  Active land and atmosphere; specified sea surface temperature and ice cover; stub ice sheet.&lt;br /&gt;
  This is a low resolution simulation (f45-f45, approximately 4x5 deg atm, 4x5 deg ocn).&lt;br /&gt;
  No interactive carbon cycle.&lt;br /&gt;
 Node Usage: 1&lt;br /&gt;
 User: --[[User:jyang|jyang]] 14 January 2011 (EST)&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2877</id>
		<title>Running CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2877"/>
		<updated>2011-04-18T15:28:10Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You can run CCSM3 in interactive mode on GPC without a host file, but only do so for very short compilation/initialization tests. Run the model in the TCS or GPC batch queue.&lt;br /&gt;
&lt;br /&gt;
Create a setup script:&lt;br /&gt;
 #!/bin/csh&lt;br /&gt;
 &lt;br /&gt;
 setenv CCSMROOT /project/ccsm/ccsm3_current&lt;br /&gt;
 setenv SCRATCH /scratch/$USER&lt;br /&gt;
 setenv CASEROOT /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
 setenv MACH gpc&lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -mach $MACH -res T31_gx3v5 -case $CASEROOT&lt;br /&gt;
&lt;br /&gt;
Then:&lt;br /&gt;
&lt;br /&gt;
 cd /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
&lt;br /&gt;
Modify env_mach.gpc, env_run, env_conf to your needs.&lt;br /&gt;
&lt;br /&gt;
Configure and Build:&lt;br /&gt;
&lt;br /&gt;
 ./configure -mach gpc&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.build&lt;br /&gt;
&lt;br /&gt;
Run&lt;br /&gt;
&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.run (for interactive)&lt;br /&gt;
 or qsub ccsm3_t31_gpc.gpc.run&lt;br /&gt;
&lt;br /&gt;
To Run on TCS&lt;br /&gt;
&lt;br /&gt;
 ./ccsm3_t31_tcs.tcs.run (for interactive)&lt;br /&gt;
 or llsubmit ccsm3_t31_tcs.tcs.run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''To Run interactively on TCS:&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Set up env_mach.tcs to have the correct number of processors (i.e. fewer than the 64 threads on the node, e.g. 16).&lt;br /&gt;
&lt;br /&gt;
After you configure and build, add these environment variables to the .run script (MPI_HOSTFILE, MP_PROCS, MP_NODES):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 #-----------------------------------------------------------------------&lt;br /&gt;
 # Determine necessary environment variables&lt;br /&gt;
 #----------------------------------------------------------------------- &lt;br /&gt;
 &lt;br /&gt;
 cd /project/peltier/guido/runs/T42_tcs &lt;br /&gt;
 setenv MACH tcs&lt;br /&gt;
 setenv PATH /scratch/$LOGNAME/bin:$PATH&lt;br /&gt;
 setenv MP_EUIDEVICE sn_all&lt;br /&gt;
 #setenv LAPI_DEBUG_SLOT_ATT_THRESH 500000&lt;br /&gt;
 setenv MP_RC_USE_LMC yes&lt;br /&gt;
 setenv MP_POLLING_INTERVAL 20000000&lt;br /&gt;
 setenv MP_EAGER_LIMIT 65536&lt;br /&gt;
 setenv MP_BULK_MIN_MSG_SIZE 65537&lt;br /&gt;
 '''setenv MPI_HOSTFILE /project/peltier/guido/runs/T42_tcs/hostfile'''&lt;br /&gt;
 '''setenv MP_PROCS 16'''&lt;br /&gt;
 '''setenv MP_NODES 1'''&lt;br /&gt;
 source env_conf || echo &amp;quot;problem sourcing env_conf&amp;quot; &amp;amp;&amp;amp; exit -1  &lt;br /&gt;
 source env_run  || echo &amp;quot;problem sourcing env_run&amp;quot; &amp;amp;&amp;amp; exit -1  &lt;br /&gt;
 source env_mach.tcs || echo &amp;quot;problem sourcing env_mach.tcs&amp;quot; &amp;amp;&amp;amp; exit -1&lt;br /&gt;
&lt;br /&gt;
change&lt;br /&gt;
&lt;br /&gt;
 timex poe -cmdfile poe.cmdfile&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&lt;br /&gt;
 timex poe -cmdfile poe.cmdfile -hfile /project/peltier/guido/runs/T42_tcs/hostfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then run interactively (e.g. ./YOURRUNNAME.run, then use top in another terminal to watch it run).&lt;br /&gt;
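The hostfile passed to poe above typically lists one hostname per MPI task. A minimal sketch that generates one (the node name tcs-f11n01 is a placeholder, not a real TCS host; use the node you are logged into):&lt;br /&gt;

```shell
# Build a POE hostfile: one hostname per MPI task (here MP_PROCS=16).
# "tcs-f11n01" is a placeholder node name -- substitute your own.
NPROCS=16
HOST=tcs-f11n01
: > hostfile                      # truncate/create the file
i=1
while [ "$i" -le "$NPROCS" ]; do
    echo "$HOST" >> hostfile
    i=$((i + 1))
done
```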
&lt;br /&gt;
'''To Run interactively on GPC:&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Set up env_mach.gpc to have the correct number of processors (i.e. fewer than 8 tasks per node times the number of nodes, e.g. 16).&lt;br /&gt;
&lt;br /&gt;
Compile the model on one of the 4 GPC interactive nodes.&lt;br /&gt;
&lt;br /&gt;
In env_run in your run script directory, change:&lt;br /&gt;
&lt;br /&gt;
 setenv SETBLD        AUTO            # [AUTO, TRUE, FALSE]&lt;br /&gt;
&lt;br /&gt;
to&lt;br /&gt;
&lt;br /&gt;
 setenv SETBLD        FALSE            # [AUTO, TRUE, FALSE]&lt;br /&gt;
&lt;br /&gt;
Then get onto the interactive nodes:&lt;br /&gt;
&lt;br /&gt;
 qsub -I -l nodes=2:ppn=8,walltime=1:00:00&lt;br /&gt;
&lt;br /&gt;
and run interactively:&lt;br /&gt;
&lt;br /&gt;
 ./YOURRUNNAME.run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NOTE: Both CCSM3 and CCSM4/CESM1 can be run in the GPC batch queue by putting the full environment setup in place.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add &amp;quot;module load extras&amp;quot; to your env_machopts (h/t Chris)&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2703</id>
		<title>Running CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2703"/>
		<updated>2011-03-08T15:59:37Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You can run CCSM3 in interactive mode on GPC without a host file, but only do so for very short compilation/initialization tests. Run the model in the TCS or GPC batch queue.&lt;br /&gt;
&lt;br /&gt;
Create a setup script:&lt;br /&gt;
 #!/bin/csh&lt;br /&gt;
 &lt;br /&gt;
 setenv CCSMROOT /project/ccsm/ccsm3_current&lt;br /&gt;
 setenv SCRATCH /scratch/$USER&lt;br /&gt;
 setenv CASEROOT /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
 setenv MACH gpc&lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -mach $MACH -res T31_gx3v5 -case $CASEROOT&lt;br /&gt;
&lt;br /&gt;
Then:&lt;br /&gt;
&lt;br /&gt;
 cd /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
&lt;br /&gt;
Modify env_mach.gpc, env_run, env_conf to your needs.&lt;br /&gt;
&lt;br /&gt;
Configure and Build:&lt;br /&gt;
&lt;br /&gt;
 ./configure -mach gpc&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.build&lt;br /&gt;
&lt;br /&gt;
Run&lt;br /&gt;
&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.run&lt;br /&gt;
 or qsub ccsm3_t31_gpc.gpc.run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''To Run interactively on TCS:&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Set up env_mach.tcs to have the correct number of processors (i.e. fewer than the 64 threads on the node, e.g. 16).&lt;br /&gt;
&lt;br /&gt;
After you configure and build, add these environment variables to the .run script (MPI_HOSTFILE, MP_PROCS, MP_NODES):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 #-----------------------------------------------------------------------&lt;br /&gt;
 # Determine necessary environment variables&lt;br /&gt;
 #----------------------------------------------------------------------- &lt;br /&gt;
 &lt;br /&gt;
 cd /project/peltier/guido/runs/T42_tcs &lt;br /&gt;
 setenv MACH tcs&lt;br /&gt;
 setenv PATH /scratch/$LOGNAME/bin:$PATH&lt;br /&gt;
 setenv MP_EUIDEVICE sn_all&lt;br /&gt;
 #setenv LAPI_DEBUG_SLOT_ATT_THRESH 500000&lt;br /&gt;
 setenv MP_RC_USE_LMC yes&lt;br /&gt;
 setenv MP_POLLING_INTERVAL 20000000&lt;br /&gt;
 setenv MP_EAGER_LIMIT 65536&lt;br /&gt;
 setenv MP_BULK_MIN_MSG_SIZE 65537&lt;br /&gt;
 '''setenv MPI_HOSTFILE /project/peltier/guido/runs/T42_tcs/hostfile'''&lt;br /&gt;
 '''setenv MP_PROCS 16'''&lt;br /&gt;
 '''setenv MP_NODES 1'''&lt;br /&gt;
 source env_conf || echo &amp;quot;problem sourcing env_conf&amp;quot; &amp;amp;&amp;amp; exit -1  &lt;br /&gt;
 source env_run  || echo &amp;quot;problem sourcing env_run&amp;quot; &amp;amp;&amp;amp; exit -1  &lt;br /&gt;
 source env_mach.tcs || echo &amp;quot;problem sourcing env_mach.tcs&amp;quot; &amp;amp;&amp;amp; exit -1&lt;br /&gt;
&lt;br /&gt;
change&lt;br /&gt;
&lt;br /&gt;
 timex poe -cmdfile poe.cmdfile&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&lt;br /&gt;
 timex poe -cmdfile poe.cmdfile -hfile /project/peltier/guido/runs/T42_tcs/hostfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then run interactively (e.g. ./YOURRUNNAME.run, then use top in another terminal to watch it run).&lt;br /&gt;
&lt;br /&gt;
'''To Run interactively on GPC:&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Set up env_mach.gpc to have the correct number of processors (i.e. fewer than 8 tasks per node times the number of nodes, e.g. 16).&lt;br /&gt;
&lt;br /&gt;
Compile the model on one of the 4 GPC interactive nodes.&lt;br /&gt;
&lt;br /&gt;
In env_run in your run script directory, change:&lt;br /&gt;
&lt;br /&gt;
 setenv SETBLD        AUTO            # [AUTO, TRUE, FALSE]&lt;br /&gt;
&lt;br /&gt;
to&lt;br /&gt;
&lt;br /&gt;
 setenv SETBLD        FALSE            # [AUTO, TRUE, FALSE]&lt;br /&gt;
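The SETBLD switch can also be flipped with sed instead of an editor; a small self-contained sketch (it operates on a demo copy, env_run.demo, but the same substitution applies to the real env_run):&lt;br /&gt;

```shell
# Demo: rewrite SETBLD from AUTO to FALSE, as described above.
# Done on a copy (env_run.demo) so the sketch is self-contained.
printf ' setenv SETBLD        AUTO            # [AUTO, TRUE, FALSE]\n' > env_run.demo
sed -i 's/\(SETBLD *\)AUTO/\1FALSE/' env_run.demo
```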
&lt;br /&gt;
Then get onto the interactive nodes:&lt;br /&gt;
&lt;br /&gt;
 qsub -I -l nodes=2:ppn=8,walltime=1:00:00&lt;br /&gt;
&lt;br /&gt;
and run interactively:&lt;br /&gt;
&lt;br /&gt;
 ./YOURRUNNAME.run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NOTE: Both CCSM3 and CCSM4/CESM1 can be run in the GPC batch queue by putting the full environment setup in place.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add &amp;quot;module load extras&amp;quot; to your env_machopts (h/t Chris)&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2692</id>
		<title>Running CCSM4</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2692"/>
		<updated>2011-03-02T23:16:22Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''It is important to point out that all updates to the model system will only occur with CESM1.0 updates, not with CCSM4.0. It is also important to note that CCSM4 is a subset of CESM1. Although CESM1 supersedes CCSM4, users can run all CCSM4 experiments from the CESM1 code base.&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
'''Note:''' To run interactively in the GPC interactive queue, or on TCS on a small subset of interactive processors, see the instructions on the Running CCSM3 page. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The scientifically validated CESM1 runs, together with a complete list of the model resolutions, are given by the list below:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/cesm1_current/scripts/create_newcase --list&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
                             CESM1.0 README&lt;br /&gt;
  &lt;br /&gt;
 For both a quick start as well as a detailed summary of creating and running &lt;br /&gt;
 a CESM model case, see the CESM1.0 User's Guide at&lt;br /&gt;
 http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
  &lt;br /&gt;
 IMPORTANT INFORMATION ABOUT SCIENTIFIC VALIDATION&lt;br /&gt;
  &lt;br /&gt;
   CESM1.0 has the flexibility to configure cases with many different &lt;br /&gt;
   combinations of component models, grids, and model settings, but this &lt;br /&gt;
   version of CESM has only been validated scientifically for the following &lt;br /&gt;
   fully active configurations:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_RAMPCO2_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CAM5&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_RAMPCO2_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BDRD&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BDRD&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN_CHEM &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN_CHEM&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_WACCM_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_WACCM_CN&lt;br /&gt;
   &lt;br /&gt;
      T31_gx3v7       B_1850_CN&lt;br /&gt;
 &lt;br /&gt;
   If the user is interested in running a &amp;quot;stand-alone&amp;quot; component configuration, &lt;br /&gt;
   the following model configurations have been validated scientifically and &lt;br /&gt;
   have associated diagnostic output as part of the release:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_1.9x2.5    F_2000_WACCM&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CAM5&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CN&lt;br /&gt;
      0.9x1.25_0.9x1.25  F_AMIP_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  I_2000&lt;br /&gt;
      0.9x1.25_gx1v6  I_2000_CN&lt;br /&gt;
 &lt;br /&gt;
      T62_gx1v6       C_NORMAL_YEAR&lt;br /&gt;
 &lt;br /&gt;
   For more information regarding alternative component configurations, &lt;br /&gt;
   please refer to the individual component web pages at&lt;br /&gt;
   http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
  CESM1 RESOLUTIONS:  name (shortname) &lt;br /&gt;
    pt1_pt1 (pt1)  &lt;br /&gt;
    0.23x0.31_0.23x0.31 (f02_f02)  &lt;br /&gt;
    0.23x0.31_gx1v6 (f02_g16)  &lt;br /&gt;
    0.23x0.31_tx0.1v2 (f02_t12)  &lt;br /&gt;
    0.47x0.63_0.47x0.63 (f05_f05)  &lt;br /&gt;
    0.47x0.63_gx1v6 (f05_g16)  &lt;br /&gt;
    0.47x0.63_tx0.1v2 (f05_t12)  &lt;br /&gt;
    0.9x1.25_0.9x1.25 (f09_f09)  &lt;br /&gt;
    0.9x1.25_gx1v6 (f09_g16)  &lt;br /&gt;
    1.9x2.5_1.9x2.5 (f19_f19)  &lt;br /&gt;
    1.9x2.5_gx1v6 (f19_g16)  &lt;br /&gt;
    4x5_4x5 (f45_f45)  &lt;br /&gt;
    4x5_gx3v7 (f45_g37)  &lt;br /&gt;
    T62_gx3v7 (T62_g37)  &lt;br /&gt;
    T62_tx0.1v2 (T62_t12)  &lt;br /&gt;
    T62_gx1v6 (T62_g16)  &lt;br /&gt;
    T31_T31 (T31_T31)  &lt;br /&gt;
    T31_gx3v7 (T31_g37)  &lt;br /&gt;
    T42_T42 (T42_T42)  &lt;br /&gt;
    10x15_10x15 (f10_f10)  &lt;br /&gt;
    ne30np4_1.9x2.5_gx1v6 (ne30_f19_g16)  &lt;br /&gt;
    ne240np4_0.23x0.31_gx1v6 (ne240_f02_g16)  &lt;br /&gt;
    T85_T85 (T85_T85)  &lt;br /&gt;
  &lt;br /&gt;
  COMPSETS:  name (shortname): description &lt;br /&gt;
    A_PRESENT_DAY (A) &lt;br /&gt;
         Description: All data model  &lt;br /&gt;
    A_GLC (AG) &lt;br /&gt;
         Description: All data model plus glc (glacier model)  &lt;br /&gt;
    B_2000 (B) &lt;br /&gt;
         Description: All active components, present day  &lt;br /&gt;
    B_2000_CN (BCN) &lt;br /&gt;
         Description: all active components, present day, with CN (Carbon Nitrogen) in clm  &lt;br /&gt;
    B_1850_CAM5 (B1850C5) &lt;br /&gt;
         Description: All active components, pre-industrial, cam5 physics  &lt;br /&gt;
    B_1850 (B1850) &lt;br /&gt;
         Description: All active components, pre-industrial  &lt;br /&gt;
    B_1850_CN (B1850CN) &lt;br /&gt;
         Description: all active components, pre-industrial, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_2000_CN_CHEM (B2000CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_CN_CHEM (B1850CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_RAMPCO2_CN (B1850RMCN) &lt;br /&gt;
         Description: All active components, pre-industrial with co2 ramp, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000 (B20TR) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient  &lt;br /&gt;
    B_1850-2000_CN (B20TRCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000_CN_CHEM (B20TRCNCHM) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850-2000_CAM5 (B20TRC5) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, cam5 physics  &lt;br /&gt;
    B_2000_GLC (BG) &lt;br /&gt;
         Description: all active components, with active glc  &lt;br /&gt;
    B_2000_TROP_MOZART (BMOZ) &lt;br /&gt;
         Description: All active components, with trop_mozart  &lt;br /&gt;
    B_1850_WACCM (B1850W) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm  &lt;br /&gt;
    B_1850_WACCM_CN (B1850WCN) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm and CN  &lt;br /&gt;
    B_1850-2000_WACCM_CN (B20TRWCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, WACCM with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1955-2005_WACCM_CN (B55TRWCN) &lt;br /&gt;
         Description: All active components, 1955 to 2005 transient, WACCM with daily solar data and SPEs, CLM with CN  &lt;br /&gt;
    B_1850_BGC-BPRP (B1850BPRP) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850_BGC-BDRD (B1850BDRD) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    B_1850-2000_BGC-BPRP (B20TRBPRP) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850-2000_BGC-BDRD (B20TRBDRD) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    C_NORMAL_YEAR_ECOSYS (CECO) &lt;br /&gt;
         Description: Active ocean model with ecosys and with COREv2 normal year forcing  &lt;br /&gt;
    C_NORMAL_YEAR (C) &lt;br /&gt;
         Description: Active ocean model with COREv2 normal year forcing  &lt;br /&gt;
    D_NORMAL_YEAR (D) &lt;br /&gt;
         Description: Active ice model with COREv2 normal year forcing  &lt;br /&gt;
    E_2000 (E) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean, present day  &lt;br /&gt;
    E_2000_GLC (EG) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean and glc, present day  &lt;br /&gt;
    E_1850_CN (E1850CN) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, with CN  &lt;br /&gt;
    E_1850_CAM5 (E1850C5) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, cam5 physics   &lt;br /&gt;
    F_AMIP_CN (FAMIPCN) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol - valid only for 1 degree cam/clm/pres-cice  &lt;br /&gt;
    F_AMIP_CAM5 (FAMIPC5) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol with cam5  &lt;br /&gt;
    F_1850 (F1850) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_1850_CAM5 (F1850C5) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn, cam5 physics  &lt;br /&gt;
    F_2000 (F) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice  &lt;br /&gt;
    F_2000_CAM5 (FC5) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, cam5 physics  &lt;br /&gt;
    F_2000_CN (FCN) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice with CN  &lt;br /&gt;
    F_1850-2000_CN (F20TRCN) &lt;br /&gt;
         Description: 20th Century transient stand-alone cam default, prescribed ocn/ice, with CN  &lt;br /&gt;
    F_2000_GLC (FG) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, glc (glacier model)  &lt;br /&gt;
    F_1850_CN_CHEM (F1850CNCHM) &lt;br /&gt;
         Description: stand-alone cam/clm, pre-industrial, with CN in CLM, super_fast_llnl chem in cam  &lt;br /&gt;
    F_1850_WACCM (F1850W) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_2000_WACCM (FW) &lt;br /&gt;
         Description: present-day cam/clm with prescribed ice/ocn  &lt;br /&gt;
    G_1850_ECOSYS (G1850ECO) &lt;br /&gt;
         Description: 1850 control for pop-ecosystem/cice/datm7/dlnd-rx1  &lt;br /&gt;
    G_NORMAL_YEAR (G) &lt;br /&gt;
         Description: Coupled ocean ice with COREv2 normal year forcing  &lt;br /&gt;
    H_PRESENT_DAY (H) &lt;br /&gt;
         Description: Coupled ocean ice slnd  &lt;br /&gt;
    I_2000 (I) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850 (I1850) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and Satellite phenology (SP), CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_2000_GLC (IG) &lt;br /&gt;
         Description: Active glacier model and active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1948-2004 (I4804) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000 (I8520) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and transient Satellite phenology (SP), and Aerosol deposition from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    I_2000_CN (ICN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850_CN (I1850CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_1948-2004_CN (I4804CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000_CN (I8520CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and transient CN, Aerosol dep from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    S_PRESENT_DAY (S) &lt;br /&gt;
         Description: All stub models plus xatm  &lt;br /&gt;
    X_PRESENT_DAY (X) &lt;br /&gt;
         Description: All dead model  &lt;br /&gt;
    XG_PRESENT_DAY (XG) &lt;br /&gt;
         Description: All dead model and cism  &lt;br /&gt;
  &lt;br /&gt;
  MACHINES:  name (description)&lt;br /&gt;
    tcs (U of T IBM p6, os is AIX, 32 pes/node, batch system is Moab/LoadLeveler) &lt;br /&gt;
    gpc (U of T iDataPlex intel cluster, os is linux, 8 pes/node, batch system is Moab/Torque) &lt;br /&gt;
    bluefire (NCAR IBM p6, os is AIX, 32 pes/node, batch system is LSF) &lt;br /&gt;
    brutus_po (Brutus Linux Cluster ETH (pgi/9.0-1 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_pm (Brutus Linux Cluster ETH (pgi/9.0-1 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_io (Brutus Linux Cluster ETH (intel/10.1.018 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_im (Brutus Linux Cluster ETH (intel/10.1.018 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    edinburgh_lahey (NCAR CGD Linux Cluster (lahey), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_pgi (NCAR CGD Linux Cluster (pgi), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_intel (NCAR CGD Linux Cluster (intel), 8 pes/node, batch system is PBS) &lt;br /&gt;
    franklin (NERSC XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    hadley (UCB Linux Cluster, os is Linux (ia64), batch system is PBS) &lt;br /&gt;
    hopper (NERSC XT5, os is CNL, 8 pes/node, batch system is PBS) &lt;br /&gt;
    intrepid (ANL IBM BG/P, os is BGP, 4 pes/node, batch system is cobalt) &lt;br /&gt;
    jaguar (ORNL XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    jaguarpf (ORNL XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    kraken (NICS/UT/teragrid XT5, os is CNL, 12 pes/node) &lt;br /&gt;
    lynx_pgi (NCAR XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    midnight (ARSC Sun Cluster, os is Linux (pgi), batch system is PBS) &lt;br /&gt;
    pleiades (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 3.0 GHz Harpertown processors, 8 pes/node and 8 GB of memory, batch system is PBS) &lt;br /&gt;
    pleiades_wes (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.93 GHz Westmere processors, 12 pes/node and 24 GB of memory, batch system is PBS) &lt;br /&gt;
    prototype_atlas (LLNL Linux Cluster, Linux (pgi), 8 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_hera (LLNL Linux Cluster, Linux (pgi), 16 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_columbia (NASA Ames Linux Cluster, Linux (ia64), 2 pes/node, batch system is PBS) &lt;br /&gt;
    prototype_frost (NCAR IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_nyblue (SUNY IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_ranger (TACC Linux Cluster, Linux (pgi), 1 pes/node, batch system is SGE) &lt;br /&gt;
    prototype_ubgl (LLNL IBM BG/L, os is BGL, 2 pes/node, batch system is Moab) &lt;br /&gt;
    generic_ibm (generic ibm power system, os is AIX, batch system is LoadLeveler, user-defined) &lt;br /&gt;
    generic_xt (generic CRAY XT, os is CNL, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pgi (generic linux (pgi), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_lahey (generic linux (lahey), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_intel (generic linux (intel), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pathscale (generic linux (pathscale), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Initializing the Model Setup:''&lt;br /&gt;
&lt;br /&gt;
The initial setup of the model on TCS is simplified with the short script below:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/$USER&lt;br /&gt;
 export MACH=tcs&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f19_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES -scratchroot $SCRATCH &lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' $CCSMROOT should point to the model code version in ''/project/ccsm'' with the &amp;quot;_current&amp;quot; suffix; the same applies to CESM1 (/project/ccsm/cesm1_current).&lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' $CASEROOT sets the naming conventions for your model run. Short-term archiving (to /scratch) fails if the CASEROOT name is too long, so it is best to keep it short. &lt;br /&gt;
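A quick guard against an over-long case name can be scripted; the 32-character threshold below is an illustrative guess, not a documented limit:&lt;br /&gt;

```shell
# Warn when the case name risks breaking short-term archiving.
# The 32-character cutoff is a guessed illustration, not a documented limit.
CASEROOT=~/runs/ccsm4_comp-B_1850_CN_res-f19_g16
case_name=$(basename "$CASEROOT")
if [ "${#case_name}" -gt 32 ]; then
    echo "WARNING: case name is ${#case_name} chars; consider shortening it"
fi
```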
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This script creates an 1850 control run with all components of the model fully active and carbon-nitrogen cycling in the land component. The resolution is 1.9x2.5 in the atmosphere and nominal 1 degree (gx1v6) in the ocean. The case directory is created under ~/runs:&lt;br /&gt;
&lt;br /&gt;
For valid component sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/a2967.html &lt;br /&gt;
For information on resolution sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/x42.html#ccsm_grids &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Load Balancing:''&lt;br /&gt;
&lt;br /&gt;
For the NCAR bluefire load balancing table for a select set of simulations see:&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/timing/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/timing/&lt;br /&gt;
&lt;br /&gt;
 cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
&lt;br /&gt;
edit env_mach_pes.xml&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ATM&amp;quot;   value=&amp;quot;448&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ATM&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_LND&amp;quot;   value=&amp;quot;320&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_LND&amp;quot;   value=&amp;quot;160&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ICE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ICE&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_OCN&amp;quot;   value=&amp;quot;256&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_OCN&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_CPL&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_CPL&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_GLC&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once this file is modified, you can configure the case:&lt;br /&gt;
&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 &lt;br /&gt;
You will notice that configure changes the file that you just edited; there you can see the total number of processors used by the simulation (704, or 11 nodes, in this case):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;TOTALPES&amp;quot;   value=&amp;quot;704&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_LEVEL&amp;quot;   value=&amp;quot;1r&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_PCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_TCOST&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_ESTCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
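The node count follows from TOTALPES and PES_PER_NODE; a minimal sketch of the arithmetic, with the values taken from the entries above (the rounding-up only matters when TOTALPES is not a multiple of the node size):&lt;br /&gt;

```shell
# Sketch: nodes needed for a PE layout, rounding up.
# totalpes and pes_per_node mirror the TOTALPES and PES_PER_NODE entries above.
totalpes=704
pes_per_node=64
nodes=$(( (totalpes + pes_per_node - 1) / pes_per_node ))
echo "$nodes nodes"   # 704 tasks at 64 tasks per node: 11 nodes
```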
'''Note:''' Rather than modifying the load balancing manually, NCAR provides a script in your $CASE directory that lets you change the CPU allocation of the individual components without editing the env_mach_pes.xml file by hand:&lt;br /&gt;
&lt;br /&gt;
As a different configuration, we might want 8 CPUs running the OCN component continuously, with the remaining 24 CPUs running ATM on all 24 and LND, ICE and CPL on 8 each. To set this up, enter:&lt;br /&gt;
&lt;br /&gt;
 configure -cleanmach&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 24&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 8 &lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_CPL -val 16&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_OCN -val 24&lt;br /&gt;
 configure -case&lt;br /&gt;
&lt;br /&gt;
Then build and resubmit&lt;br /&gt;
&lt;br /&gt;
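Assuming each component occupies PEs ROOTPE through ROOTPE+NTASKS-1 (PSTRID is 1 throughout), a quick sketch confirms that the layout above spans 32 PEs in total:&lt;br /&gt;

```shell
# Sketch: highest PE index used by the xmlchange layout above.
# Each pair is "ROOTPE NTASKS" for ATM, LND, ICE, CPL, OCN; PSTRID=1 assumed.
max=0
for spec in "0 24" "0 8" "8 8" "16 8" "24 8"; do
  set -- $spec
  end=$(( $1 + $2 ))   # one past the last PE this component uses
  if [ "$end" -gt "$max" ]; then max=$end; fi
done
echo "$max PEs in total"
```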
The task geometry used by LoadLeveler on TCS is located in the file ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Ensure that the proper modules are loaded:&lt;br /&gt;
&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) ncl/5.1.1               3) netcdf/4.0.1_nc3        5) xlf/13.1&lt;br /&gt;
  2) nco/3.9.6               4) parallel-netcdf/1.1.1   6) vacpp/11.1&lt;br /&gt;
&lt;br /&gt;
Now compile the model with:&lt;br /&gt;
&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
One of the pre-processing steps in this build sequence is to fetch input data sets (initial and boundary conditions) from the NCAR SVN server. If a large amount of initial-condition data has to be transferred from the NCAR repository, you may want to do this yourself on the ''datamover1'' node before you build; ''datamover1'' has a high-bandwidth connection to the outside.&lt;br /&gt;
Note: Most of the input data is already on /project/ccsm, so this step is not required for the more common configurations.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; ssh datamover1&lt;br /&gt;
 Last login: Wed Jul  7 16:38:14 2010 from tcs-f11n06-gpfs&lt;br /&gt;
 user@gpc-logindm01:~&amp;gt;cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;./check_input_data -inputdata /project/ccsm/inputdata -export&lt;br /&gt;
 Input Data List Files Found:&lt;br /&gt;
 ./Buildconf/cam.input_data_list&lt;br /&gt;
 ./Buildconf/clm.input_data_list&lt;br /&gt;
 ./Buildconf/cice.input_data_list&lt;br /&gt;
 ./Buildconf/pop2.input_data_list&lt;br /&gt;
 ./Buildconf/cpl.input_data_list&lt;br /&gt;
 export https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc /project/ccsm/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc ..... success&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Setting the Simulation Length:''&lt;br /&gt;
&lt;br /&gt;
The length of the model run can be set by editing ''env_run.xml'' at any time in the setup sequence:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--&amp;quot;if RESUBMIT is greater than 0, then case will automatically resubmit (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;RESUBMIT&amp;quot;   value=&amp;quot;10&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjuction with STOP_N and STOP_DATE, valid values: none,never,nsteps,nstep,nseconds,nsecond,nminutes,nminute,nhours,nhour,ndays,nday,nmonths,nmonth,nyears,nyear,date,ifdays0,end (char) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_OPTION&amp;quot;   value=&amp;quot;nmonths&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjuction with STOP_OPTION and STOP_DATE (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_N&amp;quot;   value=&amp;quot;12&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
These settings tell the model to run one model year (12 months) per job submission, writing a checkpoint at the end of each, and to resubmit itself automatically 10 times.&lt;br /&gt;
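These values can also be set with the xmlchange helper used earlier for env_mach_pes.xml, assuming the same -file/-id/-val syntax applies to env_run.xml (a sketch, to be run from the case directory):&lt;br /&gt;

```shell
xmlchange -file env_run.xml -id STOP_OPTION -val nmonths
xmlchange -file env_run.xml -id STOP_N -val 12
xmlchange -file env_run.xml -id RESUBMIT -val 10
```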
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on the Distributed System (TCS):''&lt;br /&gt;
&lt;br /&gt;
The model is now ready to be submitted to the TCS batch queue&lt;br /&gt;
&lt;br /&gt;
 llsubmit ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Once the model has run through a checkpoint, timing information on the simulation can be found in:&lt;br /&gt;
 ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16/timing&lt;br /&gt;
&lt;br /&gt;
Standard output from the model can be followed during runtime by going to:&lt;br /&gt;
 /scratch/guido/ccsm4_comp-B_1850_CN_res-f19_g16/run&lt;br /&gt;
and running&lt;br /&gt;
 tail -f &amp;lt;component_log_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The model will archive the NetCDF output in:&lt;br /&gt;
 /scratch/$USER/archive&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Cloning Simulations''&lt;br /&gt;
&lt;br /&gt;
A useful command for setting up multiple runs quickly is create_clone. It clones an existing case, so there is no need to run the setup script above every time:&lt;br /&gt;
 cd ~/runs&lt;br /&gt;
 /project/ccsm/ccsm4_0_current/scripts/create_clone -clone ccsm4_comp-B_1850_CN_res-f09_g16 -case ccsm4_comp-B_1850_CN_res-f09_g16_clone -v&lt;br /&gt;
&lt;br /&gt;
To change the load balancing (env_mach_pes.xml) or other parameters in an existing simulation setup, do a clean build to make sure the model is rebuilt properly:&lt;br /&gt;
 ./configure -cleanmach&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.clean_build&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on GPC''&lt;br /&gt;
&lt;br /&gt;
The setup script is almost identical:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/guido&lt;br /&gt;
 export MACH=gpc&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f09_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4gpc_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
To load balance and run the model, follow the steps above.&lt;br /&gt;
The env_mach_pes.xml configuration file needs to be modified as follows:&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
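Since GPC packs 8 tasks per node where TCS packs 64, the same task count maps to many more nodes; a minimal sketch of the comparison, assuming the 704-PE layout from the TCS example above:&lt;br /&gt;

```shell
# Sketch: node counts for a 704-task layout on TCS (64 tasks/node)
# versus GPC (8 tasks/node), per the PES_PER_NODE values on this page.
totalpes=704
tcs_nodes=$(( totalpes / 64 ))
gpc_nodes=$(( totalpes / 8 ))
echo "TCS: $tcs_nodes nodes, GPC: $gpc_nodes nodes"
```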
Use qsub to submit the model to the GPC cluster:&lt;br /&gt;
 qsub ccsm4gpc_comp-B_1850_CN_res-f09_g16.gpc.run&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2683</id>
		<title>Running CCSM4</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2683"/>
		<updated>2011-02-25T19:10:18Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''It is important to point out that all updates to the model system will only occur with CESM1.0 updates, not with CCSM4.0. It is also important to note that CCSM4 is a subset of CESM1. Although CESM1 supersedes CCSM4, users can run all CCSM4 experiments from the CESM1 code base.&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
'''Note:''' To run interactively on the GPC interactive queue, or on TCS on a small subset of interactive processors, see the instructions on the Running CCSM3 page. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The scientifically validated CESM1 runs are found in the list below (including a complete list of the model resolutions):&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/cesm1_current/scripts/create_newcase --list&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
                             CESM1.0 README&lt;br /&gt;
  &lt;br /&gt;
 For both a quick start as well as a detailed summary of creating and running &lt;br /&gt;
 a CESM model case, see the CESM1.0 User's Guide at&lt;br /&gt;
 http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
  &lt;br /&gt;
 IMPORTANT INFORMATION ABOUT SCIENTIFIC VALIDATION&lt;br /&gt;
  &lt;br /&gt;
   CESM1.0 has the flexibility to configure cases with many different &lt;br /&gt;
   combinations of component models, grids, and model settings, but this &lt;br /&gt;
   version of CESM has only been validated scientifically for the following &lt;br /&gt;
   fully active configurations:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_RAMPCO2_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CAM5&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_RAMPCO2_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BDRD&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BDRD&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN_CHEM &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN_CHEM&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_WACCM_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_WACCM_CN&lt;br /&gt;
   &lt;br /&gt;
      T31_gx3v7       B_1850_CN&lt;br /&gt;
 &lt;br /&gt;
   If the user is interested in running a &amp;quot;stand-alone&amp;quot; component configuration, &lt;br /&gt;
   the following model configurations have been validated scientifically and &lt;br /&gt;
   have associated diagnostic output as part of the release:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_1.9x2.5    F_2000_WACCM&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CAM5&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CN&lt;br /&gt;
      0.9x1.25_0.9x1.25  F_AMIP_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  I_2000&lt;br /&gt;
      0.9x1.25_gx1v6  I_2000_CN&lt;br /&gt;
 &lt;br /&gt;
      T62_gx1v6       C_NORMAL_YEAR&lt;br /&gt;
 &lt;br /&gt;
   For more information regarding alternative component configurations, &lt;br /&gt;
   please refer to the individual component web pages at&lt;br /&gt;
   http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
  CESM1 RESOLUTIONS:  name (shortname) &lt;br /&gt;
    pt1_pt1 (pt1)  &lt;br /&gt;
    0.23x0.31_0.23x0.31 (f02_f02)  &lt;br /&gt;
    0.23x0.31_gx1v6 (f02_g16)  &lt;br /&gt;
    0.23x0.31_tx0.1v2 (f02_t12)  &lt;br /&gt;
    0.47x0.63_0.47x0.63 (f05_f05)  &lt;br /&gt;
    0.47x0.63_gx1v6 (f05_g16)  &lt;br /&gt;
    0.47x0.63_tx0.1v2 (f05_t12)  &lt;br /&gt;
    0.9x1.25_0.9x1.25 (f09_f09)  &lt;br /&gt;
    0.9x1.25_gx1v6 (f09_g16)  &lt;br /&gt;
    1.9x2.5_1.9x2.5 (f19_f19)  &lt;br /&gt;
    1.9x2.5_gx1v6 (f19_g16)  &lt;br /&gt;
    4x5_4x5 (f45_f45)  &lt;br /&gt;
    4x5_gx3v7 (f45_g37)  &lt;br /&gt;
    T62_gx3v7 (T62_g37)  &lt;br /&gt;
    T62_tx0.1v2 (T62_t12)  &lt;br /&gt;
    T62_gx1v6 (T62_g16)  &lt;br /&gt;
    T31_T31 (T31_T31)  &lt;br /&gt;
    T31_gx3v7 (T31_g37)  &lt;br /&gt;
    T42_T42 (T42_T42)  &lt;br /&gt;
    10x15_10x15 (f10_f10)  &lt;br /&gt;
    ne30np4_1.9x2.5_gx1v6 (ne30_f19_g16)  &lt;br /&gt;
    ne240np4_0.23x0.31_gx1v6 (ne240_f02_g16)  &lt;br /&gt;
    T85_T85 (T85_T85)  &lt;br /&gt;
  &lt;br /&gt;
  COMPSETS:  name (shortname): description &lt;br /&gt;
    A_PRESENT_DAY (A) &lt;br /&gt;
         Description: All data model  &lt;br /&gt;
    A_GLC (AG) &lt;br /&gt;
         Description: All data model plus glc (glacier model)  &lt;br /&gt;
    B_2000 (B) &lt;br /&gt;
         Description: All active components, present day  &lt;br /&gt;
    B_2000_CN (BCN) &lt;br /&gt;
         Description: all active components, present day, with CN (Carbon Nitrogen) in clm  &lt;br /&gt;
    B_1850_CAM5 (B1850C5) &lt;br /&gt;
         Description: All active components, pre-industrial, cam5 physics  &lt;br /&gt;
    B_1850 (B1850) &lt;br /&gt;
         Description: All active components, pre-industrial  &lt;br /&gt;
    B_1850_CN (B1850CN) &lt;br /&gt;
         Description: all active components, pre-industrial, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_2000_CN_CHEM (B2000CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_CN_CHEM (B1850CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_RAMPCO2_CN (B1850RMCN) &lt;br /&gt;
          Description: All active components, pre-industrial with co2 ramp, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000 (B20TR) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient  &lt;br /&gt;
    B_1850-2000_CN (B20TRCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000_CN_CHEM (B20TRCNCHM) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850-2000_CAM5 (B20TRC5) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, cam5 physics  &lt;br /&gt;
    B_2000_GLC (BG) &lt;br /&gt;
         Description: all active components, with active glc  &lt;br /&gt;
    B_2000_TROP_MOZART (BMOZ) &lt;br /&gt;
         Description: All active components, with trop_mozart  &lt;br /&gt;
    B_1850_WACCM (B1850W) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm  &lt;br /&gt;
    B_1850_WACCM_CN (B1850WCN) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm and CN  &lt;br /&gt;
    B_1850-2000_WACCM_CN (B20TRWCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, WACCM with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1955-2005_WACCM_CN (B55TRWCN) &lt;br /&gt;
         Description: All active components, 1955 to 2000 transient, WACCM with  daily solar data and SPEs, CLM with CN  &lt;br /&gt;
    B_1850_BGC-BPRP (B1850BPRP) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850_BGC-BDRD (B1850BDRD) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    B_1850-2000_BGC-BPRP (B20TRBPRP) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850-2000_BGC-BDRD (B20TRBDRD) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    C_NORMAL_YEAR_ECOSYS (CECO) &lt;br /&gt;
         Description: Active ocean model with ecosys and with COREv2 normal year forcing  &lt;br /&gt;
    C_NORMAL_YEAR (C) &lt;br /&gt;
         Description: Active ocean model with COREv2 normal year forcing  &lt;br /&gt;
    D_NORMAL_YEAR (D) &lt;br /&gt;
         Description: Active ice model with COREv2 normal year forcing  &lt;br /&gt;
    E_2000 (E) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean, present day  &lt;br /&gt;
    E_2000_GLC (EG) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean and glc, present day  &lt;br /&gt;
    E_1850_CN (E1850CN) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, with CN  &lt;br /&gt;
    E_1850_CAM5 (E1850C5) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, cam5 physics   &lt;br /&gt;
    F_AMIP_CN (FAMIPCN) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol - valid only for 1 degree cam/clm/pres-cice  &lt;br /&gt;
    F_AMIP_CAM5 (FAMIPC5) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol with cam5  &lt;br /&gt;
    F_1850 (F1850) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_1850_CAM5 (F1850C5) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn, cam5 physics  &lt;br /&gt;
    F_2000 (F) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice  &lt;br /&gt;
    F_2000_CAM5 (FC5) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, cam5 physics  &lt;br /&gt;
    F_2000_CN (FCN) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice with CN  &lt;br /&gt;
    F_1850-2000_CN (F20TRCN) &lt;br /&gt;
         Description: 20th Century transient stand-alone cam default, prescribed ocn/ice, with CN  &lt;br /&gt;
    F_2000_GLC (FG) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, glc (glacier model)  &lt;br /&gt;
    F_1850_CN_CHEM (F1850CNCHM) &lt;br /&gt;
         Description: stand-alone cam/clm, pre-industrial, with CN in CLM, super_fast_llnl chem in cam  &lt;br /&gt;
    F_1850_WACCM (F1850W) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_2000_WACCM (FW) &lt;br /&gt;
         Description: present-day cam/clm with prescribed ice/ocn  &lt;br /&gt;
    G_1850_ECOSYS (G1850ECO) &lt;br /&gt;
         Description: 1850 control for pop-ecosystem/cice/datm7/dlnd-rx1  &lt;br /&gt;
    G_NORMAL_YEAR (G) &lt;br /&gt;
         Description: Coupled ocean ice with COREv2 normal year forcing  &lt;br /&gt;
    H_PRESENT_DAY (H) &lt;br /&gt;
         Description: Coupled ocean ice slnd  &lt;br /&gt;
    I_2000 (I) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850 (I1850) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and Satellite phenology (SP), CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_2000_GLC (IG) &lt;br /&gt;
         Description: Active glacier model and active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1948-2004 (I4804) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000 (I8520) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and transient Satellite phenology (SP), and Aerosol deposition from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    I_2000_CN (ICN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850_CN (I1850CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_1948-2004_CN (I4804CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000_CN (I8520CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and transient CN, Aerosol dep from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    S_PRESENT_DAY (S) &lt;br /&gt;
         Description: All stub models plus xatm  &lt;br /&gt;
    X_PRESENT_DAY (X) &lt;br /&gt;
         Description: All dead model  &lt;br /&gt;
    XG_PRESENT_DAY (XG) &lt;br /&gt;
         Description: All dead model and cism  &lt;br /&gt;
  &lt;br /&gt;
  MACHINES:  name (description)&lt;br /&gt;
    tcs (U of T IBM p6, os is AIX, 32 pes/node, batch system is Moab/LoadLeveler) &lt;br /&gt;
    gpc (U of T iDataPlex intel cluster, os is linux, 8 pes/node, batch system is Moab/Torque) &lt;br /&gt;
    bluefire (NCAR IBM p6, os is AIX, 32 pes/node, batch system is LSF) &lt;br /&gt;
    brutus_po (Brutus Linux Cluster ETH (pgi/9.0-1 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_pm (Brutus Linux Cluster ETH (pgi/9.0-1 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_io (Brutus Linux Cluster ETH (intel/10.1.018 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_im (Brutus Linux Cluster ETH (intel/10.1.018 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    edinburgh_lahey (NCAR CGD Linux Cluster (lahey), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_pgi (NCAR CGD Linux Cluster (pgi), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_intel (NCAR CGD Linux Cluster (intel), 8 pes/node, batch system is PBS) &lt;br /&gt;
    franklin (NERSC XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    hadley (UCB Linux Cluster, os is Linux (ia64), batch system is PBS) &lt;br /&gt;
    hopper (NERSC XT5, os is CNL, 8 pes/node, batch system is PBS) &lt;br /&gt;
    intrepid (ANL IBM BG/P, os is BGP, 4 pes/node, batch system is cobalt) &lt;br /&gt;
    jaguar (ORNL XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    jaguarpf (ORNL XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    kraken (NICS/UT/teragrid XT5, os is CNL, 12 pes/node) &lt;br /&gt;
    lynx_pgi (NCAR XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    midnight (ARSC Sun Cluster, os is Linux (pgi), batch system is PBS) &lt;br /&gt;
    pleiades (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 3.0 GHz Harpertown processors, 8 pes/node and 8 GB of memory, batch system is PBS) &lt;br /&gt;
    pleiades_wes (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.93 GHz Westmere processors, 12 pes/node and 24 GB of memory, batch system is PBS) &lt;br /&gt;
    prototype_atlas (LLNL Linux Cluster, Linux (pgi), 8 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_hera (LLNL Linux Cluster, Linux (pgi), 16 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_columbia (NASA Ames Linux Cluster, Linux (ia64), 2 pes/node, batch system is PBS) &lt;br /&gt;
    prototype_frost (NCAR IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_nyblue (SUNY IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_ranger (TACC Linux Cluster, Linux (pgi), 1 pes/node, batch system is SGE) &lt;br /&gt;
    prototype_ubgl (LLNL IBM BG/L, os is BGL, 2 pes/node, batch system is Moab) &lt;br /&gt;
    generic_ibm (generic ibm power system, os is AIX, batch system is LoadLeveler, user-defined) &lt;br /&gt;
    generic_xt (generic CRAY XT, os is CNL, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pgi (generic linux (pgi), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_lahey (generic linux (lahey), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_intel (generic linux (intel), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pathscale (generic linux (pathscale), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Initializing the Model Setup:''&lt;br /&gt;
&lt;br /&gt;
The initial setup of the model on TCS is simplified by the short script below:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/$USER&lt;br /&gt;
 export MACH=tcs&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f19_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES -scratchroot $SCRATCH &lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' CCSMROOT should point to the model code version in ''/project/ccsm'' with the &amp;quot;_current&amp;quot; suffix. The same applies for CESM1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This script creates an 1850 control run with all components of the model fully active and carbon-nitrogen cycling in the land component. The resolution is 1.9x2.5 in the atmosphere and gx1 in the ocean. The case is created in the ~/runs directory:&lt;br /&gt;
&lt;br /&gt;
For valid component sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/a2967.html &lt;br /&gt;
For information on resolution sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/x42.html#ccsm_grids &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Load Balancing:''&lt;br /&gt;
&lt;br /&gt;
For the NCAR bluefire load-balancing tables for a select set of simulations, see:&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/timing/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/timing/&lt;br /&gt;
&lt;br /&gt;
 cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
&lt;br /&gt;
edit env_mach_pes.xml&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ATM&amp;quot;   value=&amp;quot;448&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ATM&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_LND&amp;quot;   value=&amp;quot;320&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_LND&amp;quot;   value=&amp;quot;160&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ICE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ICE&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_OCN&amp;quot;   value=&amp;quot;256&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_OCN&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_CPL&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_CPL&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_GLC&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once this file is modified, you can configure the case:&lt;br /&gt;
&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 &lt;br /&gt;
You will notice that configure changes the file that you just edited; there you can see the total number of processors used by the simulation (704, or 11 nodes, in this case):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;TOTALPES&amp;quot;   value=&amp;quot;704&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_LEVEL&amp;quot;   value=&amp;quot;1r&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_PCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_TCOST&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_ESTCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
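The node count follows from TOTALPES and PES_PER_NODE; a minimal sketch of the arithmetic, with the values taken from the entries above (the rounding-up only matters when TOTALPES is not a multiple of the node size):&lt;br /&gt;

```shell
# Sketch: nodes needed for a PE layout, rounding up.
# totalpes and pes_per_node mirror the TOTALPES and PES_PER_NODE entries above.
totalpes=704
pes_per_node=64
nodes=$(( (totalpes + pes_per_node - 1) / pes_per_node ))
echo "$nodes nodes"   # 704 tasks at 64 tasks per node: 11 nodes
```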
'''Note:''' Rather than modifying the load balancing manually, NCAR provides a script in your $CASE directory that lets you change the CPU allocation of the individual components without editing the env_mach_pes.xml file by hand:&lt;br /&gt;
&lt;br /&gt;
For a different configuration we might want 8 CPUs running the OCN component continuously, with the remaining 24 CPUs shared as follows: ATM on all 24, and LND, ICE and CPL on 8 each. To set this up, enter:&lt;br /&gt;
&lt;br /&gt;
 configure -cleanmach&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 24&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 8 &lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_CPL -val 16&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_OCN -val 24&lt;br /&gt;
 configure -case&lt;br /&gt;
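The resulting layout can be sanity-checked with a short plain-shell sketch: the total PE count that configure reports is the largest ROOTPE + NTASKS over all components. Values below are copied from the xmlchange calls above; ROOTPE_LND is assumed to be 0 (the default, since it is not set above):&lt;br /&gt;

```shell
# Sketch: highest PE index implied by the layout above, i.e. the largest
# ROOTPE + NTASKS over all components. Each spec is COMPONENT:ROOTPE:NTASKS.
# LND's ROOTPE is assumed 0 (the default; it is not set by the commands above).
total=0
for spec in ATM:0:24 LND:0:8 ICE:8:8 CPL:16:8 OCN:24:8; do
  root=${spec#*:}; root=${root%:*}
  ntasks=${spec##*:}
  end=$((root + ntasks))
  if [ "$end" -gt "$total" ]; then total=$end; fi
done
echo "TOTALPES=$total"
```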
&lt;br /&gt;
Then build and resubmit&lt;br /&gt;
&lt;br /&gt;
The task geometry used by LoadLeveler on TCS is located in the file ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run.&lt;br /&gt;
&lt;br /&gt;
Ensure that the proper modules are loaded:&lt;br /&gt;
&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) ncl/5.1.1               3) netcdf/4.0.1_nc3        5) xlf/13.1&lt;br /&gt;
  2) nco/3.9.6               4) parallel-netcdf/1.1.1   6) vacpp/11.1&lt;br /&gt;
&lt;br /&gt;
Now compile the model with:&lt;br /&gt;
&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
One of the pre-processing steps in this build sequence is to fetch input data sets (initial and boundary conditions) from the NCAR SVN server. If a large amount of initial condition data must be transferred from the NCAR repository, you may want to run this step yourself on the ''datamover1'' node before you build, since ''datamover1'' has a high-bandwidth connection to the outside.&lt;br /&gt;
Note: Most of the input data is already on /project/ccsm, so this step will not be required for the more common configurations.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; ssh datamover1&lt;br /&gt;
 Last login: Wed Jul  7 16:38:14 2010 from tcs-f11n06-gpfs&lt;br /&gt;
 user@gpc-logindm01:~&amp;gt;cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;./check_input_data -inputdata /project/ccsm/inputdata -export&lt;br /&gt;
 Input Data List Files Found:&lt;br /&gt;
 ./Buildconf/cam.input_data_list&lt;br /&gt;
 ./Buildconf/clm.input_data_list&lt;br /&gt;
 ./Buildconf/cice.input_data_list&lt;br /&gt;
 ./Buildconf/pop2.input_data_list&lt;br /&gt;
 ./Buildconf/cpl.input_data_list&lt;br /&gt;
 export https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc /project/ccsm/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc ..... success&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Setting the Simulation Length:''&lt;br /&gt;
&lt;br /&gt;
The length of time the model will run can be set by editing ''env_run.xml'' at any time in the setup sequence:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--&amp;quot;if RESUBMIT is greater than 0, then case will automatically resubmit (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;RESUBMIT&amp;quot;   value=&amp;quot;10&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjuction with STOP_N and STOP_DATE, valid values: none,never,nsteps,nstep,nseconds,nsecond,nminutes,nminute,nhours,nhour,ndays,nday,nmonths,nmonth,nyears,nyear,date,ifdays0,end (char) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_OPTION&amp;quot;   value=&amp;quot;nmonths&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjuction with STOP_OPTION and STOP_DATE (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_N&amp;quot;   value=&amp;quot;12&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
These settings tell the model to stop and checkpoint after each model year (12 months) and to resubmit itself automatically 10 times.&lt;br /&gt;
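As a rough sketch of the run-length arithmetic (assuming RESUBMIT counts automatic resubmissions after the initial job):&lt;br /&gt;

```shell
# Sketch of the run length implied by the env_run.xml values above.
# Assumption: RESUBMIT counts resubmissions after the first submission.
RESUBMIT=10
STOP_N=12            # months per submission (STOP_OPTION=nmonths)
segments=$((1 + RESUBMIT))
months=$((segments * STOP_N))
echo "$segments submissions, $((months / 12)) model years"
```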
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on the Distributed System (TCS):''&lt;br /&gt;
&lt;br /&gt;
The model is now ready to be submitted to the TCS batch queue&lt;br /&gt;
&lt;br /&gt;
 llsubmit ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Once the model has run through a checkpoint, timing information for the simulation can be found in:&lt;br /&gt;
 ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16/timing&lt;br /&gt;
&lt;br /&gt;
Standard output from the model can be followed during runtime by going to:&lt;br /&gt;
 /scratch/guido/ccsm4_comp-B_1850_CN_res-f19_g16/run&lt;br /&gt;
and running&lt;br /&gt;
 tail -f &amp;lt;component_log_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The model will archive the NetCDF output in:&lt;br /&gt;
 /scratch/$USER/archive&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Cloning Simulations''&lt;br /&gt;
&lt;br /&gt;
A useful command for setting up multiple runs quickly is create_clone. It clones an existing case, so there is no need to run the setup script above every time:&lt;br /&gt;
 cd ~/runs&lt;br /&gt;
 /project/ccsm/ccsm4_0_current/scripts/create_clone -clone ccsm4_comp-B_1850_CN_res-f09_g16 -case ccsm4_comp-B_1850_CN_res-f09_g16_clone -v&lt;br /&gt;
&lt;br /&gt;
To change the load balancing (env_mach_pes.xml) or other parameters in an existing simulation setup, do a clean build to make sure the model is rebuilt properly:&lt;br /&gt;
 ./configure -cleanmach&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.clean_build&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on GPC''&lt;br /&gt;
&lt;br /&gt;
The setup script is almost identical:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/guido&lt;br /&gt;
 export MACH=gpc&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f09_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4gpc_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
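The case directory name that the run and build scripts inherit expands from the variables in the setup script above; a quick shell check (values copied from that script):&lt;br /&gt;

```shell
# Sketch: expansion of the case directory name defined in the setup
# script above (values copied from that script).
COMPSET=B_1850_CN
RES=f09_g16
CASEROOT=~/runs/ccsm4gpc_comp-${COMPSET}_res-${RES}
echo "${CASEROOT##*/}"   # the case name, without the leading path
```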
&lt;br /&gt;
To load balance and run the model, follow the steps above.&lt;br /&gt;
The env_mach_pes.xml configuration file needs to be modified as follows:&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
Use qsub to submit the model to the GPC cluster:&lt;br /&gt;
 qsub ccsm4gpc_comp-B_1850_CN_res-f09_g16.gpc.run&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2682</id>
		<title>Running CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2682"/>
		<updated>2011-02-25T19:07:58Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You can run CCSM3 in interactive mode on GPC without a host file, but only do so for very short compilation/initialization tests. Run the model in the TCS or GPC batch queue.&lt;br /&gt;
&lt;br /&gt;
Create a setup script:&lt;br /&gt;
 #!/bin/csh&lt;br /&gt;
 &lt;br /&gt;
 setenv CCSMROOT /project/ccsm/ccsm3_current&lt;br /&gt;
 setenv SCRATCH /scratch/$USER&lt;br /&gt;
 setenv CASEROOT /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
 setenv MACH gpc&lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -mach $MACH -res T31_gx3v5 -case $CASEROOT&lt;br /&gt;
&lt;br /&gt;
Then:&lt;br /&gt;
&lt;br /&gt;
 cd /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
&lt;br /&gt;
Modify env_mach.gpc, env_run, env_conf to your needs.&lt;br /&gt;
&lt;br /&gt;
Configure and Build:&lt;br /&gt;
&lt;br /&gt;
 ./configure -mach gpc&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.build&lt;br /&gt;
&lt;br /&gt;
Run&lt;br /&gt;
&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.run&lt;br /&gt;
 or qsub ccsm3_t31_gpc.gpc.run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''To run interactively on TCS:'''&lt;br /&gt;
&lt;br /&gt;
Set up env_mach.tcs to have the correct number of processors (i.e. fewer than the 64 threads on the node, e.g. 16).&lt;br /&gt;
&lt;br /&gt;
After you configure and build, add some environment variables to the .run script (MPI_HOSTFILE, MP_PROCS, MP_NODES):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 #-----------------------------------------------------------------------&lt;br /&gt;
 # Determine necessary environment variables&lt;br /&gt;
 #----------------------------------------------------------------------- &lt;br /&gt;
 &lt;br /&gt;
 cd /project/peltier/guido/runs/T42_tcs &lt;br /&gt;
 setenv MACH tcs&lt;br /&gt;
 setenv PATH /scratch/$LOGNAME/bin:$PATH&lt;br /&gt;
 setenv MP_EUIDEVICE sn_all&lt;br /&gt;
 #setenv LAPI_DEBUG_SLOT_ATT_THRESH 500000&lt;br /&gt;
 setenv MP_RC_USE_LMC yes&lt;br /&gt;
 setenv MP_POLLING_INTERVAL 20000000&lt;br /&gt;
 setenv MP_EAGER_LIMIT 65536&lt;br /&gt;
 setenv MP_BULK_MIN_MSG_SIZE 65537&lt;br /&gt;
 '''setenv MPI_HOSTFILE /project/peltier/guido/runs/T42_tcs/hostfile'''&lt;br /&gt;
 '''setenv MP_PROCS 16'''&lt;br /&gt;
 '''setenv MP_NODES 1'''&lt;br /&gt;
 source env_conf || echo &amp;quot;problem sourcing env_conf&amp;quot; &amp;amp;&amp;amp; exit -1  &lt;br /&gt;
 source env_run  || echo &amp;quot;problem sourcing env_run&amp;quot; &amp;amp;&amp;amp; exit -1  &lt;br /&gt;
 source env_mach.tcs || echo &amp;quot;problem sourcing env_mach.tcs&amp;quot; &amp;amp;&amp;amp; exit -1&lt;br /&gt;
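The hostfile that MPI_HOSTFILE points at typically lists the node once per MPI task; a plain-shell sketch for a 16-task, single-node run (the node name is hypothetical):&lt;br /&gt;

```shell
# Sketch: build a minimal POE-style hostfile -- one line per MPI task.
# The node name below is hypothetical; use your actual TCS node.
node=tcs-f11n06
rm -f hostfile
i=0
while [ "$i" -lt 16 ]; do
  echo "$node" >> hostfile
  i=$((i + 1))
done
awk 'END { print NR " tasks" }' hostfile
```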
&lt;br /&gt;
change&lt;br /&gt;
&lt;br /&gt;
 timex poe -cmdfile poe.cmdfile&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&lt;br /&gt;
 timex poe -cmdfile poe.cmdfile -hfile /project/peltier/guido/runs/T42_tcs/hostfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then run interactively (e.g. ./YOURRUNNAME.run) and use top in another terminal to watch it run.&lt;br /&gt;
&lt;br /&gt;
'''To run interactively on GPC:'''&lt;br /&gt;
&lt;br /&gt;
Set up env_mach.gpc to have the correct number of processors (i.e. no more than the 8 tasks per node times the number of nodes, e.g. 16).&lt;br /&gt;
&lt;br /&gt;
Compile the model on one of the 4 GPC interactive nodes&lt;br /&gt;
&lt;br /&gt;
In env_run in your run script directory, change:&lt;br /&gt;
&lt;br /&gt;
 setenv SETBLD        AUTO            # [AUTO, TRUE, FALSE]&lt;br /&gt;
&lt;br /&gt;
to&lt;br /&gt;
&lt;br /&gt;
 setenv SETBLD        FALSE            # [AUTO, TRUE, FALSE]&lt;br /&gt;
&lt;br /&gt;
Then get on the interactive nodes&lt;br /&gt;
&lt;br /&gt;
 qsub -I -l nodes=2:ppn=8,walltime=1:00:00&lt;br /&gt;
&lt;br /&gt;
and run interactively:&lt;br /&gt;
&lt;br /&gt;
 ./YOURRUNNAME.run&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2681</id>
		<title>Running CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2681"/>
		<updated>2011-02-25T19:07:35Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You can run CCSM3 in interactive mode on GPC without a host file, but only do so for very short compilation/initialization tests. Run the model in the TCS or GPC batch queue.&lt;br /&gt;
&lt;br /&gt;
Create a setup script:&lt;br /&gt;
 #!/bin/csh&lt;br /&gt;
 &lt;br /&gt;
 setenv CCSMROOT /project/ccsm/ccsm3_current&lt;br /&gt;
 setenv SCRATCH /scratch/$USER&lt;br /&gt;
 setenv CASEROOT /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
 setenv MACH gpc&lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -mach $MACH -res T31_gx3v5 -case $CASEROOT&lt;br /&gt;
&lt;br /&gt;
Then:&lt;br /&gt;
&lt;br /&gt;
 cd /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
&lt;br /&gt;
Modify env_mach.gpc, env_run, env_conf to your needs.&lt;br /&gt;
&lt;br /&gt;
Configure and Build:&lt;br /&gt;
&lt;br /&gt;
 ./configure -mach gpc&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.build&lt;br /&gt;
&lt;br /&gt;
Run&lt;br /&gt;
&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.run&lt;br /&gt;
 or qsub ccsm3_t31_gpc.gpc.run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''To run interactively on TCS:'''&lt;br /&gt;
&lt;br /&gt;
Set up env_mach.tcs to have the correct number of processors (i.e. fewer than the 64 threads on the node, e.g. 16).&lt;br /&gt;
&lt;br /&gt;
After you configure and build, add some environment variables to the .run script (MPI_HOSTFILE, MP_PROCS, MP_NODES):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 #-----------------------------------------------------------------------&lt;br /&gt;
 # Determine necessary environment variables&lt;br /&gt;
 #----------------------------------------------------------------------- &lt;br /&gt;
 &lt;br /&gt;
 cd /project/peltier/guido/runs/T42_tcs &lt;br /&gt;
 setenv MACH tcs&lt;br /&gt;
 setenv PATH /scratch/$LOGNAME/bin:$PATH&lt;br /&gt;
 setenv MP_EUIDEVICE sn_all&lt;br /&gt;
 #setenv LAPI_DEBUG_SLOT_ATT_THRESH 500000&lt;br /&gt;
 setenv MP_RC_USE_LMC yes&lt;br /&gt;
 setenv MP_POLLING_INTERVAL 20000000&lt;br /&gt;
 setenv MP_EAGER_LIMIT 65536&lt;br /&gt;
 setenv MP_BULK_MIN_MSG_SIZE 65537&lt;br /&gt;
 '''setenv MPI_HOSTFILE /project/peltier/guido/runs/T42_tcs/hostfile'''&lt;br /&gt;
 '''setenv MP_PROCS 16'''&lt;br /&gt;
 '''setenv MP_NODES 1'''&lt;br /&gt;
 source env_conf || echo &amp;quot;problem sourcing env_conf&amp;quot; &amp;amp;&amp;amp; exit -1  &lt;br /&gt;
 source env_run  || echo &amp;quot;problem sourcing env_run&amp;quot; &amp;amp;&amp;amp; exit -1  &lt;br /&gt;
 source env_mach.tcs || echo &amp;quot;problem sourcing env_mach.tcs&amp;quot; &amp;amp;&amp;amp; exit -1&lt;br /&gt;
&lt;br /&gt;
change&lt;br /&gt;
&lt;br /&gt;
 timex poe -cmdfile poe.cmdfile&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&lt;br /&gt;
 timex poe -cmdfile poe.cmdfile -hfile /project/peltier/guido/runs/T42_tcs/hostfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then run interactively (e.g. ./YOURRUNNAME.run) and use top in another terminal to watch it run.&lt;br /&gt;
&lt;br /&gt;
'''To run interactively on GPC:'''&lt;br /&gt;
&lt;br /&gt;
Set up env_mach.gpc to have the correct number of processors (i.e. no more than the 8 tasks per node times the number of nodes, e.g. 16).&lt;br /&gt;
&lt;br /&gt;
Compile the model on one of the 4 GPC interactive nodes&lt;br /&gt;
&lt;br /&gt;
In env_run in your run script directory, change:&lt;br /&gt;
&lt;br /&gt;
 setenv SETBLD        AUTO            # [AUTO, TRUE, FALSE]&lt;br /&gt;
&lt;br /&gt;
to&lt;br /&gt;
&lt;br /&gt;
 setenv SETBLD        FALSE            # [AUTO, TRUE, FALSE]&lt;br /&gt;
&lt;br /&gt;
Then get on the interactive nodes&lt;br /&gt;
&lt;br /&gt;
 qsub -I -l nodes=2:ppn=8,walltime=1:00:00&lt;br /&gt;
&lt;br /&gt;
and run interactively:&lt;br /&gt;
&lt;br /&gt;
 ./YOURRUNNAME.run&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2680</id>
		<title>Running CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2680"/>
		<updated>2011-02-25T19:06:53Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You can run CCSM3 in interactive mode on GPC without a host file, but only do so for very short compilation/initialization tests. Run the model in the TCS or GPC batch queue.&lt;br /&gt;
&lt;br /&gt;
Create a setup script:&lt;br /&gt;
 #!/bin/csh&lt;br /&gt;
 &lt;br /&gt;
 setenv CCSMROOT /project/ccsm/ccsm3_current&lt;br /&gt;
 setenv SCRATCH /scratch/$USER&lt;br /&gt;
 setenv CASEROOT /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
 setenv MACH gpc&lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -mach $MACH -res T31_gx3v5 -case $CASEROOT&lt;br /&gt;
&lt;br /&gt;
Then:&lt;br /&gt;
&lt;br /&gt;
 cd /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
&lt;br /&gt;
Modify env_mach.gpc, env_run, env_conf to your needs.&lt;br /&gt;
&lt;br /&gt;
Configure and Build:&lt;br /&gt;
&lt;br /&gt;
 ./configure -mach gpc&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.build&lt;br /&gt;
&lt;br /&gt;
Run&lt;br /&gt;
&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.run&lt;br /&gt;
 or qsub ccsm3_t31_gpc.gpc.run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To run interactively on TCS:&lt;br /&gt;
Set up env_mach.tcs to have the correct number of processors (i.e. fewer than the 64 threads on the node, e.g. 16).&lt;br /&gt;
&lt;br /&gt;
After you configure and build, add some environment variables to the .run script (MPI_HOSTFILE, MP_PROCS, MP_NODES):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 #-----------------------------------------------------------------------&lt;br /&gt;
 # Determine necessary environment variables&lt;br /&gt;
 #----------------------------------------------------------------------- &lt;br /&gt;
 &lt;br /&gt;
 cd /project/peltier/guido/runs/T42_tcs &lt;br /&gt;
 setenv MACH tcs&lt;br /&gt;
 setenv PATH /scratch/$LOGNAME/bin:$PATH&lt;br /&gt;
 setenv MP_EUIDEVICE sn_all&lt;br /&gt;
 #setenv LAPI_DEBUG_SLOT_ATT_THRESH 500000&lt;br /&gt;
 setenv MP_RC_USE_LMC yes&lt;br /&gt;
 setenv MP_POLLING_INTERVAL 20000000&lt;br /&gt;
 setenv MP_EAGER_LIMIT 65536&lt;br /&gt;
 setenv MP_BULK_MIN_MSG_SIZE 65537&lt;br /&gt;
 '''setenv MPI_HOSTFILE /project/peltier/guido/runs/T42_tcs/hostfile'''&lt;br /&gt;
 '''setenv MP_PROCS 16'''&lt;br /&gt;
 '''setenv MP_NODES 1'''&lt;br /&gt;
 source env_conf || echo &amp;quot;problem sourcing env_conf&amp;quot; &amp;amp;&amp;amp; exit -1  &lt;br /&gt;
 source env_run  || echo &amp;quot;problem sourcing env_run&amp;quot; &amp;amp;&amp;amp; exit -1  &lt;br /&gt;
 source env_mach.tcs || echo &amp;quot;problem sourcing env_mach.tcs&amp;quot; &amp;amp;&amp;amp; exit -1&lt;br /&gt;
&lt;br /&gt;
change&lt;br /&gt;
&lt;br /&gt;
 timex poe -cmdfile poe.cmdfile&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&lt;br /&gt;
 timex poe -cmdfile poe.cmdfile -hfile /project/peltier/guido/runs/T42_tcs/hostfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then run interactively (e.g. ./YOURRUNNAME.run) and use top in another terminal to watch it run.&lt;br /&gt;
&lt;br /&gt;
To run interactively on GPC:&lt;br /&gt;
&lt;br /&gt;
Set up env_mach.gpc to have the correct number of processors (i.e. no more than the 8 tasks per node times the number of nodes, e.g. 16).&lt;br /&gt;
&lt;br /&gt;
Compile the model on one of the 4 GPC interactive nodes&lt;br /&gt;
&lt;br /&gt;
In env_run in your run script directory, change:&lt;br /&gt;
&lt;br /&gt;
 setenv SETBLD        AUTO            # [AUTO, TRUE, FALSE]&lt;br /&gt;
&lt;br /&gt;
to&lt;br /&gt;
&lt;br /&gt;
 setenv SETBLD        FALSE            # [AUTO, TRUE, FALSE]&lt;br /&gt;
&lt;br /&gt;
Then get on the interactive nodes&lt;br /&gt;
&lt;br /&gt;
 qsub -I -l nodes=2:ppn=8,walltime=1:00:00&lt;br /&gt;
&lt;br /&gt;
and run interactively:&lt;br /&gt;
&lt;br /&gt;
 ./YOURRUNNAME.run&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2679</id>
		<title>Running CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2679"/>
		<updated>2011-02-25T18:58:12Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You can run CCSM3 in interactive mode on GPC without a host file, but only do so for very short compilation/initialization tests. Run the model in the TCS or GPC batch queue.&lt;br /&gt;
&lt;br /&gt;
Create a setup script:&lt;br /&gt;
 #!/bin/csh&lt;br /&gt;
 &lt;br /&gt;
 setenv CCSMROOT /project/ccsm/ccsm3_current&lt;br /&gt;
 setenv SCRATCH /scratch/$USER&lt;br /&gt;
 setenv CASEROOT /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
 setenv MACH gpc&lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -mach $MACH -res T31_gx3v5 -case $CASEROOT&lt;br /&gt;
&lt;br /&gt;
Then:&lt;br /&gt;
&lt;br /&gt;
 cd /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
&lt;br /&gt;
Modify env_mach.gpc, env_run, env_conf to your needs.&lt;br /&gt;
&lt;br /&gt;
Configure and Build:&lt;br /&gt;
&lt;br /&gt;
 ./configure -mach gpc&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.build&lt;br /&gt;
&lt;br /&gt;
Run&lt;br /&gt;
&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.run&lt;br /&gt;
 or qsub ccsm3_t31_gpc.gpc.run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To run interactively on TCS:&lt;br /&gt;
Set up env_mach.tcs to have the correct number of processors (i.e. fewer than the 64 threads on the node, e.g. 16).&lt;br /&gt;
&lt;br /&gt;
After you configure and build, add some environment variables to the .run script:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 #-----------------------------------------------------------------------&lt;br /&gt;
 # Determine necessary environment variables&lt;br /&gt;
 #----------------------------------------------------------------------- &lt;br /&gt;
 &lt;br /&gt;
 cd /project/peltier/guido/runs/T42_tcs &lt;br /&gt;
 setenv MACH tcs&lt;br /&gt;
 setenv PATH /scratch/$LOGNAME/bin:$PATH&lt;br /&gt;
 setenv MP_EUIDEVICE sn_all&lt;br /&gt;
 #setenv LAPI_DEBUG_SLOT_ATT_THRESH 500000&lt;br /&gt;
 setenv MP_RC_USE_LMC yes&lt;br /&gt;
 setenv MP_POLLING_INTERVAL 20000000&lt;br /&gt;
 setenv MP_EAGER_LIMIT 65536&lt;br /&gt;
 setenv MP_BULK_MIN_MSG_SIZE 65537&lt;br /&gt;
 '''setenv MPI_HOSTFILE /project/peltier/guido/runs/T42_tcs/hostfile'''&lt;br /&gt;
 '''setenv MP_PROCS 16'''&lt;br /&gt;
 '''setenv MP_NODES 1'''&lt;br /&gt;
 source env_conf || echo &amp;quot;problem sourcing env_conf&amp;quot; &amp;amp;&amp;amp; exit -1  &lt;br /&gt;
 source env_run  || echo &amp;quot;problem sourcing env_run&amp;quot; &amp;amp;&amp;amp; exit -1  &lt;br /&gt;
 source env_mach.tcs || echo &amp;quot;problem sourcing env_mach.tcs&amp;quot; &amp;amp;&amp;amp; exit -1&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2678</id>
		<title>Running CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2678"/>
		<updated>2011-02-25T18:57:40Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You can run CCSM3 in interactive mode on GPC without a host file, but only do so for very short compilation/initialization tests. Run the model in the TCS or GPC batch queue.&lt;br /&gt;
&lt;br /&gt;
Create a setup script:&lt;br /&gt;
 #!/bin/csh&lt;br /&gt;
 &lt;br /&gt;
 setenv CCSMROOT /project/ccsm/ccsm3_current&lt;br /&gt;
 setenv SCRATCH /scratch/$USER&lt;br /&gt;
 setenv CASEROOT /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
 setenv MACH gpc&lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -mach $MACH -res T31_gx3v5 -case $CASEROOT&lt;br /&gt;
&lt;br /&gt;
Then:&lt;br /&gt;
&lt;br /&gt;
 cd /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
&lt;br /&gt;
Modify env_mach.gpc, env_run, env_conf to your needs.&lt;br /&gt;
&lt;br /&gt;
Configure and Build:&lt;br /&gt;
&lt;br /&gt;
 ./configure -mach gpc&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.build&lt;br /&gt;
&lt;br /&gt;
Run&lt;br /&gt;
&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.run&lt;br /&gt;
 or qsub ccsm3_t31_gpc.gpc.run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To run interactively on TCS:&lt;br /&gt;
Set up env_mach.tcs to have the correct number of processors (i.e. fewer than the 64 threads on the node, e.g. 16).&lt;br /&gt;
&lt;br /&gt;
After you configure and build, add some environment variables to the .run script:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 #-----------------------------------------------------------------------&lt;br /&gt;
 # Determine necessary environment variables&lt;br /&gt;
 #----------------------------------------------------------------------- &lt;br /&gt;
 &lt;br /&gt;
 cd /project/peltier/guido/runs/T42_tcs &lt;br /&gt;
 setenv MACH tcs&lt;br /&gt;
 setenv PATH /scratch/$LOGNAME/bin:$PATH&lt;br /&gt;
 setenv MP_EUIDEVICE sn_all&lt;br /&gt;
 #setenv LAPI_DEBUG_SLOT_ATT_THRESH 500000&lt;br /&gt;
 setenv MP_RC_USE_LMC yes&lt;br /&gt;
 setenv MP_POLLING_INTERVAL 20000000&lt;br /&gt;
 setenv MP_EAGER_LIMIT 65536&lt;br /&gt;
 setenv MP_BULK_MIN_MSG_SIZE 65537&lt;br /&gt;
 '''setenv MPI_HOSTFILE /project/peltier/guido/runs/T42_tcs/hostfile&lt;br /&gt;
 setenv MP_PROCS 16&lt;br /&gt;
 setenv MP_NODES 1&lt;br /&gt;
 '''source env_conf || echo &amp;quot;problem sourcing env_conf&amp;quot; &amp;amp;&amp;amp; exit -1  &lt;br /&gt;
 source env_run  || echo &amp;quot;problem sourcing env_run&amp;quot; &amp;amp;&amp;amp; exit -1  &lt;br /&gt;
 source env_mach.tcs || echo &amp;quot;problem sourcing env_mach.tcs&amp;quot; &amp;amp;&amp;amp; exit -1&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2649</id>
		<title>Running CCSM4</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2649"/>
		<updated>2011-02-15T17:15:58Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''It is important to point out that all updates to the model system will only occur with CESM1.0 updates, not with CCSM4.0. It is also important to note that CCSM4 is a subset of CESM1. Although CESM1 supersedes CCSM4, users can run all CCSM4 experiments from the CESM1 code base.&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The scientifically validated CESM1 runs are found in the list below (including a complete list of the model resolutions):&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/cesm1_current/scripts/create_newcase --list&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
                             CESM1.0 README&lt;br /&gt;
  &lt;br /&gt;
 For both a quick start as well as a detailed summary of creating and running &lt;br /&gt;
 a CESM model case, see the CESM1.0 User's Guide at&lt;br /&gt;
 http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
  &lt;br /&gt;
 IMPORTANT INFORMATION ABOUT SCIENTIFIC VALIDATION&lt;br /&gt;
  &lt;br /&gt;
   CESM1.0 has the flexibility to configure cases with many different &lt;br /&gt;
   combinations of component models, grids, and model settings, but this &lt;br /&gt;
   version of CESM has only been validated scientifically for the following &lt;br /&gt;
   fully active configurations:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_RAMPCO2_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CAM5&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_RAMPCO2_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BDRD&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BDRD&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN_CHEM &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN_CHEM&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_WACCM_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_WACCM_CN&lt;br /&gt;
   &lt;br /&gt;
      T31_gx3v7       B_1850_CN&lt;br /&gt;
 &lt;br /&gt;
   If the user is interested in running a &amp;quot;stand-alone&amp;quot; component configuration, &lt;br /&gt;
   the following model configurations have been validated scientifically and &lt;br /&gt;
   have associated diagnostic output as part of the release:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_1.9x2.5    F_2000_WACCM&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CAM5&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CN&lt;br /&gt;
      0.9x1.25_0.9x1.25  F_AMIP_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  I_2000&lt;br /&gt;
      0.9x1.25_gx1v6  I_2000_CN&lt;br /&gt;
 &lt;br /&gt;
      T62_gx1v6       C_NORMAL_YEAR&lt;br /&gt;
 &lt;br /&gt;
   For more information regarding alternative component configurations, &lt;br /&gt;
   please refer to the individual component web pages at&lt;br /&gt;
   http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
  CESM1 RESOLUTIONS:  name (shortname) &lt;br /&gt;
    pt1_pt1 (pt1)  &lt;br /&gt;
    0.23x0.31_0.23x0.31 (f02_f02)  &lt;br /&gt;
    0.23x0.31_gx1v6 (f02_g16)  &lt;br /&gt;
    0.23x0.31_tx0.1v2 (f02_t12)  &lt;br /&gt;
    0.47x0.63_0.47x0.63 (f05_f05)  &lt;br /&gt;
    0.47x0.63_gx1v6 (f05_g16)  &lt;br /&gt;
    0.47x0.63_tx0.1v2 (f05_t12)  &lt;br /&gt;
    0.9x1.25_0.9x1.25 (f09_f09)  &lt;br /&gt;
    0.9x1.25_gx1v6 (f09_g16)  &lt;br /&gt;
    1.9x2.5_1.9x2.5 (f19_f19)  &lt;br /&gt;
    1.9x2.5_gx1v6 (f19_g16)  &lt;br /&gt;
    4x5_4x5 (f45_f45)  &lt;br /&gt;
    4x5_gx3v7 (f45_g37)  &lt;br /&gt;
    T62_gx3v7 (T62_g37)  &lt;br /&gt;
    T62_tx0.1v2 (T62_t12)  &lt;br /&gt;
    T62_gx1v6 (T62_g16)  &lt;br /&gt;
    T31_T31 (T31_T31)  &lt;br /&gt;
    T31_gx3v7 (T31_g37)  &lt;br /&gt;
    T42_T42 (T42_T42)  &lt;br /&gt;
    10x15_10x15 (f10_f10)  &lt;br /&gt;
    ne30np4_1.9x2.5_gx1v6 (ne30_f19_g16)  &lt;br /&gt;
    ne240np4_0.23x0.31_gx1v6 (ne240_f02_g16)  &lt;br /&gt;
    T85_T85 (T85_T85)  &lt;br /&gt;
  &lt;br /&gt;
  COMPSETS:  name (shortname): description &lt;br /&gt;
    A_PRESENT_DAY (A) &lt;br /&gt;
         Description: All data model  &lt;br /&gt;
    A_GLC (AG) &lt;br /&gt;
         Description: All data model plus glc (glacier model)  &lt;br /&gt;
    B_2000 (B) &lt;br /&gt;
         Description: All active components, present day  &lt;br /&gt;
    B_2000_CN (BCN) &lt;br /&gt;
         Description: all active components, present day, with CN (Carbon Nitrogen) in clm  &lt;br /&gt;
    B_1850_CAM5 (B1850C5) &lt;br /&gt;
         Description: All active components, pre-industrial, cam5 physics  &lt;br /&gt;
    B_1850 (B1850) &lt;br /&gt;
         Description: All active components, pre-industrial  &lt;br /&gt;
    B_1850_CN (B1850CN) &lt;br /&gt;
         Description: all active components, pre-industrial, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_2000_CN_CHEM (B2000CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_CN_CHEM (B1850CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_RAMPCO2_CN (B1850RMCN) &lt;br /&gt;
         Description: All active components, pre-industirial with co2 ramp, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000 (B20TR) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient  &lt;br /&gt;
    B_1850-2000_CN (B20TRCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000_CN_CHEM (B20TRCNCHM) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850-2000_CAM5 (B20TRC5) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, cam5 physics  &lt;br /&gt;
    B_2000_GLC (BG) &lt;br /&gt;
         Description: all active components, with active glc  &lt;br /&gt;
    B_2000_TROP_MOZART (BMOZ) &lt;br /&gt;
         Description: All active components, with trop_mozart  &lt;br /&gt;
    B_1850_WACCM (B1850W) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm  &lt;br /&gt;
    B_1850_WACCM_CN (B1850WCN) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm and CN  &lt;br /&gt;
    B_1850-2000_WACCM_CN (B20TRWCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, WACCM with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1955-2005_WACCM_CN (B55TRWCN) &lt;br /&gt;
         Description: All active components, 1955 to 2005 transient, WACCM with daily solar data and SPEs, CLM with CN  &lt;br /&gt;
    B_1850_BGC-BPRP (B1850BPRP) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850_BGC-BDRD (B1850BDRD) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    B_1850-2000_BGC-BPRP (B20TRBPRP) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850-2000_BGC-BDRD (B20TRBDRD) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    C_NORMAL_YEAR_ECOSYS (CECO) &lt;br /&gt;
         Description: Active ocean model with ecosys and with COREv2 normal year forcing  &lt;br /&gt;
    C_NORMAL_YEAR (C) &lt;br /&gt;
         Description: Active ocean model with COREv2 normal year forcing  &lt;br /&gt;
    D_NORMAL_YEAR (D) &lt;br /&gt;
         Description: Active ice model with COREv2 normal year forcing  &lt;br /&gt;
    E_2000 (E) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean, present day  &lt;br /&gt;
    E_2000_GLC (EG) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean and glc, present day  &lt;br /&gt;
    E_1850_CN (E1850CN) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, with CN  &lt;br /&gt;
    E_1850_CAM5 (E1850C5) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, cam5 physics   &lt;br /&gt;
    F_AMIP_CN (FAMIPCN) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol - valid only for 1 degree cam/clm/pres-cice  &lt;br /&gt;
    F_AMIP_CAM5 (FAMIPC5) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol with cam5  &lt;br /&gt;
    F_1850 (F1850) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_1850_CAM5 (F1850C5) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn, cam5 physics  &lt;br /&gt;
    F_2000 (F) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice  &lt;br /&gt;
    F_2000_CAM5 (FC5) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, cam5 physics  &lt;br /&gt;
    F_2000_CN (FCN) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice with CN  &lt;br /&gt;
    F_1850-2000_CN (F20TRCN) &lt;br /&gt;
         Description: 20th Century transient stand-alone cam default, prescribed ocn/ice, with CN  &lt;br /&gt;
    F_2000_GLC (FG) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, glc (glacier model)  &lt;br /&gt;
    F_1850_CN_CHEM (F1850CNCHM) &lt;br /&gt;
         Description: stand-alone cam/clm, pre-industrial, with CN in CLM, super_fast_llnl chem in cam  &lt;br /&gt;
    F_1850_WACCM (F1850W) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_2000_WACCM (FW) &lt;br /&gt;
         Description: present-day cam/clm with prescribed ice/ocn  &lt;br /&gt;
    G_1850_ECOSYS (G1850ECO) &lt;br /&gt;
         Description: 1850 control for pop-ecosystem/cice/datm7/dlnd-rx1  &lt;br /&gt;
    G_NORMAL_YEAR (G) &lt;br /&gt;
         Description: Coupled ocean ice with COREv2 normal year forcing  &lt;br /&gt;
    H_PRESENT_DAY (H) &lt;br /&gt;
         Description: Coupled ocean ice slnd  &lt;br /&gt;
    I_2000 (I) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850 (I1850) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and Satellite phenology (SP), CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_2000_GLC (IG) &lt;br /&gt;
         Description: Active glacier model and active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1948-2004 (I4804) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000 (I8520) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and transient Satellite phenology (SP), and Aerosol deposition from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    I_2000_CN (ICN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850_CN (I1850CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_1948-2004_CN (I4804CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000_CN (I8520CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and transient CN, Aerosol dep from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    S_PRESENT_DAY (S) &lt;br /&gt;
         Description: All stub models plus xatm  &lt;br /&gt;
    X_PRESENT_DAY (X) &lt;br /&gt;
         Description: All dead model  &lt;br /&gt;
    XG_PRESENT_DAY (XG) &lt;br /&gt;
         Description: All dead model and cism  &lt;br /&gt;
  &lt;br /&gt;
  MACHINES:  name (description)&lt;br /&gt;
    tcs (U of T IBM p6, os is AIX, 32 pes/node, batch system is Moab/LoadLeveler) &lt;br /&gt;
    gpc (U of T iDataPlex intel cluster, os is linux, 8 pes/node, batch system is Moab/Torque) &lt;br /&gt;
    bluefire (NCAR IBM p6, os is AIX, 32 pes/node, batch system is LSF) &lt;br /&gt;
    brutus_po (Brutus Linux Cluster ETH (pgi/9.0-1 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_pm (Brutus Linux Cluster ETH (pgi/9.0-1 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_io (Brutus Linux Cluster ETH (intel/10.1.018 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_im (Brutus Linux Cluster ETH (intel/10.1.018 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    edinburgh_lahey (NCAR CGD Linux Cluster (lahey), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_pgi (NCAR CGD Linux Cluster (pgi), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_intel (NCAR CGD Linux Cluster (intel), 8 pes/node, batch system is PBS) &lt;br /&gt;
    franklin (NERSC XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    hadley (UCB Linux Cluster, os is Linux (ia64), batch system is PBS) &lt;br /&gt;
    hopper (NERSC XT5, os is CNL, 8 pes/node, batch system is PBS) &lt;br /&gt;
    intrepid (ANL IBM BG/P, os is BGP, 4 pes/node, batch system is cobalt) &lt;br /&gt;
    jaguar (ORNL XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    jaguarpf (ORNL XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    kraken (NICS/UT/teragrid XT5, os is CNL, 12 pes/node) &lt;br /&gt;
    lynx_pgi (NCAR XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    midnight (ARSC Sun Cluster, os is Linux (pgi), batch system is PBS) &lt;br /&gt;
    pleiades (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 3.0 GHz Harpertown processors, 8 pes/node and 8 GB of memory, batch system is PBS) &lt;br /&gt;
    pleiades_wes (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.93 GHz Westmere processors, 12 pes/node and 24 GB of memory, batch system is PBS) &lt;br /&gt;
    prototype_atlas (LLNL Linux Cluster, Linux (pgi), 8 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_hera (LLNL Linux Cluster, Linux (pgi), 16 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_columbia (NASA Ames Linux Cluster, Linux (ia64), 2 pes/node, batch system is PBS) &lt;br /&gt;
    prototype_frost (NCAR IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_nyblue (SUNY IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_ranger (TACC Linux Cluster, Linux (pgi), 1 pes/node, batch system is SGE) &lt;br /&gt;
    prototype_ubgl (LLNL IBM BG/L, os is BGL, 2 pes/node, batch system is Moab) &lt;br /&gt;
    generic_ibm (generic ibm power system, os is AIX, batch system is LoadLeveler, user-defined) &lt;br /&gt;
    generic_xt (generic CRAY XT, os is CNL, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pgi (generic linux (pgi), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_lahey (generic linux (lahey), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_intel (generic linux (intel), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pathscale (generic linux (pathscale), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Initializing the Model Setup:''&lt;br /&gt;
&lt;br /&gt;
The initial setup of the model on TCS is simplified by the short script below:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/$USER&lt;br /&gt;
 export MACH=tcs&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f19_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES -scratchroot $SCRATCH &lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' CCSMROOT should point to the model code version in ''/project/ccsm'' whose name ends in &amp;quot;_current&amp;quot;. The same applies to CESM1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This script creates an 1850 control run with all components of the model fully active and carbon-nitrogen cycling in the land component. The resolution is 1.9x2.5 in the atmosphere and x1 in the ocean. The case is created in the ~/runs directory.&lt;br /&gt;
&lt;br /&gt;
For valid component sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/a2967.html &lt;br /&gt;
For information on resolution sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/x42.html#ccsm_grids &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Load Balancing:''&lt;br /&gt;
&lt;br /&gt;
For NCAR bluefire load-balancing tables for a select set of simulations, see:&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/timing/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/timing/&lt;br /&gt;
&lt;br /&gt;
 cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
&lt;br /&gt;
edit env_mach_pes.xml&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ATM&amp;quot;   value=&amp;quot;448&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ATM&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_LND&amp;quot;   value=&amp;quot;320&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_LND&amp;quot;   value=&amp;quot;160&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ICE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ICE&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_OCN&amp;quot;   value=&amp;quot;256&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_OCN&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_CPL&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_CPL&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_GLC&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once this file has been modified, you can configure the case:&lt;br /&gt;
&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 &lt;br /&gt;
You will notice that configure changes the file that you just edited; there you can see the total number of processors used by the simulation (704, or 11 nodes, in this case):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;TOTALPES&amp;quot;   value=&amp;quot;704&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_LEVEL&amp;quot;   value=&amp;quot;1r&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_PCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_TCOST&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_ESTCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
'''Note:''' Rather than modifying the load balancing manually, NCAR provides the ''xmlchange'' script in your $CASE directory, which lets you change the CPU allocation of individual components without editing the env_mach_pes.xml file directly:&lt;br /&gt;
&lt;br /&gt;
As an example of a different configuration, we might want 8 CPUs dedicated to running the OCN component continuously, with the remaining 24 CPUs running ATM, and with LND, ICE and CPL each on 8 of those 24. To set this up, enter:&lt;br /&gt;
&lt;br /&gt;
 configure -cleanmach&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 24&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 8 &lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_CPL -val 16&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_OCN -val 24&lt;br /&gt;
 configure -case&lt;br /&gt;
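&lt;br /&gt;
With those settings, the 32 tasks are laid out roughly as follows (a sketch inferred from the NTASKS/ROOTPE values above; LND's root processor is assumed to stay at the default of 0). Components that share a processor range run sequentially on it, while OCN runs concurrently on its own processors:&lt;br /&gt;
&lt;br /&gt;
 tasks  0-23 : ATM&lt;br /&gt;
 tasks  0-7  : LND&lt;br /&gt;
 tasks  8-15 : ICE&lt;br /&gt;
 tasks 16-23 : CPL&lt;br /&gt;
 tasks 24-31 : OCN (concurrent)&lt;br /&gt;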
&lt;br /&gt;
Then rebuild and resubmit.&lt;br /&gt;
&lt;br /&gt;
The task geometry used by LoadLeveler on TCS is located in the file ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run.&lt;br /&gt;
&lt;br /&gt;
Ensure that the proper modules are loaded:&lt;br /&gt;
&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) ncl/5.1.1               3) netcdf/4.0.1_nc3        5) xlf/13.1&lt;br /&gt;
  2) nco/3.9.6               4) parallel-netcdf/1.1.1   6) vacpp/11.1&lt;br /&gt;
&lt;br /&gt;
Now compile the model with:&lt;br /&gt;
&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
One of the pre-processing steps in this build sequence is to fetch input data sets (initial and boundary conditions) from the NCAR SVN server. If there is a large amount of initial-condition data to transfer from the NCAR repository, you may want to do this yourself on the ''datamover1'' node before you build, since ''datamover1'' has a high-bandwidth connection to the outside.&lt;br /&gt;
'''Note:''' Most of the input data is already on /project/ccsm, so this step is not required for the more common configurations.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; ssh datamover1&lt;br /&gt;
 Last login: Wed Jul  7 16:38:14 2010 from tcs-f11n06-gpfs&lt;br /&gt;
 user@gpc-logindm01:~&amp;gt;cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;./check_input_data -inputdata /project/ccsm/inputdata -export&lt;br /&gt;
 Input Data List Files Found:&lt;br /&gt;
 ./Buildconf/cam.input_data_list&lt;br /&gt;
 ./Buildconf/clm.input_data_list&lt;br /&gt;
 ./Buildconf/cice.input_data_list&lt;br /&gt;
 ./Buildconf/pop2.input_data_list&lt;br /&gt;
 ./Buildconf/cpl.input_data_list&lt;br /&gt;
 export https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc /project/ccsm/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc ..... success&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Setting the Simulation Length:''&lt;br /&gt;
&lt;br /&gt;
The length of time for which the model runs can be set by editing ''env_run.xml'' at any time in the setup sequence:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--&amp;quot;if RESUBMIT is greater than 0, then case will automatically resubmit (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;RESUBMIT&amp;quot;   value=&amp;quot;10&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjunction with STOP_N and STOP_DATE, valid values: none,never,nsteps,nstep,nseconds,nsecond,nminutes,nminute,nhours,nhour,ndays,nday,nmonths,nmonth,nyears,nyear,date,ifdays0,end (char) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_OPTION&amp;quot;   value=&amp;quot;nmonths&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjunction with STOP_OPTION and STOP_DATE (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_N&amp;quot;   value=&amp;quot;12&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
These settings tell the model to run in segments of one model year (12 months), checkpointing at the end of each segment and automatically resubmitting itself 10 times.&lt;br /&gt;
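&lt;br /&gt;
The same run-length settings can also be applied with the ''xmlchange'' utility used earlier for env_mach_pes.xml, instead of editing the file by hand; for example, to run one model year per job submission and resubmit 10 times:&lt;br /&gt;
&lt;br /&gt;
 xmlchange -file env_run.xml -id STOP_OPTION -val nmonths&lt;br /&gt;
 xmlchange -file env_run.xml -id STOP_N -val 12&lt;br /&gt;
 xmlchange -file env_run.xml -id RESUBMIT -val 10&lt;br /&gt;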
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on the Distributed System (TCS):''&lt;br /&gt;
&lt;br /&gt;
The model is now ready to be submitted to the TCS batch queue&lt;br /&gt;
&lt;br /&gt;
 llsubmit ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Once the model has run through a checkpoint, timing information on the simulation can be found in:&lt;br /&gt;
 ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16/timing&lt;br /&gt;
&lt;br /&gt;
Standard output from the model can be followed during runtime by going to:&lt;br /&gt;
 /scratch/guido/ccsm4_comp-B_1850_CN_res-f19_g16/run&lt;br /&gt;
and running&lt;br /&gt;
 tail -f &amp;lt;component_log_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The model will archive the NetCDF output in:&lt;br /&gt;
 /scratch/$USER/archive&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Cloning Simulations''&lt;br /&gt;
&lt;br /&gt;
A useful tool for setting up multiple runs quickly is the create_clone command, which clones an existing case so that there is no need to run the setup script above every time:&lt;br /&gt;
 cd ~/runs&lt;br /&gt;
 /project/ccsm/ccsm4_0_current/scripts/create_clone -clone ccsm4_comp-B_1850_CN_res-f09_g16 -case ccsm4_comp-B_1850_CN_res-f09_g16_clone -v&lt;br /&gt;
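&lt;br /&gt;
A cloned case still needs to be configured and built before it can be submitted; the steps are presumably the same as for the original case, following the $CASE.$MACH naming pattern of the scripts above, e.g.:&lt;br /&gt;
&lt;br /&gt;
 cd ~/runs/ccsm4_comp-B_1850_CN_res-f09_g16_clone&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f09_g16_clone.tcs.build&lt;br /&gt;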
&lt;br /&gt;
After changing the load balancing (env_mach_pes.xml) or other parameters in an existing simulation setup, do a clean build to make sure the model is rebuilt properly:&lt;br /&gt;
 ./configure -cleanmach&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.clean_build&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on GPC''&lt;br /&gt;
&lt;br /&gt;
The setup script is almost identical:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/guido&lt;br /&gt;
 export MACH=gpc&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f09_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4gpc_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
To load balance and run the model, follow the steps above.&lt;br /&gt;
The env_mach_pes.xml configuration file needs to be modified as follows:&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
Use qsub to submit the model to the GPC cluster:&lt;br /&gt;
 qsub ccsm4gpc_comp-B_1850_CN_res-f09_g16.gpc.run&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2648</id>
		<title>Running CCSM4</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2648"/>
		<updated>2011-02-15T17:14:47Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''It is important to point out that all updates to the model system will only occur with CESM1.0 updates, not with CCSM4.0. It is also important to note that CCSM4 is a subset of CESM1. Although CESM1 supersedes CCSM4, users can run all CCSM4 experiments from the CESM1 code base.&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The scientifically validated CESM1 runs are found in the list below (including a complete list of the model resolutions):&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/cesm1_current/scripts/create_newcase --list&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
                             CESM1.0 README&lt;br /&gt;
  &lt;br /&gt;
 For both a quick start as well as a detailed summary of creating and running &lt;br /&gt;
 a CESM model case, see the CESM1.0 User's Guide at&lt;br /&gt;
 http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
  &lt;br /&gt;
 IMPORTANT INFORMATION ABOUT SCIENTIFIC VALIDATION&lt;br /&gt;
  &lt;br /&gt;
   CESM1.0 has the flexibility to configure cases with many different &lt;br /&gt;
   combinations of component models, grids, and model settings, but this &lt;br /&gt;
   version of CESM has only been validated scientifically for the following &lt;br /&gt;
   fully active configurations:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_RAMPCO2_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CAM5&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_RAMPCO2_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BDRD&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BDRD&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN_CHEM &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN_CHEM&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_WACCM_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_WACCM_CN&lt;br /&gt;
   &lt;br /&gt;
      T31_gx3v7       B_1850_CN&lt;br /&gt;
 &lt;br /&gt;
   If the user is interested in running a &amp;quot;stand-alone&amp;quot; component configuration, &lt;br /&gt;
   the following model configurations have been validated scientifically and &lt;br /&gt;
   have associated diagnostic output as part of the release:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_1.9x2.5    F_2000_WACCM&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CAM5&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CN&lt;br /&gt;
      0.9x1.25_0.9x1.25  F_AMIP_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  I_2000&lt;br /&gt;
      0.9x1.25_gx1v6  I_2000_CN&lt;br /&gt;
 &lt;br /&gt;
      T62_gx1v6       C_NORMAL_YEAR&lt;br /&gt;
 &lt;br /&gt;
   For more information regarding alternative component configurations, &lt;br /&gt;
   please refer to the individual component web pages at&lt;br /&gt;
   http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
  CESM1 RESOLUTIONS:  name (shortname) &lt;br /&gt;
    pt1_pt1 (pt1)  &lt;br /&gt;
    0.23x0.31_0.23x0.31 (f02_f02)  &lt;br /&gt;
    0.23x0.31_gx1v6 (f02_g16)  &lt;br /&gt;
    0.23x0.31_tx0.1v2 (f02_t12)  &lt;br /&gt;
    0.47x0.63_0.47x0.63 (f05_f05)  &lt;br /&gt;
    0.47x0.63_gx1v6 (f05_g16)  &lt;br /&gt;
    0.47x0.63_tx0.1v2 (f05_t12)  &lt;br /&gt;
    0.9x1.25_0.9x1.25 (f09_f09)  &lt;br /&gt;
    0.9x1.25_gx1v6 (f09_g16)  &lt;br /&gt;
    1.9x2.5_1.9x2.5 (f19_f19)  &lt;br /&gt;
    1.9x2.5_gx1v6 (f19_g16)  &lt;br /&gt;
    4x5_4x5 (f45_f45)  &lt;br /&gt;
    4x5_gx3v7 (f45_g37)  &lt;br /&gt;
    T62_gx3v7 (T62_g37)  &lt;br /&gt;
    T62_tx0.1v2 (T62_t12)  &lt;br /&gt;
    T62_gx1v6 (T62_g16)  &lt;br /&gt;
    T31_T31 (T31_T31)  &lt;br /&gt;
    T31_gx3v7 (T31_g37)  &lt;br /&gt;
    T42_T42 (T42_T42)  &lt;br /&gt;
    10x15_10x15 (f10_f10)  &lt;br /&gt;
    ne30np4_1.9x2.5_gx1v6 (ne30_f19_g16)  &lt;br /&gt;
    ne240np4_0.23x0.31_gx1v6 (ne240_f02_g16)  &lt;br /&gt;
    T85_T85 (T85_T85)  &lt;br /&gt;
  &lt;br /&gt;
  COMPSETS:  name (shortname): description &lt;br /&gt;
    A_PRESENT_DAY (A) &lt;br /&gt;
         Description: All data model  &lt;br /&gt;
    A_GLC (AG) &lt;br /&gt;
         Description: All data model plus glc (glacier model)  &lt;br /&gt;
    B_2000 (B) &lt;br /&gt;
         Description: All active components, present day  &lt;br /&gt;
    B_2000_CN (BCN) &lt;br /&gt;
         Description: all active components, present day, with CN (Carbon Nitrogen) in clm  &lt;br /&gt;
    B_1850_CAM5 (B1850C5) &lt;br /&gt;
         Description: All active components, pre-industrial, cam5 physics  &lt;br /&gt;
    B_1850 (B1850) &lt;br /&gt;
         Description: All active components, pre-industrial  &lt;br /&gt;
    B_1850_CN (B1850CN) &lt;br /&gt;
         Description: all active components, pre-industrial, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_2000_CN_CHEM (B2000CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_CN_CHEM (B1850CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_RAMPCO2_CN (B1850RMCN) &lt;br /&gt;
         Description: All active components, pre-industrial with co2 ramp, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000 (B20TR) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient  &lt;br /&gt;
    B_1850-2000_CN (B20TRCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000_CN_CHEM (B20TRCNCHM) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850-2000_CAM5 (B20TRC5) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, cam5 physics  &lt;br /&gt;
    B_2000_GLC (BG) &lt;br /&gt;
         Description: all active components, with active glc  &lt;br /&gt;
    B_2000_TROP_MOZART (BMOZ) &lt;br /&gt;
         Description: All active components, with trop_mozart  &lt;br /&gt;
    B_1850_WACCM (B1850W) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm  &lt;br /&gt;
    B_1850_WACCM_CN (B1850WCN) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm and CN  &lt;br /&gt;
    B_1850-2000_WACCM_CN (B20TRWCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, WACCM with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1955-2005_WACCM_CN (B55TRWCN) &lt;br /&gt;
         Description: All active components, 1955 to 2005 transient, WACCM with daily solar data and SPEs, CLM with CN  &lt;br /&gt;
    B_1850_BGC-BPRP (B1850BPRP) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850_BGC-BDRD (B1850BDRD) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    B_1850-2000_BGC-BPRP (B20TRBPRP) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850-2000_BGC-BDRD (B20TRBDRD) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    C_NORMAL_YEAR_ECOSYS (CECO) &lt;br /&gt;
         Description: Active ocean model with ecosys and with COREv2 normal year forcing  &lt;br /&gt;
    C_NORMAL_YEAR (C) &lt;br /&gt;
         Description: Active ocean model with COREv2 normal year forcing  &lt;br /&gt;
    D_NORMAL_YEAR (D) &lt;br /&gt;
         Description: Active ice model with COREv2 normal year forcing  &lt;br /&gt;
    E_2000 (E) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean, present day  &lt;br /&gt;
    E_2000_GLC (EG) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean and glc, present day  &lt;br /&gt;
    E_1850_CN (E1850CN) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, with CN  &lt;br /&gt;
    E_1850_CAM5 (E1850C5) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, cam5 physics   &lt;br /&gt;
    F_AMIP_CN (FAMIPCN) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol - valid only for 1 degree cam/clm/pres-cice  &lt;br /&gt;
    F_AMIP_CAM5 (FAMIPC5) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol with cam5  &lt;br /&gt;
    F_1850 (F1850) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_1850_CAM5 (F1850C5) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn, cam5 physics  &lt;br /&gt;
    F_2000 (F) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice  &lt;br /&gt;
    F_2000_CAM5 (FC5) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, cam5 physics  &lt;br /&gt;
    F_2000_CN (FCN) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice with CN  &lt;br /&gt;
    F_1850-2000_CN (F20TRCN) &lt;br /&gt;
         Description: 20th Century transient stand-alone cam default, prescribed ocn/ice, with CN  &lt;br /&gt;
    F_2000_GLC (FG) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, glc (glacier model)  &lt;br /&gt;
    F_1850_CN_CHEM (F1850CNCHM) &lt;br /&gt;
         Description: stand-alone cam/clm, pre-industrial, with CN in CLM, super_fast_llnl chem in cam  &lt;br /&gt;
    F_1850_WACCM (F1850W) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_2000_WACCM (FW) &lt;br /&gt;
         Description: present-day cam/clm with prescribed ice/ocn  &lt;br /&gt;
    G_1850_ECOSYS (G1850ECO) &lt;br /&gt;
         Description: 1850 control for pop-ecosystem/cice/datm7/dlnd-rx1  &lt;br /&gt;
    G_NORMAL_YEAR (G) &lt;br /&gt;
         Description: Coupled ocean ice with COREv2 normal year forcing  &lt;br /&gt;
    H_PRESENT_DAY (H) &lt;br /&gt;
         Description: Coupled ocean ice slnd  &lt;br /&gt;
    I_2000 (I) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850 (I1850) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and Satellite phenology (SP), CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_2000_GLC (IG) &lt;br /&gt;
         Description: Active glacier model and active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1948-2004 (I4804) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000 (I8520) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and transient Satellite phenology (SP), and Aerosol deposition from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    I_2000_CN (ICN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850_CN (I1850CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_1948-2004_CN (I4804CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000_CN (I8520CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and transient CN, Aerosol dep from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    S_PRESENT_DAY (S) &lt;br /&gt;
         Description: All stub models plus xatm  &lt;br /&gt;
    X_PRESENT_DAY (X) &lt;br /&gt;
         Description: All dead model  &lt;br /&gt;
    XG_PRESENT_DAY (XG) &lt;br /&gt;
         Description: All dead model and cism  &lt;br /&gt;
  &lt;br /&gt;
  MACHINES:  name (description)&lt;br /&gt;
    tcs (U of T IBM p6, os is AIX, 32 pes/node, batch system is Moab/LoadLeveler) &lt;br /&gt;
    gpc (U of T iDataPlex intel cluster, os is linux, 8 pes/node, batch system is Moab/Torque) &lt;br /&gt;
    bluefire (NCAR IBM p6, os is AIX, 32 pes/node, batch system is LSF) &lt;br /&gt;
    brutus_po (Brutus Linux Cluster ETH (pgi/9.0-1 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_pm (Brutus Linux Cluster ETH (pgi/9.0-1 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_io (Brutus Linux Cluster ETH (intel/10.1.018 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_im (Brutus Linux Cluster ETH (intel/10.1.018 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    edinburgh_lahey (NCAR CGD Linux Cluster (lahey), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_pgi (NCAR CGD Linux Cluster (pgi), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_intel (NCAR CGD Linux Cluster (intel), 8 pes/node, batch system is PBS) &lt;br /&gt;
    franklin (NERSC XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    hadley (UCB Linux Cluster, os is Linux (ia64), batch system is PBS) &lt;br /&gt;
    hopper (NERSC XT5, os is CNL, 8 pes/node, batch system is PBS) &lt;br /&gt;
    intrepid (ANL IBM BG/P, os is BGP, 4 pes/node, batch system is cobalt) &lt;br /&gt;
    jaguar (ORNL XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    jaguarpf (ORNL XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    kraken (NICS/UT/teragrid XT5, os is CNL, 12 pes/node) &lt;br /&gt;
    lynx_pgi (NCAR XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    midnight (ARSC Sun Cluster, os is Linux (pgi), batch system is PBS) &lt;br /&gt;
    pleiades (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 3.0 GHz Harpertown processors, 8 pes/node and 8 GB of memory, batch system is PBS) &lt;br /&gt;
    pleiades_wes (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.93 GHz Westmere processors, 12 pes/node and 24 GB of memory, batch system is PBS) &lt;br /&gt;
    prototype_atlas (LLNL Linux Cluster, Linux (pgi), 8 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_hera (LLNL Linux Cluster, Linux (pgi), 16 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_columbia (NASA Ames Linux Cluster, Linux (ia64), 2 pes/node, batch system is PBS) &lt;br /&gt;
    prototype_frost (NCAR IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_nyblue (SUNY IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_ranger (TACC Linux Cluster, Linux (pgi), 1 pes/node, batch system is SGE) &lt;br /&gt;
    prototype_ubgl (LLNL IBM BG/L, os is BGL, 2 pes/node, batch system is Moab) &lt;br /&gt;
    generic_ibm (generic ibm power system, os is AIX, batch system is LoadLeveler, user-defined) &lt;br /&gt;
    generic_xt (generic CRAY XT, os is CNL, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pgi (generic linux (pgi), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_lahey (generic linux (lahey), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_intel (generic linux (intel), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pathscale (generic linux (pathscale), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Initializing the Model Setup:''&lt;br /&gt;
&lt;br /&gt;
The initial setup of the model on TCS is simplified with the short script below&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/$USER&lt;br /&gt;
 export MACH=tcs&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f19_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES -scratchroot $SCRATCH &lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' CCSMROOT should point to the model code version in ''/project/ccsm'' with the &amp;quot;_current&amp;quot; suffix. The same applies to CESM1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This script creates an 1850 control run with all model components fully active and carbon-nitrogen (CN) cycling in the land component. The resolution is 1.9x2.5 in the atmosphere and x1 in the ocean. The case is created in the ~/runs directory.&lt;br /&gt;
&lt;br /&gt;
For valid component sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/a2967.html &lt;br /&gt;
For information on resolution sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/x42.html#ccsm_grids &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Load Balancing:''&lt;br /&gt;
&lt;br /&gt;
For the NCAR bluefire load-balancing tables for a select set of simulations, see:&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/timing/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/timing/&lt;br /&gt;
&lt;br /&gt;
 cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
&lt;br /&gt;
edit env_mach_pes.xml&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ATM&amp;quot;   value=&amp;quot;448&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ATM&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_LND&amp;quot;   value=&amp;quot;320&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_LND&amp;quot;   value=&amp;quot;160&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ICE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ICE&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_OCN&amp;quot;   value=&amp;quot;256&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_OCN&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_CPL&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_CPL&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_GLC&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once this file is modified you can configure the case:&lt;br /&gt;
&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 &lt;br /&gt;
'''Load balancing:''' see http://www.cesm.ucar.edu/models/cesm1.0/timing/ for guidance when editing env_mach_pes.xml.&lt;br /&gt;
&lt;br /&gt;
You will notice that configure will change the file you just edited, and you can see the total number of processors used by the simulation (704, or 11 nodes, in this case):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;TOTALPES&amp;quot;   value=&amp;quot;704&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_LEVEL&amp;quot;   value=&amp;quot;1r&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_PCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_TCOST&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_ESTCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
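As a quick sanity check, the node count can be derived from these values. A minimal shell sketch, using the TOTALPES and PES_PER_NODE values from the excerpt above:&lt;br /&gt;

```shell
# Values copied from the env_mach_pes.xml excerpt above (TCS packs
# 64 tasks per node here via SMT).
TOTALPES=704
PES_PER_NODE=64

# Round up: nodes = ceil(TOTALPES / PES_PER_NODE)
NODES=$(( (TOTALPES + PES_PER_NODE - 1) / PES_PER_NODE ))
echo "$NODES"
```
&lt;br /&gt;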
&lt;br /&gt;
'''Note:''' Rather than modifying the load balancing manually, NCAR provides a script in your case directory that lets you change each component's CPU allocation without editing the env_mach_pes.xml file by hand:&lt;br /&gt;
&lt;br /&gt;
To try a different configuration, we might want 8 CPUs running the OCN component continuously, with the remaining 24 CPUs running ATM, and LND, ICE and CPL each on 8 of those CPUs. To set this up, enter:&lt;br /&gt;
&lt;br /&gt;
 configure -cleanmach&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 24&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 8 &lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_CPL -val 16&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_OCN -val 24&lt;br /&gt;
 configure -case&lt;br /&gt;
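Before running configure it can help to verify that this layout actually fits in the intended 32 tasks. A minimal sketch, with root:ntasks pairs copied from the xmlchange commands above (LND is assumed to keep the default root of 0):&lt;br /&gt;

```shell
# root:ntasks pairs for ATM, LND, ICE, CPL, OCN from the commands above.
TOTAL=0
for spec in 0:24 0:8 8:8 16:8 24:8 ; do
  root=${spec%%:*}
  ntasks=${spec##*:}
  end=$(( root + ntasks ))
  # The highest root+ntasks over all components is the total PE count.
  if [ "$end" -gt "$TOTAL" ]; then
    TOTAL=$end
  fi
done
echo "$TOTAL"
```
&lt;br /&gt;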
&lt;br /&gt;
Then rebuild and resubmit.&lt;br /&gt;
&lt;br /&gt;
The task geometry used by loadleveler on TCS is located in the file: ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Ensure that the proper modules are loaded:&lt;br /&gt;
&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) ncl/5.1.1               3) netcdf/4.0.1_nc3        5) xlf/13.1&lt;br /&gt;
  2) nco/3.9.6               4) parallel-netcdf/1.1.1   6) vacpp/11.1&lt;br /&gt;
&lt;br /&gt;
Now compile the model with:&lt;br /&gt;
&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
One of the pre-processing steps in this build sequence is to fetch input data sets (initial and boundary conditions) from the NCAR SVN server. If there is a large amount of initial-condition data to transfer from the NCAR repository, you may want to do this yourself before you build, on the ''datamover1'' node, which has a high-bandwidth connection to the outside.&lt;br /&gt;
Note: Most of the input data is already on /project/ccsm, so this step will not be required for the more common configurations.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; ssh datamover1&lt;br /&gt;
 Last login: Wed Jul  7 16:38:14 2010 from tcs-f11n06-gpfs&lt;br /&gt;
 user@gpc-logindm01:~&amp;gt;cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;./check_input_data -inputdata /project/ccsm/inputdata -export&lt;br /&gt;
 Input Data List Files Found:&lt;br /&gt;
 ./Buildconf/cam.input_data_list&lt;br /&gt;
 ./Buildconf/clm.input_data_list&lt;br /&gt;
 ./Buildconf/cice.input_data_list&lt;br /&gt;
 ./Buildconf/pop2.input_data_list&lt;br /&gt;
 ./Buildconf/cpl.input_data_list&lt;br /&gt;
 export https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc /project/ccsm/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc ..... success&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Setting the Simulation Length:''&lt;br /&gt;
&lt;br /&gt;
The length of the model run can be set by editing ''env_run.xml'' at any time in the setup sequence:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--&amp;quot;if RESUBMIT is greater than 0, then case will automatically resubmit (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;RESUBMIT&amp;quot;   value=&amp;quot;10&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjuction with STOP_N and STOP_DATE, valid values: none,never,nsteps,nstep,nseconds,nsecond,nminutes,nminute,nhours,nhour,ndays,nday,nmonths,nmonth,nyears,nyear,date,ifdays0,end (char) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_OPTION&amp;quot;   value=&amp;quot;nmonths&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjuction with STOP_OPTION and STOP_DATE (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_N&amp;quot;   value=&amp;quot;12&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
These settings tell the model to checkpoint after each model year (12 months) and to resubmit itself automatically (RESUBMIT=10), so the run continues for a further ten one-year segments.&lt;br /&gt;
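The arithmetic behind these settings can be sketched as follows, assuming the initial submission plus RESUBMIT automatic resubmissions each run STOP_N months:&lt;br /&gt;

```shell
# Hypothetical illustration of the run-length settings above.
STOP_N=12      # months simulated per submission
RESUBMIT=10    # automatic resubmissions after the initial run
TOTAL_MONTHS=$(( (RESUBMIT + 1) * STOP_N ))
echo "$TOTAL_MONTHS"
```
&lt;br /&gt;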
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on the Distributed System (TCS):''&lt;br /&gt;
&lt;br /&gt;
The model is now ready to be submitted to the TCS batch queue:&lt;br /&gt;
&lt;br /&gt;
 llsubmit ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Once the model has run through a checkpoint, timing information for the simulation will be found in:&lt;br /&gt;
 ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16/timing&lt;br /&gt;
&lt;br /&gt;
Standard output from the model can be followed during runtime by going to:&lt;br /&gt;
 /scratch/$USER/ccsm4_comp-B_1850_CN_res-f19_g16/run&lt;br /&gt;
and running&lt;br /&gt;
 tail -f &amp;lt;component_log_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The model will archive the NetCDF output in:&lt;br /&gt;
 /scratch/$USER/archive&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Cloning Simulations''&lt;br /&gt;
&lt;br /&gt;
A useful command for setting up multiple runs quickly is create_clone, which clones an existing case so there is no need to run the setup script above every time:&lt;br /&gt;
 cd ~/runs&lt;br /&gt;
 /project/ccsm/ccsm4_0_current/scripts/create_clone -clone ccsm4_comp-B_1850_CN_res-f09_g16 -case ccsm4_comp-B_1850_CN_res-f09_g16_clone -v&lt;br /&gt;
&lt;br /&gt;
To change the load balancing (env_mach_pes.xml) or other parameters in an existing simulation setup, do a clean build to make sure the model is rebuilt properly:&lt;br /&gt;
 ./configure -cleanmach&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.clean_build&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on GPC''&lt;br /&gt;
&lt;br /&gt;
The setup script is almost identical:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/guido&lt;br /&gt;
 export MACH=gpc&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f09_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4gpc_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
To load balance and run the model, follow the steps above.&lt;br /&gt;
The env_mach_pes.xml configuration file needs to be modified as follows:&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
Use qsub to submit the model to the GPC cluster:&lt;br /&gt;
 qsub ccsm4gpc_comp-B_1850_CN_res-f09_g16.gpc.run&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM4&amp;diff=2578</id>
		<title>Installing CCSM4</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM4&amp;diff=2578"/>
		<updated>2011-01-31T20:06:56Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Under construction'''&lt;br /&gt;
&lt;br /&gt;
As a group we have a set of source code located in /project/ccsm on the SciNet GPFS.&lt;br /&gt;
&lt;br /&gt;
Each model version has a link ending in &amp;quot;_current&amp;quot; that designates the subversion currently in use.&lt;br /&gt;
These codes have been modified to run on SciNet:&lt;br /&gt;
Status of compile/run:&lt;br /&gt;
&lt;br /&gt;
Model --------   TCS  --------- GPC&lt;br /&gt;
&lt;br /&gt;
CCSM3 ---------  Yes  --------- Yes&lt;br /&gt;
&lt;br /&gt;
CCSM4 ---------  Yes  --------- Yes&lt;br /&gt;
&lt;br /&gt;
CESM1 ---------  Yes  --------- Yes&lt;br /&gt;
&lt;br /&gt;
If you prefer to install in your own user space directory (e.g. you don't have access to /project/ccsm) you can use the following instructions:&lt;br /&gt;
&lt;br /&gt;
#  DOWNLOAD SOURCE CODE AND DATA&lt;br /&gt;
Download the source code from http://www.ccsm.ucar.edu/models/ccsm4.0/&lt;br /&gt;
The CCSM4.0 User's Guide, available from the CCSM4 web site, gives instructions on getting the input data sets. There is a new script, check_input_data, which checks whether the correct input data sets are available. The build script now calls check_input_data and downloads any missing data sets.&lt;br /&gt;
Note that this means that the first time the build script is run it must be as an interactive job, not as a batch job, as the compute nodes do not have access to an external network.&lt;br /&gt;
The directory that holds the input data is set in the variable DIN_LOC_ROOT_CSMDATA in config_machines.xml below.&lt;br /&gt;
&lt;br /&gt;
# CREATE SPECIFIC FILES&lt;br /&gt;
In $CCSM4_DIR/scripts/ccsm_utils/Machines create the files Macros.tcs, env_machopts.tcs and mkbatch.tcs by copying the equivalent generic_linux_intel or bluefire (equivalent to TCS) files.&lt;br /&gt;
&lt;br /&gt;
EDIT Specific files '''(See diff files below for details)'''&lt;br /&gt;
&lt;br /&gt;
# EDIT Macros.tcs&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# EDIT env_machopts.tcs&lt;br /&gt;
&lt;br /&gt;
depending on which modules are loaded on GPC (TCS doesn't need this)&lt;br /&gt;
#--- set env variables for Macros if needed&lt;br /&gt;
 #setenv NETCDF_PATH something&lt;br /&gt;
 setenv NETCDF_PATH something&lt;br /&gt;
 setenv NETCDF_MODS something&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# EDIT mkbatch.tcs&lt;br /&gt;
You will need the following:&lt;br /&gt;
&lt;br /&gt;
 set mach = tcs&lt;br /&gt;
(copy and modify from bluefire or appropriate machine type)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 setenv OMP_NUM_THREADS ${maxthrds}&lt;br /&gt;
 #mpiexec -n ${maxtasks} ./ccsm.exe &amp;gt;&amp;amp;! ccsm.log.\$LID&lt;br /&gt;
 mpirun -np ${maxtasks} ./ccsm.exe &amp;gt;&amp;amp;! ccsm.log.\$LID&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The value of vmem will have to be adjusted for each job.&lt;br /&gt;
&lt;br /&gt;
# EDIT config_machines.xml&lt;br /&gt;
&lt;br /&gt;
Replace the paths with paths to your own copy of the data and executable. Not all of the above may be necessary for all resolutions and compsets.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here are the diff files for TCS:&lt;br /&gt;
----&lt;br /&gt;
 diff --git a/Macros.bluefire b/Macros.tcs&lt;br /&gt;
 index 9643b77..aec7a13 100644&lt;br /&gt;
 --- a/Macros.bluefire&lt;br /&gt;
 +++ b/Macros.tcs&lt;br /&gt;
 @@ -74,15 +74,15 @@ else&lt;br /&gt;
  endif&lt;br /&gt;
  LD            := $(FC)&lt;br /&gt;
  &lt;br /&gt;
 -NETCDF_PATH   := /usr/local&lt;br /&gt;
 -INC_NETCDF    := $(NETCDF_PATH)/include&lt;br /&gt;
 -LIB_NETCDF    := $(NETCDF_PATH)/lib&lt;br /&gt;
 -MOD_NETCDF    := $(NETCDF_PATH)/include&lt;br /&gt;
 +NETCDF_PATH   := $(SCINET_NETCDF_BASE)&lt;br /&gt;
 +INC_NETCDF    := $(SCINET_NETCDF_INC)&lt;br /&gt;
 +LIB_NETCDF    := $(SCINET_NETCDF_LIB)&lt;br /&gt;
 +MOD_NETCDF    := $(SCINET_NETCDF_INC)&lt;br /&gt;
  &lt;br /&gt;
  INC_MPI       := &lt;br /&gt;
  LIB_MPI       := &lt;br /&gt;
 -PNETCDF_PATH  := /contrib/parallel-netcdf-1.1.1svn&lt;br /&gt;
 -LIB_PNETCDF   := $(PNETCDF_PATH)/lib&lt;br /&gt;
 +PNETCDF_PATH  := $(SCINET_PNETCDF_BASE)&lt;br /&gt;
 +LIB_PNETCDF   := $(SCINET_PNETCDF_LIB)&lt;br /&gt;
  LAPACK_LIBDIR := /usr/local/lib&lt;br /&gt;
  &lt;br /&gt;
  CFLAGS        := $(CPPDEFS) -q64 -O2 &lt;br /&gt;
 @@ -94,7 +94,8 @@ FLAGS_OPT     := -O2 -qstrict -Q&lt;br /&gt;
  LDFLAGS       := -q64 -bdatapsize:64K -bstackpsize:64K -btextpsize:64K &lt;br /&gt;
  AR            := ar&lt;br /&gt;
  MOD_SUFFIX    := mod&lt;br /&gt;
 -CONFIG_SHELL  := /usr/local/bin/bash&lt;br /&gt;
 +CONFIG_SHELL  := /usr/bin/bash&lt;br /&gt;
 +&lt;br /&gt;
  &lt;br /&gt;
  #===============================================================================&lt;br /&gt;
  # Override with user settings&lt;br /&gt;
 @@ -233,7 +234,8 @@ ifeq ($(MODEL),mct)&lt;br /&gt;
    ifeq ($(USE_MPISERIAL),TRUE)&lt;br /&gt;
       CONFIG_ARGS= --enable-mpiserial&lt;br /&gt;
    endif&lt;br /&gt;
 -  CONFIG_ARGS += CC=&amp;quot;/bin/cc&amp;quot; &lt;br /&gt;
 +#  CONFIG_ARGS += CC=&amp;quot;/bin/cc&amp;quot; &lt;br /&gt;
 +  CONFIG_ARGS += CC=$(shell which cc) &lt;br /&gt;
  endif&lt;br /&gt;
  &lt;br /&gt;
  ifeq ($(MODEL),pio)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
 diff --git a/mkbatch.bluefire b/mkbatch.tcs&lt;br /&gt;
 index c4a8e20..0e71df1 100755&lt;br /&gt;
 --- a/mkbatch.bluefire&lt;br /&gt;
 +++ b/mkbatch.tcs&lt;br /&gt;
 @@ -1,6 +1,6 @@&lt;br /&gt;
  #! /bin/tcsh -f&lt;br /&gt;
  &lt;br /&gt;
 -set mach = bluefire&lt;br /&gt;
 +set mach = tcs&lt;br /&gt;
  &lt;br /&gt;
  #################################################################################&lt;br /&gt;
  if ($PHASE == set_batch) then&lt;br /&gt;
 @@ -42,14 +42,7 @@ endif&lt;br /&gt;
  @ batchpes = ${nodes} * ${PES_PER_NODE}&lt;br /&gt;
  ./xmlchange -file env_mach_pes.xml -id BATCH_PES -val ${batchpes}&lt;br /&gt;
  &lt;br /&gt;
 -if ($?ACCOUNT) then&lt;br /&gt;
 -  set account_name = $ACCOUNT&lt;br /&gt;
 -else&lt;br /&gt;
 -  set account_name = `grep -i &amp;quot;^${CCSMUSER}:&amp;quot; /etc/project.ncar | cut -f 1 -d &amp;quot;,&amp;quot; | cut -f 2 -d &amp;quot;:&amp;quot; `&lt;br /&gt;
 -  if (-e ~/.ccsm_proj) then&lt;br /&gt;
 -     set account_name = `head -1 ~/.ccsm_proj`&lt;br /&gt;
 -  endif&lt;br /&gt;
 -endif&lt;br /&gt;
 +set account_name = $USER&lt;br /&gt;
  &lt;br /&gt;
  if ($?QUEUE) then&lt;br /&gt;
    set queue_name = $QUEUE&lt;br /&gt;
 @@ -57,7 +50,7 @@ else&lt;br /&gt;
    set queue_name = regular&lt;br /&gt;
  endif&lt;br /&gt;
  &lt;br /&gt;
 -set time_limit = &amp;quot;0:50&amp;quot;&lt;br /&gt;
 +set time_limit = &amp;quot;24:00&amp;quot;&lt;br /&gt;
  if ($CCSM_ESTCOST &amp;gt; 0) set time_limit = &amp;quot;1:50&amp;quot;&lt;br /&gt;
  if ($CCSM_ESTCOST &amp;gt; 1) set time_limit = &amp;quot;4:00&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
 @@ -66,17 +59,31 @@ cat &amp;gt;! $CASEROOT/${CASE}.${mach}.run &amp;lt;&amp;lt; EOF1&lt;br /&gt;
  #==============================================================================&lt;br /&gt;
  #  This is a CCSM coupled model Load Leveler batch job script for $mach&lt;br /&gt;
  #==============================================================================&lt;br /&gt;
 -#BSUB -n $ntasks_tot&lt;br /&gt;
 -#BSUB -R &amp;quot;span[ptile=${ptile}]&amp;quot;&lt;br /&gt;
 -#BSUB -q ${queue_name}&lt;br /&gt;
 -#BSUB -N&lt;br /&gt;
 -#BSUB -x&lt;br /&gt;
 -#BSUB -a poe&lt;br /&gt;
 -#BSUB -o poe.stdout.%J&lt;br /&gt;
 -#BSUB -e poe.stderr.%J&lt;br /&gt;
 -#BSUB -J $CASE&lt;br /&gt;
 -#BSUB -W ${time_limit}&lt;br /&gt;
 -#BSUB -P ${account_name}&lt;br /&gt;
 +# @ shell = /usr/bin/tcsh&lt;br /&gt;
 +# @ output = poe.stdout.\$(jobid).\$(stepid)&lt;br /&gt;
 +# @ error  = poe.stderr.\$(jobid).\$(stepid) &lt;br /&gt;
 &lt;br /&gt;
 +# @ notification = never&lt;br /&gt;
 +# @ bulkxfer = yes&lt;br /&gt;
 +# @ environment = COPY_ALL&lt;br /&gt;
 +# @ node_usage = not_shared&lt;br /&gt;
 +# @ checkpoint = no&lt;br /&gt;
 +# @ class = verylong&lt;br /&gt;
 +# @ job_type = parallel&lt;br /&gt;
 +# @ job_name = $CASE&lt;br /&gt;
 +## @ wall_clock_limit = ${time_limit}&lt;br /&gt;
 +## @ node = 10&lt;br /&gt;
 +## @ tasks_per_node = 1&lt;br /&gt;
 +# @ task_geometry = {$task_geo}&lt;br /&gt;
 +#&lt;br /&gt;
 +## this is necessary in order to avoid core dumps for batch files&lt;br /&gt;
 +## which can cause the system to be overloaded&lt;br /&gt;
 +# ulimits&lt;br /&gt;
 +# @ core_limit = 0&lt;br /&gt;
 +#=====================================&lt;br /&gt;
 +## necessary to force use of infiniband network for MPI traffic&lt;br /&gt;
 +# @ network.MPI = sn_all,not_shared,US,HIGH&lt;br /&gt;
 +#=====================================&lt;br /&gt;
 +# @ queue&lt;br /&gt;
  &lt;br /&gt;
  setenv LSB_PJL_TASK_GEOMETRY &amp;quot;{$task_geo}&amp;quot;&lt;br /&gt;
  setenv    BIND_THRD_GEOMETRY &amp;quot;$thrd_geo&amp;quot;&lt;br /&gt;
 @@ -99,11 +106,8 @@ echo &amp;quot;\`date\` -- CSM EXECUTION BEGINS HERE&amp;quot;&lt;br /&gt;
   &lt;br /&gt;
  setenv NTHRDS \$BIND_THRD_GEOMETRY&lt;br /&gt;
  setenv MP_LABELIO yes&lt;br /&gt;
 -if (\$USE_MPISERIAL == &amp;quot;FALSE&amp;quot;) then&lt;br /&gt;
 -   mpirun.lsf /contrib/bin/ccsm_launch /contrib/bin/job_memusage.exe ./ccsm.exe &amp;gt;&amp;amp;! ccsm.log.\$LID&lt;br /&gt;
 -else&lt;br /&gt;
 -                                       /contrib/bin/job_memusage.exe ./ccsm.exe &amp;gt;&amp;amp;! ccsm.log.\$LID&lt;br /&gt;
 -endif&lt;br /&gt;
 +&lt;br /&gt;
 +/usr/bin/poe /project/ccsm/bin/ccsm_launch ./ccsm.exe &amp;gt;&amp;amp;! ccsm.log.\$LID&lt;br /&gt;
  &lt;br /&gt;
  wait&lt;br /&gt;
  echo &amp;quot;\`date\` -- CSM EXECUTION HAS FINISHED&amp;quot; &lt;br /&gt;
 @@ -130,7 +134,7 @@ endif&lt;br /&gt;
  touch ${CASEROOT}/${CASE}.${mach}.l_archive&lt;br /&gt;
  chmod 775 ${CASEROOT}/${CASE}.${mach}.l_archive&lt;br /&gt;
  &lt;br /&gt;
 -set account_name = `grep -i &amp;quot;^${CCSMUSER}:&amp;quot; /etc/project.ncar | cut -f 1 -d &amp;quot;,&amp;quot; | cut -f 2 -d &amp;quot;:&amp;quot; `&lt;br /&gt;
 +set account_name = $USER&lt;br /&gt;
  if (-e ~/.ccsm_proj) then&lt;br /&gt;
     set account_name = `head -1 ~/.ccsm_proj`&lt;br /&gt;
  endif&lt;br /&gt;
----&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
 diff --git a/config_machines.xml~ b/config_machines.xml&lt;br /&gt;
 index 97f2829..3b2c70a 100644&lt;br /&gt;
 --- a/config_machines.xml~&lt;br /&gt;
 +++ b/config_machines.xml&lt;br /&gt;
 @@ -2,6 +2,68 @@&lt;br /&gt;
  &lt;br /&gt;
  &amp;lt;config_machines&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
 +&amp;lt;machine MACH=&amp;quot;cryo&amp;quot;&lt;br /&gt;
 +         DESC=&amp;quot;Guido's i7 desktop (intel), 8 pes&amp;quot;&lt;br /&gt;
 +         EXEROOT=&amp;quot;/home/$CCSMUSER/cesm/exe/$CASE&amp;quot;&lt;br /&gt;
 +         OBJROOT=&amp;quot;$EXEROOT&amp;quot;&lt;br /&gt;
 +         INCROOT=&amp;quot;$EXEROOT/lib/include&amp;quot; &lt;br /&gt;
 +         DIN_LOC_ROOT_CSMDATA=&amp;quot;/home/guido/cesm/inputdata&amp;quot;&lt;br /&gt;
 +         DIN_LOC_ROOT_CLMQIAN=&amp;quot;/project/tss/atm_forcing.datm7.Qian.T62.c080727&amp;quot;&lt;br /&gt;
 +         DOUT_S_ROOT=&amp;quot;/home/$CCSMUSER/cesm/archive/$CASE&amp;quot;&lt;br /&gt;
 +         DOUT_L_HTAR=&amp;quot;FALSE&amp;quot;&lt;br /&gt;
 +         DOUT_L_MSROOT=&amp;quot;csm/$CASE&amp;quot;&lt;br /&gt;
 +         CCSM_BASELINE=&amp;quot;/fs/cgd/csm/ccsm_baselines&amp;quot;&lt;br /&gt;
 +         CCSM_CPRNC=&amp;quot;/fs/cgd/csm/tools/cprnc_64/cprnc&amp;quot;&lt;br /&gt;
 +         OS=&amp;quot;Linux&amp;quot;&lt;br /&gt;
 +         BATCHQUERY=&amp;quot;/usr/local/torque/bin/qstat&amp;quot;&lt;br /&gt;
 +         BATCHSUBMIT=&amp;quot;/usr/local/torque/bin/qsub&amp;quot; &lt;br /&gt;
 +         GMAKE_J=&amp;quot;1&amp;quot; &lt;br /&gt;
 +         MAX_TASKS_PER_NODE=&amp;quot;8&amp;quot;&lt;br /&gt;
 +         MPISERIAL_SUPPORT=&amp;quot;FALSE&amp;quot; /&amp;gt;&lt;br /&gt;
 +&lt;br /&gt;
 +&amp;lt;machine MACH=&amp;quot;tcs&amp;quot;&lt;br /&gt;
 +         DESC=&amp;quot;U of T IBM p6, os is AIX, 32 pes/node, batch system is Moab/LoadLeveler, testing&amp;quot; &lt;br /&gt;
 +         EXEROOT=&amp;quot;/scratch/$CCSMUSER/$CASE&amp;quot;&lt;br /&gt;
 +         OBJROOT=&amp;quot;$EXEROOT&amp;quot;&lt;br /&gt;
 +         LIBROOT=&amp;quot;$EXEROOT/lib&amp;quot;&lt;br /&gt;
 +         INCROOT=&amp;quot;$EXEROOT/lib/include&amp;quot; &lt;br /&gt;
 +         DIN_LOC_ROOT_CSMDATA=&amp;quot;/project/ccsm/inputdata&amp;quot;&lt;br /&gt;
 +         DIN_LOC_ROOT_CLMQIAN=&amp;quot;/cgd/tss/atm_forcing.datm7.Qian.T62.c080727&amp;quot;&lt;br /&gt;
 +         DOUT_S_ROOT=&amp;quot;/scratch/$CCSMUSER/archive/$CASE&amp;quot;&lt;br /&gt;
 +         DOUT_L_HTAR=&amp;quot;TRUE&amp;quot;&lt;br /&gt;
 +         DOUT_L_MSROOT=&amp;quot;/project/peltier/$CCSMUSER/archive/$CASE&amp;quot;&lt;br /&gt;
 +         CCSM_BASELINE=&amp;quot;/project/ccsm&amp;quot;&lt;br /&gt;
 +         CCSM_CPRNC=&amp;quot;/home/guido/bin/cprnc&amp;quot;&lt;br /&gt;
 +         OS=&amp;quot;AIX&amp;quot; &lt;br /&gt;
 +         BATCHQUERY=&amp;quot;llq&amp;quot;&lt;br /&gt;
 +         BATCHSUBMIT=&amp;quot;llsubmit&amp;quot; &lt;br /&gt;
 +         GMAKE_J=&amp;quot;32&amp;quot; &lt;br /&gt;
 +         MAX_TASKS_PER_NODE=&amp;quot;64&amp;quot;&lt;br /&gt;
 +         MPISERIAL_SUPPORT=&amp;quot;TRUE&amp;quot;&lt;br /&gt;
 +         PES_PER_NODE=&amp;quot;32&amp;quot; /&amp;gt;&lt;br /&gt;
 +&lt;br /&gt;
 &lt;br /&gt;
 +&amp;lt;machine MACH=&amp;quot;gpc&amp;quot;&lt;br /&gt;
 +         DESC=&amp;quot;U of T iDataPlex intel cluster, os is linux, 8 pes/node, batch system is Moab/Torque, testing&amp;quot; &lt;br /&gt;
 +         EXEROOT=&amp;quot;/scratch/$CCSMUSER/$CASE&amp;quot;&lt;br /&gt;
 +         OBJROOT=&amp;quot;$EXEROOT&amp;quot;&lt;br /&gt;
 +         LIBROOT=&amp;quot;$EXEROOT/lib&amp;quot;&lt;br /&gt;
 +         INCROOT=&amp;quot;$EXEROOT/lib/include&amp;quot; &lt;br /&gt;
 +         DIN_LOC_ROOT_CSMDATA=&amp;quot;/project/ccsm/inputdata&amp;quot;&lt;br /&gt;
 +         DIN_LOC_ROOT_CLMQIAN=&amp;quot;/cgd/tss/atm_forcing.datm7.Qian.T62.c080727&amp;quot;&lt;br /&gt;
 +         DOUT_S_ROOT=&amp;quot;/scratch/$CCSMUSER/archive/$CASE&amp;quot;&lt;br /&gt;
 +         DOUT_L_HTAR=&amp;quot;TRUE&amp;quot;&lt;br /&gt;
 +         DOUT_L_MSROOT=&amp;quot;/project/peltier/$CCSMUSER/archive/$CASE&amp;quot;&lt;br /&gt;
 +         CCSM_BASELINE=&amp;quot;/project/ccsm&amp;quot;&lt;br /&gt;
 +         CCSM_CPRNC=&amp;quot;/home/guido/bin/cprnc&amp;quot;&lt;br /&gt;
 +         OS=&amp;quot;Linux&amp;quot; &lt;br /&gt;
 +         BATCHQUERY=&amp;quot;/opt/torque/bin/qstat&amp;quot;&lt;br /&gt;
 +         BATCHSUBMIT=&amp;quot;/opt/torque/bin/qsub&amp;quot; &lt;br /&gt;
 +         GMAKE_J=&amp;quot;1&amp;quot; &lt;br /&gt;
 +         MAX_TASKS_PER_NODE=&amp;quot;8&amp;quot;&lt;br /&gt;
 +         MPISERIAL_SUPPORT=&amp;quot;TRUE&amp;quot;&lt;br /&gt;
 +         PES_PER_NODE=&amp;quot;8&amp;quot; /&amp;gt;&lt;br /&gt;
 +&lt;br /&gt;
 +&lt;br /&gt;
  &amp;lt;machine MACH=&amp;quot;bluefire&amp;quot;&lt;br /&gt;
           DESC=&amp;quot;NCAR IBM p6, os is AIX, 32 pes/node, batch system is LSF&amp;quot; &lt;br /&gt;
           EXEROOT=&amp;quot;/ptmp/$CCSMUSER/$CASE&amp;quot;&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notes on installing CESM1:&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2577</id>
		<title>Running CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2577"/>
		<updated>2011-01-31T20:05:41Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You can run CCSM3 in interactive mode on GPC without a host file, but only do so for very short compilation/initialization tests. Run the model in the TCS or GPC batch queue.&lt;br /&gt;
&lt;br /&gt;
Create a setup script:&lt;br /&gt;
 #!/bin/csh&lt;br /&gt;
 &lt;br /&gt;
 setenv CCSMROOT /project/ccsm/ccsm3_current&lt;br /&gt;
 setenv SCRATCH /scratch/$USER&lt;br /&gt;
 setenv CASEROOT /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
 setenv MACH gpc&lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -mach $MACH -res T31_gx3v5 -case $CASEROOT&lt;br /&gt;
&lt;br /&gt;
Then:&lt;br /&gt;
&lt;br /&gt;
 cd /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
&lt;br /&gt;
Modify env_mach.gpc, env_run, and env_conf to suit your needs.&lt;br /&gt;
&lt;br /&gt;
Configure and Build:&lt;br /&gt;
&lt;br /&gt;
 ./configure -mach gpc&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.build&lt;br /&gt;
&lt;br /&gt;
Run:&lt;br /&gt;
&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.run&lt;br /&gt;
&lt;br /&gt;
or, to submit it to the batch queue:&lt;br /&gt;
&lt;br /&gt;
 qsub ccsm3_t31_gpc.gpc.run&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2576</id>
		<title>Running CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM3&amp;diff=2576"/>
		<updated>2011-01-31T20:04:11Z</updated>

		<summary type="html">&lt;p&gt;Guido: Created page with &amp;quot;You can run CCSM3 in interactive mode on GPC without a host file, but only do so for very short compilation/initialization tests. Run the model in the TCS or GPC batch queue.  Cr...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You can run CCSM3 in interactive mode on GPC without a host file, but only do so for very short compilation/initialization tests. Run the model in the TCS or GPC batch queue.&lt;br /&gt;
&lt;br /&gt;
Create a setup script:&lt;br /&gt;
 #!/bin/csh&lt;br /&gt;
 &lt;br /&gt;
 setenv CCSMROOT /project/ccsm/ccsm3_current&lt;br /&gt;
 setenv SCRATCH /scratch/$USER&lt;br /&gt;
 setenv CASEROOT /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
 setenv MACH gpc&lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -mach $MACH -res T31_gx3v5 -case $CASEROOT&lt;br /&gt;
&lt;br /&gt;
Then:&lt;br /&gt;
&lt;br /&gt;
 cd /home/$USER/runs/ccsm3_t31_gpc&lt;br /&gt;
&lt;br /&gt;
Configure and Build:&lt;br /&gt;
&lt;br /&gt;
 ./configure -mach gpc&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.build&lt;br /&gt;
&lt;br /&gt;
Run:&lt;br /&gt;
&lt;br /&gt;
 ./ccsm3_t31_gpc.gpc.run&lt;br /&gt;
&lt;br /&gt;
or, to submit it to the batch queue:&lt;br /&gt;
&lt;br /&gt;
 qsub ccsm3_t31_gpc.gpc.run&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2575</id>
		<title>Installing CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2575"/>
		<updated>2011-01-31T19:55:39Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is mostly a record of the steps needed to install CCSM3 on the General Purpose Cluster (GPC) (Linux). This may be useful for running the model through its initialization step, if, for example, you want to change the boundary conditions and need to check that the model initializes with them.&lt;br /&gt;
&lt;br /&gt;
The first part of the Macros.Linux (configuration) file (using the Intel Fortran compilers):&lt;br /&gt;
&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 # CVS $Id: Macros.Linux,v 1.11.2.4 2007/01/17 05:17:49 tcraig Exp $&lt;br /&gt;
 # CVS $Source: /fs/cgd/csm/models/CVS.REPOS/shared/bld/Macros.Linux,v $&lt;br /&gt;
 # CVS $Name: ccsm3_0_1_beta24 $&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 ### Makefile macros for &amp;quot;Linux&amp;quot;, supports portland + gnu &lt;br /&gt;
 # Makefile macros for &amp;quot;Linux&amp;quot;, supports Intel compilers + gnu &lt;br /&gt;
 #===============================================================================  &lt;br /&gt;
 &lt;br /&gt;
 INCLDIR    := -I. -I$(FPATH) -I$(SCINET_NETCDF_INC) -I$(INCROOT) -I$(INC_MPI) -I$(NETCDF_MOD)&lt;br /&gt;
 &lt;br /&gt;
 #SLIBS      := -L$(LIB_NETCDF) -lnetcdf  -llapack -lblas&lt;br /&gt;
 #SLIBS      := -L$(LIBRARY_PATH) -lmkl_lapack -lmkl_em64t -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -L/usr/local/lib -lnetcdf&lt;br /&gt;
 SLIBS      := -L$(SCINET_NETCDF_LIB) -lnetcdf&lt;br /&gt;
 &lt;br /&gt;
 CPP        := NONE&lt;br /&gt;
 CPPFLAGS   :=&lt;br /&gt;
 CPPDEFS    := -DLINUX -DFORTRANUNDERSCORE -DLINUX  -DNO_SHR_VMATH&lt;br /&gt;
 CPPFLAGS   := -DLINUX -DNO_SHR_VMATH -DINTEL_COMPILER -Df2cFortran&lt;br /&gt;
 FC         := mpif90&lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -Kieee -Mrecursive -Mdalign -Mextend&lt;br /&gt;
 #FFLAGS     := -c -real-size 64 -integer-size 32 -align all -fltconsistency -recursive -extend_source 132&lt;br /&gt;
 ## This compiles on intel x86_64&lt;br /&gt;
 FFLAGS     := -O0 -cpp -c -real-size 64 -integer-size 32 -align all -fltconsistency -save   &lt;br /&gt;
 &lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -autodouble -align all -fltconsistency -recursive -fast&lt;br /&gt;
 #FFLAGS     := -cpp -c -r8 -i4 -132 -autodouble -convert big_endian -fp-model precise -prec-div -prec-sqrt -recursive -align all -fltconsistency &lt;br /&gt;
 CC         := mpicc&lt;br /&gt;
 CFLAGS     := -c -DUSE_GCC&lt;br /&gt;
 &lt;br /&gt;
 FIXEDFLAGS := &lt;br /&gt;
 FREEFLAGS  := -free&lt;br /&gt;
 MOD_SUFFIX := mod&lt;br /&gt;
 LD         := $(FC)&lt;br /&gt;
 AR         := ar&lt;br /&gt;
 ULIBS      := -L$(LIBROOT) -lesmf -lmct -lmpeu -lmph&lt;br /&gt;
 FLIBS      := -L/usr/local/lib&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In /project/ccsm/ccsm3_current/scripts/ccsm_utils/Machines&lt;br /&gt;
&lt;br /&gt;
Modified these files:&lt;br /&gt;
&lt;br /&gt;
run.linux.gpc&lt;br /&gt;
added:&lt;br /&gt;
&lt;br /&gt;
 mpirun -np $NTASKS[1] ./$COMPONENTS[1] : -np $NTASKS[2] ./$COMPONENTS[2] : -np $NTASKS[3] ./$COMPONENTS[3] : -np $NTASKS[4] ./$COMPONENTS[4] : -np $NTASKS[5] ./$COMPONENTS[5]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
batch.linux.gpc&lt;br /&gt;
Mirrored the other Linux batch config file; after the model is built, you may need to adjust the run script so it runs properly under Torque PBS.&lt;br /&gt;
&lt;br /&gt;
 #! /bin/csh -f&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 #  This is a CCSM batch job script for $mach&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 ## BATCH INFO&lt;br /&gt;
 #PBS -N ${jobname}&lt;br /&gt;
 #PBS -q ${qname}&lt;br /&gt;
 ##PBS -l nodes=${nodes}:ib:ppn=${tasks}&lt;br /&gt;
 #PBS -l walltime=${tlimit}&lt;br /&gt;
 #PBS -r n&lt;br /&gt;
 #PBS -j oe&lt;br /&gt;
 #PBS -k oe&lt;br /&gt;
 #PBS -S /bin/csh -V&lt;br /&gt;
  &lt;br /&gt;
 limit coredumpsize 1000000&lt;br /&gt;
 limit stacksize unlimited&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
env.linux.gpc&lt;br /&gt;
&lt;br /&gt;
 # General machine specific environment variables  - edit before the initial build&lt;br /&gt;
 # -------------------------------------------------------------------------&lt;br /&gt;
 &lt;br /&gt;
 setenv LIB_NETCDF $SCINET_NETCDF_LIB&lt;br /&gt;
 setenv INC_NETCDF $SCINET_NETCDF_INC&lt;br /&gt;
 setenv INC_MPI $SCINET_MPI_INC&lt;br /&gt;
 setenv SCRATCH /scratch/$USER/&lt;br /&gt;
  &lt;br /&gt;
 if !($?SCRATCH) then&lt;br /&gt;
   set SCRATCH = $HOME&lt;br /&gt;
   echo &amp;quot;## Warning: SCRATCH not defined in system environment. Set SCRATCH to be $HOME&amp;quot;;&lt;br /&gt;
 endif&lt;br /&gt;
  &lt;br /&gt;
 setenv EXEROOT             $SCRATCH/exe/$CASE&lt;br /&gt;
 setenv RUNROOT             $EXEROOT&lt;br /&gt;
 setenv GMAKE_J             1 &lt;br /&gt;
 &lt;br /&gt;
 #####set the path for netcdf module: netcdf.mod&lt;br /&gt;
 if(! $?NETCDF_MOD) then&lt;br /&gt;
    ##### user need to set this variable.&lt;br /&gt;
    ### setenv NETCDF_MOD /usr/local/lib64/r4i4&lt;br /&gt;
 endif&lt;br /&gt;
  &lt;br /&gt;
  &lt;br /&gt;
 # -------------------------------------------------------------------------&lt;br /&gt;
 # Environment variables for prestaging input data - edit anytime during run&lt;br /&gt;
 # -------------------------------------------------------------------------  &lt;br /&gt;
 &lt;br /&gt;
 setenv DIN_LOC_ROOT        /project/ccsm/inputdata&lt;br /&gt;
 setenv DIN_LOC_ROOT_USER   /project/ccsm/inputdata_user&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
Modified check_machine to include gpc:&lt;br /&gt;
 &lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Tools/check_machine&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Changed:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Components/esmf.buildlib &lt;br /&gt;
&lt;br /&gt;
 diff esmf.buildlib esmf.buildlib~&lt;br /&gt;
 &lt;br /&gt;
 24c24&lt;br /&gt;
 &amp;lt; if ($OS == 'Linux')        setenv ESMF_ARCH linux_intel&lt;br /&gt;
 ---&lt;br /&gt;
 &amp;gt; if ($OS == 'Linux')        setenv ESMF_ARCH linux_pgi&lt;br /&gt;
&lt;br /&gt;
Edit  /project/ccsm/ccsm3_current/scripts/ccsm_utils/Components/mct.buildlib&lt;br /&gt;
&lt;br /&gt;
MCT's configure wants to grab gfortran because of the PATH order, so force the MPI wrapper:&lt;br /&gt;
 if ( `uname` == &amp;quot;Linux&amp;quot; ) then&lt;br /&gt;
     setenv FC mpif90&lt;br /&gt;
 endif&lt;br /&gt;
&lt;br /&gt;
Finally, make sure the following modules are loaded: intel, intelmpi, netcdf/4.0.1_nc3, gcc.&lt;br /&gt;
&lt;br /&gt;
I have the following set on GPC:&lt;br /&gt;
  Currently Loaded Modulefiles:&lt;br /&gt;
&lt;br /&gt;
  1) intel/intel-v11.1.072     3) gcc/4.4.0                 5) netcdf/4.0.1_nc3_intel    7) parallel-netcdf/1.1.1&lt;br /&gt;
  2) intelmpi/impi-4.0.0.027   4) Xlibraries/X11-64         6) python/2.6.2&lt;br /&gt;
&lt;br /&gt;
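A quick sanity check for this module set can be sketched in plain sh (hypothetical `modules_ok` helper, not part of CCSM3; module environments typically expose the loaded set as a colon-separated `LOADEDMODULES` list):

```shell
# Sketch: check that the required modules appear in a colon-separated
# LOADEDMODULES-style list (hypothetical helper, not part of CCSM3).
modules_ok() {
  loaded=":$1:"
  for m in intel intelmpi netcdf gcc; do
    case "$loaded" in
      *":$m/"*) ;;                 # matches e.g. intel/intel-v11.1.072
      *) echo "missing $m"; return 1 ;;
    esac
  done
  return 0
}
```

Running `modules_ok "$LOADEDMODULES"` before a build then reports the first missing module, if any.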
My .bashrc has:&lt;br /&gt;
&lt;br /&gt;
 if [ &amp;quot;${HOST}&amp;quot; == &amp;quot;AIX&amp;quot; ]; then&lt;br /&gt;
  # do things for the TCS machine&lt;br /&gt;
  :&lt;br /&gt;
 else&lt;br /&gt;
  # do things for the GPC machine&lt;br /&gt;
  module load intel intelmpi gcc Xlibraries netcdf python parallel-netcdf&lt;br /&gt;
  export MACH=&amp;quot;gpc&amp;quot;&lt;br /&gt;
  export PATH=&amp;quot;/home/$LOGNAME/bin:$PATH:/scinet/gpc/bin&amp;quot;&lt;br /&gt;
  :&lt;br /&gt;
 fi&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2574</id>
		<title>Installing CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2574"/>
		<updated>2011-01-31T19:53:47Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is mostly a record of the steps needed to install CCSM3 on the General Purpose Cluster (GPC) (Linux). This may be useful for running the model through its initialization step, if, for example, you want to change the boundary conditions and need to check that the model initializes with them.&lt;br /&gt;
&lt;br /&gt;
The first part of the Macros.Linux (configuration) file (using the Intel Fortran compilers):&lt;br /&gt;
&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 # CVS $Id: Macros.Linux,v 1.11.2.4 2007/01/17 05:17:49 tcraig Exp $&lt;br /&gt;
 # CVS $Source: /fs/cgd/csm/models/CVS.REPOS/shared/bld/Macros.Linux,v $&lt;br /&gt;
 # CVS $Name: ccsm3_0_1_beta24 $&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 ### Makefile macros for &amp;quot;Linux&amp;quot;, supports portland + gnu &lt;br /&gt;
 # Makefile macros for &amp;quot;Linux&amp;quot;, supports Intel compilers + gnu &lt;br /&gt;
 #===============================================================================  &lt;br /&gt;
 &lt;br /&gt;
 INCLDIR    := -I. -I$(FPATH) -I$(SCINET_NETCDF_INC) -I$(INCROOT) -I$(INC_MPI) -I$(NETCDF_MOD)&lt;br /&gt;
 &lt;br /&gt;
 #SLIBS      := -L$(LIB_NETCDF) -lnetcdf  -llapack -lblas&lt;br /&gt;
 #SLIBS      := -L$(LIBRARY_PATH) -lmkl_lapack -lmkl_em64t -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -L/usr/local/lib -lnetcdf&lt;br /&gt;
 SLIBS      := -L$(SCINET_NETCDF_LIB) -lnetcdf&lt;br /&gt;
 &lt;br /&gt;
 CPP        := NONE&lt;br /&gt;
 CPPFLAGS   :=&lt;br /&gt;
 CPPDEFS    := -DLINUX -DFORTRANUNDERSCORE -DLINUX  -DNO_SHR_VMATH&lt;br /&gt;
 CPPFLAGS   := -DLINUX -DNO_SHR_VMATH -DINTEL_COMPILER -Df2cFortran&lt;br /&gt;
 FC         := mpif90&lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -Kieee -Mrecursive -Mdalign -Mextend&lt;br /&gt;
 #FFLAGS     := -c -real-size 64 -integer-size 32 -align all -fltconsistency -recursive -extend_source 132&lt;br /&gt;
 ## This compiles on intel x86_64&lt;br /&gt;
 FFLAGS     := -O0 -cpp -c -real-size 64 -integer-size 32 -align all -fltconsistency -save   &lt;br /&gt;
 &lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -autodouble -align all -fltconsistency -recursive -fast&lt;br /&gt;
 #FFLAGS     := -cpp -c -r8 -i4 -132 -autodouble -convert big_endian -fp-model precise -prec-div -prec-sqrt -recursive -align all -fltconsistency &lt;br /&gt;
 CC         := mpicc&lt;br /&gt;
 CFLAGS     := -c -DUSE_GCC&lt;br /&gt;
 &lt;br /&gt;
 FIXEDFLAGS := &lt;br /&gt;
 FREEFLAGS  := -free&lt;br /&gt;
 MOD_SUFFIX := mod&lt;br /&gt;
 LD         := $(FC)&lt;br /&gt;
 AR         := ar&lt;br /&gt;
 ULIBS      := -L$(LIBROOT) -lesmf -lmct -lmpeu -lmph&lt;br /&gt;
 FLIBS      := -L/usr/local/lib&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In /project/ccsm/ccsm3_current/scripts/ccsm_utils/Machines&lt;br /&gt;
&lt;br /&gt;
Modified these files:&lt;br /&gt;
&lt;br /&gt;
run.linux.gpc&lt;br /&gt;
added:&lt;br /&gt;
&lt;br /&gt;
 mpirun -np $NTASKS[1] ./$COMPONENTS[1] : -np $NTASKS[2] ./$COMPONENTS[2] : -np $NTASKS[3] ./$COMPONENTS[3] : -np $NTASKS[4] ./$COMPONENTS[4] : -np $NTASKS[5] ./$COMPONENTS[5]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
batch.linux.gpc&lt;br /&gt;
Mirrored the other Linux batch config file; after the model is built, you may need to adjust the run script so it runs properly under Torque PBS.&lt;br /&gt;
&lt;br /&gt;
 #! /bin/csh -f&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 #  This is a CCSM batch job script for $mach&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 ## BATCH INFO&lt;br /&gt;
 #PBS -N ${jobname}&lt;br /&gt;
 #PBS -q ${qname}&lt;br /&gt;
 ##PBS -l nodes=${nodes}:ib:ppn=${tasks}&lt;br /&gt;
 #PBS -l walltime=${tlimit}&lt;br /&gt;
 #PBS -r n&lt;br /&gt;
 #PBS -j oe&lt;br /&gt;
 #PBS -k oe&lt;br /&gt;
 #PBS -S /bin/csh -V&lt;br /&gt;
  &lt;br /&gt;
 limit coredumpsize 1000000&lt;br /&gt;
 limit stacksize unlimited&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
env.linux.gpc&lt;br /&gt;
&lt;br /&gt;
 # General machine specific environment variables  - edit before the initial build&lt;br /&gt;
 # -------------------------------------------------------------------------&lt;br /&gt;
 &lt;br /&gt;
 setenv LIB_NETCDF $SCINET_NETCDF_LIB&lt;br /&gt;
 setenv INC_NETCDF $SCINET_NETCDF_INC&lt;br /&gt;
 setenv INC_MPI $SCINET_MPI_INC&lt;br /&gt;
 setenv SCRATCH /scratch/$USER/&lt;br /&gt;
  &lt;br /&gt;
 if !($?SCRATCH) then&lt;br /&gt;
   set SCRATCH = $HOME&lt;br /&gt;
   echo &amp;quot;## Warning: SCRATCH not defined in system environment. Set SCRATCH to be $HOME&amp;quot;;&lt;br /&gt;
 endif&lt;br /&gt;
  &lt;br /&gt;
 setenv EXEROOT             $SCRATCH/exe/$CASE&lt;br /&gt;
 setenv RUNROOT             $EXEROOT&lt;br /&gt;
 setenv GMAKE_J             1 &lt;br /&gt;
 &lt;br /&gt;
 #####set the path for netcdf module: netcdf.mod&lt;br /&gt;
 if(! $?NETCDF_MOD) then&lt;br /&gt;
    ##### user need to set this variable.&lt;br /&gt;
    ### setenv NETCDF_MOD /usr/local/lib64/r4i4&lt;br /&gt;
 endif&lt;br /&gt;
  &lt;br /&gt;
  &lt;br /&gt;
 # -------------------------------------------------------------------------&lt;br /&gt;
 # Environment variables for prestaging input data - edit anytime during run&lt;br /&gt;
 # -------------------------------------------------------------------------  &lt;br /&gt;
 &lt;br /&gt;
 setenv DIN_LOC_ROOT        /project/ccsm/inputdata&lt;br /&gt;
 setenv DIN_LOC_ROOT_USER   /project/ccsm/inputdata_user&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
Modified check_machine to include gpc:&lt;br /&gt;
 &lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Tools/check_machine&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Changed:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Components/esmf.buildlib &lt;br /&gt;
&lt;br /&gt;
 diff esmf.buildlib esmf.buildlib~&lt;br /&gt;
 &lt;br /&gt;
 24c24&lt;br /&gt;
 &amp;lt; if ($OS == 'Linux')        setenv ESMF_ARCH linux_intel&lt;br /&gt;
 ---&lt;br /&gt;
 &amp;gt; if ($OS == 'Linux')        setenv ESMF_ARCH linux_pgi&lt;br /&gt;
&lt;br /&gt;
Edit  /project/ccsm/ccsm3_current/scripts/ccsm_utils/Components/mct.buildlib&lt;br /&gt;
&lt;br /&gt;
MCT's configure wants to grab gfortran because of the PATH order, so force the MPI wrapper:&lt;br /&gt;
 if ( `uname` == &amp;quot;Linux&amp;quot; ) then&lt;br /&gt;
     setenv FC mpif90&lt;br /&gt;
 endif&lt;br /&gt;
&lt;br /&gt;
Finally, make sure the following modules are loaded: intel, intelmpi, netcdf/4.0.1_nc3, gcc.&lt;br /&gt;
&lt;br /&gt;
I have the following set on GPC:&lt;br /&gt;
  Currently Loaded Modulefiles:&lt;br /&gt;
&lt;br /&gt;
  1) intel/intel-v11.1.072     3) gcc/4.4.0                 5) netcdf/4.0.1_nc3_intel    7) parallel-netcdf/1.1.1&lt;br /&gt;
  2) intelmpi/impi-4.0.0.027   4) Xlibraries/X11-64         6) python/2.6.2&lt;br /&gt;
&lt;br /&gt;
My .bashrc has:&lt;br /&gt;
&lt;br /&gt;
 if [ &amp;quot;${HOST}&amp;quot; == &amp;quot;AIX&amp;quot; ]; then&lt;br /&gt;
  # do things for the TCS machine&lt;br /&gt;
  :&lt;br /&gt;
 else&lt;br /&gt;
  # do things for the GPC machine&lt;br /&gt;
  module load intel intelmpi gcc Xlibraries netcdf python parallel-netcdf&lt;br /&gt;
  export MACH=&amp;quot;gpc&amp;quot;&lt;br /&gt;
  export PATH=&amp;quot;/home/$LOGNAME/bin:$PATH:/scinet/gpc/bin&amp;quot;&lt;br /&gt;
  :&lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
My PATH is:&lt;br /&gt;
/home/guido/bin:/scinet/gpc/Libraries/parallel-netcdf-1.1.1/bin:/scinet/gpc/tools/Python/Python262/bin:/scinet/gpc/compilers/gcc/bin/:/scinet/gpc/intel/impi/4.0.0.027/bin64/:/scinet/gpc/intel/Compiler/11.1/072/bin/intel64/:/usr/lib64/qt-3.3/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/lpp/mmfs/bin:/opt/torque/bin:/opt/torque/sbin:/usr/lpp/mmfs/bin:/opt/torque/bin:/opt/torque/sbin:/scinet/gpc/x11/bin:/scinet/gpc/Libraries/netcdf-4.0.1_nc3_intel/bin:/scinet/gpc/bin&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2573</id>
		<title>Installing CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2573"/>
		<updated>2011-01-31T19:53:02Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is mostly a record of the steps needed to install CCSM3 on the General Purpose Cluster (GPC) (Linux). This may be useful for running the model through its initialization step, if, for example, you want to change the boundary conditions and need to check that the model initializes with them.&lt;br /&gt;
&lt;br /&gt;
The first part of the Macros.Linux (configuration) file (using the Intel Fortran compilers):&lt;br /&gt;
&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 # CVS $Id: Macros.Linux,v 1.11.2.4 2007/01/17 05:17:49 tcraig Exp $&lt;br /&gt;
 # CVS $Source: /fs/cgd/csm/models/CVS.REPOS/shared/bld/Macros.Linux,v $&lt;br /&gt;
 # CVS $Name: ccsm3_0_1_beta24 $&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 ### Makefile macros for &amp;quot;Linux&amp;quot;, supports portland + gnu &lt;br /&gt;
 # Makefile macros for &amp;quot;Linux&amp;quot;, supports Intel compilers + gnu &lt;br /&gt;
 #===============================================================================  &lt;br /&gt;
 &lt;br /&gt;
 INCLDIR    := -I. -I$(FPATH) -I$(SCINET_NETCDF_INC) -I$(INCROOT) -I$(INC_MPI) -I$(NETCDF_MOD)&lt;br /&gt;
 &lt;br /&gt;
 #SLIBS      := -L$(LIB_NETCDF) -lnetcdf  -llapack -lblas&lt;br /&gt;
 #SLIBS      := -L$(LIBRARY_PATH) -lmkl_lapack -lmkl_em64t -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -L/usr/local/lib -lnetcdf&lt;br /&gt;
 SLIBS      := -L$(SCINET_NETCDF_LIB) -lnetcdf&lt;br /&gt;
 &lt;br /&gt;
 CPP        := NONE&lt;br /&gt;
 CPPFLAGS   :=&lt;br /&gt;
 CPPDEFS    := -DLINUX -DFORTRANUNDERSCORE -DLINUX  -DNO_SHR_VMATH&lt;br /&gt;
 CPPFLAGS   := -DLINUX -DNO_SHR_VMATH -DINTEL_COMPILER -Df2cFortran&lt;br /&gt;
 FC         := mpif90&lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -Kieee -Mrecursive -Mdalign -Mextend&lt;br /&gt;
 #FFLAGS     := -c -real-size 64 -integer-size 32 -align all -fltconsistency -recursive -extend_source 132&lt;br /&gt;
 ## This compiles on intel x86_64&lt;br /&gt;
 FFLAGS     := -O0 -cpp -c -real-size 64 -integer-size 32 -align all -fltconsistency -save   &lt;br /&gt;
 &lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -autodouble -align all -fltconsistency -recursive -fast&lt;br /&gt;
 #FFLAGS     := -cpp -c -r8 -i4 -132 -autodouble -convert big_endian -fp-model precise -prec-div -prec-sqrt -recursive -align all -fltconsistency &lt;br /&gt;
 CC         := mpicc&lt;br /&gt;
 CFLAGS     := -c -DUSE_GCC&lt;br /&gt;
 &lt;br /&gt;
 FIXEDFLAGS := &lt;br /&gt;
 FREEFLAGS  := -free&lt;br /&gt;
 MOD_SUFFIX := mod&lt;br /&gt;
 LD         := $(FC)&lt;br /&gt;
 AR         := ar&lt;br /&gt;
 ULIBS      := -L$(LIBROOT) -lesmf -lmct -lmpeu -lmph&lt;br /&gt;
 FLIBS      := -L/usr/local/lib&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In /project/ccsm/ccsm3_current/scripts/ccsm_utils/Machines&lt;br /&gt;
&lt;br /&gt;
Modified these files:&lt;br /&gt;
&lt;br /&gt;
run.linux.gpc&lt;br /&gt;
added:&lt;br /&gt;
&lt;br /&gt;
 mpirun -np $NTASKS[1] ./$COMPONENTS[1] : -np $NTASKS[2] ./$COMPONENTS[2] : -np $NTASKS[3] ./$COMPONENTS[3] : -np $NTASKS[4] ./$COMPONENTS[4] : -np $NTASKS[5] ./$COMPONENTS[5]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
batch.linux.gpc&lt;br /&gt;
Mirrored the other Linux batch config file; after the model is built, you may need to adjust the run script so it runs properly under Torque PBS.&lt;br /&gt;
&lt;br /&gt;
 #! /bin/csh -f&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 #  This is a CCSM batch job script for $mach&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 ## BATCH INFO&lt;br /&gt;
 #PBS -N ${jobname}&lt;br /&gt;
 #PBS -q ${qname}&lt;br /&gt;
 ##PBS -l nodes=${nodes}:ib:ppn=${tasks}&lt;br /&gt;
 #PBS -l walltime=${tlimit}&lt;br /&gt;
 #PBS -r n&lt;br /&gt;
 #PBS -j oe&lt;br /&gt;
 #PBS -k oe&lt;br /&gt;
 #PBS -S /bin/csh -V&lt;br /&gt;
  &lt;br /&gt;
 limit coredumpsize 1000000&lt;br /&gt;
 limit stacksize unlimited&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
env.linux.gpc&lt;br /&gt;
&lt;br /&gt;
 # General machine specific environment variables  - edit before the initial build&lt;br /&gt;
 # -------------------------------------------------------------------------&lt;br /&gt;
 &lt;br /&gt;
 setenv LIB_NETCDF $SCINET_NETCDF_LIB&lt;br /&gt;
 setenv INC_NETCDF $SCINET_NETCDF_INC&lt;br /&gt;
 setenv INC_MPI $SCINET_MPI_INC&lt;br /&gt;
 setenv SCRATCH /scratch/$USER/&lt;br /&gt;
 &lt;br /&gt;
 if !($?SCRATCH) then&lt;br /&gt;
   set SCRATCH = $HOME&lt;br /&gt;
   echo &amp;quot;## Warning: SCRATCH not defined in system environment. Set SCRATCH to be $HOME&amp;quot;;&lt;br /&gt;
 endif&lt;br /&gt;
 &lt;br /&gt;
 setenv EXEROOT             $SCRATCH/exe/$CASE&lt;br /&gt;
 setenv RUNROOT             $EXEROOT&lt;br /&gt;
 setenv GMAKE_J             1 &lt;br /&gt;
&lt;br /&gt;
 #####set the path for netcdf module: netcdf.mod&lt;br /&gt;
 if(! $?NETCDF_MOD) then&lt;br /&gt;
    ##### user need to set this variable.&lt;br /&gt;
    ### setenv NETCDF_MOD /usr/local/lib64/r4i4&lt;br /&gt;
 endif&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 # -------------------------------------------------------------------------&lt;br /&gt;
 # Environment variables for prestaging input data - edit anytime during run&lt;br /&gt;
 # -------------------------------------------------------------------------  &lt;br /&gt;
 &lt;br /&gt;
 setenv DIN_LOC_ROOT        /project/ccsm/inputdata&lt;br /&gt;
 setenv DIN_LOC_ROOT_USER   /project/ccsm/inputdata_user&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
Modified check_machine to include gpc:&lt;br /&gt;
 &lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Tools/check_machine&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Changed:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Components/esmf.buildlib &lt;br /&gt;
&lt;br /&gt;
 diff esmf.buildlib esmf.buildlib~&lt;br /&gt;
 &lt;br /&gt;
 24c24&lt;br /&gt;
 &amp;lt; if ($OS == 'Linux')        setenv ESMF_ARCH linux_intel&lt;br /&gt;
 ---&lt;br /&gt;
 &amp;gt; if ($OS == 'Linux')        setenv ESMF_ARCH linux_pgi&lt;br /&gt;
&lt;br /&gt;
Edit  /project/ccsm/ccsm3_current/scripts/ccsm_utils/Components/mct.buildlib&lt;br /&gt;
&lt;br /&gt;
MCT's configure wants to grab gfortran because of the PATH order, so force the MPI wrapper:&lt;br /&gt;
 if ( `uname` == &amp;quot;Linux&amp;quot; ) then&lt;br /&gt;
     setenv FC mpif90&lt;br /&gt;
 endif&lt;br /&gt;
&lt;br /&gt;
Finally, make sure the following modules are loaded: intel, intelmpi, netcdf/4.0.1_nc3, gcc.&lt;br /&gt;
&lt;br /&gt;
I have the following set on GPC:&lt;br /&gt;
  Currently Loaded Modulefiles:&lt;br /&gt;
&lt;br /&gt;
  1) intel/intel-v11.1.072     3) gcc/4.4.0                 5) netcdf/4.0.1_nc3_intel    7) parallel-netcdf/1.1.1&lt;br /&gt;
  2) intelmpi/impi-4.0.0.027   4) Xlibraries/X11-64         6) python/2.6.2&lt;br /&gt;
&lt;br /&gt;
My .bashrc has:&lt;br /&gt;
&lt;br /&gt;
 if [ &amp;quot;${HOST}&amp;quot; == &amp;quot;AIX&amp;quot; ]; then&lt;br /&gt;
  # do things for the TCS machine&lt;br /&gt;
  :&lt;br /&gt;
 else&lt;br /&gt;
  # do things for the GPC machine&lt;br /&gt;
  module load intel intelmpi gcc Xlibraries netcdf python parallel-netcdf&lt;br /&gt;
  export MACH=&amp;quot;gpc&amp;quot;&lt;br /&gt;
  export PATH=&amp;quot;/home/$LOGNAME/bin:$PATH:/scinet/gpc/bin&amp;quot;&lt;br /&gt;
  :&lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
My PATH is:&lt;br /&gt;
/home/guido/bin:/scinet/gpc/Libraries/parallel-netcdf-1.1.1/bin:/scinet/gpc/tools/Python/Python262/bin:/scinet/gpc/compilers/gcc/bin/:/scinet/gpc/intel/impi/4.0.0.027/bin64/:/scinet/gpc/intel/Compiler/11.1/072/bin/intel64/:/usr/lib64/qt-3.3/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/lpp/mmfs/bin:/opt/torque/bin:/opt/torque/sbin:/usr/lpp/mmfs/bin:/opt/torque/bin:/opt/torque/sbin:/scinet/gpc/x11/bin:/scinet/gpc/Libraries/netcdf-4.0.1_nc3_intel/bin:/scinet/gpc/bin&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2572</id>
		<title>Installing CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2572"/>
		<updated>2011-01-31T19:52:02Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is mostly a record of the steps needed to install CCSM3 on the General Purpose Cluster (GPC) (Linux). This may be useful for running the model through its initialization step, if, for example, you want to change the boundary conditions and need to check that the model initializes with them.&lt;br /&gt;
&lt;br /&gt;
The first several lines of the Macros.Linux (configuration) file (Using Intel Fortran Compilers):&lt;br /&gt;
&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 # CVS $Id: Macros.Linux,v 1.11.2.4 2007/01/17 05:17:49 tcraig Exp $&lt;br /&gt;
 # CVS $Source: /fs/cgd/csm/models/CVS.REPOS/shared/bld/Macros.Linux,v $&lt;br /&gt;
 # CVS $Name: ccsm3_0_1_beta24 $&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 ### Makefile macros for &amp;quot;Linux&amp;quot;, supports portland + gnu &lt;br /&gt;
 # Makefile macros for &amp;quot;Linux&amp;quot;, supports Intel compilers + gnu &lt;br /&gt;
 #===============================================================================  &lt;br /&gt;
 &lt;br /&gt;
 INCLDIR    := -I. -I$(FPATH) -I$(SCINET_NETCDF_INC) -I$(INCROOT) -I$(INC_MPI) -I$(NETCDF_MOD)&lt;br /&gt;
 &lt;br /&gt;
 #SLIBS      := -L$(LIB_NETCDF) -lnetcdf  -llapack -lblas&lt;br /&gt;
 #SLIBS      := -L$(LIBRARY_PATH) -lmkl_lapack -lmkl_em64t -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -L/usr/local/lib -lnetcdf&lt;br /&gt;
 SLIBS      := -L$(SCINET_NETCDF_LIB) -lnetcdf&lt;br /&gt;
 &lt;br /&gt;
 CPP        := NONE&lt;br /&gt;
 CPPFLAGS   :=&lt;br /&gt;
 CPPDEFS    := -DLINUX -DFORTRANUNDERSCORE -DLINUX  -DNO_SHR_VMATH&lt;br /&gt;
 CPPFLAGS   := -DLINUX -DNO_SHR_VMATH -DINTEL_COMPILER -Df2cFortran&lt;br /&gt;
 FC         := mpif90&lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -Kieee -Mrecursive -Mdalign -Mextend&lt;br /&gt;
 #FFLAGS     := -c -real-size 64 -integer-size 32 -align all -fltconsistency -recursive -extend_source 132&lt;br /&gt;
 ## This compiles on intel x86_64&lt;br /&gt;
 FFLAGS     := -O0 -cpp -c -real-size 64 -integer-size 32 -align all -fltconsistency -save   &lt;br /&gt;
 &lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -autodouble -align all -fltconsistency -recursive -fast&lt;br /&gt;
 #FFLAGS     := -cpp -c -r8 -i4 -132 -autodouble -convert big_endian -fp-model precise -prec-div -prec-sqrt -recursive -align all -fltconsistency &lt;br /&gt;
 CC         := mpicc&lt;br /&gt;
 CFLAGS     := -c -DUSE_GCC&lt;br /&gt;
 &lt;br /&gt;
 FIXEDFLAGS := &lt;br /&gt;
 FREEFLAGS  := -free&lt;br /&gt;
 MOD_SUFFIX := mod&lt;br /&gt;
 LD         := $(FC)&lt;br /&gt;
 AR         := ar&lt;br /&gt;
 ULIBS      := -L$(LIBROOT) -lesmf -lmct -lmpeu -lmph&lt;br /&gt;
 FLIBS      := -L/usr/local/lib&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In /project/ccsm/ccsm3_current/scripts/ccsm_utils/Machines&lt;br /&gt;
&lt;br /&gt;
Modified these files:&lt;br /&gt;
&lt;br /&gt;
run.linux.gpc&lt;br /&gt;
added:&lt;br /&gt;
&lt;br /&gt;
 mpirun -np $NTASKS[1] ./$COMPONENTS[1] : -np $NTASKS[2] ./$COMPONENTS[2] : -np $NTASKS[3] ./$COMPONENTS[3] : -np $NTASKS[4] ./$COMPONENTS[4] : -np $NTASKS[5] ./$COMPONENTS[5]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
batch.linux.gpc&lt;br /&gt;
Mirrored the other Linux batch config file; you may need to change these settings in your run script after the model is built&lt;br /&gt;
&lt;br /&gt;
 #! /bin/csh -f&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 #  This is a CCSM batch job script for $mach&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 ## BATCH INFO&lt;br /&gt;
 #PBS -N ${jobname}&lt;br /&gt;
 #PBS -q ${qname}&lt;br /&gt;
 ##PBS -l nodes=${nodes}:ib:ppn=${tasks}&lt;br /&gt;
 #PBS -l walltime=${tlimit}&lt;br /&gt;
 #PBS -r n&lt;br /&gt;
 #PBS -j oe&lt;br /&gt;
 #PBS -k oe&lt;br /&gt;
 #PBS -S /bin/csh -V&lt;br /&gt;
  &lt;br /&gt;
 limit coredumpsize 1000000&lt;br /&gt;
 limit stacksize unlimited&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
env.linux.gpc&lt;br /&gt;
&lt;br /&gt;
 # General machine specific environment variables  - edit before the initial build&lt;br /&gt;
 # -------------------------------------------------------------------------&lt;br /&gt;
 &lt;br /&gt;
 setenv LIB_NETCDF $SCINET_NETCDF_LIB&lt;br /&gt;
 setenv INC_NETCDF $SCINET_NETCDF_INC&lt;br /&gt;
 setenv INC_MPI $SCINET_MPI_INC&lt;br /&gt;
 setenv SCRATCH /scratch/$USER/&lt;br /&gt;
 &lt;br /&gt;
 if !($?SCRATCH) then&lt;br /&gt;
   set SCRATCH = $HOME&lt;br /&gt;
   echo &amp;quot;## Warning: SCRATCH not defined in system environment. Set SCRATCH to be $HOME&amp;quot;;&lt;br /&gt;
 endif&lt;br /&gt;
 &lt;br /&gt;
 setenv EXEROOT             $SCRATCH/exe/$CASE&lt;br /&gt;
 setenv RUNROOT             $EXEROOT&lt;br /&gt;
 setenv GMAKE_J             1 &lt;br /&gt;
&lt;br /&gt;
 ##### set the path for netcdf module: netcdf.mod&lt;br /&gt;
 if(! $?NETCDF_MOD) then&lt;br /&gt;
    ##### user needs to set this variable.&lt;br /&gt;
    ### setenv NETCDF_MOD /usr/local/lib64/r4i4&lt;br /&gt;
 endif&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 # -------------------------------------------------------------------------&lt;br /&gt;
 # Environment variables for prestaging input data - edit anytime during run&lt;br /&gt;
 # -------------------------------------------------------------------------  &lt;br /&gt;
 &lt;br /&gt;
 setenv DIN_LOC_ROOT        /project/ccsm/inputdata&lt;br /&gt;
 setenv DIN_LOC_ROOT_USER   /project/ccsm/inputdata_user&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
Modified check_machine to include gpc:&lt;br /&gt;
 &lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Tools/check_machine&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Changed:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Components/esmf.buildlib &lt;br /&gt;
&lt;br /&gt;
 diff esmf.buildlib esmf.buildlib~&lt;br /&gt;
 &lt;br /&gt;
 24c24&lt;br /&gt;
 &amp;lt; if ($OS == 'Linux')        setenv ESMF_ARCH linux_intel&lt;br /&gt;
 ---&lt;br /&gt;
 &amp;gt; if ($OS == 'Linux')        setenv ESMF_ARCH linux_pgi&lt;br /&gt;
&lt;br /&gt;
Edit  /project/ccsm/ccsm3_current/scripts/ccsm_utils/Components/mct.buildlib&lt;br /&gt;
&lt;br /&gt;
MCT's configure script wants to grab gfortran because of the PATH order, so force FC to mpif90:&lt;br /&gt;
 if ( `uname` == &amp;quot;Linux&amp;quot; ) then&lt;br /&gt;
     setenv FC mpif90&lt;br /&gt;
 endif&lt;br /&gt;
&lt;br /&gt;
Finally make sure the following modules are loaded: intel, intelmpi, netcdf4.0.1_nc3, gcc&lt;br /&gt;
&lt;br /&gt;
I have the following set on GPC:&lt;br /&gt;
  Currently Loaded Modulefiles:&lt;br /&gt;
&lt;br /&gt;
  1) intel/intel-v11.1.072     3) gcc/4.4.0                 5) netcdf/4.0.1_nc3_intel    7) parallel-netcdf/1.1.1&lt;br /&gt;
  2) intelmpi/impi-4.0.0.027   4) Xlibraries/X11-64         6) python/2.6.2&lt;br /&gt;
&lt;br /&gt;
My .bashrc has:&lt;br /&gt;
&lt;br /&gt;
if [ &amp;quot;${HOST}&amp;quot; == &amp;quot;AIX&amp;quot; ]; then&lt;br /&gt;
  # do things for the TCS machine&lt;br /&gt;
  :&lt;br /&gt;
else&lt;br /&gt;
  # do things for the GPC machine&lt;br /&gt;
  module load intel intelmpi gcc Xlibraries netcdf python parallel-netcdf&lt;br /&gt;
  export MACH=&amp;quot;gpc&amp;quot;&lt;br /&gt;
  export PATH=&amp;quot;/home/$LOGNAME/bin:$PATH:/scinet/gpc/bin&amp;quot;&lt;br /&gt;
  :&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
My PATH is:&lt;br /&gt;
/home/guido/bin:/scinet/gpc/Libraries/parallel-netcdf-1.1.1/bin:/scinet/gpc/tools/Python/Python262/bin:/scinet/gpc/compilers/gcc/bin/:/scinet/gpc/intel/impi/4.0.0.027/bin64/:/scinet/gpc/intel/Compiler/11.1/072/bin/intel64/:/usr/lib64/qt-3.3/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/lpp/mmfs/bin:/opt/torque/bin:/opt/torque/sbin:/usr/lpp/mmfs/bin:/opt/torque/bin:/opt/torque/sbin:/scinet/gpc/x11/bin:/scinet/gpc/Libraries/netcdf-4.0.1_nc3_intel/bin:/scinet/gpc/bin&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2571</id>
		<title>Installing CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2571"/>
		<updated>2011-01-31T19:29:37Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is mostly a record of the steps needed to install CCSM3 on the General Purpose Cluster (GPC) (Linux).&lt;br /&gt;
&lt;br /&gt;
The first several lines of the Macros.Linux (configuration) file (Using Intel Fortran Compilers):&lt;br /&gt;
&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 # CVS $Id: Macros.Linux,v 1.11.2.4 2007/01/17 05:17:49 tcraig Exp $&lt;br /&gt;
 # CVS $Source: /fs/cgd/csm/models/CVS.REPOS/shared/bld/Macros.Linux,v $&lt;br /&gt;
 # CVS $Name: ccsm3_0_1_beta24 $&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 ### Makefile macros for &amp;quot;Linux&amp;quot;, supports portland + gnu &lt;br /&gt;
 # Makefile macros for &amp;quot;Linux&amp;quot;, supports Intel compilers + gnu &lt;br /&gt;
 #===============================================================================  &lt;br /&gt;
 &lt;br /&gt;
 INCLDIR    := -I. -I$(FPATH) -I$(SCINET_NETCDF_INC) -I$(INCROOT) -I$(INC_MPI) -I$(NETCDF_MOD)&lt;br /&gt;
 &lt;br /&gt;
 #SLIBS      := -L$(LIB_NETCDF) -lnetcdf  -llapack -lblas&lt;br /&gt;
 #SLIBS      := -L$(LIBRARY_PATH) -lmkl_lapack -lmkl_em64t -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -L/usr/local/lib -lnetcdf&lt;br /&gt;
 SLIBS      := -L$(SCINET_NETCDF_LIB) -lnetcdf&lt;br /&gt;
 &lt;br /&gt;
 CPP        := NONE&lt;br /&gt;
 CPPFLAGS   :=&lt;br /&gt;
 CPPDEFS    := -DLINUX -DFORTRANUNDERSCORE -DLINUX  -DNO_SHR_VMATH&lt;br /&gt;
 CPPFLAGS   := -DLINUX -DNO_SHR_VMATH -DINTEL_COMPILER -Df2cFortran&lt;br /&gt;
 FC         := mpif90&lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -Kieee -Mrecursive -Mdalign -Mextend&lt;br /&gt;
 #FFLAGS     := -c -real-size 64 -integer-size 32 -align all -fltconsistency -recursive -extend_source 132&lt;br /&gt;
 ## This compiles on intel x86_64&lt;br /&gt;
 FFLAGS     := -O0 -cpp -c -real-size 64 -integer-size 32 -align all -fltconsistency -save   &lt;br /&gt;
 &lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -autodouble -align all -fltconsistency -recursive -fast&lt;br /&gt;
 #FFLAGS     := -cpp -c -r8 -i4 -132 -autodouble -convert big_endian -fp-model precise -prec-div -prec-sqrt -recursive -align all -fltconsistency &lt;br /&gt;
 CC         := mpicc&lt;br /&gt;
 CFLAGS     := -c -DUSE_GCC&lt;br /&gt;
 &lt;br /&gt;
 FIXEDFLAGS := &lt;br /&gt;
 FREEFLAGS  := -free&lt;br /&gt;
 MOD_SUFFIX := mod&lt;br /&gt;
 LD         := $(FC)&lt;br /&gt;
 AR         := ar&lt;br /&gt;
 ULIBS      := -L$(LIBROOT) -lesmf -lmct -lmpeu -lmph&lt;br /&gt;
 FLIBS      := -L/usr/local/lib&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In /project/ccsm/ccsm3_current/scripts/ccsm_utils/Machines&lt;br /&gt;
&lt;br /&gt;
Modified these files:&lt;br /&gt;
&lt;br /&gt;
run.linux.gpc&lt;br /&gt;
&lt;br /&gt;
batch.linux.gpc&lt;br /&gt;
&lt;br /&gt;
env.linux.gpc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Modified check_machine to include gpc:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Tools/check_machine&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Changed:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Components/esmf.buildlib &lt;br /&gt;
&lt;br /&gt;
 diff esmf.buildlib esmf.buildlib~&lt;br /&gt;
 &lt;br /&gt;
 24c24&lt;br /&gt;
 &amp;lt; if ($OS == 'Linux')        setenv ESMF_ARCH linux_intel&lt;br /&gt;
 ---&lt;br /&gt;
 &amp;gt; if ($OS == 'Linux')        setenv ESMF_ARCH linux_pgi&lt;br /&gt;
&lt;br /&gt;
Edit  /project/ccsm/ccsm3_current/scripts/ccsm_utils/Components/mct.buildlib&lt;br /&gt;
&lt;br /&gt;
MCT's configure script wants to grab gfortran because of the PATH order, so force FC to mpif90:&lt;br /&gt;
 if ( `uname` == &amp;quot;Linux&amp;quot; ) then&lt;br /&gt;
     setenv FC mpif90&lt;br /&gt;
 endif&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2570</id>
		<title>Installing CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2570"/>
		<updated>2011-01-31T19:29:18Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is mostly a record of the steps needed to install CCSM3 on the General Purpose Cluster (GPC) (Linux).&lt;br /&gt;
&lt;br /&gt;
The first several lines of the Macros.Linux (configuration) file (Using Intel Fortran Compilers):&lt;br /&gt;
&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 # CVS $Id: Macros.Linux,v 1.11.2.4 2007/01/17 05:17:49 tcraig Exp $&lt;br /&gt;
 # CVS $Source: /fs/cgd/csm/models/CVS.REPOS/shared/bld/Macros.Linux,v $&lt;br /&gt;
 # CVS $Name: ccsm3_0_1_beta24 $&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 ### Makefile macros for &amp;quot;Linux&amp;quot;, supports portland + gnu &lt;br /&gt;
 # Makefile macros for &amp;quot;Linux&amp;quot;, supports Intel compilers + gnu &lt;br /&gt;
 #===============================================================================  &lt;br /&gt;
 &lt;br /&gt;
 INCLDIR    := -I. -I$(FPATH) -I$(SCINET_NETCDF_INC) -I$(INCROOT) -I$(INC_MPI) -I$(NETCDF_MOD)&lt;br /&gt;
 &lt;br /&gt;
 #SLIBS      := -L$(LIB_NETCDF) -lnetcdf  -llapack -lblas&lt;br /&gt;
 #SLIBS      := -L$(LIBRARY_PATH) -lmkl_lapack -lmkl_em64t -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -L/usr/local/lib -lnetcdf&lt;br /&gt;
 SLIBS      := -L$(SCINET_NETCDF_LIB) -lnetcdf&lt;br /&gt;
 &lt;br /&gt;
 CPP        := NONE&lt;br /&gt;
 CPPFLAGS   :=&lt;br /&gt;
 CPPDEFS    := -DLINUX -DFORTRANUNDERSCORE -DLINUX  -DNO_SHR_VMATH&lt;br /&gt;
 CPPFLAGS   := -DLINUX -DNO_SHR_VMATH -DINTEL_COMPILER -Df2cFortran&lt;br /&gt;
 FC         := mpif90&lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -Kieee -Mrecursive -Mdalign -Mextend&lt;br /&gt;
 #FFLAGS     := -c -real-size 64 -integer-size 32 -align all -fltconsistency -recursive -extend_source 132&lt;br /&gt;
 ## This compiles on intel x86_64&lt;br /&gt;
 FFLAGS     := -O0 -cpp -c -real-size 64 -integer-size 32 -align all -fltconsistency -save   &lt;br /&gt;
 &lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -autodouble -align all -fltconsistency -recursive -fast&lt;br /&gt;
 #FFLAGS     := -cpp -c -r8 -i4 -132 -autodouble -convert big_endian -fp-model precise -prec-div -prec-sqrt -recursive -align all -fltconsistency &lt;br /&gt;
 CC         := mpicc&lt;br /&gt;
 CFLAGS     := -c -DUSE_GCC&lt;br /&gt;
 &lt;br /&gt;
 FIXEDFLAGS := &lt;br /&gt;
 FREEFLAGS  := -free&lt;br /&gt;
 MOD_SUFFIX := mod&lt;br /&gt;
 LD         := $(FC)&lt;br /&gt;
 AR         := ar&lt;br /&gt;
 ULIBS      := -L$(LIBROOT) -lesmf -lmct -lmpeu -lmph&lt;br /&gt;
 FLIBS      := -L/usr/local/lib&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In /project/ccsm/ccsm3_current/scripts/ccsm_utils/Machines&lt;br /&gt;
&lt;br /&gt;
Modified these files:&lt;br /&gt;
&lt;br /&gt;
run.linux.gpc&lt;br /&gt;
&lt;br /&gt;
batch.linux.gpc&lt;br /&gt;
&lt;br /&gt;
env.linux.gpc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Modified check_machine to include gpc:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Tools/check_machine&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Changed:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Components/esmf.buildlib &lt;br /&gt;
&lt;br /&gt;
 diff esmf.buildlib esmf.buildlib~&lt;br /&gt;
 &lt;br /&gt;
 24c24&lt;br /&gt;
 &amp;lt; if ($OS == 'Linux')        setenv ESMF_ARCH linux_intel&lt;br /&gt;
 ---&lt;br /&gt;
 &amp;gt; if ($OS == 'Linux')        setenv ESMF_ARCH linux_pgi&lt;br /&gt;
&lt;br /&gt;
Edit  /project/ccsm/ccsm3_current/scripts/ccsm_utils/Components/mct.buildlib&lt;br /&gt;
&lt;br /&gt;
MCT's configure script wants to grab gfortran because of the PATH order, so force FC to mpif90:&lt;br /&gt;
if ( `uname` == &amp;quot;Linux&amp;quot; ) then&lt;br /&gt;
     setenv FC mpif90&lt;br /&gt;
endif&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2569</id>
		<title>Installing CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2569"/>
		<updated>2011-01-31T17:49:56Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is mostly a record of the steps needed to install CCSM3 on the General Purpose Cluster (GPC) (Linux).&lt;br /&gt;
&lt;br /&gt;
The first several lines of the Macros.Linux (configuration) file (Using Intel Fortran Compilers):&lt;br /&gt;
&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 # CVS $Id: Macros.Linux,v 1.11.2.4 2007/01/17 05:17:49 tcraig Exp $&lt;br /&gt;
 # CVS $Source: /fs/cgd/csm/models/CVS.REPOS/shared/bld/Macros.Linux,v $&lt;br /&gt;
 # CVS $Name: ccsm3_0_1_beta24 $&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 ### Makefile macros for &amp;quot;Linux&amp;quot;, supports portland + gnu &lt;br /&gt;
 # Makefile macros for &amp;quot;Linux&amp;quot;, supports Intel compilers + gnu &lt;br /&gt;
 #===============================================================================  &lt;br /&gt;
 &lt;br /&gt;
 INCLDIR    := -I. -I$(FPATH) -I$(SCINET_NETCDF_INC) -I$(INCROOT) -I$(INC_MPI) -I$(NETCDF_MOD)&lt;br /&gt;
 &lt;br /&gt;
 #SLIBS      := -L$(LIB_NETCDF) -lnetcdf  -llapack -lblas&lt;br /&gt;
 #SLIBS      := -L$(LIBRARY_PATH) -lmkl_lapack -lmkl_em64t -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -L/usr/local/lib -lnetcdf&lt;br /&gt;
 SLIBS      := -L$(SCINET_NETCDF_LIB) -lnetcdf&lt;br /&gt;
 &lt;br /&gt;
 CPP        := NONE&lt;br /&gt;
 CPPFLAGS   :=&lt;br /&gt;
 CPPDEFS    := -DLINUX -DFORTRANUNDERSCORE -DLINUX  -DNO_SHR_VMATH&lt;br /&gt;
 CPPFLAGS   := -DLINUX -DNO_SHR_VMATH -DINTEL_COMPILER -Df2cFortran&lt;br /&gt;
 FC         := mpif90&lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -Kieee -Mrecursive -Mdalign -Mextend&lt;br /&gt;
 #FFLAGS     := -c -real-size 64 -integer-size 32 -align all -fltconsistency -recursive -extend_source 132&lt;br /&gt;
 ## This compiles on intel x86_64&lt;br /&gt;
 FFLAGS     := -O0 -cpp -c -real-size 64 -integer-size 32 -align all -fltconsistency -save   &lt;br /&gt;
 &lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -autodouble -align all -fltconsistency -recursive -fast&lt;br /&gt;
 #FFLAGS     := -cpp -c -r8 -i4 -132 -autodouble -convert big_endian -fp-model precise -prec-div -prec-sqrt -recursive -align all -fltconsistency &lt;br /&gt;
 CC         := mpicc&lt;br /&gt;
 CFLAGS     := -c -DUSE_GCC&lt;br /&gt;
 &lt;br /&gt;
 FIXEDFLAGS := &lt;br /&gt;
 FREEFLAGS  := -free&lt;br /&gt;
 MOD_SUFFIX := mod&lt;br /&gt;
 LD         := $(FC)&lt;br /&gt;
 AR         := ar&lt;br /&gt;
 ULIBS      := -L$(LIBROOT) -lesmf -lmct -lmpeu -lmph&lt;br /&gt;
 FLIBS      := -L/usr/local/lib&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In /project/ccsm/ccsm3_current/scripts/ccsm_utils/Machines&lt;br /&gt;
&lt;br /&gt;
Modified these files:&lt;br /&gt;
&lt;br /&gt;
run.linux.gpc&lt;br /&gt;
&lt;br /&gt;
batch.linux.gpc&lt;br /&gt;
&lt;br /&gt;
env.linux.gpc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Modified check_machine to include gpc:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Tools/check_machine&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Changed:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Components/esmf.buildlib &lt;br /&gt;
&lt;br /&gt;
 diff esmf.buildlib esmf.buildlib~&lt;br /&gt;
 &lt;br /&gt;
 24c24&lt;br /&gt;
 &amp;lt; if ($OS == 'Linux')        setenv ESMF_ARCH linux_intel&lt;br /&gt;
 ---&lt;br /&gt;
 &amp;gt; if ($OS == 'Linux')        setenv ESMF_ARCH linux_pgi&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2568</id>
		<title>Installing CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2568"/>
		<updated>2011-01-31T17:29:56Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is mostly a record of the steps needed to install CCSM3 on the General Purpose Cluster (GPC) (Linux).&lt;br /&gt;
&lt;br /&gt;
The first several lines of the Macros.Linux (configuration) file (Using Intel Fortran Compilers):&lt;br /&gt;
&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 # CVS $Id: Macros.Linux,v 1.11.2.4 2007/01/17 05:17:49 tcraig Exp $&lt;br /&gt;
 # CVS $Source: /fs/cgd/csm/models/CVS.REPOS/shared/bld/Macros.Linux,v $&lt;br /&gt;
 # CVS $Name: ccsm3_0_1_beta24 $&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 ### Makefile macros for &amp;quot;Linux&amp;quot;, supports portland + gnu &lt;br /&gt;
 # Makefile macros for &amp;quot;Linux&amp;quot;, supports Intel compilers + gnu &lt;br /&gt;
 #===============================================================================  &lt;br /&gt;
 &lt;br /&gt;
 INCLDIR    := -I. -I$(FPATH) -I$(SCINET_NETCDF_INC) -I$(INCROOT) -I$(INC_MPI) -I$(NETCDF_MOD)&lt;br /&gt;
 &lt;br /&gt;
 #SLIBS      := -L$(LIB_NETCDF) -lnetcdf  -llapack -lblas&lt;br /&gt;
 #SLIBS      := -L$(LIBRARY_PATH) -lmkl_lapack -lmkl_em64t -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -L/usr/local/lib -lnetcdf&lt;br /&gt;
 SLIBS      := -L$(SCINET_NETCDF_LIB) -lnetcdf&lt;br /&gt;
 &lt;br /&gt;
 CPP        := NONE&lt;br /&gt;
 CPPFLAGS   :=&lt;br /&gt;
 CPPDEFS    := -DLINUX -DFORTRANUNDERSCORE -DLINUX  -DNO_SHR_VMATH&lt;br /&gt;
 CPPFLAGS   := -DLINUX -DNO_SHR_VMATH -DINTEL_COMPILER -Df2cFortran&lt;br /&gt;
 FC         := mpif90&lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -Kieee -Mrecursive -Mdalign -Mextend&lt;br /&gt;
 #FFLAGS     := -c -real-size 64 -integer-size 32 -align all -fltconsistency -recursive -extend_source 132&lt;br /&gt;
 ## This compiles on intel x86_64&lt;br /&gt;
 FFLAGS     := -O0 -cpp -c -real-size 64 -integer-size 32 -align all -fltconsistency -save   &lt;br /&gt;
 &lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -autodouble -align all -fltconsistency -recursive -fast&lt;br /&gt;
 #FFLAGS     := -cpp -c -r8 -i4 -132 -autodouble -convert big_endian -fp-model precise -prec-div -prec-sqrt -recursive -align all -fltconsistency &lt;br /&gt;
 CC         := mpicc&lt;br /&gt;
 CFLAGS     := -c -DUSE_GCC&lt;br /&gt;
 &lt;br /&gt;
 FIXEDFLAGS := &lt;br /&gt;
 FREEFLAGS  := -free&lt;br /&gt;
 MOD_SUFFIX := mod&lt;br /&gt;
 LD         := $(FC)&lt;br /&gt;
 AR         := ar&lt;br /&gt;
 ULIBS      := -L$(LIBROOT) -lesmf -lmct -lmpeu -lmph&lt;br /&gt;
 FLIBS      := -L/usr/local/lib&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In /project/ccsm/ccsm3_current/scripts/ccsm_utils/Machines&lt;br /&gt;
&lt;br /&gt;
Modified these files:&lt;br /&gt;
&lt;br /&gt;
run.linux.gpc&lt;br /&gt;
&lt;br /&gt;
batch.linux.gpc&lt;br /&gt;
&lt;br /&gt;
env.linux.gpc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Modified check_machine to include gpc:&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/ccsm3_current/scripts/ccsm_utils/Tools/check_machine&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2567</id>
		<title>Installing CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2567"/>
		<updated>2011-01-31T17:23:31Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is mostly a record of the steps needed to install CCSM3 on the General Purpose Cluster (GPC) (Linux).&lt;br /&gt;
&lt;br /&gt;
The first several lines of the Macros.Linux (configuration) file (Using Intel Fortran Compilers):&lt;br /&gt;
&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 # CVS $Id: Macros.Linux,v 1.11.2.4 2007/01/17 05:17:49 tcraig Exp $&lt;br /&gt;
 # CVS $Source: /fs/cgd/csm/models/CVS.REPOS/shared/bld/Macros.Linux,v $&lt;br /&gt;
 # CVS $Name: ccsm3_0_1_beta24 $&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 ### Makefile macros for &amp;quot;Linux&amp;quot;, supports portland + gnu &lt;br /&gt;
 # Makefile macros for &amp;quot;Linux&amp;quot;, supports Intel compilers + gnu &lt;br /&gt;
 #===============================================================================  &lt;br /&gt;
 &lt;br /&gt;
 INCLDIR    := -I. -I$(FPATH) -I$(SCINET_NETCDF_INC) -I$(INCROOT) -I$(INC_MPI) -I$(NETCDF_MOD)&lt;br /&gt;
 &lt;br /&gt;
 #SLIBS      := -L$(LIB_NETCDF) -lnetcdf  -llapack -lblas&lt;br /&gt;
 #SLIBS      := -L$(LIBRARY_PATH) -lmkl_lapack -lmkl_em64t -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -L/usr/local/lib -lnetcdf&lt;br /&gt;
 SLIBS      := -L$(SCINET_NETCDF_LIB) -lnetcdf&lt;br /&gt;
 &lt;br /&gt;
 CPP        := NONE&lt;br /&gt;
 CPPFLAGS   :=&lt;br /&gt;
 CPPDEFS    := -DLINUX -DFORTRANUNDERSCORE -DLINUX  -DNO_SHR_VMATH&lt;br /&gt;
 CPPFLAGS   := -DLINUX -DNO_SHR_VMATH -DINTEL_COMPILER -Df2cFortran&lt;br /&gt;
 FC         := mpif90&lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -Kieee -Mrecursive -Mdalign -Mextend&lt;br /&gt;
 #FFLAGS     := -c -real-size 64 -integer-size 32 -align all -fltconsistency -recursive -extend_source 132&lt;br /&gt;
 ## This compiles on intel x86_64&lt;br /&gt;
 FFLAGS     := -O0 -cpp -c -real-size 64 -integer-size 32 -align all -fltconsistency -save   &lt;br /&gt;
 &lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -autodouble -align all -fltconsistency -recursive -fast&lt;br /&gt;
 #FFLAGS     := -cpp -c -r8 -i4 -132 -autodouble -convert big_endian -fp-model precise -prec-div -prec-sqrt -recursive -align all -fltconsistency &lt;br /&gt;
 CC         := mpicc&lt;br /&gt;
 CFLAGS     := -c -DUSE_GCC&lt;br /&gt;
 &lt;br /&gt;
 FIXEDFLAGS := &lt;br /&gt;
 FREEFLAGS  := -free&lt;br /&gt;
 MOD_SUFFIX := mod&lt;br /&gt;
 LD         := $(FC)&lt;br /&gt;
 AR         := ar&lt;br /&gt;
 ULIBS      := -L$(LIBROOT) -lesmf -lmct -lmpeu -lmph&lt;br /&gt;
 FLIBS      := -L/usr/local/lib&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In /project/ccsm/ccsm3_current/scripts/ccsm_utils/Machines&lt;br /&gt;
&lt;br /&gt;
Modified these files:&lt;br /&gt;
run.linux.gpc&lt;br /&gt;
batch.linux.gpc&lt;br /&gt;
env.linux.gpc&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2566</id>
		<title>Installing CCSM3</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Installing_CCSM3&amp;diff=2566"/>
		<updated>2011-01-31T17:11:07Z</updated>

		<summary type="html">&lt;p&gt;Guido: Created page with &amp;quot;This page is mostly a record of the steps needed to install CCSM3 on the General Purpose Cluster (GPC) (Linux).  The first several lines of the Macros.Linux (configuration) file ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is mostly a record of the steps needed to install CCSM3 on the General Purpose Cluster (GPC) (Linux).&lt;br /&gt;
&lt;br /&gt;
The first several lines of the Macros.Linux (configuration) file (Using Intel Fortran Compilers):&lt;br /&gt;
&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 # CVS $Id: Macros.Linux,v 1.11.2.4 2007/01/17 05:17:49 tcraig Exp $&lt;br /&gt;
 # CVS $Source: /fs/cgd/csm/models/CVS.REPOS/shared/bld/Macros.Linux,v $&lt;br /&gt;
 # CVS $Name: ccsm3_0_1_beta24 $&lt;br /&gt;
 #===============================================================================&lt;br /&gt;
 ### Makefile macros for &amp;quot;Linux&amp;quot;, supports portland + gnu &lt;br /&gt;
 # Makefile macros for &amp;quot;Linux&amp;quot;, supports Intel compilers + gnu &lt;br /&gt;
 #===============================================================================  &lt;br /&gt;
&lt;br /&gt;
 INCLDIR    := -I. -I$(FPATH) -I$(SCINET_NETCDF_INC) -I$(INCROOT) -I$(INC_MPI) -I$(NETCDF_MOD)&lt;br /&gt;
 &lt;br /&gt;
 #SLIBS      := -L$(LIB_NETCDF) -lnetcdf  -llapack -lblas&lt;br /&gt;
 #SLIBS      := -L$(LIBRARY_PATH) -lmkl_lapack -lmkl_em64t -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -L/usr/local/lib -lnetcdf&lt;br /&gt;
 SLIBS      := -L$(SCINET_NETCDF_LIB) -lnetcdf&lt;br /&gt;
 &lt;br /&gt;
 CPP        := NONE&lt;br /&gt;
 CPPFLAGS   :=&lt;br /&gt;
 CPPDEFS    := -DLINUX -DFORTRANUNDERSCORE -DLINUX  -DNO_SHR_VMATH&lt;br /&gt;
 CPPFLAGS   := -DLINUX -DNO_SHR_VMATH -DINTEL_COMPILER -Df2cFortran&lt;br /&gt;
 FC         := mpif90&lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -Kieee -Mrecursive -Mdalign -Mextend&lt;br /&gt;
 #FFLAGS     := -c -real-size 64 -integer-size 32 -align all -fltconsistency -recursive -extend_source 132&lt;br /&gt;
 ## This compiles on intel x86_64&lt;br /&gt;
 FFLAGS     := -O0 -cpp -c -real-size 64 -integer-size 32 -align all -fltconsistency -save   &lt;br /&gt;
 &lt;br /&gt;
 #FFLAGS     := -c -r8 -i4 -autodouble -align all -fltconsistency -recursive -fast&lt;br /&gt;
 #FFLAGS     := -cpp -c -r8 -i4 -132 -autodouble -convert big_endian -fp-model precise -prec-div -prec-sqrt -recursive -align all -fltconsistency &lt;br /&gt;
 CC         := mpicc&lt;br /&gt;
 CFLAGS     := -c -DUSE_GCC&lt;br /&gt;
 &lt;br /&gt;
 FIXEDFLAGS := &lt;br /&gt;
 FREEFLAGS  := -free&lt;br /&gt;
 MOD_SUFFIX := mod&lt;br /&gt;
 LD         := $(FC)&lt;br /&gt;
 AR         := ar&lt;br /&gt;
 ULIBS      := -L$(LIBROOT) -lesmf -lmct -lmpeu -lmph&lt;br /&gt;
 FLIBS      := -L/usr/local/lib&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=User_Codes&amp;diff=2565</id>
		<title>User Codes</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=User_Codes&amp;diff=2565"/>
		<updated>2011-01-31T17:06:47Z</updated>

		<summary type="html">&lt;p&gt;Guido: /* Climate Modelling */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
==Astrophysics==&lt;br /&gt;
&lt;br /&gt;
===Athena (explicit, uniform grid MHD code)===&lt;br /&gt;
&lt;br /&gt;
[[Image:StrongScalingAthenaGPC.png|thumb|right|320px|Athena scaling on GPC with OpenMPI and MVAPICH2 on GigE, and OpenMPI on InfiniBand]]&lt;br /&gt;
&lt;br /&gt;
[http://www.astro.princeton.edu/~jstone/athena.html Athena] is a straightforward C code which doesn't use many libraries, so it is easy to build and compile on new machines.   &lt;br /&gt;
&lt;br /&gt;
It encapsulates its compiler flags, etc., in a &amp;lt;tt&amp;gt;Makeoptions.in&amp;lt;/tt&amp;gt; file which is then processed by &amp;lt;tt&amp;gt;configure&amp;lt;/tt&amp;gt;.   I've used the following additions to &amp;lt;tt&amp;gt;Makeoptions.in&amp;lt;/tt&amp;gt; on TCS and GPC:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
ifeq ($(MACHINE),scinettcs)&lt;br /&gt;
  CC = mpcc_r&lt;br /&gt;
  LDR = mpcc_r&lt;br /&gt;
  OPT = -O5 -q64 -qarch=pwr6 -qtune=pwr6 -qcache=auto -qlargepage -qstrict&lt;br /&gt;
  MPIINC =&lt;br /&gt;
  MPILIB =&lt;br /&gt;
  CFLAGS = $(OPT)&lt;br /&gt;
  LIB = -ldl -lm&lt;br /&gt;
else&lt;br /&gt;
ifeq ($(MACHINE),scinetgpc)&lt;br /&gt;
  CC = mpicc&lt;br /&gt;
  LDR = mpicc&lt;br /&gt;
  OPT = -O3&lt;br /&gt;
  MPIINC =&lt;br /&gt;
  MPILIB =&lt;br /&gt;
  CFLAGS = $(OPT)&lt;br /&gt;
  LIB = -lm&lt;br /&gt;
else&lt;br /&gt;
...&lt;br /&gt;
endif&lt;br /&gt;
endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
It performs quite well on the GPC, scaling extremely well even in a strong scaling test out to about 256 cores (32 nodes) on Gigabit Ethernet, and performing beautifully on InfiniBand out to 512 cores (64 nodes). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Ljdursi|ljdursi]]  19:20, 13 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
===FLASH3 (Adaptive Mesh reactive hydrodynamics; explicit hydro/MHD)===&lt;br /&gt;
&lt;br /&gt;
[[Image:weak-scaling-example.png|thumb|right|320px|Weak scaling test of the 2d Sod problem on both the GPC and TCS.  The results are actually somewhat faster on the GPC; in both cases (weak) scaling is very good out to at least 256 cores]]&lt;br /&gt;
&lt;br /&gt;
[http://flash.uchicago.edu FLASH] encapsulates its machine-dependent information in the &amp;lt;tt&amp;gt;FLASH3/sites&amp;lt;/tt&amp;gt; directory.  For the GPC, you'll have to&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load intel&lt;br /&gt;
module load openmpi&lt;br /&gt;
module load hdf5/184-p1-v16-openmpi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and with that, the following file (&amp;lt;tt&amp;gt;sites/scinetgpc/Makefile.h&amp;lt;/tt&amp;gt;) works for me:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
## Must do module load hdf5/184-p1-v16-openmpi&lt;br /&gt;
HDF5_PATH = ${SCINET_HDF5_BASE}&lt;br /&gt;
ZLIB_PATH = /usr/local&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Compiler and linker commands&lt;br /&gt;
#&lt;br /&gt;
#  We use the f90 compiler as the linker, so some C libraries may explicitly&lt;br /&gt;
#  need to be added into the link line.&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
## modules will put the right mpi in our path&lt;br /&gt;
FCOMP   = mpif77&lt;br /&gt;
CCOMP   = mpicc&lt;br /&gt;
CPPCOMP = mpiCC&lt;br /&gt;
LINK    = mpif77&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Compilation flags&lt;br /&gt;
#&lt;br /&gt;
#  Three sets of compilation/linking flags are defined: one for optimized&lt;br /&gt;
#  code, one for testing, and one for debugging.  The default is to use the &lt;br /&gt;
#  _OPT version.  Specifying -debug to setup will pick the _DEBUG version,&lt;br /&gt;
#  these should enable bounds checking.  Specifying -test is used for &lt;br /&gt;
#  flash_test, and is set for quick code generation, and (sometimes) &lt;br /&gt;
#  profiling.  The Makefile generated by setup will assign the generic token &lt;br /&gt;
#  (ex. FFLAGS) to the proper set of flags (ex. FFLAGS_OPT).&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
FFLAGS_OPT   =  -c -r8 -i4 -O3 -xSSE4.2&lt;br /&gt;
FFLAGS_DEBUG =  -c -g -r8 -i4 -O0&lt;br /&gt;
FFLAGS_TEST  =  -c -r8 -i4&lt;br /&gt;
&lt;br /&gt;
LIB_HDF5 = -L${HDF5_PATH}/lib -lhdf5 -L${SCINET_ZLIB_LIB} -lz -lgpfs&lt;br /&gt;
&lt;br /&gt;
# if we are using HDF5, we need to specify the path to the include files&lt;br /&gt;
CFLAGS_HDF5  = -I${HDF5_PATH}/include&lt;br /&gt;
&lt;br /&gt;
CFLAGS_OPT   = -c -O3 -xSSE4.2&lt;br /&gt;
CFLAGS_TEST  = -c -O2 &lt;br /&gt;
CFLAGS_DEBUG = -c -g  &lt;br /&gt;
&lt;br /&gt;
MDEFS = &lt;br /&gt;
&lt;br /&gt;
.SUFFIXES: .o .c .f .F .h .fh .F90 .f90&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Linker flags&lt;br /&gt;
#&lt;br /&gt;
#  There is a separate version of the linker flags for each of the _OPT, &lt;br /&gt;
#  _DEBUG, and _TEST cases.&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
LFLAGS_OPT   = -o&lt;br /&gt;
LFLAGS_TEST  = -o&lt;br /&gt;
LFLAGS_DEBUG = -g -o&lt;br /&gt;
&lt;br /&gt;
MACHOBJ = &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MV = mv -f&lt;br /&gt;
AR = ar -r&lt;br /&gt;
RM = rm -f&lt;br /&gt;
CD = cd&lt;br /&gt;
RL = ranlib&lt;br /&gt;
ECHO = echo&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Ljdursi|ljdursi]] 22:11, 13 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Aeronautics==&lt;br /&gt;
&lt;br /&gt;
==Chemistry==&lt;br /&gt;
&lt;br /&gt;
===CPMD===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[Cpmd | CPMD]] page.&lt;br /&gt;
&lt;br /&gt;
===NWChem===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[Nwchem | NWChem]] page.&lt;br /&gt;
&lt;br /&gt;
===GAMESS (US)===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[gamess|GAMESS (US)]] page.&lt;br /&gt;
&lt;br /&gt;
User supplied content below.&lt;br /&gt;
&lt;br /&gt;
====Tips from the Fekl Lab====&lt;br /&gt;
&lt;br /&gt;
Through trial and error, we have found a few useful things that we would like to share:&lt;br /&gt;
&lt;br /&gt;
1. Two very useful, open-source programs for visualizing output files from GAMESS(US) and for generating input files are [http://www.scl.ameslab.gov/MacMolPlt/ MacMolPlt] and [http://avogadro.openmolecules.net/wiki/Main_Page Avogadro].  They are available for UNIX/Linux, Windows and Mac-based machines; however, any input files that we have generated with these programs on a Windows-based machine do not run on Mac-based machines.  We don't know why.&lt;br /&gt;
&lt;br /&gt;
2. [http://winscp.net/eng/index.php WinSCP] is a very useful tool with a graphical user interface for moving files between a local machine and SciNet.  It also has text editing capabilities.&lt;br /&gt;
&lt;br /&gt;
3. The [https://bse.pnl.gov/bse/portal EMSL Basis Set Exchange] is an excellent source for custom basis set or effective core potential parameters.  Make sure that you specify &amp;quot;Gamess-US&amp;quot; in the format drop-down box.&lt;br /&gt;
&lt;br /&gt;
4. The commercial program [http://www.chemcraftprog.com/ ChemCraft] is a highly useful visualization tool that can edit molecules in much the same fashion as GaussView.  It can also be customized to build GAMESS(US) input files.&lt;br /&gt;
&lt;br /&gt;
====Anatomy of a GAMESS(US) Input File with Basis Set Info in an External File====&lt;br /&gt;
&lt;br /&gt;
  $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=M06-L MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
   ECP=READ $END&lt;br /&gt;
  $SYSTEM TIMLIM=525600 MWORDS=1750 PARALL=.TRUE. $END&lt;br /&gt;
  $BASIS GBASIS=CUSTOMNI EXTFIL=.t. $END&lt;br /&gt;
  $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
  $STATPT OPTTOL=0.0001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
  $DATA&lt;br /&gt;
  Mo_BDT3&lt;br /&gt;
 C1&lt;br /&gt;
 MOLYBDENUM 42.0      5.7556500000      4.4039600000     16.5808400000&lt;br /&gt;
 SULFUR     16.0      7.4169700000      3.1956300000     15.2089300000&lt;br /&gt;
 SULFUR     16.0      4.0966800000      3.2258300000     15.1761100000&lt;br /&gt;
 SULFUR     16.0      3.9677300000      4.4940500000     18.3266100000&lt;br /&gt;
 SULFUR     16.0      7.1776900000      3.5815000000     18.4485200000&lt;br /&gt;
 SULFUR     16.0      4.3776600000      6.2447400000     15.6786900000&lt;br /&gt;
 SULFUR     16.0      7.5478700000      6.0679800000     16.2223700000&lt;br /&gt;
 CARBON      6.0      6.4716900000      2.1004800000     14.1902300000&lt;br /&gt;
 CARBON      6.0      5.0690300000      2.1781400000     14.1080700000&lt;br /&gt;
 CARBON      6.0      4.8421800000      4.2701300000     19.8855500000&lt;br /&gt;
 CARBON      6.0      6.1969000000      3.9249600000     19.9397400000&lt;br /&gt;
 CARBON      6.0      6.8280600000      3.7834200000     21.1913200000&lt;br /&gt;
 CARBON      6.0      5.7697600000      7.6933500000     17.4241800000&lt;br /&gt;
 CARBON      6.0      7.2043100000      7.9413600000     17.8281100000&lt;br /&gt;
 CARBON      6.0      5.5051400000      7.0409700000     14.5903800000&lt;br /&gt;
 CARBON      6.0      6.8905200000      6.9194700000     14.7626200000&lt;br /&gt;
 CARBON      6.0      7.7396400000      7.5379800000     13.8285700000&lt;br /&gt;
 HYDROGEN    1.0      8.8190700000      7.4520600000     13.9252200000&lt;br /&gt;
 CARBON      6.0      7.2169400000      8.2960300000     12.7704100000&lt;br /&gt;
 HYDROGEN    1.0      7.8667000000      8.7825100000     12.0575600000&lt;br /&gt;
 CARBON      6.0      5.8260300000      8.4502300000     12.6467800000&lt;br /&gt;
 HYDROGEN    1.0      5.4143000000      9.0544300000     11.8493100000&lt;br /&gt;
 CARBON      6.0      4.9881500000      7.8192300000     13.5528400000&lt;br /&gt;
 HYDROGEN    1.0      3.9090500000      7.9420000000     13.4583700000&lt;br /&gt;
 CARBON      6.0      7.1538500000      1.1569600000     13.4143900000&lt;br /&gt;
 CARBON      6.0      4.4018100000      1.3603900000     13.1919900000&lt;br /&gt;
 CARBON      6.0      6.4791600000      0.3185500000     12.5353300000&lt;br /&gt;
 CARBON      6.0      5.0837400000      0.4369500000     12.4084900000&lt;br /&gt;
 HYDROGEN    1.0      7.0116000000     -0.4099400000     11.9434600000&lt;br /&gt;
 HYDROGEN    1.0      8.2399000000      1.0702400000     13.4937600000&lt;br /&gt;
 HYDROGEN    1.0      3.3185600000      1.4368700000     13.0953100000&lt;br /&gt;
 HYDROGEN    1.0      4.5549800000     -0.1997300000     11.7165200000&lt;br /&gt;
 CARBON      6.0      6.1105700000      3.9639000000     22.3866100000&lt;br /&gt;
 CARBON      6.0      4.1216300000      4.4424400000     21.1020100000&lt;br /&gt;
 HYDROGEN    1.0      7.8732900000      3.5217100000     21.2520500000&lt;br /&gt;
 CARBON      6.0      4.7606000000      4.2868500000     22.3363800000&lt;br /&gt;
 HYDROGEN    1.0      6.6064200000      3.8406000000     23.3428500000&lt;br /&gt;
 HYDROGEN    1.0      4.2065000000      4.4170700000     23.2667100000&lt;br /&gt;
 HYDROGEN    1.0      3.0674000000      4.6893500000     21.0889000000&lt;br /&gt;
 HYDROGEN    1.0      7.4249200000      7.7545300000     18.8583200000&lt;br /&gt;
 HYDROGEN    1.0      7.6651700000      8.9049700000     17.7652100000&lt;br /&gt;
 HYDROGEN    1.0      5.3324000000      8.6487800000     17.2222700000&lt;br /&gt;
 HYDROGEN    1.0      5.5015000000      7.1039000000     18.2759400000&lt;br /&gt;
  $END&lt;br /&gt;
  $ECP&lt;br /&gt;
 MO-ECP GEN     28     3&lt;br /&gt;
  5      ----- f potential     -----&lt;br /&gt;
     -0.0469492        0    537.9667807        &lt;br /&gt;
    -20.2080084        1    147.8982938        &lt;br /&gt;
   -106.2116302        2     45.7358898        &lt;br /&gt;
    -41.8107368        2     13.2911467        &lt;br /&gt;
     -4.2054103        2      4.7059961        &lt;br /&gt;
  3      ----- s-f potential     -----&lt;br /&gt;
      2.8063717        0    110.2991760        &lt;br /&gt;
     44.5162012        1     23.2014645        &lt;br /&gt;
     82.7785227        2      5.3530131        &lt;br /&gt;
  4      ----- p-f potential     -----&lt;br /&gt;
      4.9420876        0     63.2901397        &lt;br /&gt;
     25.8604976        1     23.3315302        &lt;br /&gt;
    132.4708742        2     24.6759423        &lt;br /&gt;
     57.3149794        2      4.6493040        &lt;br /&gt;
  5      ----- d-f potential     -----&lt;br /&gt;
      3.0054591        0    104.4839977        &lt;br /&gt;
     26.3637851        1     66.2307245        &lt;br /&gt;
    183.3849199        2     39.1283176        &lt;br /&gt;
     98.4453068        2     13.1164437        &lt;br /&gt;
     22.4901377        2      3.6280263 &lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
=====The Input Deck=====&lt;br /&gt;
&lt;br /&gt;
Below is the input deck.  It is where you tell GAMESS(US) which job type to execute and where all the individual parameters for your specific job type are entered.  The example input deck below is for a geometry optimization and frequency calculation; it is equivalent to a Gaussian job with &amp;quot;opt&amp;quot; and &amp;quot;freq&amp;quot; in the route section.&lt;br /&gt;
&lt;br /&gt;
  $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=M06-L MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
   ECP=READ $END&lt;br /&gt;
  $SYSTEM TIMLIM=2850 MWORDS=1750 MEMDDI=20 PARALL=.TRUE. $END&lt;br /&gt;
  $BASIS GBASIS=CUSTOMNI EXTFIL=.t. $END&lt;br /&gt;
  $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
  $STATPT OPTTOL=0.0001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
  $DATA&lt;br /&gt;
&lt;br /&gt;
An important thing to note is the spacing: each line of the input deck must begin with at least one space, or the job will fail.  Most builders insert this space anyway, but it helps to double-check.&lt;br /&gt;
&lt;br /&gt;
The end of the input deck is marked by the &amp;quot;$DATA&amp;quot; line.&lt;br /&gt;
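A missing leading space can be caught before submitting with a short awk check (a sketch; 'job.inp' is an example input file name):

```shell
# Flag any deck line (up to and including the $DATA marker) that does not
# begin with a space.
awk '!/^ / { print "line " NR " lacks a leading space: " $0 }
     /\$DATA/ { exit }' job.inp
```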
&lt;br /&gt;
=====Job Title Line=====&lt;br /&gt;
&lt;br /&gt;
The next line of the file is the job title.  It can be anything you wish; however, to be on the safe side, we avoid using symbols or spaces.&lt;br /&gt;
&lt;br /&gt;
  Mo_BDT3&lt;br /&gt;
&lt;br /&gt;
=====Symmetry Point Group=====&lt;br /&gt;
&lt;br /&gt;
The next line of the file is the symmetry point group of your molecule.  Note that there is no leading space before the point group.&lt;br /&gt;
&lt;br /&gt;
 C1&lt;br /&gt;
&lt;br /&gt;
=====Coordinates=====&lt;br /&gt;
&lt;br /&gt;
The next block of text is set aside for the coordinates of the molecule, in either internal (z-matrix) format or Cartesian coordinates.  Note that there is no leading space before the coordinates.  One may use either the chemical symbol or the full name of each atom in the molecule.  The end of the coordinates is signified by a &amp;quot;$END&amp;quot;, which MUST have one space preceding it.  The coordinates below do NOT have any basis set information inserted.  It is possible to insert basis set information directly into the input file: obtain the desired basis set parameters from the EMSL and insert them below each relevant atom.  An example input file with inserted basis set information will be shown later.&lt;br /&gt;
&lt;br /&gt;
 MOLYBDENUM 42.0      5.7556500000      4.4039600000     16.5808400000&lt;br /&gt;
 SULFUR     16.0      7.4169700000      3.1956300000     15.2089300000&lt;br /&gt;
 SULFUR     16.0      4.0966800000      3.2258300000     15.1761100000&lt;br /&gt;
 SULFUR     16.0      3.9677300000      4.4940500000     18.3266100000&lt;br /&gt;
 SULFUR     16.0      7.1776900000      3.5815000000     18.4485200000&lt;br /&gt;
 SULFUR     16.0      4.3776600000      6.2447400000     15.6786900000&lt;br /&gt;
 SULFUR     16.0      7.5478700000      6.0679800000     16.2223700000&lt;br /&gt;
 CARBON      6.0      6.4716900000      2.1004800000     14.1902300000&lt;br /&gt;
 CARBON      6.0      5.0690300000      2.1781400000     14.1080700000&lt;br /&gt;
 CARBON      6.0      4.8421800000      4.2701300000     19.8855500000&lt;br /&gt;
 CARBON      6.0      6.1969000000      3.9249600000     19.9397400000&lt;br /&gt;
 CARBON      6.0      6.8280600000      3.7834200000     21.1913200000&lt;br /&gt;
 CARBON      6.0      5.7697600000      7.6933500000     17.4241800000&lt;br /&gt;
 CARBON      6.0      7.2043100000      7.9413600000     17.8281100000&lt;br /&gt;
 CARBON      6.0      5.5051400000      7.0409700000     14.5903800000&lt;br /&gt;
 CARBON      6.0      6.8905200000      6.9194700000     14.7626200000&lt;br /&gt;
 CARBON      6.0      7.7396400000      7.5379800000     13.8285700000&lt;br /&gt;
 HYDROGEN    1.0      8.8190700000      7.4520600000     13.9252200000&lt;br /&gt;
 CARBON      6.0      7.2169400000      8.2960300000     12.7704100000&lt;br /&gt;
 HYDROGEN    1.0      7.8667000000      8.7825100000     12.0575600000&lt;br /&gt;
 CARBON      6.0      5.8260300000      8.4502300000     12.6467800000&lt;br /&gt;
 HYDROGEN    1.0      5.4143000000      9.0544300000     11.8493100000&lt;br /&gt;
 CARBON      6.0      4.9881500000      7.8192300000     13.5528400000&lt;br /&gt;
 HYDROGEN    1.0      3.9090500000      7.9420000000     13.4583700000&lt;br /&gt;
 CARBON      6.0      7.1538500000      1.1569600000     13.4143900000&lt;br /&gt;
 CARBON      6.0      4.4018100000      1.3603900000     13.1919900000&lt;br /&gt;
 CARBON      6.0      6.4791600000      0.3185500000     12.5353300000&lt;br /&gt;
 CARBON      6.0      5.0837400000      0.4369500000     12.4084900000&lt;br /&gt;
 HYDROGEN    1.0      7.0116000000     -0.4099400000     11.9434600000&lt;br /&gt;
 HYDROGEN    1.0      8.2399000000      1.0702400000     13.4937600000&lt;br /&gt;
 HYDROGEN    1.0      3.3185600000      1.4368700000     13.0953100000&lt;br /&gt;
 HYDROGEN    1.0      4.5549800000     -0.1997300000     11.7165200000&lt;br /&gt;
 CARBON      6.0      6.1105700000      3.9639000000     22.3866100000&lt;br /&gt;
 CARBON      6.0      4.1216300000      4.4424400000     21.1020100000&lt;br /&gt;
 HYDROGEN    1.0      7.8732900000      3.5217100000     21.2520500000&lt;br /&gt;
 CARBON      6.0      4.7606000000      4.2868500000     22.3363800000&lt;br /&gt;
 HYDROGEN    1.0      6.6064200000      3.8406000000     23.3428500000&lt;br /&gt;
 HYDROGEN    1.0      4.2065000000      4.4170700000     23.2667100000&lt;br /&gt;
 HYDROGEN    1.0      3.0674000000      4.6893500000     21.0889000000&lt;br /&gt;
 HYDROGEN    1.0      7.4249200000      7.7545300000     18.8583200000&lt;br /&gt;
 HYDROGEN    1.0      7.6651700000      8.9049700000     17.7652100000&lt;br /&gt;
 HYDROGEN    1.0      5.3324000000      8.6487800000     17.2222700000&lt;br /&gt;
 HYDROGEN    1.0      5.5015000000      7.1039000000     18.2759400000&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
=====Effective Core Potential Data=====&lt;br /&gt;
&lt;br /&gt;
The effective core potential (ECP) data is entered after the coordinates.  It starts with &amp;quot;$ECP&amp;quot;, which must be preceded by a space.  The atoms of the molecule are listed in the same order as in the coordinates section, with the ECP parameters listed after each atom.  For any atom that does NOT have an ECP, one must enter &amp;quot;ECP-NONE&amp;quot; or &amp;quot;NONE&amp;quot; after it.&lt;br /&gt;
&lt;br /&gt;
 $ECP&lt;br /&gt;
 MO-ECP GEN     28     3&lt;br /&gt;
  5      ----- f potential     -----&lt;br /&gt;
     -0.0469492        0    537.9667807        &lt;br /&gt;
    -20.2080084        1    147.8982938        &lt;br /&gt;
   -106.2116302        2     45.7358898        &lt;br /&gt;
    -41.8107368        2     13.2911467        &lt;br /&gt;
     -4.2054103        2      4.7059961        &lt;br /&gt;
  3      ----- s-f potential     -----&lt;br /&gt;
      2.8063717        0    110.2991760        &lt;br /&gt;
     44.5162012        1     23.2014645        &lt;br /&gt;
     82.7785227        2      5.3530131        &lt;br /&gt;
  4      ----- p-f potential     -----&lt;br /&gt;
      4.9420876        0     63.2901397        &lt;br /&gt;
     25.8604976        1     23.3315302        &lt;br /&gt;
    132.4708742        2     24.6759423        &lt;br /&gt;
     57.3149794        2      4.6493040        &lt;br /&gt;
  5      ----- d-f potential     -----&lt;br /&gt;
      3.0054591        0    104.4839977        &lt;br /&gt;
     26.3637851        1     66.2307245        &lt;br /&gt;
    183.3849199        2     39.1283176        &lt;br /&gt;
     98.4453068        2     13.1164437        &lt;br /&gt;
     22.4901377        2      3.6280263 &lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
  $END&lt;br /&gt;
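Typing the long run of NONE lines by hand is tedious and error-prone; they can be generated from the coordinate block with a short awk script (a sketch: the name-to-symbol table only covers the elements in this example, and the line for any atom that does carry an ECP, such as the molybdenum above, must then be replaced by hand):

```shell
# Emit one " X NONE" line per atom, in the same order as the coordinates.
# 'coords.txt' holds the coordinate lines copied from the $DATA group.
awk 'BEGIN { s["MOLYBDENUM"] = "MO"; s["SULFUR"] = "S";
             s["CARBON"] = "C";      s["HYDROGEN"] = "H" }
     $1 in s { print " " s[$1] " NONE" }' coords.txt
```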
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  16 November 2009&lt;br /&gt;
&lt;br /&gt;
====Using an External File to Define Basis Set in GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
Since GAMESS(US) has a limited number of built-in ECPs and basis sets, one may want to make GAMESS(US) read an external file that contains the basis set information and ECP data, using the &amp;quot;EXTFIL&amp;quot; keyword in the $BASIS group of the input file.  For many metal-containing compounds, it is very convenient and time-saving to use an effective core potential (ECP) for the core metal electrons, as they are usually not important to the reactivity of the complex or the geometry around the metal.  To make GAMESS(US) use this external file, one must copy the &amp;quot;rungms&amp;quot; file and modify it accordingly.  The following is a list of instructions with commands that will work from a terminal; one could also use WinSCP to do all of this with a GUI rather than a TUI.&lt;br /&gt;
&lt;br /&gt;
=====Modifying rungms to Use a Custom Basis Set File=====&lt;br /&gt;
1. Copy &amp;quot;rungms&amp;quot; from /scinet/gpc/Applications/gamess to one's own /scratch/$USER/ directory:&lt;br /&gt;
 cp /scinet/gpc/Applications/gamess/rungms /scratch/$USER/&lt;br /&gt;
&lt;br /&gt;
2. Change to the scratch directory and check to see if &amp;quot;rungms&amp;quot; has copied successfully.&lt;br /&gt;
 cd /scratch/$USER&lt;br /&gt;
 ls&lt;br /&gt;
&lt;br /&gt;
3. Edit line 147 of the script.  &lt;br /&gt;
 vi rungms&lt;br /&gt;
Move the cursor down to line 147 using the arrow keys.  It should say &amp;quot;setenv EXTBAS /dev/null&amp;quot;.  Using the arrow keys, move the cursor to the first &amp;quot;/&amp;quot; and then hit &amp;quot;i&amp;quot; to insert text.  Put the path to your external basis file here.  For example, /scratch/$USER/basisset.  Then hit &amp;quot;escape&amp;quot;.  To save the changes and exit vi, type &amp;quot;:&amp;quot; and you should see a colon appear at the bottom of the window.  Type &amp;quot;wq&amp;quot; (which should appear at the bottom of the window next to the colon) and then hit enter.  Now you are done with vi.&lt;br /&gt;
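If you prefer not to use vi, the same one-line change can be made non-interactively with sed (a sketch; the basis set path is an example and should be replaced with your own — '|' is used as the sed delimiter so the slashes in the paths need no escaping):

```shell
# Point EXTBAS at the external basis set file instead of /dev/null.
# The single quotes keep $USER literal, so the rungms script expands it at run time.
sed -i 's|setenv EXTBAS /dev/null|setenv EXTBAS /scratch/$USER/basisset|' rungms
```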
&lt;br /&gt;
=====Creating a Custom Basis Set File=====&lt;br /&gt;
1. To create a custom basis set file, you need to create a new text document.  Our group's common practice is to comment out the first line of this file with an exclamation mark (!), followed by a note of the specific basis sets and ECPs that are going to be used for each of the atoms.  Let us use the molecule Mo(CO)6, molybdenum hexacarbonyl, as an example.  Below is the first line of the external file, which we will call &amp;quot;CUSTOMMO&amp;quot; (NOTE: you can use any name for the external file that suits you, as long as it has no spaces and is 8 characters or fewer).&lt;br /&gt;
&lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
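The naming restriction can be checked quickly in the shell (a sketch; 'CUSTOMMO' is the example name from above):

```shell
# An external basis set file name must contain no spaces and be at most
# 8 characters long.
name="CUSTOMMO"
if [ ${#name} -le 8 ] && [ "${name%% *}" = "$name" ]; then
  echo "ok: $name"
else
  echo "bad name: $name"
fi
```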
&lt;br /&gt;
2. The next step is to visit the [https://bse.pnl.gov/bse/portal EMSL Basis Set Exchange] and select C and O from the periodic table.  Then, on the left of the page, select &amp;quot;6-31G&amp;quot; as the basis set.  Finally, make sure the output is in GAMESS(US) format using the drop-down menu and then click &amp;quot;get basis set&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[File:C_O_6_31G_basisset.JPG|centre]]&lt;br /&gt;
&lt;br /&gt;
3. A new window should appear with text in it.  For our example case, the text looks like this:&lt;br /&gt;
 &lt;br /&gt;
 !  6-31G  EMSL  Basis Set Exchange Library   10/13/09 11:12 AM&lt;br /&gt;
 ! Elements                             References&lt;br /&gt;
 ! --------                             ----------&lt;br /&gt;
 ! H - He: W.J. Hehre, R. Ditchfield and J.A. Pople, J. Chem. Phys. 56,&lt;br /&gt;
 ! Li - Ne: 2257 (1972).  Note: Li and B come from J.D. Dill and J.A.&lt;br /&gt;
 ! Pople, J. Chem. Phys. 62, 2921 (1975).&lt;br /&gt;
 ! Na - Ar: M.M. Francl, W.J. Petro, W.J. Hehre, J.S. Binkley, M.S. Gordon,&lt;br /&gt;
 ! D.J. DeFrees and J.A. Pople, J. Chem. Phys. 77, 3654 (1982)&lt;br /&gt;
 ! K  - Zn: V. Rassolov, J.A. Pople, M. Ratner and T.L. Windus, J. Chem. Phys.&lt;br /&gt;
 ! 109, 1223 (1998)&lt;br /&gt;
 ! Note: He and Ne are unpublished basis sets taken from the Gaussian&lt;br /&gt;
 ! program&lt;br /&gt;
 ! &lt;br /&gt;
 $DATA&lt;br /&gt;
 CARBON&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000&lt;br /&gt;
 OXYGEN&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000        &lt;br /&gt;
 $END&lt;br /&gt;
&lt;br /&gt;
4. Now, copy and paste the text between the $DATA and $END headings into our external text file, CUSTOMMO.  We also need to change the name of each element to the corresponding symbol in the periodic table.  Finally, we need to add the name of the external file next to the element symbol, separated by one space.  Note that there should be a blank line separating the basis set information and the first, commented-out line (the line starting with the '!').  CUSTOMMO should look like this:&lt;br /&gt;
 &lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
 C CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000&lt;br /&gt;
 O CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000 &lt;br /&gt;
&lt;br /&gt;
5. Repeat Steps 2 and 3 above, but choose Mo and select the LANL2DZ ECP instead.  A new window will pop up with the basis set information as well as the ECP data we need, since we specified the LANL2DZ '''ECP'''.  The ECP data is not inserted into the external file; rather, it is placed into the input file itself (more on this later).&lt;br /&gt;
&lt;br /&gt;
[[File:Mo_LANL2DZ_basisset.JPG|centre]]&lt;br /&gt;
&lt;br /&gt;
6. After copying the molybdenum basis set information, your finished external basis set file should look like this:&lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
 C CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000&lt;br /&gt;
 O CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000&lt;br /&gt;
 Mo CUSTOMMO&lt;br /&gt;
 S   3&lt;br /&gt;
   1      2.3610000             -0.9121760        &lt;br /&gt;
   2      1.3090000              1.1477453        &lt;br /&gt;
   3      0.4500000              0.6097109        &lt;br /&gt;
 S   4&lt;br /&gt;
   1      2.3610000              0.8139259        &lt;br /&gt;
   2      1.3090000             -1.1360084        &lt;br /&gt;
   3      0.4500000             -1.1611592        &lt;br /&gt;
   4      0.1681000              1.0064786        &lt;br /&gt;
 S   1&lt;br /&gt;
   1      0.0423000              1.0000000        &lt;br /&gt;
 P   3&lt;br /&gt;
   1      4.8950000             -0.0908258        &lt;br /&gt;
   2      1.0440000              0.7042899        &lt;br /&gt;
   3      0.3877000              0.3973179        &lt;br /&gt;
 P   2&lt;br /&gt;
   1      0.4995000             -0.1081945        &lt;br /&gt;
   2      0.0780000              1.0368093        &lt;br /&gt;
 P   1&lt;br /&gt;
   1      0.0247000              1.0000000        &lt;br /&gt;
 D   3&lt;br /&gt;
   1      2.9930000              0.0527063        &lt;br /&gt;
   2      1.0630000              0.5003907        &lt;br /&gt;
   3      0.3721000              0.5794024        &lt;br /&gt;
 D   1&lt;br /&gt;
   1      0.1178000              1.0000000&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 September 2010&lt;br /&gt;
&lt;br /&gt;
====A Modified BASH Script for Running GAMESS(US)====&lt;br /&gt;
Below please find the bash script that we use to run GAMESS(US) on a single node with 8 processors.  &lt;br /&gt;
&lt;br /&gt;
One quirk of GAMESS(US) is that it will NOT write over files left behind by old or failed jobs that share the name of the input file you are submitting.  For example: I submit my input file &amp;quot;mo_opt.inp&amp;quot; to the queue, but it comes back seconds later with an error.  The log file says that I have typed an incorrect keyword, and lo and behold, there is a comma where it shouldn't be.  Such typos are common.  If you simply try to re-submit, GAMESS(US) will fail again, because the failed run has already written a .log file and some other files to the /scratch/user/gamess-scratch/ directory.  These files must all be deleted before you re-submit your fixed input file.&lt;br /&gt;
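&lt;br /&gt;
The manual cleanup described above can be sketched as follows; this is a toy version with a hypothetical job name &amp;quot;mo_opt&amp;quot; and a temporary directory standing in for /scratch/user/gamess-scratch, so it is safe to run anywhere:&lt;br /&gt;

```shell
# Toy sketch of the pre-resubmission cleanup; "mo_opt" is a hypothetical job
# name and the temporary directory stands in for /scratch/user/gamess-scratch.
SCRATCH=$(mktemp -d)
touch "$SCRATCH/mo_opt.log" "$SCRATCH/mo_opt.F05" "$SCRATCH/other_job.log"
rm -f "$SCRATCH"/mo_opt*        # delete everything the failed run left behind
ls "$SCRATCH"                   # only other_job.log remains
```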
&lt;br /&gt;
This script takes care of this annoying problem by deleting failed jobs with the same file name for you.&lt;br /&gt;
&lt;br /&gt;
Here it is:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #PBS -l nodes=1:ppn=8,walltime=48:00:00,os=centos53computeA&lt;br /&gt;
 &lt;br /&gt;
 ## To submit type: qsub x.sh&lt;br /&gt;
 &lt;br /&gt;
 # If not an interactive job (i.e. -I), then cd into the directory where&lt;br /&gt;
 # I typed qsub.&lt;br /&gt;
 if [ &amp;quot;$PBS_ENVIRONMENT&amp;quot; != &amp;quot;PBS_INTERACTIVE&amp;quot; ]; then&lt;br /&gt;
   if [ -n &amp;quot;$PBS_O_WORKDIR&amp;quot; ]; then&lt;br /&gt;
     cd $PBS_O_WORKDIR&lt;br /&gt;
   fi&lt;br /&gt;
 fi&lt;br /&gt;
 &lt;br /&gt;
 # the input file is typically named something like &amp;quot;gamesjob.inp&amp;quot;&lt;br /&gt;
 # so the script will be run like &amp;quot;$SCINET_RUNGMS gamessjob 00 8 8&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 find /scratch/user/gamess-scratch -type f -name ${NAME:-safety_net}\* -exec /bin/rm {} \;&lt;br /&gt;
 &lt;br /&gt;
 # load the gamess module if not in .bashrc already&lt;br /&gt;
 # actually, it MUST be in .bashrc&lt;br /&gt;
 # module load gamess&lt;br /&gt;
 &lt;br /&gt;
 # run the program&lt;br /&gt;
 &lt;br /&gt;
 /scratch/user/rungms $NAME 00 8 8 &amp;gt;&amp;amp; $NAME.log&lt;br /&gt;
&lt;br /&gt;
====A Script to Add the $VIB Group for Hessian Restarts in GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
Sometimes, an optimization + vibrational analysis, or just a plain vibrational analysis, must be restarted.  This can be because the two-day time limit has been exceeded or perhaps there was an error during the calculation.  In any case, when this happens, the job must be restarted.  In GAMESS(US), you can restart a vibrational analysis from a previous one, and it will utilize the frequencies that were already computed in the failed run.&lt;br /&gt;
&lt;br /&gt;
For example, if one submits the input file &amp;quot;job_name.inp&amp;quot; and it fails before it has finished, then one must utilize the file &amp;quot;job_name.rst&amp;quot;, which contains data that is required to restart the calculation.  This file is located in the /scratch/user/gamess-scratch directory.  Data from the &amp;quot;job_name.rst&amp;quot; file must be appended at the end of the new input file (after the coordinates and the ECP section, if present) to restart the calculation; let us call the new file &amp;quot;job_name_restart.inp&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
A shortened version of the &amp;quot;job_name.rst&amp;quot; file looks like this:&lt;br /&gt;
&lt;br /&gt;
  ENERGY/GRADIENT/DIPOLE RESTART DATA FOR RUNTYP=HESSIAN&lt;br /&gt;
  job_name                           &lt;br /&gt;
  $VIB   &lt;br /&gt;
         IVIB=   0 IATOM=   0 ICOORD=   0 E=    -3717.1435124522&lt;br /&gt;
 -5.165258381E-04 1.584665821E-02-1.206270555E-02-2.241461728E-03 3.176050715E-03&lt;br /&gt;
 -5.706738823E-04 2.502034151E-03 5.130112290E-04-2.716945939E-03 1.357008279E-03&lt;br /&gt;
 -1.059915305E-03 1.693526456E-03-2.957638907E-04-5.994938737E-04 9.684054361E-04&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The text eventually ends with one blank line. The $VIB heading and all of the text after $VIB must be appended to the end of the file &amp;quot;job_name_restart.inp&amp;quot;, and then &amp;quot; $END&amp;quot; must be inserted at the very end of the file.&lt;br /&gt;
&lt;br /&gt;
One could cut and paste this in a text editor, but we have written a small script that will do it automatically.  We call it &amp;quot;vib.sh&amp;quot;, but you can call it whatever you like.  Here it is:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # script to add vibrational data for a hessian restart&lt;br /&gt;
 &lt;br /&gt;
 awk '/\$VIB/{p=1}p;END{print &amp;quot; $END&amp;quot;}' /scratch/user/gamess-scratch/$NAME1.rst &amp;gt;&amp;gt; $NAME2.inp&lt;br /&gt;
&lt;br /&gt;
To use it, simply copy it into a new text file with the extension &amp;quot;.sh&amp;quot; and make it executable.  Also, you will need to edit the location of the &amp;quot;/scratch/user/gamess-scratch/&amp;quot; directory to match your user name.  The two variables in the script, NAME1 and NAME2, represent the name of your &amp;quot;.rst&amp;quot; file and your new &amp;quot;.inp&amp;quot; file, respectively.  In the example above, NAME1=job_name (that is, the same name as the .rst file that contains the $VIB data and that was created in the /gamess-scratch/ directory) and NAME2=job_name_restart (that is, the name of the new input file that you have prepared and want to copy the $VIB data into).&lt;br /&gt;
&lt;br /&gt;
To run it on a GPC node without submitting it to the job queue, type:&lt;br /&gt;
&lt;br /&gt;
 NAME1=job_name NAME2=job_name_restart ./vib.sh&lt;br /&gt;
&lt;br /&gt;
To run it in the queue, type:&lt;br /&gt;
&lt;br /&gt;
 qsub vib.sh -v NAME1=job_name,NAME2=job_name_restart &lt;br /&gt;
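&lt;br /&gt;
To see what the awk one-liner in vib.sh actually does, here is a toy demonstration using fabricated stand-ins for the .rst and .inp files (temporary files only, so the real job files are untouched):&lt;br /&gt;

```shell
# Fabricated stand-ins for job_name.rst and job_name_restart.inp.
RST=$(mktemp)
INP=$(mktemp)
printf 'ENERGY/GRADIENT RESTART DATA\n $VIB\n IVIB=   0 IATOM=   0\n' > "$RST"
printf ' $CONTRL RUNTYP=HESSIAN $END\n' > "$INP"
# Same one-liner as in vib.sh: copy everything from the $VIB line onward,
# then append a closing " $END".
awk '/\$VIB/{p=1}p;END{print " $END"}' "$RST" >> "$INP"
cat "$INP"   # the deck now ends with the $VIB block followed by " $END"
```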
&lt;br /&gt;
-special thanks to Ramses for help with this&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  30 September 2010&lt;br /&gt;
&lt;br /&gt;
====Most Commonly Used Headers in The Fekl Lab====&lt;br /&gt;
&lt;br /&gt;
After about a year of using GAMESS(US), we have found that we most often run optimizations, frequency analyses, transition state searches and IRC calculations using DFT methods.  Here are the input decks that we have found work well for inorganic and organometallic compounds.&lt;br /&gt;
&lt;br /&gt;
=====Optimization Plus Frequency (for a neutral, singlet)=====&lt;br /&gt;
 &lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2800 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. DAMP=.T. $END&lt;br /&gt;
 $STATPT OPTTOL=0.00001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
 $FORCE TEMP=298.15 PURIFY=.t. PROJCT=.t. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====Frequency Only (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=HESSIAN DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2800 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. DAMP=.T. $END&lt;br /&gt;
 $FORCE METHOD=SEMINUM VIBANL=.TRUE. PROJCT=.T. PURIFY=.T. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====Transition State Search (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=SADPOINT DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2850 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. $END&lt;br /&gt;
 $STATPT STSTEP=0.05 OPTTOL=0.00001 NSTEP=500 HESS=CALC HSSEND=.t. &lt;br /&gt;
  STPT=.FALSE. $END&lt;br /&gt;
 $FORCE METHOD=SEMINUM VIBANL=.TRUE. PURIFY=.T. PROJCT=.T. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====IRC (Intrinsic Reaction Coordinate) Calculation (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=IRC DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $IRC OPTTOL=0.00001 STRIDE=0.05 NPOINT=5000 SADDLE=.TRUE. FORWRD=.F.&lt;br /&gt;
 $END&lt;br /&gt;
 $SYSTEM TIMLIM=2850 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
 $FORCE TEMP=298.15 PURIFY=.t. PROJCT=.t. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 September 2010&lt;br /&gt;
&lt;br /&gt;
====How to Run an IRC Calculation Using GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
An IRC or Intrinsic Reaction Coordinate calculation follows the imaginary mode of the vibrational analysis of a transition state calculation.  In GAMESS(US), you can choose to follow the forward (towards the products) or backward (towards the reactants) direction.  As shown above in the IRC header that we use, the direction of the IRC calculation is controlled by the &amp;quot;FORWRD&amp;quot; keyword.  Using &amp;quot;FORWRD=.T.&amp;quot; means that the IRC follows the forward direction, while using &amp;quot;FORWRD=.F.&amp;quot; means that it follows the backward direction.&lt;br /&gt;
&lt;br /&gt;
To perform an IRC calculation, you must first perform a vibrational analysis of your molecule and check to ensure there is only one negative frequency.  If that is the case, then the vibrational analysis completed successfully and there will be a file with the extension &amp;quot;.dat&amp;quot;, let us call it &amp;quot;job_name.dat&amp;quot;, in the &amp;quot;/users/$USER/gamess-scratch/&amp;quot; directory (where $USER is your user name).  This file contains data that is required for the IRC input file.&lt;br /&gt;
&lt;br /&gt;
To prepare your IRC input file, prepare an input file using the coordinates of the optimized structure of the transition state.  These can come from ChemCraft or Avogadro or MacMolPlt - whatever you prefer to use.  Then copy and paste the IRC header above, or use your own parameters.  Call the file whatever you want, as long as it has an &amp;quot;.inp&amp;quot; extension; let us call it &amp;quot;irc_job.inp&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;STRIDE&amp;quot; value, for example, determines the &amp;quot;size&amp;quot; of the steps between each point on the IRC graph.  If you increase the stride, say from 0.05 to 0.1, the steps between points become larger and you will approach the minimum faster (this gives you fewer data points should you choose to plot the IRC data).  Decreasing the stride, say from 0.05 to 0.01, makes the steps between points smaller, and you may not reach the minimum of the reaction coordinate in the allotted time period.&lt;br /&gt;
&lt;br /&gt;
You should now have an input file with an IRC header, the coordinates of the transition state and basis set and ECP information called &amp;quot;irc_job.inp&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Now you need the &amp;quot;job_name.dat&amp;quot; file in the &amp;quot;/users/$USER/gamess-scratch/&amp;quot; directory.  This file contains a number of blocks of data, each sandwiched between a line that contains only &amp;quot; $HESS&amp;quot; and a line that contains only &amp;quot; $END&amp;quot;.  What you need is the LAST of these blocks, and it has to be copied and pasted directly below the last entry of your input file.&lt;br /&gt;
&lt;br /&gt;
This can be difficult and time-consuming, as the .dat files can be very large (sometimes over 150 MB) and cumbersome to navigate.  However, we have written a script, similar to the vib.sh script, that can help you out with this.  Basically, this script does all the copying and pasting for you.&lt;br /&gt;
&lt;br /&gt;
Here it is:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # script to add hessian data for an IRC calculation&lt;br /&gt;
 &lt;br /&gt;
 awk '/\$HESS/{arr=&amp;quot;&amp;quot;;f=1} f {arr=(arr)?arr ORS $0:$0} /\$END/{f=0} END {print arr}' /scratch/$USER/gamess-scratch/$DAT.dat &amp;gt;&amp;gt; $IN.inp&lt;br /&gt;
&lt;br /&gt;
To use it, simply copy it into a new text file with the name &amp;quot;irc.sh&amp;quot; and make it executable. Also, you will need to edit the &amp;quot;/scratch/$USER/gamess-scratch/&amp;quot; directory to match your user name. The two variables in the script, DAT and IN, represent the name of your &amp;quot;.dat&amp;quot; file and your new &amp;quot;.inp&amp;quot; file, respectively. In the example above, DAT=job_name (that is, the same name as the .dat file that contains the $HESS data and that was created in the /gamess-scratch/ directory) and IN=irc_job (that is, the name of the new input file that you have prepared and want to copy the $HESS data into). &lt;br /&gt;
&lt;br /&gt;
To run it on a GPC node without submitting it to the job queue, type:&lt;br /&gt;
&lt;br /&gt;
 DAT=job_name IN=irc_job ./irc.sh&lt;br /&gt;
&lt;br /&gt;
To run it in the queue, type:&lt;br /&gt;
&lt;br /&gt;
 qsub irc.sh -v DAT=job_name,IN=irc_job &lt;br /&gt;
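&lt;br /&gt;
To see what the awk one-liner in irc.sh actually does, here is a toy demonstration on a fabricated two-block .dat file; only the LAST $HESS...$END block survives, which is exactly what the restart needs:&lt;br /&gt;

```shell
# Fabricated .dat file with two $HESS blocks; real .dat files are untouched.
DAT=$(mktemp)
OUT=$(mktemp)
printf ' $HESS\n old run data\n $END\n $HESS\n final run data\n $END\n' > "$DAT"
# Same one-liner as in irc.sh: each new $HESS line resets the buffer, so only
# the last block is printed at the end.
awk '/\$HESS/{arr="";f=1} f {arr=(arr)?arr ORS $0:$0} /\$END/{f=0} END {print arr}' "$DAT" > "$OUT"
cat "$OUT"   # prints the final $HESS...$END block only
```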
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 October 2010&lt;br /&gt;
&lt;br /&gt;
===Vienna Ab-initio Simulation Package (VASP)===&lt;br /&gt;
Please refer to the VASP page.&lt;br /&gt;
&lt;br /&gt;
User supplied content below.&lt;br /&gt;
&lt;br /&gt;
====Tips from the Polanyi Lab====&lt;br /&gt;
Using VASP on SciNet&lt;br /&gt;
&lt;br /&gt;
Logon using SSH&lt;br /&gt;
login.scinet.utoronto.ca&lt;br /&gt;
&lt;br /&gt;
then ssh to the TCS cluster&lt;br /&gt;
ssh tcs01&lt;br /&gt;
&lt;br /&gt;
change directory to &lt;br /&gt;
cd /scratch/imcnab/test/Si111 - or whatever other directory is convenient.&lt;br /&gt;
&lt;br /&gt;
VASP is contained in the directory imcnab/bin&lt;br /&gt;
&lt;br /&gt;
To submit a job, first edit (at least) the POSCAR file and other VASP&lt;br /&gt;
input files as necessary.&lt;br /&gt;
&lt;br /&gt;
=====Input Files=====&lt;br /&gt;
The minimum set of input files is:&lt;br /&gt;
&lt;br /&gt;
'''vasp.script''' - script file telling TCS to run a VASP job - must be edited to run in current working directory.&lt;br /&gt;
&lt;br /&gt;
'''POSCAR''' - specifies the supercell geometry and &amp;quot;ionic&amp;quot; positions (i.e. atomic centres), and whether relaxation is allowed. Ionic positions may be given in cartesian coordinates (x,y,z in A) or &amp;quot;absolute&amp;quot; coordinates, which are fractions of the unit cell vectors. CONTCAR is always in absolute coords, so after the first run of any job you'll find yourself running in absolute coords. VMD can be used to change these back to cartesian coordinates.&lt;br /&gt;
&lt;br /&gt;
'''INCAR''' - specifies parameters to run the job. INCAR is free format - can put input commands in ANY order.&lt;br /&gt;
&lt;br /&gt;
'''POTCAR''' - specifies the potentials to use for each atomic type. Must be in the same order as the atoms are first met in POSCAR&lt;br /&gt;
&lt;br /&gt;
'''KPOINTS''' - specifies the number and position of K-points to use in the calculation.&lt;br /&gt;
&lt;br /&gt;
Any change of name or directory needs to be edited into the job script. The job script name is &amp;quot;vasp.script&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
VASP attempts to read initial wavefunctions from WAVECAR, so if a job is run in steps, leaving the WAVECAR file in the working directory is an efficient way to start the next stage of the calculation.&lt;br /&gt;
&lt;br /&gt;
VASP also writes CONTCAR which is of the same format as POSCAR, and can simply be renamed if it is to be used as the starting point for a new job.&lt;br /&gt;
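&lt;br /&gt;
A minimal sketch of that restart step (demonstrated in a temporary stand-in for a VASP working directory; the file contents here are placeholders):&lt;br /&gt;

```shell
RUN=$(mktemp -d)                         # stands in for the VASP working directory
printf 'relaxed supercell geometry\n' > "$RUN/CONTCAR"   # written by the previous run
cp "$RUN/CONTCAR" "$RUN/POSCAR"          # CONTCAR and POSCAR share the same format
# Leaving WAVECAR in the same directory lets the next run start from the
# previous wavefunctions instead of regenerating them.
```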
&lt;br /&gt;
&lt;br /&gt;
Submit the job to LoadLeveler with the command llsubmit ./vasp.script from the correct working directory.&lt;br /&gt;
&lt;br /&gt;
You can check the status of a job with llq.&lt;br /&gt;
&lt;br /&gt;
You can cancel a job using llcancel tcs-fXXnYY.$PID, where the tcs host name is shown by llq.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== GENERAL NOTES =====&lt;br /&gt;
&lt;br /&gt;
It is MUCH faster to use ISPIN=1, no-spin (corresponds to RHF), rather than ISPIN=2 (which corresponds to UHF). So far, I've not found a system where the atom positions differ, or where the calculated electronic energy differs by more than 1E-4, which is the convergence criterion set.&lt;br /&gt;
&lt;br /&gt;
It is MUCH faster to use real space: LREAL = A, NSIM=4. &lt;br /&gt;
&lt;br /&gt;
So, ''always'' optimize in real space first, then re-optimize in reciprocal space. This does NOT guarantee a one-step optimization in reciprocal space; you may still need to progressively relax a large system.&lt;br /&gt;
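&lt;br /&gt;
Collected as an INCAR fragment (a sketch only: the ISPIN, LREAL and NSIM values are the ones quoted above, the comments are our gloss, and the fragment is written to a temporary directory here rather than a real job directory):&lt;br /&gt;

```shell
DIR=$(mktemp -d)                 # stands in for the VASP working directory
printf '%s\n' \
  'ISPIN = 1    ! no-spin (RHF-like); ISPIN = 2 is the spin-polarised (UHF-like) option' \
  'LREAL = A    ! real-space projection, much faster for large cells' \
  'NSIM  = 4    ! bands treated simultaneously' > "$DIR/INCAR"
cat "$DIR/INCAR"
```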
&lt;br /&gt;
'''Relaxing a large system.'''&lt;br /&gt;
If you attempt to relax a large system in one step, it will usually fail.&lt;br /&gt;
&lt;br /&gt;
The starting geometry is usually an unrelaxed molecule above an unrelaxed surface.&lt;br /&gt;
The bottom plane of the surface will NEVER be relaxed, because this corresponds to the fixed boundary condition of REALITY. &lt;br /&gt;
&lt;br /&gt;
First, relax the molecule alone (assuming you have already found a good starting position from single point calculations). Place the molecule closer to the surface than you think it should be (say 0.9 VdW radii away).&lt;br /&gt;
&lt;br /&gt;
Then ALSO allow the top layer of the surface to relax.&lt;br /&gt;
Then ALSO allow the second top layer of the surface to relax... etc... etc.&lt;br /&gt;
&lt;br /&gt;
If this DOESN'T WORK: Then relax X,Y and Z separately in iterations.&lt;br /&gt;
Example. For the following problem, representing layers of the crystal going DOWN from the top (Z pointing to the top of the screen)&lt;br /&gt;
&lt;br /&gt;
Molecule&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we can try the following relaxation schemes:&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Successive relaxation, Layer by Layer:&amp;lt;br /&amp;gt;&lt;br /&gt;
(1) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(2)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(3)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
etc. etc... if this works then you're fine. However, it can happen that even by Layer 2 you're running into real problems and the ionic relaxation NEVER converges, in which case I have found the following scheme (and variations thereof) useful:&lt;br /&gt;
&lt;br /&gt;
(1)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(2) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(3) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
IF (3) DOESN'T converge THEN TRY&lt;br /&gt;
&lt;br /&gt;
(2')&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule Z   Relax, XY FIXED&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  Z   Relax, XY FIXED&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- you are allowing the top layers to move only UP or DOWN, while allowing the intermediate layer 2 to fully relax (actually, there is no way of telling VASP to move ALL atoms by the SAME deltaZ, but that appears to be the effect).&lt;br /&gt;
Followed by&lt;br /&gt;
&lt;br /&gt;
(2&amp;quot;)&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If (2&amp;quot;) doesn't work, you need to go back to the output of (2') and vary the cycle - perhaps something like:&lt;br /&gt;
(2&amp;quot;')&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then try (2&amp;quot;) again.&lt;br /&gt;
&lt;br /&gt;
Repeat as necessary. This scheme does appear to work quite well for big unit cells. It can be very difficult to relax as many layers as necessary in a big unit cell.&lt;br /&gt;
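&lt;br /&gt;
In POSCAR terms, the per-coordinate freezing used in scheme (2') is expressed with selective dynamics flags (T = free to move, F = fixed), one flag per coordinate. A hypothetical two-atom sketch, written to a temporary file:&lt;br /&gt;

```shell
POS=$(mktemp)    # stands in for a POSCAR file; all numbers are purely illustrative
printf '%s\n' \
  'Si slab (illustrative only)' \
  '1.0' \
  '  5.43 0.00 0.00' \
  '  0.00 5.43 0.00' \
  '  0.00 0.00 20.0' \
  ' Si' \
  '  2' \
  'Selective dynamics' \
  'Direct' \
  ' 0.00 0.00 0.50  F F T   top atom: Z relax, XY fixed' \
  ' 0.00 0.00 0.00  F F F   bottom atom: fully fixed' > "$POS"
cat "$POS"
```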
&lt;br /&gt;
Experience on the One Per Corner Hole problem shows that it may be necessary to have a large number of UNRELAXED (i.e. BULK silicon) layers underneath the relaxed layers in order to get physically meaningful answers. This is because silicon is so elastic.&lt;br /&gt;
&lt;br /&gt;
===== Problems and solutions: =====&lt;br /&gt;
&lt;br /&gt;
If you are getting ZBRENT errors, try changing ALGO. I usually use ALGO = Fast; change to ALGO = Normal. With ALGO = Normal, NFREE now DOES correspond to degrees of freedom (maximum suggested setting is 20). I haven't found this terribly helpful.&lt;br /&gt;
&lt;br /&gt;
Many calculations seem to fail after 20 or 30 ionic steps. I suspect a memory leak.&lt;br /&gt;
&lt;br /&gt;
Sometimes the calculation appears to lose WAVECAR... this is not a disaster; it just means a slight increase in start-up time as the first wavefunction is calculated.&lt;br /&gt;
&lt;br /&gt;
If a calculation does not finish nicely, you can force WAVECAR generation by doing a purely electronic calculation (these are pretty fast).&lt;br /&gt;
&lt;br /&gt;
VASP is VERY slow at relaxing molecules at surfaces. This is because it doesn't know a molecule is a connected entity. It treats every atom independently. &lt;br /&gt;
&lt;br /&gt;
THEREFORE, MUCH MUCH faster to try molecular positions by hand first. &lt;br /&gt;
Do some sample calculations at a few geometries to find a good starting point.&lt;br /&gt;
&lt;br /&gt;
ALSO, once you think you know where the molecule is to be placed, put it too close to the surface, and let it relax outwards... the forces close to the surface are repulsive, and much steeper, so relaxation is FASTER in this direction.&lt;br /&gt;
&lt;br /&gt;
=='''Climate Modelling'''==&lt;br /&gt;
&lt;br /&gt;
The Community Earth System Model (CESM) is a fully-coupled, global climate model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate states.&lt;br /&gt;
&lt;br /&gt;
Development of a comprehensive CESM that accurately represents the principal components of the climate system and their couplings requires both wide intellectual participation and computing capabilities beyond those available to most U.S. institutions. The CESM, therefore, must include an improved framework for coupling existing and future component models developed at multiple institutions, to permit rapid exploration of alternate formulations. This framework must be amenable to components of varying complexity and at varying resolutions, in accordance with a balance of scientific needs and resource demands. In particular, the CESM must accommodate an active program of simulations and evaluations, using an evolving model to address scientific issues and problems of national and international policy interest.&lt;br /&gt;
&lt;br /&gt;
User guides and information on each version of the model can be found at the following links:&lt;br /&gt;
&lt;br /&gt;
CCSM3: http://www.cesm.ucar.edu/models/ccsm3.0/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/&lt;br /&gt;
&lt;br /&gt;
Please see:&lt;br /&gt;
&lt;br /&gt;
===[[Installing CCSM3]]===&lt;br /&gt;
&lt;br /&gt;
===[[Running CCSM3]]===&lt;br /&gt;
&lt;br /&gt;
===[[Installing CCSM4]]===&lt;br /&gt;
&lt;br /&gt;
===[[Running CCSM4]]===&lt;br /&gt;
&lt;br /&gt;
===[[Post Processing CCSM Output]]===&lt;br /&gt;
&lt;br /&gt;
===[[CCSM4/CESM1 TCS Simulation List]]===&lt;br /&gt;
&lt;br /&gt;
==Medicine/Bio==&lt;br /&gt;
&lt;br /&gt;
==High Energy Physics==&lt;br /&gt;
&lt;br /&gt;
==Structural Biology==&lt;br /&gt;
Molecular simulation of proteins, lipids, carbohydrates, and other biologically relevant molecules.&lt;br /&gt;
===Molecular Dynamics (MD) simulation===&lt;br /&gt;
====GROMACS====&lt;br /&gt;
Please refer to the [[gromacs|GROMACS]] page&lt;br /&gt;
====AMBER====&lt;br /&gt;
Please refer to the [[amber|AMBER]] page&lt;br /&gt;
====NAMD====&lt;br /&gt;
NAMD is one of the better-scaling MD packages out there. With sufficiently large systems, it is able to scale to hundreds or thousands of cores on SciNet. Below are details for compiling and running NAMD on SciNet.&lt;br /&gt;
&lt;br /&gt;
More information regarding performance and different compile options coming soon...&lt;br /&gt;
&lt;br /&gt;
=====Compiling NAMD for GPC=====&lt;br /&gt;
Ensure the proper compiler/mpi modules are loaded.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
module load intel&lt;br /&gt;
module load openmpi/1.3.3-intel-v11.0-ofed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Compile Charm++ and NAMD'''&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
#Unpack source files and get required support libraries&lt;br /&gt;
tar -xzf NAMD_2.7b1_Source.tar.gz&lt;br /&gt;
cd NAMD_2.7b1_Source&lt;br /&gt;
tar -xf charm-6.1.tar&lt;br /&gt;
wget http://www.ks.uiuc.edu/Research/namd/libraries/fftw-linux-x86_64.tar.gz&lt;br /&gt;
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl-linux-x86_64.tar.gz&lt;br /&gt;
tar -xzf fftw-linux-x86_64.tar.gz; mv linux-x86_64 fftw&lt;br /&gt;
tar -xzf tcl-linux-x86_64.tar.gz; mv linux-x86_64 tcl&lt;br /&gt;
#Compile Charm++&lt;br /&gt;
cd charm-6.1&lt;br /&gt;
./build charm++ mpi-linux-x86_64 icc --basedir /scinet/gpc/mpi/openmpi/1.3.3-intel-v11.0-ofed/ --no-shared -O -DCMK_OPTIMIZE=1&lt;br /&gt;
cd ..&lt;br /&gt;
#Compile NAMD. &lt;br /&gt;
#Edit arch/Linux-x86_64-icc.arch and add &amp;quot;-lmpi&amp;quot; to the end of the CXXOPTS and COPTS line.&lt;br /&gt;
#Make a builds directory if you want different versions of NAMD compiled at the same time.&lt;br /&gt;
mkdir builds&lt;br /&gt;
./config builds/Linux-x86_64-icc --charm-arch mpi-linux-x86_64-icc&lt;br /&gt;
cd builds/Linux-x86_64-icc/&lt;br /&gt;
make -j4 namd2 # Adjust value of j as desired to specify number of simultaneous make targets. &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
--[[User:Cmadill|Cmadill]] 16:18, 27 August 2009 (UTC)&lt;br /&gt;
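The arch-file edit described in the comments above (appending &amp;quot;-lmpi&amp;quot; to the CXXOPTS and COPTS lines) can also be scripted rather than done by hand. A minimal sketch, run here on a two-line stand-in file rather than the real arch file:&lt;br /&gt;

```shell
# Stand-in for arch/Linux-x86_64-icc.arch (illustration only):
printf 'CXXOPTS = -O2\nCOPTS = -O2\n' > icc.arch.copy
# Append -lmpi to the CXXOPTS and COPTS lines:
sed -i -e '/^CXXOPTS/ s|$| -lmpi|' -e '/^COPTS/ s|$| -lmpi|' icc.arch.copy
cat icc.arch.copy
```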
&lt;br /&gt;
=====Running Fortran=====&lt;br /&gt;
The development nodes carry an old gcc whose runtime libraries are not installed on the compute nodes. Ensure the line:&lt;br /&gt;
&lt;br /&gt;
module load gcc&lt;br /&gt;
&lt;br /&gt;
is in your .bashrc file.&lt;br /&gt;
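A convenience snippet to add that line idempotently (this assumes a standard ~/.bashrc):&lt;br /&gt;

```shell
# Append 'module load gcc' to ~/.bashrc only if it is not already there:
grep -qx 'module load gcc' "$HOME/.bashrc" 2>/dev/null || echo 'module load gcc' >> "$HOME/.bashrc"
```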
&lt;br /&gt;
====LAMMPS====&lt;br /&gt;
[[Image:StrongScalingLAMMPS.png|thumb|320px|right|Strong scaling test on GPC with OpenMPI and IntelMPI on Ethernet and InfiniBand]]&lt;br /&gt;
[[Image:WeakScalingLAMMPS.png|thumb|320px|right|Weak scaling test on GPC with OpenMPI and IntelMPI on Ethernet and InfiniBand]]&lt;br /&gt;
LAMMPS is a parallel MD code that can be found [http://lammps.sandia.gov/ here].&lt;br /&gt;
&lt;br /&gt;
'''Scaling Tests on GPC'''&lt;br /&gt;
&lt;br /&gt;
Results from strong scaling tests for LAMMPS using EAM potentials on GPC are shown in the graph on the right.  The test simulation ran 500 timesteps with 4,000,000 atoms.&lt;br /&gt;
&lt;br /&gt;
Results from weak scaling tests for LAMMPS using EAM potentials on GPC are shown in the graph on the right.  The test simulation ran 500 timesteps with 32,000 atoms per processor.&lt;br /&gt;
&lt;br /&gt;
OpenMPI version used: openmpi/1.4.1-intel-v11.0-ofed&lt;br /&gt;
&lt;br /&gt;
IntelMPI version used: intelmpi/impi-4.0.0.013&lt;br /&gt;
&lt;br /&gt;
LAMMPS version used: 15 Jan 2010&lt;br /&gt;
&lt;br /&gt;
'''Summary of Scaling Tests'''&lt;br /&gt;
&lt;br /&gt;
Results show good scaling for both OpenMPI and IntelMPI on Ethernet up to 16 processors, after which performance begins to suffer.  On InfiniBand, excellent scaling is maintained out to 512 processors.&lt;br /&gt;
&lt;br /&gt;
IntelMPI shows slightly better performance than OpenMPI when running over InfiniBand.&lt;br /&gt;
&lt;br /&gt;
--[[User:jchu|jchu]] 14:08 Feb 2, 2010&lt;br /&gt;
&lt;br /&gt;
===Monte Carlo (MC) simulation===&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=User_Codes&amp;diff=2564</id>
		<title>User Codes</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=User_Codes&amp;diff=2564"/>
		<updated>2011-01-31T17:06:26Z</updated>

		<summary type="html">&lt;p&gt;Guido: /* Climate Modelling */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
==Astrophysics==&lt;br /&gt;
&lt;br /&gt;
===Athena (explicit, uniform grid MHD code)===&lt;br /&gt;
&lt;br /&gt;
[[Image:StrongScalingAthenaGPC.png|thumb|right|320px|Athena scaling on GPC with OpenMPI and MVAPICH2 on GigE, and OpenMPI on InfiniBand]]&lt;br /&gt;
&lt;br /&gt;
[http://www.astro.princeton.edu/~jstone/athena.html Athena] is a self-contained C code that doesn't use a lot of libraries, so it is pretty straightforward to build and compile on new machines.   &lt;br /&gt;
&lt;br /&gt;
It encapsulates its compiler flags, etc., in a &amp;lt;tt&amp;gt;Makeoptions.in&amp;lt;/tt&amp;gt; file which is then processed by &amp;lt;tt&amp;gt;configure&amp;lt;/tt&amp;gt;.   I've used the following additions to &amp;lt;tt&amp;gt;Makeoptions.in&amp;lt;/tt&amp;gt; on TCS and GPC:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
ifeq ($(MACHINE),scinettcs)&lt;br /&gt;
  CC = mpcc_r&lt;br /&gt;
  LDR = mpcc_r&lt;br /&gt;
  OPT = -O5 -q64 -qarch=pwr6 -qtune=pwr6 -qcache=auto -qlargepage -qstrict&lt;br /&gt;
  MPIINC =&lt;br /&gt;
  MPILIB =&lt;br /&gt;
  CFLAGS = $(OPT)&lt;br /&gt;
  LIB = -ldl -lm&lt;br /&gt;
else&lt;br /&gt;
ifeq ($(MACHINE),scinetgpc)&lt;br /&gt;
  CC = mpicc&lt;br /&gt;
  LDR = mpicc&lt;br /&gt;
  OPT = -O3&lt;br /&gt;
  MPIINC =&lt;br /&gt;
  MPILIB =&lt;br /&gt;
  CFLAGS = $(OPT)&lt;br /&gt;
  LIB = -lm&lt;br /&gt;
else&lt;br /&gt;
...&lt;br /&gt;
endif&lt;br /&gt;
endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
It performs quite well on the GPC, scaling extremely well even in a strong scaling test out to about 256 cores (32 nodes) on Gigabit Ethernet, and performing beautifully on InfiniBand out to 512 cores (64 nodes). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Ljdursi|ljdursi]]  19:20, 13 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
===FLASH3 (Adaptive Mesh reactive hydrodynamics; explict hydro/MHD)===&lt;br /&gt;
&lt;br /&gt;
[[Image:weak-scaling-example.png|thumb|right|320px|Weak scaling test of the 2d sod problem on both the GPC and TCS.  The results are actually somewhat faster on the GPC; in both cases (weak) scaling is very good out to at least 256 cores]]&lt;br /&gt;
&lt;br /&gt;
[http://flash.uchicago.edu FLASH] encapsulates its machine-dependent information in the &amp;lt;tt&amp;gt;FLASH3/sites&amp;lt;/tt&amp;gt; directory.  For the GPC, you'll have to&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load intel&lt;br /&gt;
module load openmpi&lt;br /&gt;
module load hdf5/184-p1-v16-openmpi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and with that, the following file (&amp;lt;tt&amp;gt;sites/scinetgpc/Makefile.h&amp;lt;/tt&amp;gt;) works for me:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
## Must do module load hdf5/183-v16-openmpi&lt;br /&gt;
HDF5_PATH = ${SCINET_HDF5_BASE}&lt;br /&gt;
ZLIB_PATH = /usr/local&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Compiler and linker commands&lt;br /&gt;
#&lt;br /&gt;
#  We use the f90 compiler as the linker, so some C libraries may explicitly&lt;br /&gt;
#  need to be added into the link line.&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
## modules will put the right mpi in our path&lt;br /&gt;
FCOMP   = mpif77&lt;br /&gt;
CCOMP   = mpicc&lt;br /&gt;
CPPCOMP = mpiCC&lt;br /&gt;
LINK    = mpif77&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Compilation flags&lt;br /&gt;
#&lt;br /&gt;
#  Three sets of compilation/linking flags are defined: one for optimized&lt;br /&gt;
#  code, one for testing, and one for debugging.  The default is to use the &lt;br /&gt;
#  _OPT version.  Specifying -debug to setup will pick the _DEBUG version,&lt;br /&gt;
#  these should enable bounds checking.  Specifying -test is used for &lt;br /&gt;
#  flash_test, and is set for quick code generation, and (sometimes) &lt;br /&gt;
#  profiling.  The Makefile generated by setup will assign the generic token &lt;br /&gt;
#  (ex. FFLAGS) to the proper set of flags (ex. FFLAGS_OPT).&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
FFLAGS_OPT   =  -c -r8 -i4 -O3 -xSSE4.2&lt;br /&gt;
FFLAGS_DEBUG =  -c -g -r8 -i4 -O0&lt;br /&gt;
FFLAGS_TEST  =  -c -r8 -i4&lt;br /&gt;
&lt;br /&gt;
LIB_HDF5 = -L${HDF5_PATH}/lib -lhdf5 -L${SCINET_ZLIB_LIB} -lz -lgpfs&lt;br /&gt;
&lt;br /&gt;
# if we are using HDF5, we need to specify the path to the include files&lt;br /&gt;
CFLAGS_HDF5  = -I${HDF5_PATH}/include&lt;br /&gt;
&lt;br /&gt;
CFLAGS_OPT   = -c -O3 -xSSE4.2&lt;br /&gt;
CFLAGS_TEST  = -c -O2 &lt;br /&gt;
CFLAGS_DEBUG = -c -g  &lt;br /&gt;
&lt;br /&gt;
MDEFS = &lt;br /&gt;
&lt;br /&gt;
.SUFFIXES: .o .c .f .F .h .fh .F90 .f90&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Linker flags&lt;br /&gt;
#&lt;br /&gt;
#  There is a separate version of the linker flags for each of the _OPT, &lt;br /&gt;
#  _DEBUG, and _TEST cases.&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
LFLAGS_OPT   = -o&lt;br /&gt;
LFLAGS_TEST  = -o&lt;br /&gt;
LFLAGS_DEBUG = -g -o&lt;br /&gt;
&lt;br /&gt;
MACHOBJ = &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MV = mv -f&lt;br /&gt;
AR = ar -r&lt;br /&gt;
RM = rm -f&lt;br /&gt;
CD = cd&lt;br /&gt;
RL = ranlib&lt;br /&gt;
ECHO = echo&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Ljdursi|ljdursi]] 22:11, 13 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Aeronautics==&lt;br /&gt;
&lt;br /&gt;
==Chemistry==&lt;br /&gt;
&lt;br /&gt;
===CPMD===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[Cpmd | CPMD]] page.&lt;br /&gt;
&lt;br /&gt;
===NWChem===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[Nwchem | NWChem]] page.&lt;br /&gt;
&lt;br /&gt;
===GAMESS (US)===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[gamess|GAMESS (US)]] page.&lt;br /&gt;
&lt;br /&gt;
User-supplied content below.&lt;br /&gt;
&lt;br /&gt;
====Tips from the Fekl Lab====&lt;br /&gt;
&lt;br /&gt;
Through trial and error, we have found a few useful things that we would like to share:&lt;br /&gt;
&lt;br /&gt;
1. Two very useful, open-source programs for visualization of output files from GAMESS(US) and for generation of input files are [http://www.scl.ameslab.gov/MacMolPlt/ MacMolPlt] and [http://avogadro.openmolecules.net/wiki/Main_Page Avogadro].  They are available for UNIX/Linux, Windows and Mac-based machines, HOWEVER:  any input files that we have generated with these programs on a Windows-based machine do not run on Mac-based machines.  We don't know why.&lt;br /&gt;
&lt;br /&gt;
2. [http://winscp.net/eng/index.php WinSCP] is a very useful tool with a graphical user interface for moving files between a local machine and SciNet.  It also has text-editing capabilities.&lt;br /&gt;
&lt;br /&gt;
3. The [https://bse.pnl.gov/bse/portal EMSL Basis Set Exchange] is an excellent source for custom basis set or effective core potential parameters.  Make sure that you specify &amp;quot;Gamess-US&amp;quot; in the format drop-down box.&lt;br /&gt;
&lt;br /&gt;
4.  The commercial program [http://www.chemcraftprog.com/ ChemCraft] is a highly useful visualization program that has the ability to edit molecules in a very similar fashion to GaussView.  It can also be customized to build GAMESS(US) input files.&lt;br /&gt;
&lt;br /&gt;
====Anatomy of a GAMESS(US) Input File with Basis Set Info in an External File====&lt;br /&gt;
&lt;br /&gt;
  $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=M06-L MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
   ECP=READ $END&lt;br /&gt;
  $SYSTEM TIMLIM=525600 MWORDS=1750 PARALL=.TRUE. $END&lt;br /&gt;
  $BASIS GBASIS=CUSTOMNI EXTFIL=.t. $END&lt;br /&gt;
  $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
  $STATPT OPTTOL=0.0001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
  $DATA&lt;br /&gt;
  Mo_BDT3&lt;br /&gt;
 C1&lt;br /&gt;
 MOLYBDENUM 42.0      5.7556500000      4.4039600000     16.5808400000&lt;br /&gt;
 SULFUR     16.0      7.4169700000      3.1956300000     15.2089300000&lt;br /&gt;
 SULFUR     16.0      4.0966800000      3.2258300000     15.1761100000&lt;br /&gt;
 SULFUR     16.0      3.9677300000      4.4940500000     18.3266100000&lt;br /&gt;
 SULFUR     16.0      7.1776900000      3.5815000000     18.4485200000&lt;br /&gt;
 SULFUR     16.0      4.3776600000      6.2447400000     15.6786900000&lt;br /&gt;
 SULFUR     16.0      7.5478700000      6.0679800000     16.2223700000&lt;br /&gt;
 CARBON      6.0      6.4716900000      2.1004800000     14.1902300000&lt;br /&gt;
 CARBON      6.0      5.0690300000      2.1781400000     14.1080700000&lt;br /&gt;
 CARBON      6.0      4.8421800000      4.2701300000     19.8855500000&lt;br /&gt;
 CARBON      6.0      6.1969000000      3.9249600000     19.9397400000&lt;br /&gt;
 CARBON      6.0      6.8280600000      3.7834200000     21.1913200000&lt;br /&gt;
 CARBON      6.0      5.7697600000      7.6933500000     17.4241800000&lt;br /&gt;
 CARBON      6.0      7.2043100000      7.9413600000     17.8281100000&lt;br /&gt;
 CARBON      6.0      5.5051400000      7.0409700000     14.5903800000&lt;br /&gt;
 CARBON      6.0      6.8905200000      6.9194700000     14.7626200000&lt;br /&gt;
 CARBON      6.0      7.7396400000      7.5379800000     13.8285700000&lt;br /&gt;
 HYDROGEN    1.0      8.8190700000      7.4520600000     13.9252200000&lt;br /&gt;
 CARBON      6.0      7.2169400000      8.2960300000     12.7704100000&lt;br /&gt;
 HYDROGEN    1.0      7.8667000000      8.7825100000     12.0575600000&lt;br /&gt;
 CARBON      6.0      5.8260300000      8.4502300000     12.6467800000&lt;br /&gt;
 HYDROGEN    1.0      5.4143000000      9.0544300000     11.8493100000&lt;br /&gt;
 CARBON      6.0      4.9881500000      7.8192300000     13.5528400000&lt;br /&gt;
 HYDROGEN    1.0      3.9090500000      7.9420000000     13.4583700000&lt;br /&gt;
 CARBON      6.0      7.1538500000      1.1569600000     13.4143900000&lt;br /&gt;
 CARBON      6.0      4.4018100000      1.3603900000     13.1919900000&lt;br /&gt;
 CARBON      6.0      6.4791600000      0.3185500000     12.5353300000&lt;br /&gt;
 CARBON      6.0      5.0837400000      0.4369500000     12.4084900000&lt;br /&gt;
 HYDROGEN    1.0      7.0116000000     -0.4099400000     11.9434600000&lt;br /&gt;
 HYDROGEN    1.0      8.2399000000      1.0702400000     13.4937600000&lt;br /&gt;
 HYDROGEN    1.0      3.3185600000      1.4368700000     13.0953100000&lt;br /&gt;
 HYDROGEN    1.0      4.5549800000     -0.1997300000     11.7165200000&lt;br /&gt;
 CARBON      6.0      6.1105700000      3.9639000000     22.3866100000&lt;br /&gt;
 CARBON      6.0      4.1216300000      4.4424400000     21.1020100000&lt;br /&gt;
 HYDROGEN    1.0      7.8732900000      3.5217100000     21.2520500000&lt;br /&gt;
 CARBON      6.0      4.7606000000      4.2868500000     22.3363800000&lt;br /&gt;
 HYDROGEN    1.0      6.6064200000      3.8406000000     23.3428500000&lt;br /&gt;
 HYDROGEN    1.0      4.2065000000      4.4170700000     23.2667100000&lt;br /&gt;
 HYDROGEN    1.0      3.0674000000      4.6893500000     21.0889000000&lt;br /&gt;
 HYDROGEN    1.0      7.4249200000      7.7545300000     18.8583200000&lt;br /&gt;
 HYDROGEN    1.0      7.6651700000      8.9049700000     17.7652100000&lt;br /&gt;
 HYDROGEN    1.0      5.3324000000      8.6487800000     17.2222700000&lt;br /&gt;
 HYDROGEN    1.0      5.5015000000      7.1039000000     18.2759400000&lt;br /&gt;
  $END&lt;br /&gt;
  $ECP&lt;br /&gt;
 MO-ECP GEN     28     3&lt;br /&gt;
  5      ----- f potential     -----&lt;br /&gt;
     -0.0469492        0    537.9667807        &lt;br /&gt;
    -20.2080084        1    147.8982938        &lt;br /&gt;
   -106.2116302        2     45.7358898        &lt;br /&gt;
    -41.8107368        2     13.2911467        &lt;br /&gt;
     -4.2054103        2      4.7059961        &lt;br /&gt;
  3      ----- s-f potential     -----&lt;br /&gt;
      2.8063717        0    110.2991760        &lt;br /&gt;
     44.5162012        1     23.2014645        &lt;br /&gt;
     82.7785227        2      5.3530131        &lt;br /&gt;
  4      ----- p-f potential     -----&lt;br /&gt;
      4.9420876        0     63.2901397        &lt;br /&gt;
     25.8604976        1     23.3315302        &lt;br /&gt;
    132.4708742        2     24.6759423        &lt;br /&gt;
     57.3149794        2      4.6493040        &lt;br /&gt;
  5      ----- d-f potential     -----&lt;br /&gt;
      3.0054591        0    104.4839977        &lt;br /&gt;
     26.3637851        1     66.2307245        &lt;br /&gt;
    183.3849199        2     39.1283176        &lt;br /&gt;
     98.4453068        2     13.1164437        &lt;br /&gt;
     22.4901377        2      3.6280263 &lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
=====The Input Deck=====&lt;br /&gt;
&lt;br /&gt;
Below is the input deck.  It is where you tell GAMESS(US) what job type to execute and where all your individual parameters are entered for your specific job type.  The example input deck below is for a geometry optimization and frequency calculation.  This input deck is equivalent to a Gaussian job with &amp;quot;opt&amp;quot; and &amp;quot;freq&amp;quot; in the route section.&lt;br /&gt;
&lt;br /&gt;
  $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=M06-L MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
   ECP=READ $END&lt;br /&gt;
  $SYSTEM TIMLIM=2850 MWORDS=1750 MEMDDI=20 PARALL=.TRUE. $END&lt;br /&gt;
  $BASIS GBASIS=CUSTOMNI EXTFIL=.t. $END&lt;br /&gt;
  $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
  $STATPT OPTTOL=0.0001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
  $DATA&lt;br /&gt;
&lt;br /&gt;
An important thing to note is the spacing.  There must be one space at the beginning of each line of the input deck.  If not, the job will fail.  Most builders will insert this space anyway, but it helps to double-check.&lt;br /&gt;
&lt;br /&gt;
The end of the input deck is marked by the &amp;quot;$DATA&amp;quot; line.&lt;br /&gt;
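Since a missing leading space silently kills the job, a quick check can save a round-trip through the queue.  A minimal sketch (the deck file name deck.inp is an assumption; the check applies only to the input-deck portion of the file):&lt;br /&gt;

```shell
# Write a small example deck, then flag any deck line that does not
# start with a space (such lines would make the GAMESS(US) job fail):
cat > deck.inp <<'EOF'
 $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE MAXIT=199 $END
 $SYSTEM TIMLIM=2850 MWORDS=1750 $END
EOF
grep -n '^[^ ]' deck.inp && echo 'fix the lines above' || echo 'deck OK'
```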
&lt;br /&gt;
=====Job Title Line=====&lt;br /&gt;
&lt;br /&gt;
The next line of the file is the job title.  It can be anything you wish; however, we have found that, to be on the safe side, it is best to avoid using symbols or spaces.&lt;br /&gt;
&lt;br /&gt;
  Mo_BDT3&lt;br /&gt;
&lt;br /&gt;
=====Symmetry Point Group=====&lt;br /&gt;
&lt;br /&gt;
The next line of the file is the symmetry point group of your molecule.  Note that there is no leading space before the point group.&lt;br /&gt;
&lt;br /&gt;
 C1&lt;br /&gt;
&lt;br /&gt;
=====Coordinates=====&lt;br /&gt;
&lt;br /&gt;
The next block of text is set aside for the coordinates of the molecule.  These can be in internal (z-matrix) format or Cartesian coordinates.  Note that there is no leading space before the coordinates.  One may use the chemical symbol or the full name of each atom in the molecule.  The end of the coordinates is signified by a &amp;quot;$END&amp;quot;, which MUST have one space preceding it.  The coordinates below do NOT have any basis set information inserted.  It is possible to insert basis set information directly into the input file, by obtaining the desired basis set parameters from the EMSL and then inserting them below each relevant atom.  An example input file with inserted basis set information will be shown later.&lt;br /&gt;
&lt;br /&gt;
 MOLYBDENUM 42.0      5.7556500000      4.4039600000     16.5808400000&lt;br /&gt;
 SULFUR     16.0      7.4169700000      3.1956300000     15.2089300000&lt;br /&gt;
 SULFUR     16.0      4.0966800000      3.2258300000     15.1761100000&lt;br /&gt;
 SULFUR     16.0      3.9677300000      4.4940500000     18.3266100000&lt;br /&gt;
 SULFUR     16.0      7.1776900000      3.5815000000     18.4485200000&lt;br /&gt;
 SULFUR     16.0      4.3776600000      6.2447400000     15.6786900000&lt;br /&gt;
 SULFUR     16.0      7.5478700000      6.0679800000     16.2223700000&lt;br /&gt;
 CARBON      6.0      6.4716900000      2.1004800000     14.1902300000&lt;br /&gt;
 CARBON      6.0      5.0690300000      2.1781400000     14.1080700000&lt;br /&gt;
 CARBON      6.0      4.8421800000      4.2701300000     19.8855500000&lt;br /&gt;
 CARBON      6.0      6.1969000000      3.9249600000     19.9397400000&lt;br /&gt;
 CARBON      6.0      6.8280600000      3.7834200000     21.1913200000&lt;br /&gt;
 CARBON      6.0      5.7697600000      7.6933500000     17.4241800000&lt;br /&gt;
 CARBON      6.0      7.2043100000      7.9413600000     17.8281100000&lt;br /&gt;
 CARBON      6.0      5.5051400000      7.0409700000     14.5903800000&lt;br /&gt;
 CARBON      6.0      6.8905200000      6.9194700000     14.7626200000&lt;br /&gt;
 CARBON      6.0      7.7396400000      7.5379800000     13.8285700000&lt;br /&gt;
 HYDROGEN    1.0      8.8190700000      7.4520600000     13.9252200000&lt;br /&gt;
 CARBON      6.0      7.2169400000      8.2960300000     12.7704100000&lt;br /&gt;
 HYDROGEN    1.0      7.8667000000      8.7825100000     12.0575600000&lt;br /&gt;
 CARBON      6.0      5.8260300000      8.4502300000     12.6467800000&lt;br /&gt;
 HYDROGEN    1.0      5.4143000000      9.0544300000     11.8493100000&lt;br /&gt;
 CARBON      6.0      4.9881500000      7.8192300000     13.5528400000&lt;br /&gt;
 HYDROGEN    1.0      3.9090500000      7.9420000000     13.4583700000&lt;br /&gt;
 CARBON      6.0      7.1538500000      1.1569600000     13.4143900000&lt;br /&gt;
 CARBON      6.0      4.4018100000      1.3603900000     13.1919900000&lt;br /&gt;
 CARBON      6.0      6.4791600000      0.3185500000     12.5353300000&lt;br /&gt;
 CARBON      6.0      5.0837400000      0.4369500000     12.4084900000&lt;br /&gt;
 HYDROGEN    1.0      7.0116000000     -0.4099400000     11.9434600000&lt;br /&gt;
 HYDROGEN    1.0      8.2399000000      1.0702400000     13.4937600000&lt;br /&gt;
 HYDROGEN    1.0      3.3185600000      1.4368700000     13.0953100000&lt;br /&gt;
 HYDROGEN    1.0      4.5549800000     -0.1997300000     11.7165200000&lt;br /&gt;
 CARBON      6.0      6.1105700000      3.9639000000     22.3866100000&lt;br /&gt;
 CARBON      6.0      4.1216300000      4.4424400000     21.1020100000&lt;br /&gt;
 HYDROGEN    1.0      7.8732900000      3.5217100000     21.2520500000&lt;br /&gt;
 CARBON      6.0      4.7606000000      4.2868500000     22.3363800000&lt;br /&gt;
 HYDROGEN    1.0      6.6064200000      3.8406000000     23.3428500000&lt;br /&gt;
 HYDROGEN    1.0      4.2065000000      4.4170700000     23.2667100000&lt;br /&gt;
 HYDROGEN    1.0      3.0674000000      4.6893500000     21.0889000000&lt;br /&gt;
 HYDROGEN    1.0      7.4249200000      7.7545300000     18.8583200000&lt;br /&gt;
 HYDROGEN    1.0      7.6651700000      8.9049700000     17.7652100000&lt;br /&gt;
 HYDROGEN    1.0      5.3324000000      8.6487800000     17.2222700000&lt;br /&gt;
 HYDROGEN    1.0      5.5015000000      7.1039000000     18.2759400000&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
=====Effective Core Potential Data=====&lt;br /&gt;
&lt;br /&gt;
The effective core potential (ECP) data is entered after the coordinates.  It starts with &amp;quot;$ECP&amp;quot;, which must be preceded by a space.   The atoms of the molecule are listed in the same order as in the coordinates section, with the ECP parameters listed after each atom.  For any atom that does NOT have an ECP, one must enter &amp;quot;ECP-NONE&amp;quot; or &amp;quot;NONE&amp;quot; instead.&lt;br /&gt;
&lt;br /&gt;
 $ECP&lt;br /&gt;
 MO-ECP GEN     28     3&lt;br /&gt;
  5      ----- f potential     -----&lt;br /&gt;
     -0.0469492        0    537.9667807        &lt;br /&gt;
    -20.2080084        1    147.8982938        &lt;br /&gt;
   -106.2116302        2     45.7358898        &lt;br /&gt;
    -41.8107368        2     13.2911467        &lt;br /&gt;
     -4.2054103        2      4.7059961        &lt;br /&gt;
  3      ----- s-f potential     -----&lt;br /&gt;
      2.8063717        0    110.2991760        &lt;br /&gt;
     44.5162012        1     23.2014645        &lt;br /&gt;
     82.7785227        2      5.3530131        &lt;br /&gt;
  4      ----- p-f potential     -----&lt;br /&gt;
      4.9420876        0     63.2901397        &lt;br /&gt;
     25.8604976        1     23.3315302        &lt;br /&gt;
    132.4708742        2     24.6759423        &lt;br /&gt;
     57.3149794        2      4.6493040        &lt;br /&gt;
  5      ----- d-f potential     -----&lt;br /&gt;
      3.0054591        0    104.4839977        &lt;br /&gt;
     26.3637851        1     66.2307245        &lt;br /&gt;
    183.3849199        2     39.1283176        &lt;br /&gt;
     98.4453068        2     13.1164437        &lt;br /&gt;
     22.4901377        2      3.6280263 &lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  16 November 2009&lt;br /&gt;
&lt;br /&gt;
====Using an External File to Define Basis Set in GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
Since GAMESS(US) has a limited number of built-in ECPs and basis sets, one may want to make GAMESS(US) read an external file that contains the basis set information and ECP data, using the &amp;quot;EXTFIL&amp;quot; keyword in the $GBASIS command line of the input file.  For many metal-containing compounds, it is very convenient and time-saving to use an effective core potential (ECP) for the core metal electrons, as they are usually not important to the reactivity of the complex or the geometry around the metal.  In addition, to make GAMESS(US) use this external file, one must copy the &amp;quot;rungms&amp;quot; file and modify it accordingly.  The following is a list of instructions with commands that will work from a terminal.  One could also use WinSCP to do all of this with a GUI rather than a TUI.  &lt;br /&gt;
&lt;br /&gt;
=====Modifying rungms to Use a Custom Basis Set File=====&lt;br /&gt;
1. Copy &amp;quot;rungms&amp;quot; from /scinet/gpc/Applications/gamess to one's own /scratch/$USER/ directory:&lt;br /&gt;
 cp /scinet/gpc/Applications/gamess/rungms /scratch/$USER/&lt;br /&gt;
&lt;br /&gt;
2. Change to the scratch directory and check to see if &amp;quot;rungms&amp;quot; has copied successfully.&lt;br /&gt;
 cd /scratch/$USER&lt;br /&gt;
 ls&lt;br /&gt;
&lt;br /&gt;
3. Edit line 147 of the script.  &lt;br /&gt;
 vi rungms&lt;br /&gt;
Move the cursor down to line 147 using the arrow keys.  It should say &amp;quot;setenv EXTBAS /dev/null&amp;quot;.  Using the arrow keys, move the cursor to the first &amp;quot;/&amp;quot; and then hit &amp;quot;i&amp;quot; to insert text.  Put the path to your external basis set file here, for example /scratch/$USER/basisset.  Then hit &amp;quot;escape&amp;quot;.  To save the changes and exit vi, type &amp;quot;:&amp;quot; and you should see a colon appear at the bottom of the window.  Type &amp;quot;wq&amp;quot; (which should appear at the bottom of the window next to the colon) and then hit enter.  Now you are done with vi.&lt;br /&gt;
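For those who prefer not to use vi, the same edit can be made non-interactively with sed.  A sketch on a one-line stand-in copy of rungms (the basis set path here is an example; substitute your own):&lt;br /&gt;

```shell
# Miniature stand-in containing only the relevant line 147 of rungms:
printf 'setenv EXTBAS /dev/null\n' > rungms.copy
# Point EXTBAS at the external basis set file instead of /dev/null:
sed -i 's|setenv EXTBAS /dev/null|setenv EXTBAS /scratch/'"$USER"'/basisset|' rungms.copy
grep 'EXTBAS' rungms.copy
```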
&lt;br /&gt;
=====Creating a Custom Basis Set File=====&lt;br /&gt;
1. To create a custom basis set file, you need to create a new text document.  Our group's common practice is to comment out the first line of this file with an exclamation mark (!), followed by a note of the specific basis sets and ECPs that are going to be used for each of the atoms.  Let us use the molecule Mo(CO)6, molybdenum hexacarbonyl, as an example.  Below is the first line of the external file, which we will call &amp;quot;CUSTOMMO&amp;quot;  (NOTE:  you can use any name for the external file that suits you, as long as it has no spaces and is 8 characters or less).&lt;br /&gt;
&lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
&lt;br /&gt;
2. The next step is to visit the [https://bse.pnl.gov/bse/portal EMSL Basis Set Exchange] and select C and O from the periodic table.  Then, on the left of the page, select &amp;quot;6-31G&amp;quot; as the basis set.  Finally, make sure the output is in GAMESS(US) format using the drop-down menu and then click &amp;quot;get basis set&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[File:C_O_6_31G_basisset.JPG|centre]]&lt;br /&gt;
&lt;br /&gt;
3. A new window should appear with text in it.  For our example case, the text looks like this:&lt;br /&gt;
 &lt;br /&gt;
 !  6-31G  EMSL  Basis Set Exchange Library   10/13/09 11:12 AM&lt;br /&gt;
 ! Elements                             References&lt;br /&gt;
 ! --------                             ----------&lt;br /&gt;
 ! H - He: W.J. Hehre, R. Ditchfield and J.A. Pople, J. Chem. Phys. 56,&lt;br /&gt;
 ! Li - Ne: 2257 (1972).  Note: Li and B come from J.D. Dill and J.A.&lt;br /&gt;
 ! Pople, J. Chem. Phys. 62, 2921 (1975).&lt;br /&gt;
 ! Na - Ar: M.M. Francl, W.J. Petro, W.J. Hehre, J.S. Binkley, M.S. Gordon,&lt;br /&gt;
 ! D.J. DeFrees and J.A. Pople, J. Chem. Phys. 77, 3654 (1982)&lt;br /&gt;
 ! K  - Zn: V. Rassolov, J.A. Pople, M. Ratner and T.L. Windus, J. Chem. Phys.&lt;br /&gt;
 ! 109, 1223 (1998)&lt;br /&gt;
 ! Note: He and Ne are unpublished basis sets taken from the Gaussian&lt;br /&gt;
 ! program&lt;br /&gt;
 ! &lt;br /&gt;
 $DATA&lt;br /&gt;
 CARBON&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000&lt;br /&gt;
 OXYGEN&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000        &lt;br /&gt;
 $END&lt;br /&gt;
&lt;br /&gt;
3. Now, copy and paste the text between the $DATA and $END headings into our external text file, CUSTOMMO.  We also need to change the name of each element to the corresponding symbol in the periodic table.  Finally, we need to add the name of the external file next to the element symbol, separated by one space.  Note that there should be a blank line separating the basis set information and the first, commented-out line (the line starting with the '!').  CUSTOMMO should look like this:&lt;br /&gt;
 &lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
 C CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000        &lt;br /&gt;
 O CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000 &lt;br /&gt;
&lt;br /&gt;
4. Repeat Step 3 above, but choose Mo and select the LANL2DZ ECP instead.  A new window will pop up with the basis set information as well as the ECP data we need, since we specified the LANL2DZ '''ECP'''.  The ECP data is not inserted into the external file; rather, it is placed in the input file itself (more on this later).  &lt;br /&gt;
&lt;br /&gt;
[[File:Mo_LANL2DZ_basisset.JPG|centre]]&lt;br /&gt;
&lt;br /&gt;
5.  After copying the molybdenum basis set information, your finished external basis set file should look like this:&lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
 C CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000        &lt;br /&gt;
 O CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000        &lt;br /&gt;
 Mo CUSTOMMO&lt;br /&gt;
 S   3&lt;br /&gt;
   1      2.3610000             -0.9121760        &lt;br /&gt;
   2      1.3090000              1.1477453        &lt;br /&gt;
   3      0.4500000              0.6097109        &lt;br /&gt;
 S   4&lt;br /&gt;
   1      2.3610000              0.8139259        &lt;br /&gt;
   2      1.3090000             -1.1360084        &lt;br /&gt;
   3      0.4500000             -1.1611592        &lt;br /&gt;
   4      0.1681000              1.0064786        &lt;br /&gt;
 S   1&lt;br /&gt;
   1      0.0423000              1.0000000        &lt;br /&gt;
 P   3&lt;br /&gt;
   1      4.8950000             -0.0908258        &lt;br /&gt;
   2      1.0440000              0.7042899        &lt;br /&gt;
   3      0.3877000              0.3973179        &lt;br /&gt;
 P   2&lt;br /&gt;
   1      0.4995000             -0.1081945        &lt;br /&gt;
   2      0.0780000              1.0368093        &lt;br /&gt;
 P   1&lt;br /&gt;
   1      0.0247000              1.0000000        &lt;br /&gt;
 D   3&lt;br /&gt;
   1      2.9930000              0.0527063        &lt;br /&gt;
   2      1.0630000              0.5003907        &lt;br /&gt;
   3      0.3721000              0.5794024        &lt;br /&gt;
 D   1&lt;br /&gt;
   1      0.1178000              1.0000000&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 September 2010&lt;br /&gt;
&lt;br /&gt;
====A Modified BASH Script for Running GAMESS(US)====&lt;br /&gt;
Below please find the bash script that we use to run GAMESS(US) on a single node with 8 processors.  &lt;br /&gt;
&lt;br /&gt;
One quirk of GAMESS(US) is that it will NOT write over old or failed jobs that have the same name as the input file you are submitting.  For example:  my input file name is &amp;quot;mo_opt.inp&amp;quot; and I submit this job to the queue.  However, it comes back seconds later with an error.  The log file says that I have typed an incorrect keyword, and lo and behold, I have a comma where it shouldn't be.  Such typos can be common.  If you simply try to re-submit, GAMESS(US) will fail again, because it has written a .log file and some other files to the /scratch/user/gamess-scratch/ directory.  These files must all be deleted before you re-submit your fixed input file.&lt;br /&gt;
&lt;br /&gt;
This script takes care of this annoying problem by deleting failed jobs with the same file name for you.&lt;br /&gt;
&lt;br /&gt;
Here it is:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #PBS -l nodes=1:ppn=8,walltime=48:00:00,os=centos53computeA&lt;br /&gt;
 &lt;br /&gt;
 ## To submit type: qsub x.sh&lt;br /&gt;
 &lt;br /&gt;
 # If not an interactive job (i.e. -I), then cd into the directory where&lt;br /&gt;
 # I typed qsub.&lt;br /&gt;
 if [ &amp;quot;$PBS_ENVIRONMENT&amp;quot; != &amp;quot;PBS_INTERACTIVE&amp;quot; ]; then&lt;br /&gt;
   if [ -n &amp;quot;$PBS_O_WORKDIR&amp;quot; ]; then&lt;br /&gt;
     cd $PBS_O_WORKDIR&lt;br /&gt;
   fi&lt;br /&gt;
 fi&lt;br /&gt;
 &lt;br /&gt;
 # the input file is typically named something like &amp;quot;gamesjob.inp&amp;quot;&lt;br /&gt;
 # so the script will be run like &amp;quot;$SCINET_RUNGMS gamessjob 00 8 8&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 find /scratch/user/gamess-scratch -type f -name ${NAME:-safety_net}\* -exec /bin/rm {} \;&lt;br /&gt;
 &lt;br /&gt;
 # load the gamess module if not in .bashrc already&lt;br /&gt;
 # actually, it MUST be in .bashrc&lt;br /&gt;
 # module load gamess&lt;br /&gt;
 &lt;br /&gt;
 # run the program&lt;br /&gt;
 &lt;br /&gt;
 /scratch/user/rungms $NAME 00 8 8 &amp;gt;&amp;amp; $NAME.log&lt;br /&gt;
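The cleanup step is the key line.  As a quick sanity check, it can be exercised on its own with some throwaway files (the scratch directory and job name below are made up for the demonstration):&lt;br /&gt;

```shell
# create a throwaway "scratch" directory with leftovers from a failed job
SCRATCH=$(mktemp -d)
touch "$SCRATCH/mo_opt.log" "$SCRATCH/mo_opt.F05" "$SCRATCH/other_job.log"

# same pattern as the script above: delete every file whose name starts with
# $NAME; the safety_net default stops an unset NAME from matching everything
NAME=mo_opt
find "$SCRATCH" -type f -name "${NAME:-safety_net}"\* -exec /bin/rm {} \;

ls "$SCRATCH"
```

Only other_job.log should survive; the files from the failed mo_opt run are gone, so the fixed input can be resubmitted cleanly.&lt;br /&gt;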
&lt;br /&gt;
====A Script to Add the $VIB Group for Hessian Restarts in GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
Sometimes, an optimization plus vibrational analysis, or just a plain vibrational analysis, must be restarted.  This can be because the two-day time limit has been exceeded, or perhaps there was an error during the calculation.  In any case, when this happens, the job must be restarted.  In GAMESS(US), you can restart a vibrational analysis from a previous one and it will utilize the frequencies that were already computed in the failed run.&lt;br /&gt;
&lt;br /&gt;
For example, if one submits the input file &amp;quot;job_name.inp&amp;quot; and it fails before it has finished, then one must utilize the file &amp;quot;job_name.rst&amp;quot;, which contains data that is required to restart the calculation.  This file is located in the /scratch/user/gamess-scratch directory.  Data from the &amp;quot;job_name.rst&amp;quot; file must be appended at the end of the new input file (after the coordinates and the ECP section, if present) to restart the calculation; let us call the new file &amp;quot;job_name_restart.inp&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
A shortened version of the &amp;quot;job_name.rst&amp;quot; file looks like this:&lt;br /&gt;
&lt;br /&gt;
  ENERGY/GRADIENT/DIPOLE RESTART DATA FOR RUNTYP=HESSIAN&lt;br /&gt;
  job_name                           &lt;br /&gt;
  $VIB   &lt;br /&gt;
         IVIB=   0 IATOM=   0 ICOORD=   0 E=    -3717.1435124522&lt;br /&gt;
 -5.165258381E-04 1.584665821E-02-1.206270555E-02-2.241461728E-03 3.176050715E-03&lt;br /&gt;
 -5.706738823E-04 2.502034151E-03 5.130112290E-04-2.716945939E-03 1.357008279E-03&lt;br /&gt;
 -1.059915305E-03 1.693526456E-03-2.957638907E-04-5.994938737E-04 9.684054361E-04&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The text eventually ends with one blank line. The $VIB heading and all of the text after $VIB must be appended to the end of file &amp;quot;job_name_restart.inp&amp;quot; and then &amp;quot; $END&amp;quot; must be inserted at the very end of the file.&lt;br /&gt;
&lt;br /&gt;
One could cut and paste this in a text editor, but we have written a small script that will do it automatically.  We call it &amp;quot;vib.sh&amp;quot; but you can call it whatever you like.  Here it is:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # script to add vibrational data for a hessian restart&lt;br /&gt;
 &lt;br /&gt;
 awk '/\$VIB/{p=1}p;END{print &amp;quot; $END&amp;quot;}' /scratch/user/gamess-scratch/$NAME1.rst &amp;gt;&amp;gt; $NAME2.inp&lt;br /&gt;
&lt;br /&gt;
To use it, simply copy it into a new text file with the extension &amp;quot;.sh&amp;quot; and make it executable.  Also, you will need to edit the location of the &amp;quot;/scratch/user/gamess-scratch/&amp;quot; directory to match your user name.  The two variables in the script, NAME1 and NAME2, represent the name of your &amp;quot;.rst&amp;quot; file and your new &amp;quot;.inp&amp;quot; file, respectively.  In the example above, NAME1=job_name (that is, the same name as the .rst file that contains the $VIB data and that was created in the gamess-scratch directory) and NAME2=job_name_restart (that is, the name of the new input file that you have prepared and want to copy the $VIB data into).&lt;br /&gt;
&lt;br /&gt;
To run it on a gpc node without submitting it to the job queue, type:&lt;br /&gt;
&lt;br /&gt;
 NAME1=job_name NAME2=job_name_restart ./vib.sh&lt;br /&gt;
&lt;br /&gt;
To run it in the queue, type:&lt;br /&gt;
&lt;br /&gt;
 qsub vib.sh -v NAME1=job_name,NAME2=job_name_restart &lt;br /&gt;
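As a quick check that the one-liner does what we claim, here it is run against a tiny fake .rst file in the current directory (all the file names and contents here are made up; only the awk command is the real one):&lt;br /&gt;

```shell
# fabricate a minimal .rst file containing a $VIB group
mkdir -p gamess-scratch
printf '%s\n' ' ENERGY/GRADIENT/DIPOLE RESTART DATA FOR RUNTYP=HESSIAN' \
  ' job_name' ' $VIB' '        IVIB=   0 IATOM=   0 ICOORD=   0' \
  > gamess-scratch/job_name.rst

# a stand-in for the prepared restart input file
printf '%s\n' ' $CONTRL RUNTYP=HESSIAN $END' > job_name_restart.inp

# the same awk command as in vib.sh, pointed at the local fake directory:
# print everything from the $VIB line onward, then append a closing " $END"
awk '/\$VIB/{p=1}p;END{print " $END"}' gamess-scratch/job_name.rst >> job_name_restart.inp

tail -n 3 job_name_restart.inp
```

The restart file now ends with the $VIB group followed by &amp;quot; $END&amp;quot;, which is exactly the layout described above.&lt;br /&gt;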
&lt;br /&gt;
-special thanks to Ramses for help with this&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  30 September 2010&lt;br /&gt;
&lt;br /&gt;
====Most Commonly Used Headers in The Fekl Lab====&lt;br /&gt;
&lt;br /&gt;
After about a year of using GAMESS(US), we have found that we are most often doing optimizations, frequency analyses, transition state searches and IRC calculations using DFT methods.  Here are the input decks that we have found work well for inorganic and organometallic compounds.&lt;br /&gt;
&lt;br /&gt;
=====Optimization Plus Frequency (for a neutral, singlet)=====&lt;br /&gt;
 &lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2800 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. DAMP=.T. $END&lt;br /&gt;
 $STATPT OPTTOL=0.00001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
 $FORCE TEMP=298.15 PURIFY=.t. PROJCT=.t. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====Frequency Only (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=HESSIAN DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2800 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. DAMP=.T. $END&lt;br /&gt;
 $FORCE METHOD=SEMINUM VIBANL=.TRUE. PROJCT=.T. PURIFY=.T. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====Transition State Search (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=SADPOINT DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2850 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. $END&lt;br /&gt;
 $STATPT STSTEP=0.05 OPTTOL=0.00001 NSTEP=500 HESS=CALC HSSEND=.t. &lt;br /&gt;
  STPT=.FALSE. $END&lt;br /&gt;
 $FORCE METHOD=SEMINUM VIBANL=.TRUE. PURIFY=.T. PROJCT=.T. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====IRC (Intrinsic Reaction Coordinate) Calculation (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=IRC DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $IRC OPTTOL=0.00001 STRIDE=0.05 NPOINT=5000 SADDLE=.TRUE. FORWRD=.F.&lt;br /&gt;
 $END&lt;br /&gt;
 $SYSTEM TIMLIM=2850 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
 $FORCE TEMP=298.15 PURIFY=.t. PROJCT=.t. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 September 2010&lt;br /&gt;
&lt;br /&gt;
====How to Run an IRC Calculation Using GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
An IRC or Intrinsic Reaction Coordinate calculation follows the imaginary mode of the vibrational analysis of a transition state calculation.  In GAMESS(US), you can choose to follow the forward (towards the products) or backward (toward the reactants) direction.  As shown above in the IRC header that we use, the direction of the IRC calculation is controlled by the &amp;quot;FORWRD&amp;quot; key word.  Using &amp;quot;FORWRD=.T.&amp;quot; means that the IRC is following the forward direction, while using &amp;quot;FORWRD=.F.&amp;quot; means that the IRC calculation is following the backward direction.&lt;br /&gt;
&lt;br /&gt;
Let us say we want to perform an IRC calculation.  First, you must perform a vibrational analysis of your molecule and check to ensure there is only 1 negative frequency.  If that is the case, then the vibrational analysis completed successfully, and there will be a file with the extension &amp;quot;.dat&amp;quot;, let us call it &amp;quot;job_name.dat&amp;quot;, in the &amp;quot;/users/$USER/gamess-scratch/&amp;quot; directory (where $USER is your user name).  This file contains data that is required for the IRC input file.&lt;br /&gt;
&lt;br /&gt;
To prepare your IRC input file, prepare an input file using the coordinates of the optimized structure of the transition state.  This can be from ChemCraft or Avogadro or MacMolPlt - whatever you prefer to use.  Then copy and paste the IRC header above or use your own parameters.  Call it whatever you want, as long as it has an &amp;quot;.inp&amp;quot; extension; let us call it &amp;quot;irc_job.inp&amp;quot;.  &lt;br /&gt;
&lt;br /&gt;
You may want to adjust some of the parameters.  For example, the &amp;quot;STRIDE&amp;quot; value determines the &amp;quot;size&amp;quot; of the steps between each point on the IRC graph.  If you increase the stride value, say from 0.05 to 0.1, then the steps between each point become larger and you will approach the minimum faster (this will give you fewer data points should you choose to plot the IRC data).  Decreasing the stride value, say from 0.05 to 0.01, will make the steps between each point smaller, and you may not reach the minimum of the reaction coordinate in the allotted time period.&lt;br /&gt;
&lt;br /&gt;
You should now have an input file with an IRC header, the coordinates of the transition state and basis set and ECP information called &amp;quot;irc_job.inp&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Now you need to use the &amp;quot;job_name.dat&amp;quot; file in the &amp;quot;/users/$USER/gamess-scratch/&amp;quot; directory.  In this file are a number of blocks of data that are sandwiched between a line that contains only &amp;quot; $HESS&amp;quot; and a line that contains only &amp;quot; $END&amp;quot;.  What you need is the LAST of these blocks of text, and it has to be copied and pasted directly below the last entry of your input file.&lt;br /&gt;
&lt;br /&gt;
This can be difficult and time consuming, as the .dat files can be very large (sometimes over 150 MB) and cumbersome to navigate through.  However, we have written a script, similar to the vib.sh script above, that can help you out with this.  Basically, this script does all the copying and pasting for you.  &lt;br /&gt;
&lt;br /&gt;
Here it is:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # script to add hessian data for an IRC calculation&lt;br /&gt;
 &lt;br /&gt;
 awk '/\$HESS/{arr=&amp;quot;&amp;quot;;f=1} f {arr=(arr)?arr ORS $0:$0} /\$END/{f=0} END {print arr}' /scratch/$USER/gamess-scratch/$DAT.dat &amp;gt;&amp;gt; $IN.inp&lt;br /&gt;
&lt;br /&gt;
To use it, simply copy it into a new text file with the name &amp;quot;irc.sh&amp;quot; and make it executable.  The two variables in the script, DAT and IN, represent the name of your &amp;quot;.dat&amp;quot; file and your new &amp;quot;.inp&amp;quot; file, respectively.  In our current example, DAT=job_name (that is, the same name as the .dat file that contains the $HESS data and that was created in the gamess-scratch directory) and IN=irc_job (that is, the name of the new input file that you have prepared and want to copy the $HESS data into). &lt;br /&gt;
&lt;br /&gt;
To run it on a gpc node without submitting it to the job queue, type:&lt;br /&gt;
&lt;br /&gt;
 DAT=job_name IN=irc_job ./irc.sh&lt;br /&gt;
&lt;br /&gt;
To run it in the queue, type:&lt;br /&gt;
&lt;br /&gt;
 qsub irc.sh -v DAT=job_name,IN=irc_job &lt;br /&gt;
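As with vib.sh, the extraction can be checked on a tiny fake .dat file holding two $HESS blocks; only the LAST block should end up in the input file (all file names and contents here are made up; only the awk command is the real one):&lt;br /&gt;

```shell
# fabricate a .dat file with an old and a new $HESS ... $END block
printf '%s\n' ' $HESS' 'old block' ' $END' 'unrelated text' \
  ' $HESS' 'new block' ' $END' > job_name.dat
: > irc_job.inp   # empty stand-in for the prepared IRC input file

# the same awk command as in irc.sh: each $HESS line restarts the capture,
# so only the final $HESS ... $END block is printed at the end
awk '/\$HESS/{arr="";f=1} f {arr=(arr)?arr ORS $0:$0} /\$END/{f=0} END {print arr}' \
  job_name.dat >> irc_job.inp

cat irc_job.inp
```

The old block and the unrelated text never reach irc_job.inp; only the last $HESS ... $END block does.&lt;br /&gt;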
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 October 2010&lt;br /&gt;
&lt;br /&gt;
===Vienna Ab-initio Simulation Package (VASP)===&lt;br /&gt;
Please refer to the VASP page.&lt;br /&gt;
&lt;br /&gt;
User supplied content below.&lt;br /&gt;
&lt;br /&gt;
====Tips from the Polanyi Lab====&lt;br /&gt;
Using VASP on SciNet&lt;br /&gt;
&lt;br /&gt;
Logon using SSH&lt;br /&gt;
login.scinet.utoronto.ca&lt;br /&gt;
&lt;br /&gt;
then ssh to the TCS cluster&lt;br /&gt;
ssh tcs01&lt;br /&gt;
&lt;br /&gt;
change directory to &lt;br /&gt;
cd /scratch/imcnab/test/Si111 - or whatever other directory is convenient.&lt;br /&gt;
&lt;br /&gt;
VASP is contained in the directory imcnab/bin&lt;br /&gt;
&lt;br /&gt;
To submit a job, first edit (at least) the POSCAR file and other VASP&lt;br /&gt;
input files as necessary.&lt;br /&gt;
&lt;br /&gt;
=====Input Files=====&lt;br /&gt;
The minimum set of input files is:&lt;br /&gt;
&lt;br /&gt;
'''vasp.script''' - script file telling TCS to run a VASP job - must be edited to run in current working directory.&lt;br /&gt;
&lt;br /&gt;
'''POSCAR''' - specifies the supercell geometry and &amp;quot;ionic&amp;quot; positions (i.e. atomic centres) and whether relaxation is allowed. Ionic positions may be given in cartesian coordinates (x, y, z in Angstroms) or &amp;quot;absolute&amp;quot; coordinates, which are fractions of the unit cell vectors. CONTCAR is always in absolute coords, so after the first run of any job, you'll find yourself running in absolute coords. VMD can be used to change these back to cartesian coordinates.&lt;br /&gt;
&lt;br /&gt;
'''INCAR''' - specifies parameters to run the job. INCAR is free format - can put input commands in ANY order.&lt;br /&gt;
&lt;br /&gt;
'''POTCAR''' - specifies the potentials to use for each atomic type. Must be in the same order as the atoms are first met in POSCAR&lt;br /&gt;
&lt;br /&gt;
'''KPOINTS''' - specifies the number and position of K-points to use in the calculation.&lt;br /&gt;
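For a multi-element system, the combined POTCAR is normally built by concatenating the per-element POTCAR files in the same order as the atoms appear in POSCAR.  A sketch (the potpaw directory layout and element choice are illustrative; the real potential files ship with VASP):&lt;br /&gt;

```shell
# stand-ins for the per-element potential files
mkdir -p potpaw/Si potpaw/H
printf 'PAW_PBE Si\n' > potpaw/Si/POTCAR
printf 'PAW_PBE H\n'  > potpaw/H/POTCAR

# POSCAR lists Si atoms first, then H, so POTCAR must follow the same order
cat potpaw/Si/POTCAR potpaw/H/POTCAR > POTCAR
```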
&lt;br /&gt;
Any change of name or directory needs to be edited into the job script. The job script name is &amp;quot;vasp.script&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
VASP attempts to read initial wavefunctions from WAVECAR, so if a job is run in steps, leaving the WAVECAR file in the working directory is an efficient way to start the next stage of the calculation.&lt;br /&gt;
&lt;br /&gt;
VASP also writes CONTCAR which is of the same format as POSCAR, and can simply be renamed if it is to be used as the starting point for a new job.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Submit the job to load-leveller with the command llsubmit ./vasp.script from the correct working directory.&lt;br /&gt;
&lt;br /&gt;
You can check the status of a job with llq.&lt;br /&gt;
&lt;br /&gt;
You can cancel a job using llcancel tcs-fXXnYY.$PID, where the tcs-fXXnYY identifier is shown by llq.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== GENERAL NOTES =====&lt;br /&gt;
&lt;br /&gt;
MUCH faster to use ISPIN=1, no-spin (corresponding to RHF), rather than &lt;br /&gt;
ISPIN=2 (which corresponds to UHF). So far, I've not found a system where the atom positions differ, or where the calculated electronic energy differs by more than 1E-4, which is the convergence &lt;br /&gt;
criterion we set.&lt;br /&gt;
&lt;br /&gt;
MUCH faster to use real space LREAL = A, NSIM=4. &lt;br /&gt;
&lt;br /&gt;
So, ''always'' optimize in real space first, then re-optimize in reciprocal space. This does NOT guarantee a one-step optimization in reciprocal space. You may still need to progressively&lt;br /&gt;
relax a large system.&lt;br /&gt;
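In INCAR terms, the fast first pass described above might look like the following sketch (only the tags named in these notes are shown; the values should be checked against your own system, and everything else is left at the VASP defaults):&lt;br /&gt;

```
ISPIN = 1        # no spin polarisation; verify against an ISPIN=2 run later
LREAL = A        # real-space projection for the first, fast optimization
NSIM  = 4
EDIFF = 1E-4     # electronic convergence criterion used in these notes
```

For the follow-up reciprocal-space re-optimization, set LREAL = .FALSE. and restart from the relaxed CONTCAR and WAVECAR.&lt;br /&gt;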
&lt;br /&gt;
'''Relaxing a large system.'''&lt;br /&gt;
If you attempt to relax a large system in one step, it will usually fail.&lt;br /&gt;
&lt;br /&gt;
The starting geometry is usually an unrelaxed molecule above an unrelaxed surface.&lt;br /&gt;
The bottom plane of the surface will NEVER be relaxed, because this corresponds to the fixed boundary condition of REALITY. &lt;br /&gt;
&lt;br /&gt;
First, relax the molecule alone (assuming you have already found a good starting position from single-point calculations).  Place the molecule closer to the surface than you think it should be (say 0.9 van der Waals radii away).&lt;br /&gt;
&lt;br /&gt;
Then ALSO allow the top layer of the surface to relax.&lt;br /&gt;
Then ALSO allow the second top layer of the surface to relax... etc... etc.&lt;br /&gt;
&lt;br /&gt;
If this DOESN'T WORK: Then relax X,Y and Z separately in iterations.&lt;br /&gt;
Example: for the following problem, representing layers of the crystal going DOWN from the top (Z pointing to the top of the screen):&lt;br /&gt;
&lt;br /&gt;
Molecule&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we can try the following relaxation schemes:&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Successive relaxation, Layer by Layer:&amp;lt;br /&amp;gt;&lt;br /&gt;
(1) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(2)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(3)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
etc.  If this works, then you're fine.  However, it can happen that even by Layer 2 you're running into real problems, and the ionic relaxation NEVER converges.  In that case, I have found the following scheme (and variations thereof) useful:&lt;br /&gt;
&lt;br /&gt;
(1)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(2) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(3) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
IF (3) DOESN'T converge THEN TRY&lt;br /&gt;
&lt;br /&gt;
(2')&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule Z   Relax, XY FIXED&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  Z   Relax, XY FIXED&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- you are allowing the top layers to move only UP or DOWN, while allowing the intermediate&lt;br /&gt;
layer 2 to fully relax (actually, there is no way of telling VASP to move ALL atoms by the SAME deltaZ, but that appears to be the effect).&lt;br /&gt;
This is followed by&lt;br /&gt;
&lt;br /&gt;
(2&amp;quot;)&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If (2&amp;quot;) doesn't work, you need to go back to the output of (2') and vary the cycle - perhaps something like:&lt;br /&gt;
(2&amp;quot;')&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then try (2&amp;quot;) again.&lt;br /&gt;
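In the POSCAR file, these per-layer &amp;quot;Relax&amp;quot;/&amp;quot;fixed&amp;quot; choices are expressed with Selective dynamics flags, one T/F per coordinate and per atom.  A hypothetical fragment for scheme (2'), with made-up positions:&lt;br /&gt;

```
Selective dynamics
Cartesian
  0.00  0.00  5.00   F F T   ! molecule atom: Z relaxes, XY fixed
  0.00  0.00  3.50   F F T   ! layer 1 atom:  Z relaxes, XY fixed
  1.92  1.92  2.00   T T T   ! layer 2 atom:  fully relaxed
  0.00  0.00  0.50   F F F   ! deeper, fixed layers
```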
&lt;br /&gt;
Repeat as necessary. This scheme does appear to work quite well for big unit cells. It can be very difficult to relax as many layers as necessary in a big unit cell.&lt;br /&gt;
&lt;br /&gt;
Experience on the One Per Corner Hole problem shows that it may be necessary to have a large number of UNRELAXED (i.e. BULK silicon) layers underneath the relaxed layers in order to get physically meaningful answers. This is because silicon is so elastic.&lt;br /&gt;
&lt;br /&gt;
===== Problems and solutions: =====&lt;br /&gt;
&lt;br /&gt;
If you are getting ZBRENT errors, try changing ALGO: we usually use ALGO = Fast; change to ALGO = Normal. With ALGO = Normal, NFREE now DOES correspond to degrees of freedom (the maximum suggested setting is 20). We haven't found this terribly helpful.&lt;br /&gt;
&lt;br /&gt;
Many calculations seem to fail after 20 or 30 ionic steps. I suspect a memory leak.&lt;br /&gt;
&lt;br /&gt;
Sometimes the calculation appears to lose WAVECAR... this is not a disaster; it just means a slight increase in start time as the first wavefunction is calculated.&lt;br /&gt;
&lt;br /&gt;
If a calculation does not finish nicely, you can force a WAVECAR generation by doing a purely electronic calculation (these are pretty fast).&lt;br /&gt;
&lt;br /&gt;
VASP is VERY slow at relaxing molecules at surfaces. This is because it doesn't know a molecule is a connected entity. It treats every atom independently. &lt;br /&gt;
&lt;br /&gt;
THEREFORE, MUCH MUCH faster to try molecular positions by hand first. &lt;br /&gt;
Do some sample calculations at a few geometries to find a good starting point.&lt;br /&gt;
&lt;br /&gt;
ALSO, once you think you know where the molecule is to be placed, put it too close to the surface, and let it relax outwards... the forces close to the surface are repulsive, and much steeper, so relaxation is FASTER in this direction.&lt;br /&gt;
&lt;br /&gt;
=='''Climate Modelling'''==&lt;br /&gt;
&lt;br /&gt;
The Community Earth System Model (CESM) is a fully-coupled, global climate model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate states.&lt;br /&gt;
&lt;br /&gt;
Development of a comprehensive CESM that accurately represents the principal components of the climate system and their couplings requires both wide intellectual participation and computing capabilities beyond those available to most U.S. institutions. The CESM, therefore, must include an improved framework for coupling existing and future component models developed at multiple institutions, to permit rapid exploration of alternate formulations. This framework must be amenable to components of varying complexity and at varying resolutions, in accordance with a balance of scientific needs and resource demands. In particular, the CESM must accommodate an active program of simulations and evaluations, using an evolving model to address scientific issues and problems of national and international policy interest.&lt;br /&gt;
&lt;br /&gt;
User guides and information on each version of the model can be found at the following links:&lt;br /&gt;
&lt;br /&gt;
CCSM3: http://www.cesm.ucar.edu/models/ccsm3.0/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/&lt;br /&gt;
&lt;br /&gt;
Please see:&lt;br /&gt;
&lt;br /&gt;
===[[Installing CCSM3]]===&lt;br /&gt;
&lt;br /&gt;
===[[Installing CCSM4]]===&lt;br /&gt;
&lt;br /&gt;
===[[Running CCSM4]]===&lt;br /&gt;
&lt;br /&gt;
===[[Post Processing CCSM Output]]===&lt;br /&gt;
&lt;br /&gt;
===[[CCSM4/CESM1 TCS Simulation List]]===&lt;br /&gt;
&lt;br /&gt;
==Medicine/Bio==&lt;br /&gt;
&lt;br /&gt;
==High Energy Physics==&lt;br /&gt;
&lt;br /&gt;
==Structural Biology==&lt;br /&gt;
Molecular simulation of proteins, lipids, carbohydrates, and other biologically relevant molecules.&lt;br /&gt;
===Molecular Dynamics (MD) simulation===&lt;br /&gt;
====GROMACS====&lt;br /&gt;
Please refer to the [[gromacs|GROMACS]] page&lt;br /&gt;
====AMBER====&lt;br /&gt;
Please refer to the [[amber|AMBER]] page&lt;br /&gt;
====NAMD====&lt;br /&gt;
NAMD is one of the better-scaling MD packages available. With sufficiently large systems, it can scale to hundreds or thousands of cores on SciNet. Below are details for compiling and running NAMD on SciNet.&lt;br /&gt;
&lt;br /&gt;
More information regarding performance and different compile options coming soon...&lt;br /&gt;
&lt;br /&gt;
=====Compiling NAMD for GPC=====&lt;br /&gt;
Ensure the proper compiler/mpi modules are loaded.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
module load intel&lt;br /&gt;
module load openmpi/1.3.3-intel-v11.0-ofed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Compile Charm++ and NAMD'''&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
#Unpack source files and get required support libraries&lt;br /&gt;
tar -xzf NAMD_2.7b1_Source.tar.gz&lt;br /&gt;
cd NAMD_2.7b1_Source&lt;br /&gt;
tar -xf charm-6.1.tar&lt;br /&gt;
wget http://www.ks.uiuc.edu/Research/namd/libraries/fftw-linux-x86_64.tar.gz&lt;br /&gt;
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl-linux-x86_64.tar.gz&lt;br /&gt;
tar -xzf fftw-linux-x86_64.tar.gz; mv linux-x86_64 fftw&lt;br /&gt;
tar -xzf tcl-linux-x86_64.tar.gz; mv linux-x86_64 tcl&lt;br /&gt;
#Compile Charm++&lt;br /&gt;
cd charm-6.1&lt;br /&gt;
./build charm++ mpi-linux-x86_64 icc --basedir /scinet/gpc/mpi/openmpi/1.3.3-intel-v11.0-ofed/ --no-shared -O -DCMK_OPTIMIZE=1&lt;br /&gt;
cd ..&lt;br /&gt;
#Compile NAMD. &lt;br /&gt;
#Edit arch/Linux-x86_64-icc.arch and add &amp;quot;-lmpi&amp;quot; to the end of the CXXOPTS and COPTS line.&lt;br /&gt;
#Make a builds directory if you want different versions of NAMD compiled at the same time.&lt;br /&gt;
mkdir builds&lt;br /&gt;
./config builds/Linux-x86_64-icc --charm-arch mpi-linux-x86_64-icc&lt;br /&gt;
cd builds/Linux-x86_64-icc/&lt;br /&gt;
make -j4 namd2 # Adjust value of j as desired to specify number of simultaneous make targets. &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
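Once the build finishes, the resulting binary can be launched through mpirun. This is only a sketch: the rank count, input file, and the idea of launching directly (rather than through a batch script) are placeholders, not SciNet-specific instructions.&lt;br /&gt;

```shell
# Hypothetical launch line for the namd2 binary built above.
# 8 ranks and the apoa1.namd input are placeholder values.
NAMD_BIN=builds/Linux-x86_64-icc/namd2
CMD="mpirun -np 8 $NAMD_BIN apoa1.namd"
echo "$CMD"
```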
--[[User:Cmadill|Cmadill]] 16:18, 27 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
=====Running Fortran=====&lt;br /&gt;
On the development nodes there is an old gcc whose associated libraries are not present on the compute nodes. Ensure that the line:&lt;br /&gt;
&lt;br /&gt;
module load gcc&lt;br /&gt;
&lt;br /&gt;
is in your .bashrc file.&lt;br /&gt;
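A minimal sketch of how to add that line idempotently (the guard is our own addition, not SciNet documentation; it simply avoids duplicate entries if the snippet is run more than once):&lt;br /&gt;

```shell
# Append "module load gcc" to the shell startup file only if it is
# not already present, so repeated runs do not duplicate the line.
RCFILE="${RCFILE:-$HOME/.bashrc}"
grep -qxF 'module load gcc' "$RCFILE" 2>/dev/null || echo 'module load gcc' >> "$RCFILE"
```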
&lt;br /&gt;
====LAMMPS====&lt;br /&gt;
[[Image:StrongScalingLAMMPS.png|thumb|320px|right|Strong scaling test on GPC with OpenMPI and IntelMPI on Ethernet and InfiniBand]]&lt;br /&gt;
[[Image:WeakScalingLAMMPS.png|thumb|320px|right|Weak scaling test on GPC with OpenMPI and IntelMPI on Ethernet and InfiniBand]]&lt;br /&gt;
LAMMPS is a parallel MD code that can be found [http://lammps.sandia.gov/ here].&lt;br /&gt;
&lt;br /&gt;
'''Scaling Tests on GPC'''&lt;br /&gt;
&lt;br /&gt;
Results from strong scaling tests for LAMMPS using EAM potentials on GPC are shown in the graph on the right.  The test simulation ran 500 timesteps with 4,000,000 atoms.&lt;br /&gt;
&lt;br /&gt;
Results from weak scaling tests for LAMMPS using EAM potentials on GPC are shown in the graph on the right.  The test simulation ran 500 timesteps with 32,000 atoms per processor.&lt;br /&gt;
&lt;br /&gt;
OpenMPI version used: openmpi/1.4.1-intel-v11.0-ofed&lt;br /&gt;
&lt;br /&gt;
IntelMPI version used: intelmpi/impi-4.0.0.013&lt;br /&gt;
&lt;br /&gt;
LAMMPS version used: 15 Jan 2010&lt;br /&gt;
&lt;br /&gt;
'''Summary of Scaling Tests'''&lt;br /&gt;
&lt;br /&gt;
Results show good scaling for both OpenMPI and IntelMPI on Ethernet up to 16 processors, after which performance begins to suffer.  On InfiniBand, excellent scaling is maintained up to 512 processors.&lt;br /&gt;
&lt;br /&gt;
IntelMPI shows slightly better performance than OpenMPI when running over InfiniBand.&lt;br /&gt;
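As an illustration of how scaling quality can be quantified, parallel efficiency can be estimated from two wall-clock timings. The timings below are made-up placeholder numbers, not measured LAMMPS results:&lt;br /&gt;

```shell
# Toy parallel-efficiency estimate from two (made-up) wall-clock times.
# efficiency = (T_small / T_large) * (P_small / P_large), in percent.
T16=820     # placeholder seconds on 16 cores
T512=30     # placeholder seconds on 512 cores
P16=16
P512=512
SPEEDUP=$(( T16 * 100 / T512 ))   # speedup x100 relative to 16 cores
EFF=$(( SPEEDUP * P16 / P512 ))   # parallel efficiency in percent
echo "efficiency at ${P512} cores: ${EFF}%"
```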
&lt;br /&gt;
--[[User:jchu|jchu]] 14:08 Feb 2, 2010&lt;br /&gt;
&lt;br /&gt;
===Monte Carlo (MC) simulation===&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2488</id>
		<title>CCSM4/CESM1 TCS Simulation List</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2488"/>
		<updated>2011-01-03T21:23:20Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page will be used to record information about CESM1/CCSM4 simulations that are being conducted on the TCS system. This will help to prevent the duplication of simulations (e.g. control runs)&lt;br /&gt;
&lt;br /&gt;
'''List of Simulations:'''&lt;br /&gt;
&lt;br /&gt;
 Simulation: cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
 Current Output Data Location: /scratch/guido/archive/cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
 Length: approx 450 years&lt;br /&gt;
 Description: This is a CESM1 1850 control run using component set B (Fully Coupled atm/lnd/ocn/ice).&lt;br /&gt;
  This is a low resolution simulation (T31 gx3v7, approximately 4 deg atm, 3 deg ocn).&lt;br /&gt;
  The simulation has the interactive carbon cycle turned on.&lt;br /&gt;
 Node Usage: 1&lt;br /&gt;
 User: --[[User:Guido|Guido]] 16:23, 3 January 2011 (EST)&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2487</id>
		<title>CCSM4/CESM1 TCS Simulation List</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2487"/>
		<updated>2011-01-03T21:22:18Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page will be used to record information about CESM1/CCSM4 simulations that are being conducted on the TCS system. This will help to prevent the duplication of simulations (e.g. control runs)&lt;br /&gt;
&lt;br /&gt;
 --[[User:Guido|Guido]] 16:16, 3 January 2011 (EST)&lt;br /&gt;
 Simulation: cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
 Current Output Data Location: /scratch/guido/archive/cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
 Length: approx 450 years&lt;br /&gt;
 Description: This is a CESM1 1850 control run using component set B (Fully Coupled atm/lnd/ocn/ice).&lt;br /&gt;
  This is a low resolution simulation (T31 gx3v7, approximately 4 deg atm, 3 deg ocn).&lt;br /&gt;
  The simulation has the interactive carbon cycle turned on.&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2486</id>
		<title>CCSM4/CESM1 TCS Simulation List</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2486"/>
		<updated>2011-01-03T21:21:28Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page will be used to record information about CESM1/CCSM4 simulations that are being conducted on the TCS system. This will help in preventing the duplication of simulations (like control runs)&lt;br /&gt;
&lt;br /&gt;
 --[[User:Guido|Guido]] 16:16, 3 January 2011 (EST)&lt;br /&gt;
 Simulation: cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
 Current Output Data Location: /scratch/guido/archive/cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
 Length: approx 450 years&lt;br /&gt;
 Description: This is a CESM1 1850 control run using component set B (Fully Coupled atm/lnd/ocn/ice).&lt;br /&gt;
  This is a low resolution simulation (T31 gx3v7, approximately 4 deg atm, 3 deg ocn).&lt;br /&gt;
  The simulation has the interactive carbon cycle turned on.&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2485</id>
		<title>CCSM4/CESM1 TCS Simulation List</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2485"/>
		<updated>2011-01-03T21:20:01Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page will be used to record information about CESM1/CCSM4 simulations that are being conducted on the TCS system. This will help in preventing the duplication of simulations (like control runs)&lt;br /&gt;
&lt;br /&gt;
 --[[User:Guido|Guido]] 16:16, 3 January 2011 (EST)&lt;br /&gt;
 Simulation: cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
 Current Output Data Location: /scratch/guido/archive/cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
 Length: approx 450 years&lt;br /&gt;
 Description: This is a CESM1 1850 control run using component set B.&lt;br /&gt;
  This is a low resolution simulation (T31 gx3v7, approximately 4 deg atm, 3 deg ocn).&lt;br /&gt;
  The simulation has the interactive carbon cycle turned on.&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2484</id>
		<title>CCSM4/CESM1 TCS Simulation List</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2484"/>
		<updated>2011-01-03T21:19:19Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page will be used to record information about CESM1/CCSM4 simulations that are being conducted on the TCS system. This will help in preventing the duplication of simulations (like control runs)&lt;br /&gt;
&lt;br /&gt;
 --[[User:Guido|Guido]] 16:16, 3 January 2011 (EST)&lt;br /&gt;
 Simulation: cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
 Current Output Data Location: /scratch/guido/archive/cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
 Length: approx 450 years&lt;br /&gt;
 Description: This is a CESM1 1850 control run using component set B. This is a low resolution simulation (T31 gx3v7, approximately 4 deg atm, 3 deg ocn). The simulation has the interactive carbon cycle turned on.&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2483</id>
		<title>CCSM4/CESM1 TCS Simulation List</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2483"/>
		<updated>2011-01-03T21:18:51Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page will be used to record information about CESM1/CCSM4 simulations that are being conducted on the TCS system. This will help in preventing the duplication of simulations (like control runs)&lt;br /&gt;
&lt;br /&gt;
--[[User:Guido|Guido]] 16:16, 3 January 2011 (EST)&lt;br /&gt;
&lt;br /&gt;
 Simulation: cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
&lt;br /&gt;
 Current Output Data Location: /scratch/guido/archive/cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
&lt;br /&gt;
 Length: approx 450 years&lt;br /&gt;
&lt;br /&gt;
 Description: This is a CESM1 1850 control run using component set B. This is a low resolution simulation (T31 gx3v7, approximately 4 deg atm, 3 deg ocn). The simulation has the interactive carbon cycle turned on.&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2482</id>
		<title>CCSM4/CESM1 TCS Simulation List</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2482"/>
		<updated>2011-01-03T21:17:19Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page will be used to record information about CESM1/CCSM4 simulations that are being conducted on the TCS system. This will help in preventing the duplication of simulations (like control runs)&lt;br /&gt;
&lt;br /&gt;
--[[User:Guido|Guido]] 16:16, 3 January 2011 (EST)&lt;br /&gt;
&lt;br /&gt;
Simulation: cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
&lt;br /&gt;
Output Data Location: /scratch/guido/archive/cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
&lt;br /&gt;
Length: approx 450 years&lt;br /&gt;
&lt;br /&gt;
Description: This is a CESM1 1850 control run (T31 gx3v7, approximately 4 deg atm, 3 deg ocn). The simulation has the interactive carbon cycle turned on.&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2481</id>
		<title>CCSM4/CESM1 TCS Simulation List</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2481"/>
		<updated>2011-01-03T21:17:03Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page will be used to record information about CESM1/CCSM4 simulations that are being conducted on the TCS system. This will help in preventing the duplication of simulations (like control runs)&lt;br /&gt;
&lt;br /&gt;
--[[User:Guido|Guido]] 16:16, 3 January 2011 (EST)&lt;br /&gt;
&lt;br /&gt;
Simulation: cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
Output Data Location: /scratch/guido/archive/cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
Length: approx 450 years&lt;br /&gt;
Description: This is a CESM1 1850 control run (T31 gx3v7, approximately 4 deg atm, 3 deg ocn). The simulation has the interactive carbon cycle turned on.&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2480</id>
		<title>CCSM4/CESM1 TCS Simulation List</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2480"/>
		<updated>2011-01-03T21:16:42Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page will be used to record information about CESM1/CCSM4 simulations that are being conducted on the TCS system. This will help in preventing the duplication of simulations (like control runs)&lt;br /&gt;
&lt;br /&gt;
--[[User:Guido|Guido]] 16:16, 3 January 2011 (EST)&lt;br /&gt;
Simulation: cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
Output Data Location: /scratch/guido/archive/cesm1_comp-B_1850_CN_res-T31_g37&lt;br /&gt;
Length: approx 450 years&lt;br /&gt;
Description: This is a CESM1 1850 control run (T31 gx3v7, approximately 4 deg atm, 3 deg ocn). The simulation has the interactive carbon cycle turned on.&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=User_Codes&amp;diff=2479</id>
		<title>User Codes</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=User_Codes&amp;diff=2479"/>
		<updated>2011-01-03T21:09:33Z</updated>

		<summary type="html">&lt;p&gt;Guido: /* Installing CCSM4 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
==Astrophysics==&lt;br /&gt;
&lt;br /&gt;
===Athena (explicit, uniform grid MHD code)===&lt;br /&gt;
&lt;br /&gt;
[[Image:StrongScalingAthenaGPC.png|thumb|right|320px|Athena scaling on GPC with OpenMPI and MVAPICH2 on GigE, and OpenMPI on InfiniBand]]&lt;br /&gt;
&lt;br /&gt;
[http://www.astro.princeton.edu/~jstone/athena.html Athena] is a straightforward C code with few library dependencies, so it is easy to build and compile on new machines.&lt;br /&gt;
&lt;br /&gt;
It encapsulates its compiler flags, etc., in a &amp;lt;tt&amp;gt;Makeoptions.in&amp;lt;/tt&amp;gt; file which is then processed by &amp;lt;tt&amp;gt;configure&amp;lt;/tt&amp;gt;.   I've used the following additions to &amp;lt;tt&amp;gt;Makeoptions.in&amp;lt;/tt&amp;gt; on TCS and GPC:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
ifeq ($(MACHINE),scinettcs)&lt;br /&gt;
  CC = mpcc_r&lt;br /&gt;
  LDR = mpcc_r&lt;br /&gt;
  OPT = -O5 -q64 -qarch=pwr6 -qtune=pwr6 -qcache=auto -qlargepage -qstrict&lt;br /&gt;
  MPIINC =&lt;br /&gt;
  MPILIB =&lt;br /&gt;
  CFLAGS = $(OPT)&lt;br /&gt;
  LIB = -ldl -lm&lt;br /&gt;
else&lt;br /&gt;
ifeq ($(MACHINE),scinetgpc)&lt;br /&gt;
  CC = mpicc&lt;br /&gt;
  LDR = mpicc&lt;br /&gt;
  OPT = -O3&lt;br /&gt;
  MPIINC =&lt;br /&gt;
  MPILIB =&lt;br /&gt;
  CFLAGS = $(OPT)&lt;br /&gt;
  LIB = -lm&lt;br /&gt;
else&lt;br /&gt;
...&lt;br /&gt;
endif&lt;br /&gt;
endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
It performs quite well on the GPC, scaling extremely well even on a strong scaling test out to about 256 cores (32 nodes) on Gigabit Ethernet, and performing beautifully on InfiniBand out to 512 cores (64 nodes). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Ljdursi|ljdursi]]  19:20, 13 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
===FLASH3 (Adaptive Mesh reactive hydrodynamics; explict hydro/MHD)===&lt;br /&gt;
&lt;br /&gt;
[[Image:weak-scaling-example.png|thumb|right|320px|Weak scaling test of the 2d Sod problem on both the GPC and TCS.  The results are actually somewhat faster on the GPC; in both cases (weak) scaling is very good out to at least 256 cores]]&lt;br /&gt;
&lt;br /&gt;
[http://flash.uchicago.edu FLASH] encapsulates its machine-dependent information in the &amp;lt;tt&amp;gt;FLASH3/sites&amp;lt;/tt&amp;gt; directory.  For the GPC, you'll have to&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load intel&lt;br /&gt;
module load openmpi&lt;br /&gt;
module load hdf5/184-p1-v16-openmpi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and with that, the following file (&amp;lt;tt&amp;gt;sites/scinetgpc/Makefile.h&amp;lt;/tt&amp;gt;) works for me:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
## Must do module load hdf5/183-v16-openmpi&lt;br /&gt;
HDF5_PATH = ${SCINET_HDF5_BASE}&lt;br /&gt;
ZLIB_PATH = /usr/local&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Compiler and linker commands&lt;br /&gt;
#&lt;br /&gt;
#  We use the f90 compiler as the linker, so some C libraries may explicitly&lt;br /&gt;
#  need to be added into the link line.&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
## modules will put the right mpi in our path&lt;br /&gt;
FCOMP   = mpif77&lt;br /&gt;
CCOMP   = mpicc&lt;br /&gt;
CPPCOMP = mpiCC&lt;br /&gt;
LINK    = mpif77&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Compilation flags&lt;br /&gt;
#&lt;br /&gt;
#  Three sets of compilation/linking flags are defined: one for optimized&lt;br /&gt;
#  code, one for testing, and one for debugging.  The default is to use the &lt;br /&gt;
#  _OPT version.  Specifying -debug to setup will pick the _DEBUG version,&lt;br /&gt;
#  these should enable bounds checking.  Specifying -test is used for &lt;br /&gt;
#  flash_test, and is set for quick code generation, and (sometimes) &lt;br /&gt;
#  profiling.  The Makefile generated by setup will assign the generic token &lt;br /&gt;
#  (ex. FFLAGS) to the proper set of flags (ex. FFLAGS_OPT).&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
FFLAGS_OPT   =  -c -r8 -i4 -O3 -xSSE4.2&lt;br /&gt;
FFLAGS_DEBUG =  -c -g -r8 -i4 -O0&lt;br /&gt;
FFLAGS_TEST  =  -c -r8 -i4&lt;br /&gt;
&lt;br /&gt;
LIB_HDF5 = -L${HDF5_PATH}/lib -lhdf5 -L${SCINET_ZLIB_LIB} -lz -lgpfs&lt;br /&gt;
&lt;br /&gt;
# if we are using HDF5, we need to specify the path to the include files&lt;br /&gt;
CFLAGS_HDF5  = -I${HDF5_PATH}/include&lt;br /&gt;
&lt;br /&gt;
CFLAGS_OPT   = -c -O3 -xSSE4.2&lt;br /&gt;
CFLAGS_TEST  = -c -O2 &lt;br /&gt;
CFLAGS_DEBUG = -c -g  &lt;br /&gt;
&lt;br /&gt;
MDEFS = &lt;br /&gt;
&lt;br /&gt;
.SUFFIXES: .o .c .f .F .h .fh .F90 .f90&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Linker flags&lt;br /&gt;
#&lt;br /&gt;
#  There is a seperate version of the linker flags for each of the _OPT, &lt;br /&gt;
#  _DEBUG, and _TEST cases.&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
LFLAGS_OPT   = -o&lt;br /&gt;
LFLAGS_TEST  = -o&lt;br /&gt;
LFLAGS_DEBUG = -g -o&lt;br /&gt;
&lt;br /&gt;
MACHOBJ = &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MV = mv -f&lt;br /&gt;
AR = ar -r&lt;br /&gt;
RM = rm -f&lt;br /&gt;
CD = cd&lt;br /&gt;
RL = ranlib&lt;br /&gt;
ECHO = echo&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Ljdursi|ljdursi]] 22:11, 13 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Aeronautics==&lt;br /&gt;
&lt;br /&gt;
==Chemistry==&lt;br /&gt;
&lt;br /&gt;
===CPMD===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[Cpmd | CPMD]] page.&lt;br /&gt;
&lt;br /&gt;
===NWChem===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[Nwchem | NWChem]] page.&lt;br /&gt;
&lt;br /&gt;
===GAMESS (US)===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[gamess|GAMESS (US)]] page.&lt;br /&gt;
&lt;br /&gt;
User supplied content below.&lt;br /&gt;
&lt;br /&gt;
====Tips from the Fekl Lab====&lt;br /&gt;
&lt;br /&gt;
Through trial and error, we have found a few useful things that we would like to share:&lt;br /&gt;
&lt;br /&gt;
1. Two very useful, open-source programs for visualizing GAMESS(US) output files and for generating input files are [http://www.scl.ameslab.gov/MacMolPlt/ MacMolPlt] and [http://avogadro.openmolecules.net/wiki/Main_Page Avogadro].  They are available for UNIX/Linux, Windows and Mac based machines, HOWEVER:  any input files that we have generated with these programs on a Windows-based machine do not run on Mac-based machines.  We don't know why.&lt;br /&gt;
&lt;br /&gt;
2. [http://winscp.net/eng/index.php WinSCP] is a very useful tool that has a graphical user interface for moving files from a local machine to SCINET and vice versa.  It also has text editing capabilities.&lt;br /&gt;
&lt;br /&gt;
3. The [https://bse.pnl.gov/bse/portal EMSL Basis Set Exchange] is an excellent source for custom basis set or effective core potential parameters.  Make sure that you specify &amp;quot;Gamess-US&amp;quot; in the format drop-down box.&lt;br /&gt;
&lt;br /&gt;
4.  The commercial program [http://www.chemcraftprog.com/ ChemCraft] is a highly useful visualization program that has the ability to edit molecules in a very similar fashion to GaussView.  It can also be customized to build GAMESS(US) input files.&lt;br /&gt;
&lt;br /&gt;
====Anatomy of a GAMESS(US) Input File with Basis Set Info in an External File====&lt;br /&gt;
&lt;br /&gt;
  $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=M06-L MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
   ECP=READ $END&lt;br /&gt;
  $SYSTEM TIMLIM=525600 MWORDS=1750 PARALL=.TRUE. $END&lt;br /&gt;
  $BASIS GBASIS=CUSTOMNI EXTFIL=.t. $END&lt;br /&gt;
  $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
  $STATPT OPTTOL=0.0001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
  $DATA&lt;br /&gt;
  Mo_BDT3&lt;br /&gt;
 C1&lt;br /&gt;
 MOLYBDENUM 42.0      5.7556500000      4.4039600000     16.5808400000&lt;br /&gt;
 SULFUR     16.0      7.4169700000      3.1956300000     15.2089300000&lt;br /&gt;
 SULFUR     16.0      4.0966800000      3.2258300000     15.1761100000&lt;br /&gt;
 SULFUR     16.0      3.9677300000      4.4940500000     18.3266100000&lt;br /&gt;
 SULFUR     16.0      7.1776900000      3.5815000000     18.4485200000&lt;br /&gt;
 SULFUR     16.0      4.3776600000      6.2447400000     15.6786900000&lt;br /&gt;
 SULFUR     16.0      7.5478700000      6.0679800000     16.2223700000&lt;br /&gt;
 CARBON      6.0      6.4716900000      2.1004800000     14.1902300000&lt;br /&gt;
 CARBON      6.0      5.0690300000      2.1781400000     14.1080700000&lt;br /&gt;
 CARBON      6.0      4.8421800000      4.2701300000     19.8855500000&lt;br /&gt;
 CARBON      6.0      6.1969000000      3.9249600000     19.9397400000&lt;br /&gt;
 CARBON      6.0      6.8280600000      3.7834200000     21.1913200000&lt;br /&gt;
 CARBON      6.0      5.7697600000      7.6933500000     17.4241800000&lt;br /&gt;
 CARBON      6.0      7.2043100000      7.9413600000     17.8281100000&lt;br /&gt;
 CARBON      6.0      5.5051400000      7.0409700000     14.5903800000&lt;br /&gt;
 CARBON      6.0      6.8905200000      6.9194700000     14.7626200000&lt;br /&gt;
 CARBON      6.0      7.7396400000      7.5379800000     13.8285700000&lt;br /&gt;
 HYDROGEN    1.0      8.8190700000      7.4520600000     13.9252200000&lt;br /&gt;
 CARBON      6.0      7.2169400000      8.2960300000     12.7704100000&lt;br /&gt;
 HYDROGEN    1.0      7.8667000000      8.7825100000     12.0575600000&lt;br /&gt;
 CARBON      6.0      5.8260300000      8.4502300000     12.6467800000&lt;br /&gt;
 HYDROGEN    1.0      5.4143000000      9.0544300000     11.8493100000&lt;br /&gt;
 CARBON      6.0      4.9881500000      7.8192300000     13.5528400000&lt;br /&gt;
 HYDROGEN    1.0      3.9090500000      7.9420000000     13.4583700000&lt;br /&gt;
 CARBON      6.0      7.1538500000      1.1569600000     13.4143900000&lt;br /&gt;
 CARBON      6.0      4.4018100000      1.3603900000     13.1919900000&lt;br /&gt;
 CARBON      6.0      6.4791600000      0.3185500000     12.5353300000&lt;br /&gt;
 CARBON      6.0      5.0837400000      0.4369500000     12.4084900000&lt;br /&gt;
 HYDROGEN    1.0      7.0116000000     -0.4099400000     11.9434600000&lt;br /&gt;
 HYDROGEN    1.0      8.2399000000      1.0702400000     13.4937600000&lt;br /&gt;
 HYDROGEN    1.0      3.3185600000      1.4368700000     13.0953100000&lt;br /&gt;
 HYDROGEN    1.0      4.5549800000     -0.1997300000     11.7165200000&lt;br /&gt;
 CARBON      6.0      6.1105700000      3.9639000000     22.3866100000&lt;br /&gt;
 CARBON      6.0      4.1216300000      4.4424400000     21.1020100000&lt;br /&gt;
 HYDROGEN    1.0      7.8732900000      3.5217100000     21.2520500000&lt;br /&gt;
 CARBON      6.0      4.7606000000      4.2868500000     22.3363800000&lt;br /&gt;
 HYDROGEN    1.0      6.6064200000      3.8406000000     23.3428500000&lt;br /&gt;
 HYDROGEN    1.0      4.2065000000      4.4170700000     23.2667100000&lt;br /&gt;
 HYDROGEN    1.0      3.0674000000      4.6893500000     21.0889000000&lt;br /&gt;
 HYDROGEN    1.0      7.4249200000      7.7545300000     18.8583200000&lt;br /&gt;
 HYDROGEN    1.0      7.6651700000      8.9049700000     17.7652100000&lt;br /&gt;
 HYDROGEN    1.0      5.3324000000      8.6487800000     17.2222700000&lt;br /&gt;
 HYDROGEN    1.0      5.5015000000      7.1039000000     18.2759400000&lt;br /&gt;
  $END&lt;br /&gt;
  $ECP&lt;br /&gt;
 MO-ECP GEN     28     3&lt;br /&gt;
  5      ----- f potential     -----&lt;br /&gt;
     -0.0469492        0    537.9667807        &lt;br /&gt;
    -20.2080084        1    147.8982938        &lt;br /&gt;
   -106.2116302        2     45.7358898        &lt;br /&gt;
    -41.8107368        2     13.2911467        &lt;br /&gt;
     -4.2054103        2      4.7059961        &lt;br /&gt;
  3      ----- s-f potential     -----&lt;br /&gt;
      2.8063717        0    110.2991760        &lt;br /&gt;
     44.5162012        1     23.2014645        &lt;br /&gt;
     82.7785227        2      5.3530131        &lt;br /&gt;
  4      ----- p-f potential     -----&lt;br /&gt;
      4.9420876        0     63.2901397        &lt;br /&gt;
     25.8604976        1     23.3315302        &lt;br /&gt;
    132.4708742        2     24.6759423        &lt;br /&gt;
     57.3149794        2      4.6493040        &lt;br /&gt;
  5      ----- d-f potential     -----&lt;br /&gt;
      3.0054591        0    104.4839977        &lt;br /&gt;
     26.3637851        1     66.2307245        &lt;br /&gt;
    183.3849199        2     39.1283176        &lt;br /&gt;
     98.4453068        2     13.1164437        &lt;br /&gt;
     22.4901377        2      3.6280263 &lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
=====The Input Deck=====&lt;br /&gt;
&lt;br /&gt;
Below is the input deck.  It is where you tell GAMESS(US) what job type to execute and where all of your individual parameters are entered for your specific job type.  The example input deck below is for a geometry optimization and frequency calculation.  This input deck is equivalent to a Gaussian job with &amp;quot;opt&amp;quot; and &amp;quot;freq&amp;quot; in the route section.&lt;br /&gt;
&lt;br /&gt;
  $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=M06-L MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
   ECP=READ $END&lt;br /&gt;
  $SYSTEM TIMLIM=2850 MWORDS=1750 MEMDDI=20 PARALL=.TRUE. $END&lt;br /&gt;
  $BASIS GBASIS=CUSTOMNI EXTFIL=.t. $END&lt;br /&gt;
  $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
  $STATPT OPTTOL=0.0001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
  $DATA&lt;br /&gt;
&lt;br /&gt;
An important thing to note is the spacing.  In the input deck, there must be one space at the beginning of each line; if there is not, the job will fail.  Most builders insert this space anyway, but it helps to double-check.&lt;br /&gt;
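&lt;br /&gt;
A quick way to catch a group line that starts in column one, before submitting, is to scan the deck with awk. This is only a sketch; the file deck.inp is a stand-in created here so the command can be shown end to end.&lt;br /&gt;

```shell
# Sketch: flag $-group lines that begin in column 1; GAMESS(US) expects
# a leading space on every line of the input deck.
# "deck.inp" is a hypothetical stand-in file created here for the demo.
printf ' $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE $END\n$SYSTEM TIMLIM=2850 MWORDS=1750 $END\n $DATA\n' > deck.inp
awk '/^\$/ { printf "line %d starts in column 1: %s\n", NR, $0 }' deck.inp
```

Any line reported by the command needs a space added before its $-group keyword.&lt;br /&gt;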
&lt;br /&gt;
The end of the input deck is marked by the &amp;quot;$DATA&amp;quot; line.&lt;br /&gt;
&lt;br /&gt;
=====Job Title Line=====&lt;br /&gt;
&lt;br /&gt;
The next line of the file is the job title.  It can be anything you wish; however, we have found that, to be on the safe side, it is best to avoid symbols and spaces.&lt;br /&gt;
&lt;br /&gt;
  Mo_BDT3&lt;br /&gt;
&lt;br /&gt;
=====Symmetry Point Group=====&lt;br /&gt;
&lt;br /&gt;
The next line of the file is the symmetry point group of your molecule.  Note that there is no leading space before the point group.&lt;br /&gt;
&lt;br /&gt;
 C1&lt;br /&gt;
&lt;br /&gt;
=====Coordinates=====&lt;br /&gt;
&lt;br /&gt;
The next block of text is set aside for the coordinates of the molecule.  This can be in internal (or z-matrix) format or cartesian coordinates.  Note that there is no leading space before the coordinates.  One may use the chemical symbol or the full name of each atom in the molecule.  Note that the end of the coordinates is signified by an &amp;quot;$END&amp;quot;, which MUST have one space preceding it.  The coordinates below do NOT have any basis set information inserted.  It is possible to insert basis set information directly into the input file.  This is accomplished by obtaining the desired basis set parameters from the EMSL and then inserting them below each relevant atom.  An example input file with inserted basis set information will be shown later.&lt;br /&gt;
&lt;br /&gt;
 MOLYBDENUM 42.0      5.7556500000      4.4039600000     16.5808400000&lt;br /&gt;
 SULFUR     16.0      7.4169700000      3.1956300000     15.2089300000&lt;br /&gt;
 SULFUR     16.0      4.0966800000      3.2258300000     15.1761100000&lt;br /&gt;
 SULFUR     16.0      3.9677300000      4.4940500000     18.3266100000&lt;br /&gt;
 SULFUR     16.0      7.1776900000      3.5815000000     18.4485200000&lt;br /&gt;
 SULFUR     16.0      4.3776600000      6.2447400000     15.6786900000&lt;br /&gt;
 SULFUR     16.0      7.5478700000      6.0679800000     16.2223700000&lt;br /&gt;
 CARBON      6.0      6.4716900000      2.1004800000     14.1902300000&lt;br /&gt;
 CARBON      6.0      5.0690300000      2.1781400000     14.1080700000&lt;br /&gt;
 CARBON      6.0      4.8421800000      4.2701300000     19.8855500000&lt;br /&gt;
 CARBON      6.0      6.1969000000      3.9249600000     19.9397400000&lt;br /&gt;
 CARBON      6.0      6.8280600000      3.7834200000     21.1913200000&lt;br /&gt;
 CARBON      6.0      5.7697600000      7.6933500000     17.4241800000&lt;br /&gt;
 CARBON      6.0      7.2043100000      7.9413600000     17.8281100000&lt;br /&gt;
 CARBON      6.0      5.5051400000      7.0409700000     14.5903800000&lt;br /&gt;
 CARBON      6.0      6.8905200000      6.9194700000     14.7626200000&lt;br /&gt;
 CARBON      6.0      7.7396400000      7.5379800000     13.8285700000&lt;br /&gt;
 HYDROGEN    1.0      8.8190700000      7.4520600000     13.9252200000&lt;br /&gt;
 CARBON      6.0      7.2169400000      8.2960300000     12.7704100000&lt;br /&gt;
 HYDROGEN    1.0      7.8667000000      8.7825100000     12.0575600000&lt;br /&gt;
 CARBON      6.0      5.8260300000      8.4502300000     12.6467800000&lt;br /&gt;
 HYDROGEN    1.0      5.4143000000      9.0544300000     11.8493100000&lt;br /&gt;
 CARBON      6.0      4.9881500000      7.8192300000     13.5528400000&lt;br /&gt;
 HYDROGEN    1.0      3.9090500000      7.9420000000     13.4583700000&lt;br /&gt;
 CARBON      6.0      7.1538500000      1.1569600000     13.4143900000&lt;br /&gt;
 CARBON      6.0      4.4018100000      1.3603900000     13.1919900000&lt;br /&gt;
 CARBON      6.0      6.4791600000      0.3185500000     12.5353300000&lt;br /&gt;
 CARBON      6.0      5.0837400000      0.4369500000     12.4084900000&lt;br /&gt;
 HYDROGEN    1.0      7.0116000000     -0.4099400000     11.9434600000&lt;br /&gt;
 HYDROGEN    1.0      8.2399000000      1.0702400000     13.4937600000&lt;br /&gt;
 HYDROGEN    1.0      3.3185600000      1.4368700000     13.0953100000&lt;br /&gt;
 HYDROGEN    1.0      4.5549800000     -0.1997300000     11.7165200000&lt;br /&gt;
 CARBON      6.0      6.1105700000      3.9639000000     22.3866100000&lt;br /&gt;
 CARBON      6.0      4.1216300000      4.4424400000     21.1020100000&lt;br /&gt;
 HYDROGEN    1.0      7.8732900000      3.5217100000     21.2520500000&lt;br /&gt;
 CARBON      6.0      4.7606000000      4.2868500000     22.3363800000&lt;br /&gt;
 HYDROGEN    1.0      6.6064200000      3.8406000000     23.3428500000&lt;br /&gt;
 HYDROGEN    1.0      4.2065000000      4.4170700000     23.2667100000&lt;br /&gt;
 HYDROGEN    1.0      3.0674000000      4.6893500000     21.0889000000&lt;br /&gt;
 HYDROGEN    1.0      7.4249200000      7.7545300000     18.8583200000&lt;br /&gt;
 HYDROGEN    1.0      7.6651700000      8.9049700000     17.7652100000&lt;br /&gt;
 HYDROGEN    1.0      5.3324000000      8.6487800000     17.2222700000&lt;br /&gt;
 HYDROGEN    1.0      5.5015000000      7.1039000000     18.2759400000&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
=====Effective Core Potential Data=====&lt;br /&gt;
&lt;br /&gt;
The effective core potential (ECP) data is entered after the coordinates.  It starts with &amp;quot;$ECP&amp;quot;, which must be preceded by a space.   The atoms of the molecule are listed in the same order as in the coordinates section, and the parameters for the ECP are listed after each atom.  Note that for any atom that does NOT have an ECP, one must enter &amp;quot;ECP-NONE&amp;quot; or &amp;quot;NONE&amp;quot; on that atom's line.&lt;br /&gt;
&lt;br /&gt;
 $ECP&lt;br /&gt;
 MO-ECP GEN     28     3&lt;br /&gt;
  5      ----- f potential     -----&lt;br /&gt;
     -0.0469492        0    537.9667807        &lt;br /&gt;
    -20.2080084        1    147.8982938        &lt;br /&gt;
   -106.2116302        2     45.7358898        &lt;br /&gt;
    -41.8107368        2     13.2911467        &lt;br /&gt;
     -4.2054103        2      4.7059961        &lt;br /&gt;
  3      ----- s-f potential     -----&lt;br /&gt;
      2.8063717        0    110.2991760        &lt;br /&gt;
     44.5162012        1     23.2014645        &lt;br /&gt;
     82.7785227        2      5.3530131        &lt;br /&gt;
  4      ----- p-f potential     -----&lt;br /&gt;
      4.9420876        0     63.2901397        &lt;br /&gt;
     25.8604976        1     23.3315302        &lt;br /&gt;
    132.4708742        2     24.6759423        &lt;br /&gt;
     57.3149794        2      4.6493040        &lt;br /&gt;
  5      ----- d-f potential     -----&lt;br /&gt;
      3.0054591        0    104.4839977        &lt;br /&gt;
     26.3637851        1     66.2307245        &lt;br /&gt;
    183.3849199        2     39.1283176        &lt;br /&gt;
     98.4453068        2     13.1164437        &lt;br /&gt;
     22.4901377        2      3.6280263 &lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
  $END&lt;br /&gt;
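&lt;br /&gt;
Since every atom in the coordinates needs a matching entry in $ECP, a quick consistency check can save a failed submission. The sketch below counts atom lines and $ECP entries with awk; the file name and the assumption that atom lines have exactly five fields are ours, so adapt it to your own input.&lt;br /&gt;

```shell
# Sketch: compare the number of atoms in $DATA against the number of
# entries in $ECP.  Assumptions: atom lines have exactly 5 fields
# (NAME charge x y z), and each $ECP entry is either an "XX-ECP ..."
# header or an "X NONE" line.  "ecp_check.inp" is a stand-in file.
{
  echo ' $DATA'
  echo 'Mo_BDT3'
  echo 'C1'
  echo 'MOLYBDENUM 42.0 5.75 4.40 16.58'
  echo 'SULFUR     16.0 7.41 3.19 15.20'
  echo 'CARBON      6.0 6.47 2.10 14.19'
  echo ' $END'
  echo ' $ECP'
  echo ' MO-ECP GEN     28     3'
  echo ' S NONE'
  echo ' C NONE'
  echo '  $END'
} > ecp_check.inp
atoms=$(awk '/^ \$END/ {d=0} d { if (NF == 5) n++ } /^ \$DATA/ {d=1} END {print n+0}' ecp_check.inp)
ecps=$(awk '/\$END/ {e=0} e { if ($0 ~ /NONE/) n++; if ($0 ~ /-ECP/) n++ } /^ \$ECP/ {e=1} END {print n+0}' ecp_check.inp)
echo "atoms=$atoms ecps=$ecps"   # the two counts should match
```

If the two counts differ, an atom is missing its NONE line (or has an extra one) in the $ECP group.&lt;br /&gt;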
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  16 November 2009&lt;br /&gt;
&lt;br /&gt;
====Using an External File to Define Basis Set in GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
Since GAMESS(US) has a limited number of built-in ECPs and basis sets, one may want to make GAMESS(US) read an external file that contains the basis set and ECP data, using the &amp;quot;EXTFIL&amp;quot; keyword in the $BASIS group of the input file.  For many metal-containing compounds, it is very convenient and time saving to use an effective core potential (ECP) for the core metal electrons, as they are usually not important to the reactivity of the complex or the geometry around the metal.  To make GAMESS(US) use this external file, one must also copy the &amp;quot;rungms&amp;quot; file and modify it accordingly.  The following is a list of instructions with commands that will work from a terminal.  One could also use WinSCP to do all of this with a GUI rather than a TUI.  &lt;br /&gt;
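&lt;br /&gt;
With the external file in place, the input deck selects it through the $BASIS group. Assuming the basis-set tag used in the file is CUSTOMMO (the example built later in this section), the group would read:&lt;br /&gt;

```
 $BASIS GBASIS=CUSTOMMO EXTFIL=.t. $END
```

The GBASIS value must match the tag written next to each element symbol in the external file.&lt;br /&gt;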
&lt;br /&gt;
=====Modifying rungms to Use a Custom Basis Set File=====&lt;br /&gt;
1. Copy &amp;quot;rungms&amp;quot; from /scinet/gpc/Applications/gamess to one's own /scratch/$USER/ directory:&lt;br /&gt;
 cp /scinet/gpc/Applications/gamess/rungms /scratch/$USER/&lt;br /&gt;
&lt;br /&gt;
2. Change to the scratch directory and check to see if &amp;quot;rungms&amp;quot; has copied successfully.&lt;br /&gt;
 cd /scratch/$USER&lt;br /&gt;
 ls&lt;br /&gt;
&lt;br /&gt;
3. Edit line 147 of the script.  &lt;br /&gt;
 vi rungms&lt;br /&gt;
Move the cursor down to line 147 using the arrow keys.  It should read &amp;quot;setenv EXTBAS /dev/null&amp;quot;.  Replace &amp;quot;/dev/null&amp;quot; with the path to your external basis file, for example /scratch/$USER/basisset, then hit &amp;quot;escape&amp;quot;.  To save the changes and exit vi, type &amp;quot;:wq&amp;quot; (it will appear at the bottom of the window) and hit enter.  Now you are done with vi.&lt;br /&gt;
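&lt;br /&gt;
If you would rather not edit interactively, the same change can be made with sed. This is a sketch that assumes the line still reads exactly setenv EXTBAS /dev/null; a stand-in copy of rungms is created here so the command can be shown end to end.&lt;br /&gt;

```shell
# Sketch: repoint EXTBAS without opening vi.  The printf creates a
# stand-in for line 147 of rungms; on the real file, keep only the sed.
printf 'setenv EXTBAS /dev/null\n' > rungms
sed -i 's|setenv EXTBAS /dev/null|setenv EXTBAS /scratch/'"$USER"'/basisset|' rungms
grep EXTBAS rungms
```

Check the result with grep (as above) before submitting any jobs.&lt;br /&gt;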
&lt;br /&gt;
=====Creating a Custom Basis Set File=====&lt;br /&gt;
1. To create a custom basis set file, you need to create a new text document.  Our group's common practice is to comment out the first line of this file with an exclamation mark (!) and then note the specific basis sets and ECPs that are going to be used for each of the atoms.  Let us use the molecule Mo(CO)6, molybdenum hexacarbonyl, as an example.  Below is the first line of the external file, which we will call &amp;quot;CUSTOMMO&amp;quot;  (NOTE:  you can use any name for the external file that suits you, as long as it has no spaces and is 8 characters or less).&lt;br /&gt;
&lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
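&lt;br /&gt;
The naming constraints above can be checked in the shell before you commit to a file name. This is only a sketch; CUSTOMMO is the example name.&lt;br /&gt;

```shell
# Sketch: check a proposed EXTFIL name against the constraints above
# (8 characters or fewer, no spaces).  CUSTOMMO is the example name.
name='CUSTOMMO'
if [ "${#name}" -le 8 ]; then
  case "$name" in
    *' '*) echo 'bad: contains a space' ;;
    *)     echo 'ok' ;;
  esac
else
  echo 'bad: longer than 8 characters'
fi
```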
&lt;br /&gt;
2. The next step is to visit the [https://bse.pnl.gov/bse/portal EMSL Basis Set exchange] and select C and O from the periodic table.  Then, on the left of the page, select &amp;quot;6-31G&amp;quot; as the basis set.  Finally, make sure the output is in GAMESS(US) format using the drop-down menu and then click &amp;quot;get basis set&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[File:C_O_6_31G_basisset.JPG|centre]]&lt;br /&gt;
&lt;br /&gt;
3. A new window should appear with text in it.  For our example case, the text looks like this:&lt;br /&gt;
 &lt;br /&gt;
 !  6-31G  EMSL  Basis Set Exchange Library   10/13/09 11:12 AM&lt;br /&gt;
 ! Elements                             References&lt;br /&gt;
 ! --------                             ----------&lt;br /&gt;
 ! H - He: W.J. Hehre, R. Ditchfield and J.A. Pople, J. Chem. Phys. 56,&lt;br /&gt;
 ! Li - Ne: 2257 (1972).  Note: Li and B come from J.D. Dill and J.A.&lt;br /&gt;
 ! Pople, J. Chem. Phys. 62, 2921 (1975).&lt;br /&gt;
 ! Na - Ar: M.M. Francl, W.J. Petro, W.J. Hehre, J.S. Binkley, M.S. Gordon,&lt;br /&gt;
 ! D.J. DeFrees and J.A. Pople, J. Chem. Phys. 77, 3654 (1982)&lt;br /&gt;
 ! K  - Zn: V. Rassolov, J.A. Pople, M. Ratner and T.L. Windus, J. Chem. Phys.&lt;br /&gt;
 ! 109, 1223 (1998)&lt;br /&gt;
 ! Note: He and Ne are unpublished basis sets taken from the Gaussian&lt;br /&gt;
 ! program&lt;br /&gt;
 ! &lt;br /&gt;
 $DATA&lt;br /&gt;
 CARBON&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000        &lt;br /&gt;
 OXYGEN&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000        &lt;br /&gt;
 $END&lt;br /&gt;
&lt;br /&gt;
4. Now, copy and paste the text between the $DATA and $END headings into our external text file, CUSTOMMO.  We also need to change the name of each element to its corresponding symbol in the periodic table.  Finally, we need to add the name of the external file next to each element symbol, separated by one space.  Note that there should be a blank line separating the basis set information from the first, commented-out line (the line starting with the '!').  The CUSTOMMO file should look like this:&lt;br /&gt;
 &lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
 &lt;br /&gt;
 C CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000        &lt;br /&gt;
 O CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000 &lt;br /&gt;
&lt;br /&gt;
5. Repeat Steps 2 and 3 above, but choose Mo and select the LANL2DZ ECP instead.  A new window will pop up with the basis set information as well as the ECP data we need, since we specified the LANL2DZ '''ECP'''.  The ECP data is not inserted into the external file; rather, it is placed into the input file itself (more on this later).  &lt;br /&gt;
&lt;br /&gt;
[[File:Mo_LANL2DZ_basisset.JPG|centre]]&lt;br /&gt;
&lt;br /&gt;
6.  After copying the molybdenum basis set information, your finished external basis set file should look like this:&lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
 &lt;br /&gt;
 C CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000        &lt;br /&gt;
 O CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000        &lt;br /&gt;
 Mo CUSTOMMO&lt;br /&gt;
 S   3&lt;br /&gt;
   1      2.3610000             -0.9121760        &lt;br /&gt;
   2      1.3090000              1.1477453        &lt;br /&gt;
   3      0.4500000              0.6097109        &lt;br /&gt;
 S   4&lt;br /&gt;
   1      2.3610000              0.8139259        &lt;br /&gt;
   2      1.3090000             -1.1360084        &lt;br /&gt;
   3      0.4500000             -1.1611592        &lt;br /&gt;
   4      0.1681000              1.0064786        &lt;br /&gt;
 S   1&lt;br /&gt;
   1      0.0423000              1.0000000        &lt;br /&gt;
 P   3&lt;br /&gt;
   1      4.8950000             -0.0908258        &lt;br /&gt;
   2      1.0440000              0.7042899        &lt;br /&gt;
   3      0.3877000              0.3973179        &lt;br /&gt;
 P   2&lt;br /&gt;
   1      0.4995000             -0.1081945        &lt;br /&gt;
   2      0.0780000              1.0368093        &lt;br /&gt;
 P   1&lt;br /&gt;
   1      0.0247000              1.0000000        &lt;br /&gt;
 D   3&lt;br /&gt;
   1      2.9930000              0.0527063        &lt;br /&gt;
   2      1.0630000              0.5003907        &lt;br /&gt;
   3      0.3721000              0.5794024        &lt;br /&gt;
 D   1&lt;br /&gt;
   1      0.1178000              1.0000000&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 September 2010&lt;br /&gt;
&lt;br /&gt;
====A Modified BASH Script for Running GAMESS(US)====&lt;br /&gt;
Below please find the bash script that we use to run GAMESS(US) on a single node with 8 processors.  &lt;br /&gt;
&lt;br /&gt;
One quirk of GAMESS(US) is that it will NOT write over files from old or failed jobs that have the same name as the input file you are submitting.  For example: my input file is named &amp;quot;mo_opt.inp&amp;quot; and I submit this job to the queue.  However, it comes back seconds later with an error.  The log file says that I have typed an incorrect keyword, and lo and behold, I have a comma where it shouldn't be.  Such typos are common.  If you simply try to re-submit, GAMESS(US) will fail again, because it has already written a .log file and some other files to the /scratch/user/gamess-scratch/ directory.  These files must all be deleted before you re-submit your fixed input file.&lt;br /&gt;
&lt;br /&gt;
This script takes care of this annoying problem by deleting failed jobs with the same file name for you.&lt;br /&gt;
&lt;br /&gt;
Here it is:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #PBS -l nodes=1:ppn=8,walltime=48:00:00,os=centos53computeA&lt;br /&gt;
 &lt;br /&gt;
 ## To submit type: qsub x.sh -v NAME=gamessjob&lt;br /&gt;
 &lt;br /&gt;
 # If not an interactive job (i.e. -I), then cd into the directory where&lt;br /&gt;
 # I typed qsub.&lt;br /&gt;
 if [ &amp;quot;$PBS_ENVIRONMENT&amp;quot; != &amp;quot;PBS_INTERACTIVE&amp;quot; ]; then&lt;br /&gt;
   if [ -n &amp;quot;$PBS_O_WORKDIR&amp;quot; ]; then&lt;br /&gt;
     cd $PBS_O_WORKDIR&lt;br /&gt;
   fi&lt;br /&gt;
 fi&lt;br /&gt;
 &lt;br /&gt;
 # the input file is typically named something like &amp;quot;gamesjob.inp&amp;quot;&lt;br /&gt;
 # so the script will be run like &amp;quot;$SCINET_RUNGMS gamessjob 00 8 8&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 find /scratch/user/gamess-scratch -type f -name ${NAME:-safety_net}\* -exec /bin/rm {} \;&lt;br /&gt;
 &lt;br /&gt;
 # load the gamess module if not in .bashrc already&lt;br /&gt;
 # actually, it MUST be in .bashrc&lt;br /&gt;
 # module load gamess&lt;br /&gt;
 &lt;br /&gt;
 # run the program&lt;br /&gt;
 &lt;br /&gt;
 /scratch/user/rungms $NAME 00 8 8 &amp;gt;&amp;amp; $NAME.log&lt;br /&gt;
&lt;br /&gt;
====A Script to Add the $VIB Group for Hessian Restarts in GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
Sometimes an optimization plus vibrational analysis, or just a plain vibrational analysis, must be restarted - because the two-day time limit has been exceeded, or because there was an error during the calculation.  In GAMESS(US), you can restart a vibrational analysis from a previous one, and it will utilize the frequencies that were already computed in the failed run.&lt;br /&gt;
&lt;br /&gt;
For example, if one submits the input file &amp;quot;job_name.inp&amp;quot; and it fails before it has finished, then one must utilize the file &amp;quot;job_name.rst&amp;quot;, which contains data that is required to restart the calculation.  This file is located in the /scratch/user/gamess-scratch directory.  Data from the &amp;quot;job_name.rst&amp;quot; file must be appended to the end of the new input file (after the coordinates, and after the ECP section if it is present) to restart the calculation; let us call this new file &amp;quot;job_name_restart.inp&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
A shortened version of the &amp;quot;job_name.rst&amp;quot; file looks like this:&lt;br /&gt;
&lt;br /&gt;
  ENERGY/GRADIENT/DIPOLE RESTART DATA FOR RUNTYP=HESSIAN&lt;br /&gt;
  job_name                           &lt;br /&gt;
  $VIB   &lt;br /&gt;
         IVIB=   0 IATOM=   0 ICOORD=   0 E=    -3717.1435124522&lt;br /&gt;
 -5.165258381E-04 1.584665821E-02-1.206270555E-02-2.241461728E-03 3.176050715E-03&lt;br /&gt;
 -5.706738823E-04 2.502034151E-03 5.130112290E-04-2.716945939E-03 1.357008279E-03&lt;br /&gt;
 -1.059915305E-03 1.693526456E-03-2.957638907E-04-5.994938737E-04 9.684054361E-04&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The text eventually ends with one blank line.  The $VIB heading and all of the text after $VIB must be appended to the end of the file &amp;quot;job_name_restart.inp&amp;quot;, and then &amp;quot; $END&amp;quot; must be inserted at the very end of the file.&lt;br /&gt;
&lt;br /&gt;
One could cut and paste this in a text editor, but we have written a small script that will do it automatically.  We call it &amp;quot;vib.sh&amp;quot; but you can call it whatever you like.  Here it is:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # script to add vibrational data for a hessian restart&lt;br /&gt;
 &lt;br /&gt;
 awk '/\$VIB/{p=1}p;END{print &amp;quot; $END&amp;quot;}' /scratch/user/gamess-scratch/$NAME1.rst &amp;gt;&amp;gt; $NAME2.inp&lt;br /&gt;
&lt;br /&gt;
To use it, simply copy it into a new text file with the extension &amp;quot;.sh&amp;quot; and make it executable.  Also, you will need to edit the &amp;quot;/scratch/user/gamess-scratch/&amp;quot; directory in the script to match your user name.  The two variables in the script, NAME1 and NAME2, represent the name of your &amp;quot;.rst&amp;quot; file and your new &amp;quot;.inp&amp;quot; file, respectively.  In the example above, NAME1=job_name (the name of the .rst file in the gamess-scratch directory that contains the $VIB data) and NAME2=job_name_restart (the name of the new input file that you have prepared and want to copy the $VIB data into).&lt;br /&gt;
&lt;br /&gt;
To run it on a gpc node without submitting it to the job queue, type:&lt;br /&gt;
&lt;br /&gt;
 NAME1=job_name NAME2=job_name_restart ./vib.sh&lt;br /&gt;
&lt;br /&gt;
To run it in the queue, type:&lt;br /&gt;
&lt;br /&gt;
 qsub vib.sh -v NAME1=job_name,NAME2=job_name_restart &lt;br /&gt;
&lt;br /&gt;
-special thanks to Ramses for help with this&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  30 September 2010&lt;br /&gt;
&lt;br /&gt;
====Most Commonly Used Headers in the Fekl Lab====&lt;br /&gt;
&lt;br /&gt;
After about a year of using GAMESS(US), we have found that we most often run optimizations, frequency analyses, transition state searches and IRC calculations using DFT methods.  Here are the input decks that we have found work well for inorganic and organometallic compounds.&lt;br /&gt;
&lt;br /&gt;
=====Optimization Plus Frequency (for a neutral, singlet)=====&lt;br /&gt;
 &lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2800 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. DAMP=.T. $END&lt;br /&gt;
 $STATPT OPTTOL=0.00001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
 $FORCE TEMP=298.15 PURIFY=.t. PROJCT=.t. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====Frequency Only (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=HESSIAN DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2800 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. DAMP=.T. $END&lt;br /&gt;
 $FORCE METHOD=SEMINUM VIBANL=.TRUE. PROJCT=.T. PURIFY=.T. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====Transition State Search (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=SADPOINT DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2850 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. $END&lt;br /&gt;
 $STATPT STSTEP=0.05 OPTTOL=0.00001 NSTEP=500 HESS=CALC HSSEND=.t. &lt;br /&gt;
  STPT=.FALSE. $END&lt;br /&gt;
 $FORCE METHOD=SEMINUM VIBANL=.TRUE. PURIFY=.T. PROJCT=.T. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====IRC (Intrinsic Reaction Coordinate) Calculation (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=IRC DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $IRC OPTTOL=0.00001 STRIDE=0.05 NPOINT=5000 SADDLE=.TRUE. FORWRD=.F.&lt;br /&gt;
 $END&lt;br /&gt;
 $SYSTEM TIMLIM=2850 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
 $FORCE TEMP=298.15 PURIFY=.t. PROJCT=.t. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 September 2010&lt;br /&gt;
&lt;br /&gt;
====How to Run an IRC Calculation Using GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
An IRC or Intrinsic Reaction Coordinate calculation follows the imaginary mode of the vibrational analysis of a transition state calculation.  In GAMESS(US), you can choose to follow the forward (towards the products) or backward (towards the reactants) direction.  As shown above in the IRC header that we use, the direction of the IRC calculation is controlled by the &amp;quot;FORWRD&amp;quot; keyword.  Using &amp;quot;FORWRD=.T.&amp;quot; means that the IRC follows the forward direction, while using &amp;quot;FORWRD=.F.&amp;quot; means that it follows the backward direction.&lt;br /&gt;
&lt;br /&gt;
Let us say we want to perform an IRC calculation.  You must first perform a vibrational analysis of your molecule and check that there is exactly one negative (imaginary) frequency.  If that is the case, the vibrational analysis completed successfully and there will be a file with the extension &amp;quot;.dat&amp;quot;, let us call it &amp;quot;job_name.dat&amp;quot;, in the &amp;quot;/scratch/$USER/gamess-scratch/&amp;quot; directory (where $USER is your user name).  This file contains data that is required for the IRC input file.&lt;br /&gt;
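&lt;br /&gt;
One way to check for the single imaginary mode without scrolling through the whole log is to count the imaginary entries on the FREQUENCY lines, which GAMESS(US) marks with a trailing I. This is a sketch with a fabricated two-line log standing in for a real output file.&lt;br /&gt;

```shell
# Sketch: count imaginary modes in a GAMESS(US) log.  GAMESS marks an
# imaginary frequency with a trailing I on the FREQUENCY lines.
# "job_name.log" is a fabricated two-line stand-in for a real log.
{
  echo '          FREQUENCY:       412.33 I       55.10        78.91'
  echo '          FREQUENCY:       101.55        203.77       350.02'
} > job_name.log
grep 'FREQUENCY:' job_name.log | grep -o '[0-9][0-9.]* I' | wc -l
```

A count of exactly 1 is what you want before proceeding with the IRC.&lt;br /&gt;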
&lt;br /&gt;
To prepare your IRC input file, prepare an input file using the coordinates of the optimized transition state structure.  These can come from ChemCraft, Avogadro or MacMolPlt - whatever you prefer to use.  Then copy and paste the IRC header above, or use your own parameters.  Call it whatever you want, as long as it has an &amp;quot;.inp&amp;quot; extension; let us call it &amp;quot;irc_job.inp&amp;quot;.  &lt;br /&gt;
&lt;br /&gt;
The parameters in the $IRC group can be tuned to your system.  For example, the &amp;quot;STRIDE&amp;quot; value determines the &amp;quot;size&amp;quot; of the steps between each point on the IRC graph.  If you increase the stride, say from 0.05 to 0.1, the steps between points become larger and you will approach the minimum faster (this will give you fewer data points should you choose to plot the IRC data).  Decreasing the stride, say from 0.05 to 0.01, makes the steps between points smaller, and you may not reach the minimum of the reaction coordinate in the allotted time.&lt;br /&gt;
&lt;br /&gt;
You should now have an input file with an IRC header, the coordinates of the transition state and basis set and ECP information called &amp;quot;irc_job.inp&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Now you need to use the &amp;quot;job_name.dat&amp;quot; file in the &amp;quot;/scratch/$USER/gamess-scratch/&amp;quot; directory.  In this file are a number of blocks of data that are sandwiched between a line that contains only &amp;quot; $HESS&amp;quot; and a line that contains only &amp;quot; $END&amp;quot;.  What you need is the LAST of these blocks of text; it has to be copied and pasted directly below the last entry of your input file.&lt;br /&gt;
&lt;br /&gt;
This can be difficult and time consuming, as the .dat files can be very large (sometimes over 150 MB) and cumbersome to navigate through.  However, we have written a script, similar to the vib.sh script, that can help you out with this.  Basically, this script does all the copying and pasting for you.  &lt;br /&gt;
&lt;br /&gt;
Here it is:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # script to add hessian data for an IRC calculation&lt;br /&gt;
 &lt;br /&gt;
 awk '/\$HESS/{arr=&amp;quot;&amp;quot;;f=1} f {arr=(arr)?arr ORS $0:$0} /\$END/{f=0} END {print arr}' /scratch/$USER/gamess-scratch/$DAT.dat &amp;gt;&amp;gt; $IN.inp&lt;br /&gt;
&lt;br /&gt;
To use it, simply copy it into a new text file with the name &amp;quot;irc.sh&amp;quot; and make it executable. You may also need to edit the &amp;quot;/scratch/$USER/gamess-scratch/&amp;quot; path in the script if your scratch layout differs. The two variables in the script, $DAT and $IN, represent the name of your &amp;quot;.dat&amp;quot; file and your new &amp;quot;.inp&amp;quot; file, respectively. In our current example, $DAT=job_name (that is, the name of the .dat file that contains the $HESS data and that was created in the /gamess-scratch/ directory) and $IN=irc_job (that is, the name of the new input file that you have prepared and want to copy the $HESS data into). &lt;br /&gt;
&lt;br /&gt;
To run it on a GPC node without submitting it to the job queue, type:&lt;br /&gt;
&lt;br /&gt;
 DAT=job_name IN=irc_job ./irc.sh&lt;br /&gt;
&lt;br /&gt;
To run it in the queue, type:&lt;br /&gt;
&lt;br /&gt;
 qsub irc.sh -v DAT=job_name,IN=irc_job &lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 October 2010&lt;br /&gt;
&lt;br /&gt;
===Vienna Ab-initio Simulation Package (VASP)===&lt;br /&gt;
Please refer to the VASP page.&lt;br /&gt;
&lt;br /&gt;
User supplied content below.&lt;br /&gt;
&lt;br /&gt;
====Tips from the Polanyi Lab====&lt;br /&gt;
Using VASP on SciNet&lt;br /&gt;
&lt;br /&gt;
Logon using SSH&lt;br /&gt;
login.scinet.utoronto.ca&lt;br /&gt;
&lt;br /&gt;
then ssh to the TCS cluster&lt;br /&gt;
ssh tcs01&lt;br /&gt;
&lt;br /&gt;
change directory to &lt;br /&gt;
cd /scratch/imcnab/test/Si111 - or whatever other directory is convenient.&lt;br /&gt;
&lt;br /&gt;
VASP is contained in the directory imcnab/bin&lt;br /&gt;
&lt;br /&gt;
To submit a job, first edit (at least) the POSCAR file and other VASP&lt;br /&gt;
input files as necessary.&lt;br /&gt;
&lt;br /&gt;
=====Input Files=====&lt;br /&gt;
The minimum set of input files is:&lt;br /&gt;
&lt;br /&gt;
'''vasp.script''' - script file telling TCS to run a VASP job - must be edited to run in current working directory.&lt;br /&gt;
&lt;br /&gt;
'''POSCAR''' - specifies supercell geometry and &amp;quot;ionic&amp;quot; positions (i.e. atomic centres) and whether relaxation is allowed. Ionic positions may be given in cartesian coordinates (x, y, z in Å) or &amp;quot;absolute&amp;quot; (fractional) coordinates, which are fractions of the unit cell vectors. CONTCAR is always in absolute coords, so after the first run of any job, you'll find yourself running in absolute coords. VMD can be used to change these back to cartesian coordinates.&lt;br /&gt;
&lt;br /&gt;
'''INCAR''' - specifies parameters to run the job. INCAR is free format - can put input commands in ANY order.&lt;br /&gt;
&lt;br /&gt;
'''POTCAR''' - specifies the potentials to use for each atomic type. Must be in the same order as the atoms are first met in POSCAR&lt;br /&gt;
&lt;br /&gt;
'''KPOINTS''' - specifies the number and position of K-points to use in the calculation.&lt;br /&gt;
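As an aside on the POSCAR coordinate formats above: converting a fractional (absolute) coordinate back to cartesian is just a product with the lattice vectors. A minimal sketch using awk, for the special case of a cubic cell; the 5.43 lattice constant (Angstrom, roughly silicon) is purely illustrative, and real cells need the full 3x3 lattice-vector product:

```shell
# Hypothetical example: one fractional coordinate triple to cartesian,
# cubic cell only (diagonal lattice matrix, constant a on the diagonal).
a=5.43
frac="0.5 0.5 0.25"
echo "$frac" | awk -v a="$a" '{printf "%.4f %.4f %.4f\n", $1*a, $2*a, $3*a}'
# prints: 2.7150 2.7150 1.3575
```

For a non-orthogonal cell, each cartesian component is a sum over all three lattice vectors, which is what VMD does for you.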
&lt;br /&gt;
Any change of name or directory needs to be edited into the job script. The job script name is &amp;quot;vasp.script&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
VASP attempts to read initial wavefunctions from WAVECAR, so if a job is run in steps, leaving the WAVECAR file in the working directory is an efficient way to start the next stage of the calculation.&lt;br /&gt;
&lt;br /&gt;
VASP also writes CONTCAR which is of the same format as POSCAR, and can simply be renamed if it is to be used as the starting point for a new job.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Submit the job to LoadLeveler with the command llsubmit ./vasp.script from the correct working directory.&lt;br /&gt;
&lt;br /&gt;
You can check the status of a job with llq&lt;br /&gt;
&lt;br /&gt;
and can cancel a job using llcancel tcs-fXXnYY.$PID, where the tcs node number etc. is shown by llq&lt;br /&gt;
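A minimal vasp.script might look like the following sketch. Every directive value here (node count, tasks per node, wall clock limit, output names, and the VASP binary path) is a placeholder to adapt, not the actual SciNet configuration:

```sh
#!/bin/bash
# Hypothetical LoadLeveler script for a TCS VASP run -- adjust all values.
# @ job_name         = vasp_run
# @ job_type         = parallel
# @ node             = 1
# @ tasks_per_node   = 32
# @ wall_clock_limit = 12:00:00
# @ output           = $(job_name).$(jobid).out
# @ error            = $(job_name).$(jobid).err
# @ queue
cd /scratch/your_user/your_job_dir   # run in the correct working directory
poe /path/to/vasp
```

This is what "must be edited to run in current working directory" refers to: the cd line (and any file names) must point at the directory holding your POSCAR, INCAR, POTCAR and KPOINTS.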
&lt;br /&gt;
===== GENERAL NOTES =====&lt;br /&gt;
&lt;br /&gt;
MUCH faster to use ISPIN=1, no-spin (corresponds to RHF, rather than &lt;br /&gt;
ISPIN=2, which corresponds to UHF). So far, I've not found a system where the atom positions differ, or where the calculated electronic energy differs by more than 1E-4, which is the convergence &lt;br /&gt;
criterion set.&lt;br /&gt;
&lt;br /&gt;
MUCH faster to use real space LREAL = A, NSIM=4. &lt;br /&gt;
&lt;br /&gt;
So, ''always'' optimize in real space first, then re-optimize in reciprocal space. This does NOT guarantee a one-step optimization in reciprocal space. You may still need to progressively&lt;br /&gt;
relax a large system.&lt;br /&gt;
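Collected as an INCAR fragment, the speed-related settings above would read as follows; treat the values as a starting point reflecting this page's experience, not a prescription:

```text
ISPIN = 1      ! no spin polarization (RHF-like; ISPIN=2 is the UHF analogue)
LREAL = A      ! real-space projection; re-optimize in reciprocal space after
NSIM  = 4      ! number of bands optimized simultaneously in blocked RMM-DIIS
EDIFF = 1E-4   ! electronic convergence criterion mentioned above
```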
&lt;br /&gt;
'''Relaxing a large system.'''&lt;br /&gt;
If you attempt to relax a large system in one step, it will usually fail.&lt;br /&gt;
&lt;br /&gt;
The starting geometry is usually an unrelaxed molecule above an unrelaxed surface.&lt;br /&gt;
The bottom plane of the surface will NEVER be relaxed, because this corresponds to the fixed boundary condition of REALITY. &lt;br /&gt;
&lt;br /&gt;
First, relax the molecule alone (assuming you have already found a good starting position from single-point calculations); place the molecule closer to the surface than you think it should be (say 0.9 VdW radii away).&lt;br /&gt;
&lt;br /&gt;
Then ALSO allow the top layer of the surface to relax.&lt;br /&gt;
Then ALSO allow the second top layer of the surface to relax... etc... etc.&lt;br /&gt;
&lt;br /&gt;
If this DOESN'T WORK: Then relax X,Y and Z separately in iterations.&lt;br /&gt;
Example. For the following problem, representing layers of the crystal going DOWN from the top (Z pointing to the top of the screen)&lt;br /&gt;
&lt;br /&gt;
Molecule&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we can try the following relaxation schemes:&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Successive relaxation, Layer by Layer:&amp;lt;br /&amp;gt;&lt;br /&gt;
(1) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(2)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(3)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and so on. If this works, then you're fine. However, it can happen that even by Layer 2 you're running into real problems and the ionic relaxation NEVER converges, in which case I have found the following scheme (and variations thereof) useful:&lt;br /&gt;
&lt;br /&gt;
(1)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(2) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(3) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
IF (3) DOESN'T converge THEN TRY&lt;br /&gt;
&lt;br /&gt;
(2')&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule Z   Relax, XY FIXED&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  Z   Relax, XY FIXED&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- you are allowing the top layers to move only UP or DOWN, while allowing the intermediate&lt;br /&gt;
layer 2 to fully relax (actually, there is no way of telling VASP to move ALL atoms by the SAME deltaZ, but that appears to be the effect).&lt;br /&gt;
Followed by&lt;br /&gt;
&lt;br /&gt;
(2&amp;quot;)&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If (2&amp;quot;) doesn't work, you need to go back to the output of (2') and vary the cycle - perhaps something like:&lt;br /&gt;
(2&amp;quot;')&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then try (2&amp;quot;) again.&lt;br /&gt;
&lt;br /&gt;
Repeat as necessary. This scheme does appear to work quite well for big unit cells. It can be very difficult to relax as many layers as necessary in a big unit cell.&lt;br /&gt;
&lt;br /&gt;
Experience on the One Per Corner Hole problem shows that it may be necessary to have a large number of UNRELAXED (i.e. BULK silicon) layers underneath the relaxed layers in order to get physically meaningful answers. This is because silicon is so elastic.&lt;br /&gt;
&lt;br /&gt;
===== Problems and solutions: =====&lt;br /&gt;
&lt;br /&gt;
If you are getting ZBRENT errors, try changing ALGO. We usually use ALGO = Fast; change to ALGO = Normal. With ALGO = Normal, NFREE now DOES correspond to degrees of freedom (maximum suggested setting is 20). I haven't found this terribly helpful.&lt;br /&gt;
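The corresponding INCAR change is small; the NFREE value below is illustrative:

```text
ALGO  = Normal  ! instead of ALGO = Fast, when ZBRENT errors appear
NFREE = 10      ! degrees of freedom; suggested maximum is 20
```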
&lt;br /&gt;
Many calculations seem to fail after 20 or 30 ionic steps. I suspect a memory leak.&lt;br /&gt;
&lt;br /&gt;
Sometimes the calculation appears to lose WAVECAR... this is not a disaster, just means a slight increase in start time as the first wavefunction is calculated.&lt;br /&gt;
&lt;br /&gt;
If a calculation does not finish nicely, you can force WAVECAR generation by doing a purely electronic calculation (these are pretty fast).&lt;br /&gt;
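A purely electronic (static) run can be requested with something like the following INCAR settings; this is a sketch, so check it against the VASP manual for your version:

```text
NSW    = 0    ! zero ionic steps: electronic-only calculation
IBRION = -1   ! no ionic relaxation; a fresh WAVECAR is written on completion
```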
&lt;br /&gt;
VASP is VERY slow at relaxing molecules at surfaces. This is because it doesn't know a molecule is a connected entity. It treats every atom independently. &lt;br /&gt;
&lt;br /&gt;
THEREFORE, MUCH MUCH faster to try molecular positions by hand first. &lt;br /&gt;
Do some sample calculations at a few geometries to find a good starting point.&lt;br /&gt;
&lt;br /&gt;
ALSO, once you think you know where the molecule is to be placed, put it too close to the surface, and let it relax outwards... the forces close to the surface are repulsive, and much steeper, so relaxation is FASTER in this direction.&lt;br /&gt;
&lt;br /&gt;
=='''Climate Modelling'''==&lt;br /&gt;
&lt;br /&gt;
The Community Earth System Model (CESM) is a fully-coupled, global climate model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate states.&lt;br /&gt;
&lt;br /&gt;
Development of a comprehensive CESM that accurately represents the principal components of the climate system and their couplings requires both wide intellectual participation and computing capabilities beyond those available to most U.S. institutions. The CESM, therefore, must include an improved framework for coupling existing and future component models developed at multiple institutions, to permit rapid exploration of alternate formulations. This framework must be amenable to components of varying complexity and at varying resolutions, in accordance with a balance of scientific needs and resource demands. In particular, the CESM must accommodate an active program of simulations and evaluations, using an evolving model to address scientific issues and problems of national and international policy interest.&lt;br /&gt;
&lt;br /&gt;
User guides and information on each version of the model can be found at the following links:&lt;br /&gt;
&lt;br /&gt;
CCSM3: http://www.cesm.ucar.edu/models/ccsm3.0/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/&lt;br /&gt;
&lt;br /&gt;
Please see:&lt;br /&gt;
===[[Installing CCSM4]]===&lt;br /&gt;
&lt;br /&gt;
===[[Running CCSM4]]===&lt;br /&gt;
&lt;br /&gt;
===[[Post Processing CCSM Output]]===&lt;br /&gt;
&lt;br /&gt;
===[[CCSM4/CESM1 TCS Simulation List]]===&lt;br /&gt;
&lt;br /&gt;
==Medicine/Bio==&lt;br /&gt;
&lt;br /&gt;
==High Energy Physics==&lt;br /&gt;
&lt;br /&gt;
==Structural Biology==&lt;br /&gt;
Molecular simulation of proteins, lipids, carbohydrates, and other biologically relevant molecules.&lt;br /&gt;
===Molecular Dynamics (MD) simulation===&lt;br /&gt;
====GROMACS====&lt;br /&gt;
Please refer to the [[gromacs|GROMACS]] page&lt;br /&gt;
====AMBER====&lt;br /&gt;
Please refer to the [[amber|AMBER]] page&lt;br /&gt;
====NAMD====&lt;br /&gt;
NAMD is one of the better-scaling MD packages out there. With sufficiently large systems, it is able to scale to hundreds or thousands of cores on SciNet. Below are details for compiling and running NAMD on SciNet.&lt;br /&gt;
&lt;br /&gt;
More information regarding performance and different compile options coming soon...&lt;br /&gt;
&lt;br /&gt;
=====Compiling NAMD for GPC=====&lt;br /&gt;
Ensure the proper compiler/mpi modules are loaded.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
module load intel&lt;br /&gt;
module load openmpi/1.3.3-intel-v11.0-ofed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Compile Charm++ and NAMD'''&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
#Unpack source files and get required support libraries&lt;br /&gt;
tar -xzf NAMD_2.7b1_Source.tar.gz&lt;br /&gt;
cd NAMD_2.7b1_Source&lt;br /&gt;
tar -xf charm-6.1.tar&lt;br /&gt;
wget http://www.ks.uiuc.edu/Research/namd/libraries/fftw-linux-x86_64.tar.gz&lt;br /&gt;
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl-linux-x86_64.tar.gz&lt;br /&gt;
tar -xzf fftw-linux-x86_64.tar.gz; mv linux-x86_64 fftw&lt;br /&gt;
tar -xzf tcl-linux-x86_64.tar.gz; mv linux-x86_64 tcl&lt;br /&gt;
#Compile Charm++&lt;br /&gt;
cd charm-6.1&lt;br /&gt;
./build charm++ mpi-linux-x86_64 icc --basedir /scinet/gpc/mpi/openmpi/1.3.3-intel-v11.0-ofed/ --no-shared -O -DCMK_OPTIMIZE=1&lt;br /&gt;
cd ..&lt;br /&gt;
#Compile NAMD. &lt;br /&gt;
#Edit arch/Linux-x86_64-icc.arch and add &amp;quot;-lmpi&amp;quot; to the end of the CXXOPTS and COPTS line.&lt;br /&gt;
#Make a builds directory if you want different versions of NAMD compiled at the same time.&lt;br /&gt;
mkdir builds&lt;br /&gt;
./config builds/Linux-x86_64-icc --charm-arch mpi-linux-x86_64-icc&lt;br /&gt;
cd builds/Linux-x86_64-icc/&lt;br /&gt;
make -j4 namd2 # Adjust value of j as desired to specify number of simultaneous make targets. &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
--[[User:Cmadill|Cmadill]] 16:18, 27 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
=====Running Fortran=====&lt;br /&gt;
On the development nodes, there is an old gcc. The associated libraries are not on the compute nodes. Ensure the line:&lt;br /&gt;
&lt;br /&gt;
module load gcc&lt;br /&gt;
&lt;br /&gt;
is in your .bashrc file.&lt;br /&gt;
&lt;br /&gt;
====LAMMPS====&lt;br /&gt;
[[Image:StrongScalingLAMMPS.png|thumb|320px|right|Strong scaling test on GPC with OpenMPI and IntelMPI on Ethernet and InfiniBand]]&lt;br /&gt;
[[Image:WeakScalingLAMMPS.png|thumb|320px|right|Weak scaling test on GPC with OpenMPI and IntelMPI on Ethernet and InfiniBand]]&lt;br /&gt;
LAMMPS is a parallel MD code that can be found [http://lammps.sandia.gov/ here].&lt;br /&gt;
&lt;br /&gt;
'''Scaling Tests on GPC'''&lt;br /&gt;
&lt;br /&gt;
Results from strong scaling tests for LAMMPS using EAM potentials on GPC are shown in the graph on the right.  Test simulation ran 500 timesteps for 4,000,000 atoms.&lt;br /&gt;
&lt;br /&gt;
Results from weak scaling tests for LAMMPS using EAM potentials on GPC are shown in the graph on the right.  Test simulation ran 500 timesteps for 32,000 atoms per processor.&lt;br /&gt;
&lt;br /&gt;
OpenMPI version used: openmpi/1.4.1-intel-v11.0-ofed&lt;br /&gt;
&lt;br /&gt;
IntelMPI version used: intelmpi/impi-4.0.0.013&lt;br /&gt;
&lt;br /&gt;
LAMMPS version used: 15 Jan 2010&lt;br /&gt;
&lt;br /&gt;
'''Summary of Scaling Tests'''&lt;br /&gt;
&lt;br /&gt;
Results show good scaling for both OpenMPI and IntelMPI on Ethernet up to 16 processors, after which performance begins to suffer.  On InfiniBand, excellent scaling is maintained to 512 processors.&lt;br /&gt;
&lt;br /&gt;
IntelMPI shows slightly better performance compared to OpenMPI when running with InfiniBand.&lt;br /&gt;
&lt;br /&gt;
--[[User:jchu|jchu]] 14:08 Feb 2, 2010&lt;br /&gt;
&lt;br /&gt;
===Monte Carlo (MC) simulation===&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=User_Codes&amp;diff=2473</id>
		<title>User Codes</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=User_Codes&amp;diff=2473"/>
		<updated>2011-01-03T18:56:37Z</updated>

		<summary type="html">&lt;p&gt;Guido: /* CCSM4/CESM1 TCS Simulation List */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
==Astrophysics==&lt;br /&gt;
&lt;br /&gt;
===Athena (explicit, uniform grid MHD code)===&lt;br /&gt;
&lt;br /&gt;
[[Image:StrongScalingAthenaGPC.png|thumb|right|320px|Athena scaling on GPC with OpenMPI and MVAPICH2 on GigE, and OpenMPI on InfiniBand]]&lt;br /&gt;
&lt;br /&gt;
[http://www.astro.princeton.edu/~jstone/athena.html Athena] is a straightforward C code which doesn't use a lot of libraries, so it is easy to build and compile on new machines.   &lt;br /&gt;
&lt;br /&gt;
It encapsulates its compiler flags, etc in an &amp;lt;tt&amp;gt;Makeoptions.in&amp;lt;/tt&amp;gt; file which is then processed by &amp;lt;tt&amp;gt;configure&amp;lt;/tt&amp;gt;.   I've used the following additions to &amp;lt;tt&amp;gt;Makeoptions.in&amp;lt;/tt&amp;gt; on TCS and GPC:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
ifeq ($(MACHINE),scinettcs)&lt;br /&gt;
  CC = mpcc_r&lt;br /&gt;
  LDR = mpcc_r&lt;br /&gt;
  OPT = -O5 -q64 -qarch=pwr6 -qtune=pwr6 -qcache=auto -qlargepage -qstrict&lt;br /&gt;
  MPIINC =&lt;br /&gt;
  MPILIB =&lt;br /&gt;
  CFLAGS = $(OPT)&lt;br /&gt;
  LIB = -ldl -lm&lt;br /&gt;
else&lt;br /&gt;
ifeq ($(MACHINE),scinetgpc)&lt;br /&gt;
  CC = mpicc&lt;br /&gt;
  LDR = mpicc&lt;br /&gt;
  OPT = -O3&lt;br /&gt;
  MPIINC =&lt;br /&gt;
  MPILIB =&lt;br /&gt;
  CFLAGS = $(OPT)&lt;br /&gt;
  LIB = -lm&lt;br /&gt;
else&lt;br /&gt;
...&lt;br /&gt;
endif&lt;br /&gt;
endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
It performs quite well on the GPC, scaling extremely well even on a strong scaling test out to about 256 cores (32 nodes) on Gigabit ethernet, and performing beautifully on InfiniBand out to 512 cores (64 nodes). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Ljdursi|ljdursi]]  19:20, 13 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
===FLASH3 (Adaptive Mesh reactive hydrodynamics; explict hydro/MHD)===&lt;br /&gt;
&lt;br /&gt;
[[Image:weak-scaling-example.png|thumb|right|320px|Weak scaling test of the 2d sod problem on both the GPC and TCS.  The results are actually somewhat faster on the GPC; in both cases (weak) scaling is very good out at least to 256 cores]]&lt;br /&gt;
&lt;br /&gt;
[http://flash.uchicago.edu FLASH] encapsulates its machine-dependent information in the &amp;lt;tt&amp;gt;FLASH3/sites&amp;lt;/tt&amp;gt; directory.  For the GPC, you'll have to&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load intel&lt;br /&gt;
module load openmpi&lt;br /&gt;
module load hdf5/184-p1-v16-openmpi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and with that, the following file (&amp;lt;tt&amp;gt;sites/scinetgpc/Makefile.h&amp;lt;/tt&amp;gt;) works for me:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
## Must do module load hdf5/183-v16-openmpi&lt;br /&gt;
HDF5_PATH = ${SCINET_HDF5_BASE}&lt;br /&gt;
ZLIB_PATH = /usr/local&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Compiler and linker commands&lt;br /&gt;
#&lt;br /&gt;
#  We use the f90 compiler as the linker, so some C libraries may explicitly&lt;br /&gt;
#  need to be added into the link line.&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
## modules will put the right mpi in our path&lt;br /&gt;
FCOMP   = mpif77&lt;br /&gt;
CCOMP   = mpicc&lt;br /&gt;
CPPCOMP = mpiCC&lt;br /&gt;
LINK    = mpif77&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Compilation flags&lt;br /&gt;
#&lt;br /&gt;
#  Three sets of compilation/linking flags are defined: one for optimized&lt;br /&gt;
#  code, one for testing, and one for debugging.  The default is to use the &lt;br /&gt;
#  _OPT version.  Specifying -debug to setup will pick the _DEBUG version,&lt;br /&gt;
#  these should enable bounds checking.  Specifying -test is used for &lt;br /&gt;
#  flash_test, and is set for quick code generation, and (sometimes) &lt;br /&gt;
#  profiling.  The Makefile generated by setup will assign the generic token &lt;br /&gt;
#  (ex. FFLAGS) to the proper set of flags (ex. FFLAGS_OPT).&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
FFLAGS_OPT   =  -c -r8 -i4 -O3 -xSSE4.2&lt;br /&gt;
FFLAGS_DEBUG =  -c -g -r8 -i4 -O0&lt;br /&gt;
FFLAGS_TEST  =  -c -r8 -i4&lt;br /&gt;
&lt;br /&gt;
LIB_HDF5 = -L${HDF5_PATH}/lib -lhdf5 -L${SCINET_ZLIB_LIB} -lz -lgpfs&lt;br /&gt;
&lt;br /&gt;
# if we are using HDF5, we need to specify the path to the include files&lt;br /&gt;
CFLAGS_HDF5  = -I${HDF5_PATH}/include&lt;br /&gt;
&lt;br /&gt;
CFLAGS_OPT   = -c -O3 -xSSE4.2&lt;br /&gt;
CFLAGS_TEST  = -c -O2 &lt;br /&gt;
CFLAGS_DEBUG = -c -g  &lt;br /&gt;
&lt;br /&gt;
MDEFS = &lt;br /&gt;
&lt;br /&gt;
.SUFFIXES: .o .c .f .F .h .fh .F90 .f90&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Linker flags&lt;br /&gt;
#&lt;br /&gt;
#  There is a separate version of the linker flags for each of the _OPT, &lt;br /&gt;
#  _DEBUG, and _TEST cases.&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
LFLAGS_OPT   = -o&lt;br /&gt;
LFLAGS_TEST  = -o&lt;br /&gt;
LFLAGS_DEBUG = -g -o&lt;br /&gt;
&lt;br /&gt;
MACHOBJ = &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MV = mv -f&lt;br /&gt;
AR = ar -r&lt;br /&gt;
RM = rm -f&lt;br /&gt;
CD = cd&lt;br /&gt;
RL = ranlib&lt;br /&gt;
ECHO = echo&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Ljdursi|ljdursi]] 22:11, 13 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Aeronautics==&lt;br /&gt;
&lt;br /&gt;
==Chemistry==&lt;br /&gt;
&lt;br /&gt;
===CPMD===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[Cpmd | CPMD]] page.&lt;br /&gt;
&lt;br /&gt;
===NWChem===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[Nwchem | NWChem]] page.&lt;br /&gt;
&lt;br /&gt;
===GAMESS (US)===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[gamess|GAMESS (US)]] page.&lt;br /&gt;
&lt;br /&gt;
User supplied content below.&lt;br /&gt;
&lt;br /&gt;
====Tips from the Fekl Lab====&lt;br /&gt;
&lt;br /&gt;
Through trial and error, we have found a few useful things that we would like to share:&lt;br /&gt;
&lt;br /&gt;
1. Two very useful, open-source programs for visualization of output files from GAMESS(US) and for generation of input files are [http://www.scl.ameslab.gov/MacMolPlt/ MacMolPlt] and [http://avogadro.openmolecules.net/wiki/Main_Page Avogadro].  They are available for UNIX/Linux, Windows and Mac based machines; HOWEVER, any input files that we have generated with these programs on a Windows-based machine do not run on Mac-based machines.  We don't know why.&lt;br /&gt;
&lt;br /&gt;
2. [http://winscp.net/eng/index.php WinSCP] is a very useful tool that has a graphical user interface for moving files from a local machine to SCINET and vice versa.  It also has text editing capabilities.&lt;br /&gt;
&lt;br /&gt;
3. The [https://bse.pnl.gov/bse/portal EMSL Basis Set Exchange] is an excellent source for custom basis set or effective core potential parameters.  Make sure that you specify &amp;quot;Gamess-US&amp;quot; in the format drop-down box.&lt;br /&gt;
&lt;br /&gt;
4.  The commercial program [http://www.chemcraftprog.com/ ChemCraft] is a highly useful visualization program that has the ability to edit molecules in a very similar fashion to GaussView.  It can also be customized to build GAMESS(US) input files.&lt;br /&gt;
&lt;br /&gt;
====Anatomy of a GAMESS(US) Input File with Basis Set Info in an External File====&lt;br /&gt;
&lt;br /&gt;
  $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=M06-L MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
   ECP=READ $END&lt;br /&gt;
  $SYSTEM TIMLIM=525600 MWORDS=1750 PARALL=.TRUE. $END&lt;br /&gt;
  $BASIS GBASIS=CUSTOMNI EXTFIL=.t. $END&lt;br /&gt;
  $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
  $STATPT OPTTOL=0.0001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
  $DATA&lt;br /&gt;
  Mo_BDT3&lt;br /&gt;
 C1&lt;br /&gt;
 MOLYBDENUM 42.0      5.7556500000      4.4039600000     16.5808400000&lt;br /&gt;
 SULFUR     16.0      7.4169700000      3.1956300000     15.2089300000&lt;br /&gt;
 SULFUR     16.0      4.0966800000      3.2258300000     15.1761100000&lt;br /&gt;
 SULFUR     16.0      3.9677300000      4.4940500000     18.3266100000&lt;br /&gt;
 SULFUR     16.0      7.1776900000      3.5815000000     18.4485200000&lt;br /&gt;
 SULFUR     16.0      4.3776600000      6.2447400000     15.6786900000&lt;br /&gt;
 SULFUR     16.0      7.5478700000      6.0679800000     16.2223700000&lt;br /&gt;
 CARBON      6.0      6.4716900000      2.1004800000     14.1902300000&lt;br /&gt;
 CARBON      6.0      5.0690300000      2.1781400000     14.1080700000&lt;br /&gt;
 CARBON      6.0      4.8421800000      4.2701300000     19.8855500000&lt;br /&gt;
 CARBON      6.0      6.1969000000      3.9249600000     19.9397400000&lt;br /&gt;
 CARBON      6.0      6.8280600000      3.7834200000     21.1913200000&lt;br /&gt;
 CARBON      6.0      5.7697600000      7.6933500000     17.4241800000&lt;br /&gt;
 CARBON      6.0      7.2043100000      7.9413600000     17.8281100000&lt;br /&gt;
 CARBON      6.0      5.5051400000      7.0409700000     14.5903800000&lt;br /&gt;
 CARBON      6.0      6.8905200000      6.9194700000     14.7626200000&lt;br /&gt;
 CARBON      6.0      7.7396400000      7.5379800000     13.8285700000&lt;br /&gt;
 HYDROGEN    1.0      8.8190700000      7.4520600000     13.9252200000&lt;br /&gt;
 CARBON      6.0      7.2169400000      8.2960300000     12.7704100000&lt;br /&gt;
 HYDROGEN    1.0      7.8667000000      8.7825100000     12.0575600000&lt;br /&gt;
 CARBON      6.0      5.8260300000      8.4502300000     12.6467800000&lt;br /&gt;
 HYDROGEN    1.0      5.4143000000      9.0544300000     11.8493100000&lt;br /&gt;
 CARBON      6.0      4.9881500000      7.8192300000     13.5528400000&lt;br /&gt;
 HYDROGEN    1.0      3.9090500000      7.9420000000     13.4583700000&lt;br /&gt;
 CARBON      6.0      7.1538500000      1.1569600000     13.4143900000&lt;br /&gt;
 CARBON      6.0      4.4018100000      1.3603900000     13.1919900000&lt;br /&gt;
 CARBON      6.0      6.4791600000      0.3185500000     12.5353300000&lt;br /&gt;
 CARBON      6.0      5.0837400000      0.4369500000     12.4084900000&lt;br /&gt;
 HYDROGEN    1.0      7.0116000000     -0.4099400000     11.9434600000&lt;br /&gt;
 HYDROGEN    1.0      8.2399000000      1.0702400000     13.4937600000&lt;br /&gt;
 HYDROGEN    1.0      3.3185600000      1.4368700000     13.0953100000&lt;br /&gt;
 HYDROGEN    1.0      4.5549800000     -0.1997300000     11.7165200000&lt;br /&gt;
 CARBON      6.0      6.1105700000      3.9639000000     22.3866100000&lt;br /&gt;
 CARBON      6.0      4.1216300000      4.4424400000     21.1020100000&lt;br /&gt;
 HYDROGEN    1.0      7.8732900000      3.5217100000     21.2520500000&lt;br /&gt;
 CARBON      6.0      4.7606000000      4.2868500000     22.3363800000&lt;br /&gt;
 HYDROGEN    1.0      6.6064200000      3.8406000000     23.3428500000&lt;br /&gt;
 HYDROGEN    1.0      4.2065000000      4.4170700000     23.2667100000&lt;br /&gt;
 HYDROGEN    1.0      3.0674000000      4.6893500000     21.0889000000&lt;br /&gt;
 HYDROGEN    1.0      7.4249200000      7.7545300000     18.8583200000&lt;br /&gt;
 HYDROGEN    1.0      7.6651700000      8.9049700000     17.7652100000&lt;br /&gt;
 HYDROGEN    1.0      5.3324000000      8.6487800000     17.2222700000&lt;br /&gt;
 HYDROGEN    1.0      5.5015000000      7.1039000000     18.2759400000&lt;br /&gt;
  $END&lt;br /&gt;
  $ECP&lt;br /&gt;
 MO-ECP GEN     28     3&lt;br /&gt;
  5      ----- f potential     -----&lt;br /&gt;
     -0.0469492        0    537.9667807        &lt;br /&gt;
    -20.2080084        1    147.8982938        &lt;br /&gt;
   -106.2116302        2     45.7358898        &lt;br /&gt;
    -41.8107368        2     13.2911467        &lt;br /&gt;
     -4.2054103        2      4.7059961        &lt;br /&gt;
  3      ----- s-f potential     -----&lt;br /&gt;
      2.8063717        0    110.2991760        &lt;br /&gt;
     44.5162012        1     23.2014645        &lt;br /&gt;
     82.7785227        2      5.3530131        &lt;br /&gt;
  4      ----- p-f potential     -----&lt;br /&gt;
      4.9420876        0     63.2901397        &lt;br /&gt;
     25.8604976        1     23.3315302        &lt;br /&gt;
    132.4708742        2     24.6759423        &lt;br /&gt;
     57.3149794        2      4.6493040        &lt;br /&gt;
  5      ----- d-f potential     -----&lt;br /&gt;
      3.0054591        0    104.4839977        &lt;br /&gt;
     26.3637851        1     66.2307245        &lt;br /&gt;
    183.3849199        2     39.1283176        &lt;br /&gt;
     98.4453068        2     13.1164437        &lt;br /&gt;
     22.4901377        2      3.6280263 &lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
=====The Input Deck=====&lt;br /&gt;
&lt;br /&gt;
Below is the input deck.  It is where you tell GAMESS(US) which job type to execute and where all your individual parameters for that job type are entered.  The example input deck below is for a geometry optimization plus frequency calculation; it is equivalent to a Gaussian job with &amp;quot;opt&amp;quot; and &amp;quot;freq&amp;quot; in the route section.&lt;br /&gt;
&lt;br /&gt;
  $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=M06-L MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
   ECP=READ $END&lt;br /&gt;
  $SYSTEM TIMLIM=2850 MWORDS=1750 MEMDDI=20 PARALL=.TRUE. $END&lt;br /&gt;
  $BASIS GBASIS=CUSTOMNI EXTFIL=.t. $END&lt;br /&gt;
  $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
  $STATPT OPTTOL=0.0001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
  $DATA&lt;br /&gt;
&lt;br /&gt;
An important thing to note is the spacing: each line of the input deck must begin with one space, or the job will fail.  Most builders will insert this space for you, but it helps to double-check.&lt;br /&gt;
&lt;br /&gt;
The end of the input deck is marked by the &amp;quot;$DATA&amp;quot; line.&lt;br /&gt;
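The leading-space rule above can be checked mechanically.  Here is a minimal sketch; the file name and the two-line mock deck are illustrative, not from a real job:&lt;br /&gt;

```shell
# Write a two-line mock deck: the first group line is correctly indented,
# the second starts in column 1 and would make GAMESS(US) fail.
cat > deck_check.inp <<'EOF'
 $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE $END
$SYSTEM TIMLIM=2850 MWORDS=1750 $END
EOF
# Report any line that begins in column 1 (no leading space).
grep -n '^[^ ]' deck_check.inp
```

Here grep flags line 2, the $SYSTEM group that is missing its leading space.&lt;br /&gt;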
&lt;br /&gt;
=====Job Title Line=====&lt;br /&gt;
&lt;br /&gt;
The next line of the file is the job title.  It can be anything you wish; however, we have found that, to be on the safe side, it is best to avoid using symbols or spaces.&lt;br /&gt;
&lt;br /&gt;
  Mo_BDT3&lt;br /&gt;
&lt;br /&gt;
=====Symmetry Point Group=====&lt;br /&gt;
&lt;br /&gt;
The next line of the file is the symmetry point group of your molecule.  Note that there is no leading space before the point group.&lt;br /&gt;
&lt;br /&gt;
 C1&lt;br /&gt;
&lt;br /&gt;
=====Coordinates=====&lt;br /&gt;
&lt;br /&gt;
The next block of text is set aside for the coordinates of the molecule, in either internal (z-matrix) format or Cartesian coordinates.  Note that there is no leading space before the coordinates.  One may use either the chemical symbol or the full name of each atom in the molecule.  The end of the coordinates is signified by &amp;quot;$END&amp;quot;, which MUST be preceded by one space.  The coordinates below do NOT have any basis set information inserted.  It is possible to insert basis set information directly into the input file: obtain the desired basis set parameters from the EMSL and insert them below each relevant atom.  An example input file with inserted basis set information will be shown later.&lt;br /&gt;
&lt;br /&gt;
 MOLYBDENUM 42.0      5.7556500000      4.4039600000     16.5808400000&lt;br /&gt;
 SULFUR     16.0      7.4169700000      3.1956300000     15.2089300000&lt;br /&gt;
 SULFUR     16.0      4.0966800000      3.2258300000     15.1761100000&lt;br /&gt;
 SULFUR     16.0      3.9677300000      4.4940500000     18.3266100000&lt;br /&gt;
 SULFUR     16.0      7.1776900000      3.5815000000     18.4485200000&lt;br /&gt;
 SULFUR     16.0      4.3776600000      6.2447400000     15.6786900000&lt;br /&gt;
 SULFUR     16.0      7.5478700000      6.0679800000     16.2223700000&lt;br /&gt;
 CARBON      6.0      6.4716900000      2.1004800000     14.1902300000&lt;br /&gt;
 CARBON      6.0      5.0690300000      2.1781400000     14.1080700000&lt;br /&gt;
 CARBON      6.0      4.8421800000      4.2701300000     19.8855500000&lt;br /&gt;
 CARBON      6.0      6.1969000000      3.9249600000     19.9397400000&lt;br /&gt;
 CARBON      6.0      6.8280600000      3.7834200000     21.1913200000&lt;br /&gt;
 CARBON      6.0      5.7697600000      7.6933500000     17.4241800000&lt;br /&gt;
 CARBON      6.0      7.2043100000      7.9413600000     17.8281100000&lt;br /&gt;
 CARBON      6.0      5.5051400000      7.0409700000     14.5903800000&lt;br /&gt;
 CARBON      6.0      6.8905200000      6.9194700000     14.7626200000&lt;br /&gt;
 CARBON      6.0      7.7396400000      7.5379800000     13.8285700000&lt;br /&gt;
 HYDROGEN    1.0      8.8190700000      7.4520600000     13.9252200000&lt;br /&gt;
 CARBON      6.0      7.2169400000      8.2960300000     12.7704100000&lt;br /&gt;
 HYDROGEN    1.0      7.8667000000      8.7825100000     12.0575600000&lt;br /&gt;
 CARBON      6.0      5.8260300000      8.4502300000     12.6467800000&lt;br /&gt;
 HYDROGEN    1.0      5.4143000000      9.0544300000     11.8493100000&lt;br /&gt;
 CARBON      6.0      4.9881500000      7.8192300000     13.5528400000&lt;br /&gt;
 HYDROGEN    1.0      3.9090500000      7.9420000000     13.4583700000&lt;br /&gt;
 CARBON      6.0      7.1538500000      1.1569600000     13.4143900000&lt;br /&gt;
 CARBON      6.0      4.4018100000      1.3603900000     13.1919900000&lt;br /&gt;
 CARBON      6.0      6.4791600000      0.3185500000     12.5353300000&lt;br /&gt;
 CARBON      6.0      5.0837400000      0.4369500000     12.4084900000&lt;br /&gt;
 HYDROGEN    1.0      7.0116000000     -0.4099400000     11.9434600000&lt;br /&gt;
 HYDROGEN    1.0      8.2399000000      1.0702400000     13.4937600000&lt;br /&gt;
 HYDROGEN    1.0      3.3185600000      1.4368700000     13.0953100000&lt;br /&gt;
 HYDROGEN    1.0      4.5549800000     -0.1997300000     11.7165200000&lt;br /&gt;
 CARBON      6.0      6.1105700000      3.9639000000     22.3866100000&lt;br /&gt;
 CARBON      6.0      4.1216300000      4.4424400000     21.1020100000&lt;br /&gt;
 HYDROGEN    1.0      7.8732900000      3.5217100000     21.2520500000&lt;br /&gt;
 CARBON      6.0      4.7606000000      4.2868500000     22.3363800000&lt;br /&gt;
 HYDROGEN    1.0      6.6064200000      3.8406000000     23.3428500000&lt;br /&gt;
 HYDROGEN    1.0      4.2065000000      4.4170700000     23.2667100000&lt;br /&gt;
 HYDROGEN    1.0      3.0674000000      4.6893500000     21.0889000000&lt;br /&gt;
 HYDROGEN    1.0      7.4249200000      7.7545300000     18.8583200000&lt;br /&gt;
 HYDROGEN    1.0      7.6651700000      8.9049700000     17.7652100000&lt;br /&gt;
 HYDROGEN    1.0      5.3324000000      8.6487800000     17.2222700000&lt;br /&gt;
 HYDROGEN    1.0      5.5015000000      7.1039000000     18.2759400000&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
=====Effective Core Potential Data=====&lt;br /&gt;
&lt;br /&gt;
The effective core potential (ECP) data is entered after the coordinates.  It starts with &amp;quot;$ECP&amp;quot;, which must be preceded by a space.  The atoms of the molecule are listed in the same order as in the coordinates section, with the ECP parameters following each atom.  For any atom that does NOT have an ECP, one must enter &amp;quot;ECP-NONE&amp;quot; or &amp;quot;NONE&amp;quot; after that atom instead.&lt;br /&gt;
&lt;br /&gt;
 $ECP&lt;br /&gt;
 MO-ECP GEN     28     3&lt;br /&gt;
  5      ----- f potential     -----&lt;br /&gt;
     -0.0469492        0    537.9667807        &lt;br /&gt;
    -20.2080084        1    147.8982938        &lt;br /&gt;
   -106.2116302        2     45.7358898        &lt;br /&gt;
    -41.8107368        2     13.2911467        &lt;br /&gt;
     -4.2054103        2      4.7059961        &lt;br /&gt;
  3      ----- s-f potential     -----&lt;br /&gt;
      2.8063717        0    110.2991760        &lt;br /&gt;
     44.5162012        1     23.2014645        &lt;br /&gt;
     82.7785227        2      5.3530131        &lt;br /&gt;
  4      ----- p-f potential     -----&lt;br /&gt;
      4.9420876        0     63.2901397        &lt;br /&gt;
     25.8604976        1     23.3315302        &lt;br /&gt;
    132.4708742        2     24.6759423        &lt;br /&gt;
     57.3149794        2      4.6493040        &lt;br /&gt;
  5      ----- d-f potential     -----&lt;br /&gt;
      3.0054591        0    104.4839977        &lt;br /&gt;
     26.3637851        1     66.2307245        &lt;br /&gt;
    183.3849199        2     39.1283176        &lt;br /&gt;
     98.4453068        2     13.1164437        &lt;br /&gt;
     22.4901377        2      3.6280263 &lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
  $END&lt;br /&gt;
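The one-entry-per-atom rule above is easy to violate in a large molecule.  Below is a hedged sanity check, assuming (as in the example above) that each ECP entry line contains either -ECP or NONE; the mock file and the atom count are illustrative:&lt;br /&gt;

```shell
# Mock $ECP group for a 3-atom molecule: one atom with an ECP, two without.
cat > ecp_check.inp <<'EOF'
 $ECP
MO-ECP GEN     28     3
  5      ----- f potential     -----
     -0.0469492        0    537.9667807
S NONE
C NONE
 $END
EOF
natoms=3                                   # atom count from the $DATA group
necp=$(grep -Ec -e '-ECP|NONE' ecp_check.inp)
if [ "$necp" -ne "$natoms" ]; then
  echo "mismatch: $necp ECP entries for $natoms atoms"
else
  echo "ok: $necp entries for $natoms atoms"
fi
```

For the real input file, set natoms to the number of coordinate lines in your $DATA group and point grep at your own .inp file.&lt;br /&gt;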
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  16 November 2009&lt;br /&gt;
&lt;br /&gt;
====Using an External File to Define Basis Set in GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
Since GAMESS(US) has a limited number of built-in ECPs and basis sets, one may want to make GAMESS(US) read an external file that contains the basis set and ECP data, using the &amp;quot;EXTFIL&amp;quot; keyword in the $BASIS group of the input file.  For many metal-containing compounds, it is very convenient and time-saving to use an effective core potential (ECP) for the core metal electrons, as they are usually not important to the reactivity of the complex or the geometry around the metal.  To make GAMESS(US) use this external file, one must also copy the &amp;quot;rungms&amp;quot; file and modify it accordingly.  The following is a list of instructions with commands that will work from a terminal.  One could also use WinSCP to do all of this with a GUI rather than a TUI.  &lt;br /&gt;
&lt;br /&gt;
=====Modifying rungms to Use a Custom Basis Set File=====&lt;br /&gt;
1. Copy &amp;quot;rungms&amp;quot; from /scinet/gpc/Applications/gamess to one's own /scratch/$USER/ directory:&lt;br /&gt;
 cp /scinet/gpc/Applications/gamess/rungms /scratch/$USER/&lt;br /&gt;
&lt;br /&gt;
2. Change to the scratch directory and check to see if &amp;quot;rungms&amp;quot; has copied successfully.&lt;br /&gt;
 cd /scratch/$USER&lt;br /&gt;
 ls&lt;br /&gt;
&lt;br /&gt;
3. Edit line 147 of the script.  &lt;br /&gt;
 vi rungms&lt;br /&gt;
Move the cursor down to line 147 using the arrow keys; it should read &amp;quot;setenv EXTBAS /dev/null&amp;quot;.  Move the cursor to the first &amp;quot;/&amp;quot; and hit &amp;quot;i&amp;quot; to insert text, then type the path to your external basis file, for example /scratch/$USER/basisset.  Hit &amp;quot;escape&amp;quot; when done.  To save the changes and exit vi, type &amp;quot;:wq&amp;quot; (it will appear at the bottom of the window) and hit enter.  Now you are done with vi.&lt;br /&gt;
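If you prefer not to edit interactively, the same change can be scripted with sed.  This is a sketch: the basis-file path is an example, and for safety it operates on a one-line stand-in for rungms rather than your real copy:&lt;br /&gt;

```shell
# Stand-in for the relevant line of rungms (the real file lives in /scratch/$USER).
printf 'setenv EXTBAS /dev/null\n' > rungms
# Point EXTBAS at the external basis file instead of /dev/null.
sed -i 's|setenv EXTBAS /dev/null|setenv EXTBAS /scratch/'"$USER"'/basisset|' rungms
# Confirm the change took effect.
grep 'setenv EXTBAS' rungms
```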
&lt;br /&gt;
=====Creating a Custom Basis Set File=====&lt;br /&gt;
1. To create a custom basis set file, you need to create a new text document.  Our group's common practice is to comment out the first line of this file with an exclamation mark (!), followed by a note of the specific basis sets and ECPs that are going to be used for each of the atoms.  Let us use the molecule Mo(CO)6, molybdenum hexacarbonyl, as an example.  Below is the first line of the external file, which we will call &amp;quot;CUSTOMMO&amp;quot; (NOTE: you can use any name for the external file that suits you, as long as it has no spaces and is 8 characters or less).&lt;br /&gt;
&lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
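Those two constraints on the file name (no spaces, at most 8 characters) can be checked with a couple of lines of shell; CUSTOMMO is the name used in this example:&lt;br /&gt;

```shell
name=CUSTOMMO
# at most 8 characters
if [ ${#name} -gt 8 ]; then echo "name too long: $name"; fi
# no spaces
case $name in *' '*) echo "name contains a space: $name";; esac
echo "length: ${#name}"
```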
&lt;br /&gt;
2. The next step is to visit the [https://bse.pnl.gov/bse/portal EMSL Basis Set exchange] and select C and O from the periodic table.  Then, on the left of the page, select &amp;quot;6-31G&amp;quot; as the basis set.  Finally, make sure the output is in GAMESS(US) format using the drop-down menu and then click &amp;quot;get basis set&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[File:C_O_6_31G_basisset.JPG|centre]]&lt;br /&gt;
&lt;br /&gt;
3. A new window should appear with text in it.  For our example case, the text looks like this:&lt;br /&gt;
 &lt;br /&gt;
 !  6-31G  EMSL  Basis Set Exchange Library   10/13/09 11:12 AM&lt;br /&gt;
 ! Elements                             References&lt;br /&gt;
 ! --------                             ----------&lt;br /&gt;
 ! H - He: W.J. Hehre, R. Ditchfield and J.A. Pople, J. Chem. Phys. 56,&lt;br /&gt;
 ! Li - Ne: 2257 (1972).  Note: Li and B come from J.D. Dill and J.A.&lt;br /&gt;
 ! Pople, J. Chem. Phys. 62, 2921 (1975).&lt;br /&gt;
 ! Na - Ar: M.M. Francl, W.J. Petro, W.J. Hehre, J.S. Binkley, M.S. Gordon,&lt;br /&gt;
 ! D.J. DeFrees and J.A. Pople, J. Chem. Phys. 77, 3654 (1982)&lt;br /&gt;
 ! K  - Zn: V. Rassolov, J.A. Pople, M. Ratner and T.L. Windus, J. Chem. Phys.&lt;br /&gt;
 ! 109, 1223 (1998)&lt;br /&gt;
 ! Note: He and Ne are unpublished basis sets taken from the Gaussian&lt;br /&gt;
 ! program&lt;br /&gt;
 ! &lt;br /&gt;
 $DATA&lt;br /&gt;
 CARBON&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000&lt;br /&gt;
 OXYGEN&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000        &lt;br /&gt;
 $END&lt;br /&gt;
&lt;br /&gt;
4. Now, copy and paste the text between the $DATA and $END headings into our external text file, CUSTOMMO.  We also need to change the name of each element to the corresponding symbol in the periodic table.  Finally, we need to add the name of the external file next to the element symbol, separated by one space.  Note that there should be a blank line separating the basis set information from the first, commented-out line (the line starting with '!').  CUSTOMMO should now look like this:&lt;br /&gt;
 &lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
 C CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000&lt;br /&gt;
 O CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000 &lt;br /&gt;
&lt;br /&gt;
5. Repeat Steps 2 and 3 above, but choose Mo and select the LANL2DZ ECP instead.  A new window will pop up with the basis set information as well as the ECP data we need, since we specified the LANL2DZ '''ECP'''.  The ECP data is not inserted into the external file; rather, it is placed into the input file itself (more on this later).  &lt;br /&gt;
&lt;br /&gt;
[[File:Mo_LANL2DZ_basisset.JPG|centre]]&lt;br /&gt;
&lt;br /&gt;
6. After copying the molybdenum basis set information, your finished external basis set file should look like this:&lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
 C CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000&lt;br /&gt;
 O CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000&lt;br /&gt;
 Mo CUSTOMMO&lt;br /&gt;
 S   3&lt;br /&gt;
   1      2.3610000             -0.9121760        &lt;br /&gt;
   2      1.3090000              1.1477453        &lt;br /&gt;
   3      0.4500000              0.6097109        &lt;br /&gt;
 S   4&lt;br /&gt;
   1      2.3610000              0.8139259        &lt;br /&gt;
   2      1.3090000             -1.1360084        &lt;br /&gt;
   3      0.4500000             -1.1611592        &lt;br /&gt;
   4      0.1681000              1.0064786        &lt;br /&gt;
 S   1&lt;br /&gt;
   1      0.0423000              1.0000000        &lt;br /&gt;
 P   3&lt;br /&gt;
   1      4.8950000             -0.0908258        &lt;br /&gt;
   2      1.0440000              0.7042899        &lt;br /&gt;
   3      0.3877000              0.3973179        &lt;br /&gt;
 P   2&lt;br /&gt;
   1      0.4995000             -0.1081945        &lt;br /&gt;
   2      0.0780000              1.0368093        &lt;br /&gt;
 P   1&lt;br /&gt;
   1      0.0247000              1.0000000        &lt;br /&gt;
 D   3&lt;br /&gt;
   1      2.9930000              0.0527063        &lt;br /&gt;
   2      1.0630000              0.5003907        &lt;br /&gt;
   3      0.3721000              0.5794024        &lt;br /&gt;
 D   1&lt;br /&gt;
   1      0.1178000              1.0000000&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 September 2010&lt;br /&gt;
&lt;br /&gt;
====A Modified BASH Script for Running GAMESS(US)====&lt;br /&gt;
Below please find the bash script that we use to run GAMESS(US) on a single node with 8 processors.  &lt;br /&gt;
&lt;br /&gt;
One quirk of GAMESS(US) is that it will NOT write over files from old or failed jobs that have the same name as the input file you are submitting.  For example: I submit an input file named &amp;quot;mo_opt.inp&amp;quot; to the queue, and it comes back seconds later with an error.  The log file says that I have typed an incorrect keyword, and lo and behold, I have a comma where it shouldn't be.  Such typos are common.  If I simply re-submit, GAMESS(US) will fail again, because it has already written a .log file and some other files to the /scratch/user/gamess-scratch/ directory.  These files must all be deleted before the fixed input file is re-submitted.&lt;br /&gt;
&lt;br /&gt;
This script takes care of this annoying problem by deleting failed jobs with the same file name for you.&lt;br /&gt;
&lt;br /&gt;
Here it is:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #PBS -l nodes=1:ppn=8,walltime=48:00:00,os=centos53computeA&lt;br /&gt;
 &lt;br /&gt;
 ## To submit type: qsub x.sh&lt;br /&gt;
 &lt;br /&gt;
 # If not an interactive job (i.e. -I), then cd into the directory where&lt;br /&gt;
 # I typed qsub.&lt;br /&gt;
 if [ &amp;quot;$PBS_ENVIRONMENT&amp;quot; != &amp;quot;PBS_INTERACTIVE&amp;quot; ]; then&lt;br /&gt;
   if [ -n &amp;quot;$PBS_O_WORKDIR&amp;quot; ]; then&lt;br /&gt;
     cd $PBS_O_WORKDIR&lt;br /&gt;
   fi&lt;br /&gt;
 fi&lt;br /&gt;
 &lt;br /&gt;
 # the input file is typically named something like &amp;quot;gamesjob.inp&amp;quot;&lt;br /&gt;
 # so the script will be run like &amp;quot;$SCINET_RUNGMS gamessjob 00 8 8&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 find /scratch/user/gamess-scratch -type f -name ${NAME:-safety_net}\* -exec /bin/rm {} \;&lt;br /&gt;
 &lt;br /&gt;
 # load the gamess module if not in .bashrc already&lt;br /&gt;
 # actually, it MUST be in .bashrc&lt;br /&gt;
 # module load gamess&lt;br /&gt;
 &lt;br /&gt;
 # run the program&lt;br /&gt;
 &lt;br /&gt;
 /scratch/user/rungms $NAME 00 8 8 &amp;gt;&amp;amp; $NAME.log&lt;br /&gt;
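A word on the ${NAME:-safety_net} expansion in the find command above: if NAME is unset, the shell substitutes the dummy pattern safety_net, so the find cannot accidentally match (and delete) everything under gamess-scratch.  A quick demonstration:&lt;br /&gt;

```shell
# With NAME unset, the fallback value is used...
unset NAME
echo "${NAME:-safety_net}"
# ...and with NAME set, its value wins.
NAME=mo_opt
echo "${NAME:-safety_net}"
```

NAME itself is presumably supplied at submission time, e.g. with qsub's -v flag (qsub -v NAME=mo_opt x.sh, where x.sh is an illustrative script name).&lt;br /&gt;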
&lt;br /&gt;
====A Script to Add the $VIB Group for Hessian Restarts in GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
Sometimes an optimization plus vibrational analysis, or just a plain vibrational analysis, must be restarted, either because the two-day time limit has been exceeded or because there was an error during the calculation.  In GAMESS(US), you can restart a vibrational analysis from a previous one, and it will utilize the frequencies that were already computed in the failed run.&lt;br /&gt;
&lt;br /&gt;
For example, if one submits the input file &amp;quot;job_name.inp&amp;quot; and it fails before it has finished, then one must utilize the file &amp;quot;job_name.rst&amp;quot;, which contains the data required to restart the calculation.  This file is located in the /scratch/user/gamess-scratch directory.  Data from &amp;quot;job_name.rst&amp;quot; must be appended to the end of the new input file (after the coordinates, and the ECP section if it is present) to restart the calculation; let us call this new file &amp;quot;job_name_restart.inp&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
A shortened version of the &amp;quot;job_name.rst&amp;quot; file looks like this:&lt;br /&gt;
&lt;br /&gt;
  ENERGY/GRADIENT/DIPOLE RESTART DATA FOR RUNTYP=HESSIAN&lt;br /&gt;
  job_name                           &lt;br /&gt;
  $VIB   &lt;br /&gt;
         IVIB=   0 IATOM=   0 ICOORD=   0 E=    -3717.1435124522&lt;br /&gt;
 -5.165258381E-04 1.584665821E-02-1.206270555E-02-2.241461728E-03 3.176050715E-03&lt;br /&gt;
 -5.706738823E-04 2.502034151E-03 5.130112290E-04-2.716945939E-03 1.357008279E-03&lt;br /&gt;
 -1.059915305E-03 1.693526456E-03-2.957638907E-04-5.994938737E-04 9.684054361E-04&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The text eventually ends with one blank line. The $VIB heading and all of the text after $VIB must be appended to the end of file &amp;quot;job_name_restart.inp&amp;quot; and then &amp;quot; $END&amp;quot; must be inserted at the very end of the file.&lt;br /&gt;
&lt;br /&gt;
One could cut and paste this in a text editor, but we have written a small script that will do it automatically.  We call it &amp;quot;vib.sh&amp;quot;, but you can call it whatever you like.  Here it is:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # script to add vibrational data for a hessian restart&lt;br /&gt;
 &lt;br /&gt;
 awk '/\$VIB/{p=1}p;END{print &amp;quot; $END&amp;quot;}' /scratch/user/gamess-scratch/$NAME1.rst &amp;gt;&amp;gt; $NAME2.inp&lt;br /&gt;
&lt;br /&gt;
To use it, simply copy it into a new text file with the extension &amp;quot;.sh&amp;quot; and make it executable.  You will also need to edit the location of the /scratch/user/gamess-scratch/ directory to match your user name.  The two variables in the script, NAME1 and NAME2, are the names of your &amp;quot;.rst&amp;quot; file and your new &amp;quot;.inp&amp;quot; file, respectively.  In the example above, NAME1=job_name (the name of the .rst file that contains the $VIB data and that was created in the /gamess-scratch/ directory) and NAME2=job_name_restart (the name of the new input file that you have prepared and want to copy the $VIB data into).&lt;br /&gt;
&lt;br /&gt;
To run it on a gpc node without submitting it to the job queue, type:&lt;br /&gt;
&lt;br /&gt;
 NAME1=job_name NAME2=job_name_restart ./vib.sh&lt;br /&gt;
&lt;br /&gt;
To run it in the queue, type:&lt;br /&gt;
&lt;br /&gt;
 qsub vib.sh -v NAME1=job_name,NAME2=job_name_restart &lt;br /&gt;
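To see what the script actually appends, here is the awk one-liner run on a shortened mock .rst file (all names and numbers illustrative):&lt;br /&gt;

```shell
# Shortened stand-in for the restart file GAMESS(US) writes.
cat > job_name.rst <<'EOF'
 ENERGY/GRADIENT/DIPOLE RESTART DATA FOR RUNTYP=HESSIAN
 job_name
 $VIB
        IVIB=   0 IATOM=   0 ICOORD=   0 E=    -3717.1435124522
EOF
# Print everything from the $VIB line onward, then close with " $END".
awk '/\$VIB/{p=1}p;END{print " $END"}' job_name.rst >> job_name_restart.inp
cat job_name_restart.inp
```

The appended text starts at the $VIB line, and the final line is &amp;quot; $END&amp;quot; with its required leading space.&lt;br /&gt;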
&lt;br /&gt;
-special thanks to Ramses for help with this&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  30 September 2010&lt;br /&gt;
&lt;br /&gt;
====Most Commonly Used Headers in The Fekl Lab====&lt;br /&gt;
&lt;br /&gt;
After about a year of using GAMESS(US), we have found that we most often run optimizations, frequency analyses, transition state searches and IRC calculations using DFT methods.  Here are the input decks that we have found work well for inorganic and organometallic compounds.&lt;br /&gt;
&lt;br /&gt;
=====Optimization Plus Frequency (for a neutral, singlet)=====&lt;br /&gt;
 &lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2800 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. DAMP=.T. $END&lt;br /&gt;
 $STATPT OPTTOL=0.00001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
 $FORCE TEMP=298.15 PURIFY=.t. PROJCT=.t. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====Frequency Only (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=HESSIAN DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2800 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. DAMP=.T. $END&lt;br /&gt;
 $FORCE METHOD=SEMINUM VIBANL=.TRUE. PROJCT=.T. PURIFY=.T. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====Transition State Search (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=SADPOINT DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2850 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. $END&lt;br /&gt;
 $STATPT STSTEP=0.05 OPTTOL=0.00001 NSTEP=500 HESS=CALC HSSEND=.t. &lt;br /&gt;
  STPT=.FALSE. $END&lt;br /&gt;
 $FORCE METHOD=SEMINUM VIBANL=.TRUE. PURIFY=.T. PROJCT=.T. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====IRC (Intrinsic Reaction Coordinate, here following the backward reaction) Calculation (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=IRC DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $IRC OPTTOL=0.00001 STRIDE=0.05 NPOINT=5000 SADDLE=.TRUE. FORWRD=.F.&lt;br /&gt;
 $END&lt;br /&gt;
 $SYSTEM TIMLIM=2850 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
 $FORCE TEMP=298.15 PURIFY=.t. PROJCT=.t. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 September 2010&lt;br /&gt;
&lt;br /&gt;
====How to Run an IRC Calculation Using GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
An IRC, or Intrinsic Reaction Coordinate, calculation follows the imaginary mode from the vibrational analysis of a transition state.  In GAMESS(US), you can choose to follow the forward (towards the products) or backward (towards the reactants) direction.  As shown in the IRC header that we use above, the direction of the calculation is controlled by the &amp;quot;FORWRD&amp;quot; keyword: &amp;quot;FORWRD=.T.&amp;quot; follows the forward direction, while &amp;quot;FORWRD=.F.&amp;quot; follows the backward direction.&lt;br /&gt;
&lt;br /&gt;
Let us say we want to perform an IRC calculation.  You must first perform a vibrational analysis of your molecule and check that there is exactly one imaginary (negative) frequency.  If so, the vibrational analysis completed successfully and there will be a file with the extension &amp;quot;.dat&amp;quot; (let us call it &amp;quot;job_name.dat&amp;quot;) in the &amp;quot;/users/$USER/gamess-scratch/&amp;quot; directory (where $USER is your user name).  This file contains data that is required for the IRC input file.&lt;br /&gt;
&lt;br /&gt;
To prepare your IRC input file, prepare an input file using the coordinates of the optimized transition-state structure.  These can come from ChemCraft, Avogadro or MacMolPlt - whatever you prefer to use.  Then copy and paste the IRC header above, or use your own parameters.  Call the file whatever you want, as long as it has an &amp;quot;.inp&amp;quot; extension; let us call it &amp;quot;irc_job.inp&amp;quot;.  &lt;br /&gt;
&lt;br /&gt;
You may want to adjust some of the header parameters.  For example, the &amp;quot;STRIDE&amp;quot; value determines the size of the step between each point on the IRC path.  If you increase the stride, say from 0.05 to 0.1, the steps between points become larger and you will approach the minimum faster (this gives you fewer data points should you choose to plot the IRC data).  Decreasing the stride, say from 0.05 to 0.01, makes the steps smaller, and you may not reach the minimum of the reaction coordinate in the allotted time.&lt;br /&gt;
&lt;br /&gt;
You should now have an input file with an IRC header, the coordinates of the transition state and basis set and ECP information called &amp;quot;irc_job.inp&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Now you need the &amp;quot;job_name.dat&amp;quot; file in the &amp;quot;/users/$USER/gamess-scratch/&amp;quot; directory.  This file contains a number of blocks of data that are sandwiched between a line containing only &amp;quot; $HESS&amp;quot; and a line containing only &amp;quot; $END&amp;quot;.  You need the LAST of these blocks, and it has to be copied and pasted directly below the last entry of your input file.&lt;br /&gt;
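&lt;br /&gt;
Putting this together, the assembled input file ends up with a layout like the following (a sketch only - the parenthesized lines are abbreviated placeholders, not literal input):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=IRC ... $END&lt;br /&gt;
 $IRC ... $END&lt;br /&gt;
 (the remaining groups from the IRC header above)&lt;br /&gt;
 $DATA&lt;br /&gt;
 (title, symmetry, coordinates and basis set/ECP data)&lt;br /&gt;
 $END&lt;br /&gt;
 $HESS&lt;br /&gt;
 (the last $HESS block copied from job_name.dat)&lt;br /&gt;
 $END&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;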
&lt;br /&gt;
This can be difficult and time-consuming, as the .dat files can be very large (sometimes over 150 MB) and cumbersome to navigate.  However, we have written a script, similar to the vib.sh script above, that can help you out with this.  Basically, this script does all the copying and pasting for you.  &lt;br /&gt;
&lt;br /&gt;
Here it is:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # script to add hessian data for an IRC calculation&lt;br /&gt;
 &lt;br /&gt;
 awk '/\$HESS/{arr=&amp;quot;&amp;quot;;f=1} f {arr=(arr)?arr ORS $0:$0} /\$END/{f=0} END {print arr}' /scratch/$USER/gamess-scratch/$DAT.dat &amp;gt;&amp;gt; $IN.inp&lt;br /&gt;
&lt;br /&gt;
To use it, simply copy it into a new text file with the name &amp;quot;irc.sh&amp;quot; and make it executable.  If your gamess-scratch directory lives somewhere other than /scratch/$USER/gamess-scratch/, edit the path in the script accordingly.  The two variables in the script, $DAT and $IN, are the names (without extensions) of your &amp;quot;.dat&amp;quot; file and your new &amp;quot;.inp&amp;quot; file, respectively.  Using our current example, DAT=job_name (the .dat file containing the $HESS data, created in the gamess-scratch directory) and IN=irc_job (the new input file that you have prepared and want to copy the $HESS data into). &lt;br /&gt;
&lt;br /&gt;
To run it on a GPC node without submitting it to the job queue, type:&lt;br /&gt;
&lt;br /&gt;
 DAT=job_name IN=irc_job ./irc.sh&lt;br /&gt;
&lt;br /&gt;
To run it in the queue, type:&lt;br /&gt;
&lt;br /&gt;
 qsub irc.sh -v DAT=job_name,IN=irc_job &lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 October 2010&lt;br /&gt;
&lt;br /&gt;
===Vienna Ab-initio Simulation Package (VASP)===&lt;br /&gt;
Please refer to the VASP page.&lt;br /&gt;
&lt;br /&gt;
User supplied content below.&lt;br /&gt;
&lt;br /&gt;
====Tips from the Polanyi Lab====&lt;br /&gt;
Using VASP on SciNet&lt;br /&gt;
&lt;br /&gt;
Log on using SSH:&lt;br /&gt;
login.scinet.utoronto.ca&lt;br /&gt;
&lt;br /&gt;
then ssh to the TCS cluster:&lt;br /&gt;
ssh tcs01&lt;br /&gt;
&lt;br /&gt;
Change to a convenient working directory, e.g.&lt;br /&gt;
cd /scratch/imcnab/test/Si111&lt;br /&gt;
&lt;br /&gt;
VASP is contained in the directory imcnab/bin&lt;br /&gt;
&lt;br /&gt;
To submit a job, first edit (at least) the POSCAR file and other VASP&lt;br /&gt;
input files as necessary.&lt;br /&gt;
&lt;br /&gt;
=====Input Files=====&lt;br /&gt;
The minimum set of input files is:&lt;br /&gt;
&lt;br /&gt;
'''vasp.script''' - script file telling TCS to run a VASP job - must be edited to run in current working directory.&lt;br /&gt;
&lt;br /&gt;
'''POSCAR''' - specifies the supercell geometry and &amp;quot;ionic&amp;quot; positions (i.e. atomic centres), and whether relaxation is allowed. Ionic positions may be given in cartesian coordinates (x,y,z in Å) or in &amp;quot;direct&amp;quot; (fractional) coordinates, which are fractions of the unit-cell vectors. CONTCAR is always in fractional coordinates, so after the first run of any job you'll find yourself running in fractional coordinates. VMD can be used to convert these back to cartesian coordinates.&lt;br /&gt;
&lt;br /&gt;
'''INCAR''' - specifies parameters to run the job. INCAR is free format - can put input commands in ANY order.&lt;br /&gt;
&lt;br /&gt;
'''POTCAR''' - specifies the potentials to use for each atomic type. Must be in the same order as the atoms are first met in POSCAR&lt;br /&gt;
&lt;br /&gt;
'''KPOINTS''' - specifies the number and position of K-points to use in the calculation.&lt;br /&gt;
&lt;br /&gt;
Any change of name or directory needs to be edited into the job script. The job script name is &amp;quot;vasp.script&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
VASP attempts to read initial wavefunctions from WAVECAR, so if a job is run in steps, leaving the WAVECAR file in the working directory is an efficient way to start the next stage of the calculation.&lt;br /&gt;
&lt;br /&gt;
VASP also writes CONTCAR which is of the same format as POSCAR, and can simply be renamed if it is to be used as the starting point for a new job.&lt;br /&gt;
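&lt;br /&gt;
This restart bookkeeping can be sketched in a couple of shell commands (a minimal demo in a throwaway directory - in a real run directory, CONTCAR and WAVECAR are produced by VASP itself):&lt;br /&gt;

```shell
# Demo in a throwaway directory; a real run directory would already
# contain CONTCAR (and WAVECAR) from the previous VASP run.
cd "$(mktemp -d)"
printf 'relaxed geometry placeholder\n' > CONTCAR   # stand-in for VASP output

# The relaxed ionic positions become the input geometry for the next stage:
cp CONTCAR POSCAR

# WAVECAR is simply left in place so VASP reuses the converged wavefunctions.
ls POSCAR
```
&lt;br /&gt;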
&lt;br /&gt;
&lt;br /&gt;
Submit the job to LoadLeveler with the command &amp;quot;llsubmit ./vasp.script&amp;quot; from the correct working directory.&lt;br /&gt;
&lt;br /&gt;
You can check the status of a job with &amp;quot;llq&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
You can cancel a job using &amp;quot;llcancel tcs-fXXnYY.$PID&amp;quot;, where the tcs node number etc. is shown by llq.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== GENERAL NOTES =====&lt;br /&gt;
&lt;br /&gt;
It is MUCH faster to use ISPIN=1 (non-spin-polarized, corresponding to RHF) than ISPIN=2 (spin-polarized, corresponding to UHF). So far, I've not found a system where the atom positions differ, or where the calculated electronic energy differs by more than 1E-4 (the convergence criterion set).&lt;br /&gt;
&lt;br /&gt;
It is also MUCH faster to use real-space projection: LREAL = A, NSIM = 4. &lt;br /&gt;
&lt;br /&gt;
So, ''always'' optimize in real space first, then re-optimize in reciprocal space. This does NOT guarantee a one-step optimization in reciprocal space; you may still need to progressively relax a large system.&lt;br /&gt;
&lt;br /&gt;
'''Relaxing a large system.'''&lt;br /&gt;
If you attempt to relax a large system in one step, it will usually fail.&lt;br /&gt;
&lt;br /&gt;
The starting geometry is usually an unrelaxed molecule above an unrelaxed surface.&lt;br /&gt;
The bottom plane of the surface will NEVER be relaxed, because this corresponds to the fixed boundary condition of REALITY. &lt;br /&gt;
&lt;br /&gt;
First, relax the molecule alone.  Assuming you have already found a good starting position from single-point calculations, place the molecule closer to the surface than you think it should be (say 0.9 VdW radii away).&lt;br /&gt;
&lt;br /&gt;
Then ALSO allow the top layer of the surface to relax.&lt;br /&gt;
Then ALSO allow the second top layer of the surface to relax... etc... etc.&lt;br /&gt;
&lt;br /&gt;
If this DOESN'T WORK: Then relax X,Y and Z separately in iterations.&lt;br /&gt;
For example, for the following problem, representing layers of the crystal going DOWN from the top (Z pointing to the top of the screen):&lt;br /&gt;
&lt;br /&gt;
Molecule&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we can try the following relaxation schemes:&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Successive relaxation, Layer by Layer:&amp;lt;br /&amp;gt;&lt;br /&gt;
(1) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(2)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(3)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
etc. etc... If this works, then you're fine. However, it can happen that even by Layer 2 you're running into real problems, and the ionic relaxation NEVER converges, in which case I have found the following scheme (and variations thereof) useful:&lt;br /&gt;
&lt;br /&gt;
(1)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(2) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(3) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
IF (3) DOESN'T converge THEN TRY&lt;br /&gt;
&lt;br /&gt;
(2')&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule Z   Relax, XY FIXED&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  Z   Relax, XY FIXED&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- you are allowing the top layers to move only UP or DOWN, while allowing the intermediate layer 2 to fully relax (actually, there is no way of telling VASP to move ALL atoms by the SAME deltaZ, but that appears to be the effect).&lt;br /&gt;
Followed by&lt;br /&gt;
&lt;br /&gt;
(2&amp;quot;)&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If (2&amp;quot;) doesn't work, you need to go back to the output of (2') and vary the cycle - perhaps something like:&lt;br /&gt;
(2&amp;quot;')&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then try (2&amp;quot;) again.&lt;br /&gt;
&lt;br /&gt;
Repeat as necessary. This scheme does appear to work quite well for big unit cells. It can be very difficult to relax as many layers as necessary in a big unit cell.&lt;br /&gt;
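&lt;br /&gt;
In POSCAR terms, the per-atom flags for scheme (2') above look something like this (a sketch - the coordinates are illustrative, and the trailing labels after the flags are ignored by VASP):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Selective dynamics&lt;br /&gt;
Direct&lt;br /&gt;
 0.50 0.50 0.60  F F T   molecule atom: Z relaxes, XY fixed&lt;br /&gt;
 0.25 0.25 0.50  F F T   layer 1 atom:  Z relaxes, XY fixed&lt;br /&gt;
 0.00 0.00 0.40  T T T   layer 2 atom:  fully relaxed&lt;br /&gt;
 0.50 0.00 0.30  F F F   layer 3 atom:  fixed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;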
&lt;br /&gt;
Experience on the One Per Corner Hole problem shows that it may be necessary to have a large number of UNRELAXED (i.e. BULK silicon) layers underneath the relaxed layers in order to get physically meaningful answers. This is because silicon is so elastic.&lt;br /&gt;
&lt;br /&gt;
===== Problems and solutions: =====&lt;br /&gt;
&lt;br /&gt;
If you are getting ZBRENT errors, try changing ALGO. We usually use ALGO = Fast; change to ALGO = Normal. With ALGO = Normal, NFREE now DOES correspond to degrees of freedom (maximum suggested setting is 20). We haven't found this terribly helpful.&lt;br /&gt;
&lt;br /&gt;
Many calculations seem to fail after 20 or 30 ionic steps. I suspect a memory leak.&lt;br /&gt;
&lt;br /&gt;
Sometimes the calculation appears to lose WAVECAR... this is not a disaster; it just means a slight increase in start-up time while the first wavefunction is calculated.&lt;br /&gt;
&lt;br /&gt;
If a calculation does not finish cleanly, you can force WAVECAR generation by doing a purely electronic calculation (these are pretty fast).&lt;br /&gt;
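&lt;br /&gt;
Such a purely electronic (static) run can be requested with INCAR settings like these (a sketch; keep the rest of your normal INCAR):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
NSW = 0        ! no ionic steps&lt;br /&gt;
IBRION = -1    ! do not move the ions&lt;br /&gt;
LWAVE = .TRUE. ! write WAVECAR at the end (the default)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;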
&lt;br /&gt;
VASP is VERY slow at relaxing molecules at surfaces. This is because it doesn't know a molecule is a connected entity. It treats every atom independently. &lt;br /&gt;
&lt;br /&gt;
THEREFORE, MUCH MUCH faster to try molecular positions by hand first. &lt;br /&gt;
Do some sample calculations at a few geometries to find a good starting point.&lt;br /&gt;
&lt;br /&gt;
ALSO, once you think you know where the molecule is to be placed, put it too close to the surface, and let it relax outwards... the forces close to the surface are repulsive, and much steeper, so relaxation is FASTER in this direction.&lt;br /&gt;
&lt;br /&gt;
=='''Climate Modelling'''==&lt;br /&gt;
&lt;br /&gt;
The Community Earth System Model (CESM) is a fully-coupled, global climate model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate states.&lt;br /&gt;
&lt;br /&gt;
Development of a comprehensive CESM that accurately represents the principal components of the climate system and their couplings requires both wide intellectual participation and computing capabilities beyond those available to most U.S. institutions. The CESM, therefore, must include an improved framework for coupling existing and future component models developed at multiple institutions, to permit rapid exploration of alternate formulations. This framework must be amenable to components of varying complexity and at varying resolutions, in accordance with a balance of scientific needs and resource demands. In particular, the CESM must accommodate an active program of simulations and evaluations, using an evolving model to address scientific issues and problems of national and international policy interest.&lt;br /&gt;
&lt;br /&gt;
User guides and information on each version of the model can be found at the following links:&lt;br /&gt;
&lt;br /&gt;
CCSM3: http://www.cesm.ucar.edu/models/ccsm3.0/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/&lt;br /&gt;
&lt;br /&gt;
===[[Installing CCSM4]]===&lt;br /&gt;
&lt;br /&gt;
===[[Running CCSM4]]===&lt;br /&gt;
&lt;br /&gt;
===[[Post Processing CCSM Output]]===&lt;br /&gt;
&lt;br /&gt;
===[[CCSM4/CESM1 TCS Simulation List]]===&lt;br /&gt;
&lt;br /&gt;
==Medicine/Bio==&lt;br /&gt;
&lt;br /&gt;
==High Energy Physics==&lt;br /&gt;
&lt;br /&gt;
==Structural Biology==&lt;br /&gt;
Molecular simulation of proteins, lipids, carbohydrates, and other biologically relevant molecules.&lt;br /&gt;
===Molecular Dynamics (MD) simulation===&lt;br /&gt;
====GROMACS====&lt;br /&gt;
Please refer to the [[gromacs|GROMACS]] page&lt;br /&gt;
====AMBER====&lt;br /&gt;
Please refer to the [[amber|AMBER]] page&lt;br /&gt;
====NAMD====&lt;br /&gt;
NAMD is one of the better-scaling MD packages out there. With sufficiently large systems, it is able to scale to hundreds or thousands of cores on SciNet. Below are details for compiling and running NAMD on SciNet.&lt;br /&gt;
&lt;br /&gt;
More information regarding performance and different compile options coming soon...&lt;br /&gt;
&lt;br /&gt;
=====Compiling NAMD for GPC=====&lt;br /&gt;
Ensure the proper compiler/mpi modules are loaded.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
module load intel&lt;br /&gt;
module load openmpi/1.3.3-intel-v11.0-ofed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Compile Charm++ and NAMD'''&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
#Unpack source files and get required support libraries&lt;br /&gt;
tar -xzf NAMD_2.7b1_Source.tar.gz&lt;br /&gt;
cd NAMD_2.7b1_Source&lt;br /&gt;
tar -xf charm-6.1.tar&lt;br /&gt;
wget http://www.ks.uiuc.edu/Research/namd/libraries/fftw-linux-x86_64.tar.gz&lt;br /&gt;
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl-linux-x86_64.tar.gz&lt;br /&gt;
tar -xzf fftw-linux-x86_64.tar.gz; mv linux-x86_64 fftw&lt;br /&gt;
tar -xzf tcl-linux-x86_64.tar.gz; mv linux-x86_64 tcl&lt;br /&gt;
#Compile Charm++&lt;br /&gt;
cd charm-6.1&lt;br /&gt;
./build charm++ mpi-linux-x86_64 icc --basedir /scinet/gpc/mpi/openmpi/1.3.3-intel-v11.0-ofed/ --no-shared -O -DCMK_OPTIMIZE=1&lt;br /&gt;
cd ..&lt;br /&gt;
#Compile NAMD. &lt;br /&gt;
#Edit arch/Linux-x86_64-icc.arch and add &amp;quot;-lmpi&amp;quot; to the end of the CXXOPTS and COPTS line.&lt;br /&gt;
#Make a builds directory if you want different versions of NAMD compiled at the same time.&lt;br /&gt;
mkdir builds&lt;br /&gt;
./config builds/Linux-x86_64-icc --charm-arch mpi-linux-x86_64-icc&lt;br /&gt;
cd builds/Linux-x86_64-icc/&lt;br /&gt;
make -j4 namd2 # Adjust value of j as desired to specify number of simultaneous make targets. &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
--[[User:Cmadill|Cmadill]] 16:18, 27 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
=====Running Fortran=====&lt;br /&gt;
On the development nodes, there is an old gcc. The associated libraries are not on the compute nodes. Ensure the line:&lt;br /&gt;
&lt;br /&gt;
module load gcc&lt;br /&gt;
&lt;br /&gt;
is in your .bashrc file.&lt;br /&gt;
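&lt;br /&gt;
For example, the line can be added safely like this (a sketch - the demo works on a temporary file, but in practice you would edit ~/.bashrc itself):&lt;br /&gt;

```shell
# Demo on a temporary file standing in for ~/.bashrc:
rc="$(mktemp)"

# Append the module load line only if it is not already present:
grep -qx 'module load gcc' "$rc" || echo 'module load gcc' >> "$rc"

cat "$rc"
```
&lt;br /&gt;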
&lt;br /&gt;
====LAMMPS====&lt;br /&gt;
[[Image:StrongScalingLAMMPS.png|thumb|320px|right|Strong scaling test on GPC with OpenMPI and IntelMPI on Ethernet and InfiniBand]]&lt;br /&gt;
[[Image:WeakScalingLAMMPS.png|thumb|320px|right|Weak scaling test on GPC with OpenMPI and IntelMPI on Ethernet and InfiniBand]]&lt;br /&gt;
LAMMPS is a parallel MD code that can be found [http://lammps.sandia.gov/ here].&lt;br /&gt;
&lt;br /&gt;
'''Scaling Tests on GPC'''&lt;br /&gt;
&lt;br /&gt;
Results from strong scaling tests for LAMMPS using EAM potentials on GPC are shown in the graph on the right.  Test simulation ran 500 timesteps for 4,000,000 atoms.&lt;br /&gt;
&lt;br /&gt;
Results from weak scaling tests for LAMMPS using EAM potentials on GPC are shown in the graph on the right.  Test simulation ran 500 timesteps for 32,000 atoms per processor.&lt;br /&gt;
&lt;br /&gt;
OpenMPI version used: openmpi/1.4.1-intel-v11.0-ofed&lt;br /&gt;
&lt;br /&gt;
IntelMPI version used: intelmpi/impi-4.0.0.013&lt;br /&gt;
&lt;br /&gt;
LAMMPS version used: 15 Jan 2010&lt;br /&gt;
&lt;br /&gt;
'''Summary of Scaling Tests'''&lt;br /&gt;
&lt;br /&gt;
Results show good scaling for both OpenMPI and IntelMPI on Ethernet up to 16 processors, after which performance begins to suffer.  On InfiniBand, excellent scaling is maintained to 512 processors.&lt;br /&gt;
&lt;br /&gt;
IntelMPI shows slightly better performance compared to OpenMPI when running with InfiniBand.&lt;br /&gt;
&lt;br /&gt;
--[[User:jchu|jchu]] 14:08 Feb 2, 2010&lt;br /&gt;
&lt;br /&gt;
===Monte Carlo (MC) simulation===&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2472</id>
		<title>CCSM4/CESM1 TCS Simulation List</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=CCSM4/CESM1_TCS_Simulation_List&amp;diff=2472"/>
		<updated>2011-01-03T18:55:39Z</updated>

		<summary type="html">&lt;p&gt;Guido: Created page with &amp;quot;This page will be used to record information about CESM1/CCSM4 simulations that are being conducted on the TCS system. This will help in preventing the duplication of simulations...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page will be used to record information about CESM1/CCSM4 simulations that are being conducted on the TCS system. This will help in preventing the duplication of simulations (like control runs)&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=User_Codes&amp;diff=2471</id>
		<title>User Codes</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=User_Codes&amp;diff=2471"/>
		<updated>2011-01-03T18:55:11Z</updated>

		<summary type="html">&lt;p&gt;Guido: /* CCSM4/CESM1 TCS Simulation List */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
==Astrophysics==&lt;br /&gt;
&lt;br /&gt;
===Athena (explicit, uniform grid MHD code)===&lt;br /&gt;
&lt;br /&gt;
[[Image:StrongScalingAthenaGPC.png|thumb|right|320px|Athena scaling on GPC with OpenMPI and MVAPICH2 on GigE, and OpenMPI on InfiniBand]]&lt;br /&gt;
&lt;br /&gt;
[http://www.astro.princeton.edu/~jstone/athena.html Athena] is a straightforward C code which doesn't use a lot of libraries, so it is easy to build and compile on new machines.   &lt;br /&gt;
&lt;br /&gt;
It encapsulates its compiler flags, etc in an &amp;lt;tt&amp;gt;Makeoptions.in&amp;lt;/tt&amp;gt; file which is then processed by &amp;lt;tt&amp;gt;configure&amp;lt;/tt&amp;gt;.   I've used the following additions to &amp;lt;tt&amp;gt;Makeoptions.in&amp;lt;/tt&amp;gt; on TCS and GPC:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
ifeq ($(MACHINE),scinettcs)&lt;br /&gt;
  CC = mpcc_r&lt;br /&gt;
  LDR = mpcc_r&lt;br /&gt;
  OPT = -O5 -q64 -qarch=pwr6 -qtune=pwr6 -qcache=auto -qlargepage -qstrict&lt;br /&gt;
  MPIINC =&lt;br /&gt;
  MPILIB =&lt;br /&gt;
  CFLAGS = $(OPT)&lt;br /&gt;
  LIB = -ldl -lm&lt;br /&gt;
else&lt;br /&gt;
ifeq ($(MACHINE),scinetgpc)&lt;br /&gt;
  CC = mpicc&lt;br /&gt;
  LDR = mpicc&lt;br /&gt;
  OPT = -O3&lt;br /&gt;
  MPIINC =&lt;br /&gt;
  MPILIB =&lt;br /&gt;
  CFLAGS = $(OPT)&lt;br /&gt;
  LIB = -lm&lt;br /&gt;
else&lt;br /&gt;
...&lt;br /&gt;
endif&lt;br /&gt;
endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
It performs quite well on the GPC, scaling extremely well even on a strong scaling test out to about 256 cores (32 nodes) on Gigabit ethernet, and performing beautifully on InfiniBand out to 512 cores (64 nodes). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Ljdursi|ljdursi]]  19:20, 13 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
===FLASH3 (Adaptive Mesh reactive hydrodynamics; explict hydro/MHD)===&lt;br /&gt;
&lt;br /&gt;
[[Image:weak-scaling-example.png|thumb|right|320px|Weak scaling test of the 2d sod problem on both the GPC and TCS.  The results are actually somewhat faster on the GPC; in both cases (weak) scaling is very good out at least to 256 cores]]&lt;br /&gt;
&lt;br /&gt;
[http://flash.uchicago.edu FLASH] encapsulates its machine-dependent information in the &amp;lt;tt&amp;gt;FLASH3/sites&amp;lt;/tt&amp;gt; directory.  For the GPC, you'll have to&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load intel&lt;br /&gt;
module load openmpi&lt;br /&gt;
module load hdf5/184-p1-v16-openmpi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and with that, the following file (&amp;lt;tt&amp;gt;sites/scinetgpc/Makefile.h&amp;lt;/tt&amp;gt;) works for me:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
## Must do module load hdf5/184-p1-v16-openmpi&lt;br /&gt;
HDF5_PATH = ${SCINET_HDF5_BASE}&lt;br /&gt;
ZLIB_PATH = /usr/local&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Compiler and linker commands&lt;br /&gt;
#&lt;br /&gt;
#  We use the f90 compiler as the linker, so some C libraries may explicitly&lt;br /&gt;
#  need to be added into the link line.&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
## modules will put the right mpi in our path&lt;br /&gt;
FCOMP   = mpif77&lt;br /&gt;
CCOMP   = mpicc&lt;br /&gt;
CPPCOMP = mpiCC&lt;br /&gt;
LINK    = mpif77&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Compilation flags&lt;br /&gt;
#&lt;br /&gt;
#  Three sets of compilation/linking flags are defined: one for optimized&lt;br /&gt;
#  code, one for testing, and one for debugging.  The default is to use the &lt;br /&gt;
#  _OPT version.  Specifying -debug to setup will pick the _DEBUG version,&lt;br /&gt;
#  these should enable bounds checking.  Specifying -test is used for &lt;br /&gt;
#  flash_test, and is set for quick code generation, and (sometimes) &lt;br /&gt;
#  profiling.  The Makefile generated by setup will assign the generic token &lt;br /&gt;
#  (ex. FFLAGS) to the proper set of flags (ex. FFLAGS_OPT).&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
FFLAGS_OPT   =  -c -r8 -i4 -O3 -xSSE4.2&lt;br /&gt;
FFLAGS_DEBUG =  -c -g -r8 -i4 -O0&lt;br /&gt;
FFLAGS_TEST  =  -c -r8 -i4&lt;br /&gt;
&lt;br /&gt;
LIB_HDF5 = -L${HDF5_PATH}/lib -lhdf5 -L${SCINET_ZLIB_LIB} -lz -lgpfs&lt;br /&gt;
&lt;br /&gt;
# if we are using HDF5, we need to specify the path to the include files&lt;br /&gt;
CFLAGS_HDF5  = -I${HDF5_PATH}/include&lt;br /&gt;
&lt;br /&gt;
CFLAGS_OPT   = -c -O3 -xSSE4.2&lt;br /&gt;
CFLAGS_TEST  = -c -O2 &lt;br /&gt;
CFLAGS_DEBUG = -c -g  &lt;br /&gt;
&lt;br /&gt;
MDEFS = &lt;br /&gt;
&lt;br /&gt;
.SUFFIXES: .o .c .f .F .h .fh .F90 .f90&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Linker flags&lt;br /&gt;
#&lt;br /&gt;
#  There is a separate version of the linker flags for each of the _OPT, &lt;br /&gt;
#  _DEBUG, and _TEST cases.&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
LFLAGS_OPT   = -o&lt;br /&gt;
LFLAGS_TEST  = -o&lt;br /&gt;
LFLAGS_DEBUG = -g -o&lt;br /&gt;
&lt;br /&gt;
MACHOBJ = &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MV = mv -f&lt;br /&gt;
AR = ar -r&lt;br /&gt;
RM = rm -f&lt;br /&gt;
CD = cd&lt;br /&gt;
RL = ranlib&lt;br /&gt;
ECHO = echo&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Ljdursi|ljdursi]] 22:11, 13 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Aeronautics==&lt;br /&gt;
&lt;br /&gt;
==Chemistry==&lt;br /&gt;
&lt;br /&gt;
===CPMD===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[Cpmd | CPMD]] page.&lt;br /&gt;
&lt;br /&gt;
===NWChem===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[Nwchem | NWChem]] page.&lt;br /&gt;
&lt;br /&gt;
===GAMESS (US)===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[gamess|GAMESS (US)]] page.&lt;br /&gt;
&lt;br /&gt;
User supplied content below.&lt;br /&gt;
&lt;br /&gt;
====Tips from the Fekl Lab====&lt;br /&gt;
&lt;br /&gt;
Through trial and error, we have found a few useful things that we would like to share:&lt;br /&gt;
&lt;br /&gt;
1. Two very useful, open-source programs for visualization of output files from GAMESS(US) and for generation of input files are [http://www.scl.ameslab.gov/MacMolPlt/ MacMolPlt] and [http://avogadro.openmolecules.net/wiki/Main_Page Avogadro].  They are available for UNIX/Linux, Windows and Mac based machines, HOWEVER: any input files that we have generated with these programs on a Windows-based machine do not run on Mac based machines.  We don't know why.&lt;br /&gt;
&lt;br /&gt;
2. [http://winscp.net/eng/index.php WinSCP] is a very useful tool with a graphical user interface for moving files between a local machine and SciNet.  It also has text editing capabilities.&lt;br /&gt;
&lt;br /&gt;
3. The [https://bse.pnl.gov/bse/portal EMSL Basis Set Exchange] is an excellent source for custom basis set or effective core potential parameters.  Make sure that you specify &amp;quot;Gamess-US&amp;quot; in the format drop-down box.&lt;br /&gt;
&lt;br /&gt;
4.  The commercial program [http://www.chemcraftprog.com/ ChemCraft] is a highly useful visualization program that has the ability to edit molecules in a very similar fashion to GaussView.  It can also be customized to build GAMESS(US) input files.&lt;br /&gt;
&lt;br /&gt;
====Anatomy of a GAMESS(US) Input File with Basis Set Info in an External File====&lt;br /&gt;
&lt;br /&gt;
  $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=M06-L MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
   ECP=READ $END&lt;br /&gt;
  $SYSTEM TIMLIM=525600 MWORDS=1750 PARALL=.TRUE. $END&lt;br /&gt;
  $BASIS GBASIS=CUSTOMNI EXTFIL=.t. $END&lt;br /&gt;
  $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
  $STATPT OPTTOL=0.0001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
  $DATA&lt;br /&gt;
  Mo_BDT3&lt;br /&gt;
 C1&lt;br /&gt;
 MOLYBDENUM 42.0      5.7556500000      4.4039600000     16.5808400000&lt;br /&gt;
 SULFUR     16.0      7.4169700000      3.1956300000     15.2089300000&lt;br /&gt;
 SULFUR     16.0      4.0966800000      3.2258300000     15.1761100000&lt;br /&gt;
 SULFUR     16.0      3.9677300000      4.4940500000     18.3266100000&lt;br /&gt;
 SULFUR     16.0      7.1776900000      3.5815000000     18.4485200000&lt;br /&gt;
 SULFUR     16.0      4.3776600000      6.2447400000     15.6786900000&lt;br /&gt;
 SULFUR     16.0      7.5478700000      6.0679800000     16.2223700000&lt;br /&gt;
 CARBON      6.0      6.4716900000      2.1004800000     14.1902300000&lt;br /&gt;
 CARBON      6.0      5.0690300000      2.1781400000     14.1080700000&lt;br /&gt;
 CARBON      6.0      4.8421800000      4.2701300000     19.8855500000&lt;br /&gt;
 CARBON      6.0      6.1969000000      3.9249600000     19.9397400000&lt;br /&gt;
 CARBON      6.0      6.8280600000      3.7834200000     21.1913200000&lt;br /&gt;
 CARBON      6.0      5.7697600000      7.6933500000     17.4241800000&lt;br /&gt;
 CARBON      6.0      7.2043100000      7.9413600000     17.8281100000&lt;br /&gt;
 CARBON      6.0      5.5051400000      7.0409700000     14.5903800000&lt;br /&gt;
 CARBON      6.0      6.8905200000      6.9194700000     14.7626200000&lt;br /&gt;
 CARBON      6.0      7.7396400000      7.5379800000     13.8285700000&lt;br /&gt;
 HYDROGEN    1.0      8.8190700000      7.4520600000     13.9252200000&lt;br /&gt;
 CARBON      6.0      7.2169400000      8.2960300000     12.7704100000&lt;br /&gt;
 HYDROGEN    1.0      7.8667000000      8.7825100000     12.0575600000&lt;br /&gt;
 CARBON      6.0      5.8260300000      8.4502300000     12.6467800000&lt;br /&gt;
 HYDROGEN    1.0      5.4143000000      9.0544300000     11.8493100000&lt;br /&gt;
 CARBON      6.0      4.9881500000      7.8192300000     13.5528400000&lt;br /&gt;
 HYDROGEN    1.0      3.9090500000      7.9420000000     13.4583700000&lt;br /&gt;
 CARBON      6.0      7.1538500000      1.1569600000     13.4143900000&lt;br /&gt;
 CARBON      6.0      4.4018100000      1.3603900000     13.1919900000&lt;br /&gt;
 CARBON      6.0      6.4791600000      0.3185500000     12.5353300000&lt;br /&gt;
 CARBON      6.0      5.0837400000      0.4369500000     12.4084900000&lt;br /&gt;
 HYDROGEN    1.0      7.0116000000     -0.4099400000     11.9434600000&lt;br /&gt;
 HYDROGEN    1.0      8.2399000000      1.0702400000     13.4937600000&lt;br /&gt;
 HYDROGEN    1.0      3.3185600000      1.4368700000     13.0953100000&lt;br /&gt;
 HYDROGEN    1.0      4.5549800000     -0.1997300000     11.7165200000&lt;br /&gt;
 CARBON      6.0      6.1105700000      3.9639000000     22.3866100000&lt;br /&gt;
 CARBON      6.0      4.1216300000      4.4424400000     21.1020100000&lt;br /&gt;
 HYDROGEN    1.0      7.8732900000      3.5217100000     21.2520500000&lt;br /&gt;
 CARBON      6.0      4.7606000000      4.2868500000     22.3363800000&lt;br /&gt;
 HYDROGEN    1.0      6.6064200000      3.8406000000     23.3428500000&lt;br /&gt;
 HYDROGEN    1.0      4.2065000000      4.4170700000     23.2667100000&lt;br /&gt;
 HYDROGEN    1.0      3.0674000000      4.6893500000     21.0889000000&lt;br /&gt;
 HYDROGEN    1.0      7.4249200000      7.7545300000     18.8583200000&lt;br /&gt;
 HYDROGEN    1.0      7.6651700000      8.9049700000     17.7652100000&lt;br /&gt;
 HYDROGEN    1.0      5.3324000000      8.6487800000     17.2222700000&lt;br /&gt;
 HYDROGEN    1.0      5.5015000000      7.1039000000     18.2759400000&lt;br /&gt;
  $END&lt;br /&gt;
  $ECP&lt;br /&gt;
 MO-ECP GEN     28     3&lt;br /&gt;
  5      ----- f potential     -----&lt;br /&gt;
     -0.0469492        0    537.9667807        &lt;br /&gt;
    -20.2080084        1    147.8982938        &lt;br /&gt;
   -106.2116302        2     45.7358898        &lt;br /&gt;
    -41.8107368        2     13.2911467        &lt;br /&gt;
     -4.2054103        2      4.7059961        &lt;br /&gt;
  3      ----- s-f potential     -----&lt;br /&gt;
      2.8063717        0    110.2991760        &lt;br /&gt;
     44.5162012        1     23.2014645        &lt;br /&gt;
     82.7785227        2      5.3530131        &lt;br /&gt;
  4      ----- p-f potential     -----&lt;br /&gt;
      4.9420876        0     63.2901397        &lt;br /&gt;
     25.8604976        1     23.3315302        &lt;br /&gt;
    132.4708742        2     24.6759423        &lt;br /&gt;
     57.3149794        2      4.6493040        &lt;br /&gt;
  5      ----- d-f potential     -----&lt;br /&gt;
      3.0054591        0    104.4839977        &lt;br /&gt;
     26.3637851        1     66.2307245        &lt;br /&gt;
    183.3849199        2     39.1283176        &lt;br /&gt;
     98.4453068        2     13.1164437        &lt;br /&gt;
     22.4901377        2      3.6280263 &lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
=====The Input Deck=====&lt;br /&gt;
&lt;br /&gt;
Below is the input deck.  It is where you tell GAMESS(US) what job type to execute and where all your individual parameters are entered for your specific job type.  The example input deck below is for a geometry optimization and frequency calculation.  This input deck is equivalent to a Gaussian job with &amp;quot;opt&amp;quot; and &amp;quot;freq&amp;quot; in the route section.&lt;br /&gt;
&lt;br /&gt;
  $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=M06-L MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
   ECP=READ $END&lt;br /&gt;
  $SYSTEM TIMLIM=2850 MWORDS=1750 MEMDDI=20 PARALL=.TRUE. $END&lt;br /&gt;
  $BASIS GBASIS=CUSTOMNI EXTFIL=.t. $END&lt;br /&gt;
  $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
  $STATPT OPTTOL=0.0001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
  $DATA&lt;br /&gt;
&lt;br /&gt;
An important thing to note is the spacing: there must be one space at the beginning of each line of the input deck, or the job will fail.  Most builders insert this space anyway, but it helps to double-check.&lt;br /&gt;
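&lt;br /&gt;
As a quick check, one can grep the input file for $-group lines that start in column 1; any hit will make the job fail.  This is only a minimal sketch: &amp;quot;job.inp&amp;quot; is a placeholder file, created here with one deliberately bad line.&lt;br /&gt;

```shell
# Sketch: catch $-group lines that are missing the required leading space.
# "job.inp" is a placeholder input file, created here with one bad line.
printf ' $CONTRL SCFTYP=RHF $END\n$SYSTEM TIMLIM=2850 $END\n' > job.inp
grep -n '^\$' job.inp   # any line reported here starts in column 1
```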
&lt;br /&gt;
The end of the input deck is marked by the &amp;quot;$DATA&amp;quot; line.&lt;br /&gt;
&lt;br /&gt;
=====Job Title Line=====&lt;br /&gt;
&lt;br /&gt;
The next line of the file is the job title.  It can be anything you wish; however, to be on the safe side, we avoid using symbols or spaces.&lt;br /&gt;
&lt;br /&gt;
  Mo_BDT3&lt;br /&gt;
&lt;br /&gt;
=====Symmetry Point Group=====&lt;br /&gt;
&lt;br /&gt;
The next line of the file is the symmetry point group of your molecule.  Note that there is no leading space before the point group.&lt;br /&gt;
&lt;br /&gt;
 C1&lt;br /&gt;
&lt;br /&gt;
=====Coordinates=====&lt;br /&gt;
&lt;br /&gt;
The next block of text is set aside for the coordinates of the molecule.  These can be in internal (z-matrix) format or Cartesian coordinates.  Note that there is no leading space before the coordinates.  One may use the chemical symbol or the full name of each atom in the molecule.  The end of the coordinates is signified by a &amp;quot;$END&amp;quot;, which MUST have one space preceding it.  The coordinates below do NOT have any basis set information inserted.  It is possible to insert basis set information directly into the input file: obtain the desired basis set parameters from the EMSL and insert them below each relevant atom.  An example input file with inserted basis set information will be shown later.&lt;br /&gt;
&lt;br /&gt;
 MOLYBDENUM 42.0      5.7556500000      4.4039600000     16.5808400000&lt;br /&gt;
 SULFUR     16.0      7.4169700000      3.1956300000     15.2089300000&lt;br /&gt;
 SULFUR     16.0      4.0966800000      3.2258300000     15.1761100000&lt;br /&gt;
 SULFUR     16.0      3.9677300000      4.4940500000     18.3266100000&lt;br /&gt;
 SULFUR     16.0      7.1776900000      3.5815000000     18.4485200000&lt;br /&gt;
 SULFUR     16.0      4.3776600000      6.2447400000     15.6786900000&lt;br /&gt;
 SULFUR     16.0      7.5478700000      6.0679800000     16.2223700000&lt;br /&gt;
 CARBON      6.0      6.4716900000      2.1004800000     14.1902300000&lt;br /&gt;
 CARBON      6.0      5.0690300000      2.1781400000     14.1080700000&lt;br /&gt;
 CARBON      6.0      4.8421800000      4.2701300000     19.8855500000&lt;br /&gt;
 CARBON      6.0      6.1969000000      3.9249600000     19.9397400000&lt;br /&gt;
 CARBON      6.0      6.8280600000      3.7834200000     21.1913200000&lt;br /&gt;
 CARBON      6.0      5.7697600000      7.6933500000     17.4241800000&lt;br /&gt;
 CARBON      6.0      7.2043100000      7.9413600000     17.8281100000&lt;br /&gt;
 CARBON      6.0      5.5051400000      7.0409700000     14.5903800000&lt;br /&gt;
 CARBON      6.0      6.8905200000      6.9194700000     14.7626200000&lt;br /&gt;
 CARBON      6.0      7.7396400000      7.5379800000     13.8285700000&lt;br /&gt;
 HYDROGEN    1.0      8.8190700000      7.4520600000     13.9252200000&lt;br /&gt;
 CARBON      6.0      7.2169400000      8.2960300000     12.7704100000&lt;br /&gt;
 HYDROGEN    1.0      7.8667000000      8.7825100000     12.0575600000&lt;br /&gt;
 CARBON      6.0      5.8260300000      8.4502300000     12.6467800000&lt;br /&gt;
 HYDROGEN    1.0      5.4143000000      9.0544300000     11.8493100000&lt;br /&gt;
 CARBON      6.0      4.9881500000      7.8192300000     13.5528400000&lt;br /&gt;
 HYDROGEN    1.0      3.9090500000      7.9420000000     13.4583700000&lt;br /&gt;
 CARBON      6.0      7.1538500000      1.1569600000     13.4143900000&lt;br /&gt;
 CARBON      6.0      4.4018100000      1.3603900000     13.1919900000&lt;br /&gt;
 CARBON      6.0      6.4791600000      0.3185500000     12.5353300000&lt;br /&gt;
 CARBON      6.0      5.0837400000      0.4369500000     12.4084900000&lt;br /&gt;
 HYDROGEN    1.0      7.0116000000     -0.4099400000     11.9434600000&lt;br /&gt;
 HYDROGEN    1.0      8.2399000000      1.0702400000     13.4937600000&lt;br /&gt;
 HYDROGEN    1.0      3.3185600000      1.4368700000     13.0953100000&lt;br /&gt;
 HYDROGEN    1.0      4.5549800000     -0.1997300000     11.7165200000&lt;br /&gt;
 CARBON      6.0      6.1105700000      3.9639000000     22.3866100000&lt;br /&gt;
 CARBON      6.0      4.1216300000      4.4424400000     21.1020100000&lt;br /&gt;
 HYDROGEN    1.0      7.8732900000      3.5217100000     21.2520500000&lt;br /&gt;
 CARBON      6.0      4.7606000000      4.2868500000     22.3363800000&lt;br /&gt;
 HYDROGEN    1.0      6.6064200000      3.8406000000     23.3428500000&lt;br /&gt;
 HYDROGEN    1.0      4.2065000000      4.4170700000     23.2667100000&lt;br /&gt;
 HYDROGEN    1.0      3.0674000000      4.6893500000     21.0889000000&lt;br /&gt;
 HYDROGEN    1.0      7.4249200000      7.7545300000     18.8583200000&lt;br /&gt;
 HYDROGEN    1.0      7.6651700000      8.9049700000     17.7652100000&lt;br /&gt;
 HYDROGEN    1.0      5.3324000000      8.6487800000     17.2222700000&lt;br /&gt;
 HYDROGEN    1.0      5.5015000000      7.1039000000     18.2759400000&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
=====Effective Core Potential Data=====&lt;br /&gt;
&lt;br /&gt;
The effective core potential (ECP) data is entered after the coordinates.  It starts with &amp;quot;$ECP&amp;quot;, which must be preceded by a space.  The atoms of the molecule are listed in the same order as in the coordinates section, and the ECP parameters are listed after each atom.  Note that for any atom that does NOT have an ECP, one must enter &amp;quot;ECP-NONE&amp;quot; or &amp;quot;NONE&amp;quot; after the atom.&lt;br /&gt;
&lt;br /&gt;
 $ECP&lt;br /&gt;
 MO-ECP GEN     28     3&lt;br /&gt;
  5      ----- f potential     -----&lt;br /&gt;
     -0.0469492        0    537.9667807        &lt;br /&gt;
    -20.2080084        1    147.8982938        &lt;br /&gt;
   -106.2116302        2     45.7358898        &lt;br /&gt;
    -41.8107368        2     13.2911467        &lt;br /&gt;
     -4.2054103        2      4.7059961        &lt;br /&gt;
  3      ----- s-f potential     -----&lt;br /&gt;
      2.8063717        0    110.2991760        &lt;br /&gt;
     44.5162012        1     23.2014645        &lt;br /&gt;
     82.7785227        2      5.3530131        &lt;br /&gt;
  4      ----- p-f potential     -----&lt;br /&gt;
      4.9420876        0     63.2901397        &lt;br /&gt;
     25.8604976        1     23.3315302        &lt;br /&gt;
    132.4708742        2     24.6759423        &lt;br /&gt;
     57.3149794        2      4.6493040        &lt;br /&gt;
  5      ----- d-f potential     -----&lt;br /&gt;
      3.0054591        0    104.4839977        &lt;br /&gt;
     26.3637851        1     66.2307245        &lt;br /&gt;
    183.3849199        2     39.1283176        &lt;br /&gt;
     98.4453068        2     13.1164437        &lt;br /&gt;
     22.4901377        2      3.6280263 &lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  16 November 2009&lt;br /&gt;
&lt;br /&gt;
====Using an External File to Define Basis Set in GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
Since GAMESS(US) has a limited number of built-in ECPs and basis sets, one may want to make GAMESS(US) read an external file containing the basis set and ECP data, using the &amp;quot;EXTFIL&amp;quot; keyword in the $BASIS group of the input file.  For many metal-containing compounds, it is very convenient and time-saving to use an effective core potential (ECP) for the core metal electrons, as they are usually not important to the reactivity of the complex or the geometry around the metal.  To make GAMESS(US) use this external file, one must copy the &amp;quot;rungms&amp;quot; file and modify it accordingly.  The following is a list of instructions with commands that will work from a terminal.  One could also use WinSCP to do all of this with a GUI rather than a TUI.  &lt;br /&gt;
&lt;br /&gt;
=====Modifying rungms to Use a Custom Basis Set File=====&lt;br /&gt;
1. Copy &amp;quot;rungms&amp;quot; from /scinet/gpc/Applications/gamess to one's own /scratch/$USER/ directory:&lt;br /&gt;
 cp /scinet/gpc/Applications/gamess/rungms /scratch/$USER/&lt;br /&gt;
&lt;br /&gt;
2. Change to the scratch directory and check to see if &amp;quot;rungms&amp;quot; has copied successfully.&lt;br /&gt;
 cd /scratch/$USER&lt;br /&gt;
 ls&lt;br /&gt;
&lt;br /&gt;
3. Edit line 147 of the script.  &lt;br /&gt;
 vi rungms&lt;br /&gt;
Move the cursor down to line 147 using the arrow keys.  It should say &amp;quot;setenv EXTBAS /dev/null&amp;quot;.  Using the arrow keys, move the cursor to the first &amp;quot;/&amp;quot; and then hit &amp;quot;i&amp;quot; to insert text.  Put the path to your external basis file here.  For example, /scratch/$USER/basisset.  Then hit &amp;quot;escape&amp;quot;.  To save the changes and exit vi, type &amp;quot;:&amp;quot; and you should see a colon appear at the bottom of the window.  Type &amp;quot;wq&amp;quot; (which should appear at the bottom of the window next to the colon) and then hit enter.  Now you are done with vi.&lt;br /&gt;
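&lt;br /&gt;
If you would rather not use vi, the same change can be made non-interactively with sed.  This is only a sketch: it assumes your copy of rungms still contains the default &amp;quot;setenv EXTBAS /dev/null&amp;quot; line, and /scratch/$USER/basisset is just the example path from above; a stand-in file named rungms.demo is used here so the commands can be tried safely.&lt;br /&gt;

```shell
# Sketch: edit the EXTBAS line with sed instead of vi.
# rungms.demo stands in for your copy of rungms; the basisset
# path is the example path used in the instructions above.
printf 'setenv EXTBAS /dev/null\n' > rungms.demo
sed -i 's|setenv EXTBAS /dev/null|setenv EXTBAS /scratch/'"$USER"'/basisset|' rungms.demo
grep 'setenv EXTBAS' rungms.demo   # confirm the line now points at the basis file
```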
&lt;br /&gt;
=====Creating a Custom Basis Set File=====&lt;br /&gt;
1. To create a custom basis set file, you need to create a new text document.  Our group's common practice is to comment out the first line of this file by inserting an exclamation mark (!) followed by a note of the specific basis sets and ECPs that are going to be used for each of the atoms.  Let us use the molecule Mo(CO)6, molybdenum hexacarbonyl, as an example.  Below is the first line of the external file, which we will call &amp;quot;CUSTOMMO&amp;quot;  (NOTE: you can use any name for the external file that suits you, as long as it has no spaces and is 8 characters or less).&lt;br /&gt;
&lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
&lt;br /&gt;
2. The next step is to visit the [https://bse.pnl.gov/bse/portal EMSL Basis Set Exchange] and select C and O from the periodic table.  Then, on the left of the page, select &amp;quot;6-31G&amp;quot; as the basis set.  Finally, make sure the output is in GAMESS(US) format using the drop-down menu and then click &amp;quot;get basis set&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[File:C_O_6_31G_basisset.JPG|centre]]&lt;br /&gt;
&lt;br /&gt;
3. A new window should appear with text in it.  For our example case, the text looks like this:&lt;br /&gt;
 &lt;br /&gt;
 !  6-31G  EMSL  Basis Set Exchange Library   10/13/09 11:12 AM&lt;br /&gt;
 ! Elements                             References&lt;br /&gt;
 ! --------                             ----------&lt;br /&gt;
 ! H - He: W.J. Hehre, R. Ditchfield and J.A. Pople, J. Chem. Phys. 56,&lt;br /&gt;
 ! Li - Ne: 2257 (1972).  Note: Li and B come from J.D. Dill and J.A.&lt;br /&gt;
 ! Pople, J. Chem. Phys. 62, 2921 (1975).&lt;br /&gt;
 ! Na - Ar: M.M. Francl, W.J. Petro, W.J. Hehre, J.S. Binkley, M.S. Gordon,&lt;br /&gt;
 ! D.J. DeFrees and J.A. Pople, J. Chem. Phys. 77, 3654 (1982)&lt;br /&gt;
 ! K  - Zn: V. Rassolov, J.A. Pople, M. Ratner and T.L. Windus, J. Chem. Phys.&lt;br /&gt;
 ! 109, 1223 (1998)&lt;br /&gt;
 ! Note: He and Ne are unpublished basis sets taken from the Gaussian&lt;br /&gt;
 ! program&lt;br /&gt;
 ! &lt;br /&gt;
 $DATA&lt;br /&gt;
 CARBON&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000        &lt;br /&gt;
 OXYGEN&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000        &lt;br /&gt;
 $END&lt;br /&gt;
&lt;br /&gt;
4. Now, copy and paste the text between the $DATA and $END headings into our external text file, CUSTOMMO.  We also need to change the name of each element to the corresponding symbol in the periodic table.  Finally, we need to add the name of the external file next to the element symbol, separated by one space.  Note that there should be a blank line separating the basis set information and the first, commented-out line (the line starting with the '!').  The CUSTOMMO file should look like this:&lt;br /&gt;
 &lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
 C CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000        &lt;br /&gt;
 O CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000 &lt;br /&gt;
&lt;br /&gt;
5. Repeat Steps 2 and 3 above, but choose Mo and select the LANL2DZ ECP instead.  A new window will pop up with the basis set information as well as the ECP data we need, since we specified the LANL2DZ '''ECP'''.  The ECP data is not inserted into the external file; rather, it is placed into the input file itself (more on this later).  &lt;br /&gt;
&lt;br /&gt;
[[File:Mo_LANL2DZ_basisset.JPG|centre]]&lt;br /&gt;
&lt;br /&gt;
6. After copying the molybdenum basis set information, your finished external basis set file should look like this:&lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
 C CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000        &lt;br /&gt;
 O CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000 &lt;br /&gt;
 Mo CUSTOMMO&lt;br /&gt;
 S   3&lt;br /&gt;
   1      2.3610000             -0.9121760        &lt;br /&gt;
   2      1.3090000              1.1477453        &lt;br /&gt;
   3      0.4500000              0.6097109        &lt;br /&gt;
 S   4&lt;br /&gt;
   1      2.3610000              0.8139259        &lt;br /&gt;
   2      1.3090000             -1.1360084        &lt;br /&gt;
   3      0.4500000             -1.1611592        &lt;br /&gt;
   4      0.1681000              1.0064786        &lt;br /&gt;
 S   1&lt;br /&gt;
   1      0.0423000              1.0000000        &lt;br /&gt;
 P   3&lt;br /&gt;
   1      4.8950000             -0.0908258        &lt;br /&gt;
   2      1.0440000              0.7042899        &lt;br /&gt;
   3      0.3877000              0.3973179        &lt;br /&gt;
 P   2&lt;br /&gt;
   1      0.4995000             -0.1081945        &lt;br /&gt;
   2      0.0780000              1.0368093        &lt;br /&gt;
 P   1&lt;br /&gt;
   1      0.0247000              1.0000000        &lt;br /&gt;
 D   3&lt;br /&gt;
   1      2.9930000              0.0527063        &lt;br /&gt;
   2      1.0630000              0.5003907        &lt;br /&gt;
   3      0.3721000              0.5794024        &lt;br /&gt;
 D   1&lt;br /&gt;
   1      0.1178000              1.0000000&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 September 2010&lt;br /&gt;
&lt;br /&gt;
====A Modified BASH Script for Running GAMESS(US)====&lt;br /&gt;
Below please find the bash script that we use to run GAMESS(US) on a single node with 8 processors.  &lt;br /&gt;
&lt;br /&gt;
One quirk of GAMESS(US) is that it will NOT write over old or failed jobs that have the same name as the input file you are submitting.  For example: my input file is named &amp;quot;mo_opt.inp&amp;quot; and I submit this job to the queue.  However, it comes back seconds later with an error.  The log file says that I have typed an incorrect keyword, and lo and behold, I have a comma where it shouldn't be.  Such typos are common.  If you simply try to re-submit, GAMESS(US) will fail again, because it has written a .log file and some other files to the /scratch/user/gamess-scratch/ directory.  These files must all be deleted before you re-submit your fixed input file.&lt;br /&gt;
&lt;br /&gt;
This script takes care of this annoying problem by deleting failed jobs with the same file name for you.&lt;br /&gt;
&lt;br /&gt;
Here it is:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #PBS -l nodes=1:ppn=8,walltime=48:00:00,os=centos53computeA&lt;br /&gt;
 &lt;br /&gt;
 ## To submit type: qsub x.sh&lt;br /&gt;
 &lt;br /&gt;
 # If not an interactive job (i.e. -I), then cd into the directory where&lt;br /&gt;
 # I typed qsub.&lt;br /&gt;
 if [ &amp;quot;$PBS_ENVIRONMENT&amp;quot; != &amp;quot;PBS_INTERACTIVE&amp;quot; ]; then&lt;br /&gt;
   if [ -n &amp;quot;$PBS_O_WORKDIR&amp;quot; ]; then&lt;br /&gt;
     cd $PBS_O_WORKDIR&lt;br /&gt;
   fi&lt;br /&gt;
 fi&lt;br /&gt;
 &lt;br /&gt;
 # the input file is typically named something like &amp;quot;gamessjob.inp&amp;quot;,&lt;br /&gt;
 # so the script will be run like &amp;quot;$SCINET_RUNGMS gamessjob 00 8 8&amp;quot;;&lt;br /&gt;
 # set NAME to the base name (no extension) of your input file&lt;br /&gt;
 NAME=gamessjob&lt;br /&gt;
 &lt;br /&gt;
 find /scratch/user/gamess-scratch -type f -name ${NAME:-safety_net}\* -exec /bin/rm {} \;&lt;br /&gt;
 &lt;br /&gt;
 # load the gamess module if not in .bashrc already&lt;br /&gt;
 # actually, it MUST be in .bashrc&lt;br /&gt;
 # module load gamess&lt;br /&gt;
 &lt;br /&gt;
 # run the program&lt;br /&gt;
 &lt;br /&gt;
 /scratch/user/rungms $NAME 00 8 8 &amp;gt;&amp;amp; $NAME.log&lt;br /&gt;
&lt;br /&gt;
====A Script to Add the $VIB Group for Hessian Restarts in GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
Sometimes an optimization + vibrational analysis, or just a plain vibrational analysis, must be restarted.  This can be because the two-day time limit has been exceeded or because there was an error during the calculation.  In GAMESS(US), you can restart a vibrational analysis from a previous one, and it will utilize the frequencies that were already computed in the failed run.&lt;br /&gt;
&lt;br /&gt;
For example, if one submits the input file &amp;quot;job_name.inp&amp;quot; and it fails before it has finished, then one must use the file &amp;quot;job_name.rst&amp;quot;, which contains the data required to restart the calculation.  This file is located in the /scratch/user/gamess-scratch directory.  Data from the &amp;quot;job_name.rst&amp;quot; file must be appended to the end of the new input file (after the coordinates and the ECP section, if present) to restart the calculation; let us call this new file &amp;quot;job_name_restart.inp&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
A shortened version of the &amp;quot;job_name.rst&amp;quot; file looks like this:&lt;br /&gt;
&lt;br /&gt;
  ENERGY/GRADIENT/DIPOLE RESTART DATA FOR RUNTYP=HESSIAN&lt;br /&gt;
  job_name                           &lt;br /&gt;
  $VIB   &lt;br /&gt;
         IVIB=   0 IATOM=   0 ICOORD=   0 E=    -3717.1435124522&lt;br /&gt;
 -5.165258381E-04 1.584665821E-02-1.206270555E-02-2.241461728E-03 3.176050715E-03&lt;br /&gt;
 -5.706738823E-04 2.502034151E-03 5.130112290E-04-2.716945939E-03 1.357008279E-03&lt;br /&gt;
 -1.059915305E-03 1.693526456E-03-2.957638907E-04-5.994938737E-04 9.684054361E-04&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The text eventually ends with one blank line. The $VIB heading and all of the text after $VIB must be appended to the end of file &amp;quot;job_name_restart.inp&amp;quot; and then &amp;quot; $END&amp;quot; must be inserted at the very end of the file.&lt;br /&gt;
&lt;br /&gt;
One could cut and paste this in a text editor, but we have written a small script that does it automatically.  We call it &amp;quot;vib.sh&amp;quot; but you can call it whatever you like.  Here it is:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # script to add vibrational data for a hessian restart&lt;br /&gt;
 &lt;br /&gt;
 awk '/\$VIB/{p=1}p;END{print &amp;quot; $END&amp;quot;}' /scratch/user/gamess-scratch/$NAME1.rst &amp;gt;&amp;gt; $NAME2.inp&lt;br /&gt;
&lt;br /&gt;
To use it, simply copy it into a new text file with the extension &amp;quot;.sh&amp;quot; and make it executable.  Also, you will need to edit the location of the &amp;quot;/scratch/user/gamess-scratch/&amp;quot; directory to match your user name.  The two variables in the script, NAME1 and NAME2, represent the name of your &amp;quot;.rst&amp;quot; file and of your new &amp;quot;.inp&amp;quot; file, respectively.  In the example above, NAME1=job_name (the name of the .rst file that contains the $VIB data and that was created in the /gamess-scratch/ directory) and NAME2=job_name_restart (the name of the new input file that you have prepared and want to copy the $VIB data into).&lt;br /&gt;
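To see what the awk one-liner actually does, here is a small self-contained sketch (the .rst contents below are faked for illustration; real files live in your gamess-scratch directory):&lt;br /&gt;

```shell
# Fake a shortened .rst file, then extract everything from the $VIB line to
# the end of the file and append " $END", exactly as vib.sh does.
cat > job_name.rst <<'EOF'
  ENERGY/GRADIENT/DIPOLE RESTART DATA FOR RUNTYP=HESSIAN
  job_name
  $VIB
         IVIB=   0 IATOM=   0 ICOORD=   0 E=    -3717.1435124522
EOF
awk '/\$VIB/{p=1}p;END{print " $END"}' job_name.rst >> job_name_restart.inp
cat job_name_restart.inp    # output starts at the $VIB line, ends with " $END"
```

Lines before $VIB (the restart-data header) are skipped; everything from $VIB onward is appended, followed by the closing &amp;quot; $END&amp;quot;.&lt;br /&gt;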
&lt;br /&gt;
To run it on a gpc node without submitting it to the job queue, type:&lt;br /&gt;
&lt;br /&gt;
 NAME1=job_name NAME2=job_name_restart ./vib.sh&lt;br /&gt;
&lt;br /&gt;
To run it in the queue, type:&lt;br /&gt;
&lt;br /&gt;
 qsub vib.sh -v NAME1=job_name,NAME2=job_name_restart &lt;br /&gt;
&lt;br /&gt;
-special thanks to Ramses for help with this&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  30 September 2010&lt;br /&gt;
&lt;br /&gt;
====Most Commonly Used Headers in The Fekl Lab====&lt;br /&gt;
&lt;br /&gt;
After about a year of using GAMESS(US), we have found that we are most often doing optimizations, frequency analyses, transition state searches and IRC calculations using DFT methods.  Here are the input decks that we have found work well for inorganic and organometallic compounds.&lt;br /&gt;
&lt;br /&gt;
=====Optimization Plus Frequency (for a neutral, singlet)=====&lt;br /&gt;
 &lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2800 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. DAMP=.T. $END&lt;br /&gt;
 $STATPT OPTTOL=0.00001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
 $FORCE TEMP=298.15 PURIFY=.t. PROJCT=.t. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====Frequency Only (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=HESSIAN DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2800 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. DAMP=.T. $END&lt;br /&gt;
 $FORCE METHOD=SEMINUM VIBANL=.TRUE. PROJCT=.T. PURIFY=.T. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====Transition State Search (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=SADPOINT DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2850 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. $END&lt;br /&gt;
 $STATPT STSTEP=0.05 OPTTOL=0.00001 NSTEP=500 HESS=CALC HSSEND=.t. &lt;br /&gt;
  STPT=.FALSE. $END&lt;br /&gt;
 $FORCE METHOD=SEMINUM VIBANL=.TRUE. PURIFY=.T. PROJCT=.T. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====IRC (Intrinsic Reaction Coordinate following forward reaction) Calculation (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=IRC DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $IRC OPTTOL=0.00001 STRIDE=0.05 NPOINT=5000 SADDLE=.TRUE. FORWRD=.F.&lt;br /&gt;
 $END&lt;br /&gt;
 $SYSTEM TIMLIM=2850 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
 $FORCE TEMP=298.15 PURIFY=.t. PROJCT=.t. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 September 2010&lt;br /&gt;
&lt;br /&gt;
====How to Run an IRC Calculation Using GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
An IRC or Intrinsic Reaction Coordinate calculation follows the imaginary mode of the vibrational analysis of a transition state calculation.  In GAMESS(US), you can choose to follow the forward (towards the products) or backward (towards the reactants) direction.  As shown above in the IRC header that we use, the direction of the IRC calculation is controlled by the &amp;quot;FORWRD&amp;quot; keyword.  Using &amp;quot;FORWRD=.T.&amp;quot; means that the IRC follows the forward direction, while &amp;quot;FORWRD=.F.&amp;quot; means that it follows the backward direction.&lt;br /&gt;
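For concreteness, the only difference between the two directions is the FORWRD flag; the two lines below reuse the $IRC group from the header above, with everything else unchanged (the annotations after &amp;quot;!&amp;quot; are comments; you would of course use one line or the other, not both):&lt;br /&gt;

```
 $IRC OPTTOL=0.00001 STRIDE=0.05 NPOINT=5000 SADDLE=.TRUE. FORWRD=.T. $END  ! forward, towards products
 $IRC OPTTOL=0.00001 STRIDE=0.05 NPOINT=5000 SADDLE=.TRUE. FORWRD=.F. $END  ! backward, towards reactants
```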
&lt;br /&gt;
Let us say we want to perform an IRC calculation.  You must first perform a vibrational analysis of your molecule and check that there is exactly one negative (imaginary) frequency.  If that is the case, the vibrational analysis completed successfully, and there will be a file with the extension &amp;quot;.dat&amp;quot;, let us call it &amp;quot;job_name.dat&amp;quot;, in the &amp;quot;/users/$USER/gamess-scratch/&amp;quot; directory (where $USER is your user name).  This file contains data that is required for the IRC input file.&lt;br /&gt;
&lt;br /&gt;
To prepare your IRC input file, prepare an input file using the coordinates of the optimized transition state structure.  These can come from ChemCraft or Avogadro or MacMolPlt - whatever you prefer to use.  Then copy and paste the IRC header above, or use your own parameters.  Call it whatever you want, as long as it has an &amp;quot;.inp&amp;quot; extension; let us call it &amp;quot;irc_job.inp&amp;quot;.  &lt;br /&gt;
&lt;br /&gt;
For example, the &amp;quot;STRIDE&amp;quot; value determines the size of the steps between each point on the IRC path.  If you increase the stride, say from 0.05 to 0.1, the steps between points become larger and you will approach the minimum faster (this will give you fewer data points should you choose to plot the IRC data).  Decreasing the stride, say from 0.05 to 0.01, makes the steps between points smaller, and you may not reach the minimum of the reaction coordinate in the allotted time.&lt;br /&gt;
&lt;br /&gt;
You should now have an input file with an IRC header, the coordinates of the transition state and basis set and ECP information called &amp;quot;irc_job.inp&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Now you need the &amp;quot;job_name.dat&amp;quot; file in the &amp;quot;/users/$USER/gamess-scratch/&amp;quot; directory.  In this file are a number of blocks of data, each sandwiched between a line that contains only &amp;quot; $HESS&amp;quot; and a line that contains only &amp;quot; $END&amp;quot;.  What you need is the LAST of these blocks; it has to be copied and pasted directly below the last entry of your input file.&lt;br /&gt;
&lt;br /&gt;
This can be difficult and time consuming, as the .dat files can be very large (sometimes over 150 MB) and cumbersome to navigate through.  However, we have written a script, similar to the .vib.sh script, that can help you out with this.  Basically, this script does all the copying and pasting for you.  &lt;br /&gt;
&lt;br /&gt;
Here it is:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # script to add hessian data for an IRC calculation&lt;br /&gt;
 &lt;br /&gt;
 awk '/\$HESS/{arr=&amp;quot;&amp;quot;;f=1} f {arr=(arr)?arr ORS $0:$0} /\$END/{f=0} END {print arr}' /scratch/$USER/gamess-scratch/$DAT.dat &amp;gt;&amp;gt; $IN.inp&lt;br /&gt;
&lt;br /&gt;
To use it, simply copy it into a new text file with the name &amp;quot;irc.sh&amp;quot; and make it executable.  The two variables in the script, DAT and IN, represent the name of your &amp;quot;.dat&amp;quot; file and of your new &amp;quot;.inp&amp;quot; file, respectively.  In our current example, DAT=job_name (the name of the .dat file that contains the $HESS data and that was created in the /gamess-scratch/ directory) and IN=irc_job (the name of the new input file that you have prepared and want to copy the $HESS data into).  Note that this script uses $USER, so the /scratch/$USER/gamess-scratch/ path should already match your user name. &lt;br /&gt;
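Because the script keeps only the LAST $HESS ... $END block, it is worth seeing that behaviour on a toy file.  A self-contained sketch (file contents faked for illustration):&lt;br /&gt;

```shell
# Fake a .dat file with two $HESS blocks; the awk command from irc.sh should
# emit only the second (last) one, including its $HESS and $END lines.
cat > demo.dat <<'EOF'
 $HESS
 old block from an earlier step
 $END
 $HESS
 final block you actually want
 $END
EOF
awk '/\$HESS/{arr="";f=1} f {arr=(arr)?arr ORS $0:$0} /\$END/{f=0} END {print arr}' demo.dat
```

Only the final block is printed: each new $HESS line resets the accumulated buffer, so earlier blocks are discarded.&lt;br /&gt;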
&lt;br /&gt;
To run it on a gpc node without submitting it to the job queue, type:&lt;br /&gt;
&lt;br /&gt;
 DAT=job_name IN=irc_job ./irc.sh&lt;br /&gt;
&lt;br /&gt;
To run it in the queue, type:&lt;br /&gt;
&lt;br /&gt;
 qsub irc.sh -v DAT=job_name,IN=irc_job &lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 October 2010&lt;br /&gt;
&lt;br /&gt;
===Vienna Ab-initio Simulation Package (VASP)===&lt;br /&gt;
Please refer to the VASP page.&lt;br /&gt;
&lt;br /&gt;
User supplied content below.&lt;br /&gt;
&lt;br /&gt;
====Tips from the Polanyi Lab====&lt;br /&gt;
Using VASP on SciNet&lt;br /&gt;
&lt;br /&gt;
Logon using SSH&lt;br /&gt;
login.scinet.utoronto.ca&lt;br /&gt;
&lt;br /&gt;
then ssh to the TCS cluster&lt;br /&gt;
ssh tcs01&lt;br /&gt;
&lt;br /&gt;
change directory to &lt;br /&gt;
cd /scratch/imcnab/test/Si111 - or whatever other directory is convenient.&lt;br /&gt;
&lt;br /&gt;
VASP is contained in the directory imcnab/bin&lt;br /&gt;
&lt;br /&gt;
To submit a job, first edit (at least) the POSCAR file and other VASP&lt;br /&gt;
input files as necessary.&lt;br /&gt;
&lt;br /&gt;
=====Input Files=====&lt;br /&gt;
The minimum set of input files is:&lt;br /&gt;
&lt;br /&gt;
'''vasp.script''' - script file telling TCS to run a VASP job - must be edited to run in current working directory.&lt;br /&gt;
&lt;br /&gt;
'''POSCAR''' - specifies the supercell geometry, the &amp;quot;ionic&amp;quot; positions (i.e. atomic centres), and whether relaxation is allowed. Ionic positions may be given in cartesian coordinates (x,y,z in Angstrom) or &amp;quot;absolute&amp;quot; (direct) coordinates, which are fractions of the unit cell vectors. CONTCAR is always in absolute coords, so after the first run of any job you'll find yourself running in absolute coords. VMD can be used to change these back to cartesian coordinates.&lt;br /&gt;
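As an illustration only (the numbers are invented, not from a real calculation), a minimal POSCAR in absolute/&amp;quot;Direct&amp;quot; coordinates has this shape; the &amp;quot;&lt;-&amp;quot; annotations are for this wiki page and are not part of the file:&lt;br /&gt;

```
Si example            <- free-form title/comment line
1.0                   <- universal scaling factor
  5.43  0.00  0.00    <- three unit cell vectors, in Angstrom
  0.00  5.43  0.00
  0.00  0.00  5.43
2                     <- number of ions of each species (order as in POTCAR)
Direct                <- or "Cartesian" for x,y,z in Angstrom
 0.00 0.00 0.00
 0.25 0.25 0.25
```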
&lt;br /&gt;
'''INCAR''' - specifies parameters to run the job. INCAR is free format - can put input commands in ANY order.&lt;br /&gt;
&lt;br /&gt;
'''POTCAR''' - specifies the potentials to use for each atomic type. Must be in the same order as the atoms are first met in POSCAR&lt;br /&gt;
&lt;br /&gt;
'''KPOINTS''' - specifies the number and position of K-points to use in the calculation.&lt;br /&gt;
&lt;br /&gt;
Any change of name or directory needs to be edited into the job script. The job script name is &amp;quot;vasp.script&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
VASP attempts to read initial wavefunctions from WAVECAR, so if a job is run in steps, leaving the WAVECAR file in the working directory is an efficient way to start the next stage of the calculation.&lt;br /&gt;
&lt;br /&gt;
VASP also writes CONTCAR which is of the same format as POSCAR, and can simply be renamed if it is to be used as the starting point for a new job.&lt;br /&gt;
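The continuation step can be sketched as follows (paths and file contents are faked here; in practice you would do this in your real working directory before running llsubmit ./vasp.script again):&lt;br /&gt;

```shell
# Sketch of continuing a VASP run: keep WAVECAR in place so the wavefunctions
# are reused, and promote CONTCAR to be the new starting POSCAR.
workdir=$(mktemp -d)                 # stand-in for your real working directory
cd "$workdir"
echo "relaxed geometry" > CONTCAR    # pretend output of the previous run
cp CONTCAR POSCAR                    # last geometry becomes the new start
# WAVECAR (if present) is left untouched; then: llsubmit ./vasp.script
```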
&lt;br /&gt;
&lt;br /&gt;
Submit the job to load-leveller with the command llsubmit ./vasp.script from the correct working directory.&lt;br /&gt;
&lt;br /&gt;
You can check the status of a job with llq&lt;br /&gt;
&lt;br /&gt;
You can cancel a job using llcancel tcs-fXXnYY.$PID, where the tcs node number etc. is shown by llq&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== GENERAL NOTES =====&lt;br /&gt;
&lt;br /&gt;
It is MUCH faster to use ISPIN=1, no-spin (corresponds to RHF), rather than ISPIN=2 (which corresponds to UHF). So far, I've not found a system where the atom positions differ, or where the calculated electronic energy differs by more than 1E-4, which is the convergence criterion set.&lt;br /&gt;
&lt;br /&gt;
It is also MUCH faster to use real-space projection: LREAL = A, NSIM=4. &lt;br /&gt;
&lt;br /&gt;
So, ''always'' optimize in real space first, then re-optimize in reciprocal space. This does NOT guarantee a one-step optimization in reciprocal space; you may still need to progressively relax a large system.&lt;br /&gt;
&lt;br /&gt;
'''Relaxing a large system.'''&lt;br /&gt;
If you attempt to relax a large system in one step, it will usually fail.&lt;br /&gt;
&lt;br /&gt;
The starting geometry is usually an unrelaxed molecule above an unrelaxed surface.&lt;br /&gt;
The bottom plane of the surface will NEVER be relaxed, because this corresponds to the fixed boundary condition of REALITY. &lt;br /&gt;
&lt;br /&gt;
First, relax the molecule alone (assuming you have already found a good starting position from single point calculations).  Place the molecule closer to the surface than you think it should be (say 0.9 van der Waals radii away).&lt;br /&gt;
&lt;br /&gt;
Then ALSO allow the top layer of the surface to relax.&lt;br /&gt;
Then ALSO allow the second top layer of the surface to relax... etc... etc.&lt;br /&gt;
&lt;br /&gt;
If this DOESN'T WORK, then relax X, Y and Z separately in iterations.&lt;br /&gt;
For example, for the following problem, representing layers of the crystal going DOWN from the top (Z pointing to the top of the screen),&lt;br /&gt;
&lt;br /&gt;
Molecule&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we can try the following relaxation schemes:&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Successive relaxation, Layer by Layer:&amp;lt;br /&amp;gt;&lt;br /&gt;
(1) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(2)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(3)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
etc. etc... if this works then you're fine. However, it can happen that even by Layer 2, you're running into real problems, and the ionic relaxation NEVER converges. In which case, I have found the following scheme (and variations thereof) useful:&lt;br /&gt;
&lt;br /&gt;
(1)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(2) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(3) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
IF (3) DOESN'T converge THEN TRY&lt;br /&gt;
&lt;br /&gt;
(2')&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule Z   Relax, XY FIXED&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  Z   Relax, XY FIXED&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- you are allowing the top layers to move only UP or DOWN, while allowing the intermediate layer 2 to fully relax (actually, there is no way of telling VASP to move ALL atoms by the SAME deltaZ, but that appears to be the effect).&lt;br /&gt;
This is followed by&lt;br /&gt;
&lt;br /&gt;
(2&amp;quot;)&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If (2&amp;quot;) doesn't work, you need to go back to the output of (2') and vary the cycle - perhaps something like:&lt;br /&gt;
(2&amp;quot;')&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then try (2&amp;quot;) again.&lt;br /&gt;
&lt;br /&gt;
Repeat as necessary. This scheme does appear to work quite well for big unit cells. It can be very difficult to relax as many layers as necessary in a big unit cell.&lt;br /&gt;
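In POSCAR terms, the per-layer schemes above are written with selective dynamics flags, one T/F flag per coordinate (T = free to relax, F = frozen). The fragment below is purely illustrative (invented coordinates); it shows the scheme (2') idea of &amp;quot;Z relax, XY fixed&amp;quot; for the top layers, with the &amp;quot;&lt;-&amp;quot; annotations not being part of the file:&lt;br /&gt;

```
Selective dynamics      <- goes right before the coordinate mode line
Direct
 0.10 0.10 0.35  F F T  <- molecule / layer 1: Z free, X and Y frozen
 0.20 0.20 0.25  T T T  <- layer 2: fully free
 0.30 0.30 0.15  F F F  <- deeper, fixed layers
```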
&lt;br /&gt;
Experience on the One Per Corner Hole problem shows that it may be necessary to have a large number of UNRELAXED (i.e. BULK silicon) layers underneath the relaxed layers in order to get physically meaningful answers. This is because silicon is so elastic.&lt;br /&gt;
&lt;br /&gt;
===== Problems and solutions: =====&lt;br /&gt;
&lt;br /&gt;
If you are getting ZBRENT errors, try changing ALGO: we usually use ALGO = Fast; change to ALGO = Normal. With ALGO = Normal, NFREE now DOES correspond to degrees of freedom (maximum suggested setting is 20). We haven't found this terribly helpful.&lt;br /&gt;
&lt;br /&gt;
Many calculations seem to fail after 20 or 30 ionic steps. I suspect a memory leak.&lt;br /&gt;
&lt;br /&gt;
Sometimes the calculation appears to lose WAVECAR... this is not a disaster; it just means a slight increase in start-up time while the first wavefunction is calculated.&lt;br /&gt;
&lt;br /&gt;
If the calculation does not finish nicely, you can force WAVECAR generation by doing a purely electronic calculation (these are pretty fast).&lt;br /&gt;
&lt;br /&gt;
VASP is VERY slow at relaxing molecules at surfaces. This is because it doesn't know a molecule is a connected entity. It treats every atom independently. &lt;br /&gt;
&lt;br /&gt;
THEREFORE, MUCH MUCH faster to try molecular positions by hand first. &lt;br /&gt;
Do some sample calculations at a few geometries to find a good starting point.&lt;br /&gt;
&lt;br /&gt;
ALSO, once you think you know where the molecule is to be placed, put it too close to the surface, and let it relax outwards... the forces close to the surface are repulsive, and much steeper, so relaxation is FASTER in this direction.&lt;br /&gt;
&lt;br /&gt;
=='''Climate Modelling'''==&lt;br /&gt;
&lt;br /&gt;
The Community Earth System Model (CESM) is a fully-coupled, global climate model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate states.&lt;br /&gt;
&lt;br /&gt;
Development of a comprehensive CESM that accurately represents the principal components of the climate system and their couplings requires both wide intellectual participation and computing capabilities beyond those available to most U.S. institutions. The CESM, therefore, must include an improved framework for coupling existing and future component models developed at multiple institutions, to permit rapid exploration of alternate formulations. This framework must be amenable to components of varying complexity and at varying resolutions, in accordance with a balance of scientific needs and resource demands. In particular, the CESM must accommodate an active program of simulations and evaluations, using an evolving model to address scientific issues and problems of national and international policy interest.&lt;br /&gt;
&lt;br /&gt;
User guides and information on each version of the model can be found at the following links:&lt;br /&gt;
&lt;br /&gt;
CCSM3: http://www.cesm.ucar.edu/models/ccsm3.0/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/&lt;br /&gt;
&lt;br /&gt;
===[[Installing CCSM4]]===&lt;br /&gt;
&lt;br /&gt;
===[[Running CCSM4]]===&lt;br /&gt;
&lt;br /&gt;
===[[Post Processing CCSM Output]]===&lt;br /&gt;
&lt;br /&gt;
===[[CCSM4/CESM1 TCS Simulation List]]===&lt;br /&gt;
&lt;br /&gt;
This page will be used to record information about CESM1/CCSM4 simulations that are being conducted on the TCS system. This will help prevent the duplication of simulations (like control runs).&lt;br /&gt;
&lt;br /&gt;
==Medicine/Bio==&lt;br /&gt;
&lt;br /&gt;
==High Energy Physics==&lt;br /&gt;
&lt;br /&gt;
==Structural Biology==&lt;br /&gt;
Molecular simulation of proteins, lipids, carbohydrates, and other biologically relevant molecules.&lt;br /&gt;
===Molecular Dynamics (MD) simulation===&lt;br /&gt;
====GROMACS====&lt;br /&gt;
Please refer to the [[gromacs|GROMACS]] page&lt;br /&gt;
====AMBER====&lt;br /&gt;
Please refer to the [[amber|AMBER]] page&lt;br /&gt;
====NAMD====&lt;br /&gt;
NAMD is one of the better-scaling MD packages out there. With sufficiently large systems, it is able to scale to hundreds or thousands of cores on SciNet. Below are details for compiling and running NAMD on SciNet.&lt;br /&gt;
&lt;br /&gt;
More information regarding performance and different compile options coming soon...&lt;br /&gt;
&lt;br /&gt;
=====Compiling NAMD for GPC=====&lt;br /&gt;
Ensure the proper compiler/mpi modules are loaded.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
module load intel&lt;br /&gt;
module load openmpi/1.3.3-intel-v11.0-ofed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Compile Charm++ and NAMD'''&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
#Unpack source files and get required support libraries&lt;br /&gt;
tar -xzf NAMD_2.7b1_Source.tar.gz&lt;br /&gt;
cd NAMD_2.7b1_Source&lt;br /&gt;
tar -xf charm-6.1.tar&lt;br /&gt;
wget http://www.ks.uiuc.edu/Research/namd/libraries/fftw-linux-x86_64.tar.gz&lt;br /&gt;
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl-linux-x86_64.tar.gz&lt;br /&gt;
tar -xzf fftw-linux-x86_64.tar.gz; mv linux-x86_64 fftw&lt;br /&gt;
tar -xzf tcl-linux-x86_64.tar.gz; mv linux-x86_64 tcl&lt;br /&gt;
#Compile Charm++&lt;br /&gt;
cd charm-6.1&lt;br /&gt;
./build charm++ mpi-linux-x86_64 icc --basedir /scinet/gpc/mpi/openmpi/1.3.3-intel-v11.0-ofed/ --no-shared -O -DCMK_OPTIMIZE=1&lt;br /&gt;
cd ..&lt;br /&gt;
#Compile NAMD. &lt;br /&gt;
#Edit arch/Linux-x86_64-icc.arch and add &amp;quot;-lmpi&amp;quot; to the end of the CXXOPTS and COPTS line.&lt;br /&gt;
#Make a builds directory if you want different versions of NAMD compiled at the same time.&lt;br /&gt;
mkdir builds&lt;br /&gt;
./config builds/Linux-x86_64-icc --charm-arch mpi-linux-x86_64-icc&lt;br /&gt;
cd builds/Linux-x86_64-icc/&lt;br /&gt;
make -j4 namd2 # Adjust value of j as desired to specify number of simultaneous make targets. &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
--[[User:Cmadill|Cmadill]] 16:18, 27 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
=====Running Fortran=====&lt;br /&gt;
On the development nodes, there is an old gcc. The associated libraries are not on the compute nodes. Ensure the line:&lt;br /&gt;
&lt;br /&gt;
module load gcc&lt;br /&gt;
&lt;br /&gt;
is in your .bashrc file.&lt;br /&gt;
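A minimal, self-contained way to add the line without duplicating it (demonstrated on a throwaway file rather than your real .bashrc, so it is safe to experiment with):&lt;br /&gt;

```shell
# Append "module load gcc" only if the line is not already present, so the
# rc file never accumulates duplicates.  Uses a temp file as a stand-in.
rc=$(mktemp)                          # stand-in for ~/.bashrc
for i in 1 2; do                      # run twice to show idempotence
    grep -qx 'module load gcc' "$rc" || echo 'module load gcc' >> "$rc"
done
grep -c 'module load gcc' "$rc"       # prints 1, not 2
```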
&lt;br /&gt;
====LAMMPS====&lt;br /&gt;
[[Image:StrongScalingLAMMPS.png|thumb|320px|right|Strong scaling test on GPC with OpenMPI and IntelMPI on Ethernet and InfiniBand]]&lt;br /&gt;
[[Image:WeakScalingLAMMPS.png|thumb|320px|right|Weak scaling test on GPC with OpenMPI and IntelMPI on Ethernet and InfiniBand]]&lt;br /&gt;
LAMMPS is a parallel MD code that can be found [http://lammps.sandia.gov/ here].&lt;br /&gt;
&lt;br /&gt;
'''Scaling Tests on GPC'''&lt;br /&gt;
&lt;br /&gt;
Results from strong scaling tests for LAMMPS using EAM potentials on GPC are shown in the graph on the right.  Test simulation ran 500 timesteps for 4,000,000 atoms.&lt;br /&gt;
&lt;br /&gt;
Results from weak scaling tests for LAMMPS using EAM potentials on GPC are shown in the graph on the right.  Test simulation ran 500 timesteps for 32,000 atoms per processor.&lt;br /&gt;
&lt;br /&gt;
OpenMPI version used: openmpi/1.4.1-intel-v11.0-ofed&lt;br /&gt;
&lt;br /&gt;
IntelMPI version used: intelmpi/impi-4.0.0.013&lt;br /&gt;
&lt;br /&gt;
LAMMPS version used: 15 Jan 2010&lt;br /&gt;
&lt;br /&gt;
'''Summary of Scaling Tests'''&lt;br /&gt;
&lt;br /&gt;
Results show good scaling for both OpenMPI and IntelMPI on Ethernet up to 16 processors, after which performance begins to suffer.  On Infiniband, excellent scaling is maintained to 512 processors.&lt;br /&gt;
&lt;br /&gt;
IntelMPI shows slightly better performance compared to OpenMPI when running with Infiniband.&lt;br /&gt;
&lt;br /&gt;
--[[User:jchu|jchu]] 14:08 Feb 2, 2010&lt;br /&gt;
&lt;br /&gt;
===Monte Carlo (MC) simulation===&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=User_Codes&amp;diff=2470</id>
		<title>User Codes</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=User_Codes&amp;diff=2470"/>
		<updated>2011-01-03T18:53:02Z</updated>

		<summary type="html">&lt;p&gt;Guido: /* Climate Modelling */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
==Astrophysics==&lt;br /&gt;
&lt;br /&gt;
===Athena (explicit, uniform grid MHD code)===&lt;br /&gt;
&lt;br /&gt;
[[Image:StrongScalingAthenaGPC.png|thumb|right|320px|Athena scaling on GPC with OpenMPI and MVAPICH2 on GigE, and OpenMPI on InfiniBand]]&lt;br /&gt;
&lt;br /&gt;
[http://www.astro.princeton.edu/~jstone/athena.html Athena] is a straightforward C code which doesn't use a lot of libraries, so it is easy to build and compile on new machines.   &lt;br /&gt;
&lt;br /&gt;
It encapsulates its compiler flags, etc., in a &amp;lt;tt&amp;gt;Makeoptions.in&amp;lt;/tt&amp;gt; file which is then processed by &amp;lt;tt&amp;gt;configure&amp;lt;/tt&amp;gt;.   I've used the following additions to &amp;lt;tt&amp;gt;Makeoptions.in&amp;lt;/tt&amp;gt; on TCS and GPC:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
ifeq ($(MACHINE),scinettcs)&lt;br /&gt;
  CC = mpcc_r&lt;br /&gt;
  LDR = mpcc_r&lt;br /&gt;
  OPT = -O5 -q64 -qarch=pwr6 -qtune=pwr6 -qcache=auto -qlargepage -qstrict&lt;br /&gt;
  MPIINC =&lt;br /&gt;
  MPILIB =&lt;br /&gt;
  CFLAGS = $(OPT)&lt;br /&gt;
  LIB = -ldl -lm&lt;br /&gt;
else&lt;br /&gt;
ifeq ($(MACHINE),scinetgpc)&lt;br /&gt;
  CC = mpicc&lt;br /&gt;
  LDR = mpicc&lt;br /&gt;
  OPT = -O3&lt;br /&gt;
  MPIINC =&lt;br /&gt;
  MPILIB =&lt;br /&gt;
  CFLAGS = $(OPT)&lt;br /&gt;
  LIB = -lm&lt;br /&gt;
else&lt;br /&gt;
...&lt;br /&gt;
endif&lt;br /&gt;
endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
It performs quite well on the GPC, scaling extremely well even on a strong scaling test out to about 256 cores (32 nodes) on Gigabit ethernet, and performing beautifully on InfiniBand out to 512 cores (64 nodes). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Ljdursi|ljdursi]]  19:20, 13 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
===FLASH3 (Adaptive Mesh reactive hydrodynamics; explict hydro/MHD)===&lt;br /&gt;
&lt;br /&gt;
[[Image:weak-scaling-example.png|thumb|right|320px|Weak scaling test of the 2d sod problem on both the GPC and TCS.  The results are actually somewhat faster on the GPC; in both cases (weak) scaling is very good out to at least 256 cores]]&lt;br /&gt;
&lt;br /&gt;
[http://flash.uchicago.edu FLASH] encapsulates its machine-dependent information in the &amp;lt;tt&amp;gt;FLASH3/sites&amp;lt;/tt&amp;gt; directory.  For the GPC, you'll have to&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load intel&lt;br /&gt;
module load openmpi&lt;br /&gt;
module load hdf5/184-p1-v16-openmpi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and with that, the following file (&amp;lt;tt&amp;gt;sites/scinetgpc/Makefile.h&amp;lt;/tt&amp;gt;) works for me:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
## Must do module load hdf5/184-p1-v16-openmpi&lt;br /&gt;
HDF5_PATH = ${SCINET_HDF5_BASE}&lt;br /&gt;
ZLIB_PATH = /usr/local&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Compiler and linker commands&lt;br /&gt;
#&lt;br /&gt;
#  We use the f90 compiler as the linker, so some C libraries may explicitly&lt;br /&gt;
#  need to be added into the link line.&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
## modules will put the right mpi in our path&lt;br /&gt;
FCOMP   = mpif77&lt;br /&gt;
CCOMP   = mpicc&lt;br /&gt;
CPPCOMP = mpiCC&lt;br /&gt;
LINK    = mpif77&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Compilation flags&lt;br /&gt;
#&lt;br /&gt;
#  Three sets of compilation/linking flags are defined: one for optimized&lt;br /&gt;
#  code, one for testing, and one for debugging.  The default is to use the &lt;br /&gt;
#  _OPT version.  Specifying -debug to setup will pick the _DEBUG version,&lt;br /&gt;
#  these should enable bounds checking.  Specifying -test is used for &lt;br /&gt;
#  flash_test, and is set for quick code generation, and (sometimes) &lt;br /&gt;
#  profiling.  The Makefile generated by setup will assign the generic token &lt;br /&gt;
#  (ex. FFLAGS) to the proper set of flags (ex. FFLAGS_OPT).&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
FFLAGS_OPT   =  -c -r8 -i4 -O3 -xSSE4.2&lt;br /&gt;
FFLAGS_DEBUG =  -c -g -r8 -i4 -O0&lt;br /&gt;
FFLAGS_TEST  =  -c -r8 -i4&lt;br /&gt;
&lt;br /&gt;
LIB_HDF5 = -L${HDF5_PATH}/lib -lhdf5 -L${SCINET_ZLIB_LIB} -lz -lgpfs&lt;br /&gt;
&lt;br /&gt;
# if we are using HDF5, we need to specify the path to the include files&lt;br /&gt;
CFLAGS_HDF5  = -I${HDF5_PATH}/include&lt;br /&gt;
&lt;br /&gt;
CFLAGS_OPT   = -c -O3 -xSSE4.2&lt;br /&gt;
CFLAGS_TEST  = -c -O2 &lt;br /&gt;
CFLAGS_DEBUG = -c -g  &lt;br /&gt;
&lt;br /&gt;
MDEFS = &lt;br /&gt;
&lt;br /&gt;
.SUFFIXES: .o .c .f .F .h .fh .F90 .f90&lt;br /&gt;
&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
# Linker flags&lt;br /&gt;
#&lt;br /&gt;
#  There is a separate version of the linker flags for each of the _OPT, &lt;br /&gt;
#  _DEBUG, and _TEST cases.&lt;br /&gt;
#----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
LFLAGS_OPT   = -o&lt;br /&gt;
LFLAGS_TEST  = -o&lt;br /&gt;
LFLAGS_DEBUG = -g -o&lt;br /&gt;
&lt;br /&gt;
MACHOBJ = &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MV = mv -f&lt;br /&gt;
AR = ar -r&lt;br /&gt;
RM = rm -f&lt;br /&gt;
CD = cd&lt;br /&gt;
RL = ranlib&lt;br /&gt;
ECHO = echo&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Ljdursi|ljdursi]] 22:11, 13 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Aeronautics==&lt;br /&gt;
&lt;br /&gt;
==Chemistry==&lt;br /&gt;
&lt;br /&gt;
===CPMD===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[Cpmd | CPMD]] page.&lt;br /&gt;
&lt;br /&gt;
===NWChem===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[Nwchem | NWChem]] page.&lt;br /&gt;
&lt;br /&gt;
===GAMESS (US)===&lt;br /&gt;
&lt;br /&gt;
Please refer to the [[gamess|GAMESS (US)]] page.&lt;br /&gt;
&lt;br /&gt;
User-supplied content below.&lt;br /&gt;
&lt;br /&gt;
====Tips from the Fekl Lab====&lt;br /&gt;
&lt;br /&gt;
Through trial and error, we have found a few useful things that we would like to share:&lt;br /&gt;
&lt;br /&gt;
1. Two very useful, open-source programs for visualizing output files from GAMESS(US) and for generating input files are [http://www.scl.ameslab.gov/MacMolPlt/ MacMolPlt] and [http://avogadro.openmolecules.net/wiki/Main_Page Avogadro].  They are available for UNIX/Linux, Windows and Mac machines, HOWEVER: any input files that we have generated with these programs on a Windows-based machine do not run on Mac-based machines.  We don't know why.&lt;br /&gt;
&lt;br /&gt;
2. [http://winscp.net/eng/index.php WinSCP] is a very useful tool that has a graphical user interface for moving files from a local machine to SCINET and vice versa.  It also has text editing capabilities.&lt;br /&gt;
&lt;br /&gt;
3. The [https://bse.pnl.gov/bse/portal EMSL Basis Set Exchange] is an excellent source for custom basis set or effective core potential parameters.  Make sure that you specify &amp;quot;Gamess-US&amp;quot; in the format drop-down box.&lt;br /&gt;
&lt;br /&gt;
4.  The commercial program [http://www.chemcraftprog.com/ ChemCraft] is a highly useful visualization program that has the ability to edit molecules in a very similar fashion to GaussView.  It can also be customized to build GAMESS(US) input files.&lt;br /&gt;
&lt;br /&gt;
====Anatomy of a GAMESS(US) Input File with Basis Set Info in an External File====&lt;br /&gt;
&lt;br /&gt;
  $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=M06-L MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
   ECP=READ $END&lt;br /&gt;
  $SYSTEM TIMLIM=525600 MWORDS=1750 PARALL=.TRUE. $END&lt;br /&gt;
  $BASIS GBASIS=CUSTOMNI EXTFIL=.t. $END&lt;br /&gt;
  $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
  $STATPT OPTTOL=0.0001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
  $DATA&lt;br /&gt;
  Mo_BDT3&lt;br /&gt;
 C1&lt;br /&gt;
 MOLYBDENUM 42.0      5.7556500000      4.4039600000     16.5808400000&lt;br /&gt;
 SULFUR     16.0      7.4169700000      3.1956300000     15.2089300000&lt;br /&gt;
 SULFUR     16.0      4.0966800000      3.2258300000     15.1761100000&lt;br /&gt;
 SULFUR     16.0      3.9677300000      4.4940500000     18.3266100000&lt;br /&gt;
 SULFUR     16.0      7.1776900000      3.5815000000     18.4485200000&lt;br /&gt;
 SULFUR     16.0      4.3776600000      6.2447400000     15.6786900000&lt;br /&gt;
 SULFUR     16.0      7.5478700000      6.0679800000     16.2223700000&lt;br /&gt;
 CARBON      6.0      6.4716900000      2.1004800000     14.1902300000&lt;br /&gt;
 CARBON      6.0      5.0690300000      2.1781400000     14.1080700000&lt;br /&gt;
 CARBON      6.0      4.8421800000      4.2701300000     19.8855500000&lt;br /&gt;
 CARBON      6.0      6.1969000000      3.9249600000     19.9397400000&lt;br /&gt;
 CARBON      6.0      6.8280600000      3.7834200000     21.1913200000&lt;br /&gt;
 CARBON      6.0      5.7697600000      7.6933500000     17.4241800000&lt;br /&gt;
 CARBON      6.0      7.2043100000      7.9413600000     17.8281100000&lt;br /&gt;
 CARBON      6.0      5.5051400000      7.0409700000     14.5903800000&lt;br /&gt;
 CARBON      6.0      6.8905200000      6.9194700000     14.7626200000&lt;br /&gt;
 CARBON      6.0      7.7396400000      7.5379800000     13.8285700000&lt;br /&gt;
 HYDROGEN    1.0      8.8190700000      7.4520600000     13.9252200000&lt;br /&gt;
 CARBON      6.0      7.2169400000      8.2960300000     12.7704100000&lt;br /&gt;
 HYDROGEN    1.0      7.8667000000      8.7825100000     12.0575600000&lt;br /&gt;
 CARBON      6.0      5.8260300000      8.4502300000     12.6467800000&lt;br /&gt;
 HYDROGEN    1.0      5.4143000000      9.0544300000     11.8493100000&lt;br /&gt;
 CARBON      6.0      4.9881500000      7.8192300000     13.5528400000&lt;br /&gt;
 HYDROGEN    1.0      3.9090500000      7.9420000000     13.4583700000&lt;br /&gt;
 CARBON      6.0      7.1538500000      1.1569600000     13.4143900000&lt;br /&gt;
 CARBON      6.0      4.4018100000      1.3603900000     13.1919900000&lt;br /&gt;
 CARBON      6.0      6.4791600000      0.3185500000     12.5353300000&lt;br /&gt;
 CARBON      6.0      5.0837400000      0.4369500000     12.4084900000&lt;br /&gt;
 HYDROGEN    1.0      7.0116000000     -0.4099400000     11.9434600000&lt;br /&gt;
 HYDROGEN    1.0      8.2399000000      1.0702400000     13.4937600000&lt;br /&gt;
 HYDROGEN    1.0      3.3185600000      1.4368700000     13.0953100000&lt;br /&gt;
 HYDROGEN    1.0      4.5549800000     -0.1997300000     11.7165200000&lt;br /&gt;
 CARBON      6.0      6.1105700000      3.9639000000     22.3866100000&lt;br /&gt;
 CARBON      6.0      4.1216300000      4.4424400000     21.1020100000&lt;br /&gt;
 HYDROGEN    1.0      7.8732900000      3.5217100000     21.2520500000&lt;br /&gt;
 CARBON      6.0      4.7606000000      4.2868500000     22.3363800000&lt;br /&gt;
 HYDROGEN    1.0      6.6064200000      3.8406000000     23.3428500000&lt;br /&gt;
 HYDROGEN    1.0      4.2065000000      4.4170700000     23.2667100000&lt;br /&gt;
 HYDROGEN    1.0      3.0674000000      4.6893500000     21.0889000000&lt;br /&gt;
 HYDROGEN    1.0      7.4249200000      7.7545300000     18.8583200000&lt;br /&gt;
 HYDROGEN    1.0      7.6651700000      8.9049700000     17.7652100000&lt;br /&gt;
 HYDROGEN    1.0      5.3324000000      8.6487800000     17.2222700000&lt;br /&gt;
 HYDROGEN    1.0      5.5015000000      7.1039000000     18.2759400000&lt;br /&gt;
  $END&lt;br /&gt;
  $ECP&lt;br /&gt;
 MO-ECP GEN     28     3&lt;br /&gt;
  5      ----- f potential     -----&lt;br /&gt;
     -0.0469492        0    537.9667807        &lt;br /&gt;
    -20.2080084        1    147.8982938        &lt;br /&gt;
   -106.2116302        2     45.7358898        &lt;br /&gt;
    -41.8107368        2     13.2911467        &lt;br /&gt;
     -4.2054103        2      4.7059961        &lt;br /&gt;
  3      ----- s-f potential     -----&lt;br /&gt;
      2.8063717        0    110.2991760        &lt;br /&gt;
     44.5162012        1     23.2014645        &lt;br /&gt;
     82.7785227        2      5.3530131        &lt;br /&gt;
  4      ----- p-f potential     -----&lt;br /&gt;
      4.9420876        0     63.2901397        &lt;br /&gt;
     25.8604976        1     23.3315302        &lt;br /&gt;
    132.4708742        2     24.6759423        &lt;br /&gt;
     57.3149794        2      4.6493040        &lt;br /&gt;
  5      ----- d-f potential     -----&lt;br /&gt;
      3.0054591        0    104.4839977        &lt;br /&gt;
     26.3637851        1     66.2307245        &lt;br /&gt;
    183.3849199        2     39.1283176        &lt;br /&gt;
     98.4453068        2     13.1164437        &lt;br /&gt;
     22.4901377        2      3.6280263 &lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
=====The Input Deck=====&lt;br /&gt;
&lt;br /&gt;
Below is the input deck.  It is where you tell GAMESS(US) which job type to execute and where all of your individual parameters for that job type are entered.  The example input deck below is for a geometry optimization and frequency calculation; it is equivalent to a Gaussian job with &amp;quot;opt&amp;quot; and &amp;quot;freq&amp;quot; in the route section.&lt;br /&gt;
&lt;br /&gt;
  $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=M06-L MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
   ECP=READ $END&lt;br /&gt;
  $SYSTEM TIMLIM=2850 MWORDS=1750 MEMDDI=20 PARALL=.TRUE. $END&lt;br /&gt;
  $BASIS GBASIS=CUSTOMNI EXTFIL=.t. $END&lt;br /&gt;
  $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
  $STATPT OPTTOL=0.0001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
  $DATA&lt;br /&gt;
&lt;br /&gt;
An important thing to note is the spacing.  Each line of the input deck must begin with one space; if it does not, the job will fail.  Most input builders insert this space anyway, but it helps to double-check.&lt;br /&gt;
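Since a missing leading space is easy to overlook, a quick shell check can catch it before submitting.  The snippet below is only a sketch: the deck it builds is a minimal hypothetical example, not a runnable GAMESS(US) input.&lt;br /&gt;

```shell
# Build a tiny example deck; every $-group line carries the required
# leading space (the groups shown are just placeholders).
cat > deck.inp <<'EOF'
 $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE $END
 $SYSTEM TIMLIM=2850 MWORDS=1750 $END
 $DATA
EOF
# Any line beginning with '$' in column 1 is missing its leading space.
if grep -n '^\$' deck.inp; then
  echo "deck has lines missing the leading space"
else
  echo "deck spacing OK"
fi
```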
&lt;br /&gt;
The end of the input deck is marked by the &amp;quot;$DATA&amp;quot; line.&lt;br /&gt;
&lt;br /&gt;
=====Job Title Line=====&lt;br /&gt;
&lt;br /&gt;
The next line of the file is the job title.  It can be anything you wish; however, to be on the safe side, we avoid using symbols or spaces.&lt;br /&gt;
&lt;br /&gt;
  Mo_BDT3&lt;br /&gt;
&lt;br /&gt;
=====Symmetry Point Group=====&lt;br /&gt;
&lt;br /&gt;
The next line of the file is the symmetry point group of your molecule.  Note that there is no leading space before the point group.&lt;br /&gt;
&lt;br /&gt;
 C1&lt;br /&gt;
&lt;br /&gt;
=====Coordinates=====&lt;br /&gt;
&lt;br /&gt;
The next block of text is set aside for the coordinates of the molecule.  This can be in internal (or z-matrix) format or cartesian coordinates.  Note that there is no leading space before the coordinates.  One may use the chemical symbol or the full name of each atom in the molecule.  Note that the end of the coordinates is signified by an &amp;quot;$END&amp;quot;, which MUST have one space preceding it.  The coordinates below do NOT have any basis set information inserted.  It is possible to insert basis set information directly into the input file.  This is accomplished by obtaining the desired basis set parameters from the EMSL and then inserting them below each relevant atom.  An example input file with inserted basis set information will be shown later.&lt;br /&gt;
&lt;br /&gt;
 MOLYBDENUM 42.0      5.7556500000      4.4039600000     16.5808400000&lt;br /&gt;
 SULFUR     16.0      7.4169700000      3.1956300000     15.2089300000&lt;br /&gt;
 SULFUR     16.0      4.0966800000      3.2258300000     15.1761100000&lt;br /&gt;
 SULFUR     16.0      3.9677300000      4.4940500000     18.3266100000&lt;br /&gt;
 SULFUR     16.0      7.1776900000      3.5815000000     18.4485200000&lt;br /&gt;
 SULFUR     16.0      4.3776600000      6.2447400000     15.6786900000&lt;br /&gt;
 SULFUR     16.0      7.5478700000      6.0679800000     16.2223700000&lt;br /&gt;
 CARBON      6.0      6.4716900000      2.1004800000     14.1902300000&lt;br /&gt;
 CARBON      6.0      5.0690300000      2.1781400000     14.1080700000&lt;br /&gt;
 CARBON      6.0      4.8421800000      4.2701300000     19.8855500000&lt;br /&gt;
 CARBON      6.0      6.1969000000      3.9249600000     19.9397400000&lt;br /&gt;
 CARBON      6.0      6.8280600000      3.7834200000     21.1913200000&lt;br /&gt;
 CARBON      6.0      5.7697600000      7.6933500000     17.4241800000&lt;br /&gt;
 CARBON      6.0      7.2043100000      7.9413600000     17.8281100000&lt;br /&gt;
 CARBON      6.0      5.5051400000      7.0409700000     14.5903800000&lt;br /&gt;
 CARBON      6.0      6.8905200000      6.9194700000     14.7626200000&lt;br /&gt;
 CARBON      6.0      7.7396400000      7.5379800000     13.8285700000&lt;br /&gt;
 HYDROGEN    1.0      8.8190700000      7.4520600000     13.9252200000&lt;br /&gt;
 CARBON      6.0      7.2169400000      8.2960300000     12.7704100000&lt;br /&gt;
 HYDROGEN    1.0      7.8667000000      8.7825100000     12.0575600000&lt;br /&gt;
 CARBON      6.0      5.8260300000      8.4502300000     12.6467800000&lt;br /&gt;
 HYDROGEN    1.0      5.4143000000      9.0544300000     11.8493100000&lt;br /&gt;
 CARBON      6.0      4.9881500000      7.8192300000     13.5528400000&lt;br /&gt;
 HYDROGEN    1.0      3.9090500000      7.9420000000     13.4583700000&lt;br /&gt;
 CARBON      6.0      7.1538500000      1.1569600000     13.4143900000&lt;br /&gt;
 CARBON      6.0      4.4018100000      1.3603900000     13.1919900000&lt;br /&gt;
 CARBON      6.0      6.4791600000      0.3185500000     12.5353300000&lt;br /&gt;
 CARBON      6.0      5.0837400000      0.4369500000     12.4084900000&lt;br /&gt;
 HYDROGEN    1.0      7.0116000000     -0.4099400000     11.9434600000&lt;br /&gt;
 HYDROGEN    1.0      8.2399000000      1.0702400000     13.4937600000&lt;br /&gt;
 HYDROGEN    1.0      3.3185600000      1.4368700000     13.0953100000&lt;br /&gt;
 HYDROGEN    1.0      4.5549800000     -0.1997300000     11.7165200000&lt;br /&gt;
 CARBON      6.0      6.1105700000      3.9639000000     22.3866100000&lt;br /&gt;
 CARBON      6.0      4.1216300000      4.4424400000     21.1020100000&lt;br /&gt;
 HYDROGEN    1.0      7.8732900000      3.5217100000     21.2520500000&lt;br /&gt;
 CARBON      6.0      4.7606000000      4.2868500000     22.3363800000&lt;br /&gt;
 HYDROGEN    1.0      6.6064200000      3.8406000000     23.3428500000&lt;br /&gt;
 HYDROGEN    1.0      4.2065000000      4.4170700000     23.2667100000&lt;br /&gt;
 HYDROGEN    1.0      3.0674000000      4.6893500000     21.0889000000&lt;br /&gt;
 HYDROGEN    1.0      7.4249200000      7.7545300000     18.8583200000&lt;br /&gt;
 HYDROGEN    1.0      7.6651700000      8.9049700000     17.7652100000&lt;br /&gt;
 HYDROGEN    1.0      5.3324000000      8.6487800000     17.2222700000&lt;br /&gt;
 HYDROGEN    1.0      5.5015000000      7.1039000000     18.2759400000&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
=====Effective Core Potential Data=====&lt;br /&gt;
&lt;br /&gt;
The effective core potential (ECP) data is entered after the coordinates.  It starts with &amp;quot;$ECP&amp;quot;, which must be preceded by a space.  The atoms of the molecule are listed in the same order as in the coordinates section, with the ECP parameters after each atom.  Note that any atom that does NOT have an ECP must instead be followed by &amp;quot;ECP-NONE&amp;quot; or &amp;quot;NONE&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
 $ECP&lt;br /&gt;
 MO-ECP GEN     28     3&lt;br /&gt;
  5      ----- f potential     -----&lt;br /&gt;
     -0.0469492        0    537.9667807        &lt;br /&gt;
    -20.2080084        1    147.8982938        &lt;br /&gt;
   -106.2116302        2     45.7358898        &lt;br /&gt;
    -41.8107368        2     13.2911467        &lt;br /&gt;
     -4.2054103        2      4.7059961        &lt;br /&gt;
  3      ----- s-f potential     -----&lt;br /&gt;
      2.8063717        0    110.2991760        &lt;br /&gt;
     44.5162012        1     23.2014645        &lt;br /&gt;
     82.7785227        2      5.3530131        &lt;br /&gt;
  4      ----- p-f potential     -----&lt;br /&gt;
      4.9420876        0     63.2901397        &lt;br /&gt;
     25.8604976        1     23.3315302        &lt;br /&gt;
    132.4708742        2     24.6759423        &lt;br /&gt;
     57.3149794        2      4.6493040        &lt;br /&gt;
  5      ----- d-f potential     -----&lt;br /&gt;
      3.0054591        0    104.4839977        &lt;br /&gt;
     26.3637851        1     66.2307245        &lt;br /&gt;
    183.3849199        2     39.1283176        &lt;br /&gt;
     98.4453068        2     13.1164437        &lt;br /&gt;
     22.4901377        2      3.6280263 &lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 S NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 C NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
 H NONE&lt;br /&gt;
  $END&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  16 November 2009&lt;br /&gt;
&lt;br /&gt;
====Using an External File to Define Basis Set in GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
Since GAMESS(US) has a limited number of built-in ECPs and basis sets, one may want to make GAMESS(US) read an external file that contains the basis set information and ECP data, using the &amp;quot;EXTFIL&amp;quot; keyword in the $BASIS group of the input file.  For many metal-containing compounds, it is very convenient and time-saving to use an effective core potential (ECP) for the core metal electrons, as they are usually not important to the reactivity of the complex or the geometry around the metal.  To make GAMESS(US) use this external file, one must copy the &amp;quot;rungms&amp;quot; file and modify it accordingly.  The following is a list of instructions with commands that will work from a terminal.  One could also use WinSCP to do all of this with a GUI rather than a TUI.  &lt;br /&gt;
&lt;br /&gt;
=====Modifying rungms to Use a Custom Basis Set File=====&lt;br /&gt;
1. Copy &amp;quot;rungms&amp;quot; from /scinet/gpc/Applications/gamess to one's own /scratch/$USER/ directory:&lt;br /&gt;
 cp /scinet/gpc/Applications/gamess/rungms /scratch/$USER/&lt;br /&gt;
&lt;br /&gt;
2. Change to the scratch directory and check to see if &amp;quot;rungms&amp;quot; has copied successfully.&lt;br /&gt;
 cd /scratch/$USER&lt;br /&gt;
 ls&lt;br /&gt;
&lt;br /&gt;
3. Edit line 147 of the script.  &lt;br /&gt;
 vi rungms&lt;br /&gt;
Move the cursor down to line 147 using the arrow keys.  It should say &amp;quot;setenv EXTBAS /dev/null&amp;quot;.  Using the arrow keys, move the cursor to the first &amp;quot;/&amp;quot; and then hit &amp;quot;i&amp;quot; to insert text.  Put the path to your external basis file here.  For example, /scratch/$USER/basisset.  Then hit &amp;quot;escape&amp;quot;.  To save the changes and exit vi, type &amp;quot;:&amp;quot; and you should see a colon appear at the bottom of the window.  Type &amp;quot;wq&amp;quot; (which should appear at the bottom of the window next to the colon) and then hit enter.  Now you are done with vi.&lt;br /&gt;
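For those who prefer not to use vi, the same change can be made non-interactively with sed.  The snippet below operates on a one-line stand-in for rungms, and the basis-file path is only an example:&lt;br /&gt;

```shell
# One-line stand-in for the relevant line of rungms (the real script is
# much larger; only the EXTBAS setting matters here).
echo 'setenv EXTBAS /dev/null' > rungms
# Point EXTBAS at the external basis set file instead.  Single quotes keep
# $USER literal, as it should appear in the csh script.  (Path is an example.)
sed -i 's|setenv EXTBAS /dev/null|setenv EXTBAS /scratch/$USER/basisset|' rungms
grep 'setenv EXTBAS' rungms
```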
&lt;br /&gt;
=====Creating a Custom Basis Set File=====&lt;br /&gt;
1. To create a custom basis set file, you need to create a new text document.  Our group's common practice is to comment out the first line of this file with an exclamation mark (!), followed by a note of the specific basis sets and ECPs that are going to be used for each of the atoms.  Let us use the molecule Mo(CO)6, molybdenum hexacarbonyl, as an example.  Below is the first line of the external file, which we will call &amp;quot;CUSTOMMO&amp;quot; (NOTE: you can use any name for the external file that suits you, as long as it has no spaces and is 8 characters or fewer).&lt;br /&gt;
&lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
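The 8-character, no-spaces limit on the file name can be checked with a couple of lines of shell (the name below is just the example used here):&lt;br /&gt;

```shell
# GAMESS(US) external basis file names must be 8 characters or fewer
# and contain no spaces; check the example name CUSTOMMO.
name=CUSTOMMO
if [ ${#name} -le 8 ] && [ "${name%% *}" = "$name" ]; then
  echo "name OK"
else
  echo "name invalid"
fi
```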
&lt;br /&gt;
2. The next step is to visit the [https://bse.pnl.gov/bse/portal EMSL Basis Set exchange] and select C and O from the periodic table.  Then, on the left of the page, select &amp;quot;6-31G&amp;quot; as the basis set.  Finally, make sure the output is in GAMESS(US) format using the drop-down menu and then click &amp;quot;get basis set&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[File:C_O_6_31G_basisset.JPG|centre]]&lt;br /&gt;
&lt;br /&gt;
3. A new window should appear with text in it.  For our example case, the text looks like this:&lt;br /&gt;
 &lt;br /&gt;
 !  6-31G  EMSL  Basis Set Exchange Library   10/13/09 11:12 AM&lt;br /&gt;
 ! Elements                             References&lt;br /&gt;
 ! --------                             ----------&lt;br /&gt;
 ! H - He: W.J. Hehre, R. Ditchfield and J.A. Pople, J. Chem. Phys. 56,&lt;br /&gt;
 ! Li - Ne: 2257 (1972).  Note: Li and B come from J.D. Dill and J.A.&lt;br /&gt;
 ! Pople, J. Chem. Phys. 62, 2921 (1975).&lt;br /&gt;
 ! Na - Ar: M.M. Francl, W.J. Petro, W.J. Hehre, J.S. Binkley, M.S. Gordon,&lt;br /&gt;
 ! D.J. DeFrees and J.A. Pople, J. Chem. Phys. 77, 3654 (1982)&lt;br /&gt;
 ! K  - Zn: V. Rassolov, J.A. Pople, M. Ratner and T.L. Windus, J. Chem. Phys.&lt;br /&gt;
 ! 109, 1223 (1998)&lt;br /&gt;
 ! Note: He and Ne are unpublished basis sets taken from the Gaussian&lt;br /&gt;
 ! program&lt;br /&gt;
 ! &lt;br /&gt;
 $DATA&lt;br /&gt;
 CARBON&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000&lt;br /&gt;
 OXYGEN&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000        &lt;br /&gt;
 $END&lt;br /&gt;
&lt;br /&gt;
4. Now, copy and paste the text between the $DATA and $END headings into our external text file, CUSTOMMO.  We also need to change the name of each element to the corresponding symbol in the periodic table.  Finally, we need to add the name of the external file next to each element symbol, separated by one space.  Note that there should be a blank line separating the basis set information and the first, commented-out line (the line starting with '!').  CUSTOMMO should look like this:&lt;br /&gt;
 &lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
 C CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000&lt;br /&gt;
 O CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000 &lt;br /&gt;
&lt;br /&gt;
5. Repeat Steps 2 and 3 above, but choose Mo and select the LANL2DZ ECP instead.  A new window will pop up with the basis set information as well as the ECP data we need, since we specified the LANL2DZ '''ECP'''.  The ECP data is not inserted into the external file; rather, it is placed into the input file itself (more on this later).  &lt;br /&gt;
&lt;br /&gt;
[[File:Mo_LANL2DZ_basisset.JPG|centre]]&lt;br /&gt;
&lt;br /&gt;
6.  After copying the molybdenum basis set information, your finished external basis set file should look like this:&lt;br /&gt;
 ! 6-31G on C and O and LANL2DZ ECP on Mo&lt;br /&gt;
 C CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   3047.5249000              0.0018347        &lt;br /&gt;
   2    457.3695100              0.0140373        &lt;br /&gt;
   3    103.9486900              0.0688426        &lt;br /&gt;
   4     29.2101550              0.2321844        &lt;br /&gt;
   5      9.2866630              0.4679413        &lt;br /&gt;
   6      3.1639270              0.3623120        &lt;br /&gt;
 L   3&lt;br /&gt;
   1      7.8682724             -0.1193324              0.0689991        &lt;br /&gt;
   2      1.8812885             -0.1608542              0.3164240        &lt;br /&gt;
   3      0.5442493              1.1434564              0.7443083        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.1687144              1.0000000              1.0000000&lt;br /&gt;
 O CUSTOMMO&lt;br /&gt;
 S   6&lt;br /&gt;
   1   5484.6717000              0.0018311        &lt;br /&gt;
   2    825.2349500              0.0139501        &lt;br /&gt;
   3    188.0469600              0.0684451        &lt;br /&gt;
   4     52.9645000              0.2327143        &lt;br /&gt;
   5     16.8975700              0.4701930        &lt;br /&gt;
   6      5.7996353              0.3585209        &lt;br /&gt;
 L   3&lt;br /&gt;
   1     15.5396160             -0.1107775              0.0708743        &lt;br /&gt;
   2      3.5999336             -0.1480263              0.3397528        &lt;br /&gt;
   3      1.0137618              1.1307670              0.7271586        &lt;br /&gt;
 L   1&lt;br /&gt;
   1      0.2700058              1.0000000              1.0000000&amp;lt;br /&amp;gt; &lt;br /&gt;
 Mo CUSTOMMO&lt;br /&gt;
 S   3&lt;br /&gt;
   1      2.3610000             -0.9121760        &lt;br /&gt;
   2      1.3090000              1.1477453        &lt;br /&gt;
   3      0.4500000              0.6097109        &lt;br /&gt;
 S   4&lt;br /&gt;
   1      2.3610000              0.8139259        &lt;br /&gt;
   2      1.3090000             -1.1360084        &lt;br /&gt;
   3      0.4500000             -1.1611592        &lt;br /&gt;
   4      0.1681000              1.0064786        &lt;br /&gt;
 S   1&lt;br /&gt;
   1      0.0423000              1.0000000        &lt;br /&gt;
 P   3&lt;br /&gt;
   1      4.8950000             -0.0908258        &lt;br /&gt;
   2      1.0440000              0.7042899        &lt;br /&gt;
   3      0.3877000              0.3973179        &lt;br /&gt;
 P   2&lt;br /&gt;
   1      0.4995000             -0.1081945        &lt;br /&gt;
   2      0.0780000              1.0368093        &lt;br /&gt;
 P   1&lt;br /&gt;
   1      0.0247000              1.0000000        &lt;br /&gt;
 D   3&lt;br /&gt;
   1      2.9930000              0.0527063        &lt;br /&gt;
   2      1.0630000              0.5003907        &lt;br /&gt;
   3      0.3721000              0.5794024        &lt;br /&gt;
 D   1&lt;br /&gt;
   1      0.1178000              1.0000000&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 September 2010&lt;br /&gt;
&lt;br /&gt;
====A Modified BASH Script for Running GAMESS(US)====&lt;br /&gt;
Below please find the bash script that we use to run GAMESS(US) on a single node with 8 processors.  &lt;br /&gt;
&lt;br /&gt;
One quirk of GAMESS(US) is that it will NOT overwrite files left behind by old or failed jobs with the same name as the input file you are submitting.  For example: suppose the input file is named &amp;quot;mo_opt.inp&amp;quot; and the job is submitted to the queue, but it comes back seconds later with an error.  The log file reports an incorrect keyword, and lo and behold, there is a comma where it shouldn't be.  Such typos are common.  If you simply try to re-submit, GAMESS(US) will fail again, because the first attempt wrote a .log file and some other files to the /scratch/user/gamess-scratch/ directory.  These files must all be deleted before you re-submit your fixed input file.&lt;br /&gt;
&lt;br /&gt;
This script takes care of this annoying problem by deleting failed jobs with the same file name for you.&lt;br /&gt;
&lt;br /&gt;
Here it is:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #PBS -l nodes=1:ppn=8,walltime=48:00:00,os=centos53computeA&lt;br /&gt;
 &lt;br /&gt;
 ## To submit type: qsub x.sh&lt;br /&gt;
 &lt;br /&gt;
 # If not an interactive job (i.e. -I), then cd into the directory where&lt;br /&gt;
 # I typed qsub.&lt;br /&gt;
 if [ &amp;quot;$PBS_ENVIRONMENT&amp;quot; != &amp;quot;PBS_INTERACTIVE&amp;quot; ]; then&lt;br /&gt;
   if [ -n &amp;quot;$PBS_O_WORKDIR&amp;quot; ]; then&lt;br /&gt;
     cd $PBS_O_WORKDIR&lt;br /&gt;
   fi&lt;br /&gt;
 fi&lt;br /&gt;
 &lt;br /&gt;
 # the input file is typically named something like &amp;quot;gamesjob.inp&amp;quot;&lt;br /&gt;
 # so the script will be run like &amp;quot;$SCINET_RUNGMS gamessjob 00 8 8&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 find /scratch/user/gamess-scratch -type f -name ${NAME:-safety_net}\* -exec /bin/rm {} \;&lt;br /&gt;
 &lt;br /&gt;
 # load the gamess module if not in .bashrc already&lt;br /&gt;
 # actually, it MUST be in .bashrc&lt;br /&gt;
 # module load gamess&lt;br /&gt;
 &lt;br /&gt;
 # run the program&lt;br /&gt;
 &lt;br /&gt;
 /scratch/user/rungms $NAME 00 8 8 &amp;gt;&amp;amp; $NAME.log&lt;br /&gt;
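&lt;br /&gt;
Note that the script above never sets the NAME variable itself; presumably it is passed at submission time (for example, qsub x.sh -v NAME=mo_opt, mirroring the other scripts on this page), and the ${NAME:-safety_net} default means that an accidental run with NAME unset deletes nothing.  A runnable local sketch of that guard, with made-up scratch directory and file names:&lt;br /&gt;

```shell
# Local demonstration of the ${NAME:-safety_net} guard in the job script.
# The scratch directory and file names here are made up for illustration.
SCRATCH=$(mktemp -d)
touch "$SCRATCH/mo_opt.log" "$SCRATCH/mo_opt.dat" "$SCRATCH/other_job.log"

# With NAME unset, the pattern becomes "safety_net*": nothing matches,
# so an accidental run cannot wipe the whole scratch directory.
find "$SCRATCH" -type f -name "${NAME:-safety_net}*" -exec rm {} \;

# With NAME set to the failed job's basename, only its files are removed.
NAME=mo_opt
find "$SCRATCH" -type f -name "${NAME:-safety_net}*" -exec rm {} \;
ls "$SCRATCH"    # other_job.log is untouched
```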
&lt;br /&gt;
====A Script to Add the $VIB Group for Hessian Restarts in GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
Sometimes an optimization plus vibrational analysis, or just a plain vibrational analysis, must be restarted, either because the two-day time limit has been exceeded or because an error occurred during the calculation.  In GAMESS(US), you can restart a vibrational analysis from a previous one, and it will utilize the frequencies that were already computed in the failed run.&lt;br /&gt;
&lt;br /&gt;
For example, if one submits the input file &amp;quot;job_name.inp&amp;quot; and it fails before finishing, then one must utilize the file &amp;quot;job_name.rst&amp;quot;, which contains the data required to restart the calculation.  This file is located in the /scratch/user/gamess-scratch directory.  Data from the &amp;quot;job_name.rst&amp;quot; file must be appended at the end of the new input file (after the coordinates and the ECP section, if present) to restart the calculation; let us call this new file &amp;quot;job_name_restart.inp&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
A shortened version of the &amp;quot;job_name.rst&amp;quot; file looks like this:&lt;br /&gt;
&lt;br /&gt;
  ENERGY/GRADIENT/DIPOLE RESTART DATA FOR RUNTYP=HESSIAN&lt;br /&gt;
  job_name                           &lt;br /&gt;
  $VIB   &lt;br /&gt;
         IVIB=   0 IATOM=   0 ICOORD=   0 E=    -3717.1435124522&lt;br /&gt;
 -5.165258381E-04 1.584665821E-02-1.206270555E-02-2.241461728E-03 3.176050715E-03&lt;br /&gt;
 -5.706738823E-04 2.502034151E-03 5.130112290E-04-2.716945939E-03 1.357008279E-03&lt;br /&gt;
 -1.059915305E-03 1.693526456E-03-2.957638907E-04-5.994938737E-04 9.684054361E-04&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The text eventually ends with one blank line. The $VIB heading and all of the text after $VIB must be appended to the end of file &amp;quot;job_name_restart.inp&amp;quot; and then &amp;quot; $END&amp;quot; must be inserted at the very end of the file.&lt;br /&gt;
&lt;br /&gt;
One could cut and paste this in a text editor, but we have written a small script that will do it automatically.  We call it &amp;quot;vib.sh&amp;quot; but you can call it whatever you like.  Here it is:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # script to add vibrational data for a hessian restart&lt;br /&gt;
 &lt;br /&gt;
 awk '/\$VIB/{p=1}p;END{print &amp;quot; $END&amp;quot;}' /scratch/user/gamess-scratch/$NAME1.rst &amp;gt;&amp;gt; $NAME2.inp&lt;br /&gt;
&lt;br /&gt;
To use it, simply copy it into a new text file with the extension &amp;quot;.sh&amp;quot; and make it executable.  Also, you will need to edit the location of the &amp;quot;/scratch/user/gamess-scratch/&amp;quot; directory to match your user name.  The two variables in the script, NAME1 and NAME2, represent the name of your &amp;quot;.rst&amp;quot; file and your new &amp;quot;.inp&amp;quot; file, respectively.  In the example above, NAME1=job_name (that is, the same name as the .rst file that contains the $VIB data and that was created in the gamess-scratch directory) and NAME2=job_name_restart (that is, the name of the new input file that you have prepared and want to copy the $VIB data into).&lt;br /&gt;
&lt;br /&gt;
To run it on a gpc node without submitting it to the job queue, type:&lt;br /&gt;
&lt;br /&gt;
 NAME1=job_name NAME2=job_name_restart ./vib.sh&lt;br /&gt;
&lt;br /&gt;
To run it in the queue, type:&lt;br /&gt;
&lt;br /&gt;
 qsub vib.sh -v NAME1=job_name,NAME2=job_name_restart &lt;br /&gt;
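&lt;br /&gt;
To see what vib.sh actually does, here is a self-contained demonstration on a miniature, made-up .rst file (a real file carries many more lines of numeric data, and the input deck content below is only a stand-in):&lt;br /&gt;

```shell
# Miniature demonstration of the vib.sh append step.  File contents are
# made up for illustration.
work=$(mktemp -d) && cd "$work"
cat > job_name.rst <<'EOF'
 ENERGY/GRADIENT/DIPOLE RESTART DATA FOR RUNTYP=HESSIAN
 job_name
 $VIB
        IVIB=   0 IATOM=   0 ICOORD=   0 E=    -3717.1435124522
EOF
echo ' stand-in restart input deck' > job_name_restart.inp

# Everything from the line containing $VIB onward is appended,
# followed by a final " $END" line.
awk '/\$VIB/{p=1}p;END{print " $END"}' job_name.rst >> job_name_restart.inp
```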
&lt;br /&gt;
-special thanks to Ramses for help with this&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  30 September 2010&lt;br /&gt;
&lt;br /&gt;
====Most Commonly Used Headers in The Fekl Lab====&lt;br /&gt;
&lt;br /&gt;
After about a year of using GAMESS(US), we have found that we most often run optimizations, frequency analyses, transition state searches and IRC calculations using DFT methods.  Here are the input decks that we have found work well for inorganic and organometallic compounds.&lt;br /&gt;
&lt;br /&gt;
=====Optimization Plus Frequency (for a neutral, singlet)=====&lt;br /&gt;
 &lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2800 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. DAMP=.T. $END&lt;br /&gt;
 $STATPT OPTTOL=0.00001 NSTEP=500 HSSEND=.t. $END&lt;br /&gt;
 $FORCE TEMP=298.15 PURIFY=.t. PROJCT=.t. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====Frequency Only (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=HESSIAN DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2800 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. DAMP=.T. $END&lt;br /&gt;
 $FORCE METHOD=SEMINUM VIBANL=.TRUE. PROJCT=.T. PURIFY=.T. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====Transition State Search (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=SADPOINT DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $SYSTEM TIMLIM=2850 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. DIIS=.T. SOSCF=.F. $END&lt;br /&gt;
 $STATPT STSTEP=0.05 OPTTOL=0.00001 NSTEP=500 HESS=CALC HSSEND=.t. &lt;br /&gt;
  STPT=.FALSE. $END&lt;br /&gt;
 $FORCE METHOD=SEMINUM VIBANL=.TRUE. PURIFY=.T. PROJCT=.T. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
=====IRC (Intrinsic Reaction Coordinate following forward reaction) Calculation (for a neutral, singlet)=====&lt;br /&gt;
&lt;br /&gt;
 $CONTRL SCFTYP=RHF RUNTYP=IRC DFTTYP=''FILL_IN_YOUR_PREFERENCE_HERE'' MAXIT=199 MULT=1 NOSYM=1&lt;br /&gt;
  ECP=READ $END&lt;br /&gt;
 $IRC OPTTOL=0.00001 STRIDE=0.05 NPOINT=5000 SADDLE=.TRUE. FORWRD=.T.&lt;br /&gt;
 $END&lt;br /&gt;
 $SYSTEM TIMLIM=2850 MWORDS=20 MEMDDI=50 PARALL=.TRUE. $END&lt;br /&gt;
 $SCF DIRSCF=.TRUE. FDIFF=.f. $END&lt;br /&gt;
 $FORCE TEMP=298.15 PURIFY=.t. PROJCT=.t. $END&lt;br /&gt;
 $DATA&lt;br /&gt;
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 September 2010&lt;br /&gt;
&lt;br /&gt;
====How to Run an IRC Calculation Using GAMESS(US)====&lt;br /&gt;
&lt;br /&gt;
An IRC or Intrinsic Reaction Coordinate calculation follows the imaginary mode of the vibrational analysis of a transition state calculation.  In GAMESS(US), you can choose to follow the forward (towards the products) or backward (towards the reactants) direction.  As shown above in the IRC header that we use, the direction of the IRC calculation is controlled by the &amp;quot;FORWRD&amp;quot; keyword.  Using &amp;quot;FORWRD=.T.&amp;quot; means that the IRC follows the forward direction, while &amp;quot;FORWRD=.F.&amp;quot; means that it follows the backward direction.&lt;br /&gt;
&lt;br /&gt;
Let us say we want to perform an IRC calculation.  First, you must perform a vibrational analysis of your molecule and check to ensure there is only 1 negative frequency.  If that is the case, then the vibrational analysis completed successfully and there will be a file with the extension &amp;quot;.dat&amp;quot;, let us call it &amp;quot;job_name.dat&amp;quot;, in the &amp;quot;/scratch/$USER/gamess-scratch/&amp;quot; directory (where $USER is your user name).  This file contains data that is required for the IRC input file.&lt;br /&gt;
&lt;br /&gt;
To prepare your IRC input file, prepare an input file using the coordinates of the optimized structure of the transition state.  These can come from ChemCraft or Avogadro or MacMolPlt - whatever you prefer to use.  Then copy and paste the IRC header above, or use your own parameters.  Call it whatever you want, as long as it has an &amp;quot;.inp&amp;quot; extension; let us call it &amp;quot;irc_job.inp&amp;quot;.  &lt;br /&gt;
&lt;br /&gt;
For example, the &amp;quot;STRIDE&amp;quot; value determines the &amp;quot;size&amp;quot; of the steps between each point on the IRC graph.  If you increase the stride, say from 0.05 to 0.1, then the steps between points become larger and you will approach the minimum faster (this will give you fewer data points should you choose to plot the IRC data).  Decreasing the stride, say from 0.05 to 0.01, makes the steps between points smaller, and you may not reach the minimum of the reaction coordinate in the allotted time period.&lt;br /&gt;
&lt;br /&gt;
You should now have an input file with an IRC header, the coordinates of the transition state and basis set and ECP information called &amp;quot;irc_job.inp&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Now you need to use the &amp;quot;job_name.dat&amp;quot; file in the &amp;quot;/scratch/$USER/gamess-scratch/&amp;quot; directory.  This file contains a number of blocks of data that are sandwiched between a line containing only &amp;quot; $HESS&amp;quot; and a line containing only &amp;quot; $END&amp;quot;.  What you need is the LAST of these blocks, and it has to be copied and pasted directly below the last entry of your input file.&lt;br /&gt;
&lt;br /&gt;
This can be difficult and time consuming, as the .dat files can be very large (sometimes over 150 MB) and cumbersome to navigate.  However, we have written a script, similar to the vib.sh script, that can help you out with this.  Basically, this script does all the copying and pasting for you.  &lt;br /&gt;
&lt;br /&gt;
Here it is:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # script to add hessian data for an IRC calculation&lt;br /&gt;
 &lt;br /&gt;
 awk '/\$HESS/{arr=&amp;quot;&amp;quot;;f=1} f {arr=(arr)?arr ORS $0:$0} /\$END/{f=0} END {print arr}' /scratch/$USER/gamess-scratch/$DAT.dat &amp;gt;&amp;gt; $IN.inp&lt;br /&gt;
&lt;br /&gt;
To use it, simply copy it into a new text file with the name &amp;quot;irc.sh&amp;quot; and make it executable.  Also, you will need to edit the location of the &amp;quot;/scratch/$USER/gamess-scratch/&amp;quot; directory to match your user name.  The two variables in the script, DAT and IN, represent the name of your &amp;quot;.dat&amp;quot; file and your new &amp;quot;.inp&amp;quot; file, respectively.  Using our current example, DAT=job_name (that is, the same name as the .dat file that contains the $HESS data and that was created in the gamess-scratch directory) and IN=irc_job (that is, the name of the new input file that you have prepared and want to copy the $HESS data into). &lt;br /&gt;
&lt;br /&gt;
To run it on a gpc node without submitting it to the job queue, type:&lt;br /&gt;
&lt;br /&gt;
 DAT=job_name IN=irc_job ./irc.sh&lt;br /&gt;
&lt;br /&gt;
To run it in the queue, type:&lt;br /&gt;
&lt;br /&gt;
 qsub irc.sh -v DAT=job_name,IN=irc_job &lt;br /&gt;
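&lt;br /&gt;
To see what the awk one-liner in irc.sh actually extracts, here is a self-contained demonstration on a miniature, made-up .dat file with two $HESS blocks; only the last block ends up in the input file:&lt;br /&gt;

```shell
# Miniature demonstration of the irc.sh extraction.  The .dat contents
# below are made up; a real file holds lines of numeric hessian data.
work=$(mktemp -d) && cd "$work"
cat > job_name.dat <<'EOF'
 $HESS
 first (stale) hessian block
 $END
 $HESS
 second (final) hessian block
 $END
EOF
echo ' stand-in IRC input deck' > irc_job.inp

# Each " $HESS" line resets the buffer, so only the LAST block survives
# to the END rule and gets appended to the input file.
awk '/\$HESS/{arr="";f=1} f {arr=(arr)?arr ORS $0:$0} /\$END/{f=0} END {print arr}' job_name.dat >> irc_job.inp
```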
&lt;br /&gt;
-- [[User:M.Zimmer-De Iuliis|mzd]]  21 October 2010&lt;br /&gt;
&lt;br /&gt;
===Vienna Ab-initio Simulation Package (VASP)===&lt;br /&gt;
Please refer to the [[VASP]] page.&lt;br /&gt;
&lt;br /&gt;
User supplied content below.&lt;br /&gt;
&lt;br /&gt;
====Tips from the Polanyi Lab====&lt;br /&gt;
Using VASP on SciNet&lt;br /&gt;
&lt;br /&gt;
Logon using SSH&lt;br /&gt;
login.scinet.utoronto.ca&lt;br /&gt;
&lt;br /&gt;
then ssh to the TCS cluster&lt;br /&gt;
ssh tcs01&lt;br /&gt;
&lt;br /&gt;
change directory to &lt;br /&gt;
cd /scratch/imcnab/test/Si111 - or whatever other directory is convenient.&lt;br /&gt;
&lt;br /&gt;
VASP is contained in the directory imcnab/bin&lt;br /&gt;
&lt;br /&gt;
To submit a job, first edit (at least) the POSCAR file and other VASP&lt;br /&gt;
input files as necessary.&lt;br /&gt;
&lt;br /&gt;
=====Input Files=====&lt;br /&gt;
The minimum set of input files is:&lt;br /&gt;
&lt;br /&gt;
'''vasp.script''' - script file telling TCS to run a VASP job - must be edited to run in current working directory.&lt;br /&gt;
&lt;br /&gt;
'''POSCAR''' - specifies supercell geometry and &amp;quot;ionic&amp;quot; positions (i.e. atomic centres) and whether relaxation is allowed. Ionic positions may be given in cartesian coordinates (x,y,z in A) or &amp;quot;absolute&amp;quot; coordinates, which are fractions of the unit cell vectors. CONTCAR is always in absolute coords, so after the first run of any job, you'll find yourself running in absolute coords. VMD can be used to change these back to cartesian coordinates.&lt;br /&gt;
&lt;br /&gt;
'''INCAR''' - specifies parameters to run the job. INCAR is free format - can put input commands in ANY order.&lt;br /&gt;
&lt;br /&gt;
'''POTCAR''' - specifies the potentials to use for each atomic type. Must be in the same order as the atoms are first met in POSCAR&lt;br /&gt;
&lt;br /&gt;
'''KPOINTS''' - specifies the number and position of K-points to use in the calculation.&lt;br /&gt;
&lt;br /&gt;
Any change of name or directory needs to be edited into the job script. The job script name is &amp;quot;vasp.script&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
VASP attempts to read initial wavefunctions from WAVECAR, so if a job is run in steps, leaving the WAVECAR file on the working directory is an efficient way to start the next stage of the calculation&lt;br /&gt;
&lt;br /&gt;
VASP also writes CONTCAR which is of the same format as POSCAR, and can simply be renamed if it is to be used as the starting point for a new job.&lt;br /&gt;
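&lt;br /&gt;
The two restart hints above amount to a couple of shell commands. A minimal sketch, with stand-in file contents (on the cluster this would happen in your real working directory):&lt;br /&gt;

```shell
# Sketch of preparing the next stage of a VASP run from the previous
# stage's output.  File contents here are stand-ins for illustration.
work=$(mktemp -d) && cd "$work"
echo "relaxed ionic positions" > CONTCAR    # written by the previous run
echo "converged wavefunctions" > WAVECAR    # also left by the previous run

cp CONTCAR POSCAR   # the relaxed geometry becomes the new starting point
# WAVECAR is simply left in place; VASP reads it at startup
```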
&lt;br /&gt;
&lt;br /&gt;
Submit the job to load-leveller with the command llsubmit ./vasp.script from the correct working directory.&lt;br /&gt;
&lt;br /&gt;
can check the status of a job with llq&lt;br /&gt;
&lt;br /&gt;
can cancel a job using llcancel tcs-fXXnYY.$PID where tcs number etc is shown by llq&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== GENERAL NOTES =====&lt;br /&gt;
&lt;br /&gt;
MUCH faster to use ISPIN=1, no-spin (corresponds to RHF, rather than ISPIN=2, which corresponds to UHF). So far, I've not found a system where the atom positions differ, or where the calculated electronic energy differs by more than 1E-4, which is the convergence criterion set.&lt;br /&gt;
&lt;br /&gt;
MUCH faster to use real space LREAL = A, NSIM=4. &lt;br /&gt;
&lt;br /&gt;
So, ''always'' optimize in real space first, then re-optimize in reciprocal space. This does NOT guarantee a one-step optimization in reciprocal space. You may still need to progressively&lt;br /&gt;
relax a large system.&lt;br /&gt;
&lt;br /&gt;
'''Relaxing a large system.'''&lt;br /&gt;
If you attempt to relax a large system in one step, it will usually fail.&lt;br /&gt;
&lt;br /&gt;
The starting geometry is usually an unrelaxed molecule above an unrelaxed surface.&lt;br /&gt;
The bottom plane of the surface will NEVER be relaxed, because this corresponds to the fixed boundary condition of REALITY. &lt;br /&gt;
&lt;br /&gt;
First, relax the molecule alone (assuming you have already found a good starting position from single point calculations); place the molecule closer to the surface than you think it should be (say 0.9 VdW radii away).&lt;br /&gt;
&lt;br /&gt;
Then ALSO allow the top layer of the surface to relax.&lt;br /&gt;
Then ALSO allow the second top layer of the surface to relax... etc... etc.&lt;br /&gt;
&lt;br /&gt;
If this DOESN'T WORK: Then relax X,Y and Z separately in iterations.&lt;br /&gt;
Example. For the following problem, representing layers of the crystal going DOWN from the top (Z pointing to the top of the screen)&lt;br /&gt;
&lt;br /&gt;
Molecule&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we can try the following relaxation schemes:&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Successive relaxation, Layer by Layer:&amp;lt;br /&amp;gt;&lt;br /&gt;
(1) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(2)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(3)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer.&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
etc. etc... if this works then you're fine. However, it can happen that even by Layer 2, you're running into real problems, and the ionic relaxation NEVER converges. In which case, I have found the following scheme (and variations thereof) useful:&lt;br /&gt;
&lt;br /&gt;
(1)&amp;lt;br /&amp;gt; &lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(2) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(3) &amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
IF (3) DOESN'T converge THEN TRY&lt;br /&gt;
&lt;br /&gt;
(2')&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule Z   Relax, XY FIXED&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  Z   Relax, XY FIXED&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- you are allowing the top layers to move only UP or DOWN, while allowing the intermediate&lt;br /&gt;
layer 2 to fully relax (actually, there is no way of telling VASP to move ALL atoms by the SAME deltaZ, but that appears to be the effect).&lt;br /&gt;
Followed by&lt;br /&gt;
&lt;br /&gt;
(2&amp;quot;)&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If (2&amp;quot;) doesn't work, you need to go back to the output of (2') and vary the cycle - perhaps something like:&lt;br /&gt;
(2&amp;quot;')&amp;lt;br /&amp;gt;&lt;br /&gt;
Molecule XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 1  XYZ Relax&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 2  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 3  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 4  XYZ fixed&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 5 - fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
Layer 6 - Valence H's, fixed layer&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then try (2&amp;quot;) again.&lt;br /&gt;
&lt;br /&gt;
Repeat as necessary. This scheme does appear to work quite well for big unit cells. It can be very difficult to relax as many layers as necessary in a big unit cell.&lt;br /&gt;
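&lt;br /&gt;
The successive-relaxation schemes above amount to a loop: carry the relaxed geometry forward, free one more layer, resubmit. A hedged outline in shell (the selective-dynamics edit is left as a comment, and llsubmit is only echoed, since this is a workflow sketch rather than a real submission):&lt;br /&gt;

```shell
# Illustrative outline of the successive-relaxation cycle.  File
# contents are stand-ins; nothing here actually runs VASP.
work=$(mktemp -d) && cd "$work"
echo "initial geometry" > POSCAR

for stage in 1 2 3; do
    # After the first stage, carry the relaxed geometry forward
    [ -f CONTCAR ] && cp CONTCAR POSCAR
    # (here: flip the T/F selective-dynamics flags in POSCAR so that
    #  one more layer is free to relax than in the previous stage)
    echo "llsubmit ./vasp.script   # relaxation stage $stage"
    echo "geometry after stage $stage" > CONTCAR  # stand-in VASP output
done
```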
&lt;br /&gt;
Experience on the One Per Corner Hole problem shows that it may be necessary to have a large number of UNRELAXED (i.e. BULK silicon) layers underneath the relaxed layers in order to get physically meaningful answers. This is because silicon is so elastic.&lt;br /&gt;
&lt;br /&gt;
===== Problems and solutions: =====&lt;br /&gt;
&lt;br /&gt;
If getting ZBRENT errors, try changing ALGO. Usually use ALGO = Fast, change to ALGO = Normal. With ALGO = Normal, NFREE now DOES correspond to degrees of freedom (maximum suggested setting is 20). Haven't found this terribly helpful.&lt;br /&gt;
&lt;br /&gt;
Many calculations seem to fail after 20 or 30 ionic steps. I suspect a memory leak.&lt;br /&gt;
&lt;br /&gt;
Sometimes the calculation appears to lose WAVECAR... this is not a disaster, just means a slight increase in start time as the first wavefunction is calculated.&lt;br /&gt;
&lt;br /&gt;
If calculation does not finish nicely, can force a WAVECAR generation by doing a purely electronic calculation (these are pretty fast).&lt;br /&gt;
&lt;br /&gt;
VASP is VERY slow at relaxing molecules at surfaces. This is because it doesn't know a molecule is a connected entity. It treats every atom independently. &lt;br /&gt;
&lt;br /&gt;
THEREFORE, MUCH MUCH faster to try molecular positions by hand first. &lt;br /&gt;
Do some sample calculations at a few geometries to find a good starting point.&lt;br /&gt;
&lt;br /&gt;
ALSO, once you think you know where the molecule is to be placed, put it too close to the surface, and let it relax outwards... the forces close to the surface are repulsive, and much steeper, so relaxation is FASTER in this direction.&lt;br /&gt;
&lt;br /&gt;
=='''Climate Modelling'''==&lt;br /&gt;
&lt;br /&gt;
The Community Earth System Model (CESM) is a fully-coupled, global climate model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate states.&lt;br /&gt;
&lt;br /&gt;
Development of a comprehensive CESM that accurately represents the principal components of the climate system and their couplings requires both wide intellectual participation and computing capabilities beyond those available to most U.S. institutions. The CESM, therefore, must include an improved framework for coupling existing and future component models developed at multiple institutions, to permit rapid exploration of alternate formulations. This framework must be amenable to components of varying complexity and at varying resolutions, in accordance with a balance of scientific needs and resource demands. In particular, the CESM must accommodate an active program of simulations and evaluations, using an evolving model to address scientific issues and problems of national and international policy interest.&lt;br /&gt;
&lt;br /&gt;
User guides and information on each version of the model can be found at the following links:&lt;br /&gt;
&lt;br /&gt;
CCSM3: http://www.cesm.ucar.edu/models/ccsm3.0/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/&lt;br /&gt;
&lt;br /&gt;
===[[Installing CCSM4]]===&lt;br /&gt;
&lt;br /&gt;
===[[Running CCSM4]]===&lt;br /&gt;
&lt;br /&gt;
===[[Post Processing CCSM Output]]===&lt;br /&gt;
&lt;br /&gt;
===[[CCSM4/CESM1 TCS Simulation List]]===&lt;br /&gt;
&lt;br /&gt;
==Medicine/Bio==&lt;br /&gt;
&lt;br /&gt;
==High Energy Physics==&lt;br /&gt;
&lt;br /&gt;
==Structural Biology==&lt;br /&gt;
Molecular simulation of proteins, lipids, carbohydrates, and other biologically relevant molecules.&lt;br /&gt;
===Molecular Dynamics (MD) simulation===&lt;br /&gt;
====GROMACS====&lt;br /&gt;
Please refer to the [[gromacs|GROMACS]] page&lt;br /&gt;
====AMBER====&lt;br /&gt;
Please refer to the [[amber|AMBER]] page&lt;br /&gt;
====NAMD====&lt;br /&gt;
NAMD is one of the better-scaling MD packages out there. With sufficiently large systems, it is able to scale to hundreds or thousands of cores on SciNet. Below are details for compiling and running NAMD on SciNet.&lt;br /&gt;
&lt;br /&gt;
More information regarding performance and different compile options coming soon...&lt;br /&gt;
&lt;br /&gt;
=====Compiling NAMD for GPC=====&lt;br /&gt;
Ensure the proper compiler/mpi modules are loaded.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
module load intel&lt;br /&gt;
module load openmpi/1.3.3-intel-v11.0-ofed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Compile Charm++ and NAMD'''&lt;br /&gt;
&amp;lt;source lang=&amp;quot;sh&amp;quot;&amp;gt;&lt;br /&gt;
#Unpack source files and get required support libraries&lt;br /&gt;
tar -xzf NAMD_2.7b1_Source.tar.gz&lt;br /&gt;
cd NAMD_2.7b1_Source&lt;br /&gt;
tar -xf charm-6.1.tar&lt;br /&gt;
wget http://www.ks.uiuc.edu/Research/namd/libraries/fftw-linux-x86_64.tar.gz&lt;br /&gt;
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl-linux-x86_64.tar.gz&lt;br /&gt;
tar -xzf fftw-linux-x86_64.tar.gz; mv linux-x86_64 fftw&lt;br /&gt;
tar -xzf tcl-linux-x86_64.tar.gz; mv linux-x86_64 tcl&lt;br /&gt;
#Compile Charm++&lt;br /&gt;
cd charm-6.1&lt;br /&gt;
./build charm++ mpi-linux-x86_64 icc --basedir /scinet/gpc/mpi/openmpi/1.3.3-intel-v11.0-ofed/ --no-shared -O -DCMK_OPTIMIZE=1&lt;br /&gt;
cd ..&lt;br /&gt;
#Compile NAMD. &lt;br /&gt;
#Edit arch/Linux-x86_64-icc.arch and add &amp;quot;-lmpi&amp;quot; to the end of the CXXOPTS and COPTS line.&lt;br /&gt;
#Make a builds directory if you want different versions of NAMD compiled at the same time.&lt;br /&gt;
mkdir builds&lt;br /&gt;
./config builds/Linux-x86_64-icc --charm-arch mpi-linux-x86_64-icc&lt;br /&gt;
cd builds/Linux-x86_64-icc/&lt;br /&gt;
make -j4 namd2 # Adjust value of j as desired to specify number of simultaneous make targets. &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
--[[User:Cmadill|Cmadill]] 16:18, 27 August 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
=====Running Fortran=====&lt;br /&gt;
The development nodes carry an old version of gcc, and its associated libraries are not present on the compute nodes. Ensure that the line:&lt;br /&gt;
&lt;br /&gt;
module load gcc&lt;br /&gt;
&lt;br /&gt;
is in your .bashrc file.&lt;br /&gt;
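&lt;br /&gt;
As a convenience, the check-and-append can be scripted. This is a minimal sketch, assuming a bash login shell that reads ~/.bashrc; the RC variable is introduced here only so the target file can be overridden:&lt;br /&gt;

```shell
# Minimal sketch: make "module load gcc" permanent for login shells.
# Assumption: bash reads ~/.bashrc; RC is a hypothetical override for the target file.
RC="${RC:-$HOME/.bashrc}"
# Append the line only if an identical line is not already present (idempotent).
grep -qxF 'module load gcc' "$RC" 2>/dev/null || echo 'module load gcc' >> "$RC"
```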
&lt;br /&gt;
====LAMMPS====&lt;br /&gt;
[[Image:StrongScalingLAMMPS.png|thumb|320px|right|Strong scaling test on GPC with OpenMPI and IntelMPI on Ethernet and InfiniBand]]&lt;br /&gt;
[[Image:WeakScalingLAMMPS.png|thumb|320px|right|Weak scaling test on GPC with OpenMPI and IntelMPI on Ethernet and InfiniBand]]&lt;br /&gt;
LAMMPS is a parallel MD code that can be found [http://lammps.sandia.gov/ here].&lt;br /&gt;
&lt;br /&gt;
'''Scaling Tests on GPC'''&lt;br /&gt;
&lt;br /&gt;
Results from strong scaling tests for LAMMPS using EAM potentials on GPC are shown in the graph on the right. The test simulation ran 500 timesteps with 4,000,000 atoms.&lt;br /&gt;
&lt;br /&gt;
Results from weak scaling tests for LAMMPS using EAM potentials on GPC are shown in the graph on the right. The test simulation ran 500 timesteps with 32,000 atoms per processor.&lt;br /&gt;
&lt;br /&gt;
OpenMPI version used: openmpi/1.4.1-intel-v11.0-ofed&lt;br /&gt;
&lt;br /&gt;
IntelMPI version used: intelmpi/impi-4.0.0.013&lt;br /&gt;
&lt;br /&gt;
LAMMPS version used: 15 Jan 2010&lt;br /&gt;
&lt;br /&gt;
'''Summary of Scaling Tests'''&lt;br /&gt;
&lt;br /&gt;
Results show good scaling for both OpenMPI and IntelMPI on Ethernet up to 16 processors, after which performance begins to suffer.  On InfiniBand, excellent scaling is maintained up to 512 processors.&lt;br /&gt;
&lt;br /&gt;
IntelMPI shows slightly better performance than OpenMPI when running over InfiniBand.&lt;br /&gt;
&lt;br /&gt;
--[[User:jchu|jchu]] 14:08 Feb 2, 2010&lt;br /&gt;
&lt;br /&gt;
===Monte Carlo (MC) simulation===&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2327</id>
		<title>Running CCSM4</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2327"/>
		<updated>2010-12-13T21:35:02Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''It is important to point out that all updates to the model system will only occur with CESM1.0 updates, not with CCSM4.0. It is also important to note that CCSM4 is a subset of CESM1. Although CESM1 supersedes CCSM4, users can run all CCSM4 experiments from the CESM1 code base.&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The scientifically validated CESM1 runs are found in the list below (including a complete list of the model resolutions):&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/cesm1_current/scripts/create_newcase --list&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
                             CESM1.0 README&lt;br /&gt;
  &lt;br /&gt;
 For both a quick start as well as a detailed summary of creating and running &lt;br /&gt;
 a CESM model case, see the CESM1.0 User's Guide at&lt;br /&gt;
 http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
  &lt;br /&gt;
 IMPORTANT INFORMATION ABOUT SCIENTIFIC VALIDATION&lt;br /&gt;
  &lt;br /&gt;
   CESM1.0 has the flexibility to configure cases with many different &lt;br /&gt;
   combinations of component models, grids, and model settings, but this &lt;br /&gt;
   version of CESM has only been validated scientifically for the following &lt;br /&gt;
   fully active configurations:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_RAMPCO2_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CAM5&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_RAMPCO2_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BDRD&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BDRD&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN_CHEM &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN_CHEM&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_WACCM_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_WACCM_CN&lt;br /&gt;
   &lt;br /&gt;
      T31_gx3v7       B_1850_CN&lt;br /&gt;
 &lt;br /&gt;
   If the user is interested in running a &amp;quot;stand-alone&amp;quot; component configuration, &lt;br /&gt;
   the following model configurations have been validated scientifically and &lt;br /&gt;
   have associated diagnostic output as part of the release:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_1.9x2.5    F_2000_WACCM&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CAM5&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CN&lt;br /&gt;
      0.9x1.25_0.9x1.25  F_AMIP_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  I_2000&lt;br /&gt;
      0.9x1.25_gx1v6  I_2000_CN&lt;br /&gt;
 &lt;br /&gt;
      T62_gx1v6       C_NORMAL_YEAR&lt;br /&gt;
 &lt;br /&gt;
   For more information regarding alternative component configurations, &lt;br /&gt;
   please refer to the individual component web pages at&lt;br /&gt;
   http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
  CESM1 RESOLUTIONS:  name (shortname) &lt;br /&gt;
    pt1_pt1 (pt1)  &lt;br /&gt;
    0.23x0.31_0.23x0.31 (f02_f02)  &lt;br /&gt;
    0.23x0.31_gx1v6 (f02_g16)  &lt;br /&gt;
    0.23x0.31_tx0.1v2 (f02_t12)  &lt;br /&gt;
    0.47x0.63_0.47x0.63 (f05_f05)  &lt;br /&gt;
    0.47x0.63_gx1v6 (f05_g16)  &lt;br /&gt;
    0.47x0.63_tx0.1v2 (f05_t12)  &lt;br /&gt;
    0.9x1.25_0.9x1.25 (f09_f09)  &lt;br /&gt;
    0.9x1.25_gx1v6 (f09_g16)  &lt;br /&gt;
    1.9x2.5_1.9x2.5 (f19_f19)  &lt;br /&gt;
    1.9x2.5_gx1v6 (f19_g16)  &lt;br /&gt;
    4x5_4x5 (f45_f45)  &lt;br /&gt;
    4x5_gx3v7 (f45_g37)  &lt;br /&gt;
    T62_gx3v7 (T62_g37)  &lt;br /&gt;
    T62_tx0.1v2 (T62_t12)  &lt;br /&gt;
    T62_gx1v6 (T62_g16)  &lt;br /&gt;
    T31_T31 (T31_T31)  &lt;br /&gt;
    T31_gx3v7 (T31_g37)  &lt;br /&gt;
    T42_T42 (T42_T42)  &lt;br /&gt;
    10x15_10x15 (f10_f10)  &lt;br /&gt;
    ne30np4_1.9x2.5_gx1v6 (ne30_f19_g16)  &lt;br /&gt;
    ne240np4_0.23x0.31_gx1v6 (ne240_f02_g16)  &lt;br /&gt;
    T85_T85 (T85_T85)  &lt;br /&gt;
  &lt;br /&gt;
  COMPSETS:  name (shortname): description &lt;br /&gt;
    A_PRESENT_DAY (A) &lt;br /&gt;
         Description: All data model  &lt;br /&gt;
    A_GLC (AG) &lt;br /&gt;
         Description: All data model plus glc (glacier model)  &lt;br /&gt;
    B_2000 (B) &lt;br /&gt;
         Description: All active components, present day  &lt;br /&gt;
    B_2000_CN (BCN) &lt;br /&gt;
         Description: all active components, present day, with CN (Carbon Nitrogen) in clm  &lt;br /&gt;
    B_1850_CAM5 (B1850C5) &lt;br /&gt;
         Description: All active components, pre-industrial, cam5 physics  &lt;br /&gt;
    B_1850 (B1850) &lt;br /&gt;
         Description: All active components, pre-industrial  &lt;br /&gt;
    B_1850_CN (B1850CN) &lt;br /&gt;
         Description: all active components, pre-industrial, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_2000_CN_CHEM (B2000CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_CN_CHEM (B1850CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_RAMPCO2_CN (B1850RMCN) &lt;br /&gt;
         Description: All active components, pre-industrial with co2 ramp, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000 (B20TR) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient  &lt;br /&gt;
    B_1850-2000_CN (B20TRCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000_CN_CHEM (B20TRCNCHM) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850-2000_CAM5 (B20TRC5) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, cam5 physics  &lt;br /&gt;
    B_2000_GLC (BG) &lt;br /&gt;
         Description: all active components, with active glc  &lt;br /&gt;
    B_2000_TROP_MOZART (BMOZ) &lt;br /&gt;
         Description: All active components, with trop_mozart  &lt;br /&gt;
    B_1850_WACCM (B1850W) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm  &lt;br /&gt;
    B_1850_WACCM_CN (B1850WCN) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm and CN  &lt;br /&gt;
    B_1850-2000_WACCM_CN (B20TRWCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, WACCM with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1955-2005_WACCM_CN (B55TRWCN) &lt;br /&gt;
         Description: All active components, 1955 to 2000 transient, WACCM with  daily solar data and SPEs, CLM with CN  &lt;br /&gt;
    B_1850_BGC-BPRP (B1850BPRP) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850_BGC-BDRD (B1850BDRD) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    B_1850-2000_BGC-BPRP (B20TRBPRP) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850-2000_BGC-BDRD (B20TRBDRD) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    C_NORMAL_YEAR_ECOSYS (CECO) &lt;br /&gt;
         Description: Active ocean model with ecosys and with COREv2 normal year forcing  &lt;br /&gt;
    C_NORMAL_YEAR (C) &lt;br /&gt;
         Description: Active ocean model with COREv2 normal year forcing  &lt;br /&gt;
    D_NORMAL_YEAR (D) &lt;br /&gt;
         Description: Active ice model with COREv2 normal year forcing  &lt;br /&gt;
    E_2000 (E) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean, present day  &lt;br /&gt;
    E_2000_GLC (EG) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean and glc, present day  &lt;br /&gt;
    E_1850_CN (E1850CN) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, with CN  &lt;br /&gt;
    E_1850_CAM5 (E1850C5) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, cam5 physics   &lt;br /&gt;
    F_AMIP_CN (FAMIPCN) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol - valid only for 1 degree cam/clm/pres-cice  &lt;br /&gt;
    F_AMIP_CAM5 (FAMIPC5) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol with cam5  &lt;br /&gt;
    F_1850 (F1850) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_1850_CAM5 (F1850C5) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn, cam5 physics  &lt;br /&gt;
    F_2000 (F) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice  &lt;br /&gt;
    F_2000_CAM5 (FC5) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, cam5 physics  &lt;br /&gt;
    F_2000_CN (FCN) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice with CN  &lt;br /&gt;
    F_1850-2000_CN (F20TRCN) &lt;br /&gt;
         Description: 20th Century transient stand-alone cam default, prescribed ocn/ice, with CN  &lt;br /&gt;
    F_2000_GLC (FG) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, glc (glacier model)  &lt;br /&gt;
    F_1850_CN_CHEM (F1850CNCHM) &lt;br /&gt;
         Description: stand-alone cam/clm, pre-industrial, with CN in CLM, super_fast_llnl chem in cam  &lt;br /&gt;
    F_1850_WACCM (F1850W) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_2000_WACCM (FW) &lt;br /&gt;
         Description: present-day cam/clm with prescribed ice/ocn  &lt;br /&gt;
    G_1850_ECOSYS (G1850ECO) &lt;br /&gt;
         Description: 1850 control for pop-ecosystem/cice/datm7/dlnd-rx1  &lt;br /&gt;
    G_NORMAL_YEAR (G) &lt;br /&gt;
         Description: Coupled ocean ice with COREv2 normal year forcing  &lt;br /&gt;
    H_PRESENT_DAY (H) &lt;br /&gt;
         Description: Coupled ocean ice slnd  &lt;br /&gt;
    I_2000 (I) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850 (I1850) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and Satellite phenology (SP), CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_2000_GLC (IG) &lt;br /&gt;
         Description: Active glacier model and active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1948-2004 (I4804) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000 (I8520) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and transient Satellite phenology (SP), and Aerosol deposition from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    I_2000_CN (ICN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850_CN (I1850CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_1948-2004_CN (I4804CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000_CN (I8520CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and transient CN, Aerosol dep from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    S_PRESENT_DAY (S) &lt;br /&gt;
         Description: All stub models plus xatm  &lt;br /&gt;
    X_PRESENT_DAY (X) &lt;br /&gt;
         Description: All dead model  &lt;br /&gt;
    XG_PRESENT_DAY (XG) &lt;br /&gt;
         Description: All dead model and cism  &lt;br /&gt;
  &lt;br /&gt;
  MACHINES:  name (description)&lt;br /&gt;
    tcs (U of T IBM p6, os is AIX, 32 pes/node, batch system is Moab/LoadLeveler) &lt;br /&gt;
    gpc (U of T iDataPlex intel cluster, os is linux, 8 pes/node, batch system is Moab/Torque) &lt;br /&gt;
    bluefire (NCAR IBM p6, os is AIX, 32 pes/node, batch system is LSF) &lt;br /&gt;
    brutus_po (Brutus Linux Cluster ETH (pgi/9.0-1 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_pm (Brutus Linux Cluster ETH (pgi/9.0-1 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_io (Brutus Linux Cluster ETH (intel/10.1.018 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_im (Brutus Linux Cluster ETH (intel/10.1.018 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    edinburgh_lahey (NCAR CGD Linux Cluster (lahey), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_pgi (NCAR CGD Linux Cluster (pgi), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_intel (NCAR CGD Linux Cluster (intel), 8 pes/node, batch system is PBS) &lt;br /&gt;
    franklin (NERSC XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    hadley (UCB Linux Cluster, os is Linux (ia64), batch system is PBS) &lt;br /&gt;
    hopper (NERSC XT5, os is CNL, 8 pes/node, batch system is PBS) &lt;br /&gt;
    intrepid (ANL IBM BG/P, os is BGP, 4 pes/node, batch system is cobalt) &lt;br /&gt;
    jaguar (ORNL XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    jaguarpf (ORNL XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    kraken (NICS/UT/teragrid XT5, os is CNL, 12 pes/node) &lt;br /&gt;
    lynx_pgi (NCAR XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    midnight (ARSC Sun Cluster, os is Linux (pgi), batch system is PBS) &lt;br /&gt;
    pleiades (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 3.0 GHz Harpertown processors, 8 pes/node and 8 GB of memory, batch system is PBS) &lt;br /&gt;
    pleiades_wes (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.93 GHz Westmere processors, 12 pes/node and 24 GB of memory, batch system is PBS) &lt;br /&gt;
    prototype_atlas (LLNL Linux Cluster, Linux (pgi), 8 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_hera (LLNL Linux Cluster, Linux (pgi), 16 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_columbia (NASA Ames Linux Cluster, Linux (ia64), 2 pes/node, batch system is PBS) &lt;br /&gt;
    prototype_frost (NCAR IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_nyblue (SUNY IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_ranger (TACC Linux Cluster, Linux (pgi), 1 pes/node, batch system is SGE) &lt;br /&gt;
    prototype_ubgl (LLNL IBM BG/L, os is BGL, 2 pes/node, batch system is Moab) &lt;br /&gt;
    generic_ibm (generic ibm power system, os is AIX, batch system is LoadLeveler, user-defined) &lt;br /&gt;
    generic_xt (generic CRAY XT, os is CNL, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pgi (generic linux (pgi), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_lahey (generic linux (lahey), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_intel (generic linux (intel), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pathscale (generic linux (pathscale), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Initializing the Model Setup:''&lt;br /&gt;
&lt;br /&gt;
The initial setup of the model on TCS is simplified by the short script below:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/$USER&lt;br /&gt;
 export MACH=tcs&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f19_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES -scratchroot $SCRATCH &lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' CCSMROOT should point to the model code version in ''/project/ccsm'' with the &amp;quot;_current&amp;quot; suffix. The same applies to CESM1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This script creates an 1850 control run with all components of the model fully active and carbon-nitrogen cycling in the land component. The resolution is 1.9x2.5 in the atmosphere and nominally 1 degree (gx1v6) in the ocean. The case is created in the ~/runs directory:&lt;br /&gt;
&lt;br /&gt;
For valid component sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/a2967.html &lt;br /&gt;
For information on resolution sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/x42.html#ccsm_grids &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Load Balancing:''&lt;br /&gt;
&lt;br /&gt;
For the NCAR bluefire load balancing table for a select set of simulations see:&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/timing/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/timing/&lt;br /&gt;
&lt;br /&gt;
 cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
&lt;br /&gt;
Then edit ''env_mach_pes.xml'':&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ATM&amp;quot;   value=&amp;quot;448&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ATM&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_LND&amp;quot;   value=&amp;quot;320&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_LND&amp;quot;   value=&amp;quot;160&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ICE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ICE&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_OCN&amp;quot;   value=&amp;quot;256&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_OCN&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_CPL&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_CPL&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_GLC&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once this file has been modified, you can configure the case:&lt;br /&gt;
&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 &lt;br /&gt;
You will notice that configure changes the file that you just edited, where you can see the total number of processors used by the simulation (704, or 11 nodes, in this case):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;TOTALPES&amp;quot;   value=&amp;quot;704&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_LEVEL&amp;quot;   value=&amp;quot;1r&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_PCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_TCOST&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_ESTCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
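&lt;br /&gt;
The node count quoted above follows from TOTALPES and the 64 processors per TCS node; as a quick sketch using the values shown:&lt;br /&gt;

```shell
# Sketch: derive the TCS node count from the configure output above.
# TCS packs 64 tasks per node (MAX_TASKS_PER_NODE / PES_PER_NODE above).
TOTALPES=704
PES_PER_NODE=64
NODES=$(( (TOTALPES + PES_PER_NODE - 1) / PES_PER_NODE ))  # round up
echo "$NODES"
```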
&lt;br /&gt;
'''Note:''' Rather than modifying the load balancing manually, NCAR provides a script in your $CASE directory that lets you change the individual component CPU allocations without editing the env_mach_pes.xml file directly:&lt;br /&gt;
&lt;br /&gt;
As a different configuration, we might want 8 CPUs dedicated to running the OCN component continually, with the remaining 24 CPUs running ATM across all 24 and LND, ICE and CPL on 8 each. To set this up, enter:&lt;br /&gt;
&lt;br /&gt;
 configure -cleanmach&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 24&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 8 &lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_CPL -val 16&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_OCN -val 24&lt;br /&gt;
 configure -case&lt;br /&gt;
&lt;br /&gt;
Then build and resubmit.&lt;br /&gt;
&lt;br /&gt;
The task geometry used by LoadLeveler on TCS is located in the file ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Ensure that the proper modules are loaded:&lt;br /&gt;
&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) ncl/5.1.1               3) netcdf/4.0.1_nc3        5) xlf/13.1&lt;br /&gt;
  2) nco/3.9.6               4) parallel-netcdf/1.1.1   6) vacpp/11.1&lt;br /&gt;
&lt;br /&gt;
Now compile the model with:&lt;br /&gt;
&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
One of the pre-processing steps in this build sequence is to fetch input data sets (initial and boundary conditions) from the NCAR SVN server. If there is a large amount of initial-condition data to transfer from the NCAR repository, you may want to do this yourself on the ''datamover1'' node before you build, since ''datamover1'' has a high-bandwidth connection to the outside.&lt;br /&gt;
Note: We already have most of the input data on /project/ccsm, so this step will not be required for the more common configurations.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; ssh datamover1&lt;br /&gt;
 Last login: Wed Jul  7 16:38:14 2010 from tcs-f11n06-gpfs&lt;br /&gt;
 user@gpc-logindm01:~&amp;gt;cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;./check_input_data -inputdata /project/ccsm/inputdata -export&lt;br /&gt;
 Input Data List Files Found:&lt;br /&gt;
 ./Buildconf/cam.input_data_list&lt;br /&gt;
 ./Buildconf/clm.input_data_list&lt;br /&gt;
 ./Buildconf/cice.input_data_list&lt;br /&gt;
 ./Buildconf/pop2.input_data_list&lt;br /&gt;
 ./Buildconf/cpl.input_data_list&lt;br /&gt;
 export https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc /project/ccsm/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc ..... success&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Setting the Simulation Length:''&lt;br /&gt;
&lt;br /&gt;
The length of time for which you would like to run the model can be set by editing ''env_run.xml'' at any time in the setup sequence:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--&amp;quot;if RESUBMIT is greater than 0, then case will automatically resubmit (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;RESUBMIT&amp;quot;   value=&amp;quot;10&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjunction with STOP_N and STOP_DATE, valid values: none,never,nsteps,nstep,nseconds,nsecond,nminutes,nminute,nhours,nhour,ndays,nday,nmonths,nmonth,nyears,nyear,date,ifdays0,end (char) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_OPTION&amp;quot;   value=&amp;quot;nmonths&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjunction with STOP_OPTION and STOP_DATE (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_N&amp;quot;   value=&amp;quot;12&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
These settings tell the model to stop and checkpoint after each model year (12 months) and to resubmit itself automatically 10 times.&lt;br /&gt;
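&lt;br /&gt;
The arithmetic behind these settings can be sketched as follows, assuming the job runs once and then resubmits itself RESUBMIT more times:&lt;br /&gt;

```shell
# Sketch: total simulated length implied by the env_run.xml values above,
# assuming the first submission plus RESUBMIT automatic resubmissions.
STOP_N=12        # months per job segment (STOP_OPTION=nmonths)
RESUBMIT=10
SEGMENTS=$(( RESUBMIT + 1 ))
echo "$(( SEGMENTS * STOP_N / 12 )) years in $SEGMENTS segments"
```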
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on the Distributed System (TCS):''&lt;br /&gt;
&lt;br /&gt;
The model is now ready to be submitted to the TCS batch queue&lt;br /&gt;
&lt;br /&gt;
 llsubmit ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Once the model has run through a checkpoint, timing information on the simulation will be found in:&lt;br /&gt;
 ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16/timing&lt;br /&gt;
&lt;br /&gt;
Standard output from the model can be followed during runtime by going to:&lt;br /&gt;
 /scratch/guido/ccsm4_comp-B_1850_CN_res-f19_g16/run&lt;br /&gt;
and running&lt;br /&gt;
 tail -f &amp;lt;component_log_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The model will archive the NetCDF output in:&lt;br /&gt;
 /scratch/$USER/archive&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Cloning Simulations''&lt;br /&gt;
&lt;br /&gt;
A useful command for setting up multiple runs quickly is ''create_clone'', which clones an existing case so that there is no need to run the setup script above every time:&lt;br /&gt;
 cd ~/runs&lt;br /&gt;
 /project/ccsm/ccsm4_0_current/scripts/create_clone -clone ccsm4_comp-B_1850_CN_res-f09_g16 -case ccsm4_comp-B_1850_CN_res-f09_g16_clone -v&lt;br /&gt;
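&lt;br /&gt;
For an ensemble of runs the cloning can be looped. This is a sketch with hypothetical ensemble case names; the commands are echoed rather than executed, since the paths above only exist on SciNet:&lt;br /&gt;

```shell
# Sketch: generate create_clone commands for several clones of one configured case.
# The ensemble suffixes (_ens01, ...) are hypothetical illustration names.
SCRIPTS=/project/ccsm/ccsm4_0_current/scripts
BASE=$HOME/runs/ccsm4_comp-B_1850_CN_res-f09_g16
for i in 01 02 03; do
  echo "$SCRIPTS/create_clone -clone $BASE -case ${BASE}_ens$i -v"
done
```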
&lt;br /&gt;
To change the load balancing (env_mach_pes.xml) or other parameters in an existing simulation setup, do a clean build to make sure the model is rebuilt properly:&lt;br /&gt;
 ./configure -cleanmach&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.clean_build&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on GPC''&lt;br /&gt;
&lt;br /&gt;
The setup script is almost identical:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/guido&lt;br /&gt;
 export MACH=gpc&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f09_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4gpc_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
To load balance and run the model, follow the steps above.&lt;br /&gt;
The env_mach_pes.xml configuration file needs to be modified as follows:&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
Use qsub to submit the model to the GPC cluster:&lt;br /&gt;
 qsub ccsm4gpc_comp-B_1850_CN_res-f09_g16.gpc.run&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2316</id>
		<title>Running CCSM4</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2316"/>
		<updated>2010-12-09T00:09:24Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''It is important to point out that all updates to the model system will only occur with CESM1.0 updates, not with CCSM4.0. It is also important to note that CCSM4 is a subset of CESM1. Although CESM1 supersedes CCSM4, users can run all CCSM4 experiments from the CESM1 code base.&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
The scientifically validated CESM1 runs are found in the list below (including a complete list of the model resolutions):&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/cesm1_current/scripts/create_newcase --list&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
                             CESM1.0 README&lt;br /&gt;
  &lt;br /&gt;
 For both a quick start as well as a detailed summary of creating and running &lt;br /&gt;
 a CESM model case, see the CESM1.0 User's Guide at&lt;br /&gt;
 http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
  &lt;br /&gt;
 IMPORTANT INFORMATION ABOUT SCIENTIFIC VALIDATION&lt;br /&gt;
  &lt;br /&gt;
   CESM1.0 has the flexibility to configure cases with many different &lt;br /&gt;
   combinations of component models, grids, and model settings, but this &lt;br /&gt;
   version of CESM has only been validated scientifically for the following &lt;br /&gt;
   fully active configurations:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_RAMPCO2_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CAM5&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_RAMPCO2_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BDRD&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BDRD&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN_CHEM &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN_CHEM&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_WACCM_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_WACCM_CN&lt;br /&gt;
   &lt;br /&gt;
      T31_gx3v7       B_1850_CN&lt;br /&gt;
 &lt;br /&gt;
   If the user is interested in running a &amp;quot;stand-alone&amp;quot; component configuration, &lt;br /&gt;
   the following model configurations have been validated scientifically and &lt;br /&gt;
   have associated diagnostic output as part of the release:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_1.9x2.5    F_2000_WACCM&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CAM5&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CN&lt;br /&gt;
      0.9x1.25_0.9x1.25  F_AMIP_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  I_2000&lt;br /&gt;
      0.9x1.25_gx1v6  I_2000_CN&lt;br /&gt;
 &lt;br /&gt;
      T62_gx1v6       C_NORMAL_YEAR&lt;br /&gt;
 &lt;br /&gt;
   For more information regarding alternative component configurations, &lt;br /&gt;
   please refer to the individual component web pages at&lt;br /&gt;
   http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
  CESM1 RESOLUTIONS:  name (shortname) &lt;br /&gt;
    pt1_pt1 (pt1)  &lt;br /&gt;
    0.23x0.31_0.23x0.31 (f02_f02)  &lt;br /&gt;
    0.23x0.31_gx1v6 (f02_g16)  &lt;br /&gt;
    0.23x0.31_tx0.1v2 (f02_t12)  &lt;br /&gt;
    0.47x0.63_0.47x0.63 (f05_f05)  &lt;br /&gt;
    0.47x0.63_gx1v6 (f05_g16)  &lt;br /&gt;
    0.47x0.63_tx0.1v2 (f05_t12)  &lt;br /&gt;
    0.9x1.25_0.9x1.25 (f09_f09)  &lt;br /&gt;
    0.9x1.25_gx1v6 (f09_g16)  &lt;br /&gt;
    1.9x2.5_1.9x2.5 (f19_f19)  &lt;br /&gt;
    1.9x2.5_gx1v6 (f19_g16)  &lt;br /&gt;
    4x5_4x5 (f45_f45)  &lt;br /&gt;
    4x5_gx3v7 (f45_g37)  &lt;br /&gt;
    T62_gx3v7 (T62_g37)  &lt;br /&gt;
    T62_tx0.1v2 (T62_t12)  &lt;br /&gt;
    T62_gx1v6 (T62_g16)  &lt;br /&gt;
    T31_T31 (T31_T31)  &lt;br /&gt;
    T31_gx3v7 (T31_g37)  &lt;br /&gt;
    T42_T42 (T42_T42)  &lt;br /&gt;
    10x15_10x15 (f10_f10)  &lt;br /&gt;
    ne30np4_1.9x2.5_gx1v6 (ne30_f19_g16)  &lt;br /&gt;
    ne240np4_0.23x0.31_gx1v6 (ne240_f02_g16)  &lt;br /&gt;
    T85_T85 (T85_T85)  &lt;br /&gt;
  &lt;br /&gt;
  COMPSETS:  name (shortname): description &lt;br /&gt;
    A_PRESENT_DAY (A) &lt;br /&gt;
         Description: All data model  &lt;br /&gt;
    A_GLC (AG) &lt;br /&gt;
         Description: All data model plus glc (glacier model)  &lt;br /&gt;
    B_2000 (B) &lt;br /&gt;
         Description: All active components, present day  &lt;br /&gt;
    B_2000_CN (BCN) &lt;br /&gt;
         Description: all active components, present day, with CN (Carbon Nitrogen) in clm  &lt;br /&gt;
    B_1850_CAM5 (B1850C5) &lt;br /&gt;
         Description: All active components, pre-industrial, cam5 physics  &lt;br /&gt;
    B_1850 (B1850) &lt;br /&gt;
         Description: All active components, pre-industrial  &lt;br /&gt;
    B_1850_CN (B1850CN) &lt;br /&gt;
         Description: all active components, pre-industrial, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_2000_CN_CHEM (B2000CNCHM) &lt;br /&gt;
         Description: All active components, present day, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_CN_CHEM (B1850CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_RAMPCO2_CN (B1850RMCN) &lt;br /&gt;
         Description: All active components, pre-industrial with co2 ramp, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000 (B20TR) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient  &lt;br /&gt;
    B_1850-2000_CN (B20TRCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000_CN_CHEM (B20TRCNCHM) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850-2000_CAM5 (B20TRC5) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, cam5 physics  &lt;br /&gt;
    B_2000_GLC (BG) &lt;br /&gt;
         Description: all active components, with active glc  &lt;br /&gt;
    B_2000_TROP_MOZART (BMOZ) &lt;br /&gt;
         Description: All active components, with trop_mozart  &lt;br /&gt;
    B_1850_WACCM (B1850W) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm  &lt;br /&gt;
    B_1850_WACCM_CN (B1850WCN) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm and CN  &lt;br /&gt;
    B_1850-2000_WACCM_CN (B20TRWCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, WACCM with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1955-2005_WACCM_CN (B55TRWCN) &lt;br /&gt;
         Description: All active components, 1955 to 2005 transient, WACCM with daily solar data and SPEs, CLM with CN  &lt;br /&gt;
    B_1850_BGC-BPRP (B1850BPRP) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850_BGC-BDRD (B1850BDRD) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    B_1850-2000_BGC-BPRP (B20TRBPRP) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850-2000_BGC-BDRD (B20TRBDRD) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    C_NORMAL_YEAR_ECOSYS (CECO) &lt;br /&gt;
         Description: Active ocean model with ecosys and with COREv2 normal year forcing  &lt;br /&gt;
    C_NORMAL_YEAR (C) &lt;br /&gt;
         Description: Active ocean model with COREv2 normal year forcing  &lt;br /&gt;
    D_NORMAL_YEAR (D) &lt;br /&gt;
         Description: Active ice model with COREv2 normal year forcing  &lt;br /&gt;
    E_2000 (E) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean, present day  &lt;br /&gt;
    E_2000_GLC (EG) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean and glc, present day  &lt;br /&gt;
    E_1850_CN (E1850CN) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, with CN  &lt;br /&gt;
    E_1850_CAM5 (E1850C5) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, cam5 physics   &lt;br /&gt;
    F_AMIP_CN (FAMIPCN) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol - valid only for 1 degree cam/clm/pres-cice  &lt;br /&gt;
    F_AMIP_CAM5 (FAMIPC5) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol with cam5  &lt;br /&gt;
    F_1850 (F1850) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_1850_CAM5 (F1850C5) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn, cam5 physics  &lt;br /&gt;
    F_2000 (F) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice  &lt;br /&gt;
    F_2000_CAM5 (FC5) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, cam5 physics  &lt;br /&gt;
    F_2000_CN (FCN) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice with CN  &lt;br /&gt;
    F_1850-2000_CN (F20TRCN) &lt;br /&gt;
         Description: 20th Century transient stand-alone cam default, prescribed ocn/ice, with CN  &lt;br /&gt;
    F_2000_GLC (FG) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, glc (glacier model)  &lt;br /&gt;
    F_1850_CN_CHEM (F1850CNCHM) &lt;br /&gt;
         Description: stand-alone cam/clm, pre-industrial, with CN in CLM, super_fast_llnl chem in cam  &lt;br /&gt;
    F_1850_WACCM (F1850W) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_2000_WACCM (FW) &lt;br /&gt;
         Description: present-day cam/clm with prescribed ice/ocn  &lt;br /&gt;
    G_1850_ECOSYS (G1850ECO) &lt;br /&gt;
         Description: 1850 control for pop-ecosystem/cice/datm7/dlnd-rx1  &lt;br /&gt;
    G_NORMAL_YEAR (G) &lt;br /&gt;
         Description: Coupled ocean ice with COREv2 normal year forcing  &lt;br /&gt;
    H_PRESENT_DAY (H) &lt;br /&gt;
         Description: Coupled ocean ice slnd  &lt;br /&gt;
    I_2000 (I) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850 (I1850) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and Satellite phenology (SP), CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_2000_GLC (IG) &lt;br /&gt;
         Description: Active glacier model and active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1948-2004 (I4804) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000 (I8520) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and transient Satellite phenology (SP), and Aerosol deposition from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    I_2000_CN (ICN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850_CN (I1850CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_1948-2004_CN (I4804CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000_CN (I8520CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and transient CN, Aerosol dep from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    S_PRESENT_DAY (S) &lt;br /&gt;
         Description: All stub models plus xatm  &lt;br /&gt;
    X_PRESENT_DAY (X) &lt;br /&gt;
         Description: All dead model  &lt;br /&gt;
    XG_PRESENT_DAY (XG) &lt;br /&gt;
         Description: All dead model and cism  &lt;br /&gt;
  &lt;br /&gt;
  MACHINES:  name (description)&lt;br /&gt;
    tcs (U of T IBM p6, os is AIX, 32 pes/node, batch system is Moab/LoadLeveler) &lt;br /&gt;
    gpc (U of T iDataPlex intel cluster, os is linux, 8 pes/node, batch system is Moab/Torque) &lt;br /&gt;
    bluefire (NCAR IBM p6, os is AIX, 32 pes/node, batch system is LSF) &lt;br /&gt;
    brutus_po (Brutus Linux Cluster ETH (pgi/9.0-1 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_pm (Brutus Linux Cluster ETH (pgi/9.0-1 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_io (Brutus Linux Cluster ETH (intel/10.1.018 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_im (Brutus Linux Cluster ETH (intel/10.1.018 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    edinburgh_lahey (NCAR CGD Linux Cluster (lahey), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_pgi (NCAR CGD Linux Cluster (pgi), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_intel (NCAR CGD Linux Cluster (intel), 8 pes/node, batch system is PBS) &lt;br /&gt;
    franklin (NERSC XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    hadley (UCB Linux Cluster, os is Linux (ia64), batch system is PBS) &lt;br /&gt;
    hopper (NERSC XT5, os is CNL, 8 pes/node, batch system is PBS) &lt;br /&gt;
    intrepid (ANL IBM BG/P, os is BGP, 4 pes/node, batch system is cobalt) &lt;br /&gt;
    jaguar (ORNL XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    jaguarpf (ORNL XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    kraken (NICS/UT/teragrid XT5, os is CNL, 12 pes/node) &lt;br /&gt;
    lynx_pgi (NCAR XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    midnight (ARSC Sun Cluster, os is Linux (pgi), batch system is PBS) &lt;br /&gt;
    pleiades (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 3.0 GHz Harpertown processors, 8 pes/node and 8 GB of memory, batch system is PBS) &lt;br /&gt;
    pleiades_wes (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.93 GHz Westmere processors, 12 pes/node and 24 GB of memory, batch system is PBS) &lt;br /&gt;
    prototype_atlas (LLNL Linux Cluster, Linux (pgi), 8 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_hera (LLNL Linux Cluster, Linux (pgi), 16 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_columbia (NASA Ames Linux Cluster, Linux (ia64), 2 pes/node, batch system is PBS) &lt;br /&gt;
    prototype_frost (NCAR IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_nyblue (SUNY IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_ranger (TACC Linux Cluster, Linux (pgi), 1 pes/node, batch system is SGE) &lt;br /&gt;
    prototype_ubgl (LLNL IBM BG/L, os is BGL, 2 pes/node, batch system is Moab) &lt;br /&gt;
    generic_ibm (generic ibm power system, os is AIX, batch system is LoadLeveler, user-defined) &lt;br /&gt;
    generic_xt (generic CRAY XT, os is CNL, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pgi (generic linux (pgi), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_lahey (generic linux (lahey), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_intel (generic linux (intel), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pathscale (generic linux (pathscale), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Initializing the Model Setup:''&lt;br /&gt;
&lt;br /&gt;
The initial setup of the model on TCS is simplified by the short script below:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/$USER&lt;br /&gt;
 export MACH=tcs&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f19_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' CCSMROOT should point to the model code version in ''/project/ccsm'' with the &amp;quot;_current&amp;quot; suffix. The same applies to CESM1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This script creates an 1850 control with all components of the model fully active and carbon-nitrogen cycling in the land component. The resolution is 1.9x2.5 in the atmosphere and gx1v6 (nominal 1 degree) in the ocean. The case is created in the ~/runs directory.&lt;br /&gt;
&lt;br /&gt;
For valid component sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/a2967.html &lt;br /&gt;
For information on resolution sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/x42.html#ccsm_grids &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Load Balancing:''&lt;br /&gt;
&lt;br /&gt;
For the NCAR bluefire load-balancing tables for a select set of simulations, see:&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/timing/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/timing/&lt;br /&gt;
&lt;br /&gt;
 cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
&lt;br /&gt;
edit env_mach_pes.xml&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ATM&amp;quot;   value=&amp;quot;448&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ATM&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_LND&amp;quot;   value=&amp;quot;320&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_LND&amp;quot;   value=&amp;quot;160&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ICE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ICE&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_OCN&amp;quot;   value=&amp;quot;256&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_OCN&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_CPL&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_CPL&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_GLC&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once this file is modified, you can configure the case:&lt;br /&gt;
&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 &lt;br /&gt;
You will notice that configure changes the file you just edited; there you can see the total number of processors used by the simulation (704, or 11 nodes, in this case):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;TOTALPES&amp;quot;   value=&amp;quot;704&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_LEVEL&amp;quot;   value=&amp;quot;1r&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_PCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_TCOST&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_ESTCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
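As a sanity check, the node count follows directly from TOTALPES and the tasks per node; a minimal sketch, using the values from this TCS example (ceiling division, since a partly filled node is still allocated whole):&lt;br /&gt;

```shell
# Values copied from the configured case above (TCS runs 64 tasks per node).
TOTALPES=704
PES_PER_NODE=64
# Ceiling division: a partly filled node still counts as a whole node.
NODES=$(( (TOTALPES + PES_PER_NODE - 1) / PES_PER_NODE ))
echo "$NODES nodes"   # 11 nodes
```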
'''Note:''' Rather than modifying the load balancing manually, you can use the xmlchange script that NCAR provides in your $CASE directory, which modifies the CPU allocation of individual components without editing the env_mach_pes.xml file directly:&lt;br /&gt;
&lt;br /&gt;
As a different configuration, we might want 8 CPUs running the OCN component continuously, ATM on the remaining 24 CPUs, and LND, ICE and CPL on 8 CPUs each within the ATM range. To set this up, enter:&lt;br /&gt;
&lt;br /&gt;
 configure -cleanmach&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 24&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 8 &lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_CPL -val 16&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_OCN -val 24&lt;br /&gt;
 configure -case&lt;br /&gt;
&lt;br /&gt;
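xmlchange is the supported tool for these edits; purely as an illustration of the rewrite it performs, the sketch below updates one value keyed on its id in a stand-in file (the path and file contents are hypothetical, and angle brackets are omitted for simplicity):&lt;br /&gt;

```shell
# Stand-in for two entry lines of env_mach_pes.xml (hypothetical file).
printf 'entry id="NTASKS_ATM" value="448"\nentry id="NTASKS_OCN" value="256"\n' > /tmp/pes_demo.txt
# Rewrite the value attribute of one entry, keyed on its id (GNU sed -i).
sed -i '/id="NTASKS_ATM"/ s/value="[0-9]*"/value="24"/' /tmp/pes_demo.txt
grep 'NTASKS_ATM' /tmp/pes_demo.txt   # entry id="NTASKS_ATM" value="24"
```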
Then rebuild and resubmit the model.&lt;br /&gt;
&lt;br /&gt;
The task geometry used by LoadLeveler on TCS is located in the file ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Ensure that the proper modules are loaded:&lt;br /&gt;
&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) ncl/5.1.1               3) netcdf/4.0.1_nc3        5) xlf/13.1&lt;br /&gt;
  2) nco/3.9.6               4) parallel-netcdf/1.1.1   6) vacpp/11.1&lt;br /&gt;
&lt;br /&gt;
Now compile the model with:&lt;br /&gt;
&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
One of the pre-processing steps in this build sequence is to fetch input data sets (initial and boundary conditions) from the NCAR SVN server. If a large amount of initial condition data must be transferred from the NCAR repository, you may want to do this yourself on the ''datamover1'' node before you build, since ''datamover1'' has a high-bandwidth connection to the outside.&lt;br /&gt;
Note: We have most of the input data on /project/ccsm already, so this step will not be required for the more common configurations.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; ssh datamover1&lt;br /&gt;
 Last login: Wed Jul  7 16:38:14 2010 from tcs-f11n06-gpfs&lt;br /&gt;
 user@gpc-logindm01:~&amp;gt;cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;./check_input_data -inputdata /project/ccsm/inputdata -export&lt;br /&gt;
 Input Data List Files Found:&lt;br /&gt;
 ./Buildconf/cam.input_data_list&lt;br /&gt;
 ./Buildconf/clm.input_data_list&lt;br /&gt;
 ./Buildconf/cice.input_data_list&lt;br /&gt;
 ./Buildconf/pop2.input_data_list&lt;br /&gt;
 ./Buildconf/cpl.input_data_list&lt;br /&gt;
 export https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc /project/ccsm/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc ..... success&lt;br /&gt;
&lt;br /&gt;
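check_input_data is the proper tool for this, but the basic idea, reporting which required files already exist under the local inputdata tree, can be sketched with stand-in paths (both file names below are hypothetical):&lt;br /&gt;

```shell
# Stand-in local inputdata tree with one file present and one missing.
mkdir -p /tmp/inputdata_demo
touch /tmp/inputdata_demo/aero_1850clim.nc
for f in aero_1850clim.nc sst_1850.nc; do
  if [ -e "/tmp/inputdata_demo/$f" ]; then
    echo "found   $f"
  else
    echo "missing $f"
  fi
done
```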
&lt;br /&gt;
''Setting the Simulation Length:''&lt;br /&gt;
&lt;br /&gt;
The simulation length can be set at any time in the setup sequence by editing ''env_run.xml''.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--&amp;quot;if RESUBMIT is greater than 0, then case will automatically resubmit (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;RESUBMIT&amp;quot;   value=&amp;quot;10&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjunction with STOP_N and STOP_DATE, valid values: none,never,nsteps,nstep,nseconds,nsecond,nminutes,nminute,nhours,nhour,ndays,nday,nmonths,nmonth,nyears,nyear,date,ifdays0,end (char) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_OPTION&amp;quot;   value=&amp;quot;nmonths&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjunction with STOP_OPTION and STOP_DATE (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_N&amp;quot;   value=&amp;quot;12&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
These settings tell the model to checkpoint after each model year (12 months) and to resubmit itself 10 times, i.e. run for roughly 10 model years (10 checkpoints).&lt;br /&gt;
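The run-length arithmetic can be sketched as follows; segment-counting conventions vary between model versions (RESUBMIT may or may not include the initial run), so treat this as an estimate and check the run logs for the actual end date:&lt;br /&gt;

```shell
# Hedged sketch of the run length implied by the env_run.xml values above.
STOP_N=12       # months simulated per job segment (STOP_OPTION=nmonths)
SEGMENTS=10     # segment count taken from RESUBMIT
TOTAL_MONTHS=$(( SEGMENTS * STOP_N ))
echo "$(( TOTAL_MONTHS / 12 )) model years in $SEGMENTS segments"
```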
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on the Distributed System (TCS):''&lt;br /&gt;
&lt;br /&gt;
The model is now ready to be submitted to the TCS batch queue&lt;br /&gt;
&lt;br /&gt;
 llsubmit ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Once the model has run through a checkpoint, timing information on the simulation can be found in:&lt;br /&gt;
 ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16/timing&lt;br /&gt;
&lt;br /&gt;
Standard output from the model can be followed during runtime by going to:&lt;br /&gt;
 /scratch/guido/ccsm4_comp-B_1850_CN_res-f19_g16/run&lt;br /&gt;
and running&lt;br /&gt;
 tail -f &amp;lt;component_log_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The model will archive the NetCDF output in:&lt;br /&gt;
 /scratch/$USER/archive&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Cloning Simulations''&lt;br /&gt;
&lt;br /&gt;
The clone command is a useful way to set up multiple runs quickly: it copies an existing case, so there is no need to run the setup script above every time.&lt;br /&gt;
 cd ~/runs&lt;br /&gt;
 /project/ccsm/ccsm4_0_current/scripts/create_clone -clone ccsm4_comp-B_1850_CN_res-f09_g16 -case ccsm4_comp-B_1850_CN_res-f09_g16_clone -v&lt;br /&gt;
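Cloning pairs naturally with small ensembles. A hypothetical sketch that only prints the create_clone commands so the loop can be inspected first (case names are illustrative; drop the echo to actually create the clones):&lt;br /&gt;

```shell
# Dry run: print one create_clone command per ensemble member.
BASE=ccsm4_comp-B_1850_CN_res-f09_g16
for n in 1 2 3; do
  echo /project/ccsm/ccsm4_0_current/scripts/create_clone \
       -clone "$BASE" -case "${BASE}_ens${n}" -v
done
```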
&lt;br /&gt;
To change the load balancing (env_mach_pes.xml) or other parameters in an existing simulation setup, do a clean build to make sure the model is rebuilt properly:&lt;br /&gt;
 ./configure -cleanmach&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.clean_build&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on GPC''&lt;br /&gt;
&lt;br /&gt;
The setup script is almost identical:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/guido&lt;br /&gt;
 export MACH=gpc&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f09_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4gpc_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
To load balance and run the model, follow the steps above.&lt;br /&gt;
The env_mach_pes.xml configuration file needs to be modified as follows:&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
Use qsub to submit the model to the GPC cluster:&lt;br /&gt;
 qsub ccsm4gpc_comp-B_1850_CN_res-f09_g16.gpc.run&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2315</id>
		<title>Running CCSM4</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2315"/>
		<updated>2010-12-09T00:08:37Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''It is important to point out that all updates to the model system will only occur with CESM1.0 updates, not with CCSM4.0. It is also important to note that CCSM4 is a subset of CESM1. Although CESM1 supersedes CCSM4, users can run all CCSM4 experiments from the CESM1 code base.&lt;br /&gt;
'''&lt;br /&gt;
The scientifically validated CESM1 runs are found in the list below (including a complete list of the model resolutions):&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/cesm1_current/scripts/create_newcase --list&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
                            CESM1.0 README&lt;br /&gt;
 &lt;br /&gt;
For both a quick start as well as a detailed summary of creating and running &lt;br /&gt;
a CESM model case, see the CESM1.0 User's Guide at&lt;br /&gt;
http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
 &lt;br /&gt;
IMPORTANT INFORMATION ABOUT SCIENTIFIC VALIDATION&lt;br /&gt;
 &lt;br /&gt;
   CESM1.0 has the flexibility to configure cases with many different &lt;br /&gt;
   combinations of component models, grids, and model settings, but this &lt;br /&gt;
   version of CESM has only been validated scientifically for the following &lt;br /&gt;
   fully active configurations:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_RAMPCO2_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CAM5&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_RAMPCO2_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BDRD&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BDRD&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN_CHEM &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN_CHEM&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_WACCM_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_WACCM_CN&lt;br /&gt;
   &lt;br /&gt;
      T31_gx3v7       B_1850_CN&lt;br /&gt;
 &lt;br /&gt;
   If the user is interested in running a &amp;quot;stand-alone&amp;quot; component configuration, &lt;br /&gt;
   the following model configurations have been validated scientifically and &lt;br /&gt;
   have associated diagnostic output as part of the release:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_1.9x2.5    F_2000_WACCM&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CAM5&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CN&lt;br /&gt;
      0.9x1.25_0.9x1.25  F_AMIP_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  I_2000&lt;br /&gt;
      0.9x1.25_gx1v6  I_2000_CN&lt;br /&gt;
 &lt;br /&gt;
      T62_gx1v6       C_NORMAL_YEAR&lt;br /&gt;
 &lt;br /&gt;
   For more information regarding alternative component configurations, &lt;br /&gt;
   please refer to the individual component web pages at&lt;br /&gt;
   http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
  CESM1 RESOLUTIONS:  name (shortname) &lt;br /&gt;
    pt1_pt1 (pt1)  &lt;br /&gt;
    0.23x0.31_0.23x0.31 (f02_f02)  &lt;br /&gt;
    0.23x0.31_gx1v6 (f02_g16)  &lt;br /&gt;
    0.23x0.31_tx0.1v2 (f02_t12)  &lt;br /&gt;
    0.47x0.63_0.47x0.63 (f05_f05)  &lt;br /&gt;
    0.47x0.63_gx1v6 (f05_g16)  &lt;br /&gt;
    0.47x0.63_tx0.1v2 (f05_t12)  &lt;br /&gt;
    0.9x1.25_0.9x1.25 (f09_f09)  &lt;br /&gt;
    0.9x1.25_gx1v6 (f09_g16)  &lt;br /&gt;
    1.9x2.5_1.9x2.5 (f19_f19)  &lt;br /&gt;
    1.9x2.5_gx1v6 (f19_g16)  &lt;br /&gt;
    4x5_4x5 (f45_f45)  &lt;br /&gt;
    4x5_gx3v7 (f45_g37)  &lt;br /&gt;
    T62_gx3v7 (T62_g37)  &lt;br /&gt;
    T62_tx0.1v2 (T62_t12)  &lt;br /&gt;
    T62_gx1v6 (T62_g16)  &lt;br /&gt;
    T31_T31 (T31_T31)  &lt;br /&gt;
    T31_gx3v7 (T31_g37)  &lt;br /&gt;
    T42_T42 (T42_T42)  &lt;br /&gt;
    10x15_10x15 (f10_f10)  &lt;br /&gt;
    ne30np4_1.9x2.5_gx1v6 (ne30_f19_g16)  &lt;br /&gt;
    ne240np4_0.23x0.31_gx1v6 (ne240_f02_g16)  &lt;br /&gt;
    T85_T85 (T85_T85)  &lt;br /&gt;
  &lt;br /&gt;
  COMPSETS:  name (shortname): description &lt;br /&gt;
    A_PRESENT_DAY (A) &lt;br /&gt;
         Description: All data model  &lt;br /&gt;
    A_GLC (AG) &lt;br /&gt;
         Description: All data model plus glc (glacier model)  &lt;br /&gt;
    B_2000 (B) &lt;br /&gt;
         Description: All active components, present day  &lt;br /&gt;
    B_2000_CN (BCN) &lt;br /&gt;
         Description: all active components, present day, with CN (Carbon Nitrogen) in clm  &lt;br /&gt;
    B_1850_CAM5 (B1850C5) &lt;br /&gt;
         Description: All active components, pre-industrial, cam5 physics  &lt;br /&gt;
    B_1850 (B1850) &lt;br /&gt;
         Description: All active components, pre-industrial  &lt;br /&gt;
    B_1850_CN (B1850CN) &lt;br /&gt;
         Description: all active components, pre-industrial, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_2000_CN_CHEM (B2000CNCHM) &lt;br /&gt;
         Description: All active components, present day, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_CN_CHEM (B1850CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_RAMPCO2_CN (B1850RMCN) &lt;br /&gt;
         Description: All active components, pre-industrial with co2 ramp, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000 (B20TR) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient  &lt;br /&gt;
    B_1850-2000_CN (B20TRCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000_CN_CHEM (B20TRCNCHM) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850-2000_CAM5 (B20TRC5) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, cam5 physics  &lt;br /&gt;
    B_2000_GLC (BG) &lt;br /&gt;
         Description: all active components, with active glc  &lt;br /&gt;
    B_2000_TROP_MOZART (BMOZ) &lt;br /&gt;
         Description: All active components, with trop_mozart  &lt;br /&gt;
    B_1850_WACCM (B1850W) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm  &lt;br /&gt;
    B_1850_WACCM_CN (B1850WCN) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm and CN  &lt;br /&gt;
    B_1850-2000_WACCM_CN (B20TRWCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, WACCM with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1955-2005_WACCM_CN (B55TRWCN) &lt;br /&gt;
         Description: All active components, 1955 to 2005 transient, WACCM with daily solar data and SPEs, CLM with CN  &lt;br /&gt;
    B_1850_BGC-BPRP (B1850BPRP) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850_BGC-BDRD (B1850BDRD) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    B_1850-2000_BGC-BPRP (B20TRBPRP) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850-2000_BGC-BDRD (B20TRBDRD) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    C_NORMAL_YEAR_ECOSYS (CECO) &lt;br /&gt;
         Description: Active ocean model with ecosys and with COREv2 normal year forcing  &lt;br /&gt;
    C_NORMAL_YEAR (C) &lt;br /&gt;
         Description: Active ocean model with COREv2 normal year forcing  &lt;br /&gt;
    D_NORMAL_YEAR (D) &lt;br /&gt;
         Description: Active ice model with COREv2 normal year forcing  &lt;br /&gt;
    E_2000 (E) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean, present day  &lt;br /&gt;
    E_2000_GLC (EG) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean and glc, present day  &lt;br /&gt;
    E_1850_CN (E1850CN) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, with CN  &lt;br /&gt;
    E_1850_CAM5 (E1850C5) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, cam5 physics   &lt;br /&gt;
    F_AMIP_CN (FAMIPCN) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol - valid only for 1 degree cam/clm/pres-cice  &lt;br /&gt;
    F_AMIP_CAM5 (FAMIPC5) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol with cam5  &lt;br /&gt;
    F_1850 (F1850) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_1850_CAM5 (F1850C5) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn, cam5 physics  &lt;br /&gt;
    F_2000 (F) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice  &lt;br /&gt;
    F_2000_CAM5 (FC5) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, cam5 physics  &lt;br /&gt;
    F_2000_CN (FCN) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice with CN  &lt;br /&gt;
    F_1850-2000_CN (F20TRCN) &lt;br /&gt;
         Description: 20th Century transient stand-alone cam default, prescribed ocn/ice, with CN  &lt;br /&gt;
    F_2000_GLC (FG) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, glc (glacier model)  &lt;br /&gt;
    F_1850_CN_CHEM (F1850CNCHM) &lt;br /&gt;
         Description: stand-alone cam/clm, pre-industrial, with CN in CLM, super_fast_llnl chem in cam  &lt;br /&gt;
    F_1850_WACCM (F1850W) &lt;br /&gt;
         Description: Pre-industrial waccm/clm with prescribed ice/ocn  &lt;br /&gt;
    F_2000_WACCM (FW) &lt;br /&gt;
         Description: present-day cam/clm with prescribed ice/ocn  &lt;br /&gt;
    G_1850_ECOSYS (G1850ECO) &lt;br /&gt;
         Description: 1850 control for pop-ecosystem/cice/datm7/dlnd-rx1  &lt;br /&gt;
    G_NORMAL_YEAR (G) &lt;br /&gt;
         Description: Coupled ocean ice with COREv2 normal year forcing  &lt;br /&gt;
    H_PRESENT_DAY (H) &lt;br /&gt;
         Description: Coupled ocean ice slnd  &lt;br /&gt;
    I_2000 (I) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850 (I1850) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and Satellite phenology (SP), CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_2000_GLC (IG) &lt;br /&gt;
         Description: Active glacier model and active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1948-2004 (I4804) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000 (I8520) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and transient Satellite phenology (SP), and Aerosol deposition from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    I_2000_CN (ICN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850_CN (I1850CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_1948-2004_CN (I4804CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000_CN (I8520CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and transient CN, Aerosol dep from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    S_PRESENT_DAY (S) &lt;br /&gt;
         Description: All stub models plus xatm  &lt;br /&gt;
    X_PRESENT_DAY (X) &lt;br /&gt;
         Description: All dead model  &lt;br /&gt;
    XG_PRESENT_DAY (XG) &lt;br /&gt;
         Description: All dead model and cism  &lt;br /&gt;
  &lt;br /&gt;
  MACHINES:  name (description)&lt;br /&gt;
    tcs (U of T IBM p6, os is AIX, 32 pes/node, batch system is Moab/LoadLeveler) &lt;br /&gt;
    gpc (U of T iDataPlex intel cluster, os is linux, 8 pes/node, batch system is Moab/Torque) &lt;br /&gt;
    bluefire (NCAR IBM p6, os is AIX, 32 pes/node, batch system is LSF) &lt;br /&gt;
    brutus_po (Brutus Linux Cluster ETH (pgi/9.0-1 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_pm (Brutus Linux Cluster ETH (pgi/9.0-1 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_io (Brutus Linux Cluster ETH (intel/10.1.018 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_im (Brutus Linux Cluster ETH (intel/10.1.018 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    edinburgh_lahey (NCAR CGD Linux Cluster (lahey), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_pgi (NCAR CGD Linux Cluster (pgi), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_intel (NCAR CGD Linux Cluster (intel), 8 pes/node, batch system is PBS) &lt;br /&gt;
    franklin (NERSC XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    hadley (UCB Linux Cluster, os is Linux (ia64), batch system is PBS) &lt;br /&gt;
    hopper (NERSC XT5, os is CNL, 8 pes/node, batch system is PBS) &lt;br /&gt;
    intrepid (ANL IBM BG/P, os is BGP, 4 pes/node, batch system is cobalt) &lt;br /&gt;
    jaguar (ORNL XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    jaguarpf (ORNL XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    kraken (NICS/UT/teragrid XT5, os is CNL, 12 pes/node) &lt;br /&gt;
    lynx_pgi (NCAR XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    midnight (ARSC Sun Cluster, os is Linux (pgi), batch system is PBS) &lt;br /&gt;
    pleiades (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 3.0 GHz Harpertown processors, 8 pes/node and 8 GB of memory, batch system is PBS) &lt;br /&gt;
    pleiades_wes (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.93 GHz Westmere processors, 12 pes/node and 24 GB of memory, batch system is PBS) &lt;br /&gt;
    prototype_atlas (LLNL Linux Cluster, Linux (pgi), 8 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_hera (LLNL Linux Cluster, Linux (pgi), 16 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_columbia (NASA Ames Linux Cluster, Linux (ia64), 2 pes/node, batch system is PBS) &lt;br /&gt;
    prototype_frost (NCAR IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_nyblue (SUNY IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_ranger (TACC Linux Cluster, Linux (pgi), 1 pes/node, batch system is SGE) &lt;br /&gt;
    prototype_ubgl (LLNL IBM BG/L, os is BGL, 2 pes/node, batch system is Moab) &lt;br /&gt;
    generic_ibm (generic ibm power system, os is AIX, batch system is LoadLeveler, user-defined) &lt;br /&gt;
    generic_xt (generic CRAY XT, os is CNL, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pgi (generic linux (pgi), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_lahey (generic linux (lahey), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_intel (generic linux (intel), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pathscale (generic linux (pathscale), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Initializing the Model Setup:''&lt;br /&gt;
&lt;br /&gt;
The initial setup of the model on TCS is simplified by the short script below:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/$USER&lt;br /&gt;
 export MACH=tcs&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f19_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' CCSMROOT should point to the model code version in ''/project/ccsm'' whose name ends in &amp;quot;_current&amp;quot;. The same applies to CESM1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This script creates an 1850 control with all components of the model fully active and carbon-nitrogen cycling in the land component. The resolution is 1.9x2.5 in the atmosphere and nominal one degree (gx1v6) in the ocean. The case directory is created under ~/runs:&lt;br /&gt;
&lt;br /&gt;
For valid component sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/a2967.html &lt;br /&gt;
For information on resolution sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/x42.html#ccsm_grids &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Load Balancing:''&lt;br /&gt;
&lt;br /&gt;
For the NCAR bluefire load balancing table for a select set of simulations see:&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/timing/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/timing/&lt;br /&gt;
&lt;br /&gt;
 cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
&lt;br /&gt;
edit env_mach_pes.xml&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ATM&amp;quot;   value=&amp;quot;448&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ATM&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_LND&amp;quot;   value=&amp;quot;320&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_LND&amp;quot;   value=&amp;quot;160&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ICE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ICE&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_OCN&amp;quot;   value=&amp;quot;256&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_OCN&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_CPL&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_CPL&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_GLC&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once this file is modified you can configure the case&lt;br /&gt;
&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 &lt;br /&gt;
You will notice that configure changes the file that you just edited, and you can see the total number of processors used by the simulation (704, or 11 nodes, in this case):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;TOTALPES&amp;quot;   value=&amp;quot;704&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_LEVEL&amp;quot;   value=&amp;quot;1r&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_PCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_TCOST&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_ESTCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
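&lt;br /&gt;
The node count quoted above follows directly from TOTALPES divided by PES_PER_NODE; as a quick sanity check in the shell:&lt;br /&gt;
&lt;br /&gt;
 echo $((704 / 64))   # 11 nodes&lt;br /&gt;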
&lt;br /&gt;
'''Note:''' Rather than modifying the load balancing manually, NCAR provides an ''xmlchange'' script in your $CASE directory that lets you modify the CPU allocation of each component without editing the env_mach_pes.xml file directly:&lt;br /&gt;
&lt;br /&gt;
As an example of a different configuration, we might want 8 cpus running the OCN component continuously, with the remaining 24 cpus running ATM, and LND, ICE and CPL stacked on 8 of those 24 each. To set this up, enter:&lt;br /&gt;
&lt;br /&gt;
 ./configure -cleanmach&lt;br /&gt;
 ./xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 24&lt;br /&gt;
 ./xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 8&lt;br /&gt;
 ./xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 8&lt;br /&gt;
 ./xmlchange -file env_mach_pes.xml -id ROOTPE_ICE -val 8&lt;br /&gt;
 ./xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 8&lt;br /&gt;
 ./xmlchange -file env_mach_pes.xml -id ROOTPE_CPL -val 16&lt;br /&gt;
 ./xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 8&lt;br /&gt;
 ./xmlchange -file env_mach_pes.xml -id ROOTPE_OCN -val 24&lt;br /&gt;
 ./configure -case&lt;br /&gt;
&lt;br /&gt;
Then build and resubmit&lt;br /&gt;
&lt;br /&gt;
The task geometry used by LoadLeveler on TCS is located in the file ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run.&lt;br /&gt;
&lt;br /&gt;
Ensure that the proper modules are loaded:&lt;br /&gt;
&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) ncl/5.1.1               3) netcdf/4.0.1_nc3        5) xlf/13.1&lt;br /&gt;
  2) nco/3.9.6               4) parallel-netcdf/1.1.1   6) vacpp/11.1&lt;br /&gt;
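&lt;br /&gt;
If any of these are missing, they can be loaded by hand (module versions as listed above; adjust to what ''module avail'' reports on your system):&lt;br /&gt;
&lt;br /&gt;
 module load ncl/5.1.1 nco/3.9.6 netcdf/4.0.1_nc3 parallel-netcdf/1.1.1 xlf/13.1 vacpp/11.1&lt;br /&gt;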
&lt;br /&gt;
Now compile the model with:&lt;br /&gt;
&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
One of the pre-processing steps in this build sequence is to fetch input data sets (initial and boundary conditions) from the NCAR SVN server. If a large amount of initial-condition data has to be transferred from the NCAR repository, you may want to do this step yourself on the ''datamover1'' node before you build, since ''datamover1'' has a high-bandwidth connection to the outside.&lt;br /&gt;
Note: Most of the input data is already on /project/ccsm, so this step is not required for the more common configurations.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; ssh datamover1&lt;br /&gt;
 Last login: Wed Jul  7 16:38:14 2010 from tcs-f11n06-gpfs&lt;br /&gt;
 user@gpc-logindm01:~&amp;gt;cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;./check_input_data -inputdata /project/ccsm/inputdata -export&lt;br /&gt;
 Input Data List Files Found:&lt;br /&gt;
 ./Buildconf/cam.input_data_list&lt;br /&gt;
 ./Buildconf/clm.input_data_list&lt;br /&gt;
 ./Buildconf/cice.input_data_list&lt;br /&gt;
 ./Buildconf/pop2.input_data_list&lt;br /&gt;
 ./Buildconf/cpl.input_data_list&lt;br /&gt;
 export https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc /project/ccsm/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc ..... success&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Setting the Simulation Length:''&lt;br /&gt;
&lt;br /&gt;
The length of time for which the model runs can be set by editing ''env_run.xml'' at any time in the setup sequence:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--&amp;quot;if RESUBMIT is greater than 0, then case will automatically resubmit (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;RESUBMIT&amp;quot;   value=&amp;quot;10&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjuction with STOP_N and STOP_DATE, valid values: none,never,nsteps,nstep,nseconds,nsecond,nminutes,nminute,nhours,nhour,ndays,nday,nmonths,nmonth,nyears,nyear,date,ifdays0,end (char) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_OPTION&amp;quot;   value=&amp;quot;nmonths&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjuction with STOP_OPTION and STOP_DATE (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_N&amp;quot;   value=&amp;quot;12&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
These settings tell the model to stop and checkpoint after each model year (12 months); with RESUBMIT set to 10, the case resubmits itself 10 times after the initial run, giving 11 years (11 checkpoints) in total.&lt;br /&gt;
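&lt;br /&gt;
The same settings can also be applied with the ''xmlchange'' script from the case directory instead of editing ''env_run.xml'' by hand, e.g.:&lt;br /&gt;
&lt;br /&gt;
 ./xmlchange -file env_run.xml -id STOP_OPTION -val nmonths&lt;br /&gt;
 ./xmlchange -file env_run.xml -id STOP_N -val 12&lt;br /&gt;
 ./xmlchange -file env_run.xml -id RESUBMIT -val 10&lt;br /&gt;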
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on the Distributed System (TCS):''&lt;br /&gt;
&lt;br /&gt;
The model is now ready to be submitted to the TCS batch queue&lt;br /&gt;
&lt;br /&gt;
 llsubmit ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Once the model has run through a checkpoint, timing information on the simulation can be found in:&lt;br /&gt;
 ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16/timing&lt;br /&gt;
&lt;br /&gt;
Standard output from the model can be followed during runtime by going to:&lt;br /&gt;
 /scratch/guido/ccsm4_comp-B_1850_CN_res-f19_g16/run&lt;br /&gt;
and running&lt;br /&gt;
 tail -f &amp;lt;component_log_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The model will archive the NetCDF output in:&lt;br /&gt;
 /scratch/$USER/archive&lt;br /&gt;
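&lt;br /&gt;
The archive is organised per case and per component; for example, atmosphere history files for the case above would typically land under a path like the following (the exact layout may vary with the model version):&lt;br /&gt;
&lt;br /&gt;
 ls /scratch/$USER/archive/ccsm4_comp-B_1850_CN_res-f19_g16/atm/hist&lt;br /&gt;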
&lt;br /&gt;
&lt;br /&gt;
''Cloning Simulations''&lt;br /&gt;
&lt;br /&gt;
A useful command for setting up multiple runs quickly is ''create_clone'', which clones an existing case so that there is no need to run the setup script above every time:&lt;br /&gt;
 cd ~/runs&lt;br /&gt;
 /project/ccsm/ccsm4_0_current/scripts/create_clone -clone ccsm4_comp-B_1850_CN_res-f09_g16 -case ccsm4_comp-B_1850_CN_res-f09_g16_clone -v&lt;br /&gt;
&lt;br /&gt;
To change the load balancing (env_mach_pes.xml) or other parameters in an existing case, do a clean build to make sure the model is rebuilt properly:&lt;br /&gt;
 ./configure -cleanmach&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.clean_build&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on GPC''&lt;br /&gt;
&lt;br /&gt;
The setup script is almost identical:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/guido&lt;br /&gt;
 export MACH=gpc&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f09_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4gpc_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
To load balance and run the model, follow the steps above.&lt;br /&gt;
On GPC the env_mach_pes.xml configuration file needs to be modified as follows:&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
Use qsub to submit the model to the GPC cluster:&lt;br /&gt;
 qsub ccsm4gpc_comp-B_1850_CN_res-f09_g16.gpc.run&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2314</id>
		<title>Running CCSM4</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2314"/>
		<updated>2010-12-09T00:07:46Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;It is important to point out that all updates to the model system will only occur with CESM1.0 updates, not with CCSM4.0. It is also important to note that CCSM4 is a subset of CESM1. Although CESM1 supersedes CCSM4, users can run all CCSM4 experiments from the CESM1 code base.&lt;br /&gt;
&lt;br /&gt;
The scientifically validated CESM1 runs are found in the list below (including a complete list of the model resolutions):&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/cesm1_current/scripts/create_newcase --list&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
                            CESM1.0 README&lt;br /&gt;
 &lt;br /&gt;
For both a quick start as well as a detailed summary of creating and running &lt;br /&gt;
a CESM model case, see the CESM1.0 User's Guide at&lt;br /&gt;
http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
 &lt;br /&gt;
IMPORTANT INFORMATION ABOUT SCIENTIFIC VALIDATION&lt;br /&gt;
 &lt;br /&gt;
   CESM1.0 has the flexibility to configure cases with many different &lt;br /&gt;
   combinations of component models, grids, and model settings, but this &lt;br /&gt;
   version of CESM has only been validated scientifically for the following &lt;br /&gt;
   fully active configurations:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_RAMPCO2_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CAM5&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_RAMPCO2_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BDRD&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BDRD&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN_CHEM &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN_CHEM&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_WACCM_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_WACCM_CN&lt;br /&gt;
   &lt;br /&gt;
      T31_gx3v7       B_1850_CN&lt;br /&gt;
 &lt;br /&gt;
   If the user is interested in running a &amp;quot;stand-alone&amp;quot; component configuration, &lt;br /&gt;
   the following model configurations have been validated scientifically and &lt;br /&gt;
   have associated diagnostic output as part of the release:&lt;br /&gt;
 &lt;br /&gt;
      1.9x2.5_1.9x2.5    F_2000_WACCM&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CAM5&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CN&lt;br /&gt;
      0.9x1.25_0.9x1.25  F_AMIP_CN&lt;br /&gt;
 &lt;br /&gt;
      0.9x1.25_gx1v6  I_2000&lt;br /&gt;
      0.9x1.25_gx1v6  I_2000_CN&lt;br /&gt;
 &lt;br /&gt;
      T62_gx1v6       C_NORMAL_YEAR&lt;br /&gt;
 &lt;br /&gt;
   For more information regarding alternative component configurations, &lt;br /&gt;
   please refer to the individual component web pages at&lt;br /&gt;
   http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
  CESM1 RESOLUTIONS:  name (shortname) &lt;br /&gt;
    pt1_pt1 (pt1)  &lt;br /&gt;
    0.23x0.31_0.23x0.31 (f02_f02)  &lt;br /&gt;
    0.23x0.31_gx1v6 (f02_g16)  &lt;br /&gt;
    0.23x0.31_tx0.1v2 (f02_t12)  &lt;br /&gt;
    0.47x0.63_0.47x0.63 (f05_f05)  &lt;br /&gt;
    0.47x0.63_gx1v6 (f05_g16)  &lt;br /&gt;
    0.47x0.63_tx0.1v2 (f05_t12)  &lt;br /&gt;
    0.9x1.25_0.9x1.25 (f09_f09)  &lt;br /&gt;
    0.9x1.25_gx1v6 (f09_g16)  &lt;br /&gt;
    1.9x2.5_1.9x2.5 (f19_f19)  &lt;br /&gt;
    1.9x2.5_gx1v6 (f19_g16)  &lt;br /&gt;
    4x5_4x5 (f45_f45)  &lt;br /&gt;
    4x5_gx3v7 (f45_g37)  &lt;br /&gt;
    T62_gx3v7 (T62_g37)  &lt;br /&gt;
    T62_tx0.1v2 (T62_t12)  &lt;br /&gt;
    T62_gx1v6 (T62_g16)  &lt;br /&gt;
    T31_T31 (T31_T31)  &lt;br /&gt;
    T31_gx3v7 (T31_g37)  &lt;br /&gt;
    T42_T42 (T42_T42)  &lt;br /&gt;
    10x15_10x15 (f10_f10)  &lt;br /&gt;
    ne30np4_1.9x2.5_gx1v6 (ne30_f19_g16)  &lt;br /&gt;
    ne240np4_0.23x0.31_gx1v6 (ne240_f02_g16)  &lt;br /&gt;
    T85_T85 (T85_T85)  &lt;br /&gt;
  &lt;br /&gt;
  COMPSETS:  name (shortname): description &lt;br /&gt;
    A_PRESENT_DAY (A) &lt;br /&gt;
         Description: All data model  &lt;br /&gt;
    A_GLC (AG) &lt;br /&gt;
         Description: All data model plus glc (glacier model)  &lt;br /&gt;
    B_2000 (B) &lt;br /&gt;
         Description: All active components, present day  &lt;br /&gt;
    B_2000_CN (BCN) &lt;br /&gt;
         Description: all active components, present day, with CN (Carbon Nitrogen) in clm  &lt;br /&gt;
    B_1850_CAM5 (B1850C5) &lt;br /&gt;
         Description: All active components, pre-industrial, cam5 physics  &lt;br /&gt;
    B_1850 (B1850) &lt;br /&gt;
         Description: All active components, pre-industrial  &lt;br /&gt;
    B_1850_CN (B1850CN) &lt;br /&gt;
         Description: all active components, pre-industrial, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_2000_CN_CHEM (B2000CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_CN_CHEM (B1850CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_RAMPCO2_CN (B1850RMCN) &lt;br /&gt;
         Description: All active components, pre-industirial with co2 ramp, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000 (B20TR) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient  &lt;br /&gt;
    B_1850-2000_CN (B20TRCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000_CN_CHEM (B20TRCNCHM) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850-2000_CAM5 (B20TRC5) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, cam5 physics  &lt;br /&gt;
    B_2000_GLC (BG) &lt;br /&gt;
         Description: all active components, with active glc  &lt;br /&gt;
    B_2000_TROP_MOZART (BMOZ) &lt;br /&gt;
         Description: All active components, with trop_mozart  &lt;br /&gt;
    B_1850_WACCM (B1850W) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm  &lt;br /&gt;
    B_1850_WACCM_CN (B1850WCN) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm and CN  &lt;br /&gt;
    B_1850-2000_WACCM_CN (B20TRWCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, WACCM with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1955-2005_WACCM_CN (B55TRWCN) &lt;br /&gt;
         Description: All active components, 1955 to 2000 transient, WACCM with  daily solar data and SPEs, CLM with CN  &lt;br /&gt;
    B_1850_BGC-BPRP (B1850BPRP) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850_BGC-BDRD (B1850BDRD) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    B_1850-2000_BGC-BPRP (B20TRBPRP) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850-2000_BGC-BDRD (B20TRBDRD) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    C_NORMAL_YEAR_ECOSYS (CECO) &lt;br /&gt;
         Description: Active ocean model with ecosys and with COREv2 normal year forcing  &lt;br /&gt;
    C_NORMAL_YEAR (C) &lt;br /&gt;
         Description: Active ocean model with COREv2 normal year forcing  &lt;br /&gt;
    D_NORMAL_YEAR (D) &lt;br /&gt;
         Description: Active ice model with COREv2 normal year forcing  &lt;br /&gt;
    E_2000 (E) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean, present day  &lt;br /&gt;
    E_2000_GLC (EG) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean and glc, present day  &lt;br /&gt;
    E_1850_CN (E1850CN) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, with CN  &lt;br /&gt;
    E_1850_CAM5 (E1850C5) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, cam5 physics   &lt;br /&gt;
    F_AMIP_CN (FAMIPCN) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol - valid only for 1 degree cam/clm/pres-cice  &lt;br /&gt;
    F_AMIP_CAM5 (FAMIPC5) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol with cam5  &lt;br /&gt;
    F_1850 (F1850) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_1850_CAM5 (F1850C5) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn, cam5 physics  &lt;br /&gt;
    F_2000 (F) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice  &lt;br /&gt;
    F_2000_CAM5 (FC5) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, cam5 physics  &lt;br /&gt;
    F_2000_CN (FCN) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice with CN  &lt;br /&gt;
    F_1850-2000_CN (F20TRCN) &lt;br /&gt;
         Description: 20th Century transient stand-alone cam default, prescribed ocn/ice, with CN  &lt;br /&gt;
    F_2000_GLC (FG) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, glc (glacier model)  &lt;br /&gt;
    F_1850_CN_CHEM (F1850CNCHM) &lt;br /&gt;
         Description: stand-alone cam/clm, pre-industrial, with CN in CLM, super_fast_llnl chem in cam  &lt;br /&gt;
    F_1850_WACCM (F1850W) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_2000_WACCM (FW) &lt;br /&gt;
         Description: present-day cam/clm with prescribed ice/ocn  &lt;br /&gt;
    G_1850_ECOSYS (G1850ECO) &lt;br /&gt;
         Description: 1850 control for pop-ecosystem/cice/datm7/dlnd-rx1  &lt;br /&gt;
    G_NORMAL_YEAR (G) &lt;br /&gt;
         Description: Coupled ocean ice with COREv2 normal year forcing  &lt;br /&gt;
    H_PRESENT_DAY (H) &lt;br /&gt;
         Description: Coupled ocean ice slnd  &lt;br /&gt;
    I_2000 (I) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850 (I1850) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and Satellite phenology (SP), CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_2000_GLC (IG) &lt;br /&gt;
         Description: Active glacier model and active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1948-2004 (I4804) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000 (I8520) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and transient Satellite phenology (SP), and Aerosol deposition from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    I_2000_CN (ICN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850_CN (I1850CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_1948-2004_CN (I4804CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000_CN (I8520CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and transient CN, Aerosol dep from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    S_PRESENT_DAY (S) &lt;br /&gt;
         Description: All stub models plus xatm  &lt;br /&gt;
    X_PRESENT_DAY (X) &lt;br /&gt;
         Description: All dead model  &lt;br /&gt;
    XG_PRESENT_DAY (XG) &lt;br /&gt;
         Description: All dead model and cism  &lt;br /&gt;
  &lt;br /&gt;
  MACHINES:  name (description)&lt;br /&gt;
    tcs (U of T IBM p6, os is AIX, 32 pes/node, batch system is Moab/LoadLeveler) &lt;br /&gt;
    gpc (U of T iDataPlex intel cluster, os is linux, 8 pes/node, batch system is Moab/Torque) &lt;br /&gt;
    bluefire (NCAR IBM p6, os is AIX, 32 pes/node, batch system is LSF) &lt;br /&gt;
    brutus_po (Brutus Linux Cluster ETH (pgi/9.0-1 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_pm (Brutus Linux Cluster ETH (pgi/9.0-1 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_io (Brutus Linux Cluster ETH (intel/10.1.018 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_im (Brutus Linux Cluster ETH (intel/10.1.018 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    edinburgh_lahey (NCAR CGD Linux Cluster (lahey), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_pgi (NCAR CGD Linux Cluster (pgi), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_intel (NCAR CGD Linux Cluster (intel), 8 pes/node, batch system is PBS) &lt;br /&gt;
    franklin (NERSC XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    hadley (UCB Linux Cluster, os is Linux (ia64), batch system is PBS) &lt;br /&gt;
    hopper (NERSC XT5, os is CNL, 8 pes/node, batch system is PBS) &lt;br /&gt;
    intrepid (ANL IBM BG/P, os is BGP, 4 pes/node, batch system is cobalt) &lt;br /&gt;
    jaguar (ORNL XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    jaguarpf (ORNL XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    kraken (NICS/UT/teragrid XT5, os is CNL, 12 pes/node) &lt;br /&gt;
    lynx_pgi (NCAR XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    midnight (ARSC Sun Cluster, os is Linux (pgi), batch system is PBS) &lt;br /&gt;
    pleiades (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 3.0 GHz Harpertown processors, 8 pes/node and 8 GB of memory, batch system is PBS) &lt;br /&gt;
    pleiades_wes (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.93 GHz Westmere processors, 12 pes/node and 24 GB of memory, batch system is PBS) &lt;br /&gt;
    prototype_atlas (LLNL Linux Cluster, Linux (pgi), 8 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_hera (LLNL Linux Cluster, Linux (pgi), 16 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_columbia (NASA Ames Linux Cluster, Linux (ia64), 2 pes/node, batch system is PBS) &lt;br /&gt;
    prototype_frost (NCAR IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_nyblue (SUNY IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_ranger (TACC Linux Cluster, Linux (pgi), 1 pes/node, batch system is SGE) &lt;br /&gt;
    prototype_ubgl (LLNL IBM BG/L, os is BGL, 2 pes/node, batch system is Moab) &lt;br /&gt;
    generic_ibm (generic ibm power system, os is AIX, batch system is LoadLeveler, user-defined) &lt;br /&gt;
    generic_xt (generic CRAY XT, os is CNL, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pgi (generic linux (pgi), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_lahey (generic linux (lahey), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_intel (generic linux (intel), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pathscale (generic linux (pathscale), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Initializing the Model Setup:''&lt;br /&gt;
&lt;br /&gt;
The initial setup of the model on TCS can be simplified with the short script below:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/$USER&lt;br /&gt;
 export MACH=tcs&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f19_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' CCSMROOT should point to the model code version in ''/project/ccsm'' whose name ends in &amp;quot;_current&amp;quot; (here ''ccsm4_0_current''); the same applies to CESM1 (''cesm1_current'').&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This script creates an 1850 control run with all components of the model fully active and carbon-nitrogen cycling in the land component. The resolution is 1.9x2.5 in the atmosphere and nominal 1 degree (gx1v6) in the ocean. The case directory is created under ~/runs:&lt;br /&gt;
&lt;br /&gt;
For valid component sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/a2967.html &lt;br /&gt;
For information on resolution sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/x42.html#ccsm_grids &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Load Balancing:''&lt;br /&gt;
&lt;br /&gt;
For the NCAR bluefire load balancing table for a select set of simulations see:&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/timing/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/timing/&lt;br /&gt;
&lt;br /&gt;
 cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
&lt;br /&gt;
edit env_mach_pes.xml&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ATM&amp;quot;   value=&amp;quot;448&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ATM&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_LND&amp;quot;   value=&amp;quot;320&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_LND&amp;quot;   value=&amp;quot;160&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ICE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ICE&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_OCN&amp;quot;   value=&amp;quot;256&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_OCN&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_CPL&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_CPL&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_GLC&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once this file has been modified, you can configure the case:&lt;br /&gt;
&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 &lt;br /&gt;
You will notice that configure changes the file that you just edited; in it you can now see the total number of processors used by the simulation (704, i.e. 11 nodes, in this case):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;TOTALPES&amp;quot;   value=&amp;quot;704&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_LEVEL&amp;quot;   value=&amp;quot;1r&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_PCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_TCOST&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_ESTCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
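A quick way to double-check the node count that a given layout implies (this arithmetic is not part of the model scripts; it assumes the MAX_TASKS_PER_NODE value of 64 reported above for TCS):&lt;br /&gt;

```shell
# Ceiling division: nodes needed for TOTALPES tasks at PES_PER_NODE tasks/node.
totalpes=704
pes_per_node=64
nodes=$(( (totalpes + pes_per_node - 1) / pes_per_node ))
echo "$nodes nodes"   # 11 nodes for this layout
```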
&lt;br /&gt;
'''Note:''' Rather than modifying the load balancing manually, you can use the ''xmlchange'' script that resides in your case directory to change the CPU allocation of the individual components without editing the env_mach_pes.xml file directly:&lt;br /&gt;
&lt;br /&gt;
To try a different configuration, we might want 8 CPUs running the OCN component continuously and the remaining 24 CPUs running ATM, with LND, ICE and CPL each sharing 8 of those 24 CPUs. To set this up, enter:&lt;br /&gt;
&lt;br /&gt;
 configure -cleanmach&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 24&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 8 &lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_CPL -val 16&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_OCN -val 24&lt;br /&gt;
 configure -case&lt;br /&gt;
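When trying out several layouts, it can help to generate the xmlchange calls from a single list. The following dry-run sketch (a hypothetical helper, not part of the CCSM4 scripts) prints the commands for the 32-core layout above rather than executing them:&lt;br /&gt;

```shell
# Build one xmlchange command per id=value pair and print them;
# from the case directory, pipe the output to sh to apply it for real.
cmds=""
for kv in NTASKS_ATM=24 NTASKS_LND=8 NTASKS_ICE=8 ROOTPE_ICE=8 \
          NTASKS_CPL=8 ROOTPE_CPL=16 NTASKS_OCN=8 ROOTPE_OCN=24; do
  id=${kv%%=*}    # part before '='
  val=${kv#*=}    # part after '='
  cmds="${cmds}./xmlchange -file env_mach_pes.xml -id $id -val $val
"
done
printf '%s' "$cmds"
```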
&lt;br /&gt;
Then rebuild and resubmit.&lt;br /&gt;
&lt;br /&gt;
The task geometry used by LoadLeveler on TCS is set in the file ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Ensure that the proper modules are loaded:&lt;br /&gt;
&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) ncl/5.1.1               3) netcdf/4.0.1_nc3        5) xlf/13.1&lt;br /&gt;
  2) nco/3.9.6               4) parallel-netcdf/1.1.1   6) vacpp/11.1&lt;br /&gt;
&lt;br /&gt;
Now compile the model with:&lt;br /&gt;
&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
One of the pre-processing steps in this build sequence is to fetch input data sets (initial and boundary conditions) from the NCAR SVN server. If a large amount of initial-condition data has to be transferred from the NCAR repository, you may want to do this step yourself on the ''datamover1'' node before you build, as ''datamover1'' has a high-bandwidth connection to the outside.&lt;br /&gt;
'''Note:''' Most of the input data is already available in /project/ccsm, so this step is not required for the more common configurations.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; ssh datamover1&lt;br /&gt;
 Last login: Wed Jul  7 16:38:14 2010 from tcs-f11n06-gpfs&lt;br /&gt;
 user@gpc-logindm01:~&amp;gt;cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;./check_input_data -inputdata /project/ccsm/inputdata -export&lt;br /&gt;
 Input Data List Files Found:&lt;br /&gt;
 ./Buildconf/cam.input_data_list&lt;br /&gt;
 ./Buildconf/clm.input_data_list&lt;br /&gt;
 ./Buildconf/cice.input_data_list&lt;br /&gt;
 ./Buildconf/pop2.input_data_list&lt;br /&gt;
 ./Buildconf/cpl.input_data_list&lt;br /&gt;
 export https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc /project/ccsm/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc ..... success&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Setting the Simulation Length:''&lt;br /&gt;
&lt;br /&gt;
The length of time that you would like to run the model can be set by editing ''env_run.xml'' at any time in the setup sequence:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--&amp;quot;if RESUBMIT is greater than 0, then case will automatically resubmit (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;RESUBMIT&amp;quot;   value=&amp;quot;10&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjuction with STOP_N and STOP_DATE, valid values: none,never,nsteps,nstep,nseconds,nsecond,nminutes,nminute,nhours,nhour,ndays,nday,nmonths,nmonth,nyears,nyear,date,ifdays0,end (char) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_OPTION&amp;quot;   value=&amp;quot;nmonths&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjuction with STOP_OPTION and STOP_DATE (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_N&amp;quot;   value=&amp;quot;12&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
These settings tell the model to checkpoint after each model year (12 months); with RESUBMIT set to 10, the job automatically resubmits itself ten more times after the initial segment, for 11 model years in total.&lt;br /&gt;
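The run length implied by these settings can be worked out as follows (a sketch; it assumes, per the comments in env_run.xml above, that each resubmission runs one further STOP_N segment after the initial job):&lt;br /&gt;

```shell
# Total simulated time = STOP_N per segment * (initial segment + resubmissions).
stop_n=12        # months per job segment (STOP_OPTION=nmonths)
resubmit=10      # automatic resubmissions after the first segment
segments=$(( resubmit + 1 ))
total_months=$(( stop_n * segments ))
echo "$segments segments, $(( total_months / 12 )) model years"
```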
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on the Distributed System (TCS):''&lt;br /&gt;
&lt;br /&gt;
The model is now ready to be submitted to the TCS batch queue&lt;br /&gt;
&lt;br /&gt;
 llsubmit ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Once the model has run through a checkpoint, timing information on the simulation can be found in:&lt;br /&gt;
 ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16/timing&lt;br /&gt;
&lt;br /&gt;
Standard output from the model can be followed during runtime by going to:&lt;br /&gt;
 /scratch/guido/ccsm4_comp-B_1850_CN_res-f19_g16/run&lt;br /&gt;
and running&lt;br /&gt;
 tail -f &amp;lt;component_log_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The model will archive the NetCDF output in:&lt;br /&gt;
 /scratch/$USER/archive&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Cloning Simulations''&lt;br /&gt;
&lt;br /&gt;
A useful command for setting up multiple runs quickly is ''create_clone''. It clones an existing case, so there is no need to run the setup script above every time:&lt;br /&gt;
 cd ~/runs&lt;br /&gt;
 /project/ccsm/ccsm4_0_current/scripts/create_clone -clone ccsm4_comp-B_1850_CN_res-f09_g16 -case ccsm4_comp-B_1850_CN_res-f09_g16_clone -v&lt;br /&gt;
&lt;br /&gt;
To change the load balancing (env_mach_pes.xml) or other parameters in an existing simulation setup, do a clean build to make sure the model is rebuilt properly:&lt;br /&gt;
 ./configure -cleanmach&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.clean_build&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on GPC''&lt;br /&gt;
&lt;br /&gt;
The setup script is almost identical:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/guido&lt;br /&gt;
 export MACH=gpc&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f09_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4gpc_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
To load balance and run the model, follow the steps above.&lt;br /&gt;
The env_mach_pes.xml configuration file needs to be modified as follows:&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
Use qsub to submit the model to the GPC cluster:&lt;br /&gt;
 qsub ccsm4gpc_comp-B_1850_CN_res-f09_g16.gpc.run&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
	<entry>
		<id>https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2313</id>
		<title>Running CCSM4</title>
		<link rel="alternate" type="text/html" href="https://oldwiki.scinet.utoronto.ca/index.php?title=Running_CCSM4&amp;diff=2313"/>
		<updated>2010-12-09T00:06:55Z</updated>

		<summary type="html">&lt;p&gt;Guido: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;It is important to point out that all updates to the model system will only occur with CESM1.0 updates, not with CCSM4.0. It is also important to note that CCSM4 is a subset of CESM1. Although CESM1 supersedes CCSM4, users can run all CCSM4 experiments from the CESM1 code base.&lt;br /&gt;
&lt;br /&gt;
The scientifically validated CESM1 runs are found in the list below (including a complete list of the model resolutions):&lt;br /&gt;
&lt;br /&gt;
/project/ccsm/cesm1_current/scripts/create_newcase --list&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
                            CESM1.0 README&lt;br /&gt;
&lt;br /&gt;
For both a quick start as well as a detailed summary of creating and running &lt;br /&gt;
a CESM model case, see the CESM1.0 User's Guide at&lt;br /&gt;
http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
&lt;br /&gt;
IMPORTANT INFORMATION ABOUT SCIENTIFIC VALIDATION&lt;br /&gt;
&lt;br /&gt;
   CESM1.0 has the flexibility to configure cases with many different &lt;br /&gt;
   combinations of component models, grids, and model settings, but this &lt;br /&gt;
   version of CESM has only been validated scientifically for the following &lt;br /&gt;
   fully active configurations:&lt;br /&gt;
&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_RAMPCO2_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_CN&lt;br /&gt;
&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_CAM5&lt;br /&gt;
&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_RAMPCO2_CN&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN&lt;br /&gt;
&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_BGC-BDRD&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BPRP&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_BGC-BDRD&lt;br /&gt;
&lt;br /&gt;
      0.9x1.25_gx1v6  B_1850_CN_CHEM &lt;br /&gt;
      0.9x1.25_gx1v6  B_1850-2000_CN_CHEM&lt;br /&gt;
&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850_WACCM_CN&lt;br /&gt;
      1.9x2.5_gx1v6   B_1850-2000_WACCM_CN&lt;br /&gt;
  &lt;br /&gt;
      T31_gx3v7       B_1850_CN&lt;br /&gt;
&lt;br /&gt;
   If the user is interested in running a &amp;quot;stand-alone&amp;quot; component configuration, &lt;br /&gt;
   the following model configurations have been validated scientifically and &lt;br /&gt;
   have associated diagnostic output as part of the release:&lt;br /&gt;
&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_2000_WACCM&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CAM5&lt;br /&gt;
      1.9x2.5_1.9x2.5    F_AMIP_CN&lt;br /&gt;
      0.9x1.25_0.9x1.25  F_AMIP_CN&lt;br /&gt;
&lt;br /&gt;
      0.9x1.25_gx1v6  I_2000&lt;br /&gt;
      0.9x1.25_gx1v6  I_2000_CN&lt;br /&gt;
&lt;br /&gt;
      T62_gx1v6       C_NORMAL_YEAR&lt;br /&gt;
&lt;br /&gt;
   For more information regarding alternative component configurations, &lt;br /&gt;
   please refer to the individual component web pages at&lt;br /&gt;
   http://www.cesm.ucar.edu/models/cesm1.0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
  CESM1 RESOLUTIONS:  name (shortname) &lt;br /&gt;
    pt1_pt1 (pt1)  &lt;br /&gt;
    0.23x0.31_0.23x0.31 (f02_f02)  &lt;br /&gt;
    0.23x0.31_gx1v6 (f02_g16)  &lt;br /&gt;
    0.23x0.31_tx0.1v2 (f02_t12)  &lt;br /&gt;
    0.47x0.63_0.47x0.63 (f05_f05)  &lt;br /&gt;
    0.47x0.63_gx1v6 (f05_g16)  &lt;br /&gt;
    0.47x0.63_tx0.1v2 (f05_t12)  &lt;br /&gt;
    0.9x1.25_0.9x1.25 (f09_f09)  &lt;br /&gt;
    0.9x1.25_gx1v6 (f09_g16)  &lt;br /&gt;
    1.9x2.5_1.9x2.5 (f19_f19)  &lt;br /&gt;
    1.9x2.5_gx1v6 (f19_g16)  &lt;br /&gt;
    4x5_4x5 (f45_f45)  &lt;br /&gt;
    4x5_gx3v7 (f45_g37)  &lt;br /&gt;
    T62_gx3v7 (T62_g37)  &lt;br /&gt;
    T62_tx0.1v2 (T62_t12)  &lt;br /&gt;
    T62_gx1v6 (T62_g16)  &lt;br /&gt;
    T31_T31 (T31_T31)  &lt;br /&gt;
    T31_gx3v7 (T31_g37)  &lt;br /&gt;
    T42_T42 (T42_T42)  &lt;br /&gt;
    10x15_10x15 (f10_f10)  &lt;br /&gt;
    ne30np4_1.9x2.5_gx1v6 (ne30_f19_g16)  &lt;br /&gt;
    ne240np4_0.23x0.31_gx1v6 (ne240_f02_g16)  &lt;br /&gt;
    T85_T85 (T85_T85)  &lt;br /&gt;
  &lt;br /&gt;
  COMPSETS:  name (shortname): description &lt;br /&gt;
    A_PRESENT_DAY (A) &lt;br /&gt;
         Description: All data model  &lt;br /&gt;
    A_GLC (AG) &lt;br /&gt;
         Description: All data model plus glc (glacier model)  &lt;br /&gt;
    B_2000 (B) &lt;br /&gt;
         Description: All active components, present day  &lt;br /&gt;
    B_2000_CN (BCN) &lt;br /&gt;
         Description: all active components, present day, with CN (Carbon Nitrogen) in clm  &lt;br /&gt;
    B_1850_CAM5 (B1850C5) &lt;br /&gt;
         Description: All active components, pre-industrial, cam5 physics  &lt;br /&gt;
    B_1850 (B1850) &lt;br /&gt;
         Description: All active components, pre-industrial  &lt;br /&gt;
    B_1850_CN (B1850CN) &lt;br /&gt;
         Description: all active components, pre-industrial, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_2000_CN_CHEM (B2000CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_CN_CHEM (B1850CNCHM) &lt;br /&gt;
         Description: All active components, pre-industrial, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850_RAMPCO2_CN (B1850RMCN) &lt;br /&gt;
         Description: All active components, pre-industirial with co2 ramp, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000 (B20TR) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient  &lt;br /&gt;
    B_1850-2000_CN (B20TRCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1850-2000_CN_CHEM (B20TRCNCHM) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, with CN (Carbon Nitrogen) in CLM and super_fast_llnl chem in atm  &lt;br /&gt;
    B_1850-2000_CAM5 (B20TRC5) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, cam5 physics  &lt;br /&gt;
    B_2000_GLC (BG) &lt;br /&gt;
         Description: all active components, with active glc  &lt;br /&gt;
    B_2000_TROP_MOZART (BMOZ) &lt;br /&gt;
         Description: All active components, with trop_mozart  &lt;br /&gt;
    B_1850_WACCM (B1850W) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm  &lt;br /&gt;
    B_1850_WACCM_CN (B1850WCN) &lt;br /&gt;
         Description: all active components, pre-industrial, with waccm and CN  &lt;br /&gt;
    B_1850-2000_WACCM_CN (B20TRWCN) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, WACCM with CN (Carbon Nitrogen) in CLM  &lt;br /&gt;
    B_1955-2005_WACCM_CN (B55TRWCN) &lt;br /&gt;
         Description: All active components, 1955 to 2000 transient, WACCM with  daily solar data and SPEs, CLM with CN  &lt;br /&gt;
    B_1850_BGC-BPRP (B1850BPRP) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850_BGC-BDRD (B1850BDRD) &lt;br /&gt;
         Description: All active components, pre-industrial, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    B_1850-2000_BGC-BPRP (B20TRBPRP) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=prog, rad CO2=prog  &lt;br /&gt;
    B_1850-2000_BGC-BDRD (B20TRBDRD) &lt;br /&gt;
         Description: All active components, 1850 to 2000 transient, CN in CLM, ECO in POP, BGC CO2=diag, rad CO2=diag  &lt;br /&gt;
    C_NORMAL_YEAR_ECOSYS (CECO) &lt;br /&gt;
         Description: Active ocean model with ecosys and with COREv2 normal year forcing  &lt;br /&gt;
    C_NORMAL_YEAR (C) &lt;br /&gt;
         Description: Active ocean model with COREv2 normal year forcing  &lt;br /&gt;
    D_NORMAL_YEAR (D) &lt;br /&gt;
         Description: Active ice model with COREv2 normal year forcing  &lt;br /&gt;
    E_2000 (E) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean, present day  &lt;br /&gt;
    E_2000_GLC (EG) &lt;br /&gt;
         Description: Fully active cam and ice with som ocean and glc, present day  &lt;br /&gt;
    E_1850_CN (E1850CN) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, with CN  &lt;br /&gt;
    E_1850_CAM5 (E1850C5) &lt;br /&gt;
         Description: Pre-industrial fully active ice and som ocean, cam5 physics   &lt;br /&gt;
    F_AMIP_CN (FAMIPCN) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol - valid only for 1 degree cam/clm/pres-cice  &lt;br /&gt;
    F_AMIP_CAM5 (FAMIPC5) &lt;br /&gt;
         Description: AMIP run for CMIP5 protocol with cam5  &lt;br /&gt;
    F_1850 (F1850) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_1850_CAM5 (F1850C5) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn, cam5 physics  &lt;br /&gt;
    F_2000 (F) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice  &lt;br /&gt;
    F_2000_CAM5 (FC5) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, cam5 physics  &lt;br /&gt;
    F_2000_CN (FCN) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice with CN  &lt;br /&gt;
    F_1850-2000_CN (F20TRCN) &lt;br /&gt;
         Description: 20th Century transient stand-alone cam default, prescribed ocn/ice, with CN  &lt;br /&gt;
    F_2000_GLC (FG) &lt;br /&gt;
         Description: Stand-alone cam default, prescribed ocn/ice, glc (glacier model)  &lt;br /&gt;
    F_1850_CN_CHEM (F1850CNCHM) &lt;br /&gt;
         Description: stand-alone cam/clm, pre-industrial, with CN in CLM, super_fast_llnl chem in cam  &lt;br /&gt;
    F_1850_WACCM (F1850W) &lt;br /&gt;
         Description: Pre-industrial cam/clm with prescribed ice/ocn  &lt;br /&gt;
    F_2000_WACCM (FW) &lt;br /&gt;
         Description: present-day cam/clm with prescribed ice/ocn  &lt;br /&gt;
    G_1850_ECOSYS (G1850ECO) &lt;br /&gt;
         Description: 1850 control for pop-ecosystem/cice/datm7/dlnd-rx1  &lt;br /&gt;
    G_NORMAL_YEAR (G) &lt;br /&gt;
         Description: Coupled ocean ice with COREv2 normal year forcing  &lt;br /&gt;
    H_PRESENT_DAY (H) &lt;br /&gt;
         Description: Coupled ocean ice slnd  &lt;br /&gt;
    I_2000 (I) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850 (I1850) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and Satellite phenology (SP), CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_2000_GLC (IG) &lt;br /&gt;
         Description: Active glacier model and active land model with QIAN atm input data for 2003 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1948-2004 (I4804) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and Satellite phenology (SP), CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000 (I8520) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and transient Satellite phenology (SP), and Aerosol deposition from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    I_2000_CN (ICN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 2003 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850_CN (I1850CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 1850  &lt;br /&gt;
    I_1948-2004_CN (I4804CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 2004 and CN (Carbon Nitrogen) biogeochemistry, CO2 level and Aerosol deposition for 2000  &lt;br /&gt;
    I_1850-2000_CN (I8520CN) &lt;br /&gt;
         Description: Active land model with QIAN atm input data for 1948 to 1972 and transient CN, Aerosol dep from 1850 to 2000 and 2000 CO2 level  &lt;br /&gt;
    S_PRESENT_DAY (S) &lt;br /&gt;
         Description: All stub models plus xatm  &lt;br /&gt;
    X_PRESENT_DAY (X) &lt;br /&gt;
         Description: All dead model  &lt;br /&gt;
    XG_PRESENT_DAY (XG) &lt;br /&gt;
         Description: All dead model and cism  &lt;br /&gt;
  &lt;br /&gt;
  MACHINES:  name (description)&lt;br /&gt;
    tcs (U of T IBM p6, os is AIX, 32 pes/node, batch system is Moab/LoadLeveler) &lt;br /&gt;
    gpc (U of T iDataPlex intel cluster, os is linux, 8 pes/node, batch system is Moab/Torque) &lt;br /&gt;
    bluefire (NCAR IBM p6, os is AIX, 32 pes/node, batch system is LSF) &lt;br /&gt;
    brutus_po (Brutus Linux Cluster ETH (pgi/9.0-1 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_pm (Brutus Linux Cluster ETH (pgi/9.0-1 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_io (Brutus Linux Cluster ETH (intel/10.1.018 with open_mpi/1.4.1), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    brutus_im (Brutus Linux Cluster ETH (intel/10.1.018 with mvapich2/1.4rc2), 16 pes/node, batch system LSF, added by UB) &lt;br /&gt;
    edinburgh_lahey (NCAR CGD Linux Cluster (lahey), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_pgi (NCAR CGD Linux Cluster (pgi), 8 pes/node, batch system is PBS) &lt;br /&gt;
    edinburgh_intel (NCAR CGD Linux Cluster (intel), 8 pes/node, batch system is PBS) &lt;br /&gt;
    franklin (NERSC XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    hadley (UCB Linux Cluster, os is Linux (ia64), batch system is PBS) &lt;br /&gt;
    hopper (NERSC XT5, os is CNL, 8 pes/node, batch system is PBS) &lt;br /&gt;
    intrepid (ANL IBM BG/P, os is BGP, 4 pes/node, batch system is cobalt) &lt;br /&gt;
    jaguar (ORNL XT4, os is CNL, 4 pes/node, batch system is PBS) &lt;br /&gt;
    jaguarpf (ORNL XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    kraken (NICS/UT/teragrid XT5, os is CNL, 12 pes/node) &lt;br /&gt;
    lynx_pgi (NCAR XT5, os is CNL, 12 pes/node, batch system is PBS) &lt;br /&gt;
    midnight (ARSC Sun Cluster, os is Linux (pgi), batch system is PBS) &lt;br /&gt;
    pleiades (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 3.0 GHz Harpertown processors, 8 pes/node and 8 GB of memory, batch system is PBS) &lt;br /&gt;
    pleiades_wes (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.93 GHz Westmere processors, 12 pes/node and 24 GB of memory, batch system is PBS) &lt;br /&gt;
    prototype_atlas (LLNL Linux Cluster, Linux (pgi), 8 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_hera (LLNL Linux Cluster, Linux (pgi), 16 pes/node, batch system is Moab) &lt;br /&gt;
    prototype_columbia (NASA Ames Linux Cluster, Linux (ia64), 2 pes/node, batch system is PBS) &lt;br /&gt;
    prototype_frost (NCAR IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_nyblue (SUNY IBM BG/L, os is BGL, 8 pes/node, batch system is cobalt) &lt;br /&gt;
    prototype_ranger (TACC Linux Cluster, Linux (pgi), 1 pes/node, batch system is SGE) &lt;br /&gt;
    prototype_ubgl (LLNL IBM BG/L, os is BGL, 2 pes/node, batch system is Moab) &lt;br /&gt;
    generic_ibm (generic ibm power system, os is AIX, batch system is LoadLeveler, user-defined) &lt;br /&gt;
    generic_xt (generic CRAY XT, os is CNL, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pgi (generic linux (pgi), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_lahey (generic linux (lahey), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_intel (generic linux (intel), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
    generic_linux_pathscale (generic linux (pathscale), os is Linux, batch system is PBS, user-defined) &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Initializing the Model Setup:''&lt;br /&gt;
&lt;br /&gt;
The initial setup of the model on TCS is simplified by the short script below:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/$USER&lt;br /&gt;
 export MACH=tcs&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f19_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' CCSMROOT should point to the model code version in ''/project/ccsm'' with the &amp;quot;_current&amp;quot; suffix; the same applies to CESM1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This script creates an 1850 control run with all components of the model fully active and carbon-nitrogen cycling in the land component. The resolution is 1.9x2.5 in the atmosphere and x1 (nominal one degree) in the ocean. The case directory is created under ~/runs:&lt;br /&gt;
&lt;br /&gt;
For valid component sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/a2967.html &lt;br /&gt;
For information on resolution sets see: http://www.cesm.ucar.edu/models/ccsm4.0/ccsm_doc/x42.html#ccsm_grids &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Load Balancing:''&lt;br /&gt;
&lt;br /&gt;
For the NCAR bluefire load balancing table for a select set of simulations see:&lt;br /&gt;
CESM1: http://www.cesm.ucar.edu/models/cesm1.0/timing/&lt;br /&gt;
CCSM4: http://www.cesm.ucar.edu/models/ccsm4.0/timing/&lt;br /&gt;
&lt;br /&gt;
 cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
&lt;br /&gt;
edit env_mach_pes.xml&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ATM&amp;quot;   value=&amp;quot;448&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ATM&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_LND&amp;quot;   value=&amp;quot;320&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_LND&amp;quot;   value=&amp;quot;160&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_ICE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_ICE&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_OCN&amp;quot;   value=&amp;quot;256&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_OCN&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_CPL&amp;quot;   value=&amp;quot;224&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_CPL&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTASKS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;NTHRDS_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;ROOTPE_GLC&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ATM&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_LND&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_ICE&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_OCN&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_CPL&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PSTRID_GLC&amp;quot;   value=&amp;quot;1&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once this file is modified you can configure the case&lt;br /&gt;
&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 &lt;br /&gt;
You will notice that configure changes the file you just edited, and you can see the total number of processors used by the simulation (704, or 11 nodes, in this case):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;TOTALPES&amp;quot;   value=&amp;quot;704&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_LEVEL&amp;quot;   value=&amp;quot;1r&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;64&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_PCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_TCOST&amp;quot;   value=&amp;quot;0&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;CCSM_ESTCOST&amp;quot;   value=&amp;quot;-3&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
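As a sanity check on the numbers above, the node count follows from TOTALPES and PES_PER_NODE by ceiling division. A minimal shell sketch, using the values reported above:&lt;br /&gt;

```shell
# Recover the node count from the TOTALPES and PES_PER_NODE values
# reported in env_mach_pes.xml after running "./configure -case".
totalpes=704
pes_per_node=64
# Ceiling division: a partially filled node still counts as a whole node.
nodes=$(( (totalpes + pes_per_node - 1) / pes_per_node ))
echo "$nodes nodes"    # prints "11 nodes"
```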
'''Note:''' Rather than modifying the load balancing manually, NCAR provides a script, ''xmlchange'', in your case directory that lets you change each component's CPU allocation without editing the env_mach_pes.xml file directly:&lt;br /&gt;
&lt;br /&gt;
To try a different configuration, we might want 32 CPUs in total: 8 dedicated to running the OCN component continually, ATM on the remaining 24, and LND, ICE and CPL sharing those 24 with 8 CPUs each. To set this up, enter:&lt;br /&gt;
&lt;br /&gt;
 configure -cleanmach&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 24&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 8 &lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_ICE -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_CPL -val 16&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 8&lt;br /&gt;
 xmlchange -file env_mach_pes.xml -id ROOTPE_OCN -val 24&lt;br /&gt;
 configure -case&lt;br /&gt;
&lt;br /&gt;
Then rebuild and resubmit.&lt;br /&gt;
&lt;br /&gt;
The task geometry used by LoadLeveler on TCS is located in the file: ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Ensure that the proper modules are loaded:&lt;br /&gt;
&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) ncl/5.1.1               3) netcdf/4.0.1_nc3        5) xlf/13.1&lt;br /&gt;
  2) nco/3.9.6               4) parallel-netcdf/1.1.1   6) vacpp/11.1&lt;br /&gt;
&lt;br /&gt;
Now compile the model with:&lt;br /&gt;
&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
One of the pre-processing steps in this build sequence is to fetch input data sets (initial and boundary conditions) from the NCAR SVN server. If a large amount of initial-condition data needs to be transferred from the NCAR repository, you may want to do this step yourself on the ''datamover1'' node before you build, since ''datamover1'' has a high-bandwidth connection to the outside.&lt;br /&gt;
Note: Most of the input data is already on /project/ccsm, so this step will not be required for the more common configurations.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; ssh datamover1&lt;br /&gt;
 Last login: Wed Jul  7 16:38:14 2010 from tcs-f11n06-gpfs&lt;br /&gt;
 user@gpc-logindm01:~&amp;gt;cd ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;&lt;br /&gt;
 user@gpc-logindm01:~/runs/ccsm4_comp-B_1850_CN_res-f19_g16&amp;gt;./check_input_data -inputdata /project/ccsm/inputdata -export&lt;br /&gt;
 Input Data List Files Found:&lt;br /&gt;
 ./Buildconf/cam.input_data_list&lt;br /&gt;
 ./Buildconf/clm.input_data_list&lt;br /&gt;
 ./Buildconf/cice.input_data_list&lt;br /&gt;
 ./Buildconf/pop2.input_data_list&lt;br /&gt;
 ./Buildconf/cpl.input_data_list&lt;br /&gt;
 export https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc /project/ccsm/inputdata/atm/cam/chem/trop_mozart_aero/aero/aero_1.9x2.5_L26_1850clim_c091112.nc ..... success&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Setting the Simulation Length:''&lt;br /&gt;
&lt;br /&gt;
The length of time that you would like to run the model can be set by editing ''env_run.xml'' at any time in the setup sequence:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;!--&amp;quot;if RESUBMIT is greater than 0, then case will automatically resubmit (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;RESUBMIT&amp;quot;   value=&amp;quot;10&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjunction with STOP_N and STOP_DATE, valid values: none,never,nsteps,nstep,nseconds,nsecond,nminutes,nminute,nhours,nhour,ndays,nday,nmonths,nmonth,nyears,nyear,date,ifdays0,end (char) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_OPTION&amp;quot;   value=&amp;quot;nmonths&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;!--&amp;quot;sets the run length in conjunction with STOP_OPTION and STOP_DATE (integer) &amp;quot; --&amp;gt;&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;STOP_N&amp;quot;   value=&amp;quot;12&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
These settings tell the model to checkpoint after each model year (12 months) and to run for a total of 10 years (10 checkpoints).&lt;br /&gt;
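The same run-length settings can also be applied with the ''xmlchange'' helper shown earlier, instead of editing ''env_run.xml'' by hand (a sketch; run from the case directory):&lt;br /&gt;
&lt;br /&gt;
 xmlchange -file env_run.xml -id STOP_OPTION -val nmonths&lt;br /&gt;
 xmlchange -file env_run.xml -id STOP_N -val 12&lt;br /&gt;
 xmlchange -file env_run.xml -id RESUBMIT -val 10&lt;br /&gt;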
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on the Distributed System (TCS):''&lt;br /&gt;
&lt;br /&gt;
The model is now ready to be submitted to the TCS batch queue:&lt;br /&gt;
&lt;br /&gt;
 llsubmit ccsm4_comp-B_1850_CN_res-f19_g16.tcs.run&lt;br /&gt;
&lt;br /&gt;
Once the model has run through a checkpoint, timing information on the simulation will be found in:&lt;br /&gt;
 ~/runs/ccsm4_comp-B_1850_CN_res-f19_g16/timing&lt;br /&gt;
&lt;br /&gt;
Standard output from the model can be followed during runtime by going to:&lt;br /&gt;
 /scratch/$USER/ccsm4_comp-B_1850_CN_res-f19_g16/run&lt;br /&gt;
and running&lt;br /&gt;
 tail -f &amp;lt;component_log_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The model will archive the NetCDF output in:&lt;br /&gt;
 /scratch/$USER/archive&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Cloning Simulations''&lt;br /&gt;
&lt;br /&gt;
A useful command for setting up multiple runs quickly is ''create_clone''. It clones an existing case, so there is no need to run the setup script above every time:&lt;br /&gt;
 cd ~/runs&lt;br /&gt;
 /project/ccsm/ccsm4_0_current/scripts/create_clone -clone ccsm4_comp-B_1850_CN_res-f09_g16 -case ccsm4_comp-B_1850_CN_res-f09_g16_clone -v&lt;br /&gt;
&lt;br /&gt;
To change the load balancing (env_mach_pes.xml) or other parameters in an existing simulation setup, do a clean build to make sure the model is rebuilt properly:&lt;br /&gt;
 ./configure -cleanmach&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.clean_build&lt;br /&gt;
 ./configure -case&lt;br /&gt;
 ./ccsm4_comp-B_1850_CN_res-f19_g16.tcs.build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Running CCSM4 on GPC''&lt;br /&gt;
&lt;br /&gt;
The setup script is almost identical:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 export CCSMROOT=/project/ccsm/ccsm4_0_current&lt;br /&gt;
 export SCRATCH=/scratch/$USER&lt;br /&gt;
 export MACH=gpc&lt;br /&gt;
 export COMPSET=B_1850_CN&lt;br /&gt;
 export RES=f09_g16&lt;br /&gt;
 export CASEROOT=~/runs/ccsm4gpc_comp-${COMPSET}_res-${RES}&lt;br /&gt;
 &lt;br /&gt;
 cd $CCSMROOT/scripts&lt;br /&gt;
 ./create_newcase -verbose -case $CASEROOT -mach $MACH -compset $COMPSET -res $RES &lt;br /&gt;
&lt;br /&gt;
To load balance and run the model, follow the steps above.&lt;br /&gt;
The env_mach_pes.xml configuration file needs to be modified as follows:&lt;br /&gt;
 &amp;lt;entry id=&amp;quot;MAX_TASKS_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
 &amp;lt;entry id=&amp;quot;PES_PER_NODE&amp;quot;   value=&amp;quot;8&amp;quot;  /&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
Use qsub to submit the model to the GPC cluster:&lt;br /&gt;
 qsub ccsm4gpc_comp-B_1850_CN_res-f09_g16.gpc.run&lt;/div&gt;</summary>
		<author><name>Guido</name></author>
	</entry>
</feed>