User Tips
Running single node MPI jobs
In order to run GROMACS on a single node, the following two things are *essential*. If you omit them, some of your jobs will run fine, but others will run slowly, and still others will write only the beginning of a short log file and then produce no further output, even though they continue to occupy their resources fully. A complete example submission script combining both items is sketched after the two items below.
- add :compute-eth: to your #PBS -l line
<source lang="sh"> e.g. #PBS -l nodes=1:compute-eth:ppn=8,walltime=3:00:00,os=centos53computeA </source>
- add --mca btl self,tcp to the mpirun arguments
<source lang="sh"> e.g. /scinet/gpc/mpi/openmpi/1.3.2-intel-v11.0-ofed/bin/mpirun --mca btl self,tcp -np $(wc -l $PBS_NODEFILE | gawk '{print $1}') -machinefile $PBS_NODEFILE /scratch/cneale/GPC/exe/intel/gromacs-4.0.5/exec/bin/mdrun_openmpi -deffnm test </source>
Cneale September 14 2009
Benchmarking
Ensuring that you get non-IB nodes
You can specify gigE-only nodes using the "compute-eth" property:
<source lang="sh"> nodes=2:compute-eth:ppn=8 </source>
and this will only allow the job to run on "gigabit-only" nodes, so even if IB nodes are available the job will sit in the queue.
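For example, the full resource request in a submission script might read as follows; the walltime and os values are illustrative, copied from the single-node example above.
<source lang="sh">
# Two gigabit-ethernet-only nodes; walltime and os values are illustrative.
#PBS -l nodes=2:compute-eth:ppn=8,walltime=3:00:00,os=centos53computeA
</source>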
By default (i.e. when no node property is specified) the scheduler (Moab) is set up to use the gigE nodes first and then the IB nodes. The scheduler configuration is still being tuned, but explicitly specifying either "compute-eth" for ethernet nodes or "ib" for InfiniBand nodes will guarantee that the right type of node is used.
You can also specify the type of interconnect directly on the mpirun line using mpirun --mca btl self,tcp for ethernet, so even if the job lands on an IB node it will still use ethernet for communication. Since the nodes are identical except for the IB card, any benchmarking would still be valid.
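As an illustration, a two-node benchmarking run forced onto ethernet regardless of node type might look like the following; the mpirun and mdrun_openmpi paths are copied from the single-node example above and may differ for your build, and the -deffnm name is a placeholder.
<source lang="sh">
# Restrict OpenMPI to the TCP (ethernet) and self transports, even if the
# allocated nodes happen to have InfiniBand cards.
/scinet/gpc/mpi/openmpi/1.3.2-intel-v11.0-ofed/bin/mpirun --mca btl self,tcp \
    -np $(wc -l $PBS_NODEFILE | gawk '{print $1}') \
    -machinefile $PBS_NODEFILE \
    /scratch/cneale/GPC/exe/intel/gromacs-4.0.5/exec/bin/mdrun_openmpi -deffnm benchmark
</source>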
Scott August 27 2009