User Tips

From oldwiki.scinet.utoronto.ca
Revision as of 23:33, 14 September 2009

Running single node MPI jobs

In order to run GROMACS on a single node, the following two things are essential. If you do not include them, some of your jobs will run fine, but others will run slowly, and still others will write only the beginning of a short log file and then produce no further output, even though they continue to occupy their resources fully.

1. add :compute-eth: to your #PBS -l line

<source lang="sh">
#PBS -l nodes=1:compute-eth:ppn=8,walltime=3:00:00,os=centos53computeA
</source>

2. add --mca btl self,tcp to the mpirun arguments

<source lang="sh">
/scinet/gpc/mpi/openmpi/1.3.2-intel-v11.0-ofed/bin/mpirun --mca btl self,tcp \
    -np $(wc -l $PBS_NODEFILE | gawk '{print $1}') \
    -machinefile $PBS_NODEFILE \
    /scratch/cneale/GPC/exe/intel/gromacs-4.0.5/exec/bin/mdrun_openmpi -deffnm test
</source>
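As a side note, the -np argument above can be checked outside of a job: the pipeline simply counts the lines of the machinefile, since $PBS_NODEFILE lists one hostname per requested core. A minimal sketch (the hostname gpc-node01 and the temporary file are made up for illustration; awk is used here in place of gawk, as the two behave identically for this one-liner):

<source lang="sh">
# Sketch of the rank-count extraction used on the mpirun line above.
# Fake an 8-core, single-node machinefile (gpc-node01 is hypothetical).
PBS_NODEFILE=$(mktemp)
for i in 1 2 3 4 5 6 7 8; do
    echo "gpc-node01" >> "$PBS_NODEFILE"
done

# "wc -l FILE" prints "8 FILE"; awk keeps only the count.
NP=$(wc -l "$PBS_NODEFILE" | awk '{print $1}')
echo "$NP"    # prints 8, i.e. one MPI rank per listed core

rm -f "$PBS_NODEFILE"
</source>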

We are not exactly sure why this is required, or whether it is required for programs other than GROMACS. However, you are strongly recommended to add it to any such script, as it simply forces you to get what you intended in any event. Refer to the section entitled "Ensuring that you get non-IB nodes" below for more information about what these commands do.

cneale September 14 2009

Benchmarking

Ensuring that you get non-IB nodes

You can specify gigE-only nodes using a "compute-eth" flag

nodes=2:compute-eth:ppn=8

and this will allow the code to run only on "gigabit only" nodes. So even if IB nodes are available, the job will sit in the queue until gigE nodes are free.

By default (i.e., no property specified for the node) the scheduler (Moab) is set up to use the gigE nodes first, then the IB nodes. The scheduler configuration is still being tuned, but explicitly specifying either "compute-eth" for ethernet or "ib" for infiniband nodes will guarantee that the right type of node is used.
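To make the scheduler-level choice concrete, the two resource requests would look like this (a sketch based only on the node properties named above; the walltime value is arbitrary):

<source lang="sh">
# ethernet-only (gigE) nodes:
#PBS -l nodes=2:compute-eth:ppn=8,walltime=1:00:00

# infiniband nodes:
#PBS -l nodes=2:ib:ppn=8,walltime=1:00:00
</source>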

You can also specify the type of interconnect directly on the mpirun line, using mpirun --mca btl self,tcp for ethernet; even on an IB node, the job would then use ethernet for communication. Since the nodes are identical except for the IB card, any benchmarking would still be valid.
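Conversely, the interconnect can be forced the other way at the mpirun level. A sketch of both variants ("openib" is the InfiniBand byte-transfer layer name in OpenMPI 1.3.x; it is our assumption and does not appear in the text above — the trailing arguments are elided):

<source lang="sh">
# force ethernet (TCP), even when running on an IB node:
mpirun --mca btl self,tcp ...

# force InfiniBand (assumes the openib btl of OpenMPI 1.3.x):
mpirun --mca btl self,openib ...
</source>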

Scott August 27 2009